Thursday, October 21, 2010

Windows Azure and Cloud Computing Posts for 10/20/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Update 10/21/2010: Articles marked • were added or updated on that date.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the sections below.

To use the section navigation links, first click the post’s title to display the post as a single article.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

See my Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post of 10/21/2010, which lists all PDC 2010 sessions related to Windows Azure, SQL Azure, Windows Azure AppFabric, Codename “Dallas” and OData, in the Cloud Computing Events section below.


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

• The SQL Azure Team announced SQL Azure Sessions at PDC 2010 in this 10/20/2010 post:

The session list for the sold-out, two-day PDC 2010 conference in Redmond has been posted here.  The SQL Azure sessions announced on the PDC 2010 web site are listed below.

Building Offline Applications using the Sync Framework and SQL Azure

By: Nina Hu

In this session you will learn how to build a client application that operates against locally stored data and uses synchronization to keep up-to-date with a SQL Azure database. See how Sync Framework can be used to build caching and offline capabilities into your client application, making your users productive when disconnected and making your user experience more compelling even when a connection is available. See how to develop offline applications for Windows Phone 7 and Silverlight, plus how the services support any other client platform, such as iPhone and HTML5 applications, using the open web-based sync protocol.

Building Scale-Out Database Solutions on SQL Azure

By: Lev Novik

SQL Azure provides an information platform that you can easily provision, configure, and use to power your cloud applications. In this session we will explore the patterns and practices that help you develop and deploy applications that can exploit the full power of the elastic, highly available, and scalable SQL Azure Database service. The talk will detail modern scalable application design techniques such as sharding and horizontal partitioning and dive into future enhancements to SQL Azure Databases.


See my Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post of 10/21/2010, which lists all PDC 2010 sessions related to Windows Azure, SQL Azure, Windows Azure AppFabric, Codename “Dallas” and OData, in the Cloud Computing Events section below.


Abel Avram posted The Future of WCF Is RESTful to the InfoQ blog on 10/21/2010:

Glenn Block, a Windows Communication Foundation (WCF) Program Manager, said during an online webinar entitled “WCF, Evolving for the Web” that Microsoft’s framework for building service-oriented applications is going to be refactored radically, the new architecture being centered around HTTP.

Block started the online session by summarizing the current trends in the industry:

  • a move to cloud-based computing
  • a migration away from SOAP
  • a shift towards browsers running on all sorts of devices
  • an increase in the adoption of REST
  • emerging standards like OAuth, WebSockets

He mentioned that the current architecture of WCF is largely based on SOAP as shown in this slide:

[Slide: the current SOAP-centric WCF architecture]

While it affords communicating with many types of services, the current WCF architecture is complex and does not scale well, so Block is looking forward to simpler, HTTP-based communication between services, as depicted in the following slide:

[Slide: the proposed HTTP-centric WCF architecture]

HTTP support was introduced in WCF with .NET 3.5, allowing the creation of services accessed via HTTP, but “it does not give access to everything HTTP has to offer, and it is a very flat model, RPC oriented, whereas the Web is not. The Web is a very rich set of resources,” according to Block. Instead of retrofitting the current WCF to work over HTTP, Block considers that WCF should be re-architected with HTTP in mind using a RESTful approach.

WCF will contain helper APIs for pre-processing HTTP requests or responses, doing all the parsing and manipulation of arguments and encapsulating the HTTP information in objects that can later be handed off for further processing. This will relieve users from dealing with HTTP internals directly unless they want to. The feature will also provide a plug-in capability for media-type formatters for data formats like JSON, Atom, OData, etc. WCF will support some of them out of the box, but users will be able to add their own formatters.

The new WCF is already being built, and Block demoed sample code that uses it, but he mentioned that the feature set, and what WCF is going to look like, is not set in stone. Microsoft will publish an initial version of the framework on CodePlex in the near future so the community can test it and react, shaping the future of WCF. More details are to come during PDC 2010.
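To make the formatter idea concrete, here is a purely hypothetical sketch of what a pluggable media-type formatter could look like. The bits Block demoed had not been published when this was written, so the type and member names below are invented to illustrate the concept, not the actual WCF API:

    using System.Collections;
    using System.IO;

    // Hypothetical base class: the framework would pick a formatter by matching
    // the request's Accept header against the formatter's media type.
    public abstract class MediaTypeFormatter
    {
        public abstract string MediaType { get; }
        public abstract void WriteObject(object instance, Stream output);
    }

    // A custom formatter a developer might register to serve "text/csv" responses.
    public class CsvFormatter : MediaTypeFormatter
    {
        public override string MediaType
        {
            get { return "text/csv"; }
        }

        public override void WriteObject(object instance, Stream output)
        {
            var writer = new StreamWriter(output);
            foreach (var item in (IEnumerable)instance)
            {
                writer.WriteLine(item);   // naive CSV: one row per item via ToString()
            }
            writer.Flush();
        }
    }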


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

See my Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post of 10/21/2010, which lists all PDC 2010 sessions related to Windows Azure, SQL Azure, Windows Azure AppFabric, Codename “Dallas” and OData, in the Cloud Computing Events section below.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

John Hagel III and John Seely Brown published Learning From The Cloud: Cloud computing's new vistas, like research to Forbes.com on 10/20/2010:


It's easy to see why startups love the idea of cloud computing. When you're burning through expensive early-stage capital, no one wants to use dedicated, perishable resources like computing power, bandwidth, storage and software when you can get what you need, when you need it, from the cloud. Larger enterprises are coming onto the cloud more slowly, lured by the prospect of lower IT operating costs as well as the ability to easily and cheaply handle unexpected fluctuations in demand.


Even the government is adopting cloud computing. As its budget experiences painful cuts, the city of Miami is using Microsoft's (MSFT) Windows Azure platform to host a computing-intensive mapping application that visualizes calls to the city's 311 municipal information service. The approach costs 75% less than in-house servers, and dramatically increases the transparency and awareness of previously hidden municipal problems.

As these projects show, cloud computing is, at the moment, largely viewed as a low-cost form of IT outsourcing. That's all well and good. But organizations focusing only on this one perspective on the cloud will be missing the next wave of potential for cloud computing: using the cloud as a platform for experimentation and learning.

Largely unnoticed by analysts, a few companies of all sizes have begun to appreciate the ability of cloud computing to spur broader experimentation. Because cloud computing reduces the cost of technology initiatives, cloud-aware organizations are more open to launching multiple prototypes, testing pilot ideas in parallel. Projects that take off can scale up more quickly, while those that stall at the gate can be painlessly dialed back. As a result, learning accelerates.

For example, Varian, a scientific-instruments company, uses the cloud to run intensive "Monte Carlo" computer simulations of design prototypes, leading to more rapid feedback cycles. A design for a mass spectrometer that would have required six weeks with internal processors took only a day on Amazon's Elastic Compute Cloud (EC2). Varian reports that running a calculation on one machine for 100 hours costs the same as using 100 machines for an hour.

The forms of processor-intensive research being shifted to the cloud run the gamut from analyzing molecular data to mapping the Milky Way. In Palo Alto, Calif., a nanotech startup that's still in stealth mode rents time remotely over the Internet for experiments using sophisticated electron microscopes--devices it could never have afforded before.

Software code-testing is also shifting to the cloud, as are simulations in the financial-services industry. Semiconductor companies are experimenting with cloud computing for the electronic design automation that speeds up chip design. Processes with relatively light computing requirements, such as software regression testing, already fit well with cloud computing. More intensive processes, like simulations and high-end design tools, are starting to migrate to the cloud.

Even more interesting are the experiments using cloud computing to accelerate learning from customers, speeding up both time-to-market and customer feedback. For instance, Amazon uses its own cloud-based Relational Database Service to much more quickly and cheaply manipulate the tremendous amount of data it generates from simulations of its 98 million active customers.

And 3M (MMM) in May launched an inexpensive Web-based application that uses Microsoft data centers to give any designer anywhere in the world the ability to upload a design--a logo, a packaging concept, a store environment--and to then use complex algorithms to immediately analyze its effectiveness. 3M's "Visual Attention Service" maps the "hot spots" on a design where people's eyes naturally go and evaluates the image's "visual saliency." In future versions, 3M plans to allow customers to create entire databases of test images and experiment with a variety of designs. The cloud tool significantly reduces the time and cost of design iterations, while improving the overall impact of the designs.


John Hagel III is co-chairman and John Seely Brown is independent co-chairman of the Deloitte Center for the Edge. Their books include The Power of Pull, The Only Sustainable Edge, Out of the Box, Net Worth, and Net Gain.


See my Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post of 10/21/2010, which lists all PDC 2010 sessions related to Windows Azure, SQL Azure, Windows Azure AppFabric, Codename “Dallas” and OData, in the Cloud Computing Events section below.


The Windows Azure Team published a Real World Windows Azure: Interview with Richard Godfrey, Cofounder and CEO, KoodibooK case study on 10/21/2010:

As part of the Real World Windows Azure series, we talked to Richard Godfrey, cofounder and CEO of KoodibooK, about using the Windows Azure platform to develop the company's unique self-publish service, which lets customers design and publish personalized photo albums in minutes. Here's what he had to say:

MSDN: Can you please tell us about KoodibooK?

Godfrey: KoodibooK provides a unique "Create Once, Publish Anywhere" experience for people looking to capture memorable moments in their lives with a personalized photo album.

MSDN: What were the company's main goals in designing and building the KoodibooK publishing application?

Godfrey: Our primary mission in developing the KoodibooK service was to make it the fastest way for consumers to design, build, and publish their own photo albums. We wanted to give our customers the chance to build and start enjoying their book in as little as 10 minutes.

MSDN: Can you describe how the Windows Azure platform helped you meet those goals?

Godfrey: The Windows Azure platform is ideal to meet our needs. The simplicity of the architecture model of Windows Azure made it extremely easy to develop services to handle the publishing workflow from start to finish.

MSDN: Can you describe how the KoodibooK solution uses other Microsoft technologies, together with Windows Azure?

Godfrey: The actual book design application (the templates and tools that people use to organize their photos, create page layouts, and add effects) is built on the Windows Presentation Foundation. Customers download the application from our site and run it on their computer.

As the images are written to Windows Azure Blob storage, all reference data about individual publishing projects is simultaneously stored in Microsoft SQL Azure databases. To give users a high-fidelity, interactive viewing experience online, we use the Microsoft Silverlight browser plug-in to handle the presentation of the published book. Users can see a full-scale version of their book, flip through pages, use the Deep Zoom feature of Silverlight to see incredible detail, and much more.

MSDN: What makes the KoodibooK photo album solution unique?

Godfrey: With our solution, people can create their own custom album in minutes instead of hours, mainly because our design solution uses a client application, instead of relying on a web-based system. And we let people pull in content from just about anywhere-from online photo storage locations, such as their Facebook account, from Flickr, from blogs, or from local drives. This helps simplify our service and opens it up to a wider range of user preferences. And then we give people lots of options in terms of publishing their finished book. They can print a bound version from one of our professional print vendors, or they can share their book online so that people can view it on a PC or their mobile computing device.

With the Koodibook publishing tool, customers can interact with a 3-D preview of their personalized photo album before printing.

MSDN: Can you describe the benefits KoodibooK has gained through the use of Windows Azure, along with Windows Presentation Foundation and Microsoft Silverlight?

Godfrey: As a startup with just a handful of employees, we simply couldn't allocate a lot of time or budget to building and managing the infrastructure. Setting up and configuring services to run in Windows Azure is such a straightforward process. We just created a cloud service project in Visual Studio, published it to Windows Azure, and it worked.

Because of the interoperability of Microsoft technologies, we've been able to reuse code from the client application to optimize server-side components. This means we can quickly develop and deploy new functionality that works throughout all of the different parts of the solution. So we've been able to roll out product improvements on a consistent basis, which is a critical part of our growth strategy.

Read the full story at: http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000008355

To read more Windows Azure customer success stories, visit: www.windowsazure.com/evidence
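The blob-plus-SQL Azure pattern Godfrey describes is a common one. As a rough sketch only (not KoodibooK's code), here's how a .NET client of that era might write a page image to blob storage with the Windows Azure StorageClient library and record a reference row in SQL Azure; the connection strings, container, and table names below are placeholders:

    using System.Data.SqlClient;
    using System.IO;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class AlbumUploader
    {
        // Placeholder settings -- not KoodibooK's actual configuration.
        const string StorageConnection = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...";
        const string SqlAzureConnection = "Server=tcp:yourserver.database.windows.net;Database=Albums;...";

        public void SavePageImage(string projectId, string localImagePath)
        {
            // 1. Write the page image to Windows Azure Blob storage.
            var account = CloudStorageAccount.Parse(StorageConnection);
            var blobClient = account.CreateCloudBlobClient();
            var container = blobClient.GetContainerReference("albumpages");
            container.CreateIfNotExist();

            var blob = container.GetBlobReference(projectId + "/" + Path.GetFileName(localImagePath));
            blob.UploadFile(localImagePath);

            // 2. Simultaneously store the project's reference data in SQL Azure.
            using (var conn = new SqlConnection(SqlAzureConnection))
            using (var cmd = new SqlCommand(
                "INSERT INTO PageImages (ProjectId, BlobUri) VALUES (@projectId, @blobUri)", conn))
            {
                cmd.Parameters.AddWithValue("@projectId", projectId);
                cmd.Parameters.AddWithValue("@blobUri", blob.Uri.ToString());
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }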


Mary Jo Foley (@maryjofoley) reported Microsoft starts moving more of its own services onto Windows Azure on 10/18/2010:

Up until recently, relatively few of Microsoft’s own products or services were running on the company’s Windows Azure operating system.

Some of Live Mesh was on it. The Holm energy-tracking app was an Azure-hosted service. Pieces of its HealthVault solution were on Azure. But Hotmail, CRM Online, the Business Productivity Online Suite (BPOS [now Office 365]) of hosted enterprise apps? Nope, nope and nope.

I asked the Softies earlier this year if the lack of internal divisions hosting on top of Azure could be read as a lack of faith in Microsoft’s cloud OS. Was it just too untried and unproven for “real” apps and services?

The Azure leaders told me to watch for new and next-generation apps for both internal Microsoft use and external customer use to debut on Azure in the not-too-distant future. It looks like that’s gradually starting to happen.

Microsoft Research announced on October 18 a beta version of WikiBhasha, “a multilingual content creation tool for Wikipedia,” codeveloped by Wikipedia and Microsoft. The beta is an open-source MediaWiki extension, available under the Apache License 2.0, as well as in user-gadget and bookmarklet form. It’s the bookmarklet version that is hosted on Azure.

Gaming is another area where Microsoft has started relying on Azure. According to a case study Microsoft published last week, the “Windows Gaming Experience” team built social extensions into Bing Games on Azure, enabling that team to create in five months a handful of new hosting and gaming services. (It’s not the games themselves hosted on Azure; it’s complementary services like secure tokens, leaderboard scores, gamer-preferences settings, etc.) The team made use of Azure’s hosting, compute and storage elements to build these services that could be accessed by nearly two million concurrent gamers at launch in June — and that can scale up to “support five times the amount of users,” the Softies claim.

Microsoft also is looking to Azure as it builds its next-generation IntelliMirror product/service, according to an article Microsoft posted to its download center. Currently, IntelliMirror is a set of management features built into Windows Server. In the future (around the time of Windows 8, as the original version of the article said), some of these services may actually be hosted in the cloud.

An excerpt from the edited, October 15 version of the IntelliMirror article:

“The IntelliMirror service management team, like many of commercial customers of Microsoft, is evaluating the Windows Azure cloud platform to establish whether it can offer an alternative solution for the DPM (data protection manager) requirements in IntelliMirror. The IntelliMirror service management team sees the flexibility of Windows Azure as an opportunity to meet growing user demand for the service by making the right resources available when and where they are needed.

“The first stage of the move toward the cloud is already underway. Initially, IntelliMirror service management team plans to set up a pilot on selective IntelliMirror and DPM client servers by early 2011, to evaluate the benefits of on-premises versus the cloud for certain parts of the service.”

There’s still no definitive timeframe as to when — or even if — Microsoft plans to move things like Hotmail, Bing or BPOS onto Azure. For now, these services run in Microsoft’s datacenters but not on servers running Azure.


Brent Stineman (@brentcodemonkey) pled for free or low cost Windows Azure instances for developers in his Its about driving adoption post of 10/20/2010:

So I exchanged a few tweets with Buck Woody yesterday. For those not familiar, Buck is an incredibly passionate SQL Server guy with Microsoft who recently moved over to their Azure product family. It’s obvious from some of the posts that Buck has made that he was well versed in ‘the cloud’ before the move, but he hasn’t let this stop him from being very vocal about sharing his excitement about the possibilities. But I digress…

The root of the exchange was regarding access to hosted Azure services for developer education/training. Now before I get off on my rant, I want to be a bit positive. MSFT has taken some great steps toward making Azure available to developers. The CTP was nice and long, and there were few restrictions (at least in the US) on participation. They also gave out initial MSDN and BizSpark Azure subscription benefits that were fair and adequate. They have even set up free labs for SQL Azure and the Azure AppFabric. And of course, there’s the combination of free downloads and the local development fabric.

All in all, it’s a nice set of tools that allows for initial learning on the Windows Azure platform. The real restriction remains the ability to actually deploy hosted services and test them in a “production environment”. Now the Dev Fabric is great and all, but as anyone that’s spent any time with Azure will tell you, you still need to test your apps in the cloud. There’s simply no substitute. And unfortunately, there is no Windows Azure lab.

Effective November 1, 2010, several of the aforementioned benefits are being either removed (AzureUSAPass) or significantly reduced. Now I fully understand that it costs money for MSFT to provide these benefits and I am grateful for what I’ve gotten. But I’m passionate about the platform and I’m concerned that with these changes, it will be even more difficult to help “spread the faith” as it were.

So consider this my public WTF. Powers that be, please consider extending these programs and the current benefit levels indefinitely. I get questions on a weekly basis from folks about “how can I learn about Azure”. I’d hate to have to start telling them they need to have a credit card. Many of these folks are the grass roots types that are doing this on their own time and dime but can help influence LARGE enterprises. Furthermore, you have HUGE data centers with excess capacity available. I’m certain that a good portion of that capacity is kept in an up state and as such is consuming resources. So why not put it to good use and help equip an army. An army of developers all armed with the promise and potential of cloud computing.

The more of these soldiers we have, the easier it will be to tear down the barriers that are blocking cloud adoption and overcome the challenges that these solutions face.

Ok… my enthusiasm about this is starting to make me sound like a revolutionary. So before I end up on another watch list I’ll cut this tirade short. Just please, either extend these programs or give us other options for exploring and learning your platform.

I’ve been lobbying for lowering developers’ entry cost for Windows Azure, SQL Azure and Azure AppFabric for more than a year with only moderate success.

See Amazon Web Services announced a new AWS Free Usage Tier on 10/21/2010 in the Other Cloud Computing Platforms and Services section below.


Bruce Kyle asked Want to Know When You Approach Your Azure Limit? in a 10/20/2010 post to the US ISV Evangelist blog:

One issue customers have had in the past is that they would exceed their predefined limits on the Windows Azure Platform without notification (especially when using MSDN offerings and such).  Now we have a way to notify you.

We are now sending customers emails notifications when their compute hour usage exceeds 75%, 100% and 125% of the number of compute hours included in their plan.

You can also get an email every week detailing your consumption for the first 13 weeks of your subscription. After that, you’ll receive alerts when you reach certain thresholds.

For more information, see the Windows Azure team blog post Announcing Compute Hours Notifications for Windows Azure Customers.

You can also view your current usage and unbilled balance at any time on the Microsoft Online Services Customer Portal.

This isn’t news, but a reminder might be helpful to those who missed the first post.


Karsten Januszewski revealed The Latest Twitpocolypse in a 10/19/2010 post:

There’s another Twitpocolypse on the horizon.  If you’ve developed against the Twitter API—and especially if you’re using a JSON parser for deserialization—you’d better read up.

In summary, there’s a serialization issue now that Twitter is moving to 64-bit signed integers for tweet ids, since JavaScript can’t precisely represent integers larger than 53 bits. And because Twitter passes the tweet id as a number, not a string, they’ve introduced a new property that passes the id as a string. This means you’ll have to change your code to parse the id_str property, which is passed as a string, instead of the id property:

{"id": 10765432100123456789, "id_str": "10765432100123456789"} 

If you’re using .NET and deserializing into an int instead of a long or a string, you could be at risk. So check your code.
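If you're checking your own .NET code, a minimal sketch of the safe mapping looks like this (illustrative class and property names, shown here with Json.NET; any JSON library with similar attribute mapping works):

    using System;
    using Newtonsoft.Json;

    // Illustrative DTO: map the numeric id to a long rather than an int,
    // and also capture Twitter's new id_str property as a string.
    public class Tweet
    {
        [JsonProperty("id")]
        public long Id { get; set; }       // an Int32 overflows once ids pass ~2.1 billion

        [JsonProperty("id_str")]
        public string IdStr { get; set; }  // safest: treat the id as an opaque string
    }

    class IdStrDemo
    {
        static void Main()
        {
            // Example id that no longer fits in an Int32.
            const string json = @"{""id"": 12738165059, ""id_str"": ""12738165059""}";
            var tweet = JsonConvert.DeserializeObject<Tweet>(json);
            Console.WriteLine(tweet.IdStr);   // prefer id_str for display and round-tripping
        }
    }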

Fortunately, both Archivist Web and Archivist Desktop are immune to this issue. In the case of Archivist Web, which uses the most excellent TweetSharp library, ids are cast to long variables. In the case of Archivist Desktop, which uses a custom-built deserialization framework, ids are cast to string variables.

Interesting to watch as Twitter continues to scale and deal with id generation issues. You can see an earlier example of issues around id generation in this lab note.

See examples of The Archivist’s tweet histories in the left frame of the OakLeaf Systems blog.


Nicole Hemsoth described Cloud-Driven Tools from Microsoft Research Target Earth, Life Sciences in a 10/19/2010 post to the HPC in the Cloud blog:

image Following its eScience Workshop at the University of California, Berkeley last week, Microsoft made a couple of significant announcements to over 200 attendees about new toolsets available to aid in ecological and biological research.

At the heart of its two core news items is a new ecological research tool called MODISAzure coupled with the announcement of the Microsoft Biology Foundation, both of which are tied to Microsoft's Azure cloud offering, which until relatively recently has not been on the scientific cloud computing radar to quite the same degree as Amazon's public cloud resource.

While the company's Biology Foundation announcement relies less on the cloud for processing power than for supplying a platform for collaboration and information-sharing, the ecological research tool provides a sound use case of scientific computing in the cloud. All of the elements of what is useful about the cloud for researchers are present: dynamic scalability, processing power equivalent to or more powerful than local clusters, and the ability for researchers to shed some of the programming and cluster management challenges in favor of on-demand access.

MODISAzure and Flexible Ecological Research
Studies of ecosystems, even on a minute, local scale, are incredibly complex undertakings because any ecosystem comprises a large number of elements: from water, climate and plant cycles to external influences, including human interference, the list of constituent parts that factor into the broader examination of an ecosystem seems almost endless. Each element doubles onto itself, forming a series of sub-factors that must be considered, a task that requires supercomputer assistance, or at least used to.

Last week at its annual eScience Workshop, Microsoft Research teamed up with the University of California, Berkeley to announce a new research tool that simplifies complex data analysis and that its creators say will focus on "the breathing of the biosphere." The word "breathing" here implies near real-time data collection and analysis, meaning that researchers will be able to see the ecosystem as it exists, or "breathes," at a particular moment.

In order to monitor the breathing of a biosphere, data from satellite images from the over 500 FLUXNET towers are analyzed in minute detail, often down to what the team describes as a single-kilometer-level, or, if needed, on a global scale. The FLUXNET towers themselves, which are akin to a network of sensor arrays that measure fluctuations in carbon dioxide and water vapor levels, can provide data that can then be scaled over time, meaning that researchers can either get a picture of the present via the satellite images or can take the data and look for patterns that stretch back over a ten-year period if needed.

It is in this flexibility of timelines that researchers have to draw from that the term "breathing" comes into play. According to Catharine van Ingen, a partner architect on the project from Microsoft Research, "You see more different things when you can look big and look small. The ability to have that kind of living, breathing dataset ready for science is exciting. You can learn more and different things at each scale."

To be more specific, as Microsoft stated in its release, the system "combines state-of-the-art biophysical modeling with a rich cloud-based dataset of satellite imagery and ground-based sensor data to support carbon-climate science synthesis analysis on a global scale."

This system is based on MODISAzure, which Microsoft describes as a "pipeline for downloading, processing and reducing diverse satellite imagery." This satellite imagery, which is collected from the network of FLUXNET towers, employs the Windows Azure platform to gain the scalable boost it needs to deliver the results to researchers' desktops.

What this means, in other words, is that, in theory, scientists studying the complex interaction of forces in an ecosystem, who would otherwise rely on supercomputing capacity to handle such tasks, are now granted a maintenance- and hassle-free research tool via the power of Microsoft's cloud offering.



The Azure Forum Support Team posted a list of chapters to date of their Bring the clouds together: Azure + Bing Maps series on 8/11/2010 (missed when posted):

We will start to write a series of articles under the name "Bring the clouds together". The series will contain articles, sample applications, and live demonstrations that show how you can combine the power of Microsoft cloud computing solutions, as well as related technologies. We will provide both source code and written guidance to help you design and develop cloud applications of your own.

This first series focuses on how to combine Windows Azure, SQL Azure, and Bing Maps. You can find a preview of the live demonstration at http://sqlazurebingmap.cloudapp.net/. Currently this is a preview, which may contain some bugs. As we go through the series, we will release the source code and fix the issues in the live demonstration.


Chapters list

The chapters list will grow as new articles become available.

  • Chapter 1: Introducing the Plan My Travel application. Describes the sample application, including its features, and high-level overviews of the implementation.
  • Chapter 2: Choosing the right platform and technology. Discusses how to make decisions when architecting a typical cloud-based, consumer-oriented application.
  • Chapter 3: Designing a scalable cloud database. Discusses how to design a scalable cloud database.
  • Chapter 4: Working with spatial data. Discusses how to work with spatial data in SQL Azure.
  • Chapter 5: Accessing spatial data with Entity Framework. Discusses how to access spatial data in a .NET application, and a few considerations you must take into account when working with SQL Azure.
  • Chapter 6: Expose data to the world with WCF Data Services. Discusses how to expose data to the world using WCF Data Services. In particular, it walks through how to create a reflection provider for WCF Data Services.



<Return to section navigation list> 

Visual Studio LightSwitch

Matt Thalman explained Query Reuse in Visual Studio LightSwitch on 10/21/2010:

One of the features available in Visual Studio LightSwitch is the ability to model queries that can be reused in other queries that you model.  This allows developers to write a potentially complex query once and be able to define other queries which reuse that logic.  In V1 of LightSwitch, this query reuse is exposed through the concept of a query source.

All queries that you create have a source.  The source of a query defines the set of entities on which the query will operate.  By default, any new query that you create will have its source set to the entity set of the table for which you created the query.  So if I create a query for my Products table, the source of the query will be the Products entity set.  You can think of the entity set as being similar to a SELECT * FROM TABLE for the table.  The entity set is always the root; it does not have a source.  Developers have the ability to change a query’s source.  The source of a query can be set to another query that has already been defined.  (You can do this as long as both queries are returning the same entity type.  You can’t, for example, define a query for customers and a query for products and define the source of the products query to be the customer query.)  This effectively creates a query chain where the results of one query are fed to the next query where the results are further restricted.

To illustrate how this works, let’s start with a sample set of Product data to work with:

[Screenshot: sample Product data]

I’ve defined a query named ProductsByCategory that returns the products of a given category:

[Screenshot: the ProductsByCategory query designer]

This query will return the following data when “Beverages” is passed as the Category parameter:

[Screenshot: ProductsByCategory results for the Beverages category]

Now this query can be reused for any other query which wants to retrieve data for a specific category but also wants to further refine the results.  In this case, I’ll define a query named DiscontinuedProducts that returns all discontinued products for a given category.

[Screenshot: the DiscontinuedProducts query designer, with its source set to ProductsByCategory]

Notice the red bounding box which I’ve added to indicate that this query’s source is set to the ProductsByCategory query.  You’ll also notice that this query “inherits” the Category parameter from its source query.  This allows the DiscontinuedProducts query to consume that parameter in its filter should it need to do so.  The result of this query is the following:

[Screenshot: DiscontinuedProducts query results]

It may be helpful to think of object inheritance in the context of query sources.  In other words, the DiscontinuedProducts query inherits the logic of ProductsByCategory and extends that logic with its own. 

It should be noted that any sorts that are defined for queries are only applicable for the query that is actually being executed.  So if ProductsByCategory sorted its products by name in descending order and DiscontinuedProducts sorted its products by name in ascending order, then the results of the DiscontinuedProducts query will be in ascending order.  In other words, the sorting defined within base queries are ignored.
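Although LightSwitch models query sources in its designer rather than in code, the chaining behaves much like composing LINQ queries, where the downstream query operates on the upstream query's results and reuses its parameter. A rough analogy only (illustrative types, not LightSwitch's generated code):

    using System.Linq;

    public class Product
    {
        public string ProductName { get; set; }
        public string Category { get; set; }
        public bool Discontinued { get; set; }
    }

    public static class ProductQueries
    {
        // "ProductsByCategory": filters the root entity set by a Category parameter.
        public static IQueryable<Product> ProductsByCategory(IQueryable<Product> products, string category)
        {
            return products.Where(p => p.Category == category);
        }

        // "DiscontinuedProducts": its source is ProductsByCategory, so it inherits
        // the Category parameter and further restricts the results.
        public static IQueryable<Product> DiscontinuedProducts(IQueryable<Product> products, string category)
        {
            return ProductsByCategory(products, category).Where(p => p.Discontinued);
        }
    }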


Kunal Chowdhury posted a detailed, fully illustrated Beginners Guide to Visual Studio LightSwitch (Part–1) on 10/20/2010:

Visual Studio LightSwitch is a new tool for building data-driven Silverlight applications using the Visual Studio IDE. It automatically generates the user interface for a data source without requiring you to write any code, although you can write a small amount of code to meet your requirements.

Recently, I got some time to explore Visual Studio LightSwitch. I created a small database application with a proper data-entry UI in a small amount of time (without any XAML or C# code).

Here in this article, I will guide you through it with the help of a small application. There will be a regular series of articles on this topic. Read the complete article to learn about creating a Silverlight data-driven application with the help of Visual Studio LightSwitch.

Setting up LightSwitch Environment
Microsoft Visual Studio LightSwitch Beta 1 is a flexible business application development tool that allows developers of all skill levels to quickly build and deploy professional-quality desktop and Web business applications. To start with LightSwitch application development, you need to install Visual Studio LightSwitch on your development machine. To do so, follow the steps below:
  • Install Visual Studio 2010
  • Install Visual Studio LightSwitch

The LightSwitch installation will install all the other required components on your PC one by one, including SQL Server Express, Silverlight 4, the LightSwitch Beta server, etc.
You can download the LightSwitch Beta 1 from here: Microsoft Download Center (Visual Studio LightSwitch Beta 1)
Creating a LightSwitch Project
Once you have installed Visual Studio LightSwitch, run the product to create a new project. Go to File –> New –> Project or press Ctrl + Shift + N to open the “New Project” dialog. From the left panel, select “LightSwitch”; the right pane will filter to show the LightSwitch project templates, which come in both VB and C# versions. Select your preferred language. Here I will use the C# version.
[Screenshot: the New Project dialog]
In the above dialog window, enter the name of the project, select a proper location for the project, and hit “OK”. The Visual Studio 2010 IDE will then create a blank LightSwitch project for you. Project creation takes some time, so be patient.
Create a Database Table

After the project has been created by the IDE, it will open up the following screen:

[Screenshot: the LightSwitch start screen]
You can see that it offers two options in the UI: you can create a new table for your application, or you can attach an external database. If you open the Solution Explorer, you will see that the project is totally empty; it has only two folders, named “Data Sources” and “Screens”.

“Data Sources” stores your application data, i.e. database tables. The “Screens” folder stores the UI screens that you create. I will describe them later in this tutorial.

Let’s create a new table for our application. Click on “Create new table” to continue. This will bring up the following screen on your desktop:
[Screenshot: the table designer]
In the above page, you can design your table structure just as you would when creating a new table in SQL Server. Each table gets an “Id” column of type “Int32”, which is the primary key of the table. Add some additional columns to the table.
[Screenshot: the table designer’s list of available data types]
In the above snapshot, you can see that there are several column types (data types) available in a LightSwitch application. For our sample application, we will create four additional columns called “FirstName (String)”, “LastName (String)”, “Age (Int16)” and “Marks (Decimal)”. Change the title of the table from “Table1Item” to “StudentTable”; this name will be used when saving the table. Save the table now. If you want to change the name of the table later, just rename the table header and save it, which will automatically update the table name.
[Screenshot: the completed StudentTable design]
Be sure to mark all the fields as “Required”. This will help with validating the data, as I will show later.

Create a Data Entry Screen
Once you are done structuring your database table columns, you will need to create a UI screen for your application so that you can insert records. Click on the “Screen…” button in the top panel, as shown in the snapshot below:
[Screenshot: the Screen… button in the designer toolbar]
This will open the “Add New Screen” dialog window. Select “New Data Screen” from the screen templates, provide a Screen Name in the right panel, and choose the database table as the Screen Data from the dropdown.
[Screenshot: the Add New Screen dialog]
Click “OK” to continue. This will create a new UI screen that the end user will use to insert new data records. Now, in the Solution Explorer, you can see that the “Data Sources” folder has one database named “ApplicationData”, which has a table called “StudentTables”. In the “Screens” folder you can find the recently created data-entry screen named “CreateNewStudentTable”.
You can change the design of the UI from the screen below:
[Screenshot: the screen designer]
You can add or delete fields and controls, and you can also rearrange the controls in the UI. For our first sample application, we will go with the default layout controls.
See the Application in Action

Woo!!! Our application is ready. We will be able to insert new records into our database table from our application, with no need to write a single line of code. What? You don’t believe me? Let’s run the application by pressing F5. This will build the solution, which will take some time to compile. Once it builds successfully, it will open the following UI on your desktop:

[Screenshot: the running application’s data entry screen]
It is a Silverlight out-of-browser (OOB) application. If you want to confirm this, right-click on the application and you will see the Silverlight context menu pop up on the screen.
OMG!!! We didn’t do anything to design the above UI! Visual Studio LightSwitch automatically created the screen for you with “Save” and “Refresh” buttons. You can see a collapsible “Menu” panel at the left of the screen. On the right side, you will see a tabular panel containing the labels and TextBoxes required to insert data into your application database.
In the top-right corner of the screen, you will see a “Customize Screen” button. Clicking this pops up another child window in which you can customize the application screen at runtime. This button will not be visible once you deploy the application. We will cover it later in a different article.

Kunal continues with

  • Validation of Fields
  • More on Save
  • Customize the Screen
  • End Note



<Return to section navigation list> 

Windows Azure Infrastructure

• Joe Panetieri analyzed Microsoft’s Cloud Strategy: Three Reasons to Worry in a 10/20/2010 post to the MSPMentor blog:

When Microsoft announced plans to re-brand its SaaS platform from BPOS to Office 365, I took a day or so to digest the news. No doubt, Microsoft has made some solid SaaS and cloud computing moves in the past year. But I believe this week’s rebranding efforts reveal that Microsoft’s cloud initiatives are experiencing considerable turbulence. Here’s why.

Consider two recent moves:

1. Exit, Stage Left: Microsoft earlier this week revealed that Chief Software Architect Ray Ozzie will be leaving the company. CEO Steve Ballmer put a positive spin on Ozzie’s departure, claiming Microsoft’s “progress in services and the cloud” was “now full speed ahead in all aspects of our business.”

2. What’s In A Name?: On October 19, Microsoft announced plans to dump the BPOS (Business Productivity Online Suite) brand, in favor of a new Office 365 brand. Matt Weinberger, over on The VAR Guy, likes the branding move and he also applauds Microsoft’s SMB cloud efforts. But I’m not sure I fully agree.

When it debuts in 2011, Microsoft says Office 365 will include Microsoft Exchange Online, Microsoft SharePoint Online, Microsoft Lync Online and the latest version of Microsoft Office Professional Plus desktop suite. Translation: Microsoft is taking one of its strongest brands — Office — and linking it to the cloud. (Here’s a comprehensive Q&A from Microsoft.)

Three Signs of Trouble?

What is Microsoft really saying here? A few potential answers, based purely on my speculation:

1. The BPOS Brand was stalled: I suspect many customers and partners could not follow YATAFR (yet another technology acronym from Redmond). Perhaps similarly, Microsoft may have faced challenges with Office Communications Server, which is now rebranded as Microsoft Lync.

2. The BPOS Brand was tarnished: Recent BPOS outages cast a bit of a cloud over Microsoft and its SaaS strategy. By changing the brand to Office 365, Microsoft is suggesting that (A) the company’s SaaS strategy is ubiquitous like Microsoft Office and (B) Microsoft’s SaaS platform is reliable and online 365 days out of the year. But the new name is hardly original. Just ask Seagate’s i365 storage as a service business.

3. Perhaps Ray Wasn’t Really the Man: No doubt, Microsoft has some promising cloud platforms. I hear positive buzz about Windows Azure, which continues to attract more and more ISV interest. Moreover, offerings like SharePoint Online and Exchange Online are widely popular in the SaaS world.

But something just wasn’t clicking with Microsoft’s All In cloud efforts. A few examples: Exchange 2010 has been widely available on-premise for nearly a year but it’s not yet widely available in Microsoft BPOS. That’s a real head-scratcher. Moreover, SaaS service providers like Intermedia have long offered hosted Exchange 2010, meaning that Microsoft trails its own Exchange partners in the cloud market.

Are the items above Ray Ozzie’s fault? I doubt it. Was it a coincidence that Microsoft announced Ray Ozzie’s imminent departure the same week that the company announced rebranding plans for BPOS? Perhaps.

But something doesn’t add up…


• Bob Warfield recommended PaaS Strategy: Sell the Condiments, Not the Sandwiches in this 10/20/2010 essay:

This is part two of a two-part series I’ve wanted to do about strategy for PaaS (Platform-as-a-Service) vendors.  The overall theme is that Platforms as a Service require too much commitment from customers.  They are Boil the Ocean answers to every problem under the sun.  That’s great, but it requires tremendous trust and commitment for customers to accept such solutions.  I want to explore alternate paths that have lower friction of adoption and still leave the door open in the long run for the full solution.  PaaS vendors need to offer a little dating before insisting on marriage and community property.

In Part one of this PaaS Strategy series, I covered the idea of focusing on getting the customer’s data over before trying to get too much code.  We talked about Analytics, Integration with other Apps, Aggregation, and similar services as being valuable PaaS offerings that wouldn’t require the customer to rewrite their software from the ground up to start getting value from your PaaS.  Now I want to talk about the idea of starting to get some code to come over to the PaaS without having to have all of it.  My fundamental premise is to create a series of packages that can be adopted into the architecture of a product without forcing the product to be wholesale re-architected.  I use the term “packages” very deliberately.

Consider Ruby on Rails Gems as a typical packaging system.  A gem can be anything from a full blown RoR application to a library intended to be used as part of an application.  We’re more focused on the latter.  The SaaS/PaaS world has a pretty good handle on packaging applications in the form of App Stores, but it needs to take the next step.  A proper PaaS Store (we’re gonna need a better name!) would include not only apps built on the platform, but libraries usable by other apps and data too.  Harkening back to my part one article, data is valuable and the PaaS vendor should make it possible to share and monetize the data.  Companies like Hoovers and LinkedIn make it very clear that there is data that is valuable and would add value if you could link your data to that data.

What are other packages that a PaaS PackageStore might offer?

I am fond of saying that when you set out to write a piece of software, 70% of the code written adds no differentiated business advantage for your effort at all.  It’s just stuff you have to get done.  Stuff like your login and authentication subsystem.  You’re not really going to try to build a better login and authentication system, are you?  You just want it to work and follow the industry best practices.  A Login system is a pretty good example of a useful PaaS package for a variety of reasons:

  • It doesn’t have a lot of UI, and what UI it does have is pretty generic.  Packages with a lot of UI are problematic because they require a lot of customization to make them compatible with your product’s look and feel.  That’s not to say it can’t be done, just that it’s a nuisance.
  • It adds a lot of value and it has to be done right.  As a budding young company, I’d pay some vendor who can point to much larger companies who use their login package.  It would make my customers feel better to know this critical component was done well.
  • It involves a fair amount of work to get one done.  There is a fair amount of code and it has to be tested very carefully.
  • I can’t really add unique value with it, it just has to work, and everyone expects it to work the same way.
  • It is a divisible subsystem with a well-defined API and a pretty solid “bulkhead” interface.  What goes on the other side of the bulkhead is something my architecture can largely ignore.  It doesn’t have to spread its tendrils too far and wide through my system in order to add value.  Therefore, it doesn’t perturb my architecture, and if I had to, I could replace it pretty easily.

These are all great qualities for such a package.  What are some others?  Think about the Open Source libraries you’ve used for software in the past, because all of that is legitimate territory:

  • Search:  Full text search as delivered by packages like Lucene is a valuable adjunct.
  • Messaging:  Adobe and Amazon both have messaging services available.
  • Mobile:  A variety of services could be envisioned for mobile ranging from making it easy to deal with voice delivered to and from telephones to SMS messages to full blown platforms that facilitate delivering transparent access to your SaaS app on a smartphone.
  • Billing:  There are companies like Zuora out there focused on exactly this area.  Billing and payment processing comes in all shapes and sizes, and many businesses need access to it.  You needn’t have full-blown Zuora to add value.
  • Attachments:  Many apps like to have a rich set of attachments.  There’s a whole series of problems that have to be solved to make that happen, and most of it adds no value at all to your solution.  Doing a great job of storing, searching, viewing, and editing attachments would be an ideal PaaS package.  There are loads of interesting special cases too.  Photos and Videos, for example.

I want to touch on an interesting point of competitive differentiation and selling.  Let’s pick one of these that has a lot of potential for richness like Photos.  Photos are a world unto themselves as you start adding facilities like resizing, cropping, and other image processing chores, not to mention face recognition.  There’s tons of functionality there that the average startup might never get to, but that their users might think was pretty neat.  After a while, the PaaS can set the bar for what’s expected when an app deals with photos.  They do this when their Photos package is plush enough and adopted widely enough that people come to expect its features.  When it reaches the point where the features are expected, but a startup can’t begin to write them from scratch, the PaaS vendor wins big.  The PaaS customers can win big too, because in the early days, before that plushness becomes the norm, a good set of packages can really add depth to the application being built.

PaaS vendors that want to take advantage of this for their own marketing should focus on packages that deliver sizzle.  I can imagine a PaaS package vendor that totally focuses on sizzle.  They’ve got photos, video, maps, charts (bar charts to Gantt charts), calendars, Social Media integrations, mobile, messaging, all the stuff that when it appears in the demo, delivers tons of sizzle and conveys a slick User Experience.

Before moving on, I want to briefly consider the opposite end of the spectrum from sizzle.  There’s a lot of really boring stuff that has to get done in an application too.  We’ve touched on login/authentication.  SaaS Operations is another area that I predict a PaaS could penetrate with good success.  Ops covers a whole lot of territory and the ops needs of SaaS can be quite a bit different than the ops needs of typical on-prem software.  For one thing, it needs to scale cheaply.  You can’t throw bodies at it.  For another, you have to diagnose and manage problems remotely.  One of the most common complaints at SaaS companies I’ve worked with is the customer saying performance is terrible.  Is it a problem with the servers?  Is it a problem in the Internet between your servers and the customer?  Is it a problem inside their firewall?  Or is it a problem on their particular machine?  Having a PaaS subsystem that instrumented every leg of the journey, and made it easy to diagnose and report all that would be another valuable, though not particularly sexy offering.

I hope I have shown that there is a lot of evolution left for the PaaS world.  It started out with boil the ocean solutions that demand an application be completely rewritten before it can gain advantage from the platform.  I believe the future is in what I’ll call “Incremental PaaS”.  This is PaaS that adds value without the rewrite.  It’s still a service, and you don’t have to touch code beyond the API’s to access the packages, but it adds value and simplifies the process of creating new Cloud Applications of all kinds.

Read Part one of Bob’s essay.


See my Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post of 10/21/2010, which lists all PDC 2010 sessions related to Windows Azure, SQL Azure, Windows Azure AppFabric, Codename “Dallas” and OData, in the Cloud Computing Events section below.


David Lemphers reviewed Gartner’s “Strategic Technologies” list in an Oooh, Cloud + Mobile + Analytics post of 10/20/2010:

So Gartner released its Top 10 Strategic Technologies for 2011 yesterday, and I had to jump on the back of it with some of my own observations.

Now, the lens I’m looking at Gartner’s list with is heavily based on my last 30 days at PwC. When we think about Cloud Computing @ PwC, which happens to be #1 on Gartner’s list, we tend to think about it in a slightly different way to most.

See, Cloud to us is an instrument, a technique, a mindset, I mean, let’s go all out, it’s a lifestyle. We take some of the smartest folks in leading business verticals, like Finance or Media & Entertainment for example, and challenge them to identify new ways of doing business where cloud computing (and other leading technologies by the way) is a key enabler. This leads to some amazing ideas and business outcomes!

When I look at the Gartner list, three things jump out at me. Cloud, obviously, and I think Gartner is right on the money when they suggest a lot of the action will happen in understanding how to select a cloud provider and how to maintain cloud services from a governance and policy perspective. Mobile is a logical extension of the cloud paradigm shift; if you start moving massive computing to the utility domain, then more immersive, intimate mobile devices are key. Business and lifestyle now collapse onto each other, and you need a mobile computing platform that can not only provide you with a meaningful social experience, but also one that is cognizant and capable from a business point of view.

And finally, analytics. Well, my time at Live Labs and Windows Azure taught me one thing: if you can get lots of machines talking to lots of other services (like Twitter, Bing, etc.), and use that to correlate your own information needs, you’re on a winner. Being able to qualify a factoid against the web and social corpus, at real-time speed, directly from your palm, is infinitely powerful.

I’m also going to add in Interactive Experiences as a leading area, I mean, Kinect creates amazing possibilities in this area.

I’m super excited about where Cloud, Mobile and Analytics will intersect in 2011. What about you?


Gartner claimed “Analysts Examine Latest Industry Trends During Gartner Symposium/ITxpo, October 17-21, in Orlando” and put cloud computing in the first position in its Gartner Identifies the Top 10 Strategic Technologies for 2011 press release of 10/19/2010:

Gartner, Inc. today highlighted the top 10 technologies and trends that will be strategic for most organizations in 2011. The analysts presented their findings during Gartner Symposium/ITxpo, being held here through October 21.

Gartner defines a strategic technology as one with the potential for significant impact on the enterprise in the next three years. Factors that denote significant impact include a high potential for disruption to IT or the business, the need for a major dollar investment, or the risk of being late to adopt.

A strategic technology may be an existing technology that has matured and/or become suitable for a wider range of uses. It may also be an emerging technology that offers an opportunity for strategic business advantage for early adopters or with potential for significant market disruption in the next five years.   As such, these technologies impact the organization's long-term plans, programs and initiatives.

“Companies should factor these top 10 technologies in their strategic planning process by asking key questions and making deliberate decisions about them during the next two years,” said David Cearley, vice president and distinguished analyst at Gartner.

“Sometimes the decision will be to do nothing with a particular technology,” said Carl Claunch, vice president and distinguished analyst at Gartner. “In other cases, it will be to continue investing in the technology at the current rate. In still other cases, the decision may be to test or more aggressively deploy the technology.”

The top 10 strategic technologies for 2011 include:

  1. Cloud Computing. Cloud computing services exist along a spectrum from open public to closed private. The next three years will see the delivery of a range of cloud service approaches that fall between these two extremes. Vendors will offer packaged private cloud implementations that deliver the vendor's public cloud service technologies (software and/or hardware) and methodologies (i.e., best practices to build and run the service) in a form that can be implemented inside the consumer's enterprise. Many will also offer management services to remotely manage the cloud service implementation. Gartner expects large enterprises to have a dynamic sourcing team in place by 2012 that is responsible for ongoing cloudsourcing decisions and management.
  2. Mobile Applications and Media Tablets. Gartner estimates that by the end of 2010, 1.2 billion people will carry handsets capable of rich, mobile commerce providing an ideal environment for the convergence of mobility and the Web. Mobile devices are becoming computers in their own right, with an astounding amount of processing ability and bandwidth. There are already hundreds of thousands of applications for platforms like the Apple iPhone, in spite of the limited market (only for the one platform) and need for unique coding.

The quality of the experience of applications on these devices, which can apply location, motion and other context in their behavior, is leading customers to interact with companies preferentially through mobile devices. This has led to a race to push out applications as a competitive tool to improve relationships and gain advantage over competitors whose interfaces are purely browser-based. …

Read about the remaining eight strategic technologies here.


Robert Duffner introduced himself to the Windows Azure community in his Hi, I'm Robert post of 10/20/2010:

image I'm Robert Duffner, director of Product Management for Windows Azure, and I'm kicking off a new series of blog posts where I will be interviewing thought leaders in cloud computing.  I'll be reaching out broadly across the globe to industry, public sector, and academia. You definitely don't want to miss these intellectually stimulating and thought-provoking discussions on the cloud. My first interview is with Chris C. Kemp, NASA's first chief technology officer for IT and the driving force behind the Nebula cloud computing pilot.

imageWhether I'm engaging senior IT executives or developers, I try to focus on amplifying the voice of the customer with the various advocacy programs I'm currently driving.  This is all in an effort to help shape the future product direction of the Windows Azure Platform.  Some of these include leading the Windows Azure Platform Customer Advisory Board as well as driving the Windows Azure Technology Adoption Program (TAP) to help our early adopter customers and partners validate the platform and shape the final product by providing early and deep feedback.  I also stay engaged with the top Windows Azure experts in the world as the community leader for the Windows Azure Most Valued Professionals (MVP) program.  

Outside of work I'm an explorer and anthropologist who enjoys traveling anywhere.  You can see some of my exploits here.  I've actually been to all 7 continents, including Antarctica!  But lately, I love to hike in the Pacific Northwest - the Cascades, the Olympics, and Mount Rainier.  More details about my experience and background can be found here.

Thanks for your interest in Windows Azure and stay tuned!


David Linthicum warns “With the increased success of cloud computing, we're bound to create a few monsters that will bedevil a dependent IT” in a deck for his The danger of the coming 'big cloud' monopolies post of 10/20/2010 to InfoWorld’s Cloud Computing blog:

image Fast-forward five years: The Senate convenes a meeting to discuss recent price hikes by the three largest cloud computing providers. Businesses are up in arms because cloud computing subscription prices are tied directly to their IT spending and, thus, their bottom line. Also, we've grown so dependent on these large cloud computing providers that moving to other clouds or to internal data centers is just not practical. In other words, we work in a functional monopoly where a few providers control the public cloud market. We all know Big Oil and Big Tobacco. How about Big Cloud?

image We've been here before with commodity markets such as energy and food, but never with IT technology. In light of the cloud's rising trajectory as an efficient and cheap way of doing computing, cloud monopolies are nearly guaranteed to pop up at some point.

The logic behind this is clear. Cloud computing providers need many points of presence (local data centers) to deliver the reliability, compliance, and performance that most businesses will demand and governments will mandate. The ultimate spending on infrastructure to support this will easily creep past $1 billion.

To get there, cloud providers will have to combine assets, merging and merging again until they rival GM, BP, Archer Daniels Midland, Monsanto, DuPont, AT&T, Koch Industries, Apple, and Microsoft in terms of size and revenue. In many cases, the existing big technology and/or cloud providers will combine, or groups of cloud providers will merge just to survive -- all of this to live up to the infrastructure needs required by the market.

The trouble with this scenario, as with any other functional monopoly that has come along over the years, is that too much control is in the hands of a few. Thus, businesses are left vulnerable to shifting priorities and costs.

Of course, we have our government to check in on these matters, but as we've seen many times before, the number of Senate hearings with our elected representatives jumping uglies on some CEO typically does not lead to productive outcomes. Instead, we're left hoping the market will eventually right the wrongs. In fact, U.S. businesses -- and Silicon Valley in particular -- usually discourage the government from getting involved in market activities until disaster strikes, as in the cases of Enron and the recent banking scandals. The market almost always rules.

Read more: page 2 of the original post.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

image

No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie (@lmacvittie) asserted Authentication is not enough. Authorization is a must for all integrated services – whether infrastructure components, applications, or management frameworks as a preface to her Authorization is the New Black for Infosec post of 10/20/2010 to F5’s DevCentral blog:

image If you’ve gone through the process of allowing an application access to Twitter or Facebook, then you’ve probably seen OAuth in action. Last week a mini-storm was brewing over such implementations, primarily regarding the “overly-broad permission structure” implemented by Twitter.

Currently Twitter application developers are given 2 choices when registering their apps – they can either request “read-only access” or “read & write” access. For Twitter “read & write” means being able to do anything through the API on a user’s behalf.

Twitter’s overly-broad permission structure amplifies the concern around OAuth token security because of what those tokens allow apps to do.

-- Twitter Permissions & Security

Reading that blog post and the referenced articles led to one of those “aha” moments when you realize something is a much larger problem than it first appears. After all, many folks as a general rule do not allow any external applications access to their Facebook or Twitter accounts, so they aren’t concerned by such broad permission structures. But if you step back and think about the problem in more generalized terms, in terms of how we integrate applications and (hopefully) infrastructure to enable IT as a Service, you might start seeing that we have a problem, Houston, and we need to address it sooner rather than later.

image

Interestingly enough, as I was getting ready to get my thoughts down on this subject, I was also keeping track of Joe Weinman and his awesome stream of quotes coming out of sessions he attended at the Enterprise Cloud Summit held at Interop NY. He did not disappoint as suddenly he started tweeting quotes from a talk entitled, “Infrastructure and Platforms: A Combined Strategy”, given by Nimbula co-founder and vice president of products, Willem van Biljon.

The sentiment exactly mirrored my thoughts on the subject: traditional permissions are “no longer enough” and are highly dependent “on the object being acted upon” as well as the actor. It’s contextual, requiring that the strategic point of control enforcing and applying policies does so on a granular level and in the context of the actor, the object acted upon (the API call, anyone?) and even location. While it may be permissible for an admin to delete an object whilst doing so from within the recognized organization management network, it may not be permissible for the same action to be carried out from an iPad that’s located in Paris. Because that may be indicative of a breach, especially when you don’t have an office in Paris.

More disturbing, if you think about it, is the nature of infrastructure integration via an API, a la Infrastructure 2.0. Like Twitter’s implementation and many integration solutions that were designed with internal-to-the-organization only use cases in mind, one need only authenticate to the component in order to gain complete access to all functions possible through the API.

The problem is, of course, that just because you want to grant User/Application A access to everything, you might not want to allow User/Application B to have access to the entire system. Application delivery infrastructure solutions, for example, have a very granular set of API calls that allow third-party applications to control everything. Once authenticated a user/application can as easily modify the core networking configuration as they can read the status of a server.

There is no real authorization scheme, only authentication. That’s obviously a problem.

THE FIVE Ws of CONTEXT-AWARE AUTHORIZATION

What’s needed is the ability to “tighten” security around an API because at the core of most management consoles is the concept of integration with the components they manage via an API. Organizations need to be able to apply granular authorization – access to specific API calls/methods based on user or at the very least role – so that only those who should be able to tweak the network configuration can do so. This is not only true for infrastructure; it’s especially important for applications with APIs, the purpose of which is application integration across the Internet.

Merely obtaining credentials and authenticating them is not enough. Yes, that’s Joe Bob from the network team, that’s nice. What we need to know is what he’s allowed to do and to what and perhaps from where and even when and perhaps how. We need to have the ability to map out these variables in a way that allows organizations to be as restrictive – or open – as needs be to comply with organizational security policies and any applicable regulations. Perhaps Joe Bob can modify the network configuration of component A but only from the web management console and only during specified maintenance windows, say Saturday night. Most organizations would be fine with less detail – Joe Bob can modify the network from any internal device/client at any time, but Mary Jo can only read status information and health check data and she can do so from any location.

Whether we’re talking fine-grained or broad-grained permissions at the API call level is less the issue than simply having some form of authorization scheme beyond system-wide “read/write/execute” permissions, which is where we are today with most infrastructure components and most Web 2.0 sites. We need to drill down into the APIs and start examining authorization on a per-call basis (or per-grouping-of-calls basis, at a minimum).

And in the case of cloud computing and multi-tenant architectures, we further need to recognize that it’s not just the API-layer that needs authorization but also the “tenant”. After all, we don’t want Joe Bob from Tenant A messing with the configuration for Tenant B, unless of course Joe Bob is part of the operations team for the provider and needs that broad level of access. image
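
A minimal sketch of this kind of per-call, context-aware authorization might look like the C# below. Every name in it (ApiCallPolicy, AuthorizationContext, the maintenance-window check) is hypothetical and illustrative only, not an F5, Twitter or other vendor API; the point is simply that the decision keys off who is calling, what they are calling, from where and when, and that access is denied unless a policy explicitly permits it.

using System;
using System.Collections.Generic;
using System.Linq;

// Who is calling, what they are calling, from where, and when.
public class AuthorizationContext
{
    public string User { get; set; }
    public string Role { get; set; }
    public string ApiCall { get; set; }        // e.g. "Network.UpdateConfig"
    public string SourceNetwork { get; set; }  // e.g. "mgmt-lan", "internet"
    public DateTime WhenUtc { get; set; }
}

// One rule: which roles may invoke which group of API calls, from which
// networks, optionally only on a maintenance day.
public class ApiCallPolicy
{
    public string ApiCallPrefix { get; set; }  // e.g. "Network." or "Status."
    public HashSet<string> AllowedRoles { get; set; }
    public HashSet<string> AllowedNetworks { get; set; }
    public DayOfWeek? MaintenanceDay { get; set; }

    public bool Permits(AuthorizationContext ctx)
    {
        return ctx.ApiCall.StartsWith(ApiCallPrefix)
            && AllowedRoles.Contains(ctx.Role)
            && AllowedNetworks.Contains(ctx.SourceNetwork)
            && (MaintenanceDay == null || ctx.WhenUtc.DayOfWeek == MaintenanceDay);
    }
}

public static class Authorizer
{
    // Deny by default: the call goes through only if some policy explicitly permits it.
    public static bool IsAuthorized(AuthorizationContext ctx, IEnumerable<ApiCallPolicy> policies)
    {
        return policies.Any(p => p.Permits(ctx));
    }
}

In this model, Joe Bob’s “modify the network only during the Saturday maintenance window, only from the management LAN” rule is one ApiCallPolicy instance and Mary Jo’s “read status from anywhere” rule is another; the multi-tenant variant simply adds a tenant identifier to both the context and the policy.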

AN INFOSEC STRETCH GOAL

The goal to provide a complete automated and authorization-enabled data center is certainly a “stretch” goal, and one that will likely remain a “work in progress” for the foreseeable future. Many, many components in data centers today are not API-enabled, and there’s no guarantee that they will be any time in the future. Those that are currently so enabled are not necessarily multi-tenant, and there’s no guarantee that they will be any time in the future. And many organizations simply do not have the skills required to perform the integration work necessary to build out a collaborative, dynamic infrastructure. And there’s no guarantee that they will any time in the future. But we need to start planning and viewing security with an eye toward authorization as the goal, recognizing that the authentication schemes of the past were great when access was to an operator or admin and only from a local console or machine. The highly distributed nature of new data center architectures and the increasing interest in IT as a Service make it necessary to consider not just who can manage infrastructure but what they can manage and from where.

The challenges to addressing the problem of authorization and security are many but not insurmountable. The first step is to recognize that it is necessary and develop a strategic architectural plan for moving forward. Then infosec professionals can assist in developing the incremental tactics necessary to implement such a plan. Just as a truly dynamic data center takes time and will almost certainly evolve through stages, so too will implementing an authorization-focused identity management strategy for both applications and the infrastructure required to deliver them.


Jonathan Penn described Why Cloud Radically Changes The Face Of The Security Market in a 10/20/2010 post to his Forrester blog:

image When does a shift create a new market? When you have to develop new products, sell them to different people than before who serve different roles, offer a different value proposition for your solutions, and sell them with different pricing and profitability models - well, that in my view is a different market.

Cloud represents such a disruption for security. And it's going to be a $1.5 billion market by 2015. I discuss the nature of this trend and its implications in my latest report, "Security And The Cloud".

Most of the discussion about cloud and security solutions has been about security SaaS: the delivery model for security shifting from on-premise to cloud-based. That's missing the forest for the trees. Look at how the rest of IT (which is about 30 times the size of the security market) is moving to the cloud. What does that mean in terms of how we secure these systems, applications, and data? The report details how the security market will change to address this challenge and what we're seeing of that today.

Vendors have finally started to come to market with solutions, though as you'll see from the report, we're still at the early stages with far more to go. And developing solutions for cloud environments requires a lot more than scaling up and supporting multi-tenancy. But heightened pressure by cloud customers and prospects is fueling the rapid evolution of solutions. How rapid and radical an evolution? By 2015, security will shift from being the #1 inhibitor of cloud to one of the top enablers and drivers of cloud services adoption.

Read more


<Return to section navigation list> 

Cloud Computing Events

image• My Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post of 10/21/2010 lists all PDC 2010 sessions related to Windows Azure, SQL Azure, Windows Azure AppFabric, Codename “Dallas” and OData. The list was gathered from all session channels (tracks), because the Cloud channel is missing many Azure-related presentations.

image

The PDC site runs on Windows Azure, by the way.


• Cloudcor, Inc. announced its hybrid Up 2010 Cloud Computing Conference to be held 11/15/2010 in San Francisco, CA, 11/18/2010 in Islandia, NY, and 11/16, 11/17 and 11/19/2010 on the Internet as a virtual conference. Following are Microsoft’s conference sessions:

Headline Keynote
The Move is On: Cloud Strategies for Businesses
by Doug Hauger - General Manager, Cloud Infrastructure Services Product Management group at Microsoft.

Keynote Session
PaaS Lessons from cloud pioneers running enterprise scale solutions
by Shawn Murray – Microsoft Incubation Sales

image

Keynote Session
Microsoft’s PaaS Solution: The Windows Azure Platform
by Zane Adam - General Manager, Marketing and Product Management, Azure and Integration Services at Microsoft.

Keynote Session
Windows Azure Platform Security Essentials for TDMs
by Yousef Khalidi, Distinguished Engineer at Microsoft.

Keynote Session
The Future of SaaS, and Why Kentucky Board of Education Moved 700k Seats into the Cloud, in One Weekend
by Danny Kim, CTO, FullArmor.

Microsoft was the sole “Diamond Sponsor” of the conference when this article was posted.


• Geva Perry announced the speaker roster for the QCon San Francisco 2010 Cloud Track on 10/21/2010:

image Back in July I wrote that I am hosting the QCon San Francisco Cloud Computing track this year and made an informal call for speakers.

image Well, I wanted to update on the awesome line-up of speakers that we have for this event and offer my blog readers a $100 discount on registering to the conference, which takes place November 1-5 at the Westin San Francisco. The cloud track takes place all in one day on Friday, November 5.

To receive the $100 discount, sign up here with registration code PERR100.

Here's the agenda. [Click the links for detailed abstracts and speaker bios.] Hope to see you all there!

Track: Real Life Cloud Architectures; Host: Geva Perry

This track covers how a variety of applications are using Cloud computing today, with a focus on mature organizations who have merged some aspect of cloud into their offerings rather than new/small companies which started out in the cloud. The aim of this track is to provide concrete takeaways for how to start using cloud computing in-house when you have an existing application to work with.

 

Time, session, and speaker:

  • 10:35: Get Satisfaction uses Ruby on Rails and cloud computing platform to achieve scalability and reliability, by Thor Muller
  • 12:05: Implementing private clouds, by Andrew Clay Shafer
  • 14:05: Netflix’s Transition to High-Availability Storage Systems, by Siddharth Anand
  • 15:35: Out of This World Cloud Computing, by John L. Callas & Khawaja Shams
  • 16:50: Panel: Data in the Cloud, with Amr Awadallah, Roger Bodamer, Simon Guest, Damien Katz & Razi Sharir


• Wes Yanaga invited ISVs (and others) on 10/20/2010 to Attend PDC10 Live Event at Microsoft Silicon Valley Campus:

image Missed your opportunity to attend PDC10 in Redmond? If so, you can still join in on the excitement via the live stream and in-person delivered sessions.  Attend this October 28th event at the Microsoft Silicon Valley Campus - this year’s content will focus on the next generation of Cloud Services, client & devices, and framework & tools. You can get the highlights of PDC without heading to Redmond. 

imageThe event features a welcome keynote by Dan’l Lewin (CVP, Strategic and Emerging Business Development for Microsoft) and two in-person sessions on OData & Windows Phone 7. Join us for an opportunity to receive limited-edition t-shirts and enter to win an XBOX w/Kinect, a Windows Phone 7 and more! Register here: 1032464622


Andrew R. Hickey reported Cloud Computing Shines At Interop New York in this 10/20/2010 article for CRN:

Cloud computing is taking center stage at this week's Interop New York 2010, with dozens of cloud players showcasing their latest and greatest solutions to make the leap to the cloud a smooth one.

Of the more than 150 exhibitors on the Interop New York show floor, a strong percentage are cloud computing-focused or have their hand somewhere in the cloud cookie jar, whether they're the old guard like Microsoft (NSDQ:MSFT) and HP (NYSE:HPQ) bulking up their traditional offerings to target the cloud or young startups like Vembu or Netlist.

The cloud has been evident in the keynote presentations as well, with marquee speakers like Red Hat CEO Jim Whitehurst and Cisco (NSDQ:CSCO) vice president Ben Gibson highlighting the cloud's force on the IT landscape and how the current model is changing.

And the vendors are taking that to heart. Microsoft made its cloud muscle and its "all in" mantra inescapable on the show floor showcasing its Windows Azure cloud platform and its BPOS suite of cloud productivity applications.

HP, too, put the cloud front and center at Interop New York. HP's mission at Interop New York is to show IT pros how to "future-proof" their data centers, their businesses and their careers. To aid in that, HP showcased its converged network strategy that blends server, storage and networking while offering Interop New York attendees a glimpse at its ProLiant servers, BladeSystem Matrix, storage and networking gear like the A12500 core data center switch.

Next: Cloud Infrastructure, Security And Storage

Read more: page 2 of the original article.


 Josh Twist posted his Cloud Artwork to the UK Premier Support for Developers blog on 10/19/2010:

Today I had a really enjoyable time presenting at the Microsoft Online Cloud Conference. A number of attendees have been in touch to ask for the slides. Confused as to why they were so popular, I asked; they were mostly just pictures and not text, and people seemed to like the images.

This was very flattering as I’d spent hours and hours hand-drawing and scanning in these images the night before to create what I hoped was a visually appealing presentation (primarily because the demos were going to be visually very unappealing!).

Here’s a few examples:

[Hand-drawn cloud artwork: Web Role, Worker Role, Blob Storage, SQL Azure]

I’ve created a zip containing all the original images. Feel free to use them in your own presentations and diagrams. However, all I’d ask for in return is a mention or maybe even a link to this blog post: http://www.thejoyofcode.com/Cloud_Artwork.aspx. Thanks!

Downloads

To everybody who attended – thanks, I hope you enjoyed it!


The Florida.NET user group will present West Palm Dev: All About Azure - Scott Klein, Microsoft MVP - Blue Syntax Consulting - at CompTec West Palm Beach on 10/26/2010 at 6:30 PM:

image Join us as Scott Klein, Microsoft MVP and author of Pro SQL Azure (Apress), shares with us the latest and greatest about the Microsoft Azure Platform and demonstrates the value it brings in today's competitive economy.

We will be enjoying Free Pizza at the end of our event sponsored by Adroit Technology Enterprises, LLC, as well as ample networking time.

At the end of our event we will have our free raffle:

  • Scott will be bringing some copies of his book to raffle at the end of our meeting.
  • One free Telerik Premium Collection license valued at $1900
  • One free Infragistics license valued at $2599
  • One free license of a GrapeCity product
  • Shirts courtesy of our friends at the Southwest .NET User Group
  • Many more items: pens, mouse pads, etc.

We will also be distributing a free copy of the latest DevProConnections Magazine as our group is back on their magazine distribution list.

We look forward to seeing you there!


Sift Media announced that their Business Cloud Summit 2010 will take place on 11/30/2010 at the Novotel London West, Hammersmith, UK:

The Business Cloud Summit promises to be the UK’s premier Cloud event of 2010.

Why attend?

  • Dedicated content streams for public and private sector technology professionals
  • Drilling into Cloud Computing for central and local government, the NHS, education and the third sector
  • Exploring specific line-of-business issues covering HR, CRM, IT and Finance


What will you gain?

  • Unique, original, top-level industry insight and end-user experience, with a strict 'no PowerPoint' rule
  • Traditional conference-expo replaced with a hi-tech, real-time ‘networking arena’
  • An unparalleled glimpse into what the future of Cloud Computing means for your organisation
  • The confidence to make well-informed IT strategy decisions

Who should attend?

  • Infrastructure providers, buyers, end-users, influencers and decision makers gathering from around the Cloud industry
  • An opportunity for CIOs, CEOs and CFOs to explore the future of the Cloud and what it means for their organisation
  • From HR and Finance Directors and Managers, to Industry Analysts and Infrastructure Directors, anyone with a stake in The Cloud will benefit from attending The Business Cloud Summit 2010

According to IDC, 2009 was the year that Cloud Computing was ‘seeded’. Now in 2010, end users are embracing the cost and productivity benefits of the model with enthusiasm. At a time when the world is still emerging carefully from the worst economic downturn in living memory, CIOs, CEOs and CFOs in organisations across every business sector are taking advantage of the lower start-up costs and reduced total cost of ownership of Cloud Computing, delivering ROI of over 1000% in some cases.


David Pallman announced an Upcoming Webcast: Cloud Computing Assessments, Part 1 scheduled for 10/21/2010 at 10:00 AM PDT:

image Tomorrow (10/21) at 10 AM Pacific time I'll be giving the first in a series of webcasts on Microsoft Cloud Computing Assessments. Register here. These webcasts go hand in hand with my cloud computing assessment article series.

Microsoft Cloud Computing Assessments: The Right Way to Evaluate and Adopt Cloud Computing
Event Code: 151050
10/21/2010
10:00 AM - 11:00 AM
Welcome Time: 09:55 AM
Time Zone: Pacific
Event Language: Not Specified

Connection information for this Webcast will be sent in your event confirmation.
Registration: https://www.clicktoattend.com/invitation.aspx?code=151050


image

Featured Product/Topic: Windows Azure platform
Recommended Audiences: Technology Executives, IT Managers, IT Professionals, Business Executives, CIO, CTO, IT Directors, Business Decision Maker, Technical Decision Makers, Developers

Cloud computing offers so much promise, but it is new and confusing to many. You may be wondering whether cloud computing is right for your business, what the financial return might be, and how to go about getting started with it without making a mistake. In this webcast you'll be introduced to an assessment process for Microsoft cloud computing that puts the value proposition of the cloud into sharp focus for your business. You'll see how an assessment sheds light on risk vs. reward, identifies promising opportunities, analyzes applications financially and technically, and helps you figure out your cloud computing strategy. With the clarity and plan that comes out of an assessment you will be able to evaluate and adopt cloud computing responsibly, maximizing the benefits while managing risk.


Eric Nelson (@EricNel) posted his Slides and Links for Windows Azure Platform session at Software Architect 2010 on 10/20/2010:

image Today (20th Oct 2010) I delivered a 90-minute session for architects on the Windows Azure Platform.

Are you an ISV?

ISV = Independent Software Vendor - that is, you write some kind of product that you sell to more than one customer. My new team is all about helping ISVs, and we have a team blog and a brand new Twitter account which I will increasingly be found on. If you are an ISV, please fave the blog and follow the Twitter account. And if you are an ISV, please keep an eye on (and sign up to) http://bit.ly/ukmprhome, especially if you are interested in the Windows Azure Platform.

image

FREE access to the Windows Azure Platform

If you are looking for the cheapest way to explore Azure then check out the free Introductory Special – not many compute hours per month but you do get a SQL Azure database free for three months. (A while back I did a walkthrough for this offer and for the even better MSDN subscriber offer)

Slides

10 things every architect should know about the Windows Azure Platform - ericnel

View more presentations from Eric Nelson.

Links


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Amazon Web Services announced a new AWS Free Usage Tier on 10/21/2010:

image To help new AWS customers get started in the cloud, AWS is introducing a new free usage tier. Beginning November 1, new AWS customers will be able to run a free Amazon EC2 Micro Instance for a year, while also leveraging a new free usage tier for Amazon S3, Amazon Elastic Block Store, Amazon Elastic Load Balancing, and AWS data transfer. AWS’s free usage tier can be used for anything you want to run in the cloud: launch new applications, test existing applications in the cloud, or simply gain hands-on experience with AWS.

Below are the highlights of AWS’s new free usage tiers. All are available for one year (except Amazon SimpleDB, SQS, and SNS which are free indefinitely):

Sign Up Now

AWS’s free usage tier starts November 1, 2010. A valid credit card is required to sign up. See offer terms.

AWS Free Usage Tier (Per Month):

  • 750 hours of Amazon EC2 Linux Micro Instance usage (613 MB of memory and 32-bit and 64-bit platform support) – enough hours to run continuously each month*
  • 750 hours of an Elastic Load Balancer plus 15 GB data processing*
  • 10 GB of Amazon Elastic Block Storage, plus 1 million I/Os, 1 GB of snapshot storage, 10,000 snapshot Get Requests and 1,000 snapshot Put Requests*
  • 5 GB of Amazon S3 storage, 20,000 Get Requests, and 2,000 Put Requests*
  • 30 GB of internet data transfer (15 GB of data transfer “in” and 15 GB of data transfer “out” across all services except Amazon CloudFront)*
  • 25 Amazon SimpleDB Machine Hours and 1 GB of Storage**
  • 100,000 Requests of Amazon Simple Queue Service**
  • 100,000 Requests, 100,000 HTTP notifications and 1,000 email notifications for Amazon Simple Notification Service**

In addition to these services, the AWS Management Console is available at no charge to help you build and manage your application on AWS.

* These free tiers are only available to new AWS customers and are available for 12 months following your AWS sign-up date. When your free usage expires or if your application use exceeds the free usage tiers, you simply pay standard, pay-as-you-go service rates (see each service page for full pricing details). Restrictions apply; see offer terms for more details.

** These free tiers do not expire after 12 months and are available to both existing and new AWS customers indefinitely.

The new AWS free usage tier applies to participating services across all AWS regions: US – N. Virginia, US – N. California, EU – Ireland, and APAC – Singapore. Your free usage is calculated each month across all regions and automatically applied to your bill – free usage does not accumulate.

I hope the Windows Azure team is listening.


Alex Williams reported OpenStack: An Open Cloud Initiative Makes its 1st Release in a 10/21/2010 article for the ReadWriteCloud:

It's official. OpenStack has made its first release. It's a major moment for the nascent open cloud initiative, a service that combines the Rackspace object storage capabilities with NASA's Nebula, the open computing effort from the U.S. federal space agency.

image It feels like the start of something, doesn't it? Just writing "U.S. federal space agency" gives us a sense of what makes this exciting for the cloud computing movement. We are on the edge of the great beyond in many ways. The compute power of open networks is just beginning to be understood. How OpenStack fares has the potential to play a part in defining how cloud computing will evolve in the years ahead. If successful, the opportunity is huge with the potential of opening the world of storage and big data to a wide-ranging spectrum of communities from the commercial, nonprofit and government sectors.

OpenStack is split into two projects: OpenStack Object Storage and OpenStack Compute.

OpenStack Object Storage

This is the storage environment that Rackspace turned over to OpenStack. Rackspace released the code in July. It consists of a network of commoditized servers that operate in clusters for redundant, large-scale storage of static objects written to multiple hardware devices. This means that if a node goes down, another can take its place.

Dell community evangelist Barton George did this interview with project lead Will Reese:

From the OpenStack site:

"Because OpenStack uses software logic to ensure data replication and distribution across different devices, inexpensive commodity hard drives and servers can be used in lieu of more expensive equipment. "

According to Adrian Otto, some use cases for OpenStack Object Storage include:

  • Storing media libraries (photos, music, videos, etc.)
  • Archiving video surveillance files
  • Archiving phone call audio recordings
  • Archiving compressed log files
  • Archiving backups (
  • Storing and loading of OS Images, etc.
  • Storing file populations that grow continuously on a practically infinite basis.
  • Storing small files (
  • Storing billions of files.
  • Storing Petabytes (millions of Gigabytes) of data.
OpenStack Compute

According to the OpenStack web site, "OpenStack Compute is software for provisioning and managing large-scale deployments of compute instances." In other words, it is the platform for heavy duty processing of big data. Remember, NASA uses this technology to explore space.

Again, we'll reference Barton's blog and the interview he did with Rick Clark, who is the project lead and chief architect, and a "former engineering manager at Canonical for Ubuntu server and security as well as lead on their virtualization for their cloud efforts."

OpenStack API

Also worth noting is the OpenStack API, known internally as the "artist formerly known as the Rackspace API."

The API provides the access to the object storage and the underlying platform. According to Jim Curry: "It will also include additional functionality such as role-based access controls and additional networking actions. This API will be the official OpenStack API and it will evolve with the platform and needs of the community."

What that means is the API will be tied to the OpenStack road map. But to support the widest possible community, it will also work with Amazon's APIs:

"The EC2 compatible API, already in the code base today, will remain and be maintained; however, it is important for the project to have an official API that is tied directly to the OpenStack roadmap and feature set. We want to ensure that future OpenStack innovation can be driven by the community and not be restricted to the functionality of outside cloud APIs. The sub-projects are built in a way that will allow multiple APIs to be supported, so if there is an existing API that is really important to you (or one that comes along in the future), it is possible for you to add in support for that as well."

In that vein, OpenStack will be hypervisor agnostic, supporting XenServer, KVM and UML.

What's Next

OpenStack will be on a three-month development cycle. What we expect to see come out of this is the continued rise of cloud management services.

We have moved past virtualization density and entered a new phase that is focused primarily on automation. In an open cloud environment, that's pretty important.

If open clouds do proliferate then the automation will be a core component. It means a move away from manually coded configurations that only a few wizards understand. And that's a really good thing.

Alex posted VMware and Google Launch 1st Series of Development Tools on the same day:

VMware and Google announced they will launch a series of tools to make it easier for enterprise developers to build, manage and deploy applications in the cloud.

The stated goal of the partnership is to offer developers a set of tools that can be used on any platform with any device.

imageIt's another sign that the market is preparing for the next generation of cloud computing that demands a greater need for automation and faster development cycles.

image The collaborative projects that will be available in the next two weeks include Spring Roo and Google Web Toolkit, Spring Insight and Google Speed Tracer, SpringSource Tool Suite and Google Plugin for Eclipse.

The next collaboration projects will focus on mobile application support and accessing data in the cloud. In that context, VMware now has the ability to deploy a SQL-based Spring application on Google App Engine for Business.

imageGoogle App Engine has grown significantly over the past several months since the integration with the Spring platform. The development is largely coming from within the enterprise. Developers are using the platform to launch Java-based apps onto the network and as SaaS tools.

Here's a summary of the tools now available through the partnership:

Spring Roo and Google Web Toolkit: Spring Roo is an application development tool. It now works with the Google Web Toolkit (GWT) for developers to build browser apps in enterprise production environments. These GWT powered applications leverage modern browser technologies such as AJAX and HTML5 for use on both desktops and mobile browsers.

Spring Insight and Google Speed Tracer: Google's Speed Tracer with VMware's Spring Insight performance tracing technology enable end to end performance visibility into cloud applications. This integration provides a view into the web application performance, improving the end-user experience by optimizing the client side as well as the server side.

SpringSource Tool Suite and Google Plugin for Eclipse: The integration of SpringSource Tool Suite version 2.5 and Google Plugin for Eclipse allows developers to build and maintain large-scale, web-based enterprise applications. It puts tools that were previously only available when building desktop and server solutions in the hands of those building web apps.

There is a lot more to this story. It demonstrates the need for tools and services to quickly launch apps into the enterprise. End-to-end production environments are hot these days. Atlassian is a company that is reaching into this space, as are several others in the IT Services space.


Derrick Harris asked Will Scalable Data Stores Make NoSQL a Non-Starter? in a 10/17/2010 to the GigaOm Structure blog:

image The discussion around NoSQL seems to have evolved from abolishing SQL databases to coexisting with SQL databases, and then to SQL is actually regaining momentum. Is SQL winning back favor, even among webscale types? Was it ever out of favor?

We saw evidence of this momentum shift back to SQL-based databases this week, with Facebook’s Jonathan Heiliger signing onto the advisory board of clustered SQL startup Clustrix. Facebook famously invented the NoSQL Cassandra database but still relies on the venerable MySQL-plus-memcached combination for the brunt of its critical operations. Additionally, Xeround now offers a scalable MySQL database on Amazon EC2, and database guru Michael Stonebraker recently launched his latest SQL-based startup, VoltDB. Will a scalable SQL option always win out against a NoSQL option? Even for unstructured data?

Once we’re no longer talking about serving data, but rather just about storing large volumes of it, NoSQL can seem nearly obsolete. For organizations willing to pay for data warehousing and analysis tools, the options are limitless: massively parallel software, data warehouse appliances, distributed file systems, and the list goes on. Pick your poison. Have lots of unstructured data to analyze and don’t want to pay for software? Try Hadoop. Plus, it might very well work with your existing data management software.

None of this is to say that NoSQL databases aren’t quality options. They actually vary greatly in terms of ideal uses, and some are gaining quite a bit of popularity. Aside from Membase, projects like Cassandra, CouchDB, MongoDB and Riak are maturing fast and gaining in popularity. But they’ve also been the cause of some noteworthy outages as of late. Perhaps these are just growing pains, but try telling that to most CIOs.

It’s a case of familiar versus unfamiliar, and the voices backing a better version of the status quo are getting louder. It will be tough, but not impossible, for NoSQL to be heard.

Read the full post here.


William Vambenepe (@vambenepe) pans Google App Engine’s management features in his Lifting the curtain on PaaS Cloud infrastructure (can you handle the truth?) post of 10/19/2010:

image The promise of PaaS is that application owners don’t need to worry about the infrastructure that powers the application. They just provide application artifacts (e.g. WAR files) and everything else is taken care of. Backups. Scaling. Infrastructure patching. Network configuration. Geographic distribution. Etc. All these headaches are gone. Just pick from a menu of quality of service options (and the corresponding price list). Make your choice and forget about it.

In theory.

image In practice no abstraction is leak-proof and the abstractions provided by PaaS environments are even more porous than average. The first goal of PaaS providers should be to shore them up, in order to deliver on the PaaS value proposition of simplification. But at some point you also have to acknowledge that there are some irreducible leaks and take pragmatic steps to help application administrators deal with them. The worst thing you can do is have application owners suffer from a leaky abstraction and refuse to even acknowledge it because it breaks your nice mental model.

Google App Engine (GAE) gives us a nice and simple example. When you first deploy an application on GAE, it is deployed as just one instance. As traffic increases, a second instance comes up to handle the load. Then a third. If traffic decreases, one instance may disappear. Or one of them may just go away for no reason (that you’re aware of).

It would be nice if you could deploy your application on what looks like a single, infinitely scalable, machine and not ever have to worry about horizontal scale-out. But that’s just not possible (at a reasonable cost) so Google doesn’t try particularly hard to hide the fact that many instances can be involved. You can choose to ignore that fact and your application will still work. But you’ll notice that some requests take a lot more time to complete than others (which is typically the case for the first request to hit a new instance). And some requests will find an empty local cache even though your application has had uninterrupted traffic. If you choose to live with the “one infinitely scalable machine” simplification, these are inexplicable and unpredictable events.
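
A tiny, generic illustration (not GAE-specific code) of why that local cache goes empty: an in-process cache like the one below is scoped to a single instance, so every new instance the platform spins up starts cold and its first requests fall through to the slower shared store. The shared-store delegate is a stand-in for whatever memcache or datastore call the application would really make.

using System;
using System.Collections.Concurrent;

public static class PerInstanceCache
{
    // Lives only as long as this particular instance does; a second instance
    // gets its own, empty dictionary.
    private static readonly ConcurrentDictionary<string, string> Local =
        new ConcurrentDictionary<string, string>();

    public static string Get(string key, Func<string, string> loadFromSharedStore)
    {
        // On a freshly started instance every lookup misses here and falls
        // through to the shared store, which is why the first requests routed
        // to a new instance look inexplicably slow.
        return Local.GetOrAdd(key, loadFromSharedStore);
    }
}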

Last week, as part of the release of the GAE SDK 1.3.8, Google went one step further in acknowledging that several instances can serve your application, and helping you deal with it. They now give you a console (pictured below) which shows the instances currently serving your application.

I am very glad that they added this console, because it clearly puts on the table the question of how much your PaaS provider should open the kimono. What’s the right amount of visibility, somewhere between “one infinitely scalable computer” and giving you fan speeds and CPU temperature?

I don’t know what the answer is, but unfortunately I am pretty sure this console is not it. It is supposed to be useful “in debugging your application and also understanding its performance characteristics“. Hmm, how so exactly? Not only is this console very simple, it’s almost useless. Let me enumerate the ways.

Misleading

Actually it’s worse than useless, it’s misleading. As we can see on the screen shot, two of the instances saw no traffic during the collection period (which, BTW, we don’t know the length of), while the third one did all the work. At the top, we see an “average latency” value. Averaging latency across instances is meaningless if you don’t weight it properly. In this case, all the requests went to the instance that had an average latency of 1709ms, but apparently the overall average latency of the application is 569.7ms (yes, that’s 1709/3). Swell.
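
A quick worked example of the weighting problem (the request counts below are hypothetical, since the console doesn't show them, which is itself part of the problem):

using System;
using System.Linq;

class LatencyAverages
{
    static void Main()
    {
        double[] latencyMs = { 1709, 0, 0 };  // per-instance average latency
        long[] requests    = { 500, 0, 0 };   // hypothetical request counts

        // Naive per-instance average, as the console appears to compute it:
        double naive = latencyMs.Average();   // (1709 + 0 + 0) / 3 = 569.7 ms

        // Request-weighted average, which is what users actually experienced:
        double weighted = latencyMs.Zip(requests, (l, n) => l * n).Sum()
                          / (double)requests.Sum();  // 1709 ms

        Console.WriteLine("naive = {0:F1} ms, weighted = {1:F1} ms", naive, weighted);
    }
}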

No instance identification

What happens when the console is refreshed? Maybe there will only be two instances. How do I know which one went away? Or say there are still three, how do I know these are the same three? For all I know it could be one old instance and two new ones. The single most important data point (from the application administrator’s perspective) is when a new instance comes up. I have no way, in this UI, to know reliably when that happens: no instance identification, no indication of the age of an instance.

Average memory

So we get the average memory per instance. What are we supposed to do with that information? What’s a good number, what’s a bad number? How much memory is available? Is my app memory-bound, CPU-bound or IO-bound on this instance?

Configuration management

As I have described before, change and configuration management in a PaaS setting is a thorny problem. This console doesn’t tackle it. Nowhere does it say which version of the GAE platform each instance is running. Google announces GAE SDK releases (the bits you download), but these releases are mostly made of new platform features, so they imply a corresponding update to Google’s servers. That can’t happen instantly, there must be some kind of roll-out (whether the instances can be hot-patched or need to be recycled). Which means that the instances of my application are transitioned from one platform version to another (and presumably that at a given point in time all the instances of my application may not be using the same platform version). Maybe that’s the source of my problem. Wouldn’t it be nice if I knew which platform version an instance runs? Wouldn’t it be nice if my log files included that? Wouldn’t it be nice if I could request an app to run on a specific platform version for debugging purpose? Sure, in theory all the upgrades are backward-compatible, so it “shouldn’t matter”. But as explained above, “the worst thing you can do is have application owners suffer from a leaky abstraction and refuse to even acknowledge it“.

OK, so the instance monitoring console Google just rolled out is seriously lacking. As is too often the case with IT monitoring systems, it reports what is convenient to collect, not what is useful. I’m sure they’ll fix it over time. What this console does well (and really the main point of this blog) is illustrate the challenge of how much information about the underlying infrastructure should be surfaced.

Surface too little and you leave application administrators powerless. Surface more data but no control and you’ll leave them frustrated. Surface some controls (e.g. a way to configure the scaling out strategy) and you’ve taken away some of the PaaS simplicity and also added constraints to your infrastructure management strategy, making it potentially less efficient. If you go down that route, you can end up with the other flavor of PaaS, the IaaS-based PaaS in which you have an automated way to create a deployment but what you hand back to the application administrator is a set of VMs to manage.

That IaaS-centric PaaS is a well-understood beast, to which many existing tools and management practices can be applied. The “pure PaaS” approach pioneered by GAE is much more of a terra incognita from a management perspective. I don’t know, for example, whether exposing the platform version of each instance, as described above, is a good idea. How leaky is the “platform upgrades are always backward-compatible” assumption? Google, and others, are experimenting with the right abstraction level, APIs, tools, and processes to expose to application administrators. That’s how we’ll find out.

Related posts:

  1. The PaaS Lament: In the Cloud, application administrators should administrate applications
  2. Cloud platform patching conundrum: PaaS has it much worse than IaaS and SaaS
  3. The necessity of PaaS: Will Microsoft be the Singapore of Cloud Computing?
  4. PaaS portability challenges and the VMforce example
  5. Desirable technical characteristics of PaaS
  6. Is Business Process Execution the killer app for PaaS?


Carl Brooks (@eekygeeky) reported NASDAQ puts market data in the [Amazon] cloud in a 10/20/2010 article for TechTarget’s SearchCloudComputing.com blog:

image The NASDAQ stock exchange is taking its vast store of public market data to the everyman by making it available as a cloud computing service. It's also proving out another interesting use for cloud storage in the way it has put Amazon's Simple Storage Service (S3) to work.

NASDAQ collects and sorts hundreds of thousands of pieces of information about the stocks it trades every day for its Market Replay service, an online viewing tool for researchers and analysts.

image "We have every single quote that comes out of the NASDAQ securities information processor and the SIAC processor for the last three years," said Jeff Kimsey, associate vice president for product management at NASDAQ OMX. The data is Level I stock information, a count of what stock was traded and for how much, and the new service was a whole new way to think about accessing and using that data.

The NASDAQ Data-On-Demand service will allow users to connect programmatically to NASDAQ's massive data store instead of having to either look at it online or download giant chunks of raw data and sort it themselves. An application programming interface (API) will let advanced users (financial analysts are known to tinker with computing) run queries against the data directly, so existing analytical tools can simply tap into it directly.

A customer could get a specific targeted piece of information for researching compliance, training or their own historical information, Kimsey said. It will give algorithmic traders the ability to "back-test" new methods of gaming the stock market by running their automated trading schemes against years of real performance data, something that will be a boon to those who study the market in detail.

"If you're creating a historical application, all you really need to do is flip a switch," he said.

NASDAQ delivers this data as a cloud service, but that's possible because it's already got that data on S3. The technical challenge lies in making sense of the flood of information that comes in every day.

Flat files speed transaction handling
imageSome heavily-traded stocks might have many megabytes worth of transactions recorded in a day, while others will have much less. In order to break the data into manageable, easily catalogued chunks, it is stored in 10-minute increments in flat files on S3. Search and query start from there. This was the most efficient way to store the data quickly and cheaply, Kimsey said.

"Our application was really designed to get to a specific instance in time," he said, "and because we wanted to make the application really quick to get to, we store it in 10-minute [intervals] -- that allows for quick, easy access to data that's meaningful."

"That was the nature of the challenge: we're dealing with terabytes of data in each database, how we do that without creating a million-dollar database?" Kimsey asked.

Since NASDAQ has so much historical data, the costs to store and use the data on Amazon Web Services (AWS) can add up to hundreds or possibly thousands of dollars per month; spare change from an enterprise perspective on operating costs. Kimsey said the service will be available in a few weeks. At that time, users will be able to sign up and get started without assistance, or they can work with NASDAQ to get greater access.
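
The 10-minute flat-file scheme is the interesting design decision here: truncate each quote's timestamp to the start of its 10-minute window and you get a predictable object key that a client can compute and fetch directly, with no database lookup in the middle. NASDAQ hasn't published its actual bucket or key layout, so the C# sketch below is purely hypothetical, but it shows the bucketing idea:

using System;

public static class MarketReplayKeys
{
    // Returns a flat-file key for the 10-minute window containing quoteTimeUtc.
    public static string KeyFor(string symbol, DateTime quoteTimeUtc)
    {
        // Truncate the timestamp to the start of its 10-minute window.
        var windowStart = new DateTime(
            quoteTimeUtc.Year, quoteTimeUtc.Month, quoteTimeUtc.Day,
            quoteTimeUtc.Hour, (quoteTimeUtc.Minute / 10) * 10, 0, DateTimeKind.Utc);

        // e.g. "MSFT/2010-10-20/14-30.csv": one flat file per symbol per window.
        return string.Format("{0}/{1:yyyy-MM-dd}/{2:HH-mm}.csv",
                             symbol, windowStart, windowStart);
    }
}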

NASDAQ worked with Xignite for the new service, which specializes in on-demand data products for the finance world. Chas Cooper, marketing director at Xignite, said the financial firms and information processors like the NASDAQ are starting to feel the pressure to get with the times.

"This year, we've seen [cloud computing] pick up in a big way. We're kind of a bellwether for financial services in the cloud," he said.

Cooper said Xignite delivered a Web services framework on top of the data that NASDAQ collected. The company had constructed a number of actions that could be performed against the data, and Cooper added that it was relatively straightforward to make more if a user had new ideas for queries. "We provide Web services at the API layer where you would plug in the data access layer," he said.

Cooper thinks this is part of a long-term trend; in the financial sector, where so much important information is publicly available and highly regulated, something he calls the "market data cloud" will eventually be fairly standard.

NASDAQ's Jeff Kimsey agrees. He's watching his business go from processing information to distributing it.

"Having less and less data stored in different spaces [and more in central repositories] is where we're going," he said.

An OData-formatted JSON service would be more interesting to many developers. Using the AtomPub protocol probably would accrue too much overhead.


Jeff Barr asserted Amazon Elastic MapReduce - Now Even Stretchier! in a 10/20/2010 post to the Amazon Web Services blog:

image Our customers have used Amazon Elastic MapReduce to process very large-scale data sets using an array of Amazon EC2 instances. One such customer, Seattle's Razorfish, was able to side-step the need for a capital investment of over $500K while also speeding up their daily processing cycle (read more in our Razorfish case study).

image Our implementation makes it easy for you to create and run a complex multi-stage job flow composed of multiple job steps. Until now, the same number of EC2 instances (known as slave nodes in Hadoop terminology) would be used for each step in the flow. Going forward, you now have more control over the number of instances in the job flow:

  • You can add nodes to a running job flow to speed it up. This is similar to throwing more logs on a fire or calling down to the engine room with a request for "more power!" Of course, you can also remove nodes from a running job.
  • A special "resize" step can be used to change the number of nodes between steps in a flow. This allows you to tune your overall job to make sure that it runs as quickly and as cost-efficiently as possible.
  • As a really nice side effect of being able to add nodes to a running job, Elastic MapReduce will now automatically provision new slave nodes in the event that an existing one fails.

You can initiate these changes using the Elastic MapReduce APIs, the command line tools, or the AWS SDK for Java. You can also monitor the overall size and status of each job from the AWS Management Console.

We've got a number of other enhancements to Elastic MapReduce in the works, so stay tuned to this blog.


<Return to section navigation list> 
