Sunday, April 24, 2011

Windows Azure and Cloud Computing Posts for 4/19/2011+

image2 A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

image4    

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

The Windows Azure Team reviewed third-party secure storage services for Windows Azure in Storing Encrypted Data in Windows Azure of 4/22/2011:

imageAs companies weigh the benefits of moving their data, processes, and systems to the cloud-from reduced infrastructure costs to rapid, success-based scaling-security is a perennial concern. Executives want to know: How will we protect our company's most sensitive data against an ever-expanding array of threats?

To mitigate risk, some companies are turning to the cloud mainly as a way to access cost-efficient data storage. Several independent software vendors (ISVs) have recognized the demand for solutions that enable businesses to use their on-premises environment to operate on their most sensitive data, while capitalizing on the elasticity of cloud resources to store encrypted data. Building this kind of solution requires a cloud technology platform that incorporates strong data encryption and decryption capabilities.

As an application development and hosting platform, Windows Azure offers rich security functionality, including deep support for standardized encryption protocols. Developers can use the Cryptographic Service Providers (CSPs) built into the Microsoft .NET Framework to access Advanced Encryption Standard (AES) algorithms, along with Secure Hash Algorithm (SHA-2) functionality to handle such tasks as validating digital signatures. Moreover, the Windows Azure platform builds on the straightforward key management methods incorporated into the .NET security model, so developers can retain custom encryption keys within the Windows Azure storage services.

ISV Success Stories

Thoroughly addressing security needs while pursuing rapid time-to-market, lower operating costs, and scalability improvements is no small challenge. However, an increasing number of technology providers have used the Windows Azure SDK to build solutions that help customers encrypt their most sensitive data for cost-efficient, scalable storage in the cloud.

Cloud-Based Storage Without Compromising Control

The amount of content that companies are adding to their shared files drives, Microsoft SharePoint Server and Microsoft Exchange Server databases, and virtual machine libraries is exploding, making it increasingly difficult to achieve cost-efficient data protection. In managing all of this information, one important consideration is that not all content is created equal. The working set of content that users interact with most frequently should be prioritized. To balance performance, data protection, disaster recovery, cost, and capital and operating expenditures, it is critical to manage this working set of content differently from other data.

Administrators and users both want the speed of on-premises, high-speed disks; instant data accessibility from multiple geographies; and the elasticity that cloud-based infrastructure offers. At the same time, they want the safety afforded by traditional on-premises tape backup systems. To improve application performance, many companies turn to such costly solutions as adding on-premises servers. The introduction of Windows Azure has provided a new alternative.

StorSimple is a Santa Clara, California-based independent software vendor that focuses on solving storage-related issues-such as performance, scalability, manageability, security, and cost-for business-critical applications. Specifically, StorSimple concentrates on solving the storage issues that companies experience with high-growth applications.

Leaders at StorSimple saw an opportunity to offer a hybrid solution that integrates customers' content-intensive applications with on-demand, cloud-based storage. In addition to helping customers more effectively scale capacity and better manage data storage costs, they recognized the critical importance of providing rigorous data protection.

To meet a growing market need for highly secure, scalable, and cost-effective storage, the company developed its flagship StorSimple solution. StorSimple lets customers seamlessly connect their on-premises infrastructure with enterprise cloud storage through service platforms such as Windows Azure. To address performance issues related to linking cloud-based and on-premises data storage, StorSimple transparently tiers data across solid-state drives (SSD) and Serial Attached SCSI (SAS) drives, and Windows Azure by using a unique BlockRank algorithm, enabling both SSD performance and cloud elasticity. The solution deduplicates data, a data compression process that eliminates redundant data segments and minimizes the amount of storage space that an application consumes.

In addition to optimizing storage space for data-intensive applications, StorSimple uses Cloud Clones-a patented StorSimple technology-to persistently archive copies of application storage volumes in the cloud. Cloud Clone technology automatically stores a deduplicated snapshot of data volumes to the cloud. "By using StorSimple and cloud storage services, customers get critical data protection while eliminating the use of tapes for off-site backup," says Guru Pangal, StorSimple Cofounder and President.

Data is protected at multiple levels. Each file is broken into blocks; deduplication occurs at the block so that only changed blocks are stored. Each block that is sent to the cloud is encrypted with military-grade AES 256-bit encryption and compressed-and the private key is stored at the client premises, not in the cloud. "The Cloud Clones volumes can be directly mounted, reducing recovery time from days to minutes when compared to tape," says Pangal. "The cloud means that everyone can now afford a highly secure, redundant data center for disaster recovery. The only difference is you only pay for it when you need it."

With StorSimple and Windows Azure, customers maintain control of their data and can take advantage of public cloud services with security-enhanced data connections. StorSimple uses Windows Azure AppFabric Access Control to provide rule-based authorization to validate requests and connect the on-premises applications to the cloud.

By using StorSimple with Windows Azure, customers also gain the reliability of Microsoft data centers, which are ISO 27001:2005 accredited with Statement on Auditing Standards No. 70 (SAS 70) Type I and Type II attestations. "We understand the questions that customers have about data security in the cloud, but with StorSimple and Windows Azure, we offer customers a solution that helps them rest easy," says Pangal.

Read the full StorSimple case study at: www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000008345

Security-Enhanced Data Backup and Recovery

Based in Seattle, Washington, Datacastle is an independent software vendor that helps customers protect critical data on endpoint assets and mobile devices. The company's flagship product, Datacastle RED, promotes "resilient endpoint data"-a security-enhanced cloud service that helps organizations back up and recover information from client-side devices, such as desktop and portable computers and tablet PCs, so that if a computer or device is lost or damaged, employees can get back to business quickly.

Datacastle needed a solution that would continue its tradition of helping keep customer data secure. The company prides itself on knowing that, with Datacastle RED, companies can store high-visibility, high-impact data both on-premises and in the cloud with high levels of security. Datacastle wanted to ensure that any cloud provider it worked with would support, if not further enhance, its security-enhanced data backup solution. So, in early 2010, the company decided to convert its Datacastle RED solution for deployment to Windows Azure.

When a customer uses the solution, data is first encrypted before it is sent to an on-premises server or to Windows Azure. Core elements of a file are broken down into blocks-or "Data DNA"-and sequenced in a specific order and indexed. The data is queried against existing data that has been backed up to ensure that data is not unnecessarily duplicated. Datacastle RED assigns each device a cryptographically generated random key and also assigns every Data DNA block a unique key as well.

"By the time data reaches Windows Azure, it is encrypted with multiple keys, which are not available on the back end," says Gary Sumner, CTO for Datacastle. Although there are millions of encryption keys generated for a typical customer deployment, the customer's IT department only has to manage one key, which is generated during setup.

In addition to the strong encryption policies with keys for every device and every block of data within a single file, which can only be retrieved by customers, all of the encrypted data and metadata is stored in Windows Azure at Microsoft data centers, which are ISO 27001:2005 accredited with SAS 70 Type I and Type II attestations.

Datacastle currently uses Microsoft SQL Azure to store user accounts and logon information for the Datacastle RED management dashboard. In the future, the company plans to implement Windows Azure AppFabric Access Control, which will provide federated, claims-based access control for the web services the solution uses to communicate between the client-side installation on devices and the server-side installation in Windows Azure, further enhancing data security.

Read the full Datacastle case study at: www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000008339

Ironclad Transaction Security

Zuora is the global leader for on-demand subscription billing and commerce solutions. Its core offerings-Z-Billing, Z-Payments, and Z-Commerce-give companies the ability to launch, run, and operate their recurring revenue businesses without the need for custom-built infrastructure or costly billing systems.

To capitalize on the burgeoning demand for a cloud-based subscription billing solution, Zuora sought to provide Microsoft ISVs and developers with an easy-to-use toolkit. "Our goal was to create the building blocks for a cloud-based solution that ISVs and developers could use to automate subscription billing and usage," says Jeff Yoshimura, Head of Marketing at Zuora. "We wanted to develop it as a drop-in solution so that customers could go live with a customizable, subscription-enabled storefront and start collecting payments in 5 to 10 minutes. And of course, we had to make sure that it met the two biggest requirements for any e-commerce solution: scalability and ironclad transaction security."

In June 2010, the company delivered the Zuora Toolkit for Windows Azure, which provides documentation, APIs, and code samples so developers can quickly automate commerce from within Windows Azure-based applications and connect those applications to the Zuora Z-Commerce for the Cloud subscription service. Zuora ensured that the Zuora Toolkit for Windows Azure meets data protection requirements in compliance with Payment Card Industry (PCI) standards. Throughout its development process, the company adhered to Microsoft Security Development Lifecycle (SDL) guidance regarding the use of cryptographic operations in its Z-Commerce for the Cloud application. The application uses Advanced Encryption Standards (AES) to protect customer data and meets standards for PCI Level 1 compliance.

The Zuora team also took advantage of Windows Azure AppFabric Access Control to enhance the security of its Z-Commerce for the Cloud solution. Windows Azure AppFabric Access Control is an interoperable, claims-based service that provides federated authorization and authentication solutions for any resource, whether in the cloud, behind a firewall, or on a smart device. Because it uses the OAuth Web Resource Authorization Protocol (WRAP) specification-an open protocol supported by global technology vendors-it enabled developers to quickly create a claims-aware application that was highly secure and able to connect to services across multiple platforms. "We didn't want to sacrifice interoperability for security-and vice versa," says Yoshimura. "Because it builds on core Microsoft identity and access management technologies and uses open authentication standards, Windows Azure offers the best of both worlds, so we didn't have to make that tradeoff."

Read the full Zuora case study at: www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000008478

Additional Resources


<Return to section navigation list> 

SQL Azure Database and Reporting

Cihan Biyikoglu described Federation Repartitioning Operations: Database SPLIT in Action in a 4/22/2011 post:

image Repartitioning operations is a large part of the value of federations. The ability to repartition databases enable elasticity in the database tier. With federations online repartitioning operations, database tiers can be made even stretch-ier. Scheduling downtime not required! So apps can expand and contract to engage larger of fewer nodes any time for the database workload just like the middle tier.

imageOne of the repartitioning operations we ship in federation v1 is the SPLIT command. SPLIT is initiated through an ALTER FEDERATION command and specifies the federation distribution key value to use for the operation.

ALTER FEDERATION Orders_FS SPLIT AT(tenant_id=1000)

SPLIT is executed in 2 phases; first phase is executed synchronously and focuses on setting up the operation in motion. The second phase happens asynchronously and typically is where bulk of the time is spent.

  • Phase I:

In the initial phase, SQL Azure sets up the new destination databases that will receive the filtered copies of data. The destination databases are marked with a SPLITTING state in sys.databases.

First phase also initiates the filtered copy operations and sets up the metadata that will report on the progress of the operation in sys.dm_federation_operation* views. The new destination databases are not yet members of the federation so they do not show up in sys.federation system views yet.

At the end of phase I, control returns back from the ALTER FEDERATION statement much like the database copy statement. The app continues to stay online, thus the green dot in the app box. App can continue to execute DML or DDL. Here is what the setup looks like;

image

  • Phase II:

In this phase longer transaction of moving schema and data to the destination databases is executed. Schema of the source database that is being split and all reference tables are cloned to the destination databases. Schema includes all artifacts from user accounts to permissions to triggers, functions and stored procedures and more. Data in the federated tables go through a filtered copy operation at the SPLIT point specified by the command. The copy streams also build the secondary copies required for all SQL Azure databases as part of the operation. The source database continue to receive changes in the meantime, The changes may be DML statements or schema changes or changes to the property of the source database.

Once all changes are copied to the destination databases, a switchover happens that cuts existing connections and transactions in the source federation member and takes it offline. At this moment the app receives errors for ongoing transactions, thus the red dot in the app box. This behavior is similar to failover behavior in SQL Azure due to HW failures or load balancing actions. Apps with retry logic handle and recover without bubbling the error to users of the system.

image

Immediately following that, the destination member state is marked online and start accepting connections. Given all participant are ready and this switchover is simply a metadata operations, all of this happen very quickly in the system.

At the switchover, the source member is taken off of the federation metadata in sys.federations* views and destination databases take over and start showing up in the federation metadata. Source database is also dropped as part of the switchover. Federation operation views (sys.dm_federation_operations*) mark the operation complete and is cleaned up.

image

There may be conditions that causes errors in the process. For example one of the nodes hosting the destination database may fail. SQL Azure continues to retry under these conditions until the operation succeeds. SPLIT can also be run when source member is over its size quota. However waiting until you reach the size quota is not a great strategy given that split may take some time execute. Lets dive into that as well;

Performance Considerations for Split Operation

There are many factors to consider when considering the performance of complex operations such as split. The above picture simplifies the view of SQL Azure databases and hides the 2 additional replicas maintained by SQL Azure for high availability. The replicas exist for both the source and the destination federation members and are built and maintained through the split operation. When considering latency and performance of split it is important to keep in mind that the source members primary copy push data to its own 2 replicas for high availability and filter copy to 6 additional replicas as part of the split operation.

The size of the data that needs to be moved also play a role in the execution time of the split operation. The amount of data that needs to be moved will also be impacted by the rate of change that happen during the duration of the filtered copy at the source database.

Failures is another factor that may effect duration of split. If one of the participating nodes fail over, this cause retries and will also slow down the split operation.

SQL Azure cluster also governs the concurrent copy operations across the nodes and may queue split operation until a channel is acquired for the copy operation.

As always, if you have questions, reach out through the blog.

Cihan’s SQL Azure Federation articles would be much more useful to me if I had a CTP of the feature to test their content.


The SQL Server Team reported SQL Server 2008 R2 SP1 CTP Now Available for Testing on 4/22/2011:

image SQL Server 2008 R2 SP1 CTP contains cumulative updates 1-6 for SQL Server 2008 R2, and fixes to issues that have been reported through our customer feedback platforms.  These include supportability enhancements and issues that have been reported through Windows Error Reporting. The following highlight a few key enhancements in this CTP:

  • Dynamic Management Views for increased supportability
  • ForceSeek for improved querying performance
  • Data-tier Application Component Framework (DAC Fx) for improved database upgrades, on premises and for SQL Azure
  • Disk space control for PowerPivot

Customers running SQL Server 2008 R2 can download and test the SP1 CTP and send feedback to Microsoft for continuous product improvement. We look forward to your feedback!

SQL Azure developers might want to join the CTP to learn what new features and fixes might be in store for SQL Azure when the team releases SP1.


The Unbreakable Cloud blog reported a Horizontally Scalable NewSQL Database: ScaleDB, almost like Oracle RAC using MySQL in a 4/21/2011 post:

image Did we forget to mention there is another set of databases on the rise? They are called as NewSQL databases. They are not NoSQL databases but they are NewSQL, a newly scalable databases with ACID characteristics. These are not “eventually consistent” database as you would see them in NoSQL databases. ScaleDB is one of the scalable NewSQL database based on MySQL. It is almost like Real Application Clusters of Oracle. Its like that using MySQL. It can run on the cloud as you can add or remove database nodes dynamically on the fly to leverage database virtualization. It can run within your corporate data center or on the public cloud. ScaleDB automatically recovers the failed nodes without any interruption to the application.

As you might have guessed, its a shared everything database. According to ScaleDB, it offers: Large data sets Large numbers of concurrent users Large numbers of tables with complex relationships (e.g. using joins, materialized views, etc.) ACID compliant transaction processing Load balancing (e.g. to address temporal shifts in usage patterns) High-availability with smooth fail-over An evolving application with changing data storage requirements Lower Total Cost of Ownership (TCO)

Related: ScaleBase – A Database Load Balancer for Cloud


Andrew Brust (@andrewbrust) posted NoSQL, No Peace on 4/13/2011:

image After several months of research, review and revision, a white paper I wrote for the SQL Azure team, “NoSQL and the Windows Azure Platform”, has been published by Microsoft. If you go to http://www.microsoft.com/windowsazure/whitepapers and do a find within the page for “NoSQL” you’ll see a link for it. If you’d rather download the PDF directly, you can do so by clicking here.  The 25-page (not including cover and TOC) paper provides an introduction to NoSQL database technology, and its major subcategories, for those new to the subject; an examination of NoSQL technologies available in the cloud using Windows Azure and SQL Azure; and a critical discussion of the NoSQL and relational database approaches, including the suitability of each to line-of-business application development.

imageAs I conducted my research for the paper, and read material written by members of the NoSQL community, I found a consistent sentiment toward, and desire for, cleaning the database technology slate.  NoSQL involves returning to a basic data storage and retrieval approach. Many NoSQL databases, including even Microsoft’s Azure Table Storage, are premised on basic key-value storage technology – in effect, the same data structure used in collections, associative arrays and cache products.  I couldn’t help thinking that the recent popularity of NoSQL is symptomatic of a generational and cyclical phenomenon in computing. As product categories (relational databases in this case) mature, products within them load up on features and create a barrier to entry for new, younger developers. The latter group may prefer to start over with a fresh approach to the category, rather than learn the wise old ways of products whose market presence predates their careers – sometimes by a couple of decades.

The new generation may do this even at the risk of regression in functionality.  In the the case of NoSQL databases, that regression may include loss of “ACID” (transactional) guarantees; declarative query (as opposed to imperative inspection of collections of rows); comprehensive tooling; and wide availability of trained and experienced professionals. Existing technologies have evolved in response to the requirements, stress tests, bug reports, and user suggestions accumulated over time.  And sometimes old technologies can even be used in ways equivalent to the new ones.  Two cases in point: the old SQL Server Data Services was a NoSQL store, and its underlying implementation used SQL Server.  Even the developer fabric version of Azure Table Storage is implemented using SQL Server Express Edition’s XML columns.

So if older technologies are proven technologies, and if they can be repurposed to function like some of the newer ones, what causes such discomfort with them?  Is it mere folly of younger developers?  Are older developers building up barriers of vocabulary, APIs and accumulated, sometimes seldom used, features in their products, to keep their club an exclusive one?  In other engineering disciplines, evolution in technology is embraced, built upon, made beneficial to consumers, and contributory to overall progress.  But the computing disciplines maintain a certain folk heroism in rejecting prior progress as misguided. For some reason, we see new implementations of established solutions as elegant and laudable.  And virtually every player in the industry is guilty of this.  I haven’t figured out why this phenomenon exists, but I think it’s bad for the industry. It allows indulgence to masquerade as enlightenment, and it holds the whole field back.

Programming has an artistic element to it; it’s not mere rote science. That’s why many talented practitioners are attracted to the field, and removing that creative aspect of software work would therefore be counter-productive.  But we owe it to our colleagues, and to our customers, to conquer fundamentally new problems, rather than provide so many alternative solutions to the old ones.  There’s plenty of creativity involved in breaking new ground too, and I dare say it brings more benefit to the industry, even to society.  NoSQL is interesting technology and its challenge to established ways of thinking about data does have merit and benefit.  Nevertheless, I hope the next disruptive technology to come along says yes to conquering new territory. At the very least, I hope it doesn’t start with “No.”


<Return to section navigation list> 

MarketPlace DataMarket and OData

Alik Levin offered Windows Azure AppFabric Access Control Service - Visio Diagrams Available For Download in a 4/22/2011 post:

image I thought it will be useful to share with you set of images and diagrams I use for depicting Windows Azure AppFabric Access Control Service scenarios.

Download the diagrams in Visio format.

Web Scenario – Solution

Web Scenario – Solution

WCF Service Scenario - Solution

WCF Service Scenario - Solution

WIF – Authorization

WIF – Authorization

Generic scenario

Generic scenario

Management Service Scenarios

Management Service Scenarios

ACS and SaaS Scenario

ACS and SaaS Scenario

ACS and PaaS Clouds Scenarios

ACS and PaaS Clouds Scenarios

ACS and SaaS Cloud Scenarios

ACS and SaaS Cloud Scenarios

ACS Components

ACS Components

Web Application Scenario

Web Application Scenario

Challenge

Challenge

Cloud Challenge

image

Traditional Authentication

Traditional Authentication

WCF Service Scenario

WCF Service Scenario

Windows Phone 7 Scenario

Windows Phone 7 Scenario

Windows Phone 7 Scenario – Solution

Windows Phone 7 Scenario – Solution

Working With ACS – Workflow

Working With ACS – Workflow

Attachment: Azure AppFabric Access Control Service (ACS).zip

Very kind of you, Alik.


Glenn Gailey (@ggailey777) explained Counting Entities in an OData Service in a 4/21/2011 post:

image We love counting things, a census, and statistics are very popular. Of course, entities in a data service that support the Open Data Protocol (OData) are no different. For example, having a count of the total numbers of entities that you are returning to customers is useful when calculating the total number of pages that can be returned when employing client-side paging. In this blog post, I will endeavor to describe the rationale behind and the means by which you can count entities in the data service.

Service-Side versus Client-Side Counting

imageOne way to count entities in an entity set (OData feed) or that are returned by some specific query is to request the entities from the data service, and count the number of entry elements in the returned response, or the number of materialized entities in the returned IEnumerable<TEntity> collection. But this may not be the best method, especially if you don’t need the entire set of entity data on the client. Because loading a bunch of entities just to count them is a very expensive way to take a census, OData defines the following two methods that enable the data service to count entities for you:

$count Endpoint

OData defines a special $count endpoint. This endpoint, when applied to an entity set (feed) in the URI returns a single value that is a count of all entities in the set, without any XML or JSON decoration; no entity data is returned. The following query returns a value of 91:

http://services.odata.org/Northwind/Northwind.svc/Customers/$count

The $count endpoint also works with filtered queries, for example, this returns a value of 6, which is the number of Northwind customers in London:

http://services.odata.org/Northwind/Northwind.svc/Customers/$count/?$filter=substringof('London', City)

For more info on the $count endpoint, see Section 3.0 Resource Path in the OData specification.

$inlinecount Query Option

OData also defines an $inlinecount system query option. This query option returns the same value as does $count, but it does so in the <count> element within the response feed, along with the entity data itself. You would use this method to get a total count when paging and you wanted the first page of entities. This URI returns the same count (6) as the previous URI, but it does so along with the six Customer entities:

http://services.odata.org/Northwind/Northwind.svc/Customers/?$filter=substringof('London', City)&$inlinecount=allpages

For more information, see Section 4.9 Inlinecount System Query Option in the OData specification.

(A limitation in both of these methods is that counts are always applied against the resource path, which is limited to a specific entity or collection, which means that you cannot request a count of, say, both a set of selected customers and all their related orders in a single query. You would instead have to either iterate through returned customers and issue requests for a count of each set of related orders, or use $expand to include related orders in the response and count them on the client.)

Counting Support in OData Clients

The main Microsoft-created OData clients (WCF Data Services client, WCF Data Services client for Silverlight, and OData client for Windows Phone) all enable you to request these counts and access the results in a client application. The topic Querying the Data Service covers how to use both the IncludeTotalCount and Count/LongCount methods of DataServiceQuery<T> to get these counts when synchronously accessing the data service. However, things get a little trickier when accessing the data service asynchronously.

Asynchronous Counts

When accessing the data service asynchronously, which is required by both Silverlight and Windows Phone applications, counting gets a little more tricky. This is because there are no BeginCount/EndCount methods. This means that there is no way to asynchronously access the $count endpoint to get only the count without the entity data, as you can by calling DataServiceQuery<T>.Count(). Fortunately, you can get as close as possible to this behavior by using IncludeTotalCount while requesting that zero entities by returned. In this example, we request the inline count, but we also use the Take() method (which is converted to $top in the URI) to ensure that no entity data is returned, just the count element in the response message:

      {
       …
          context =
               new NorthwindEntities(new Uri("http://services.odata.org/Northwind/Northwind.svc/"));
           var  query = (from cust in context.Customers.IncludeTotalCount()
                       where cust.City == "London"
                       select cust).Take(0) as DataServiceQuery<Customer>;           
           query.BeginExecute(OnQueryCompleted, query);
       }

       public void OnQueryCompleted(IAsyncResult result)
       {
           var query = result.AsyncState as DataServiceQuery<Customer>;
           var response = query.EndExecute(result) as QueryOperationResponse<Customer>;
           int count = (int)response.TotalCount;
       }

If you are interested, I used the $inlinecount option with client-side paging when composing the query URI for the OData Windows Phone quickstart.


Michael Pizzo explained Using Microsoft WCF Data Services Reference Data Caching Extensions (CTP) in a 4/20/2011 post to the WCF Data Services Team blog:

image Last week at MIX we announced proposed extensions to the OData protocol for obtaining changes (deltas) to the results of an OData Request. The extensions provide a hypermedia-driven model for obtaining a "Delta Link" from a delta-enabled feed that can later be used to obtain changes from the original results.

imageAlong with the announcement, we released a new CTP of WCF Data Services named "Microsoft WCF Data Services For .NET March 2011 Community Technical Preview with Reference Data Caching Extensions". As the (incredibly long) name implies, this CTP builds on (and is fully side-by-side with) the functionality exposed in the "Microsoft WCF Data Services March 2011 CTP 2 for .NET Framework 4", adding the ability to expose and consume delta links in WCF Data Services.

This "Reference Data Caching CTP" is accompanied by a walkthrough that shows how to create a code-first entity model that supports delta links, and how to expose that model through WCF Data Services. The combination of Code-First for ease of use and delta support for change tracking makes it easy to build reference data caching applications without having to worry about the details of tracking changes in the database. But what if you have existing data that you want to expose and track changes on? In this write-up we explore how to delta-enable your service for more advanced scenarios by looking at the WCF Data Services component in a little more detail, and showing how to expose and consume delta queries over existing entity models.

Please keep in mind that this CTP is only a preview of the Reference Data Caching Extensions and is subject (okay, guaranteed) to change, but this should be representative of the way we are thinking about defining the functionality and we'd love to get your input.

Reference Data Caching with the Entity Framework

As part of this CTP, the Data Service Provider for the Entity Framework has been enhanced to support getting and applying "delta tokens" to queries against your entity model. While the production version will likely be based on extensions built into the Entity Framework, this release works with the existing Entity Framework version 4.0. To support delta tokens in this release you need to a single new (possibly virtual) entityset for tombstones, along with a separate (possibly virtual) entityset for each delta-enabled entityset. This "deltaset" exposes LastChange values; Int64 values that are guaranteed to increase with each change to a set.

Exposing the ability to request LastChange and tombstone values through the model enables delta tracking to be supported on top of a variety of different implementations. For example, a database may track change information through rowversion (i.e., timestamp) columns in base tables, insert/update/delete triggers to populate physical side tables, or storage-specific built-in change tracking mechanisms.

The current CTP relies on naming conventions for determining the entitysets in the entity model that contain the delta information. In a future version we expect to use annotations to allow you to explicitly specify the source for delta information for a given entityset.

Exposing LastChange values through your Entity Model

In order to track changes against an entityset in your model, define an entityset in your Entity Model named "Get[EntityTypeName]Deltas" (for example: "GetCustomerDeltas.") This rather strange naming convention is in anticipation of using composable table valued functions in a future version of the Entity Framework.

This “deltaset” is expected to have the same key fields as the target entityset, along with an Int64 column named (again, currently by convention) "LastChange." It may be backed by an actual table (or view) in the database, or may be defined through a defining query in SSDL.

Example: Exposing LastChange values for Northwind.Customers

Taking the Northwind Customers table as an example, the easiest way to expose a LastChange column is by adding a timestamp (rowversion) column to the table:

alter table Customers add LastChange rowversion

This column doesn't need to be exposed (mapped) in the base entityset in your entity model (i.e., Customers), but can be exposed as a separate (virtual) entityset in the SSDL section of your .edmx file through something like the following:

<EntitySet Name="GetCustomerDeltas" EntityType="NorthwindModel.Store.CustomersDelta">
<DefiningQuery>
        Select CustomerID, convert(bigint, LastChange) as LastChange
        From [Customers]
</DefiningQuery>
</EntitySet>

Here NorthwindModel.Store.CustomersDelta entity type is defined as:

<EntityType Name="CustomersDelta">
<Key>
        <PropertyRef Name="CustomerID" />
</Key>
<Property Name="CustomerID" Type="nchar" Nullable="false" MaxLength
="5" />
<Property Name="LastChange" Type="bigint" StoreGeneratedPattern="Computed" />
</EntityType>

Having defined this virtual entity set in your storage model, you can use the designer to map it to an entityset named "GetCustomerDeltas" in your entity model.

Note that this entityset does not need to be exposed by your DataService. The set of entitysets that are exposed by your WCF Data Service are controlled through the configuration object passed to the InitializeService method of your DataService:

config.SetEntitySetAccessRule("GetCustomerDeltas", EntitySetRights.None);

Note that EntitySetRights.None is the default, so unless you’ve given read rights to “*” (which is not recommended) you should not need to add this line.

Repeat the above steps to define additional “deltasets” for each entityset for which you want to track changes.

Exposing Tombstones through your Entity Model

In order to support deletions through the Data Service Provider for the Entity Framework, you need to expose an additional entityset named (again by convention) “GetTombstones”. The entities returned by this entityset must contain the following properties:

TableName – The name of the entityset mapped to the table whose row was deleted

KeyValues – A string representation of the key values of the deleted row. This must match the format of Key Values in an OData key reference; strings must be enclosed in single quotes and multi-part keys must be comma separated, in the order they appear in the entity type declaration, and qualified with “[KeyPropertyName]=” (for example, “OrderID=123,ProductID=1”).

DeletedTime – The DateTime value when the column was deleted

LastChange – The delta value for the deletion

In the future we will likely have separate tombstone tables for each entityset so that we can break keys out as individual columns, but for now we merge them into a single “KeyValues” column as described above.

Example: Exposing Tombstones for Northwind.Customers

Going back to the Northwind database, you can expose tombstones by first creating a table to contain information for deleted rows.

CREATE TABLE [dbo].[GetTombstones](
      [TableName] [nvarchar](125) NOT NULL,
      [KeyValues] [nvarchar](255) NOT NULL,
      [DeletedTime] [datetime] NOT NULL,
      [LastChange] [timestamp] NOT NULL,
CONSTRAINT [PK_Deletions] PRIMARY KEY CLUSTERED (
               [TableName] ASC,
               [KeyValues] ASC,
               [LastChange] ASC))

In order to populate the tombstone table when a customer is deleted, you can define a DELETE trigger on your Customers table:

CREATE TRIGGER [dbo].[Customers_Delete] 
ON [dbo].[Customers]
     AFTER DELETE
AS
BEGIN
     SET NOCOUNT ON;
INSERT INTO Tombstones(TableName, KeyValues, DeletedTime) (
SELECT 'Customers' as TableName,
'''' + CustomerID + '''' as KeyValues,
getdate() as DeletedTime
FROM Deleted )
END

Define similar DELETE triggers for each additional table you on which you want to track deletions.

Finally, map this table to an EntitySet in your EntityModel named “GetTombstones”. Again, this entityset does not have to be exposed through your data service; it will be picked up and used by the Data Service Provider for the Entity Framework.

This example shows one way of mapping a delta-enabled model that can be applied to virtually any store with simple timestamp and trigger capabilities. But by using the Entity Framework to decouple how changes are tracking in the database from how delta information is exposed in the entity model we can support a number of different change-tracking capabilities, such as built-in change tracking functions or more advanced Change Data Capture (CDC) functionality in Microsoft SQL Server 2008.

Referencing the Delta-Enabled WCF Data Services CTP in your Service

Finally, to use the Delta-enabled WCF Data Services you will need to reference the delta-enabled assemblies, Microsoft.Data.Services.Delta.dll and Microsoft.Data.Services.Delta.Client.dll, instead of the shipping System.Data.Services.dll and System.Data.Services.Client.dll. In addition, you will need to make sure that the mark-up on your .svc class references the correct assembly. To do this, right-click your .svc file, select “View Markup”, and change “System.Data.Services” to “Microsoft.Data.Services.Delta” as in the following example:

Change:

<%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory, System.Data.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" Service="MyDataService" %>

To:

<%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory, Microsoft.Data.Services.Delta, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" Service="MyDataService" %>

Assuming you have set everything up correctly, you should see a delta link returned on the last page of results from any “delta-capable” request against your entityset. To be delta-capable, the request must not contain any conditions on non-key fields and must not include $expand.

Reference Data Caching with Custom Providers

In addition to supporting delta tracking on top of a delta-enabled Entity Framework model, this WCF Data Services CTP supports delta tracking against custom providers through a new IDataServiceDeltaProvider interface. Stay tuned for information on implementing this interface in order to support delta tracking over your custom Data Service Provider.

In the meantime, looking forward to your thoughts & comments, either in the comments for this blog post or in our prerelease forums here.

Michael Pizzo
Principal Architect
Data Frameworks


Marcelo Lopez Ruiz explained Supporting JSONP callbacks in WCF Data Services in a 4/19/2011 post:

imageBy default, WCF Data Services does not support the $format and $callback options to support JSONP.

If you're using datajs to access a service and have set the enableJsonpCallback flag to true but you're seeing an error on the response that looks like "The query parameter '$callback' begins with a system-reserved '$' character but is not recognized." (you'll need a network capture to see this), then the server doesn't support JSONP. Thanks to Marco Bettiolo for the problem report and network diagnostics!

imageA very common way to add support is described by Pablo Castro in this blog post, or if you prefer to get up and running faster, the sample is here.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Vittorio Bertocci (@vibronet) reported his appearance in The New ACS on CloudCover video segment on 4/22/2011:

image

image This week Steve was at Mobile/Cloud Connections in Vegas presenting on WP7 and Windows Azure. Wade kindly invited me to co-host Cloud Cover for this week, and I happily jumped on the opportunity to shed more light on the new ACS and spread the Sesame + ACS tip I’ve learned from Justin. I took the task of substituting Steve very seriously, but I failed pretty fast (within 20 seconds or so). Thank you Wade & Steve for having me and for the LOLz :)

Here there’s the caption from the ch9 pages. Have fun!

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show @CloudCoverShow.

In this episode, Vittorio Bertocci joins Wade as they explore the Umbraco Accelerator for Windows Azure. The accelerator is designed to make it easy to deploy Umbraco applications into Windows Azure, allowing users to benefit from automated service deployment, reduced administration, and the high availability & scalability provided by Windows Azure. Additionally, Vittorio discusses last week's launch of the Access Control Service 2.0.

In the news:

Get the Sesame Data Browser from Vittorio's tip


Vittoria Bertocci (@vibronet) explained on 4/22/2011 that this Fell Thru the Cracks: “Re-Introducing the Windows Azure AppFabric Access Control Service” on MSDN Magazine 12/2010:

imageOh boy, I am am really getting old. While I was compiling today’s yearly blog roundup I *knew* there was something missing, and then it came to me: somehow I forgot to blog about the article that the amazing Wade and I wrote about ACS back in December! In fact we wrote it much earlier, but December is the MSDN Magazine issue for which it came out.

The screenshots are a bit outdated, the portal got much prettier since then, however I think it retains its value as an introduction to ACS.

If somebody asks you what is ACS, just point them to the article; they’ll get the basis, and the subsequent conversations will be much easier.
It works, garantito al limone: I’ve been successfully using this technique with colleagues & customers since December.

Bonus: ACSは日本語で利用についてのエッセイ随笔ACS的中文版本!Available in other languages in general, if you can figure out the “secret” URI structure of the MSDN magazine… have fun!


The [Windows] Identity and Access Team reported AD FS 2.0 Content Map is published on 4/21/2011:

imageWe have published an AD FS 2.0 content map wiki page which is intended to act as a content map for all members of the AD FS 2.0 community.

image722322222This is an on-going effort. Members of the AD FS product team will monitor this article on a regular basis and will post new links as they become available on Microsoft.com. The following is the current TOC list of this article:

We would like to enlist your help in adding useful links to this article in order to make hot AD FS 2.0 topics and solutions more discoverable to the overall community. If you know any useful AD FS 2.0 content that that is not listed in this article or if you would like to have a hot AD FS 2.0 topic documented, please send your feedback to AD FS Product Team.


The Windows Azure AppFabric Team reported Updated IP addresses for AppFabric Data Centers on 4/20/2011:

image722322222Today (4/20/2011) the Windows Azure platform AppFabric has updated the IP Ranges on which the AppFabric nodes are hosted.  If your firewall restricts outbound traffic, you will need to perform the additional step of opening your outbound TCP Ports and IP Addresses for these nodes. Please see the 1/28/2010 “Additional Data Centers for Windows Azure platform AppFabric” posted which was updated today to include the new set of IP Addresses.


Keith Bauer of the Windows Azure AppFabric Customer Advisory Team posted Understanding the Windows Azure AppFabric Service Bus [QuotaExceededException] on 4/19/2011:

image This post provides insight into some of the undocumented details of the Windows Azure AppFabric Service Bus connection quotas. In particular, it answers the following questions: Under what circumstances will you receive the QuotaExceededException when establishing Service Bus connections? What are the “real” Service Bus connection quotas? What can you do to overcome them? And, finally, are there any plans to address these quota limitations?

Overview

image722322222There is no magic in interpreting the Service Bus QuotaExceededException message. It is not cryptic, and the basic information is provided in the exception message itself: “User quota was exceeded. Maximum number of connections per solution exceeded.” This is not too difficult to comprehend… you exceeded some maximum value (i.e. the quota) and the Service Bus cannot process your request. The challenge with the exception is in understanding what quota you exceeded, and what options you have for dealing with this. The background for this post is based on a recent customer engagement where we started receiving this message as the deployed solution exceeded the maximum number of allowed Service Bus connections. However, this came as a surprise, because the FAQ on Windows Azure Pricing states the quota is 500 Service Bus connections, and in this particular test, the solution only had one-tenth of that being consumed, or about 50+ connections, before we started seeing this error.

For reference, here’s the actual error that was encountered:

[QuotaExceededException] User quota was exceeded. Maximum number of connections per solution exceeded.; STACK: Server stack trace: at Microsoft.ServiceBus.RelayedSocketInitiator.Connect(Uri uri, TimeSpan timeout) at Microsoft.ServiceBus.Channels.BufferedConnectionInitiator.Connect(Uri uri, TimeSpan timeout) at Microsoft.ServiceBus.Channels.ConnectionPoolHelper.EstablishConnection(TimeSpan timeout) at Microsoft.ServiceBus.Channels.ClientFramingDuplexSessionChannel.OnOpen(TimeSpan timeout) at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout) at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout) at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at …

Understanding the Current Limits

After a few calls to support, and some collaboration with the engineers, it appears there are some undocumented quotas based on the size of your connection pack. Figure 1 represents the connection packs currently offered (as of Apr. 2011) and the “actual” per namespace quota limitations imposed on each package size. This clearly illustrates the distinction between the connection pack size (i.e. your discounted pricing based on number of connections) and the associated quotas (i.e. the maximum number of concurrent connections allowed before you start getting the QuotaExceededException).

image

Figure 1 – Quotas by Connection Pack Size

Given this newfound information, it was easy to determine why we started encountering the aforementioned exception. In our case, we were using the “pay as you go” model, and reached the internal limit of 50 connections.

How to Overcome the Current Quota Limits

Now that we have a basic understanding of why we were encountering this exception, as well as an understanding of what the quotas are, we can evaluate the options at our disposal for overcoming these limits. Here are two options:

1. Upgrade your connection pack size until you get to a level where you will not be exceeding the quotas. This can be accomplished by navigating to the new Service Bus management  portal, which has recently been integrated into the overall Windows Azure Management Portal experience.

2. What if you know you will have more than 835 connections (i.e. the current limit for the 500 connection pack)? In this case, you can work with the Azure Business Desk and request a “custom” connection pack size (i.e. > 500) and a “custom” quota (i.e. whatever you need this to be!). That’s right, in addition to requesting a new connection pack size, you can (and should) request a new quota/connection limit. You can request this by calling the phone number appropriate for your region, which can be found by completing the questions at our Microsoft Support for Windows Azure site.

Note Note:

Based on the current pricing plan (April 2011), if you exceed the number of concurrent connections (as defined by your connection pack size), you will be charged at the “pay as you go” rate for each additional connection. It is only after you reach the quota/limit that you will receive the QuotaExceededException message.

Plans to Address the Quota Limits

For those of you that have plans to exceed the 835 current connection limit, or for those of you that want to continue a “pay as go” model without upgrading your connection pack size for the sole purpose of increasing your quota, you will be happy to know that a fix has been planned and the product group will introduce a new quota limit. The new quota limit (which should be available within the next few weeks/months) is expected to be 2000 concurrent connections, regardless of the connection pack used! While the process for overcoming the quota limit will likely remain the same, you should be happy to know that you should experience fewer QuotaExceededException messages as a result of this planned change.

Resources and References:

Authored by: Keith Bauer
Reviewed by: Christian Martinez, Paolo Salvatori, James Podgorski


Bruno Terkaly posted Source code for my AppFabric Discussion on 4/19/2011:

image The Weather and Expose Image applications

I’ve have traveled and will continue to travel numerous cities:

  • Denver
  • San Francisco
  • Tempe
  • Bellevue
  • Seattle
  • Irvine
  • Los Angeles

image722322222I committed to making my code available.

MyImage

Expose Image

image


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, Traffic Manager, RDP and CDN

Maarten Balliauw described Geographically distributing Windows Azure applications using Traffic Manager in a 4/22/2011 post:

image With the downtime of Amazon EC2 this week, it seems a lot of websites “in the cloud” are down at the moment. Most comments I read on Twitter (and that I also made, jokingly :-)) are in the lines of “outrageous!” and “don’t go cloud!”. While I understand these comments, I think they are wrong. These “clouds” can fail. They are even designed to fail, and often provide components and services that allow you to cope with these failures. You just have to expect failure at some point in time and build it into your application.

imageLet me rephrase my introduction. I just told you to expect failure, but I actually believe that clouds don’t “fail”. Yes, you may think I’m lunatic there, providing you with two different and opposing views in 2 paragraphs. Allow me to explain: "a “failing” cloud is actually a “scaling” cloud, only thing is: it’s scaling down to zero. If you design your application so that it can scale out, you should also plan for scaling “in”, eventually to zero. Use different availability zones on Amazon, and if you’re a Windows Azure user: try the new Traffic Manager CTP!

The Windows Azure Traffic Manager provides several methods of distributing internet traffic among two or more hosted services, all accessible with the same URL, in one or more Windows Azure datacenters. It’s basically a distributed DNS service that knows which Windows Azure Services are sitting behind the traffic manager URL and distributes requests based on three possible profiles:

  • Failover: all traffic is mapped to one Windows Azure service, unless it fails. It then directs all traffic to the failover Windows Azure service.
  • Performance: all traffic is mapped to the Windows Azure service “closest” (in routing terms) to the client requesting it. This will direct users from the US to one of the US datacenters, European users will probably end up in one of the European datacenters and Asian users, well, somewhere in the Asian datacenters.
  • Round-robin: Just distribute requests between various Windows Azure services defined in the Traffic Manager policy

As a sample, I have uploaded my Windows Azure package to two datacenters: EU North and US North Central. Both have their own URL:

I have created a “performance” policy at the URL http://certgen.ctp.trafficmgr.com/, which redirects users to the nearest datacenter (and fails-over if one goes down):

Windows Azure Traffic Manager geo replicate

If one of the datacenters goes down, the other service will take over. And as a bonus, I get reduced latency because users use their nearest datacenter.

So what’s this have to do with my initial thoughts? Well: design to scale, using an appropriate technique to your specific situation. Use all the tools the platform has to offer, and prepare for scaling out and for scaling '”in”, even to zero instances. And as with backups: test your disaster strategy now and then.

PS: Artwork based on Josh Twist’s sketches


The Windows Azure Team suggested on 4/21/2011 that you Don't Miss The Academy Live Session, "Windows Azure Traffic Manager", Friday, April 22, 2011, 8:00 am PDT. It might be too late, but:

Don't miss tomorrow's Academy Live session, "Windows Azure Traffic Manager" to learn more about the Windows Azure Traffic Manager Community Technology Preview (CTP).  Windows Azure Traffic Manager allows customers to geo-distribute their applications for business continuity, performance, and load distribution.

During this session, presenter and Windows Azure Infrastructure Senior Product Manager Tina Stewart will provide an overview of the Windows Azure Traffic Manager, including its benefits and how to use it. Click here to learn more and to register.

There was no indication as of 4/24/2011 that the session was recorded and available from an archive.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Avkash Chauhan (@avkashchauhan) posted Source Code: Hosting Silverlight Pivot in Windows Azure on 4/22/2011:

You can download this full VS2010 solution and open it in VS2010 in admin mode.

Edit MainPage.xaml.cs and enable the following line so you can use my prebuilt Pivot in your code, then just press F5 to test the sample in the compute emulator:

private static readonly Uri s_GalaxiesCollectionUri =
    new Uri("http://galaxiespivot.cloudapp.net/ClientBin/galaxies.cxml");

public MainPage()
{
    InitializeComponent();

    // Load the prebuilt Pivot collection hosted at galaxiespivot.cloudapp.net.
    pivotWidgets.LoadCollection(s_GalaxiesCollectionUri.ToString(), string.Empty);
}

The VS2010 solution looks like this:

Download the source code from: http://pivotviewerinazure.codeplex.com/

Glad to see Avkash’s new mug shot from his Twitter account. Following.


Bruce Kyle posted an ISV Video: NewsGator Moves 3 Million Blog Posts Per Day on Windows Azure case study to the US ISV Evangelism Blog on 4/22/2011:

Through its Datawire service, NewsGator aggregates approximately 1.6 million content sources on an hourly basis. This service used to be housed in their own data center; NewsGator recently moved this service to Windows Azure.

The VP of Mobile and Data Services, Walker Fenton, and Software Developer Brian Reischl describe tips and tricks from moving this high-availability service, which aggregates more than 3.25 million posts per day.

Video Link: NewsGator Moves 3 Million Blog Posts Per Day on Azure.

The team tells me how they started with an existing ASP.NET application, but during the transition had to change how they thought about several facets of the solution – from technical aspects of storage and network connectivity to the business model.

Very important to them was that team members no longer need to carry a pager to handle issues in the day-to-day operations of the data center, such as hard drive failures. Brian says, "No one gets woken up in the middle of the night because an Azure hard drive has failed."

Walker describes the newness of the cost model — how seemingly small transactions can add up. He also talks of the deployment efficiencies, and how they no longer have the "one weird box" problem -- where one machine in the data center is somehow different from the rest, and always causing headaches around deployment, scalability, and maintenance.

About NewsGator

NewsGator, who has earned the ISV Gold Competency, offers award-winning enterprise social computing solutions. Direct integration with SharePoint’s business collaboration platform means NewsGator Social Sites runs hassle-free on thousands of organizations’ existing computing infrastructures. Capabilities familiar to consumer-oriented social software, such as microblogging, activity streams, social profiles, mobile clients, and expertise location, ensure users extract real business value from collaboration and knowledge management activities.

Founded in 2004 and with over 2.5 million paid users, NewsGator serves Fortune 200 and government knowledge workers across the globe – including Accenture, Biogen Idec, Charles Schwab, Deloitte, Edelman, Fujitsu, General Mills, JPMorgan Chase & Co., Kraft Foods, Novartis, Unisys Corporation, the United States Air Force, and the United States Army.

Together, SharePoint and Social Sites are propelling the future of productivity - http://www.newsgator.com.

Other ISV Videos

For videos on Windows Azure Platform, see:

For other videos about independent software vendors (ISVs):


Robert Duffner continued his interview series with Thought Leaders in the Cloud: Talking with Scott Morehouse, Director of Software Development at Esri on 4/21/2011:

Scott Morehouse [pictured at right] was involved in the early development of GIS (geographic information system) in the Harvard Graphics Lab and is now director of software development for Esri. He was responsible for the initial design and architecture of Esri's ARC product.

In this interview, we discuss:

  • How the cloud is enabling more collaboration around GIS data
  • The cost and complexity in setting up on-site GIS solutions, vs. using cloud based or on-demand solutions
  • The opportunity for "mashups" where users combine their on-site data with cloud-based data
  • Opportunities created by Azure virtual machines and database instances

Robert Duffner: To get us started, could you please take a moment to introduce yourself and tell us about your role at Esri?

Scott Morehouse: I direct the software and product development activities of Esri. I've been involved in building information systems for working with maps and geographic data for 25 or 30 years. We built systems for workstation and client/server environments, then we built web-based systems, and now we're building systems that leverage cloud services and infrastructure.

My background is in geography and software engineering. We're heavily involved in applying the appropriate computing technology and leveraging general-purpose computing infrastructure to serve our users, who work with maps and geographic information.

Robert: You've been involved in GIS for quite some time, going back to the Harvard Graphics Lab. How have you seen the field change over the decades, and where do you see it going?

Scott: It's interesting to see the technological changes, but the fundamentals actually are quite the same, in terms of bringing geographic information to life in support of real users, real decision-making processes, and real work flows.

One thing that has especially become easier with modern technology is building collaborative systems and making information available to everyone in an organization, rather than having it locked up in departmental systems or information silos. Using web technologies and mobile device styles of system building has made it a lot easier to allow people in a given community to participate in implementation of the information.

Robert: You've talked about the underpinnings moving from client/server to a web-based modality, and now leveraging cloud computing. How do you see cloud providing benefits for GIS?

Scott: There are a number of different dimensions that make cloud interesting to us and our users. First is the simple fact that information systems have been moving from a client/server pattern to a web-centered pattern. By that I mean that even intranet or internal systems within an organization are being built around a web programming pattern and around a web style of user interactions.

Building a web-style information system implies an easy-to-use, browser-based modality that is stateless and uses a certain programming pattern. It implies making the information available to devices like iPhones and tablets as well as to work stations. It implies a certain style of documentation and leveraging a community of people for a more collaborative environment.

People are very interested in building applications that work that way, because that's the highest style of technology that they're used to. Nobody works with command prompts anymore except for system administrators and developers.

Another trend is the complexity of building and managing a computing infrastructure for an organization or even for yourself. It's really a difficult process to create the right infrastructure for hard drives, CPU cores, network connectivity, security, software patches, and so forth. So the notion of being able to grant or tie into a hosted infrastructure rather than having to build and maintain your own is very attractive to our users. They just want to turn a switch and get a new server that they can deploy a workload to.

A third thing is the ability to combine and mash up functionality and information that comes from other places. Users like to be able to take our case maps and data that others have created, and use them together in their own applications.

Robert: With SaaS applications, you want multi-tenancy and for each customer's data set to be completely isolated. That's sometimes true with GIS, but at other times you want to share and use community data. How does the cloud facilitate that?

Scott: The cloud facilitates the sharing of information in a couple of ways. One is that web-style system architecture makes information accessible through services. The notion that information is accessible through RESTful services or web-style interfaces really reduces the problems of getting at information. You're not having to ETL data from one database to another or these types of things.

In that context, you have to be clear as to what information is private, what information is semi-private, and what information is public. I think there's an implication that if information is easily accessible through a web-style interface, it also has to be public information. That's not necessarily the case, and we can put security around information in that context.

I think the question of whether a system is based on a multi-tenant architecture or whether it's based on having actual instances per user is kind of a fine point of implementation.

SQL Azure is multi-tenant, but there are individual database instances within that. Some people can own and control their own database container, but the system is optimized in such a way that it scales and has other attributes that multi-tenant applications give you. We see a combination of services that are implemented in a multi-tenant style and applications that are instance-per-organization style.

In the context of SharePoint, for example, there's a role for both a multi-tenant approach, for sharing documents and collaborating on them, as well as for allowing people to rent their own SharePoint instance in a hosted or cloud environment.

Robert: You've also talked about the cloud lowering the barriers for people to utilize GIS because they don't have to stand up servers to have GIS capabilities. Can you talk a little bit more about that?

Scott: The main barrier for people to get into this web style of system building with geographic information is setting up and managing servers. The cloud makes that easier in a couple of ways. First, people can stand up and manage their traditional enterprise-type servers and services using the cloud as a hosting environment, or a virtual data center for their servers, if you will.

It also allows us to create new services that are lighter weight, leveraging the scalability of frameworks like Azure. So people can basically get going with information managed and delivered through services accessible from web clients a lot easier than if they had to buy their own hardware and connect it to the Internet.

Robert: Esri itself has a bit of a hybrid model, where you host your own servers but you also use Amazon and Windows Azure. Can you talk a little bit about your architecture and how you decide what to keep in house and what to host in the cloud?

Scott: Our fundamental architecture is web-centric, meaning that we've been working to expose maps and geographic information through open, web-accessible interfaces, primarily REST and JSON, but also SOAP and some other types. We've engineered our front end as clients to these services, so this web-centric system architecture can be deployed within an enterprise entirely, but it's also well suited to running on the Internet. It's also well suited to having elements of it, namely some of these services, actually hosted in a cloud infrastructure.

Since everything is a service, it really doesn't matter whether the service is running on physical hardware that's connected to your LAN or on virtual hardware that's physically located in an Amazon or Azure data center. We just make practical decisions about which aspects of the system make sense to run in our customer's data center, which services should run in the Azure cloud environment, and which ones should run in Amazon's cloud. We look at requirements such as what functionality is most efficiently implemented in which infrastructure and which environments meet the security and access requirements.

Robert: How do you see other enterprises using hybrid models where they may keep a large number of servers and applications on site for the foreseeable future, but consume cloud services like those that Esri provides?

Scott: It's not necessarily the case that to take advantage of cloud computing, you need to rewrite or move all of your applications from an enterprise-centric architecture to a cloud-centric architecture. It's certainly possible to build on-premises enterprise applications that combine information coming from your enterprise systems with data feeds, information, and functionality that are coming through a subscription to a cloud service.

We're seeing lots of mash up patterns where people combine geographic information from our hosted services with their enterprise information and even build their enterprise systems using on-premises web sites or thick-client applications.

Robert: I think a lot of companies with products they've traditionally sold as on-premises offerings see the cloud as something of a threat, but Esri has really embraced the cloud and pivoted to this technology. What advice do you have for other companies or organizations that have on-premises solutions about adopting the cloud?

Scott: Every organization is different, and we've really just focused on recognizing that this new pattern of building systems that leverage browsers and mobile devices is a pattern of systems that people expect. They want to get at their corporate reports or their geographic information from their iPhone as easily as they can get to their music from their iPhone.

There's an opportunity there to grow and support that style of solution as well as more traditional desktop computing. Amazon and Microsoft have both worked hard to make it relatively simple to migrate applications from a traditional server computing environment to a hosted computing environment.

In particular, the latest release of Azure has virtual machines and other capabilities that work both on premises in private clouds as well as off-premises in hosted ones. I don't see cloud-based applications completely replacing on-premises based ones, but I see the two complementing one another, and I see a lot of cases where you can design a system that will work well in both environments.

Robert: Key software providers like Esri providing services in the cloud definitely provides an opportunity that wasn't there before, in terms of enabling customers to co-locate, for lack of a better term, their software with your software in the same cloud. What are your thoughts on that?

Scott: People can build systems that take advantage of the cohesion of software components if they share a common cloud infrastructure and common application fabric. We are certainly exposing aspects of our system that allow people to take advantage of that, for example, building web roles that work with our worker roles and our data services, tying them into a common application fabric.

Another interesting thing about this sort of web-centric architecture is that, if it's truly service-based, it is to some extent agnostic as to where the services are coming from, and that's a good thing. We don't want to have to replicate or copy the same information and functionality across to six different ways of storing and managing blobs in a web-addressable way.

We can definitely have applications that mash things up between Windows enterprise architecture and Azure cloud architecture, as well as other hosted environments like Rackspace and other virtual environments like Amazon. You can build in a degree of system cohesion, and it's not necessary to rewrite everything so that it runs entirely inside of Azure, Amazon, or any of the others.

Robert: I came across a paper for the 1997 Esri User Conference titled "Democratizing GIS: Are We There Yet?" Where do you think we are on the path of democratizing GIS?

Scott: A lot of the technical challenges have been overcome. The challenges now are about how to create a lot of great content and communities that can collaborate around it.

Robert: How would you characterize the value of platform as a service, as opposed to infrastructure as a service?

Scott: I think the whole distinction between platform as a service and infrastructure as a service is a false one that creates a lot of confusion. I prefer to think in terms of "system as a service." To build a system, you use the appropriate technology, whether it's database technology or client technology. When people have big debates about whether the business logic should live in the database tier or the middle tier of a three-tier architecture, the real answer is that it should live where it's best suited, and where you can build and maintain a system most appropriately.

I really like what's been going on with this latest release of Azure, because from a practical standpoint, we're actually blurring that distinction. The religious people that refuse to let Azure be platform as a service have relented and allowed us to have virtual machines and allowed us to have database instances with SQL Azure.

That's really opened up a lot of opportunities for moving more conventionally architected systems to the cloud and then adding functionality that might leverage the fabric or platform-as-a-service capabilities. I look at Azure and Amazon not as differences of kind but as differences of quality. Both allow you to build cloud-based systems or system as a service, and you can use both to do tier services, build user experiences, or even have databases.

What's different, really, is the quality of the relational data store, the quality of the runtime environment for hosted app instances as Web roles, and how easy it is to build and manage a system as a service in one or the other.

Robert: Thanks, Scott. I really appreciate your insights.

Scott: Thank you.


Avkash Chauhan (@avkashchauhan) posted Windows Azure SDK 1.4 Refresh: Installation Walkthrough on 4/21/2011:

Windows Azure SDK 1.4 Refresh is ready to install. It includes an important plugin named "Web Deploy" which will help you to [save time deploying Windows Azure Compute instances, with these limitations]:

  • Web Deploy only works with a single role instance.
  • The tool is intended only for iterative development and testing scenarios
  • Web Deploy bypasses the package creation. Changes to the web pages are not durable. To preserve changes, you must package and deploy the service

[See also, Avkash’s third post below.]

Click here for more info and download details....

Now, the following is the Windows Azure SDK 1.4 Refresh installation walkthrough:

The new SDK deployment is done with the application WindowsAzureToolsVS2010.exe, which is very small and downloads all necessary components when you start the installation.

Launching  WindowsAzureToolsVS2010.exe  will give you the following dialog:

Select "Install":

As you can see, I already have Windows Azure SDK 1.4 installed, so I just need to install the "Refresh" only. Now select "I Accept" and the installer will progress as below:

And when the installation is complete, the following dialog appears:

Now when I select "Finish", a new dialog shows the list of applications ready to install with Web Deploy, as below:

Above you can see that most of the components are not on my machine; however, the IIS 7 Recommended Configuration is already installed. As I don't have VS2010 SP1 on my machine, let's give it a try here.

Above, I selected the VS2010 SP1 installation by using the "Add" option; now select "Install" to start the installation:

Above, click "I Accept" and the VS2010 SP1 installation begins as below:

In the middle of installation, I was notified to reboot the machine as below:

After the reboot completes, the VS2010 SP1 installation resumes as below:

Once the installation is completed, the following dialog box appears:

Now you will be back at the same initial window. If you try to install the prerequisite components again, the following screen shows that the components are already installed:

So that was the Windows Azure SDK 1.4 Refresh Installation Walk Through!!


Avkash Chauhan (@avkashchauhan) described Windows Azure SDK 1.4 Refresh: Web Deploy Addition to original SDK 1.4 on 4/21/2011:

On April 15th, the Windows Azure Team released the Windows Azure SDK 1.4 Refresh, which adds only the "Web Deploy" plugin to the original Windows Azure SDK 1.4.

After the Refresh is installed, you can see a new addition to the SDK plugins folder: the new Web Deploy plugin installed at C:\Program Files\Windows Azure SDK\v1.4\bin\plugins\WebDeploy.

You can download Windows Azure SDK 1.4 from the link below; it will install the full 1.4 SDK with Web Deploy if you don't have SDK 1.4 installed. If you already have SDK 1.4 installed, then it will just install the Web Deploy plugin as described above.

http://www.microsoft.com/windowsazure/getstarted/default.aspx

The new SDK deployment is done with the application WindowsAzureToolsVS2010.exe, which is very small and downloads all necessary components when you start the installation.

More Info on Web Deploy:

  • Web Deploy only works with a single role instance.
  • The tool is intended only for iterative development and testing scenarios
  • Web Deploy bypasses the package creation. Changes to the web pages are not durable. To preserve changes, you must package and deploy the service


The Windows Azure Case Studies Team posted Real World Windows Azure: Interview with Brad Johnson, VP of Product and Channel Marketing at SOASTA on 4/20/2011:

MSDN: Tell us about your company and the solution that you have created.

Johnson: SOASTA was founded in 2006 on the basis that web technology and scale has expanded far beyond the reach of testing technologies built in the days of smaller and simpler architectures. Particularly in the web performance testing area, where a mere mention by Oprah or Groupon causes sites to crash with millions of users, and 20 percent of the Internet is consumed by users watching Netflix. Hardware is always a constraint to test these systems, and traditional test tools fail when exposed to technology like Microsoft Silverlight or AJAX, or tasked to analyze performance issues within large datasets. We built a full technology stack, CloudTest, aimed at finding results by analyzing the source of web performance issues at any scale. Complementing this are patented test creation capabilities to realistically drive systems as real users do, to deliver real web scale and geographic load, and we built the whole thing to utilize the cloud.

MSDN: What are the key benefits of using Windows Azure with CloudTest?

Johnson: CloudTest is a capacity hog. Real web users are global, and their numbers are immense. To simulate them, we need globally distributed compute power. With Windows Azure, we can simulate hundreds or millions of web users from United States, Europe, and Asia. When we learned of the Windows Azure distributed locations and nearly limitless capacity in early 2010, we instantly signed up. The low cost of Windows Azure compute resources makes our services affordable enough for customers to run tests of any scope and scale, anytime.

MSDN: Could you give us an overview of the solution?

Johnson: SOASTA has delivered thousands of tests for hundreds of customers using CloudTest. The common thread for all is a critical website or application and looming events, whether daily sales spikes, open enrollment, world crises, celebrity media launches, or mobile phone releases. Because we run in test labs through to live production, the problems we find include code bugs, database limitations, third-party application contention, Content Delivery Network (CDN) needs, firewall meltdowns, inadequate load balancers, and limited bandwidth.

The CloudTest platform is used by SOASTA's team of expert performance testing engineers to deliver services to customers ranging from telcos rolling out mobile devices to leading real estate websites. We also deliver an on-premises version so performance testing teams have full control to build, deploy, and analyze, running on Windows Azure or other internal and external hardware.

  • Core Testing Services: Everything needed to build, run, view, and discover performance issues
  • Test Cloud Management: Allows teams to deploy and configure cloud or internal-based hardware for driving tests
  • Real-Time Analytics: The latest in-memory analytical processing engine built for instantaneous delivery of a diverse set of end-to-end performance metrics during testing
  • The Global Test Cloud: Windows Azure and other public cloud platforms, plus a customer's internal private cloud and even bare-metal hardware are in the hands of CloudTest users for any test

MSDN: Is this solution available to the public right now?

Johnson: Yes. CloudTest On-Demand services are delivered as full turnkey performance testing engagements. Delivered within a day or two, and, on occasion, hours, our team works with the customer to determine test needs. We then build the tests, deploy the cloud-based load generators, and at a scheduled time, the customer team joins ours via a web meeting, and the test begins. Real-time results are analyzed during the live event, and many issues are isolated and often resolved during the test.

CloudTest Pro puts all the web performance testing capabilities we offer, and control of the entire public cloud infrastructure, in the hands of test teams for internal and external performance testing across the software delivery cycle.

SOASTA offers our Windows Azure Performance Certification program for customers deploying on Windows Azure. The service verifies web application performance for up to 10,000 users in a simple package. For information, please visit: http://www.soasta.com/azure-certification.    

Windows Azure Performance Certification ranks your application on Windows Azure.

Click here to read the SOASTA Windows Azure case study about CloudTest used to test Office.com.

Watch this Channel 9 video to get an overview of SOASTA, presented at CloudExpo by SOASTA CEO Tom Lounibos and VP Product and Channel Marketing Brad Johnson.

To learn more about SOASTA and the CloudTest Solution, attend the Academy Live Session Thursday, April 21, 2011 at 8:00 AM PST, "Cloud Testing on Azure with SOASTA: Certifying Web Performance at any Scale" with SOASTA CEO Tom Lounibos.  Register here.


Maarten Balliauw reported Windows Azure SDK for PHP v3.0.0 BETA released in a 4/20/2011 post:

Microsoft and RealDolmen are very proud to announce the availability of the Windows Azure SDK for PHP v3.0.0 BETA on CodePlex. This release is something we’ve been working on in the past few weeks, implementing a lot of new features that enable you to fully leverage the Windows Azure platform from PHP.

This release is BETA software, which means it is feature complete. However, since we have one breaking change, we’re releasing a BETA first to ensure every edge case is covered. If you are using the current version of the Windows Azure SDK for PHP, feel free to upgrade and let us know your comments.

A comment we received a lot for previous versions was that, for table storage, datetime values were returned as strings and parsing them was something you as a developer had to do. In this release, we’ve broken that: table storage entities now return native PHP DateTime objects instead of strings for Edm.DateTime properties.

The feature we’re most proud of is the support for the management API: you can now instruct Windows Azure from PHP, where you would normally do this through the web portal. This means that you can fully automate your Windows Azure deployment, scaling, … from a PHP script. I even have a sample of this; check my blog post “Windows Azure and scaling: how? (PHP)”.

Another nice feature is the new logging infrastructure: if you are used to working with loggers and appenders (like, for example, in Zend Framework), this should be familiar. It is used to provide logging capabilities in a major production site, www.hotelpeeps.com (yes, that is PHP on Windows Azure you’re seeing there!). Thanks, Lucian, for contributing this!

Last but not least: the session handler has been updated. It relied on table storage for storing session data; however, large session objects were not supported since table storage has a maximum amount of data per record. If you are creating large session objects (which I do not recommend, as a best practice), feel free to pass a blob storage client to the session handler instead to have sessions stored in blob storage.

To close this post, here’s the official changelog:

  • Breaking change: Table storage entities now return DateTime objects instead of strings for Edm.DateTime properties
  • New feature: Service Management API in the form of Microsoft_WindowsAzure_Management_Client
  • New feature: logging infrastructure on top of table storage
  • Session provider now works on table storage for small sessions; larger sessions can be persisted to blob storage
  • Queue storage client: new hasMessages() method
  • Introduction of an autoloader class, increasing speed for class resolving
  • Several minor bugfixes and performance tweaks

Get it while it’s hot: http://phpazure.codeplex.com/releases/view/64047

Do you prefer PEAR? Well... pear channel-discover pear.pearplex.net & pear install pearplex/PHPAzure should do the trick. Make sure you allow BETA stability packages in order to get the fresh bits.

PS: We’re running a PHP on Windows Azure contest in Belgium and surrounding countries. The contest is closed for registration, but there’s good value in the blog posts coming out of it. Check www.phpazurecontest.com for more details.


Avkash Chauhan explained Hosting Silverlight PivotViewer control in Windows Azure Web Role in a 4/20/2011 post:

If you decide to host the Silverlight PivotViewer control in a Windows Azure ASP.NET web role, here are the instructions:

Tools:

Install the Silverlight PivotViewer SDK: http://www.silverlight.net/learn/pivotviewer/

Microsoft Live Labs Pivot Tools: http://research.microsoft.com/en-us/downloads/dd4a479f-92d6-496f-867d-666c87fbaada/default.aspx

Prebuild Pivot content:

For example, let's assume that you already have a Pivot collection hosted at some other site:

You can verify that the pivot does work fine in Live Labs Pivot Viewer:

The following steps will help you call a hosted Pivot URL from a Silverlight application running in Windows Azure:

1. Create an ASP.NET Web Role.

2. Add a new Silverlight Application to the solution and integrate it with the Web Role created in step 1.

3. In the Silverlight application, add the following references:

   a. System.Windows.Pivot
   b. System.Windows.Pivot.Model
   c. System.Windows.Pivot.SharedUI
   d. System.Windows.Pivot.StringResources
   e. System.Windows.Pivot.Utilities

4. In MainPage.xaml, add the local namespace and the PivotViewer control shown below:

<UserControl x:Class="PivotSilverlightApp.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:local="clr-namespace:System.Windows.Pivot;assembly=System.Windows.Pivot"
    mc:Ignorable="d"
    d:DesignHeight="300" d:DesignWidth="400">
    <Grid x:Name="LayoutRoot" Background="White">
        <local:PivotViewer x:Name="pvWidgets" />
    </Grid>
</UserControl>

5. In MainPage.xaml.cs, add the LoadCollection call shown below:

namespace PivotSilverlightApp
{
    public partial class MainPage : UserControl
    {
        public MainPage()
        {
            InitializeComponent();

            // pvWidgets.LoadCollection("http://xpert360.net/MIX11/MIX11.cxml", null);

            // Add the hosted Pivot URI below to launch in the cloud with Silverlight.
            pvWidgets.LoadCollection("http://galaxiespivot.cloudapp.net/ClientBin/galaxies.cxml", string.Empty);
        }
    }
}

6. Now check in the browser that the Web Role and Silverlight application are working.

7. Now check the Web Role in the compute emulator and verify it is working fine.

8. Now let's work in the Web Role to remove content that is not necessary (you can certainly keep it if you wish):

   a. Remove the App_Data folder
   b. Remove the Account folder
   c. Remove the Scripts folder
   d. Remove the Styles folder
   e. Remove Global.asax
   f. Remove About.aspx and About.aspx.cs
   g. Remove the Site.Master collection

9. Update web.config so it matches the following (note the staticContent mimeMap entries for .cxml, .dzc, and .dzi, which let IIS serve the Pivot collection files):

<?xml version="1.0"?>
<configuration>
  <system.diagnostics>
    <trace>
      <listeners>
        <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             name="AzureDiagnostics">
          <filter type="" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>
  <connectionStrings>
    <add name="ApplicationServices"
         connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnetdb.mdf;User Instance=true"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
    <authentication mode="Forms">
      <forms loginUrl="~/Account/Login.aspx" timeout="2880" />
    </authentication>
    <membership>
      <providers>
        <clear/>
        <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="ApplicationServices"
             enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false"
             maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10"
             applicationName="/" />
      </providers>
    </membership>
    <profile>
      <providers>
        <clear/>
        <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="ApplicationServices" applicationName="/"/>
      </providers>
    </profile>
    <roleManager enabled="false">
      <providers>
        <clear/>
        <add name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="ApplicationServices" applicationName="/" />
        <add name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider" applicationName="/" />
      </providers>
    </roleManager>
  </system.web>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true"/>
    <staticContent>
      <mimeMap fileExtension=".cxml" mimeType="text/xml" />
      <mimeMap fileExtension=".dzc" mimeType="text/xml" />
      <mimeMap fileExtension=".dzi" mimeType="text/xml" />
    </staticContent>
  </system.webServer>
</configuration>

11. Test the application in the compute emulator.

12. When the test runs successfully, package the application and deploy it to Windows Azure.

I have the site running at : http://galaxiespivot.cloudapp.net/


Beth Massi (@bethmassi) announced her Channel 9 Interview: Speed Up Azure Deployments with the New Web Deployment Feature on 4/19/2011:

In this interview, Anders Hauge, a Program Manager on the Azure tools team, shows us how the new Web Deployment feature, along with Remote Desktop, allows you to incrementally publish your web applications to the Azure cloud. This means that you can push just the changes to an already deployed web app on the same running instance, which greatly accelerates the process of deploying updates while debugging your cloud-based web applications.

Make sure to watch as Anders (humorously) goes into the details of how it works and what the limitations are. This new feature is available in the Azure SDK 1.4.1 refresh which you can install via the Web Platform Installer. For more information see the Windows Azure news from MIX 11.

Watch: Speed Up Azure Deployments with the New Web Deployment Feature

I haven’t had this much fun in an interview in a long time. Anders is a funny dude,  and I laugh a lot during this short one.


Brian Loesgen (@BrianLoesgen) announced the April Update of Windows Azure Platform Training Kit on 4/19/2011:

The April update of the Windows Azure Platform Training Kit is available now; you can get it here.

From the site:

The Windows Azure Platform Training Kit includes a comprehensive set of technical content including hands-on labs, presentations, and demos that are designed to help you learn how to use the Windows Azure platform, including: Windows Azure, SQL Azure and the Windows Azure AppFabric.

The April 2011 update of the Windows Azure Platform Training Kit has been updated for the Windows Azure SDK 1.4 and Visual Studio 2010 SP1, includes three new HOLs, and has updated HOLs and demos for the new Windows Azure AppFabric portal.

Some of the specific changes in the April update of the training kit include:

  • [New] Authenticating Users in a Windows Phone 7 App via ACS, OData Services and Windows Azure lab
  • [New] Windows Azure Traffic Manager lab
  • [New] Introduction to SQL Azure Reporting Services lab
  • [Updated] Connecting Apps with Windows Azure Connect lab updated for Connect refresh
  • [Updated] Windows Azure CDN lab updated for CDN refresh
  • [Updated] Introduction to the AppFabric ACS 2.0 lab updated to the production release of ACS 2.0
  • [Updated] Use ACS to Federate with Multiple Business Identity Providers lab updated to the production release of ACS 2.0
  • [Updated] Introduction to Service Bus lab updated to latest AppFabric portal experience
  • [Updated] Eventing on the Service Bus lab updated to latest AppFabric portal experience
  • [Updated] Service Remoting lab updated to latest AppFabric portal experience
  • [Updated] Rafiki demo updated to latest AppFabric portal experience
  • [Updated] Service Bus demos updated to latest AppFabric portal

Craig Kitterman (@craigkitterman) described WordPress on Windows Azure: A discussion with Morten Rand-Hendriksen in a 4/19/2011 post to the Interoperability @ Microsoft blog:

I finally had the chance to sit down with Morten at MIX11 in Las Vegas last week to discuss the work he is doing on WordPress with Windows Azure to solve some common challenges with multi-site WordPress installations using traditional hosting.

In Morten's words: "I am building a garden just for me and my clients...I control it...but the security and management of the garden is run by a very large company...they also will make sure that it works!"

Read Morten's blog on http://www.designisphilosophy.com and find him on Twitter @Mor10


Jialiang Ge of the All-In-One Code Framework Team posted Introducing 14 new ASP.NET and Azure code samples released in April on 4/19/2011:

image A new release of Microsoft All-In-One Code Framework is available on April 16th. This blog introduces 14 ASP.NET and Azure code samples in the release. The rest new samples will be introduced next week.

Download address: http://1code.codeplex.com/releases/view/64551

Alternatively, you can download the code samples using Sample Browser v3. The new Sample Browser gives you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about new releases.

If it’s the first time that you hear about Microsoft All-In-One Code Framework, please read this Microsoft News Center article http://www.microsoft.com/presspass/features/2011/jan11/01-13codeframework.mspx, or watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, or read the introduction on our homepage http://1code.codeplex.com/.

---------------------------------------

New Azure Code Samples

CSAzureWorkflowService4, VBAzureWorkflowService4

Downloads
CSAzureWorkflowService4: http://code.msdn.microsoft.com/CSAzureWorkflowService4-2519c571
VBAzureWorkflowService4: http://code.msdn.microsoft.com/VBAzureWorkflowService4-62a20440

This sample demonstrates how to run a WCF Workflow Service on Windows Azure. It uses Visual Studio 2010 and WF 4.  While currently Windows Azure platform AppFabric does not contain a Workflow Service component, WCF Workflow Services can be run directly in a Windows Azure Web Role. By default, a Web Role runs under full trust, so it supports the workflow environment.

The workflow in this sample contains a single Receive activity. It compares the service operation's parameter's value with 20, and returns "You've entered a small value." or "You've entered a large value.". The client application invokes the Workflow Service twice, passing a value less than 20, and a value greater than 20.
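To give a feel for how such a workflow service is consumed, here is a minimal client sketch in C#. It is not taken from the sample itself: the IWorkflowService contract, the GetData operation, the binding, and the endpoint address are assumptions for illustration, so check the downloaded solution for the actual names and configuration.

using System;
using System.ServiceModel;

// Hypothetical contract mirroring the sample's single Receive activity.
[ServiceContract]
public interface IWorkflowService
{
    [OperationContract]
    string GetData(int value);
}

class WorkflowClient
{
    static void Main()
    {
        // Replace with the address of your deployed hosted service,
        // or the local address reported by the compute emulator.
        var endpoint = new EndpointAddress("http://127.0.0.1:81/Service1.xamlx");
        var factory = new ChannelFactory<IWorkflowService>(new BasicHttpBinding(), endpoint);
        IWorkflowService proxy = factory.CreateChannel();

        // The workflow's Receive activity compares the parameter with 20.
        Console.WriteLine(proxy.GetData(5));   // expects "You've entered a small value."
        Console.WriteLine(proxy.GetData(42));  // expects "You've entered a large value."

        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}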


Craig Kitterman (@craigkitterman) described New features in the April 2011 CTP the Windows Azure Plugin for Eclipse with Java in a 4/18/2011 post to the Interoperability @ Microsoft blog:

In case you missed the previous announcement, the plugin adds to Eclipse a set of wizards and dialogs which guide the Java developer through the configuration of all relevant project settings when targeting Windows Azure. The plugin builds on top of the Windows Azure Starter Kit for Java, which is primarily a command-line toolset based on a simple Windows Azure project template which includes elements required to package and deploy your Java application to Windows Azure.

As we said in our previous blog posts, this project is evolving quickly. Our goal is to use the stream of community feedback to nail down the correct experience for Java developers. So today, we’re taking the next iteration forward and announcing the April 2011 Community Technology Preview (CTP) of the Windows Azure Plugin for Eclipse with Java. Here’s the list of the features, including the new ones:

  1. Eclipse wizards to create and build new Windows Azure projects in Eclipse,
  2. Shortcuts to deploy and start the project in the Windows Azure Compute Emulator,
  3. Association of *.cscfg and *.csdef files with the Eclipse XML editor for easier XML editing,
  4. New with April CTP: Eclipse wizards to add/remove/configure Windows Azure roles for your project during project creation or project properties editing
  5. New with April CTP: Eclipse wizards to add/remove/configure role endpoints during project creation or project properties editing (ports)


The Windows Azure Plugin for Eclipse with Java is an Open Source project released under the Apache 2.0 license, and the source code is available at: http://sourceforge.net/projects/waplugin4ej.
The best way to get started is to go through the steps explained in our updated tutorial: Deploying a Java application to Windows Azure with Eclipse.

As always, we look forward to your comments and feedback!


Brian Loesgen (@BrianLoesgen) reported Just released: Windows Azure Migration Scanner on 4/12/2011 (missed when posted):

From the great minds of Hanu Kommalapati, Bill Zack, and David Pallmann comes a new rules-based tool that can scan your existing .NET app and alert you to any issues you may have in migrating to Windows Azure.

Great work guys!

You can get it at http://wams.codeplex.com/


<Return to section navigation list> 

Visual Studio LightSwitch

The LightSwitch Team posted Course Manager Sample Part 3 – User Permissions & Admin Screens (Andy Kung) on 4/20/2011:

In Course Manager Sample Part 2 – Setting up Data, we created tables, relationships, queries, and learned some useful data customizations. Now we’re ready to build screens. One of the first things I usually like to do (as a LightSwitch developer) is to build some basic screens to enter and manipulate test data. They can also be useful end-user screens for a data admin. I call them the “raw data screens.” Obviously we don’t want to give all users access to these special screens. They need to be permission based.

In this post, we will set up permissions, create raw data screens, write permission logic, and learn some screen customization techniques. Let’s continue our “SimpleCourseManager” project from Part 2!

Download the Course Manager Sample Application

Setting up the App

Course Manager is a Desktop application using Windows authentication. Let's go ahead and set that up. Double-click Properties in the Solution Explorer to open the application designer.


Application Type

Under the “Application Type” tab, make sure Desktop is selected (this is the default).


Authentication

Under the “Access Control” tab, select “Use Windows authentication” and “Allow any authenticated Windows user.” By the way “Allow any authenticated Windows user” is a new feature introduced in Beta 2. If you need a general app for every employee in your company, you no longer have to manually create user accounts for everyone. Instead, any authenticated Windows user has access to the application. You can then define special permissions on top of this general permission (which is what we will be doing).

User Permissions

You will also find a built-in SecurityAdministration permission in the list of permissions. LightSwitch uses this permission to show or hide the built-in screens that define Users and Roles. As the developer, I want to make sure that I have all the permissions under debug mode (so I can see all the screens). Check the “Granted for debug” box for SecurityAdministration.


Let’s add another permission called “ViewAdminScreen” to the list. We will use it to show or hide our raw data screens later. Again, check “Granted for debug” box.


At this point, if you hit F5 to launch the running app, you will see a Desktop application with your Windows credential at the lower right corner. You will also see the Administration menu group with the built-in Roles and Users screens (because we’ve enabled SecurityAdministration permission for debug mode).


Raw Data Screens

Now let’s create our first screen. Create a list-details screen on Students called “ManageStudents.”


In the screen designer, use the “Write Code” dropdown menu to select ManageStudents_CanRun. This method defines the condition under which the ManageStudents screen shows up in the running application.


We only want to show this raw data screen for users with ViewAdminScreen permission (that we defined in earlier steps). In the code editor, write:

Private Sub ManageStudents_CanRun(ByRef result As Boolean)
    result = User.HasPermission(Permissions.ViewAdminScreen)
End Sub

Let’s hit F5 again. We will see the Manage Students screen we just created. By default, it is placed under the Tasks menu group. Let’s organize all the raw data screens under the Administration menu group.


Let’s go back into the LightSwitch IDE and open the application designer. Under the “Screen Navigation” tab, delete the Manage Students screen from the Tasks group and add it under the Administration group.


Follow the same steps to add other raw data screens (with ViewAdminScreen permission):


Organize your screens to have this menu structure.


Use these screens to enter some sample data for students, instructors, categories, courses, and sections.

Screen Customizations

In the Course Manager sample project, we’ve also made some screen customizations to improve the user experience. Here are the highlights:

Manage Students

In “Manage Students” screen, we’d like to:

  • Show the list items as links
  • Use “Address Editor” control for the address fields
  • Add time tracking fields and show them as read-only text boxes


Show list items as links

In the screen designer, select Student’s Summary control (under the List control). Check “Show as Link” in the Properties.


Use Address Editor control for the address fields

In the screen designer, right-click the “Student Details” node and select “Add Group” to add a group node.


Change the group node to use “Address Editor” control.


Now, drag the Street node and drop it in (STREET LINE 1). Repeat the drag-and-drop for City, State, and Zip.


Add time tracking fields and show them as read-only text boxes

In the screen designer, select “Student Details” node. Click on “+ Add” button and select “Created Time” to add the field. Repeat and add “Updated Time” and “Updated By.”


Use the “Write Code” dropdown menu in the command bar. Select ManageStudents_Created.


<Screen>_Created is called after the UI elements on the screen have been constructed. We want to make them read-only. Write:

Private Sub ManageStudents_Created()
    FindControl("CreatedTime").IsReadOnly = True
    FindControl("UpdatedTime").IsReadOnly = True
    FindControl("UpdatedBy").IsReadOnly = True
End Sub

Manage Courses

In “Manage Courses” screen, we’d like to:

  • Use a multi-line text box for Description field (set Lines property for the TextBox control)
  • Use a “List box mover” UI for the many-to-many relationship (see “How to Create a Many-to-Many Relationship”)


Conclusion

We now have a functional Desktop application using Windows authentication. We also defined permission logic to limit certain screens to privileged users. Coming up next: Course Manager Sample Part 4: Implementing the Workflow.


Beth Massi (@bethmassi) reported a New Video: How Do I Deploy a Visual Studio LightSwitch Application? on 4/20/2011:

image A new video was just released on the Developer Center video page:

How Do I: Deploy a Visual Studio LightSwitch Application?

I originally did this one for Beta 1, but now it’s updated for the latest Beta 2 release. In this video you will learn about what deployment options you have for a LightSwitch application and why you should choose one over the other. I show you how to set up your database and deploy a 2-tier application, as well as how to set up and deploy a 3-tier application onto a web server running Internet Information Services (IIS).

Also check out the following walkthroughs for more information on deployment:

I’ve been busy travelling and speaking at conferences but I managed to get this one done. :-) And I just got my Azure account set up (YAY!) so I’ll be planning on doing one on deploying to Azure next (once I get back from DevDays).


Return to section navigation list> 

Windows Azure Infrastructure and DevOps

David Linthicum claimed “Despite making strides, IT is still not widely embracing cloud computing, and businesspeople are becoming frustrated” in a deck for his IT's cloud resistance is starting to annoy businesses article of 4/21/2011 for InfoWorld’s Cloud Computing blog:

The party line around enterprise cloud computing is that the cloud is always a welcome addition to IT technology portfolios. However, in the real world, that's not always the case. Indeed, IT is still skeptical about the value and use of cloud computing. As a result, the business side is becoming impatient over the speed and innovation of IT or, more accurately, the perceived lack thereof.

But don't take my word for it. A new study from Accenture and the London School of Economics and Political Science's Outsourcing Unit shows that IT people still see issues like security and privacy as a barrier to cloud adoption.

Accenture and the LSE surveyed more than 1,035 business and IT executives and conducted more than 35 interviews with cloud providers, system integrators, and cloud service users. The key finding: There's a gap between business and IT. Businesspeople see the excitement and business benefits of cloud computing, so they're pushing for it. However, IT people see cloud computing as causing issues with security and lock-in, so they're pushing back.

Business is already frustrated about the speed with which IT delivers business solutions that aid the bottom line; IT has the reputation of being the Department of No when it comes to moving into new markets, expanding the enterprise through acquisition, or supporting other business events. Those who drive the business see cloud computing as a way to get around many of the reasons that IT says no. IT's response is to say no to the cloud.



Dana Gardner asserted “The pace of change, and explosion around the uses of new devices and data sources are placing new demands on data centers” as a preface to his IT Maturity Assessment and Data Center Transformation post of 4/21/2011 to the BriefingsDirect blog:

The pace of change, degrees of complexity, and explosion around the uses of new devices and increased data sources are placing new requirements and new strain on older data centers. Research shows that a majority of enterprises are either planning for or are in the midst of data center improvements and expansions.
Deciding how to best improve your data center is not an easy equation.

Those building new data centers now need to contend with architectural shifts to cloud and hybrid infrastructure models, as well as the need to cut total cost and reduce energy consumption for the long-term.

An added requirement for new data centers is to satisfy the needs of both short-and long-term goals, by effectively jibing the need for agility now with facility and staffing decisions that may well impact the company for 20 years or more.

All these fast-moving trends are accelerating the need for successful data center transformation (DCT). To identify proven ways to begin such a DCT journey and carry it out effectively, BriefingsDirect examines two ongoing HP workshops that help a company accurately assess its maturity so it knows how best to begin and conduct that journey.

Join BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions, as he interviews three HP experts on the Data Center Transformation Experience Workshop and the Converged Infrastructure Maturity Model Workshop: Helen Tang, Solutions Lead for Data Center Transformation and Converged Infrastructure Solution for HP Enterprise Business; Mark Edelmann, Senior Program Manager at HP’s Enterprise Storage, Servers, and Network Business Unit, and Mark Grindle, Business Consultant for Data Center Infrastructure Services and Technology Services in HP Enterprise Business.
Here are some excerpts:

Tang: What the world is demanding is essentially instant gratification. You can call it sort of an instant-on world, a world where everything is mobile, everybody is connected, interactive, and things just move very immediately and fluidly. All your customers and constituents want their need satisfied today, in an instant, as opposed to days or weeks. So, it takes a special kind of enterprise to do just that and compete in this world.

You need to be able to serve all of these customers, employees, partners, and citizens -- or if you happen to be a government organization -- with whatever they want or need instantly, at any point, any time, through any channel. This is what HP is calling the Instant-On Enterprise, and we think it's the new imperative.

There are a lot of difficulties for technology, but also if you look at the big picture, we live in extremely exciting times. We have rapidly changing and evolving business models, new technology advances like cloud, and a rapidly changing workforce.

Architecture shifts
A Gartner stat: In the next four years, 43 percent of CIOs will have the majority of their IT infrastructure and organizations and apps running in the cloud or in some sort of software-as-a-service (SaaS) technology. Most organizations aren’t equipped to deal with that.
There’s an explosion of devices being used: smartphones, laptops, TouchPads, PDAs. According to the Gartner Group, by 2014 (less than three years from now), 90 percent of organizations will need to support their corporate applications on personal devices. Is IT ready for that? Not by a long shot today.

Last but not least, look at your workforce. In less than 10 years, about half of the workforce will be millennials, defined as people born between 1981 and 2000 -- the first generation to come of age in the new millennium. This is a Forrester statistic.

This younger generation grew up with the Internet. They work and communicate very differently from the workforce of today and they will be a main constituency for IT in less than 10 years. That’s going to force all of us to adjust to different types of support expectations, different user experiences, and governance.

Maturity is a psychological term used to indicate how a person responds to the circumstances or environment in an appropriate and adaptive manner.

Your organization is demanding ever more from IT -- more innovation, faster time to market, more services -- but at the same time, you're being constrained by older architectures, inflexible siloed infrastructure that you may have inherited over the years. How do you deliver this new level of agility and be able to meet those needs?
You have to take a transformational approach and look at things like converged infrastructure as a foundation for moving your current data center to a future state that’s able to support all of this growth, with virtualized resource pools, integrated automated processes across the data center, with an energy-efficient future-proofed physical data center design, that’s able to flex and meet these needs.

Edelmann: The IT Maturity Model approach consists of an overall assessment, and it’s a very objective assessment. It’s based on roughly 60 questions that we go through to specifically address the various dimensions, or as we call them domains, of the maturity of an IT infrastructure.

We've found it’s much more valuable to sit down face to face with the customer and go through this, and it actually requires an investment of time. There’s a lot of background information that has to be gathered and so forth, and it seems best if we're face to face as we go through this and have the discussion that’s necessary to really tease out all the details.

We apply these questions in a consultative, interactive way with our customers, because some of the discussions can get very, very detailed. Asking these questions of many of our customers that have participated in these workshops has been a new experience. We're going to ask our customers things that they probably never thought about before or have only thought of in a very brief sort of a way, but it’s important to get to the bottom of some of these issues.
From that, as we go through this, through some very detailed analysis that we have done over the years, we're able to position the customer’s infrastructure in one of five stages:

  • Stage 1, which is where most people start, is what we call Compartmentalized and Legacy -- essentially the least-mature stage.
  • From there we move to Stage 2, which we call Standardized.
  • Stage 3 is Optimized.
  • Stage 4 gets us into Automated and a Service-Oriented Architecture (SOA).
  • Stage 5 is more or less the IT utopia necessary to become the Instant-On Enterprise that Helen just talked about. We call that Adaptively Sourced Infrastructure.

We evaluate each domain under several conditions against those five stages and we essentially wind up with a baseline of where the customer stands.

As a result of examining the infrastructure’s maturity along these lines, we're able to establish a baseline of the maturity of the infrastructure today. And, in the course of interviewing and discussing this with our customers, we also identify where they would like to be in terms of their maturity in the future. From that, we can put together a plan of how to get from here to there.

Even further behind
Most of our customers find out that they are a lot further behind than they thought they were. It's not necessarily due to any fault on their part, but possibly a result of aging infrastructure, because of the economic situation we have been in, and of disparate, siloed infrastructure built out as application-focused stacks, which was kind of the way we approached IT historically.

Also, the impact of mergers and acquisitions has forced some customers to put together different technologies, different platforms, using different vendors and so forth. Rationalizing all that can leave them in a rather disparate state. So, they usually find that they are a lot further behind than they thought.

We've been doing this for a while and we've done a lot of examinations across the world and across various industries. We have a database of roughly 1,400 customers that we then compare the customer’s maturity to. So, the customer can determine where they stand with regards to the overall norms of IT infrastructures.

We can also illustrate to the customer what the best-in-class behavior is, because right now, there aren’t a whole lot of infrastructures that are up at Stage 5. It's a difficult and a long journey to get to that level, but there are ways to get there, and that’s what we're here for.

Grindle: This process can also be structured if you do the Data Center Transformation Experience Workshop first and then follow that up with the Maturity Model.

The DCT workshop was originally designed and set up based on HP IT’s internal transformation. It's not theoretical and it's also extremely interactive. So, it's based on exactly what we went through to accomplish all the great things that we did, and we've continued to refine and improve it based on our customer experiences too. So, it's a great representation of our internal experiences as well as what customers and other businesses and other industries are going through.

During the process, we walk the customer through everything that we've learned, a lot of best practices, a lot of our experiences, and it's extremely interactive.

Then, as we go through each one of our dimensions, or each one of the panels, we probe with the customer to discuss what resonates well with them, where they think they are in certain areas, and it's a very interactive dialog of what we've learned and know and what they've learned and know and what they want to achieve.

The outcome is typically a very robust document and conversation around how the customer should proceed with their own transformation, how they should sequence it, what their priorities are, and true deliverables -- here are the tasks you need to take on and accomplish -- either with our help or on their own.

It’s a great way of developing a roadmap, a strategy, and an initial plan on how to go forward with their own transformational efforts.

Designed around strategy
It's definitely designed around strategy. Most people, when they look at transformation, think about their data centers, their servers, and somewhat their storage, but really the goal of our workshop is to help them understand, in a much more holistic view, that it's not just about that typical infrastructure. It has to do with program management, governance, the dramatic organizational change that goes on if you go through transformation.

Applications, the data, the business outcomes -- all of this has to be tied in to ensure that, at the end of the day, you've implemented a very cost-effective solution that meets the needs of the business. That really is a game-changing type of move by your organization.
The financial elements are absolutely critical. There are very few businesses today that aren’t extremely focused on their bottom line and how they can reduce the operational cost.

Certainly, from the HP IT experience, we can show that, although it's not a trivial investment to make this all happen, the returns are not only normally a lot larger than your investment, but they are year-over-year savings. That’s money that typically can be redeployed to areas that really impact the business, whether it's manufacturing, marketing, or sales. This is money that can be reinvested in the business, helping to grow the areas that will have a future impact on the growth of the business, while reducing the cost of your data centers and your operation.

Interestingly enough, what we find is that, even though you're driving down the cost of your IT organization, you're not giving up quality and you are not giving up technology. You actually have to implement new, robust technologies to help bring your cost down. Things like automation, operational efficiency, and ITIL processes all help you drive the savings while allowing you to upgrade your systems and your environments to current and new technologies.

And, while we're on the topic of cost savings, a lot of times when we're talking to customers about transformation, it's normally being driven by some critical IT imperative -- like they're out of space in their data center and they're about to look at building a new data center or perhaps obtaining a colocation site. A lot of times, we find that when we sit down and talk with them about how they can modernize their applications, tier their storage, go with higher-density equipment, and virtualize their servers, they actually can free up space and avoid that major investment in a new data center.

I am working with a company right now that was looking at going to eight data centers. By implementing a lot of these new technologies -- higher virtualization rates, improvements to their applications, and better management of their data on their storage -- we're trying to get them down to two data centers. So right there is a substantial change. And that's just an example of things that I have seen time and time again as we've done these workshops.
It's all about walking through the problems and the issues that are at hand and figuring out what the right answers are to meet their needs, while trying to control the expense.

Tang: Both workshops are great. It's not really an either/or. I would start with the Data Center Transformation Experience Workshop, because that sets the scene and the background for how to start approaching this problem. What do I think about? What are the key areas of consideration? And it maps out a strategy on a grander scale.

The CI Maturity Model Assessment gets specific when you think about implementation: let's dive in and really drill deep into your current state versus future state when it comes to the five domains.
You say, "Okay, what would be the first step?" A lot of times, it makes sense to standardize, consolidate. Then, what is the next step? Sometimes that’s modernizing applications, and so on. That’s one approach we have seen.

In the more transformational approach, whereby you have the highest level of buy-in, all the way up to the CIO and sometimes CFO and CEO, you lay out an actual 12-18 month plan. HP can help with that, and you start executing toward that.

A lot of organizations don’t have the luxury of going top-down and doing the big bang transformation. Then, we take that more project-based approach. It still helps them a lot going through these two workshops. They get to see the big picture and all the things that are possible, but they start picking low-hanging fruit that would yield the highest ROI and solve their current pain points.

Edelmann: Often, the journey is a little bit different from one customer to the other.

The Maturity Model Workshop you might think of as being at a little lower level than the Data Center Transformation Workshop. As a result of the Maturity Model Workshops, we produce a report for the customer to understand -- A is where I'm at, and B is where I'm headed. Those gaps that are identified during the course of the assessment help lead a customer to project definitions.

In some cases, there may be some obvious things that can be done in the short term and capture some of that low-hanging fruit -- perhaps just implement a blade system or something like that -- that will give them immediate results on the path to higher maturity in their transformation journey.

Multiple starting points
There are multiple starting points and consequently multiple exit points from the Maturity Model Workshop as well.

Grindle: The result of the workshop is really a sequenced series of events that the customer should follow up on next. Those can be very specific items, like gathering your physical server inventories so that they can be analyzed, to other items such as running a Maturity Model Workshop, so that you can understand where you are in each of the areas and what the gaps are, based on where you really want to be.

It’s always interesting when we do these workshops, because we pull together a group of senior executives covering all the domains that I've talked about -- program management, governance -- their infrastructure people, their technology people, their applications people, and their operational people, and it’s always funny, the different results we see.

I had one customer tell me that the deliverable we gave them out of the workshop was almost anticlimactic compared to what they learned in the workshop. What they had learned during this one was that many people had different views of where the organization was and where it wanted to go.

Each was correct from their particular discipline, but from an overarching view of what are we trying to do for the business, they weren’t all together on all of that. It’s funny how we see those lights go on as people are talking and you get these interesting dialogs of people saying, "Well, this is how that is." And someone else going, "No, it’s not. It’s really like this."

It’s amazing the collaboration that goes on just among the customer representatives above and beyond the customer with HP. It’s a great learning collaborative event that brings together a lot of the thoughts on where they want to head. It ends up motivating people to start taking those next actions and figuring out how they can move their data centers and their IT environment in a much more logical, and in most cases, aggressive fashion than they were originally thinking.
The place to learn more would be hp.com/go/dct. To learn more about the CI Maturity Model, you can go to hp.com/go/cimm.


Patrick Butler Monterde explained Windows HPC with Burst to Windows Azure: Application Models and Data Considerations in a 4/20/2011 post:

Windows HPC Server 2008 R2 SP1 enables administrators to increase the power of the on-premises cluster by adding computational resources in Windows Azure. With the Windows Azure “burst” scenario, various types of HPC applications can be deployed to Windows Azure nodes and run on these nodes in the same way they run on on-premises nodes.

Link: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=ACDE41C6-153A-4181-912E-78024FCC86DA&displaylang=en


James Hamilton analyzed Facebook’s Open Compute Server Design in a detailed 4/20/2011 post:

Last Thursday Facebook announced the Open Compute Project, where they released pictures and specifications for their Prineville, Oregon datacenter and the servers and infrastructure that will populate that facility. In my last blog, Open Compute Mechanical System Design, I walked through the mechanical system in some detail. In this posting, we’ll have a closer look at the Facebook Freedom Server design.

Chassis Design:

The first thing you’ll notice when looking at the Facebook chassis design is there are only 30 servers per rack. They are challenging one of the strongest-held beliefs in the industry: that density is the primary design goal and more density is good. I 100% agree with Facebook and have long argued that density is a false god. See my rant Why Blade Servers aren’t the Answer to all Questions for more on this one. Density isn’t a bad thing, but paying more to get denser designs that cost more to cool is usually a mistake. This is what I’ve referred to in the past as the Blade Server Tax.

When you look closer at the Facebook design, you’ll note that the servers are more than 1 Rack Unit (RU) high but less than 2 RU. They chose a non-standard 1.5RU server pitch. The argument is that 1RU server fans are incredibly inefficient. Going with 60mm fans (which fit in 1.5RU) dramatically increases their efficiency, but moving further up to 2RU isn’t notably better. So, on that observation, they went with 60mm fans and a 1.5RU server pitch.

I completely agree that optimizing for density is a mistake and that 1RU fans should be avoided at all costs, so, generally, I like this design point. One improvement worth considering is to move the fans out of the server chassis entirely and go with very large fans on the back of the rack. This allows a small gain in fan efficiency by going with still-larger fans and allows a denser server configuration without loss of efficiency or additional cost. Density without cost is a fine thing and, in this case, I suspect 40 to 80 servers per rack could be delivered without loss of efficiency or additional cost, so it would be worth considering.

The next thing you’ll notice when studying the chassis above is that there is no server case. All the components are exposed for easy service and excellent air flow. And, upon more careful inspection, you’ll note that all components are snap in and can be serviced without tools. Highlights:

  • 1.5 RU pitch
  • 1.2 MM stamped pre-plated steel
  • Neat, integrated cable management
  • 4 rear mounted 60mm fans
  • Tool-less design with snap plungers holding all components
  • 100% front cable access

Motherboards:

The Open Compute project supports two motherboard designs: one uses Intel processors and the other uses AMD.

Intel Motherboard:

AMD Motherboard:

Note that these boards are both 12V only designs.

Power Supply:

The power supply (PSU) is an unusual design in two dimensions: 1) it is a single-output-voltage 12V design and 2) it’s actually two independent power supplies in a single box. Single-voltage supplies are getting more common, but commodity server power supplies still usually deliver 12V, 5V, and 3.3V. Even though processors and memory require somewhere between 1 and 2 volts depending upon the technology, both typically are fed by the 12V power rail through a Voltage Regulator Down (VRD) or Voltage Regulator Module (VRM). The Open Compute approach is to deliver 12V only to the board and to produce all other required voltages via a Voltage Regulator Module on the motherboard. This simplifies the power supply design somewhat, and they avoid cabling by having the motherboard connect directly to the server PSU.

The Open Compute power supply has two power sources. The primary source is 277V alternating current (AC) and the backup power source is 48V direct current (DC). The output voltage from both supplies is the same 12V DC power rail that is delivered to the motherboard.

Essentially this supply is two independent PSUs with a single output rail. The choice of 277VAC is unusual, with most high-scale data centers running on 208VAC. But 277VAC allows one power conversion stage to be avoided and is therefore more power efficient.

Most data centers have mid-voltage transformers (typically in the 13.2kV range, but it can vary widely by location). This voltage is stepped down to 480V three-phase power in North America and 400V three-phase in much of the rest of the world. The 480VAC 3p power is then stepped down to 208VAC for delivery to the servers.

The trick that Facebook is employing in their datacenter power distribution system is to avoid one power conversion by not doing the 480VAC to 208VAC conversion. Instead, they exploit the fact that each phase of 480V 3p power is 277VAC between the phase and neutral. This avoids a power transformation step, which improves overall efficiency. The negatives of this approach are 1) commodity power supplies can’t be used (277VAC is beyond the range of commodity PSUs) and 2) the load on each of the three phases needs to be balanced. Generally, this is a good design tradeoff where the increase in efficiency justifies the additional cost and complexity.

An alternative but very similar approach that I like even better is to step down mid-voltage to 400VAC 3p and then play the same phase-to-neutral trick described above. This technique still has the advantage of avoiding one layer of power transformation. What is different is that the resultant phase-to-neutral voltage delivered to the servers is 230VAC, which allows commodity power supplies to be used. The disadvantage of this design is that the mid-voltage to 400VAC 3p transformer is not in common use in North America. However, this is a common transformer in other parts of the world, so they are still fairly easy to obtain.
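For reference (a standard relationship, stated here for clarity rather than taken from the post): in a balanced three-phase system the phase-to-neutral voltage is the line-to-line voltage divided by √3, which is where the two figures above come from.

V(phase-to-neutral) = V(line-to-line) / √3, so 480VAC / √3 ≈ 277VAC and 400VAC / √3 ≈ 231VAC (commonly quoted as 230VAC).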

Clearly, any design that avoids a power transformation stage is a substantial improvement over most current distribution systems. The ability to use commodity server power supplies unchanged makes the 400V 3p phase-to-neutral trick look slightly better than the 480VAC 3p approach, but all designs need to be considered in the larger context in which they operate. Since the Facebook power redundancy system requires the server PSU to accept both a primary alternating current input and a backup 48VDC input, special purpose-built supplies need to be used. Since a custom PSU is needed for other reasons, going with 277VAC as the primary voltage makes perfect sense.

Overall a very efficient and elegant design that I’ve enjoyed studying. Thanks to Amir Michaels of the Facebook hardware design team for the detail and pictures.


Lori MacVittie (@lmacvittie) asserted Active Endpoints introduces Cloud Extend for Salesforce.com and reminds us that commoditization most benefits providers, customization most benefits customers in a preface to her Cloud Extend: Because One Size Does Not Fit All post of 4/20/2011 to F5’s DevCentral blog:

In the context of cloud computing we often mention that the driving force behind many of its financial benefits is commoditization. Commoditization drives standardization, which reduces the cost of the product itself as well as of the management systems needed to interact with it. Commoditization drives down the cost of manufacturing, of creating and/or providing a good or service, for the provider. It is usually the case, expected in fact, that those savings are passed on to the consumer in the form of lower prices.

Thus, the commoditization of compute, network, and storage resources results in a lower cost for cloud computing providers, and they have, thus far, seen fit to pass that along to would-be customers. The actual product, while perhaps highly commoditized itself, must still be adaptable to fit the customer’s often unique use case. For many organizations, for example, business applications are a necessary component to managing business. For others, they encapsulate processes that are considered competitive advantages. Applications, even those commoditized, must be able to support both styles of use while maintaining the low cost realized through commoditization.

While the core processes many applications encapsulate are the same, there are always tweaks and modifications required that reflect the slight differences in markets, businesses, and even the product being offered. The underlying processes are different from organization to organization and that needs to be reflected in the software, somehow. The use of some software applications has become generalized. It’s not commoditized, but it’s close. The general process, the data, the purpose of software such as CRM (Customer Relationship Management) and SFA (Sales Force Automation) is generally applicable to all organizations. But the way in which an organization manages customer relationships, sells customers products, and interacts with its customers always comprises some difference that needs to be reflected in the software.

Successful SaaS (Software as a Service) providers like Salesforce.com knew that from the beginning. Data was customizable, the GUI was even customizable to reflect the differences in terminology across vertical industries and organizations alike. But it also knew that wasn’t enough, that organizations would find the restrictions on being forced to adhere to a certain codified process would eventually become an impediment to continued adoption. So it built a platform upon which an ecosystem of supporting applications and services could be provided that would enhance, modify, and otherwise allow customers to tailor the core application to better suit their needs.

Active Endpoints puts that platform, Force.com, to good use with the introduction of its latest business process management (BPM) offering: Cloud Extend for Salesforce.com.

DROP DEAD SIMPLE
To set expectations, understand that Cloud Extend™ for Salesforce.com (implying there will be other Cloud Extend solutions, which Active Endpoints confirms) is built on the Force.com platform and targeted at Salesforce.com customers.

The reason the overall solution is worth discussing in a broader context is that the underlying framework and integration that make the Salesforce.com solution so elegant can certainly be applied to other software -- and potentially infrastructure -- solutions.

What Cloud Extend offers is an easy to use, guided method of codifying a process within Salesforce.com that simultaneously allows for integration with the growing set of data integration points within Salesforce.com. For example, say a business or sales leader needs to guide customer service in a specific direction regarding a forthcoming upgrade of its software solution. This sales leader can, with no technical training – seriously – lay out the “script” by which customer service folks can engage customers in a discussion regarding the upgrade and properly collect the appropriate data while simultaneously creating any necessary events or e-mail or what-have-you within the Salesforce.com system.

After having spent many grueling hours with a variety of interfaces designed to provide drag-n-drop creation of processes, Cloud Extend was the first one that actually delivered on its promise of “drop dead simple.”

Now, part of that simplicity is driven by the limitation of what kind of activities can be included in a process and that’s where IT comes in – and the possibilities for other uses in the data center become clear.

IT’S ABOUT the SERVICES
What makes the integration of Cloud Extend with Salesforce.com seamless is under its hood. When users are creating or invoking activities in the business process, it’s really executing service calls to a cloud-hosted business process management solution called Socrates, based on Active Endpoints’ activeVOS product.

Active Endpoints’ platform is what provides the services and the integration with Salesforce.com necessary to enable a drop-dead simple interface for customers. When a user needs to specify an action, i.e. invoke a service, in the business process, it is accomplished by means of a drop-down list of available services, retrieved via standard service-oriented methodologies under the covers. In ancient days, business process codification required the administrator to not only know what a WSDL was, but how to find it, retrieve it, and in some cases, pass parameters to it in order to take advantage of services. With Cloud Extend, all the minutia that makes such efforts tedious (and requires technical skills) is hidden and presented via a very responsive and intuitive interface. The services and the process automation engine that drives the guidance of users through the process are deployed in Active Endpoints’ cloud, integrated via a standardized, service-oriented integration model that leverages Salesforce.com APIs to provide the data and object integration necessary to make the experience a seamless one for users.

What this solution offers for Salesforce.com customers is customization of not just the solution, but the business processes required by the organization. Cloud computing is primarily about commoditization and SaaS is no exception. The problem with commoditization of business-related functions is that, well, one size does not actually fit all and every organization will have its own set of quirks and customizations to its sales force automation and customer service processes. Virtually all of the more than 90K Salesforce.com customers customize their offering to some degree. But customizing SaaS, in general, aside from the typical naming of columns in the database and some tweaking of the interface is not a trivial task. What Active Endpoints offers in Cloud Extend is exactly what customers need to be more responsive to changes in the business environment and to enable a more consistent sales force and customer service experience for its customers. Scripted, guided processes enable the rapid dissemination of  new processes or information that may be required by customers or sales to address new product offerings or other business-related issues.

ONE SIZE DOES not FIT ALL
The concept of commoditization works well in general. Each of the three core cloud computing models – IaaS, PaaS, and SaaS – commoditize different resources as a means to create an inexpensive and highly scalable environment. But they all recognize the need – if even slightly – to customize the environment, the services, the flow, the application delivery chain, the application.

Offering a platform upon which such customizations can be offered, the foundation of an ecosystem, is a requirement but in and of itself the platform does little to enhance the customizability of the resources it supports. For that you need developers and producers of software and services.

This is not a concept that is applicable to only software. Custom ring-tones, themes. Hey, there’s an app for that! The ability to customize even the most standardized products, like a smartphone, assures us that consumers and IT organizations alike not only enjoy but demand the ability to customize, to make their own, every piece of software and hardware that falls under their demesne. You can customize the basic functions, but you also absolutely must provide the means by which the product can be customized to fit the specific needs of the customer. For Salesforce.com that’s Force.com. For Apple it’s an SDK and the AppStore. For others, it’s the inherent programmability of the platform, of the ability to extend its functionality and reach into other areas of the data center using service-enabled SDKs, scripting languages, and toolkits.

Where other Business Process Management (BPM ) solutions have often failed in the past to achieve the ease of use required to make good on the promise that business stakeholders can automate, codify and ultimately deploy business process solutions, Active Endpoints appears to have succeeded. Infrastructure automation and orchestration vendors should take note of the progress made in providing simple interfaces to solve complex, service-oriented problems like those associated with automation of deployment and provisioning processes, specifically those requiring the collaboration of network and application delivery network infrastructure components. The service-enablement of components a la infrastructure 2.0 makes them well suited for automation and orchestration via what have traditionally been viewed as software and process automation solutions. There is nothing stopping an organization from taking advantage of a solution like activeVOS and Socrates to create an on-premise solution that leverages the lessons learned from business process automation in a way that positively improves operational process management.


Cory Fowler (@SyntaxC4) explained the Windows Azure Role Startup Life Cycle in a 4/13/2011 post (missed when posted):

As you begin to deploy applications to Windows Azure you may find that the application being deployed needs more functionality than is provided on the Windows Azure Guest OS Base Image [Read: How to Change the Guest OS on Windows Azure]. This is possible as of Windows Azure SDK 1.3 and is called a Startup Task. Also introduced with SDK 1.3 was the concept of Full IIS Support, which now takes priority over the legacy [but still functional] Hosted Web Core Deployment Model [Read: New Full IIS Capabilities: Differences from Hosted Web Core].

Much like an ASP.NET Web Forms developer learns the ASP.NET Page Life Cycle, it’s important for a developer building applications for Windows Azure to know the Role Startup Life Cycle when deciding between creating startup scripts and leveraging the Windows Azure VM Role.

Windows Azure Role Startup Players

There are a few players in the Role Startup process on Windows Azure [SDK 1.3]. Please note that I am only calling out the players in the Full IIS Support Process [This is not Hosted Web Core].

Startup Tasks

In Windows Azure, a startup task is a command-line executable file that gets referenced in the Cloud Service Definition file (.csdef). When a deployment is uploaded to Windows Azure, a component of the Windows Azure Platform called the Fabric Controller reads the Definition file to allocate the necessary resources for your deployment. Once the environment is set up, the Fabric Controller initializes the Role Startup Process.

To best understand startup tasks, let’s dive into the “Task” element of the Cloud Service Definition file.

<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
   <WebRole name="WebRole1">
      <Startup>
         <Task commandLine="Startup.cmd" executionContext="limited" taskType="simple">
         </Task>
      </Startup>
   </WebRole>
</ServiceDefinition>
CommandLine Attribute

This is the name of the file to be executed when the startup task runs. How this script or executable process is run in the role is left up to the executionContext and the taskType.

ExecutionContext Attribute

The executionContext for a startup task defines the level of permissions that the script or executable file is able to run at within a particular role. There are two types of Execution Contexts:

  1. Limited: Runs the script/executable at the same permission level as the role.
  2. Elevated: Runs the script/executable with Administrative privileges.
TaskType Attribute

The taskType for a startup task defines how the startup task process is executed. The taskType is a key contributor to the length of your startup process, because the different task types execute either synchronously or asynchronously. Obviously, synchronous task execution will take longer, so be cautious of the length of your installations, as startup tasks should not execute for longer than 5 minutes. Here are the three different types of startup tasks:

  • Simple (Default): (Synchronous) The Task is launched and the Instance is Blocked until the task completes. [Note: If the task fails, the instance is blocked and will fail to start]
  • Background: (Asynchronous) This starts the Task process and continues the Role startup. [Note: Fire and Forget. Be sure to Log these processes for debugging]
  • Foreground: (Asynchronous) The task is executed but doesn’t allow the Role to shutdown until the foreground tasks exit.
IISConfigurator.exe

The IISConfigurator process is an executable file that you’ll find in both the Compute Emulator as well as the Windows Azure Environment in the Cloud. IISConfigurator.exe is responsible for iterating through the Sites Node to add and configure the necessary Websites [, Virtual Directories & Applications], Ports and Host Headers within IIS. I wrote a post previously that talked about IISConfigurator.exe in the context of the Compute Emulator.

[Web/Worker]Role.cs : RoleEntryPoint

Within a Web or Worker Role you will find a class that inherits from RoleEntryPoint. RoleEntryPoint exposes three overridable methods -- OnStart, Run and OnStop -- which are used to perform tasks over the lifecycle of the Role. I will explain these in more detail later on in this post.
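To make that concrete, here is a minimal sketch (my own illustration, not code from Cory's post or the SDK samples) of a role class that overrides all three methods; the work done inside each override is an assumed example:

using System;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // Runs after any 'simple' startup tasks have finished.
    public override bool OnStart()
    {
        // Example only: raise the default outbound connection limit.
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }

    // Runs once the role is Ready and receiving traffic from the Load Balancer.
    public override void Run()
    {
        while (true)
        {
            // Do a unit of work here; sleep when there is nothing to do.
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }

    // Runs when the role is being recycled or shut down; keep it brief,
    // because OnStop is subject to a time limit.
    public override void OnStop()
    {
        // Persist diagnostics or scratch-disk data here if needed.
        base.OnStop();
    }
}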

Windows Azure Startup Life Cycle

Let’s take a look at the Process that Windows Azure takes when it’s deploying our Hosted Service.

The entire process is kicked off when you upload your Cloud Service Package and Cloud Service Configuration to Windows Azure. [It is a good practice to upload your deployments into Blob Storage in order to deploy your applications. This allows you to keep versions of your deployments and keep copies on hand in case the Hosted Service needs to be rolled back to a previous version.]

Next the Fabric Controller goes to work searching out the necessary Hardware for your Application as described in the Cloud Service Definition. Once the Fabric Controller has found and allocated the necessary resources for your Hosted Service, the appropriate Guest OS Image is deployed to the Server and initializes the Guest OS Boot Process.

Flow Chart: Windows Azure Startup Life Cycle

This is where things begin to get interesting.

If the current Role contains a Startup node in the Cloud Service Definition, the Role begins executing each Task in the same sequence in which they are ordered in your Cloud Service Definition. Depending on the taskType [explained above], these startup tasks may run synchronously [simple] or asynchronously [background, foreground].

When the ‘simple’ [synchronous] startup tasks complete execution, the role begins the next phase of the Role Startup Life Cycle. In this phase of the Life Cycle, the IISConfigurator process is executed asynchronously; at the same time [or not too long after], the OnStart event is fired.

Once the OnStart event completes, the Role should be in the Ready state and will begin accepting requests from the Load Balancer. At this point the Run event is fired. Even though the Run event isn’t present in the WebRole.cs code template, you are able to override the event and provide some tasks to be executed. The Run event is most used in a worker role, where it is implemented as an infinite loop with a periodic call to Thread.Sleep if there is no work to be done.

After a long period of servicing requests, there will most likely come a time when the role will be recycled, or may no longer be needed and shut down. This will fire the OnStop event to handle any Role clean up that may be required, such as moving Diagnostics data or persisting files from the scratch disk to Blob Storage. Be aware that the OnStop method does have a time restriction, so it is wise to periodically transfer critical data while your role is still healthy and not attempting to shut down.

After the OnStop event has exited [either code execution completes, or it times out], the Role begins the stopping process and terminates the Job Object. Once the Job Object is terminated, the role is restarted and once again follows the Startup Life Cycle process explained above. This is important to know, as it is possible for your startup tasks to be executed multiple times on the same Role; for this reason it is important that your startup tasks be idempotent.
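As a rough sketch of what idempotency can look like (my example, not from the post), a startup executable can record a marker once its one-time work succeeds and exit immediately on later runs; the marker path and the work itself are assumptions for illustration:

using System;
using System.IO;

class StartupTask
{
    static int Main()
    {
        // Hypothetical marker file; any durable local path would do.
        const string marker = @"C:\startup-task-complete.txt";

        // A previous run already finished, so a role recycle is harmless.
        if (File.Exists(marker))
            return 0;

        // ... perform the one-time installation or configuration here ...

        File.WriteAllText(marker, DateTime.UtcNow.ToString("o"));
        return 0; // a non-zero exit code would block a 'simple' task's role startup
    }
}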

Issues that may Arise during the Startup Life Cycle

As software developers, we are all aware of what some call the “Happy Path,” a term used to describe a process that goes exactly as planned. However, in the real world, no process is ironclad and we need to be aware of situations that may go awry.

Here is a quick list of potential issues during the Startup Life Cycle:

  • Startup Tasks may fail to Execute
  • Startup Tasks may never Complete
  • IISConfigurator.exe may overwrite Startup Task based modifications to IIS
  • IISConfigurator.exe and OnStart might end up in a race condition
  • Startup Tasks and OnStart may end up in a race condition
  • Startup Process might not be idempotent

How to Avoid Race Conditions in Role Startup

There are a number of things you can do to help avoid Race Conditions within your startup Process. This is typically done by placing checks and balances around the resources that will be affected by the specific item in the startup process. The other thing that could be done is to create a VM Role.
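Before looking at the VM Role, here is one way such a check might look (an illustrative sketch only): poll for the precondition with a bounded retry before touching the shared resource, rather than assuming another startup actor has already finished. The predicate in the usage comment is purely hypothetical.

using System;
using System.Threading;

static class StartupGuard
{
    // Waits until a caller-supplied readiness check passes, or gives up.
    public static void WaitFor(Func<bool> isReady, TimeSpan timeout)
    {
        DateTime deadline = DateTime.UtcNow + timeout;
        while (!isReady())
        {
            if (DateTime.UtcNow > deadline)
                throw new TimeoutException("Resource never became ready.");
            Thread.Sleep(TimeSpan.FromSeconds(5)); // back off between checks
        }
    }
}

// Example use with an illustrative predicate:
// StartupGuard.WaitFor(() => System.IO.File.Exists(@"C:\some-flag.txt"),
//                      TimeSpan.FromMinutes(2));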

The VM Role is a Windows Server 2008 R2 Image on a VHD which you configure with all the components you need installed on your Role. You will also need to install the Windows Azure Integration Components and perform a generalization with sysprep.

The VM Role is a good approach if:

  1. Your startup tasks are starting to affect your deployment time
  2. Application installations are Error-prone
  3. Application installation requires manual interaction.

I will cover more on the VM Role in another blog post, as it’s a large enough topic on its own.

Once you have configured your VM Role instance it is uploaded to the cloud by using the csupload command-line tool, or using the Cloud Tools in Visual Studio. This new VM Role image you created can now be Provisioned in place of one of Microsoft’s Guest OS Images. It is good to note that startup tasks are not supported in the VM Role as it is assumed the image is uploaded as required.

Conclusion

Knowing the Life Cycle of what you’re working on is very important. It’s that one piece of context that you can always come back to in order to regain a little more clarity in a situation.

Happy Clouding!

Special Thanks to: Steve Marx, David Murray, Daniel Wang, Terri Schmidt, Adam Sampson & [my failover twin] Corey Sanders [from the Windows Azure Team] for taking the time out of their busy schedules to ensure this information is both accurate and complete.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Kevin Remde (@KevinRemde) explained Building Cloudy Apps (“Cloudy April” - Part 24) on 4/24/2011:

Are you a developer?

“No.”

Well, perhaps you’re not.. but someone reading this blog is.  And they may have seen my articles about Windows Azure or the Windows Azure Appliance and may be wondering how exactly they’re going to get started building applications for it. 

“Okay.. for the sake of argument, let’s say I am a developer.  How do I get started in learning about and developing for Windows Azure?  (..you’re going to tell me anyway…)"

I’m glad you asked.  The easiest way is to grab the free tools with the Web Platform Installer.

What’s really nice about the tools is that, to get started, you don’t need to buy anything.  The Web Platform Installer installs everything you’ll need for developing, testing (locally) and then packaging your Windows Azure applications: Internet Information Services (IIS), SQL Server Express, .NET Framework and even the Visual Web Developer 2010 Express Edition

Once you’ve got the tools installed, you’ll want some samples and some labs and free training resources.  Fortunately for you, there are these great options:

And finally, to get plugged-in to the experts and be involved with the community, you should check out these community resources:

---

Are you developing for Windows Azure?  Have you tried the tools?  What did you think about the training kit?  Share your thoughts in the comments.

Tomorrow in Part 25 I’m going to give you some ideas on how to manage Windows Azure.

---

** Just a little developer humor.  Okay… it also proves that I was a developer a LONG TIME AGO.

Kevin is an IT Pro Evangelist with Microsoft Corporation.


Kevin Remde (@KevinRemde) posted Cloud-in-a-Box (“Cloudy April” - Part 23) on 4/23/2011:

In part 6 of my “Cloudy April” series, we talked about the PaaS solution from Microsoft, Windows Azure.  And more and more companies are realizing the benefits of building applications that can run on stateless, easily scalable machine instances that are quickly deployed or decommissioned as the demand on their applications changes.  Also, the benefit of high-availability that just simply works.. and the automation of load balancing and storage and… you get the idea. 

And even though one of the great benefits of Windows Azure is the ability to run even portions of your application in “the cloud” and leave others in your own datacenter (using technologies like Windows Azure Connect), some companies still want (or are required) to run their applications entirely in their own datacenter.  In a large company it’s not uncommon to have dozens or even hundreds of purely internal applications, and the groups developing and supporting those applications could easily benefit from a PaaS solution for their own businesses.

“So.. couldn’t a company just build or buy something like Windows Azure to run in their own datacenters?”

Yes.  Well… almost.  Right now you really only have one option.. but there is another currently in the works, and I’ll get to that shortly.  Right now your option is to build and support your own IaaS “cloud”, using the Self Service Portal 2.0 on top of SCVMM.  It’s not Windows Azure.. and it’s not PaaS, so there is still some upkeep of the virtualized OS that you’ll have to support (a situation that will improve greatly with SCVMM 2012 – currently in beta).  But the Hyper-V Cloud does allow the datacenter to provide IT-as-a-Service nicely.

“But I want to run Windows Azure!  I want it in my own datacenter!  I want it now!”

Sorry, Veruca, you can’t have it now.  At least not if you’re not already running it as one of our first testers. 

The solution that is forthcoming, and that a few customers are testing for us right now, is something called the Windows Azure Appliance:

“Windows Azure platform appliance consists of Windows Azure, SQL Azure and a Microsoft-specified configuration of network, storage and server hardware. It is a turnkey cloud platform you can deploy in your datacenter. Service providers, governments and large enterprises who would, for example, invest in a 1000 servers at a time, will be able to deploy the Windows Azure platform on their own hardware in their datacenter. Microsoft Windows Azure platform appliance is optimized for scale out applications – such as eBay– and datacenter efficiency across hundreds to thousands to tens-of-thousands servers.”

(Quote from the Windows Azure Appliance FAQ.)

“So is it really an appliance?  Is it a big box?”

No, not really.  The final form-factor hasn’t been announced, but it’s being called an appliance because it’s an all-or-nothing purchase of specific server hardware (or a choice from a very small set of vendors) that meets the strict requirements of running Windows Azure and SQL Azure in your own datacenter.  It’s a “turn-key cloud solution on highly standardized, preconfigured hardware”.  The server, storage and networking hardware is all going to be pre-configured and installed as a group.

Find out more on the Windows Azure Appliance page here: http://www.microsoft.com/windowsazure/appliance/

No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

Lydia Leong (@cloudpundit) answered Why transparency matters in the cloud in a 4/24/2011 post:

A number of people have asked if the advice that Gartner is giving to clients about the cloud, or about Amazon, has changed as a result of Amazon’s outage. The answer is no, it hasn’t.

In a nutshell:

1. Every cloud IaaS provider should be evaluated individually. They’re all different, even if they seem to be superficially based off the same technology. The best provider for you will be dependent upon your use case and requirements. You absolutely can run mission-critical applications in the cloud — you just need to choose the right provider, right solution, and architect your application accordingly.

2. Just like infrastructure in your own data center, cloud IaaS requires management, governance, and a business continuity / disaster recovery plan. Know your risks, and figure out what you’re going to do to mitigate them.

3. If you’re using a SaaS vendor, you need to vet their underlying infrastructure (regardless of whether it’s their own data center, colo, hosting, or cloud).

The irony of the cloud is that you’re theoretically just buying something as a service without worrying about the underlying implementation details — but most savvy cloud computing buyers actually peer at the underlying implementation in grotesquely more detail than, say, most managed hosting customers ever look at the details of how their environment implemented by the provider. The reason for this is that buyers lack adequate trust that the providers will actually offer the availability, performance, and security that they claim they will.

Without transparency, buyers cannot adequately assess their risks. Amazon provides some metrics about what certain services are engineered to (S3 durability, for instance), but there are no details for most of them, and where there are metrics, they are usually for narrow aspects of the service. Moreover, very few of their services actually carry SLAs, and those SLAs are narrow and specific (as everyone discovered recently in this last outage, since it was EBS and RDS that were down and neither have SLAs, with EC2 technically unaffected, so nobody’s going to be able to claim SLA credits).

Without objectively understanding their risks, buyers cannot determine what the most cost-effective path is. Your typical risk calculation multiplies the probability of downtime by the cost of downtime. If the cost to mitigate the risk is lower than this figure, then you’re probably well-advised to go do that thing; if not, then, at least in terms of cold hard numbers, it’s not worth doing (or you’re better off thinking about a different approach that alters the probability of downtime, the cost of downtime, or the mitigation strategy).
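To put rough numbers on that calculation (the figures below are invented for illustration, not Gartner data):

using System;

class RiskCalculation
{
    static void Main()
    {
        double probabilityOfDowntime = 0.02;   // assumed: 2% chance per year
        double costOfDowntime = 500000.0;      // assumed: dollars per incident
        double expectedLoss = probabilityOfDowntime * costOfDowntime; // 10,000

        double mitigationCost = 8000.0;        // assumed: e.g. extra redundancy
        Console.WriteLine(mitigationCost < expectedLoss
            ? "Mitigation is likely worth doing"
            : "Mitigation costs more than the expected loss");
    }
}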

Note that this kind of risk calculation can go out the window if the real risk is not well understood. Complex systems — and all global-class computing infrastructures are enormously complex under the covers — have nondeterministic failure modes. This is a fancy way of saying, basically, that these systems can fail in ways that are entirely unpredictable. They are engineered to be resilient to ordinary failure, and that’s the engineering risk that a provider can theoretically tell you about. It’s the weird one-offs that nobody can predict, and are the things that are likely to result in lengthy outages of unknown, unknowable length.

It’s clear from reading Amazon customer reactions, as well as talking to clients (Amazon customers and otherwise) over the last few days, that customers came to Amazon with very different sets of expectations. Some were deep in rose-colored-glasses land, believing that Amazon was sufficiently resilient that they didn’t have to really invest in resiliency themselves (and for some of them, a risk calculation may have made it perfectly sane for them to run just as they were). Others didn’t trust the resiliency, and used Amazon for non-mission-critical workloads, or, if they viewed continuous availability as critical, ran multi-region infrastructures. But what all of these customers have in common is the simple fact that they don’t really know how much resiliency they should be investing in, because Amazon doesn’t reveal enough details about its infrastructure for them to be able to accurately judge their risk.

Transparency does not necessarily mean having to reveal every detail of underlying implementation (although plenty of buyers might like that). It may merely mean releasing enough details that people can make calculations. I don’t have to know the details of the parts in a disk drive to be able to accept a mean time between failure (MTBF) or annualized failure rate (AFR) from the manufacturer, for instance. Transparency does not necessarily require the revelation of trade secrets, although without trust, transparency probably includes the involvement of external auditors.

Gartner clients may find the following research notes helpful:

and also some older notes on critical questions to ask your SaaS provider, covering the topics of infrastructure, security, and recovery.

JP Morgenthal (@jpmorgenthal) delivered A Concise List of Business Use Cases for Cloud Computing on 4/21/2011:

This blog entry can also be seen at: http://blogs.smartronix.com/?p=114

At the upcoming Interop Las Vegas Show (5/10/2011) I’m presenting “A Roadmap for Transitioning to Cloud Computing”. One of the key steps in my roadmap is identifying a business use case for cloud computing. I started with a quick search on the topic to see what others were offering up as use cases and came upon two well-defined documents, NIST’s Cloud Computing Use Cases and the Open Cloud Manifesto’s Cloud Computing Use Cases Version 4. Both of these documents have a wealth of information and each approach the problem domain from different angles. However, I still didn’t see a concise set of business cases that rapidly illustrate the problem domain for which cloud computing provides a solution. Hence, a great reason for a blog entry!
While I’m sure there are some that I will miss, the ones that I readily discuss with customers are:

  • Reduce Costs. Okay, saying reduce costs and cloud in the same sentence is like saying it’s hot in the Mojave Desert. The business use case for costs surrounding cloud computing is multi-faceted and based on your deployment model (e.g. public, private, hybrid). At a high level, in certain cases, and depending upon the overall utilization of compute resources in your own data center, you could save money using public cloud because you only pay for the quantity of compute resources you need when you need it. Of course, if you’re seeking to develop a net new environment, public cloud options allow you to respond to opportunities more quickly without having to first fund a complete data center prior to delivering the service, which is a great financial benefit, but not really cost savings. Finally, private cloud environments will provide savings over traditional data center designs by using less power, requiring less cooling, requiring less headcount to support, and delivering the same or better performance with fewer resources. There’s a lot more meat on these bones, but for purposes of this entry, I’ll stop here.
  • Scale Out. There are certain times where scaling out is more effective than simply scaling up. Scaling up allows more users to be satisfied by a given environment, but has inherent levels of saturation, typically based on the size of the network. Scaling out allows more users to be satisfied simultaneously. A good analogy: scale up is to serial as scale out is to parallel. By spreading the demand across more environments, there is greater resiliency in the overall solution and you are placing less load on a single environment, leading to greater performance. Cloud computing makes it easier to develop solutions that can scale out as well as up. Of note, the scale-out business case incorporates what the market is calling "Big Data". Big Data requires massively parallel processes for storage and analytics, which in turn require the ability to scale both up and out.
  • Elasticity. When hardware is acquired for the purpose of supporting a single application or solution, it is typically sized for some growth, but mostly static in design. That is, when the application is not being used, for example from 8pm to 6am the next morning, it is very difficult to a) turn that hardware off so it is not using power and b) repurpose that same equipment to support other functions, such as large batch jobs, in that same time period. Elasticity puts control in the hands of the cloud services consumer over where compute resources will be applied and when they will be applied.
  • Disaster Recovery (DR) / Continuity-of-Operations (COOP). The cloud simplifies the ability for organizations to develop cold, warm and hot disaster recovery sites at a reasonable cost. In the past, DR/COOP was a significant overhead, in some cases, requiring double the amount of hardware to be acquired for any single solution. Depending upon need, the cloud can provide various levels of DR/COOP support up to and including active-active failover scenarios.
  • Mobile/Secure Computing. While mobile computing does not require cloud computing, public cloud and hybrid cloud architectures can be very effective for rolling out mobile computing solutions. The main benefit of this architecture is to create a scalable and responsive application for mobile devices that operates in a separate security domain from the data and other key business applications. In essence, one could say that the use case here is creating advanced security architectures that better protect critical business IT assets by simplifying the development of hybrid architectures that are loosely-coupled.
  • Community Cloud. Some clouds are designed specifically to support the needs of a certain community. For example, many service providers are now developing Federal Community Clouds that let the US Federal government operate in a domain that is physically separated, meets specific security requirements, and goes through the government’s certification processes. In this case, the business case is to leverage the body of work that has already been done in order to deliver applications more quickly and more effectively. In essence, this use case could fall under cost savings, because taking advantage of this effort significantly reduces the cost of deploying an application. However, it is important enough to stand alone, since the community may operate outside the bounds of the business itself, such as a standards body or non-profit agency.

If others have business cases that were not explained here and that are not a derivative benefit of one of the business cases already defined, I would love to hear from you.

Update: Here are a couple of additional business cases sent to me by Jeff Schneider.

  • Improved Time-to-Market / Agility. This was touched upon by a couple of the aforementioned use cases, but it certainly deserves its own category. Sometimes solution and/or application delivery can be impeded by acquisition of the environments for development, testing, and staging that precede deployment. The cloud is an excellent resource for supporting the software development life cycle. Of course, the ability to transition the staging environment into a production environment with some small scripting or a few flicks of the mouse can dramatically speed time to market/completion of a solution.
  • Improved Contract Terms. Jeff's point on this was that the cloud offers better terms than traditional managed service providers (MSPs). Businesses looking for greater flexibility and shorter commitment periods will certainly gain by using the cloud in these circumstances.


Yong-Gon Chon answered What goes into a FISMA certification? in a guest post to the MSOnline Services blog:

image I work for SecureInfo, an exclusive provider of cybersecurity services. We have served the Federal government for over 10 years and are involved in securing information assets used across the US Civilian Sector and the Department of Defense. We routinely conduct independent third-party assessments of systems used by these organizations in support of Federal Information Security Management Act (FISMA) compliance programs.

In the thousands of FISMA related engagements we’ve performed over the years, we have been actively involved in assessing information systems that serve US citizens and enable warfighters, including complex, mission-critical, and highly classified systems. SecureInfo customers include the Department of Homeland Security, the United States Air Force and the United States Army, as well as private industry clients that must meet Federal security requirements. Most recently, we worked with Microsoft to perform an independent assessment of the Business Productivity Online Suite – Federal (BPOS-F) offering support and guidance to achieve an Authority to Operate (ATO).

The successful security authorization of Microsoft BPOS-F through the United States Department of Agriculture (USDA) at a Moderate Impact level is a major accomplishment, as it safely enables the consolidation of 21 different messaging platforms and provides Software as a Service (SaaS) to 120,000 end users in 5,000 offices within the US and over 100 countries worldwide. SecureInfo’s assessment was performed using the National Institute of Standards and Technology (NIST) Risk Management Framework. We used the most recent version of the NIST SP 800-53A publication as the key procedural guidance in our assessment activities.

Our assessment was rigorous and required Microsoft to demonstrate effective implementation of approximately 160 different management, operational and technical controls to a team of subject matter experts with a combined total of 99 years of industry experience. Our testing included an extensive review of their policies and procedures, interviews with their key personnel involved in delivering and supporting BPOS-F, examination of security-related configuration settings, vulnerability scans of all components included within the environment (operating systems, databases, and web applications) and penetration testing. Additionally, SecureInfo assessed the physical and environmental controls of the Microsoft Global Foundation Services (GFS) domestic United States data centers where BPOS-F is hosted.

Based on our review, we’re confident Microsoft understands that operating and delivering a secure cloud computing solution is an ongoing process. As a result, SecureInfo will be supporting Microsoft’s continuous monitoring program consistent with the requirements outlined in the NIST continuous monitoring guidance (SP 800-137).


Richard Santaleza reported Our "Contracting for Cloud Computing" Free Webinar Now Available On Demand in a 4/20/2011 post to the Information Law blog:

Last week, the InfoLawGroup presented a free one-hour webinar on Contracting for Cloud Computing in conjunction with Zenith Infotech, Ltd. and MSPtv. The feedback received was very positive, and our presentation covered a wide range of contracting issues specifically applicable to cloud computing scenarios.
This next installment in our webinar series on cloud computing is now available on demand for viewing via Real streaming video or Windows streaming video, after free registration.
To register and view this free one-hour webinar, click here.


Details: Information Law Group attorneys, Richard Santalesa and David Navetta examine legal issues posed by contracting for cloud computing services and review a proposed cloud user’s "Bill of Rights" that any company considering cloud computing should keep in mind given their specific goals, industry, data and security needs.

Deciding whether to take the plunge into cloud computing is a serious decision.  The real benefits of cloud computing must always be weighed against: the risk of relinquishing direct control of infrastructure and applications, the potential use of unknown subcontractors, implications of data flow and storage location, liability and indemnification issues, privacy and data breach notification considerations, and compliance with the overlapping web of federal, state, local and international laws and regulations.


Tanya Forsheit posted The Kerry-McCain Bill to the Information Law Web site on 4/18/2011:

image Dave, Scott and I recently spoke with Nymity about the Commercial Privacy Bill of Rights Act of 2011 introduced by Senators John Kerry (D-MA) and John McCain (R-AZ) last Tuesday. You can read the interview here. We provide a general summary of the bill and identify some of the key challenges organizations will face if the bill becomes law.


<Return to section navigation list> 

Cloud Computing Events

Brian Loesgen reported San Diego Code Camp June 25 and 26 on 4/24/2011:

Registration is open for SoCal Code Camp in San Diego, June 25th and 26th.

You can submit sessions here: http://www.socalcodecamp.com/sessions.aspx

I’ll be doing three Azure sessions:

This is always a great event, hope to see you there!


Brian Loesgen (@BrianLoesgen) was Looking forward to SOA and Cloud Symposium Brazil next week on 4/24/2011:

I’m really looking forward to being at the SOA and Cloud Symposium in Brazil next week, as well as CloudCamp.

I’ll be doing two sessions at the conference, “Windows Azure Architectural Patterns” and “The Windows Azure Cloud Service Roadmap”. I’ll also be on at least one panel, and can’t seem to go to a CloudCamp without getting involved somehow.

I’m looking forward to catching up with some friends there, and taking some well deserved time off traveling around, including a full week at world heritage site Fernando de Noronha, which is a wildlife/marine sanctuary about 300 miles off the coast. Nothing much to do there except for diving, eating fish, hiking, going to the beach and sleeping. Perfect!


Steve Yi reported Cloud Data at MIX: SQL Azure, Windows Azure DataMarket, and OData in a 4/22/2011 post to the SQL Azure Team blog:

Yet another great MIX event happened last week on April 12-14 and it was a fantastic opportunity to have personal conversations with web developers about how they're using Microsoft development tools and the Windows Azure platform.  For me, MIX was also an opportunity to step back and take in all the investment and progress Microsoft is making in the web, cloud, and mobile.

The dozens of conversations I had onsite this year were different from those in past years and are indicative of the trend toward the pervasiveness of the cloud in the next generation of existing and new applications. While historically there have been distinct and separate conversations about the web, mobile, and the cloud, these conversations are now closely intermingled, with developers creating multi-platform user experiences spanning both device and web, and utilizing the cloud to run those applications.

Lynn Langit, Senior Developer Evangelist, leading a user group meeting.  Lynn has a great blog on developer topics and how to use SQL Azure at http://blogs.msdn.com/b/socaldevgal/.

A critical factor in architecting a solution for multiple user experiences is utilizing a consistent data tier in the cloud, along with protocols that can reach it, and we talked about that in several sessions at the event, which are now all available online.

David Robinson, SQL Azure Senior Program Manager, preparing for his session at MIX - Powering Data On the Web and Beyond With SQL Azure.

Sessions on SQL Azure, DataMarket, and Windows Azure Platform:

Powering Data On the Web and Beyond With SQL Azure: David Robinson delivered this session on utilizing SQL Azure as the relational data store for web and mobile applications on Windows Phone 7.

Mashing Up Data On the Web and Windows Phone with Windows Azure DataMarket: Max Uritsky demonstrated how to deliver syndicated data to Windows Phone.

What's New In the Windows Azure Platform: James Conard presented an overall update on all of the Azure services.

Max Uritsky (center) and Christian Liensberger (right) from the Windows Azure DataMarket team manning the booth.  Max delivered a great session on DataMarket available here.  Asad Khan (left) presented on DataJS - the new JavaScript libraries for utilizing OData in the browser; his session can be viewed here.

Sessions on OData:

You've heard me mention OData recently as the enabling protocol for querying and updating data on the web.  When describing it to people who are new to it, I describe it as "the language of data on the web".  There are several sessions at MIX worth checking out:

Data In An HTML5 World: Asad Khan presented the new DataJS libraries that enable you to utilize OData with JavaScript and JQuery - making it easy to access SQL Azure or any other cloud data source via JavaScript calls on any browser.
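
Under the covers, DataJS simply issues standard OData HTTP requests, so the same kind of query can be exercised from any language. Here is a rough sketch in Python against the public Northwind sample service; the endpoint and the exact JSON response envelope are assumptions about the public demo service, not anything shown in the session:

    # Illustrative only: an OData query over plain HTTP, the same kind of
    # request DataJS issues from the browser. The Northwind sample service
    # and the "d" response envelope are assumptions about the demo endpoint.
    import json
    import urllib2  # Python 2 standard library

    url = ("http://services.odata.org/Northwind/Northwind.svc/Customers"
           "?$filter=Country%20eq%20'Germany'&$top=3&$format=json")
    payload = json.loads(urllib2.urlopen(url).read())

    # Older OData JSON responses wrap results in a "d" envelope; newer ones
    # may return the collection directly, so handle both shapes.
    body = payload.get("d", payload)
    results = body.get("results", body) if isinstance(body, dict) else body
    for customer in results:
        print(customer["CompanyName"])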

I also encourage you to view these additional OData sessions that were presented at MIX:

Overall it was a great event.  Were you there?  Tell us your thoughts.

Yours truly...on the left.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Brian Gracely (@bgracely) observed An Interesting Set of Coincidences on 4/24/2011:

I'm not a big believer in conspiracy theories, but it is sort of fun and ironic when certain coincidences all line up around the same time. For example:

And of course there's this fictional prediction...

Not sure there are any great lessons here, except to work hard, be humble and keep in mind that fundamentals (technology, design, etc.) matter.


Brent Stineman started off his Digest for April 23, 2011 with:

… the big news this week [is] about public cloud outages. Microsoft started off the week with an outage of their Azure Storage service in Asia; then, just a couple of days later, Amazon took a larger hit in their US-East-1 center. This service disruption took out so many public web sites that there is plenty of analysis; just see Gartner, CNN Money, and the New York Times. I even took a few moments to document my own thoughts on it. In a nutshell, this doesn’t impact cloud as much as it raises awareness that even in the cloud, you need to own your disaster recovery plans.


Alex Williams asked Poll: How Did Amazon Web Services Handle its Worst Ever Disruption? in a 4/22/2011 post to the ReadWriteCloud blog:

A lack of communication is one of the real concerns that comes out of the Amazon Web Services (AWS) disruption this week.

To its credit, AWS did update its dashboard that gives status about the service. That was pretty much it. The AWS blog had nothing about the incident. It was updated today, but the update had nothing to do with the outage; it was about AWS training.

All this while customers, many startups included, were trying to keep things together.

BigDoor CEO Keith Smith summed it up this way:

Starting at 1:41 a.m. PST, Amazon's updates read as if they were written by their attorneys and accountants who were hedging against their stated SLA rather than being written by a tech guy trying to help another tech guy.

There are ramifications for the silence. First off, it puts AWS in a poor light. This incident will pass but it is going to have its effects. It fuels speculation about AWS service not being up to par for application developers. It looks like Joyent has already jumped on that issue.

That's one effect. Then there are the downstream problems that occur. Customers and partners face lots of scrutiny when they have outages. The silence from AWS only makes it harder for them to communicate what is happening; they don't have anything they can say because there is limited information available to share.

The effect on the market is harder to pin down. On the upside, the outage will hopefully help people realize how little we know about the level of scaling that comes with cloud computing. Let's just hope AWS contributes to that conversation.

What do you think?

How Did Amazon Web Services Handle its Worst Ever Disruption?


Chris Czarnecki reported about Amazons Cloud Outage: The Impact on Cloud Computing to the Learning Tree blog on 4/22/2011:

Yesterday (21st April) saw Amazon’s Cloud Computing service suffer serious disruptions. This has led many people to jump up and use the outage as confirmation that they should not use Cloud Computing. Such a view, in my opinion, is not well founded or well informed. Why do I think this? Let me try to explain.

Many high profile services over the last few years have experienced outages such as:

  • Skype (for many days)
  • Google’s GMail
  • Twitter is routinely unavailable
  • Facebook
  • Salesforce.com

What is different between these outages and the Amazon outage is that Amazon’s affected so many Web sites, not just one. High profile sites such as FourSquare, Reddit, Zynga and many others were not available as a result. It is this that has provided the Cloud Computing sceptics with fuel for their argument. To counter this we need to consider the alternative. Would the disruption these companies suffered yesterday have been less over time if they had decided to build their own server farms or used a managed hosting company? The answer cannot be given definitively, but it is highly likely to be no. In fact, many of the companies built on Amazon Cloud Computing may not have even existed, because startup costs and speed to market would have been prohibitive without Amazon’s pay-per-use scalability.

So, whilst the outage is highly undesirable, it is not unique to Amazon and it must be balanced against other infrastructure alternatives. What has been excellent is the way Amazon have kept their customers updated as to what the problem is and their progress in fixing it. Whilst customers will not be happy, at least knowing what Amazon is doing gives them some idea of the scale of the problem. Also, they have some of the best engineers in the world working to fix the problems.

In summary, Amazon is not alone in suffering Cloud Computing outages. Google with Google Apps and App Engine has had problems, and Microsoft with Azure has too. Amazon is such big news because it is so big and hosts so many high profile Web sites. The outage has highlighted that by adopting Cloud Computing, an organisation has to build a strategy that considers cloud unavailability and disaster recovery. But this is no different to any traditional computing strategy; it is just that with Cloud Computing much of the technology is located in the cloud. Learning Tree’s Cloud Computing course discusses how such a strategy can be defined for organisations of all sizes. Often during the course we take an attendee’s organisation as a case study and highlight the risks and how they can be minimised or eliminated. If you want to find out more about Cloud Computing, beyond the marketing hype and the often biased view of marketing materials, the course will provide you with a vendor-neutral view of the technology and its application to your business.


Lydia Leong (@cloudpundit) described the Amazon outage and the auto-immune vulnerabilities of resiliency in a 4/21/2011 post:

Today is Judgment Day, when Skynet becomes self-aware. It is, apparently, also a very, very bad day for Amazon Web Services.

Lots of people have raised questions today about what Amazon’s difficulties today mean for the future of cloud IaaS. My belief is that this doesn’t do anything to the adoption curve — but I do believe that customers who rely upon Amazon to run their businesses will, and should, think hard about the resiliency of their architectures.

It’s important to understand what did and did not happen today. There’s been a popular impression that “EC2 is down”. It’s not. To understand what happened, though, some explanation of Amazon’s infrastructure is necessary.

Amazon divides its infrastructure into “regions”. You can think of a region as basically analogous to “a data center”. For instance, US-East-1 is Amazon’s Northern Virginia data center, while US-West-1 is Amazon’s Silicon Valley data center. Each region, in turn, is divided into multiple “availability zones” (AZs). You can think of an AZ as basically analogous to “a cluster” — it’s a grouping of physical and logical resources. Each AZ is designated by letters — for instance, US-East-1a, US-East-1b, etc. However, each of these designations is customer-specific (which is why Amazon’s status information cannot easily specify which AZ is affected by a problem).
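
To make the region/AZ hierarchy concrete, here is a minimal sketch using the Python boto library; it assumes AWS credentials are already configured, and the zone names it prints are the per-account labels described above:

    # Minimal sketch: enumerate regions and the availability-zone labels this
    # particular account sees. Assumes boto is installed and AWS credentials
    # are configured (e.g., in ~/.boto or environment variables).
    import boto.ec2

    for region in boto.ec2.regions():
        print(region.name)          # e.g., us-east-1, us-west-1, eu-west-1, ...

    conn = boto.ec2.connect_to_region("us-east-1")
    for zone in conn.get_all_zones():
        print("%s (%s)" % (zone.name, zone.state))   # e.g., us-east-1a (available)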

Amazon’s virtual machine offering is the Elastic Compute Cloud (EC2). When you provision an EC2 “instance” (Amazon’s term for a VM), you also get an allocation of “instance storage”. Instance storage is transient — it exists only as long as the VM exists. Consequently, it’s not useful for storing anything that you actually want to keep. To get persistent storage, you use Amazon’s Elastic Block Store (EBS), which is basically just network-attached storage. Many people run databases on EC2 that are backed by EBS, for instance. Because that’s such a common use case, Amazon offers the Relational Database Service (RDS), which is basically an EC2 instance running MySQL.

Amazon’s issues today are with EBS, and with RDS, both in the US-East-1 region. (My guess is that the issues are related, but Amazon has not specifically stated that they are.) Customers who aren’t in the US-East-1 region aren’t affected (customers always choose which region and specific AZs they run in). Customers who don’t use EBS or RDS are also unaffected. However, use of EBS is highly commonplace, and likely just about everyone using EC2 for a production application or Web site is reliant upon EBS. Consequently, even though EC2 itself has been running just fine, the issues have nevertheless had a major impact on customers. If you’re storing your data on EBS, the issues with EBS have made your data inaccessible, or they’ve made access to that data slow and unreliable. Ditto with RDS. Obviously, if you can’t get to your data, you’re not going to be doing much of anything.

In order to get Amazon’s SLA for EC2, you, as a customer, have to run your application in multiple AZs within the same region. Running in multiple AZs is supposed to isolate you from the failure of any single AZ. In practice, of course, this only provides you so much protection — since the AZs are typically all in the same physical data center, anything that affects that whole data center would probably affect all the AZs. Similarly, the AZs are not totally isolated from one another, either physically or logically.

However, when you create an EBS volume, you place it in a specific availability zone, and you can only attach that EBS volume to EC2 instances within that same availability zone. That complicates resiliency, since if you wanted to fail over into another AZ, you’d still need access to your data. That means if you’re going to run in multiple AZs, you have to replicate your data across multiple AZs.
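
In boto terms, the constraint looks roughly like this (a sketch only; the instance ID and device name are placeholders):

    # Sketch of the AZ constraint: an EBS volume is created in one specific AZ
    # and can only be attached to an EC2 instance running in that same AZ.
    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")

    volume = conn.create_volume(size=100, zone="us-east-1a")   # pinned to us-east-1a

    # The target instance (placeholder ID) must also live in us-east-1a;
    # otherwise the attach call fails.
    conn.attach_volume(volume.id, "i-12345678", "/dev/sdf")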

One of the ways you can achieve this is with the Multi-AZ option of RDS. If you’re running a MySQL database and can do so within the constraints of RDS, the multi-AZ option lets you gain the necessary resiliency for your database without having to replicate EBS volumes between AZs.
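
With boto, requesting that option is roughly a one-flag affair; the sketch below uses placeholder identifiers and credentials, and parameter names may differ slightly between library versions:

    # Sketch: provisioning a Multi-AZ RDS MySQL instance, which maintains a
    # standby replica in another AZ of the same region.
    import boto.rds

    rds = boto.rds.connect_to_region("us-east-1")
    db = rds.create_dbinstance(
        id="myapp-db",                 # placeholder identifier
        allocated_storage=10,          # GB
        instance_class="db.m1.small",
        master_username="admin",
        master_password="change-me",   # placeholder credential
        multi_az=True)                 # the Multi-AZ option discussed above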

As one final caveat, data transfer within a region is free and fast — it’s basically over a local LAN, after all. By contrast, Amazon charges you for transfers between regions, which goes over the Internet and has the attendant cost and latency.

Consequently, there are lots of Amazon customers who are running in just a single region. A lot of those customers may be running in just a single AZ (because they didn’t architect their app to easily run in multiple AZs). And of the ones who are running in multiple AZs, a fair number are reliant upon the multi-AZ functionality of RDS.

That’s why today’s impacts were particularly severe. US-East-1 is Amazon’s most popular region. The problems with EBS impacted the entire region, as did the RDS problems (and multi-AZ RDS was particularly impacted), not just a single AZ, so if you were multiple-AZ but not multi-region, the resiliency you were theoretically getting was of no help to you. Today, people learned that it’s not necessarily adequate to run in multiple AZs. (Justin Santa Barbara has a good post about this.)

My perspective on this is pretty much exactly what I would tell a traditional Web hosting customer who’s running only in one data center: If you want more resiliency, you need to run in more than one data center. And on Amazon, if you want more resiliency, you need to not only be multi-AZ but also multi-region.

Amazon’s SLA for EC2 is 99.95% for multi-AZ deployments. That means that you should expect that you can have about 4.5 hours of total region downtime each year without Amazon violating their SLA. Note, by the way, that this outage does not actually violate their SLA. Their SLA defines unavailability as a lack of external connectivity to EC2 instances, coupled with the inability to provision working instances. In this case, EC2 was just fine by that definition. It was EBS and RDS which weren’t, and neither of those services have SLAs.
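
The 4.5-hour figure is just the arithmetic of that percentage:

    # Back-of-the-envelope: annual downtime budget implied by a 99.95% SLA.
    hours_per_year = 365 * 24
    allowed_downtime_hours = (1 - 0.9995) * hours_per_year
    print("%.1f" % allowed_downtime_hours)   # ~4.4 hours, i.e. roughly the "about 4.5" above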

So how did Amazon end up with a problem that affected all the AZs within the US-East-1 region? Well, according to their status dashboard, they had some sort of network problem last night in their east coast data center. That problem resulted in their automated resiliency mechanisms attempting to re-mirror a large number of EBS volumes. This impacted one of the AZs, but it also overloaded the control infrastructure for EBS in that region. My guess is that RDS also uses this same storage infrastructure, so the capacity shortages and whatnot created by all of this activity ended up also impacting RDS.

My colleague Jay Heiser, who follows, among other things, risk management, calls this “auto-immune disease” — i.e., resiliency mechanisms can sometimes end up causing you harm. (We’ve seen auto-immune problems happen before in a prior Amazon S3 outage, as well as a Google Gmail outage.) The way to limit auto-immune damage is isolation — ensuring limits to the propagation.

Will some Amazon customers pack up and leave? Will some of them swear off the cloud? Probably. But realistically, we’re talking about data centers, and infrastructure, here. They can and do fail. You have to architect your app to have continuous availability across multiple data centers, if it can never ever go down. Whether you’re running your own data center, running in managed hosting, or running in the cloud, you’re going to face this issue. (Your problems might be different — i.e., your own little data center isn’t going to have the kind of complex problem that Amazon experienced today — but you’re still going to have downtime-causing issues.)

There are a lot of moving parts in cloud IaaS. Any one of them going wrong can bork your entire site/application. Your real problem is appropriate risk mitigation — the risk of downtime and its attendant losses, versus the complications and technical challenges and costs created by infrastructure redundancy.


Brent Stineman (@BrentCodeMonkey) discussed Amazon’s Service Outtage in a 4/21/2011 post:

As some of you are likely aware, Amazon is currently experiencing a significant outage in several of its services, including EC2, EBS, and Elastic Beanstalk. Is this a strike against cloud computing? Or a cautionary tale against making assumptions about a cloud provider’s SLA and your own lack of backup/recovery plans?

While cloud detractors are raising this as a prime example of why the cloud isn’t ready for the enterprise, the real truth here is that this outage is a great example of what happens when you put all your faith in a provider and don’t plan properly. While your provider may be giving you a financially backed SLA… is being paid back $2,000 for SLA violations acceptable if you’re losing $10,000 an hour due to down services?

Many of the sites/services that are experiencing downtime today were hosted solely at Amazon’s Virginia data center and didn’t have disaster recovery or failover plans. One high profile client (I don’t have permission, so I won’t name names) isn’t having the same issue because they were prepared; they followed Amazon’s published guidance to have redundant copies of their services sitting ready in other facilities to take the load should something happen.

Does this mean you’re doubling your costs? Probably not, as you can (and likely should) have those secondary service sites set up at reduced capacity but prepared to quickly expand/scale to handle load. It’s ultimately up to your organization to determine, much like purchasing insurance, how much coverage you need and how quickly you need to be able to adjust. And it’s precisely this ability to keep it small and scale it up when needed that is one of the cost benefits of the cloud. So you could potentially argue that this model even supports the whole argument for cloud computing.

As cloud evangelists and supporters, it’s counseling potential adopters on issues like these that will help us win their confidence when it comes to cloud computing. Even just raising the potential risks can get them to stop looking at us as someone with a sales pitch, and instead view us as an informed partner that wants to help them be successful in this new world. Regardless of which platform they are considering, they need to fully understand what SLAs mean, but more importantly, they need to know what the impact to their business is if the vendor violates the SLA, be it for a minute, an hour, or days on end.

Brent is a Windows Azure MVP.


David Linthicum asserted “Cloud Foundry's open source nature has more to do with VMware looking different from other catch-up services” as a preface to his VMware's bet on open source PaaS not a slam dunk post to InfoWorld’s Cloud Computing blog:

Should you care about EMC VMware's open source PaaS service called Cloud Foundry?

The company promotes it as development tools to build applications on public clouds, private clouds, and even desktops, using your choice of several development languages and frameworks, such as Ruby on Rails and Java, and databases such as MongoDB, Postgres, and MySQL. If it's missing something, it's open source, so you can add what's missing.

That's the promise, anyhow. Although you can have your way with the source code if needed, have you ever modified an open source product? It's not so trivial a process.

The real reason VMware is bringing this product to the market is to play catch-up with the existing PaaS offerings (from Amazon.com, Engine Yard, Microsoft, and Salesforce.com) through the open source angle. For some, the use of open source is a marketing gimmick; for others, it is an outright religion. Still, it may be the only approach to provide a differentiator so that VMware won't look like a "me too" provider in the several-years-old PaaS market.

Cloud Foundry will garner some adopters, perhaps even from smaller public IaaS services that are looking to stand up a PaaS offering as well. Rackspace has expressed an interest, and it is already drinking the open source Kool-Aid with the OpenStack IaaS offering. Moreover, many enterprises are looking to stand up a private PaaS in their data center, and Cloud Foundry may fit into that strategy.

I don't think Google, Microsoft, or Salesforce.com are quivering in their boots yet. Open source may not be as much of a differentiator in a cloud computing world as VMware is assuming.


Jeff Barr (@jeffbarr) described Live Streaming With Amazon CloudFront and Adobe Flash Media Server on 4/18/2011:

You can now stream live audio or video through AWS with the Adobe Flash Media Server, using a cost-effective pay-as-you-go model that makes use of Amazon EC2, Amazon CloudFront, and Amazon Route 53, all configured and launched via a single CloudFormation template.

We've used AWS CloudFormation to make the signup and setup process as simple and straightforward as possible. The first step is to actually sign up for AWS CloudFormation. This will give you access to all of the AWS services supported by AWS CloudFormation, but you'll pay only for what you use.

I've outlined the major steps needed to get up and running below. For more information, you'll want to consult our new tutorial, Live Streaming Using Adobe Flash Media Player and Amazon Web Services.

Once you've signed up, you need to order Flash Media Server for your AWS Account by clicking here. After logging in, you can review the subscription fee and other charges before finalizing your order:

Then you need to create a Route 53 hosted zone and an EC2 key pair. The tutorial includes links to a number of Route 53 tools and you can create the key pair using the AWS Management Console.

The next step is to use CloudFormation to create a Live Streaming stack. As you'll see in the documentation, this step makes use of a new feature of the AWS Management Console. It is now possible to construct a URL that will open up the console with a specified CloudFormation template selected and ready to use. Please feel free to take a peek inside the Live Streaming Template to see how it sets up all of the needed AWS resources.

When you initiate the stack creation process you'll need to specify a couple of parameters:

Note that you'll need to specify the name of the Route 53 hosted domain that you set up earlier in the process so that it can be populated with a DNS entry (a CNAME) for the live stream.

The CloudFormation template will create and connect up all of the following:

  • An EC2 instance of the specified instance type running the appropriate Flash Media Server AMI and accessible through the given Key Pair. You can, if desired, log in to the instance using the SSH client of your choice.
  • An EC2 security group with ports 22, 80, and 1935 open.
  • A CloudFront distribution.
  • An A record and a CNAME in the hosted domain.
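
For anyone who prefers scripting to the console wizard, the same stack can be created (and later torn down) with boto's CloudFormation support. This is only a sketch: the stack name, template URL, and parameter keys below are placeholders, and the real values come from the tutorial and template linked above.

    # Sketch: create the live-streaming stack from a script rather than the
    # console. Stack name, template URL, and parameter keys are placeholders.
    import boto.cloudformation

    cfn = boto.cloudformation.connect_to_region("us-east-1")

    stack_id = cfn.create_stack(
        "LiveStreamingDemo",
        template_url="https://s3.amazonaws.com/example-bucket/live-streaming.template",
        parameters=[("KeyPair", "my-keypair"),
                    ("HostedZone", "example.com"),
                    ("InstanceType", "m1.large")])
    print(stack_id)

    # When the event is over, delete the stack so the AWS resources stop
    # accruing charges:
    # cfn.delete_stack("LiveStreamingDemo")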

The template will produce the URL of the live stream as output:

The resulting architecture looks like this:

The clients connect to the EC2 instance every 4 seconds to retrieve the manifest.xml file. This is specified in the template and can be modified as needed. You have complete access to the Flash Media Server and can configure it as desired.

Once you've launched the Flash Media Server, you can install and run the Flash Media Live Encoder on your desktop, connect it up to your video source, and stream live video to your heart's content. After you are done, you can simply delete the entire CloudFormation stack to release all of the AWS resources. In fact, you must do this in order to avoid on-going charges for the AWS resources.

The CloudFormation template specifies the final customizations to be applied to the AMI at launch time. You can easily copy and then edit the script if you need to make low-level changes to the running EC2 instance.

As you can see, it should be easy for you to set up and run your own live streams using the Adobe Flash Media Server and AWS if you start out with our tutorial. What do you think?

Despite the negative publicity about outages in Amazon’s Virginia data center, neither Jeff nor Werner Vogels has mentioned the issue in blog posts. Here’s the status from the AWS dashboard on 4/24/2011 at 9:35 AM PDT:

[AWS Service Health Dashboard status screenshot]


<Return to section navigation list> 

