Sunday, May 15, 2011

Windows Azure and Cloud Computing Posts for 5/12/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


•• Updated 5/15/2011 with new articles marked •• by Mary Jo Foley, Gary Orenstein, Welly Lee, and Wouter Seye.

• Updated 5/14/2011 with new articles marked • by David Pallman, Steve Yi, Vittorio Bertocci, Ananthanarayan Sundaram, Michael Washington, Tony Champion, Lydia Leong and me.

Reposted 5/13/2011 11:00 AM PDT with updates after Blogger outage recovery. Read Ed Bott’s Google's Blogger outage makes the case against a cloud-only strategy post of 5/13/2011:

The same week that Google made its strongest pitch ever for putting your entire business online, one of its flagship services has failed spectacularly. …

and a description of the problem by Blogger’s Eddie Kessler in Blogger is Back of 5/13/2011.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Welly Lee reported a SQL Server Migration Assistant session in his SSMA @TechEd post of 5/14/2011 to the SQL Server Migration Assistant (SSMA) Team blog:

Quick entry to let you know that I will be presenting at North America TechEd next week.  The session will include several demos and discuss how SSMA solves database migration issues. For details about the session click here (or see below).

I will also be at the Microsoft booth throughout the week. Hope to see you all there!


DBI307 Automating Database Migration to Microsoft SQL Server
  • Monday, May 16 | 4:45 PM - 6:00 PM | Room: B313
  • Session Type: Breakout Session
  • Level: 300 - Advanced
  • Track: Database & Business Intelligence
  • Evaluate: For a chance to win an Xbox 360
  • Speaker: Welly Lee
Does your organization have Oracle, Sybase, MySQL or Access databases that you would like to migrate to SQL Server or SQL Azure? Did you know that Microsoft offers a free tool to automate database migration? In this demo-heavy session, we examine how SQL Server Migration Assistant (SSMA) migrates other databases to SQL Server. Discover how the tool performs an assessment to analyze the complexity of the database and help you plan the migration. Explore how the tool maps data types to their SQL Server equivalents and converts different SQL dialects inside functions, procedures, triggers, and cursors to T-SQL. Learn how the tool emulates other database features not natively supported in SQL Server. Get a peek at what's being planned for SSMA in 2011. Come to this session to learn the ins and outs of database migration and understand how SSMA can save time and cost for your migration.

Product/Technology: Microsoft® SQL Server®, Microsoft® SQL Server® – Upgrade and Migration


Cihan Biyikoglu announced Federations Product Evaluation Program Now Open for Nominations! on 5/13/2011:

Hi everyone, the Microsoft SQL Azure Federations Product Evaluation Program nomination survey is now open. To nominate an application and get access to the preview of this technology, please fill out this survey.

The goal of the program is to help customers building solutions on SQL Azure experiment with the technology before its public release. It will also be a chance to provide the development team with detailed feedback on the federations technology before it appears in SQL Azure.

The preview is available to only a limited set of customers. Customers who are selected for the program will receive communication once the program is kicked off in May and June 2011.

Looking forward to all the nominations! Thanks

Cihan Biyikoglu – SQL Azure – Program Manager

Finally! (The survey has 53 questions. I submitted mine for a demo app this morning.)

See my Build Big-Data Apps in SQL Azure with Federation cover article for Visual Studio Magazine’s 3/2011 issue for more details about SQL Azure sharding.


The McKinsey Quarterly announced the availability of “A new report explores the explosive growth of digital information and its potential uses” in a The challenge—and opportunity—of ‘big data’ post of 5/13/2011 (requires free site registration):

The proliferation of data has always been part of the impact of information and communications technology. Now, as computers and cell phones continue to pervade our daily activities and as millions of networked sensors are being embedded in these devices (as well as in automobiles, “smart” meters, and other machines), the amount of data available for analysis is exploding. The scale and scope of the changes that such “big data” are bringing about have reached an inflection point. Companies capture trillions of bytes of information about customers, suppliers, and operations. Many citizens look with suspicion at the amount of data collected on every aspect of their lives. Can big data play a useful role?

New research from the McKinsey Global Institute (MGI) finds that collecting, storing, and mining big data for insights can create significant value for the world economy, enhancing the productivity and competitiveness of companies and the public sector and creating a substantial economic surplus for consumers. The report Big data: The next frontier for innovation, competition, and productivity explores the state of digital data, how different domains can use large data sets to create value, and the implications for the leaders of private-sector companies and public-sector organizations, as well as for policy makers. The report’s analysis is supplemented by a detailed examination of five domains—health care, retailing, the public sector, manufacturing, and personal-location data.

MGI’s analysis shows that companies and policy makers must tackle significant hurdles to fully capture big data’s potential. The United States alone faces a shortage of 140,000 to 190,000 people with analytical and managerial expertise and 1.5 million managers and analysts with the skills to understand and make decisions based on the study of big data (exhibit). Companies and policy makers must also tackle misaligned incentives around issues such as privacy and security, access to data, and technology deployment.

The report identified five broadly applicable ways to leverage big data:

  1. Make big data more accessible and timely. Transparency, enabled by big data, can unlock a great deal of value. In the public sector, increasing access to data across separate departments can sharply reduce search and processing times. In manufacturing, integrating data from R&D, engineering, and manufacturing units to facilitate concurrent engineering can cut time to market.
  2. Use data and experiments to expose variability and raise performance. As organizations create and store more transactional data in digital form, they can collect more accurate and detailed performance information on everything from product inventories to sick days.
  3. Segment populations to customize. Big data allow organizations to create ever-narrowing segmentations and to tailor services precisely to meet customer needs. This approach is well known in marketing and risk management but can be revolutionary in areas such as the public sector.
  4. Use automated algorithms to replace and support human decision making. Sophisticated analytics can substantially improve decision making, minimize risks, and unearth valuable insights that would otherwise remain hidden. Such analytics have applications from tax agencies to retailers.
  5. Innovate with new business models, products, and services. To improve the development of next-generation offerings and to create innovative after-sales services, manufacturers are leveraging data obtained from the use of products. The emergence of real-time location data has created a new set of location-based mobile services from navigation to people tracking.

Read the executive summary or download the full report on the McKinsey & Company Web site.

Keep the conversation going on Twitter

Interested in big data? Use the #McKBigData and #BigData hashtags to follow the conversation on Twitter. We’ll be responding to your comments via our McKinsey Quarterly Twitter account, @McKQuarterly.


Steve Lohr reviewed the McKinsey report in his New Ways to Exploit Raw Data May Bring Surge of Innovation, a Study Says article of 5/13/2011 for the NY Times:

Math majors, rejoice. Businesses are going to need tens of thousands of you in the coming years as companies grapple with a growing mountain of data.

Data is a vital raw material of the information economy, much as coal and iron ore were in the Industrial Revolution. But the business world is just beginning to learn how to process it all.

The current data surge is coming from sophisticated computer tracking of shipments, sales, suppliers and customers, as well as e-mail, Web traffic and social network comments. The quantity of business data doubles every 1.2 years, by one estimate.

Mining and analyzing these big new data sets can open the door to a new wave of innovation, accelerating productivity and economic growth. Some economists, academics and business executives see an opportunity to move beyond the payoff of the first stage of the Internet, which combined computing and low-cost communications to automate all kinds of commercial transactions.

The next stage, they say, will exploit Internet-scale data sets to discover new businesses and predict consumer behavior and market shifts.

Others are skeptical of the “big data” thesis. They see limited potential beyond a few marquee examples, like Google in Internet search and online advertising.

The McKinsey Global Institute, the research arm of the consulting firm, is coming down on the side of the optimists in a lengthy study to be published on Friday. The report, based on nine months of work, is “Big Data: The Next Frontier for Innovation, Competition and Productivity.” It makes estimates of the potential benefits from deploying data-harvesting technologies and skills.

Read Steve’s entire article here.


Mark Kromer (@mssqldude) announced on 5/13/2011 availability of his Part 3 of 5: A Microsoft Cloud BI Example article:

Yesterday, I put up part 3 of my 5-part series on building a Microsoft Cloud BI solution with SQL Azure, Azure Reporting Services and Microsoft SQL Server tools on the SQL Server Magazine BI Blog: http://www.sqlmag.com/blogs/sql-server-bi/entryid/76415/all-the-pieces-microsoft-cloud-bi-pieces-part-3:


So far, in part 1 & part 2, we’ve talked about migrating data into data marts in SQL Azure and then running analysis against that data from on-premises tools that natively connect into the SQL Azure cloud database, like PowerPivot for Excel. That approach can be thought of as a “hybrid” approach because PowerPivot still requires on-premises local infrastructure for the reporting.

Now we’re going to build a dashboard that will exist solely in the cloud in Microsoft’s new Azure Reporting Services. This part of the Azure platform is only in an early limited CTP (beta), so you will need to go to the Microsoft Connect site to request access to the CTP: http://connect.microsoft.com/sqlazurectps.

Think of Azure Reporting Services as SSRS in the cloud. You will author reports using the normal SQL Server Reporting Services 2008 R2 tools. In this demo, I’m going to use the business-user ad-hoc reporting tool called Report Builder 3.0. One of the great benefits of the overall Microsoft cloud platform, Azure, is that you can develop applications and databases and migrate your on-premises solutions to Azure very easily. The tools that you are already familiar with (Visual Studio, SQL Server Management Studio, BIDS, and Report Builder) all work with SQL Azure and Windows Azure.

For example, in Report Builder, you will design a report just as you normally would and enter in the URL for the Azure Reporting Services as the site to store or retrieve the report definition:

[screenshot: entering the Azure Reporting Services URL in Report Builder]

Design the report based on a data source that is from SQL Azure, since this is a pure Cloud BI solution:

[screenshot: SQL Azure data source in Report Builder]

Whether you are using Report Builder or BIDS in Visual Studio, you will enter the Azure service name for your report server in the deploy information, just as you do today with SSRS. You will receive an extra prompt for your credentials because all authorization on SQL Azure and Azure Reporting Services today is not yet federated and is based on SQL Authentication.
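For reference, the SQL Azure data source behind a report like this typically uses a connection string along these lines (server, database, login, and password are placeholders; it is shown here as a C# constant purely for formatting):

class ReportDataSource
{
    // Typical SQL Azure connection string for the report's data source (placeholder values).
    public const string SqlAzureConnectionString =
        "Server=tcp:<server>.database.windows.net;Database=<datamart>;" +
        "User ID=<login>@<server>;Password=<password>;Encrypt=True;";
}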

After you’ve designed a report in Report Builder and deployed it to the Cloud, as you can see below, you are able to share & view that report. The Web rendering of the reports in your browser is identical to that of a local on-premises SSRS server:

[screenshot: report rendered in the browser]

One of the really nice additional benefits of moving to the Cloud with SQL Azure and Azure Reporting Services, beyond the simplicity of migrating to Azure, is that your reports and solution now exist over HTTP or HTTPS and can be easily accessed over the Internet through browser-based mobile devices, not just Windows Phone 7. The screenshot below is from my Windows Phone 7 emulator:

[screenshot: report in the Windows Phone 7 emulator]

In Part 4, I will examine creating Azure applications that use the ReportViewer control to embed Azure Reporting Services reports to make a complete end-to-end Cloud BI custom application. I’ll also touch on making that available as a mobile browser-based app beyond simply pointing to the Azure Reporting Services service URL as I did in this post. Then I’m going to wrap this series up in a few weeks with a final Part 5 post where I will point out some of the future-looking opportunities and features coming in the Microsoft BI stack to deploy a complete Cloud BI solution on the Microsoft platform including how to incorporate Silverlight controls into your BI app.


The MSDN Library reported What's New in SQL Azure (SQL Azure Database) on 5/13/2011:

The SQL Azure Database May 2011 release offers several enhancements. These enhancements include:

  • SQL Azure Database Management REST API: The SQL Azure Database Management REST API is a new API that allows you to programmatically manage SQL Azure servers provisioned for a Windows Azure subscription, along with the firewall rules that control access to servers. For more information, see Management REST API Reference. (A brief code sketch follows this list.)
  • Multiple servers per subscription: SQL Azure now supports creating multiple servers for each Windows Azure platform subscription. For more information, see SQL Azure Provisioning Model.
  • JDBC driver: When writing applications for SQL Azure Database, you can now use the updated version of SQL Server JDBC Driver 3.0. For example code, see How to: Connect to SQL Azure Using JDBC.
  • Upgrading a data-tier application package: The DAC Framework 1.1 introduces in-place upgrades, which are supported on SQL Azure. For more information, see Data-tier Applications.
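For illustration, here is a minimal C# sketch of calling the SQL Azure Database Management REST API with a management certificate. The endpoint URI and x-ms-version header value shown are assumptions for this example; check the Management REST API Reference for the exact values.

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class ListSqlAzureServers
{
    static void Main()
    {
        // Placeholders -- substitute your subscription ID and management certificate thumbprint.
        string subscriptionId = "<subscription-id>";
        string thumbprint = "<management-cert-thumbprint>";

        // Load the management certificate from the current user's personal store.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var cert = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false)[0];
        store.Close();

        // Illustrative endpoint; firewall rules hang off .../servers/<server-name>/firewallrules.
        var request = (HttpWebRequest)WebRequest.Create(
            "https://management.database.windows.net:8443/" + subscriptionId + "/servers");
        request.Method = "GET";
        request.Headers.Add("x-ms-version", "1.0");   // assumed API version header value
        request.ClientCertificates.Add(cert);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());    // XML list of servers for the subscription
        }
    }
}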

Important

Upcoming Increased Precision of Spatial Types: For the next major service release, some intrinsic functions will change and SQL Azure will support increased precision of Spatial Types. This will have an impact on persisted computed columns as well as any index or constraint defined in terms of the persisted computed column. With this service release SQL Azure provides a view to help determine objects that will be impacted by the change. Query sys.dm_db_objects_impacted_on_version_change (SQL Azure Database) in each database to determine impacted objects for that database.
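If you prefer to check programmatically rather than from a query window, here is a hedged ADO.NET sketch of querying that view (connection string values are placeholders and the column selection is illustrative):

using System;
using System.Data.SqlClient;

class CheckSpatialImpact
{
    static void Main()
    {
        // Point this at each SQL Azure database you want to inspect (placeholder values).
        var connStr = "Server=tcp:<server>.database.windows.net;Database=<db>;" +
                      "User ID=<login>@<server>;Password=<password>;Encrypt=True;";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT * FROM sys.dm_db_objects_impacted_on_version_change;", conn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader[0]);   // identifiers of the impacted objects
            }
        }
    }
}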

See Also: Other Resources


The AppFabric CAT Team described Reliable Retry Aware BCP for SQL Azure on 5/12/2011:

Introduction – Motivation

Microsoft has a number of tools to import data into SQL Azure but as of now none of those are retry-aware. In other words, if a transient network fault occurs during the data import process, an error is raised and the executable is terminated. Cleanup of the partially completed import and re-execution of the process is left to the user. This cleanup is often met with a level of frustration. I was thusly motivated to create a BCP type program (I call it bcpWithRetry) to import data into SQL Azure in a reliable manner by responding to those transient fault conditions and resubmitting any unsuccessful batches.

The second motivation for writing such an application was to support the forthcoming release of the SQL Azure Federations feature. SQL Azure Federations provide a number of connectivity enhancements that provide better connection pool management and cache coherency. The cost is that a ‘USE FEDERATION’ statement must be executed to route queries to the appropriate Federated Member which contains the desired shard of data. This T-SQL statement must be performed prior to executing any queries at the backend SQL Azure database. Currently none of our products support SQL Azure Federations nor offer options or extension points upon which to import data into a Federated Member. In the future I will provide a follow-up to this blog post with an update to the bcpWithRetry client in which I include support for SQL Azure Federations. You can learn more about SQL Azure Federations by reading Cihan Biyikoglu’s SQL Azure posts.

In Closing

I will finish off the blog with a section describing the code and a walk-through of some of the application arguments. I have tested the executable with a number of the input dimensions and was able to achieve 200k rows/sec importing a simple comma-separated file with 10 million lines into my local SQL Server database. The throughput is 14k rows/sec when importing the same file into a SQL Azure database. You can find the code here.

Code Walk-Through

The bcpWithRetry program will import data directly into SQL Azure in a reliable manner. The foundation for this program is the SqlBulkCopy class and a custom implementation of the IDataReader interface which provides the ability to read directly from a flat text file. The scope is simple: the program (DataReader) reads data from a flat file line-by-line and bulk imports (SqlBulkCopy) this data into SQL Azure using fixed-size, transactionally scoped batches. If a transient fault is detected, retry logic is initiated and the failed batch is resubmitted.

The TransientFaultHandling Framework from the AppFabric CAT Team was used to implement the retry logic. It provides a highly extensible retry policy framework with custom retry policies and fault handling paradigms for Microsoft cloud offerings like Windows Azure AppFabric Service Bus and SQL Azure. It catches 9 different SQL Azure exceptions with a default retry policy out of the box. There was no reason for me to either implement my own or shop anywhere else.

The snippet of the code below shows the technique used to import data into SQL Azure. The CSVReader implements the IDataReader interface while the SqlAzureRetryPolicy class offers the necessary retry policy to handle transient faults between the client and SQL Azure. The ExecuteAction method of the SqlAzureRetryPolicy class wraps a delegate which implements the bulk copy import of data into SQL Azure. The SqlRowsCopied event handler will output progress information to the user. The RetryOccurred event provides notification of a transient fault condition and provides the hook used to inform the CSVReader to reread rows from an internal ring buffer.

            using (CSVReader sr = new CSVReader(CSVFileName, columns, batchSize, Separator))
            {
                _csvReader = sr;

                SqlAzureRetryPolicy.RetryOccurred += RetryOccurred;
                SqlAzureRetryPolicy.ExecuteAction(() =>
                {
                    using (SqlConnection conn = new SqlConnection(ConnStr))
                    {
                        conn.Open();
                        using (SqlBulkCopy bulkCopy =
                            new SqlBulkCopy(conn, SqlBulkCopyOptions.UseInternalTransaction |
                                Options, null))
                        {
                            bulkCopy.DestinationTableName = TableName;
                            bulkCopy.BatchSize = batchSize;
                            bulkCopy.NotifyAfter = batchSize;
                            bulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(OnSqlRowsCopied);

                            foreach (var x in columns)
                                bulkCopy.ColumnMappings.Add(x, x);

                            sw.Start();
                            Console.WriteLine("Starting copy...");
                            bulkCopy.WriteToServer(sr);
                        }
                    }
                });
            }

The important facets of the CSVReader implementation are shown below. SqlBulkCopy calls the IDataReader.Read() method to read a row of data and continues to do so until it fills its internal data buffer. The SqlBulkCopy API will then submit a batch to SQL Azure with the contents of the buffer. If a transient fault occurs during this process, the entirety of the batch will be rolled back. It is at this point that the transient fault handling framework notifies the bcpWithRetry client through its enlistment in the RetryOccurred event. The handler for this event merely toggles the CSVReader.BufferRead property to true. The CSVReader then reads results from its internal buffer so that the batch can be resubmitted to SQL Azure.

        private bool ReadFromBuffer()
        {
            _row = _buffer[_bufferReadIndex];
            _bufferReadIndex++;
            if (_bufferReadIndex == _currentBufferIndex)
                BufferRead = false;

            return true;
        }

        private bool ReadFromFile()
        {
            if (this._str == null)
                return false;
            if (this._str.EndOfStream)
                return false;

            // Successful batch sent to SQL if we have already filled the buffer to capacity
            if (_currentBufferIndex == BufferSize)
                _currentBufferIndex = 0;

            this._row = this._str.ReadLine().Split(Separator);
            _numRowsRead++;

            this._buffer[_currentBufferIndex] = this._row;
           _currentBufferIndex++;

            return true;
        }
 
Application Usage

The bcpWithRetry application provides fundamental yet sufficient options to bulk import data into SQL Azure using retry-aware logic. The parameters mirror those of the bcp.exe tool which ships with SQL Server. Trusted connections are enabled so as to allow testing against an on-premises SQL Server database. The usage is shown below.

C:\> bcpWithRetry.exe
usage: bcpWithRetry {dbtable} { in } datafile

[-b batchsize]           [-t field terminator]
[-S server name]         [-T trusted connection]
[-U username]            [-P password]
[-k keep null values]    [-E keep identity values]
[-d database name]       [-a packetsize]
...
c:\>bcpWithRetry.exe t1 in t1.csv -S qa58em2ten.database.windows.net -U mylogin@qa58em2ten -P Sql@zure -t , -d Blog -a 8192
10059999 rows copied.
Network packet size (bytes): 8192
Clock Time (ms.) Total     : 719854   Average : (13975 rows per sec.)
Press any key to continue . . .
 

Reviewers : Curt Peterson, Jaime Alva Bravo,  Christian Martinez, Shaun Tinline-Jones

More detailed propaganda for SQL Azure Federations, without even an estimated date for its first CTP!


<Return to section navigation list> 

MarketPlace DataMarket and OData

Zane Adam announced DataMarket goes International with new publishers, free trials and new user experiences in a 5/12/2011 post:

I am really excited to announce the availability of Windows Azure Marketplace Service Update 2. This release includes key enhancements to the marketplace based on our customer feedback.

Internationalization

With this release, Windows Azure Marketplace is open for business in 8 new countries. DataMarket was made commercially available in November 2010 with the ability to purchase datasets only in the USA; customers could use free datasets from across the globe. Customers worldwide have been asking us to enable international purchases for a few months. This release is wave 1 of our international rollout, enabling dataset purchases from Australia, Austria, Canada, France, Germany, Italy, Spain, and the UK in local currencies. We will continue to expand this list.

Rich Visualizations

Our customers have been asking us for rich data visualization tools on the marketplace. With this release we have enhanced our service explorer to not just display tabular data but also create graphs such as line, column, bar and pie charts. We also enable customers to export the data in various formats like CSV, XLS or XML. This is in addition to exporting data to PowerPivot and partner tools like Tableau.

Free Trials

Customers have also asked us over the past few months to enhance our trial experience; there is lots of premium data with no trial data. We have added a new capability which allows customers to use selected premium datasets for free for the first 30 days. Content publishers can still choose to opt in. Our goal is to work with all our data publishers and have a trial experience for most datasets. To begin with, MetricMash, Dun & Bradstreet, Boundary Solutions, and StrikeIron all have free offerings to get started.

OAuth 2.0 Support

OAuth 2.0 support enables developers to provide an embedded experience for dataset purchase. This is crucial to providing a great experience in ISV applications. A great example of this is our Excel Add-in CTP2.

Excel Add-In consent flow reflects OAuth v2 integration

Facebook Integration

Customers can share their favorite datasets using the Facebook Like button on the marketplace.

For more information on this announcement check out the DataMarket team blog and datamarket.azure.com


Sarah McDevitt described Spatial Types in the Entity Designer in a 5/12/2011 post to the Entity Framework Design blog:

One of the highly-anticipated features coming in the next version of Entity Framework is Spatial support. We’d like to give you a look at the experience of developing with Spatial types in the Entity Designer. If you haven’t already, take a look at what is going on under the hood with Spatial types in the Entity Framework in the blog post here.

We’d like to show you what is coming your way for Spatial types in the Entity Designer and of course, let us know what you think and give us feedback on the experience!

Goals

The goals for supporting Spatial types in the Entity Designer align with Entity Framework design goals for Spatial as well:

  • First class support for Spatial types in the Entity Designer
  • Model-first and Database-first support through the Entity Designer

Spatial Types

  • Geometry
  • Geography

Experience

Since the Entity Framework adds Spatial types as primitive EDM types, we treat them exactly as you are used to using primitive types in the Entity Designer.

  1. Add Property to Entity Type
  2. Set Data Type as Geography through the Property Window
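As a rough orientation for what this looks like in code, here is a minimal sketch of an entity with a Geography property using the DbGeography type that later Entity Framework releases expose; the type and namespace are assumptions based on those later releases rather than on this Designer preview:

using System.Data.Entity;
using System.Data.Entity.Spatial;   // DbGeography/DbGeometry; namespace varies by EF version

public class Store
{
    public int StoreId { get; set; }           // spatial types cannot be used as the entity key
    public string Name { get; set; }
    public DbGeography Location { get; set; }  // surfaces as the Geography primitive type in the model
}

public class StoreContext : DbContext
{
    public DbSet<Store> Stores { get; set; }
}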

Restrictions

Spatial types cannot be used as entity keys, in associations, or as discriminators. We give design-time validation errors if you attempt to use one of those invalid patterns.

  1. Attempt to set property of type Geometry as entity key
  2. Validation error in Error List

Feedback

As always, please let us know what is most valuable to you in the Entity Designer for using Spatial data types. Feedback is always appreciated!

The OData Mailing List is heavy into setting initial specifications for supporting geography and geometry data types in the Open Data Protocol.


Alex James (@adjames) will conduct a DEV374-INT OData Unplugged Interactive Discussion session at TechEd North America 2011:

Wednesday, May 18 | 1:30 PM - 2:45 PM | Room: B303

  • Session Type: Interactive Discussion
  • Level: 300 - Advanced
  • Track: Developer Tools, Languages & Frameworks
  • Evaluate: for a chance to win an Xbox 360
  • Speaker(s): Alex James
Bring your toughest OData or Windows Communication Foundation (WCF) Data Services questions to this chalk talk and have them answered by the OData team! All questions are fair game. We look forward to your thoughts and comments.
  • Product/Technology: Open Data Protocol (OData)
  • Audience: Developer, Web Developer/Designer
  • Key Learning: Whatever they want to learn about OData


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Mary Jo Foley (@maryjofoley) claims TechEd North America (#TechEd_NA, #msteched) 2011 clairvoyance re Windows Azure AppFabric in her The Microsoft week ahead: What's on for TechEd 2011? post of 5/15/2011 for ZDNet’s All About Microsoft blog:

* Perhaps there will be some other Visual Studio-inspired announcements on tap, given that Jason Zander, the Corporate Vice President of Visual Studio, is one of the lead-off keynoters on Monday morning. Maybe we’ll hear more on the tech previews of the new Visual Studio-based designer experience for the AppFabric Composition Model and the accompanying AppFabric Composite App Service CTP that the Softies mentioned at the Professional Developers Conference in fall 2010? I’m thinking it might still be too early for hints about the next Visual Studio, which may or may not be named Visual Studio 2012… But you never know.

* I’m thinking TechEd also might be a good place for Microsoft to provide updates on some other Azure building blocks that company officials said last fall would be available in the “first half” of 2011. Seems like this might be a good place to take the wraps off SQL Azure Reporting, SQL Azure Data Sync, some of those promised Azure AppFabric Service Bus improvements, Windows Azure Connect (Project Sydney, a k a, technology for securely connecting cloud and on-premises servers), and maybe even that promised technology preview for Team Foundation Services on Windows Azure.

I’d certainly like to hear more about the AppFabric Composition Model, and it’s time for SQL Azure Reporting Services, SQL Azure Data Sync, and Windows Azure Connect to come out of private beta.

Click here to read all of Mary Jo’s TechEd prognostications.


•• Wouter Seye described how to Host WCF Services in Windows Azure with Service Bus Endpoints in a 5/15/2011 post to the CODit blog:

In this post I will highlight the different pain points when hosting a WCF Service in Windows Azure with Service Bus Endpoints (e.g. using HttpRelayBinding).

Including Microsoft.ServiceBus in your deployment

As many people already know, the Windows Azure AppFabric SDK is not installed on Windows Azure, which means that you won’t have the Microsoft.ServiceBus.dll once you deploy your solution to Windows Azure. This first problem is easily fixed by setting the ‘Copy Local’ property of this reference to true.

HttpRelayBinding error

Now chances are you have a web.config to accompany your svc file that uses one of the relay bindings that come with the AppFabric SDK.

<service name="test">
    <endpoint name="PublishServiceEndpoint"
              contract="WCFServiceWebRole1.IService1"
              binding="basicHttpRelayBinding" />
</service>

Once you deploy and navigate to your svc you will get an error saying ….

[screenshot of the error]

Of course you can modify your web.config to include the necessary extensions manually, as specified in this post, but it won’t work anymore on your dev machine, since installing the AppFabric SDK already updated your machine.config with all the extensions. I wanted a solution where the machine.config was updated on Windows Azure prior to executing my code.

Using Startup Tasks

Luckily the AppFabric SDK comes with a tool called RelayConfigurationInstaller.exe which installs the Machine.config settings necessary for the Service Bus bindings to be supported in App.config. Now this just screams Startup Task.

So I went ahead and created a new solution and added the RelayConfigurationInstaller.exe to the project along with a RelayConfigurationInstaller.exe.config file which looks like this:

<?xml version="1.0"?>
<configuration>
  <startup>
    <requiredRuntime safemode="true" version="v4.0.30319"/>
  </startup>
</configuration>

I also added a Startup.cmd file which simply calls RelayConfigurationInstaller.exe /i. Then I went into the ServiceDefinition.csdef and added the Startup Task with the call to Startup.cmd. This is what my project looks like so far:

[screenshot: project structure with the startup task files]

Deploy…waiting 15 minutes… and no difference, I still get the same error, what went wrong?

Installing Microsoft.ServiceBus in the GAC

I found out that executing my startup task actually resulted in an error:

[screenshot: startup task error]

This means that the Microsoft.ServiceBus.dll should be installed in the GAC prior to calling the executable.

Now this presents a problem: there is no Windows SDK installed on Windows Azure, so we can’t execute gacutil to install the assembly. However, I discovered that you can use the System.EnterpriseServices library to publish a DLL into the GAC. Time to include PowerShell in my startup task.

I’ve created this ps script that installs the Microsoft.ServiceBus.dll in the GAC and added it to my project:

BEGIN {
    $ErrorActionPreference = "Stop"

    if ( $null -eq ([AppDomain]::CurrentDomain.GetAssemblies() |? { $_.FullName -eq "System.EnterpriseServices, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" }) ) {
        [System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a") | Out-Null
    }

    $publish = New-Object System.EnterpriseServices.Internal.Publish
}
PROCESS {
    $dir = [Environment]::CurrentDirectory=(Get-Location -PSProvider FileSystem).ProviderPath
    $assembly = Join-Path ($dir) "Microsoft.ServiceBus.dll"

    if ( -not (Test-Path $assembly -type Leaf) ) {
        throw "The assembly '$assembly' does not exist."
    }

    if ( [System.Reflection.Assembly]::LoadFile($assembly).GetName().GetPublicKey().Length -eq 0 ) {
        throw "The assembly '$assembly' must be strongly signed."
    }

    Write-Output "Installing: $assembly"

    $publish.GacInstall($assembly)
}

Next I’ve modified the startup.cmd file to first call the ps script and then call the RelayConfigurationInstaller.exe to modify the machine.config with the needed bindings. This is what the final cmd file looks like:

1:  powershell.exe Set-ExecutionPolicy RemoteSigned -Force
2:  powershell.exe .\Startup\gacutil.ps1
3:  Startup\RelayConfigurationInstaller.exe /i
Caveats
  1. Make sure you’ve selected osFamily=”2” in your ServiceConfiguration.cscfg file, since this gives you Windows Server 2008 R2, needed for caveat 2.
  2. If you want to execute powershell scripts, you have to set the ExecutionPolicy to RemoteSigned (see line 1 of startup.cmd)


Vittorio Bertocci (@vibronet) announced TechEd USA 2011: Identity, Identity, Identity and Book Signing in a 5/13/2011 post:


These days practically everybody I meet on campus is preparing to fly to Atlanta for TechEd, with the notable exception of Steve Marx (it seems I may be losing our bet after all).

Atlanta holds a special meaning for me. Back in 2004 I won the Circle of Excellence award, which (among various awesome things) included a 1-hour long meeting with Bill Gates. You can of course imagine how many times that episode got told and retold, to the point that the memory is now a memory of a memory(^n) and entered the Myth; and with it the entire Atlanta city experience, with its raging thunderstorms and the weird-flavored sodas at the Coke Museum.

Well, this Sunday I am scheduled to fly down to Atlanta again, where I’ll be presenting three sessions and holding a book signing session.
Book signing session, I say? Why somebody would want to get their copy of the book written all over, which is very likely to lower the price it could command on eBay, is beyond me… but I’ll be happy to oblige! It will be Tuesday the 17th, at 11:00 AM at the O’Reilly booth (#1817). Or the bookstore?

For the sessions, we’ll have a couple of new entries. Let’s go in order:

SIM324 Using Windows Azure Access Control Service 2.0 with Your Cloud Application
  • Tuesday, May 17 | 8:30 AM - 9:45 AM | Room: C302
  • Level: 300 - Advanced
  • Track: Security, Identity & Management

The Windows Azure Access Control Service 2.0 provides comprehensive federation and authorization services for cloud applications, so that you don't have to build identity infrastructure yourself. Come to this session to learn how your application can take advantage of your user's existing Active Directory, Windows Live ID, Google, Yahoo, and Facebook accounts when they access your cloud application. This session is aimed at developers building cloud applications.

  • Product/Technology: Cloud Power: Delivered, Windows Azure™, Windows® Identity Foundation
  • Audience: Architect, Developer, Security Administrator, Solutions Architect, Strategic IT Manager, Systems Administrator, Systems Engineer, Tactical IT Manager, Web Administrator/Webmaster, Web Developer/Designer
  • Key Learning: Understand how to simplify authorization in your applications using ACS 2.0 in Windows Azure

Finally, a session all about ACS. Although all ACS features appear in the generic claims-in-the-cloud talk (see below), I never have the time to linger a bit on the how of the service; also, in this session I’ll try to touch on features I rarely have the time to show off.

Now, some extra comments here. Putting together a behemoth conference like TechEd is a monumental task, which is spread across multiple people. For example, I wrote the abstract for the sessions but not the Audience and Key Learning entries there, and they both contain some imperfections that may mislead you.

  • The audience for this talk is people in development roles. System Administrator types are NOT a target. I went to great lengths to be super clear in the title and the abstract about the audience, but something probably fell through the cracks.
  • About the key learning. ACS can do some authorization, but that is far from being its only (or even primary) feature. Don’t come with the wrong expectations!

Neeext!

SIM322 Developer's View on Single Sign-On for Applications Using Windows Azure
  • Tuesday, May 17 | 3:15 PM - 4:30 PM | Room: B312
  • Level: 300 - Advanced
  • Track: Security, Identity & Management

Signing users in and granting them access is a core function of almost every cloud-based application. In this session we show you how to simplify your user experience by enabling users to sign in with an existing account such as a Windows Live ID, Google, Yahoo, Facebook or on-premises Active Directory account, implement access control and make secure connections between applications. Learn how the AppFabric Access Control Service, Windows Identity Foundation, and Active Directory Federation Services use a cloud-based identity architecture to help you to take advantage of the shift toward the cloud while still fully leveraging your on-premises investments.

  • Product/Technology: Cloud Power: Delivered, Windows Azure™, Windows® Identity Foundation
  • Audience: Architect, Infrastructure Architect, Solutions Architect, Strategic IT Manager, Systems Administrator, Systems Engineer, Tactical IT Manager, Web Administrator/Webmaster, Web Developer/Designer
  • Key Learning: How to simplify your approach enabling access to applications across on-premises and cloud

Nothing to say on this one. This is the usual scenarios enumeration talk which I’ve been giving around since PDC. Come only if you don’t know much about claims or our offering in that space for developers.

SIM325 Deep Dive: Windows Identity Foundation for Developers
  • Thursday, May 19 | 1:00 PM - 2:15 PM | Room: B313
  • Level: 300 - Advanced
  • Track: Security, Identity & Management

Hear how Windows Identity Foundation makes advanced identity capabilities and open standards first-class citizens in the Microsoft .NET Framework. Learn how the Claims-Based access model integrates seamlessly with the traditional .NET identity object model while also giving developers complete control over every aspect of authentication, authorization and identity-driven application behavior. See examples of the point and click tooling with tight Microsoft Visual Studio integration, advanced STS capabilities, and much more that Windows Identity Foundation consistently provides across on-premise, service-based, Microsoft ASP.NET and Windows Communication Foundation (WCF) applications.

  • Product/Technology: Windows® Identity Foundation
  • Audience: Architect, Developer, Security Administrator, Solutions Architect, Strategic IT Manager, Systems Administrator, Systems Engineer, Tactical IT Manager, Web Administrator/Webmaster, Web Developer/Designer
  • Key Learning: Learn how to use WIF to externalize authentication and authorization from your application.

Now this one is interesting. This session was supposed to be a 400, but yesterday I discovered that the catalog lists it as 300. Well, it’s a deep dive: hence it may end up being fairly 400ish, depending on the vibe I’ll find in the room.

The key learning is good here, but unfortunately the audience is pretty off. Here “developers” is in the title, hence I won’t even start…

Well, that’s it. As usual, I am super happy to meet you guys at conferences: please do not hesitate to come and chat. Apart from the talk and book signing, you’ll see me hanging around the Identity and Windows Azure booths. See you next week!


Clemens Vasters (@clemensv) offered A Bit Of Service Bus NetTcpRelayBinding Latency Math in a 5/11/2011 post:

From: John Doe
Sent: Thursday, May 12, 2011 3:10 AM
To: Clemens Vasters
Subject: What is the average network latency for the AppFabric Service Bus scenario?
Importance: High

Hi Clemens,

A rough ballpark range in milliseconds per call will do. This is a very important metric for us to understand performance overhead.

Thanks,
John


From: Clemens Vasters
Sent: Thursday, May 12, 2011 7:47 AM
To: John Doe
Subject: RE: What is the average network latency for the AppFabric Service Bus scenario?

Hi John,

Service Bus latency depends mostly on network latency. The better you handle your connections, the lower the latency will be.

Let’s assume you have a client and a server, both on-premise somewhere. The server is 100ms avg roundtrip packet latency from the chosen Azure datacenter and the client is 70ms avg roundtrip packet latency from the chosen datacenter. Packet loss also matters because it gates your throughput, which further impacts payload latency. Since we’re sitting on a ton of dependencies it’s also worth telling that a ‘cold start’ with JIT impact is different from a ‘warm start’.

With that, I’ll discuss NetTcpRelayBinding:

  1. There’s an existing listener on the service. The service has a persistent connection (control channel) into the relay that’s being kept alive under the covers.
  2. The client connects to the relay to create a connection. The initial connection handshake (2) and TLS handshake (3) take about 5 roundtrips or 5*70ms = 350ms. With that you have a client socket.
  3. Service Bus then relays the client’s desire to connect to the service down the control channel. That’s one roundtrip, or 100ms in our example; add 50ms for our internal lookups and routing.
  4. The service then sets up a rendezvous socket with Service Bus at the machine where the client socket awaits connection. That’s just like case 2 and thus 5*100ms=500ms in our case. Now you have an end-to-end socket.
  5. Having done that, we’re starting to pump the .NET Framing protocol between the two sites. The client is thus theoretically going to get its first answer after a further 135ms.

So the handshake takes a total of 1135ms in the example above. That’s excluding all client and service side processing and is obviously a theoretical number based on the latencies I picked here. Your mileage can and will vary, and the numbers I have here are the floor rather than the ceiling of relay handshake latency.

Important: Once you have a connection set up and are holding on to a channel all subsequent messages are impacted almost exclusively by the composite roundtrip network latency of 170ms with very minimal latency added by our pumps. So you want to make a channel and keep that alive as long as you can.

If you use the Hybrid mode for NetTcpRelayBinding and the algorithm succeeds in establishing the direct socket, further traffic roundtrip time can be reduced to the common roundtrip latency between the two sites as the relay gets out of the way completely. However, the session setup time will always be there, and the Hybrid handshake (which follows establishing a session and happens in parallel) may very well take up to 10 seconds until the direct socket is available.

For HTTP the story is similar, with the client side socket (not the request; we’re routing the keepalive socket) with overlaid SSL/TLS triggering the rendezvous handshake.

I hope that helps,

Clemens also provided a brief Talking about Service Bus latency on my way to work podcast about this subject.
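For quick reference, the example figures in Clemens’ reply add up as follows (this simply restates his numbers; it is not an official formula):

// Handshake-latency estimate from the example above (all values in milliseconds).
class RelayHandshakeEstimate
{
    static void Main()
    {
        int clientRtt  = 70;                    // client <-> datacenter round trip
        int serviceRtt = 100;                   // service <-> datacenter round trip

        int clientConnect  = 5 * clientRtt;     // step 2: connection + TLS handshake   = 350
        int controlChannel = serviceRtt + 50;   // step 3: relay notification + routing = 150
        int rendezvous     = 5 * serviceRtt;    // step 4: rendezvous socket setup      = 500
        int firstAnswer    = 135;               // step 5: first framed answer (as given)

        int total = clientConnect + controlChannel + rendezvous + firstAnswer;   // = 1135 ms
        System.Console.WriteLine(total);
    }
}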


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP, Traffic Manager and CDN

• Lydia Leong described new CDN concepts in her 3Crowd, a new fourth-generation CDN post of 5/14/2011:

3Crowd has unveiled its master plan with the recent launch of its CrowdCache product. Previously, 3Crowd had a service called CrowdDirector, essentially load-balancing for content providers who use multiple CDNs. CrowdCache is much more interesting, and it gives life and context to the existence of CrowdDirector. CrowdCache is a small, free, Java application that you can deploy onto a server, which turns it into a CDN cache. You then use CrowdDirector, which you pay for as-a-service on a per-object-request basis, to provide all the intelligence on top of that cache. CrowdDirector handles the request routing, management, analytics, and so forth. What you get, in the end, at least in theory, is a turnkey CDN.

I consider 3Crowd to be a fourth-generation CDN. (I started writing about 4th-gen CDNs back in 2008; see my blog posts on CDN overlays and MediaMelon, on the launch of CDN aggregator Aflexi, and 4th-gen CDNs and the launch of Conviva).

To recap, first-generation CDNs use a highly distributed edge model (think: Akamai), second-generation CDNs use a somewhat more concentrated but still highly distributed model (think: Speedera), and third-generation CDNs use a megaPOP model of many fewer locations (think: Limelight and most other CDNs founded in the 2005-2008 timeframe). These are heavily capital-intensive models that require owning substantial server assets.

Fourth-generation CDNs, by contrast, represent a shift towards a more software-oriented model. These companies own limited (or even no) delivery assets themselves. Some of these are not (and will not be) so much CDNs themselves, as platforms that reside in the CDN ecosystem, or CDN enablers. Fourth-generation CDNs provide software capabilities that allow their customers to turn existing delivery assets (whether in their own data centers, in the cloud, or sometimes even on clients using peer-to-peer) into CDN infrastructure. 3Crowd fits squarely into this fourth-generation model.

3Crowd is targeting three key markets: content providers who have spare capacity in their own data centers and would like to deliver content using that capacity before they resort to their CDN; Web hosters who want to add a CDN to their service offerings; and carriers who want to build CDNs of their own.

In this last market segment, especially, 3Crowd will compete against Cisco, Juniper (via the Ankeena acquisition), Alcatel-Lucent (via the Velocix acquisition), EdgeCast, Jet-Stream, and other companies that offer CDN-building solutions.

No doubt 3Crowd will also get some do-it-yourselfers who will decide to use 3Crowd to build their own CDN using cloud IaaS from Amazon or the like. This is part of what’s generating buzz for the company now, since their “Garage Startup” package is totally free.

I also think there’s potentially an enterprise play here, for those organizations who need to deliver content both internally and externally, who could potentially use 3Crowd to deploy an eCDN internally along with an Internet CDN hosted on a cloud provider, substituting for caches from BlueCoat or the like. There are lots of additional things that 3Crowd needs to be viable in that space, but it’s an interesting thing to think about.

3Crowd has federation ambitions, which is to say: Once they have a bunch of customers using their platform, they’d like to have a marketplace in which capacity-trading can be done, and, of course, also enable more private deals for federation, something which tends to be of interest to regional carriers with local CDN ambitions, who look to federation as a way of competing with the global CDNs.

Conceptually, what 3Crowd has done is not unique. Velocix, for instance, has similar hopes with its Metro product. There is certainly plenty of competition for infrastructure for the carrier CDN market (most of the world’s carriers have woken up over the last year and realize that they need a CDN strategy of some sort, even if their ambitions do not go farther than preventing their broadband networks from being swamped by video). What 3Crowd has done that’s notable is an emphasis on having an easy-to-deploy complete integrated solution that runs on commodity infrastructure resources, and the relative sophistication of the product’s feature set.

The baseline price seemed pretty cheap to me at first, and then I did some math. At the baseline pricing for a start-up, it’s about 2 cents per 10,000 requests. If you’re doing small object delivery at 10 KB per file, ten thousand requests is about 100 MB of content, so delivering 1 GB of such content would cost you 20 cents. That’s not cheap, since that’s just the 3Crowd cost — you still have to supply the servers and the network bandwidth. By comparison, Rackspace Cloud Files CDN-enabled delivery via Akamai is 18 cents per GB for the actual content delivery. Anyone doing enough volume to actually have a full CDN contract and not pushing their bits through a cloud CDN is going to see pricing a lot lower than 18 cents, too.

However, the pricing dynamics are quite different for video. If you’re doing delivery of relatively low-quality, YouTube-like social video, for instance, your average file size is probably more like 10 MB. So 10,000 requests is 100 GB of content, making the per-GB surcharge a mere 0.02 cents ($0.0002). This is an essentially negligible amount. Consequently, the request-based pricing model makes 3Crowd far more cost-effective as a solution for video and other large-file-centric CDNs than it does for small object delivery.

I certainly have plenty more thoughts on this, both specific to 3Crowd, and to the 4th-gen CDN and carrier CDN evolutionary path. I’m currently working on a research note on carrier CDN strategy and implementation, so keep an eye out for it. Also, I know many of the CDN watchers who read my blog are probably now asking themselves, “What are the implications for Akamai, Limelight, and Level 3?” If you’re a Gartner client, please feel free to call and make an inquiry.
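For what it’s worth, the per-GB arithmetic above works out as follows at the baseline price of 2 cents per 10,000 requests (a rough sketch using Lydia’s assumed file sizes):

// Check of the per-GB math above ($0.02 per 10,000 requests; file sizes as assumed in the post).
class CdnPricingCheck
{
    static void Main()
    {
        double pricePer10KRequests = 0.02;                      // dollars

        // Small objects: ~10 KB each, so ~100,000 requests per GB delivered.
        double smallObjectCostPerGB = pricePer10KRequests * 10; // = $0.20 per GB

        // Social video: ~10 MB each, so ~100 requests per GB delivered.
        double videoCostPerGB = pricePer10KRequests / 100;      // = $0.0002, i.e. 0.02 cents per GB

        System.Console.WriteLine("{0} vs {1} dollars per GB", smallObjectCostPerGB, videoCostPerGB);
    }
}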


Wade Wegner (@wadewegner) posted a 00:47:45 Cloud Cover Episode 46 - Windows Azure Traffic Manager Webcast to Channel9 on 5/13/2011:

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show @CloudCoverShow.

In this episode, Steve and Wade discuss the Windows Azure Traffic Manager. Traffic Manager provides several ways to load balance traffic to multiple hosted services. You can choose from three load balancing methods: Performance, Failover, or Round Robin. Watch as Wade demonstrates how to configure Traffic Manager policies and test with your own applications.

In the news:

Grab the source code for the Windows Azure Toolkit for iOS.


The Windows Azure Customer Contact team sent me a “Welcome to the Windows Azure Traffic Manager CTP!” message on 3/11/2011:

Your subscription is now enabled to use Windows Azure Traffic Manager. Thank you for your continuing engagement with Windows Azure and participating in our CTP.

How to get started with Traffic Manager?

Word document: Getting Started with Windows Azure Traffic Manager

What are some quick tips for Traffic Manager?

  • All three load balancing methods (Performance, Failover, and Round Robin) monitor and respond to the health of your hosted services.
  • It can take up to 2 minutes for your policy to be published worldwide.
  • Performance load balancing locates end users based on the IP address of their local DNS server. During the CTP we will be updating this service to improve its accuracy. 

What are the limitations?

  • There is no SLA. Your service may be disrupted.
  • Traffic Manager only works with deployments in the production slot of a hosted service.
  • Policies should be used for testing and learning purposes only.
  • The Windows Azure Platform Management Portal does not display the monitoring status of your hosted services.
  • There is no Traffic Manager API. Use the Management Portal to manage your policies.
  • The Traffic Manager domain will change after CTP from <your-prefix>.ctp.trafficmgr.com to <your-prefix>.trafficmgr.cloudapp.net.

Please note this is not a monitored alias. For additional questions or support please use the resources available here.

Privacy Statement

If you signed up for the Traffic Manager beta in the Windows Azure Portal, you can expect a similar message about a week later. Here’s a screen capture of the Portal’s Traffic Manager startup page:

[screenshot: Traffic Manager startup page in the Windows Azure Portal]


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• David Pallman reported Windows Phone 7 Quickstarts for Windows Azure in a 5/13/2011 post:

Some new quickstart samples for Windows Phone 7 on Windows Azure are now available on codeplex at http://wp7azurequickstarts.codeplex.com/.

These quickstarts assume the use of the recently-released Windows Azure Toolkit for Windows Phone 7. The Windows Azure Toolkit for Windows Phone 7 is a great way to get started on phone + cloud development. One of its most valuable pieces is a demo app that you can get up and running on quickly as a starting point. A number of Windows Azure MVPs have collaborated to create an expanded set of samples derived from that demo app, which we’ve released today on codeplex.

There are currently 3 samples on the codeplex project site, with more to come. The apps are simple--no one will mistake them for polished commercial apps!--but if they coincide with a category of application you need to build they may give you a head start and help shorten your learning curve around combining Windows Phone 7 with Windows Azure.

Corporate Directory Viewer
A corporate directory viewer, with a list view and a photo view. You can call or email a person by clicking on the appropriate action button.

Windows Azure Table Storage holds the information for each person (last name, first name, phone, email, location, etc.).

Windows Azure Blob Storage holds the images for each person.
Contributor: David Pallmann (dpallmann), Neudesic, Windows Azure MVP

Next Event App for a User Group
This application provides information for members of a User Group to see what is coming up in their Next Meeting.

This app demonstrates some Windows Phone techniques for integrating with Windows Azure Blob storage, without having other server-side infrastructure, and without requiring that Azure storage keys be available on the Windows Phone device. The blob storage is loaded (and cached) by the Windows Phone app, demonstrating how to access blob metadata from the phone through HTTP headers, caching data in isolated storage, and updating. The metadata and blob content retrieved are displayed in several ways, including in a web browser control that loads from isolated storage (in case you are offline), and an address provided through metadata is geocoded and shown on a Bing map.
Contributor: Bill Wilder (codingoutloud), Windows Azure MVP

Locator
An app for locating nearby items of interest. The sample implementation is for public restrooms, but you can adapt to any kind of location-based resource.
Windows Azure Table Storage holds the information for each item (in this case a public restroom).

Windows Azure Blob Storage holds the images for each public restroom.
Contributor: Michael S. Collier (mcollier), Neudesic, Windows Azure MVP


Cory Fowler (@SyntaxC4) described Continuous Integration in the Cloud in a 5/13/2011 post:

image At the recent “At the Movies” event put on by ObjectSharp, I demonstrated how to automate deployment of Windows Azure Applications in a TFS Build using a custom TFS Workflow Activity. Automated Deployment to Windows Azure is useful functionality, as it removes a rather tedious, repetitive task from our daily routine.

image There are a number of ways to automate the Deployment Process to Windows Azure. In this entry I’ll outline how you can use TFS Builds or Powershell to Automate Windows Azure Deployments.

Using TFS Build Activities to Deploy to Windows Azure

To begin Automating Windows Azure Deployments today, download the Deploy To Azure open source project on CodePlex. To understand how to use the Deploy to Azure Activities, read the documentation on the CodePlex project site.

Deploying a Windows Azure Application is considered Management Functionality, which means you are required to upload a Management Certificate to your Windows Azure Account. This Management Certificate [can be Self Signed and] is used to Authenticate access to your Windows Azure Portal Remotely [Read: How to Create and Export a Certificate].

Once you have your Management Certificate uploaded to the Windows Azure Portal, you will be able to use the Certificate to interact with the Windows Azure Service Management API. If you wish to build your own set of TFS Build Activities like the ones mentioned above, Microsoft has created some Sample Code which is a .NET Wrapper of the Management API.
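
For readers who want to see what calling the Service Management API directly looks like, here is a minimal C# sketch (not the CodePlex activities themselves) that lists the hosted services in a subscription using a management certificate from the CurrentUser\My store. The subscription ID and certificate thumbprint are placeholders you would replace with your own values.

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class ListHostedServices
{
    static void Main()
    {
        // Placeholders - substitute your own subscription ID and management cert thumbprint
        string subscriptionId = "<your-subscription-id>";
        string thumbprint = "<your-management-cert-thumbprint>";

        // Load the management certificate from the CurrentUser\My store
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var cert = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false)[0];
        store.Close();

        // List Hosted Services operation of the Service Management API
        var request = (HttpWebRequest)WebRequest.Create(
            "https://management.core.windows.net/" + subscriptionId + "/services/hostedservices");
        request.Headers.Add("x-ms-version", "2010-10-28"); // required API version header
        request.ClientCertificates.Add(cert);               // the certificate authenticates the call

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // XML describing the hosted services
        }
    }
}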

Using Powershell to Deploy to Windows Azure

If you’re an IT Pro or a Developer who is into scripting, it is possible to use PowerShell to deploy to Windows Azure. Ryan Dunn, while he was in the Technical Evangelist role at Microsoft [he recently moved to Cumulux], created a set of Commands in the Azure Management Tools Snap-in which allow you to leverage the Windows Azure Service Management API using PowerShell. Since Ryan’s departure, Wade Wegner has taken over the project and has been maintaining the updates to the CommandLets with each change of the Windows Azure SDK.

PowerShell is very powerful and I can see it becoming a very important part of Windows Azure Development. Just to give the Windows Azure Commandlets a try, I created a re-usable PowerShell script that will deploy an application to Windows Azure. [This script needs to be executed from the same directory as the ServiceConfiguration.cscfg file, or modified to accept the path of the Service Configuration file as an argument.]

#Gracefully Add the Windows Azure Service Management SnapIn
Add-PSSnapin -Name AzureManagementToolsSnapIn -ErrorAction SilentlyContinue

#Collect Required Variables 
$serviceName = $args[0]
$slot = $args[1]
$subscriptionId = $args[2]
$certThumb = $args[3]
$package = $args[4]
$label = $args[5]

#Retrieve the Management Cert from the Current User Personal Cert Store
$cert = Get-Item cert:\CurrentUser\My\$certThumb

#Find the Current Directory of the Executing Script
$fullPathIncFileName = $MyInvocation.MyCommand.Definition
$currentScriptName = $MyInvocation.MyCommand.Name
$currentExecutingPath = $fullPathIncFileName.Replace($currentScriptName, "")

#Deploy the New Service
Get-HostedService $serviceName -SubscriptionId $subscriptionId -Certificate $cert |
Get-Deployment $slot | 
New-Deployment -package $package -configuration "$currentExecutingPath\ServiceConfiguration.cscfg" -label $label

#Wait for the Service to Upload
sleep 30

#Start the new Deployment
Set-DeploymentStatus -serviceName $serviceName -subscriptionId $subscriptionId -Certificate $cert -status running -slot $slot

As you can see, the New-Deployment command will only upload your deployment; it is necessary to execute a second command, Set-DeploymentStatus, in order to start your Application after it’s been deployed to the Cloud.

Conclusion

Automating deployment of your applications into Windows Azure is a great way to take repetitive, time-intensive tasks out of your day-to-day schedule. Whether you use TFS or another source code repository and automated build agent, automated deployment is available to you.


The Windows Azure Team posted Introducing Real World Windows Azure Guidance on 5/13/2011:

imageWe’re introducing a new section in the Windows Azure technical library called Real World Windows Azure Guidance, which features articles written by MVPs and other members of the community. The articles provide useful information based on real-world experience with Windows Azure. More guidance will be published soon, so bookmark this page and return later to see what’s new.

The first article in this section, Real World: Startup Lifecycle of a Windows Azure Role, was written by Cory Fowler, a Windows Azure MVP. Cory’s article explains how role startup works in Windows Azure so that you know what to expect when you deploy an application.


Andy Cross (@andybareweb) explained Azure Howto: Programmatically modify web.config on WebRole Startup in a 5/13/2011 post:

image In a Windows Azure web role, your web application’s configuration is subtly different from a standard ASP.NET web application. Since the platform is designed for scalability, the web.config file is no longer the primary way of getting and setting configuration: an update to web.config intrinsically requires an appPool restart, and in a multi-instance solution it would be a challenge to ensure that each instance was responsive before moving on to the next web.config to update. Instead, you should consider using RoleEnvironment.GetConfigurationSettingValue(“key”) rather than ConfigurationManager.AppSettings["key"].
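
As a minimal sketch of the difference (assuming a hypothetical setting named "SmtpServer" that is defined both in ServiceConfiguration.cscfg and in web.config’s appSettings), a helper could fall back to web.config only when the code is running outside the Azure fabric:

using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class SettingsHelper
{
    public static string GetSmtpServer()
    {
        if (RoleEnvironment.IsAvailable)
        {
            // Read from the service configuration, which can be changed in the
            // portal without redeploying the package or touching web.config
            return RoleEnvironment.GetConfigurationSettingValue("SmtpServer");
        }

        // Fall back to web.config when running outside the Azure fabric
        return ConfigurationManager.AppSettings["SmtpServer"];
    }
}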

image But what if you are stuck with an ASP.NET website that uses web.config that you cannot or may not modify, and you need some Azure specific runtime variables stored in web.config that aren’t available before deployment, such as InternalEndpoint Port number or DeploymentId? This blog shows you a solution.

It should be noted that I consider this approach potentially dangerous and open to misuse. To be clear, I only consider this robust if used during WebRole Startup (OnStart in RoleEntryPoint or in a Startup Task) which is a point in the Web Role execution lifecycle where it is not capable of servicing requests anyway.

When faced with a challenge like this, it is important to realise why it’s a little difficult to achieve, and why there’s no simple interface for it in the Microsoft Windows Azure SDK. Every modification and save of a web.config causes the ASP.NET application pool it belongs to to restart. This is a built-in mechanism for ensuring consistency of an application’s configuration settings. If you programmatically modify a web.config every 10 seconds, then the application pool restarts every 10 seconds. Every restart causes the termination of executing processes and threads, meaning your users will get dropped connections and Service Unavailable messages. In a multi-instance environment this causes a strange experience where a user’s request succeeds or fails depending on whether the instance servicing it happens to be restarting at that moment: their friend on the computer next to them, doing the same thing, may be absolutely fine for any given page request while they get a Service Unavailable message. This sort of problem arises when the load balancer on top of the Web Role is bypassed in this way. It is not a robust solution during execution.

That said (if I haven’t put you off enough yet!) I do think there is a safe point in time where the modifications can be made. This is during application startup, where you have the opportunity to “prime” your appSettings, connectionStrings etc and make other changes to your role that were not possible before you deployed. This will still cause an application pool restart, but since no users are yet able to consume the web application, it is of little consequence.

Methodology

In our example we seek to add two settings to AppSettings – the port of an internal endpoint and the deploymentId of the running instance. These are both accessible using the SDK and appsettings isn’t a very useful place for them, but in our example we must assume a static codebase that already somehow requires these settings.

Firstly we need to create a new Cloud Project in Visual Studio.

Blank Cloud Project


This project then contains the place where we will be doing most of our work, WebRole.cs:

This is where we will do most of our work


Firstly we need to add a reference to Microsoft.Web.Administration.dll. This can be found in C:\Windows\SysWOW64\inetsrv\Microsoft.Web.Administration.dll. Make sure the assembly is set to “CopyLocal” = True in its properties.

Add this reference


Next we should prepare our demonstration by adding an Internal Endpoint to our WebRole by modifying the ServiceDefinition.csdef as such:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="ProgrammaticWebConfig" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
<WebRole name="WebRole1">
<Sites>
<Site name="Web">
<Bindings>
<Binding name="Endpoint1" endpointName="Endpoint1" />
<Binding name="InternalEndpoint1" endpointName="InternalEndpoint1" />
</Bindings>
</Site>
</Sites>
<Endpoints>
<InputEndpoint name="Endpoint1" protocol="http" port="80" />
<InternalEndpoint name="InternalEndpoint1" protocol="http" />
</Endpoints>
<Imports>
<Import moduleName="Diagnostics" />
</Imports>
</WebRole>
</ServiceDefinition>

Note that we haven’t specified a port number for our InternalEndpoint – this is up to the Azure Fabric to calculate, and so we can’t know this before deployment. This is a good use case for this approach.

Next we modify Default.aspx to output our ApplicationSettings, which to start with have no value:

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
CodeBehind="Default.aspx.cs" Inherits="WebRole1._Default" %>
<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
<h2>
Welcome to Web.Config manipulation!
</h2>
<p><%: ConfigurationManager.AppSettings["deploymentId"] %></p>
<p><%: ConfigurationManager.AppSettings["internalEndpointPort"] %></p>
</asp:Content>

Once we have this we need to use the ServerManager class from Microsoft.Web.Administration in our WebRole.cs OnStart method to add in the new configuration values. Here is the code:

First add the using namespace statement: 

using Microsoft.Web.Administration;

Then add the logic to your OnStart method: 

public override bool OnStart()
{
    using (var server = new ServerManager())
    {
        // get the site's web configuration
        var siteNameFromServiceModel = "Web"; // TODO: update this site name for your site.
        var siteName =
            string.Format("{0}_{1}", RoleEnvironment.CurrentRoleInstance.Id, siteNameFromServiceModel);
        var siteConfig = server.Sites[siteName].GetWebConfiguration();

        // get the appSettings section
        var appSettings = siteConfig.GetSection("appSettings").GetCollection();

        AddElement(appSettings, "deploymentId", RoleEnvironment.DeploymentId);
        AddElement(appSettings, "internalEndpointPort", RoleEnvironment.CurrentRoleInstance.InstanceEndpoints
            .First(t=>t.Key=="InternalEndpoint1").Value
            .IPEndpoint.Port.ToString());

        server.CommitChanges();
    }
    return base.OnStart();
}

For clarity I provided an AddElement method that makes adding settings safer, in case they happen to exist already. In the dev fabric this is often the case, but in Azure the web.config shouldn’t be touched more than once, so the settings are quite unlikely to exist.

private void AddElement(ConfigurationElementCollection appSettings, string key, string value)
{
    if (appSettings.Any(t => t.GetAttributeValue("key").ToString() == key))
    {
        appSettings.Remove(appSettings.First(t => t.GetAttributeValue("key").ToString() == key));
    }

    ConfigurationElement addElement = appSettings.CreateElement("add");
    addElement["key"] = key;
    addElement["value"] = value;
    appSettings.Add(addElement);
}

As you can see, the implementation is quite concise, getting the AppSettings Element from Web.Config and then adding elements to it, before committing changes. When you commit this, you may notice Visual Studio prompting you to reload web.config (if you have it open) – this is a good sign!

As the application loads, you can see the result as:

Loading the values from config!


Source code is at the end of this post.

Caution

If you use this solution and don’t use one-time deployment provisioning, instead scaling up and down at will, you may find inconsistent application settings if you choose to load them from RoleEnvironment.GetConfigurationSettingValue(“key”) at startup. Imagine the scenario with a RoleEnvironment “version” setting at version 1, and a startup task that puts this into appSettings:

  1. Deploy 50 instances, in each ConfigurationManager.AppSettings["version"] always = 1
  2. Change RoleEnvironment “version” to 2
  3. Increase the instance count by an extra 10 instances; in 1/6 of the roles ConfigurationManager.AppSettings["version"] is 2, in 5/6ths it is 1.
Source

Source code is here: ProgrammaticWebConfig

Postscript

A special thanks to Neil Mackenzie, who pointed me in the right direction. This is derived from the recommended (now outdated) approach to machine key synchronisation: http://bit.ly/gLKRCs.


Yves Goeleven (@yvesgoeleven) explained Building Global Web Applications With the Windows Azure Platform – Monitoring in an 5/12/2011 post:

image In the fourth installment of the series on building global web applications I want to dive a bit deeper into monitoring your instances, as measuring and monitoring is key to efficient capacity management. The goal of capacity management should be to optimally use the instances that you have; ideally, all aspects of your instances are utilised at about 80% before you decide to pay more and scale out.

Windows Azure offers a wide range of capabilities when it comes to monitoring, by means of the WAD (Windows Azure Diagnostics) service, which can be configured to expose all kinds of information about your instances, including event logs, trace logs, IIS logs, performance counters and more. The WAD service can be configured both from code and by means of a configuration file that can be included in your deployment. See http://msdn.microsoft.com/en-us/library/gg604918.aspx for more details on this configuration file.

Personally I prefer using the configuration file for anything that is not specific to my code, like machine-level performance counters, but I do use code for things like trace logs. To enable a specific performance counter on all your instances, specify it in the PerformanceCounters section of the configuration file, including the rate at which the counter should be collected.

<PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT1M">
    <PerformanceCounterConfiguration counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT5S" />
    <PerformanceCounterConfiguration counterSpecifier="\Memory\% Committed Bytes In Use" sampleRate="PT5S" />
</PerformanceCounters>

Note that I only collect processor time and memory consumption from the instances; bandwidth throttling is performed at the network level, not the instance level, so you cannot collect any valuable data for that metric.

The diagnostics manager will transfer this information to the storage account that you specified in your service configuration file under the key Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString, at the rate specified in the scheduledTransferPeriod attribute of the PerformanceCounters element.
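
Although Yves prefers the configuration file for counters, the same two counters can also be enabled from code. Here is a minimal sketch for comparison, assuming the standard Diagnostics plug-in connection string name shown above; it would run in the role’s OnStart method:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Same counters as the XML configuration above, sampled every 5 seconds
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(5)
        });
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Memory\% Committed Bytes In Use",
            SampleRate = TimeSpan.FromSeconds(5)
        });

        // Transfer the collected counters to the storage account once per minute
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
        return base.OnStart();
    }
}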

Now, I admit, today the Windows Azure management tooling offered by MS is a bit lacking in terms of visualising diagnostics and monitoring information. But there is a third-party product, Diagnostics Manager by Cerebrata, that covers this gap very well. Here you can see how Diagnostics Manager visualises the memory and CPU usage in my instance.

Note that the consumption rates are very low now – only 20% of memory and just a few percent of CPU were effectively used at the time of measurement. This is because I upscaled to a small web role in the meantime and wasn’t executing any tests when monitoring the instance.

So, now that you know how to monitor your instances efficiently, it is time to start filling up the free capacity that is sitting idle in your machines, but that is for next time, when I will discuss the holy grail of capacity management: dynamic workload allocation.

I agree with Yves’ comments about the Cerebrata Azure Diagnostics Manager.

Full disclosure: Cerebrata provided me with no-charge licenses for their Azure Diagnostics Manager (ADM) and Cloud Storage Studio.


Joseph Fultz described Load Balancing Private Endpoints on Worker Roles in an article for the 5/2011 issue of MSDN Magazine. From the introduction:

image Early in January, David Browne and I worked on a solution to load balance internal service points on Windows Azure Worker Roles. Generally, service endpoints in Worker Roles are published, so the load balancer can take care of balancing the calls across the instances. However, the customer with whom we were working needed endpoints that were not publicly addressable. In addition, they didn’t want to take on the latency of some type of queuing operation. How would we address this requirement?

image During an internal event meant for us to explore various technologies and solutions, David and I came up with two different approaches for solving the challenge. For this month’s column, I’ll cover my design considerations and the bits of code used to prototype one of these approaches.

Not wanting to inadvertently bottleneck the final solution, we ruled out a software proxy-style solution. Instead, I chose a software mechanism that will provide a valid IP for service calls and the calling node will cache the endpoint for a given duration to reduce the overhead of endpoint resolution. The three primary strategies I considered were:

  • Static assignment: assign a service endpoint to each calling node
  • Centralized control: one node tracks and controls assignment of each calling node
  • Cooperative control: allow any given node to indicate if it’s available to service calls

Each of these choices brings with it a set of benefits and a set of disadvantages.

Static assignment has the upside of being simple to implement. If the mapping of units of work between the caller and the worker are equal, then this might be a feasible approach for balancing because the load-balancing solution for the Web Role will by extension balance the calls on the Worker Role.

The two primary disadvantages are that it doesn’t address high availability for the service, nor does it address any discrepancy in load between the caller and the service node. If I attempt to morph the static assignment solution to address the problems, the solution almost assuredly starts to move toward either centralized or cooperative control. …

image

image

Read the entire article here.

Joe is an architect at the Microsoft Technology Center in Dallas.


The Windows Azure Team posted Real World Windows Azure VIDEO: New South Wales Education Department Hosts Online Science Testing in The Cloud with Windows Azure on 5/12/2011:

image

The New South Wales Department of Education (DET) is the largest department of education in the southern hemisphere. They wanted to improve the way they conducted Year 8 science tests to replicate what students did in the laboratory, and believed interactive online science testing could test a wider range of skills than just pure scientific knowledge. However, DET estimated that hosting an online test for 65,000 students simultaneously would require an A$200,000 investment in server infrastructure. With the help of partner Janison Solutions, DET launched its Essential School Science Assessment (ESSA) online exam. In 2010, they trialed an online science exam on Windows Azure that went out to 65,000 students in 650 schools simultaneously. Paying A$40 per hour for 300 Windows Azure servers, DET estimated the cost of hosting the online exam for one day was just $500.

Kate O'Donnell, Director EMSAD, New South Wales DET, Australia

Click here to see the full video case study.  Click here to learn how other customers are using Windows Azure.


Emil Velinov posted Windows Azure-Hosted Services: What to expect of latency and response times from services deployed to the Windows Azure platform to the AppFabric CAT blog on 5/12/2011:

image I’ve stayed quiet in the blogosphere for a couple of months (well, maybe a little longer)…for a good reason – since my move from Redmond to New Zealand as a regional AppFabric CAT resource for APAC, I’ve been aggressively engaging in Azure-focused opportunities and projects, from conceptual discussions with customers to actual hands-on development and deployment.

image By all standards, New Zealand is one of the best tourist destinations in the world; however, the country doesn’t make it to the top of the list when it comes to Internet connectivity and broadband speeds. I won’t go into details and analysis of why that is, but one aspect of it is the geographical location – distance does matter in that equation too. Where am I going with this, you might be wondering? Well, while talking to customers about cloud in general and their Windows Azure platform adoption plans, one of the common concerns I’ve consistently heard from people is the impact that connectivity will have on their application, should they make the jump and move to a cloud deployment. This is understandable, but I always questioned how much of this was justified and backed up by factual data and evidence matched against real solution requirements. The general perception seems to be that if it is anything outside of the country and remote, it will be slow…and yes – New Zealand is a bit of an extreme case in that regard due to the geographical location. Recently I came across an opportunity to validate some of the gossip.

image This post presents a real-life solution and test data to give a hands-on perspective on the latency and response times that you may expect from a cloud-based application deployed to the Windows Azure platform. It also offers some recommendations for improving latency in Windows Azure applications.

I want to re-emphasize that the intent here is to provide an indication of what to expect from Azure-hosted services and solutions in terms of performance, rather than provide hard baseline numbers! Depending on exactly how you are connected to the Internet (think a dial-up modem, for example, and yes – it is only an exaggeration to make the point!), what other Internet traffic is going in and out of your server or network, and a bunch of other variables, you may observe vastly different connectivity and performance characteristics. Best advice – test before you draw ANY conclusions – either positive or not so positive. :-)

The Scenario

Firstly, in order to preserve the identity of the customer and their solution, I will generalize some of the details. With that in mind, the goal of the solution was to move an on-premises implementation of an interactive “lookup service” to the Windows Azure platform. When I say interactive, it means performing lookups against the entries in a data repository in order to provide suggestions (with complete details for each suggested item), as the user types into a lookup box of a client UI. The scenario is very similar to the Bing and Google search text boxes – you start typing a word, and immediately the search engine provides a list of suggested multi-word search phrases based on the characters already typed into the search box, almost at every keystroke.

The high level architecture of the lookup service hosted on the Windows Azure platform is depicted in the following diagram:

Key components and highlights of the implementation:

  • A SQL Azure DB hosts the data for the items and provides a user defined function (UDF) that returns items based on some partial input (a prefix)
  • A Windows Azure worker role, with one or more instances hosting a REST WCF service that ultimately calls the “search” UDF in the database (a minimal sketch of such a service contract follows this list)
  • An Azure Access Control Service (ACS) configuration used to 1) serve as an identity provider (BTW, it was a quick way to get the testing under way; it should be noted that ACS is not designed as a fully-fledged identity provider and for a final production deployment we have ADFS, or the likes of Windows Live and Facebook to play that role!), and 2) issue and validate OAuth SWT-based tokens for authorizing requests to the WCF service
  • The use of the Azure Cache Service was also in the pipeline (greyed out in the diagram above) but at the time of writing this article this extension has not been implemented yet.
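
Because the customer details are generalized, the service and operation names below are hypothetical; the sketch simply illustrates the shape of a REST WCF lookup contract that a worker role instance could self-host on one of its endpoints:

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface ILookupService
{
    // e.g. GET http://<host>:<port>/lookup/suggestions?prefix=micro
    [OperationContract]
    [WebGet(UriTemplate = "suggestions?prefix={prefix}", ResponseFormat = WebMessageFormat.Json)]
    List<string> GetSuggestions(string prefix);
}

public class LookupService : ILookupService
{
    public List<string> GetSuggestions(string prefix)
    {
        // In the real service this would call the SQL Azure "search" UDF;
        // a canned list keeps the sketch self-contained.
        return new List<string> { prefix + "soft", prefix + "phone" };
    }
}

// In the worker role the service would be hosted with something like:
//   var host = new WebServiceHost(typeof(LookupService), new Uri(baseAddress));
//   host.Open();
// where baseAddress is built from RoleEnvironment.CurrentRoleInstance.InstanceEndpoints.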

To come back to the key question we had to answer – could an Azure-hosted version of the implementation provide response times that support lookups as the user starts typing in a client app using the service, or was the usability going to be impacted to an unacceptable level?

Target Response Times

Based on research the customer had done and from the experience they had with the on-premises deployment of the service, the customer identified the following ranges for the request-response round-trip, measured at the client:

  • Ideal – < 250ms
  • May impact usability – 250-500ms
  • Likely to impact usability – 500ms – 1 sec
  • “No-go” – > 1 sec

Also, at the customer’s request, we had to indicate the “Feel” factor – putting the response time numbers aside, does the lookup happen fast enough for a smooth user experience?

With the above in mind, off to do a short performance lab on Azure!

The Deployment Topology

After a couple of days working our way through the migration of the existing code base to SQL Azure, Worker Role hosting, and ACS security, we had everything ready to run in the cloud. I say “we” because I was lucky to have Thiago Almeida, an MVP from Datacom (the partner for the delivery of this project), working alongside me.

In order to find the optimal Azure hosting configuration in regards to performance, we decided to test a few different deployment topologies as follows:

  • Baseline: No “Internet latency” – the client as well as all components of the service run in the same Azure data center (DC) – either South Central US or South East Asia. In fact, for test purposes, we loaded the test client into the same role instance VM that ran the WCF service itself:

    image

  • Deployment 1: All solution components hosted in the South Central US

  • Deployment 2: All solution components hosted in the South East Asia data center (closest to New Zealand)

  • Deployment 3: Worker role hosting the WCF service in South East Asia, with the SQL Azure DB hosted in South Central US

  • Deployment 4: Same as Deployment 3 with reversed roles between two DCs

Notes:

1) In the above topologies except the Baseline, the client, a Win32 app that we created for flexible load testing from any computer, was located in New Zealand; tests were performed from a few different ISPs and networks, including the Microsoft New Zealand office in Auckland and my home (equipped with a “blazing fast” ADSL2+ connection – for the Kiwi readers, I’m sure you will appreciate the healthy dose of sarcasm here)

2) Naturally, Deployment 1 and 2 represent the intended production topology, while all other variations were tested to evaluate the network impact between a client and the Azure DCs, as well as between the DCs themselves.

  • Deployment 5: For this scenario, we moved the client app to an Azure VM in South East Asia, while all of the service components (SQL Azure DB, worker role, ACS configuration) were deployed to South Central US.

    image

The Results

Here we go – this is the most interesting part. Let’s start with the raw numbers, which I want to highlight are averages of tests performed from a few different ISPs and corporate networks, and at different times over a period of about 20 days:

I can’t resist quoting the customer when we presented these numbers to them: “These [the response times] are all over the place!” Indeed, they are, but let’s see why that is, and what it all really means:

  • For the Baseline test, with the client (load test app) running within the same Azure DC as all service components, the response times do not incur any latency from the Internet – all traffic is between servers within the same Azure DC/network, hence the ~65ms average time. This is mostly the time it takes for the WCF service to do its job under the best possible network conditions.
  • Response times for Deployment 1 & 2 vary from 170ms up to about 250-260ms with averages for South Central US and South East Asia of 225ms and 184ms respectively. As mentioned earlier, these two deployments are the intended production service topology, with all service components as close to each other as possible, within the same Azure DC. The results also show that from New Zealand, the South East Asia DC will probably be the preferred choice for hosting; however the difference between South East Asia and South Central is not really significant – this will also be country and location specific. What is more important though is that the response times from both data centers are well within the Ideal range for the interactive lookup scenario. Needless to say – the customer was very pleased with this finding! And so were Thiago and I! :-)
  • Next we have Deployment 3 & 4 (mirrors of each other) where we separate the WCF service implementation from the database onto different Azure DCs. Wow – suddenly the latency goes up to 660-690ms! Initially when I looked at this, and without knowing much about the service implementation (remember we migrated an existing code base from the on-premises version to Azure), I thought it was either a very slow network between the two Azure DCs or something in the ADO.NET/ODBC connection/protocol that was slowing down the queries to the SQL Azure DB hosted in the other DC. I was wrong with both assumptions – it turned out that the lookup service code wasn’t querying the DB just once, meaning that this scenario was incurring cross-DC Internet latency multiple times for a single request to the WCF service – comparing apples to oranges here.

    Lesson learned: The combination of implementation logic and the deployment topology of your solution components may have a dramatic impact. Naturally – try to keep everything in your solution in the same DC. There are of course valid scenarios where you may need to run some components in different data centers (I know of at least one such scenario from a global ISV from the Philippines) – if this is your case too, be extremely careful to design for the minimum number of cross-DC communications possible. A great candidate for such scenarios is the Azure Cache Service, which can cache information that has already been transferred from one DC to the other as part of a previous request.

  • Deployment 5 as a scenario was included in the test matrix mostly based on the interest sparked by the significant response times observed while testing Deployment 3 and 4. The idea was to see how fast the network between two of the Azure DCs is. The result – well, at the end of the day it is just Internet traffic; and as such it is subject to exactly the same rules as the traffic from my home to either of the DCs, for example. In fact, the tests show that Internet traffic from New Zealand (ok, from my home connection) to either of the DCs is marginally faster than the traffic between the South East Asia and South Central US DCs. This surprised me a little initially but when you think about it – again, it’s just Internet traffic over long distances.
Traffic Manager

One other topology that came late into the picture but is well worth a paragraph on its own, is a “geo-distributed” deployment to two DCs, fronted by the new Azure Traffic Manager service recently released as a Community Technology Preview (CTP). In this scenario, data is synchronized between the DCs using the Azure Data Sync service (again in CTP at the time of writing). The deployment is depicted in the diagram below:

Requests from the client are routed to one of the DCs based on the most appropriate policy as follows:

(If you’ve read some of my previous blog articles, you’d already know that I take pride in my artistic skill. The fine, subtle touches in the screenshot above are a clear, indisputable confirmation of that fact!)

In our case we chose to route requests based on best performance, which is automatically monitored by the Azure Traffic Manager service. BTW, if you need a quick introduction to Traffic Manager, this is an excellent document that covers most of the points you’ll need to know. The important piece of information here is that for Deployment 3 and 4, combined with Traffic Manager sitting in front, the impact on the response time was only about 10-15ms. Even with this additional latency our response times were still within the Ideal range (<250ms) for the lookup service, but now enhanced with the “geo-distribution and failover” capability! Great news and a lot of excitement around the board room table during the closing presentation with our customer!

Important Considerations and Conclusions

Well, since this post is getting a little long I’ll keep this short and sweet:

  • If possible, keep all solution components within the same Azure Data center
  • In a case where you absolutely need to have a cross-DC solution – 1) minimize the number of calls between data centers through smart design (don’t get caught!), and 2) cache data locally using the Azure Cache service (a minimal caching sketch follows this list)
  • Azure Traffic Manager is a fantastic new feature that makes cross-DC “load balancing” and “failover” a breeze, at minimal latency cost. As a pleasantly surprising bonus, it’s really simple – so, use it!!!
  • Internet traffic performance will always have variations to some extent. All our tests have shown that with the right deployment topology, an Azure-hosted service will deliver the performance, response times, and usability for as demanding scenarios as interactive “on-the-fly” data lookups…even from an ADSL2+ line down in New Zealand! And to come back to the customer “Feel” requirement – it scored 5 stars. :-)
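
To make the caching recommendation concrete, here is a minimal sketch. It assumes the Windows Azure AppFabric Caching client (Microsoft.ApplicationServer.Caching) is configured in the role’s configuration file, and QueryOtherDataCenter is a hypothetical stand-in for the expensive cross-DC call you want to avoid repeating:

using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

public class LookupCache
{
    // The factory reads the dataCacheClient section from app.config/web.config
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    private static readonly DataCache Cache = Factory.GetDefaultCache();

    public List<string> GetSuggestions(string prefix)
    {
        string key = "lookup:" + prefix;
        var cached = Cache.Get(key) as List<string>;
        if (cached != null)
        {
            return cached; // served locally, no cross-DC round trip
        }

        List<string> results = QueryOtherDataCenter(prefix);  // hypothetical cross-DC call
        Cache.Put(key, results, TimeSpan.FromMinutes(5));      // keep it close for 5 minutes
        return results;
    }

    private List<string> QueryOtherDataCenter(string prefix)
    {
        // Placeholder for the remote SQL Azure / service call in the other DC
        return new List<string>();
    }
}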

Thanks for reading!

Author: Emil Velinov

Contributors and reviewers: Thiago Almeida (Datacom), Christian Martinez, Jaime Alva Bravo, Valery Mizonov, James Podgorski, Curt Peterson


Simon Guest posted a Microsoft releases Windows Azure Toolkit for iOS review to InfoQ on 5/11/2011:

image Following on from the recent release of the Windows Azure Toolkit for Windows Phone 7, Microsoft announced on May 9, 2011 that they were making available a version for Apple’s iOS, and are planning to release an Android version within the next month.

image Jamin Spitzer, Senior Director of Platform Strategy at Microsoft, emphasized that the primary aim of the toolkits is to increase developer productivity when creating mobile applications that interact with the cloud.

Using the toolkits, developers can use the cloud to accelerate the creation of applications on the major mobile platforms. Companies, including Groupon, are taking advantage to create a unified approach to cloud-to-mobile user experience.

image Microsoft is making the library, sample code, and documentation for the iOS version of the toolkit available on GitHub under the Apache License. With XCode’s native support for GitHub repositories, this means that developers can more easily access the toolkit in their native environment.

What can developers expect from the v1.0 release of the iOS toolkit?

This first release of the toolkit focuses on providing developers easy access to Windows Azure storage from native mobile applications. Windows Azure has three different storage mechanisms:

  • Blob storage - used for storing binary objects, such as pictures taken on the phone.
  • Table storage – used for storing structured data in a scalable way, such as user profiles or multiple high score tables for a game.
  • Queues – a durable first-in, first-out queuing system for messages. For example, this could be used to pass messages between devices.

All of the above services are exposed via a REST API; however, accessing these natively from the phone can be challenging, especially for developers who are new to iPhone development. The toolkit wraps the necessary REST calls into a native library that not only abstracts the underlying networking elements, but also reduces many operations (such as uploading a photo to Azure blob storage) to just a few lines of code.

Wade Wegner, Windows Azure Technical Evangelist, has put together a walkthrough for the toolkit, showing how the Windows Azure storage services can be accessed in two ways:

  • Directly from the client, using an account name and access key obtained from the Windows Azure portal.
  • Via a proxy service, for those not wanting to store their account name and access key on the device. The proxy service works by using an ASP.NET authentication provider to validate a set of credentials, and then creating a shared key that can be used to access the storage for the duration of the session (an illustrative server-side sketch follows this list).
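
The toolkit’s proxy source is on GitHub; as a rough illustration only (not the toolkit’s actual code), a server-side proxy written with the .NET storage client could hand an authenticated caller a time-limited Shared Access Signature instead of the account key:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class SasIssuer
{
    // connectionString would come from the role's service configuration
    public string GetContainerSas(string connectionString, string containerName)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference(containerName);

        // Shared access signature valid for one hour, read/write only
        string sas = container.GetSharedAccessSignature(new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read | SharedAccessPermissions.Write,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        });

        // The phone appends this query string to blob URIs instead of storing the account key
        return sas;
    }
}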

In his tutorial, Wegner shows how to create an XCode 4 project from scratch, import the library, and create code samples to index blob and table storage.

Future additions to the toolkit

As well as an Android version of the toolkit slated for June, Wegner also expands on additional features in other versions of the device toolkits, including:

  • Support for Windows Azure ACS (Access Control Service) - providing an identity mechanism for developers looking to add authentication to their mobile apps, including federation with Facebook connect and other providers.
  • Push notifications – the ability to construct and send Push notifications from an Azure role to a registered device.

Although the device toolkits are in their early stages, developers creating mobile applications with a need to interact with Windows Azure storage and other services will likely find the toolkits a useful addition.


<Return to section navigation list> 

Visual Studio LightSwitch

Mary Jo Foley (@maryjofoley) claims TechEd North America (#TechEd_NA, #msteched) 2011 clairvoyance re Visual Studio LightSwitch in her The Microsoft week ahead: What's on for TechEd 2011? post of 5/15/2011 for ZDNet’s All About Microsoft blog:

image * Seems to me that TechEd might be a good venue for Microsoft to unveil the final version of Visual Studio LightSwitch, as well. LightSwitch, codenamed “KittyHawk,” is a rapid-application-development (RAD) tool targeted at fledgling coders interested in building business applications. Microsoft released the first public beta of LightSwitch in August 2010, and the second beta in March 2011.

Seems to me LightSwitch Beta 2 hasn’t been around long enough to serve as a release (or even a release candidate) at this point.

Click here to read all of Mary Jo’s TechEd prognostications.


• Michael Washington (@adefwebserver) explained This Is How LightSwitch Does MVVM in a 5/13/2011 post:

image This article, Silverlight View Model Style: An (Overly) Simplified Explanation, explains what MVVM is. Basically:

  • Model – The Data
  • View Model – Collections, Properties, and Commands
  • View – The UI (User Interface)

image This article covers some of the problems in implementing MVVM, mostly that it is verbose, time-consuming, and difficult for many developers to understand and implement. That is not to say that MVVM is not a superior pattern once you have mastered it, but using a tool such as LightSwitch greatly eases its use.

image

This article, Simple Silverlight Configurator/Pivot (View Model/MVVM), demonstrates a simple MVVM application that displays a Silverlight Pivot Viewer [see below post].

image

The article shows the View Model (on the right side of the image above), and how it is bound to the View.

LightSwitch IS An MVVM “Tooling” Framework

There are many MVVM Frameworks out there to assist in creating MVVM code. However, LightSwitch is different, because it is an “MVVM Tooling Framework”. Do not discount the importance of a tool when using a Framework. Building houses has established patterns and frameworks, but try getting that nail into the wood without a hammer.

With LightSwitch, you can use Silverlight Custom Controls for the View… that’s it. Everything else, the Model and the View Model, is created inside the Visual Studio LightSwitch “tool”. You do not want to fight this. When creating the Pivot Viewer in LightSwitch, one would be tempted to put the sliders, and the people display, on the same Silverlight Custom Control. However, this would mean that the Silverlight Custom Control would be trying to handle logic that is supposed to be handled by the View Model (for example, changing the age from 18 to 50).

image

With LightSwitch, you create the View Model by placing Properties, Collections, and Commands (called Methods in LightSwitch) on the left-hand side of the Screen designer. You then bind UI elements (either the built-in LightSwitch controls, control extensions, or Silverlight Custom Controls) to those Properties, Collections, and Commands.

image

LightSwitch easily guides you through creating the View Model. When you click the button to Add Data Item…

image

Your only choice is to add a Property, Collection, or Command (LightSwitch calls a Command a Method).

The LightSwitch Pivot Sample

image

Let’s walk through the steps required to implement the Silverlight Pivot application inside LightSwitch (we will not have the animations of the original project because they use Behaviors and you cannot use Behaviors with LightSwitch).

The Model

image

The Model is the data. In LightSwitch you create the Model by simply creating tables.

However, in this example, we have a situation where a “Gender” dropdown needs to run a different query depending on whether the user selects “M”, “F”, or “All”.

That means a simple query, or “filter”, will not work. We need “if… then” logic in our query. When you are faced with this situation, you need to use a PreProcessQuery. To use a PreProcessQuery, you create a query on your table.

image

We right-click on the People table we just created and select Add Query.

image

We add a parameter called Gender, and select PreProcessQuery from the Write Code dropdown.

We write the following code:

    partial void PivotQuery_PreprocessQuery(string Gender, ref IQueryable<Person> query)
    {
        // If Gender is not All then filter it
        if (Gender != "All")
        {
            query = query.Where(x => x.Gender == Gender);
        }
    }

That’s it. Our Model is complete. We will use this query as the data for our View Model rather than just the People table.

Note that LightSwitch is very powerful. We can create a query that is based on another query and so on.

The View Model

image

We create a Screen and add the properties using the Add Data Item… button.

image

We also add the PivotQuery we created earlier that has the PreProcessQuery code.

image

We edit the query so it resembles the image above. Notice that Gender is greyed out. This shows that this parameter can only be edited by opening the underlying query, yet it is a parameter that will need to be mapped in the next step.

image

We return to the screen designer and map all the parameters to properties.

image

To set the default values for some of the parameters, we select Write Code, then the InitializeDataWorkspace method.

We enter the following code:

    partial void PivotViewer_InitializeDataWorkspace(List<IDataService> saveChangesTo)
    {
        // Set Defaults
        this.WeightHigh = 250;
        this.AgeHigh = 60;
        this.Gender = "All";
    }

This is the process of creating a View Model in LightSwitch.

The View

image

We need a dropdown for Gender, so we click on Gender on the left-hand side of the Screen, and then in Properties, we click Choice List and enter the choices.

image

When we add Gender to the object tree on the right-hand side of the Screen, it automatically shows as a dropdown control.

image

We take the XAML for the people display from the original project, removing the sliders, and putting them in separate Silverlight Custom Controls. We change the binding to start with “Screen”, referring to the LightSwitch View Model, followed by the Property, Collection, or Method (Command) we want to bind to (in this case PivotQuery).

We lay out the object tree on the right-hand side of the screen and add the Silverlight Custom Controls. See the article at this link for the steps on adding a Silverlight Custom Control to the object tree.

image

We complete the remaining bindings. The image above shows what is bound to what.

Download

You can download the code at this link.


• Tony Champion delivered a tutorial for the Silverlight PivotViewer that’s useful for LightSwitch projects in his Lessons in PivotViewer CodePlex project of 4/26/2011 (missed when posted):

Project Description
image The Silverlight PivotViewer is a great control with a great deal of potential. However, there are a lot of questions on how to use and customize the PivotViewer. This project will attempt to answer those questions by providing a series of lessons on the PivotViewer.

The first batch of lessons has been posted. A series on these lessons will be coming soon to http://tonychampion.net.

image Live demo of the current version can be found here: http://pivotviewer.championds.com.

Lesson Details

  • Lesson 1 : Basic Properties and Events
  • Lesson 2 : Custom Actions
  • Lesson 3 : Basic Styling
  • Lesson 4 : Custom Loading Screen
  • Lesson 5 : Custom Info Pane
  • Lesson 6 : Filter Pane Fonts added 04/26/2011


Michael Bridge asked Visual Studio LightSwitch Hosting :: What is So Special About Microsoft’s Visual Studio LightSwitch? in a 5/12/2011 post:

image Microsoft has announced a new edition of Visual Studio called LightSwitch, now available in beta, and it is among the most interesting development tools I’ve seen. That does not mean it will succeed; if anything it is too radical and might fail for that reason, though it deserves better. Here are some of the things you need to know.

1. LightSwitch builds Silverlight apps. In typical Microsoft style, it does not make the best of Silverlight’s cross-platform potential, at least in the beta. Publish a LightSwitch app, and by default you get a Windows click-once installation file for an out-of-browser Silverlight app. Still, there is also an option for a browser-hosted deployment, and in principle I should think the apps will run on the Mac (this is stated in one of the introductory videos) and maybe on Linux via Moonlight. Microsoft does include an “Export to Excel” button on out-of-browser deployments that only appears on Windows, thanks to the lack of COM support on other platforms.

I still find this interesting, particularly since LightSwitch is presented as a tool for business applications without a hint of bling – in fact, adding bling is challenging. You have to create a custom control in Silverlight and add it to a screen.
Microsoft should highlight the cross-platform capability of LightSwitch and make sure that Mac deployment is easy. What’s the betting it hardly gets a mention? Of course, there is also the iPhone/iPad problem to think about. Maybe ASP.NET and clever JavaScript would have been a better idea after all.

2. There is no visual form designer – at least, not in the traditional Microsoft style we have become used to. Here’s a screen in the designer:

Now, on one level this is ugly compared to a nice visual designer that looks roughly like what you will get at runtime. I can imagine some VB or Access developers will find this a difficult adjustment.

On the positive side though, it does relieve the developer of the most tedious part of building this type of forms application – designing the form. LightSwitch does it all for you, including validation, and you can write little snippets of code on top as needed.

I think this is a bold decision – it may harm LightSwitch adoption but it does make sense.

3. LightSwitch has runtime form customization. Actually it is not quite “runtime”, but only works when running in the debugger. When you run a screen, you get a “Customize Screen” button at top right:

which opens the current screen in Customization Mode, with the field list, property editor, and a preview of the screen.

It is still not a visual form designer, but mitigates its absence a little.

4. LightSwitch is model driven. When you create a LightSwitch application you are writing out XAML, not the XAML you know that defines a WPF layout, but XAML to define an application. The key file seems to be ApplicationDefinition.lsml, which starts like this:

Microsoft has invested hugely in modelling over the years with not that much to show for it. The great thing about modelling in LightSwitch is that you do not know you are doing it. It might just catch on.

Let’s say everyone loves LightSwitch, but nobody wants Silverlight apps. Could you add an option to generate HTML and JavaScript instead? I don’t see why not.

5. LightSwitch uses business data types, not just programmer data types. I mean types like EmailAddress, Image, Money and PhoneNumber:

I like this. Arguably Microsoft should have gone further. Do we really need Int16, Int32 and Int64? Why not “Whole number” and “Floating point number”? Or hide the techie choices in an “Advanced” list?

6. LightSwitch is another go at an intractable problem: how to get non-professional developers to write properly designed relational database applications. I think Microsoft has done a great job here. Partly there are the data types as mentioned above. Beyond that though, there is a relationship builder that is genuinely easy to use, but which still handles tricky things like many-to-many relationships and cascading deletes. I like the plain English explanations in the tool too, like “When a Patient is deleted, remove all related Appointment instances” when you select Cascade delete.


Now, does this mean that a capable professional in a non-IT field – such as a dentist, shopkeeper, small business owner, departmental worker – can now pick up LightSwitch and write a well-designed application to handle their customers, or inventory, or appointments? That is an open question. Real-world databases soon get complex and it is easy to mess up. Still, I reckon LightSwitch is the best effort I’ve seen – more disciplined than FileMaker, for example (though I admit I’ve not looked at FileMaker for a while), and well ahead of Access.

This does raise the question of who the target developer for LightSwitch really is. It is being presented as a low-end tool, but in reality it is a different approach to application building that could be used at almost any level. Some features of LightSwitch will only make sense to IT specialists – in fact, as soon as you step into the code editor, it is a daunting tool.

7. LightSwitch is a database application builder that does not use SQL. The query designer is entirely visual, and behind the scenes LINQ (Language Integrated Query) is everywhere. Like the absence of a visual designer, this is a somewhat risky move; SQL is familiar to everyone. LINQ has advantages, but it is not so easy to use that a beginner can express a complex query in moments. When using the Query designer I would personally like a “View and edit SQL” or even a “View and edit LINQ” option. (A short hand-written LINQ illustration appears after this list.)

8. LightSwitch will be released as the cheapest member of the paid-for Visual Studio range. In other words, it will not be free (like Express), but will be cheaper than Visual Studio Professional.

9. LightSwitch applications are cloud-ready. In the final release (but not the beta) you will be able to publish to Windows Azure. Even in the beta, LightSwitch apps always use WCF RIA Services, which means they are web-oriented applications. Data sources supported in the beta are SQL Server, SharePoint and generic WCF RIA Services. Apparently in the final release Access will be added.

10. Speculation – LightSwitch will one day target Windows Phone 7. I don’t know this for sure yet. But why else would Microsoft make this a Silverlight tool? This makes so much sense: an application builder using the web services model for authentication and data access, firmly aimed at business users. The first release of Windows Phone 7 targets consumers, but if Microsoft has any sense, it will have LightSwitch for Windows Phone Professional (or whatever) lined up for the release of the business-oriented Windows Phone.
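
Referring back to #7, here is a short hand-written C# illustration (not LightSwitch-generated code) of the kind of query LINQ expresses, with the SQL equivalent shown in a comment; the Patient type echoes the example above and is defined here only to keep the snippet self-contained:

using System.Collections.Generic;
using System.Linq;

public class Patient
{
    public string LastName { get; set; }
    public string City { get; set; }
}

public static class LinqExample
{
    // SQL equivalent: SELECT * FROM Patients WHERE City = 'Oakland' ORDER BY LastName
    public static IEnumerable<Patient> PatientsInCity(IEnumerable<Patient> patients, string city)
    {
        return from p in patients
               where p.City == city
               orderby p.LastName
               select p;
    }
}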

Re #9. Michael appears to be unaware that Visual Studio LightSwitch Beta 2 offers one-click publishing to Windows Azure.


Beth Massi (@bethmassi) explained Getting Started with Visual Studio LightSwitch in a 5/12/2011 post to her Sharing the Goodness that is VB blog:

image Now that we’ve had Beta 2 released for over a month we’re seeing a LOT more people asking questions in our forums and visiting our developer center which is awesome! Sometimes people don’t know where to start, especially as more and more people start posting more advanced LightSwitch content out there on the web. So here’s where you start:

LightSwitch Developer Center - http://msdn.com/lightswitch

image There on the home page you will see clear steps to get started.


Download Visual Studio LightSwitch Beta 2


Watch the instructional LightSwitch "How Do I?" videos

Get essential training



Ask questions in the LightSwitch forums

If you click on “Get essential training” you will be taken to the Learn page, which is broken down into the sections Getting Started, Essential Topics, and Advanced Topics. The Getting Started section is what you want to start with (I guess we named the section appropriately.)

Getting Started

Are you completely new to Visual Studio LightSwitch? This information will help you get started using LightSwitch to develop business applications for the desktop or the cloud. Be sure to check out the step-by-step “How Do I” videos.

Create Your First LightSwitch Application

Watch the LightSwitch How Do I Videos

Visual Studio LightSwitch Training Kit
Download the Visual Studio LightSwitch Training Kit

I’ll also recommend watching the following video of my session from Dev Days a couple weeks ago to give you an idea of what LightSwitch can do and how to quickly build a business application:


Video Presentation: Introduction to Visual Studio LightSwitch

When you’ve worked your way through the Getting Started section, move into the Essential Topics area to drill in more. We add content here often, so if you’re learning LightSwitch, you’ll want to bookmark this page.

Have fun and Enjoy!


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

My (@rogerjenn) Very Short Grace Period for Free 30-Day Windows Azure Account Expiration? article of 5/14/2011 complains about “Strange goings on in MOCP:”

On 5/14/2011 at 7:18 AM PDT I received the following two messages from the Windows Azure Pass Administration alias:

image

image

No grace period appears to have been given to collect my data, assuming I had any data to collect. No 30-day passes have appeared in the Windows Azure Developer Portal for my account ID in several months.

At one point, as I recall, 30-day trial users were given the option to extend the validity period by 15 days. …

Read the rest of the post here.


James Urquhart (@jamesurquhart) seconded David Linthicum’s assertion (see below) as he explained Why definitions of cloud are creating 'false' debates in a 5/13/2011 post to C|Net News’ The Wisdom of Clouds blog:

image Why, when so many have already begun evaluating and even executing on cloud strategies, are there still so many debates about what is and isn't "cloud"? When we've seen a growing number of stories about enterprises successfully consuming both public- and private-cloud infrastructure services, why are there still so many debates about whether one or the other is a smart thing to do?

image The answer, I believe, stems from a growing division (or misunderstanding?) among technology and business decision makers about the very nature of cloud computing. I hesitate to go into cloud definitions, but I believe that people are arriving at one of two conclusions about cloud, namely that cloud is either a business model or an operations model.

The nature of these models, and the differences between them, explains so much about why we are seeing differing points of view even within enterprises themselves. To understand, let's explore each one:

  1. Cloud as a business model. The concept that cloud is a way of selling IT capability as services is probably as old as the introduction of Amazon's pioneering S3 and EC2 services. Many in the industry immediately saw the utility of these services, and quickly went on to declare that cloud is something you buy over the Internet on a pay-per-use basis.

    You can see this in older versions of the Wikipedia page on cloud computing.

    The idea that cloud is about acquiring IT resources over the Internet is also the prevalent view of the consumer market. Storing your photographs "in the cloud" means acquiring a photo service from a vendor over the Internet.

  2. Cloud as an operations model. The other way to look at cloud computing is as a way to operate IT capabilities. I wrote about this in depth some time back, but the core idea is that cloud is not a new computing technology, but it is simply a new way of operating those technologies as on-demand, self-service elastic services.

    An enterprise that operates a service that meets these criteria, then, is providing a cloud service. It may or may not be as efficient or cost-effective as someone else's service, but it is a cloud under this model, nonetheless.

You can immediately see the disparity between the two models. If you are looking at cloud as a business model, there is no way that an enterprise can fit the bill unless it uses an internal cross-charging model, and even then the cloud "business" will struggle to reach commercial economies of scale.

However, if you look at cloud as an operations model, the value of running an efficient resource pool with reduced bureaucracy is highly compelling, even if you can't reach the efficiencies of a larger public-cloud provider. Given the complexities of moving data, applications, processes and everything else IT to the public cloud, an internal cloud service becomes a highly compelling option.

Given that business decision makers are probably most aware of the business model experience of consumers, and many IT operators are most comfortable with the operations model view, you can see why even internal factions have trouble seeing eye to eye on what cloud adoption means.

On Monday, I had the honor of participating in a panel at the Enterprise Cloud Summit held in conjunction with Interop 2011. The topic of the panel was "The False Cloud Debate," and my fellow panelists represented all sides of the debate quite well. "False cloud" is a term applied to private cloud by some executives of purely public-cloud companies.

Included on the panel were James Watters of VMware, John Keagy of GoGrid, and Peter Coffee of Salesforce.com. The panel was moderated by David Linthicum of Blue Mountain Labs. Unfortunately, the panel wasn't recorded, but yesterday Linthicum provided a decent overview of the conversation.

In short, while the panel disagreed to the degree in which private cloud makes sense to deploy, all agreed that a hybrid cloud model made sense for most large enterprises. One key reason for this? Data has mass, as my friend David McCrory analogized, and it is much more difficult to move large volumes of data to the cloud than it is to "bring the cloud" to where the data already sits.

Right or wrong, the enterprise is moving forward with projects that target on-demand, self-service elastic infrastructure with some form of cost "show-back" to regulate use; services that fulfill a cloud operations model. They are also consuming plenty of cloud-computing services online in a cloud business model.

The argument that private cloud is a "false cloud" is therefore irrelevant.



Damon Edwards (@damonedwards) posted Getting Started With Devops to his dev2ops blog on 5/12/2011:

image This week I spoke at Interop in Las Vegas and the term "Devops" was new to a lot of people. I was asked if I could put a getting started list together. Here goes...

Good First Reads
Great Videos
Devops Events

For another slant on DevOps, see my How DevOps brings order to a cloud-oriented world article of 3/9/2011 for SearchCloudComputing.com.


David Linthicum asserted “The focus should be on which cloud formats can best serve businesses, not on whether private clouds count as 'real' clouds” as a deck for his The 'is the private cloud a false cloud?' debate is false post of 5/12/2011 to InfoWorld’s Cloud Computing blog:

image I was at Interop in Las Vegas this week moderating a panel on the "false cloud debate." In short, the debate asks if private clouds are really clouds, or if "private cloud" is a marketing label for data centers that confuses the value of cloud computing. The panel consisted of James Watters from EMC VMware, James Urquhart from Cisco Systems, Peter Coffee from Salesforce.com, and John Keagy from GoGrid.

image What struck me most about the "debate" was that it was not much of one at all. Although the panel started off bickering about the use, or overuse, of private clouds, the panelists quickly agreed that private clouds have a place in the enterprise (to very different degrees), and that the end game is mixing and matching private and public cloud resources to meet the requirements of the business.

Public cloud advocates have said for years that the core value of public clouds is the ability to scale and provision on demand and on the cheap -- they're right. However, many fail to accept there may be times when the architectural patterns of public clouds best serve the requirements of the business when implemented locally -- in a private cloud.

If you accept that the value of cloud computing is in some circumstances best expressed in a private cloud, it should become apparent that the movement to the cloud should be prefaced by good architecture, requirements gathering, and planning. Those who view the adoption of cloud computing as simply a matter of private versus public are destined to not understand the core business issues, and they risk making costly mistakes.

Architecture has to lead the day, and sane minds need to focus on the ability of clouds to serve the business: private, public, hybrid, or none. There is no debate about that.


Charles Babcock (pictured below) claimed “A Windows Server instance costs between 5 cents and 96 cents an hour because Microsoft has been able to drive down operational expenses” in a deck to his Microsoft: 'Incredible Economies Of Scale' Await Cloud Users article of 5/11/2011 for InformationWeek’s Interop Las Vegas 2011 Special Report:

image There are "incredible economies of scale in cloud computing" that make it a compelling alternative to traditional enterprise data centers, said Zane Adam, general manager of Microsoft's Windows Azure cloud and middleware.

Adam is one of the first cloud managers to speak out on the specifics of large data center economics. He did so during an afternoon keynote address Tuesday at Interop 2011 in Las Vegas, a UBM TechWeb event.

But Adam ended up emphasizing the cloud as spurring "faster innovation" in companies because of its ability to supply reliable infrastructure, as users concentrate on increasing core business value.

To make his point on economies of scale, he cited Microsoft's new cloud data center outside Chicago, which started operating in September 2009. It hosts Microsoft Bing searches, Microsoft's Dynamics CRM, Sharepoint, and Office Live software as a service, and over 31,000 Microsoft Azure customers.

Microsoft spent $500 million to build a cement floor, warehouse-type facility with trucks able to drive in on the ground level and drop off containers packed with 1,800 to 2,500 blade servers. The 700,000-square foot facility has 56 parking places for containers, and containers in each spot can be double stacked. When each 40-foot container is plugged into the data center's power supply, the servers inside start humming. They can be brought into production use in eight hours.

The building has more typical server racks on its upper floor. It was built for a total of 300,000 servers. It is served by 60 megawatts of power and contains 190 miles of conduit.

Adam said building a data center on such a scale is being done by a limited number of cloud computing suppliers, including Google, Amazon Web Services, Rackspace, and Terremark. In contrast, less than 1,000 Microsoft customers are running over 1,000 servers and only "a few" have 10,000 or more, Adam said.

Consequently, there are economies of scale possible in such a setting that are impossible in the more heterogeneous, raised floor, enterprise data center. Although they are rapidly moving away from the practice, enterprise data centers at one time assigned a server administrator to devote much of his time to a single application running on one server.

At the Chicago center, one administrator is responsible for "several thousand servers," Adam said. "It costs an estimated $8,000 a year to run a typical server. For us, the cost goes down to less than $1,000."

If operations is typically 15% of data center costs, Microsoft has driven out 70% of that cost, Adam estimated. Microsoft executives have told the Chicago business press that they operate the facility with just 45 people, including security guards and janitors.
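
Taking the figures quoted above at face value, here is a quick back-of-the-envelope calculation (my own rough sketch of the arithmetic, not Microsoft's) showing how the per-server savings ripple into total data-center cost:

# Rough sketch based only on the numbers quoted above; assumptions, not Microsoft's math.
typical_ops_cost_per_server = 8_000.0   # USD/year, the "typical server" figure Adam cites
azure_ops_cost_per_server   = 1_000.0   # USD/year, "less than $1,000" per Adam
ops_share_of_total          = 0.15      # operations as roughly 15% of total data center cost

per_server_reduction = 1 - azure_ops_cost_per_server / typical_ops_cost_per_server
# => 0.875 -- the raw per-server figures imply an even larger cut than the 70% estimate above
ops_share_of_total * per_server_reduction
# => ~0.13, i.e. roughly 13% of total data center cost eliminated through operations alone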

Given the scale at which Microsoft builds, it can negotiate special prices on volume orders of servers. Some are supplied by Dell, which has a container/server manufacturing capability. Microsoft gets tax breaks and commercial credits "that small data centers can't get," when it comes into a suburban community to build such a facility, he added.

The location is known for its low-cost power and access to fiber optic communications. "Special power deals lower our cost of power 90%" over what a more typical industrial customer would have to pay, Adam said.

Microsoft charges between 5 cents and 96 cents an hour for an extra-small to an extra-large instance of Windows Server. Azure opened for pay-for-use computing in February 2010 with a small instance of Windows Server at 12 cents an hour.

But at the end of his delineation of data center costs, Adam said it wasn't its low cost that would drive use of public clouds. "In the cloud, there's no patching or version updating," saving the cloud customer a major headache that eats up IT staff time.

The availability of reliable infrastructure at low prices will enable companies of all sizes to use more computing in the business. In the end, "it's that rapid innovation that will drive acceptance," he said.

image

Click here to watch the 00:21:00 Flash video of Zane’s session.

Re claim that “over 31,000 Microsoft Azure customers” were hosted at Microsoft’s Chicago (North Central US) data center: Reports began circulating on 2/1/2011 that Microsoft had acquired a total of 31,000 customers for Windows Azure (not just users of the North Central US data center). See Sharon Pian Chan’s Azure now has 31,000 customers article of 2/1/2011 for the Seattle Times.


Ed Scannell asserted IT pros seek details on Microsoft cloud computing strategy in a 5/11/2011 post to SearchWindowsServer.com:

image In recent years, Microsoft has dribbled out bits and pieces of its cloud computing strategy to the Windows manager faithful. Now they think it’s time for the company to articulate a cohesive story.

Microsoft has a golden opportunity to do so at TechEd North America in Atlanta next week. The company has certainly produced a steady stream of news about Windows Azure, Office 365, Intune and a handful of other online-based applications and tools over the past couple of years. But IT pros still seek clarity on Microsoft’s cloud vision as it cobbles together these piece parts, much in the way IBM did in its cloud announcement last month.

image "I think Microsoft is at a point in its cloud evolution where it needs a bigger picture presented,” said Dana Gardner, president and principal analyst at Interarbor Solutions, Inc. in Gilford, N.H. “It does have some building blocks but IT shops wonder how they (IT shops) can bring this together with existing infrastructures in a harmonious way so they aren't throwing out the baby with the bath water,"

With fast moving cloud competitors, such as Google and Amazon, which lack the on-premises infrastructure baggage owned by Microsoft, the software company might be wise to state its intentions to enterprises sooner rather than later.

"Microsoft has had trouble before communicating comprehensive strategies, as was the case with its on-premises software,” Gardner added. “It can't afford to make that mistake again with a lot of other players now pointing to the Microsoft hairball and offering cleaner, pure-play cloud offerings."

The adoption of cloud strategies can bring with it added expense and technological uncertainty. For this reason, IT professionals would find real world success stories involving a collection of Microsoft products somewhat reassuring.

"I know exactly what my IT costs are right now for things like licenses, people and upgrades,” said Eugene Lee, a systems administrator with a large bank. “But with the cloud those fixed costs can become unpredictable, so I need to know more about how all this would work. I would like to see some success stories from some (Windows-based) cloud shops.”

How much of the big picture Microsoft reveals publicly about its cloud strategy at this year's conference remains unclear. Corporate executives are expected to disclose more details about the company’s cloud strategy with regard to the Windows Azure Platform, SQL Azure and the Windows Azure App Fabric, though the company declined to comment on details.

"Even a little more visibility on how seamlessly, or not, applications written for the Azure (platform) will work with Office 365 could be useful for planning purposes," Lee said.

IT pros can also expect to hear more about the next generation of its management suite, System Center 2012. The latest version of the suite adds Concero, a tool that lets administrators manage virtual machines running on Microsoft Hyper-V through System Center Virtual Machine Manager. The software also lets administrators manage a variety of services that run on Windows Azure and Azure-based appliances.

More on Microsoft and the cloud

Full Disclosure: I’m a paid contributor to TechTarget’s SearchCloudComputing.com blog.


Hanu Kommalapati asserted “Cloud computing isn’t just about outsourcing IT resources. We explore ten benefits IT architects and developers will realize by moving to the cloud” as an introduction to his 10 Reasons Why Architects and Developers Should Care about Cloud Computing article of 5/10/2011 for Enterprise Systems Journal:

image Cloud computing has come a long way during the past few years; enterprises are no longer curious onlookers but active participants in this IT inflection. Application developers and architects have much to gain from this new trend in computing. Cloud provides architects and developers with a self-service model for provisioning compute, storage, and networking resources at the speed the application needs.

image The traditional on-premises computing model constrains IT agility due to less-than-optimal private cloud implementations, heterogeneous infrastructures, procurement latencies, and suboptimal resource pools. Because cloud platforms operate highly standardized infrastructures with unified compute, storage, and networking resources, they solve the IT agility issues pervasive in the traditional IT model.

The following are among the benefits IT architects and developers will realize by moving to the cloud:

Benefit #1: Collection of Managed Services

Managed services are those to which a developer can subscribe without worrying about how those services are implemented or operated. In addition to highly automated, self-service compute services, most cloud platforms tend to offer storage, queuing, cache, database (relational and non-relational), security, and integration services. In the absence of managed services, some applications (such as hybrid-cloud applications) are impossible to write, or developers must spend a long time creating the necessary plumbing code by hand.

Using cache service as an example, a developer can build a highly scalable application by merely declaring the amount of cache (e.g., 4GB) and the cache service will automatically provision the necessary resources for the cache farm. Developers need not worry about the composition and size of cache nodes, installation of the software bits on the nodes, or cache replication, and they don’t have to manage the uptime of the nodes. Similar software appliance characteristics can be attributed to the other managed services such as storage and database.
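
As a sketch of what “declare the cache size and let the platform do the rest” looks like from the developer's side, here is a purely hypothetical Ruby stand-in (not a real SDK; the class and method names are illustrative assumptions only):

# Hypothetical, illustrative stand-in for a managed cache service's developer-facing surface.
# Not a real SDK -- the point is what the developer does NOT do: pick cache nodes,
# install software on them, manage uptime, or configure replication.
class ManagedCache
  def self.provision(size_gb:)
    new(size_gb)            # a real platform would allocate and replicate cache nodes here
  end

  def initialize(size_gb)
    @size_gb = size_gb
    @store = {}             # an in-process Hash standing in for the distributed cache farm
  end

  def put(key, value)
    @store[key] = value
  end

  def get(key)
    @store[key]
  end
end

cache = ManagedCache.provision(size_gb: 4)    # "declare the amount of cache" and nothing else
cache.put("session:1234", user: "alice")
cache.get("session:1234")                     # => {:user=>"alice"}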

Benefit #2: Accelerated Innovation through Self-service

Developers can implement their ideas without worrying about the infrastructure or the capacity of the surrounding services. In a typical on-premise setting, developers need to acquire the VM capacity for computing tasks and database capacity for persistence, and they must build a target infrastructure environment even before deploying a simple application such as “Hello, world!”

Cloud allows developers to acquire compute, storage, network infrastructure, and managed services with a push of a button. As a result, developers will be encouraged to try new ideas that may propel companies into new markets. As history proves, often great technology success stories come from grass-root developer and architect innovations. …

Read more: 2, 3, 4, next »

Hanu is a Sr. Technical Director for Microsoft.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Mary Jo Foley (@maryjofoley) claims TechEd North America (#TechEd_NA, #msteched) 2011 clairvoyance re WAPA in her The Microsoft week ahead: What's on for TechEd 2011? post of 5/15/2011 for ZDNet’s All About Microsoft blog:

Microsoft execs told me earlier this year that we’d be getting an update on the Windows Azure Appliances — those “private cloud in a box” systems that Microsoft announced a year ago (and so far, seem to be largely missing in action).

I believe Microsoft execs are too embarrassed about delays and loss of HP as a reseller to come clean with a delivery date on the long-awaited WAPA. Maybe Scott Gu can light a fire under the WAPA team. In the meantime, they keep hiring more program and product managers, as well as software design engineers in test (SDETs.)

Click here to read all of Mary Jo’s TechEd prognostications.


Gavin Clarke (@gavin_clarke) posted Microsoft claims Windows Azure appliances exist on 5/13/2011 to The Register:

image HP? Dell? Fujitsu? Hello?

Microsoft remains enthusiastic about the idea of selling a cloud-in-a-box, but it has conceded that its Windows Azure Appliances, unveiled nearly a year back, are few and far between.

image

Server and tools general manager Charles Di Bona has told financial analysts that Microsoft has started to roll out "some" Windows Azure Appliances, adding "they are early-stage at this point."

image He didn't provide more of what financial types like to call "color", but the "some" he mentioned are presumably sitting in data centers at Dell, Hewlett-Packard, Fujitsu, and eBay.

Di Bona also hinted at a shift in priorities away from trying to get customers to use mega public cloud services from the likes of Dell, and moving towards the notion of private clouds where the customer can keep more control of both the actual service and their data running inside it. In the public cloud, the service provider calls the shots and puts your data on the same servers as everybody else.

Di Bona was speaking at the Jefferies Global Technology, Internet, Media & Telecom Conference presentation on May 11, and his words were picked up here. You can read the transcript here (warning: PDF) .

Microsoft seems to be looking at the Azure appliance as an alternative to Oracle chief Larry Ellison's monstrous Exadata database machine and storage server, which is being deployed by large enterprises. Asked by the event moderator about the potential for integrated appliances like Exadata, Di Bona said such appliances are very interesting and present the future of the cloud.

But there are differences. "We think of the private cloud based on what we're doing in the public cloud, and [we're] sort of feeding that back into that appliance, which would be more of a private cloud," he said. "It has some of the same characteristics of what Oracle is doing with Exadata. So, we don't think of it as mutually exclusive, but we think that the learnings we're getting from the public cloud are different and unique in a way that they're not bringing into Exadata."

Microsoft jumped on the public cloud bandwagon last year when it unveiled the Azure Appliance and named Dell, HP, Fujitsu and eBay as early adopters of what it said was a "limited production" release of the appliance at its World Wide Partner conference in July 2010. It was the height of the public versus private cloud frenzy, with Salesforce.com CEO Marc Benioff warning people to "beware the false cloud" – meaning the private cloud, run by you, as opposed to public clouds run by companies like Salesforce.

Microsoft's plan was for the first Windows Azure appliance to be delivered later in 2010, with companies building and running services before the end of the year.

In other words: Dell, Hewlett-Packard, Fujitsu and eBay would turn into internet service providers by hosting services for their customers running on Azure. The companies were also guinea pigs, who'd help Microsoft design, develop, and deploy the appliances.

You can read how it was supposed to go down here, from president of Microsoft's server and tools Bob Muglia.

The deadlines were missed, however, and nearly 12 months later, Microsoft's partners are saying very little. Muglia is leaving Microsoft after a disagreement with chief executive Steve Ballmer.

In March, the chief executive of HP, Microsoft's biggest Windows partner, evaded a Reg question during his company's strategy day on whether HP's forthcoming infrastructure and platform cloud services would use Azure. Subsequently, it emerged that HP's services, branded Scalene, will feature APIs and language bindings for Java, Ruby, and other open source languages with GUI and command line interface controls for both Linux/Unix and Windows. There was no mention of Windows Azure, SQL Azure, or even .NET.

Dell separately promised The Reg that it plans to offer at least two "public clouds" - one based on Microsoft's Azure platform and another based on something else that it's not ready to announce. The "other" is extremely likely to be OpenStack, the "Linux for the cloud" project founded by Rackspace and NASA. Dell is an OpenStack member.

In July 2010, Fujitsu committed to train 5,000 "or so" Azure specialists and to deploy Azure appliances in its data centers in Japan. We've heard nothing from Fujitsu since. eBay is also mum.

Hopefully, Scott Gu will goose the WAPA team. HP appears to have fallen off the Azure bandwagon.

Mary Jo Foley (@maryjofoley) asserted Windows Azure [Platform] Appliances are still in Microsoft's plans in a 5/12/2011 post to her All About Microsoft blog for ZDNet:

image Microsoft and its server partners have been noticeably mum about Microsoft’s planned Windows Azure Appliances, leading to questions as to whether Microsoft had changed its mind about the wisdom of providing customers with a private cloud in a box.

image

But based on comments made by a Microsoft Server and Tools division exec during his May 10 Jefferies Global Technology, Internet, Media & Telecom Conference presentation, it seems the Azure Appliances are still on the Microsoft roadmap.

In July 2010, Microsoft took the wraps off its plans for the Windows Azure Appliance, a kind of “private-cloud-in-a-box” available from select Microsoft partners. At that time, company officials said that OEMs including HP, Dell and Fujitsu would have Windows Azure Appliances in production and available to customers by the end of 2010. In January 2011, I checked in with Microsoft’s announced Windows Azure Appliance partners, some of whom hinted there had been delays. In March, HP began giving off mixed signals as to whether it would offer an Azure-based appliance after all.

Cut to May 10. Charles Di Bona, a General Manager with Server and Tools, was asked by a Jeffries conference attendee about his perception of integrated appliances, such as the ones Oracle is pushing. Here’s his response, from a transcript:

“I would hesitate to sort of comment on their (Oracle’s) strategy. But, look, those appliances are very interesting offerings.  In some ways it’s sort of presenting a large enterprise, obviously, where the footprint is big a large enterprise with a sort of cohesive package where they don’t have to muck around with the inner workings, which in many ways is what the cloud does for a much broader audience.

“So, it’s an interesting way of sort of appealing to that sort of very high end constituency, which is sort of the sweet spot of what Oracle has done the past, with a sort of integrated offering that replicates in many ways the future of the cloud. Now, we still believe that the public cloud offers certain capabilities, and certain efficiencies that an appliance, a scaled-up appliance like Exadata is not going to offer them, and we think that that’s the long-term future of where things are going.  But it’s an interesting way of appealing to that very high-end user now before they might feel comfortable going to the cloud.”

At this point, I was thinking: Wow! Microsoft has decided to go all in with the public cloud and deemphasize the private cloud, even at the risk of hurting its Windows Server sales. I guess the moderator thought the same, as he subsequently asked, “Microsoft’s approach is going to be more lean on that public cloud delivery model?”

Di Bona responded that the Windows Azure Appliances already are rolling out. (Are they? So far, I haven’t heard any customers mentioned beyond EBay — Microsoft’s showcase Appliance customer from last summer.) Di Bona’s exact words, from the transcript:

“Well, no, we’ve already announced about a year ago at this point that we’ve started to rollout some Windows Azure Appliances, Azure Appliances. They are early stage at this point.  But we think of the private cloud based on what we’re doing in the public cloud, and the learnings we’re getting from the public cloud, and sort of feeding that back into that appliance, which would be more of a private cloud.  It has some of the same characteristics of what Oracle is doing with Exadata.  So, we don’t think of it as mutually exclusive, but we think that the learnings we’re getting from the public cloud are different and unique in a way that they’re not bringing into Exadata.”

My take: There has been a shift inside Server and Tools, in terms of pushing more aggressively the ROI/savings/appeal of the public cloud than just a year ago. The Server and Tools folks know customers aren’t going to move to the public cloud overnight, however, as Di Bona made clear in his remarks.

“(W)e within Server and Tools, know that Azure is our future in the long run. It’s not the near-term future, the near-term future is still servers, right, that’s where we monetize,” he said. “But, in the very, very long run, Azure is the next instantiation of Server and Tools.”

Microsoft execs told me a couple of months ago that the company would provide an update on its Windows Azure Appliance plans at TechEd ‘11, which kicks off on May 16 in Atlanta. Next week should be interesting….

The most important reason I see for fulfilling Microsoft’s WAPA promise is to provide enterprise users with a compatible platform to which they can move their applications and data if Windows Azure and SQL Azure don’t pan out.

I added the following comment to Mary Jo’s post:

Microsoft has been advertising more job openings for the Windows Azure Platform Appliance (WAPA) recently. Job openings for the new Venice (directory and identity) Team specifically reference WAPA as a target. It's good to see signs of life for the WAPA project, but MSFT risks a stillborn SKU if they continue to obfuscate whatever problems they're having in productizing WAPA with marketese.


<Return to section navigation list> 

Cloud Security and Governance

image

No significant articles today.


<Return to section navigation list> 

Cloud Computing Events

• Vittorio Bertocci (@vibronet) reported 25 Free Copies of “Programming Windows Identity Foundation” at Teched in a 5/13/2011 post:

Yesterday I posted about my sessions at TechEd and the book signing event. My friends at O’Reilly, though, graciously pointed out that I didn’t mention the juiciest fact:

Make sure you come at 11:00am on Tuesday the 17th at the O’Reilly booth (or is it the bookstore?) and we’ll get you something to read on your flight back!


• Ananthanarayan Sundaram recommended Exciting Tech-Ed session: Explore the full breadth and depth of System Center Cloud and Datacenter Management! in a 5/13/2011 post to the System Center blog:

Come to SIM208 where Ryan O'Hara, our Senior Director (System Center and Forefront), will wow you with a great overview of what's coming in System Center 2012! Do you want to know more about how Microsoft is bringing its vast experiential learning running global cloud services to your datacenter with System Center? Learn about how you can deliver your private cloud with full control, while benefiting from the agility and operational effectiveness that cloud computing offers. How can you holistically manage your applications and services, rather than stop at VMs or boxes? Think about cool scenarios like provisioning a multi-tier app, monitoring apps at a line-of-code level, or managing all your apps across your private clouds or Windows Azure through a single pane of glass. And yes, we do realize you've got physical and virtual infrastructure too - and that needs integrated management - not to worry, with System Center 2012 you will need just one single toolset to achieve all this and more!

[SIM208 Management in the Datacenter

  • Monday, May 16 | 1:15 PM - 2:30 PM | Room: C211
  • Session Type: Breakout Session
  • Level: 200 - Intermediate
  • Track: Security, Identity & Management
  • Evaluate: For a chance to win an Xbox 360
  • Speaker(s): Ryan O'Hara
Join us for an overview of how Microsoft can help IT operations evolve to embrace cloud computing, how you can enable the efficiency and flexibility of the private cloud, and how you can optimize, secure and manage your datacenter today. Learn from key management executives how Microsoft's cloud and datacenter vision will be enhanced over the next 18 months with upcoming releases and additional enhancements to the products you use today including Microsoft System Center Operations Manager, Microsoft System Center Virtual Machine Manager, Microsoft Forefront Identity Manager, and Opalis.  
  • Product/Technology: Cloud Power: Delivered, Microsoft® Forefront®, Microsoft® System Center, Microsoft® System Center Virtual Machine Manager, Opalis
  • Audience: Architect, Infrastructure Architect, Solutions Architect, Strategic IT Manager, Systems Administrator, Systems Engineer, Tactical IT Manager]

Doesn't that sound exciting? I can't wait!

We look forward to seeing you there,

Cheers,

Anant S. (ansundar@microsoft.com), Senior Product Manager, Server and Cloud Platform Marketing  


• Steve Yi suggested TechEd 2011 – What To See, Where To Go For Cloud Data Services in a 5/13/2011 post to the SQL Azure team blog:

image TechEd is next week!  May 16-19 to be exact, in Atlanta.  Unfortunately I won’t be there; I was at Interop earlier this week in Vegas (I’ll send a recap of that in the next few days), so I’ll be sitting this one out, catching up on some work, and starting work on an idea for a cloud application that I’ve been thinking about for a while.

My colleague, Tharun Tharian, will be there so I encourage you to say hi if you see him.  Of course he’ll be joined by the SQL Azure, DataMarket, and OData engineering teams delivering the latest on the respective services and technologies, and also manning the booth to answer your questions.  You’ll find them hanging out at the Cloud Data Services booth if you want to chat with them during the conference.

For those of you going, below are the SQL Azure and OData sessions that I encourage you to check out – either live or over the web.  I also encourage you to attend either Quentin Clark’s or Norm Judah’s foundational session right after the keynote to learn more about how to bridge on-premises investments with the Windows Azure platform, and how SQL Server “Denali” and SQL Azure will work together, both in the services offered and in updates to the tooling and development experience.  You may even hear some new announcements about SQL Azure and other investments in cloud data services.

Enjoy the conference!


Foundational Sessions:

FDN04 | Microsoft SQL Server: The Data and BI Platform for Today and Tomorrow

FDN05 | From Servers to Services: On-Premise and in the Cloud

SQL Azure Sessions:

DBI403 | Building Scalable Database Solutions Using Microsoft SQL Azure Database Federations

  • Speaker(s): Cihan Biyikoglu
  • Monday, May 16 | 3:00 PM - 4:15 PM | Room: C201

DBI210 | Getting Started with Cloud Business Intelligence

  • Speaker(s): Pej Javaheri, Tharun Tharian
  • Monday, May 16 | 4:45 PM - 6:00 PM | Room: B213

COS310 | Microsoft SQL Azure Overview: Tools, Demos and Walkthroughs of Key Features

  • Speaker(s): David Robinson
  • Tuesday, May 17 | 10:15 AM - 11:30 AM | Room: B313

DBI323 | Using Cloud (Microsoft SQL Azure) and PowerPivot to Deliver Data and Self-Service BI at Microsoft

  • Speaker(s): Diana Putnam, Harinarayan Paramasivan, Sanjay Soni
  • Tuesday, May 17 | 1:30 PM - 2:45 PM | Room: C208

DBI314 | Microsoft SQL Azure Performance Considerations and Troubleshooting

  • Wednesday, May 18 | 1:30 PM - 2:45 PM | Room: B312

DBI375-INT | Microsoft SQL Azure: Performance and Connectivity Tuning and Troubleshooting

  • Speaker(s): Peter Gvozdjak, Sean Kelley
  • Wednesday, May 18 | 3:15 PM - 4:30 PM | Room: B302

COS308 | Using Microsoft SQL Azure with On-Premises Data: Migration and Synchronization Strategies and Practices

  • Speaker(s): Mark Scurrell
  • Thursday, May 19 | 8:30 AM - 9:45 AM | Room: B213

DBI306 | Using Contained Databases and DACs to Build Applications in Microsoft SQL Server Code-Named "Denali" and SQL Azure

  • Speaker(s): Adrian Bethune, Rick Negrin
  • Thursday, May 19 | 8:30 AM - 9:45 AM | Room: B312
DataMarket & OData Sessions:

COS307 | Building Applications with the Windows Azure DataMarket

  • Speaker(s): Christian Liensberger, Roger Mall
  • Wednesday, May 18 | 3:15 PM - 4:30 PM | Room: B312

DEV325 | Best Practices for Building Custom Open Data Protocol (OData) Services with Windows Azure

  • Speaker(s): Alex James
  • Thursday, May 19 | 1:00 PM - 2:15 PM | Room: C211

See Vittorio Bertocci’s (@vibronet) TechEd USA 2010: Identity, Identity, Identity and Book Signing post of 5/13/2011 in the Windows Azure AppFabric: Access Control, WIF and Service Bus section above.

image

The Windows Azure Team described a New Online Event Series, “Cloud Power: Create the Next BIG App”, Teaches How to Harness The Cloud with Windows Azure in a 5/12/2011 post:

image

Do you have an idea for that next big application that you would like to run from the cloud? Then you’ll want to tune into the new five-part online series, “Cloud Power:  Create the Next BIG App”, to learn how you can use Windows Azure to build new applications to run from the cloud or enhance existing applications with cloud-based capabilities.  During this training, you’ll get a 30-day free Windows Azure Platform pass and have opportunities to work with Windows Azure trough hands-on lessons and code samples. Below is a brief description of each session; click on the title for more information and to register.

Friday, May 13, 2011, 10-11:30 AM PDT: Migrating Data into the Cloud

  • The focus of this first session is on learning about the challenge of having data sitting on “intermittently maintained infrastructure” and how best to migrate to a cloud environment that is persistently up to date.

Tuesday, May 17, 2011,  9-10:30 AM PDT: Extending Your SharePoint Practice via the Cloud

    • Learn how to deploy services into Windows Azure to increase end-user availability, leverage cloud storage for your SharePoint solutions to accommodate your customers' needs and benefit from your .NET skills and codebase to connect across devices and platforms.

      Tuesday, May 24, 2011, 9-10:30 AM PDT:  Tips and Tricks to Make Mobile Apps Scale AND Perform

      • This session will show you how to use libraries for your mobile apps and architect for scalability.

      Tuesday, May 31, 2011, 9-10:30 AM PDT:  Creating Facebook Apps That Can Scale

        • It's not unheard of to create a promotion or app on Facebook that draws 150,000 views in three days. In this session, we will walk through assets you can use today and applicable lessons that will help you architect for scale on Facebook.

          Tuesday, June 2, 2011, 9-10:30 AM PDT:  Add Scale Easily to ASP.NET Apps

          • This session will walk you through the steps needed to address changing server needs and web requirements.


          Nancy Medica (@nancymedica) announced on 5/12/2011 a one-hour How Microsoft is changing the game webinar scheduled for 5/18/2011 at 9:00 to 10:00 AM PDT:

          • The difference between Azure (PaaS) and Infrastructure as a Service (IaaS), and standing up virtual machines.
          • How the datacenter evolution is driving down the cost of enterprise computing.
          • Modular datacenters and containers.

          Intended for: CIOs, CTOs, IT Managers, IT Developers, Lead Developers

          Presented by Nate Shea-han, Solution Specialist with Microsoft:

          image His focus area is cloud offerings centered on the Azure platform. He has a strong systems management and security background and has applied knowledge in this space to how companies can successfully and securely leverage the cloud as organizations look to migrate workloads and applications. Nate currently resides in Houston, TX and works with customers in Texas, Oklahoma, Arkansas and Louisiana.


          <Return to section navigation list> 

          Other Cloud Computing Platforms and Services

          Gary Orenstein asserted VMware Is The New Microsoft, Just Without an OS in a 5/14/2011 post to Giga Om’s Structure blog:

          image In the past 10 years VMware has executed a remarkable strategy to topple enterprise software incumbents and emerge as an ecosystem kingpin. More recently, the company has plunged head first into cloud computing from infrastructure to applications. Time and again, it seems as though VMware is beating Microsoft at its own game. But a look deeper reveals that is no surprise.

          image VMware began with a concept to run multiple operating systems independently on a single physical machine through the use of virtual machines and hypervisors. Simple enough. But the implications are powerful as critical decisions once made at the operating system level quickly became less relevant. Companies such as Microsoft no longer need to be factored in up front for new data center architectures. This is how VMware put the first chink in Microsoft’s armor; operating system choices now take a back seat to hypervisor strategies.

          image VMware’s promise was visible early enough to EMC that it acquired the company in 2003. But it took until July 2008 for it to inject the company with leadership from Paul Maritz, a 14-year Microsoft veteran who joined EMC in February 2008. That is the same time EMC acquired Pi, a cloud computing company Maritz founded.

          Maritz’s resume covers all bases of software infrastructure: platforms, operating systems, development tools, database products, productivity suites, and email. He also brought in two VMware co-presidents from Microsoft, giving the executive suite a combined 47 years of Microsoft experience. So the VMware roadmap since then became easy to read: start with a base infrastructure that can unseat the operating system, woo developers to create a robust ecosystem, and deliver value-added applications up the software stack that reach directly to end users.

          Pursuing the influential developer community, in 2009 VMware acquired SpringSource in a move to bolster its strength around the Spring Java development framework, in what was noted as a move towards an integrated platform-as-a-service offering.

          Building a collaboration portfolio, in 2010 VMware acquired Zimbra, setting off another chapter in the cloud collaboration wars.

          Most recently VMware launched Cloud Foundry, a platform-as-a-service that supports development in Java, Ruby on Rails, Node.js and other frameworks, further cementing its role as a market maker for new infrastructure approaches.

          Along the way, VMware built a formidable enterprise ecosystem around virtual machine management and the fact that customers want VMware to tell them which other products they should choose to make virtual machine deployment easier. Think Windows Compatibility List 2.0.

          With VMware holding the keys to the hypervisor layer and management, then the platform layer, and even the cloud applications layer with email from Zimbra, and presentations from Slide Rocket, why do I need Windows?

          This story is far from finished. VMware has less than twenty percent of Microsoft’s market cap today. But judging by its growth and, more importantly, its enterprise influence, VMware appears to be making the right moves.

          Want to learn more about the big moves in enterprise and cloud infrastructure? Check out the GigaOM Structure Conference June 22 and 23 in San Francisco.

          Gary Orenstein is the host of The Cloud Computing Show.

          Claiming “VMware is the new Microsoft” appears to be a bit of an overreach, if not breathless journalism, to me.


          Jim Webber described Square Pegs and Round Holes in the NOSQL World in a 5/11/2011 post to the World Wide Webber blog:

          image The graph database space is a peculiar corner of the NOSQL universe. In general, the NOSQL movement has pushed towards simpler data models with more sophisticated computation infrastructure compared to traditional RDBMS. In contrast graph databases like Neo4j actually provide a far richer data model than a traditional RDBMS and a search-centric rather than compute-intensive method for data processing.

          Strangely the expressive data model supported by graphs can be difficult to understand amid the general clamour of the simpler-is-better NOSQL movement. But what makes this doubly strange is that other NOSQL database types can support limited graph processing too.

          This strange duality where non-graphs stores can be used for limited graph applications was the subject of a thread on the Neo4j mailing list, which was the inspiration for this post. In that thread, community members discussed the value of using non-graph stores for graph data particularly since prominent Web services are known to use this approach (like Twitter's FlockDB). But as it happens the use-case for those graphs tends to be relatively shallow - "friend" and "follow" relationships and suchlike. In those situations, it can be a reasonable solution to have information in your values (or document properties, columns, or even rows in a relational database) to indicate a shallow relation as we can see in this diagram:

          [Diagram]

          At runtime, the application using the datastore  (remember: that’s code you typically have to write) follows the logical links between stored documents and creates a logical graph representation. This means the application code needs to understand how to create a graph representation from those loosely linked documents.

          [Diagram]

          If the graphs are shallow, this approach can work reasonably well. Twitter's FlockDB is an existential proof of that. But as relationships between data become more interesting and deeper, this is an approach that rapidly runs out of steam. This approach requires graphs to be structured early on in the system lifecycle (design time), meaning a specific topology is baked into the datastore and into the application layer. This implies tight coupling between the code that reifies the graphs and the mechanism through which they're flattened in the datastore. Any structural changes to the graph now require changes to the stored data and the logic that reifies the data.

          Neo4j takes a different approach: it stores graphs natively and so separates application and storage concerns. That is, where your documents have relationships between them, that's the way they're stored, searched, and processed in Neo4j, even if those relationships are very deep. In this case, the logical graph that we reified from the document store can be natively (and efficiently) persisted in Neo4j.

          [Diagram]

          What's often deceptive is that in some use-cases, projecting a graph from a document or KV store and using Neo4j might begin with seemingly similar levels of complexity. For example, we might create an e-commerce application with customers and items they have bought. In a KV or document case we might store the identifiers of products our customers had bought inside the customer entity. In Neo4j, we'd simply add relationships like PURCHASED between customer nodes and the product nodes they'd bought. Since Neo4j is schema-less, adding these relationships doesn’t require migrations, nor should it affect any existing code using the data. The next diagram shows this contrast: the graph structure is explicit in the graph database, but implicit in a document store.

          [Diagram]
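
          To make the contrast concrete, here is a minimal plain-Ruby sketch of the two shapes (illustrative data structures only -- not the actual API of Neo4j or of any document store):

          # Document/KV style: the purchase relationship is implicit in an embedded list of IDs.
          customer_doc = { id: "cust-42", name: "Alice", purchased_ids: ["prod-7"] }

          # Graph style: customers and products are nodes, and PURCHASED is an explicit,
          # first-class relationship that can be added at any time without a schema migration.
          nodes = { "cust-42" => { name: "Alice" }, "prod-7" => { name: "DJ headphones" } }
          edges = [{ type: :PURCHASED, from: "cust-42", to: "prod-7" }]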

          Even at this stage, the graph shows its flexibility. Imagine that a number of customers bought a product that had to be recalled. In the document case we'd run a query (typically using a map/reduce framework) that grabs the document for each customer and checks whether a customer has the identifier for the defective product in their purchase history. This is a big undertaking if each customer has to be checked, though thankfully because it's an embarrassingly parallel operation we can throw hardware at the problem. We could also design a clever indexing scheme, provided we can tolerate the write latency and space costs that indexing implies.

          With Neo4j, all we need to do is locate the product (by graph traversal or index lookup) and look for incoming PURCHASED relations to determine immediately which customers need to be informed about the product recall. Easy peasy!
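
          Here is the same contrast as a plain-Ruby sketch (illustrative only -- no real map/reduce framework or Neo4j calls are shown):

          recalled = "prod-7"

          # Document/KV style: visit every customer document and inspect its embedded IDs.
          customer_docs = [
            { id: "cust-1", purchased_ids: ["prod-7", "prod-9"] },
            { id: "cust-2", purchased_ids: ["prod-3"] }
          ]
          affected = customer_docs.select { |d| d[:purchased_ids].include?(recalled) }.map { |d| d[:id] }  # => ["cust-1"]

          # Graph style: start at the recalled product and follow its incoming PURCHASED
          # relationships; in a native graph store these hang directly off the product node.
          incoming_purchased = { "prod-7" => ["cust-1"], "prod-9" => ["cust-1"], "prod-3" => ["cust-2"] }
          affected = incoming_purchased[recalled]  # => ["cust-1"]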

          As the e-commerce solution grows, we want to evolve a social aspect to shopping so that customers can receive buying recommendations based on what their social group has purchased. In the non-native graph store, we now have to encode the notion of friends and even friends of friends into the store and into the logic that reifies the graph. This is where things start to get tricky since now we have a deeper traversal from a customer to customers (friends) to customers (friends of friends) and then into purchases. What initially seemed simple, is now starting to look dauntingly like a fully fledged graph store, albeit one we have to build.

          [Diagram]

          Conversely in the Neo4j case, we simply use the FRIEND relationships between customers, and for recommendations we simply traverse the graph across all outgoing FRIEND relationships (limited to depth 1 for immediate friends, or depth 2 for friends-of-friends), and for outgoing PURCHASED relationships to see what they've bought. What's important here is that it's Neo4j that handles the hard work of traversing the graph, not the application code as we can see in the diagram above.
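
          A plain-Ruby stand-in for that traversal might look like the sketch below (illustrative only; Neo4j would do this natively rather than in application code):

          friends   = { "alice" => ["bob"], "bob" => ["carol"], "carol" => [] }
          purchases = { "alice" => ["decks"], "bob" => ["headphones"], "carol" => ["mixer"] }

          # Walk FRIEND relationships out to the given depth (1 = friends, 2 = friends-of-friends).
          def social_circle(friends, start, depth)
            frontier = [start]
            seen = []
            depth.times do
              frontier = frontier.flat_map { |person| friends[person] } - seen - [start]
              seen |= frontier
            end
            seen
          end

          circle = social_circle(friends, "alice", 2)                        # => ["bob", "carol"]
          recommendations = circle.flat_map { |p| purchases[p] } - purchases["alice"]
          # => ["headphones", "mixer"]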

          But there's much more value the e-commerce site can drive from this data. Not only can social recommendations be implemented by close friends, but the e-commerce site can also start to look for trends and base recommendations on them. This is precisely the kind of thing that supermarket loyalty schemes do with big iron and long-running SQL queries - but we can do it on commodity hardware very rapidly using Neo4j.

          For example, one set of customers that we might want to incentivise are those people who we think are young performers. These are customers that perhaps have told us something about their age, and we've noticed a particular buying pattern surrounding them - they buy DJ-quality headphones. Often those same customers buy DJ-quality decks too, but there's a potentially profitable set of those customers that - shockingly - don't yet own decks (much to the gratitude of their flatmates and neighbours I suspect).

          With a document or KV store, looking for this pattern by trawling through all the customer documents and projecting a graph is laborious. But matching these patterns in a graph is quite straightforward and efficient – simply by specifying a prototype to match against and then by efficiently traversing the graph structure looking for matches.

          [Diagram]
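
          As a rough plain-Ruby rendering of that prototype match (illustrative only, not Neo4j's traversal API):

          # Customers with a PURCHASED relationship to DJ headphones but none to DJ decks.
          purchased = [
            [:PURCHASED, "dave", "dj_headphones"],
            [:PURCHASED, "dave", "dj_decks"],
            [:PURCHASED, "emma", "dj_headphones"]
          ]

          bought = Hash.new { |hash, key| hash[key] = [] }
          purchased.each { |(_, customer, product)| bought[customer] << product }

          targets = bought.select { |_, items|
            items.include?("dj_headphones") && !items.include?("dj_decks")
          }.keys
          # => ["emma"] -- the customers to tempt with a decks offer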

          This shows a wonderful emergent property of graphs - simply store all the data you like as nodes and relationships in Neo4j and later you'll be able to extract useful business information that perhaps you can't imagine today, without the performance penalties associated with joins on large datasets.

          In these kinds of situations, choosing a non-graph store for storing graphs is a gamble. You may find that you've designed your graph topology far too early in the system lifecycle and lose the ability to evolve the structure and perform business intelligence on your data. That's why Neo4j is cool - it keeps graph and application concerns separate, and allows you to defer data modelling decisions to more responsible points throughout the lifetime of your application.

          So if you're fighting with graph data imprisoned in Key-Value, Document or relational datastores, then try Neo4j.

          Be sure to read the comments. Thanks to Alex Popescu for the heads up for this tutorial.

          Jim Webber of Neo Technologies is a leader of the Neo4j team.

          Microsoft Research’s Dryad is a distributed execution engine that models data-parallel applications as dataflow graphs; its Academic Release is available for public download.


          Richard Seroter (@rseroter) described 6 Quick Steps for Windows/.NET Folks to Try Out Cloud Foundry in a 5/11/2011 post:

          image I’m on the Cloud Foundry bandwagon a bit and thought that I’d demonstrate the very easy steps for you all to try out this new platform-as-a-service (PaaS) from VMware that targets multiple programming languages and can (eventually) be used both on-premise and in the cloud.

          image To be sure, I’m not “off” Windows Azure, but the message of Cloud Foundry really resonates with me.  I recently interviewed their CTO for my latest column on InfoQ.com and I’ve had a chance lately to pick the brains of some of their smartest people.  So, I figured it was worth taking their technology for a whirl.  You can too by following these straightforward steps.  I’ve thrown in 5 bonus steps because I’m generous like that.

          1. Get a Cloud Foundry account.  Visit their website, click the giant “free sign up” button and click refresh on your inbox for a few hours or days.
          2. Get the Ruby language environment installed.  Cloud Foundry currently supports a good set of initial languages including Java, Node.js and Ruby.  As for data services, you can currently use MySQL, Redis and MongoDB.  To install Ruby, simply go to http://rubyinstaller.org/ and use their single installer for the Windows environment.  One thing that this package installs is a Command Prompt with all the environmental variables loaded (assuming you selected to add environmental variables to the PATH during installation).
          3. Install vmc. You can use the vmc tool to manage your Cloud Foundry app, and it’s easy to install it from within the Ruby Command Prompt. Simply type:
            gem install vmc

            You’ll see that all the necessary libraries are auto-magically fetched and installed.


          4. Point to Cloud Foundry and log in.  Stay in the Ruby Command Prompt and target the public Cloud Foundry cloud (presumably “vmc target api.cloudfoundry.com”).  You could also use this to point at other installations, but for now, let’s keep it easy. 
            Next, login to your Cloud Foundry account by typing “vmc login” to the Ruby Command Prompt. When asked, type in the email address that you used to register with, and the password assigned to you.
          5. Create a simple Ruby application. Almost there.  Create a directory on your machine to hold your Ruby application files.  I put mine at C:\Ruby192\Richard\Howdy.  Next we create a *.rb file that will print out a simple greeting.  It brings in the Sinatra library, defines a “get” operation on the root, and has a block that prints out a single statement. 
            require 'sinatra' # includes the library
            get '/' do	# method call, on get of the root, do the following
            	"Howdy, Richard.  You are now in Cloud Foundry! "
            end
          6. Push the application to Cloud Foundry.  We’re ready to publish.  Make sure that your Ruby Command Prompt is sitting at the directory holding your application file.  Type in “vmc push” and you’ll get prompted with a series of questions.  Deploy from current directory?  Yes.  Name?  I gave my application the unique name “RichardHowdy”. Proposed URL ok?  Sure.  Is this a Sinatra app?  Why yes, you smart bugger.  What memory reservation needed?  128MB is fine, thank you.  Any extra services (databases)?  Nope.  With that, and about 8 seconds of elapsed time, you are pushed, provisioned and started.  Amazingly fast.  Haven’t seen anything like it. My console execution looks like this: [Screenshot]
            And my application can now be viewed in the browser at http://richardhowdy.cloudfoundry.com.

            Now for some bonus steps …

          7. Update the application.  How easy is it to publish a change?  Damn easy.  I went to my “howdy.rb” file and added a bit more text saying that the application has been updated.  Go back to the Ruby Command Prompt and type in “vmc update richardhowdy” and 5 seconds later, I can view my changes in the browser.  Awesome.
          8. Run diagnostics on the application.  So what’s going on up in Cloud Foundry?  There are a number of vmc commands we can use to interrogate our application. For one, I could do “vmc apps” and see all of my running applications. [Screenshot]
            For another, I can see how many instances of my application are running by typing in “vmc instances richardhowdy”. 
          9. Add more instances to the application.  One is a lonely number.  What if we want our application to run on three instances within the Cloud Foundry environment?  Piece of cake.  Type in “vmc instances richardhowdy 3” where 3 is the number of instances to add (or remove if you had 10 running).  That operation takes 4 seconds, and if we again execute the “vmc instances richardhowdy” we see 3 instances running. 
          10. Print the environment variable showing which instance is serving the request.  To prove that we have three instances running, we can use Cloud Foundry environment variables to display which droplet instance on the grid is handling each request.  I changed my richardhowdy.rb file to include a reference to the environment variable named VMC_APP_ID.
            require 'sinatra'   # pull in the Sinatra library

            get '/' do          # on a GET of the root URL, run this block
              "Howdy, Richard.  You are now in Cloud Foundry!  You have also been updated. App ID is #{ENV['VMC_APP_ID']}"
            end

            If you visit my application at http://richardhowdy.cloudfoundry.com, you can keep refreshing and see one of three possible application IDs returned, depending on which node is servicing your request.

          11. Add a custom environment variable and display it.  What if you want to add some static values of your own?  I entered “vmc env-add richardhowdy myversion=1” to define a variable called myversion and set it equal to 1.  I then updated my richardhowdy.rb file by appending “and seroter version is #{ENV['myversion']}” to the end of the existing statement (see the sketch below). A simple “vmc update richardhowdy” pushed the changes across and updated my instances.
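            Here is a sketch of what the route block would look like after that edit, reconstructed from the description above rather than copied from the original file:

            require 'sinatra'   # pull in the Sinatra library

            get '/' do          # on a GET of the root URL, run this block
              "Howdy, Richard.  You are now in Cloud Foundry!  You have also been updated. App ID is #{ENV['VMC_APP_ID']} and seroter version is #{ENV['myversion']}"
            end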

          Very simple, clean stuff, and since it’s open source, you can actually look at the code and fork it if you want.  I’ve got a to-do list item to integrate this with other Microsoft services, since I’m thinking that the future of enterprise IT will be a mashup of on-premises services and a mix of public cloud services.  The more examples we can produce of linking public and private clouds together, the better!


          Charles Babcock asserted “Company executives tout the application platform provider's ability to quickly transfer customers out of the crashing Amazon cloud in April” as a deck for his Engine Yard Sees Ruby As Cloud Springboard report of 5/11/2011 from Interop Las Vegas 2011 for InformationWeek:

          Providing programming platforms in the cloud, which seemed a little simple-minded when they first appeared--after all, programmers were already skilled at using the cloud, so how was the platform going to hold onto them?--may not be so silly after all.

          For one thing, everybody is starting to think it's a good idea. VMware is doing it with the Spring Framework and CloudFoundry.org. Red Hat is doing it with Ruby, Python, and PHP in OpenShift. Microsoft did it with Windows Azure and its .Net languages. And Heroku did it for Ruby programmers.

          Another Ruby adherent in the cloud is Engine Yard, perhaps less well known than its larger San Francisco cousin, Heroku. Engine Yard is a pure platform play. It provides application-building services on its site and, when an application is ready for deployment, it handles the preparation and delivery, either to Amazon Web Services’ EC2 or Terremark, now part of Verizon Business.

          Engine Yard hosts nothing itself. It depends on the public cloud infrastructure behind it. Nevertheless, Ruby programmers don't have to do anything to get their applications up and running in the cloud. Engine Yard handles all the details, and monitors their continued operation.

          So what did the management of Engine Yard, a San Francisco-based cloud service for Ruby programmers, think last December of the acquisition of Heroku by Salesforce.com for $212 million? "We couldn't be happier," said Tom Mornini, co-founder and CTO, as he sat down for an interview at Interop 2011 in Las Vegas, a UBM TechWeb event.

          "Five years ago, I said, 'Ruby is it,'" he recalled. He respects Python and knows programmers at Google like that open source scripting language. But he thinks he made the right call in betting on Ruby. "Python is going nowhere. You're not seeing the big moves behind Python that you do with Ruby. You're not seeing major new things in PHP. For the kids coming out of college, Ruby is hot," he said.

          Mike Piech, VP of product management, gave a supporting assessment. Salesforce's purchase was "one of several moves validating this space." Piech is four months out of Oracle, where he once headed the Java, Java middleware, and WebLogic product lines. "I love being at Engine Yard," he said.

          When asked what's different about Engine Yard, he succinctly answered, "Everything." Mainly, he's enjoying the shift from big company to small, with more say over his area of responsibility.

          Read more: Page 2:  An Anomaly In The Programming World


          <Return to section navigation list> 
