Friday, September 02, 2011

Windows Azure and Cloud Computing Posts for 9/2/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.

• Updated 9/2/2011 4:00 PM PDT with articles marked from Wade Wegner, Ernest Mueller, Shayne Burgess, Windows Azure Team, Michael Collier

Tip: Copy to the Clipboard, press Ctrl+f to open the Find text box, press Ctrl+v to paste the bullet symbol to the text box and search with it to find new articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table and Queue Services

Avkash Chauhan discussed Windows Azure Table Storage returns "412 Precondition Failed" Error in a 9/1/2011 post:

Recently, Windows Azure Table Storage access code that had been working fine started giving the following error:

412 Precondition Failed

The server does not meet one of the preconditions that the requester put on the request

Stack Trace:

System.Data.Services.Client.DataServiceRequestException was unhandled
Message=An error occurred while processing this request.
Source=Microsoft.WindowsAzure.StorageClient
StackTrace:
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
at Microsoft.WindowsAzure.StorageClient.TaskImplHelper.ExecuteImplWithRetry[T](Func`2 impl, RetryPolicy policy)
at Microsoft.WindowsAzure.StorageClient.TableServiceContext.SaveChangesWithRetries(SaveChangesOptions options)
at Microsoft.WindowsAzure.StorageClient.TableServiceContext.SaveChangesWithRetries()
…….
……..
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart(Object obj)
InnerException: System.Data.Services.Client.DataServiceClientException
Message=<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>ConditionNotMet</code>
<message xml:lang="en-US">The condition specified using HTTP conditional header(s) is not met.
RequestId:43b34531-c1ed-34ab-654a-53b3f3238764
Time:2011-08-12T12:27:13.6404017Z</message>
</error>
Source=System.Data.Services.Client
StatusCode=412
StackTrace:
at System.Data.Services.Client.DataServiceContext.SaveResult.<HandleBatchResponse>d__1e.MoveNext()
InnerException:

The likely problem is that you have a context that is tracking a table update. When that update fails because of an ETag mismatch and the same context is then reused (even for other changes), it will keep resending the failed update.
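A common way to recover from this (a sketch, not taken from Avkash's post; the entity type, table name and property names are illustrative) is to stop reusing the stale tracking context: detach the failed entity, re-read it with a fresh TableServiceContext so you pick up the current ETag, then reapply the change and save again:

// Sketch only – illustrative names, not from the original post.
// Assumes: using System.Linq; using System.Data.Services.Client;
//          using Microsoft.WindowsAzure.StorageClient;
public class MyEntity : TableServiceEntity
{
    public string Value { get; set; }
}

static void SaveWithEtagRecovery(CloudTableClient tableClient,
                                 TableServiceContext context, MyEntity entity)
{
    try
    {
        context.UpdateObject(entity);
        context.SaveChangesWithRetries();
    }
    catch (DataServiceRequestException) // surfaces the 412 / ConditionNotMet error
    {
        // Stop the stale context from resending the failed update.
        context.Detach(entity);

        // Re-read the row with a fresh context to get the current ETag,
        // reapply the intended change, and save again.
        var fresh = tableClient.GetDataServiceContext();
        var current = fresh.CreateQuery<MyEntity>("MyTable")
            .Where(x => x.PartitionKey == entity.PartitionKey
                     && x.RowKey == entity.RowKey)
            .FirstOrDefault();
        if (current != null)
        {
            current.Value = entity.Value;
            fresh.UpdateObject(current);
            fresh.SaveChangesWithRetries();
        }
    }
}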


See also Steve Marx (@smarx) posted Analytics, Leasing, and More: Extensions to the .NET Windows Azure Storage Library on 9/1/2011 in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below.


<Return to section navigation list>

SQL Azure Database and Reporting

Eric Nelson (@ericnel) asked Universal ASP.NET Providers??? Getting SQL Azure to handle your Session State in a 9/2/2011 post:

While pulling together this post on the August release of the Windows Azure Tools I noted that the ASP.NET MVC 3 template included “the new universal ASP.Net providers that support SQL Azure”. Which made me pause and think … “What universal ASP.NET Providers?”

Looks like their existence completely passed me by :-)

Scott Hanselman summarised the purpose of the Universal Providers back in June. Simply put they extend Session, Membership, Roles and Profile support to include SQL Compact Edition and SQL Azure. In all other ways they work like the existing SQL-based providers. They are released via a NuGet Package (something else I need to dig into more).

What this means is we now have a supported way of doing session state with SQL Azure, rather than via workarounds (e.g. this one from Wayne).

By default, the NuGet package sets the connection string to use a SQL Server Express database:

"Source=.\SQLEXPRESS;
AttachDbFilename=|DataDirectory|\aspnetdb.mdf;
Initial Catalog=aspnet;
Integrated Security=True;
User Instance=True;
MultipleActiveResultSets=True" 
providerName="System.Data.SqlClient"

For SQL Azure you simply change the connection string to point at your SQL Azure database, along these lines (the server name, login and password below are placeholders):

"Data Source=myserver.database.windows.net;
Initial Catalog=aspnetdb;
User ID=mylogin@myserver;
Password=mypassword;
MultipleActiveResultSets=True" 
providerName="System.Data.SqlClient"

Related Links


Jason Short posted an update to his DAC SQL Azure Import Export Service Client V 1.2 CodePlex project on 9/1/2011. From the Import Export Service Client documentation page:

DAC Import/Export Hosted as a Service
The new Import/Export Service for SQL Azure CTP is now live. The service will directly import or export between a SQL Azure database and Windows Azure BLOB storage. The service complements the client side tools already available. Databases exported to a BACPAC using the client side tools can be uploaded to Windows Azure BLOB storage and imported using the service. Similarly, databases exported to a BACPAC using the service can be imported using the client side tools.
Requirements

All you need is the EXE and the configuration file provided as a download in this release as well as a public internet connection. No other downloads are necessary to interact with the service.

Note: the service imports and exports between SQL Azure and Windows Azure BLOB storage only - you must have a storage account set up in order to use the service.

Usage
The new EXE provided as a part of this CodePlex release submits import, export, or status requests to the service. Once successfully submitted, your request is assigned a unique request ID which you can check for status.
  • To submit an export request:
    • Note: the BACPAC will be saved to your Windows Azure storage account
    • DacIESvcCli.exe -s serverName.database.windows.net -d AdventureWorkse -u MyAdminLogin -p MyPassword -bloburl http://blobAccountName.blob.core.windows.net/blobContainerName/bacpacFilename.bacpac -blobaccesskey MyKey -accesskeytype storage -x
  • To submit an import request:
    • Note: the BACPAC must already be in your Windows Azure storage account prior to submitting an import request
    • DacIESvcCli.exe -s server.database.windows.net -d AdventureWorkse -edition Web -size 1 -u MyAdminLogin -p MyPassword -bloburl http://blobAccountName.blob.core.windows.net/blobContainerName/bacpacFilename.bacpac -blobaccesskey MyKey -accesskeytype storage -i
  • To check the status of all requests for a server:
    • DacIESvcCli.exe -s server.database.windows.net -u MyAdminLogin -p MyPassword -status
  • To check the status of a specific request:
    • DacIESvcCli.exe -s server.database.windows.net -u MyAdminLogin -p MyPassword -requestid myRequestID -status

Note: if you are experiencing issues with request submission, we recommend you put quotes around the BLOB URI and key values.
The EXE provided as a part of this CodePlex release serves as a reference implementation for how to submit requests to the Import/Export service.
Backups

The export process is not intrinsically transactionally consistent. In order to make your export consistent, we recommend you copy the database first and then export from the copy. Alternatively you can use the Red Gate SQL Azure Backup Tool which does this for you automatically.
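The copy-then-export step maps to SQL Azure's CREATE DATABASE ... AS COPY OF syntax. Here's a rough C# sketch (server, database and credential names are placeholders, not from the CodePlex docs) that starts a copy and waits for it to come online before you submit the export request against the copy:

// Sketch: create a transactionally consistent copy before exporting.
// Assumes: using System; using System.Data.SqlClient; using System.Threading;
static void CopyDatabaseForExport()
{
    // Connect to the master database of the SQL Azure server (placeholder values).
    var master = "Server=tcp:myserver.database.windows.net;Database=master;" +
                 "User ID=MyAdminLogin@myserver;Password=MyPassword;Encrypt=True;";

    using (var conn = new SqlConnection(master))
    {
        conn.Open();

        using (var copy = new SqlCommand(
            "CREATE DATABASE [AdventureWorks_Copy] AS COPY OF [AdventureWorks]", conn))
        {
            copy.ExecuteNonQuery();
        }

        // Poll until the copy finishes (state_desc changes from COPYING to ONLINE),
        // then point the export request at AdventureWorks_Copy.
        using (var check = new SqlCommand(
            "SELECT state_desc FROM sys.databases WHERE name = 'AdventureWorks_Copy'", conn))
        {
            while ((string)check.ExecuteScalar() == "COPYING")
            {
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }
}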

Other Backup Notes

  • Passwords are not stored in the BACPAC, so you'll need to set the passwords on relevant logins upon import to a new server
  • SQL variant data types are not yet supported; support will be added in the next release of the Import/Export Service
  • The service is in CTP until the next service release
  • You can import (restore) BACPACs created by the service by using the client side tools also available in this project

Jason is a software developer on the SQL Server Manageability team.


The SQL Azure and Entity Framework Teams recently posted a Tutorial: Developing a Windows Azure Data Application Using Code First and SQL Azure:

Intended audience

Read this tutorial if you are a developer with a basic understanding of ASP.NET and SQL Server who would like to create a web application that runs on Windows Azure and uses a SQL Azure database. This tutorial assumes you have some limited knowledge of Windows Azure, so it moves quickly through the steps of creating and deploying your Windows Azure application. For a more in-depth introduction to developing web applications on Windows Azure, see Creating a Windows Azure Hello World Application Using ASP.NET MVC 3.

Objective

In this tutorial, you will learn how to:

  • Create a Windows Azure application that includes an ASP.NET MVC 3 web role
  • Build and run your application locally
  • Use SQL Azure to store data
  • Deploy your application to Windows Azure
  • Delete your deployment
Tutorial workflow

This tutorial will guide you through the process of creating a web application hosted on Windows Azure, adding a local data component that enables users to plan a road trip, and migrating the database components to SQL Azure. It uses the Entity Framework Code First feature to allow you to easily create a data model for the application. When you complete this tutorial, you’ll have a simple road trip application, as shown below:

Road Trip Application Screenshot
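The Code First feature mentioned above boils down to defining plain classes and a DbContext; the tutorial walks through its own model, but a minimal sketch of the pattern (class and property names here are illustrative, not the tutorial's) looks like this:

// Minimal EF Code First sketch – illustrative model only.
// Assumes: using System.Collections.Generic; using System.Data.Entity; (EF 4.1)
public class RoadTrip
{
    public int RoadTripId { get; set; }
    public string Destination { get; set; }
    public virtual ICollection<Stop> Stops { get; set; }
}

public class Stop
{
    public int StopId { get; set; }
    public string Name { get; set; }
}

public class RoadTripContext : DbContext
{
    // By convention, EF looks for a connection string named after the context;
    // point it at SQL Server Express locally and at SQL Azure when you deploy.
    public DbSet<RoadTrip> RoadTrips { get; set; }
    public DbSet<Stop> Stops { get; set; }
}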

Prerequisites
  1. A Windows Azure subscription and the Windows Azure tools and SDK, including the August 2011 tools release; see Installing the SDK and Getting a Subscription for instructions on getting a subscription and installing the necessary software.
Time requirement

It will take about 30 minutes to complete all of the modules in this tutorial.


<Return to section navigation list>

MarketPlace DataMarket and OData

Shayne Burgess (@shayneburgess) announced an Update to the OData Library Available on CodePlex and NuGet in a 9/2/2011 post to the WCF Data Services blog:

We have just released a new drop of ODataLib and EdmLib on CodePlex as a shared source project. ODataLib is a library used for advanced OData serialization and deserialization scenarios, and EdmLib is a library used to manipulate entity data models. We invite you to download the code and test it out – this is very much an alpha release so any and all feedback is welcome.

You can download the source for these libraries (and a number of others that we have published) at odata.codeplex.com. The libraries are also available as a NuGet package at http://nuget.org/List/Packages/ODataLibrary.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Michael Collier (@MichaelCollier) described Accessing a Service on the Windows Azure Service Bus from Windows Phone 7 in an 8/19/2011 post (missed when published):

Recently I was working on putting together a sample of how to use the service relay feature of the Windows Azure Service Bus. By using the Windows Azure Service Bus as a service relay, it is possible to expose on-premises services in a secure way, without the need to punch holes in your firewall or stand up a lot of new infrastructure. Part of the sample included accessing the service from Windows Phone 7. Easy enough, right?

Setting up the server side to register on the Service Bus is fairly straightforward. There are several good examples online, the Azure AppFabric SDK, and the Windows Azure Platform Training Kit. So, I’m not going to go into much detail here on that. However, I do want to be sure to point out that services can have one of two types of client authentication – none and an access token. Obviously, not requiring a token is a lot easier – just call the service like any other WCF REST service. Adding a security token ups the complexity a little. By requiring an access token a client would need to authenticate with Windows Azure Access Control Services first, and then provide the token as part of the service call. An area that can be confusing when dealing with authentication in this situation is the Service Bus still uses ACS v1 for authentication, not the newer, cooler, ACS v2. You will see in the Windows Azure Management Portal that the Service Bus is set to use ACS V1, but when you look at Access Control in the portal, you’ll just see your ACS V2 namespace.

The client security requirement is configured in the server side .config file.

<bindings>
  <webHttpRelayBinding>
    <binding name="">
      <!--<security relayClientAuthenticationType="None" mode="Transport"/>-->
      <security relayClientAuthenticationType="RelayAccessToken" mode="Transport"/>
    </binding>
  </webHttpRelayBinding>
</bindings>

Now that I had the server side set up, it was time to work on the WP7 side of the sample. As I mentioned before, there are a lot of samples available for setting up both the service and client side of a Service Bus connection. What there were not (at least that I could find) were examples of setting up a client to authenticate with ACS without using assemblies from the Azure AppFabric SDK such as Microsoft.ServiceBus.dll. Adding to the “fun” was that all web calls on Windows Phone 7 must be done asynchronously – meaning the System.Net.WebClient behaves a little differently on WP7 than it does on a desktop or server app. So, that said, let’s get on with it!

There are two things that are needed – authentication with ACS and sending the ACS provided token to the service. Let’s start with authenticating the client with ACS. For my sample I decided to go with what is the most basic form of authentication, a shared secret. This will involve creating a WRAP message, sending it to ACS, and then extracting the ACS token from the response.

private void GetDataUsingAcsSecurity()
{
    var webClient = new WebClient();
    string acsUri = "https://mynamespace-sb.accesscontrol.windows.net/WRAPv0.9/";

    string data = string.Format("wrap_name={0}&wrap_password={1}&wrap_scope={2}",
        HttpUtility.UrlEncode("owner"),
        HttpUtility.UrlEncode("iALK8XkWQb89G5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"),
        HttpUtility.UrlEncode("http://mynamespace.servicebus.windows.net"));

    webClient.Headers["Content-Type"] = "application/x-www-form-urlencoded";

    webClient.UploadStringCompleted += webClient_UploadStringCompleted;
    webClient.UploadStringAsync(new Uri(acsUri), "POST", data);
}

There are a few things to point out about this code. Now, maybe it’s because I haven’t written a lot of C# code to access a REST service, but some of the code needed to make this work was not obvious to me. First, you must set the “Content-Type” HTTP header. Second, you’ll want to use the STS Endpoint that is listed in the Windows Azure Management Portal.

STS Endpoint

So, now we need the code to handle the response. That code will need to extract the ACS token from the response and pass it on to the service.

void webClient_UploadStringCompleted(object sender, UploadStringCompletedEventArgs e)
{
    string token = e.Result.Split('&')
        .Single(x => x.StartsWith("wrap_access_token=", StringComparison.OrdinalIgnoreCase))
        .Split('=')[1];
    string decodedToken = HttpUtility.UrlDecode(token);

    var uriBuilder = new UriBuilder("https://mynamespace.servicebus.windows.net/claims");

    uriBuilder.Path += string.Format("/claim/{0}/amount", PolicyIdBox.Text);

    var webClient = new WebClient();
    webClient.Headers["Authorization"] =
        string.Format("WRAP access_token=\"{0}\"", decodedToken);

    webClient.DownloadStringCompleted += (s, args) => ParseAndShowResult(args);

    webClient.DownloadStringAsync(uriBuilder.Uri);
}

The key, at least for me, was the line which decodes the token received from ACS. Note – ParseAndShowResult() is simply a helper method to parse the result of the service call and show on the phone, so nothing exciting going on there.

In the end, this wasn’t all that bad. The problems for me were really related to a lack of understanding of what was needed with regards to authenticating with ACS and likely a little inexperience in working with REST services. Hopefully this quick sample will save someone else from some frustrating 2:00 AM coding sessions.



<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Steve Marx (@smarx) and Wade Wegner (@WadeWegner) star in Cloud Cover Episode 58 - Exploring the Windows Azure Toolkit for Android of 9/2/2011:

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show at @CloudCoverShow.

In this episode, Steve and Wade cover the recent release of the Windows Azure Toolkit for Android, along with updates to the Windows Azure Toolkits for Windows Phone and iOS. Additionally, Steve discusses some tips for managing topology changes with the RoleEnvironment Changing event.
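The RoleEnvironment.Changing tips themselves aren't reproduced in the show notes, but for reference, a bare-bones sketch of hooking the event looks like the following (whether to cancel and recycle is an application-specific decision, so treat this as illustrative rather than Steve's actual guidance):

// Sketch: reacting to topology (instance count) changes in a role.
// Assumes: using System.Linq; using Microsoft.WindowsAzure.ServiceRuntime;
public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        RoleEnvironment.Changing += (sender, e) =>
        {
            // Setting e.Cancel = true restarts the instance to pick up the change;
            // leaving it false keeps the instance running so you can react in
            // the RoleEnvironment.Changed event instead.
            if (e.Changes.OfType<RoleEnvironmentTopologyChange>().Any())
            {
                e.Cancel = false; // handle the new topology in place
            }
        };

        return base.OnStart();
    }
}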

In the news:


The Windows Azure Team (@WindowsAzure) posted New Blog Post Spotlights, “Who Else is Using Windows Azure…?” In the UK on 9/2/2011:

Windows Azure Developer Evangelist David Gristwood, spotlights several UK customers who have had great success on Windows Azure in his most recent post, “Who Else is Using Windows Azure…?” The post also includes videos taken during a recent TechDays UK event with four partners talking about why they chose Windows Azure and what their customers are building. Click on each logo to watch the video.


Gavin Clarke (@gavin_clarke) asked Who says Azure isn’t cool and trendy now? in a 9/2/2011 post to The Register:

Apple has selected Microsoft's Azure and Amazon's AWS to jointly host its iCloud service, The Reg has learned.

We understand that Apple has barred Microsoft and Amazon from discussing what would otherwise be a high-profile deal, especially for Microsoft's fledgling Azure cloud service.

But Reg sources close to Microsoft this week confirmed rumours circulating in June that Apple's iCloud is running on Azure and Amazon. Customers' data is being striped between the pair. iCloud was released as a beta in August and is expected by the end of this year.

Apple and Amazon did not respond to our requests to comment, while Microsoft told us: "At this time, we don't have any comment around whether Apple is a Windows Azure customer."

According to our sources, Microsoft insiders see the iCloud deal as a validation of Azure. So far, Microsoft has pushed Azure using the marketing 101 playbook. Redmond has flagged up the start-ups and websites it has attracted in an attempt to prove to other devs that Azure is "cool". It is also promoting those corporate customers who've floated onboard to prove its cloud is being taken seriously by business users.

iCloud puts Azure into a different league, given the brand love for Apple and the Apple management's fanatical attitude to perfection. It is a "huge consumer brand, a great opportunity to get Azure under a very visible workload," our sources told us.

Apple is understood to have elected to outsource the plumbing of iCloud because its core competence lies in "building great consumer experiences". It didn't make sense for Apple to become a cloud provider.

By selecting two suppliers, both very different in their services and their level of maturity, Apple is reducing its risk of becoming hostage to a single supplier.

Microsoft and Amazon will now need to ensure they keep up with the other on reliability, new features, security, and price.

More to learn

Apple has had a recent unpleasant experience in providing online services: in a famous memo, Steve Jobs admitted his company had "more to learn about internet services" following the outages and failures of his precursor to iCloud for email, contacts, calendar, photos and other files – MobileMe.

Also, there's the cost and delay involved in building the infrastructure that iCloud requires as well as assembling and building the core services. Buildings, power, servers, storage, the recruitment of personnel and having the facility certified would cost a minimum of $100m. A more realistic cost for full-scale roll-out could be closer to $1bn.

Microsoft has already built several mega data centres to run Azure, in addition to its search engine Bing, in anticipation of big customers. The company has at least 24 data centres running Azure worldwide.

To give you an idea of the scale, the first phase of one of these in Chicago is 700,000 square feet; it uses a modular design based on containers. Chicago has a capacity of 112 containers, which together hold some 224,000 servers – Microsoft uses Dell.

That said, Apple could be biding its time in using Microsoft and Amazon.

Apple is building a $500m data centre in North Carolina. If reports of the hardware going in there are correct, the centre's data capacity should run into tens of petabytes and be more than suited to running iCloud – for now, at least.

Striping

iCloud is believed to be running on the full Azure service – the Windows Azure compute and controller part and SQL Azure storage which hosts tables, queues and flat files. It's not clear how many of Microsoft's Dell servers are hosting iCloud.

The iCloud data is being striped between the Amazon and Microsoft clouds. That means Apple or Microsoft or Amazon or all three have to implement through the software a way of identifying which user's information is stored in what locations and then to route requests to the correct server.

If the data is duplicated, then software would handle load-balancing or randomly send user's requests to one cloud or the other, or change access policies depending on things like network speed and server availability.

The striping process segments logically sequential data such as single files so segments can be written to different physical devices. The process can help speed up access to data because you don't rely on read/write access speeds of a single disk in a machine.

The challenge in running two clouds under an overall service, if there is one, will be in smoothly managing a unified system where the controllers could well be running on different operating systems or be written in different languages.

This is a very real possibility; while AWS and Azure emulate virtual servers, most AWS users run on Linux while all Azure users have to run on Windows. Even if a cross-platform language like Java is used to bridge the gap, then tuning the software for both will mean additional cost and complexity.

One way to avoid managing different code bases and ensuring the best levels of performance could be for iCloud to also run on Windows on AWS. This would be a potentially even bigger victory for Microsoft as it would mean iCloud isn't just running on Azure from Microsoft but is also running on Windows while on Amazon.

I believe Gavin’s claim that Microsoft “has at least 24 data centres running Azure worldwide” is grossly exaggerated.


Steve Marx (@smarx) posted Analytics, Leasing, and More: Extensions to the .NET Windows Azure Storage Library on 9/1/2011:

I’ve just published the smarx.WazStorageExtensions NuGet package and a corresponding GitHub repository. Some of it is just a repackaging (with minor improvements) of code I’ve published on this blog before, but I’ve also added some support for the new storage analytics API, based loosely on the code from the Windows Azure storage team (logging and metrics).

From the GitHub README:

smarx.WazStorageExtensions is a collection of useful extension methods for Windows Azure storage operations that aren't covered by the .NET client library.

It can be installed using the NuGet package via install-package smarx.WazStorageExtensions and contains extension methods and classes for the following:

Here’s some sample code to get you started with the library:

// These samples assume the Microsoft.WindowsAzure and Microsoft.WindowsAzure.StorageClient
// namespaces plus the library's extension namespace are imported.
// enable metrics with a 7-day retention policy
static void Main(string[] args)
{
    var blobs = CloudStorageAccount.Parse(args[0]).CreateCloudBlobClient();
    var props = blobs.GetServiceProperties();
    props.Metrics.Enabled = true;
    props.Metrics.RetentionPolicy = new RetentionPolicy { Enabled = true, Days = 7 };
    blobs.SetServiceProperties(props);
}
        
// try to lease a blob and write "Hello, World!" to it
static void Main(string[] args)
{
    var blob = CloudStorageAccount.Parse(args[0]).CreateCloudBlobClient().GetBlobReference(args[1]);
    var leaseId = blob.TryAcquireLease();
    if (leaseId != null)
    {
        blob.UploadText("Hello, World!", leaseId);
        blob.ReleaseLease(leaseId);
        Console.WriteLine("Blob written!");
    }
    else
    {
        Console.WriteLine("Blob could not be leased.");
    }
}

// check the existence of a blob
static void Main(string[] args)
{
    var blob = CloudStorageAccount.Parse(args[0]).CreateCloudBlobClient().GetBlobReference(args[1]);
    Console.WriteLine("The blob {0}.", blob.Exists() ? "exists" : "doesn't exist");
}

I hope you find these useful. Let me know if you have any feedback! Those links again are:


Stephen Withers asserted “Microsoft's Azure service and Sydney-based Hands-on Systems played important roles in getting Harvey Norman Big Buys up and running as quickly as it did” as a deck for his Harvey Norman Big Buys sees blue skies with Azure post of 9/1/2011 to the IT Wire blog:

Harvey Norman Big Buys, an online franchisee of the well-known retail operation, launched in April this year, just two months after the decision was made to proceed with the idea.

Kaine Escott, director of Harvey Norman Big Buys, told iTWire "We were on a very tight timeframe" of around seven to eight weeks to launch the site, and "we knew we were going to be in a volatile environment" not just with normal seasonal fluctuations but due to the effects of Gerry Harvey's media appearances.
Taken together, these factors meant "the cloud was almost the only solution," he said. From there, picking Azure "was a fairly easy decision" due to Mr Escott's previous relationship with Microsoft when he worked at Harvey Norman. "We didn't even shop it around."

image"It went very well. We had a fantastic partner in Hands-on," he said. The company's ecommerce application architecture was generally suited to use on Azure, and Microsoft provided some Azure expertise.
"Eight weeks later we were live," he said. "I have a history in IT and that was an amazing result."

The company usually runs three instances of the software on Azure, typically at 5% utilisation. But when Big Buys heard that Gerry Harvey was going to appear on Today Tonight and A Current Affair on the same evening, it was clear that more resources would be needed.

The double appearance was confirmed at 4.30pm, and at 6.15pm another three instances were started. While the programs were screening, the six instances ran at around 80% utilisation.

The spike in page views was massive - where the site usually saw 15,000-20,000 page views per day, there were around 370,000 that evening. A traditional environment "would have fallen over" unless the company had spent a fortune on hardware, and even then probably couldn't have ramped up quickly enough, Mr Escott said.

And it wasn't just a case of a good first impression: "I'd never do it any other way again," said Mr Escott, who is looking forward to the possible availability of Dynamics NAV (the Microsoft ERP system used by Big Buys) on Azure. "I'd do that in a heartbeat," he said.

Disclosure: Mr Escott was interviewed at Tech.Ed [Australia 2011], where the writer was the guest of Microsoft.


Maria Deutscher claimed Microsoft Azure Toolkit for Android Ramps Mobile Competition in a 9/1/2011 post to the SiliconANGLE blog:

Software giant Microsoft has become invested in both the PaaS and mobile markets, and Azure is proving to be a very handy tool bolstering the company’s market share on both ends. Microsoft took an interesting approach by offering Azure toolkits for Windows Phone 7’s two main competitors (alongside the platform itself): iOS, and now Android.

The company launched version 0.8 for Android devs – what is essentially a carved out core product without some of the add-ins that would finally push it up a notch. ZDNet’s Mary-Jo Foley explained:

“The numbering sequence (0.8) is not consequential at the moment,” a spokesperson told me. “The only reason the toolkit is v0.8 is because 1) we haven’t yet built out the documentation for the API, and 2) the sample application (while fully functional) is still an extremely basic application and not as functional as the iOS/WP equivalents.”

In addition to the launch of version 0.8 for Android, Microsoft took the opportunity to update the two other versions of its Azure toolkit on the same day. The one for Windows Phone 7, v1.3, now features SQL Azure support as a membership provider and data source using OData, and the Windows Phone Developer Tools 7.1 Release Candidate. The iOS edition, v1.2.1, can now power Objective-C and XCode-based iPhone and iPad apps.

Microsoft is just as aggressive about Azure as Amazon is about AWS, or at the very least is making efforts to follow the latter’s price-slashing campaign that’s been going on throughout the past couple of years. A couple of weeks ago, Microsoft lowered its rate for extra small compute by 20 percent and offered customers an introductory deal.

Going back to the mobile twist on Azure, VMware is another example of a company gunning for the mobile space with a cloud offering; it launched the fifth version of its desktop virtualization offering two days ago. View 5.0 is priced at $150 per concurrent user for the Enterprise Edition, and $250 per concurrent user for the Premier Edition.

In the same vein:

Chloe Herrick (@chloe_CW) asserted “Westpac has moved to dispel a number of common myths associated with banks using Cloud services, following its own recent journey to the Cloud” as a deck for her Tech Ed [Australia] 2011: Westpac debunks Cloud myths for banks article for Computerworld:

Westpac has moved to dispel a number of common myths associated with banks using Cloud services, following its own recent journey to the Cloud.

Addressing attendees at this year’s Tech Ed conference on the Gold Coast, Westpac principal architect, Ward Britton, detailed the bank’s deployment of Microsoft’s Azure hosting platform.

Previously, the bank’s quantitative analysts used an in-house analytics platform to carry out risk calculations overnight but to produce increasingly granular information, it needed much more computing power.

“The journey started with one of our quantitative analysts ... who said, ‘Look I’ve got some issues here, this isn’t working, it’s taking a long time, when it’s running my desktop is locked up and I can’t do anything, but I need to do it quicker, and I need to do it reliably as sometimes it even crashes’,” Britton said.

Westpac kicked off a proof of concept (POC) to prove that they could take the application (Numerix) and the job running on the analyst’s desktop, plug in HPC (High Performance Computing) and Azure and “make it run way faster” and more reliably.

The Numerix application plugs into Excel and enables the quantitative analyst to utilise a mathematical library and a pricing library, which is provided in the package.

Myth number one, he said, is that banks cannot use the public Cloud as it is too insecure for the sensitive data.

“Customer information is extremely important to banks, it’s looked after with maximum care and security, it doesn’t leave the bank, go offshore or get outsourced and as such it doesn’t belong on the public Cloud,” he said.

“But what about data that doesn’t have customer related information in it ... and what if the processing requirements for this information requires a huge compute capacity, perhaps this could be a candidate to put in the public Cloud.”

After establishing there was no customer information being used in the POC, Westpac charged ahead with the Cloud.

“With the right data, the appropriate amount of governance and diligence around that I reckon this myth is busted,” Britton said.

Britton also quashed the myth that buying a bigger desktop will prevent desktop apps from locking it down.

“Desktop apps can grind the desktop to a halt, Numerix plugs into Excel 2010, runs Monte Carlo simulations, it uses a lot of CPU and RAM to do that, so the biggest baddest desktop you put there to run this on will eventually run out of pump,” he said.

“By shifting the Numerix workload off the PC and onto the HPC grid, we could run it pretty quickly, so this myth that desktop apps will lock up your PC is busted.”

The third myth, Britton said, was that having the capability to burst to the Cloud would be “cheap, quick and easy”.

“To solve the initial problem of Excel locking up the desktop, we offloaded the work onto HPC, the desktop was still running, numbers came back and results happened quickly, essentially problem solved,” he said.

One of the objectives of the POC was to be cost effective, Britton said, therefore standing up an HPC framework including the compute nodes, the patching, the backup and all of the “watering and feeding” to keep it going has associated costs.

“We needed to find another way and that’s where HPC Cloud bursting piece came in,” he said.

“When we first began negotiations with Microsoft we thought it would be easy, just deploy the module and you’re away, but it definitely wasn’t like that.”

Chloe Herrick travelled to Tech Ed 2011 as a guest of Microsoft Australia.

Westpac is one of Australia’s top five banks.


Eric Nelson (@ericnel) reported August 2011 release of the Windows Azure Tools adds some welcome bits on 9/1/2011:

Downloadable through the Web Platform Installer, this update adds some very nice features.

Profile applications running in Windows Azure

With profiling support in the Windows Azure Tools you can easily detect performance bottlenecks in your application while it is running in Windows Azure.


Creating MVC3 web roles

The new template includes the new universal ASP.Net providers that support SQL Azure and it will also make sure that MVC assemblies are deployed with your application when you publish to Windows Azure.


Multiple Configurations

The Windows Azure tools now support multiple service configurations in the same Windows Azure Project. This is especially useful for managing different Windows Azure Storage connection strings for local debugging and running in the cloud.
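In practice the role code reads a single setting name, and the value it gets depends on which service configuration (Local or Cloud) is selected when you debug or publish. A quick sketch ("DataConnectionString" is an illustrative setting name):

// Sketch: the same setting name resolves to different values per configuration.
// Assumes: using Microsoft.WindowsAzure; using Microsoft.WindowsAzure.ServiceRuntime;
static CloudStorageAccount GetStorageAccount()
{
    // Defined in both ServiceConfiguration.Local.cscfg (e.g. UseDevelopmentStorage=true)
    // and ServiceConfiguration.Cloud.cscfg (a real storage account connection string).
    return CloudStorageAccount.Parse(
        RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
}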


The following explains the relationship:

Multiple Service Configurations

Read more about the new features here.

Related Links


Wade Wegner (@WadeWegner) described Building Windows Phone Applications Using Windows Azure in an 8/31/2011 post to the Windows Phone Blog (missed when published):

Windows Phones provide many opportunities for developers to build great applications. Nevertheless, in some cases a developer is limited by the attributes specific to any mobile device – constrained processing, battery life, limited storage, and intermittent connectivity. Consequently, it’s important to tie into services off of the device, ideally in a location with scalable processing, plenty of power, elastic storage, and ubiquitous connectivity. Enter cloud computing with Windows Azure.

In many ways, cloud computing levels the playing field. Any developer can tap into a larger – and virtually limitless – pool of resources from which to pull. For developers, Windows Azure – Microsoft’s cloud-computing platform – is a great complement to mobile application development. Simply put, Windows Azure allows you to focus on your application. You don’t have to worry about managing or monitoring the operating system, just as you don’t have to worry about the hardware or the network. As a managed service, Windows Azure takes care of these things for you.

To make it easier for Windows Phone developers to use Windows Azure, we have created the Windows Azure Toolkit for Windows Phone. This toolkit provides a set of Visual Studio project templates that give you an advanced starting point for building Windows Phone applications tied into services running in Windows Azure. The toolkit also includes libraries, sample applications, and documentation.

Today we’ve released version 1.3 which includes some great updates, including:

For more information on this release you can watch this video on Channel 9:

To get started, visit the Windows Azure Toolkit for Windows Phone on CodePlex. While you can review the source code online, I recommend you download the self-extracting executable. This tool not only gives you all the source code, but also a Visual Studio extension that includes the project templates and a dependency checker that ensures you have all the required prerequisites.

Configuration Wizard

Once installed, you’ll get two new project templates under Cloud templates – Window Phone Cloud Application and Windows Phone Empty Cloud Application.

Windows Phone Cloud Application

Create a new Windows Phone Cloud Application. This will launch a wizard that will collect information from you required for running your application with services. The wizard is adaptive, and will only request information based on what you select.

New Windows Phone Cloud Application Project

Choose both Windows Azure Storage and SQL Azure Database – this way you can try everything out. Just as Windows Phone provides an emulator for development, so too does Windows Azure provide an emulator to simulate running applications in the cloud. Consequently, in the next step, choose the Use Storage Emulator option – you can always change this setting later. Similarly, for SQL Azure, choose Use local SQL Server instance.

One of the gems in this toolkit is the built-in support for the Microsoft Push Notification Service (MPNS) – without having to write a single line of code you can host your MPNS services in Windows Azure that are already connected to your application.

Microsoft Push Notification Service

In the last step, you can choose how to manage user authentication. The toolkit provides two forms of user management – a simple ASP.NET membership store (which provides typical username/password support) or the Access Control Service (ACS). ACS is a Windows Azure service that allows you to tap into existing identity providers such as Live ID, Google, Yahoo, and Facebook – in fact, even corporate identities are supported through ADFS. The toolkit makes using ACS extremely easy – not only will it collect information needed to use ACS, but it will also reach out to the ACS management rest endpoints to set everything up automatically. For a detailed explanation, see Windows Azure Toolkit for Windows Phone 7 1.2 Will Integrate With ACS from Vittorio Bertocci.

User Authentication

Once the wizard has run its course you’ll have a solution ready to run – so hit F5!

Login

You are first presented with the opportunity to login. Depending on the user authentication mode you choose, you’ll either use ACS or create a new user.

NOTE: In order to consume the REST services over HTTPS in a phone device or in the Windows Phone Emulator, you need to use a trusted SSL certificate. If you do not have one, you can use a self-signed certificate, but you need to install it in the phone before consuming the services. Since the Compute Emulator always uses the 127.0.0.1 self-signed certificate, we need to install it in the Windows Phone Emulator before continuing with the next steps.

Once you’ve logged in you can start to try out the various aspects of the toolkit. There are five areas to explore:

  • Push Notifications
  • Windows Azure tables
  • Windows Azure blobs
  • Windows Azure queues
  • SQL Azure

To make it really simple to try out and test the push notifications – as well as provide you a demonstration of how to go about registering a notification channel, storing it in Windows Azure tables, and then sending a message to the phone – we have also included a simple web application that you can use to send notifications to the phone. First, enable push notifications on the client …

Enabled MPNS

… then log into the web application running in the Windows Azure compute emulator (admin login is listed in the documentation) and choose the Microsoft Push Notifications tab. You should see a channel established for your user. Type a message then click Send Raw.

Send a notification

Back on the emulator you will see that the application has received the message from the MPNS. Try it out for toast and tile notifications too!

Receiving a raw messageReceiving a toast and tile message

Not bad for an out-of-the-box experience! You can also test out toast and tile notifications.

While this is a sample application, the value is that it includes all the required piping to handle the Windows Phone push requests from the client, surfacing them through the web application, and then letting the admin send notifications back to the phone.
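For reference, the client side of that piping – opening a notification channel and handling raw notifications – looks roughly like the sketch below (the channel name and handlers are illustrative; the toolkit's generated code is more complete):

// Sketch: opening an MPNS channel and handling raw notifications on the phone.
// The channel URI is what gets stored in Windows Azure tables so the web
// application can push to this device. Assumes: using System; using System.IO;
// using Microsoft.Phone.Notification;
void RegisterPushChannel()
{
    var channel = HttpNotificationChannel.Find("MyChannel");
    if (channel == null)
    {
        channel = new HttpNotificationChannel("MyChannel");
        channel.Open();
    }

    channel.ChannelUriUpdated += (s, e) =>
    {
        // Send e.ChannelUri to the service so it can be stored for later pushes.
    };

    channel.HttpNotificationReceived += (s, e) =>
    {
        // The raw notification body arrives as a stream.
        using (var reader = new StreamReader(e.Notification.Body))
        {
            string message = reader.ReadToEnd();
            // Marshal to the UI thread (Dispatcher.BeginInvoke) to display it.
        }
    };
}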

And of course, there’s more – explore the ability to create and delete tables, enter rows of data, upload pictures from the device camera into blob storage, enqueue and dequeue messages in queues, and display read-only data in SQL Azure.

BabelCam

One of the more recent updates to the toolkit includes the sample application BabelCam. BabelCam started as a proof-of-concept application I built for my MIX11 talk Building Windows Phone 7 Applications with the Windows Azure Platform. Since then we’ve not only cleaned up and included the source code, but we’ve also published it to the Windows Phone marketplace – download BabelCam and try it out!

We’ve been able to move very quickly in developing this toolkit, based largely on great feedback we’ve received from users – please keep it coming! As a refresher, here are some of the updates we’ve made over the last six months:

Mango opens up a lot of new opportunities for us to build new capabilities and applications that combine the best of Windows Azure and Windows Phone – exciting times ahead!

Be sure to download the Windows Azure Toolkit for Windows Phone today.

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Andrew Coates posted the notes for his DEV308 Visual Studio LightSwitch - Beyond the Basics session at TechEd New Zealand 2011:

Apologies that this post is fairly nonsensical. I've put my raw NZ TechEd 2011 notes up here for my reference. I'd like to think that I'll refine them over time, but that probably won't be the case.

Slides: blogs.msdn.com/acoat (@coatsy)

Makes forms over data easier. For the desktop and the cloud.
The simplest way to create business applications for the desktop and the cloud. Technically adept user.

Uses best practices behind the scenes.

Submit Pipeline - places to hook in for extensibility.
Access Control hooks

VS Pro
Custom WCF Services
SilverLight Controls

MVVM Pattern for GUI


DataWorkspace is the in memory graph of all data at the moment.

Bing Map Control - need a Bing API key

If the above were all Andrew’s notes, the presentation must have been short.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

The Windows Azure Case Studies Team posted Implementing Support and Monitoring For a Business-Critical Application Migrated to Windows Azure to TechNet in 8/2011:

Microsoft IT had recently migrated BCWeb—a complex, business-critical application—to the Windows Azure™ platform. To ensure ongoing application availability, the team needed to implement a reliable and comprehensive monitoring and support solution for BCWeb. Microsoft IT accomplished this by combining the Windows Azure integration and monitoring capabilities with the Microsoft® System Center Operations Manager management capabilities.

Download: Technical Case Study, 299 KB, Microsoft Word file


Introduction

Business Case Web (BCWeb) is an internal, web-based application that Microsoft uses to create the business case for product pricing exemptions. BCWeb is composed of three distinct application components: the core BCWeb component, the Workflow Routing and Approval system (WRAP), and Rapport. The core BCWeb component is responsible for providing a user interface, and for the underlying functionality that enables users to generate business cases for pricing exceptions. WRAP routes the pricing exception requests for approval within the Microsoft corporate infrastructure. Rapport provides a user interface for the WRAP approval process.

BCWeb has a user base of 2,500 internal Microsoft employees. In 2010, Microsoft used BCWeb to process approximately 27,000 pricing exception requests.

BCWeb Platform Overview

BCWeb was migrated to Windows Azure as a pilot project to develop and capture best practices for migrating enterprise applications to Windows Azure. The core BCWeb components are hosted on the Windows Azure platform. However, BCWeb is also integrated with a number of components that are hosted on the Microsoft IT corporate network, and are external to the Windows Azure platform.

Situation

The primary reason for migrating BCWeb to Windows Azure was as a migration pilot project. However, BCWeb was also experiencing performance and reliability issues in its previous environment. Although the Windows Azure migration brought increased reliability and performance to BCWeb, ongoing tuning of the application environment was required. Microsoft IT realized that it needed a comprehensive monitoring solution to enable ongoing reliability, and to measure internally established service level agreements (SLAs).

BCWeb Architecture

BCWeb is divided into three distinct Windows Azure Services, which in turn house the main application components: BCWeb, WRAP, and Rapport. The three applications are separated by design to enable a modular approach to application updates and refactoring.

Windows Azure Components

The first component application—the BCWeb core—is implemented as a Windows Azure Web role that hosts the UI for generating business case documents. BCWeb uses two Worker roles: the first Worker role hosts the core BCWeb Service and other Windows Communication Foundation (WCF)–based services, and the second Worker role hosts background and notification processes used by the BCWeb application. The WRAP application is implemented as a multi–instance Worker role that contains all of the necessary services required to perform the routing and approval operations for BCWeb–generated business case documents. The Rapport Windows Azure Service hosts the Rapport application. Rapport is composed of a Web role that hosts the UI, and a Worker role that hosts the Rapport Windows Communication Foundation (WCF) Service. SQL Azure databases host native data storage for the entire BCWeb application infrastructure.

On-Premises Distributed Components

BCWeb includes several critical components that are not hosted on the Windows Azure platform. These components primarily provide access to external data that is required for BCWeb functionality. The two primary external components are SAP (for business data), and the Microsoft corporate Active Directory® Domain Services database (for infrastructure and organizational data). Both of these components are outside the management scope of BCWeb, but are critical to its functionality. Both components are also hosted on-premises within the Microsoft corporate network. An on-premises database—the Licensing Information Repository (LIR)—hosts information used for data warehousing. The BCWeb transactional SQL Azure databases export information on an ongoing basis to the on-premises LIR database (hosted on Microsoft SQL Server®)—for reporting purposes.

Figure 1. BCWeb Windows Azure Architecture

Solution

Microsoft IT knew that implementing a support and monitoring solution for BCWeb would be a challenging task. The BCWeb migration to Windows Azure meant that the support and monitoring processes used with the previous BCWeb version would require reassessment and redesigning to accommodate the new application infrastructure.

Design Goals

Microsoft IT began planning for the BCWeb support and monitoring solution with several general design goals in mind:

  • The solution must provide support and monitoring for all critical aspects of BCWeb functionality, including components hosted on the Windows Azure platform, and components hosted on-premises that are external to Windows Azure.
  • BCWeb monitoring should be centralized and consolidated into one management console.
  • The solution should leverage existing Microsoft IT infrastructure as much as possible.
  • Windows Azure–based monitoring components should be used as much as possible.
Providing Support for a Distributed Application

The new version of BCWeb contained both components from the Microsoft corporate network, and components from the Windows Azure platform. As a result, several changes to the previous support model were required.

The distributed nature of BCWeb on the Windows Azure platform forced Microsoft IT to reassess the methods used to support the application. In the previous BCWeb version, the scope of support was limited to the Microsoft corporate network. One of the important considerations when leveraging Windows Azure for internal enterprise applications is that corporate network users connect to resources outside of the network (Windows Azure) to run "internal" applications.

In the BCWeb Windows Azure version, the following components and their associated support teams became part of the application's support infrastructure:

  • Windows Azure - core application
  • SQL Azure - data storage
  • Active Directory Federation Services (AD FS) - authentication
  • The Microsoft corporate internet connection - access to Windows Azure components

These systems would need to be incorporated into the BCWeb support model, and the previously established SLAs would require reassessment to reflect the BCWeb support requirements' increased complexity.

The BCWeb team was still the contact point for end users, but BCWeb support now relied on the Windows Azure platform support team, the AD FS support team, and the Microsoft IT network support team, to provide support for their associated systems.

As a result, the following areas needed reassessment:

  • SLAs for response and resolution time. The BCWeb support team had to include the response times for the other support teams in its overall response and resolution time SLAs.
  • SLAs for performance and availability. BCWeb application SLAs needed to integrate performance and availability benchmarks from all integrated components. Performance and availability for BCWeb was now subject to the performance and availability of several components outside the control of the BCWeb team.

The support team quickly discovered that with a hybrid application, support complexity and dependencies increase as more third-party components are involved. All of these components had an impact on the BCWeb end-to-end SLAs.

Determining Key Points of Failure

The first task in establishing a reliable and comprehensive monitoring solution for BCWeb was to determine the key points of failure for the application. The BCWeb support team identified the key points of failure within BCWeb, and then put the appropriate monitoring processes in place to either prevent failure, or quickly identify when a failure occurred.

When Microsoft IT designed the monitoring solution, these points of failure were the first aspects of BCWeb that they addressed.

Designing Operational Monitoring for BCWeb

Microsoft IT outlined the following general monitoring requirements for BCWeb:

  • Error logging. Record warning and error-related messages from all applicable components.
  • Platform monitoring. Monitor important aspects of Windows Azure platform health, including:
    • Operating system/SQL/Internet Information Services health
    • Services health
    • Disk capacity
    • Basic performance counters
  • Application monitoring. Monitor performance and reliability for all critical aspects of BCWeb application functionality.
  • Key external services monitoring. Monitor performance and availability of connections with external services, including:
    • SAP
    • AD DS

When considering monitoring methods for BCWeb, Microsoft IT identified that the Windows Azure platform could not natively support the level of monitoring that BCWeb would require. Additionally, the on-premises components outside of Windows Azure would need monitoring. Thus, Microsoft IT required a monitoring solution that would allow the BCWeb support team to accurately assess the application's condition based on all of its various components.

Leveraging System Center Operations Manager 2007 R2 to Consolidate Monitoring and Support
Microsoft IT decided to use System Center Operations Manager 2007 R2 to monitor the new version of BCWeb. Microsoft IT chose System Center Operations Manager for the following reasons:
  • Monitoring could be centralized into one console, and consolidated to include Windows Azure and on-premises components.
  • BCWeb used System Center Operations Manager–compliant instrumentation (Windows Events and Performance Counters).
  • System Center Operations Manager was already in use in the environment, thus no significant time or capital investment was required.
  • Using System Center Operations Manager limited the amount of custom coding required.
  • System Center Operations Manager already had available a Windows Azure Management Pack that provided monitoring solutions for some of the BCWeb key components.
Using, Extending, and Creating System Center Operations Manager Functionality

Microsoft IT identified four key BCWeb-monitoring categories:

  • End-user perspective and SLA requirements
  • Web and Worker role performance
  • Application health
  • SQL Azure performance and state

Microsoft IT approached each of these categories differently using System Center Operations Manager.

End-User Perspective and SLA Requirements

Microsoft IT used the System Center Operations Manager Web Application template to enable scripted website navigation that mimicked typical end-user interactions with the different BCWeb UI components. This enabled the team to monitor true availability of the web applications and implement alerts. It also enabled Microsoft IT to collect historical availability data to compare with established SLAs.

Web and Worker Role Performance

The development team discovered that the built-in Windows Azure Diagnostics feature could provide a large amount of diagnostic information regarding the state of the Windows Azure Compute roles—the Web and Worker roles in the case of BCWeb. When the development team combined System Center Operations Manager with the Windows Azure Management Pack, they were able to access a large number of performance counters and events that contained the information they needed about the Web and Worker roles. By building trending and alerting functionality, the team was able to monitor the health of the Compute roles. The team used the Windows Azure Management Pack to:

  • Discover each Windows Azure application.
  • Provide status of each Windows Azure role instance.
  • Collect and monitor Windows Azure performance information.
  • Collect and monitor Windows events.
  • Collect and monitor the Microsoft .NET Framework trace messages from each Windows Azure role instance.
  • Selectively delete performance, event, and .NET Framework trace data from the Windows Azure storage account to manage storage space.
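A rough sketch of the diagnostics plumbing behind those bullets – turning on a performance counter in a role's OnStart and shipping it to storage on a schedule – follows; the counter, intervals, and class name are illustrative, not BCWeb's actual configuration:

// Sketch: configuring Windows Azure Diagnostics (SDK 1.x era) so performance
// counter data lands in the WADPerformanceCountersTable, where the Windows
// Azure Management Pack can pick it up.
// Assumes: using System; using Microsoft.WindowsAzure.Diagnostics;
//          using Microsoft.WindowsAzure.ServiceRuntime;
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        // The standard diagnostics connection string setting names the storage
        // account that receives the collected data.
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}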
Application Health

The overall health of BCWeb depends on several components, including Windows Azure. To monitor the Windows Azure part of BCWeb, and address some of the aspects of the BCWeb application that were not natively monitored by the Windows Azure Management Pack—especially monitoring on-premises components—the development team extended the capabilities of the Windows Azure Management Pack to monitor key aspects of application health. Specifically, they created performance counters that monitored application-specific items such as requests to ASP.NET Application objects and .NET Framework CLR exceptions. The development team also extended the Windows Azure management pack to monitor business logic exception events when accessing on-premises components.

For on-premises components, the development team also leveraged built-in .NET Framework components to monitor application health through performance and historical trends. For example, the team planned to use the Stopwatch class to time calls to the SAP web service, and then represent the results as a performance counter that System Center Operations Manager could monitor.
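
A minimal sketch of that approach (the counter category, counter name, and SAP proxy call below are invented for illustration; the case study doesn't publish the actual names) could look like this:

using System;
using System.Diagnostics;

public static class SapCallMonitor
{
    // Assumed custom counter; the category must be created beforehand,
    // for example with PerformanceCounterCategory.Create at role start-up.
    private static readonly PerformanceCounter SapCallDuration =
        new PerformanceCounter("BCWeb Custom", "SAP Call Duration (ms)", readOnly: false);

    public static TResult TimeSapCall<TResult>(Func<TResult> sapWebServiceCall)
    {
        Stopwatch stopwatch = Stopwatch.StartNew();
        try
        {
            return sapWebServiceCall();
        }
        finally
        {
            stopwatch.Stop();
            // Publish the elapsed time so that Operations Manager can collect it
            // like any other performance counter.
            SapCallDuration.RawValue = stopwatch.ElapsedMilliseconds;
        }
    }
}

A call such as SapCallMonitor.TimeSapCall(() => sapClient.GetOrderStatus(request)), where sapClient stands in for whatever SAP proxy the application actually uses, would then surface each call's duration to the monitoring pipeline.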

SQL Azure Performance and State

One large deficiency in the available solutions through System Center Operations Manager was the lack of any monitoring capability for SQL Azure.

In the previous version of BCWeb, a large portion of system monitoring used tools native to SQL Server. Unfortunately, three key legacy BCWeb tools were not available on SQL Azure:

[The original post listed these tools in an image that is not reproduced here.]

To fill the gap, the development team relied on T-SQL scripts. The following script, for example, returned the amount of space reserved by a SQL Azure database, in megabytes.

SELECT SUM(reserved_page_count)*8.0/1024 FROM sys.dm_db_partition_stats;
GO

The development team also used the following T-SQL script that provided the number of connections to a SQL Azure database.

SELECT Count(*) FROM sys.dm_exec_sessions

The result of this script was a performance counter that System Center Operations Manager monitored every five minutes.
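
One plausible way to turn that query into a Windows performance counter on a five-minute schedule (the category, counter, and connection-string handling here are assumptions for illustration, not the case study's actual implementation) is a small poller hosted in a Worker role:

using System;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Threading;

public class SqlAzureSessionPoller
{
    // Assumed custom counter; the category must already exist on the instance.
    private readonly PerformanceCounter _sessionCount =
        new PerformanceCounter("BCWeb SQL Azure", "Active Sessions", readOnly: false);
    private readonly string _connectionString;
    private Timer _timer;

    public SqlAzureSessionPoller(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Start()
    {
        // Sample every five minutes, matching the monitoring interval described above.
        _timer = new Timer(_ => Sample(), null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
    }

    private void Sample()
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM sys.dm_exec_sessions", connection))
        {
            connection.Open();
            _sessionCount.RawValue = (int)command.ExecuteScalar();
        }
    }
}

A counter populated this way can then be added to the role's Windows Azure Diagnostics configuration so that it flows to storage and appears in Operations Manager alongside the built-in counters.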

Additionally, the development team examined the application code for references to DMV information that was not available in SQL Azure, and then refactored the code to remove the references and retrieve the information from alternate DMV locations in SQL Azure.

Benefits

Microsoft IT used System Center Operations Manager 2007 R2, the Windows Azure Management Pack for System Center Operations Manager, and custom-designed performance counters within Windows Azure to realize the following benefits:

  • A consolidated management and support environment within System Center Operations Manager 2007 R2
  • Accurate and timely monitoring and alerting for BCWeb critical components
  • A large number of reusable monitoring components that can be leveraged in future Windows Azure applications
Best Practices

Microsoft IT established the following best practices when implementing Windows Azure monitoring:

  • Use System Center Operations Manager 2007 R2 and the Windows Azure Management Pack for consolidated and centralized application monitoring.
  • Extend or create management packs for non-Azure application components.
  • Create custom monitoring components for SQL Azure.
  • Use Worker roles to host custom code for application monitoring.
  • Develop applications with the most recent version of the Windows Azure Software Development Kit (SDK) to implement the newest monitoring features.
Conclusion

By using System Center Operations Manager 2007 R2, the Windows Azure Management Pack for System Center Operations Manager, and custom-designed management pack components, Microsoft IT was able to provide a robust and centralized monitoring environment for BCWeb.

The solution included monitoring of the BCWeb Windows Azure-based components, and the critical aspects of on-premises components that were not native to Windows Azure. Microsoft IT also captured numerous best practices that will be used in future distributed application migrations.

Products & Technologies
  • Windows Azure Web role
  • Windows Azure Worker role
  • Windows Azure AppFabric
  • SQL Azure
  • Microsoft SQL Server 2008 R2
  • Microsoft Visual Studio® 2010
  • Windows Azure SDK 1.4
  • System Center Operations Manager 2007 R2
  • Windows Azure Management Pack for Operations Manager

Ernest Mueller (@ernestmueller) posted Addressing the IT Skeptic’s View on DevOps on 8/2/2011:

A recent blog post on DevOps by the IT Skeptic entitled DevOps and traditional ITSM – why DevOps won’t change the world anytime soon got the community a’frothing. And sure, the article is a little simmered in anti-agile hate speech (apparently the Agilistas and cloud hypesters and cowboys are behind the whole DevOps thing and are leering at his wife and daughter and dropping his property values to boot) but I believe his critiques are in general very perceptive and that they are areas we, the DevOps movement, should work on.

Go read the article – it’s really long so I won’t sum the whole thing up here.

Here are the most germane critiques and what we need to do about them. He also has some poor, irrelevant, or misguided critiques, but why would I waste time on those? Let’s take action on the good stuff that can make DevOps better!

Lack of a coherent definition

This is a very good point. I went to the first meeting of an Austin DevOps SIG lately and was treated to the usual debate about “the definition of DevOps” and all the varied viewpoints going into that. We need to emerge more of a structured definition that either includes and organizes or excludes the various memetic threads. It’s been done with Agile, and we can do it too. My imperfect definition of DevOps on this site tries to clarify this by showing there are different levels (principles, methods, and practices) that different thoughts about DevOps slot into.

Worry about cowboys

This is a valid concern, and one I share. Here at NI, back in the day programmers had production passwords, and they got taken away for real good reasons. “Oh, let’s just give the programmers pagers and the root password” is not a responsible interpretation of DevOps but it’s one I’ve heard bandied about; it’s based on a false belief that as long as you have “really smart” developers they’ll never jack everything up.

Real DevOps shops that are uptaking practices that could be risky, like continuous deployment, are doing it with extreme levels of safeguard put into place (automated testing, etc.). This is similar to the overall problem in agile – some people say “agile? Great! I’ll code at random,” whereas really you need to have a very high percentage of unit test coverage. And sure, when you confront people with this they say “Oh, sure, you need that” but there is very little constructive discussion or tooling around it. How exactly do I build a good systems + app code integration/smoke test rig? “Uh you could write a bunch of code hooked to Hudson…” This should be one of the most discussed and best understood parts of the chain, not one of the least, to do DevOps responsibly.

We’re writing our own framework for this right now – James is doing it in Ruby, it’s called Sparta, and devs (and system folks) provide test chunks that the framework runs and times in an automated fashion. It’s not a well-solved problem (and the big-dollar products that claim to do test automation are nightmares and not really automated in the “devs easily contribute tests to integrate into a continuous deploy” sense).

Team size

Working at a large corporation, I also share his concern about people’s cunning DevOps schemes that don’t scale past a 12 person company. “We’ll just hire 7 of the best and brightest and they’ll do everything, and be all crossfunctional, and write code and test and do systems and ops and write UIs and everything!” is only a legit plan for about 10 little hot VC funded Web 2.0 companies out there. The rest of us have to scale, and doing things right means some specialization and risks siloization.

For example, performance testing. When we had all our developers do their own performance testing, the limit of the sophistication of those tests was “I’ll run 1000 hits against it and time how long it takes to finish. There, 4 seconds. Done, that’s my performance testing!” The only people who think Ops, QA, etc. are such minor skill sets that someone can just do them all are people who are frankly ignorant of those fields. Oh, P.S. The NoOps guys fall into this category, please don’t link them to DevOps.

We have struggled with this. We’ve had to work out what testing our devs do versus how closely we align with external test teams. Same with security, performance, etc. The answer is not to completely generalize or completely silo – Yahoo! had a great model with their performance team, where there is a central team of super-experts but there are also embedded folks on each product team.

Hiring people

Very related to the previous point – again, unless you’re one of the 10 hottest Web 2.0 plays and you can really get the best of the best, you need to staff your organization with random folks who graduated from UT with a B average. You have to have and manage tiers as well as silos – some folks are only ready to be “level 1 support” and aren’t going to be reading some dev’s Java code.

Traditional organizations and those following ITIL very closely can definitely create structures that promote bad silos and bad tiering. But just assuming everyone will be of the same (high) skill level and be able to know everything is a fallacy that is easy to fall into, since it’s those sort of elite individuals who are the leading uptakers of DevOps. Maybe Gene Kim’s book he’s working on (“Visible DevOps” or similar) will help with that.

Tools fixation

Definitely an issue. An enhanced focus on automation is valuable. Too many ops shops still just do the same crap by hand day after day, and should be challenged to automate and use tools. But a lot of the DevOps discussions do become “cool tool litanies” and that’s cart before the horse. In my terminology, you don’t want to drive the principles out of the practices and methods – tooling is great but it should serve the goals.

We had that problem on our team. I had to talk to our Ops team and say “Hey, why are we doing all these tool implementations? What overall goal are they serving?” Tools for the sake of tools are worse than pointless.

Process

It is true that with agile and with DevOps that some folks are using it as an excuse to toss out process. It should simply be a different kind of process! And you need to take into account all the stuff that should be in there.

A great example is Michael Howard et al. at Microsoft with their Security Development Lifecycle. The first version of it was waterfall. But now they’ve revamped it to have an agile security development lifecycle, so you know when to do your threat modeling etc.

Build instead of buy

Well, there are definitely some open source zealots involved with most movements that have any sysadmins involved. We would like to buy instead of build, but the existing tools tend to either not solve today’s problems or have poor ROI.

In IT, we implemented some “ITIL compliant” HP tools for problem tracking, service desk, and software deployment. They suck, and are very rigid, and cost a lot of money, and required as much if not more implementation time than writing something from scratch that actually addressed our specific requirements. And in general that’s been everyone’s experience. The Ops world has learned to fear the HP/IBM/CA/etc systems management suites because it’s just one of those niches that is expensive and bad (like medical or legal software).

But having said that, we buy when we can! Splunk gave us a lot more than cobbling together our own open source thing. Cloudkick did too. Sure, we tend to buy SaaS a lot more than on prem software now because of the velocity that gives us, but I agree that you need to analyze the hidden costs of building as part of a build/buy – you just need to also see the hidden costs and compromised benefits of a buy.

Risk Control

This simply goes back to the cowboy concern. It’s clearly shown that if you structure your process correctly, with the right testing and signoff gates, then agile/devops/rapid deploys are less risky.

We came to this conclusion independently as well. In IT, we ran (still do) these Web go lives once a month. Our Web site consists of 200+ applications and we have 70 or so programmers, 7 Web ops, a whole Infrastructure department, a host of third party stuff (Oracle and many more)… Every release plan was 100 lines long and the process of planning them and executing on them was horrific. The system gets complex enough, both technically and organizationally, that rollbacks + dependencies + whatnot simply turn into paralysis, and you have to roll stuff out to make money. When the IT apps director suggested “This is too painful – we should just do these quarterly instead, and tell the business they get to wait 2 more months to make their money,” the light went on in my mind. Slower and more rigorous is actually worse. It’s not more efficient to put all the product you’re shipping for the month onto a huge ass warehouse on the back of a giant truck and drive it around doing deliveries, either; this should be obvious in retrospect. Distribution is a form of risk management. “All the eggs in one big basket that we’ll do all at one time” is the antithesis of that.

The Future

We started DevOps here at NI from the operations guys. We’d been struggling for years to get the programmers to take production responsibility for their apps. We had struggled to get them access to their own logs, do their own deploys (to dev and test), let business users input Apache redirects into a Web UI rather than have us do it… We developed a whole process, the Systems Development Framework, that we used to engage with dev teams and make sure all the performance, reliability, security, manageability, provisioning, etc. stuff was getting taken care of… But it just wasn’t as successful as we felt like it could be. Once we realized that a more integrated model was possible, we saw that success was actually an option. Ask most sysadmin shops if they think success is actually a possible outcome of their work, and you’ll get a lot of hedging, “well, success is not getting ruined today” kinds of responses.

By combining ops and devs onto one team, by embedding ops expertise onto other dev teams, by moving to using the same tools and tracking systems between devs and ops, and by striving for profound automation and self service, we’ve achieved a super high level of throughput within a large organization. We have challenges (mostly when management decides to totally change track on a product, sigh) but from having done it both ways – OMG it’s a lot better. Everything has challenges and risks and there definitely needs to be some “big boy” compatible thinking on DevOps – but it’s like anything else, those who adopt early will reap the rewards and get competitive advantage over the others. And that’s why we’re all in. We could wait till it’s all worked out and drool-proof, but that’s a better fit for companies that don’t actually have to produce/achieve any more (government orgs, people with more money than God like oil and insurance…).


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Simon May described Why a hybrid approach will help you get to the Cloud in a 9/2/2011 post to the BusinessCloud9 blog:

imageFirst off I’ll say that Cloud purists aren’t going to like what they’re about to read but I think it’s a far more representative view of the real world, and one that can enable more businesses than the overly simplistic idea of ‘everything in the Cloud’. Cloud-based services, be they platforms or finished services, are excellent solutions for providing scale and flexibility and there’s no doubt that they are changing and challenging the world.

Disruption is great when it challenges and leads to innovation, but applying those innovations to all businesses can be really hard. One of the best approaches in my opinion is to create a bridge – a ‘hybrid cloud’ – that allows the benefits of that disruption to be embraced without throwing out everything that’s gone before. For those who aren’t familiar with the term, hybrid Cloud refers to the connecting of two distinct clouds.

imageThe NIST definition of Cloud Computing provides a little clarity: the Cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., Cloud bursting for load-balancing between clouds).

There are a number of compelling reasons to connect clouds together, but broadly I’d categorise them as:

  • Privacy & Security
  • Localisation
  • Integration
  • Management & Control.

All these are useful in as much as they help to reach the benefits that public clouds provide more rapidly; namely: agility, flexibility and cost savings. We’ll take a look at each of these reasons to go hybrid in a little more detail to fully understand what’s involved and why they’re good ways to get the benefits of cloud.

Privacy & security is rightly a primary concern of anyone looking at a public Cloud solution, and common questions like “will my data be safe” and “is the data stored in x country” leap to mind (probably even before you got past the words Privacy and Security).

One of the highest priorities in moving to the public Cloud is to understand the data that you place there. That means ensuring you protect the data that needs protecting and, of course, accepting the implication that some data you own doesn’t need to be guarded as tightly. This leads to data stratification. There will be some data, especially in tightly regulated industries, that you cannot place into a public setting, and for that reason a hybrid approach becomes the enabler.

imageA simple example is using a public cloud, such as Windows Azure, to work on anonymised personal data (where names are replaced by an ID number for example), and the only link between an ID number and names is stored in a database held on your premises. This is already a common technique and has been used with outsourced data processing for years. [Emphasis added.]
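
As a toy sketch of that pattern (all class, property, and method names here are invented; this is not code from any particular product), the on-premises side could keep the only ID-to-name mapping and hand nothing but anonymised records to the cloud:

using System;
using System.Collections.Generic;

// The mapping between real names and opaque IDs never leaves the local network;
// only AnonymisedCustomer records are sent to the public cloud for processing.
public class CustomerAnonymiser
{
    private readonly IDictionary<Guid, string> _onPremisesNameStore = new Dictionary<Guid, string>();

    public AnonymisedCustomer Anonymise(string fullName, decimal accountBalance)
    {
        Guid id = Guid.NewGuid();
        _onPremisesNameStore[id] = fullName;   // stays on premises
        return new AnonymisedCustomer          // safe to process in the public cloud
        {
            CustomerId = id,
            AccountBalance = accountBalance
        };
    }

    public string Resolve(Guid customerId)
    {
        return _onPremisesNameStore[customerId];   // re-identification happens only locally
    }
}

public class AnonymisedCustomer
{
    public Guid CustomerId { get; set; }
    public decimal AccountBalance { get; set; }
}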

Localisation is often a blocker with public Cloud solutions and is often highlighted when people think of broadband speeds in the UK. I regularly get asked about how an organisation can adopt cloud-based productivity like Office 365 when they have 200 people working out of an office in a remote area of the country with limited connectivity. What’s the bandwidth requirement?

I usually counter with the suggestion that it’s an architectural decision. Office 365 includes, at its very core, the ability to integrate with existing on premises email solutions such as Microsoft Exchange. Given the above scenario it may well be best to build a solution whereby the 200 people in the office remain on a local Microsoft Exchange server whilst the same company’s mobile sales team move to a Cloud solution – Office 365. Everyone is still connected to the same integrated infrastructure, they all see each other in the same address book, they all email each other just the same and even place telephone or video calls to each other with one click but the majority of people only check their mail on the local server.

The same is true of data in a large database: if those 200 people were all crunching the numbers in a large database and pulling it over their “limited connectivity” constantly, things would soon slow to a halt. Technology to take local copies of the data, even synchronised local copies, helps to reduce connectivity impacts and should be a consideration when looking at Cloud services.

Integration with existing systems is massively important to adoption and acceptance of services that operate from the Cloud – after all why would you want to make it harder to access a service? Authentication is one of the most simple but effective forms of integration that is most often enabled by a hybrid approach. Almost everyone who signs into their computer in a corporate environment signs into something called a “directory”, normally Microsoft Active Directory, and then when they need to access resources on their network they are just passed through, without the need to enter a username and password again and again.

If they are prompted to enter a password repeatedly it leads to a bad experience: first they need to take an extra step – enter the password - and secondly that password is yet one more for them to forget, leading to simple and insecure password use, which in turn leads to risk and helpdesk calls.

There are simple ways with free services, like Active Directory Federation Services (ADFS), to “glue” things together to make them seamless. To the end user everything should feel like a single, designed system.

Management and Control are two aspects that IT people take for granted in traditional on premises environments but when you look at many Cloud services today you see that there are one or two holes. Large organisations typically have some type of service desk and change management function and they typically try to follow processes such as those prescribed by ITIL. They run these functions to provide a safety net to the organisation, a way of knowing what’s happening and what’s changing and yet most Cloud services don’t provide much of a view in.

Operations frameworks like ITIL are also all about simplifying management as much as possible and, again, many Cloud solutions don’t integrate that well with what’s currently being used to manage the infrastructure. To my mind any Cloud should be manageable, be it private or public; moreover, I’d prefer it to be manageable through a common system like System Center.

The reason for commonality is simple: it’s cheaper to run everything on a common management infrastructure, not only in terms of running the system but a single pane of glass lets me spot issues more quickly.

So a hybrid approach where I manage both my private and public clouds in a common way is a bit of a no-brainer to me. Smaller organisations might not see the need for deep levels of management and control and may only see cost and complexity; here they should be looking for new ways that Cloud simplifies management, such as surfacing detail about the health of their PC estate through a Cloud dashboard.

Hopefully that explains a little about why I think a hybrid approach to Cloud is the better one to take. Ultimately I don’t see that the world is ready to go 100% Cloud because it just doesn’t provide enough flexibility for those of us in the real world and presents an ultimatum that most people aren’t happy with.


Scott Bekker (@scottbekker) asserted “There has been activity among Microsoft's appliance partners revolving around Azure -- but you wouldn't know it from this year's WPC, where the product played a backstage role” as a deck for his Windows Azure Appliance: Out of the Spotlight but Alive article of 9/1/2011 for the Redmond Channel Partner blog:

The literal centerpiece of the Microsoft Worldwide Partner Conference (WPC) last year in Washington, D.C., was the Windows Azure Platform Appliance. The tractor-trailer-sized prototype dominated the 2010 WPC show floor.

imageA year later, the Windows Azure appliance played a strictly backstage role at the 2011 WPC in Los Angeles this summer. While it was absent, however, at least it was mentioned.

Microsoft and its appliance partners -- HP, Dell, Fujitsu and eBay -- had been mostly tight-lipped about the appliance in the year between the conferences, even though executives had said last July that services based on it would start becoming available in late 2010.

Last year, Microsoft and OEM executives said the appliances would initially consist of Windows Azure, SQL Azure and the Microsoft-specified configuration of nearly 900 servers, along with storage and networking gear. Microsoft would remotely manage the appliances and provide platform software updates.

With little public discussion of the appliances in the interim, and with two of the boxes' key public advocates -- Bob Muglia and Ray Ozzie -- gone from Microsoft, the future of the devices was very much in doubt. Meanwhile, Microsoft has recently ramped up emphasis on the related concept of private cloud, which is more a software play and more in line with Redmond's traditional strengths.

While no appliances were on display at the Dell, Fujitsu, HP or Microsoft booths this year, Microsoft did confirm that work is continuing on the joint projects.

In June, Microsoft and Fujitsu announced that the Fujitsu Global Cloud Platform (FGCP/A5) service was set to launch in August. The service runs from a Windows Azure Platform Appliance in Fujitsu's datacenter in Japan.

FGCP/A5 offers application development frameworks, such as the Microsoft .NET Framework, Java and PHP, and data-storage capabilities. The services are consistent with the Windows Azure platform provided by Microsoft, but they're hosted independently of Microsoft datacenters and services. The main benefit of the approach, according to a Microsoft FAQ, is that "it provides the benefits of the Windows Azure platform with greater physical control, geographic proximity, regulatory compliance and data sovereignty."

In a blog post, the Microsoft Server & Tools Business team detailed progress by HP and eBay. According to the blog, HP has an operational appliance at its datacenter and plans to make services available to customers later this year, and eBay is in the early stages of implementing on the Windows Azure Platform Appliance.

Missing from the blog statement was any mention of Dell. But in an interview, Microsoft Corporate Vice President of the Worldwide Partner Group Jon Roskill confirmed that Dell was still working on an appliance.

Progress toward cloud appliances all around the globe isn't happening as fast as it seemed like it might in July 2010, but the concept is moving forward.

Scott is editor in chief of Redmond Channel Partner magazine.

Full disclosure: I’m a contributing editor to Visual Studio Magazine, a sister publication to 1105 Media’s Redmond Channel Partner.


<Return to section navigation list>

Cloud Security and Governance


<Return to section navigation list>

Cloud Computing Events

Mary Jo Foley claimed “Microsoft’s Build event for Windows developers is just 10 days away. Here are a few tidbits about agenda changes, an unofficial preconference and more for those following the twists and turns” in a deck for a Microsoft Build Windows 8 confab: What's the latest? post to her All About Microsoft blog for ZDNet:

imageThere’s just over a week until Microsoft’s Build conference in Anaheim, Calif. — the place where “all will be revealed” around the Windows 8 and Windows Azure development stories.

There’s still no official word as to any names or details about the sessions planned for the four-day event. But as of September 2, thanks to a slightly tweaked agenda, we now know that there will be two keynotes (Days 1 and 2), a number of “big picture sessions” (Day 1) and two Microsoft-sponsored parties (Day 1 and 2).

imageWe also know that one of the planned “local” Build events slated for France has been cancelled. Besides France, Microsoft had scheduled a number of day-long Build events in Belgium, Denmark, Dubai, Greece, Ireland, Italy, the Netherlands, Poland, Portugal, Spain and Sweden, according to Microsoft Community Contributor Francisco Martin Garcia.

On August 30, a Microsoft France representative sent the following note (translated from the original French) to those who had signed up to attend the French Build event:

Hello:

You are registered for the local event entitled “//Build/”, organized by Microsoft in France. This local event has been cancelled. We apologize for this change of program and will keep you informed of upcoming events.

We invite you to consult the Windows 8 blog (http://blogs.msdn.com/b/b8_fr/) if you want to be kept informed about this product in real time. Information about the Build conference of September 13, 2011 in Anaheim, California is published at http://www.buildwindows.com/.

To be regularly informed of events and Microsoft news, feel free to consult our MSDN and TechNet sites frequently and subscribe to our newsletters.

Thank you for your interest in Microsoft.

I asked a Windows spokesperson in the U.S. about the cancellation and whether the other planned Build events were still on. I didn’t receive a direct answer to either question. Instead, I was sent the following statement via e-mail:

“Microsoft is extending its newest developer event BUILD globally in multiple ways including live streamed keynotes, having a major international presence in Anaheim and engaging directly with developers in their home country to learn about the future of Windows…”

Anyone registered to attend any of the other local Build events heard about any other cancellations from Microsoft?

Microsoft announced on August 1 that Build was sold out, but that it plans to live stream the keynotes from the show and to make webcasts of the sessions publicly available the day after they happen.

Microsoft also cancelled the planned day of pre-conferences for Build. One Microsoft Regional Director, Billy Hollis, is running his own “unofficial” pre-conference session on creating user experiences on September 12.

In addition to learning about the Windows 8 UI and “new app model” at Build, Microsoft developers with whom I’ve spoken are hoping to hear about the next version of Visual Studio (VS 2012), Microsoft’s plans for Blend (and for HTML5 development tools in general) and how Microsoft plans to support Silverlight and Windows Presentation Foundation (WPF) in the Windows 8 timeframe.

Microsoft is expected to provide Build attendees with the Windows 8 bits in test form, and may even give paying attendees some tablet hardware to test them on.

I have press credentials, so probably won’t get the rumored Win8 tablet. I’ll be reporting on cloud-related keynotes and sessions for TechTarget’s SearchCloudComputing.com.

Eric Nelson (@ericnel) posted Welcome to Six Weeks of Windows Azure to a new UK blog on 9/1/2011:

imageGreat of you to pop by. We are not ready yet to share details of what exactly we are doing (as we haven’t worked them out yet) – but if all goes to plan we should be ready to announce in October.

But in essence, we hope to be able to significantly help UK software product authors (ISVs, startups etc) build superb Windows Azure applications during the early part of 2012.

imageIn the meantime we would suggest you keep an eye on Eric’s blog and twitter and check out the steps in Getting started with Windows Azure.

Eric and David
Microsoft UK

And… a swift poll… [see site for live version:]



<Return to section navigation list>

Other Cloud Computing Platforms and Services

James Urquhart (@jamesurquhart) continued his series with Can any cloud catch Amazon Web Services? (part 2) with a 9/2/2011 post to his Wisdom of the Clouds blog for C|Net news:

imageIn part 1 of this two-part series, I explained why I thought Amazon Web Services is the leader in public infrastructure as a service (IaaS), and why no other company really seems poised to catch it at this point. There is no doubt in my mind that Amazon's developer-centric approach to cloud sets it far apart--and ahead of--the hosting companies trying to compete in that market.

imageThat said, Amazon's market position is not invulnerable, and there are several ways that it can be beaten--or at least challenged. What it will take is a different approach to the public-cloud market equally as disruptive as Amazon's. In this post, I'd like to go over a few that I've thought about over the last three years. I'll present them in the order of least to most likely to succeed.

Complete head-to-head with Amazon
The most obvious way to take market share from Amazon would be to address the developer market as well as or better than it does. There are so many elements to this that the barrier to entry would be very, very high, but in theory it could be done.

Here's what you'd have to do:

  • Build/buy/already own several large data centers worldwide.
  • Hire/acquire a top-notch team of software developers and systems designers who can quickly build extremely large-scale computing services.
  • Figure out how to beat Amazon's price or feature set.
  • Get the e-commerce part of cloud service sales right.
  • Attract an ecosystem of third-party management tools, development tools, and services.

In my humble opinion, none of the hosting companies has the capital, skills, and patience to do this on their own. The telecommunications companies might be able to pull it off, but they'd have to acquire and nurture the skills they don't already have. (Interestingly, Verizon told me that its acquisition of CloudSwitch was about acquiring software skills.)

Another way this could be done, however, is if a heavily capitalized investor (say, a Warren Buffett) were to determine that building such a business (likely on top of an existing business) was worth the long-term return on a significant upfront investment. They'd have to be ruthless in pursuing all of the aspects I listed above, however, or they would fail.

Change the rules of the game
Another way to go about competing with Amazon is to introduce new capabilities that disrupt Amazon's "services as a platform" approach. There are two companies that I think exemplify this approach today, and both are taking on Amazon with enhanced platform as a service (PaaS) models:

  • At VMworld this week, virtualization leader VMware spent a lot of time talking about its Cloud Foundry platform. Like other platform as a service (PaaS) software, Cloud Foundry allows developers to deploy scalable applications without having to directly manage the virtual machines, operating systems, or middleware in which the code runs. Just deploy your code, and let Cloud Foundry manage the cloud infrastructure needed to scale the applications appropriately.

    imageVMware's plans for Cloud Foundry don't stop at attracting a developer community to VMware's revenue stream; VMware is one of many companies that understands that cloud enables a new disruption in software development and deployment. As I noted in part 1, developing PaaS without restrictions is challenging, but VMware believes building an open ecosystem (in terms of supported languages, services and clouds) enables flexibility for developers.

  • Another company that has been looking to change the game for some time is Microsoft. Azure is developer-centric, and it took a "PaaS-first" approach to building cloud applications. In Microsoft's case, however, I think the only ecosystem it has attracted so far is the existing .NET ecosystem--probably in part due to the reliance of Azure on .NET architectures and frameworks. [Emphasis added.]

    That may change, however, as Microsoft supports an increasing number of languages, supports an IaaS service within Azure, and builds an ecosystem of third-party services that run on or with the Azure service. It has a huge uphill climb in front of it, however, if it wishes to attract those committed to open-source software practices. Amazon doesn't have that problem.

Join an open cloud ecosystem
One of the metaphors I like to lovingly use with respect to Amazon is the following: AWS is the AOL of cloud computing. To be clear, today they are AOL circa 1985--they have most of the customers and the most interesting "content"--and they have a huge profitable road ahead of them for some time.

However, the nature of cloud services lends itself to the appearance of an open, multi-vendor ecosystem over time, and when that ecosystem reaches maturity, Amazon will have a decision to make. Does it dig a wider moat, or does it take action to join and nurture the open ecosystem? AOL failed to embrace the Internet, attempting instead to use its content as a carrot to attract and keep membership, a strategy which eventually knocked it off the leadership perch.

imageThis is where open-source cloud infrastructure and cloud service projects become extremely important. Projects such as OpenStack and Red Hat's Aeolus promise to give some baseline capabilities to the entire cloud marketplace that will serve as the launching point for new classes of services and technologies.

This, in turn, will attract both entrepreneurs, as they can create services that the users of those technologies can easily acquire and deploy, and service providers, as there is a long tail of service offerings for the platform that they can choose from to differentiate themselves from their competition--without requiring proprietary systems to operate the service.

My bet is on an open ecosystem to even the playing field with Amazon in the medium to long term. I also believe that Amazon might remain the market share leader if it were to join such an ecosystem voluntarily. However, the economics of choice are just too powerful for a single vendor to counter an open ecosystem unless it controls the method of distribution of its product or service. Much like AOL, Amazon doesn't control the Internet, so that doesn't apply here.

Not everyone agrees, by the way, that OpenStack is a sure thing for generating an ecosystem in the face of AWS. Adrian Cockcroft, who is in charge of Netflix's cloud architecture, is unimpressed with what he's heard about OpenStack so far. He dug a bit deeper into why he doesn't see it as an alternative to AWS at this time. The short, short version: OpenStack's community may be getting too large, with too much focus on building the project itself, and not enough attention on building new services on top of it.

Regardless, time has a way of chiseling away at the dominance of any one vendor in any given market. Whether an individual entrepreneur or an entire market figures out how to capture market share remains to be seen, but it will happen. The question is, is any cloud ready to begin that process today?

Read more: http://news.cnet.com/8301-19413_3-20100535-240/can-any-cloud-catch-amazon-web-services-part-2/#ixzz1WouMxdKs

Graphic credit: Dave Crocker/Geograph.


Nati Shalom described the need for a Big Data Application Platform in a 9/2/2011 post:

It's time to think about the architecture and application platforms surrounding "Big Data" databases. Big Data is often centered around new database technologies, mostly from the emerging NoSQL world. The main challenge that these databases solve is how to handle massive amounts of data at a reasonable cost and without poor performance. Distributed databases emerged to address this challenge, and today we're seeing a high adoption rate and quite impressive success stories that indicate the speed at which this market evolves.

The need for a Big Data Application Platform

Application platforms provide a framework for making the development of applications simpler. They do this by carving out the generic parts of applications such as security, scalability, and reliability (which are attributes of a 'good' application) from the parts of the applications that are specific to our business domain.

Most of the existing application platforms, such as Java EE and Ruby on Rails, were designed with centralized relational databases in mind. Clearly, that model doesn’t fit well in the Big Data world simply because it wasn’t designed to deal with massive amounts of data in the first place. In addition, frameworks such as Hadoop are considered too complex, as noted by Forrester Research VP/Research Director Mike Gilpin in his post, "Big Data" technology: getting hotter, but still too hard:

"Big Data" also matters to application developers - at least, to those who are building applications in domains where "Big Data" is relevant. These include smart grid, marketing automation, clinical care, fraud detection and avoidance, criminal justice systems, cyber-security, and intelligence.

One "big question" about "Big Data": What’s the right development model? Virtually everyone who comments on this issue points out that today’s models, such as those used with Hadoop, are too complex for most developers. It requires a special class of developers to understand how to break their problem down into the components necessary for treatment by a distributed architecture like Hadoop. For this model to take off, we need simpler models that are more accessible to a wider range of developers - while retaining all the power of these special platforms.

Other existing models for handling Big Data such as Data Warehouse don’t cut it either, as noted in Dan Woods' post on Forbes, Big Data Requires a Big, New Architecture:

...to take maximum advantage of big data, IT is going to have to press the re-start button on its architecture for acquiring and understanding information. IT will need to construct a new way of capturing, organizing and analyzing data, because big data stands no chance of being useful if people attempt to process it using the traditional mechanisms of business intelligence, such as a data warehouses and traditional data-analysis techniques.

To effectively write Big Data applications, we need an Application Platform that would put together the different patterns and tools that are used by pioneers in that space such as Google, Yahoo, and Facebook in one framework and make them simple enough so that any organization could make use of them without the need to go through huge investment.

Here's my personal view on what that platform could look like, based on my experience covering the NoSQL space for a while now and through my experience with GigaSpaces.

Characteristics of Big Data Application Platform

As with any Application Platform, a Big Data application platform needs to support all the functionality expected from any application platform such as scalability, availability, security, etc.

Big Data Application platforms are unique in the sense that they need to be able to handle massive amounts of data and therefore need to come with built-in support for things like Map/Reduce, integration with external NoSQL databases, parallel processing, and data distribution services. On top of that, they should make the use of those new patterns simple from a development perspective.

Below is a more concrete list of the specific characteristics and features that define what a Big Data Application Platform ought to be. I've tried to point to the specific Java EE equivalent API and how it would need to be extended to support Big Data applications.

Support Batch and Real Time analytics

Most of the existing application platforms were designed for handling transactional web applications and have little support for business analytics applications. Hadoop has become the de facto standard for handling Batch processing; Real Time analytics, however, is done through other means outside of the Hadoop framework, mostly through an event processing framework, as I outlined in detail in my previous post Real Time Analytics for Big Data: An Alternative Approach.

A Big Data Application Platform would need to make Big Data application development closer to mainstream development by providing a built-in stack that includes integration with Big Data databases from the NoSQL world, and Map/Reduce frameworks such as Hadoop and distributed processing, etc.

It also needs to extend the existing Transaction processing and Event Processing semantics that come with JavaEE for handling of Real Time analytics that fits into the Big Data world as outlined in the references below:

Making Big Data Application Closer to Mainstream development practices

[Detailed requirements list elided for brevity.] …

Built in support for public/private cloud

Big Data applications tend to consume lots of compute and storage resources. There are a growing number of cases where the use of the cloud enables significantly better economics for running Big Data applications. To take advantage of those economics, Big Data Application Platforms need to come with built-in support for public/private clouds that will include seamless transition between the various cloud platforms through integration with frameworks such as JClouds. Cloud Bursting provides a hybrid model for using cloud resources as spare capacity to handle load. To effectively handle Cloud Bursting with Big Data, we'll have to make the data available on both the public and private sides of the cloud under reasonable latency – which often requires other services such as data replication.

Open & Consistent management and orchestration across the stack

A typical Big Data application stack includes multiple layers such as the database itself, the web tier, the processing tier, the caching layer, the data synchronization and distribution layer, reporting tools, etc. One of the biggest challenges is that each of those layers comes with different management, provisioning, monitoring and troubleshooting tools. Big Data applications tend to be fairly complex by their very nature; the lack of consistent management, monitoring and orchestration across the stack makes the maintenance and management of this sort of application significantly harder.

In most of the existing Java EE management layers, the management application assumed control of the entire stack. With Big Data applications, that assumption doesn’t apply. The stack can vary quite significantly between different application layers; therefore, the management layer of a Big Data Application Platform needs to provide a more open management model that can host different databases, web containers, and so on, and provide consistent management and monitoring across the entire stack.

Final words

Java EE application servers played an important role in making the development of database-centric web application closer to mainstream. Other frameworks such as Spring and Ruby on Rails later emerged to increase the development productivity of those applications. Big Data Application Platforms have a similar purpose – they should provide the framework for making the development, maintenance and management of Big Data Applications simpler. In a way, you could think of Big Data Application platforms as a natural evolution of the current application platforms.

With the current shift of Java EE application platforms toward PaaS, we're going to see even stronger demand for running Big Data applications in cloud-based environments due to the inherent economic and operational benefits. Compared to the current PaaS model, moving data to the cloud is more complex and would require more advanced support for data replication across sites, cloud bursting, etc.

The good news is that Big Data Application platforms are being implemented with these goals in mind, and you can already see migration yielding the benefits one should expect.


Jeff Barr (@jeffbarr) reported New S3 Features for The AWS Management Console in a 9/1/2011 post:

imageWe have added three new features to the Amazon S3 tab of the AWS Management Console:

  1. Easier Access - You no longer need to install Adobe Flash or provide outbound access to port 843 on your network in order to use the S3 tab.
  2. Folder Upload - You can now upload the contents of an entire folder with a single selection using a new Advanced Uploader.
  3. Jump - You can now search for objects or folders by simply typing the first few characters of the name.

Easier Access
imageThe console no longer uses Adobe Flash. You don't need to install it (problematic in some locked-down corporate environments) and you don't have to enable outbound access on port 843 (required by Flash). You can now use the S3 tab from behind a regular or transparent proxy.

Folder Upload
The new Advanced Uploader allows you to upload entire folders at once. It also allows you to upload individual objects that are larger than 5 GB. You must enable the uploader (a Java applet) in order to take advantage of these new features. To do so, simply click on the Upload button and then Enable Enhanced Uploader:


Once you have done this you can select one or more folders each time you click on Add Files. You can even click this button more than once if you'd like:

Jump
If you have buckets with lots of objects, you'll love the new Jump feature. Start entering the prefix of the objects or folders that you are looking for and the console will jump to the items that match or follow what you type:

We'll do our best to keep making the AWS Management Console even better. To do so, we need your feedback. Please feel free to post your suggestions in the S3 Forum.


<Return to section navigation list>
