Monday, January 17, 2011

Windows Azure and Cloud Computing Posts for 1/17/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate to.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

No significant articles today.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Sudhir Hasbe and Bruno Aziza will deliver a Microsoft DataMarket: Leveraging cloud to deliver public domain and commercial data to millions session on 2/2/2011 to the O’Reilly Strata Conference 2011 in the Mission City B4 room, Santa Clara Convention Center, Santa Clara, CA:

Windows Azure Marketplace includes data, imagery, and real-time web services from leading commercial data providers and authoritative public data sources. Customers have access to datasets such as demographic, environmental, financial, retail, weather and sports data. Developers can build applications for platforms such as PCs, servers, Azure, Windows Phone, iPhone and iPad using data from DataMarket, and can access the data as a service through the industry-standard OData API. Information workers can use the data to perform analysis with tools like Excel, PowerPivot and third-party applications. DataMarket also includes visualizations and analytics to enable insight on top of the data.
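As a rough illustration of the OData access pattern the abstract describes, the following C# snippet pulls the first few rows of a DataMarket dataset over HTTPS; the dataset path is a hypothetical placeholder and the account key comes from your DataMarket account:

using System;
using System.Net;

// DataMarket feeds use Basic authentication; the account key serves as the password.
string accountKey = "<your DataMarket account key>";

// Hypothetical dataset URI and query; substitute the service root of a real dataset.
string query = "https://api.datamarket.azure.com/SomePublisher/SomeDataset/v1/Items?$top=10";

using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential(accountKey, accountKey);
    string atomFeed = client.DownloadString(query);   // OData (Atom) payload
    Console.WriteLine(atomFeed);
}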

Sudhir Hasbe, Microsoft

Sudhir Hasbe is a Sr. Product Manager at Microsoft focusing on SQL Azure and middleware. Mr. Hasbe is responsible for DataMarket, Microsoft’s information marketplace, and for building the ISV partner ecosystem around the Windows Azure Platform. He has a passion for incubation businesses and worked at a few startups prior to Microsoft.

Sudhir Hasbe has a rich background in the ERP, EAI and supply chain management space. Prior to Microsoft, Mr. Hasbe worked with more than 55 companies in 10 countries over 10 years, implementing and integrating ERP systems with various supply chain and data collection solutions.

Bruno Aziza, Microsoft

Bruno Aziza is a recognized authority on Business Intelligence and Information Management. He is the co-author of the best-selling book “Drive Business Performance: Enabling a Culture of Intelligent Execution” and a Fellow at the Advanced Performance Institute, a world-leading, independent advisory group specializing in organizational performance. Drs. Kaplan & Norton, of Balanced Scorecard fame, praise Aziza for moving “the field of performance management forward in important new directions.”

Aziza has guest lectured at Stanford University in the US and the Cranfield School of Management in the UK. Aziza was educated in France, Germany, the UK and the US and completed Executive Education programs from Harvard Business School, the Kellogg School of Management and the MIT Sloan School of Management. He holds a Master’s degree in Business and Economics and speaks French, German and English.

Aziza has held management positions at Apple Inc., Business Objects (SAP), AppStream (Symantec) and Decathlon SA. He currently works on Microsoft Business Intelligence go-to-market strategy and execution for partners, services, sales and marketing. Aziza lives in Seattle with his family and enjoys sports and travelling.


Alistair Croll asked Who Owns Your Data? in a 1/12/2011 post to the Mashable blog:

Karl Marx said that the industrial revolution polarized the world into two groups: those who own the means of production and those who work on them.

Today’s means of production aren’t greasy cogs and steam-spewing engines, but that doesn’t mean they don’t divide us. Industrial data is all around us, and search engines, governments, financial markets, social networks and law enforcement agencies rely on it.

We willingly embrace this “Big Data” world. We share, friend, check in and retweet our every move. We swipe loyalty cards and enter frequent flyer numbers. We leave a growing, and apparently innocent trail of digital breadcrumbs in our wake.

But as we use the Internet for “free,” we have to remember that if we’re not paying for something, we’re not the customer. We are in fact the product being sold — or, more specifically, our data is.

So here’s a tricky question: Who owns all that data?

Why Data Ownership Is Hard

The fundamental problem with data ownership is that bits don’t behave like atoms. For most of human history, our laws have focused on physical assets that couldn’t be duplicated. The old truism “possession is nine-tenths of the law” doesn’t apply in a world where making a million copies, each as good as the original, is nearly effortless.

It’s not just the ability to copy that makes data different, however. How data is used affects its value. If I share a movie with someone, the copyright holder loses a potential sale. On the other hand, they may make money: freely sharing Monty Python videos online increased DVD sales by 23,000%. Some kinds of information were meant to be shared. If I give my phone number to someone, surely it’s gained value. But if it’s written on a bathroom wall, presumably it’s lost some.

It is hard to get data control right, too. In 2009, Burning Man required that Burners give organizers control of any of their images that were shared by a third party. The well-meaning effort to protect against unwanted distribution sparked a vigorous debate about what electronic freedom really means. More recently, WikiLeaks has forced us to ask: Do thousands of leaked cables belong to the government, U.S. citizens, WikiLeaks or the newspapers that published them?

Old Laws, New Problems

These questions of data ownership are all nuanced issues, quick to anger and hard to resolve. We’re struggling to cope with them, both legislatively and culturally.

In a number of recent cases, outdated laws are being repurposed and abused, alternately defending and restricting freedom. We’re using ancient wiretapping laws to imprison people who record law enforcement officials. At the same time that reading a spouse’s e-mail is a criminal offense, Google search history is now an admissible form of evidence. And the U.S. attorney general has just subpoenaed the private Twitter messages of a foreign citizen.

Big Data Makes Its Own Gravy

But if data law is confusing, Big Data makes it downright Byzantine. That’s because the act of collecting and analyzing massive amounts of public and private data actually generates more data, which is often as useful as the original information — and belongs to whomever performed the analysis. Put another way: Big Data makes its own gravy.

In August 2006, America Online published a dataset of search results, hoping to provide raw material to researchers. The data had been anonymized, so that each searcher’s identity was just a number. Five days later, The New York Times had tracked down one of those searchers by linking her search history to other public data, such as the phone book.

At big data companies, this kind of thing happens constantly; the companies just aren’t ignorant enough to do it in public. Since 2006, our willingness to share data has risen dramatically. So has companies’ ability to mine it for new insights, and not always in ways we’d approve of. Consider the Netflix Prize, awarded for figuring out how to use our film preferences to suggest other movies. That kind of power could also be used for an “Insurance Challenge” that turns our online behavior into actuarial tables that dictate premiums and deny some of us coverage.

The Rise of the Data Marketplace

For decades, lawyers and traders have relied on companies like Thomson Reuters for the latest stock and legal news. Now startups like Gnip, Infochimps, Windows Azure DataMarket, Factual, and Datamarketplace (acquired by Infochimps) are making data easier to acquire and massage. These data marketplaces seldom create new data; rather, they clean it up, ensure it’s current, and connect buyers and suppliers. Their value comes not from the data, but from making it usable and accessible.

These data marketplaces can teach us an important lesson about data ownership. Ultimately, the question of who owns information is a red herring.

It’s Not Really About Who Owns the Data

Thirty years ago, Stewart Brand observed that, “On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time.”

Data will leak out, as it always does, despite the best efforts of hardware companies. It’ll be around forever, even if we try to impose a statute of limitations on it. And we’ll find new ways to analyze it, making still more data. Yesterday’s online chaff may be the cornerstone of tomorrow’s new startup.

The important question isn’t who owns the data. Ultimately, we all do. A better question is, who owns the means of analysis? Because that’s how, as Brand suggests, you get the right information in the right place. The digital divide isn’t about who owns data — it’s about who can put that data to work.


Image courtesy of iStockphoto, fpm, grybaz

Alistair Croll is the co-chair of O’Reilly’s Strata conference, which tackles the convergence of Big Data, ubiquitous computing, and new interfaces.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Ron Jacobs posted Windows Server AppFabric and the new Web Platform Installer (3.0) on 1/17/2011:

When the new Web Platform Installer (3.0) was released, I had trouble finding Windows Server AppFabric. The new UI can be a little confusing, so to save you some trouble, here are the instructions for installing Windows Server AppFabric.

The Simple Way
  • Use this link; it will launch the Web Platform Installer and install AppFabric for you.
The Web Platform Installer UI

Go to http://www.microsoft.com/web, then download and run the Web Platform Installer.


In the search box, type AppFabric and press Enter.


The search results will take you to AppFabric (and let you know if it was already installed).


Note: I have some old AppFabric setup artifacts on this machine – you might not see this.

This section also covers the Windows Server AppFabric.

 


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Adron Hall (@adronbh) published Gritty Technical Info on Windows Azure Worker Roles on 1/17/2011:

In the last blog entry, “Gritty Technical Info on Windows Azure Web Roles“, I covered the creation and startup of a web role within the Windows Azure Development Fabric and observing the web role with the Windows Azure Compute Emulator. In this blog entry I’ll cover the worker role.

Open the Windows Azure Web Role Sample solution. Right-click on the Windows Azure project and select New Worker Role Project.

New Worker Role Project...

New Worker Role Project...

Once the worker role project SampleWorkerRole is added, Solution Explorer displays the project just like the web role, albeit with fewer files.

Solution Explorer

Solution Explorer

Next, right-click on the SampleWorkerRole instance in the Windows Azure Web Role Sample and select Properties. Set the instance count to 2 and the VM size to extra large.

SampleWorkerRole Properties

SampleWorkerRole Properties

Press F5 to run the application. When the application executes, the 6 web role instances and the 2 worker role instances will start.

Windows Azure Compute Emulator

Windows Azure Compute Emulator

Examine the first worker role instance.

SampleWorkerRole Instance Status

SampleWorkerRole Instance Status

The worker role instance displays a number of new diagnostic messages in a similar way to the web role.  The first half of the trace diagnostics are configuration and instance messages.  The second half of the trace diagnostics are status messages that are printed from the worker role running.

Open up the code in the WorkerRole.cs file in the SampleWorkerRole Project.  As a comparison open the WebRole.cs file in the SampleWebRole Project.

using System.Diagnostics;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace SampleWorkerRole
{
    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            Trace.WriteLine("SampleWorkerRole entry point called", "Information");

            while (true)
            {
                Thread.Sleep(10000);
                Trace.WriteLine("Working", "Information");
            }
        }

        public override bool OnStart()
        {
            ServicePointManager.DefaultConnectionLimit = 12;
            return base.OnStart();
        }
    }
}

In the WorkerRole.cs file the code inherits from RoleEntryPoint for the worker role. In the WorkerRole class, the Run and OnStart methods are overridden to provide some basic trace information and set the default connection limit.

The Run method has a basic while loop that updates every 10000 milliseconds, which displays on the Windows Azure Compute Emulator as “Information: Working”.

using Microsoft.WindowsAzure.ServiceRuntime;

namespace SampleWebRole
{
    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            return base.OnStart();
        }
    }
}

In the code for the WebRole.cs file there is very little actually going on. Take a closer look at the OnStart method override. Technically this code doesn’t even need to be in the generated file and can be deleted, but it provides a good starting point for adding any other code needed when the web role starts.

Next I’ll add some code to the worker role to provide a telnet prompt that responds with worker role information. To work through this exercise completely, download a telnet client such as PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/).

If Visual Studio 2010 is no longer open, launch it and open the Windows Azure Web Role Sample solution. Right-click on the SampleWorkerRole role in the Windows Azure Web Role Sample project, click the Endpoints tab of the properties window, then click Add Endpoint and name the endpoint TelnetServiceEndpoint.

Endpoint

Endpoint
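Behind the designer, this adds an entry to the <Endpoints> section of ServiceDefinition.csdef; assuming a TCP input endpoint on port 1234 (the port that shows up in the Compute Emulator later), it will look something like this:

<Endpoints>
  <InputEndpoint name="TelnetServiceEndpoint" protocol="tcp" port="1234" />
</Endpoints>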

Add a private member and create a Run method with the following code.

private readonly AutoResetEvent _connectionWait = new AutoResetEvent(false);

public override void Run()
{
    Trace.WriteLine("Starting Telnet Service...", "Information");

    TcpListener listener;
    try
    {
        listener = new TcpListener(
            RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TelnetServiceEndpoint"].IPEndpoint) { ExclusiveAddressUse = false };
        listener.Start();

        Trace.WriteLine("Started Telnet Service.", "Information");
    }
    catch (SocketException se)
    {
        Trace.Write("Telnet Service could not start: " + se.Message, "Error");
        return;
    }

    while (true)
    {
        listener.BeginAcceptTcpClient(HandleAsyncConnection, listener);
        _connectionWait.WaitOne();
    }
}

After adding this code, add the following code for the role information to write to a stream.

private static void WriteRoleInformation(Guid clientId, StreamWriter writer)
{
    writer.WriteLine("--- Current Client ID, Date & Time ----");
    writer.WriteLine("Current date: " + DateTime.Now.ToLongDateString() + " " + DateTime.Now.ToLongTimeString());
    writer.WriteLine("Connection ID: " + clientId);
    writer.WriteLine();

    writer.WriteLine("--- Current Role Instance Information ----");
    writer.WriteLine("Role ID: " + RoleEnvironment.CurrentRoleInstance.Id);
    writer.WriteLine("Role Count: " + RoleEnvironment.Roles.Count);
    writer.WriteLine("Deployment ID: " + RoleEnvironment.DeploymentId);
    writer.WriteLine();

    writer.WriteLine("--- Instance Endpoints ----");

    foreach (KeyValuePair<string, RoleInstanceEndpoint> instanceEndpoint in RoleEnvironment.CurrentRoleInstance.InstanceEndpoints)
    {
        writer.WriteLine("Instance Endpoint Key: " + instanceEndpoint.Key);

        RoleInstanceEndpoint roleInstanceEndpoint = instanceEndpoint.Value;

        writer.WriteLine("Instance Endpoint IP: " + roleInstanceEndpoint.IPEndpoint);
        writer.WriteLine("Instance Endpoint Protocol: " + roleInstanceEndpoint.Protocol);
        writer.WriteLine("Instance Endpoint Type: " + roleInstanceEndpoint);
        writer.WriteLine();
    }
}

Now add a handle method for the asynchronous call.

private void HandleAsyncConnection(IAsyncResult result)
{
    var listener = (TcpListener)result.AsyncState;
    var client = listener.EndAcceptTcpClient(result);
    _connectionWait.Set();

    var clientId = Guid.NewGuid();
    Trace.WriteLine("Connection ID: " + clientId, "Information");

    var netStream = client.GetStream();
    var reader = new StreamReader(netStream);
    var writer = new StreamWriter(netStream);
    writer.AutoFlush = true;

    var input = string.Empty;
    while (input != "3")
    {
        writer.WriteLine(" 1) Display Worker Role Information");
        writer.WriteLine(" 2) Recycle");
        writer.WriteLine(" 3) Quit");
        writer.Write("Enter your choice: ");

        input = reader.ReadLine();
        writer.WriteLine();

        switch (input)
        {
            case "1":
                WriteRoleInformation(clientId, writer);
                break;
            case "2":
                RoleEnvironment.RequestRecycle();
                break;
        }

        writer.WriteLine();
    }

    client.Close();
}

Finally, override the OnStart() method and wire up the RoleEnvironment.Changing event.

public override bool OnStart()
{
    ServicePointManager.DefaultConnectionLimit = 12;

    DiagnosticMonitor.Start("DiagnosticsConnectionString");

    RoleEnvironment.Changing += RoleEnvironmentChanging;

    return base.OnStart();
}

private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
    {
        e.Cancel = true;
    }
}

Now run the role by pressing F5. When the application runs, open the Windows Azure Compute Emulator to check for the endpoint and verify that the instances of the role are running.

Endpoint displayed in the Windows Azure Compute Emulator

Endpoint displayed in the Windows Azure Compute Emulator

The Service Name should be Windows_Azure_Web_Role, with an Interface Type of SampleWorkerRole running on the tcp://*:1234 URL and an IP of 127.0.0.1:1234.

SampleWorkerRole

SampleWorkerRole

Click on one of the instances, which should be green, and ensure that each has started up appropriately.

PuTTY

PuTTY

Start up a telnet application, such as PuTTY, and enter the information as shown in the screenshot above.

Telnet

Telnet

Start the telnet session connecting to the Windows Azure worker role. The prompt with the three choices will display. Select the recycle and display worker role information options a few times, just to make sure all the information is available and the worker role telnet service is working. Select 3 to exit, and the prompt should close while the role continues to run on the development fabric.

Bravo! Adron


The Microsoft Case Studies Team posted an 00:08:23 Firm Streamlines Field-Service Tasks with Cloud Solution video segment about another startup using the Windows Azure platform:


Broad Reach Mobility wanted to create a full-featured solution for the field-service industry—but wanted to avoid the cost and complexity of on-premise software and managing their own data center. The firm decided to build its new ServiceReach solution on the Windows Azure platform. Windows Azure provides the scalability, flexibility, and cost-effectiveness that helped Broad Reach Mobility deploy a compelling field-service solution that can be used by any sized business. [Link added.]


Bill Zack asked Contemplating a Move to the Cloud? in a 1/17/2011 post to his Ignition Showcase blog:

Are you an ISV contemplating moving your product or service to the cloud?

Cumulux is an ISV that has its own line of products and also has an interesting methodology for helping other ISVs and customers move their applications and services to the Windows Azure cloud.

They were also recently named Microsoft’s 2010 Global ISV Cloud Partner of the Year. Ryan Dunn, Cumulux Director of Cloud Services, recently joined them from Microsoft, where he was well known as a key Windows Azure evangelist.


Recently Cumulux conducted a webcast: The Business Case for Azure (Part 1).

(I originally posted the links to this webcast series here.)

This webcast focused on the business value of moving to Azure, mainly from the ISV’s perspective. There is no recording posted yet, but in the meantime you can get the slides here.


Andy Cross (@andybareweb) posted Running Fiddler in Windows Azure with AzureFiddlerCore on 1/15/2011:

AzureFiddlerCore allows any .NET application running in Windows Azure to capture the HTTP traffic that the application produces and automatically store it in BlobStorage.

It is the result of an amalgamation of two of my passions: the power and scalability of Windows Azure and the insight possible through using Fiddler. With version 1.3 of the Azure SDK it is now possible to remote desktop onto any of your instances, meaning you can download and install Fiddler on the instances. It is possible, but in practice it is a bit tricky. The instances all have hardened security enabled, so downloading Fiddler requires messing around with trusted sites (and remembering to set the trusted site for the download to GetFiddler.com rather than fiddler2.com!). Additionally, you are installing an HTTP proxy onto a machine, which could negatively affect the performance of the instance should you forget to disable it afterwards. Then you have the problem of getting the traces off the instance in order to debug them, which may be a little tricky.

AzureFiddlerCore shortcuts these problems by embedding a core version of Fiddler directly into your Azure application that writes its traces directly to BlobStorage. You simply start up the proxy, passing it a little information about the storage account, and then for any particular web requests you are interested in you get a WebProxy instance from AzureFiddlerCore, which will handle logging the information to the specified storage account.

Setup
AzureFiddlerCoreLib.AzureFiddlerCore.Connect(storageAccount, filterList, localResource);

Unlike Fiddler’s normal operation, the default behaviour of AzureFiddlerCore is to only capture those requests that you manually pass through to it. Furthermore, you can specify a list of hostname domains to allow capture for, and thus anything not in this list is ignored. I call this an entropy reduction approach – only those requests you want to trace are logged.
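For context, here is a minimal sketch of how the three Connect arguments might be assembled in a role's OnStart. The types are my assumptions based on the call above (a CloudStorageAccount, a list of hostnames to capture and a local resource for scratch files), and the local resource name and filter host are hypothetical, so check the AzureFiddlerCore sample project for the exact signature:

// Assumed argument types; verify against the AzureFiddlerCore sample project.
var storageAccount = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DiagnosticsConnectionString"));

// Only requests to these hosts are captured (the entropy-reduction filter).
var filterList = new List<string> { "api.example.com" };

// A local storage resource declared in ServiceDefinition.csdef (hypothetical name).
var localResource = RoleEnvironment.GetLocalResource("AzureFiddlerCoreScratch");

AzureFiddlerCoreLib.AzureFiddlerCore.Connect(storageAccount, filterList, localResource);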

using (var wc = new WebClient())
{
    wc.Headers.Add(HttpRequestHeader.UserAgent, "AzureFiddlerCore.Demo");
    var _proxy = AzureFiddlerCoreLib.AzureFiddlerCore.GetProxy();

    if (_proxy != null)
    {
        wc.Proxy = _proxy;
    }

    // Any request made through the proxied WebClient is now captured by AzureFiddlerCore.
    // The URL below is a placeholder for whatever your application actually calls.
    var response = wc.DownloadString("http://www.example.com/");
}

Output

Once you have set up your capture and made some HTTP requests from within your application, you will find that containers are created for you in BlobStorage for the account you specified. The image below shows these containers, using the default container naming policy (which you can change to supply your own container name). The name is made up of “afcsaz-”, a prefix short for AzureFiddlerCore Session Archive Zip, the tick count (to make a unique name) and the name of the role that did the logging.

Azure Fiddler Core output

Azure Fiddler Core output
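Purely as an illustration of that naming scheme (this is not the library's actual code), the default container name could be composed along these lines:

// Illustrative only; the real composition lives inside AzureFiddlerCore.
// Blob container names must be lower-case letters, digits and dashes, so the
// role name may need sanitising before use.
string containerName = string.Format("afcsaz-{0}-{1}",
    DateTime.UtcNow.Ticks,
    RoleEnvironment.CurrentRoleInstance.Role.Name).ToLowerInvariant();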

Opening up one of these containers shows how you can get at the Fiddler output – the SAZ files in this container can be downloaded and opened in the full Fiddler desktop client, allowing you to inspect the remote HTTP requests and responses.

Container contents

Container contents

A session in Fiddler Desktop client, that was captured in Azure

A session in Fiddler Desktop client, that was captured in Azure
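If you would rather pull the traces down programmatically than browse for them, a short sketch with the SDK 1.3 storage client might look like the following; the connection string and download folder are placeholders:

// Requires references/usings for Microsoft.WindowsAzure,
// Microsoft.WindowsAzure.StorageClient and System.IO.
var account = CloudStorageAccount.Parse("<your storage connection string>");
var blobClient = account.CreateCloudBlobClient();

// Enumerate the AzureFiddlerCore containers by their default prefix.
foreach (CloudBlobContainer container in blobClient.ListContainers("afcsaz-"))
{
    foreach (IListBlobItem item in container.ListBlobs())
    {
        var blob = container.GetBlobReference(item.Uri.ToString());
        string localPath = Path.Combine(@"C:\FiddlerTraces", Path.GetFileName(item.Uri.LocalPath));
        blob.DownloadToFile(localPath);   // open the downloaded .saz in the Fiddler desktop client
    }
}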

Acknowledgements

This project wouldn’t have been possible without Eric Lawrence, the creator of Fiddler. In addition, he supplied the technology required to export to a SAZ archive (http://fiddler.wikidot.com/saz-files). In the implementation of SAZ creation, the software uses DotNetZip, so I’d like to thank them for their reliable software.

Source Code

If you are interested in what this can provide, I suggest you download the source code, which contains an example project. It is ready to run on DevFabric, but if you update the storage account details you will be able to get it to run in Azure with no problems.

The source code for this project is available on CodePlex at http://azurefiddlercore.codeplex.com/


Andy Cross (@andybareweb) explained Implementing Custom Performance Counters in Windows Azure SDK 1.3 on 1/14/2011:

Performance counters in Windows, as implemented by .NET, are one of the most powerful ways of charting the performance of your software. Windows comes with a huge variety of built-in counters that help chart your hardware and software, from memory usage and processor utilisation to ASP.NET errors/sec. However, what Windows cannot supply is a set of counters that understand what your application is doing “under the hood”. As the developer, you are the only person who can truly understand that.

Fortunately, the .NET Framework provides a set of mechanisms for adding your own performance counters into Windows to allow the tracking of arbitrary processes inside your application. These counters are available equally in Windows Azure, which is exceptionally useful given that a key motivator for utilising the power of Cloud Computing is performance and scalability. You may try tracking the load and performance of your system based on processor utilisation, but estimates based on this data are always an approximation of what your software is doing, since you’re examining the side-effects of your software rather than tracing its operation directly.

This post will show how to introduce and implement your very own Custom Performance Counters using Windows Azure Diagnostics from SDK v1.3. I have posted the source code as well at the end of the post.

Setup

Firstly we need to build the scaffolding of our application, which will be a Cloud Project in Visual Studio 2010. The Cloud Project contains one Worker Role, but this technique works equally well with any type of role.


Create a new Cloud Project with a Worker Role

Now we need to add a new Console Application to this Cloud Project. This may seem a little bizarre, as a Console App isn’t normally one of the types of project that we deploy to the cloud, but we will use it as a <Startup/> task, and a straightforward synchronous Console App is the easiest way to achieve this.

Right Click on Solution, Add New Project, Console Application

Right Click on Solution, Add New Project, Console Application

When you have added this, your Solution structure should look like the below:


Correct Solution Structure

We need to make sure our CounterInstaller is built and deployed in an efficient way once we deploy to the Azure Fabric. I chose to make sure that the assembly compiled from this project is always in Release mode, meaning ready for the Cloud rather than with debug symbols attached to the file. Additionally, it means that I can always look for the output of the project in the /bin/Release/ folder, rather than having to go back and forth between /bin/Debug/ and /bin/Release/. To do this, I made a change to the Build | Configuration Options menu option, setting BareWeb.CounterInstaller to be Release mode:

Build Menu option

Build Menu option

Setting Configuration Option to Release mode

Setting Configuration Option to Release mode

Now I have to find a way to get the output of the CounterInstaller into my WorkerRole. I want to make sure that every time I compile the CounterInstaller project I don’t need to add the file again, so copying the output and adding it in statically isn’t good enough. Instead, I need to browse to the output of the CounterInstaller project (set to Release earlier) and add the assembly generated as a Linked File.

Add an Existing Item for the CounterInstaller assembly

Add an Existing Item for the CounterInstaller assembly

Browse to the output location for Release

Browse to the output location for Release

Add the file as a Linked File

Add the file as a Linked File

We also need to make sure that we mark this Linked file as content that should be always moved to the output location by right clicking on the file, accessing its properties and setting its Copy to Output Directory property to “Copy Always”.

Copy to Output Directory ALWAYS

Copy to Output Directory ALWAYS

We are almost there with our setup now; all we need to add is a .cmd file to the worker role as a batch file to give us an entry point into the Startup task. I did this by right-clicking on the project and adding a new text file, but giving it a .cmd file extension. Then remember to set the Copy to Output Directory property to “Copy Always”.
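The batch file itself only needs to run the installer. A minimal sketch, assuming the linked assembly lands in the role's approot as BareWeb.CounterInstaller.exe, looks like this (and, as noted in the troubleshooting section below, keep the file ANSI-encoded):

REM Runs the custom counter installer as an elevated, simple startup task.
BareWeb.CounterInstaller.exe
exit /b 0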

Now we need to do the wiring up of the Startup task from Windows Azure’s point of view. At the moment, all we have is an executable in the root of a WorkerRole, which is never called or referenced. Thus, nothing will happen yet.

Open the ServiceDefinition.csdef and add the following node inside the <WorkerRole> node:

<Startup>
<Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple">
</Task>
</Startup>

Note that executionContext=”elevated” means that the task will start with UAC admin privileges granted, so it will be able to create the PerformanceCounters, and taskType=”simple” means that the task will have to complete before the WorkerRole continues to execute (i.e. it is a synchronous task).

JohScott has added community content to the MSDN article describing these settings, which I will reproduce here for clarity:

Startup tasks can be run at 2 privilege levels:

Limited: Runs with the same privileges that the role runs at.

Elevated: Runs with Administrative privileges.

Startup tasks are of 3 types:

Simple (Default): (Synchronous) In this case the task is launched and forward progress in the instance is blocked until the task is complete.

Background: (Asynchronous Fire and Forget) In this case the task is launched and role startup continues.

Foreground: (Asynchronous) Similar to Background, but role shutdown can only happen when all the foreground tasks exit.

That completes the basic setup of our solution. All we have to do now is fill in the blanks where the code should be! First I’ll just go over some likely problems we might have caused.

Troubleshooting

In the process described above, I found a couple of problems (that I’ve documented around above but I will list below anyway).

Firstly, if you do not set the Copy to Output Directory for the .cmd or .exe you may get an error such as “CloudServices64 : Cannot find file named ‘approot\Startup.cmd’ for startup task Startup.cmd  of role BareWeb.CustomCounterIterator.”

Error finding .cmd

Error finding .cmd

Simply setting the Copy to Output Directory to “Copy Always” (or technically you could do “Copy if Newer”) will solve this:

Ensure .exe and .cmd are Copied Always

Ensure .exe and .cmd are Copied Always

If you get an error where the role is not starting, make sure that your .cmd file is saved as ANSI rather than UTF-8 – Visual Studio creates it as UTF-8 and saves it as such every time you save it. Use Notepad Save As to set the content type to ANSI, and you should be ok. You can check whether this is the case by running the .cmd at a command prompt, and the indicative error is something like:

C:\Users\andy>C:\Dev\BareWeb.CustomPerfCounters\BareWeb.CustomCounterIterator\Startup.cmd

C:\Users\andy>´╗┐echo if you have problems with startup, make sure this file is not UTF-8 but ANSI

‘´╗┐echo’ is not recognized as an internal or external command, operable program or batch file.

The characters ´╗┐ should not be there (and won’t be visible), but are indicative of an encoding problem. Changing to ANSI encoding will solve this:

Symptom of potential encoding error

Symptom of potential encoding error

Save with Notepad as ANSI

Save with Notepad as ANSI

Code

Now on to the good bit! We have a solution that compiles, but doesn’t do anything! Starting with the CounterInstaller project, we need a few simple lines of code to create a PerformanceCounterCategory and add the PerformanceCounters within it. The code is relatively straightforward; the only thing to mention is that I am adding two types of counter into the category, which isn’t necessary – you can add as many or as few into a category as you want. My method also allows for the deletion of existing categories if necessary. This may be overkill, since any startup task will often run on a vanilla host in the cloud, but it will be re-run many times when developing with devFabric locally.

private static void CreateCustomCounters(string category, bool deleteIfExists)
{
if (deleteIfExists && PerformanceCounterCategory.Exists(category))
{
PerformanceCounterCategory.Delete(category);
}
if (!PerformanceCounterCategory.Exists(category))
{
CounterCreationDataCollection counterCollection = new CounterCreationDataCollection();

// add a counter tracking operations per second
CounterCreationData opsPerSec = new CounterCreationData();
opsPerSec.CounterName = "# operations /sec";
opsPerSec.CounterHelp = "Number of operations executed per second";
opsPerSec.CounterType = PerformanceCounterType.RateOfCountsPerSecond32;
counterCollection.Add(opsPerSec);

// add a counter tracking the total number of operations
CounterCreationData operationTotal = new CounterCreationData();
operationTotal.CounterName = "Total # operations";
operationTotal.CounterHelp = "Total number of operations executed";
operationTotal.CounterType = PerformanceCounterType.NumberOfItems32;
counterCollection.Add(operationTotal);

PerformanceCounterCategory.Create(category,
"A custom counter category that tracks execution", PerformanceCounterCategoryType.SingleInstance,
counterCollection);
}
else
{
Console.Error.WriteLine("Counter already exists, try specifying parameters for categoryname and or insisting on deleting");
}
}
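The console application’s Main method is not listed above; it only needs to call this helper. A minimal sketch, using the “bareweb.customcounter” category name that the CounterSpecifiers later in the post refer to, is:

class Program
{
    static void Main(string[] args)
    {
        // The category name must match what the worker role increments and transfers.
        CreateCustomCounters("bareweb.customcounter", true);
    }

    // CreateCustomCounters, as listed above, lives in this same class.
}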

Next we need to edit WorkerRole.cs to do two things. Firstly, we need to make sure that all the performance counters we’re working with are copied into cloud storage, otherwise we would need to remote desktop onto the servers in order to visualise them (not ideal!). Secondly, we need to instantiate instances of the installed counters in order to increment their values.

To increment the values, just instantiate an instance of an object that represents the Performance Counter (or both in my case) and call the Increment() method. You can also call IncrementBy(int) if you want to do something more advanced.

public override void Run()
{
// This is a sample worker implementation. Replace with your logic.
Trace.WriteLine("BareWeb.CustomCounterIterator entry point called", "Information");

// create the local instances of the counters
PerformanceCounter totalOperationsCounter = new PerformanceCounter();
totalOperationsCounter.CategoryName = _categoryName;
totalOperationsCounter.CounterName = _operationTotalCounterName;
totalOperationsCounter.MachineName = ".";
totalOperationsCounter.ReadOnly = false;

PerformanceCounter perSecondCounter = new PerformanceCounter();
perSecondCounter.CategoryName = _categoryName;
perSecondCounter.CounterName = _opsPerSecCounterName;
perSecondCounter.MachineName = ".";
perSecondCounter.ReadOnly = false;
var counterExists = PerformanceCounterCategory.Exists(_categoryName);

while (true)
{
// increment if counters exist
if (counterExists)
{
Trace.WriteLine("Incrementing", "Information");
totalOperationsCounter.Increment();
perSecondCounter.Increment();
}
Thread.Sleep(10000);
Trace.WriteLine("Working", "Information");
}
}

Finally, set up the role to transfer the performance counters to cloud storage:

string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));

RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId, RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);
DiagnosticMonitorConfiguration config = roleInstanceDiagnosticManager.GetCurrentConfiguration();

PerformanceCounterConfiguration opsPerSecConfig = new PerformanceCounterConfiguration();
opsPerSecConfig.CounterSpecifier = @"\bareweb.customcounter\# operations /sec";
opsPerSecConfig.SampleRate = System.TimeSpan.FromSeconds(1.0);
config.PerformanceCounters.DataSources.Add(opsPerSecConfig);

PerformanceCounterConfiguration totalOpsConfig = new PerformanceCounterConfiguration();
totalOpsConfig.CounterSpecifier = @"\bareweb.customcounter\Total # operations";
totalOpsConfig.SampleRate = System.TimeSpan.FromSeconds(1.0);
config.PerformanceCounters.DataSources.Add(totalOpsConfig);

config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);

roleInstanceDiagnosticManager.SetCurrentConfiguration(config);

And there you have it!

In Action

DevFabric traces that it is actually incrementing

DevFabric traces that it is actually incrementing

Visualisations

If you want to look at the performance through Remote Desktop or on your local machine, you can use PerfMon to see the results. If you want to see them as they come from Cloud Storage, I suggest you look at Cerebrata’s Diagnostics Manager. This is how to select them from PerfMon:

Add the Custom Counters to Perfmon

Add the Custom Counters to Perfmon

And this is how they look:

Perfmon output

Perfmon output

Other visualisations are possible. You could for instance view the raw data using Azure Storage Explorer, or some similar tool for viewing TableStorage:

Azure Storage Explorer

Azure Storage Explorer

Alternatively, you can use the excellent Cerebrata Azure Diagnostics Manager to select and visualise the performance counters:

Cerebrata Performance Counter options

Cerebrata Performance Counter options

Once you have selected the date range and such, you can choose which counters to visualise:

Choose counters in Cerebrata

Choose counters in Cerebrata

Then you can select to visualise the performance counters:

Cerebrata Performance Counter visualisation

Cerebrata Performance Counter visualisation

As one final hint, if you give your categories a different name from mine (I recommend that you do!) you can use the typeperf /q command to find out what they are installed as, and modify the code listed above accordingly, for example: opsPerSecConfig.CounterSpecifier = @”\bareweb.customcounter\# operations /sec”;. If you also specify the counter category name, you get a much narrower result, which helps.

typeperf /q categoryname

typeperf /q categoryname

The source code is available here: BareWeb.CustomPerfCounters

Thanks
Andy

Update – I have updated the source code package and the above listings to correct a problem where the second of the two counters was not being transferred to TableStorage! Please contact me or post a comment if you have any questions.


Andy Cross (@andybareweb) described Implementing Azure Diagnostics with SDK V1.3 on 1/13/2011:

The Windows Azure Diagnostics infrastructure has a good set of options available to activate in order to diagnose potential problems in your Azure implementation. Once you are familiar with the basics of how Diagnostics work in Windows Azure, you may wish to move on to configuring these options and gaining further insight into your application’s performance.

Here is a cheat-sheet table that I have built up of the ways to enable the Azure Diagnostics using SDK 1.3. This cheat-sheet assumes that you have already built up a DiagnosticMonitorConfiguration instance named “config”, with code such as the below. This code may be placed somewhere like the “WebRole.cs” “OnStart” method:

                string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";
                CloudStorageAccount storageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));                

                RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId, RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);
                DiagnosticMonitorConfiguration config = roleInstanceDiagnosticManager.GetCurrentConfiguration();

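As an example of the kind of entries the cheat-sheet covers, a few of the common data sources can be switched on against the same config object as shown below (the transfer periods, log level and counter are sample values):

// Basic Azure trace logs
config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);

// Windows Event Logs (Application channel)
config.WindowsEventLog.DataSources.Add("Application!*");
config.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);

// A performance counter
config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
{
    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
    SampleRate = TimeSpan.FromSeconds(5D)
});
config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);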

After you have done this, remember to set the Configuration back for use, otherwise all of your hard work will be for nothing!

roleInstanceDiagnosticManager.SetCurrentConfiguration(config);

Hopefully some of you will find this useful. I will be posting later with more details of custom logging.


The Microsoft Case Studies Team posted Auto Maker Uses Scalable Cloud Solution to Deliver Internet Services for Electric Car on 1/10/2011 (missed when published):

Daimler, a leading manufacturer of premium passenger cars and commercial vehicles, started to release its smart fortwo electric drive car to selected customers in 2009. Through a web-based application, drivers have access to data about their vehicle, such as charge state. Daimler used the Windows Azure platform to quickly bring its solution to market cost-effectively.


Business Needs
Daimler, the parent company of Mercedes-Benz Cars, Mercedes-Benz Vans, Daimler Trucks, Daimler Buses, and Daimler Financial Services, is a global producer of premium passenger cars and one of the world’s largest manufacturers of commercial vehicles. In November 2009, the company started production of the smart fortwo electric drive, a battery-electric car.

Though electric cars have been prototyped for decades, they have not been released to the broader market. In order to gain more experience concerning the market requirements and consumer behavior, Daimler has delivered the second-generation smart fortwo electric drive to selected customers in several countries. As the demand has exceeded all expectations, smart has increased the production volume from 1,000 to more than 1,500 vehicles. From 2012, the smart fortwo electric drive will be available to anyone interested in almost 40 smart markets.

Daimler offers an Internet service for the smart fortwo electric drive that allows customers to access charging data from any browser on an Internet-connected device, such as their home computer or mobile phone. The service provides customers with a personalized Web page, where they can check the charge state of their vehicle, find locations of public charging spots in relation to the vehicle’s location by using Bing Maps, and configure certain vehicle functions, such as billing information. However, with the vehicle development well underway, the company needed a solution that it could use to deliver the off-board Internet service without requiring the lengthy process of procuring and configuring hardware. It also needed to bring the solution to market within a few months.

Also, because the cars are not yet produced in a large-scale production, Daimler wanted a solution that had minimized capital expenditures compared to traditional hosting methods to help control research and development costs.

Solution
First, Daimler developed a web application using PHP as a rapid prototype, hosted on a Linux server. Daimler evaluated the option of developing a Java-based website on its own, but decided to join the Microsoft Technology Adoption Program for the Windows Azure platform so that it could work closely with Microsoft to develop the application on the Windows Azure platform.

Windows Azure is a cloud services operating system that serves as the development, service hosting, and service management environment for the Windows Azure platform. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage web applications on the Internet through Microsoft data centers.

Developers from Daimler and Microsoft Germany worked together to develop the initial application in just three months. They used the Microsoft Visual Studio 2008 Professional development system, along with Microsoft Visual Studio Team System 2008 Team Foundation Server, to coordinate the source code between their two teams. The teams used two web roles in Windows Azure to send data between the vehicle and the application, and can easily add more web roles as needed. Microsoft SQL Azure stores relational data, including user and vehicle account information.

The application provides relevant data about the car and its charge state, and is accessible from any Internet-enabled device. For instance, drivers can use the application through an Internet browser on their desktop or mobile phone, which is integrated with Bing Maps, to find public charge spots for their cars. This helps drivers to plan trips and alleviates the fear that they won’t be able to charge their vehicle en route to a destination. Drivers can also check the charge state of their cars in order to easily see when it is charged for their next trip.

Benefits
By developing its web application on the Windows Azure platform, Daimler was able to bring its new offering to market cost-effectively. In addition, the auto maker will be able to easily scale up the application when demand grows.

  • Fast Time-to-Market. Daimler was able to develop its application for the electric car in just three months. In addition, team members can quickly add new features to the application. With Windows Azure, Daimler can develop and test new features locally and then deploy them to production. This is particularly important to the company, as it releases new features at least once a month based on the feedback it receives from drivers in the test market.
  • Reduced Capital Expenditures. With the Windows Azure platform, Daimler relies on Microsoft data centers for its processing and storage needs, and a pay-as-you-go pricing model once suitable invoicing is available. This means Daimler does not have to procure and configure server hardware or incur capital expenses for the smart fortwo Internet services project. In addition, the company has a cost-efficient model for ongoing usage, paying only for the compute and storage that it uses each month—helping ensure that it doesn’t overspend on hardware that might otherwise go unused.
  • Improved Scalability. Daimler looks forward to scaling its application to meet higher demand in the future. To start, the company used two web roles, but when Daimler deploys new features and needs to scale up, it can quickly add new web roles in Windows Azure to handle increased demand with a couple of clicks.

For more information about other Microsoft customer successes, please visit:
www.microsoft.com/casestudies


<Return to section navigation list> 

Visual Studio LightSwitch

No significant articles today.


<Return to section navigation list> 

Windows Azure Infrastructure

Problems with the duration of my Windows Azure Platform Cloud Essentials for Partners subscription confounded Microsoft Online Technical Service as described in this update of 1/17/2011 to my Windows Azure Compute Extra-Small VM Beta Now Available in the Cloud Essentials Pack and for General Use post of 1/9/2011:

Problem with Microsoft Online Services and Duration of Cloud Essentials for Partners Benefit

On 1/13/2011, I checked my subscriptions in the MOCP and discovered that the duration of my Cloud Essentials for Partners Benefit had been reduced to one month. I then sent the following message to a member of the Solution Partner Expert team:

The Partner Network Experts Team handed me off to you to solve the following problem with my Windows Azure Platform Cloud Essentials for Partners subscription:

When I first created this subscription (see my Windows Azure Compute Extra-Small VM Beta Now Available in the Cloud Essentials Pack and for General Use post of  1/9/2011 for details), MOCP showed the subscription active for one year (1/8/2011 to 1/8/2012), as expected. Unfortunately, I didn’t retain a screen capture with the popup at the time.

Now the active period appears as one month, as shown below. This is obviously incorrect. What can I do to make sure the benefit lasts for a full year?

[Figure 19]

Thanks in advance,

Roger Jennings

Today, I received this belated mail from Greg at Microsoft Online Services:

Hello Roger,

Thank you for contacting Microsoft Online Services. For your reference, the Service Request ID that has been assigned to your issue is 110117-001384.

I understand that you had a question about the end date changing on your Windows Azure Platform Introductory Special for Partners, subscription number 2141229813.

The reason for this is we are ending the program on March 31, 2011. Fortunately, you will be able to go to the Microsoft Partner Network website and sign up for the Partner Essentials. Here is the link: https://partner.microsoft.com/US/40118760.

If you have any further questions, please let us know how we can further assist you.

Thank you for contacting Microsoft Online Services.

It’s clear to me from Figures 16 and 19 that I have a Cloud Essentials for Partners subscription, not a Windows Azure Platform Introductory Special for Partners subscription. Also, Figure 12 states that the Windows Azure Platform Introductory Special for Partners benefit expires on 1/6/2011 and the Cloud Essentials for Partners starts on 1/7/2011.

I wonder why it wasn’t equally clear to Greg from Microsoft Online Technical Services (HelpNow@microsoft.com) who finally handled my request.


Steve Gillmor’s Gillmor Gang 1.15.11 (TCTV) post to the TechCrunch blog and ScreenCast to TechCrunch TV of 1/15/2011 discussed Bob Muglia’s departure from Microsoft, inter alia:


Bob Muglia moved on from Microsoft this week, and I for one was not a little surprised. You see, Bob is one of the few Microsoftees that sits (sat) across the two worlds of Microsoft. One is the old world, of Windows and Office and the predominant position in the technology community. The other is where the company sits today. Bob was comfortable in both places, in a way that no-one has been since, well, Bill Gates roamed the halls.

That’s not to say that Bob is a direct peer of Bill, but rather that Bill was able to sit across old and new through sheer force of saying it was so and therefore making the distinction irrelevant. Bob had a more parochial role, but his understanding of the underlying dynamics, what the strategy was and would be, was comprehensive in its ecumenical flavor. When he and Ray Ozzie played doubles with the media, they fit together in surprising ways.

Such was Bob’s skill that he would turn a softball aimed at Ozzie into a screamer hit back at the unsuspecting questioner. Ask Ray whether Silverlight was going to replace Windows Presentation Foundation and effectively subsume Windows into an Internet OS, and he would say no by saying yes. Then Bob would say yes by saying no. Put the two together and you got one answer. Tuesday that answer changed.

When Ray Ozzie quit, there was a reasonable interpretation that things would continue as planned. When Bob Muglia quit, you could no longer make that assumption. Ray had Bill’s blessing, Bob had a business unit with growing revenues. In effect, he was a consigliere to Ozzie, the guy that could manage the often challenging relationship between what makes money at Microsoft and what that would have to become in the Cloud era. Put another way, he could walk into a poker game with Sinofsky and put some chips down to call a bluff.

The bluff is that Windows revenue trumps everything, that Windows Phone will get its share, that a Microsoft tablet will stop both Apple and Android from eating the heart out of Office. As we found out on today’s Gillmor Gang, Google is being called on another such bluff. Namely, that yanking H.264 from Chrome is all about the open Web. That WebM will stop Apple from eating the heart out of Android and Chrome and maybe YouTube. Already Google is re-explaining the move.

But not soon enough to stop Danny Sullivan, Robert Scoble, Kevin Marks, John Taschek and me from having some fun on the Gang this week. Danny Sullivan’s filibuster about who is the better friend of the user is worth the price of admission alone. Robert Scoble is getting smarter by the week, and Kevin Marks, well, it was fun to see the ex-Googler voice outrage at Google’s moronic move. Even noted Android fanboy John Taschek recognized that the more pressure Apple puts on the carriers, the happier users get regardless of which phone they buy.

In the good old days of tech media, Microsoft led the charge in impossibly convoluted contortions around self-interested maneuvers. Today Google has taken over that role. And the new Microsoft stands as a pale shadow of itself, fighting tooth and nail to rescue defeat from the jaws of victory. With Steve Ballmer as Donald Trump: Nice job, Bob. You’re fired. Thanks for the material, guys.


The msdev.com partner site added a new member of its Journey to the Cloud ISV Webinar Series on 1/13/2011 (missed when published):

Journey to the Cloud ISV Webinar Series: Deploying Your Cloud Offering to Customers (00:30:00) Reducing the time to value and driving long-term ROI for your customers are critical elements of cloud services deployments. This webinar discusses how to exceed customer expectations and successfully launch your cloud offering.

This Webinar apparently didn’t receive much promotion; the link reported only 2 views so far.

Earlier members of the series are:

Journey to the Cloud ISV Webinar Series: The Impact of the Cloud on Your Sales Strategy (12/17/2010, 00:30:00) Cloud services involve a new type of software sale. This webinar discusses what this looks like and best practice sales strategies that will enable you to engage most effectively with your best opportunity buyers.

Journey to the Cloud ISV Webinar Series: Driving Demand for Your Cloud Offering (12/8/2010, 00:15:00) This webinar discusses both traditional and online demand generation tactics and provides a structured process for organizing, planning, and executing an online marketing strategy.

Journey to the Cloud ISV Webinar Series: The Financial Implications of Cloud Services (11/22/2010, 00:30:00) In the webinar, we look at the key success factors for cloud services financial success and discuss some important financial considerations including pricing, financial planning and control, as well as some financial …

Journey to the Cloud ISV Webinar Series: The Cloud Services Opportunity (11/18/2010, 00:30:00) The cloud offers tremendous and exciting new opportunities. This webinar discusses the cloud services opportunity and why it is such an excellent opportunity for you to grow revenue, expand your reach and gain …


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Mary Jo Foley (@maryjofoley) asked Where are those Windows Azure Appliances? in a 1/17/2011 post to her All About Microsoft blog for ZDNet:

In July 2010, Microsoft took the wraps off its plans for the Windows Azure Appliance, a kind of “private-cloud-in-a-box” available from select Microsoft partners. At that time, company officials said that OEMs including HP, Dell and Fujitsu would have Windows Azure Appliances in production and available to customers by the end of 2010.


We’re half way through January 2011, but the promised Azure Appliances have yet to materialize.

I noticed that MSPMentor.net, in a January 6 interview with Microsoft channel chief Jon Roskill, asked about the appliances. In that interview, Roskill reportedly said Azure Appliances should be available in another nine months or so. So does that mean the Azure Appliances are almost a year behind schedule?

I asked the Azure team for comment and was told by a spokesperson that Microsoft had no update to share at this time.

I also asked HP, Dell and eBay — the customer Microsoft focused on as part of the Windows Azure Appliance launch last year — for updates.

HP didn’t respond.

Dell sent me this statement from Kris Fitzgerald, Dell Services CTO: “At the time of the announcement with Microsoft about Windows Azure, we stated it would be operational in Q1 of 2011. We are on target for those dates for initial customers and full deployment will occur shortly thereafter.” He also noted that Dell’s Q1 quarter is February 1 – April 30.

(For the record, I looked on Dell’s site and found no references to Q1 2011 timing — whether it be calendar or fiscal quarter — in Dell’s Azure Appliance press materials. I did find a quote from Fitzgerald on News.com, saying Dell planned to have an Azure Appliance up and running in its datacenter by the end of January 2011.)

An eBay spokesperson e-mailed the following update: “On the Windows Azure platform appliance side, we continue to work together with Microsoft on the integration and will be able to provide further update soon.”

As described by Microsoft last summer, Windows Azure Appliances will be preconfigured containers with between hundreds and thousands of servers running the Windows Azure platform. These containers will be housed, at first, at Dell’s, HP’s and Fujitsu’s datacenters, with Microsoft providing the Azure infrastructure and services for these containers.

In the longer term, Microsoft expects some large enterprises, like eBay, to house the containers in their own datacenters on site — in other words, to run their own “customer-hosted clouds.” Over time, smaller service providers will also be authorized to make Azure Appliances available to their customers.

I’m not implying that Microsoft is backing away from its Azure Appliance plans. (I’ve seen posts from folks who’ve received training to prepare for the forthcoming appliances.) But an official update from Microsoft’s Azure team would be nice….

Could missing WAPAs be part of the reason for the demise of Bob Muglia’s STB presidency?


<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) posted Revisiting Virtualization & Cloud Stack Security – Back to the Future (Baked In Or Bolted On?) on 1/17/2011:

image [Like a good w[h]ine, this post goes especially well with a couple of other courses such as Hack The Stack Or Go On a Bender With a Vendor?, Incomplete Thought: Why Security Doesn’t Scale…Yet, What’s The Problem With Cloud Security? There’s Too Much Of It…, Incomplete Thought: The Other Side Of Cloud – Where The (Wild) Infrastructure Things Are… and Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where…]

There are generally three schools of thought on how much security should fall to the provider of the virtualization or cloud stack versus the consumer of its services or a set of third parties:

  1. The virtualization/cloud stack provider should provide a rich tapestry of robust security capabilities “baked in” to the platform itself, or
  2. The virtualization/cloud stack provider should provide security-enabling hooks to enable an ecosystem of security vendors to provide the bulk of security (beyond isolation) to be “bolted on,” or
  3. The virtualization/cloud stack provider should maximize the security of the underlying virtualization/cloud platform and focus only on API security, isolation and availability of service, while pushing the bulk of security up into the higher-level programmatic/application layers.

So where are we today?  How much security does the stack itself need to provide? The answer, however polarized, is somewhere in the murkiness dictated by delivery models, deployment models, who owns which part of the real estate, and the use cases of both the virtualization/cloud stack provider and, ultimately, the consumer.

I’ve had a really interesting series of debates with the likes of Simon Crosby (of Xen/Citrix fame) on this topic, and we even had a great debate at RSA with Steve Herrod of VMware.  These two “infrastructure” companies and their solutions typify the diametrically opposed first two approaches to answering this question, while cloud providers that own their respective custom-rolled “stacks” at either end of the IaaS and SaaS spectrums, such as Amazon Web Services and Salesforce, bring up the third.

As with anything, this is about the tenuous balance of “security,” compliance, cost, core competence and maturity of solutions coupled with the sensitivity of the information that requires protection and the risk associated with the lopsided imbalance that occurs in the event of loss.

There’s no single best answer, which explains why we have three very different approaches to what many, unfortunately, view as the same problem.

Today’s “baked in” security capabilities aren’t altogether mature or differentiated. The hooks and APIs that allow for diversity and “defense in depth” provide new and interesting ways to instantiate security, but they also add complexity, driving us back to an integration play.  The third approach is looked upon as proprietary and limiting in terms of visibility and transparency, and it doesn’t solve problems such as application and information security any more than the other two do.

Will security get “better” as we move forward with virtualization and cloud computing?  Certainly.  Perhaps because of it, perhaps in spite of it.

One thing’s for sure, it’s going to be messy, despite what the marketing says.


Ankur Chadda asserted “Cloud computing offers tangible benefits, but security issues have the potential of negating those benefits” in a preface to his Securing the Cloud an Impossible Feat? Think Again post of 1/17/2011:

A virtualized data center must be supported by a virtualized security system, which must be validated by virtualized test systems and test methodologies.

The rapid rise of cloud computing has delivered cost and productivity benefits to thousands of organizations as over 200 cloud providers have emerged in the last decade. But questions of cloud security reveal that the growth of the networking and computing capabilities has outstripped the development of technologies to protect the cloud from cyber attacks.

Greg Day, security analyst at McAfee, told ComputerWeekly.com, "As cloud computing gains popularity, cyber-criminals are likely to target these services to steal information for financial gain."

At the heart of the issue is virtualization, the ability to run multiple server instances inside virtual machines (VMs) on a single physical server. This basic element is both the foundation of cloud computing and the source of new vulnerabilities that are already being exploited.

At an RSA security conference in San Francisco, John Chambers, Chairman and CEO of Cisco Systems, said that while cloud computing posed exciting opportunities, "It is a security nightmare and it can't be handled in traditional ways."

Traditional vs Virtual Security
When implemented and configured correctly, current cyber security solutions do a good job of detecting and blocking a wide range of malicious traffic from outside and even inside the data center. This is true because mature technology underlies security applications like intrusion detection systems (IDS), intrusion prevention systems (IPS) and deep-packet inspection (DPI).

Validation is the essential element in the technology cycle that drives maturity. Current security technology reached maturity through the iterative development of test methodologies that assessed and validated specific implementations. As we shall see, cloud-aware test methodologies are the key to bringing security to cloud computing.

Figure 1: Technology Cycle

Some may assume that existing security solutions are adequate to protect the cloud. After all, the virtual servers reside on physical servers that are behind the firewall. To see why this is not the case, we must look at the relationship between virtualization and security, more specifically, where security is traditionally implemented in a data center.

Figure 2: Traditional end-to-end client server network diagram

Security typically sits at the border of the LAN and WAN, protecting the data center infrastructure from threats. A firewall inspects all incoming and outgoing traffic, passing legitimate traffic through and blocking malicious traffic from the outside. In addition, a firewall can sit at the top-of-rack or end-of-row, monitoring traffic on the LAN to detect inter-server threats and keep them from spreading through the LAN. These could be attacks that somehow got past the perimeter firewall, or threats introduced internally, either inadvertently by uploading an infected file or intentionally through sabotage.

In the typical scenario, it is not feasible to deploy an IPS in front of every server. The best that can be done is to have an IPS per row or per rack and attempt to contain inter-server threats within a small segment of the data center. In addition, nothing sits inside a server, detecting and stopping an intra-server threat, whether it is a hacked hypervisor or a rogue VM attacking and infecting other VMs in the same server.

For example, a compromised VM could send counterfeit transactions, destroying the integrity of back-end databases. Since all the traffic that leaves the physical server appears legitimate, traditional security systems can't detect and stop this breach.

Infra/Inter/Intra Vulnerabilities
Traditional data centers have inter-server and infrastructure vulnerabilities, such as the possibility of performance and security weaknesses internally between servers, externally at the gateway, and in the end-to-end network. Virtualization intensifies these potential threats and adds another level of vulnerability, intra-server, i.e., threats between VMs inside a single physical server.

Infrastructure
Traditional end-to-end testing validates the performance of an entire system. System testing is even more important in the era of virtualization. With dozens of VMs per physical server, the amount of traffic one box can generate increases dramatically, easily filling a 10 Gigabit Ethernet link. The cloud can be composed of hundreds or thousands of physical servers.

Inter-server
Device testing evaluates the performance of a device interacting with other devices. For example, testing a security appliance involves sending legitimate traffic mixed with malicious traffic to the appliance and evaluating its ability to deflect threats while forwarding legitimate traffic at acceptable levels. The increase in utilization due to virtualization means an increase in traffic, placing more demands on the performance of the security appliance.

Intra-server
Now that we have multiple applications running in separate VMs on a single server, we have the possibility of security threats residing completely inside a physical server. Intra-server traffic never sees the network, so traditional methods of implementing and testing security are completely ineffective for intra-server threats. If a rogue application is spawned in a VM and launches a DOS attack on other VMs on the server, a software appliance in the DMZ will never know.

Virtual Security for Virtual Machines
Traditional security approaches are inadequate to protect the cloud because they can't detect and deflect intra-server threats. Virtual machines require virtual firewalls.

Figure 3: Today's virtualized environment network diagram

A virtual IPS performs the same functions as a physical IPS. The difference is where it is located. In the case of a virtual appliance, it resides in a service VM on the physical server along with the application VMs. A redirect policy allows a virtual controller to inspect and control VM-to-VM communications and direct the traffic to the appropriate appliance, whether physical or virtual. This arrangement places a virtual IPS in front of every connection to allow the traffic to and from every VM to be inspected.

A cyber security system that combines physical IPS appliances with virtual IPS appliances has end-to-end visibility of the data center network, from the DMZ at the demarcation point to every VM in every server, and all devices of interest in between.
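To make the redirect idea concrete, here is a minimal, purely illustrative Python sketch of the decision a virtual controller might make for each VM-to-VM flow: traffic between VMs on the same physical host is steered through the co-resident virtual IPS service VM, while traffic that leaves the host can be handed to the physical IPS at the gateway. All of the names (VM, choose_inspector, and so on) are hypothetical and imply no specific vendor API.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    host: str  # physical server the VM runs on

# Hypothetical inspection points, matching the architecture described above
VIRTUAL_IPS = "virtual IPS (service VM on the same host)"
PHYSICAL_IPS = "physical IPS (gateway/DMZ appliance)"

def choose_inspector(src: VM, dst: VM) -> str:
    """Redirect policy: decide which appliance inspects a VM-to-VM flow.

    Intra-server traffic never touches the physical network, so only a
    virtual appliance on the same host can see it; everything else can
    be steered to the physical IPS at the LAN/WAN border.
    """
    if src.host == dst.host:
        return VIRTUAL_IPS   # intra-server flow
    return PHYSICAL_IPS      # inter-server or infrastructure flow

if __name__ == "__main__":
    web = VM("web01", host="server-a")
    db = VM("db01", host="server-a")
    app = VM("app01", host="server-b")
    print(choose_inspector(web, db))   # same host  -> virtual IPS
    print(choose_inspector(web, app))  # cross host -> physical IPS
```

In practice the controller would also carry the inspection profile along when a VM migrates to another host (see the Security paragraph below), but the routing decision itself reduces to this locality test.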

Metrics of Virtual Service: PASS
Here is where cloud-aware test methodologies come into play. Like the traditional data center, the virtualized data center has fundamental and critical network attributes - performance, availability, security, and scalability (PASS). Established test methodologies answer the critical questions related to the PASS attributes. However, virtualization fundamentally changes the environment that these methodologies address.

Performance
Traditional over-provisioning methods of fixed resources - physical servers, storage drives, network switches - no longer apply in the virtualized environment. At the service level, the cloud designer must take this into account by ensuring an adequate number of VM instances are provisioned to make dynamic access possible for all users. Cloud security must deliver the maximum number of new connections per second and firewall bandwidth throughput while blocking threats and malicious traffic.

Availability
The traditional methods of providing local redundancy must also be reconsidered in a virtualized environment. Servers that can support 1,000 or more VMs can become a single point of failure if appropriate approaches to VM load balancing, automated resource scheduling and live migration to other hardware are not built into the design. Cyber security in the cloud requires maintaining optimum application response time at maximum throughput.

Security
Traditionally, cyber security is placed in strategic physical locations, such as at the WAN edge where requests and traffic from the Internet can be filtered and decrypted. However, geographic locations of physical servers have less meaning in a virtualized cloud, as users might be tapping resources from VMs located on one of any number of servers or even data centers. Virtual security must be cloud-aware. In the case of live migration, where a VM moves to another server with VMotion, the security solution must migrate the profile to allow legitimate traffic access to the new physical machine to avoid downtime for the end user.

Scalability
The promise of infinite scale is appealing, but the elasticity of the physical infrastructure has finite limits. Addressing this risk requires a well-thought-out network infrastructure in which aggregation and core interconnects do not become bottlenecks for the elastic demand and scale that the cloud promises, while maintaining the maximum number of secure concurrent connections at maximum throughput.

Virtual Test Systems for Virtual Security
For both traditional and virtual data centers, testing answers questions related to PASS. In particular, testing provides the answer to the question: How secure is any given cloud? Testing a cyber security solution addresses two vital questions at a high level:

  1. Does the solution block all threats while allowing legitimate traffic to pass?
  2. How does the solution affect throughput, performance and scalability?

Answering these questions is the goal, whether testing a legacy data center or a virtualized data center. As with the virtualization of security applications, the innovation in test virtualization lies in extending the test endpoints.

As the world of computing has employed the VM to provide the many benefits of cloud computing, test systems have extended to the virtual level to validate the functionality of applications running in the VMs, and through the iterative development process, to facilitate improvements in performance, availability, security, and scalability, the critical metrics of data center efficiency.

A virtual tester is a software-based test system implemented in a virtual machine. To the network devices under test, and to the test engineer, it looks and behaves exactly as if it were a hardware tester. A virtual tester makes it possible to test cloud security at all the levels it has impact: intra-server, inter-server and infrastructure.

Figure 4: Test setup for virtualized environment

When assessing a cyber security system that employs virtual and physical appliances, testers reside at the endpoints to generate traffic and accumulate results.

  • Intra-server: Virtual testers for each VM in the physical server serve as endpoints.
  • Inter-server: Virtual testers for the VMs in the separate physical servers can serve as endpoints, or a virtual tester can sit on one end and a physical tester on the other.
  • Infrastructure: Virtual testers for each VM under test serve as one set of endpoints, and a physical tester at the gateway serves as the other.

The result is end-to-end testing of any IDS/IPS scenario, whether the endpoints span the whole of the data center or reside in a single physical server.

A recent test conducted by Broadband Testing demonstrated the use of cloud-aware PASS methodologies to validate a cloud-aware cyber security solution.

Conclusion
Cloud computing offers tangible benefits for increasing efficiency and reducing capital and operating costs for enterprises and other organizations, but security issues have the potential of negating those benefits. A virtualized data center must be supported by a virtualized security system, which must be validated by virtualized test systems and test methodologies.

Ankur is a product marketing manager at Spirent Communications.


<Return to section navigation list> 

Cloud Computing Events

See Sudhir Hasbe and Bruno Aziza will deliver a Microsoft DataMarket: Leveraging cloud to deliver public domain and commercial data to millions session on 2/2/2011 to the O’Reilly Strata Conference 2011 in the Mission City B4 room, Santa Clara Convention Center, Santa Clara, CA: in the Marketplace DataMarket and OData section above.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

CloudHarmony.com asked and answered Do SLAs really matter? A 1 year case study of 38 cloud services on 1/17/2011:

In late 2009 we began monitoring the availability of various cloud services. To do so, we partnered or contracted with cloud vendors to let us maintain, monitor and benchmark the services they offered. These include IaaS vendors (i.e. cloud servers, storage, CDNs) such as GoGrid and Rackspace Cloud, and PaaS services such as Microsoft Azure and AppEngine. We use Panopta to provide monitoring, outage confirmation, and availability metric calculation. Panopta provides reliable monitoring metrics using a multi-node outage confirmation process wherein each outage is verified by 4 geographically dispersed monitoring nodes. Additionally, we attempt to manually confirm and document all outages greater than 5 minutes using our vendor contacts or the provider's status page (if available). Outages triggered by scheduled maintenance are removed. DoS ([distributed] denial of service) outages are also removed if the vendor is able to restore service within a short period of time. Any outages triggered by us (e.g. server reboots) are also removed.

The purpose of this post is to compare the availability metrics we have collected over the past year with vendor SLAs to determine if in fact there is any correlation between the two.

SLA Credit Policies

In researching various vendor SLA policies for this post, we discovered a few general themes with regard to SLA credit policies that we'd like to mention here. These include the following:

  • Pro-rated Credit (Pro-rated): Credit is based on a simple pro-ration of the amount of downtime that exceeded the SLA guarantee. Credit is issued based on that calculated exceedance and a credit multiple ranging from 1X (Linode) to 100X (GoGrid) (e.g. with GoGrid a 1 hour outage gets a 100 hour service credit). Credit is capped at 100% of service fees (i.e. you can't get more in credit than you paid for the service). Generally SLA credits are just that, service credits, and are not redeemable for a refund.
  • Threshold Credit (Threshold): Threshold-based SLAs may provide a high guaranteed availability, but credits are not valid until the outage exceeds a given threshold time (i.e. the vendor has a certain amount of time to fix the problem before you are entitled to a service credit). For example, SoftLayer provides a network 100% SLA, but only issues SLA credit for continuous network outages exceeding 30 minutes
  • Percentage Credit (Percentage): This SLA credit policy discounts your next invoice X% based on the amount of downtime and the stated SLA. For example, EC2 provides a 10% monthly invoice credit when annual uptime falls below 99.5%

The fairest and simplest of these policies seems to be the pro-rated method, while the threshold method gives the provider the greatest protection and flexibility (based on our data, most outages tend to be shorter than the thresholds used by the vendors). In the table below, we attempt to identify which of these SLA credit policies is used by each vendor. Vendors that apply a threshold policy are highlighted in red.
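As a rough illustration of how differently these policies pay out, the following Python sketch applies the three credit models to the same hypothetical outage, using the terms quoted above (GoGrid's 100x pro-rated credit, SoftLayer's 30-minute continuous-outage threshold, EC2's 10% invoice credit below 99.5% annual uptime). The function names, the 720-hour month, and the cap logic are assumptions for illustration, not any vendor's actual billing code.

```python
HOURS_PER_MONTH = 30 * 24  # simplifying assumption: a 720-hour billing month

def prorated_credit(outage_hours: float, multiple: float, monthly_fee: float) -> float:
    """Pro-rated: credit = downtime x multiple, capped at 100% of fees (e.g. GoGrid 100x)."""
    hourly_fee = monthly_fee / HOURS_PER_MONTH
    return min(outage_hours * multiple * hourly_fee, monthly_fee)

def threshold_credit(outage_minutes: float, threshold_minutes: float,
                     pct_per_incident: float, monthly_fee: float) -> float:
    """Threshold: no credit unless a continuous outage exceeds the threshold
    (e.g. SoftLayer's 30-minute continuous network outage rule)."""
    if outage_minutes <= threshold_minutes:
        return 0.0
    return min(pct_per_incident * monthly_fee, monthly_fee)

def percentage_credit(annual_uptime_pct: float, sla_pct: float,
                      credit_pct: float, monthly_fee: float) -> float:
    """Percentage: flat invoice credit once uptime falls below the SLA
    (e.g. EC2's 10% credit when annual uptime falls below 99.5%)."""
    return credit_pct * monthly_fee if annual_uptime_pct < sla_pct else 0.0

if __name__ == "__main__":
    fee = 100.0  # hypothetical $100/month service
    # 1 hour outage x 100 multiple = 100 credit-hours, about $13.89 on a 720-hour month
    print(round(prorated_credit(outage_hours=1, multiple=100, monthly_fee=fee), 2))
    # 25-minute outage is under a 30-minute threshold, so no credit at all
    print(threshold_credit(outage_minutes=25, threshold_minutes=30,
                           pct_per_incident=0.05, monthly_fee=fee))
    # 99.4% annual uptime is below a 99.5% SLA, so a flat 10% ($10) credit
    print(percentage_credit(annual_uptime_pct=99.4, sla_pct=99.5,
                            credit_pct=0.10, monthly_fee=fee))
```

The sketch makes the point of the paragraph above concrete: the same short outage can yield a meaningful pro-rated credit, nothing at all under a threshold policy, or a small fixed percentage under a percentage policy.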

SLAs versus Measured Availability

The SLA data provided below is based on current documentation provided on each vendor's website. The Actual column is based on 1 year of monitoring (a few of the services listed have been monitored for less than 1 year), using servers we maintain with each of these vendors. We have included 38 IaaS providers in the table. We currently monitor and maintain availability data on 90 different cloud services. The Actual column is highlighted green if it is equal to or exceeds the SLA.

Provider | Data Center | Total # Outages / Mins Down | SLA Credit Policy | SLA | Actual
AWS EC2 | US East | 0/0 | Percentage: 10% invoice credit anytime annual uptime falls below 99.5% | 99.5% | 100%
AWS EC2 | US West | 0/0 | Percentage: 10% invoice credit anytime annual uptime falls below 99.5% | 99.5% | 100%
GoGrid | US West | 0/0 | Pro-rated: 100x credit for any downtime | 100% | 100%
Linode VPS | London | 0/0 | Pro-rated: 1x credit for downtime exceeding 0.1% | 99.9% | 100%
OpSource Cloud | VA, US | 0/0 | Percentage: 5% invoice credit for 60 minutes downtime, 10% for up to 120 minutes, and so on | 100% | 100%
Storm on Demand | MI, US | 0/0 | Pro-rated: 10x credit for any downtime | 100% | 100%
VoxCLOUD | EU | 0/0 | Percentage: 5% invoice credit per 0.1% downtime up to 100% | 100% | 100%
GoGrid | US East | 1/2.3 | Pro-rated: 100x credit for any downtime | 100% | 99.999%
Joyent Smart Machines | Andover, MA | 1/3 | Percentage: 5% of the monthly fee for each 30 minutes of downtime | 100% | 99.999%
VoxCLOUD | Singapore | 1/5.5 | Percentage: 5% invoice credit per 0.1% downtime up to 100% | 100% | 99.999%
Speedyrails VPS Peer1 | Quebec | 1/2.2 | Percentage: 3% of monthly fees for every 0.1% of downtime | 99.9% | 99.999%
Rackspace Cloud | Dallas, TX | 1/8.7 | Threshold/Percentage: 5% of fees for each 30 minutes of network downtime (1 hour for hardware) up to 100%; host hardware failures guaranteed to be fixed within 1 hour of problem identification | 100% [1] | 99.998%
SoftLayer CloudLayer | Dallas, TX | 4/13.9 | Threshold/Percentage: 5% monthly invoice credit for each continuous network outage over 30 minutes; 20% monthly invoice credit for failed hardware not replaced within 2 hours; max 100% credit | 100% [1] | 99.997%
Hosting.com | Colorado | 1/1.4 | Percentage: 1/30th monthly invoice credit for every 30 minutes of network downtime; 1/30th monthly invoice credit for every 30 minutes of hardware downtime after a 1 hour buffer for hardware repair | 100% [1] | 99.997%
AWS EC2 | APAC | 5/14.8 | Percentage: 10% invoice credit anytime annual uptime falls below 99.5% | 99.5% | 99.996%
Linode | Atlanta | 10/26.9 | Pro-rated: 1x credit for downtime exceeding 0.1% | 99.9% | 99.995%
Joyent Smart Machines | Emeryville, CA | 4/15.2 | Percentage: 5% of the monthly fee for each 30 minutes of downtime | 100% | 99.994%
Terremark vCloud | FL, US | 7/37.9 | Unique: $1 for every 15 minute downtime period, up to a maximum equal to 50% of the usage fees | 100% | 99.993%
AWS EC2 | EU West | 3/36 | Percentage: 10% invoice credit anytime annual uptime falls below 99.5% | 99.5% | 99.993%
Speedyrails VPS Canix | Quebec | 9/38.7 | Percentage: 3% of monthly fees for every 0.1% of downtime | 99.9% | 99.992%
Linode | Fremont, CA | 13/71.9 [2] | Pro-rated: 1x credit for downtime exceeding 0.1% | 99.9% | 99.986%
Zerigo | CO, US | 9/66.8 | Pro-rated: 4x the total (starting from 100%, not 99.99%) non-compliant time | 99.99% | 99.985%
SoftLayer CloudLayer | DC, US | 31/86.7 | Threshold/Percentage: 5% monthly invoice credit for each continuous network outage over 30 minutes; 20% monthly invoice credit for failed hardware not replaced within 2 hours; max 100% credit | 100% [1] | 99.984%
SoftLayer CloudLayer | WA, US | 13/106.8 | Threshold/Percentage: 5% monthly invoice credit for each continuous network outage over 30 minutes; 20% monthly invoice credit for failed hardware not replaced within 2 hours; max 100% credit | 100% [1] | 99.980%
Linode | NJ, US | 14/145.7 | Pro-rated: 1x credit for downtime exceeding 0.1% | 99.9% | 99.972%
VoxCLOUD | NY, US | 12/146.3 [3] | Percentage: 5% invoice credit per 0.1% downtime up to 100% | 100% | 99.972%
CloudSigma | Switzerland | 22/59.9 | Threshold/Percentage: 50x credit for any downtime (network or hardware) over 15 minutes | 100% | 99.972%
Hosting.com | KY, US | 4/38.7 [4] | Percentage: 1/30th monthly invoice credit for every 30 minutes of network downtime; 1/30th monthly invoice credit for every 30 minutes of hardware downtime after a 1 hour buffer for hardware repair | 100% [1] | 99.955%
ThePlanet Cloud Servers | TX, US | 34/144.3 | Threshold/Percentage: 5% monthly invoice credit for the first 5 minute continuous outage (hardware or network); then 5% additional credit for each additional 30 minute continuous outage | 100% | 99.955%
Gandi VPS | France | 4/147.7 | Pro-rated: 1 day credit for every outage over 7 minutes within a single day | 99.95% | 99.955%
Linode | Dallas | 21/258.2 | Pro-rated: 1x credit for downtime exceeding 0.1% | 99.9% | 99.951%
NewServers | FL, US | 39/288.7 | Pro-rated: 24x credit for every 1 hour of downtime exceeding 0.001% | 99.999% | 99.945%
VPS.NET | UK | 8/250.3 | Percentage: 10% monthly invoice credit for each hour of downtime | 100% [5] | 99.921%
VPS.NET | US Central | 12/342.9 | Percentage: 10% monthly invoice credit for each hour of downtime | 100% [5] | 99.892%
Flexiant | UK | 83/820.3 [6] | Percentage: 5% monthly invoice credit for each 30 minutes of downtime | 100% | 99.844%
VPS.NET | US West | 32/576.5 | Percentage: 10% monthly invoice credit for each hour of downtime | 100% [5] | 99.819%
ReliaCloud | MN, US | 23/1941.5 [7] | Pro-rated: 30x hourly credit for each hour of downtime | 100% | 99.626%
VPS.NET | US East | 6/1224.1 [8] | Percentage: 10% monthly invoice credit for each hour of downtime | 100% [5] | 99.616%

[1] Applies to network connectivity only, not hardware outages.

[2] Linode does not own or operate this data center (or any of its data centers, to our knowledge). This particular data center in Fremont, CA is owned and operated by Hurricane Electric. About 20 minutes of the outages triggered for this location were due to data center-wide power outages completely outside of Linode's control.

[3] A majority of this downtime (114 minutes) was due to a SAN failure on 10/15/2010.

[4] A majority of this downtime (34.5 minutes) was due to an internal network failure on 1/5/2011. We've been told this problem has since been resolved.

[5] Applies only to clients who have signed up for the VPS.net "Managed Support" package ($99/mo). It appears that VPS.net does not provide any SLA guarantees to other customers.

[6] Approximately 560 minutes of these outages occurred due to failure of their SAN.

[7] A majority of these outages (1,811 minutes) occurred between January and February 2010, immediately following ReliaCloud's public launch (post beta). A majority of the downtime seems to have occurred due to SAN failures.

[8] The explanation provided for approximately 1,200 minutes of these outages (2 separate outages) was "We had a problem on the cloud. Now your VPS is up and running."

Is there a correlation between SLA and actual availability?

The short answer based on the data above is absolutely not. Here is how we arrived at this conclusion:

Total # of services analyzed: 38
Services that met or exceeded SLA: 15/38 [39%]
Services that did not meet SLA: 23/38 [61%]
Vendors with 100% SLAs: 23/38 [61%]
Vendors with 100% SLAs achieving their SLA: 4/23 [17%]
Mean availability of vendors with 100% SLAs: 99.929% [6.22 hrs/yr]
Median availability of vendors with 100% SLAs: 99.982% [1.58 hrs/yr]

It is very interesting to observe that the bottom 6 vendors all provide 100% SLAs, while 3 of the top 7 offer the lowest SLAs of the group (EC2 99.5% and Linode 99.9%). SLAs were achieved by only a minority (39%) of the vendors. This is particularly true of vendors with 100% SLAs, where only 4 of 23 (17%) actually achieved 100% availability.
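For readers who want to sanity-check the Actual column, here is a small Python sketch of the arithmetic involved: availability is simply the monitored period minus downtime, divided by the period. The figures reuse numbers from the table above (GoGrid US East at 2.3 minutes down, ReliaCloud at 1,941.5 minutes down); the exact monitoring windows vary per service, so treat this as an approximation rather than CloudHarmony's own calculation.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def availability(minutes_down: float, minutes_monitored: float = MINUTES_PER_YEAR) -> float:
    """Return availability as a percentage over the monitored period."""
    return 100.0 * (minutes_monitored - minutes_down) / minutes_monitored

def meets_sla(minutes_down: float, sla_pct: float) -> bool:
    """Did measured availability meet or exceed the stated SLA?"""
    return availability(minutes_down) >= sla_pct

# Figures taken from the table above, assuming roughly a one-year window
print(round(availability(2.3), 4))      # GoGrid US East -> ~99.9996%
print(round(availability(1941.5), 4))   # ReliaCloud     -> ~99.63% (table: 99.626%, windows differ)
print(meets_sla(2.3, 99.9))             # True: a few minutes down still clears a 99.9% SLA
print(meets_sla(2.3, 100.0))            # False: any downtime at all misses a 100% SLA
print(meets_sla(1941.5, 99.9))          # False: ~32 hours of downtime misses 99.9%
```

The last two lines illustrate the post's central point: a 100% SLA is broken by the first minute of downtime, so the marketing number says little about how much downtime you should actually expect.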

Vendors with generous SLA credit policies

In most cases SLA credit policies provide extremely minimal financial recourse, even before considering all of the hoops you'll have to jump through to get them. Not one of the SLAs we reviewed allowed for more than 100% of service fees to be credited. There are a few vendors that stood out by providing relatively generous SLA credit policies:

  • GoGrid: provides a 100x credit policy combined with 100% SLA for any hardware and network outages and no minimum thresholds (e.g. 1 hour outage = 100 hour credit). This is by far the most generous of the 38 IaaS vendors we evaluated. GoGrid's service is also one of the most reliable IaaS services we currently monitor (100% US West and 99.999% US East)
  • Joyent: provides a 5% invoice SLA credit for each 30 minutes of monthly (non-continuous) downtime (equates to about 72x pro-rated credit) combined with 100% SLA and no minimum outage thresholds
  • VoxCloud: provides a 5% invoice credit per 0.1% of monthly (non-continuous) downtime (about every 45 minutes - equates to about 48x pro-rated credit) combined with 100% SLA and no minimum outage thresholds

Some Extra Cool Stuff: Cloud Availability Web Services and RSS Feed

We've recently released web services and an RSS feed to make our availability metrics and monitoring data more accessible. Up to this point, this data was only available on the Cloud Status Tab of our website. We currently offer 30 different REST and SOAP web services for accessing cloud benchmarking and monitoring data, and vendor information.

Cloud Outages RSS Feed

This feed provides information about potential outages we are currently observing with any of the 90 cloud services we monitor. Click here to view and subscribe to this feed.

getAvailability Web Service
This post includes a small snapshot of the data we maintain on cloud availability. We have released a new web service that allows users to calculate availability and retrieve outage details (including supporting comments) for any of the 90 cloud services we currently monitor. Monitoring for many of these services began between October 2009 and January 2010, but we are also continually adding new services to the list. This web service allows users to calculate availability and retrieve outage information for any time frame, service type, vendor, etc. To get you started, we have provided a few example RESTful request URLs. These example requests return JSON formatted data. To request XML formatted data append &ws-format=xml to any of these URLs. Full API documentation for this web service is provided here. A SOAP WSDL is also provided here. You may invoke this web service for free up to 5 times daily. To purchase a web service token allowing additional web service invocations click here.
Retrieve availability for all IaaS vendors for the past year (first 10 of 46 results)
http://cloudharmony.com/ws/getAvailability?serviceType=server&interval=1y
Retrieve availability for all IaaS vendors for the past year (results 11-20 of 46)
http://cloudharmony.com/ws/getAvailability?serviceType=server&interval=1y&ws-offset=10
Retrieve availability for all CDNs for 2010 (first 10 of 13 results)
http://cloudharmony.com/ws/getAvailability?serviceType=cdn&start=1/1/2010&stop=12/31/2010
Retrieve availability for all CDNs for 2010 (results 11-13 of 13)
http://cloudharmony.com/ws/getAvailability?serviceType=cdn&start=1/1/2010&stop=12/31/2010&ws-offset=10
Retrieve availability for all AWS services (EC2, S3, CloudFront) for the past 6 months
http://cloudharmony.com/ws/getAvailability?cloudId=AWS&interval=6m
Retrieve availability for GoGrid Cloud Servers for the past 2 weeks
http://cloudharmony.com/ws/getAvailability?serviceId=GoGrid:Servers&interval=2w
Retrieve availability for VPS.net's US East data center since 1/1/2010 - include full outage documentation
http://cloudharmony.com/ws/getAvailability?serviceId=VPS.NET:Servers&dataCenter=GA,+US&start=1/1/2010&includeOutages=1
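If you prefer to pull these numbers programmatically rather than clicking the URLs, a minimal Python sketch using the query parameters shown above (serviceType, interval, start, stop, ws-format, and so on) might look like the following. The request format is taken directly from the example URLs; the shape of the JSON response (field names, nesting) is an assumption, the service may have changed or gone away since this post, and the free tier is limited to 5 calls per day as noted, so inspect the raw payload before relying on specific keys.

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://cloudharmony.com/ws/getAvailability"

def get_availability(**params):
    """Call the getAvailability web service and return the parsed JSON response.

    Example parameters, from the documented request URLs above:
      serviceType='server', interval='1y', serviceId='GoGrid:Servers',
      start='1/1/2010', stop='12/31/2010'
    Hyphenated parameters must be passed via a dict, e.g.
      get_availability(**{"serviceType": "server", "ws-offset": 10})
    """
    params.setdefault("ws-format", "json")  # use "xml" here for XML output instead
    url = BASE_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # IaaS availability for the past year (first page of results)
    data = get_availability(serviceType="server", interval="1y")
    # The response structure isn't documented in this post, so dump it for inspection
    print(json.dumps(data, indent=2)[:2000])
```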
Summary

Don't let SLAs lull you into a false sense of security. SLAs are most likely influenced more by marketing and legal wrangling than by technical merit or precedent. They should not be relied upon as a factor in estimating the stability and reliability of a cloud service, or as a meaningful form of financial recourse in the event of an outage. Most likely, any service credits provided will be a drop in the bucket relative to the reduced customer confidence and lost revenue the outage will cause your business.

The only reasonable way to determine the actual reliability of a vendor is to use their service or obtain feedback from existing clients or services such as ours. For example, AWS EC2 maintains the lowest SLA of any IaaS vendor we know of, and yet they provide some of the best actual availability (100% for 2 regions, 99.996% and 99.993%). Beware of the fine print. Many cloud vendors utilize minimum continuous outage thresholds such as 30 minutes or 2 hours (e.g. SoftLayer) before they will issue any service credit regardless of whether or not they have met their SLA. In short, we are of the opinion that SLAs really don't matter much at all.


Vern Burke continues the Gartner IaaS/WebSiteHosting Magic Quadrant controversy with Gartner: clueless in the cloud of 1/12/2011 (missed when published):

Vern Burke, SwiftWater Telecom
Biddeford, ME

I’ve just been reading about the uproar over the Gartner “Magic Quadrant” for cloud computing infrastructure. I think they need some remedial education before they pass themselves off as cloud computing pundits.

Defining “cloud computing” precisely is something that people have been squabbling over for quite some time now. This is because most of the people squabbling can’t separate the core characteristics that truly define cloud computing from all the little details that define its many flavors.

Even though people tend to disagree on what cloud computing really is, it’s pretty clear what it is NOT. It isn’t “classic” data center services. This is what had me shaking my head over Gartner declaring a weakness for Amazon because it doesn’t offer co-location or dedicated private physical servers.

Having started as a classic data center provider, SwiftWater Telecom, my own operation, provides both classic data center services such as co-location AND cloud computing services; the combination gives us more flexibility to meet customers’ needs. On the other hand, the classic side of the coin isn’t a weakness or a strength of the cloud computing side. It just means I have a wider range of tools to satisfy more customer needs.

After previously having gone around with Lydia Leong of Gartner about a harebrained suggestion to chop public clouds into private ones (“Cloud computing: another ridiculous proposal.”), I can only conclude that they have only enough handle on cloud computing to be dangerous.

Trying to mix a requirement for traditional data center offerings into the equation when the subject is supposed to be “cloud computing infrastructure” is the most clueless thing I’ve seen in quite a while.


<Return to section navigation list> 
