Monday, March 28, 2011

Windows Azure and Cloud Computing Posts for 3/28/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

Avkash Chauhan explained Reading and Saving table storage entities without knowing the schema or updating TableStorageEntity schema at runtime in a 3/28/2011 post:

When you have Windows Azure Table Storage with a fixed schema, you declare a class that inherits from "TableStorageEntity" and defines the fields you will use in your table. Your table schema could also be dynamic, in which case at runtime you will be handling a collection of keys and values that map to table storage fields. You can use the storage client sample classes to retrieve table storage entities, but the question remains: how do you update the table schema at runtime when objects are constructed dynamically, or how do you enumerate table properties so that you can construct the proper type of object dynamically at runtime?

After a little research on the net, I found that the ReadingEntity and WritingEntity events can be used to solve this. Here are the details:

You can define a class that has PartitionKey, RowKey and a dictionary (maps string to object) and then use ReadingEntity event on the DataServiceContext to achieve this.

    [DataServiceKey("PartitionKey", "RowKey")]
    public class GenericEntity
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }

        Dictionary<string, object> properties = new Dictionary<string, object>();

        internal object this[string key]
        {
            get
            {
                return this.properties[key];
            }
            set
            {
                this.properties[key] = value;
            }
        }

        public override string ToString()
        {
            // Append each dynamic property as "name = value"
            var sb = new System.Text.StringBuilder();
            foreach (var property in this.properties)
            {
                sb.AppendFormat("{0} = {1}; ", property.Key, property.Value);
            }
            return sb.ToString();
        }
    }

Once you have the entity defined, you can do your deserialization in the ReadingEntity method.

        void TestGenericTable()   
        {   
            var ctx = CustomerDataContext.GetDataServiceContext();   
            ctx.IgnoreMissingProperties = true;   
            ctx.ReadingEntity += new EventHandler<ReadingWritingEntityEventArgs>(OnReadingEntity);   
            var customers = from o in ctx.CreateQuery<GenericTable>(CustomerDataContext.CustomersTableName) select o;   
            Console.WriteLine("Rows from '{0}'", CustomerDataContext.CustomersTableName);   
            foreach (GenericEntity entity in customers)   
            {   
                Console.WriteLine(entity.ToString());   
            }   
        }  

The ReadingEntity implementation is as follows:

       // Credit goes to Pablo from ADO.NET Data Service team 
        public void OnReadingEntity(object sender, ReadingWritingEntityEventArgs args)   
        {   
            // TODO: Make these statics   
            XNamespace AtomNamespace = "http://www.w3.org/2005/Atom";   
            XNamespace AstoriaDataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices";   
            XNamespace AstoriaMetadataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";   
            GenericEntity entity = args.Entity as GenericEntity;   
            if (entity == null)   
            {   
                return;   
            }   
            // read each property, type and value in the payload   
            var properties = args.Entity.GetType().GetProperties();   
            var q = from p in args.Data.Element(AtomNamespace + "content")   
                                    .Element(AstoriaMetadataNamespace + "properties")   
                                    .Elements()   
                    where properties.All(pp => pp.Name != p.Name.LocalName)   
                    select new   
                    {   
                        Name = p.Name.LocalName,   
                        IsNull = string.Equals("true", p.Attribute(AstoriaMetadataNamespace + "null") == null ? null : p.Attribute(AstoriaMetadataNamespace + "null").Value, StringComparison.OrdinalIgnoreCase),   
                        TypeName = p.Attribute(AstoriaMetadataNamespace + "type") == null ? null : p.Attribute(AstoriaMetadataNamespace + "type").Value,   
                        p.Value   
                    };   
            foreach (var dp in q)   
            {   
                entity[dp.Name] = GetTypedEdmValue(dp.TypeName, dp.Value, dp.IsNull);   
            }   
        }   
      private static object GetTypedEdmValue(string type, string value, bool isnull)   
        {   
            if (isnull) return null;   
            if (string.IsNullOrEmpty(type)) return value;   
            switch (type)   
            {   
                case "Edm.String": return value;   
                case "Edm.Byte": return Convert.ChangeType(value, typeof(byte));   
                case "Edm.SByte": return Convert.ChangeType(value, typeof(sbyte));   
                case "Edm.Int16": return Convert.ChangeType(value, typeof(short));   
                case "Edm.Int32": return Convert.ChangeType(value, typeof(int));   
                case "Edm.Int64": return Convert.ChangeType(value, typeof(long));   
                case "Edm.Double": return Convert.ChangeType(value, typeof(double));   
                case "Edm.Single": return Convert.ChangeType(value, typeof(float));   
                case "Edm.Boolean": return Convert.ChangeType(value, typeof(bool));   
                case "Edm.Decimal": return Convert.ChangeType(value, typeof(decimal));   
                case "Edm.DateTime": return XmlConvert.ToDateTime(value, XmlDateTimeSerializationMode.RoundtripKind);   
                case "Edm.Binary": return Convert.FromBase64String(value);   
                case "Edm.Guid": return new Guid(value);   
                default: throw new NotSupportedException("Not supported type " + type);   
            }   
        }   

Furthermore, if you decide to save the dynamic data, you can use the WritingEntity event to store it: add an XML element for each property to the content element, which you can retrieve from the ReadingWritingEntityEventArgs. A sketch of such a handler follows.
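The original post does not include the write-side handler, so the following is only a sketch of what it might look like. It assumes you wire it up with ctx.WritingEntity += OnWritingEntity and that GenericEntity exposes its dictionary through a hypothetical Properties accessor; non-string values would also need an m:type attribute, which is omitted here.

    public void OnWritingEntity(object sender, ReadingWritingEntityEventArgs args)
    {
        XNamespace AtomNamespace = "http://www.w3.org/2005/Atom";
        XNamespace AstoriaDataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices";
        XNamespace AstoriaMetadataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

        GenericEntity entity = args.Entity as GenericEntity;
        if (entity == null)
        {
            return;
        }

        // Find (or create) the m:properties element under atom:content.
        XElement content = args.Data.Element(AtomNamespace + "content");
        XElement properties = content.Element(AstoriaMetadataNamespace + "properties");
        if (properties == null)
        {
            properties = new XElement(AstoriaMetadataNamespace + "properties");
            content.Add(properties);
        }

        // Write one d:PropertyName element per dynamic property.
        // NOTE: entity.Properties is an assumed accessor over the internal dictionary.
        foreach (KeyValuePair<string, object> pair in entity.Properties)
        {
            properties.Add(new XElement(AstoriaDataNamespace + pair.Key, pair.Value));
        }
    }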

References:

http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/481afa1b-03a9-42d9-ae79-9d5dc33b9297/


<Return to section navigation list> 

SQL Azure Database and Reporting

Hitney asked Storing Data in Azure: SQL, Tables, or Blobs? and answered “SQL Azure” for his data on 3/28/2011:

While building the back end to host our “Rock, Paper, Scissors in the cloud” game, we faced the question of where and how to store the log files for the games that are played. In my last post, I explained a bit about the idea; in the game, log files are essential for tuning your bot to play effectively. Just to give a quick example of what the top of a log file might look like:

[Screenshot: the top of a sample match log file]

In this match, I (bhitney) was playing a house team (HouseTeam4). Each match is made up of potentially thousands of games, with one game per line. From the game’s perspective, we only care about the outcome of the entire match, not the individual games within the match – but we need to store the log for the user.

There’s no right or wrong answer for storing data – but like everything else, understanding the pros and cons is the key. 

Azure Tables

We immediately ruled out Azure Tables, simply because the entity size is too big.   But what if we stored each game (each line of the log) in an Azure Table?    After all, Azure Tables shine at large, unstructured data.   This would be ideal because we could ask specific questions of the data – such as, “show me all games where…”.  Additionally, size is really not a problem we’d face – tables can scale to TBs. 

But storing individual games isn’t a realistic option. The number of matches played in a 100-player round is 4,950. Each match has around 2,000 games, so that means we’d be looking at 9,900,000 rows per round. At a few hundred milliseconds per insert, it would take almost a month to insert that kind of info (9,900,000 inserts at roughly 250 ms each is about 29 days). Even if we could get latency down to a blazing 10 ms, it would still take over a day to insert that amount of data. Cost-wise, it wouldn’t be too bad: about $10 per round for the transaction costs.

Blob Storage

Blob storage is a good choice as a file repository.  Latency-wise, we’d still be looking at 15 minutes per round.  We almost went this route, but since we’re using SQL Azure anyway for players/bots, it seemed excessive to insert metadata into SQL Azure and then the log files into Blob Storage.  If we were playing with tens of thousands of people, that kind of scalability would be really important.   But what about Azure Drives?   We ruled drives out because we wanted the flexibility of multiple concurrent writers. 

SQL Azure

Storing binary data in a database (even if that binary data is a text file) typically falls under the “guilty until proven innocent” rule. Meaning: assume it’s a bad idea. Still, this is the option we decided to pursue. By using gzip compression on the text, the resulting binary was quite small and didn’t add significant overhead to the original query used to insert the match results in the first place. Additionally, connection pooling makes those base inserts incredibly fast – much, much faster than blob/table storage.
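To make the mechanics concrete, a parameterized insert of the compressed bytes might look roughly like this sketch. The table, column, and variable names here are made up for illustration and are not from the original post; it assumes using System.Data and System.Data.SqlClient.

// Sketch: store the gzipped log in a varbinary(max) column alongside the match results.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "INSERT INTO MatchLog (MatchId, LogFileRaw) VALUES (@MatchId, @LogFileRaw)", conn))
{
    cmd.Parameters.AddWithValue("@MatchId", matchId);
    // -1 size maps the parameter to varbinary(max); compressedLog is the byte[] produced by compression.
    cmd.Parameters.Add("@LogFileRaw", SqlDbType.VarBinary, -1).Value = compressedLog;

    conn.Open();
    cmd.ExecuteNonQuery();
}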

One other side benefit to this approach is that we can serve the GZip stream without decompressing it.  This saves processing power on the web server, and also takes a 100-200k log file to typically less than 10k, saving a great deal of latency and bandwidth costs.

Here’s a simple way to take some text (in our case, the log file) and get a byte array of the compressed data.  This can then be inserted into a varbinary(max) (or deprecated image column) in a SQL database:
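The Compress snippet itself isn’t reproduced in this excerpt; the following is a sketch of a method that produces the format the Decompress method below expects: a 4-byte prefix holding the uncompressed byte count, followed by the GZip data. It assumes System.IO, System.IO.Compression, and System.Text are in scope.

public static byte[] Compress(string text)
{
    byte[] buffer = Encoding.UTF8.GetBytes(text);

    using (MemoryStream ms = new MemoryStream())
    {
        // Leave the MemoryStream open so we can read the compressed bytes back out.
        using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true))
        {
            zip.Write(buffer, 0, buffer.Length);
        }

        byte[] compressed = ms.ToArray();

        // Prepend the original length so Decompress can size its output buffer.
        byte[] gzBuffer = new byte[compressed.Length + 4];
        Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gzBuffer, 0, 4);
        Buffer.BlockCopy(compressed, 0, gzBuffer, 4, compressed.Length);
        return gzBuffer;
    }
}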

And to get that string back:

public static string Decompress(byte[] compressedText)
{
    try
    {
        if (compressedText.Length == 0)
        {
            return string.Empty;
        }

        using (MemoryStream ms = new MemoryStream())
        {
            // The first four bytes hold the uncompressed length; the GZip data follows.
            int msgLength = BitConverter.ToInt32(compressedText, 0);
            ms.Write(compressedText, 4, compressedText.Length - 4);

            byte[] buffer = new byte[msgLength];

            ms.Position = 0;
            using (GZipStream zip = new GZipStream(ms, CompressionMode.Decompress))
            {
                zip.Read(buffer, 0, buffer.Length);
            }

            return Encoding.UTF8.GetString(buffer);
        }
    }
    catch
    {
        return string.Empty;
    }
}

In our case, though, we don’t really need to decompress the log file on the server because we can let the client browser do that! We have an Http Handler for this, and quite simply it looks like:

context.Response.AddHeader("Content-Encoding", "gzip");
context.Response.ContentType = "text/plain";
context.Response.BinaryWrite(data.LogFileRaw); // the byte array
context.Response.End();

Naturally, the downside of this approach is that if a browser doesn’t accept GZip encoding, we don’t handle that gracefully.   Fortunately it’s not 1993 anymore, so that’s not a major concern.
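If you did want to degrade gracefully, the handler could check the request’s Accept-Encoding header. A sketch, reusing the Decompress method and the data.LogFileRaw byte array from above:

// Sketch of a graceful fallback (not in the original handler): only send raw GZip bytes
// when the client advertises gzip support; otherwise decompress server-side.
string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? string.Empty;

if (acceptEncoding.Contains("gzip"))
{
    context.Response.AddHeader("Content-Encoding", "gzip");
    context.Response.ContentType = "text/plain";
    context.Response.BinaryWrite(data.LogFileRaw);
}
else
{
    context.Response.ContentType = "text/plain";
    context.Response.Write(Decompress(data.LogFileRaw));
}
context.Response.End();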


See also Scott Klein described SQL Azure, OData, and Windows Phone 7 in a 3/28/2010 post in the section below:

<Return to section navigation list> 

MarketPlace DataMarket and OData

Scott Klein described SQL Azure, OData, and Windows Phone 7 in a 3/28/2010 post:

One of the things I have really wanted to do lately is get SQL Azure, OData, and Windows Phone 7 working together; in essence, expose SQL Azure data using the OData protocol and consume that data on a Windows Phone 7 device. This blog post explains how to do just that. This example is also in our SQL Azure book in a bit more detail, but with the push for WP7 I thought I'd give a sneak peek here.

You will first need to download and install a couple of things, the first of which is the OData Client Library for Windows Phone 7 Series CTP, a library for consuming OData feeds on Windows Phone 7 devices. This library has many of the same capabilities as the ADO.NET Data Services client for Silverlight. The install simply extracts a few files to the directory of your choosing.

The next item to download is the Windows Phone Developer Tools, which installs the Visual Studio Windows Phone application templates and associated components. These tools provide integrated Visual Studio design and testing for your Windows Phone 7 applications.

Our goal is to enable OData on a SQL Azure database so that we can expose our data and make it available for the Windows Phone 7 application to consume. OData is a REST-based protocol that standardizes the querying and updating of data over the Web. The first step, then, is to enable OData on the SQL Azure database by going into the SQL Azure Labs site and enabling OData. You will be required to log in with your Windows Live account; once in the SQL Azure Labs portal, select the SQL Azure OData Service tab. As the home page states, SQL Azure Labs is in Developer Preview.

The key here is the URI at the bottom of the page in the User Mapping section. I'll blog another time on what the User Mapping is, but for now, highlight and copy the URI to the clipboard. You'll be using it later.

Once OData is enabled on the selected SQL Azure database, you are ready to start building the Windows Phone application. In Visual Studio 2010, you will notice new installed templates for the Windows Phone in the New Project dialog. For this example, select the Windows Phone Application.

Once the project is created, you will need to add a reference to the OData Client Library installed earlier. Browse to the directory to which you extracted the OData Client Library and add the System.Data.Services.Client.dll library.

The next step is to create the proxy classes needed to access a data service from a .NET Framework client application. The proxy classes can be generated with the DataSvcUtil tool, a command-line tool that consumes an OData feed and generates the client data service classes. Use the following image as an example to generate the appropriate data service classes. Notice the /uri: parameter; this is the same URI listed in the first image above, and it is what DataSvcUtil uses to generate the necessary proxy classes.
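Since the post relies on a screenshot for this step, here is a representative command line; the exact switches can vary by client library version, and servername is a placeholder for your SQL Azure OData Service mapping:

datasvcutil /uri:https://odata.sqlazurelabs.com/OData.svc/v0.1/servername/TechBio /out:TechBioModel.cs /version:2.0 /dataservicecollection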

Once the proxy class is generated, add it to the project. Next, add a new class to your project and add the following namespaces, which provide additional functionality needed to query the OData source and work with collections.

using System.Linq;
using System.ComponentModel;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using System.Windows.Data;
using TechBioModel;
using System.Data.Services.Client;
using System.Collections.ObjectModel;

Next, add the following code to the class. The LoadData method first initializes a new TechBio instance of the proxy generated class, passing the URI to the OData service to call out to the service. A LINQ query is used to pull the data you want and the results loaded into the Docs DataServiceCollection.

public class TechBioModel
{
    public TechBioModel()
    {
        LoadData();
    }

    void LoadData()
    {
        TechBio context = new TechBio(new Uri("https://odata.sqlazurelabs.com/OData.svc/v0.1/servername/TechBio"));

        var qry = from u in context.Docs
                    where u.AuthorId == 113
                    select u;

        var dsQry = (DataServiceQuery<Doc>)qry;

        dsQry.BeginExecute(r =>
        {
            try
            {
                var result = dsQry.EndExecute(r);
                if (result != null)
                {
                    Deployment.Current.Dispatcher.BeginInvoke(() =>
                    {
                        Docs.Load(result);
                    });
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message.ToString());
            }
        }, null);

    }

    DataServiceCollection<Doc> _docs = new DataServiceCollection<Doc>();

    public DataServiceCollection<Doc> Docs
    {
        get
        {
            return _docs;
        }
        private set
        {
            _docs = value;
        }
    }
}

I learned from a Shawn Wildermuth blog post that you need to use the Dispatcher because this callback is not guaranteed to execute on the UI thread, so the Dispatcher ensures that it does. Next, add the following code to the App.xaml code-behind. This gets called when the application starts.

private static TechBioModel tbModel = null;
public static TechBioModel TBModel
{
    get
    {
        if (tbModel == null)
            tbModel = new TechBioModel();

        return tbModel;
    }
}

To call the code above, add the following override to the MainPage code-behind (MainPage.xaml.cs) so it runs when the page is navigated to:

protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    if (DataContext == null)
        DataContext = App.TBModel;

}

Lastly, you need to go to the UI of the phone and add a ListBox and then tell the ListBox where to get its data. Here we are binding the ListBox to the Docs DataServiceCollection.

<ListBox Height="611" HorizontalAlignment="Left" Name="listBox1"
        VerticalAlignment="Top" Width="474"
        ItemsSource="{Binding Docs}" >
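The snippet above omits the closing tag and an item template; a minimal sketch of a complete element follows, where Name is an assumed property on the generated Doc class (substitute any column your table actually exposes):

<ListBox Height="611" HorizontalAlignment="Left" Name="listBox1"
        VerticalAlignment="Top" Width="474"
        ItemsSource="{Binding Docs}">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <!-- "Name" is an assumed column on the Doc entity -->
            <TextBlock Text="{Binding Name}" />
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>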

You are now ready to test. Run the project and when the project is deployed to the phone and run, data from the SQL Azure database is queried and displayed on the phone.

In this example you saw a simple demonstration of how to consume an OData feed in a Windows Phone 7 application that gets its data from a SQL Azure database.

Steve Yi recommended Scott’s article in Scott Klein - Using OData Protocol with Windows Phone 7 of 3/28/2011:

SQL Azure database is a great way to extend data to smartphones, tablets, and devices. You can already do this with Windows Phone 7. Scott Klein, co-founder of Azure training and consulting firm Blue Syntax Consulting, has written a basic walkthrough article on his blog about how to get the two systems to talk to one another. These are the types of blog posts I like. It does a good job of walking you through the basics of a technology, with code samples and screenshots.

Click here to read his article

About Scott Klein
Scott Klein is co-founder of Blue Syntax Consulting, a company that specializes in Azure training and consulting. He speaks frequently at SQL Saturday events and user groups, and spoke at the European PASS conference in 2008.


Glenn Gailey (@ggailey777) explained Calling [OData] Service Operations from the Client in a 3/28/2011 post:

There seems to be a bit of confusion around the support for and usage of OData service operations in the WCF Data Services client library. Service operations are exposed in the data service metadata returned by an OData service. (If you have no idea what I am talking about, read the topic Service Operations [WCF Data Services].)

For example, the Netflix OData service exposes several service operations, exposed as FunctionImport elements, which you can see in the service’s EDMX document. Because of this, you would think that the OData tools, such as Add Service Reference, would be able to turn those FunctionImport elements into methods in the generated data service container (which inherits from DataServiceContext). That is not unreasonable to assume, since EntitySet elements are turned into typed DataServiceQuery<T> properties of the data service container class (let’s just call it “context” from now on).

Not getting these nice service operation-based methods on the context is basically the limitation of using service operations in the WCF Data Services client. (This means that you need to use URIs.) Beyond that, you can use the Execute<T> method on the context to call any service operation on your data service, as long as you know the URI. For example, the following client code calls a service operation named GetOrdersByCity, which takes a string parameter of “city” and returns an IQueryable<Order>:

// Define the service operation query parameter.
string city = "London";

// Define the query URI to access the service operation with specific
// query options relative to the service URI.
string queryString = string.Format("GetOrdersByCity?city='{0}'", city)
    + "&$orderby=ShippedDate desc"
    + "&$expand=Order_Details";

// Create the DataServiceContext using the service URI.
NorthwindEntities context = new NorthwindEntities(svcUri2);

try
{
    // Execute the service operation that returns all orders for the specified city.
    var results = context.Execute<Order>(new Uri(queryString, UriKind.Relative));

    // Write out order information.
    foreach (Order o in results)
    {
        Console.WriteLine(string.Format("Order ID: {0}", o.OrderID));

        foreach (Order_Detail item in o.Order_Details)
        {
            Console.WriteLine(String.Format("\tItem: {0}, quantity: {1}",
                item.ProductID, item.Quantity));
        }
    }
}
catch (DataServiceQueryException ex)
{
    QueryOperationResponse response = ex.Response;

    Console.WriteLine(response.Error.Message);
}

Notice that not only did I get a collection of materialized Order objects returned from this execution, but because the operation returns an IQueryable<T>, I was also able to compose against the service operation in the query URI. Pretty cool, right? You just have to know the URI.

Here’s an example of calling a service operation that only returns a single Order entity:

// Define the query URI to access the service operation,
// relative to the service URI.
string queryString = "GetNewestOrder";

// Create the DataServiceContext using the service URI.
NorthwindEntities context = new NorthwindEntities(svcUri2);

try
{
    // Execute a service operation that returns only the newest single order.
    Order order
        = (context.Execute<Order>(new Uri(queryString, UriKind.Relative)))
        .FirstOrDefault();

    // Write out order information.
    Console.WriteLine(string.Format("Order ID: {0}", order.OrderID));
    Console.WriteLine(string.Format("Order date: {0}", order.OrderDate));
}
catch (DataServiceQueryException ex)
{
    QueryOperationResponse response = ex.Response;

    Console.WriteLine(response.Error.Message);
}

Notice that I needed to call the FirstOrDefault() method to get the execution to return my single Order object (otherwise it doesn’t compile), but it works just fine and I get back a nice fully-loaded Order object.

And we can call a service operation that doesn’t even return an entity, it returns an integer value:

// Define the query URI to access the service operation,
// relative to the service URI.
string queryString = "CountOpenOrders";

// Create the DataServiceContext using the service URI.
NorthwindEntities context = new NorthwindEntities(svcUri2);

try
{
    // Execute a service operation that returns the integer
    // count of open orders.
    int numOrders
        = (context.Execute<int>(new Uri(queryString, UriKind.Relative)))
        .FirstOrDefault();

    // Write out the number of open orders.
    Console.WriteLine(string.Format("Open orders as of {0}: {1}",
        DateTime.Today.Date, numOrders));
}
catch (DataServiceQueryException ex)
{
    QueryOperationResponse response = ex.Response;

    Console.WriteLine(response.Error.Message);
}

Again, I had to use the FirstOrDefault method (I couldn't think of a good excuse to return an IEnumerable<int> from my Northwind data).

And what about methods that return void? Maybe not the best design on the service end, but we can still do it with our client:

// Define the query URI to access the service operation,
// relative to the service URI.
string queryString = "ReturnsNoData";

// Create the DataServiceContext using the service URI.
NorthwindEntities context = new NorthwindEntities(svcUri2);

try
{
    // Execute a service operation that returns void.
    string empty
        = (context.Execute<string>(new Uri(queryString, UriKind.Relative)))
        .FirstOrDefault();
}
catch (DataServiceQueryException ex)
{
    QueryOperationResponse response = ex.Response;

    Console.WriteLine(response.Error.Message);
}

Of course, the only proof we have that the void method worked is that no error was returned.

Hopefully, this will convince folks that they can still use the WCF Data Services client library to call service operations, and not resort to that messy WebRequest code to do it. I will also likely add a new service operation topic to the client library documentation, which certainly won’t hurt.


James Downey described Analytics: The Next Wave for Big Data in a 3/28/2011 post:

Speakers at the SDForum analytics conference last Friday on the Stanford campus made clear that the next wave of analytics brings with it exciting new opportunities. Here are the key elements that make this next wave so exciting:

  • New data sources. First wave analytics was limited mostly to structured data found behind the firewall. The next wave of analytics pulls all forms of data from everywhere—data from sensors embedded throughout our physical world, data from the mobile phones that we carry wherever we go, unstructured data pulled from anywhere.
  • Mobile. Mobile devices both generate data and serve as our interface to that data, dramatically increasing its immediacy and value.
  • Immediately actionable. The first wave of analytics created reports that looked back in time, dashboards that were little more than eye candy, and spreadsheets that degenerated into Excel hell. The next wave will be real-time, predictive, and immediately actionable.
  • Big and agile. The first wave of analytics scaled but at the cost of agility. There were big IT projects that tended to bog down. The next wave will utilize virtualization and sophisticated architectures to scale with agility.
  • Core of business. First wave analytics were an afterthought for most enterprises, optimizing processes on the margins. Next wave analytics will be core to the business, enabling innovative new business models that could not exist without analytics.

For details on the event, read my article or follow the twitter feed.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Sebastian W reported on 3/28/2011 a workaround for Unrecognized element ‘transportClientEndpointBehavior’ exceptions in Windows AppFabric:

I was preparing some demos for Extreme 2011 Prague; one of them involves Windows AppFabric, and every time I tried to configure the WCF service I got that horrible message:

Unrecognized element ‘transportClientEndpointBehavior’

[I did a] bit of research and found http://support.microsoft.com/kb/980423; install, reboot, and done, no more errors. See you in Prague.



<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Toddy Mladenov answered What Environment Variables Can You Use in Windows Azure? in this 3/28/2011 post:

Yesterday I found the following forum post on the Windows Azure Forums: How to get environment variable and information in startup task. At the same time I was prototyping some of our new features and needed to achieve almost the same goal as in the forum post, so I decided it would be useful to publish some more details about which environment variables are available by default in Windows Azure, and how you can use them.

Environment Variables in Windows Azure

You may already know that you can define environment variables per role in Windows Azure Service Definition (CSDEF) file using the Environment tag:

<Environment>

     <Variable name="[my_name]" value="[my_value]" />

</Environment>

The CSDEF schema allows you to put the Environment tag in either the startup Task tag or the Runtime tag. It is important to know that environment variables defined for startup tasks are not available at runtime, and vice versa. Hence, if you want an environment variable that can be used in both the startup task and the runtime, you need to define it twice.

Here an example for defining environment variables for startup tasks and for your role’s runtime:

<WorkerRole name="MyWorker">
     <Startup>
          <Task taskType="background" commandLine="my.cmd" executionContext="limited">
               <Environment>
                    <Variable name="MY_ENV" value="my_value" />
               </Environment>
          </Task>
     </Startup>
     <Runtime>
          <Environment>
               <Variable name="MY_SECOND_ENV" value="my_value2" />
          </Environment>
     </Runtime>
</WorkerRole>

Pre-Defined Environment Variables in Windows Azure

Of course the more interesting question is what the pre-defined environment variables in Windows Azure are, and whether you can leverage them. I wrote a simple Web Role that uses System.Environment.GetEnvironmentVariables(), iterates through all of them and prints them in a web page (a minimal sketch of that enumeration appears after the list below). The majority of the variables are standard Windows environment variables, but here are some Windows Azure specific ones (note: these variables are specific to the process in which your role runs):

  • @AccountUsername ** – this is the username you can use for Remote Desktop connection to the role instance
  • @AccountEncryptedPassword ** – this is the encrypted password that you can use for Remote Desktop connection to the role instance. The password is encrypted using the SSL certificate for your Hosted Service
  • @AccountExpiration ** – this is the timestamp when the Remote Desktop account expires
  • @ConnectionString – is the diagnostics connection string for access to Windows Azure Storage. This is the configuration that you defined in the Service Configuration (CSCFG) file for your deployment. You can change this by modifying your CSCFG file.
  • DiagnosticStore – this is the location where diagnostics information is stored locally before transferred to the Windows Azure storage account
  • RdRoleId – this is the truly unique identifier of the role instance. This information is not available in the Management Portal; it is built by concatenating the RoleDeploymentID and the RoleInstanceID.
    Example: eaaa6a386255466dada9dd158c5d4008.WebTest_IN_0
  • RdRoleRoot – this is the same as the RoleRoot variable described below
  • RoleDeploymentID * – is the deployment identifier for your deployment and it is the same one you see in the Windows Azure Management Portal. This is the same for each role and role instance.
    Example: eaaa6a386255466dada9dd158c5d4008
  • RoleInstanceID * – is the unique identifier of the role instance. The instance ID uses the RoleName as a prefix. 
    Example: WebTest_IN_0
  • RoleName * – this is the role name.
    Example: WebTest
  • RoleRoot – this is maybe the most important environment variable in Windows Azure, or at least the one that you need the most. It points to the root where your Windows Azure role code is placed. Very often you will want to access some resource files for your role using %RoleRoot%\AppRoot.
  • __WaRuntimeAgent__ – this is the identifier for the runtime agent that is used by the Fabric controller to monitor your role. Again, it is very unlikely that you will need this one
  • WA_CONTINER_SID – this is the system identifier of the Windows Azure container. It is very unlikely you will need this one

All environment variables marked with asterisk (*) above are available as properties in Windows Azure Management Portal.

You can change the environment variables marked with double asterisks (**) above by clicking on the role in Windows Azure Management Portal and then on Configure Remote Access.

Also, most of the information stored in the environment variables is available through the Windows Azure Runtime APIs. However those environment variables are accessible from both your role’s startup tasks as well as from the runtime.
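The enumeration mentioned above boils down to a few lines; a minimal sketch of such a page in a web role might look like this (the page class name is hypothetical):

// Sketch: dump all environment variables visible to the role process into an ASP.NET page.
using System;
using System.Collections;
using System.Text;

public partial class EnvDump : System.Web.UI.Page   // hypothetical page name
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var sb = new StringBuilder("<pre>");
        foreach (DictionaryEntry entry in Environment.GetEnvironmentVariables())
        {
            sb.AppendFormat("{0} = {1}\n", entry.Key, entry.Value);
        }
        sb.Append("</pre>");
        Response.Write(sb.ToString());
    }
}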

Windows Azure Full IIS Mode and Environment Variables

One thing to remember is that if you use Full IIS Mode in Windows Azure you will NOT be able to access the environment variables mentioned above. The reason is that you should use the standard IIS programming techniques to access your environment. You will know that you are running in Full IIS Mode if you have the Site element in your Service Definition file, as in the sketch below.
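For reference, a role is in Full IIS Mode when its Service Definition (CSDEF) contains a Sites element similar to this sketch (role and endpoint names are placeholders):

<WebRole name="MyWebRole">
  <Sites>
    <Site name="Web">
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
  </Endpoints>
</WebRole>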


The Windows Azure Team announced Aidmatrix and Bing Maps Use Windows Azure to Aid Japan Disaster Recovery Efforts in a 3/28/2011 post:

To help aid the disaster recovery efforts in Japan, two new portals built on the Windows Azure platform have just been released. These portals are both designed to provide relief workers and the worldwide community with real-time access to the latest news, relief needs and product donations following the country's recent devastating disasters.

As the nation's first food bank, Second Harvest Japan (2HJ) collects food that would otherwise go to waste from food manufacturers, farmers, and individuals, and distributes them to people across Japan who are in need, such as children in orphanages, families in shelters, and the homeless.  To provide the 2HJ community of supporters worldwide and relief agencies on the ground access to the latest updates, resources and information about recovery efforts from the recent earthquake and tsunami, Microsoft and The Aidmatrix Foundation have launched the Second Harvest Japan Emergency Response Portal.  From this site, supporters can receive the latest situation reports, review the latest news, make financial donations or see relief supply needs and make product donation offers.

Read more about the portal and the technology behind it here.

Road-Status After the Earthquake Website Shows Real-time Status of Japan's Roads

Bing Maps Technology Specialist Johannes Kebeck worked with Microsoft Japan to deploy the Road-status After the Earthquake website that shows current status of roads in Japan following its recent natural disasters.  Delivered as a Bing Map and built on the Windows Azure platform in just days, this website provides real-time searchable status of roads across Japan and is accessible from multiple platforms and browsers, including most mobile devices.

You can read more about how this application was built in Johannes' blog post, "Dynamic Tile-Layers with Windows Azure and SQL Azure".


Tim Anderson (@timanderson) reported Microsoft backs Telefonica’s BlueVia mobile SDK – but the market is fragmented in a 3/28/2011 post:

Announced at Mobile World Congress last month, BlueVia is Telefonica’s effort to attract developers to its app platform. Telefonica is the largest phone operator in Spain and also owns O2 in the UK, and has various other operations around the world.

In this case though, “Platform” is not just the devices connected to Telefonica networks, but also services exposed to apps via newly published APIs. BlueVia has APIs for sending and receiving SMS messages, delivering mobile ads, and obtaining information about the current user through a User Context API.

Things like sending a text from an app are nothing new, but a difference is that BlueVia will pay the developer a cut from the revenue generated. Along with ads, the idea is that an app can generate a revenue stream, rather than being just a one-off purchase.

The news today is that Microsoft is backing BlueVia with a toolset and marketing to Windows platform developers. There has been an SDK for Microsoft .NET for some time, but today Microsoft and BlueVia have delivered a new SDK for .NET which includes both server and client side support for the BlueVia APIs. On the server, there are templates for Windows Azure and for BlueVia ASP.NET MVC2 and WCF (Windows Communication Foundation) applications. On the client side, there are Silverlight controls such as a DialPad, an Advertising control, and a text to speech control. Microsoft also provides hooks to Windows Live Services in the hope that you will integrate these with your BlueVia applications. [Emphasis added.]

The snag with developing your app with BlueVia APIs is that it will only work for Telefonica customers, thus restricting your market or forcing you to code to different APIs for other operators. “If you want to expose an API in the way that Telefonica is doing, you need to be a Telefonica customer in order to be able to use it,” says Jose Valles, Head of BlueVia at Telefonica.

If you further restrict your app’s market by targeting only Windows Phone, it gets small indeed.

Valles says there is hope for improvement. “We are working with the industry and with WAC in order to standardise this API,” he says, assuring me that the reaction is “very positive”. WAC is the Wholesale Applications Community, a cross-industry forum for tackling fragmentation. Do not count on it though; it strikes me as unlikely that a cross-industry group would accept BlueVia’s APIs as-is.

There is also a glimpse of the challenges facing developers trying to exploit this market in the BlueVia forums. This user observes:

During the submission process we could only submit the app for a single device model while it is actually supported on hundreds of models. So please also explain how to specify all the supported models during the submission process

The answer: BlueVia has defined around 20 groups of compatible devices, and you can only upload your app for one at a time. 20 uploads is better than hundreds, but still demonstrates the effort involved in trying to attain any kind of broad reach through this channel.

BlueVia is in beta, but Valles says this will change “in the next few weeks”. That said, it is already up and running and has 600 developers signed up. “It is already commercial, whoever wants to come in just needs to email and we will send it to him,” he says.

The idea of the operator sharing its ongoing revenue with app developers is a good one, but be prepared to work hard to make it a reality.

Related posts:

  1. Android Market not working for all customers, says Angry Birds CEO
  2. Steve Ballmer at CES: Microsoft pins mobile hopes on Windows 8
  3. Flash 10.1 mobile roadmap confusion, Windows phone support far off


<Return to section navigation list> 

Visual Studio LightSwitch

No significant articles today.

 


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

See Mary Jo Foley asserted There’s more than one way to get Microsoft server app virtualization in a 3/28/2011 post to ZDNet’s All About Microsoft blog in the Windows Azure Platform Appliance (WAPA), Server App-V, Hyper-V and Private/Hybrid Clouds section below.


Avkash Chauhan described the Complete Solution: Adding SSL Certificate with Windows Azure Application for substituting your own domain name for cloudapp.net:

First of all, you will need to get an SSL certificate from a certificate authority (CA) for your domain, i.e. www.yourcompanydomain.com. Be sure not to request an SSL certificate for cloudapp.net, as that is not your domain; your service is merely hosted there. You will have to register your actual domain, i.e. www.yourcompanydomain.com, with a domain registration service of your choice. After that, you will request an SSL certificate for the same domain, i.e. www.yourcompanydomain.com, from a Certificate Authority of your choice.

I have split this process into 4 steps as below:

  1. Create CSR for your domain and getting SSL Certificates from your desired CA
  2. Installing SSL certificates in your development Machine
  3. Uploading SSL certificates on Windows Azure Portal for your Service and including in your HTTPS endpoint
  4. Setting up proper CNAME entry for your domain in DNS register

Step 1: Create CSR for your domain and getting SSL Certificates from your desired CA

You can use IIS (running on Windows Server 2003/2008 or Windows XP) to generate a certificate request for your domain and then send that request to your CA to get the SSL certificate. As far as I know, IIS 7.x running on Windows 7 does not allow you to generate a CSR. To get the SSL certificate for your domain you will need to pass a CSR to your CA, and you can use the IIS server to create that CSR.

  • For IIS server 7.x please use the following details:

http://www.digicert.com/csr-creation-microsoft-iis-7.htm

  • For IIS server 5.x and 6.x please use the following details:

http://www.networksolutions.com/support/csr-for-microsoft-iis-5-x-6-x/

Step 2: Installing SSL certificates in your development Machine

In most cases you will receive at least three certificates from your CA, and possibly more:

  1. Domain Certificate
  2. Root Certificate
  3. Intermediate certificate

You will receive these certificates either as separate PFX files or chained into one PFX certificate file; most of the time I have seen one PFX file that contains all the certificates. You may also need to download a few CER files from the CA. Once you have all the files, install all of these certificates (PFX and CER) on your development machine. With all the necessary certificates installed, you will be able to verify your domain correctly against the proper root and intermediate certificates. You will see your domain certificate and the chained intermediate certificate stored in the machine account’s Personal store, while the root certificate is stored in the Trusted Root Certification Authorities store. This step will also help you select and include all the necessary certificates in your Windows Azure application configuration when you set up the HTTPS endpoint.

Step 3: Uploading SSL certificates on Windows Azure Portal for your Service and including in your HTTPS endpoint

After you have installed these certificates on your development machine, you will need to upload all of them to the Certificates section of your service on the Windows Azure Portal. You also need to include all the certificates in your Service Configuration file, as described in the following blog post (a configuration sketch follows the link below):

http://blogs.msdn.com/b/azuredevsupport/archive/2010/02/24/how-to-install-a-chained-ssl-certificate.aspx
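As a rough sketch (the certificate name is a placeholder and the thumbprint must match the certificate you uploaded), the endpoint and certificate wiring spans both the Service Definition (CSDEF) and Service Configuration (CSCFG) files:

<!-- ServiceDefinition.csdef (inside the WebRole element) -->
<Endpoints>
  <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="MySslCert" />
</Endpoints>
<Certificates>
  <Certificate name="MySslCert" storeLocation="LocalMachine" storeName="My" />
</Certificates>

<!-- ServiceConfiguration.cscfg (inside the Role element) -->
<Certificates>
  <Certificate name="MySslCert" thumbprint="YOUR_CERTIFICATE_THUMBPRINT" thumbprintAlgorithm="sha1" />
</Certificates>

For a chained certificate, the intermediate and root certificates are listed by thumbprint in the CSCFG as well, as the linked post describes.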

Step 4: Setting up proper CNAME entry for your domain in DNS register

Finally, once you have the SSL certificate set up correctly in the Windows Azure Portal, in your HTTPS endpoint, and in your Service Configuration file, you just need to add a CNAME entry in your DNS service to route traffic correctly. To set up the proper CNAME entry, please follow:

http://blog.smarx.com/posts/custom-domain-names-in-windows-azure
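For example, assuming your hosted service is named yourservicename.cloudapp.net, the record would look roughly like this (exact syntax depends on your DNS provider):

www.yourcompanydomain.com    CNAME    yourservicename.cloudapp.net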


Kevin L. Jackson asked Government Cloud Computing on Forbes? in his first monthly post to the “Cloud Musings on Forbes” blog:

I had the same question a couple of weeks ago when Bruce Upbin’s invitation showed up in my inbox. Although Forbes has always been an important source of business information, the Federal government seemed out of place. Government cloud computing, however, is changing this image fast.

Long known for its glacial speed and risk-avoidance culture, the US Federal Government is moving with surprising speed into the brave new world of cloud computing. With budget language, executive directives and the Federal Cloud Computing Strategy, the federal government is re-thinking its information technology infrastructure, planning to shift $20B annually “to the cloud”. This aggressive transformation will be accomplished by virtualizing data centers, consolidating IT operations, and ultimately adopting a cloud-computing business model. Surprisingly to many, the government seems to be adopting this new model much faster than many commercial industries.

Tangible evidence of this transition include:

  • IT Dashboard – an online window into the details of Federal information technology (IT) investments and provides users with the ability to track the progress of investments over time. The IT Dashboard displays data received from federal agencies’ reports to the Office of Management and Budget (OMB), including general information on over 7,000 Federal IT investments and detailed data for over 800 of those investments that agencies classify as “major.” The performance data used to track the 800 major IT investments is based on milestones contained in agency reports to OMB called “Capital Asset Plans”, commonly referred to as “Exhibit 300s.” Federal Agency Chief Information Officers (CIOs) are responsible for evaluating and updating select data on a monthly basis, which is accomplished through interfaces provided on the IT Dashboard website.
  • Data.gov – An online facility designed to increase public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government. As a priority Open Government Initiative for President Obama’s administration, Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government.
  • Recovery.gov – Required by The American Recovery and Reinvestment Act of 2009, this website was created to “foster greater accountability and transparency in the use of funds made available in this Act.”  Recovery.gov went live on February 17, 2009, the day President Obama signed the Act into law.  Its primary mandate is to give taxpayers user-friendly tools to track Recovery funds — how and where they are spent— in the form of charts, graphs, and maps that provide national overviews down to specific zip codes. In addition, the site offers the public an opportunity to report suspected fraud, waste, or abuse related to Recovery funding.

While these cloud-based resources are old hat for the Washington Beltway crowd, my travels have helped me realize how little is known about this important transformation. The Federal CIO Vivek Kundra, in fact, released a report of over 30 illustrative case studies of how cloud computing is changing the nature of government IT.

Although I’ve been following this transition for over three years on my own blog “Cloud Musings“, I am now honored to have the opportunity to share my views and observations with Forbes.com readers through “Cloud Musings on Forbes“.  Please join and engage with me on this important journey. I look forward to the dialog.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V, Server App-V and Private/Hybrid Clouds

Mary Jo Foley asserted There's more than one way to get Microsoft server app virtualization in a 3/28/2011 post to ZDNet’s All About Microsoft blog:

There’s going to be more than one way for customers to take advantage of Microsoft’s server application virtualization (Server App-V) technology when it becomes commercially available later this year.

I didn’t understand this until I saw a new blog post on March 25 on the Microsoft Virtualization Team Blog.

image I did know that Microsoft officials committed last summer to delivering server app virtualization via the Microsoft System Center Virtual Machine Manager (SCVMM) 2012 product, which is due to ship in the second half of 2011. (Microsoft delivered last week Beta 2 of SCVMM 2012, which included Beta 2 of Server App-V.) This is only half the story — the private cloud half — however.

Microsoft also is going to make Server App-V available in public cloud form via Windows Azure. The Community Technology Preview (CTP) test build of Server App-V that Microsoft rolled out on the last day of December 2010 is the public-cloud version. The Azure Server App-V feature will complement the Windows Azure VM role, offering users a way to migrate certain existing applications to Azure. Like the private-cloud version, the Azure Server App-V technology is due to be released commercially in the second half of 2011.

In short: Server App V will be available via either SCVMM 2012 or as part of a coming update to Windows Azure. It’s the same technology, just packaged in two different ways.

Server App-V technology is of interest to current and potential private- and public-cloud customers because it could help in moving legacy applications into the cloud. Server application virtualization would be like Microsoft’s existing client-side App-V product; it would allow customers to package applications into virtual containers, each of which would be storable and maintainable as a self-contained stateless environment. Microsoft execs have been talking up the potential for Server App V for the past few years.

It’s worth noting that Microsoft is specifying which types of applications will be virtualizable (is that a word?) with SCVMM 2012. From last Friday’s Virtualization Team blog post:

Microsoft is prioritizing business applications such as ERP applications. As with Microsoft Application Virtualization for the desktop there is not a list of applications that Server Application Virtualization will support.  However, there are a number of architectural attributes that the initial release of this technology has been optimized for.”

The first group of applications that will be supported are those with the following attributes, according to the post: State persisted to local disk; Windows Services; IIS Applications; Registry; COM+/DCOM; Text-based Configuration Files; WMI Providers; SQL Server Reporting Services; Local users and groups; Java. Applications without these attributes “may” be supported in later versions of SCVMM, the post said. Applications and architectural attributes that won’t be supported initially include: Virtualization of Windows core component (IIS, DHCP, DNS, etc); J2EE Application Servers; SQL Server; and Exchange Server, Microsoft officials said.


Bill Zack described Server App-V, New Server Application Packaging and Deployment Options in a 3/28/2011 post to the Ignition Showcase blot:

At the Microsoft Management Summit 2011, Brad Anderson, Corporate Vice President, Management and Security Division, discussed the System Center Virtual Machine Manager 2012 Beta. Server Application Virtualization (Server App-V) allows you to separate the application configuration and state from the underlying operating system.  This offers a simplified approach to application deployment and servicing.

Which applications can Server Application Virtualization virtualize as part of System Center 2012?

There is not yet a list of applications that Server Application Virtualization will support; however, there are a number of architectural attributes that the initial release of this technology has been optimized for. These attributes include:

  • State persisted to local disk
  • Windows Services
  • IIS Applications
  • Registry
  • COM+/DCOM
  • Text-based Configuration Files
  • WMI Providers
  • SQL Server Reporting Services
  • Local users and groups
  • Java

Applications that do not have these attributes may be supported in later versions. The following applications or architectural attributes will not be supported in V1:

  • Virtualization of Windows core component (IIS, DHCP, DNS, etc).
  • J2EE Application Servers
  • SQL Server
  • Exchange Server

For more details see here.


Eric Knorr (@EricKnorr) asserted “You say you're sick to death of cloud this and cloud that? Don't be. Think of it as a golden opportunity to upgrade your infrastructure” in a deck for his How to cash in on cloud computing post to InfoWorld’s Modernizing IT/Cloud Computing blog:

In the past year or two, I wouldn't be surprised if some excitable business management type has come up to you and asked a loaded question: "So what are we doing about cloud computing?"

What that person really meant is, "I hear cloud computing drives down the cost of operations -- can we have some of that?" Or with a little more sophistication, "I hear cloud computing makes it a lot quicker to spin up the new applications I need -- when are we going to be able to do that?"

Depending on who's asking the question, you may have given a guttural, unintelligible response or, more defensively, said, "We already have a cloud. It's called a data center." If it was a vice president or higher, maybe you said, "We're doing really well with virtualization right now. Going great. Very high levels of hardware utilization."

Wrong answers! In each case, the chance for some good old-fashioned lobbying slipped by. Every time there's a huge trend in IT -- and by the sheer quantity of words spilled on the topic, cloud computing surely qualifies -- you need to think opportunistically, not defensively.

Cloud computing is all about scale and agility. And since when have those been anything but the highest aspirations of IT? Instead of saying through gritted teeth that "we're working on it," explain to any business exec who asks that there are just a few things you need to get there:

  • New server hardware. Virtualization is the indivisible foundation of cloud computing. Legacy processors don't support virtualization. Plus, you need to stuff virtualization hosts with enough high-bandwidth NICs to handle all the I/O from those virtual machines. Better still would be some fancy new converged hardware with, in effect, built-in network switches, such as Cisco UCS or HP Matrix servers. And memory? All those virtual machines and jillions of simultaneous sessions really, really eat up the memory. In other words, "We need some new metal, dude."
  • New network switches. In a virtualization scenario of any reasonable size, multiple core switches with redundant interswitch links are highly recommended, with enough ports to support fully redundant virtualization host links on every planned network. Furthermore, these switches should support Layer-3 networking, HSRP (Hot Standby Router Protocol) or VRRP (Virtual Router Redundancy Protocol), link aggregation, VLANs, and VLAN trunking to fully realize the benefits of server virtualization. Together these features will help sustain performance, resiliency, and ease of management. In other words, "I need more money."
  • Only VMware will do. Without question, VMware is the cream of the crop among virtualization software solutions -- no wonder it costs so much. Fortunately, VMware the company had the perspicacity to release vCloud Director (that's right, "cloud" is in the name of the product) last fall with all kinds of advanced virtualization management features and even chargeback to support private cloud deployments. You need vCloud and vSphere to do it right. In other words, "We must have VMware because we need a solid foundation for our cloud."
  • More security. The thing about the cloud is that you never know who might want to use it. When you scale internal applications, you may also want to open them to partners. If you extend to the public cloud, you need to make sure you know which employees have subscribed to which services -- and, right away, which are leaving the company so that you can deprovision their accounts immediately. Plus, you need to secure data you move to partner or public clouds. Of course, you should have all these protections in place anyway, but chances are you don't. In other words, "We need two-factor authentication. We need real identity management. And boy oh boy, do we need ubiquitous encryption, although that might require more powerful hardware."

You get the idea. Just as vendors have grandfathered everything they sell into the cloud, you can engage in your own brand of cloudwashing to get the infrastructure you need. I haven't even mentioned storage; of course you need a new SAN for your virtualization farm. Then there's data center automation: "You want agility? Then we really need to be able to auto-provision physical hosts." And on and on.

I've just touched on the obvious, and I'm sure I've missed a bunch. So join the fun and add some of your own cloudwashing suggestions in the comments below. You may not get everything you ask for, but the more you say is necessary, the more you're likely to get. Either that or they'll stop poking you about the cloud. It's a win-win.


<Return to section navigation list> 

Cloud Security and Governance

image

No significant articles today.


<Return to section navigation list> 

Cloud Computing Events

Cloud Slam ‘11 announced on 3/28/2011 that Microsoft’s Ron Markezich will deliver the Welcome & Opening Keynote on 4/28/2011 at 9:00 AM PDT:

Business in the Cloud

imageWhile it’s clear that cloud computing can transform IT, lower costs and accelerate innovation, what’s less obvious is how companies can leverage their existing IT investments while planning for a full or partial migration to the cloud.

cloud computing conference

In his headline keynote, Ron Markezich will discuss how companies can successfully navigate the myriad of choices available to them in order to move to the cloud at their pace and on their terms. He’ll provide examples of businesses in public and private sector industries taking full advantage of the cloud and showcase the benefits and success they are experiencing.

About Speaker Ron Markezich

A 13-year veteran of Microsoft, Ron Markezich is corporate vice president of the company’s U.S. Enterprise and Partner Group (EPG), which serves approximately 1,500 enterprise clients and works closely with a set of managed partners and independent software vendors. In this role, Markezich is responsible for leading enterprise sales and marketing across the United States, including nearly 1,600 employees in field and national sales, partner, marketing, operations and vertical industry teams.

Markezich previously served as the corporate vice president of Microsoft Online at Microsoft Corp., where he was responsible for growing the Microsoft Online business, including business development and service delivery. Microsoft Online provides businesses with the option to have their Microsoft technology delivered as an online service. The services that are part of Microsoft Online are Microsoft Exchange Online, Microsoft Office SharePoint Online, Microsoft Office Communications Online, Business Online Service Delivery, Microsoft Exchange Hosted Services and Microsoft Office Live Meeting.

Prior to working with Microsoft Online, Markezich held the position of chief information officer for Microsoft. In that role he was responsible for Microsoft's global network, data centers, information security, help desk, core IT services and enterprise line-of-business applications. In addition, Markezich was responsible for running Microsoft on beta Microsoft enterprise software and signing off on those products before they were shipped.

Microsoft organizations under Markezich's leadership have received numerous awards, including the CFO Magazine Best in Finance, the Alexander Hamilton Award for Technology and Primus Luminary award.

Markezich joined Microsoft in 1998. Before that, he was at Accenture (formerly Andersen Consulting) in the Electronics and High Tech Group. Markezich is a graduate of the University of Notre Dame, where he was an All-American cross-country runner and captain of the track and field team.

Click here for the conference agenda. You can preview a PDF of Cloud Slam 11’s Special Advertising section in Bloomberg BusinessWeek here.

Full Disclosure: The Cloud Slam ‘11 organizers have provided me with free VIP press credentials.


Adron Hall (@adronbh) announced on 3/28/2011 that his Bellingham Cloud Talk [is] Coming Right Up on 4/5/2011:

image Here’s the basic outline of what I intend to speak on at the upcoming presentation I have for the Bellingham, Washington .NET Users Group.  If you happen to be in the area you should swing by and give it a listen (or heckle, whatever you feel like doing).

On April 5th I have a talk lined up with the Bellingham .NET Users Group. So far here’s a quick overview of the talk:

What Cloud Computing REALLY is to us techies

  • Geographically dispersed data centers.
  • Node based – AKA grid computing configurations that are…
  • Highly Virtualized – thus distributed.
  • Primarily compute and storage functionality.
  • Auto-scalable based on demand.

What kind of offerings exist out in the wild?

  • Amazon Web Services
  • Rackspace
  • Orcs Web
  • GoGrid
  • Joyent
  • Heroku
  • EngineYard

…many others, and then the arrival in the last year-ish of…

    image

  • Windows Azure
  • AppHarbor

Developing for the cloud, what are the fundamentals in the .NET world?

Well, let’s talk about who has been doing the work so far, pushing ahead this technology.

  • Linux is the OS of choice… free, *nix, most widely used on the Internet by a large margin, and extremely capable…
  • Java
  • Ruby on Rails
  • Javascript & jQuery, budding into Node.js via Google’s V8 Engine
  • The Heroku + EngineYard + Git + AWESOMESAUCE capabilities of pushing… LIVE to vastly scalable and distributable cloud provisions!

So where does that leave us .NETters?

  • AWS .NET SDK released a few years ago.
  • Windows Azure & SDK released about a year ago.

These two have, however, been lacking compared to Heroku and EngineYard for those who want something FAST, something transformative, easy to use, without extra APIs or odd, tightly coupled SDKs.

Enter AppHarbor.

In Summary the .NET Platform has primarily:

  • AWS for the top IaaS offering with the most widely available zones & capabilities at the absolute lowest prices,
  • Windows Azure for the general build-to-PaaS solution, and
  • AppHarbor as the preeminent solution for the people lucky enough to be going the Git + MVC + real Agile route.

Demo Time…

  • Windows Azure Demo
  • AWS Demo
  • AppHarbor Demo

Adron’s presentation wasn’t on the Bellingham .NET Users Group Events list as of 3/28/2011.


The HPC in the Clouds blog reported on 3/28/2011 a Call for Abstracts: Cloud Futures--Advancing Research and Education with Cloud Computing by Microsoft Research:

Cloud Futures: Advancing Research and Education with Cloud Computing, June 2-3, 2011, Redmond, WA

Call for Abstracts & Participation

image Cloud computing is an exciting platform for research and education.  Cloud computing has the potential to advance scientific and technological progress by making data and computing resources readily available at unprecedented economy of scale and nearly infinite scalability.  To realize the full promise of cloud computing for research and education, however, we must think about the cloud as a holistic platform for creating new services, new experiences, and new methods to pursue research, teaching and scholarly communication.  This goal presents a broad range of interesting questions.

imageWe invite extended abstracts that illustrate the role of cloud computing across a variety of research and educational areas---including  computer science, engineering, earth sciences, healthcare, humanities, life sciences, and social sciences---that highlight how new techniques and methods of research in the cloud may solve distinct challenges arising in those diverse areas.  Please include a bio (150 words max) and a brief abstract (300 words max) of a 30-minute short talk on a topic that describes practical experiences, experimental results, and vision papers.  

Workshop updates will be posted at http://research.microsoft.com/cloudfutures2011/

Please submit your abstract by April 19, 2011 to cloudfut@microsoft.com

Invited talks will be announced on April 27, 2011

Logistics for Attendees and Selected Abstract Presenters

Academics worldwide are encouraged to attend.  Workshop attendees and abstract presenters are expected to make their own air travel arrangements to Seattle including airport transfers.  All attendees and presenters will be provided complimentary meals and refreshments during the workshop agenda (June 2, 3) including motor coach transport to/from the Bellevue Hyatt.  Attendees are offered a special workshop room rate in the Bellevue Hyatt of $189 per night (plus tax).  Deadline to book this rate is May 25, 2011.

Abstract presenters will receive up to three nights’ complimentary accommodation at this hotel if booked by May 25, 2011.    Microsoft is committed to adhering to local government laws and regulations, as well as institutional policies regarding conflict of interest and anti-corruption. To ensure compliance, Microsoft may be unable to cover the expenses and fees of some attendees and presenters for their attendance at Cloud Futures 2011.


James Staten (@staten7) revisited his Cloud Connect 2011 keynote in his The two words you need to know to turn on cloud economics post of 3/28/2011 to the Forrester blog:

image Everyone understands that cloud computing provides pay per use access to resources and the ability to elastically scale up an application as its traffic increases. Those are values that turn on cloud economics but how do you turn cloud economics to your advantage?

That was the topic of my keynote session at the Cloud Connect 2011 event in Santa Clara, CA earlier this month. The video of this keynote can now be viewed on the event web site at http://tv.cloudconnectevent.com/. You will need to register (free) on the site. In this short, six-minute keynote you will get the answer to that question. I also encourage you to view many of the other keynotes from this same event, as this was the first cloud computing conference I have attended that finally moved beyond Cloud 101 content and provided a ton of great material on how to really take advantage of cloud computing. We still have a long way to go, but this is a great step forward for anyone still learning about the Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solutions and how they can empower your organization.

image If you still aren't experimenting with these platforms, get going. While they won't transform the world, they do give you new deployment options that can accelerate time to market, increase deployment flexibility and prepare you for the new economic model they are bringing to many early adopters today.

I'll be giving a more in-depth talk on the topic of cloud economics in my keynote at the virtual event, CloudSlam, on April 19th. I'll also be moderating a panel or two at the in-person event on April 18. Come join me if you can.


Wes Yanaga suggested that you Arrive a Day Early and Attend a Boot camp at MIX11 in a 3/28/2011 post to the US ISV Evangelism blog:

image Want to get the most out of MIX11? Attend a full day of pre-conference Boot Camps designed to get you up-to-speed on the most important technologies in web design and development. Need to ramp up on Silverlight, HTML5, CSS3, or jQuery? Want to learn how to take advantage of the Cloud or how to build Silverlight applications for SharePoint?

There’s a Boot Camp for both. Perhaps you’d like to learn how high-profile, complex sites and games are built using the latest web technologies? We have speakers from some of the top interactive agencies in the world deconstructing their projects right before your eyes. Whether you’re a designer or a developer you can take advantage of some of the best, most comprehensive content available from some of the industry’s most regarded presenters.

The popular MIX11 pre-conference Boot Camps are lecture style and will include high-quality visuals, code examples and walk-throughs, as well as materials you can take with you for later reference.

Boot Camps take place on Monday, April 11 and cost only $350

Boot Camps | April 11, 2011
Mandalay Bay Resort and Casino

3950 Las Vegas Blvd. South
Las Vegas, NV 89119

MIX11 Morning Boot Camps

HTML5/CSS3 Boot Camp

Stephanie (Sullivan) Rewis (W3Conversions)

Although CSS3 rendering varies between browsers and HTML5 hasn't reached the candidate recommendation stage yet, they're already the hot, new buzzwords and some intrepid front-end developers are already creating sites with them. Are these new techniques ready for prime time on the web? What happens with older browsers?

In this session, Stephanie demonstrates the latest HTML5 structural elements, form markup, and ways of presenting audio and video; then she adds cutting-edge CSS3 features for easier implementation of visual effects, enhanced typography, color treatment, and object effects -- all with a focus on satisfying your cravings for tomorrow's technologies through practical implementations today. Welcome to the next generation of web design!

Design, Content, Code: Start-to-finish

Roman Blagovirnyy , Tony Walt, Chad Bakeman, Anthony Franco, and Cindy Vanover (EffectiveUI)

Ever wanted to learn how a professional, high-end, media-heavy, interactive site is built from start to finish? In this boot camp the senior team from award-winning User Experience Agency, Effective UI, will walk you through how they built an incredible new site for a very demanding organization. The project has not yet launched but it will be live before MIX. (Microsoft: Trust us, it is AWESOME!)

The team will cover everything from initial UX consulting and requirements gathering to project management, design iterations, media development (video, audio, 3D), Silverlight development, HTML5, JavaScript, ASP.NET, Azure hosting, content management, to meeting client expectations. This is a boot camp that you do not want to miss!

Silverlight Boot Camp

John Papa (Microsoft), Mike Taulty (Microsoft)

In this half-day workshop of demos with a few slides tagged on ;-) we’ll take a tour around the landscape of Silverlight applications. We’ll dig into the core platform capabilities, the tooling and we’ll dive into some of the patterns and frameworks that you’ll work with in getting data in and out of your Silverlight applications and onto the screen. We’ll demonstrate how to solve real problems using the core Silverlight features as well as the latest features in Silverlight 5, to get you ready for the upcoming week at MIX11!

What we expect of you is that you have a basic awareness around .NET and Silverlight. What you can expect from us is a half-day bringing you up to speed on the essentials of building applications with Silverlight.

Cloud Boot Camp

Vishwas Lele (Applied Information Sciences)

image

Cloud computing is being viewed as one of the most important concepts in IT going forward. In the last couple of years it has moved from high-level concept to a technology that is now seeing mainstream adoption. In this half-day workshop we will begin by discussing the what, why and how of cloud computing. After we have covered the fundamentals of cloud computing, we will dive deep into the Windows Azure platform. This will include a discussion of various building blocks, features, and tooling. Next, through a series of demos we’ll review a variety of features of the platform and patterns that can help you move your on-premise workloads to the cloud. Finally, we will discuss the economics of cloud including cost and SLA. What we expect of you is that you have a basic awareness around .NET and experience building web applications. What you can expect from us is a half-day bringing you up to speed on the essentials of building applications on the Azure platform.

Vishwas Lele is an AIS Chief Technology Officer and is responsible for the company vision and execution of creating business solutions using the Microsoft Windows Azure Platform

MIX11 Afternoon Boot Camps

jQuery Boot Camp

Joe Marini (Microsoft)

jQuery has become one of the most popular JavaScript libraries for building rich, interactive, cross-browser Web sites and applications. In this session, Joe Marini will show you how to get started using the power of jQuery and its sister library jQueryUI to build your next-generation Web applications. Whether it’s animation, dynamic formatting, AJAX, or complicated UI, jQuery has the features you need to help tackle your hardest Web development problems.

HTML5 Canvas Mastery

Lanny McNie (gskinner.com), Shawn Blais (gskinner.com)

In this boot camp we will take a comprehensive look at everything from simple shape and image drawing, to advanced techniques for building interactive experiences and games. We will also look at available tools to streamline development, and share some helpful tips on implementation, performance, and optimization.

Lanny McNie and Shawn Blais are senior developers at gskinner.com, a leading interactive production shop focused on providing cutting-edge experiences in HTML5, Flash, iOS, and Android.

Windows Phone 7 Boot Camp

Grant Hinkson (Microsoft), Adam Kinney (Pixel Lab)

In this workshop you’ll see a complete Windows Phone app built from the ground up. We’ll walk from File > New Project all the way through app submission in the marketplace. You’ll learn all the tips and tricks required to make your app feel like a native app, from smooth page transitions to highly performant progress bars. You’ll leave with a fully functional app that you can start customizing with your own data and submit to the marketplace for instant global reach.

Silverlight for SharePoint Boot Camp

Paul Stubbs (Microsoft)

SharePoint is the leading business application platform for sharing and collaborating. Silverlight 4 is the perfect tool for bringing SharePoint to life. Demand for developers who can create rich SharePoint applications using Silverlight is off the chart. Come learn how to build interactive SharePoint applications using Silverlight 4, jQuery and OData. Understand how designers and developers can work together using Expression Blend, Visual Studio, sample data, and data binding against SharePoint Lists.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Joe Panettieri asked HP Cloud Strategy: Big Ambitions, But Will Partners Engage? in a 3/28/2011 post to the TalkinCloud blog:

Hewlett-Packard plans to launch an HP Cloud and an app store for consumers and enterprises, according to HP CEO Leo Apotheker. Those cloud plans surfaced during Apotheker’s HP Americas Partner Conference keynote a few minutes ago in Las Vegas. HP’s cloud strategy is ambitious. In fact, I wonder if it’s too ambitious, and I also wonder where HP channel partners will ultimately fit in.

During the conference, Apotheker described an “Everybody On” cloud strategy that sounds quite a bit like Microsoft’s “All In” cloud strategy. HP is promising to launch a public cloud, an app store, developer tools and more to connect the dots between data centers, cloud computing and devices like PCs, printers and tablets.

In stark contrast, rivals like Oracle and Cisco Systems have vowed not to launch public clouds, instead deferring public cloud opportunities to channel partners and service providers.

The big question: Where do channel partners fit in the HP Cloud strategy? Apotheker assured more than 1,000 channel partners here that the company remains committed to partners. And Channel Chief Stephen DiFranco described HP’s progress with channel partners in the past year. Plus, HP and Axcient are celebrating SMB partner wins in the managed services market, a close cousin to the cloud industry.

Question Marks

Still, plenty of questions remain. Apotheker did not discuss any specific channel partner cloud strategy — though HP is expected to discuss that topic over the next couple of days here at HP Americas Partner Conference. Earlier this month TalkinCloud warned HP to avoid a key cloud channel mistake that Microsoft has already made. Will HP heed that advice? I’ll be searching for answers over the next couple of days.

Even if HP gets its cloud channel partner strategy right, the company is in catch-up mode. Cloud providers like Amazon Web Services and Rackspace have been online and growing rapidly in recent years, and even Microsoft’s fledgling Windows Azure cloud has been online for a full year.

Moreover, HP needs to carefully articulate an ISV strategy to ensure the forthcoming HP app store(s) are populated with quality business and consumer apps. Those app stores will most certainly support the forthcoming HP TouchPad tablet, which DiFranco demonstrated earlier today.

Overall, the HP Americas Partner Conference has been an upbeat engagement. But the forthcoming HP Cloud launch sounds somewhat late and extremely ambitious to me. Plus, I've yet to hear how partners will potentially profit from HP Cloud.

Follow Talkin’ Cloud via RSS, Facebook and Twitter. Sign up for Talkin’ Cloud’s Weekly Newsletter, Webcasts and Resource Center. Read our editorial disclosures here.

Read More About This Topic


Jeff Barr reported Adding a Second AWS Availability Zone in Tokyo in a 3/28/2011 post to the Amazon Web Services blog:

imageOur hearts go out to those who have suffered through the recent events in Japan. I was relieved to hear from my friends and colleagues there in the days following the earthquake. I'm very impressed by the work that the Japan AWS User Group (JAWS) has done to help some of the companies, schools, and government organizations affected by the disaster to rebuild their IT infrastructure.

imageWe launched our Tokyo Region with a single Availability Zone ("AZ") about a month ago. At that time we said we would be launching a second Tokyo AZ soon. After a very thorough review of our primary and backup power supplies, we have decided to open up that second Availability Zone, effective today.

As you may know, AWS is currently supported in five separate Regions around the world: US East (Northern Virginia), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo). Each Region is home to one or more Availability Zones. Each Availability Zone in a Region is engineered to be operationally independent of the other Zones, with independent power, cooling, physical security, and network connectivity. As a developer or system architect, you have full control over the Regions and Availability Zones that your application uses.

A number of our customers are already up and running in Tokyo and have encouraged us to open up the second Availability Zone so that they can add fault tolerance by running in more than one AZ. For example, with the opening of the second AZ developers can use the Amazon Relational Database Service (RDS) in Multi-AZ mode (see my blog post for more information about this), or load balance between web servers running Amazon EC2 in both AZ's.
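For readers who want to try this, here is a minimal sketch, using the current AWS SDK for .NET (the AWSSDK.EC2 NuGet package), of spreading EC2 instances across every Availability Zone in the Tokyo Region. The AMI ID is a placeholder and the class names come from today's SDK, so they may differ from the 2011-era library:

    using System;
    using System.Threading.Tasks;
    using Amazon;
    using Amazon.EC2;
    using Amazon.EC2.Model;

    class TokyoMultiAzLaunch
    {
        static async Task Main()
        {
            // Target the Asia Pacific (Tokyo) Region.
            var ec2 = new AmazonEC2Client(RegionEndpoint.APNortheast1);

            // Enumerate the Availability Zones currently visible in the Region.
            var zones = await ec2.DescribeAvailabilityZonesAsync(new DescribeAvailabilityZonesRequest());

            foreach (var zone in zones.AvailabilityZones)
            {
                // Launch one instance per zone so a single-AZ failure
                // does not take the whole tier down. The AMI ID is a placeholder.
                var response = await ec2.RunInstancesAsync(new RunInstancesRequest
                {
                    ImageId = "ami-xxxxxxxx",
                    InstanceType = InstanceType.M1Large,
                    MinCount = 1,
                    MaxCount = 1,
                    Placement = new Placement { AvailabilityZone = zone.ZoneName }
                });

                Console.WriteLine("Launched {0} in {1}",
                    response.Reservation.Instances[0].InstanceId, zone.ZoneName);
            }
        }
    }

An Elastic Load Balancer in front of the instances, or RDS running in Multi-AZ mode, rounds out the fault-tolerance setup Jeff describes.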

-- Jeff;

PS - We continue to monitor the power situation closely. The AWS Service Health Dashboard is the best place to go for information on any possible service issues.


Klint Finley (@klintron) summarized Amazon Web Services Adds an Un-Cloudy Option to Its IaaS in a 3/28/2011 post to the ReadWriteCloud blog:

image Amazon Web Services announced today the availability of a new service: Amazon EC2 Dedicated Instances. Instead of using a multitenant architecture, the new service gives customers dedicated hardware. In other words, instead of sharing physical servers with other customers, Dedicated EC2 customers will have their own private servers. AWS competitor GoGrid recently announced a similar service called Hosted Private Cloud.

image Multitenancy is one of the defining features of the cloud, but it's also considered risky. The new service takes a lot of uncertainty out of running applications in public infrastructure, but also reduces the elasticity of the service. It's also more expensive.

Here's how we explained the trade-offs when GoGrid announced its service:

The advantage over a public cloud is that none of your data is "touching" that of another customer. The disadvantage is that you have to pay for resources that you might not use. The advantage over a private cloud is that you can rent the resources without putting up capital expenditure money or the time and labor of building a data center. The disadvantage is that you still have to trust an outside organization to store your data.

However, according to the announcement from the official Amazon Web Services Blog:

It is important to note that launching a set of instances with dedicated tenancy does not in any way guarantee that they'll share the same hardware (they might, but you have no control over it). We actually go to some trouble to spread them out across several machines in order to minimize the effects of a hardware failure.

imageAWS bills for the new service by the hour. There's a $10-an-hour per-Region fee, plus a smaller hourly fee for each instance. Pricing can be found here.

Here's an explanation from the AWS Blog of the region fee:

When you launch a Dedicated Instance, we can't use the remaining "slots" on the hardware to run instances for other AWS users. Therefore, we incur an opportunity cost when you launch a single Dedicated Instance. Put another way, if you run one Dedicated Instance on a machine that can support 10 instances, 9/10ths of the potential revenue from that machine is lost to us.

In order to keep things simple (and to keep you from wasting your time trying to figure out how many instances can run on a single piece of hardware), we add a $10/hour charge whenever you have at least one Dedicated Instance running in a Region. When figured as a per-instance cost, this charge will asymptotically approach $0 (per instance) for customers that run hundreds or thousands of instances in a Region.
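To see how that $10 per-Region charge amortizes, here is a quick back-of-the-envelope sketch in C#; the per-instance premium used here is an illustrative assumption, not Amazon's published rate:

    using System;

    class DedicatedRegionFee
    {
        static void Main()
        {
            const decimal regionFeePerHour = 10m;   // flat fee while at least one Dedicated Instance runs in a Region
            const decimal instancePremium = 0.10m;  // assumed per-instance hourly premium (illustrative only)

            foreach (var count in new[] { 1, 10, 100, 1000 })
            {
                // The Region fee is shared across every Dedicated Instance you run,
                // so its per-instance share shrinks as the fleet grows.
                decimal perInstanceOverhead = regionFeePerHour / count + instancePremium;
                Console.WriteLine("{0,5} instances -> ${1:F4}/hour overhead per instance",
                    count, perInstanceOverhead);
            }
        }
    }

At 1,000 instances the Region fee adds only a penny per instance-hour, which is the asymptotic behavior the AWS post describes.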

This service continues AWS' shift towards being more enterprise-friendly.

AWS has been adding features and services such as the Virtual Private Cloud Wizard, Cloud Formation and Elastic Beanstalk that make managing applications on the AWS infrastructure easier. It also expanded its support services.


Chris Czarnecki analyzed Improved Amazon Private Cloud Security: EC2 Dedicated Instances in a 3/28/2011 post to the Learning Tree blog:

image Back in October last year I posted an article titled ‘How Dedicated is Your Private Cloud?‘. The main theme was that whilst organisations like Amazon offer private clouds on Amazon infrastructure, your virtual machines may actually be co-hosted on the same physical hardware as other organisations’ virtual machines. What is private in such scenarios is the virtual network your instances are connected to.

image To those without a good understanding of cloud computing and the underlying technologies that make it possible, private cloud means one thing, yet those with a good understanding of cloud computing know that there are different levels of ‘private’ cloud when that cloud is hosted by a third party. Amazon, as part of their AWS offering, have provided a virtual private cloud (VPC) for some time now. With the Amazon VPC, instances are co-hosted with instances from other organisations. Until today, that is. Today, Amazon have announced EC2 Dedicated Instances, which ensure that all EC2 compute instances will be isolated at the hardware level. It is possible to create a VPC in EC2 that has a mixture of dedicated and non-dedicated machine instances all on the same network, based on application requirements.

imageIn addition, earlier this month Amazon made some changes to the way VPCs can be accessed. Originally, the only way of accessing an Amazon VPC was from an IPsec Virtual Private Network (VPN). This required extra onsite resources for many organisations. The VPN restriction has now been relaxed, and an Amazon VPC can now be accessed over the Internet. Amazon are certainly making the private cloud something that is now comfortably within reach of all organisations.
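As a rough illustration of the relaxed access model, the sketch below uses the current AWS SDK for .NET to attach an Internet gateway to an existing VPC and route a subnet through it, removing the need for an IPsec VPN. The VPC ID and CIDR blocks are placeholders, and the class names come from today's SDK rather than the 2011-era one:

    using System;
    using System.Threading.Tasks;
    using Amazon;
    using Amazon.EC2;
    using Amazon.EC2.Model;

    class VpcInternetAccess
    {
        static async Task Main()
        {
            var ec2 = new AmazonEC2Client(RegionEndpoint.USEast1);
            const string vpcId = "vpc-xxxxxxxx";   // placeholder: an existing VPC

            // Create an Internet gateway and attach it to the VPC.
            var igw = await ec2.CreateInternetGatewayAsync(new CreateInternetGatewayRequest());
            await ec2.AttachInternetGatewayAsync(new AttachInternetGatewayRequest
            {
                InternetGatewayId = igw.InternetGateway.InternetGatewayId,
                VpcId = vpcId
            });

            // Create a subnet and route its Internet-bound traffic through the gateway.
            var subnet = await ec2.CreateSubnetAsync(new CreateSubnetRequest
            {
                VpcId = vpcId,
                CidrBlock = "10.0.1.0/24"
            });
            var routeTable = await ec2.CreateRouteTableAsync(new CreateRouteTableRequest { VpcId = vpcId });
            await ec2.CreateRouteAsync(new CreateRouteRequest
            {
                RouteTableId = routeTable.RouteTable.RouteTableId,
                DestinationCidrBlock = "0.0.0.0/0",
                GatewayId = igw.InternetGateway.InternetGatewayId
            });
            await ec2.AssociateRouteTableAsync(new AssociateRouteTableRequest
            {
                RouteTableId = routeTable.RouteTable.RouteTableId,
                SubnetId = subnet.Subnet.SubnetId
            });

            Console.WriteLine("VPC {0} now routes to the Internet via {1}",
                vpcId, igw.InternetGateway.InternetGatewayId);
        }
    }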

For anybody who would like to gain an understanding of what cloud computing is, the underlying technologies and how it can benefit their organisation, Learning Tree have developed a Cloud Computing course that provides hands-on exposure to a variety of cloud computing tools and services. In addition, currently under development is a course dedicated specifically to Amazon AWS. If you are interested, more details are provided here.


Nicole Hemsoth reported Amazon Opens Door to Dedicated Hardware in a 3/28/2011 post to the HPC In the Cloud blog:

image Amazon announced today that it would take one step past its recent announcement of their enhanced Virtual Private Cloud (VPC) offering and provide the option of dedicated hardware for customer applications.

According to Amazon’s Jeff Barr, the company’s VPC services weren’t enough for some customers, who required more than network isolation. Some users expressed worry about the fact that other companies could be running on the same host--one of the most frequently cited cloud concerns.

Certainly, as with any other on-demand instances that aren’t operating in a centralized datacenter that belongs to one's organization, this announcement still does not alleviate all concerns about using cloud-based (read: 'remote') resources. There's still no way to have complete control over data, but this can be something of a balm for both perception purposes and, more concretely, for compliance.

image For those who need computational resources and don’t have the wherewithal to purchase and maintain servers or a cluster, this is an ideal offering, in part because it speaks to the perceived security concerns that arise in a multitenant environment, even if users might still contend with worries about loss of control or resource centralization. The need for resources combined with the inability to invest in hardware has been the central attractor to the public cloud, but for those who were hesitant before due to the multitenancy argument, this might finally make the clouds less... shady.

image With the current VPC option, users will be able to select whether they want to spin up a private cloud that combines dedicated and normal AWS instances or if they want to just run their application on dedicated hardware.

Barr notes that when it comes to virtualization and the availability of virtual private clouds, even though the company already uses a sophisticated version of the Xen hypervisor to ensure that users are completely isolated from one another, customers have been hesitant.

In essence, this type of service is roughly the same thing that HPC users could get from an HPC on-demand service providers. R Systems, Cycle Computing (which pulls its bread and butter out of the Amazon empire), Sabalcore, SGI’s Cyclone---the list goes on—all have faced stiff competition from users who need resources without the hardware investment but now instead of standing on the security and regulatory grounds to show that their services are more appropriate, these HPC on-demand providers will need to beef up other claims as well as provide clear signs that their pricing models are more attractive.

Comparing pricing for Amazon’s Dedicated Instances running for HPC workloads will take some application-dependent math and guesswork but taking it a step further, using that information to compare the Amazon Dedicated Instance to the host of other HPC on-demand pricing models is going to be a challenge due to the completely different ways each provider prices their offerings.

The pricing for this new option is somewhat different than the way Amazon bills for some of its other cloud services as it is split between two fee structures. On the one hand, users will incur a fee for each instance and then will also pay what they call a “dedicated per region fee” which is a flat $10 per hour although isn’t contingent on how many instances are running in any particular region. This regional fee varies by where the instance is running and also changes according to the OS and performance level.

As Barr states, “When you launch a dedicated instance, we can’t use the remaining ‘slots’ on the hardware to run instances for other AWS users. Therefore, we incur an opportunity cost when you launch a single dedicated instance. Put another way, if you run one Dedicated Instance on a machine that can support 10 instances, 9/10ths of the potential revenue is lost to us.”

An additional option is one-time payments per instance based on a contract with Amazon in which the user would get a discount on the hourly use fees.

What’s worth noting here, however, is that while it might be difficult to understand exactly how the pricing models compare with another, one thing that users will need to factor into any decision or comparison is the support angle.

While the Dedicated Instances might be attractive from a cost perspective after the heavy lifting process behind price comparing is finished, remember that HPC on-demand providers factor in a very important element that Amazon doesn’t provide—support.  And lots of it.

Put another way, Amazon simply provides you with the hardware and counts on your own internal expertise but for some applications, bare hardware and basic instructions aren’t going to cut it.

While it might be tempting to think that on-demand providers are going to be further spurred to beef up their offerings and take another look at their pricing, it’s probably more realistic to see the critical support factor becoming central to their messaging and packaging—and rightly so.


Chris Hoff (@Beaker) asked Dedicated AWS VPC Compute Instances – Strategically Defensive or Offensive? in a 3/28/2011 post:

image Chugging right along on the feature enhancement locomotive, following the extension of networking capabilities of their Virtual Private Cloud (VPC) offerings last week (see: AWS’ New Networking Capabilities – Sucking Less ;) ,) Amazon Web Services today announced the availability of dedicated (both on-demand and dedicated) compute instances within a VPC:

Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, pay only for what you use, and a private, isolated virtual network, all while ensuring that your Amazon EC2 compute instances will be isolated at the hardware level.

image That’s interesting, isn’t it?  I remember writing this post “ Calling All Private Cloud Haters: Amazon Just Peed On Your Fire Hydrant… and chuckling when AWS announced VPC back in 2009 in which I suggested that VPC:

  • Legitimized Private Cloud as a reasonable, needed, and prudent step toward Cloud adoption for enterprises,
  • Substantiated the value proposition of Private Cloud as a way of removing a barrier to Cloud entry for enterprises, and
  • Validated the ultimate vision toward hybrid Clouds and Inter-Cloud

That got some hackles up.

So this morning, people immediately started squawking on Twitter about how this looked remarkably like (or didn’t) private cloud or dedicated hosting.  This is why, about two years ago, I generated this taxonomy that pointed out the gray area of “private cloud” — the notion of who manages it, who owns the infrastructure, where it’s located and who it’s consumed by:

I did a lot of this work well before I utilized it in the original Cloud Security Alliance Guidance architecture chapter I wrote, but that experienced refined what I meant a little more clearly and this version was produced PRIOR to the NIST guidance which is why you don’t see mention of “community cloud”:

  1. Private
    Private Clouds are provided by an organization or their designated service provider and offer a single-tenant (dedicated) operating environment with all the benefits and functionality of elasticity* and the accountability/utility model of Cloud.  The physical infrastructure may be owned by and/or physically located in the organization’s datacenters (on-premise) or that of a designated service provider (off-premise) with an extension of management and security control planes controlled by the organization or designated service provider respectively.
    The consumers of the service are considered “trusted.”  Trusted consumers of service are those who are considered part of an organization’s legal/contractual umbrella including employees, contractors, & business partners.  Untrusted consumers are those that may be authorized to consume some/all services but are not logical extensions of the organization.
  2. Public
    Public Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the  accountability/utility model of Cloud.
    The physical infrastructure is generally owned by and managed by the designated service provider and located within the provider’s datacenters (off-premise.)  Consumers of Public Cloud services are considered to be untrusted.
  3. Managed
    Managed Clouds are provided by a designated service provider and may offer either a single-tenant (dedicated) or multi-tenant (shared) operating environment with all the benefits and functionality of elasticity and the  accountability/utility model of Cloud.The physical infrastructure is owned by and/or physically located in the organization’s datacenters with an extension of management and security control planes controlled by the designated service provider.  Consumers of Managed Clouds may be trusted or untrusted.
  4. Hybrid
    Hybrid Clouds are a combination of public and private cloud offerings that allow for transitive information exchange and possibly application compatibility and portability across disparate Cloud service offerings and providers utilizing standard or proprietary methodologies regardless of ownership or location.  This model provides for an extension of management and security control planes.  Consumers of Hybrid Clouds may be trusted or untrusted.

* Note: the benefits of elasticity don’t imply massive scale, which in many cases is not a relevant requirement for an enterprise.  Also, ultimately I deprecated the “managed” designation because it was a variation on a theme, but you can tell that ultimately the distinction I was going for between private and hybrid is the notion of OR versus AND designations in the various criteria.

AWS’ dedicated VPC options now give you another ‘OR’ option when thinking about who manages, owns the infrastructure your workloads run on, and more importantly where.  More specifically, the notion of ‘virtual’ cloud becomes less and less important as the hybrid nature of interconnectedness of resources starts to make more sense — regardless of whether you use overlay solutions like CloudSwitch, “integrated” solutions from vendors like VMware or Citrix or from AWS.  In the long term, the answer will probably be “D) all of the above.”

Providing dedicated compute atop a hypervisor for which you are the only tenant will be attractive to many enterprises who have trouble coming to terms with sharing memory/cpu resources with other customers.  This dedicated functionality costs a pretty penny – $87,600 a year, and as Simon Wardley pointed out that this has an interesting effect inasmuch as it puts a price tag on isolation:

Here’s the interesting thing that goes to the title of this post:

Is this a capability that AWS really expects to be utilized as they further blur the lines between public, private and hybrid cloud models OR is it a defensive strategy hinged on the exorbitant costs to further push enterprises into shared compute and overlay security models?

Specifically, one wonders if this is a strategically defensive or offensive move?

A single tenant atop a hypervisor atop dedicated hardware — that will go a long way toward addressing one concern: noisy (and nosy) neighbors.

Now, keep in mind that if an enterprise’s threat modeling and risk management frameworks are reasonably rational, they’ll realize that this is compute/memory isolation only.  Clearly the network and storage infrastructure is still shared, but the “state of the art” in today’s cloud of overlay encryption (file systems and SSL/IPSec VPNs) will likely address those issues.  Shared underlying cloud management/provisioning/orchestration is still an interesting area of concern.

So this will be an interesting play for AWS. Whether they’re using this to take a hammer to the existing private cloud models or just to add another dimension in service offering (logical, either way) I think in many cases enterprises will pay this tax to further satisfy compliance requirements by removing the compute multi-tenancy boogeyman.

/Hoff

Related articles


Joe Panettieri reported HP TouchPad: WebOS Tablet Will Be Partner, Cloud Ready in a 3/28/2011 post to the TalkinCloud blog:

image At the HP Americas Partner Conference in Las Vegas, Channel Chief Stephen DiFranco today demonstrated the forthcoming HP TouchPad to TalkinCloud. DiFranco says the HP TouchPad, which runs WebOS, will present a key opportunity for VARs and cloud integrators that focus on vertical markets such as health care. He also said the TouchPad tablet will unlock mobile virtualization and authentication opportunities for partners. Take a look.

Click here to view the embedded video.

DiFranco briefly demonstrated the HP TouchPad in a TalkinCloud FastChat Video (left). He did not mention pricing or availability for the TouchPad. However ArsTechnica is reporting:

A product release schedule leaked this week by a major US retailer, however, indicates that pricing will range from $499-599, with different tiers based on storage capacity. A $499 price point at launch will help make the Touchpad more competitive in a marketplace where Apple’s comparably-priced iPad 2 is soaking up sales.

The product launch sheet, which was obtained and published by the PreCentral blog, shows that the 10-inch Touchpad will have a June launch. This is consistent with HP’s previous statements that the Touchpad was planned for Summer. The launch sheet also indicates that a 7-inch HP tablet with webOS is planned for September, with pricing still to be determined.

imageHP is gearing up to train distributors and channel partners on the TouchPad and WebOS, according to Mike Parrottino VP of the HP Americas Personal Systems Group (PSG). Throughout mid-2011 HP will offer potential TouchPad partners web-based training while also launching a TouchPad road show that includes a partner education track, Parrottino said.

Meanwhile, HP CEO Leo Apotheker is expected to discuss HP’s cloud and WebOS strategies later today at the HP Americas Partner Conference. The event, which kicked off today, focuses heavily on recurring revenue opportunities — both cloud and managed services. In start contrast, the 2010 event focused mainly on helping partners to sell more servers, PCs, mobile devices and printers.


Jeff Barr (@jeffbarr) reported the avaiability of Amazon EC2 Dedicated Instances on 3/27/2011:

image We continue to listen to our customers, and we work hard to deliver the services, features, and business models based on what they tell us is most important to them. With hundreds of thousands of customers using Amazon EC2 in various ways, we are able to see trends and patterns in the requests, and to respond accordingly. Some of our customers have told us that they want more network isolation than is provided by "classic EC2."  We met their needs with Virtual Private Cloud (VPC). Some of those customers wanted to go even further. They have asked for hardware isolation so that they can be sure that no other company is running on the same physical host.

We're happy to oblige!

imageToday we are introducing a new EC2 concept — the Dedicated Instance. You can now launch Dedicated Instances within a Virtual Private Cloud on single-tenant hardware. Let's take a look at the reasons why this might be desirable, and then dive in to the specifics, including pricing.

Background
Amazon EC2 uses a technology commonly known as virtualization to run multiple operating systems on a single physical machine. Virtualization ensures that each guest operating system receives its fair share of CPU time, memory, and I/O bandwidth to the local disk and to the network using a host operating system, sometimes known as a hypervisor. The hypervisor also isolates the guest operating systems from each other so that one guest cannot modify or otherwise interfere with another one on the same machine. We currently use a highly customized version of the Xen hypervisor. As noted in the AWS Security White Paper, we are active participants in the Xen community and track all of the latest developments.

While this logical isolation works really well for the vast majority of EC2 use cases, some of our customers have regulatory or other restrictions that require physical isolation. Dedicated Instances have been introduced to address these requests.

The Specifics

Each Virtual Private Cloud (VPC) and each EC2 instance running in a VPC now has an associated tenancy attribute. Leaving the attribute set to the value "default" specifies the existing behavior: a single physical machine may run instances launched by several different AWS customers.

Setting the tenancy of a VPC to "dedicated" when the VPC is created will ensure that all instances launched in the VPC will run on single-tenant hardware. The tenancy of a VPC cannot be changed after it has been created.

You can also launch Dedicated Instances in a non-dedicated VPC by setting the instance tenancy to "dedicated" when you call RunInstances. This gives you a lot of flexibility; you can continue to use the default tenancy for most of your instances, reserving dedicated tenancy for the subset of instances that have special needs.

This is supported for all EC2 instance types with the exception of Micro, Cluster Compute, and Cluster GPU.

It is important to note that launching a set of instances with dedicated tenancy does not in any way guarantee that they'll share the same hardware (they might, but you have no control over it). We actually go to some trouble to spread them out across several machines in order to minimize the effects of a hardware failure.
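A minimal sketch of the two approaches described above, again assuming the current AWS SDK for .NET (class and property names may differ from the SDK available at the time of the announcement); the AMI and subnet IDs are placeholders:

    using System;
    using System.Threading.Tasks;
    using Amazon;
    using Amazon.EC2;
    using Amazon.EC2.Model;

    class DedicatedTenancyExample
    {
        static async Task Main()
        {
            var ec2 = new AmazonEC2Client(RegionEndpoint.USEast1);

            // Option 1: a VPC whose tenancy is "dedicated" -- every instance
            // launched into it will run on single-tenant hardware.
            var vpc = await ec2.CreateVpcAsync(new CreateVpcRequest
            {
                CidrBlock = "10.0.0.0/16",
                InstanceTenancy = Tenancy.Dedicated
            });
            Console.WriteLine("Dedicated VPC: {0}", vpc.Vpc.VpcId);

            // Option 2: a single Dedicated Instance inside a default-tenancy VPC,
            // by setting the tenancy on the launch request itself.
            var run = await ec2.RunInstancesAsync(new RunInstancesRequest
            {
                ImageId = "ami-xxxxxxxx",              // placeholder AMI
                InstanceType = InstanceType.M1Large,   // Micro, Cluster Compute, and Cluster GPU are excluded
                MinCount = 1,
                MaxCount = 1,
                SubnetId = "subnet-xxxxxxxx",          // placeholder subnet in an existing VPC
                Placement = new Placement { Tenancy = Tenancy.Dedicated }
            });
            Console.WriteLine("Dedicated Instance: {0}", run.Reservation.Instances[0].InstanceId);
        }
    }

Either way, the tenancy setting is the only change from an ordinary VPC creation or RunInstances call.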

Pricing
When you launch a Dedicated Instance, we can't use the remaining "slots" on the hardware to run instances for other AWS users. Therefore, we incur an opportunity cost when you launch a single Dedicated Instance. Put another way, if you run one Dedicated Instance on a machine that can support 10 instances, 9/10ths of the potential revenue from that machine is lost to us.

In order to keep things simple (and to keep you from wasting your time trying to figure out how many instances can run on a single piece of hardware), we add a $10/hour charge whenever you have at least one Dedicated Instance running in a Region. When figured as a per-instance cost, this charge will asymptotically approach $0 (per instance) for customers that run hundreds or thousands of instances in a Region.

We also add a modest premium to the On-Demand pricing for the instance to represent the added value of being able to run it in a dedicated fashion. You can use EC2 Reserved Instances to lower your overall costs in situations where at least part of your demand for EC2 instances is predictable.


Jamal Mazhar asserted “Microsoft's Virtual Machine Manager 2012 to Enable Service Level Management” is a preface to his First Amazon, Now Microsoft Follows Kaavo in Cloud Management post of 3/25/2011:

A few weeks back I wrote about Amazon CloudFormation and how it is similar to Kaavo’s approach.  Earlier this week at the Microsoft Management Summit 2011 in Las Vegas, Microsoft announced what they are calling “a new innovation” to be released in Virtual Machine Manager 2012 to enable service level management.  The enabling technology powering this is what Microsoft hails as the “new concept” called the “service template”.  According to Microsoft, a service template captures all the information that you need to deploy a service.  So basically Microsoft has finally recognized that in the cloud you have to take a top-down, application-centric approach to effectively manage applications and workloads.  The “Service Template” is basically the MS equivalent of the Kaavo System Definition; we released it in January 2009.  The biggest difference is that Kaavo has implemented a general-purpose, application-centric management solution which is agnostic to OS and programming languages.  One has to wonder if Microsoft's corporate policy of not using Google search is to blame for Microsoft not knowing about the Kaavo System Definition concept.

Anyway, it is great to see Microsoft taking an application-centric approach to cloud management.  At Kaavo, as pioneers of the approach, we feel great about this as it is another validation of what we have been working on since 2007.

For those who are wondering why application centric management is important for cloud computing, I will quote Lori MacVittie as she said it best in her blog, “when applications are decoupled from the servers on which they are deployed and the network infrastructure that supports and delivers them, they cannot be effectively managed unless they are recognized as individual components themselves.”

CIO, CTO & Developer Resources

As a reference here is the link to the transcript of the announcement, it is in the second half of the transcript when Brad Anderson, Microsoft Corporate Vice President, Management & Security talks about it briefly and introduces Chris Stirrat. For a quick overview of Kaavo technology and System Definition please watch the following short video.

Click here and scroll down to watch the video.

Jamal  is founder and CEO of Kaavo.


<Return to section navigation list> 

0 comments: