Thursday, September 02, 2010

Windows Azure and Cloud Computing Posts for 9/2/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

The Windows Azure Storage Team announced an important StorageClient Hotfix Release – September 2010 on 9/2/2010:

We have released a hotfix for the StorageClient library to address two critical issues:

1. Application crashes with unhandled NullReferenceException that is raised on a callback thread

StorageClient uses a timer object to keep track of timeouts when getting the web response. A race condition can cause access to a disposed timer object, resulting in a NullReferenceException with the following call stack:

System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.WindowsAzure.StorageClient.Tasks.DelayTask.<BeginDelay>b__0(Object state)
at System.Threading.ExecutionContext.runTryCode(Object userData)
at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading._TimerCallback.PerformTimerCallback(Object state)

This can impact any API in the library, and because the exception occurred on a callback thread and was not handled, it crashed the application. We have now fixed this issue.
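As general .NET background (my own note, not part of the Storage team's post), an unhandled exception on a timer or other callback thread always terminates the process; until the hotfix is applied, the most you can do is record such crashes from a last-chance handler so the call stack above shows up in your logs:

// Logs crashes like the one above; it cannot prevent the process from
// terminating, it only captures the exception for diagnosis.
// Register early, e.g. in a worker role's OnStart or in Main.
AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    var ex = e.ExceptionObject as Exception;
    System.Diagnostics.Trace.TraceError(
        "Unhandled exception on a background thread: {0}", ex);
};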

2. MD5 Header is not passed to Azure Blob Service in block blob uploads

The Storage Client routines UploadText, UploadFromStream, UploadFile, and UploadByteArray use block blobs to upload files. The client library used an incorrect header (x-ms-blob-content-md5 rather than the standard content-md5 header) while uploading individual blocks, so the MD5 was not checked when each block was stored in the Azure Blob service. We have fixed this by using the correct header.

Please download the update from here.

Adding the preceding hotfix to the Windows Azure SDK (June 2010) release creates the currently downloadable Windows Azure Software Development Kit (June 2010), which Microsoft’s Download Center describes as:

[Refreshed on September 1, 2010] The Windows® Azure™ SDK provides developers with the APIs, tools, documentation, and samples needed to develop Internet-scale applications that run on Windows Azure.

This release of Windows Azure SDK was refreshed on September 1, 2010. An existing installation of Windows Azure SDK can be upgraded to this release. The fixes in this refresh can be found below.

The fixes are the same as items 1 and 2 above.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

See Vassilis Touloum’s (@VassilisTouloum) request for responses to the Microsoft PDC group’s announcement of plans for OData session feeds in the Cloud Computing Events section below.


Wayne Walter Berry (@WayneBerry) recommended Compression for [SQL Azure] Speed and Cost Savings in this 9/2/2010 post to the SQL Azure team blog:

SQL Azure doesn’t currently support page-level or row-level compression like the Enterprise edition of SQL Server 2008 does. However, you can implement your own column-level compression in your data access layer to take advantage of the performance and cost savings of compression. I will discuss how to do this and provide some example code in this blog post.

Column-level compression is the concept of compressing your data before you write it to the database and decompressing it when you read it from the database. For example, instead of having a varchar(max) column of text, you have a varbinary(max) column of compressed text that holds, on average, 80% less data.

The Right Scenario

Only in certain scenarios does column-level compression work well. When compressing columns, consider the following:

  • Large text columns are the best to compress; they gain you the most savings. The gain from compression must exceed the cost of creating the compression dictionary (a result of using deflate compression); this only happens when there is a large amount of data that repeats itself. This technique benefits not only text but also large XML and binary data, depending on the content. For example, you don’t want to compress image blobs – they are usually already compressed.
  • Don’t compress columns that appear in the WHERE clause of your queries. With this technique you can’t query on the compressed text without decompressing it. You also won’t be able to query or access these fields through SSMS or load data directly using BCP.exe.
  • Compress columns that can be cached on the application side, to avoid multiple reads; this avoids the costs of decompressing the text.

An example scenario that works well is a web-based product catalog where you compress the product description, you don’t need to search within the description, and it doesn’t change very often.

The Benefits

Compression can reduce the amount of data you are storing, creating potential cost savings. It can also potentially help you stay below the maximum 50 Gigabyte database size for SQL Azure and avoid the development costs of partitioning.

In certain scenarios compression can result in speed improvements for queries that are performing full table scans on the clustered index. When dealing with large-value data types, if the data in the column is less than 8,000 bytes it will be stored in the page with the rest of the column data. If compression brings the data under 8,000 bytes, more rows can be paged at one time, giving you performance gains on full table scans of the table.

Table Modification

Compressed data should be stored in varbinary(max) columns. If you are compressing an nvarchar(max) column, you will need to create an additional column to compress your data into. You can do this with an ALTER TABLE command. Once you have the text compressed, you delete the nvarchar(max) column. Here is a little example Transact-SQL that adds a column in SQL Azure:

ALTER TABLE Images ADD PageUriCompressed varbinary(max) NOT NULL DEFAULT(0x0)
The Data Layer

Fortunately, .NET CLR 2.0 has some great compression built into the System.IO.Compression namespace with the GZipStream class. The first thing I need to do is create a throw-away console application that I will use once to compress all the existing rows into the new column. Here is what it looks like:

do
{
    using (SqlConnection sqlConnection =
        new SqlConnection(
            ConfigurationManager.ConnectionStrings["SQLAzure"].ConnectionString))
    {
        String pageUri;
        Int64 Id;

        // Open the connection
        sqlConnection.Open();

        // Pull One Row At A Time To Prevent Long Running
        // Transactions
        SqlCommand sqlCommand = new SqlCommand(
            "SELECT TOP 1 ID, PageUri FROM [Images] WHERE PageUriCompressed = 0x0",
            sqlConnection);

        using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
        {
            // WWB: Exit Do Loop When There Is No More Rows
            if (!sqlDataReader.Read())
                break;

            pageUri = (String)sqlDataReader["PageUri"];
            Id = (Int64)sqlDataReader["ID"];
        }

        Console.Write(".");

        // Compress Into the Memory Stream
        using (MemoryStream memoryStream = new MemoryStream())
        {
            using (GZipStream gzipStream = new GZipStream(memoryStream,
                CompressionMode.Compress, true))
            {
                // Unicode == nvarchar
                Byte[] encodedPageUri = Encoding.Unicode.GetBytes(pageUri);
                gzipStream.Write(encodedPageUri, 0, encodedPageUri.Length);
            }

            // Now Everything is compressed into the memoryStream
            // Reset to Zero Because We Are Going To Read It
            memoryStream.Position = 0;

            // WWB: Stream for Writing
            using (SqlStream sqlStream = new SqlStream(sqlConnection,
                "dbo",
                "Images",
                "PageUriCompressed",
                "ID",
                SqlDbType.BigInt, Id))
            {
                using (BinaryReader binaryReader = new BinaryReader(memoryStream))
                {
                    using (BinaryWriter binaryWriter = new BinaryWriter(sqlStream))
                    {
                        Int32 read;
                        Byte[] buffer = new Byte[1024];
                        do
                        {
                            read = binaryReader.Read(buffer, 0, 1024);
                            if (read > 0)
                                binaryWriter.Write(buffer, 0, read);

                        } while (read > 0);
                    }
                }
            }
        }
    }
} while (true);

Console.WriteLine("");

This code uses the SqlStream class that was introduced in this blog post. It also tries to make good use of local memory and not consume too much if the string being compressed is really big. However, this results in a very “chatty” application that creates a lot of connections to SQL Azure and runs slower than I would like.

Evaluation

My next step is to evaluate whether the compression really helped me. I do this because it can be hard to know what benefit compression will have until you have compressed your real-world data. To do this I use the DATALENGTH function in Transact-SQL to sum up the two columns, i.e. before and after compression. My query looks like this:

SELECT COUNT(1), SUM(DATALENGTH(PageUri)), SUM(DATALENGTH(PageUriCompressed))
FROM Images

The results look like this:

clip_image001

I can see that compression is actually going to reduce the size of my database in this case. In some scenarios compression makes the data bigger, usually when the data dictionary exceeds the gains from compression. As a rule of thumb, to get effective compression on text you need to have multiple repeating phrases, which happens with longer text blocks.
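If you prefer the result as a single percentage instead of raw byte counts, a small variation on the query above (my own arithmetic, assuming the same Images table) would be:

-- Approximate percentage saved by compression across the whole table.
SELECT 100.0 * (1.0 - SUM(DATALENGTH(PageUriCompressed)) * 1.0
                      / SUM(DATALENGTH(PageUri))) AS PercentSaved
FROM Images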

Code

Now that the column is compressed, you need to be able to read the compressed data back and to compress new data when writing it to the column. Here is a little example code to help you do that:

protected static String Read(Int64 id)
{
    using (SqlConnection sqlConnection =
        new SqlConnection(
            ConfigurationManager.ConnectionStrings["SQLAzure"].ConnectionString))
    {
        sqlConnection.Open();

        SqlCommand sqlCommand = new SqlCommand(
            "SELECT PageUriCompressed FROM [Images] WHERE ID = @Id",
                sqlConnection);

        sqlCommand.Parameters.AddWithValue("@Id", id);

        using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
        {
            sqlDataReader.Read();

            Byte[] compressedPageUri = 
                (Byte[])sqlDataReader["PageUriCompressed"];

            using (MemoryStream memoryStream = 
                new MemoryStream(compressedPageUri))
            {
                using (GZipStream gzipStream = new GZipStream(memoryStream, 
                    CompressionMode.Decompress))
                {
                    using (StreamReader streamReader =
                        new StreamReader(gzipStream, Encoding.Unicode))
                    {
                        return (streamReader.ReadToEnd());
                    }
                }
            }
        }
    }
}

For writing:

protected static void Write(Int64 id, String pageUri)
{
    using (SqlConnection sqlConnection =
        new SqlConnection(
            ConfigurationManager.ConnectionStrings["SQLAzure"].ConnectionString))
    {
        // Open the connection
        sqlConnection.Open();

        // Compress Into the Memory Stream
        using (MemoryStream memoryStream = new MemoryStream())
        {
            using (GZipStream gzipStream = new GZipStream(memoryStream,
                CompressionMode.Compress, true))
            {
                // Unicode == nvarchar
                Byte[] encodedPageUri = Encoding.Unicode.GetBytes(pageUri);
                gzipStream.Write(encodedPageUri, 0, encodedPageUri.Length);
            }

            // Now Everything is compressed into the memoryStream
            // Reset to Zero Because We Are Going To Read It
            memoryStream.Position = 0;

            // WWB: Stream for Writing
            using (SqlStream sqlStream = new SqlStream(sqlConnection,
                "dbo",
                "Images",
                "PageUriCompressed",
                "ID",
                SqlDbType.BigInt, id))
            {
                using (BinaryReader binaryReader = new BinaryReader(memoryStream))
                {
                    using (BinaryWriter binaryWriter = new BinaryWriter(sqlStream))
                    {
                        Int32 read;
                        Byte[] buffer = new Byte[1024];
                        do
                        {
                            read = binaryReader.Read(buffer, 0, 1024);
                            if (read > 0)
                                binaryWriter.Write(buffer, 0, read);

                        } while (read > 0);
                    }
                }
            }
        }
    }
}
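Once the one-time migration has populated PageUriCompressed for every row and the application reads and writes only the compressed column, the original nvarchar(max) column mentioned earlier can be dropped. A minimal cleanup sketch (not part of Wayne's post):

-- Drop the original uncompressed column once nothing references it.
-- (Run the DATALENGTH comparison above first if you still need the before/after numbers.)
ALTER TABLE Images DROP COLUMN PageUri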
SQL Compression

For comparison, the compression built into an on-premises SQL Server is transparent to the application and benefits a much wider range of types, from decimal and bigint all the way up to the larger types, because it is scoped to the row and the page. In other words, it can use the repetition dictionary across all the columns in the page. You can read more about SQL Server 2008 compression here.
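For reference, here is what enabling that built-in feature looks like on an on-premises SQL Server 2008 Enterprise instance (standard SQL Server syntax shown as a sketch; it was not available in SQL Azure at the time of this post):

-- On-premises SQL Server 2008 Enterprise only; transparent to the application.
ALTER TABLE Images REBUILD WITH (DATA_COMPRESSION = PAGE);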

Wayne’s compression technique might help reduce the bloat I experienced when upsizing an Access database to SQL Azure, as described in my Migrating a Moderate-Size Access 2010 Database to SQL Azure with the SQL Server Migration Assistant post of 8/21/2010. However, much of the database size expansion is due to the clustered indexes added.


Pablo M. Cibraro (@cibrax) described Managing the SO-Aware Repository with PowerShell on 9/1/2010:

As Jesus mentioned in this post, SO-Aware provides three interfaces for managing the service repository. An OData API [is provided] in case you want to integrate third-party applications with the repository; OData is a pure HTTP API that can be easily consumed on any platform using a simple HTTP client library. The management portal is an ASP.NET MVC user interface layered on top of the OData API and probably the one most people will use. And finally, a PowerShell provider also mounts on top of the OData API to allow administrators to automate management tasks over the repository with scripting.

The SO-Aware PowerShell provider, in that sense, offers around 40 commands that enable simple management scenarios, like registering bindings or services, as well as more complex scenarios that involve testing services or sending alerts when a service is not working properly.

This provider can be registered as a snap-in in an existing script using the following commands:

$snapin = get-pssnapin | select-string "SOAwareSnapIn"
if ($snapin -eq $null)
{
    Add-PSSnapin "SOAwareSnapIn"
}

Once you have registered the snapin, you can start using most of the commands for managing the repository.

The first and most important command is “Set-SWEndpoint”, which allows you to connect to an existing SO-Aware instance. This command receives the OData service location as its first argument, and it looks as follows:

Set-SWEndpoint -uri http://localhost/SOAware/ServiceRepository.svc

As a next step, you can start managing or querying data in the repository using the rest of the commands. For instance, the following example registers a new binding in the repository only if it has not already been created:

function RegisterBinding([string]$name, [string]$type, [string]$xml)
{
    $binding = GetBinding($name);
    if(!$binding)
    {
        Add-SWBinding -Name $name -BindingType $type -Configuration $xml
    }
}


function GetBinding([string]$name)
{
    $bindings = Get-SWBindings
    foreach($binding in $bindings)
    {
        if($binding.Name -eq $name)
        {
            return $binding
        }
    }
}


RegisterBinding "stsBinding" "ws2007HttpBinding" "<binding>
<security mode='Message'>
<message clientCredentialType='UserName' establishSecurityContext='false' negotiateServiceCredential='false'/>
</security>
</binding>";

As you can see, this provider brings a powerful toy that administrators in any organization can use to manage services or governance aspects by leveraging their scripting knowledge.


Brian Swan described Accessing OData for SQL Azure with AppFabric Access Control and PHP in this 9/2/2010 tutorial:

If you are having trouble making sense of the title of this post, I don’t blame you. To clear things up, here’s what this post is about (which I couldn’t fit any more concisely into a title): The SQL Azure Labs team has made it possible to consume data in SQL Azure as an OData feed. And, you can set up an OData feed so that only authenticated users (authenticated with the AppFabric access control service, or ACS) can access it. In this post, I’ll show you how to consume these protected feeds using PHP.

I did write a post a few weeks ago that described how to enable anonymous access to SQL Azure OData feeds (Consuming SQL Azure Data with the OData SDK for PHP), but I had a few things to learn about AppFabric access control before I felt comfortable writing about authenticated access to these feeds. I wrote about what I learned in these posts: Understanding Windows Azure AppFabric Access Control via PHP and Access Control with the Azure AppFabric SDK for PHP. If you aren’t familiar with AppFabric access control, I would suggest reading those posts before reading this one (although it’s not 100% necessary to). I would also suggest reading some other posts I've written if you are not familiar with OData: Retrieving Data with the OData SDK for PHP and CRUD Operations with the OData SDK for PHP. Again, not required reading, but they might prove to be helpful background reading if you decide to dig into this post.

Setting up a Protected SQL Azure OData Feed

OK, I just said the posts I listed above were not required reading. That’s not quite true. To set up a SQL Azure server and enable access to data as an OData feed, you’ll need to follow the instructions in the Creating a SQL Azure Server and Creating a SQL Azure OData Service sections of this post: Consuming SQL Azure Data with the OData SDK for PHP. Then we’ll go one step further and restrict anonymous access to the feeds.

After you have enabled a SQL Azure OData service, and while you are in the SQL Azure Labs portal, select (or leave) No Anonymous Access from the Anonymous Access User dropdown:

image

Now click on +Add User to add a user whose identity will be impersonated when we access feeds after authentication. In the resulting dialog box I’m selecting the “dbo” user (which is there by default), but you can select another user if you have added users for your database. Leave the Issuer Name blank and click Add User:

image

You should now have a User Name, Secret Key, Issuer name, and OData service endpoint (I’ve blacked out my server ID in the Issuer Name and service endpoint). You will need this information later.

image

Now we’re ready to use the OData SDK for PHP to generate classes that will make it easy to access feeds from the endpoint above.

Generating Classes with the OData SDK for PHP

To install the OData SDK for PHP (which we will use to generate classes that allow easy access to OData feeds), follow these steps:

1. Download the OData SDK for PHP here: http://odataphp.codeplex.com/.

2. Follow the directions in the Installation and Configuration section of the User_Guide.htm file (in the doc directory of the SDK download).

3. Don’t forget to re-start your Web server after making changes to your php.ini file.

4. (Optional) Add your PHP installation directory to your PATH environment variable. This will allow you to run PHP scripts from any directory.

Important Note: I found that I had to make one change to a file in the SDK in order to proceed. I did not find a flag in the command line utility for generating classes that allowed me to submit claims as part of ACS authentication. The SQL Azure Labs endpoint requires one claim: authorized=true. I hardwired this into the SDK by changing line 59 in the ACSUtil.php file to this: $this->_claims = array("authorized"=>"true");

Now to generate the classes, execute this statement from a command line prompt (without the line breaks):

php PHPDataSvcUtil.php /uri=[OData service endpoint from above.]
                       /out=[File to which classes will be written.] 
                       /auth=acs
                       /u=[Issuer Name from above.]
                       /p=[Secret Key from above.]
                       /sn=sqlazurelabs
                       /at=[Odata service endpoint from above.]

So, for example, the statement I’m executing looks something like this:

php PHPDataSvcUtil.php /uri=https://odata.sqlazurelabs.com/OData.svc/v0.1/MySvrId/Northwind
                       /out=C:\PHPLibraries\odataphp\NorthwindACSProxies.php
                       /auth=acs
                       /u=https://odata.sqlazurelabs.com/OData.svc/v0.1/MySvrId/Northwind/dbo
                       /p=AAAAABBBBBCCCCC111112222233333DDDDDEEEEEFFF=
                       /sn=sqlazurelabs
                       /at=https://odata.sqlazurelabs.com/OData.svc/v0.1/MySvrId/Northwind

Note that sqlazurelabs is the AppFabric service namespace used here to issue tokens.

Using the Generated Classes

Now we are ready to use the generated classes. We’ll start off by including the generated classes, defining variables that we’ll use throughout the script, and creating an instance of the class that we’ll use for executing queries. Note that we again use information from above: the $wrap_name variable is set to the Issuer Name, the $wrap_password variable is set to the Secret Key, and the $wrap_scope variable is set to the OData service endpoint.

require_once "NorthwindACSProxies.php";

$service_namespace = "sqlazurelabs";
$wrap_name = "https://odata.sqlazurelabs.com/OData.svc/v0.1/MySvrId/Northwind/dbo";
$wrap_password = "AAAAABBBBBCCCCC111112222233333DDDDDEEEEEFFF=";
$wrap_scope = "https://odata.sqlazurelabs.com/OData.svc/v0.1/MySvrId/Northwind";
$claims=array('authorized'=>'true');

$svc = new Northwind();
$svc->Credential = new ACSCredential($service_namespace,
                                     $wrap_name,
                                     $wrap_password,
                                     $wrap_scope,
                                     $claims,
                                     null //proxy
                                     );

Now we can easily get all customers

$query = $svc->Customers();
$customers = $query->Execute();
foreach($customers->Result as $customer)
    echo $customer->CompanyName.": ".$customer->ContactName."</br>";

…or use a filter to get a particular customer

$query = $svc->Customers()->filter("CustomerID eq 'ALFKI'");
$customers = $query->Execute();
foreach($customers->Result as $customer)
    echo $customer->CompanyName.": ".$customer->ContactName."</br>";

…and we can get the orders for a customer

$query = $svc->Customers()->filter("CustomerID eq 'ALFKI'");
$customers = $query->Execute();
foreach($customers->Result as $customer)
{
    echo $customer->CompanyName.": ".$customer->ContactName."</br>";
    $orders = $svc->LoadProperty($customer, 'Orders');
    foreach($customer->Orders as $order)
        echo $order->OrderID . "<br/>";
}

To add a customer, do the following…

try
{
   $newCustomer = Customer::CreateCustomer("BRIAN");
   $newCustomer->CompanyName = "Microsoft";
   $newCustomer->ContactName = "Brian";
   $svc->AddToCustomers($newCustomer);
   $svc->SaveChanges();
}
catch(ODataServiceException $exception)
{
   echo $exception->getError();
}

…to update a customer

try
{
    $query = $svc->Customers()->filter("CustomerID eq 'BRIAN'");
    $customer = $query->Execute();
    $customer->Result[0]->ContactName = "Brian Swan";
    $svc->UpdateObject($customer->Result[0]);
    $svc->SaveChanges();
}
catch(ODataServiceException $exception)
{
   echo $exception->getError();
}

…and, finally, to delete a customer

try
{
    $query = $svc->Customers()->filter("CustomerID eq 'BRIAN'");
    $customer = $query->Execute();
    $svc->DeleteObject($customer->Result[0]);
    $svc->SaveChanges();
}
catch(ODataServiceException $exception)
{
   echo $exception->getError();
}

Accessing OData Feeds without the OData SDK for PHP

While I, personally, like the style of code above (a style similar to .NET code for Microsoft’s Entity Framework, for which I spent quite a bit of time writing documentation), it does seem to obfuscate the simplicity that OData promises. So, if you aren’t familiar with the code style above (or if you simply don’t like it) and you want to access relational data with a URL,  you are in luck: you don’t have to use the OData SDK for PHP to access SQL Azure OData feeds. Really, all you have to do is access a URL, like https://odata.sqlazurelabs.com/OData.svc/v0.1/MySvrId/Northwind/Customers to get all customers. The only tricky part is when this feed requires ACS authentication. But solving that problem isn’t hard: once you have requested and received a security token from ACS, you simply need to add it to an Authorization header for a GET request and you’ll get the data you want. In other words, once you have requested and received a token (as shown in this post using cURL), then this code will get all customers:

$token = 'WRAP access_token="'.$token.'"';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://odata.sqlazurelabs.com/OData.svc/v0.1/MySvrId/Northwind/Customers");
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Authorization: ".urldecode($token)));
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER,  true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);

$customers = curl_exec($ch);
curl_close($ch);

print("<pre>");
print_r($customers);
print("</pre>");

That’s it for today. As usual, I hope this post is at least interesting, if not actually useful.


James Senior interviewed Jonathan Carter in a 00:21:16 Web Camps TV #2 - OData Overview with Jonathan Carter segment:


This week on Web Camps TV, James Senior talks to Jonathan Carter about the new craze sweeping the world of services - OData - the new web protocol for querying and updating data. Jonathan describes how OData (short for Open Data Protocol) is great for companies that host services and allows developers to build cool apps on their APIs with zero ramp-up time because they are familiar with the standards-based approach. We explore the way to consume a typical OData service and the different ways to query an API.

Learn more about OData at our free Web Camps events - sign up today!


Cihangir Biyikoglu described Database Copy Command & Handling Asynchronous Execution in T-SQL in SQL Azure on 9/1/2010:

The Database Copy T-SQL command in SQL Azure provides the ability to create an identical copy of a database through a pure server-side process. You can think of it as a command that backs up a database and restores it at the other end. You can refer to the general documentation for the details and use cases. The Database Copy command works on any size database and thus, in general, requires a long transaction. Just like the SQL Server Backup command, the operation takes a while and creates a point-in-time consistent copy synchronized to the completion of the command.

One unique aspect of this command in SQL Azure is that the system provides further robustness through built-in retry logic for the operation. Another unique aspect is how the transaction is executed: a long synchronous transaction would require a live connection and correct query timeout values to be in place for the whole duration of the database copy operation. When operating remotely on your database server over an internet connection, these requirements could be challenging. That is why the database copy operation runs asynchronously.

The copy is kicked off with the following command:

CREATE DATABASE destination_db AS COPY OF source_srv.source_db

This creates the destination_db in a COPYING state on the destination server. The rest of the transfer is performed asynchronously.
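While a copy is in flight, you can poll its progress from the destination server's master database. A quick status-check sketch (column names as documented for SQL Azure's sys.dm_database_copies at the time; verify against your server):

-- State of the new database (COPYING until the copy completes).
SELECT name, state_desc FROM sys.databases WHERE name = 'destination_db'

-- Progress and any error details for in-flight copies.
SELECT database_id, start_date, modify_date, percent_complete, error_code, error_desc
FROM sys.dm_database_copies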

The asynchronous nature of the command works great in most cases. However, when scripting small database copy operations, it may be more challenging to write a script that creates a database through a copy command and then, just like classic scripts today, works with the new database in the next line of the script. Take the following script, for example:

-- connect to master and create the db as a copy of another db
CREATE DATABASE test145 AS COPY OF test123 
GO 
-- connect to the test_db and install schema * this step may fail since DB copy may not be done after the control returns to the script.
CREATE TABLE t1(c1 primary key, ...) 
GO

The good news is that it is fairly easy to convert an async operation to a synchronous operation through a simple script such as the one below. The additional steps block until the database copy is complete and the destination database reaches the ONLINE state.

-- connect to master and create the db as a copy of another db
CREATE DATABASE test145 AS COPY OF test123 
GO 
DECLARE @state nvarchar(60) 
DECLARE @dbid int 
DECLARE @error_code int 
DECLARE @error_desc nvarchar(4000) 
DECLARE @error_severity int 
DECLARE @error_state int 
SELECT @dbid=database_id, @state=state_desc FROM sys.databases WHERE name='test145' 
WHILE (@state <> 'ONLINE') 
BEGIN 
    SELECT @state=state_desc FROM sys.databases WHERE database_id=@dbid 
    IF (@state NOT IN ('COPYING', 'ONLINE')) 
    BEGIN 
        SELECT TOP 1 @error_code=error_code, 
                @error_desc=error_desc, 
                @error_severity=error_severity, 
                @error_state=error_state 
        FROM sys.dm_database_copies 
        WHERE database_id=@dbid 
        RAISERROR (@error_desc, @error_severity, @error_state) 
        BREAK 
    END 
    ELSE 
        -- delay for another 10 seconds 
        WAITFOR DELAY '00:00:10.000' 
END
GO
-- connect to the test_db and install schema * this step will not fail since DB copy is done
CREATE TABLE t1(c1 primary key, ...) 
GO

Enjoy.


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

No significant articles today.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Jim O’Neill continues his Azure@home series with Azure@home Part 6: Synchronous Table Storage Pagination of 9/2/2010:

This post is part of a series diving into the implementation of the @home With Windows Azure project, which formed the basis of a webcast series by Developer Evangelists Brian Hitney and Jim O’Neil.  Be sure to read the introductory post for the context of this and subsequent articles in the series.

So where were we before vacations and back-to-school preparations got in my way!? Part 5 of this continuing series talked about the underlying REST protocol that’s used by the Azure Storage API, and in that discussion, we touched on two query classes:

  • DataServiceQuery, which will return at most 1000 entities, along with continuation tokens you can manage yourself to retrieve the additional entities fulfilling a given query, and
  • CloudTableQuery, which includes an Execute method you call once to return all of the data with no further client calls necessary.  Essentially, it handles the continuation tokens for you and makes multiple HTTP requests (each returning 1000 entities or less) as you materialize the query.  Recall, you can use the AsTableServiceQuery extension method on a DataServiceQuery to turn it into a CloudTableQuery and access this additional functionality.

In the status.aspx page for Azure@home there are two collection displays – a Repeater control (InProgress) showing the work units in progress and a GridView (GridViewCompleted) displaying the completed work units.  The original code we’ve been looking at has the following line to retrieve all of the work units:

var workUnitList = ctx.WorkUnits.ToList<WorkUnit>();

We know now that code will result in at most 1000 entities being returned.  Presuming Azure@home has been chugging away for a while, there may be more than 1000 completed work units, and as implemented now, we’ll never get them all.  Additionally, since the table data is sorted by PartitionKey, and the partition key for the workunit table is the Azure WorkerRole instance ID, the entities you do get may change over time – it’s not as if you’re guaranteed to get the first 1000 work units completed or the last 1000.

It’s simple enough to replace the line above with

var workUnitList = ctx.WorkUnits.AsTableServiceQuery().ToList<WorkUnit>();

and all of the data will be returned, all 10 or all 10,000 entities – whatever happens to be in the table at the time.  Obviously we need a middle-ground here: control over the pagination without bringing down massive amounts of data that the user may never look at. 

While it’s conceivable you could have thousands of in-progress work units (each running in a worker role), that’s costly and beyond what you’ll be able to deploy as part of a typical Windows Azure account (20 roles is the default limit).  To save some time and complexity then, I’m not going to worry about paginating the InProgress Repeater control. 

You could, though, certainly accumulate a lot of completed work units, especially if you are leveraging an offer such as the Windows Azure One Month Pass.  So for purposes of this discussion, the focus will be on a pagination scheme for the GridView displaying those completed work units.

As you might have expected by the existence of two similar classes (DataServiceQuery and CloudTableQuery), there are actually two mechanisms you can use to implement the pagination, one synchronous (via DataServicesQuery) and the other asynchronous (via CloudTableQuery).  This post will focus on the former, and in the next post, we’ll transform that into an asynchronous implementation.

Some Refactoring

It was my goal to minimally disrupt the other code in Azure@home and confine modifications solely to status.aspx.  To accomplish that I had to do a bit of refactoring and introduce a utility class or two.  The completed implementation of the changes to status.aspx (and the code-behind files) is attached to this blog post, so you should be able to replace the original implementation with this code and give it a whirl. …

Jim continues with a substantial amount of refactored classes and a long ASP.NET page lifecycle section. He concludes …

Granted this post was a tad lengthy (when are mine not!), but hopefully it was straightforward enough for you to understand the pagination mechanism and how to code for it in your own applications.  The devil’s always in the details, but the primary takeaway here is:

When using DataServiceQuery, always expect continuation tokens unless you’re specifying both PartitionKey and RowKey in the query (that is, you’re selecting a single entity).
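To make that concrete, here is a minimal sketch (mine, not Jim's actual implementation, which is in the code attached to his post) of paging by hand with DataServiceQuery. The WorkUnit type and ctx context come from the Azure@home code, the page size of 20 is arbitrary, and the relevant namespaces are System.Linq and System.Data.Services.Client:

// Fetch one page of work units and capture the continuation tokens, if any.
var query = (DataServiceQuery<WorkUnit>)ctx.WorkUnits.Take(20);
var response = (QueryOperationResponse<WorkUnit>)query.Execute();
var page = response.ToList();

// The table service returns continuation tokens as response headers
// whenever more entities satisfy the query.
string nextPartitionKey, nextRowKey;
response.Headers.TryGetValue("x-ms-continuation-NextPartitionKey", out nextPartitionKey);
response.Headers.TryGetValue("x-ms-continuation-NextRowKey", out nextRowKey);

// To fetch the next page, replay the query with the tokens as query options.
if (!String.IsNullOrEmpty(nextPartitionKey))
{
    var nextQuery = ((DataServiceQuery<WorkUnit>)ctx.WorkUnits.Take(20))
        .AddQueryOption("NextPartitionKey", nextPartitionKey);
    if (!String.IsNullOrEmpty(nextRowKey))
        nextQuery = nextQuery.AddQueryOption("NextRowKey", nextRowKey);

    var nextPage = ((QueryOperationResponse<WorkUnit>)nextQuery.Execute()).ToList();
}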

In the next post, I’ll take this code and transform it using a later addition to the StorageClient API (CloudTableQuery) that will handle much of the continuation token rigmarole for you.


The Windows Azure Team posted a Real World Windows Azure: Interview with Melvin Greer, Chief Strategist of Cloud Computing, Information Systems, and Global Solutions at Lockheed Martin case study on 9/2/2010:

As part of the Real World Windows Azure series, we talked to Melvin Greer, Chief Strategist of Cloud Computing, Information Systems, and Global Solutions at Lockheed Martin, about using the Windows Azure platform to develop the company's Thundercloud™ design pattern. Here's what he had to say:

MSDN: Tell us about Lockheed Martin and the services you offer.

Greer: Lockheed Martin is well known as a premier defense contractor. We are principally engaged in the research, design, development, manufacture, integration, and sustainment of advanced technology systems, products, and services. We also develop innovative IT solutions for government, healthcare, and energy markets.

MSDN: What were the biggest challenges that Lockheed Martin faced prior to implementing the Windows Azure platform?

Greer: We wanted to give our customers the benefits of cloud computing, such as high performance, flexibility, and a consumption-based pricing model. At the same time, though, particularly for our federal government customers, we need to enable them to balance security, privacy, and confidentiality concerns.

MSDN: Can you describe the solution you built with Windows Azure to deliver the benefits of cloud computing while addressing security and privacy issues?

Greer: We used the Windows Azure platform to develop Thundercloud design patterns, which help customers integrate on-premises IT infrastructure with computing, storage, and application services in the cloud and then extend those applications to a remote, portable, or handheld mobile device. By powering Thundercloud applications with Windows Azure, our customers can add or remove computing resources to a solution quickly, paying only for what they use. Developers can also use Windows Azure platform AppFabric to enhance Thundercloud-based applications by integrating on-premises enterprise data sets and security methods to computing resources in the cloud.

MSDN: What makes your solution unique?

Greer: Design patterns are important tools used by engineers, architects, and software developers to provide technical solutions faster, with consistent compliance to best practices, all at a lower cost. With Windows Azure, customers can take advantage of Thundercloud design patterns to build applications with familiar tools, such as the Microsoft .NET Framework 3.5 and Microsoft SQL Azure.

MSDN: Have you offered Thundercloud to any new markets since implementing the Windows Azure platform?

Greer: What we have done is built five innovative mission-focused end-user applications: Healthcare Case Management, Augmented Reality for First Responders, Weather and Ocean Observing, Records and Information Management, and Biometric-Enabled Identity Management. We did this in order to effectively demonstrate the viability of the pattern, data portability from one cloud to another, interoperability between clouds, and extension of cloud services to multiple mobile devices.

MSDN: What kinds of benefits are you realizing with Windows Azure?

Greer: By using Windows Azure, we're able to give our customers the IT capacity that comes with cloud computing and storage, but at a savings of 40 to 60 percent over traditional infrastructure costs; the federal government spends nearly U.S.$80 billion on IT capabilities, so that's potentially a significant savings. At the same time that customers reap the benefits of cloud computing, they can do so while still preserving their confidential data in their own on-premises infrastructure; that's the beauty of the Windows Azure platform and AppFabric.

Read the full story at: www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000007971

To read more Windows Azure customer success stories, visit:
www.windowsazure.com/evidence


J. D. Meier reported Now Available: patterns & practices Parallel Programming with Microsoft .NET on 9/1/2010:

patterns & practices Parallel Programming with Microsoft .NET is now available.  The book shows design patterns to help developers use the .NET 4 Task Parallel Library (TPL) to write parallel applications successfully.

Contents at a Glance

The Patterns
The book describes six key parallel patterns for data and task parallelism and how to implement them using the TPL.

image
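As a flavor of what the TPL makes possible (my own toy example, not code from the book), the simplest data-parallel pattern is a loop whose independent iterations are spread across cores:

using System;
using System.Threading.Tasks;

class ParallelLoopSample
{
    static void Main()
    {
        double[] results = new double[1000000];

        // Each iteration is independent, so the TPL can partition the
        // range across the available cores.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i);
        });

        Console.WriteLine(results[results.Length - 1]);
    }
}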

The Book

The Code Samples

The Talk

The Community


K. G. Sreeju Nair described Preparation of local Development Environment for Windows AZURE on 8/31/2010:

Windows Azure is the cloud services platform provided by Microsoft through Microsoft data centers. Windows Azure enables developers to create applications running in the cloud by using the Microsoft Visual Studio development environment and the Microsoft .NET Framework. In this article I am talking about setting up a development environment that allows you to develop applications for the cloud using Visual Studio and the Windows Azure SDK.

Prerequisites

This article assumes that you have already installed Visual Studio 2010, the .NET Framework 4, and SQL Server (Express edition of 2005 or 2008). You also need Windows Vista SP1 or higher, Windows 7, or Windows Server 2008 as the OS for the development environment. This article uses Windows 7 to set up the environment.

Enable IIS 7 on Windows 7

You need to enable IIS 7 with ASP.NET. The steps to configure IIS 7 are as follows:

  • Navigate to Start -> Control Panel -> Programs and Features.
  • Select “Turn Windows Features On or Off”.
  • Under Internet Information Services, expand World Wide Web Services.
  • Under Application Development Features, select ASP.NET.
  • Under Common HTTP Features, select Static Content; the screen will look similar to the following:

clip_image001

  • Click OK to install the selected features.
Download the Windows Azure SDK

You can download the Windows Azure SDK from MSDN. Follow the link http://msdn.microsoft.com/en-us/library/dd179367.aspx

On the MSDN page, there are two versions of the SDK available: one as a standalone product, the other for Visual Studio. Since you need to use Visual Studio as the development tool, choose the option “Windows Azure Tools for Microsoft Visual Studio”, which will redirect you to the download page. Download the latest version from there.

*If you have any previous versions of the SDK installed, remove them before installing the new one.

Once you have downloaded the SDK, install it by double-clicking the Windows Installer file. The installation wizard will open:

clip_image002

Click Next to continue with the installation. You will see the license page:

clip_image003

Select the checkbox “I have read and accept the license terms”.

Click Next; you will reach the “system requirements check” page. Here the installer checks disk space and any processes or services that affect the installation process. For example, if you have Visual Studio 2010 open, it will show as an incompatible process and you will need to close Visual Studio and click the Refresh button.

clip_image004

If all the requirements are met, click Next and the installation will start. The screen shows the installation progress.

clip_image005

Once completed, it will prompt you with a success or failure message.

In case of failure, refer to the log file. The installation success screen will be similar to the following; you can view the log file if you want to know more about the installation. At this point, click the Finish button to exit the wizard.

clip_image006

By default, the SDK is installed into the C:\Program Files\Windows Azure SDK\(version) directory. If you navigate to the folder, you will see the file structure of the folder as follows.

clip_image007

Now that I have installed the SDK, the first question that arises in anybody’s mind is “How do I test this?” The SDK ships with two sets of sample projects, one for C# and one for VB. You need to extract the version of your choice to a directory where you have write access; by default you will not have write access to “Program Files”. I extracted the samples.cs file to the c:\test folder. Once extracted, the folder will have the following contents:

clip_image008

Now you need to test the installation. In order to do this, launch the SDK command prompt.

Go to Start -> Program Files -> Windows Azure SDK v1.2 -> Windows Azure SDK Command Prompt.

Navigate to the helloWorld folder under c:\test (or to the location where you extracted the samples). You can navigate to the helloworld folder by executing the command “cd c:\test\helloworld”.

clip_image009

Type runme.cmd at the command prompt to run the sample application; you will see the following output.

clip_image010

A browser will automatically open with “Hello world” as text. Also you will see the Windows Azure Simulation Environment icon appear in the system tray.

clip_image011

The browser output will be as follows

clip_image012

Now your development environment is ready.

Thanks to the US ISV Evangelism group’s Bruce Kyle for the heads-up.


Dmitry Lyalin and Peter Laudati interviewed Steve Marx on Azure in a Connected Show podcast on 8/30/2010:

In this episode, the one and only Windows Azure Tactical Strategist, Steve Marx [pictured], joins Dmitry and Peter to give us an update on the Windows Azure platform. Steve talks about common real world Windows Azure use patterns, including storage and compute instance configurations. Steve uses some strategic tactics to tell us what's in the tea leaves for the future of Azure. Peter also responds to 'cat ladies & acne-laden teenagers' by sharing 'The Memo'.


Abdom Nacif posted five patterns that claim to determine the answer to Is your application suitable for Windows Azure? on 8/27/2010 to the Credera blog (missed when posted):

Some IT professionals describe cloud computing as an “amorphous” thing. It is hard to clearly understand what it is, and most importantly, to know if it works for your organization. The cloud is not only difficult to understand, but it is growing and changing at such a fast pace that it is hard to keep up with it and understand what it offers. Nonetheless, the cloud is not an ‘all or nothing’ deal; you need to assess the readiness of each application of your organization.

To understand cloud computing, you need to assess what technical, monetary, and marketing benefits it offers for each of your applications. There are five simple patterns that help assess whether the applications in your organization are suitable for the cloud or not; more specifically, whether they are suitable for Microsoft’s cloud computing platform, Windows Azure.

1. Transferance. This is about understanding how you actually take what the application has on-premise and push it out to the Azure platform. This is really about migration – what does the app look like in the cloud? You might even want parts of the application hosted in Azure while still keeping parts of it on-premise. For example, you might want to manage identity on-premise and have the rest of the application in Azure. If so, then you have to understand how these parts would communicate – Windows Azure offers services for effective communication. This pattern also considers the type of data that you manage and store in the application; if the data is regulated you might want to keep it on-premises – there is still some debate as to whether the cloud is compliant with PCI, HIPAA, and SOX.

2. Scale and multi-tenancy. Windows Azure differentiates the web tier of your application from the worker tier (back-end compute tier) for different reasons, one of them scalability - in case you need to achieve scale on only one tier. This pattern involves the web tier. If you are not sure how big and at what rate your application will grow or shrink, Windows Azure’s cloud provides real time capability to increase or decrease the number of web tier instances. For example, if you are creating a new web site that your team estimates will be receiving 10,000 hits per month in 6 months and you are hosting it on-premises, you will most likely buy the capacity now for 10,000 hits and expect that it will need exactly that. While if you host it on Windows Azure, you can, on a daily basis, set the number of web-tier instances you need based on the hits you are receiving. Since Windows Azure offers a pay as you go model, you only pay for the capacity you use per day.

  • A recent (July 2010) survey of IT and business decision makers sponsored by Savvis showed that 76 % of survey takers see the “lack of access to IT capacity” as a barrier to business progress, hence Windows Azure can help with business agility. Kelley Blue Book migrated its applications to Windows Azure and it now takes them 6 minutes to boost capacity (vs. six-week turnaround for on-premise hardware).

3. Burst compute. This pattern involves the worker/compute tier of your applications. You can use Windows Azure for some tasks on the computing tier of your application that might need to be scaled up and down really quickly, for high compute power for short periods of time. For example, if your application needs to process heavy imaging at the end of every month, you can use the “infinite” compute power of Windows Azure. More importantly, if one month you need to process 100 times the number of images that you normally process, you can quickly increase the number of instances of your application’s worker tier for one day and then shrink it back to normal. To illustrate further, Windows Azure could also be used to host a tax application that needs a lot of capacity for only a couple of weeks per year – 2 to 3 weeks before Apr 16.

4. Elastic storage. Currently, storage is inexpensive on-premises. However, the actual maintenance of the storage is the greatest expense; adding, removing, and swapping disks in a changing environment can become very difficult to handle. In addition, you need to worry about backing up, recovering, etc. On Windows Azure, you can smoothly add/remove storage capacity, paying as you go, and Microsoft takes care of everything else, such as automatically double-backing up your data behind the scenes.

  • Furthermore, the Windows Azure Content Delivery Network (CDN) has 18 global locations strategically placed to provide maximum bandwidth for delivering content to users. For example, Glympse, which provides a location-sharing application for GPS-enabled phones, moved from Amazon’s cloud to Windows Azure and mentioned that the performance of Azure exceeded that available on Amazon.

5. Inter-organizational communication. This pattern involves Azure’s offering through its Service Bus component, by ensuring easy and effective communication of applications that have firewalls in between. For example, if your cloud-based or on-premise application needs to share some functions with a partner’s application, they can use the Service Bus functionality to give direct access without worrying about the firewall. Otherwise, you need to create specific mediums/packages of communications, which then becomes even more difficult to handle if you add a third partner application to the communication.

  • The inter-organizational communication issue will grow further since we are running out of public IPv4 addresses and a lot of organizations are doing network address translation, which makes communication more difficult.

Again, deciding where your applications should be hosted is not an ‘all or nothing’ deal; you need to determine the best place for each application. It is about having a good strategy for your organization’s applications, considering your current investments in hardware, the specifications of your applications, and the future expectations for the use and growth of each of them.

In case you want to test Windows Azure, Microsoft is currently offering a Introductory Special until October 31, 2010. If you want to assess the cost savings of using Windows Azure, you can use Azure’s TCO and ROI calculator.


Return to section navigation list> 

VisualStudio LightSwitch

Matt Thalman posted Using application permissions in Visual Studio LightSwitch on 9/2/2010:

Securing your Visual Studio LightSwitch application begins with defining permissions.  In the first version of LightSwitch, developers are responsible for defining and consuming their own permissions.  Once permissions have been defined, you can write the business logic to consume them where appropriate.  As part of the development process, only permissions and their associated logic need to be defined.  Once the application has been published, it can be administered to add users and roles and assign permissions.

The following is a walkthrough showing what is involved in this process:

To start with, I’ve created a simple Customer table:

Customer Table

In this application, I want to secure the ability to delete customers.  To do this, I need to create a new permission.  Permissions can be defined in the Access Control tab of the application properties.  To get to the application properties you can double-click the Properties node in Solution Explorer or right-click the Application node and select Properties.  Before permissions can be added, you need to enable authentication by choosing whether to use Windows or Forms authentication (see my previous blog post for more info on this).  For my purpose, I’ve chosen to use Forms authentication.  Now that authentication is enabled, I can add permissions.  I’ve defined a permission named CanDeleteCustomer that provides the ability for users to delete customers:

Permissions Grid

With the permission defined, I can now write some code to secure the customer table.  To do this, I navigate back to the Customer table and select “Customers_CanDelete” from the Write Code drop-down button:

Write Customer Code

This opens up the code file for the customer table and generates the method stub where I can write my logic. 

partial void Customers_CanDelete(ref bool result)
{
    result = this.Application.User.HasPermission(
        Permissions.CanDeleteCustomer);
}

A few things to note about this code:

  • The result parameter is assigned a value that indicates whether the user can delete the customer.
  • The User object is accessible from the Application class.  The User object provides all sorts of information about the current user, most notably whether they have a specific permission.
  • The Permissions class is a generated class that contains the permissions that were defined in the Access Control tab.
  • This code runs on the server not the client.  This ensures that the customer table will always be protected from a client that may not enforce this permission.
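The same approach extends to the other access-control method stubs in the Write Code drop-down. A hypothetical sketch, assuming a second permission named CanEditCustomer had also been defined in the Access Control tab and that the table exposes a corresponding Customers_CanUpdate hook:

partial void Customers_CanUpdate(ref bool result)
{
    // CanEditCustomer is a hypothetical permission; define it in the
    // Access Control tab before referencing it here.
    result = this.Application.User.HasPermission(
        Permissions.CanEditCustomer);
}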

Ok, let’s test this to ensure it’s behaving as expected.  First, I create an editable grid screen for the customer table.  Now I’ll run the app and see how it looks.

Customer Grid

Here you’ll see that the Delete button is disabled.  By default, the currently running user doesn’t have the CanDeleteCustomer permission, and the UI for the grid reflects this.  So now you may be wondering how we can run as a user that does have that permission.  To do that, I can just go back to the Access Control tab and check the “Granted for debug” checkbox for the CanDeleteCustomer permission.

Granted for debug

Now when I run the app, the Delete button is enabled.

Customer Grid

In a subsequent blog post, I’ll describe how you assign permissions in a published application.


Tim Heuer reported on 9/1/2010 that the Silverlight 4 service release (September 2010) solves a LightSwitch compatibility problem:

image Today we released an update to Silverlight 4 (update build is 4.0.50826.0) along with an updated SDK.  We appreciate our customers’ patience on working with us to help identify and verify necessary updates to this service release.  You can find all the details in KB2164913.  Here are the relevant highlights:

  • SDK feature to enable Add New Row capabilities in DataGrid control
  • Improving startup performance of Silverlight applications
  • Adding mouse wheel support for out-of-browser applications on the Mac platform
  • Various media-related fixes around DRM content
  • Fixed memory leak when MouseCapture is used
  • Fixed memory leak for DataTemplate usage

All the installer links have been updated to leverage these new bits for our customers.

For Visual Studio LightSwitch Users

image22When Visual Studio LightSwitch shipped, it included a pre-release build of Silverlight 4 that was later than the public release.  This caused some issues for customers who were using a single machine both to evaluate LightSwitch and to build regular Silverlight 4 applications.  Any Silverlight 4 application developed and deployed from that machine would give end users messages indicating that they needed a later version of Silverlight, which they were unable to acquire!

This is now solved with this service release.  Simply put: If you are using LightSwitch, install the updated developer runtime and SDK.  This will solve this issue and allow you to develop LightSwitch applications as well as production Silverlight 4 applications.

For end users

For end users, simply installing the updated runtime will provide them with the fixes and features in this service release.  The best way to encourage (or effectively require) your users to upgrade to this service release is to leverage the minRuntimeVersion parameter of your object tag:

   <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="640" height="400">
     <param name="source" value="YOUR_PATH_TO_XAP" />
     <param name="background" value="white" />
     <param name="minRuntimeVersion" value="4.0.50826.0" />
     <param name="autoUpgrade" value="true" />
     <a href="http://go.microsoft.com/fwlink/?LinkID=149156&amp;v=4.0.50826.0" style="text-decoration: none">
         <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style: none" />
     </a>
  </object>

Notice the minRuntimeVersion and autoUpgrade parameters above.  If the end user’s installed runtime is older than the version your application requires, they will be prompted to upgrade; these two parameters are the minimum needed to move your users to the later version. Ideally you would follow good installation experience guidance (see the “Installation Experience Whitepaper” with complete sample source code) to customize your install and/or upgrade experience.

For Developers

If you are a developer and authoring Silverlight applications you may want to grab the new developer bits and updated SDK:

I would install the developer build first and then the SDK and you’ll have a refreshed environment.  As with any release we try to get you the information as soon as possible and sometimes the information flows faster than the download servers replicate.  If you aren’t getting the updated bits using the links above, please be patient as the download servers from Microsoft refresh their replication caches. 

Note that when you now create a new project you’ll be using the new SDK, so the minRuntimeVersion in the project templates (see above), as well as the compiled bits of your SL4 application, will require the updated runtime.

There are NO Visual Studio tools updates for this release, so you do not need to re-install the Silverlight4_Tools.exe package.

Hope this helps!  As always if you have feedback on Silverlight, here are some methods of providing feedback to our team.


Beth Massi (@BethMassi) reported More LightSwitch How Do I Videos Released Today on 9/1/2010:

image22I’m back at it with three more “How Do I” videos on Visual Studio LightSwitch. This continues the series I started last week.

Video #8 shows off the impressive validation framework that Prem wrote about and some of the slickness around validating sets of data on the client and the middle-tier. If you missed the first 5 videos here they are:

image The videos in this series build on one another as we create a sample application for order entry. It’s looking pretty good now! You can access all the videos from the LightSwitch Developer Center Learn page. In the next set of videos I’ll show you how to create and sort lookup data, how to create a single screen for both updating and adding new records, and lots more goodies.

Enjoy!


No significant articles today.

<Return to section navigation list> 

Windows Azure Infrastructure

The Windows Azure Team announced Two New Nodes for the Windows Azure CDN Enhance Service Across Asia on 9/2/2010:

imageWe are happy to announce that two additional nodes for the Windows Azure Content Delivery Network are now live in Seoul, Korea and Taipei, Taiwan. These two new nodes bring the total number of CDN physical nodes available globally to 22, with six of these nodes available in the Asia-Pacific region. These two new locations expand our service in the Asia-Pacific region and enhance the delivery of performance-sensitive content for customers choosing to serve data through the CDN*.

Please read our previous post, "20 Nodes Available Globally for the Windows Azure CDN" for a list of the other 20 locations.

Offering pay-as-you-go, one-click-integration with Windows Azure Storage, the Windows Azure CDN is a system of servers containing copies of data, placed at various points in our global cloud services network to maximize bandwidth for access to data for clients throughout the network. The Windows Azure CDN can only deliver content from public blob containers in Windows Azure Storage - content types can include web objects, downloadable objects (media files, software, documents), applications, real time media streams, and other components of Internet delivery (DNS, routes, and database queries).

For details about pricing for the Windows Azure CDN, read our earlier blog post here.

*A Windows Azure CDN customer's traffic may not be served out of the physically "closest" node; many factors are involved including routing and peering, Internet "weather", and node capacity and availability.  We are continually grooming our network to meet our Service Level Agreements and our customers' requirements.


David Linthicum asked and answered Will cost savings continue to be a significant driver for cloud computing? in a 9/2/2010 post to ebizQ’s Where SOA Meets Cloud blog:

image Yes, but it's not the only driver. There can be substantial cost benefits when leveraging cloud computing, but as we pointed out, your mileage may vary. You have to consider the cost holistically with other factors, including strategic benefits that are typically harder to define but are there nonetheless.

image It's easy to determine that cloud computing is less expensive than traditional on-premise computing by simply considering the operating expenses. The real benefit of cloud computing (or more specifically, SOA using cloud computing) is the less-than-obvious value it brings to an enterprise, including:

  • The benefit of scaling.
  • The benefit of agility.

The benefit of scaling means that cloud computing provides computing resources on-demand. As you need those resources, you simply contact your cloud computing provider and add more capacity by paying more money. You can do this in a very short period of time, typically less than a day, and thus avoid the latency, expense, and risk of going out to purchase hardware and software that takes up data center space, and the traditional time required to scale up an application in support of the business.

The use of cloud computing resources allows you to go in the other direction as well. You can remove capacity, and thus expense, as needed to support the business. If the business contracts and the number of transactions is not what it used to be, you can reduce your costs by simply reducing the computing resources you consume from the cloud computing provider. There's no need to power down expensive servers of your own and leave them sitting idle.

The benefit of agility means that our SOA using cloud computing architecture can be easily changed to accommodate the needs of the business since it uses services that are configured via a configuration or process layer. For instance, if you add a new product line that needs specific processes altered to accommodate the manufacturing of that product, the sale of that product, and the transportation of that product, those processes are typically changeable by making a configuration change, and not by driving redevelopment from the back-end systems.

While this is a core benefit of SOA in general, the use of cloud computing resources enhances agility, since cloud resources are commissioned and decommissioned as needed to support the architecture and changes to it. You can bind in logistics processes from an application-as-a-service provider that supports the movement of the new product from the factory to the customer. Since you leverage a pre-built service out of the cloud, you don't have to suffer the time and expense of building that service from scratch.

However, cloud computing does not provide a cost benefit in all cases. You have to closely look at each problem domain and business, and do an objective cost analysis to determine the true benefit. The tendency is to go with what seems trendy in the world of enterprise computing. While cloud computing may be of huge benefit to your enterprise IT, you have to consider all of the angles.

The Windows Azure Team released Windows Azure Guest OS 1.6 (Release 201008-01) with stability and security patches on 9/1/2010:

imageThe following describes release 201008-01 of the Windows Azure Guest OS 1.6:

  • Friendly name: Windows Azure Guest OS 1.6 (Release 201008-01)
  • Configuration value: WA-GUEST-OS-1.6_201008-01
  • Release date: September 1, 2010
  • Features: Stability and security patch fixes applicable to Windows Azure OS.

Security Patches

The Windows Azure Guest OS 1.6 includes the following security patches, as well as all of the security patches provided by previous releases of the Windows Azure Guest OS:

  • MS10-032 (parent KB 979559): Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Remote Code Execution
  • MS10-033 (parent KB 979902): Vulnerabilities in Media Decompression Could Allow Remote Code Execution
  • MS10-034 (parent KB 980195): Cumulative Security Update of ActiveX Kill Bits
  • MS10-035 (parent KB 982381): Cumulative Security Update for Internet Explorer
  • MS10-037 (parent KB 980218): Vulnerability in the OpenType Compact Font Format (CFF) Driver Could Allow Elevation of Privilege
  • MS10-040 (parent KB 982666): Vulnerability in Internet Information Services Could Allow Remote Code Execution
  • MS10-041 (parent KB 981343): Vulnerabilities in the Microsoft .NET Framework Could Allow Tampering

Windows Azure Guest OS 1.6 is substantially compatible with Windows Server 2008 SP2, and includes all Windows Server 2008 SP2 security patches through July 2010.

Note: When a new release of the Windows Azure Guest OS is published, it can take several days for it to fully propagate across Windows Azure. If your service is configured for auto-upgrade, it will be upgraded sometime after the release date indicated above, and you’ll see the new guest OS version listed for your service. If you are upgrading your service manually, the new guest OS will be available for you to upgrade your service once the full roll-out of the guest OS to Windows Azure is complete.


Phil Wainewright asked and answered Multi-tenancy: why you should care in this 9/1/2010 post to the Enterprise Irregulars blog:

image Dennis Howlett has summarized the story so far in a debate I was having these past few days with Josh Greenbaum, Bob Warfield and assorted other Enterprise Irregulars (and it’s not the first time I’ve debated multi-tenancy). But everyone still seems to overlook the key issue about multi-tenancy, the one factor that makes it an essential prerequisite to demand as an enterprise buyer if you’re shopping for a SaaS or cloud solution.

imageMaybe people miss it because it’s so obvious, or perhaps it’s because the significance is not yet self-evident. When something changes the landscape so dramatically, it sometimes takes a bit of imagination to see how it will all pan out. Back in the early days of the automobile, who would have foreseen the networks of high-speed roads, the filling stations and the mass production that were going to make this a faster, cheaper, more convenient form of transport than the horse and the railroad?

Today, people look at cloud computing and yet completely overlook the role of the cloud itself in making it what it is. What if early motorists had only ever wanted to drive on their own private land? Automobile innovation would have peaked with the golf buggy. Private clouds and single-tenant SaaS applications are just as limited, and that’s why multi-tenancy matters. Yet even Bob Warfield, who’s on my side of the argument, can write, “There are two primary advantages to the Cloud: it is a Software Service and it is Elastic.” The cloud has a crucial third advantage, and it’s no coincidence that it shows up later in the same blog post at precisely the point when Bob is explaining why multi-tenancy matters:

“What would an app look like if it was built from the ground up to live in the Cloud, to connect Customers the way the Internet has been going, to be Social, to do all the rest? Welcome to SaaS Multitenant.”

Multi-tenancy matters because it’s the ideal architecture to make the most of the public cloud environment. The cloud matters because it’s where all the connections are. That’s where your customers, your suppliers, your partners and your employees are, along with a wealth of other resources from all around the world. Multi-tenancy provides the best possible platform for interacting in real-time with all of those resources (which is why top online properties like Google, Facebook, Amazon, eBay and others are completely multi-tenant). But more than that, a multi-tenant application runs on a shared platform that is constantly being fine-tuned to succeed better at those interactions.

Multi-tenancy benefits enormously from the magic of something I call collective scrutiny and innovation. When hundreds or even thousands of other businesses are using exactly the same operational infrastructure, all of them benefit from each of the different ways in which they’re challenging and stretching that shared infrastructure. All of them have access to the newest functionality that’s introduced at the behest of the early adopter minority. All of them benefit from the hardening of the infrastructure after any of them come in contact with a newly detected threat.

The strength of multi-tenancy is that each of its multitude of individual tenants keeps it constantly evolving. This is in direct contrast to single tenancy, the whole point of which is to limit evolution only to those changes that are perceived to directly benefit the individual tenant. Thus single tenancy misses out on innovations and other advances that are being adopted by competitors, partners and third-party services. If everyone were unconnected that may not matter so much but it’s hugely important in the cloud, where you may only realize the significance of a new capability when you see it linked up with other resources.

This is before we even start to think about the potential for using pooled aggregate data for benchmarking, validation or trend analysis, along the lines that Dennis Howlett discussed in his weekend blog post. Most of the benefits of public cloud multi-tenancy have not even begun to be explored, much as the huge transformative impact of the internal combustion engine was unimagined at the beginning of the twentieth century.

That’s probably why, for now, it may look as though the choice between multi-tenancy and single-tenancy is something that matters only to the vendor. In truth, multi-tenancy matters even more to buyers, because it’s what makes the difference between a SaaS application that’s destined for rapid obsolescence and one that will continue to evolve with the cloud and all the wealth of possibility that’s opening up in the connected Web.


Nati Shalom explained Scale-out vs Scale-up in this 9/1/2010 post:

image In my previous post Concurrency 101 I touched on some of the key terms that often come up when dealing with multi-core concurrency.

In this post I'll cover the difference between multi-core concurrency, often referred to as the scale-up model, and distributed computing, often referred to as the scale-out model.

The Difference Between Scale-Up and Scale-Out

One of the common ways to best utilize a multi-core architecture in the context of a single application is through concurrent programming. Concurrent programming on multi-core machines (scale-up) is often done through multi-threading and in-process message passing, also known as the Actor model. Distributed programming does something similar by distributing jobs across machines over the network. There are different patterns associated with this model, such as Master/Worker, Tuple Spaces, BlackBoard, and MapReduce. This type of pattern is often referred to as scale-out (distributed).

Conceptually, the two models are almost identical in that both break a sequential piece of logic into smaller pieces that can be executed in parallel. Practically, however, the two models are fairly different from an implementation and performance perspective. The root of the difference is the existence (or lack) of a shared address space. In a multi-threaded scenario you can assume the existence of a shared address space, so data sharing and message passing can be done simply by passing a reference. In distributed computing, the lack of a shared address space makes this type of operation significantly more complex. Once you cross the boundaries of a single process you need to deal with partial failure and consistency. Also, because you can't simply pass an object by reference, sharing, passing, or updating data becomes significantly more costly (compared with in-process reference passing), as you have to pass copies of the data, which involves additional network, serialization, and de-serialization overhead.
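
To make that cost difference concrete, here is a minimal C# sketch (my own illustration, not from the original post, using a modern JSON serializer purely for convenience). In the scale-up case the "message" is just an object reference handed to another thread in the same address space; crossing a process boundary instead forces a serialized copy that would then travel over the network and be deserialized on the other side.

using System;
using System.Collections.Concurrent;
using System.Text.Json;
using System.Threading.Tasks;

class Order
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

class ScaleUpVersusScaleOut
{
    static void Main()
    {
        var order = new Order { Id = 1, Amount = 99.50m };

        // Scale-up: hand the same object reference to a worker thread.
        // No copy and no serialization; both threads share the address space.
        var inProcessQueue = new BlockingCollection<Order>();
        var worker = Task.Run(() =>
            Console.WriteLine("In-process worker got order " + inProcessQueue.Take().Id));
        inProcessQueue.Add(order);
        worker.Wait();

        // Scale-out: the object must be copied. It is serialized, shipped over
        // the network (omitted here), and deserialized on the remote node.
        byte[] wire = JsonSerializer.SerializeToUtf8Bytes(order);
        Order remoteCopy = JsonSerializer.Deserialize<Order>(wire);
        Console.WriteLine("Remote copy is a separate instance: " +
            !ReferenceEquals(order, remoteCopy));
    }
}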

image

Choosing Between Scale-Up and Scale-Out

The most obvious reason for choosing between the scale-up and scale-out approaches is scalability/performance. Scale-out allows you to combine multiple machines into a single virtual machine with the combined power of all of them. So in principle, you are not limited to the capacity of a single unit. In a scale-up scenario, however, you have a hard limit: the scale of the hardware on which you are currently running. Clearly, then, one factor in choosing between scaling out or up is whether or not you have enough resources within a single machine to meet your scalability requirements.

Reasons for Choosing Scale-Out Even If a Single Machine Meets Your Scaling/Performance Requirements

Today, with the availability of large multi-core and large memory systems, there are more cases where you might have a single machine that can cover your scalability and performance goals. And yet, there are several other factors to consider when choosing between the two options:

1. Continuous Availability/Redundancy: You should assume that failure is inevitable, and therefore having one big system is going to lead to a single point of failure. In addition, the recovery process is going to be fairly long, which could lead to extended downtime.

2. Cost/Performance Flexibility: As hardware costs and capacity tend to vary quickly over time, you want to have the flexibility to choose the optimal configuration setup at any given time or opportunity to optimize cost/performance. If your system is designed for scale-up only, then you are pretty much locked into a certain minimum price driven by the hardware that you are using. This could be even more relevant if you are an ISV or SaaS provider, where the cost margin of your application is critical to your business. In a competitive situation, the lack of flexibility could actually kill your business.

3. Continuous Upgrades: Building an application as one big unit is going to make it harder or even impossible to add or change pieces of code individually without bringing the entire system down. In these cases it is probably better to decouple your application into concrete sets of services that can be maintained independently.

4. Geographical Distribution: There are cases where an application needs to be spread across data centers or geographical locations to handle disaster recovery scenarios or to reduce geographical latency. In these cases you are forced to distribute your application, and the option of putting it in a single box doesn’t exist.

Can We Really Choose Between Scale-Up and Scale-Out?

Choosing between scale-out and scale-up based on the criteria I outlined above sounds pretty straightforward, right? If our machine is not big enough, we couple a few machines together to get what we're looking for, and we're done. The thing is, with the speed at which networks, CPU power, and memory advance, the answer to the question of what we require at a given time could be very different from the answer a month later. 

To make things even more complex, the gain between scale-up and scale-out is not linear. In other words, when we switch from scale-up to scale-out we're going to see a significant drop in what a single unit can do, as all of a sudden we have to add network overhead, transactions, and replication to operations that were previously done just by passing object references. In addition, we will probably be forced to rewrite our entire application, as the programming model shifts quite dramatically between the two models. All this makes it fairly difficult to answer the question of which model is best for us.

Beyond a few obvious cases, choosing between the two options is fairly hard, and maybe even almost impossible.

Which brings me to the next point: What if the process of moving between scale-up and scale-out were seamless -- not involving any changes to our code?

I often use storage as an example of this. In storage, when we switch between a single local disk to a distributed storage system, we don’t need to rewrite our application. Why can’t we make the same seamless transition for other layers of our application?

Designing for Seamless Scale-Up/Scale-Out

To get to a point of seamless transition between the two models, there are several design principles that are common to both the scale-out and scale-up approaches.

Parallelize Your Application

1. Decouple: Design your application as a decoupled set of services. “All problems in computer science can be solved by another level of indirection" is a famous quote attributed to David Wheeler and popularized by Butler Lampson. In this specific context: if your code sets have loose ties to one another, the code is easier to move, and you can add more resources when needed without breaking those ties. In our case, designing the application as a set of services that don’t assume the locality of other services enables us to route requests to the most available instance, whether that instance is local or remote.

2. Partition: To parallelize an application, it is often not enough to spawn multiple threads, because at some point they are going to hit contention on shared state. To parallelize a stateful application we need to find a way to partition our application and data model so that our parallel units share nothing with one another.

Enabling Seamless Transitions Between Remote and Local Services

First, I'd like to clarify that the pattern outlined in this section is intended to enable a seamless transition between distributed and local services. It is not intended to make the performance difference between the two models go away.

The core principle is to decouple our services from things that assume locality of either services or data. Thus, we can switch between local and remote services without breaking the ties between them. The decoupling should happen in the following areas: 

1. Decouple the communication: When a service invokes an operation on another service, we can determine whether that other service is local or remote. The communication layer can be smart enough to use a more efficient in-process path if the service happens to be local, or to go over the network if the service is remote. The important thing is that our application code does not change as a result.

2. Decouple the data access: Similarly, we need to abstract our data access behind a data service. A simple abstraction would be a hash-table interface, where the same code could point to a local in-memory hash table or to a distributed version of that hash table (see the sketch below). A more sophisticated version would be an SQL interface that could point to an in-memory data store or to a distributed data store.
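
A minimal sketch of such an abstraction follows; the interface and class names are my own illustration rather than any particular product's API. Application code programs against the small contract, and a distributed implementation backed by a data grid or distributed cache could later be dropped in without touching the callers:

using System.Collections.Concurrent;

// Application code depends only on this abstraction, never on locality.
public interface IDataStore<TKey, TValue>
{
    void Put(TKey key, TValue value);
    bool TryGet(TKey key, out TValue value);
}

// Scale-up deployment: a plain in-process hash table.
public class LocalDataStore<TKey, TValue> : IDataStore<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, TValue> map =
        new ConcurrentDictionary<TKey, TValue>();

    public void Put(TKey key, TValue value)
    {
        map[key] = value;
    }

    public bool TryGet(TKey key, out TValue value)
    {
        return map.TryGetValue(key, out value);
    }
}

// Scale-out deployment: the same contract, implemented by delegating to a
// distributed cache or data-grid client. The client API used inside would be
// product specific, so it is omitted here.
// public class DistributedDataStore<TKey, TValue> : IDataStore<TKey, TValue> { ... }

The same idea applies to point 1: a service proxy hidden behind an interface can dispatch in-process when the target is collocated and over the wire when it is remote.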

Packaging Our Services for Best Performance and Scalability

Having an abstraction layer for our services and data brings us to the point where we could use the same code whether our data happens to be local or distributed. Through decoupling, the decision about where our services should live becomes more of a deployment question, and can be changed over time without changing our code.

In the two extreme scenarios, this means that we could use the same code to do only scale-up by having all the data and services collocated, or scale-out by distributing them over the network.

In most cases, it wouldn't make sense to go to either of the extreme scenarios, but rather to combine the two. The question then becomes at what point should we package our services to run locally and at what point should we start to distribute them to achieve the scale-out model.

To illustrate, let’s consider a simple order processing scenario where we need to go through the following steps for the transaction flow:

1. Send the transaction

2. Validate and enrich the transaction data

3. Execute it

4. Propagate the result

Each transaction process belongs to a specific user. Transactions of two separate users are assumed to share nothing between them (beyond reference data which is a different topic).

In this case, the right way to assemble the application in order to achieve the optimal scale-out and scale-up ratio would be to have all the services that are needed for steps 1-4 collocated, and therefore set up for scale-up. We would scale out simply by adding more of these units and splitting both the data and the transactions between them based on user IDs. We often refer to this unit of scale as a processing unit.
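
As an illustrative sketch of the routing step (my own, not from the article), mapping a transaction to a processing unit can be as simple as hashing the user ID, so that every step for a given user lands in the same collocated unit:

using System;

public class ProcessingUnitRouter
{
    private readonly int unitCount;

    public ProcessingUnitRouter(int unitCount)
    {
        this.unitCount = unitCount;
    }

    // All transactions for the same user map to the same processing unit, so
    // steps 1-4 run collocated (scale-up) while different users spread across
    // units (scale-out).
    public int RouteToUnit(string userId)
    {
        return (StableHash(userId) & 0x7fffffff) % unitCount;
    }

    // A simple deterministic hash; a real deployment would likely use the
    // partitioning scheme built into its data grid, or consistent hashing, to
    // cope with units being added or removed.
    private static int StableHash(string s)
    {
        unchecked
        {
            int h = 17;
            foreach (char c in s)
            {
                h = h * 31 + c;
            }
            return h;
        }
    }
}

For example, new ProcessingUnitRouter(4).RouteToUnit("user-42") always returns the same unit index for that user, while different users spread across the four units.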

To sum up, choosing the optimal packaging requires:

1. Packaging our services into bundles based on their runtime dependencies to reduce network chattiness and number of moving parts.

2. Scaling-out by spreading our application bundles across the set of available machines.

3. Scaling-up by running multiple threads in each bundle.

The entire pattern outlined in this post is also referred to as Space Based Architecture. A code example illustrating this model is available here.

Final Words

Today, with the availability of large multi-core machines at significantly lower price, the question of scale-up vs. scale-out becomes more common than in earlier years.

There are more cases in which we could now package our application in a single box to meet our performance and scalability goals.

A good analogy that I have found useful for understanding where the industry is going with this trend is to compare disk drives with storage virtualization. Disk drives are a good analogy to the scale-up approach, and storage virtualization is a good analogy to the scale-out approach. Similar to the advance in multi-core technology today, disk capacity has increased significantly in recent years. Today, we have multiple terabytes of data capacity on a single disk.

PC hard disk capacity (in GB). The plot is logarithmic, so the fitted line corresponds to exponential growth.

Interestingly enough, the increase in capacity of local disks didn’t replace the demand for storage, quite the contrary. A possible explanation is that while single-disk capacity doubled every year, the demand for more data grew at a much higher rate, as indicated in the following IDC report:

Market research firm IDC projects a 61.7% compound annual growth rate (CAGR) for unstructured data in traditional data centers from 2008 to 2012 vs. a CAGR of 21.8% for transactional data.

Another explanation is that storage provides functions such as redundancy, flexibility, and sharing/collaboration: properties that a single disk drive cannot address regardless of its capacity.

The advances in new multi-core machines will follow a similar trend, as there is often a direct correlation between the growth in data volumes and the demand for more compute power to manage them, as indicated here:

The current rate of increase in hard drive capacity is roughly similar to the rate of increase in transistor count.

The increased hardware capacity will enable us to manage more data in a shorter amount of time. In addition, the demand for more reliability through redundancy, as well as the need for better utilization through the sharing of resources driven by SaaS/Cloud environments, will force us even more than before towards scale-out and distributed architecture.

So, in essence, what we can expect to see is an evolution where the predominant architecture will be scale-out, but the resources in that architecture will get bigger and bigger, thus making it simpler to manage more data without increasing the complexity of managing it. To maximize the utilization of these bigger resources, we will have to combine a scale-up approach as well.

Which brings me to my final point: we can’t think of scale-out and scale-up as two distinct approaches that contradict one another, but rather must view them as two complementary paradigms.

The challenge is to make the combination of scale-up/out native to the way we develop and build applications. The Space Based Architecture pattern that I outlined here should serve as an example of how to achieve this goal.


Wes Yanaga announced Windows Phone 7 Released to Manufacturing in this 9/1/2010 post to the US ISV blog:

image An exciting day: the Windows Phone team has reached a big milestone, the release to manufacturing of Windows Phone 7. There’s still more work to do integrating our partners’ hardware, software, and networks before launch. You can get more details at the Windows Phone Blog.

In the meantime, here are some resources to get you started:

Getting Started on Windows Phone 7

image Large or small, all developers will have equal opportunity to capitalize on the first mover advantage of having their apps or games ready at launch. In order to do that, there are a few things developers will need to do:

  1. Register at the marketplace today
  2. Finish your application or game using the Beta tools
  3. Download the final Windows Phone Developer Tools when released on September 16
  4. Recompile your app or game using the final tools
  5. Have your XAP ready for ingestion into the marketplace in early October when it opens.
Free Developer Training
Free Developer Assistance, Marketing Support

Microsoft Platform Ready (MPR) is designed to give you what you need to plan, build, test and take your solution to market. We’ve brought a range of platform development tools together in one place, including technical, support and marketing resources, as well as exclusive offers. With MPR, you can:

  • Access training and webinars to help you get compatible.
  • Test your application with online resources and testing tools.
  • Utilize marketing toolkits including customizable templates for email, letters and presentations.

imageObviously, WP7 will be a major client OS for Windows Azure apps.


Alex Williams reported Intel and the Cloud - Federated, Automated and Intelligent from VMworld on 9/1/2010 for the ReadWriteCloud:

Is a cloud utopia possible? The idea is that someday everything will be elastic. Services would scale up and down based on usage. You would never have to worry about updating an application on your laptop. Security would be taken care of, and devices would be smart enough to know what data to process locally and what should be rendered in the cloud.

imageSure. We are already seeing some of these scenarios unfold. But it's not common. In reality, the builder has a big job ahead.

image Intel is here at VMworld with a message about how this plays out. In their view, it comes down to three factors: the cloud and the devices connected to it should be federated, automated, and client-aware.

Federated Cloud

A federated cloud means an open cloud. It's a cloud that can be connected. Virtual machines work in the enterprise as well as they work in the data center. Communications, data, and services can move easily within and across cloud computing infrastructures. Identity is seamless. Interoperability is without question.

Automated Cloud

An automated cloud means cloud computing services and resources can be specified, located, and securely provisioned with very little or zero human interaction. This is a data center with intelligence. It does not have to be as manually operated as it is today. It allocates resources and is optimized in terms of its utilization and power efficiency.

Client-Aware

This is one of the most significant challenges and in many ways represents the emergence of the intelligent network. In this scenario, the device and the cloud are optimized to work with each other: the client-aware chip technology knows when to process on the device and when to process in the cloud. In today's world we do see some level of data intelligence, but for the most part the service provider targets the lowest common denominator. That's why it is often difficult to use services on a handheld device; they were written for a PC, not a mobile phone. The trick is to know the device attributes, which include location, policies, and connectivity. Security is taken care of in this scenario because the device and the cloud are synchronized to meet policy requirements.

Developer Challenges

A developer always has to choose a platform. Once the platform is chosen, the developer then has to consider what different versions are required for the various devices on the market. This is increasingly complex as the types and total number of devices continue to multiply. A client-aware scenario could mitigate the issue. Then there is the question of when to use the cloud and when to use the device for processing. For example, should a video be rendered on the device or in the cloud? It all depends. But client-aware technology could bring a level of intelligence capable of determining whether the video should be rendered in the cloud or on the device.

Is this all far off into the future? Intel says the disruption in the enterprise market is helping advance innovation. As the impetus for moving to the cloud increases, so should the advancements in these various scenarios.

We may never reach a utopia but at least we may find some ways to make it a bit easier for the builder trying to optimize the relationship between the cloud and the devices people use.

imageIt sounds to me like Intel is describing Windows Azure.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

The Windows Server Division Team pondered Cloud Computing a Catalyst for Server Growth? in a 9/2/2010 post to the TechNet blogs:

In an interesting piece by Ryan Nichols at Computerworld yesterday: Cloud computing by the numbers: What do all the statistics mean, Nichols summarizes some recent research findings on the market potential of cloud computing, quoting impressive market forecasts from sources such as Gartner ($150 billion by 2013), Merrill Lynch ($160 billion by 2011), and AMI Partners (SMB spend to approach $100 billion by 2014).

As part of his analysis, he gives a nod to some of the reasons for this growth, with the need for business agility and the proliferation of mobile and social computing being front and center. At the same time, he identifies a couple of "head scratchers," raising the question: "if virtualization is growing and cloud computing is growing, how can the market for private enterprise servers also be growing?"

image It's a great question, and one that we hear frequently given that we are the only company to provide both a server platform, Windows Server, and a cloud services platform, the Windows Azure platform. How can both markets possibly grow at the same time? And growing they are. As Ryan points out, IDC is seeing strong growth in the server market. Just last week the analyst firm issued its Worldwide Quarterly Server Tracker, which showed that "server unit shipments increased 23.8% year over year in 2Q10... representing the fastest year-over-year quarterly server shipment growth in more than five years."

While it may seem contradictory at first blush, there are a number of reasons for this and it is actually pretty straightforward. When we talk to customers, the vast majority of them are thinking about cloud computing and looking at how to bring cloud-like capabilities and benefits (cost savings, elastic scalability, self-service, etc.) into their organization. However, they are all in different stages of the process, with very disparate infrastructure and business needs. And for many organizations, a wholesale move to a public cloud service isn't particularly realistic in the short term, whether it's due to regulatory requirements, geographic concerns, or the nature of the workloads and data they are hosting. 

In addition, there are other organizations that will want to take advantage of the benefits of cloud computing, but also may want to preserve existing infrastructure investments and maintain a level of versatility that can't be met by public clouds. Enter the notion of "private clouds" and, again, enter Windows Server. We continue to make enhancements to Windows Server to make it easy for customers and partners to use it to build private (and public) cloud services, such as the recent release of System Center Virtual Machine Manager Self-Service Portal 2.0.

imageBoth of these scenarios continue to drive heavy demand for our Windows Server platform. In that same IDC report from last week, Microsoft is highlighted as the server market leader, "with hardware revenue increasing 36.6% and unit shipments increasing 28.2% year over year." Those are big growth numbers, even with more than 10,000 customers signing up for our Windows Azure platform this year.

So is there still room for enterprise servers in a cloud computing era? Absolutely. The numbers and customers don't lie. Offering both a server and a services platform with onramps to the cloud is at the heart of our business strategy and a reason why we are seeing such success in both areas. For those organizations that want a highly optimized, scalable environment where we prescribe the hardware and normalize the cost of operations, there's our services platform, the Windows Azure platform. For those that want the versatility to enable environments of any scale, or need custom hardware configurations and operating models, there's our server platform, built on Windows Server. And, of course, we have a common application development, identity and management model spanning the two platforms, which doesn't hurt either.

The Windows Azure Team linked to this post in their New Windows Server Post Explains How Cloud Computing Market is Growing post of 9/2/2010.


The VAR Guy questioned Microsoft: 25 Percent of Virtualization Market and Climbing? and quoted Microsoft’s David Greschler in this 9/1/2010 post:

At first glance, VMware dominates the virtualization market. But Microsoft’s David Greschler, attending this week’s VMworld conference, doesn’t sound worried. Greschler, director of Microsoft’s integrated virtualization strategy, says savvy channel partners are taking a look at Microsoft Hyper-V and cross-platform management tools like Microsoft System Center. Here’s why.

imageSimply put, Microsoft has captured about 25 percent of the virtualization market, and the software giant is connecting the dots between Hyper-V, private clouds and public clouds like Windows Azure, Greschler asserts.

image When it comes to Microsoft’s virtualization strategy, most VARs focus first on the Hyper-V hypervisor. But Greschler makes a strong case for focusing on Microsoft System Center as well. Using System Center, solutions providers can manage physical and virtual assets — including Hyper-V and VMware-centric deployments, Greschler notes.

Microsoft’s ultimate assertion: While VMware wants to control the entire virtualization market, Microsoft is developing tools that support multiple hypervisors.

Also of note: Greschler claims Microsoft offers a standardized approach to next-generation application deployments and management, whether the applications run on-premise or in the cloud.

imageUsing Hyper-V for private, on-premises clouds or service provider clouds is a natural stepping stone to deploying applications in Microsoft’s Windows Azure cloud, Greschler adds.

On the one hand, Windows Azure supports a range of programming standards — like PHP, Perl, Python and Ruby on Rails. But on the other hand, Microsoft’s on-premises, virtualization and cloud strategies all support .Net, so partners and developers can easily move applications from one approach to the next, Greschler asserts.

Dollars and Cents

What’s in it for partners? Greschler says Microsoft’s virtualization solutions cost end-customers roughly one-third the price of VMware’s offerings. The cost savings, in turn, allow VARs to promote additional services to customers, he asserts.

Greschler and other Microsoft executives — particularly COO Kevin Turner — also are quick to note Microsoft’s market share gains. According to some recent research from IDC, Microsoft has about 25 percent of the virtualization market, compared to roughly 50 percent for VMware. True believers include CH2M Hill, a Fortune 500 firm that’s moving from VMware to Hyper-V.

The most telling statistic of all: Microsoft’s virtualization products have only been available for about two years, but they seem to be gaining traction within Microsoft’s channel. One prime example: The M7 Group spans seven solutions providers that promote virtualization offerings from a range of vendors — including Microsoft.

VMware’s Spin

Of course, The VAR Guy will have to give VMware some equal time on the topic of Microsoft and virtualization within the next few days.

In the meantime, thousands of folks continue to navigate the trade show flow at VMworld — suggesting that VMware remains as influential as ever. Also, Motley Fool notes that VMware’s stock has risen 158 percent since September 2009 as more customers flocked to VMware’s software. And a survey from CommVault suggests VMware remains the preferred virtualization standard for 83 percent of customers.

Hmmm… Read this blog entry again from top to bottom and it seems like there’s plenty of debate over the Microsoft and VMware market share stats. But this is known: Microsoft and VMware both have growing virtualization businesses…


Joe Panettieri reported from VMworld: EMC’s Memo to [Internal] Service Providers on 9/1/2010 to the MSPMentor blog:

image At first glance, most of VMworld involved vendor news and a race to grab new customers. But if you listen closely to the right people, you’ll hear plenty of perspectives for service providers. One case in point: EMC CTO and Chief Marketing Officer Chuck Hollis (pictured) claims this year’s VMworld “changes the game for service providers.” I’m not sure I agree. But here’s the spin from Hollis.

image First, take note of Hollis’ personal blog: Service Provider Insider. On the one hand, it’s written for IT departments that are becoming internal service providers. But on the other hand, Hollis also targets his message to external service providers. Among his assertions:

  • VMware vCloud Director is ideal for service providers that want to implement private clouds and IT as a service.
  • VMware vFabric, an open source development environment, could help service providers create cloud-ready and cloud-agnostic applications.
  • VMware’s strategy also will touch next-generation desktops.

image Of course, Hollis also offered EMC’s perspectives on VMworld, and the implications for service providers. You’ll find it all here in this blog post.

Extending VMware’s Lead?

Instead of bowing to competition, Hollis says VMware continues to pull away from rivals. He writes:

“Yes, I’m far too close to this all (sampling bias pervades), but — frankly — I don’t see any competitor slowing VMware down anytime soon.  Their track record up to this point speaks for itself, and their velocity appears to be increasing dramatically.”

Still, rivals refuse to remain silent. Microsoft, for one, claims to be rapidly gaining market share vs. VMware.

So who do you believe? And is VMware really changing the rules for service providers? I can’t say for sure. But at least Hollis took some time to direct his message to a hugely influential audience: Service Providers.

Chuck also announced “a fresh version of the popular Cloud Service Provider Report (commissioned by EMC) that's written from a non-US perspective.”


Dana Gardner asserted “Process automation elevates virtualization use while transforming IT's function to app and cloud service broker” as a preface to his Achieve Strategic Levels of Virtualization post of 9/1/2010:

The trap of unchecked virtualization complexity can have a stifling effect on the advantageous spread of virtualization in data centers.

Indeed, many enterprises may think they have already exhausted their virtualization paybacks, when in fact, they have only scratched the surface of the potential long-term benefits.

Automation, policy-driven processes and best practices are offering more opportunities for optimizing virtualization so that server, storage, and network virtualization can move from points of progress into more holistic levels of adoption.

The goals then are data center transformation, performance and workload agility, and cost and energy efficiency. Many data centers are leveraging automation and best practices to attain 70 percent and even 80 percent adoption rates.

By taking such a strategic outlook on virtualization, process automation sets up companies to better exploit cloud computing and IT transformation benefits at the pace of their choosing, not based on artificial limits imposed by dated or manual management practices.

To explore how automation can help achieve strategic levels of virtualization, BriefingsDirect brought together panelists Erik Frieberg, Vice President of Solutions Marketing at HP Software, and Erik Vogel, Practice Principal and America's Lead for Cloud Resources at HP. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Vogel: Probably the biggest misconception that I see with clients is the assumption that they're fully virtualized, when they're probably only 30 or 40 percent virtualized. They've gone out and done the virtualization of IT, for example, and they haven't even started to look at Tier 1 applications.

The misconception is that we can't virtualize Tier 1 apps. In reality, we see clients doing it every day. The broadest misconception is what virtualization can do and how far it can get you. Thirty percent is the low-end threshold today. We're seeing clients who are 75-80 percent virtualized in Tier 1 applications.

Frieberg: The three misconceptions I see a lot are, one, automation and virtualization are just about reducing head count. The second is that automation doesn't have as much impact on compliance. The third is if automation is really at the element level, they just don't understand how they would do this for these Tier 1 workloads.

You're starting to see the movement beyond those initial goals of eliminating people to ensuring compliance. They're asking how do I establish and enforce compliance policies across my organization, and beyond that, really capturing or using best practices within the organization.

When you look at the adoption, you have to look at where people are going, as far as the individual elements, versus the ultimate goal of automating the provisioning and rolling out a complete business service or application.
When I talk to people about automation, they consistently talk about what I call "element automation." Provisioning a server, a database, or a network device is a good first step, and we see gaining market adoption of automating these physical things. What we're also seeing is the idea of moving beyond the individual element automation to full process automation.

Self-service provisioning
As companies expand their use of automation to full services, they're able to reduce that time from months down to days or weeks. This is what some people are starting to call cloud provisioning or self-service business application provisioning. This is really the ultimate goal -- provisioning these full applications and services versus what is often IT’s goal -- automating the building blocks of a full business service.

This is where you're starting to see what some people call the "lights out" data center. It has the same amount or even less physical infrastructure using less power, but you see the absence of people. These large data centers just have very few people working in them, but at the same time, are delivering applications and services to people at a highly increased rate rather than as traditionally provided by IT.

Vogel: One of the challenges that our clients face is how to build the business case for moving from 30 percent to 60 or 70 percent virtualized. This is an ongoing debate within a number of clients today, because they look at that initial upfront cost and see that the investment is probably higher than what they were anticipating. I think in a lot of cases that is holding our clients back from really achieving these higher levels of virtualization.

In order to really make that jump, the business case has to be made beyond just reduction in headcount or less work effort. We see clients having to look at things like improving availability, being able to do migrations, streamlined backup capabilities, and improved fault-tolerance. When you start looking across the broader picture of the benefits, it becomes easier to make a business case to start moving to a higher percentage of virtualization.

One of the things we saw early on with virtualization is that just moving to a virtual environment does not necessarily reduce a lot of the maintenance and management that we have, because we haven’t really done anything to reduce the number of OS instances that have to be managed.

More than just asset reduction
The benefits are relatively constrained, if we look at it from just a physical footprint reduction. In some cases, it might be significant if a client is running out of data-center space, power, or cooling capacity within the data center. Then, virtualization makes a lot of sense because of the reduction in asset footprint.
But, when we start looking at coupling virtualization with improved process and improved governance, thereby reducing the number of OS instances, application rationalization, and those kinds of broader process type issues, then we start to see the big benefits come into play.

Now, we're not talking just about reducing the asset footprint. We're also talking about reducing the number of OS instances. Hence, the management complexity of that environment will decrease. In reality, the big benefits are on the logical side and not so much on the physical side.

It becomes more than just talking about the hardware or the virtualization, but rather a broader question of how IT operates and procures services.

Frieberg: What we're seeing in companies is that they're realizing that their business applications and services are becoming too complex for humans to manage quickly and reliably.

The demands of provisioning, managing, and moving in this new agile development environment and this environment of hybrid IT, where you're consuming more business services, is really moving beyond what a lot of people can manage. The idea is that they are looking at automation to make their life easier, to operate IT in a compliant way, and also deliver on the overall business goals of a more agile IT.

Companies are almost going through three phases of maturity when they do this. The first aspect is that a lot of automation revolves around "run book automation" (RBA), which is this physical book that has all these scripts and processes that IT is supposed to look at.

But, what you find is that their processes are not very standardized. They might have five different ways of configuring your device, resetting the server, and checking why an application isn’t working.

So, as we look at maturity, you’ve got to standardize on a set of ways. You have to do things consistently. When you standardize methods, you then find out you're able to do the second level of maturity, which is consolidate.
Transforming how IT operates

Vogel: It becomes more than just talking about the hardware or the virtualization, but rather a broader question of how IT operates and procures services. We have to start changing the way we are thinking when we're going to stand up a number of virtual images.

When we start moving to a cloud environment, we talk about how we share a resource pool. Virtualization is obviously key and an underlying technology to enable that sharing of a virtual resource pool.

We're seeing the virtualization providers coming out with new versions of their software that enable very flexible cloud infrastructures.

This includes the ability to create hybrid cloud infrastructures, which are partially a private cloud that sits within your own site, and the ability to burst seamlessly to a public cloud as needed for excess capacity, as well as the ability to seamlessly transfer workloads in and out of a private cloud to a public cloud provider as needed.

We're seeing the shift from IT becoming more of a service broker, where services are sourced and not just provided internally, as was traditionally done. Now, they're sourced from a public cloud provider or a public-service provider, or provided internally on a private cloud or on a dedicated piece of hardware. IT now has more choices than ever in how they go about procuring that service.

But it becomes very important to start talking about how we govern that, how we control who has access, how we can provision, what gets provisioned and when. ... It's a much bigger problem and a more complicated problem as we start going to higher levels of virtualization and automation and create environments that start to look like a private cloud infrastructure.

I don’t think anybody will question that there are continued significant benefits, as we start looking at different cloud computing models. If we look at what public cloud providers today are charging for infrastructure, versus what it costs a client today to stand up an equivalent server in their environment, the economics are very, very compelling to move to a cloud-type of model.

Governance crosses boundary to strategy
Without the proper governance in place, we can actually see cost increase, but when we have the right governance and processes in place for this cloud environment, we've seen very compelling economics, and it's probably the most compelling change in IT from an economic perspective within the last 10 years.

Frieberg: If you want to automate and virtualize an entire service, you’ve got to get 12 people to get together to look at the standard way to roll out that environment, and how to do it in today’s governed, compliant infrastructure.
The coordination required, to use a term used earlier, isn’t just linear. It sometimes becomes exponential. So there are challenges, but the rewards are also exponential. This is why it takes weeks to put these into production. It isn’t the individual pieces. You're getting all these people working together and coordinated. This is extremely difficult and this is what companies find challenging.
The key goal here is that we work with clients who realize that you don’t want a two-year payback. You want to show payback in three or four months. Get that payback and then address the next challenge and the next challenge and the next challenge. It's not a big bang approach. It's this idea of continuous payback and improvement within your organization to move to the end goal of this private cloud or hybrid IT infrastructure.

Vogel: We've developed a capability matrix across six broad domains to look at how a client needs to start to operationalize virtualization as opposed to just virtualizing a physical server.

We definitely understand and recognize that it has to be part of the IT strategy. It is not just a tactical decision to move a server from a physical machine to a virtual machine; rather, it becomes part of an IT organization’s DNA that everything is going to move to this new environment.

We're really going to start looking at everything as a service, as opposed to as a server, a network component, or a storage device; at how those things come together; and at how we virtualize the service itself rather than each of those individual components.

It really becomes baked into an IT organization’s DNA, and we need to look very closely at their capability -- how capable an organization is from a cultural standpoint, a governance standpoint, and a process standpoint to really operationalize that concept.


Charles Babcock asserted “Version 2.0 of the open source Amazon EC2-compatible code contains features and functions ahead of the Enterprise edition” as a preface to his Eucalyptus Builds Scalability Into Private Clouds article for InformationWeek’s Plug into the Cloud blog:

Eucalyptus Systems, supplier of Amazon EC2-compatible software for building the private cloud, has brought out version 2.0 of its Eucalyptus open source system.

The Santa Barbara, Calif., company was founded to support the output of the Eucalyptus open source project, which originated in the University of California at Santa Barbara's computer science department. Prof. Rich Wolski and associates produced interfaces compatible with Amazon Web Services' EC2 APIs and packaged them together as a way to start building out an enterprise cloud.

Eucalyptus 2.0 is the second major release of the open source code. In it, "we have improved scalability all over the product," said Marten Mickos, CEO, in an interview. The firm provides technical support for the Eucalyptus open source code. The open source version is not to be confused with the commercial Eucalyptus Enterprise edition, also labeled 2.0, although it is based on a pre-2.0 version of the open source code.

The Eucalyptus open source code is issued under the GPL, contains features and functions ahead of the Enterprise edition, and can be freely downloaded. The firm is seeing 12,000 downloads in peak months and Eucalyptus is included in Canonical's Ubuntu Linux distribution, he said.

Eucalyptus scales across a larger server cluster more easily because the 2.0 version "has been clearer about the segregation of tasks. We no longer locate the cluster controller and the node controller on the same node," where they sometimes ended up in contention over resources, Mickos noted. Mickos, the former CEO of MySQL (now part of Oracle), joined Eucalyptus Systems in March.

Version 2.0 supports iSCSI disks as elastic block store volumes and allows the cloud builder to place an iSCSI storage controller on any server in a cluster, including outside the cloud domain of the cluster, if he chooses, Mickos said.
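
Because Eucalyptus exposes EC2-compatible APIs, an elastic block store volume backed by one of those iSCSI storage controllers can be created and attached with ordinary EC2 tooling. Here is a minimal sketch using the boto library; the endpoint address, credentials, cluster name, and instance ID are hypothetical placeholders, not values from the article:

```python
import boto
from boto.ec2.regioninfo import RegionInfo

# Hypothetical Eucalyptus cloud controller endpoint and placeholder credentials.
region = RegionInfo(name="eucalyptus", endpoint="192.168.1.10")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR_EUCA_ACCESS_KEY",
    aws_secret_access_key="YOUR_EUCA_SECRET_KEY",
    is_secure=False,
    region=region,
    port=8773,
    path="/services/Eucalyptus",
)

# Create a 10 GB EBS-style volume in a cluster (availability zone),
# then attach it to a running instance as a block device.
volume = conn.create_volume(10, "cluster01")
conn.attach_volume(volume.id, "i-3FDC0859", "/dev/sdb")
print("Attached %s to i-3FDC0859" % volume.id)
```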

Version 2.0 also supports the open source virtio, an API for virtualizing I/O that is used by the open source KVM hypervisor. KVM is included in distributions of Red Hat Enterprise Linux and Novell's SUSE Linux Enterprise System. Virtio uses a common set of I/O virtualization drivers that are both efficient and potentially adaptable for use by other hypervisor suppliers, Mickos said. Virtual I/O consists of a virtual machine sending both its communications traffic and storage traffic through the hypervisor to a virtual device, rather than through a server's network interface card or host bus adapter. From the virtual device, it can be moved off the virtualized server into the network fabric and handled more efficiently there.
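
In practice, a guest takes advantage of virtio by declaring virtio-backed devices in its definition; the hypervisor then routes the guest's disk and network traffic through those paravirtualized devices instead of emulating physical hardware. A rough sketch using the libvirt Python bindings against a local KVM host follows; the domain name, disk image path, and network are illustrative assumptions:

```python
import libvirt

# Illustrative KVM guest definition with virtio disk and network devices.
domain_xml = """
<domain type='kvm'>
  <name>virtio-demo</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/virtio-demo.qcow2'/>
      <target dev='vda' bus='virtio'/>  <!-- paravirtualized block device -->
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>            <!-- paravirtualized NIC -->
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local KVM hypervisor
domain = conn.defineXML(domain_xml)    # register the guest definition
domain.create()                        # boot the guest
```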

Eucalyptus 2.0 also supports retrieval of specific versions of objects stored in Walrus, the Eucalyptus storage system that is compatible with Amazon's S3 storage service. Users may perform version control on objects as they are stored in Walrus and retrieve a specific version, as needed.
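
Since Walrus speaks the S3 API, this versioning behavior should be reachable through standard S3 client libraries. A hedged sketch with boto, assuming a hypothetical Walrus endpoint, placeholder credentials, and an illustrative bucket and key:

```python
import boto
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Hypothetical Walrus endpoint and placeholder credentials.
conn = S3Connection(
    aws_access_key_id="YOUR_EUCA_ACCESS_KEY",
    aws_secret_access_key="YOUR_EUCA_SECRET_KEY",
    host="192.168.1.10",
    port=8773,
    path="/services/Walrus",
    is_secure=False,
    calling_format=OrdinaryCallingFormat(),
)

bucket = conn.create_bucket("app-config")
bucket.configure_versioning(True)  # turn on object versioning for the bucket

key = bucket.new_key("settings.xml")
key.set_contents_from_string("<settings revision='1'/>")
key.set_contents_from_string("<settings revision='2'/>")  # stored as a new version

# List every stored version of the object, then fetch one by its version id.
versions = list(bucket.list_versions(prefix="settings.xml"))
for v in versions:
    print(v.name, v.version_id)
chosen = bucket.get_key("settings.xml", version_id=versions[-1].version_id)
print(chosen.get_contents_as_string())
```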

Eucalyptus to some extent now mimics the slogan of the OpenStack project, started recently by Rackspace, which claims it's building governance software for a million-node cloud, a scale that even the largest service providers have yet to reach.

"Sure Eucalyptus can support a million-node cloud, but the more important question is how large an application can you run on your cloud" and how effectively can you manage it there with your cloud software. Eucalyptus is concentrating on effective management for private clouds, not massive public infrastructure providers, Mickos said.

See also Derrick Harris' Eucalyptus Anchors the Latest Cloud Software Stack post of 8/25/2010 to GigaOm’s Structure blog in the Other Cloud Computing Platforms and Services section below.


<Return to section navigation list> 

Cloud Security and Governance

Mark O’Neill posted Cloud Security - The Question of API Keys on 9/1/2010:

I had a really good discussion with Kaitlin Brunsden from EbizQ on the topic of Cloud Security in general, and API Keys in particular. All too often, CISOs and IT managers do not realize that if their organization is using Amazon Web Services (AWS), for example, then the Secret Key ID used to authenticate to AWS is often sitting on a hard drive or coded into an application.

This Secret Key ID, in combination with the Access Key ID (which is readily available through traffic logs), can be used by a malicious user to provision or terminate virtual machines, to access data in Cloud-based queues or databases, or simply to run up a large charge that will then hit the credit card linked to the API keys. Vordel can help, by protecting the API keys in the same way that our products protect keys used in other contexts (e.g. private keys for SSL).
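
To see why the secret key deserves the same care as an SSL private key, it helps to recall how AWS query-API requests are authorized: the caller signs the request parameters with an HMAC keyed by the secret key, so anyone who obtains that key can mint valid requests such as TerminateInstances. A minimal sketch of signature version 2 signing, with illustrative keys and instance ID:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

ACCESS_KEY = "AKIAEXAMPLEACCESSKEY"   # travels with every request
SECRET_KEY = "wJalrEXAMPLESECRETKEY"  # must stay secret; it is what signs requests

params = {
    "Action": "TerminateInstances",
    "InstanceId.1": "i-0123abcd",
    "AWSAccessKeyId": ACCESS_KEY,
    "SignatureVersion": "2",
    "SignatureMethod": "HmacSHA256",
    "Timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "Version": "2010-08-31",
}

# Canonicalize the query string, build the string to sign, and compute the HMAC.
query = "&".join("%s=%s" % (k, quote(str(params[k]), safe="-_.~")) for k in sorted(params))
string_to_sign = "\n".join(["GET", "ec2.amazonaws.com", "/", query])
signature = base64.b64encode(
    hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha256).digest()
).decode()

# Whoever holds SECRET_KEY can produce this signature, and the call will be honored.
print("https://ec2.amazonaws.com/?%s&Signature=%s" % (query, quote(signature, safe="")))
```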

The podcast is here: http://www.ebizq.net/blogs/guest_session/2010/08/cloud-security-talking-with-vordel.php.

I understand that key management problems have contributed to the delay in enabling Transparent Data Encryption (TDE) for SQL Azure.

<Return to section navigation list> 

Cloud Computing Events

Vassilis Touloum (@VassilisTouloum) requested responses re the Microsoft PDC group’s announcement of plans for OData session feeds:

image

Not sure what Vassilis has to do with PDC 2010, however.


Michael Coté recorded VMworld 2010 – IT Management & Cloud Podcast #79 on 9/2/2010:

John [Willis] and I give our take on VMware’s cloud and application development announcements at VMworld. We also discuss private cloud and what the announcements mean for other vendors, if anything.

Download the episode directly right here, or subscribe to the feed in iTunes or another podcatcher to have episodes downloaded automatically. …

I’ll have a write-up sometime soon. In the meantime, we say plenty in the recording.

Disclosure: numerous people mentioned are clients, including VMware.


Andrea Di Maio announced a September 8: Gartner Webinar on Government in the Clouds in this 9/2/2010 post:

Should anybody be interested in listening to what Gartner thinks about the outlook for cloud computing in government, please join me at our webinar “Government in the Cloud: The Truth Behind the Hype,” which will be held on September 8 at 10:00 ET.

Per its description on our web site:

The landscape for cloud computing adoption in government remains confusing. This session aims to bring clarity about the relationships between cloud and more-traditional sourcing models, about the rationale for adopting public rather than private cloud services, and about the multiple roles that government plays as user, provider, broker and regulator of cloud services.

What You Will Learn
  • What the opportunities and challenges of cloud computing are for government organizations
  • What government cloud services are being adopted, how they are being deployed and who the providers are
  • What the main criteria are for selecting government cloud services

The link for registration is here.

Andrea Di Maio is a vice president and distinguished analyst in Gartner Research, where he focuses on the public sector, with particular reference to e-government strategies, Web 2.0, the business value of IT, open-source software… Read Full Bio.

You might also be interested in Andrea’s related Is There A Government 2.0 Market? post of the same date.


Wes Yanaga recommended that you Register for the Windows Phone 7 Unleashed Events in the Western US in this 9/1/2010 post to the US ISV blog:

Windows Phone 7 is HOT! Come check out Windows Phone 7 Unleashed for everything you need to know to develop for WP7. Whether you’re a seasoned veteran or you’re just starting with .NET development, there’s something in it for you.

The first half of this deep-dive event is lecture and hands-on lab. At the halfway point of the day, you’ll have a solid foundation for building WP7 applications.

The second half of the day is going straight to code. Build the best app and have a chance to win!

In order to deliver the best possible experience for attendees, seating at these events is VERY limited. Register now!

PREREQUISITES:

  1. This event is a bring-your-own-laptop event. Wireless internet will be available, and you must have a laptop capable of connecting to wireless, as no hard-wired connection will be available.
  2. Download the tools BEFORE the event: http://bit.ly/WP7tools

DATE | CITY | STATE | LOCATION | REGISTER
09/11/10 | Irvine | CA | QuickStart | REGISTER
09/11/10 | Mountain View | CA | Microsoft Silicon Valley Campus | REGISTER
09/11/10 | Phoenix | AZ | Gangplank | REGISTER
09/18/10 | Los Angeles | CA | UCLA | Coming Soon!
09/18/10 | Albuquerque | NM | New Horizons Learning Center | REGISTER
09/18/10 | Orange County | CA | Honda | Coming Soon!
09/18/10 | Inland Empire | CA | DeVry University | REGISTER
09/25/10 | Denver | CO | Microsoft Denver Office | REGISTER
09/25/10 | Mission Viejo | CA | Microsoft Store | Coming Soon!
09/25/10 | Los Angeles | CA | Robert Half | Coming Soon!
10/1/2010 | Salt Lake City | UT | Microsoft Salt Lake City Office | Coming Soon!
10/9/2010 | Mission Viejo | CA | Microsoft Store | Coming Soon!
10/25/2010 | Mission Viejo | CA | Microsoft Store | Coming Soon!

Events are being confirmed daily; please subscribe to an RSS feed on www.msdnevents.com for updates on events in your area!

Session 1 – Introduction to Windows Phone 7 Programming

In this session, we start with a discussion of Windows Phone, the architecture changes made from 6.5 to 7.0, and the hardware specifications, and then move into the beginnings of building a WP7 application, including:

  • Sensor overview
  • Application life cycle
  • Splash screen and the importance of 1 second / 19 second loading
  • Files associated with project template
  • Application Bar
  • Live Tile
  • Application Icon

The session will end with a brief discussion of the Windows Phone 7 Marketplace before entering the hands-on lab (HOL).

HOL – The screens that will be built during this session are:

  • Project Set-up
  • Live Tile
  • Splash screen
  • Home Screen with Application Bar
Session 2 – Connecting to Services

In this session, we will discuss how Cloud Services help to bring power to the phone. We will bind to REST-based services and show how to search and display the information received. We will also talk about navigation, passing information between screens, and simple page animations while working with list and detail information.

HOL – The screens that will be built during this session are:

  • Search Screen
  • Display Screen
  • Detail Screen
  • Additional Application Bars
Session 3 – Recording Data

In this session we will be working with the camera to capture and crop photos, record audio files, take notes, save location data, share content (sending emails and SMS messages, and if time permits, sending to Twitter or Facebook), and save everything using isolated storage. We will discuss tombstoning and how it affects your application process, including the when, where, and why of saving state.

HOL – The screens that will be built during this session are:

  • Add New Wine
  • Record Audio
  • Picture Capture and Crop
  • My Wines
For More Information on How to Get Started with Windows Phone
Getting Started on Windows Phone 7

Large or small, all developers will have an equal opportunity to capitalize on the first-mover advantage of having their apps or games ready at launch. In order to do that, there are a few things developers will need to do:

  1. Register at the marketplace today
  2. Finish your application or game using the Beta tools
  3. Download the final Windows Phone Developer Tools when released on September 16
  4. Recompile your app or game using the final tools
  5. Have your XAP ready for ingestion into the marketplace in early October when it opens.
Free Developer Training
Free Developer Assistance, Marketing Support

Microsoft Platform Ready (MPR) is designed to give you what you need to plan, build, test and take your solution to market. We’ve brought a range of platform development tools together in one place, including technical, support and marketing resources, as well as exclusive offers. With MPR, you can:

  • Access training and webinars to help you get compatible.
  • Test your application with online resources and testing tools.
  • Utilize marketing toolkits including customizable templates for email, letters and presentations.

Sign up today for MPR for Windows Phone 7 at Microsoft Platform Ready.

Obviously, WP7 will be a major client OS for Windows Azure apps.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Alex Handy reported A fast start for an open-source cloud (OpenStack) on 9/2/2010:

In July, Citrix, NASA, Rackspace and a host of other companies announced that they were forming the OpenStack project. This two-pronged effort aims to construct the open-source pieces needed to provide cloud infrastructure both on the desktop for testing and in the data center for cloud hosting. Thus far, OpenStack consists of a generally available object-storage system and a compute-cluster management system that is slated for an Oct. 21 release.

Rackspace is taking on most of the work to build OpenStack, and the object-storage component of the stack is a direct release of the software that runs behind Rackspace Cloud's storage system. The compute portion of OpenStack will be a combination of NASA's Nebula project and Rackspace's own internal software. But according to three of the men tasked with growing and maintaining this new community, the community has already made thousands of code contributions in the six short weeks since the platform's announcement.

SD Times: How did you come to choose NASA's Nebula project for OpenStack?
Mark Collier, vice president of community and business development, OpenStack and Rackspace: The compute project started with some good code from NASA's Nebula project. In that case, we were looking at other projects out there to see whether there was something we could use, or whether we should release the code we had developed at Rackspace.

We discovered that this code NASA had written was very strong. They'd run into a lot of the challenges we'd seen running at scale. Where you'll see us going with the compute project is there are a lot of people contributing in the community, but a lot of the chunks of code from Rackspace Cloud and NASA are already done. In the weeks since launch, we've seen a huge outpouring in the community. Citrix has been very active in providing support for Xen Server.
One of the goals we had from the beginning was to be hypervisor-agnostic. That's happened much faster than we expected. Thanks to contributions from the community, we now support KVM, Xen and VirtualBox.

Read More: Next Page, Pages 2, 3


Jeff Barr announced an Amazon EC2 Price Reduction on 9/2/2010:

We're always looking for ways to make AWS an even better value for our customers. If you've been reading this blog for an extended period of time you know that we reduce prices on our services from time to time.

Effective September 1, 2010, we've reduced the On-Demand and Reserved Instance prices on the m2.2xlarge (High-Memory Double Extra Large) and the m2.4xlarge (High-Memory Quadruple Extra Large) by up to 19%. If you have existing Reserved Instances, your hourly usage rate will automatically be lowered to the new usage rate, and your estimated bill will reflect these changes later this month. As an example, the hourly cost for an m2.4xlarge instance running Linux/Unix in the us-east Region drops from $2.40 to $2.00. This price reduction means you can now run database, memcached, and other memory-intensive workloads at substantial savings. Here's the full EC2 price list.

As a reminder, there are many different ways to optimize your costs. When compared to On-Demand Instances, Reserved Instances enable you to reduce your overall instance costs by up to 56%. You pay a low, one-time fee to reserve an instance for a one- or three-year period. You can then run that instance whenever you want, at a greatly reduced hourly rate.
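
A quick back-of-the-envelope comparison shows how the one-time fee plus a lower hourly rate can beat on-demand pricing for an always-on instance. In the sketch below, the reservation fee and reserved hourly rate are illustrative placeholders, not Amazon's published numbers; check the EC2 price list for the actual figures:

```python
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 2.00       # m2.4xlarge Linux/Unix, us-east, per the post above
reserved_upfront = 5000.00  # illustrative one-year reservation fee (placeholder)
reserved_rate = 0.85        # illustrative reserved hourly rate (placeholder)

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = reserved_upfront + reserved_rate * HOURS_PER_YEAR

savings = 1 - reserved_cost / on_demand_cost
print("On-demand: $%.0f  Reserved: $%.0f  Savings: %.0f%%"
      % (on_demand_cost, reserved_cost, savings * 100))
```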

For background processing and other jobs where you have flexibility in when they run, you can also use Spot Instances by placing a bid for unused capacity. Your job will run as long as your bid is higher than the current spot price.
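
Placing a bid is a single API call. Here is a hedged boto sketch that checks recent spot prices and then submits a one-time request; the AMI, bid price, and instance type are placeholders, and credentials are assumed to come from the environment or boto configuration:

```python
import boto

conn = boto.connect_ec2()  # reads AWS credentials from the environment or boto config

# Peek at the recent spot price history for the instance type we want.
history = conn.get_spot_price_history(
    instance_type="m2.4xlarge", product_description="Linux/UNIX"
)
if history:
    print("Most recent spot price: $%s" % history[0].price)

# Bid $1.00/hour for one instance; the request is fulfilled only while our
# bid exceeds the going spot price.
requests = conn.request_spot_instances(
    price="1.00",
    image_id="ami-12345678",   # placeholder AMI
    count=1,
    type="one-time",
    instance_type="m2.4xlarge",
)
print("Submitted spot request %s" % requests[0].id)
```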


Marcus Klems announced on 9/2/2010 that Cloud Computing Australasia 2010 will be held in Sydney, Australia on 11/30/2010:

Cloud Computing Australasia is a one-day, end-user, discussion-led event (30th November) followed by interactive workshops (1st December). The event will focus on practical plans for implementation and preparedness for migration to the cloud. It will enable delegates to fully investigate the strategies to harness this transformational business concept and find solutions to their concerns over trust and control within Cloud environments.

If you currently have questions surrounding compliance, how secure your data will be in the Cloud, how your existing IT infrastructure will be migrated and preparatory steps for Cloud, register for Cloud Computing Australasia in Sydney this November.

To view the full program and speaker line-up visit www.cloud-compute.com.au. You can also gain access to a range of free speaker interviews, industry articles and blogs.


Alex Williams asserted Verizon: A High Profile VMware Service Provider with Extra Importance in a 9/1/2010 post to the ReadWriteCloud blog:

Service providers are a lynchpin in VMware's strategy to extend its virtualization technology into the cloud.

If the strategy works, VMware could become the leader in providing hybrid cloud infrastructures. It's a test for VMware's new foray into the cloud. And it has its risks. VMware is looking to provide an end-to-end platform that extends from the enterprise data center to the public cloud environments managed by the service providers.

How the technology performs will come to define VMware's position in the vast network of service providers that work with the majority of enterprise customers.

Verizon is one of the companies that has opted to begin using VMware vCloud Director. It has to be VMware's highest-profile partner, but in cloud computing Verizon is not well known.

But Verizon is making a play in cloud computing. It calls its platform Computing-as-a-Service (CaaS).

The latest enhancement to its CaaS platform is built on VMware vCloud Datacenter. It uses VMware's cloud infrastructure technology, including VMware vSphere, the new VMware vCloud Director and VMware vShield security solutions.

Verizon is using the VMware technology to provide customers with both performance and security that can be audited. Verizon is banking that VMware will help enhance its security features such as layer 2 isolation and LDAP integration.

Verizon will launch the service initially with the InterContinental Hotels Group. It's a worldwide field trial and a test to determine how the VMware platform will perform in the demanding and dynamic world of the hotel business.

The service providers are critical to VMware's success. Verizon will be a test to see how well the new VMware technology performs in a demanding environment.

Verizon is offering a sophisticated service. The goal is to extend its service with VMware across enterprise and public clouds. If it can do that, it will help show how VMware can help service providers deliver hybrid cloud environments for their customers.


John Brodkin asserted “VMware's Project Horizon targets cloud-based desktop apps on any device” in a deck for his VMware aims to displace Windows with cloud-based desktop apps post to NetworkWorld’s Data Center blog:

VMware is developing a new hosted service with the code name "Project Horizon" that will allow delivery of cloud-based desktop applications to any sort of user device, and perhaps further its goal of diminishing the importance of Microsoft's Windows operating system.

The subscription service, previewed at VMworld this week, will help deliver the right applications and data to users, whether they have an iPad, Android phone, Windows machine or a Mac, according to VMware. Partners will be involved in Project Horizon, presumably to deliver the end-user applications.

Details on Project Horizon are scarce, but the key seems to be a security model that extends on-premise directory services to public cloud networks, giving each user a "cloud identity," as VMware puts it.

Project Horizon may also play a role in VMware's long-term project to diminish the importance of the operating system, particularly the Windows operating system sold by Microsoft, its greatest rival. (See also: VMware says Windows still matters … sort of)

Project Horizon -- to be available as a hosted, subscription service sometime in 2011 -- will create a "permissions and control structure that worries less about the operating system" than current technologies do, says Noah Wasmer, a director of product management for VMware.

"The role of the operating system is getting diminished every day on the server side," and a similar shift is beginning to happen on the desktop, claims Vittorio Viarengo, vice president of desktop marketing for VMware. For users, "Windows is becoming the offline mode" as they increasingly use applications hosted entirely over the Web, he says.

Microsoft, of course, presents a different argument. Virtualization, particularly on the server side, is just a feature of the operating system, rather than a replacement of the OS, in Microsoft's view. Even on the PC, Microsoft provides virtual desktop technology within the Windows desktop operating system.

To state the obvious, operating systems and virtualization technologies have co-existed for decades, for example on IBM's mainframe, and will continue to do so for the foreseeable future.

But VMware's attempt to move from being a company that simply virtualizes operating systems to one that provides the broader operating frameworks for data centers and desktops is interesting, nonetheless.

Horizon "securely extends enterprise identities into the cloud and provides new methods for provisioning and managing applications and data based on the user, not the device or underlying operating system," VMware says.

Project Horizon aims to provide access to various types of applications including software-as-a-service, legacy applications and mobile apps. One example mentioned by Wasmer is a calendar and contacts application. But VMware is trying to build up interest in the project without getting too specific about it. Wasmer mentioned the phrase "cloud-based desktop," but whether Project Horizon will be robust enough to replace existing desktops remains to be seen.

Read more: 2, Next >

Sounds to me like swapping a known operating system for an unknown one.


Barbara Duck (@MedicalQuack) charted The Return of the Thin Client to Hospitals–Interfaith Medical Center in Brooklyn Transitions From PCs To Virtual Desktops on 8/31/2010:

The hospital is using virtualization technology from VMware, and the medical records system in use is MEDITECH, one of the big ones around for hospitals. Nobody has to think twice about why this looks good when it comes to supporting a thin client versus a PC, as there is nothing on the device other than a little hardware. Back in the old days we used to call these “dumb clients,” and I think “thin client” has a much nicer ring to it.

Wyse is the supplier, and they have been around for a long time with thin clients; I used to do business with one of their divisions years ago in Garden Grove, CA. The price of a device is something to look at too, as there are no ‘guts’ in the thin clients. Here's a picture of the VMware unit, and it requires no operating system at all. You just hook up the monitor(s), keyboard and mouse, and you are pretty much ready to go. All data is stored on the server, and it is much easier for the IT department to manage: one place to go to troubleshoot.

Thin clients also have a lot going for them on the security front. It is interesting to see this transition; it may not be for everyone, but it certainly beats a help desk running after and supporting PCs in their environment.

 

New York-based Healthcare Leader Turns to Wyse and VMware View™ to Deliver Secure Patient Care and Meet HIPAA Regulations at Lower TCO | EON: Enhanced Online News
SAN FRANCISCO--(EON: Enhanced Online News)--Today at VMworld 2010, Wyse Technology, the global leader in cloud client computing, announced how its customer, Interfaith Medical Center, has begun the transition away from PCs in favor of Wyse thin clients. Interfaith, a multi-site community teaching healthcare system that provides a wide range of medical services throughout Brooklyn, New York, operates a newly-modernized hospital and 16 clinics, serving more than 250,000 patients every year. Squeezed by a combination of rising costs and declining reimbursements, Interfaith looked for tools that would help it continue to deliver excellent care while reducing administrative costs.
The Interfaith IT team saw MEDITECH training as an opportunity to begin the transition away from PCs toward virtual desktops. With the assistance of Tech Access Corporation, a regional Wyse reseller, they decided to implement Wyse thin clients to train healthcare workers. The team set up four training labs, each with 15 desktop workstations, rotating all staff through a multi-day training course.


Derrick Harris claimed Eucalyptus Anchors the Latest Cloud Software Stack in this 8/25/2010 post to GigaOm’s Structure blog:

For a long time, Eucalyptus Systems' flagship customer was NASA, which was using the company's open-core cloud software as the foundation of its Nebula project. When the OpenStack project launched last month, we learned that co-leader (along with Rackspace) NASA had abandoned Eucalyptus to roll its own malleable, vastly scalable cloud code. Apparently, Eucalyptus was determined to be part of an integrated cloud stack, because it's announced a technology partnership with newScale and rPath that aims to give businesses a ready-to-go cloud platform.

Technology-wise, it seems like a trio of building blocks that fit together nicely. Eucalyptus, which we’ve covered extensively, provides a foundation for turning existing resources into an Amazon EC2-style cloud computing infrastructure. rPath enables automation of both platform and application stacks via a software repository that knows which components are required for any given workload or user. newScale offers a self-service frontend for letting business users provision their own resources, with the IT department’s policies already built into the experience (i.e., users only have the option of getting what they’re approved to get).

Business-wise, it's difficult to argue with the idea of strength through unity, but the Eucalyptus-rPath-newScale platform will find it tough going to win customers. Given Eucalyptus’ NASA connection, the most obvious comparison will be to the aforementioned — and free — open-source OpenStack.

However, as I detail in a new report on GigaOM Pro, VMware also has designs on providing a top-to-bottom cloud experience, and it has plenty of competitive advantages. Then there are the IaaS-in-a-box startups like Nimbula, which just received another $15 million, and Cloud.com, which announced it can run atop VMware vSphere environments. Or perhaps customers will consider lesser-known, but certainly not less-capable, options like Platform Computing, which announced a $5,000 starter version of its ISF cloud software.

Internal cloud software options and approaches appear to be growing with each passing week. Solo vendors, vendor partnerships, startups, huge vendors, proprietary, open source, open core, IaaS, PaaS, hybrid … it’s never-ending. Assuming Eucalyptus hasn’t lost too much luster after the NASA loss, its mindshare momentum and renowned CEO could help raise this new partnership’s voice above the noise. Proving its approach is the right one in such a nascent, crowded market won’t be so easy, though.

Image courtesy of Flickr user Budzlife.

<Return to section navigation list> 
