Sunday, January 23, 2011

Windows Azure and Cloud Computing Posts for 1/22/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

James Senior presented a 00:02:38 Get Windows Azure Storage working in WebMatrix in 3 minutes Channel9 video clip on 10/6/2010 (missed when published):

James Senior (@jsenior) shows how easy it is to create a table, write to it, and then retrieve data from Windows Azure Storage using the helper in WebMatrix.  He starts off with a blank website and, using NuPack, gets the helper he needs to complete his task in three minutes.

  • Download WebMatrix
  • Learn more about WebMatrix at a free Web Camp near you

He also published a 00:13:21 New: Windows Azure Storage Helper for WebMatrix clip to Channel9 on 8/27/2010:

Hot on the heels of the OData Helper for WebMatrix, today we are pushing a new helper out into the community. The Windows Azure Storage Helper makes it ridiculously easy to use Windows Azure Storage (both blob and table) when building your apps. If you’re not familiar with “cloud storage,” I recommend you take a look at this video where my pal Ryan explains what it’s all about. In a nutshell, it provides infinitely simple yet scalable storage, which is great if you run a website with lots of user-generated content and need your storage to grow auto-magically with the success of your apps. Tables aren’t your normal relational databases, but they are great for simple data structures and they are super fast.

Read more about the helper on my blog.
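The helper hides most of the plumbing. For readers who want to see what the same create-a-table, write, then read cycle looks like without the helper, here is a minimal sketch written directly against the Windows Azure storage client library that ships with the 1.x SDK; the table name and entity are my own illustration, not something from James’ clips.

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// A simple table entity; PartitionKey and RowKey are inherited from TableServiceEntity.
public class NoteEntity : TableServiceEntity
{
    public NoteEntity() { }

    public NoteEntity(string author, string text)
    {
        PartitionKey = author;
        RowKey = Guid.NewGuid().ToString();
        Text = text;
    }

    public string Text { get; set; }
}

public static class NotesTable
{
    public static void CreateWriteRead(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("Notes");          // create the table

        var context = tableClient.GetDataServiceContext();
        context.AddObject("Notes", new NoteEntity("james", "Hello from table storage"));
        context.SaveChanges();                               // write the entity

        var notes = context.CreateQuery<NoteEntity>("Notes") // read it back
                           .Where(n => n.PartitionKey == "james")
                           .ToList();
    }
}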


<Return to section navigation list> 

SQL Azure Database and Reporting

Peter Kellner explained SqlAzure and a Best Practices way to deal with the Required Retries on Connections in a 1/21/2011 post:

Introduction

If you’ve started using SqlAzure as the SQL Server for your Azure application, you’ve probably discovered that you get a reasonable number of connection failures.  The advice from the Azure team is to add retry logic to all your connections to SqlAzure. There is a long discussion posted by the Azure team here.

The key paragraph states the problem as follows:

The Problem
One of the things that SQL Azure does to deliver high availability is it sometimes closes connections. SQL Azure does some pretty cool stuff under the covers to minimize the impact, but this is a key difference in SQL Azure development vs. SQL Server development.

Basically, this means that you must be able to deal with connections failing when you call SqlAzure.  It’s something all of us probably should have been doing forever, but because SQL Server usually runs on the local LAN, the likelihood of a connection failing was next to zero unless something else was going terribly wrong, so it certainly wasn’t something we had to do on a regular basis.  To emphasize that even more, most of the controls built into ASP.NET that open connections to SQL Server don’t even do this, and that’s from Microsoft itself.

The solution proposed in the thread mentioned above basically has you add tons of code to every place you access a connection object.  Personally, I don’t like that, because I have hundreds if not thousands of places where I open connections, and inserting tens of thousands of lines of new, untested code is a little scary.
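To see why that approach is unattractive, here is a rough sketch (my own illustration, not code from the thread) of the kind of inline retry wrapper you would have to thread through every call site. Note its weakness: a plain catch of SqlException retries every failure, transient or not.

using System;
using System.Data.SqlClient;
using System.Threading;

public static class NaiveRetry
{
    // Repeating something like this around every data-access call is the
    // "tons of code" problem; it also can't tell a dropped connection from a real bug.
    public static T ExecuteWithRetry<T>(Func<T> sqlWork, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return sqlWork();
            }
            catch (SqlException)
            {
                if (attempt >= maxAttempts)
                {
                    throw;                       // out of attempts; surface the last failure
                }
                Thread.Sleep(100 * attempt);     // crude incremental back-off
            }
        }
    }
}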

So, what to do?

Fortunately, another team at Microsoft, the Windows Server AppFabric Customer Advisory Team, has published a general-purpose solution using extension methods and some darn clever coding. They wrote a great article and published code, including Azure examples, that solves this problem very elegantly without requiring a lot of changes to your existing code base.

In this article I plan on giving an example and publishing a sample project that uses this code with SqlAzure to solve the connection retry problem.  My goal here is not to restate what they published but to provide a very simple, concrete example of using their library.

Design Goals

We have three goals:

  1. Change as little code as possible.

  2. Log connection errors when they happen, along with where they occurred.

  3. Make sure not to trap errors that are NOT connection related, such as bad column names.

Incorrect Code

So, this is what the original code looks like; it will fail under SqlAzure because it does not have connection retry logic:

public static int UsersIdFromUserNameNoConnectionRetry(string userName)
{
    var retUsersId = 0;
    const string sql =
        @"SELECT Id FROM Users
          WHERE Username = @Username";

    using (var sqlConnection = new SqlConnection(
        ConfigurationManager.ConnectionStrings["CRStorageWebConnectionString"].ConnectionString))
    {
        sqlConnection.Open();
        using (var sqlCommand = new SqlCommand(sql, sqlConnection))
        {
            sqlCommand.Parameters.Add("@Username", SqlDbType.NVarChar).Value = userName;
            using (var reader = sqlCommand.ExecuteReader())
            {
                while (reader.Read())
                {
                    retUsersId = reader.IsDBNull(0) ? 0 : reader.GetInt32(0);
                }
            }
        }
    }
    return retUsersId;
}

So, if there is a connection failure, an exception will be thrown and will need to be caught, but the method will not have done its job.

Correct Code With Retries

So, now take a look at the revised code after the library is set up and used.  The setup is non-trivial, but you only have to do it once, and then you can fix all your other code with very few changes.  Below is the new code with connection retry logic built in.

public static int UsersIdFromUserName(string userName)
{
    var retUsersId = 0;
    const string sql =
        @"SELECT Id FROM Users
          WHERE Username = @Username";

    using (var sqlConnection =
        new ReliableSqlConnection(
            ConfigurationManager.ConnectionStrings["CRStorageWebConnectionString"].ConnectionString,
            new RetryUtils("", "UsersIdFromUserName").GetRetryPolicy()))
    {
        sqlConnection.Open();
        using (var sqlCommand = new SqlCommand(sql, sqlConnection.Current))
        {
            sqlCommand.Parameters.Add("@Username", SqlDbType.NVarChar).Value = userName;
            using (var reader = sqlCommand.ExecuteReader())
            {
                while (reader.Read())
                {
                    retUsersId = reader.IsDBNull(0) ? 0 : reader.GetInt32(0);
                }
            }
        }
    }
    return retUsersId;
}

There are basically two changes. 

  1. The first is that instead of creating a SqlConnection(…) we are creating a ReliableSqlConnection(…).  The ReliableSqlConnection takes an extra parameter that basically wraps the retry policy used, as well as labelling this connection so that when it fails, it gets logged.  In the log, there will be the comment “UsersIdFromUserName” so we know which method triggered the retry.
  2. The second is a slight change when we create the SqlCommand: we have to pass the connection’s .Current property so we know we are talking about the currently executing connection.  There may be a default way to handle this, but I could not figure it out.

That’s it!  You are now set up to retry failed connections.

The Setup Pieces

First, download the library from the article mentioned above; it can be found in Microsoft’s Code Gallery:

http://code.msdn.microsoft.com/Project/Download/FileDownload.aspx?ProjectName=appfabriccat&DownloadId=14007

The project has all kinds of stuff in it that builds quite nicely, runs, and tests under VS2010.  All I’m interested in is the ADO.NET piece and the retry logic around that.  I actually used the 1.2 version; however, I now see there is a 1.3 version with some improvements.  I would post my project, but I don’t want to post it with old code, so I’ll just tell you the steps I went through so you can do the same.

Build the class library

Compile the project and make sure you have the DLL from the library included in your actual Visual Studio project.  The library you want is Microsoft.AppFabricCAT.Samples.Azure.TransientFaultHandling.


Update Your Web.config

Add the config section below to your web.config file.

<configSections>
  <section name="RetryPolicyConfiguration"
           type="Microsoft.AppFabricCAT.Samples.Azure.TransientFaultHandling.Configuration.RetryPolicyConfigurationSettings,
           Microsoft.AppFabricCAT.Samples.Azure.TransientFaultHandling" />
</configSections>

<RetryPolicyConfiguration defaultPolicy="FixedIntervalDefault"
                          defaultSqlConnectionPolicy="FixedIntervalDefault"
                          defaultSqlCommandPolicy="FixedIntervalDefault"
                          defaultStoragePolicy="IncrementalIntervalDefault"
                          defaultCommunicationPolicy="IncrementalIntervalDefault">
  <add name="FixedIntervalDefault" maxRetryCount="10" retryInterval="100" />
  <add name="IncrementalIntervalDefault" maxRetryCount="10" retryInterval="100" retryIncrement="50" />
  <add name="ExponentialIntervalDefault" maxRetryCount="10" minBackoff="100" maxBackoff="1000" deltaBackoff="100" />
</RetryPolicyConfiguration>

This defines a new section in your web.config, then implements it with several different retry policies, including a default one that seems quite reasonable to me.

If you remember, in the implementation section above, we used a public class called RetryUtils.  This is actually a convenience class I invented to minimize the code I have to change on each use of the ReliableSqlConnection object.  Just to refresh your memory, the usage is this:

using (var sqlConnection =
    new ReliableSqlConnection(
        ConfigurationManager.ConnectionStrings["CRStorageWebConnectionString"].ConnectionString,
        new RetryUtils("", "UsersIdFromUserName").GetRetryPolicy()))

The actual code for the RetryUtils class is below.  You’ll have to stick this someplace in your project.

using System;
// Also requires a reference to the Microsoft.AppFabricCAT.Samples.Azure.TransientFaultHandling
// library, which defines RetryPolicy and SqlAzureTransientErrorDetectionStrategy.

namespace Utils
{
    public class RetryUtils
    {
        public string Username { get; set; }
        public string CallersName { get; set; }

        public RetryUtils(string username, string callersName)
        {
            Username = username;
            CallersName = callersName;
        }

        public RetryUtils()
        {
            Username = string.Empty;
            CallersName = string.Empty;
        }

        // Builds the retry policy used by ReliableSqlConnection and hooks up logging.
        public RetryPolicy<SqlAzureTransientErrorDetectionStrategy> GetRetryPolicy()
        {
            var retryPolicy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(
                RetryPolicy.DefaultClientRetryCount,
                RetryPolicy.DefaultMinBackoff,
                RetryPolicy.DefaultMaxBackoff,
                RetryPolicy.DefaultClientBackoff);

            retryPolicy.RetryOccurred += RetryPolicyRetryOccurred;

            return retryPolicy;
        }

        // Called by the policy each time a retry occurs; logs the details of what happened.
        void RetryPolicyRetryOccurred(int currentRetryCount, Exception lastException, TimeSpan delay)
        {
            GeneralUtils.GetLog4NetAllDataContext().AddLog4NetAll(new Log4NetAll
            {
                Date = DateTime.UtcNow,
                EllapsedTime = 0,
                ExceptionStackTrace = lastException.StackTrace,
                Message = "RetryCount: " + currentRetryCount + " delay: " + delay.Seconds,
                ExceptionMessage = lastException.Message,
                Logger = "",
                Level = "Error",
                UserName = Username,
                PartitionKey = Username,
                RowKey = Guid.NewGuid().ToString()
            });
        }
    }
}

This code actually has the nice callback delegate that does the logging when a retry actually occurs.  I’m not including my implementation of GeneralUtils.GetLog4NetAllDataContext().AddLog4NetAll, but you can pretty much guess what it does.  It simply logs the retry with all the details of what happened.  My implementation sticks this in an Azure Table, but that’s really for another post.
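Purely as a hypothetical reconstruction (none of this is from Peter’s post), the missing pieces might look something like the following: a table entity whose property names match the callback above, and a small data context whose AddLog4NetAll method writes the entry to an Azure table. GeneralUtils.GetLog4NetAllDataContext() would then simply return an instance of this context built from the configured storage account.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical log entity; PartitionKey and RowKey are inherited from TableServiceEntity.
public class Log4NetAll : TableServiceEntity
{
    public DateTime Date { get; set; }
    public int EllapsedTime { get; set; }
    public string ExceptionStackTrace { get; set; }
    public string Message { get; set; }
    public string ExceptionMessage { get; set; }
    public string Logger { get; set; }
    public string Level { get; set; }
    public string UserName { get; set; }
}

// Hypothetical data context that persists log entries to the "Log4NetAll" table.
public class Log4NetAllDataContext : TableServiceContext
{
    private const string TableName = "Log4NetAll";

    public Log4NetAllDataContext(CloudStorageAccount account)
        : base(account.TableEndpoint.AbsoluteUri, account.Credentials)
    {
        account.CreateCloudTableClient().CreateTableIfNotExist(TableName);
    }

    public void AddLog4NetAll(Log4NetAll entry)
    {
        AddObject(TableName, entry);   // queue the insert
        SaveChanges();                 // push it to table storage
    }
}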

Non Connection Related Errors

Remember, our third design goal is that we should only retry on errors that are connection related and not on things like a bad column name.  With no additional work, this library takes care of that for us.  In fact, the release notes for the 1.3 release (which I have not used yet) say they have improved that feature by adding additional error codes not to retry on.  After all, the last thing you want is your code spending 5 minutes retrying on a problem you’d just like reported immediately.

Keep in mind that the library we are using here is a general-purpose retry library, not designed just for use with ADO.NET.  The team has provided examples using LINQ to SQL, Entity Framework and other technologies.  I spent a little time reading about those other technologies but did not get far enough to blog about them.  My current SqlAzure implementation uses 100% ADO.NET because performance is critical to me, and neither EF nor LINQ to SQL is quite up to my task yet.

Conclusions

So basically, that’s it!  You now have the tools to implement very nice retry logic in your ADO.NET code so your Azure application will not fail on “normal” connection failures.  If you are using SqlAzure, I suggest you implement this as soon as possible so your code will be solid going forward.


James Vastbinder (@jamesvastbinder) [pictured below] posted a very brief Preview of SQL Azure Federations Connectivity Model to the InfoQ blog on 1/21/2011:

Earlier this week Cihan Biyikoglu of Microsoft provided a preview of how developers will need to adapt their code for the upcoming SQL Azure Federations to support its connectivity model.  SQL Azure Federations will support a new connection option called FILTERING, used when working with federated data, and will provide the re-partitioning component that supports full availability.  The intent is to provide a safe model for developers working with federated data and/or multi-tenant applications.

Sharding Pre-SQL Azure Federations

Today, before the release of SQL Azure Federations, Azure developers and architects must overcome two main issues when dealing with federated data: connection pool management and cache coherency.  Fragmentation is a developer's largest concern in regard to connection pool management, as connections tend to become stale over time.  Cache coherency is difficult to maintain during data movement among shards, and developers need to be careful to update the shard map at the same time the data movement occurs.

Coming with Federations

Federations will free developers from having to manage connections to individual shards; instead, they will issue a USE FEDERATION statement, like the one below from Cihan's preview:

USE FEDERATION orders_federation(customer_id=55) WITH RESET, FILTERING=ON

With filtering turned on, the desired predicate, in this case customer_id=55, is applied automatically on behalf of the developer. Turning filtering off is suggested only when performing bulk operations or when querying data across many atomic units.
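The statement is plain T-SQL, so from application code it can simply be issued over an ordinary ADO.NET connection before the real query. SQL Azure Federations was still pre-release when this was written, so the sketch below is an assumption based on Cihan's preview; the orders_federation name comes from his example, while the Orders table and its columns are invented for illustration.

using System;
using System.Data.SqlClient;

public static class FederationSample
{
    public static void PrintOrdersForCustomer55(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Route this connection to the federation member that holds customer_id = 55.
            // FILTERING=ON scopes subsequent queries to that atomic unit automatically.
            using (var useFederation = connection.CreateCommand())
            {
                useFederation.CommandText =
                    "USE FEDERATION orders_federation(customer_id=55) WITH RESET, FILTERING=ON";
                useFederation.ExecuteNonQuery();
            }

            using (var query = connection.CreateCommand())
            {
                // No explicit customer_id predicate is needed; filtering supplies it.
                query.CommandText = "SELECT OrderId, OrderDate FROM Orders";
                using (var reader = query.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetDateTime(1));
                    }
                }
            }
        }
    }
}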

Federations will also provide strong support for multi-tenancy in applications through the built-in sharding capabilities, which over time should reduce the number of individual databases needing management and care.  The SQL Azure team has yet to announce when SQL Azure Federations will be released.

For more details and links to related sharding topics, see my Resource Links for SQL Azure Federations and Sharding Topics post of 1/19/2011.


Rajesh Padmanabhan, BI Engineering Senior PM Lead, and Sanjay Soni, Senior BI Technology Evangelist from Microsoft IT explained How Microsoft IT is Using SQL Azure to Enable Self-Service BI in a 00:16:45 TechNet Edge video clip:


Learn how Microsoft IT is delivering Data as a Service (DaaS) for the Enterprise Data Warehouse to various Microsoft businesses and IT groups. DaaS is being implemented as the Data Services Layer (DSL) platform, which abstracts and exposes data from on-premises sources and SQL Azure (cloud). DSL is a data distribution platform that provides a single location for Microsoft users to explore, publish and consume EDW and other external data from multiple data sources. Data can be consumed by LOB applications, PowerPivot business users, database applications and Windows Phone in various forms, including scorecards, reports, SQL queries, web services and self-service BI.

Other related video clips from Microsoft IT:


<Return to section navigation list> 

MarketPlace DataMarket and OData

The Programming4Us blog posted OData with SQL Azure - Enabling OData on an Azure Database on 1/19/2011:

When you first look at OData and its interoperation with SQL Azure, you may think you're in for a lengthy process and a ton of reading. Wrong. Enabling OData is simple and takes no time at all. This section spends a couple of pages walking you through the process of enabling OData on your SQL Azure database.

1. Getting Started at SQL Azure Labs

Browse to SQL Azure Labs. Then, follow these steps:

    1. On the home page, you see a list at left of things Microsoft is working on. Click the SQL Azure OData Services tab, which first asks you to log in using a Windows Live account.

    2. When you've logged in, you're presented with the window shown in Figure 1: a summary of OData and, more importantly, the first step of configuring your SQL Azure OData Service. The Create a New Server link on the right takes you to the SQL Azure home page where you can sign up for an Azure account, and so on. Because you've already done that, you can skip that link. In the Connection Information section, enter your complete server name plus your username and password, and then click Connect.

      Figure 1. Configure OData Service Connection Information section

    3. After your information is validated, a Database Information section appears on the page. Select the database on which you want to enable OData, and select the Enable OData check box. At this point you may think you're done, but not quite. When you click the Enable OData check box, a User Mapping section appears on the page, as shown in Figure 2.

      Figure 2. Configure OData Service Database Information section

      As the User Mapping section explains, you can map specific users to Access Control Service (ACS) keys or allow anonymous access to your SQL Azure database via OData through a single SQL Azure account. Now that the data is exposed via a REST interface, it is through the User Mapping that you control access to your SQL Azure data.

      The Anonymous Access User drop-down defaults to No Anonymous Access, but you can also choose to map and connect via dbo, as shown in Figure 3. This article talks about anonymous access shortly. Selecting the dbo option allows you to connect using the database dbo account, basically as administrator. In a moment you learn the correct way to connect to the OData service.

      Figure 3. Configure OData Service User Mapping section

      Notice also that this section provides a URI link that you can use to browse your SQL Azure data in a web browser.

      For the sake of this example, select dbo. You have now OData-enabled your SQL Azure database. Before you proceed, let's spend a few minutes discussing anonymous access and ACS in more detail and how they apply to SQL Azure.

2. Understanding Anonymous Access

Anonymous access means that authentication isn't needed between the HTTP client and SQL Azure OData Service. Keep in mind, however, that there is no such thing as anonymous access to SQL Azure. If you want to allow anonymous access, you must specify a SQL Azure user that the SQL Azure OData Service can use to access SQL Azure. Figure 4 shows how you do that.

Figure 4. Adding an OData user

The SQL Azure OData Service access has the same restrictions as the SQL Azure user. Therefore, if the SQL Azure user being used for SQL Azure OData Service anonymous access has read-only permissions to the SQL Azure database, SQL Azure OData Service can only read the data in the database.

Depending on the requirements of the application, you may consider creating a read-only user for your SQL Azure database. The syntax to do that is as follows:

EXEC sp_addrolemember 'db_datareader', username

Let's talk a moment about ACS and how it applies to SQL Azure.

3. Understanding the Access Control Service

ACS is part of the Windows Azure AppFabric. It's a hosted service that provides federated authentication and rules-driven, claims-based authorization for REST-based web services, allowing these web services to rely on ACS for simple username/password scenarios.

In the Community Technology Preview (CTP) of SQL Azure OData Service, it's necessary for you to sign up for the AppFabric and create a service namespace to be used with the SQL Azure OData Service. This allows a single user to access SQL Azure OData Service through Windows Azure AppFabric Access Control. This user must have the same user ID as the database user.

4. Implementing Security Best Practices

Now that you know a little about security regarding SQL Azure OData, you need to be familiar with a few best practices surrounding SQL Azure OData Service:

  • Always create a new SQL Azure user instead of allowing anonymous access to SQL Azure OData Service.

  • Never use your SQL Azure administrator username to access SQL Azure OData Service.

  • Don't allow the SQL Azure user that is used by SQL Azure OData Service to have write access to SQL Azure OData Service through anonymous access.

The problem you run into by not creating a new user is that you then allow anyone to read from and write to your database. You also have no way to control how much data or what type of data they write.

But with all of that said, because SQL Azure OData is currently in CTP, it's easier to test with anonymous access than with a read-only SQL Azure user. But when it is out of CTP, you should build your client to use anything other than anonymous access. The browser doesn't support simple web token authentication natively, and this is required for SQL Azure OData Service via ACS. Thus, in production, don't use anonymous access.
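Once OData is enabled, the feed is just HTTP, so any client that can issue a GET can read it. The sketch below is only an illustration: the URI shape is an assumption, so use the exact URI the SQL Azure Labs portal displays for your database, and remember that a mapped ACS user (rather than anonymous dbo access) would also need a simple web token on the request.

using System;
using System.Net;

public static class ODataFeedReader
{
    public static string GetCustomersFeed()
    {
        // Placeholder URI; copy the real one from the User Mapping section of the portal.
        const string serviceUri = "https://odata.sqlazurelabs.com/OData.svc/v0.1/myserver/MyDatabase";

        using (var client = new WebClient())
        {
            // With anonymous (dbo-mapped) access, no token is required;
            // the response is an Atom feed of the Customers entity set.
            return client.DownloadString(serviceUri + "/Customers");
        }
    }
}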


See The Kentucky .NET Users Group announced User Group Meeting Thur January 27th: Cloud Connectivity at the University of Louisville Campus about the Azure MarketPlace DataMarket in the Cloud Computing Events section.


            <Return to section navigation list> 

            Windows Azure AppFabric: Access Control and Service Bus

No significant articles today.

             


            <Return to section navigation list> 

            Windows Azure Virtual Network, Connect, RDP and CDN

            Ranjith Pallath posted Windows Azure Connect and Azure AppFabric to his MSDN blog on 1/21/2011:

At PDC 2010 we announced the availability of Azure Connect (formerly Project Sydney), which is part of Azure Virtual Network. This feature provides an easy way to migrate complex applications over to Azure.

Azure Connect aims to provide an easy but secure way to link your on-premises machines to the roles hosted in Azure so they can communicate with each other with much ease. So, for this kind of connectivity, no more Service Bus and ACS.

You can now sign up for the Windows Azure Connect CTP via the Windows Azure Management portal.

* All relays for Windows Azure Connect during the CTP are located outside of Windows Azure data centers; thus, network traffic between Windows Azure roles and Connect relays will be charged as normal Windows Azure bandwidth usage.

So what exactly does Azure Connect do? It's an easy mechanism for setting up IP-based network connectivity between on-premises and Windows Azure resources. This enables direct IP-based network connectivity with your existing on-premises infrastructure.

            Some application scenarios for Windows Azure Connect include:

• Enable enterprise apps that have migrated to Windows Azure to connect to on-premises servers (e.g., SQL Server).

• Help applications running on Windows Azure to domain-join on-premises Active Directory. Control access to Windows Azure roles based on existing AD accounts and groups.

• Remote administration and troubleshooting of Windows Azure roles, e.g., using remote PowerShell to access information from Windows Azure instances.

Most of these were earlier implemented using Azure AppFabric Service Bus, so it's even more important to understand how they differ and when to use which. The first thing to keep in mind is that they do not compete; instead, they go hand in hand. Here, however, is a chart of the technical specifications of both:

| Category     | Connect                                                                 | AppFabric                                                               |
|--------------|-------------------------------------------------------------------------|-------------------------------------------------------------------------|
| Purpose      | An IPsec connection between the local machines and Azure roles          | An application service running in the cloud                            |
| Connectivity | IPsec, domain-joined                                                     | NetTcp, HTTP, HTTPS                                                     |
| Components   | Windows Azure Connect driver                                             | Service Bus, Access Control, Caching                                    |
| Usage        | Azure roles connect to a local database server; Azure roles use local shared files, folders, printers, etc.; Azure roles join the local AD | Expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache |

Having understood the specifications of the two technologies, let's look at when to use each based on some common scenarios.

| Scenario                                                                                                                                              | Connect | AppFabric |
|-------------------------------------------------------------------------------------------------------------------------------------------------------|---------|-----------|
| I have a service deployed in the intranet and I want people to be able to use it from the Internet                                                     |         | ✓         |
| I have a website deployed on Azure that needs to use a database deployed inside the company, and I don't want to expose the database to the Internet   | ✓       |           |
| I have a service deployed in the intranet that uses AD authorization, and a website deployed on Azure that needs to use this service                   | ✓       |           |
| I have a service deployed in the intranet that some people on the Internet can use, but they need to be authorized and authenticated                   |         | ✓         |
| I have a service in the intranet and a website deployed on Azure; the service can be used from the Internet, and the website should also use it with AD authorization for more functionality | ✓       | ✓         |

            Roadmap of Azure Connect

            • CTP released - end of 2010
              On-premises agent for non-Windows Azure resources
              Supports Windows Server 2008 R2, Windows Server 2008, Windows 7, Windows Vista SP1, and up.

            • Future release
              Enable connectivity using existing on-premises VPN devices

Please watch this PDC session for an overview of Azure Connect. For all the new features available with this release, please watch the overview webcast.


            <Return to section navigation list> 

            Live Windows Azure Apps, APIs, Tools and Test Harnesses

Srinivasan Sundara Rajan described Challenges and Solutions for the Health Care Industry in Cloud Computing with “Community Cloud Case Studies” in a 1/23/2011 post:

Health Care Industry: A Few Challenges
The health care sector represents one of the most important and fastest-growing industries in terms of support from information technology. In several countries, including the US, health care spending on IT is at the top of the IT industry. The following are some of the challenges faced by the industry worldwide in general and in the US in particular.

• Challenge I: A single truth of customer data, electronic medical records (EMR) and health information exchanges (HIE). A patient's information needs to be true and unique across the globe and across transactions.
• Challenge II: Applications are subject to unpredictable bursts in workload for reasons beyond their control, such as epidemics and natural disasters.
• Challenge III: Large data volumes and processing needs. Scans, X-rays, real-time monitoring information and extensive analysis of medical conditions: a modest 100-bed hospital will generate approximately 60 GB of new digital content per bed per year, requiring an additional six terabytes of storage space annually.
• Challenge IV: Common legal compliance needs across nations, such as HIPAA (Health Insurance Portability and Accountability Act) and the EU Directives.

            Cloud Computing will help the health care industry in solving these challenges, as explained in the following sections. [Azure emphasis added]

Cloud in the Health Care Industry: Master Data Management

• Cloud is probably the first viable option for supporting master data management of health care information to provide a single truth of data.
• Cloud storage will be a common repository accessible to several providers across the globe, avoiding data redundancy and duplication.
• Community clouds and easily accessible public clouds (Amazon EC2, Google) enable common storage of clinical data from various providers.
• Cloud federation will support the transfer of EMR and HIE across multiple clouds.
• Verizon has announced a new service that will make medical records available in the cloud. Called the Verizon Health Information Exchange, the service aims to make it easier for hospitals, doctors' offices and other health care centers to access and share medical records.
• GE Centricity Advance is a new software-as-a-service offering that includes integrated e-medical records, practice management applications, and a patient portal. The software is aimed at the 500,000 physicians in small U.S. practices with 15 or fewer doctors.

Cloud in the Health Care Industry: Dynamic Scaling

• The health care industry is prone to sudden bursts in system usage due to seasonal fluctuations like influenza or other viral epidemics; the situation can become even more unpredictable due to natural disasters.
• Cloud provides the ideal solution for handling dynamic workloads. A key benefit of cloud computing is the ability to add and remove capacity as and when it is required. This is often referred to as elastic scale. Being able to add and remove this capacity can dramatically reduce the total cost of ownership.
• Amazon EC2 provides out-of-the-box support for elasticity and auto scaling with multiple components.
• Auto scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions that you define.
• In Windows Azure, auto scaling is achieved by changing the instance count in the service configuration. Increasing the instance count will cause Windows Azure to start new instances.

Cloud Supporting the Health Care Industry: Storage

• The massive amount of data storage, the predicted growth patterns in clinical data, the legal needs for archival and disaster recovery, and the massive processing needs for analyzing real-time data leave most private hospital data centers unable to scale up to future growth, let alone unpredictable bursts in data.
• Cloud enables a new data storage paradigm in the form of NoSQL databases and implementations like BigTable.
• BigTable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in BigTable.
• Amazon Elastic Block Store (EBS) provides block-level storage volumes for use with Amazon EC2 instances.
• Amazon EBS allows you to create storage volumes from 1 GB to 1 TB that can be mounted as devices by Amazon EC2 instances. Multiple volumes can be mounted to the same instance.
• Amazon EBS volumes are designed to be highly available and reliable. Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component.
• The health care industry is producing massive amounts of multimodal data.
• Medical imaging procedures are becoming increasingly popular each year, with millions of CT scans performed per year.
• The need for parallel processing is apparent for mining these massive multimodal data sets, which can range anywhere from tens of gigabytes to terabytes or even petabytes.
• Amazon Elastic MapReduce is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data.
• In Windows Azure, a worker role is used for generalized development and may perform background processing for a web role.

Cloud and the Health Care Industry: Compliance

• Legal and procedural compliance requirements are inevitable and time-bound; there is no tolerance for noncompliance.
• Building, maintaining and modifying existing software is costly and time-consuming, especially for small players.
• SaaS (Software-as-a-Service) provides a viable alternative for small health care providers to get up to speed quickly.
• GE is using a Software-as-a-Service option for an online repository of information on 500,000 global suppliers.
• Adoption of SaaS will provide a common platform for performing correct analysis, procedures and legal compliance for medical companies and hospitals.

Summary
Like other industries, the health care industry is looking to technology not only for cost cutting but also for business capability needs, and cloud computing provides a perfect platform. We find that adoption is rising with the cross-collaboration of cloud computing and the health care industry; I recently received an invitation about the value proposition of cloud computing in the biotech and medical device industries from the Southern California Biomedical Council (SoCalBio).


Andy Cross (@andybareweb) explained Tracing to Azure Compute Emulator SDK V1.3 in a 1/22/2011 post to his BareWeb blog:

When using Windows Azure, you may come across the need to trace information without wiring up the full Windows Azure Diagnostics pipeline.  I saw this question on StackOverflow, which asks pretty much the same thing: how do I get ASP.NET tracing info into the Azure Compute Emulator? Here's the answer.

By default this happens for worker roles and web roles when running in the context of the RoleEntryPoint. In a web role, tracing from WebRole.cs goes into the Compute Emulator, but tracing from default.aspx.cs or any other code executing inside the web application does not. The reason is that the RoleEntryPoint runs in a different application pool context than the web application itself, and it has different TraceListeners added to the Trace.Listeners collection.

            If we want to trace to the Compute Emulator (what used to be called DevFabric) we can very simply add a TraceListener to the collection and get the output in the Compute Emulator UI.

            Note: You shouldn’t do this in production – and so you may wish to configure the adding of the TraceListener programmatically, or remove the code before you deploy.

            I added a new TraceListener to the config, but left the DiagnosticMonitorTraceListener so that other diagnostic activities are not disrupted. The web.config entry is:

<system.diagnostics>
  <trace>
    <listeners>
      <add name="AzureDiagnostics" type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
        <filter type="" />
      </add>
      <add name="DevFabricListener" type="Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime.DevelopmentFabricTraceListener, Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>
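If you prefer the programmatic route mentioned in the note above (so nothing emulator-specific ships in your production configuration), a minimal sketch is below. The type string is the same one used in the config entry; the guard that only adds the listener when the assembly can actually be loaded is my own suggestion, not something from Andy's post.

using System;
using System.Diagnostics;

public static class EmulatorTracing
{
    public static void AddListenerIfAvailable()
    {
        const string listenerTypeName =
            "Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime.DevelopmentFabricTraceListener, " +
            "Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime, Version=1.0.0.0, " +
            "Culture=neutral, PublicKeyToken=31bf3856ad364e35";

        // Resolve the listener type without throwing; it is only present where the SDK tools are installed.
        var listenerType = Type.GetType(listenerTypeName, false);
        if (listenerType == null)
        {
            return;   // not running on a machine with the Compute Emulator, so leave Trace.Listeners alone
        }

        Trace.Listeners.Add((TraceListener)Activator.CreateInstance(listenerType));
    }
}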

            The output looks like:

The output in Compute Emulator

            You can download my basic project: AspnetComputeTracing


            Mike Wasser posted Introducing the Cloud Automation Toolkit on 1/22/2011:

I would like to introduce the .NET Cloud Automation Toolkit.  This Apache 2.0-licensed code is meant to help with automating tasks related to managing cloud computing infrastructure.  This first version is limited to being a .NET client for the Windows Azure Management API documented here, but there are many more items on the to-do list for this project.

What types of things can you do with this?  Perhaps automate deployment for development, testing and production on your own terms.  Use this as the framework to help develop and manage your personal or organization's cloud computing needs.  Several things I'm planning in the near future include:

            • Windows Workflow Foundation Activities to manage Azure
            • Windows Workflow Foundation Activities to manage Amazon Web Services (AWS)
            • Windows Azure Windows Workflow Worker Role

            Realistically, how might this functionality benefit you?

            Perhaps it could help you deploy a development Azure service as part of your build process.

            Maybe it is used to provision instances for automated testing in a real Azure environment.

            It could also be used to help manage applications in production including custom tools to version your application with zero down time in an automated way with no guess work.

The reality is that there are large numbers of applications here that have yet to be discovered.  My guess is that Microsoft may release some of this functionality in the future, but it isn't here today.  Meanwhile, any scenario described above will require code from this or a similar library.
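To give a sense of the plumbing the toolkit wraps, here is a minimal raw call to the Windows Azure Service Management REST API that lists the hosted services in a subscription. The subscription ID and management certificate are placeholders, and this is only an assumption about one reasonable way to make the call, not code taken from the toolkit itself.

using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

public static class ServiceManagementSample
{
    public static string ListHostedServices(string subscriptionId, string certPath, string certPassword)
    {
        var uri = string.Format(
            "https://management.core.windows.net/{0}/services/hostedservices", subscriptionId);

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "GET";
        request.Headers.Add("x-ms-version", "2010-10-28");                            // API version header
        request.ClientCertificates.Add(new X509Certificate2(certPath, certPassword)); // management certificate

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();   // XML describing the subscription's hosted services
        }
    }
}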

            To get started with the Cloud Automation Toolkit, download it here: http://cdn.mikewasser.com/CloudAutomationToolkit-1.0.zip.  The package contains brief introductory documentation to help get started using it.

            If you would like to help with the development of this project or have questions in regard to using it, please let me know!


            Ranjith Pallath posted MSDN Library links to What's new in Windows Azure 1.3 SDK topics on 1/22/2011:

Windows Azure includes new features and capabilities to help you deploy and manage your cloud services:

            For the list of known issues and SDK installation requirements, see Windows Azure SDK Release Notes (November 2010).


            Tom Woods, Microsoft IT Solution Manager in Product Group Strategic Initiatives, and Scott Richardson, Senior Architect with the Field IT organization, produced a 00:10:05 How Microsoft IT Built Application Segmentation and Migration Strategies for the Cloud video segment for TechNet Edge’s Microsoft IT Showcase series:


Learn how Microsoft IT established and implemented segmentation and Azure migration strategies across its application portfolio. Microsoft IT (MSIT) manages a portfolio of more than 1,500 internal line-of-business applications. These applications cover a broad range of business capabilities, technologies, and information types.


            The Microsoft Case Studies team posted Startup Successfully Launches with Highly Scalable, Security-Enhanced Cloud Services on 1/21/2011 (missed when published):

Founded in 2010, Gestone is a startup company that offers a project management solution that combines basic project management tools with the social-networking site Facebook. Gestone built its solution on the Windows Azure platform to meet its need for a scalable and cost-effective infrastructure that also met customers’ needs for privacy and provided the option to integrate with infrastructure built on Microsoft products and technologies.

By using Windows Azure, including Windows Azure storage services, Gestone achieved critical levels of scalability and a security-enhanced infrastructure, with low capital expenditures and operational costs. In addition, the young company has the flexibility to try various business models, such as offering trial subscriptions to its service, with little investment in its infrastructure or risk of overbuying and underutilizing computing resources.

            Situation
            Gestone is a small startup company that was founded in 2010 with the simple philosophy of combining social media and productivity tools to help people “get stuff done.” The Founder and CEO of Gestone, Andy Harjanto, has a strong background in project management and noticed a trend in the makeup of project management teams since the 1990s: they are getting smaller, more dispersed, and no longer need complicated project management tools. While employees who work on large-scale projects find robust project management software ideal for setting multiple milestones, creating Gantt charts, and allocating resources across teams, many companies need scaled-back, simple-to-use solutions that do not have steep learning curves.


In addition, Harjanto recognizes the proliferation of social-networking applications, such as Facebook, beyond social settings and into the business work environment. “Many companies are turning to social-networking outlets for business reasons because of their sheer popularity,” he says. “Our coworkers, partners, and customers are all on Facebook.”

            Thus, the company wanted to develop a service to help businesses manage projects without the hassle of complicated software. Gestone also wanted to incorporate Facebook as a way to easily connect with project team members. The company envisioned a cloud-based solution to deliver anywhere, anytime access to its project management service.

            One of the challenges that Gestone faced in building a project management service that incorporated Facebook was developing a solution that businesses trusted to keep their information private and secure. “For any application that incorporates other technology that is primarily associated with fun social activities, companies are inherently concerned about the privacy of their employees and security of their data,” explains Harjanto. “A security-enhanced solution and architecture was a primary concern for us from day one.”

            In addition, the company required a scalable architecture that it could easily grow—for both computing power and storage needs—as the solution gained popularity, or shrink as demand dictated. “It can be difficult to predict the level of success we will see as we launch a new service,” says Harjanto. “If just one popular media outlet gives us a good review, we could see traffic explode, and we want to be able to accommodate any number of customers.”

            However, it would be cost-prohibitive for the small startup company to build its own on-premises server infrastructure. In fact, to build a data center with enough servers and storage capacity to accommodate the high traffic, and that would be secure enough to meet enterprise customers’ often-strict privacy and security standards, would cost Gestone up to U.S.$80,000, including personnel costs.

            When choosing a cloud services provider, Gestone also wanted to ensure that any infrastructure it relied on would easily integrate with technologies that many enterprise customers use for their infrastructure, such as the Windows Server operating system and Active Directory services. “Most enterprise businesses, and many small and medium businesses, build their server infrastructure on Microsoft technologies,” explains Harjanto. “However, several cloud services providers, such as Amazon and Google, are not built on—and do not integrate easily with—Microsoft technologies. A solution built on Amazon or Google clouds would be difficult to integrate for several of our target customers.”

            Figure 1.

            By using Gestone, team members can come together through Facebook and collaborate on projects.

            Solution
            To meet its requirements for an easily scalable, security-enhanced cloud service that could integrate with customers’ existing server infrastructure, Gestone chose the Windows Azure platform. Windows Azure is the development, service hosting, and service management environment for the Windows Azure platform. It provides developers with on-demand compute, storage, and bandwidth, as well as a content distribution network to host, scale, and manage web applications through Microsoft data centers, which are SAS 70 Type II–compliant.

            Cloud-Based Project Management Solution
            On October 20, 2010, Gestone unveiled its cloud-based project management solution. The solution is a browser-based application that uses the Microsoft Silverlight 4 browser plug-in on the client side and is hosted in web roles in Windows Azure. To get started using the service, customers simply log in to the Gestone service through their Facebook user account and do not have to install any software on their computers. Once they are logged in to Gestone, users can create a new project and invite project team members who also use Facebook to participate.

For each project, the project manager can create work items—tasks to be completed—to assign to individual team members with deadlines and reminders to help ensure that projects are delivered on time. Each team member can post status updates for work items, post questions to other team members, and keep up-to-date on the progress of projects through a news feed. They can also upload files and documents, write notes, draw sketches on whiteboards, post ideas, and fill in data for other team members to review.


            Additionally, team members can upload files from the Internet from within the Gestone interface. Gestone uses the Bing application programming interface (API) to seamlessly present team members with a dialogue box from which they can search the Internet just as they would through a browser and select media to include in a project update.

To support the storage needs for each project, Gestone uses Windows Azure storage services, including Table and Blob storage. Gestone uses one table for each project, which stores hierarchical data for work items, updates, and posts. For its unstructured data, such as documents, presentations, and spreadsheets, the company uses Blob storage in Windows Azure. Each table and blob is deployed in a multitenant architecture so that each storage container is separated from other projects. By using a multitenant structure in Windows Azure, Gestone helps to ensure that customer data is safeguarded and private. In addition, the storage services are inherently redundant, helping ensure that customer data is not lost.

            “Gestone has a complex hierarchical data requirement and Windows Azure gives us the ability to contain that structure within a single, partitioned table in a multitenant environment,” explains Harjanto. “With a traditional database model, this is hard to attain with any level of efficiency, but Windows Azure delivers.”

            Gestone also uses the Queue service and worker roles in Windows Azure to support its notification services. Each time an event is triggered, such as a work item or status update, the information is also sent to a queue. Worker roles then pick up that item and send a message to the team members notifying them that there is new activity for the project.
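The case study doesn't include code, but the pattern it describes maps onto the storage client library in a fairly direct way. In the sketch below the queue name, message format, and configuration setting are assumptions made for illustration: the web role enqueues an event and a worker role drains the queue and notifies the team.

using System;
using System.Configuration;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class NotificationQueue
{
    // Web role side: enqueue a notification whenever a work item or status update is posted.
    public static void Publish(string projectId, string eventDescription)
    {
        var queue = GetQueue();
        queue.AddMessage(new CloudQueueMessage(projectId + "|" + eventDescription));
    }

    // Worker role side: poll the queue and fan the message out to team members.
    public static void ProcessLoop(Action<string> notifyTeam)
    {
        var queue = GetQueue();
        while (true)
        {
            var message = queue.GetMessage();
            if (message == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5));   // back off when the queue is empty
                continue;
            }

            notifyTeam(message.AsString);                // e.g., send e-mail or post to the news feed
            queue.DeleteMessage(message);                // delete only after successful processing
        }
    }

    private static CloudQueue GetQueue()
    {
        // Assumed setting name; a role would typically read it with RoleEnvironment.GetConfigurationSettingValue.
        var account = CloudStorageAccount.Parse(
            ConfigurationManager.AppSettings["StorageConnectionString"]);
        var queue = account.CreateCloudQueueClient().GetQueueReference("notifications"); // assumed queue name
        queue.CreateIfNotExist();
        return queue;
    }
}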

In addition, Gestone implemented Microsoft SQL Azure. SQL Azure offers the first cloud-based relational and self-managed database service built on Microsoft SQL Server 2008 data management software; like Windows Azure, it is accessed over the Internet and hosted in Microsoft data centers. The company uses SQL Azure to manage its billing information and user account data, as well as to capture data about customer behavior, which the company uses to create new features.

            Plans for Future Enhancements
Gestone already has plans for future enhancements to its project management solution. For instance, the company plans to implement Windows Azure AppFabric Service Bus to enable connectivity across network boundaries between the cloud-based service and customers’ existing infrastructure. It also wants to take advantage of the Content Delivery Network, a large-scale network available to Windows Azure customers. Gestone will use the Content Delivery Network to cache the application data at strategically placed data centers around the world. This will help ensure optimum system performance for customers, no matter their location.

The company is also developing a mobile version of the project management service, which will be compatible with Windows Phone 7. The mobile version will support how companies in other countries do business, such as countries in Asia where many organizations rely heavily on mobile and Short Message Service–based business tools.


            Finally, Gestone plans to offer optional integration services for customers who prefer to deploy the project management solution in a separate, private cloud and who use the Windows Server operating system or Active Directory Federation Services 2.0. Customers will be able to integrate Gestone with their own Windows Server infrastructure and also offer federated identity management with their Active Directory environment, instead of using the Facebook platform in a public cloud.

            Benefits
            As a result of using Windows Azure, Gestone achieved critical levels of scalability to meet unpredictable demand for its project management service. The company also has in place a security-enhanced infrastructure that is both flexible enough to support changing business models for the new company and cost effective to implement.

            Achieved Critical Levels of Scalability
            By using Windows Azure, Gestone can scale up and scale down its infrastructure to meet customer demand, which is particularly important because the company does not have any historical business data to help predict the popularity of its service. “As a new company, we have no idea how much traffic we will have, and it can change very suddenly,” says Harjanto. “By using Windows Azure, we can easily scale up to handle as many customers who visit our website and use our solution without a problem.”

            For example, just five days after Gestone launched its project management solution, the website was featured on the Microsoft Silverlight website. As a result of the prominent exposure, Gestone witnessed high-volume traffic and simply created additional web roles to handle the website traffic and additional tables to accommodate new customers. “In the first couple of days after Gestone was launched, the site had more than 400 unique visitors to the site and a high conversion rate for new customers. It was much more than we ever anticipated, but we’re not concerned with getting too big too fast because we know that we can handle the volume with Windows Azure,” explains Harjanto.

            Implemented Flexible Infrastructure to Support Changing Business Models
The impressive levels of scalability enable Gestone to easily try out new business models with little investment or risk. The company is considering offering customers a limited-time free trial for the project management service to encourage participation, which is likely to cause a surge in demand for additional computing and storage resources. “We can scale up to accommodate customers who use our service on a trial basis and then, if they do not convert to a paid subscription after the trial, we can easily scale down our infrastructure,” says Harjanto. “With an on-premises model, we wouldn’t have that flexibility. Windows Azure enables us to try new business models, but without the investment of implementing a large on-premises infrastructure and without the risk of underutilizing that infrastructure.”

            Avoided Capital Expenses, Achieves Low Operating Costs
            Gestone launched its cloud-based service without the need to purchase expensive infrastructure. Instead of procuring physical servers and building its own data center, the company relies on Microsoft data centers to deliver robust computing and storage for its project management service. In addition, Gestone enjoys low operating costs with the pay-as-you-go model offered with Windows Azure. “Windows Azure gave us a cost-effective model for starting our business—we didn’t have expensive up-front costs, and now we only pay for what we use.”

            Delivered Security-Enhanced Solution
            By using Windows Azure and Microsoft data centers, Gestone delivered a security-enhanced solution that even its enterprise customers who have strict security and privacy standards can appreciate. The company deployed its solution in a multitenant environment, so every project that a customer manages on Gestone is stored separately from other projects, which helps assure customers that their confidential data is safeguarded. In addition, Microsoft data centers adhere to industry-recognized, high security standards and have a SAS 70 Type II certification. “Microsoft has a commitment to security and meets rigorous standards in its data centers,” explains Harjanto. “We are confident that Windows Azure helps us deliver a security-enhanced solution, and our customers trust the Microsoft name.”

I wonder if this is the same Andy Harjanto at Microsoft, who helped me with Active Directory Services Interfaces (ADSI) 2.5 issues when I was writing the Visual Basic 6.0 GroupPol.exe application for Chapter 2, “Optimizing Active Directory Topology for Group Policy,” of my Admin911: Windows 2000 Group Policy book at the turn of the century. (I’m surprised that MSDN still has my chapter as an active topic.)

            You can download and run GroupPol.exe’s installer (GroupPol.msi) from my Windows Live SkyDrive account.


            The Microsoft Case Studies team posted Global Trade Provider Reduces Deployment Time by 97 Percent with Cloud-Based Solution on 1/14/2011 (missed when published):

            As an increasing number of companies move goods globally, governments are introducing new electronic customs systems.

Tradefacilitate, a global trade provider, wanted to quickly scale its web-based application to serve a growing number of importers and exporters required to meet these evolving trade regulations. Rather than hosting its flagship application on its own servers, Tradefacilitate decided to move the application to a cloud-services model based on the Windows Azure platform. By doing so, Tradefacilitate reduced its deployment time by 97 percent. By moving to the Windows Azure platform, Tradefacilitate has quickly scaled its application to keep up with global demand—at less cost than adding its own servers. It has also freed developers from managing a complex and costly infrastructure, while giving them the agility they need to quickly respond to changing customer needs.

            Situation
            Based in Dublin, Ireland, Tradefacilitate is dedicated to reducing the costs associated with traditional, inefficient, paper-based trade transactions and helping to increase competitiveness among importers and exporters globally.

“Windows Azure puts you back in the driver’s seat. You’re able to control your costs, and experiment at very little expense. And you’re able to leverage all of your Microsoft toolset experience.”

Derek Hardiman
Chief Technology Officer, Tradefacilitate

            As more and more companies move goods around the world, governments are introducing new electronic customs systems that require companies to exchange paper-free trade data before products are shipped. To help importers and exporters comply with these requirements, Tradefacilitate developed a web-based service, which it hosted on physical servers running at a hosting facility in Dublin. The hosted application initially was sufficient, when the company’s main role was to assist small and midsize businesses with cross-border alcohol trade in the European Union. Yet as demand for Tradefacilitate services grew, the company needed to quickly scale its solution to serve thousands of businesses needing to exchange paper-free trade data and manage the millions of trade documents necessary.

            Because Tradefacilitate’s core service interacts with government authorities, who are the ultimate authorities on whether goods can be moved, it was important that its solution be reliable and agile under new electronic processes in the two largest trading blocks in the world—the United States and the European Union. Tradefacilitate also wanted a solution that enabled it to concentrate on its core business rather than on managing a complex infrastructure. “We considered the old-fashioned approach of adding more servers and bandwidth, but that gets to be expensive very quickly,” says Derek Hardiman, Chief Technology Officer at Tradefacilitate. “Rather than managing a complex infrastructure, we wanted to focus on our core competency, which is delivering innovative services to customers.”

            Solution
            Tradefacilitate decided to develop a cloud version of its flagship application and host it on Windows Azure, the development, service hosting, and service management environment for the Windows Azure platform, which is hosted in Microsoft data centers. Tradefacilitate uses the web and worker role model within Windows Azure to effortlessly scale the application. Web roles serve web pages and take requests from users. As the number of users grows, Tradefacilitate can seamlessly add additional web roles. Similarly, these web roles communicate via Windows Azure queues with backend worker roles, which do the real work, such as handling outbound email, submitting customs filings, and interfacing with third-party systems. The number of worker roles can be increased, which makes it possible to scale the application both during peak use times and as the total number of customers grows, with little effort.
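The case study doesn’t include source code, but the queue-based hand-off between web and worker roles that it describes usually looks something like the following minimal C# sketch, written against the Azure SDK 1.x StorageClient library. The class name, queue name and payload are illustrative assumptions, not details taken from Tradefacilitate’s system.

// Hypothetical sketch: a web role hands work to a worker role by dropping
// a message on a Windows Azure queue (StorageClient, Azure SDK 1.x era).
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class FilingDispatcher
{
    private readonly CloudQueue _queue;

    public FilingDispatcher(string connectionString)
    {
        // Parse the storage connection string and get (or create) the queue.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        _queue = queueClient.GetQueueReference("filings");   // illustrative name
        _queue.CreateIfNotExist();
    }

    // Called from the web role when a user submits a customs filing.
    public void EnqueueFiling(string filingId)
    {
        // A worker role instance will pick this message up asynchronously.
        _queue.AddMessage(new CloudQueueMessage(filingId));
    }
}

Because the queue decouples the two tiers, either side can be scaled independently simply by changing that role’s instance count.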

            Tradefacilitate diagram

In addition to moving its web-based application to the Windows Azure platform, Tradefacilitate transferred its existing Microsoft SQL Server database to Microsoft SQL Azure, which offers relational database services in the cloud. The company uses Windows Azure AppFabric Service Bus to retrieve order and invoice information from customers’ on-premises enterprise applications and display this information directly within the Tradefacilitate system hosted in the cloud. It also uses Blob storage to store unstructured, binary data, such as scanned document images, that are associated with each transaction.
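Similarly, storing a scanned document image in Blob storage with the same SDK 1.x library takes only a few lines of code; the container name and helper below are assumptions for illustration, not part of the case study.

// Hedged sketch: uploading a scanned trade document to Blob storage
// (StorageClient, Azure SDK 1.x era). Names are illustrative.
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class DocumentStore
{
    public static void SaveScannedDocument(string connectionString,
                                           string transactionId,
                                           string localFilePath)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // One container for scanned images; created on first use.
        CloudBlobContainer container = blobClient.GetContainerReference("scanned-documents");
        container.CreateIfNotExist();

        // Key the blob by transaction so it can be retrieved later.
        CloudBlob blob = container.GetBlobReference(
            transactionId + "/" + Path.GetFileName(localFilePath));
        blob.UploadFile(localFilePath);
    }
}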

“A massive selling point for Windows Azure is that it really is the only enterprise-ready, platform-as-a-service environment out there. It was really a slam dunk for us because we … knew we could migrate quickly.”

Derek Hardiman
Chief Technology Officer, Tradefacilitate

            Because Tradefacilitate is built on the Microsoft .NET Framework 3.5 using the Microsoft Visual Studio 2010 Ultimate development system, developers at the company were able to migrate the solution to the Windows Azure platform in just two days. “A massive selling point for Windows Azure is that it really is the only enterprise-ready, platform-as-a-service environment out there,” says Hardiman. “It was really a slam dunk for us, because we already had the capability and knew we could migrate quickly. Also, choosing Microsoft as our host provider sent a powerful message to our customers that we were basing our solution on a trustworthy platform.”

            Benefits
            Moving the Tradefacilitate core application to the Windows Azure platform has brought about many benefits. First, deployment was both quick and cost-effective because developers were able to leverage their existing expertise. Second, in the cloud there is a more seamless testing environment, which means that developers can bring new features to market faster. Moreover, by basing its application on the Windows Azure platform, Tradefacilitate can keep up with rapidly growing demand for its services, while offering developers the agility and efficiency needed to respond to changing customer needs.

            Reduced Deployment Time by 97 Percent
Altogether, it took developers only two days to migrate the existing code base to the Windows Azure platform—compared to the four to eight weeks it would have taken had Tradefacilitate scaled by installing and configuring its own servers. “Had we not moved to the cloud infrastructure, we would have had to bring in a whole operational function around the infrastructure that we weren’t prepared to deal with at the time,” Hardiman says.

            Migrating the Tradefacilitate hosted application was both quick and cost-effective, according to Hardiman, because developers were able to use the Visual Studio and .NET Framework tools with which they were already familiar. “We’ve hired a number of additional developers since then, and they haven’t had any issues getting up to speed using Windows Azure,” Hardiman says. “From their perspective, it’s a very natural extension of applications they have worked on previously using Visual Studio and .NET.”

            Improved Test Environment
            Moving its online system to the Windows Azure platform has also improved the test environment, making it possible for Tradefacilitate developers to bring new features to market faster and with greater confidence. “When you own your own infrastructure, a lot of the time your testing environment isn’t the same as your live environment due to cost constraints, so you can run into issues when you move from one to the other,” Hardiman says. “Because our test, User Acceptance Testing (UAT), and live environments are all in the cloud, we know we can move from one environment to the other and get the same behavior.”

            Rapid Scalability at Less Cost
            While Tradefacilitate initially served businesses primarily within the European Union, it now has customers around the globe. By basing its application on the Windows Azure platform, the company now can quickly scale to meet growing demand for its services. Rather than purchasing and managing additional servers, scaling to meet new demands simply requires adding more web and worker roles—which saves roughly one-third of the cost of adding an additional server every three years.

            In addition, Tradefacilitate no longer has to incur expensive upfront capital expenses. Instead of spending €4,000 (U.S.$5,341) on each additional server and depreciating the cost over time, the company now pays about €80 (U.S.$107) per month as an operational expense, which is a lot easier to manage and budget for.
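A rough check of those figures, assuming a three-year depreciation cycle per server: €80 per month over 36 months comes to about €2,880, versus €4,000 up front, which squares with the roughly one-third savings cited above.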

            “Windows Azure allows us to side-step the whole issue of managing and owning an infrastructure,” he says. “We don’t have to spend time worrying about operating system patches, ancillary software problems, or depreciating hardware costs. And we don’t have to deal with the lead time required to buy equipment, and get it configured and up-and-running. We can instantly add extra capability—just like turning on a tap—and the problem is solved at very little cost.”

            Increased Agility and Efficiency
            Because Tradefacilitate developers are freed from the responsibility of managing what would have been a complex and costly infrastructure, they have more time to focus on customer needs. As a result, they can develop services more rapidly to comply with evolving trade regulations. They can also more efficiently accommodate the growing number of importers and exporters interested in using Tradefacilitate services.

            “Windows Azure puts you back in the driver’s seat,” says Hardiman. “You’re able to control your costs, and experiment at very little expense. And you’re able to leverage all of your Microsoft toolset experience. We’re able to get new features out the door quickly with a small team of developers. That’s key to our success because it allows us to stay agile.”

            Windows Azure Platform
            The Windows Azure platform provides an excellent foundation for expanding online product and service offerings. The main components include:

            • Windows Azure. Windows Azure is the development, service hosting, and service management environment for the Windows Azure platform. It provides developers with on-demand compute and storage to host, scale, and manage web applications on the Internet through Microsoft data centers.
            • Microsoft SQL Azure. Microsoft SQL Azure offers the first cloud-based relational and self-managed database service built on Microsoft SQL Server technologies.
            • Windows Azure AppFabric. With Windows Azure AppFabric, developers can build and manage applications more easily both on-premises and in the cloud.
              − AppFabric Service Bus connects services and applications across network boundaries to help developers build distributed applications.
              − AppFabric Access Control provides federated, claims-based access control for REST web services.
            • Windows Azure Marketplace DataMarket. Developers and information workers can use the new service DataMarket to easily discover, purchase, and manage premium data subscriptions in the Windows Azure platform.

            To learn more, visit:
            www.microsoft.com/windowsazure
            www.sqlazure.com


            <Return to section navigation list> 

            Visual Studio LightSwitch

No significant articles today.

             


<Return to section navigation list> 

            Windows Azure Infrastructure

            Joannes Vermorel described Telling the difference between cloud and smoke in a 1/23/2011 post to his personal blog:

image Returned a few days ago from NRF11. As expected, there were many companies advertising cloud computing, and yet, how disappointing when investigating the case a tiny bit further: it seems that fewer than 10% of the companies advertising themselves as cloudy are actually leveraging the cloud.

For 2011, I am predicting there will be a lot of companies disappointed by cloud computing - now apparently widely used as a pure marketing buzzword without the technological substance to support the claims.

For those of you who might not be too familiar with cloud computing, here is a 3-minute sanity test to check whether an app is cloud-powered or not. Obviously, you could also go for a very rigorous, in-depth audit, but with this test you should be able to uncover the vast majority of smoky apps.

            1. Is there any specific reason why this app is in the cloud?

            Bad answer: we strive to deliver next-generation outstanding software solutions, exceeding customer expectations, blah blah blah, insert here more corporate talk ...

A pair of regular servers - typically a web server plus a database server - can handle thousands of concurrent users for non-intensive webapps. That is already far more users than most apps on the market will ever face (remember, with high probability you don't need to scale). So there has to be a compelling reason to justify the cloud besides the very hypothetical scenario of growing faster than Facebook.

            2. Is the underlying infrastructure larger than 100k machines?

            Bad answer: well, in fact we are just having our own dedicated servers at DediHost Corp Inc (put here the name of regular hoster).

A key aspect of cloud computing is cost reduction through massification. As of 2011, there are still only a handful of cloud providers available, namely Amazon WS, Google App Engine, Rackspace Cloud, Salesforce and Windows Azure. Make sure to ask which cloud infrastructure is being used. Also, private clouds are no exception: it's not because it's "private" that massification is suddenly achieved with 100 servers. It takes more, a lot more, to build a cloud.

            3. Can you open an account and get started right from the web, no setup cost?

            Bad answer: let's meet and evaluate your requirements first.

Multitenancy is a key aspect of reducing admin costs. In particular, with any reasonable cloud-based architecture there is no reason to have mandatory setup costs (which does not mean that a company may not charge for an optional onboarding package providing training, dedicated support, etc.). Setup costs are typically a sign of non-cloud software where each extra deployment takes some amount of gruntwork.

            4. Is there a public pricing? Typically indexed on usage or user metrics.

            Bad answer: pricing really depends on your company.

For cloud-based apps, there is about zero compelling reason not to have public pricing. Indeed, cloud costs are highly predictable and strictly based on usage; hence, it makes little sense from a market perspective to go for customized pricing for each client, as it increases sales friction while providing no added value for the client.

            5. Can two machines failing bring down the app along with them?

            Bad answer: we have backups, don't worry.

In the cloud, the app layer should be properly decoupled from the hardware layer. In particular, hardware failures are accounted for and primarily handled by the cloud fabric, which reallocates VMs when facing hardware issues. The cloud does not offer better hardware, just a more resilient way to deal with failures. In this respect, setting up a backup server for every single production server is a very non-cloud approach. First, it doubles the hardware cost, keeping half of the machines idle about 99% of the time; second, it proves brittle in the face of Murphy's law, a.k.a. two machines failing at the same time.

As a final note, it's rather hard to tell the difference between a well-designed SaaS app powered by a regular hoster and the same app powered by a cloud. Although, back to point 1, unless there is a reason to need the cloud, it won't make much difference anyway.

            Joannes is the founder of Lokad and an engineer from the Corps des Mines who initially graduated from the ENS.


            Doug Rehnstrom posted Windows Azure Training Series – Understanding Azure Roles to the Learning Tree blog on 1/22/2011:

            image This is the fourth in a series of Microsoft Windows Azure training articles. In the last article, Windows Azure Training Series – Creating Your First Azure Project, we created a simple Azure application and in the next article we will deploy it. First, we need to understand Windows Azure roles though.

            Windows Azure Roles Overview

            imageTo use Windows Azure, you first need a subscription. Once you have a subscription, you can create what are called hosted services. A hosted service is equivalent to an application. You might want to see the first post in this series, Windows Azure Training Series – Understanding Subscriptions and Users, if this is new to you.

            image Each hosted service consists of one or more roles, and each role runs on one or more instances. Each instance is running in a Windows Azure virtual machine, which is running a version of the Windows Server operating system.

In the screenshot below, “Course 2602” is a subscription. There are 3 hosted services: “Flash Cards hosted Service”, “Hello Version 2” and “Doug’s Pretty Good Service”. Only “Doug’s Pretty Good Service” has anything deployed to it. It consists of one role, “DougsPrettyGoodWebsite”, and there is one instance of that role running.

            Windows Azure Role Types

When creating a Windows Azure project in Visual Studio, you add one or more roles to the project. The dialog shown below allows you to choose the roles, as we saw in the last article. Notice there are 5 choices for roles. However, there are really only 2 different kinds of roles, Web roles and worker roles. There just happen to be 4 different ways of creating Web roles, hence the 5 choices.

            Windows Azure Web Roles

            A Web role, once deployed, is really just a Web application that is configured under the instance of IIS that is running in the Azure virtual machine. You can choose to create the Web application using ASP.NET or ASP.NET MVC. If you prefer PHP, choose the CGI Web role.

            If you want to create Web services you would choose WCF Service Web role. Web services are configured under IIS just like the other Web roles.

            Windows Azure Worker Roles

Worker roles are background services, running some task periodically. Worker roles are really simple. The role starts and just runs in a loop. On each iteration through the loop, the worker process looks to see if it has something to do and, if so, performs its task. If not, it goes to sleep.

            Typically, communication between Web roles and Worker roles is done using a message queue. Kevin Kell wrote some articles about this. See Worker Role Communication in Windows Azure – Part 1.
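For readers who haven’t seen the pattern before, here is a minimal, hypothetical sketch of that loop written against the Azure SDK 1.x ServiceRuntime and StorageClient libraries; the queue name and configuration setting are illustrative assumptions, not part of Doug’s article.

// Minimal sketch of the worker-role loop described above (Azure SDK 1.x era).
// The "work-items" queue and "DataConnectionString" setting are illustrative.
using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("work-items");
        queue.CreateIfNotExist();

        while (true)
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message != null)
            {
                ProcessMessage(message.AsString);   // do the actual work
                queue.DeleteMessage(message);       // remove only after success
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(10));  // nothing to do; sleep
            }
        }
    }

    private void ProcessMessage(string payload)
    {
        // Placeholder for whatever task this worker performs each iteration.
    }
}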

            Configuring Windows Azure Roles

            Roles must be configured before deploying them. This can be done directly in the configuration file, or you can use the Visual Studio properties pages as shown below.

            The most important things you need to configure are VM size and instance count. VM size determines the amount of computing resources that are allocated for your virtual machine(s). Instance count determines how many VMs you get per role.
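Role code can also inspect those settings at run time through the ServiceRuntime API. The snippet below is a hedged sketch using the SDK 1.x RoleEnvironment class; note that VM size is fixed in the service definition and cannot be changed from code.

// Hedged sketch: reading deployment information from inside a running role
// (Microsoft.WindowsAzure.ServiceRuntime, Azure SDK 1.x era).
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class DeploymentInfo
{
    public static void LogInstanceCounts()
    {
        foreach (Role role in RoleEnvironment.Roles.Values)
        {
            // Instance count per role, as set in the service configuration.
            Console.WriteLine("{0}: {1} instance(s)", role.Name, role.Instances.Count);
        }

        Console.WriteLine("Running as: " + RoleEnvironment.CurrentRoleInstance.Id);
    }
}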

            Summary

            Now, we’re ready to deploy a role to the Windows Azure cloud. I’ll cover that in my next post.

            To learn more about Windows Azure, come to Learning Tree course 2602, Windows Azure Platform Introduction: Programming Cloud-Based Applications.

            You [might] also be interested in other courses in Learning Tree’s .NET programming curriculum.


            <Return to section navigation list> 

            Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

            image

            No significant articles today.


            <Return to section navigation list> 

            Cloud Security and Governance

ShooAnswers posted Microsoft Azure: Security in the Cloud on 1/23/2011:

            image

Windows Azure is deployed within Global Foundation Services datacenters, and thus enjoys the network security benefits provided by GFS. It is the responsibility of application developers to ensure that application data is secured at the application layer. Thus it is up to the application developer to determine whether/which data should be encrypted. Microsoft’s hat in the cloud security ring is Azure, a services platform by which organizations can create, deploy, manage and distribute web-based applications on the local network (private cloud) or over the Internet. But will those applications and services offer a secure computing environment? In this article, we look at what Microsoft is doing to address the biggest cloud security “hot spots”.

            Read More- http://www.windowsecurity.com/articles/Microsoft-Azure-Security-Cloud.html

            Related posts:

            1. Should Microsoft Identity Integration Server Be Part of Your Security Plan?
            2. Secure Installation of Microsoft SQL Server 2000
            3. Microsoft Windows and the Common Criteria Certification Part I
            4. Are Free/Low Cost Web Apps Secure Enough for Business? – Part 2: Microsoft Office Web Apps and BPOS/Office 365


            <Return to section navigation list> 

            Cloud Computing Events

            Michael S. Collier offered slides from his CodeMash 2011 Presentation: Windows Azure – What, Why, and How in a 1/22/2011 post:

            image

            Last week wrapped up another fabulous CodeMash.  I think this conference gets better and better each year!

I was fortunate enough to be given the opportunity to speak at CodeMash this year.  There were a few sessions on cloud computing this year – some related to Windows Azure and some on Amazon AWS.  It’s no secret that I’m very passionate about Windows Azure.  So, it should be no surprise my session was on Windows Azure.  If you would like to view the slides for my presentation, you can get them here.

            I hope those that attended my session enjoyed it!  If you have any questions, please feel free to contact me.  I’m always happy to chat about Windows Azure!


            The Kentucky .NET Users Group announced User Group Meeting Thur January 27th: Cloud Connectivity at the University of Louisville Campus (South of Downtown Louisville), Duthie Center for Engineering, Room 117 (Building behind the JB Speed Hall):

            Cloud Connectivity

            With cloud computing, social networks, and connected architectures, one of the first design decisions you need to make is how to authenticate users and services. In this session, we will examine a service that makes it easier - Azure Access Control Service (ACS).  We will show how to implement ACS as an integrated, single sign on and centralized authorization for web sites and phone applications. We will walk through all the steps to setup an ACS account, configure an ACS Project, select the Identity Providers (Live, Google, Facebook), configure Relying Parties, and using returned claims in an ASP.Net MVC site.
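As a rough preview of what the session will cover, reading the claims ACS returns inside an ASP.NET MVC controller looks roughly like the hedged sketch below, assuming Windows Identity Foundation (WIF) has already been configured for the site; the controller and claim handling are illustrative, not taken from the presenters’ material.

// Hedged sketch: enumerating ACS-issued claims in an ASP.NET MVC controller,
// assuming Windows Identity Foundation (WIF) is configured for the site.
using System.Web.Mvc;
using Microsoft.IdentityModel.Claims;

public class HomeController : Controller
{
    [Authorize]
    public ActionResult Index()
    {
        var identity = User.Identity as IClaimsIdentity;
        if (identity != null)
        {
            foreach (Claim claim in identity.Claims)
            {
                // e.g. http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name
                System.Diagnostics.Debug.WriteLine(claim.ClaimType + " = " + claim.Value);
            }
        }
        return View();
    }
}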

            Windows Azure Marketplace: Consuming the Cloud DataMarket

            Leveraging the Cloud's data services can provide real-time feeds that keep your users bustling. Learn how to quickly absorb the DataMarket's datasets using OData APIs and expose them to client applications on any platform. A sample from Microsoft's Windows Azure Support team utilizing a Windows Azure service to consume crime data from Data.gov will be discussed. OData, one of the cloud's increasingly popular and flexible web protocols for querying and updating data, will be examined and reviewed.
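As a rough preview of the DataMarket material, a plain OData request against a DataMarket feed can be issued with nothing more than WebClient, as in the hedged sketch below; the feed URL, query options and credential format are assumptions for illustration, not details from the session abstract.

// Hedged sketch: querying a Windows Azure Marketplace DataMarket feed over
// OData. The URL and the Basic-auth/account-key details are assumptions.
using System;
using System.Net;

public static class DataMarketSample
{
    public static string QueryCrimeData(string accountKey)
    {
        // Hypothetical feed URL; a real DataMarket subscription supplies its own.
        string url = "https://api.datamarket.azure.com/data.gov/Crimes/CityCrime?$top=10";

        using (var client = new WebClient())
        {
            // DataMarket is generally documented as accepting the account key
            // as the Basic-auth password; treat this as an assumption.
            client.Credentials = new NetworkCredential("accountKey", accountKey);
            return client.DownloadString(url);   // raw Atom/OData payload
        }
    }
}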

            Speakers: DeVaris Brown, Microsoft Academic Developer Evangelist, Brian Carter, Kelly Orr, and Patrick Riley.


            <Return to section navigation list> 

            Other Cloud Computing Platforms and Services

Derrick Harris explained Why Amazon’s Cloud Competitors Won’t Follow into PaaS on 1/23/2011:

            Amazon Web Services developed its own flavor of PaaS because it had to. The cleverly named Elastic Beanstalk gives the company a way to keep up with cloud development trends. That doesn’t mean the rest of its IaaS brethren need to follow suit — at least not in the immediate future. Instead, several leading cloud providers are distinguishing themselves by innovating at the intersection of cloud computing and dedicated infrastructure, the one area they can certainly distinguish themselves from market leader AWS, which has to focus on developers because it doesn’t have a legacy business on which to lean.

            imageI think GoGrid might actually be the innovation leader in terms of fusing its dedicated hosting business with its cloud computing business. In 2008, it launched its Cloud Connect program (since renamed Hybrid Hosting) that lets customers deploy dedicated servers and connect them to GoGrid cloud servers via a private network. This week, GoGrid took its hybrid approach a step (or two) further by announcing dedicated infrastructure — servers, network, management software, etc. — as a cloud service. It’s a big deal because GoGrid’s customer base of service providers and more traditional businesses want to leverage — or even resell — GoGrid’s cloud capabilities, but they want to do so in the most secure manner possible.

            Rackspace Cloud is a step behind GoGrid in terms of merging managed hosting and cloud computing, but it’s making progress. Rackspace recently announced its Cloud Connect offering that, like the GoGrid offering once bearing the same name, lets customers deploy hybrid cloud environments comprised of dedicated managed servers and shared cloud servers. Additionally, Rackspace is now offering Cloud Servers with a Managed Service Level, which brings the company’s experience in managing customer environments into the traditionally self-service cloud computing model.

            Joyent doesn’t have a managed hosting business like GoGrid and Rackspace, but it is differentiating itself by catering to conservative customers that want to build private clouds. Joyent’s SmartDataCenter software brings the Joyent public-cloud experience in-house, complete with IaaS and PaaS capabilities. As far as I know, it’s the only cloud provider actually selling its secret sauce to individual companies and service providers to run on their own infrastructure, although it certainly isn’t the only company pushing cloud software.

            imageWhat’s noteworthy is that rather than try to keep up with AWS by rolling out feature after feature or jumping into the PaaS space, this collection of providers seem to have chosen to compete by leveraging their own strengths. Such strategies probably won’t result in sheer user numbers comparable to AWS — although Rackspace and Joyent do enjoy fairly large developer bases — but they should result in bigger deals and plenty of long-term business.

            For more on these cloud providers’ strategies, see my weekly update at GigaOM Pro (subscription required).

            Image courtesy of David Wyatt.



Stacey Higginbotham posted Meet Elastic Beanstalk, Amazon’s PaaS Play on 1/19/2011 (missed when published):

            image Amazon Web Services, which popularized cloud computing with its Elastic Compute Cloud and Simple Storage Service, has moved up the stack from infrastructure to providing Amazon Elastic Beanstalk, its Platform-as-a-Service play. However, Amazon is layering its PaaS offering on top of its other services in a way that’s easily reversed, which means developers can take the easy way out of developing on Beanstalk, or they can peel back the platform to manually provision and tweak their underlying VMs if they want.

            Adam Selipsky, VP of Amazon Web Services, says the service was built to address the idea of vendor lock-in and inflexibility that commonly afflicts other platforms for application development. With the first efforts, Amazon is providing a framework for folks to build Java apps on AWS with other programming languages and partnerships to follow. It’s not surprising, given the attention that Platforms-as-a-Service have been getting in the last 12 months or so.

            image

            At the beginning of last year, Microsoft finally opened up its Azure platform, while a few months later, VMware and Salesforce.com teamed up to offer VMforce, a Java cloud hosted on Salesforce’s infrastructure, and VMware eased into the PaaS market in other ways this year. Google amped up its App Engine offering, tying it to Salesforce and VMware, while smaller providers of Platforms-as-a-Service such as Makara and Heroku were snapped up (by Red Hat and Salesforce respectively).

            imageFor Amazon, long the leader in the cloud space, seeing competitors move up the stack and developers taking advantage of those platforms that weren’t necessarily built on AWS infrastructure was a warning. As my colleague Derrick Harris wrote back in May (GigaOM Pro sub req’d):

            image So, my question is this: If AWS really will be simplifying management within the coming weeks, what are the chances it does so via a PaaS offering of sorts? It would be wise for AWS to leverage its current leads in market and mind share and preempt any serious momentum by PaaS providers. Technically, they’re not competitors yet (to the degree that IaaS and PaaS can vary differently in terms of target audience), but the next generation of PaaS offerings will blur those lines. AWS has the tools to build a holistic PaaS offering, the economies of scale to make it profitable, and the SDKs to cater to specific set of developers. If it does so, the cloud-computing discussion will take on an entirely different tenor as PaaS providers scramble to differentiate themselves from AWS in this area, too.

            Amazon’s Beanstalk offering has taken longer to launch than the few weeks Derrick had hoped for, but now it’s here. Amazon’s next move will be expanding beyond Java, something it could do via partnerships with other providers or on its own. Brian White, a developer with AWS, said PHP and Ruby are high on Amazon’s list, but declined to specify how partnerships with other providers might look. When asked about competing with other PaaS providers who host their platforms on AWS infrastructure, Selipsky suggested that perhaps those might become partners for supporting other languages.

            Indeed, in its press release on Beanstalk, Amazon included a quote from John Dillon, CEO of Engine Yard, saying the company is working with Amazon to provide a Ruby on Rails offering on Beanstalk. So will Amazon be a giant lumbering down the beanstalk to crush the PaaS competition, and will it lift others up to its height?

            Image courtesy of Flickr user Melody.



Derrick Harris reported GoGrid Fuses Cloud Capabilities to Dedicated Servers to GigaOM’s Structure blog on 1/19/2011 (missed when published):

            Cloud provider GoGrid has expanded its Infrastructure-as-a-Service catalog by launching a Hosted Private Cloud that maintains all the features of GoGrid’s standard multi-tenant cloud offering, but on dedicated physical servers. It’s an interesting tactic for getting new customers, and it highlights the different value propositions and visions of the leading cloud providers. Unlike Amazon Web Services, which today went even further down the developer path by releasing its own PaaS offering, GoGrid is targeting more-conservative IT types who want the benefits of cloud computing but still aren’t sold on the security of sharing resources.

image The dedicated servers aren’t the most noteworthy thing about the GoGrid Hosted Private Cloud — the company already offers those as part of its hybrid cloud offering — the retention of the core IaaS features is. That means on-demand provisioning, pay-per-use billing, the same APIs and management portal, load balancing and the complete lineup of partner products in the GoGrid Exchange. Additionally, customers can choose from multiple data centers — GoGrid, Verizon and Digital Realty Trust in San Francisco, and Equinix in Washington, D.C. (Ashburn, Va.) — in which to host their private-cloud servers. According to GoGrid CEO John Keagy, infrastructure is already in place in the GoGrid and Verizon data centers, whereas customers choosing Digital Realty Trust or Equinix will have to rent their own private cages.

If you’re wondering how dedicated resources can scale on demand, here’s how Hosted Private Cloud works. Customers have their own pools of physical servers on which they can scale up and down VMs as need be. Keagy says that adding VMs is truly a real-time experience, while adding new physical servers, which GoGrid calls “nodes,” is near real-time. This is because Hosted Private Cloud nodes are the same “super-large Intel boxes” atop which the GoGrid public cloud is built, meaning they’re already in place and ready to go when needed. Each node can house up to 250 VMs.

Cloud computing purists might deride efforts like Hosted Private Cloud as “fake clouds,” but that won’t likely make any difference to sales, and certainly doesn’t make it any worse an idea. For cloud providers like GoGrid, which have their roots in managed hosting, it’s smart to make use of that expertise for differentiation instead of always trying to compete with AWS. Further, Keagy told me that GoGrid is doing a lot of business with service providers, including Orange, looking to resell GoGrid resources under their own banners, and Hosted Private Cloud is the result of their desires.

            Keagy understands why AWS focuses on developers with efforts like Elastic Beanstalk, but says that GoGrid is focusing on “where the available market is” now. “We think the bulk of the 3 trillion IT economy is for production challenges, where you can’t just Greenfield new applications,” Keagy explained. “We’re not targeting developers so much as we’re targeting systems administrators.” However, he added that cloud watchers “don’t want to count us out” on the PaaS front, as GoGrid is working on its own flavor of platform services that will be tailored to its customer base.

CIOs appear interested in private clouds, so GoGrid’s challenge will be getting them to buy into its approach instead of building their own in-house clouds. It might come down to a matter of weighing cost savings against the peace of mind of knowing exactly what’s going on with their infrastructure and who’s tending to it. Also, GoGrid will have to compete against a number of competitors selling their own flavors of dedicated, physical cloud resources, including StrataScale, Peer1 Hosting and VMware vCloud Datacenter partners. It wouldn’t be surprising to see Rackspace get into this market too, as a complement to its dedicated-shared hybrid cloud offering.

            Image courtesy of Flickr user midom.



            <Return to section navigation list> 
