Thursday, January 27, 2011

Windows Azure and Cloud Computing Posts for 1/27/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Updated 1/27/2011 at 3:30 PM PST with a few new articles marked

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi delivered a brief Overview of SQL Azure DataSync to the SQL Azure Team blog on 1/27/2011:

SQL Azure Data Sync provides a cloud based synchronization service built on top of the Microsoft Sync Framework. It provides bi-directional data synchronization and capabilities to easily share data across SQL Azure instances and multiple data centers. Last week Russ Herman posted a step-by-step walkthrough of SQL Azure to SQL Server synchronization to the TechNet Wiki.

Typical usage scenarios for Data Sync

  • On-Premises to Cloud
  • Cloud to Cloud
  • Cloud to Enterprise
  • Bi-directional or  sync-to-hub or sync-from-hub synchronization

Conclusion

The SQL Azure Data Sync is a rapidly maturing synchronization framework meant to provide synchronization in cloud and hybrid cloud solutions utilizing SQL Azure.  In a typical usage scenario, one SQL Azure instance is the "hub" database, which provides bi-directional messaging to member databases in the synchronization scheme. 


Preview Chapter 5: An Introduction to SQL Azure from O’Reilly Media’s Microsoft Azure: Enterprise Application Development by Richard J. Dudley and Nathan Duchene by signing up for a Safari Books Online trial.


<Return to section navigation list> 

MarketPlace DataMarket and OData

My (@rogerjenn) Vote for Windows Azure and OData Open Call Sessions at MIX 2011 of 1/26/2011 lists these six potential MIX 2011 sessions:

Following are the … Open Call page’s session abstracts that contain OData in their text:

Optimizing Data Intensive Windows Phone 7 Applications: Shawn Wildermuth

As many of the Windows Phone 7 applications we are writing are using data, it becomes more and more important to understand the implications of that data. In this talk, Shawn Wildermuth will talk about how to monitor and optimize your data usage. Whether you’re using Web Services, JSON or OData, there are ways to improve the user experience and we’ll show you how!


Creating OData Services with WCF Data Services: Gil Fink

Data is a first-class element of every application. The Open Data Protocol (OData) applies web technologies such as HTTP, AtomPub and JSON to enable a wide range of data sources to be exposed over HTTP in a simple, secure and interoperable way. This session will cover WCF Data Services best practices so you can use it the right way.


REST, ROA, and Reach: Building Applications for Maximum Interoperability: Scott Seely

We build services so that someone else can use those services instead of rolling their own. Terms like Representation State Transfer, Resource Oriented Architectures, and Reach represent how we reduce the Not Invented Here syndrome within our organizations. That’s all well and good, but it doesn’t necessarily tell us what we should actually DO. In this talk, we’ll get a common understanding of what REST and ROA is and then take a look at how these things allow us to expose our services to the widest possible audience. We’ll even cover the hard part: resource structuring. Then, we’ll look at how to implement the hard bits with ROA’s savior: OData!


Using T4 templates for deep customization with Entity Framework 4: Rick Ratayczak

Using T4 templates and customizing them for your project is fairly straightforward if you know how. Rick will show you how to generate your code for OData, WPF, ASP.NET, and Silverlight. Using repositories and unit-of-work patterns will help to reduce time to market as well as coupling to the data store. You will also learn how to generate code for client and server validation and unit-tests.


WCF Data Services, OData & jQuery. If you are an asp.net developer you should be embracing these technologies: James Coenen-Eyre

The session would cover the use of WCF Data Services, EntityFramework, OData, jQuery and jQuery Templates for building responsive, client side web sites. Using these technologies combined provides a really flexible, fast and dynamic way to build public facing web sites. By utilising jQuery Ajax calls and jQuery templating we can build really responsive public facing web sites and push a lot of the processing on to the client rather than depending on Server Controls for rendering dynamic content. I have successfully used this technique on the last 3 projects I have worked on with great success and combined with the use of MemoryCache on the Server it provides a high performance solution with reduced load on the server. The session would walk through a real world example of a new project that will be delivered in early 2011. A Musician and Artists Catalog site combined with an eCommerce Site for selling merchandise as well as digital downloads.


On-Premise Data to Cloud to Phone - Connecting with Odata: Colin Melia

You have corporate data to disseminate into the field, or service records that need to be updated in the field. How can you quickly make that data accessible from your on-premise system to Windows Phone users? Come take a look at OData with Microsoft MVP for Silverlight and leading WP7 trainer, Colin Melia, and see how you can expose data and services into the cloud and quickly connect to it from the phone, from scratch


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Itai Raz published Introduction to Windows Azure AppFabric blog posts series – Part 1: What is Windows Azure AppFabric trying to solve? on 1/27/2011 to the Windows Azure AppFabric Team blog:

Recently, in October 2010, at our Professional Developers Conference (PDC) we made some exciting roadmap announcements regarding Windows Azure AppFabric, and we have already gotten very positive feedback regarding this roadmap from both customers and analysts (Gartner Names Windows Azure AppFabric "A Strategic Core of Microsoft's Cloud Platform").

As a result of these announcements we wanted to have a series of blog posts that will give a refreshed introduction to Windows Azure AppFabric, its vision and roadmap.

Until the announcements at PDC we presented Windows Azure AppFabric as a set of technologies that enable customers to bridge applications in a secure manner across on-premises and the cloud. This is all still true, but with the recent announcements we now broaden this, and talk about Windows Azure AppFabric as being a comprehensive cloud middleware platform that raises the level of abstraction when developing applications on the Windows Azure Platform.

But first, let's begin by explaining what exactly it is we are trying to solve.

Businesses of all sizes experience tremendous cost and complexity when extending and customizing their applications today.  Given the constraints of the economy, developers must find new ways to do more with less while simultaneously finding new, innovative ways to keep up with the changing needs of the business.  This has led to the emergence of composite applications as a solution development approach.  Instead of significantly modifying existing applications and systems, and relying solely on the packaged software vendor when there is a new business need, developers are finding it a lot cheaper and more flexible to build these composite applications on top of, and surrounding, existing applications and systems.

Developers are now also starting to evaluate newer cloud-based platforms, such as the Windows Azure Platform, as a way to gain greater efficiency and agility. The promised benefits of cloud development are impressive, by enabling greater focus on the business and not in running the infrastructure.

As noted earlier, customers already have a very large base of existing heterogeneous and distributed business applications spanning different platforms, vendors and technologies.  The use of cloud adds complexity to this environment, since the services and components used in cloud applications are inherently distributed across organizational boundaries.  Understanding all of the components of your application - and managing them across the full application lifecycle - is tremendously challenging. 

Finally, building cloud applications often introduces new programming models, tools and runtimes, making it difficult for customers to enhance, or transition from, their existing server-based applications.

Windows Azure AppFabric is meant to address these challenges through 3 main concepts:

1. Middleware Services - pre-built higher-level services that developers can use when developing their applications, instead of the developers having to build these capabilities on their own. This reduces the complexity of building the application and saves a lot of time for the developer.

2. Building Composite Applications - capabilities that enable you to assemble, deploy and manage a composite application that is made up of several different components as a single logical entity.

3. Scale-out Application Infrastructure - capabilities that make it seamless to get the benefits of the cloud, such as elastic scale, high availability, density, and multi-tenancy.

So, with Windows Azure AppFabric you don't just get the common advantages of cloud computing, such as not having to own and manage the infrastructure; you also get pre-built services, a development model, tools, and management capabilities that help you build and run your application in the right way and enjoy more of the great benefits of cloud computing, such as elastic scale, high availability, multi-tenancy, and high density.

Tune in to the future blog posts in this series to learn more about these capabilities and how they help address the challenges noted above.

Other places to learn more on Windows Azure AppFabric are:

If you haven't already taken advantage of our free trial offer, make sure to start using Windows Azure AppFabric today!

A couple of detailed infographics would have contributed greatly to this post.


Sebastian W (@qmiswax) posted Windows Azure AppFabric and CRM 2011 online part 2 to his Mind the Cloud blog on 1/27/2011:

A while back I published a post about how to configure CRM 2011 Online's out-of-the-box integration with AppFabric, and I showed all the configuration on the CRM side. Now it's time to present the client application. I'd call that app a listener, and its main role will be to listen for and receive all messages sent from CRM Online via AppFabric. The application itself will be hosted on premise (to be 100% honest, on my laptop).

The CRM SDK version from December 2010 contains a sample app, and I'm not going to reinvent the wheel. I'll show what you need to configure in that app to receive messages from our CRM system. The presented example will be the simplest one; as mentioned in the previous post, it is one-way integration, so the message comes from CRM Online via the AppFabric Service Bus to our app. Let's get our hands dirty. I attached a ready-to-compile solution, which is a modified example from the SDK (I removed the Azure storage code to simplify it); all you need to do is change app.config.

Values to change

  • IssuerSecret : Current Management Key from appfabric.azure.com
  • ServiceNamespace : Service Namespace from  appfabric.azure.com
  • IssuerName : Management Key Name
  • ServicePath : part of service bus url

The code of that application is very simple: the example implements the IServiceEndpointPlugin behavior, which has only one method, Execute. The functionality of the sample app is even simpler; it prints the contents of the Microsoft Dynamics CRM execution context sent from the plugin to the on-premise app.
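As a rough sketch only (not the SDK sample itself, and with the WCF/Service Bus hosting plumbing omitted), such a listener class boils down to something like the following C#; the class name and the exact properties printed are illustrative assumptions:

using System;
using Microsoft.Xrm.Sdk;

// Hypothetical one-way listener: CRM Online pushes the plugin execution context
// through the AppFabric Service Bus, and this class simply dumps it to the console.
public class RemoteContextListener : IServiceEndpointPlugin
{
    public void Execute(RemoteExecutionContext context)
    {
        Console.WriteLine("Message : {0}", context.MessageName);
        Console.WriteLine("Entity  : {0}", context.PrimaryEntityName);
        Console.WriteLine("User    : {0}", context.UserId);

        // InputParameters carries the Target entity and any other message arguments.
        foreach (var parameter in context.InputParameters)
        {
            Console.WriteLine("  {0} = {1}", parameter.Key, parameter.Value);
        }
    }
}

In the SDK sample, a class like this is registered with a WCF service host that listens on the Service Bus endpoint built from the ServiceNamespace and ServicePath values listed above.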

Despite the simplicity of the example, it is a very "powerful" solution from a business point of view: we integrated CRM 2011 Online with an on-premise app without much hassle. So happy coding/integrating. :)

You can download source code from here


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Mary Jo Foley (@maryjofoley) reported Microsoft Research delivers cloud development kit for Windows Phone 7 in a 1/27/2011 post to ZDNet’s All About Microsoft blog:

Microsoft Research has made available for download a developer preview of its Windows Phone 7 + Cloud Services Software Development Kit (SDK).

The new SDK is related to Project Hawaii, a mobile research initiative which I’ve blogged about before. Hawaii is about using the cloud to enhance mobile devices. The “building blocks” for Hawaii applications/services include computation (Windows Azure); storage (Windows Azure); authentication (Windows Live ID); notification; client-back-up; client-code distribution and location (Orion).

The SDK is “for the creation of Windows Phone 7 (WP7) applications that leverage research services not yet available to the general public,” according to the download page.

The first two services that are part of the January 25 SDK are Relay and Rendezvous. The Relay Service is designed to enable mobile phones to communicate directly with each other, and to get around the limitation created by mobile service providers who don’t provide most mobile phones with consistent public IP addresses. The Rendezvous Service is a mapping service “from well-known human-readable names to endpoints in the Hawaii Relay Service.” These names may be used as rendezvous points that can be compiled into applications, according to the Hawaii Research page.

The Hawaii team is working on other services which it is planning to release in dev-preview form by the end of February 2011. These include a Speech-to-Text service that will take an English spoken phrase and return it as text, as well as an “OCR in the cloud” service that will allow testers to take a photographic image that contains some text and return the text. “For example, given a JPEG image of a road sign, the service would return the text of the sign as a Unicode string,” the researchers explain.

Microsoft officials said earlier this week that the company sold 2 million Windows Phone 7 operating system licenses to OEMs last quarter for them to put on phones and provide to the carriers. (This doesn’t mean 2 million Windows Phone 7s have been sold, just to reiterate.) Microsoft launched Windows Phone 7 in October in Europe. There are still no Windows Phone 7 phones available from Verizon or Sprint in the U.S. Microsoft and those carriers have said there will be CDMA Windows Phone 7s on those networks some time in 2011. …


Eric Nelson (@ericnel) pointed out A little gem from MPN: FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform on 1/27/2011:

I know a lot of technical people who work in partners (ISVs, System Integrators etc).

I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep and high quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc.

I am one of those people :-)

Hence imagine my surprise when I stumbled upon this little gem Architectural Guidance for Migrating Applications to Windows Azure Platform (your company and hence your live id need to be a member of MPN – which is free to join).

This is first class stuff – and represents about 4 hours which is really 8 if you stop and ponder :)

Course Structure

The course is divided into eight modules.  Each module explores a different factor that needs to be considered as part of the migration process.

  • Module 1:  Introduction: 
    • This section provides an introduction to the training course, highlighting the values of the Windows Azure Platform for developers.
  • Module 2:  Dynamic Environment:
    • This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform.
  • Module 3:  Local State:
    • This session details the local state of the Windows Azure Platform. This section details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods.
  • Module 4:  Latency and Timeouts:
    • This session goes into detail explaining the considerations surrounding latency, timeouts and how to assess an IT portfolio.
  • Module 5:  Transactions and Bandwidth:
    • This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. This session will detail the transactions and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs.
  • Module 6:  Authentication and Authorization:
    • This session details authentication and authorization protocols within the Windows Azure Platform. This session will detail information around web methods of authorization, web identification, Access Control Benefits, and a walkthrough of the Windows Identity Foundation.
  • Module 7:  Data Sensitivity:
    • This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns, and details the strategies that developers can take to increase the security of their data in the cloud.
  • Module 8:  Summary
    • Provides an overall review of the course.


Pradeep Viswav posted on 1/27/2011 a 00:05:08 RealDolmen Shows Dynamics CRM 2011 + Windows Phone 7 + Windows Azure demo video clip:


This is the video that played on the RealDolmen stand during the CRM 2011 Launch event in Belgium. In this demo application we integrate CRM Online with Windows Phone 7 using Windows Azure (the cloud).

Pradeep is a Microsoft Student Partner currently pursuing his Computer Science & Engineering degree.

The Microsoft Case Studies Team posted Digital Marketing Startup [Kelly Street Digital] Switches Cloud Providers and Saves $4,200 Monthly on 1/21/2011 (missed when published):

Kelly Street Digital developed an application that tracks consumer interactions in digital marketing campaigns. For seven months, the company delivered the beta application in the cloud through Amazon Web Services and paid consultants to manage the environment and administer the database. However, when the service went down and no support was available, the company decided that it needed a faster, more reliable, and more cost-effective solution. After a successful pilot program, the company moved its application to the Windows Azure platform. Because its developers use Microsoft development tools that are integrated with Windows Azure, it was simple to migrate the database-centric application to Microsoft SQL Azure. The company pays only 16 percent of what it paid for Amazon Web Services, and it no longer has to pay consultants. Plus, its application runs much faster on Windows Azure.

Situation
Glen Knowles, the Cofounder of Kelly Street Digital , started his company after working for many years directing marketing campaigns. While working at a major advertising firm in Australia, he was constantly frustrated because he was unable to track consumer movements when conducting campaigns for large customers. “All the tracking tools were focused on pages and clicks,” say Knowles. “I wanted to focus on tracking people.”

In 2008, Knowles founded Kelly Street Digital, a self-funded startup company with eight employees—six of whom are developers. The company created Campaign Taxi, an application available by subscription that helps customers track consumer interactions across multiple marketing campaigns. Designed for advertising and marketing agencies, Campaign Taxi features an application programming interface (API) that customers can use to easily set up digital campaigns, add functionality to their websites, store consumer information in a single database, and present the data in reports.

Kelly Street Digital formally launched Campaign Taxi on September 1, 2010, and the company updates the product every four weeks. It has eight customers in Australia, including one government customer who uses the application to track user-generated content on its website. Among its corporate customers is an online book reseller who uses the application to plot online registrations, responses to monthly email offers, and genre preferences of buyers.

“The goal of Campaign Taxi is to follow consumers from one campaign to the next with a cost-effective solution,” says Knowles. “We build a list of all the consumers’ interactions over time, and the information is aggregated, so you can get a single, cumulative view across all consumer activity, across multiple campaigns, and measure your effectiveness.” (See Figure 1.)

Figure 1. The Campaign Taxi application aggregates consumer interactions during marketing campaigns.

When the company began developing Campaign Taxi, it ran the application with a local hosting company. It then intended to run the application in an on-premises data center. It made significant infrastructure purchases, including two instances of Microsoft SQL Server 2008 R2 data management software and two web servers. Then Kelly Street Digital discovered Amazon Web Services and, after a few trials, it moved the application to the Amazon cloud. “We liked the idea of scalability so that as volume of customer activity grew, we could virtually add more servers,” says Knowles. “Plus, the cost of Amazon Web Services was about the same as the on-premises hardware infrastructure.”

The Campaign Taxi beta application resided on Amazon Web Services for seven months. Kelly Street Digital purchased third-party software to manage instances of Campaign Taxi in the Amazon cloud. The company secured the services of a consultant based in the United States to set up and manage the cloud environment. It also hired a consultant database administrator to help with the database servers. Says Knowles, “Not only was it expensive to hire consultants, but it was unreliable because they sometimes had conflicting priorities and they lived in different time zones.”

“The savings we achieved by switching to Windows Azure is just outstanding. I can use the annual cost savings to pay a developer’s salary.”
- Glen Knowles, Cofounder, Kelly Street Digital

In December 2009, a few days before the Christmas holiday, the instance of Campaign Taxi in the Amazon cloud stopped running. The cloud consultant that Kelly Street Digital had hired to manage its environment was on vacation in Paris, France. “I couldn’t call Amazon, and they provided no support options,” says Knowles. “The best we could do was post on the developer forum. When you rely on the developer community for support, you can’t rely on them at Christmas time because they’re on holiday.”

In general, Kelly Street Digital felt it needed a faster and more reliable cloud solution. “If the Campaign Taxi API is down, our customers’ sites don’t work,” says Knowles. “If they’re relying on our application for competition entries, consumer registrations, or any other functionality, we’ve broken their site. I did not want Kelly Street Digital to be managing servers and patching software.”

Solution
In April 2010, when the Microsoft cloud-computing platform—Windows Azure—became available for technology preview, Kelly Street Digital decided to try it. It conducted a two-week pilot program with Campaign Taxi on the Windows Azure platform, which provides developers with on-demand compute and storage to host, scale, and manage web applications on the Internet through Microsoft data centers.

The first thing Kelly Street Digital noticed during the pilot program was that the response time of its application and API was significantly faster with Windows Azure than it was with Amazon Web Services. Based on this improved latency alone, the company decided to move Campaign Taxi to the Windows Azure platform. It estimated that the process would take six weeks, but it took one developer only three weeks to migrate the application. By June 2010, Kelly Street Digital was running the application—which was still in beta—in the Windows Azure cloud.

With the application on the Windows Azure platform, Kelly Street Digital can take advantage of familiar tools and well-established technologies, and it can deploy its application in minutes. It created Campaign Taxi by using the Microsoft .NET Framework, a software framework that supports several programming languages and is compatible with Windows Azure. The company’s developers write code in Microsoft Visual Studio 2010, an integrated development environment, and collaborate by using Microsoft Visual Studio Team Foundation Server 2010, an application lifecycle management solution that is used to manage development projects with tools for source control, data collection, reporting, and project tracking. “With Windows Azure, you press a button to test the application in the staging environment,” says Knowles. “Then you press another button to put the application into production in the cloud. It’s seamless.”

When Kelly Street Digital needed to rework Campaign Taxi to run on Microsoft SQL Azure—which provides database capabilities as a fully managed service—it sent its lead developer to a four-hour training through Microsoft BizSpark, a program that provides software startup companies with developer resources. The developer then wrote a script that quickly ported the application’s relational database to SQL Azure. “The migration from Microsoft SQL Server to Microsoft SQL Azure was quite straightforward because they’re very similar,” says Knowles. “You have to code around a few things, but they are basically the same application.”

When Kelly Street Digital needed another instance of the application while it was using Amazon Web Services, it had to commission a second instance of Microsoft SQL Server and have a database consultant configure the mirroring of data. With SQL Azure, however, Kelly Street Digital can scale the application automatically in the cloud without adding more hardware. The company uses Blob storage in Windows Azure to import consumer data and temporarily store its customers’ uploaded data files. It also employs Blob storage to store backups of SQL Azure.

“Windows Azure is an unbelievable product. I’m an evangelist for it in my network of startups. We’ve chosen this cloud platform and we’re sticking with it.”
- Glen Knowles, Cofounder, Kelly Street Digital

Although it still backs up its database locally each night to conform to best practices, Kelly Street Digital has yet to experience any data loss since migrating its application to Windows Azure. “One thing that Microsoft is really clear about is that you should always have two instances of your application in the cloud, but at first we had only one,” says Knowles. “When installing patches, we always have one instance running and available to customers in the cloud.”

Benefits
Since Kelly Street Digital moved its Campaign Taxi application to the Windows Azure platform, it benefits from significantly reduced costs, faster run time and increased speed of deployment, a familiar development environment, reliable service, and built-in scalability. “Windows Azure is an unbelievable product,” says Knowles. “I’m an evangelist for it in my network of startups. We’ve chosen this cloud platform and we’re sticking with it.”
Decreased Costs
Kelly Street Digital paid U.S.$4,970 each month for Amazon Web Services; the cost of subscribing to the Windows Azure platform is only 16 percent of that cost—$795 a month—for the same configuration. “The savings we achieved by switching to Windows Azure is just outstanding,” says Knowles, “I can use the annual cost savings to pay a developer’s salary.”

Additionally, the company no longer has to pay contractors to maintain its database or manage the Amazon Web Services environment—for an average savings of $600 a month. Kelly Street Digital staff members can now focus on improving the application. “With the Windows Azure platform, we don’t have to manage anything,” says Knowles. “We don’t have to manage the hosting environment. We don’t have to manage the databases. We just don’t have those discussions anymore. That’s a huge cost savings.”

Kelly Street Digital takes advantage of the Microsoft account tools that provide up-to-date reports on how much an organization is spending on cloud computing each month. Unlike Amazon, Microsoft offers pricing for Windows Azure in Australian dollars, so the company avoids the uncertainty of currency fluctuations that it used to experience. “The Australian dollar goes up and down like a yo-yo,” says Knowles. “If you’re a startup—especially if you’re using a scalable resource—you need to accurately predict your cash flow requirements. I have to know what it’s going to cost.”

Table 1. Kelly Street Digital saves 68.1 percent with the Windows Azure platform.

Increased Speed
Everyone at Kelly Street Digital was surprised at how much faster the Campaign Taxi application worked on the Windows Azure platform compared to its running time on Amazon Web Services. Says Knowles, “For that kind of improved latency, we would have paid a premium to convert to the Windows Azure platform.”

The increased speed of deployment impressed the team even more. “When we used Amazon Web Services, it took a whole afternoon to deploy our application,” says Knowles. “Now we can deploy our application by using Visual Studio Team Foundation Server and it takes maybe 20 minutes.”

Tightly Integrated Technologies
Kelly Street Digital takes advantage of familiar and reliable Microsoft technologies that make it simple to sustain a rapid development cycle for Campaign Taxi. Because they’re using familiar development tools, the company’s developers can focus on enhancing the application. “The integration of Windows Azure with Microsoft development tools is brilliant,” says Knowles. “The .NET Framework development environment is remarkably mature. If you have Visual Studio, Team Foundation Server, and the cloud, you’re in business.”

Improved Reliability
Kelly Street Digital has confidence in the service level agreement and 24-hour support that comes with the Windows Azure platform. “Our business lives and dies on the reliability of our service,” says Knowles. “With the Windows Azure platform, all system administration issues are gone. We control the most important thing: our application. The cloud takes care of the rest.”

Enhanced Scalability
The company also makes use of the built-in scalability that comes with the Windows Azure platform to store ever-increasing quantities of consumer interaction data. Kelly Street Digital expects storage requirements to double each month for the next year. “With Windows Azure, we don’t have to worry about scalability because it’s automatic,” says Knowles. “Plus, I don’t have to worry about how fast the company grows. I can embrace growth because scalability is built in.”


<Return to section navigation list> 

Visual Studio LightSwitch

Beth Massi (@bethmassi) described How To Send HTML Email from a LightSwitch Application in a 1/27/2011 post:

A while back Paul Patterson wrote an awesome step-by-step blog post based on a forum question on how to send an automated email using System.Net.Mail from LightSwitch. If you missed it here it is:

Microsoft LightSwitch – Send an Email from LightSwitch

This is a great solution if you want to send email in response to some event happening in the data update pipeline on the server side. In his example he shows how to send a simple text-based email using an SMTP server in response to a new appointment being inserted into the database. In this post I want to show you how you can create richer HTML mails from data and send them via SMTP. I also want to present a client-side solution that creates an email using the Outlook client which allows the user to see and modify the email before it is sent.

Sending Email via SMTP

As Paul explained, all you need to do to send an SMTP email from the LightSwitch middle-tier (server side) is switch to File View on the Solution Explorer and add a class to the Server project. Ideally you’ll want to put the class in the UserCode folder to keep things organized.


TIP: If you don’t see a UserCode folder that means you haven’t written any server rules yet. To generate this folder just go back and select any entity in the designer, drop down the “Write Code” button on the top right, and select one of the server methods like entity_Inserted.

The basic code to send an email is simple. You just need to specify the SMTP server, user id, password and port. TIP: If you only know the user ID and password then you can try using Outlook 2010 to get the rest of the info for you automatically.

Notice in my SMTPMailHelper class I’m doing a quick check to see whether the body parameter contains HTML and if so, I set the appropriate mail property.

Imports System.Net
Imports System.Net.Mail

Public Class SMTPMailHelper
    Const SMTPServer As String = "smtp.mydomain.com"
    Const SMTPUserId As String = "myemail@mydomain.com"
    Const SMTPPassword As String = "mypassword"
    Const SMTPPort As Integer = 25
    Public Shared Sub SendMail(ByVal sendFrom As String,
                               ByVal sendTo As String,
                               ByVal subject As String,
                               ByVal body As String)
        Dim fromAddress = New MailAddress(sendFrom)
        Dim toAddress = New MailAddress(sendTo)
        Dim mail As New MailMessage

        With mail
            .From = fromAddress
            .To.Add(toAddress)
            .Subject = subject

            If body.ToLower.Contains("<html>") Then
                .IsBodyHtml = True
            End If

            .Body = body
        End With

        Dim smtp As New SmtpClient(SMTPServer, SMTPPort)
        smtp.Credentials = New NetworkCredential(SMTPUserId, SMTPPassword)
        smtp.Send(mail)
    End Sub
End Class
Creating HTML from Entity Data

Now that we have the code to send an email I want to show you how we can quickly generate HTML from entity data using Visual Basic’s XML literals. (I love XML literals and have written about them many times before.) If you are new to XML literals I suggest starting with this article and this video. To use XML literals you need to make sure you have an assembly reference to System.Core, System.Xml, and System.Xml.Linq.

What I want to do is create an HTML email invoice for an Order entity that has children Order_Details. First I’ve made my life simpler by adding computed properties onto the Order_Details and Order entities that calculate line item and order totals respectively. The code for these computed properties is as follows:

Public Class Order_Detail
    Private Sub LineTotal_Compute(ByRef result As Decimal)
        ' Calculate the line item total for each Order_Detail
        result = (Me.Quantity * Me.UnitPrice) * (1 - Me.Discount)
    End Sub
End Class
Public Class Order
    Private Sub OrderTotal_Compute(ByRef result As Decimal)
        ' Add up all the LineTotals on the Order_Details collection for this Order
        result = Aggregate d In Me.Order_Details Into Sum(d.LineTotal)
    End Sub
End Class

Next I want to send an automated email when the Order is inserted into the database. Open the Order entity in the designer and then drop down the “Write Code” button on the top right and select Order_Inserted to generate the method stub. To generate HTML all you need to do is type well formed XHTML into the editor and use embedded expressions to pull the data out of the entities.

Public Class NorthwindDataService
    Private Sub Orders_Inserted(ByVal entity As Order)
        Dim toEmail = entity.Customer.Email

        If toEmail <> "" Then
            Dim fromEmail = entity.Employee.Email
            Dim subject = "Thank you for your order!"

            Dim body = <html>
                           <body style="font-family: Arial, Helvetica, sans-serif;">
                               <p><%= entity.Customer.ContactName %>, thank you for your order!<br></br>
                                   Order date: <%= FormatDateTime(entity.OrderDate, DateFormat.LongDate) %></p>
                               <table border="1" cellpadding="3"
                                   style="font-family: Arial, Helvetica, sans-serif;">
                                   <tr>
                                       <td><b>Product</b></td>
                                       <td><b>Quantity</b></td>
                                       <td><b>Price</b></td>
                                       <td><b>Discount</b></td>
                                       <td><b>Line Total</b></td>
                                   </tr>
                                   <%= From d In entity.Order_Details
                                       Select <tr>
                                                  <td><%= d.Product.ProductName %></td>
                                                  <td align="right"><%= d.Quantity %></td>
                                                  <td align="right"><%= FormatCurrency(d.UnitPrice, 2) %></td>
                                                  <td align="right"><%= FormatPercent(d.Discount, 0) %></td>
                                                  <td align="right"><%= FormatCurrency(d.LineTotal, 2) %></td>
                                              </tr>
                                   %>
                                   <tr>
                                       <td></td>
                                       <td></td>
                                       <td></td>
                                       <td align="right"><b>Total:</b></td>
                                       <td align="right"><b><%= FormatCurrency(entity.OrderTotal, 2) %></b></td>
                                   </tr>
                               </table>
                           </body>
                       </html>

            SMTPMailHelper.SendMail(fromEmail, toEmail, subject, body.ToString)
        End If
    End Sub
End Class

The trick is to make sure your HTML looks like XML (i.e. well formed begin/end tags) and then you can use embedded expressions (the <%= syntax) to embed Visual Basic code into the HTML. I’m using LINQ to query the order details to populate the rows of the HTML table. (BTW, you can also query HTML with a couple tricks as I show here).

So now when a new order is entered into the system an auto-generated HTML email is sent to the customer with the order details.


Sending Email via an Outlook Client

The above solution works well for sending automated emails but what if you want to allow the user to modify the email before it is sent? In this case we need a solution that can be called from the LightSwitch UI. One option is to automate Microsoft Outlook -- most people seem to use that popular email client, especially my company ;-). Out of the box, LightSwitch has a really nice feature on data grids that lets you export them to Excel if running in full trust. We can add a similar Office productivity feature to our screen that auto generates an email for the user using Outlook. This will allow them to modify it before it is sent.

We need a helper class on the client this time. Just like in the SMTP example above, add a new class via the Solution Explorer file view but this time select the Client project. This class uses COM automation, a feature of Silverlight 4 and higher. First we need to check if we’re running out-of-browser on a Windows machine by checking the AutomationFactory.IsAvailable property. Next we need to get a reference to Outlook, opening the application if it’s not already open. The rest of the code just creates the email and displays it to the user.

Imports System.Runtime.InteropServices.Automation

Public Class OutlookMailHelper
    Const olMailItem As Integer = 0
    Const olFormatPlain As Integer = 1
    Const olFormatHTML As Integer = 2

    Public Shared Sub CreateOutlookEmail(ByVal toAddress As String,
                                         ByVal subject As String,
                                         ByVal body As String)
        Try
            Dim outlook As Object = Nothing

            If AutomationFactory.IsAvailable Then
                Try
                    'Get the reference to the open Outlook App
                    outlook = AutomationFactory.GetObject("Outlook.Application")

                Catch ex As Exception 'If Outlook isn't open, then an error will be thrown.
                    ' Try to open the application
                    outlook = AutomationFactory.CreateObject("Outlook.Application")
                End Try

                If outlook IsNot Nothing Then
                    'Create the email

                    ' Outlook object model (OM) reference: 
                    ' http://msdn.microsoft.com/en-us/library/ff870566.aspx

                    Dim mail = outlook.CreateItem(olMailItem)
                    With mail
                        If body.ToLower.Contains("<html>") Then
                            .BodyFormat = olFormatHTML
                            .HTMLBody = body
                        Else
                            .BodyFormat = olFormatPlain
                            .Body = body
                        End If

                        .Recipients.Add(toAddress)
                        .Subject = subject

                        .Save()
                        .Display()
                        '.Send()
                    End With
                End If
            End If

        Catch ex As Exception
            Throw New InvalidOperationException("Failed to create email.", ex)
        End Try
    End Sub
End Class

The code to call this is almost identical to the previous example. We use XML literals to create the HTML the same way. The only difference is we want to call this from a command button on our OrderDetail screen. (Here’s how you add a command button to a screen.) In the Execute method for the command button is where we add the code to generate the HTML email. I also want to have the button disabled if AutomationFactory.IsAvailable is False and you check that in the CanExecute method.

Here’s the code we need in the screen:

Private Sub CreateEmail_CanExecute(ByRef result As Boolean)
    result = System.Runtime.InteropServices.Automation.AutomationFactory.IsAvailable
End Sub

Private Sub CreateEmail_Execute()
    'Create the html email from the Order data on this screen
    Dim toAddress = Me.Order.Customer.Email
    If toAddress <> "" Then

        Dim entity = Me.Order
        Dim subject = "Thank you for your order!"

        Dim body = <html>
                       <body style="font-family: Arial, Helvetica, sans-serif;">
                           <p><%= entity.Customer.ContactName %>, thank you for your order!<br></br>
                                  Order date: <%= FormatDateTime(entity.OrderDate, DateFormat.LongDate) %></p>
                           <table border="1" cellpadding="3"
                               style="font-family: Arial, Helvetica, sans-serif;">
                               <tr>
                                   <td><b>Product</b></td>
                                   <td><b>Quantity</b></td>
                                   <td><b>Price</b></td>
                                   <td><b>Discount</b></td>
                                   <td><b>Line Total</b></td>
                               </tr>
                               <%= From d In entity.Order_Details
                                   Select <tr>
                                              <td><%= d.Product.ProductName %></td>
                                              <td align="right"><%= d.Quantity %></td>
                                              <td align="right"><%= FormatCurrency(d.UnitPrice, 2) %></td>
                                              <td align="right"><%= FormatPercent(d.Discount, 0) %></td>
                                              <td align="right"><%= FormatCurrency(d.LineTotal, 2) %></td>
                                          </tr>
                               %>
                               <tr>
                                   <td></td>
                                   <td></td>
                                   <td></td>
                                   <td align="right"><b>Total:</b></td>
                                   <td align="right"><b><%= FormatCurrency(entity.OrderTotal, 2) %></b></td>
                               </tr>
                           </table>
                       </body>
                    </html>

        OutlookMailHelper.CreateOutlookEmail(toAddress, subject, body.ToString)
    Else
        Me.ShowMessageBox("This customer does not have an email address",
                          "Missing Email Address",
                          MessageBoxOption.Ok)
    End If
End Sub

Now when the user clicks the Create Email button on the ribbon, the HTML email is created and the Outlook mail message window opens allowing the user to make changes before they hit send.


I hope I’ve provided a couple options for sending HTML emails in your LightSwitch applications. Select the first option to use SMTP when you want automated emails sent from the server side. Select the second option to use the Outlook client when you want to interact with users that have Outlook installed and LightSwitch is running out-of-browser.


Mauricio Rojas described Restoring simple lookup capabilities to Silverlight ListBox in a 1/27/2011 post:

The VB6 and WinForms ListBox controls have the built-in capability to provide a simple data lookup, but the Silverlight ListBox does not.
So if you have a list with items:

  1. Apple
  2. Airplane
  3. Blueberry
  4. Bee
  5. Car
  6. Zoo
  7. Animal Planet

If your current item is Apple and you press A, the next current item will be Airplane:

  1. Apple
  2. Airplane
  3. Blueberry
  4. Bee
  5. Car
  6. Zoo
  7. Animal Planet

And the next time you press A, the next current item will be Animal Planet:

  1. Apple
  2. Airplane
  3. Blueberry
  4. Bee
  5. Car
  6. Zoo
  7. Animal Planet

And the next time you press A, the next current item will be Apple again:

To do this in Silverlight you need to add an event handler. You can create a user control with this event handler and replace your listbox with the custom listbox, or just add the event handler to the listboxes that need it. The code you need is the following:

void listbox1_KeyDown(object sender, KeyEventArgs e)
{
    // If nothing is selected yet there is nothing to search from.
    if (this.listbox1.SelectedItem == null)
        return;

    String selectedText = this.listbox1.SelectedItem.ToString();
    String keyAsString = e.Key.ToString();
    int maxItems = listbox1.Items.Count;
    if (!String.IsNullOrEmpty(selectedText) &&
        !String.IsNullOrEmpty(keyAsString) && keyAsString.Length == 1 &&
         maxItems > 1)
    {
        int currentIndex = this.listbox1.SelectedIndex;
        int nextIndex    = (currentIndex + 1) % maxItems;
        while (currentIndex != nextIndex)
        {
            if (this.listbox1.Items[nextIndex].ToString().ToUpper().StartsWith(keyAsString))
            {
                this.listbox1.SelectedIndex = nextIndex;
                return;
            }
            nextIndex    = (nextIndex + 1) % maxItems;
        }
        // NOTE: There is a slight difference in behavior. In WinForms, if you only had one item
        // that started with A and pressed A, the SelectedIndex would not change but a
        // SelectedIndexChanged event (the equivalent of SelectionChanged in Silverlight) would
        // still be raised; that is not the Silverlight behavior.
    }
}
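As a usage sketch, wiring the handler up is a single subscription in the constructor of the page or user control that hosts the ListBox (the control name below is made up for illustration; listbox1 is assumed to be declared in the XAML):

using System.Windows.Controls;

public partial class LookupListBoxControl : UserControl
{
    public LookupListBoxControl()
    {
        InitializeComponent();

        // Attach the simple-lookup behavior to the ListBox named listbox1 in XAML.
        this.listbox1.KeyDown += listbox1_KeyDown;
    }
}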


Mihail Mateev explained Using Visual Studio LightSwitch Applications with WCF RIA Services in a 1/27/2011 post to the Infragistics Community blog:

LightSwitch is primarily targeted at developers who need to rapidly produce business applications.

Visual Studio LightSwitch application could connect to a variety of data sources. For now it can connect to database servers, SharePoint Lists and WCF RIA Services.

Connecting to WCF RIA Services requires some settings that are not available "out of the box". Despite this, it is a frequently expected scenario, and developers want a sample that shows step by step how to create a LightSwitch application that uses WCF RIA Services as a data source. That is the reason for this walkthrough.

Demo Application:

Requirements:

Steps to implement the application:

  • Create a sample database (HYPO database)
  • Create a WCF RIA Services Class Library
  • Add an ADO.NET Entity Data Model
  • Add a Domain Service Class
  • Create a Visual Studio LightSwitch application
  • Add a WCF RIA Services data source
  • Create a screen, using a WCF RIA Services data source

Steps to Reproduce:

  • Create a sample database

Create a database, named HYPO

Set the fields in the table “hippos”.

  • Create a WCF RIA Services Class Library, named RIAServicesLibrary1

  • Add an ADO.NET Entity Data Model

Delete generated Class1 and add an ADO.NET Entity Data Model using data from database “HYPO”.

  • Add a Domain Service Class

Create a Domain Service Class, named HippoDomainService using the created ADO.NET Entity Data Model (HYPOEntityModel).

LightSwitch applications require WCF RIA Data Source to [have a] default query and key for entities.

Modify HippoDomainService class in  HippoDomainService.cs : add an attribute [Query(IsDefault=true)] to the method HippoDomainService.GetHippos().

[Query(IsDefault=true)]
public IQueryable<hippos> GetHippos()
{
    return this.ObjectContext.hippos;
}

Add an attribute [Key] to the ID field in the hippos class (HippoDomainService.metadata.cs)

internal sealed class hipposMetadata
{
    // Metadata classes are not meant to be instantiated.
    private hipposMetadata()
    {
    }

    public Nullable<int> Age { get; set; }

    [Key]
    public long ID { get; set; }

    public string NAME { get; set; }

    public string Region { get; set; }

    public Nullable<decimal> Weight { get; set; }
}
  • Create a Visual Studio LightSwitch application

Add a new LightSwitch application (C#) to the solution with WCF RIA Class Library:

  • Add a WCF RIA Services data source

Add a reference to RIAServicesLibrary1.Web and add HippoDomainService as data source.

Add the entity “hippos” as a data source object:

Ensure “hippos” properties:

Switch view type for the LightSwitch application to “File View” and display hidden files.

Open the App.config file in RIAServicesLibrary1.Web and copy the connection string to the Web.config file in the LightSwitch ServerGenerated project.

Connection string:

<connectionStrings>
  <add name="HYPOEntities"
       connectionString="metadata=res://*/HYPOEntityModel.csdl|res://*/HYPOEntityModel.ssdl|res://*/HYPOEntityModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;Initial Catalog=HYPO;Integrated Security=True;MultipleActiveResultSets=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>
  • Create a screen, using a WCF RIA Services data source

Add a new Search Data Screen, named Searchhippos using hippos entity:

Ensure screen properties:

Run the application: hippos data is displayed properly.

Modify hippos data.

You can download the source code of the demo application here: LSRIADemo.zip


Avkash Chauhan published a workaround for System.ServiceModel.Channels.ServiceChannel exception during ASP.NET application upgrade from Windows Azure SDK 1.2 to 1.3 on 1/27/2011:

When you upgrade your ASP.NET-based application from Windows Azure SDK 1.2 to 1.3, you may hit the following exception:

System.ServiceModel.CommunicationObjectFaultedException was unhandled
 Message=The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state.
 Source=mscorlib
  StackTrace:
    Server stack trace: 
       at System.ServiceModel.Channels.CommunicationObject.Close(TimeSpan timeout)
    Exception rethrown at [0]: 
       at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
       at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
       at System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
       at System.ServiceModel.ClientBase`1.System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
       at Microsoft.WindowsAzure.Hosts.WaIISHost.Program.Main(String[] args)
  InnerException:

This is a known issue with Windows Azure SDK 1.3 when an ASP.NET-based application is upgraded from SDK 1.2 to 1.3.

There are two potential causes for such problem as below:

1.     Web.config file marked read-only in compute emulator

2.     Multiple role instances writing to same configuration file in compute emulator

The problem is described in detail, along with a potential workaround, at:

http://msdn.microsoft.com/en-us/library/gg494981.aspx

Please refer to the following links for other potential issues when upgrading from Windows Azure SDK 1.2 to 1.3:

http://blogs.msdn.com/b/windowsazure/archive/2010/12/08/specifying-machine-keys-with-windows-azure-sdk-1-3.aspx

http://blogs.msdn.com/b/avkashchauhan/archive/2010/12/07/upgrading-asp-net-web-role-to-cloud-sdk-1-3-from-sdk-1-2-may-cause-form-authentication-broken-or-log-in-issues.aspx


Campbell Gunn answered What are LightSwitch Extensions? in a 1/21/2011 thread of the LightSwitch Extensibility forum:

Extensions are custom content/data access that is not available in the LightSwitch product.

Extensions are a combination of code (in either VB or C#) and Entity Framework metadata (model data) that describes the custom content/data access you are creating and then adding to LightSwitch.

You can get an understanding of what extensions are and how they are used by downloading the LightSwitch Training Kit at: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=ac1d8eb5-ac8e-45d5-b1e3-efb8e4e3ebd1&displaylang=en

This currently shows two extension types: one is a custom control extension and the other is a custom data source.

In our next release we will have more extension types available for you to develop on.

Campbell is Program Manager - Microsoft Visual Studio LightSwitch - Extensibility


Adron Hall (@adronbh) posted Cuttin’ Teeth With NuGet on 1/26/2011:

If you’re in the .NET Dev space you might have heard about the release of nuget.  This is an absolutely great tool for pulling in dependencies within a Visual Studio 2010 Project.  (It can also help a lot in Visual Studio 2008, and maybe even earlier versions)

Instead of writing up another tutorial I decided I’d put together specific links for putting together packages, installing packages, and other activities around using this tool.  In order of importance in getting started, I’ve itemized the list below.

  1. Nuget Gallery: The first site to check out is the Nuget Codeplex Site.
  2. Check out the Nuget Gallery.
  3. Then check out the getting started page.

Once you’ve checked out all those sites & got rolling with nuget, be sure to check out some of the blogs from the guys that have put time in on the development and ongoing awesomeness of it!

That’s all I’ve got for now, check out the nuget, if you code against the .NET stack you owe it to yourself!

Also some of my cohorts have put together a few pieces of information related to nuget and getting it tied together a bit like Ruby on Rails Gems:


<Return to section navigation list> 

Windows Azure Infrastructure

• Dan Orlando posted Cloud Computing: Choosing a Provider for Platform as a Service (PaaS) on 1/27/2011:

I recently wrote a series of articles that will be published next month on the topic of cloud computing, and there was one thing that really stood out about the current state of cloud computing. A few years ago, cloud computing was mostly an abstract concept with varying definitions depending on who you asked. Things have changed though. It is evident that a large portion of the Information Technology and business sectors have gotten better at understanding cloud computing. However, when cloud computing is broken down into its three classifications - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) - most people find it very hard to understand the differences.

The most prevalent confusion seems to be with Platform as a Service. I’ve experienced IT professionals confusing Platform as a Service with Infrastructure as a Service so many times in the last few months that I’ve lost track. As a result, the articles I wrote go into great detail on these classifications and their differences, and I will post links to them here when they are published. In the meantime, I thought I would post a brief description of Platform as a Service and the players involved to assist in our continued understanding of cloud computing and its three major classifications.

Platform as a Service is like a middle tier between infrastructure and software, consisting of what is known as a “solution stack”. The solution stack is what sets apart the companies that offer Platform as a Service. If you are a decision-maker for your company, this is something you will need to explore in greater depth before making a decision to jump on board the PaaS train. Let’s take a quick look at the top three contenders in the PaaS arena to give you an idea of the differences between them:

  • Force.com. Force.com pages use a standard MVC design, HTML, and other Web technologies such as CSS, Ajax, Adobe Flash® and Adobe Flex®.
  • Windows Azure™ platform. Microsoft’s cloud platform is built on the Microsoft .NET environment using the Windows Server® operating system and Microsoft SQL Server® as the database.
  • Google App Engine. Google’s platform uses the Java™ and Python languages and the Google App Engine datastore.

In conducting research for this article, I took the opportunity to spend some time with Google App Engine, Force.com, and Windows Azure. With App Engine, you have a Java run time environment in which you can build your application using standard Java technologies, including the Java Virtual Machine (JVM), Java servlets, and the Java programming language—or any other language using a JVM-based interpreter or compiler, such as JavaScript or Ruby. In addition, App Engine includes a dedicated Python run time environment, with a Python interpreter and the Python standard library. The Java and Python run time environments support multi-tenant computing so that applications run without interference from other applications on the system.

With Force.com, you can use standard client-side technologies in an MVC design pattern. For example, if you’re using HTML with Ajax, your JavaScript behaviors and Ajax calls will be separate from your styling, which is held in CSS, and the HTML will hold your page layout structure. With Windows Azure, you’re using the Microsoft .NET Framework with a SQL Server database.

Hopefully this was helpful in providing you with a means for comparing PaaS competitors and understanding what is available to you. If you did find it useful, you will definitely want to read the articles I mentioned earlier. Although I cannot disclose any further information about these articles right now, I promise to post links the moment each one is published.

Dan missed the Windows Azure Platform’s choice between Windows Azure NoSQL storage (tables, blobs and queues) and relational SQL Azure databases.


Christian Weyer posted very important information about Network pipe capabilities of your Windows Azure Compute roles on 1/27/2011:

As this question shows up again and again:
here is some good data from the PDC10 session Inside Windows Azure Virtual Machines about how much network bandwidth is available to each kind of VM:

VM Type       CPU   Memory    Peak Mbps
Extra Small   1     768MB     5
Small         1     1.75GB    100
Medium        2     3.50GB    200
Large         4     7.00GB    400
XL            8     14.0GB    800

Hope this helps.

The Extra Small instance would better be named “Nano” when it comes to bandwidth.


Buck Woody (@buckwoody) explained Cloud Computing Pricing - It's like a Hotel in a 1/27/2011 post to his Carpe Data blog:

image I normally don't go into the economics or pricing side of Distributed Computing, but I've had a few friends that have been surprised by a bill lately and I wanted to quickly address at least one aspect of it.

Most folks are used to buying software and owning it outright - like buying a car. We pay a lot for the car, and then we use it whenever we want. We think of the "cloud" services as a taxi - we'll just pay for the ride we take an[d] no more. But it's not quite like that. It's actually more like a hotel.

When you subscribe to Azure using a free offering like the MSDN subscription, you don't have to pay anything for [a one-role] service. But when you create an instance of a Web or Compute Role, Storage, that sort of thing, you can think of it as checking into a hotel room. You get the key, you pay for the room. For Azure, using bandwidth, CPU and so on is billed just as it states in the Azure Portal. So in effect there is a cost for the service and then a cost to use it, like water or power or any other utility.

Where this bit some folks is that they created an instance, played around with it, and then left it running. No one was using it, no one was on - so they thought they wouldn't be charged. But they were. It wasn't much, but it was a surprise. They had the hotel room key, but they weren't in the room, so to speak. To add to their frustration, they had to talk to someone on the phone to cancel the account.

I understand the frustration. Although we have all this spelled out in the sign up area, not everyone has the time to read through all that. I get that. So why not make this easier?

As an explanation, we bill for that time because the instance is still running, and we have to tie up resources to be available the second you want them, and that costs money. As far as being able to cancel from the portal, that's also something that needs to be clearer. You may not be aware that you can spin up instances using code - and so cancelling from the Portal would allow you to do the same thing. Since a mistake in code could erase all of your instances and the account, we make you call to make sure you're you and you really want to take it down.

Not a perfect system by any means, but we'll evolve this as time goes on. For now, I wanted to make sure you're aware of what you should do. By the way, you don't have to cancel your whole account not to be billed. Just delete the instance from the portal and you won't be charged. You don't have to call anyone for that.

And just FYI - you can download the SDK for Azure and never even hit the online version at all for learning and playing around. No sign-up, no credit card, PO, nothing like that. In fact, that's how I demo Azure all the time. Everything runs right on your laptop in an emulated environment.
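To put the hotel analogy in rough numbers: a Small compute instance left deployed for a 30-day month accrues about 720 hours, which at the then-published rate of roughly $0.12 per hour works out to about $86, billed whether or not anyone ever visits the site. And because, as Buck notes, deployments can be driven from code, the sketch below shows one way to delete a suspended deployment through the Service Management REST API instead of the portal. It is illustrative only; the subscription ID, hosted service name and certificate thumbprint are placeholders, and the deployment must already be stopped before it can be deleted.

    // Minimal sketch: delete the production-slot deployment of a hosted service
    // via the Windows Azure Service Management REST API. Placeholder values must
    // be replaced, and the deployment must already be in the Suspended state.
    using System;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    class DeleteDeployment
    {
        static void Main()
        {
            string subscriptionId = "<subscription-id>";         // placeholder
            string hostedService = "<hosted-service-name>";      // placeholder
            string thumbprint = "<management-cert-thumbprint>";  // placeholder

            // Load the management certificate registered with the subscription.
            var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
            store.Open(OpenFlags.ReadOnly);
            X509Certificate2 cert = store.Certificates
                .Find(X509FindType.FindByThumbprint, thumbprint, false)[0];
            store.Close();

            string uri = string.Format(
                "https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/production",
                subscriptionId, hostedService);

            var request = (HttpWebRequest)WebRequest.Create(uri);
            request.Method = "DELETE";
            request.Headers.Add("x-ms-version", "2009-10-01");
            request.ClientCertificates.Add(cert);

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Status: {0}", response.StatusCode); // 202 Accepted expected
            }
        }
    }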


Tony Bailey (a.k.a. tbtechnet) posted Common Sense and Windows Azure Scale Up on 1/27/2011:

image I enjoyed this [Common Sense] technical webinar given by Juan De Abreu [pictured at right]. It covers using the Windows Azure platform for scale up, burst demand scenarios.

I like how this topic is covered from a real-world “we have really done this for clients” angle, and the deck is clear and concise, with some good background information too.

http://blog.getcs.com/2011/01/scalability-azure-webinar-success/

After you’ve watched Juan’s presentation, get a free Windows Azure platform account for 30-days. No credit card required: http://www.windowsazurepass.com/?campid=A8A1D8FB-E224-E011-9DDE-001F29C8E9A8

Promo code is TBBLIF

I’ve also come across more content on the scale-up scenario in this how-to lab: http://code.msdn.microsoft.com/azurescale


Kenneth van Sarksum reported RightScale will support Azure VMs in a 1/27/2011 post to the CloudComputing.info blog:

RightScale, which offers a management solution for Infrastructure-as-a-Service (IaaS), announced that it will support Azure VMs in its Cloud Management Platform product, the Register reports in an interview with CTO and founder Thorsten van Eicken.

RightScale currently supports a number of public IaaS clouds: Amazon EC2, GoGrid and FlexiScale. The company has already announced upcoming support for The Rackspace Cloud. It also supports private IaaS clouds that are managed by Eucalyptus, allowing customers to build and manage hybrid cloud architectures.

An early preview of the Azure VM role was released in December last year, and the final release is still expected in the first half of this year.


Robert McNeill discussed Dealing with Cloud Computing Sprawl in a 1/27/2011 post to the Saugatuck Technology blog:

image Saugatuck research indicates that 65 percent or more of all NEW business application / solution deployments in the enterprise will be Cloud-based or Hybrid by 2015 (up from 15-20 percent in 2009) (SSR-834, Key SaaS, PaaS and IaaS Trends Through 2015 – Business Transformation via the Cloud, 13Jan2011). One implication is that by 2015, 25 percent or more of TOTAL enterprise IT workloads will be Cloud-based or Hybrid.

In order to more effectively manage Cloud procurement and reduce “Cloud and SaaS sprawl”, IT asset management practices must evolve to define, discover and manage cloud based assets. This becomes critical as Cloud Computing adoption moves from an opportunistic point solution strategy to one that is deeply embedded as part of an integrated hybrid infrastructure. Saugatuck hears of horror stories from unmanaged Cloud Computing procurement and poor operational disciplines. Consider the following:

  • SaaS projects are bypassing centralized procurement functions. Saugatuck believes that most organizations may be underestimating SaaS use by up to 25 percent (see Strategic Research Report, Enterprise Ready, or Not – SaaS Enters the Mainstream, SSR-460, 10July2008). We are aware of one client that has at least six salesforce.com instances, and our experience shows that this is not dissimilar to other large decentralized organizations. Do you have a process in place to standardize procurement in order to drive economies of procurement and support?

  • Small department-driven (silo’d) projects have grown into enterprise-wide deployments, enabling business processes and hosting corporate data in external data centers. Security, standards and management of cloud computing assets have not kept up with business adoption (see Strategic Perspective, Mitigating Risk in Cloud-Sourcing and SaaS: Certifications and Management Practices, MKT-660, 30Oct2009). Do you have visibility into the performance, reliability and security of your data and applications?
  • Client and mobile management groups require improved security practices to deal with new smart devices like iPhones that are accessing not just email but also corporate Cloud-based applications. When users leave the organization, do you immediately cut off access to critical information?

    Saugatuck’s advice is to identify and implement repeatable practices around management of the delivery of IT and business processes that are enabled by both on-premises and Cloud IT. The IT management team must evolve and expand their role and responsibilities in response to the Cloud, providing information about the state of on-premises and Cloud IT to functions such as risk, security, HR and vendor management.

    New technology vendors will enter the market to focus on the technical challenges associated with managing Cloud IT. Conformity and Okta are two such vendors we recently had briefings with, focused on addressing the challenges of identity and access to Cloud-based solutions. But while IT organizations may add new tools to assist in discovering, provisioning and managing Cloud IT, the strategy and process for managing IT and business processes must remain coordinated across traditional on-premises and Cloud IT.


    Darryl K. Taft reported “As part of its Technical Computing initiative, Microsoft opens a new Technical Computing Labs project under its MSDN DevLabs banner” as a deck for his Microsoft Opens New Technical Computing Labs Project post of 1/26/2011 to eWeek.com’s Windows & Interoperability News blog:

    image As part of its Technical Computing initiative, Microsoft has launched a new effort known as Technical Computing Labs (TC Labs) for developers on Microsoft Developer Network’s (MSDN) DevLabs.

    TC Labs provides developers with the opportunity to learn about Technical Computing technologies, get early versions of code and to provide feedback to Microsoft. TC Labs is a new resource for developers to access early releases of Microsoft Technical Computing software.

    According to the TC Labs page on DevLabs, Microsoft is bringing multiple technologies and services to bear with this initiative including parallel development tools in Visual Studio, distributed computing environments with Windows High Performance Computing (HPC) Server, cloud computing with Windows Azure, and a broad ecosystem of partner applications.

    “Microsoft Technical Computing is focused on empowering a broader group of people in business, academia, and government to solve some of the world’s biggest challenges,” Microsoft said on the TC Labs site. “It delivers the tools to harness computing capacity to make better decisions, fuel product innovation, speed research and development, and accelerate time to market – including decoding genomes, rendering movies, analyzing financial risks, streamlining crash test simulations, modeling global climate solutions and other highly complex problems. Doing this efficiently, at scale, necessitates a comprehensive platform that integrates well with your existing IT environment.”

Microsoft TC Labs projects include Sho, which provides those working on Technical Computing-style workloads with an interactive environment for data analysis and scientific computing that lets you seamlessly connect scripts (in IronPython) with compiled code (in .NET) to enable fast and flexible prototyping.

In a Jan. 26 blog post, S “Soma” Somasegar, senior vice president of Microsoft’s Developer Division, said of Sho, “The environment includes powerful and efficient libraries for linear algebra and data visualization, both of which can be used from any .NET language, as well as a feature-rich interactive shell for rapid development. Sho comes with packages for large-scale parallel computing (via Windows HPC Server and Windows Azure), statistics, and optimization, as well as an extensible package mechanism that makes it easy for you to create and share your own packages.” [Emphasis added.]

    Another TC Labs project is the Task Parallel Library (TPL), which was introduced in the .NET Framework 4, providing core building blocks and algorithms for parallel computation and asynchrony.

    Regarding TPL, Somasegar said, “.NET 4 saw the introduction of the Task Parallel Library (TPL), parallel loops, concurrent data structures, Parallel LINQ (PLINQ), and more, all of which were collectively referred to as Parallel Extensions to the .NET Framework. TPL Dataflow is a new member of that family, layering on top of tasks, concurrent collections, and more to enable the development of powerful and efficient .NET-based concurrent systems built using dataflow concepts. The technology relies on techniques based on in-process message passing and asynchronous pipelines and is heavily inspired by the Visual C++ 2010 Asynchronous Agents Library and DevLab's Axum language. TPL Dataflow provides solutions for buffering and processing data, building systems that need high-throughput and low-latency processing of data, and building agent/actor-based systems. TPL Dataflow was also designed to smoothly integrate with the new asynchronous language functionality in C# and Visual Basic I previously blogged about.”
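To make the dataflow style concrete, here is a minimal two-stage pipeline, assuming the TPL Dataflow CTP assembly (System.Threading.Tasks.Dataflow) from DevLabs is referenced; the blocks and values are purely illustrative:

    // Minimal sketch: a TransformBlock feeding an ActionBlock.
    using System;
    using System.Threading.Tasks.Dataflow;

    class DataflowSketch
    {
        static void Main()
        {
            // Stage 1: transform each posted number.
            var square = new TransformBlock<int, int>(n => n * n);

            // Stage 2: consume the transformed values.
            var print = new ActionBlock<int>(n => Console.WriteLine(n));

            // Wire the stages together; LinkTo alone does not propagate
            // completion, so forward it by hand when stage 1 finishes.
            square.LinkTo(print);
            square.Completion.ContinueWith(_ => print.Complete());

            for (int i = 1; i <= 5; i++)
            {
                square.Post(i);
            }

            square.Complete();
            print.Completion.Wait();
        }
    }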

A third set of TC Labs projects, Dryad, DSC, and DryadLINQ, comprises technologies that support data-intensive computing applications that run on a Windows HPC Server 2008 R2 Service Pack 1 cluster. Microsoft said these technologies enable efficient processing of large volumes of data in many types of applications, including data-mining applications, image and stream processing, and some scientific computations. Dryad and DSC run on the cluster to support data-intensive computing and manage data that is partitioned across the cluster. DryadLINQ allows developers to define data-intensive applications using the .NET LINQ model.

    Sho’ ‘nuff.


    The Windows Azure Documentation team updated the Windows Azure SDK Schema Reference for ServiceConfiguration.cscfg and ServiceDefinition.csdef XML content on 1/24/2011 (missed when published):

A service requires two configuration files, which are XML files:

    • The service definition file describes the service model. It defines the roles included with the service and their endpoints, and declares configuration settings for each role.
    • The service configuration file specifies the number of instances to deploy for each role and provides values for any configuration settings declared in the service definition file.

    This reference describes the schema for each file.
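As a rough illustration of how the two files relate (the role name, endpoint and setting values below are placeholders, not a complete schema), a single-web-role service might be described like this:

    <!-- ServiceDefinition.csdef: declares roles, endpoints and setting names -->
    <ServiceDefinition name="MyService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="WebRole1" vmsize="Small">
        <Endpoints>
          <InputEndpoint name="HttpIn" protocol="http" port="80" />
        </Endpoints>
        <ConfigurationSettings>
          <Setting name="StorageConnectionString" />
        </ConfigurationSettings>
      </WebRole>
    </ServiceDefinition>

    <!-- ServiceConfiguration.cscfg: sets instance counts and setting values -->
    <ServiceConfiguration serviceName="MyService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="WebRole1">
        <Instances count="2" />
        <ConfigurationSettings>
          <Setting name="StorageConnectionString" value="UseDevelopmentStorage=true" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>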


    <Return to section navigation list> 

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

    TechNet ON posted System Center in the Cloud to the TechNet Newsletter for 1/27/2010:

Bridging the Gaps (View All)
Manage the Private Cloud (View All)

    Mitch Irsfeld asserted The System Center product portfolio provides a unified management approach for applications and workloads across on-premises datacenters, private cloud and public cloud environments in a preface to his Editors Note: Managing for the Future with System Center’s Datacenter-to-Cloud Approach for TechNet Magazine and published it on 1/27/2011:

    image With any transition to the cloud--whether it’s a wholesale shift or moving small pieces of your IT infrastructure to start with; whether it’s delivered on-premise or through a provider—the ability to manage and control the environment is paramount. In fact, the trust factor is one of the perceived barriers to cloud computing. Trust grows out of visibility into infrastructure, applications and data protection policies.

Microsoft’s System Center suite can help bridge those visibility gaps between the traditional enterprise datacenter, the virtual datacenter, and the cloud by taking the processes and management skills in place for your current infrastructure and applying them to whatever you provision over time.

    That datacenter-to-cloud approach makes System Center and Windows Server 2008 R2 a powerful combined solution for implementing the IT-as-a-service model which promises cost efficiency and business agility gains.

    To learn about Microsoft’s vision for cloud computing from an IT perspective, and why it’s important to have a datacenter-to-cloud approach, download the whitepaper Microsoft’s Cloud Computing Infrastructure Vision & Approach. You’ll notice that a key component of that approach is unified management across premises and cloud environments. That’s where System Center comes in. This edition of TechNet ON looks at the System Center suite as the foundation for managing private cloud and public cloud infrastructures.

    For an overview of the System Center modules that can help accelerate your migration to the cloud, read Joshua Hoffman’s new TechNet Magazine article The Power of System Center in the Cloud. In it he addresses the primary tools for managing a private cloud infrastructure as well as providing operational insight into applications hosted on the Windows Azure platform in a public cloud or hybrid environment.

    Virtual Machine Manager and the Private Cloud

    System Center Virtual Machine Manager (VMM) 2008 R2 is the primary tool for managing a virtual private cloud infrastructure. It provides a unified view of an entire virtualized infrastructure across multiple host platforms and myriad guest operating systems, while delivering a powerful toolset to facilitate the onboarding of new workloads.

    To make it easier to configure and allocate datacenter resources, customize virtual machine actions and provision self-service management for business units, Microsoft offers the Virtual Machine Manager Self-Service Portal 2.0 (VMMSSP). VMMSSP is a free, fully supported, partner-extensible solution that can be used to pool, allocate, and manage your compute, network and storage resources to deliver the foundation for a private cloud platform in your datacenter. For an overview of the VMMSSP, the Solution Accelerators team penned System Center Virtual Machine Manager Self Service Portal 2.0 -- Building a foundation for a dynamic datacenter with Microsoft System Center.

    For more in-depth work with VMM, Microsoft Learning offers a free lesson, excerpted from Course 10215A: Implementing and Managing Microsoft Server Virtualization. The lesson, titled Configuring Performance and Resource Optimization, describes how to implement performance and resource optimization in System Center Virtual Machine Manager 2008 R2, an essential tool for managing your private cloud infrastructure.

    While VMM simplifies the task of converting physical computers to virtual machines—an essential step in building your private cloud infrastructure—it’s not the only System Center component to consider for delivering IT services.  As with a physical data infrastructure, you still need to monitor, manage and maintain the environment. You still need to ensure compliance with a good governance model. And you’ll want to streamline the delivery of services and gain efficiencies through process automation.

System Center Operations Manager can deliver operational insight across your entire infrastructure in a physical datacenter, in a private cloud, or deployed as public cloud services. If you are already assessing, deploying, updating and configuring resources using Operations Manager, you can provide the same degree of systems management and administration as workloads are migrated to a cloud environment. [I think we need to add/clarify that, even if you are using a hosted PaaS like Azure, you’ll want to be informed about versioning and configuration in a dashboard, to track how Microsoft is evolving the platform you use.]

    Beyond the Datacenter

As your operations are ready to take advantage of the expanded computing capacity and cost efficiencies of the public cloud, System Center migrates with you. In particular, System Center Service Manager 2010 and Opalis 6.3 extend process automation, compliance and SLA management to the cloud.

    Service Manager can help provision services across the enterprise and cloud. It automatically connects knowledge and information from System Center Operations Manager, System Center Configuration Manager, and Active Directory Domain Services to provide built-in processes based on industry best practices for incident and problem resolution, change control, and asset lifecycle management.

    By using Service Manager with Opalis, administrators can also automate IT processes for incident response, change and compliance, and service-lifecycle management. Opalis provides a wealth of interconnectivity and workflow capabilities, allowing administrators to standardize an automated process so that it can be delivered more quickly and more reliably because it is executing the same way each time. And it can do that across the System Center portfolio and across third party management tools and infrastructure.

Finally, if you are already running applications in the Windows Azure environment, the Windows Azure Application Monitoring Management Pack works with Operations Manager 2007 R2 to monitor the availability and performance of applications running on Windows Azure.

    The cloud—whether private or public—represents a transformation in the operations of datacenters, promising significant improvements in cost efficiency. You don’t, however, need to transform your management tools to take advantage of those efficiencies. System Center bridges the physical, virtual (private cloud), and public cloud environments with a unified management approach, allowing you to use familiar tools for greater reliability and trust.

    Mitch is the Editor of TechNet.


    David Linthicum asserted “Although some cloud providers look at the hybrid model as blasphemy, there are strong reasons for them to adopt it” as a deck for his Why the hybrid cloud model is the best approach post to InfoWorld’s Cloud Computing blog of 1/27/2011:

    image When the industry first began discussing the hybrid cloud computing model back in 2008, cloud computing purists pushed back hard. After all, they already thought private clouds were silly and a new, wannabe-hip name for the data center. To them, the idea of hybrid clouds that used private clouds or traditional computing platforms was just as ridiculous.

    image Over time, it became clear that hybrid cloud computing approaches have valid roles within enterprises as IT tries to mix and match public clouds and local IT assets to get the best bang for the buck. Now it's the cloud computing providers who are pushing back on hybrid cloud computing, as they instead try to promote a pure public cloud computing model.

    However, these providers are hurting the adoption of cloud computing. Although public cloud computing has valid applications, the path to public cloud computing is not all that clear to rank-and-file enterprises. For many, it's downright scary.

    Leveraging a hybrid model accomplishes several goals:

    1. It provides a clear use case for public cloud computing. Specific aspects of existing IT infrastructure (say, storage and compute) occur in public cloud environments, and the remainder of the IT infrastructure stays on premise. Take the case of business intelligence in the cloud -- although some people promote the migration of gigabytes of operational data to the cloud, many others find the hybrid approach of keeping the data local and the analytical processing in the cloud to be much more practical.
    2. Using a hybrid model is a valuable approach to architecture, considering you can mix and match the resources between local infrastructure, which is typically a sunk cost but difficult to scale, with infrastructure that's scalable and provisioned on demand. You place the applications and data on the best platforms, then span the processing between them.
    3. The use of hybrid computing acknowledges and validates the fact that not all IT resources should exist in public clouds today -- and some may never exist in public clouds. Considering compliance issues, performance requirements, and security restrictions, the need for local resources is a fact of life. This experience with the hybrid model helps us all get better at understanding what compute cycles and data have to be kept local and what can be processed remotely.

    Of course there are cloud providers that already have their eye on leveraging a hybrid model. These new kids on the block even provide management and operating systems layers specifically built for hybrid clouds. However, the majority of public cloud providers are religious about pushing everything outside of the firewall (after all, that's where they are). They need to be careful that their zealotry doesn't turn off potential cloud converts.


    Microsoft reported on 1/26/2011 a job opening for a Software Development Engineer in Test II Job on the Windows Azure Appliance team:

    Software Development Engineer in Test II Job
    • Date: Jan 26, 2011
    • Location: Redmond, WA, US
    • Job Category: Software Engineering: Test
    • Job ID: 740107-34394
    • Division: Server & Tools Business

    Do you want to make an impact on the future of cloud computing? Here is your chance.

    We are the Windows Azure Platform Appliance team. Our mission is to deliver a robust cloud platform that customers can deploy on their own datacenter. We are looking for a strong SDET to help us deliver on this mission.

    What this position offers:

    • Opportunity to develop a deep understanding of the Windows Azure Platform
    • Participate in all phases of shipping a v1 product; we are just about to start our very first milestone [emphasis added]
    • Tough technical challenges with opportunity to make a significant impact
    • A team that strongly believes in quality upfront and key role for test in product definition
    • A strong and senior test team with significant focus on coaching and skills development
    • Opportunities for significant cross-team collaboration
    • Be part of What's next at Microsoft

    What we are looking for in the ideal candidate:

    • B.S./M.S. in Computer Science or equivalent field
    • Passionate about technology and software quality
    • Excellent performance record
    • Strong background in object oriented design, data structures and algorithms
    • Strong coding, problem solving and debugging skills
    • Experience with web technologies - XML, REST, IIS, WCF
    • Strong programming skills in C#, C++, Powershell
    • Experience testing distributed applications/online services is a plus
    • Experience in networking or shipping online services a plus …


Kamesh Pemmaraju posted Microsoft 2011: Cloud Strategy Revisited, which includes a transcript of an interview with Doug Hauger, Azure General Manager, to the SandHill Opinion blog on 1/26/2011:

2010 was a tumultuous year for Microsoft's cloud leadership. First, there was the departure of Ray Ozzie, who was the brains behind Azure - the most comprehensive cloud platform around. Then came the somewhat unexpected announcement that Bob Muglia would be leaving the company in 2011. A 23-year Microsoft veteran, Muglia was in charge of the $15 billion Windows and SQL Server division. The group included Azure and posted a profit of $5.5 billion for the year ending June 2010.

Experts debated: What do the back-to-back exits of two of the most respected executives portend for Microsoft's cloud future? How will giant Microsoft compete and keep giving the cloud upstarts a run for their money? I spoke with Microsoft's GM of Azure to get the straight story (see below).

    Microsoft's dilemma—one shared by other incumbents including Oracle, SAP and others—is a notorious problem with a history of well-known failures: how does a big company balance disruptive innovation with legacy technologies, business models and cultural norms?

The megavendors find it hard to let go of legacy models for two main reasons. Firstly, legacy lines of business still generate billions of dollars in revenue and high margins. Secondly, change - especially one as pervasive as the cloud - is extremely hard to manage. The cloud is not just a minor innovation; it is a disruptive force in the industry, and Microsoft is faced with the dilemma of how to embrace the cloud without cannibalizing its existing businesses (which, of course, are still all too PC- and Windows OS-centric, while the traditional server OS will increasingly be a non-factor in the cloud)...

Despite the critics, I think Microsoft retains a competitive edge with its relatively mature and full-featured (at least compared to the competition) Azure cloud platform, massive-scale datacenters, Hyper-V virtualization technology, and increasingly powerful lineup of cloud-based applications including email, collaboration, and CRM. With a large established base of enterprise customers, Microsoft is a formidable competitor that the upstarts will find extremely hard to displace, notwithstanding new competitive offerings from AWS (Beanstalk) and Salesforce (Force.com and database.com).

    All in all, Microsoft can be counted as amongst the best and most comprehensive cloud vendors in the market today. With the exit of Ozzie and Muglia, however, Steve Ballmer will have to work hard to find competent replacements while continuing to execute on Ozzie's services vision, and helping the market and customers understand what it is and how to take advantage of it.

We saw some very interesting Azure enhancements from Microsoft at its recent Partner Development Conference, with new cloud offerings across the stack such as Office in the cloud, private clouds, appliances and so forth.

    I spoke with Doug Hauger, General Manager for Microsoft's Azure business to get the company's perspective on the build out of the cloud stack, and details about their cloud strategy for 2011 and beyond.

    Does Microsoft offer an Infrastructure-as-a-Service (IaaS)? How does it differ from the competition?

Doug Hauger: To be clear, we don't offer Infrastructure-as-a-Service (IaaS) similar to that of Amazon Web Services (AWS) from our data centers. We offer it through third-party cloud vendors (AWS is included in the sense that you can run Windows images on AWS). Many of our partners use the Hyper-V cloud to build their IaaS service (AWS is an exception because they use their own technology). Hyper-V cloud is a combination of the latest Windows Server 2008 R2 Hyper-V, System Center, and Virtual Machine Manager Self Service Portal 2.0. These components allow end-customers and Microsoft partners alike to build private or public clouds and gain the benefits of self-service, scalability, and elasticity.
    These IaaS offerings (including Hyper-V) are built on single-server building blocks (predominantly virtual, but these can also be physical servers) which you can allocate, configure, provision and build on top of. You choose your guest OS (Windows in our case) and use these features to build your infrastructure environment. All this requires significant administration and management effort and time.

    Where does IaaS end and PaaS begin?

DH: There is a grey area between the top end of the Infrastructure-as-a-Service (IaaS) layer and the bottom end of the Platform-as-a-Service (PaaS) layer. In Azure, we offer service functionality called a VMRole. Using VMRole you can move Windows Server along with your application as a VHD image to Azure and run that instance in that role. Note that we still automatically manage the underlying host OS, virtual IP space, and networking infrastructure. The crucial difference is this: we won't update or patch your server image in the guest OS. The responsibility is on you to take care of all the upkeep of that instance. We think of VMRole as a bridge between IaaS and PaaS.

    If you don't want to run your instance on Azure because it is hosted in a MSFT datacenter, you may choose to run it on the Hyper-V cloud behind your own firewall. Of course, in that case you have to manage the entire infrastructure yourself but you get flexibility, control, and the ability to create your own IaaS service for your internal customers. What is unique about this is that once you have this running in your data center, you can then easily move these instances to a hosted environment with one of our partners (offering Hyper-V cloud). Additionally, you can use the VMRole to extend it to the Windows Azure platform which can interoperate with other virtual machines inside your data center creating a hybrid operating environment.

    How exactly does Microsoft address Hybrid cloud environments?

DH: A hybrid cloud is a very popular use case, a lot more than I had anticipated. We are seeing a lot of demand for that (See "Five Cloud Trends for 2011 and Beyond"). Customers want to move or extend their traditional applications running in their current physical data center environments to take advantage of the scale and elasticity of public clouds. The popular use cases tend to be front-end web serving workloads, scale-out compute analytics, or high performance computing.
    To enable this we provide the service bus capability, which is part of Windows Azure AppFabric functionality. Among other things, AppFabric enables bridging your existing applications to the cloud through secure connectivity across network and geographic boundaries, and by providing a consistent development model for both Windows Azure and Windows Server. If you are running Windows Server running Hyper-V in your data center, you already have the AppFabric functionality—no need to install anything new.

    What is Microsoft's view of Platform-as-a-Service inside the firewall?

DH: This is the Azure Platform Appliance we announced back in June 2010. We are in limited production with one customer, eBay (see the recent post on eBay: Private and Hybrid architectural in the cloud, eBay's experience), and three partners: Dell, Fujitsu, and HP. The Windows Azure Platform Appliance is an integrated stack of hardware and software that is exactly the same as the public Azure platform but running in a customer's data center. It's not ready for enterprise use yet. Today, you can't go to a Dell or HP website and order it online. But Microsoft is incredibly serious about it. There's no question about it. We want to make sure this works and works well for serious enterprise use before going to the market with it.

If you want to use your existing Windows-based infrastructure, you can certainly use Hyper-V cloud to turn it into an IaaS. But it's not going to be PaaS. We are not moving all the Azure APIs to the Hyper-V cloud, so you won't get many features like the Content Distribution Network (CDN) or automatic replication and distribution of storage blobs, and so on.

However, you can take advantage of certain functionality in Hyper-V and Azure in a "hybrid" sort of way. You will have to re-architect your applications for this to work, but it is easier because the Azure APIs are essentially extensions of the .NET APIs. You can easily move an ASP.NET application to Azure, often in less than a day. If you have a very complex line-of-business app that's running in a non-virtualized environment, then it's going to take more time to move and re-architect.

We encourage developers to learn about the new paradigm of writing applications for PaaS and to design them keeping in mind things like multi-tenancy and scale across compute and storage.

    Kamesh Pemmaraju heads cloud research at the Sand Hill Group.
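To make Hauger's Service Bus point concrete, the sketch below shows one way to expose an existing on-premises WCF service through the AppFabric Service Bus relay. It assumes the AppFabric SDK's Microsoft.ServiceBus assembly is referenced; the service namespace, issuer name and issuer secret are placeholders, and the contract is illustrative only.

    // Minimal sketch: publish an on-premises WCF service through the
    // AppFabric Service Bus relay so cloud or remote clients can reach it.
    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IEchoContract
    {
        [OperationContract]
        string Echo(string text);
    }

    public class EchoService : IEchoContract
    {
        public string Echo(string text) { return text; }
    }

    class Program
    {
        static void Main()
        {
            // Relay address inside the AppFabric service namespace (placeholder).
            Uri address = ServiceBusEnvironment.CreateServiceUri(
                "sb", "<your-service-namespace>", "echo");

            // Shared-secret credentials issued for the namespace (placeholders).
            var credentials = new TransportClientEndpointBehavior
            {
                CredentialType = TransportClientCredentialType.SharedSecret
            };
            credentials.Credentials.SharedSecret.IssuerName = "owner";
            credentials.Credentials.SharedSecret.IssuerSecret = "<issuer-secret>";

            var host = new ServiceHost(typeof(EchoService));
            var endpoint = host.AddServiceEndpoint(
                typeof(IEchoContract), new NetTcpRelayBinding(), address);
            endpoint.Behaviors.Add(credentials);

            host.Open();
            Console.WriteLine("Listening on {0} - press Enter to exit.", address);
            Console.ReadLine();
            host.Close();
        }
    }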


    Onuora Amobi published FedEx CIO Explains The Power Of Cloud to the Cloud Computing Zone blog on 1/24/2011:

    Leave it to Rob Carter, the CIO of FedEx, to clarify what’s really powerful about cloud computing. Carter, the company’s CIO since 2000 and an InformationWeek advisory board member for almost as long, has a knack for discussing technology in a way that cuts to the business payoff, but without leaning on buzzwords that whitewash the complexity involved.

    Carter boils down cloud computing, when applied to IT infrastructure, to “general purpose computing.” It’s the ability to connect servers, networking, and storage that are “workload agnostic,” meaning the jobs they handle can be shuffled around among a company’s computers, so those machines are used as efficiently as possible. …

    Just last fall, FedEx opened a new data center in Colorado Springs based on this idea of general purpose computing. It uses commodity x86 servers, each with just a single 10-gig Ethernet cord into the back for networking, replacing the bevy of wires of the past for host-bus adapters, NIC cards, etc. Before applications move into the new data center, they’re “commonized”–revised to use the same database and messaging technology, for example, so they can move easily among servers.

    FedEx is using this cloud infrastructure inside its own data center—a private cloud–but Carter says workloads could easily shift to public clouds run by vendors such as Amazon and others, if that made strategic sense down the road.


    <Return to section navigation list> 

    Cloud Security and Governance

    Frank Huerta explained “Hedging against unexpected outages” in his Improved Business Resilience with Cloud Computing post of 1/25/2011:

    image North American businesses are collectively losing $26.5 billion in revenue each year as a result of slow recovery from IT system downtime according to a recent study. The study also indicates that the average respondent suffers 10 hours of IT downtime a year and an additional 7.5 hours of compromised operation because of the time it takes to recover lost data. Other studies estimate that the average cost of a single hour of downtime is approximately $100,000.

A disk failure at Virgin Blue Airlines recently caused a 21-hour system outage that led to the cancellation of more than 100 flights, affected over 100,000 passengers, and cost the company $15-20M. This past summer, software-monitoring tools detected "instability" within DBS Bank's storage system even though there was not an actual problem, and the administrator-initiated recovery process took the company's systems offline for seven hours, affecting all commercial and consumer banking systems.

    To hedge against unexpected outages, IT organizations attempt to prepare by creating redundant backup systems, duplicating every layer in their existing infrastructure and preparing elaborate disaster recovery processes. This approach is expensive and only partly effective, as demonstrated by the string of notable outages, and can be seen, at best, as a way to minimize downtime.

    Major social networking companies, such as Google and Facebook, have figured out how to scale out application stacks rather than scale up vertically. This results in operational advantages including improved response time and built-in redundancy. Unfortunately, it comes at the cost of a significantly more complicated development model and increased development cost structure. Ideally, enterprise software could achieve similar advantages without those operational costs.

    Balancing Cloud Economics with Control of Data
    As organizations look at ways to leverage the economics and efficiencies of virtualization and cloud computing, it is becoming painfully clear that the traditional approaches to infrastructure that underlie most of today's cloud offerings do not effectively enable the potential agility of these new models.

    Today, organizations are wrestling with ways to take advantage of cloud economics while maintaining control of their data and providing improved support for remote users. Now is the time for technology that enables options for deploying on-premise, in the cloud or a combination of both.

    This is the next phase in truly enabling IT organizations to deliver applications with consistently high availability and performance to global and mobile workers, while maintaining an elastic and robust infrastructure within the constraints of tight budgets.

    Decentralization and Business Resilience
    Emerging technologies that fundamentally decentralize applications and data greatly improve business resilience and simplify disaster and network recovery. They are designed to handle less-than-perfect performance from all components of the infrastructure.

    New emerging approaches to scalable application computing simplify IT infrastructure by combining the various required elements - including storage, load balancing, database and caching - into easily managed appliances or cloud instances. Unlike conventional infrastructures where scale, redundancy, and performance are increased by "scaling up" and adding additional tiers of components, this provides an architecture where additional capabilities are added by "scaling out" and adding additional, identical nodes.

    These systems automatically store data across the nodes based on policy, usage and geography, and intelligently deliver information when and where it is needed. All information is replicated across multiple nodes to ensure availability. If a node fails, users are re-routed to other nodes with access to their data so that productivity does not suffer. When the original node recovers, it resumes participating in the flow of data and applications and local users are reconnected to it. The system automatically synchronizes data in the background so no data is lost and performance is not compromised.

    Availability and Performance in Remote Locations
    Despite business globalization, with customers, partners and employees more likely than ever to be located around the world, in recent years there's been a drive to consolidate data centers. The underlying assumption is that consolidated data centers will allow information technology organizations to better control resource costs for space, energy, IT assets and manpower. With the stampede to consolidation, valid concerns about availability and performance for users in remote locations are sometimes overlooked. Unfortunately, the consolidation cost savings aren't always as dramatic as anticipated and new problems are often introduced as a result.

    Substantial problems remain with maintaining availability and performance for remote workers. In addition, high-speed WAN links used in attempts to address these problems can be prohibitively expensive, particularly outside North America.

    If all the required application infrastructure components resided on comprehensive nodes, the nodes could be placed in small and remote locations. Since virtually all of the supporting infrastructure for an application would be included in a node, performance and responsiveness would improve at each site.

    Ongoing support costs would also be reduced because scaling an application in this way is much easier than with traditional deployments. If a site is growing and needs greater scale, a node can be easily added at that site. This approach only makes sense if additional IT staff is not required at the remote sites. For instance, the addition of a node should be easy enough that non-IT staff can do it.

    Application Response Times and Performance
    Organizations today are more geographically dispersed than ever and many IT organizations have dedicated significant resources to ensure adequate response time performance for their remote offices around the globe. These organizations have usually invested heavily in infrastructure, such as WAN optimization, federated applications and high speed network connections. Today's typical application infrastructure requires a variety of components - a pair of hardware load balancers, application servers, database servers as well as storage for their data. Moreover, to attain redundancy, much of this infrastructure needs to be duplicated off-site.

    The complexity of this type of infrastructure requires continual investment simply to maintain the systems and components. Yet poor performance and spotty availability are often a reality for those working in remote offices.

    Taking a new approach to application deployment can result in significantly lower costs. Using inexpensive, identical nodes at each site, and eliminating the need for a separate failover site could dramatically reduce initial capital expense. Another factor contributing to lower costs is the simpler, fully integrated stack, which makes applications much easier to deploy, manage and troubleshoot.

    Inherent Redundancy and Availability
    Organizations make significant investments in order to achieve high availability and business continuity, and every time a new application is deployed, these expenses increase as the redundant infrastructure is scaled up. Because of the intrinsic complexity in current application deployments, attempts at redundancy are often ineffective and application availability suffers.

    What's now required is an application infrastructure that inherently provides high availability without the additional dedicated infrastructure needed with 2n or 3n redundancy. If a site became unreachable due to an outage, geographic redundancy would preserve the availability of applications and data.

    Conclusion
    The future of enterprise computing requires truly distributed computing that enables remote workers to be highly productive. Simplified, smarter application platforms that integrate disparate technologies such as data storage, database, application servers and load balancing will surpass existing solutions in cost, manageability and reliability.

    Fundamental architecture changes and technologies are emerging that are resilient, and are enabling IT professionals to provide solid infrastructures, eliminate downtime and deliver applications with consistently high availability for global and mobile workers.

    Frank Huerta is CEO and Co-founder of Translattice.


    Phil Worms asserted “It is of little surprise that IT is viewed as both a key enabler of risk management and a key threat” as he explained Why Cloud Computing Must Be Included in Disaster Recovery Planning in a 1/25/2011 post to iomart hosting’s RackPack blog:

    image It is now official. 2010, according to the United Nations, was one of the deadliest years for natural disasters experienced over the past two decades.

Statistics released yesterday are shocking and heartbreaking in equal measure. Some 373 natural disasters claimed the lives of more than 296,800 people last year, affecting nearly 208 million people at an estimated cost of nearly $110 billion. To put this in perspective, the loss of life equates to losing the entire population of a UK city the size of Nottingham or Leicester.

The research was compiled by the Centre for Research on the Epidemiology of Disasters (CRED) of the Université catholique de Louvain in Belgium, and supported by the UN International Strategy for Disaster Reduction (UNISDR). Unfortunately, according to the same report, the disasters that befell 2010 may be just the start of an unwelcome trend. Indeed, the U.N. assistant secretary-general for disaster risk reduction, Margareta Wahlström, stated that last year’s figures may simply be viewed as benign in years to come.

    “Unless we act now, we will see more and more disasters due to unplanned urbanization and environmental degradation. And weather-related disasters are sure to rise in the future, due to factors that include climate change.”

    Ms. Wahlström then moves on to state: “that disaster risk reduction was no longer optional.”  “What we call ‘disaster risk reduction’ - and what some are calling ‘risk mitigation’ or ‘risk management’ - is a strategic and technical tool for helping national and local governments to fulfil their responsibilities to citizens.”

As if we needed further reminding of the impact that natural disasters can have on communities, at the same time as Ms. Wahlström was spearheading the UN’s media activities, the Royal Australian College of General Practitioners (RACGP) was issuing guidance to help GPs overcome the challenge of restoring Information Technology functionality after the recent Queensland floods.

    Professor Claire Jackson, RACGP President and GP in Brisbane, stated: “This will be a time that will test the disaster recovery and business continuity planning of many general practices. The RACGP’s IT disaster recovery flowchart and fact sheet will provide guidance with the often technically difficult challenges of restoring IT systems and procedures to their full functionality.”

It is a sad fact of life that once the world’s media turns its cameras towards the next news story, we soon forget about the impact that a disaster has had on a particular community; for some perverse reason, no matter how graphic the images or how sad the stories, we move on, safe in the knowledge that ‘it can’t happen to us.’ And whilst, for a whole host of geographic or socio-economic reasons, the majority of us will thankfully never experience the pain and devastation that a large-scale natural disaster can bring, every one of us can, and will, suffer some type of ‘personal disaster’ for which we could have taken some basic precautionary measures, thus negating its effects.

As the world and its citizens become ever more reliant on technology to function on a day-to-day basis, it is of little surprise that IT is viewed as both a key enabler of risk management and a key threat. We can assume that all of the Queensland GPs’ patients’ medical records were stored electronically to improve both service and process delivery, and can only hope that they were not stored in a single data centre location that has now been washed away.

How many times have you heard people complain that they are ‘suffering the day from hell’ because their laptop has been stolen/lost/damaged and they’ve lost all their work/photos/contacts (delete as appropriate)? And for some bizarre reason, we are supposed to empathise with them. Much in the same way that folks without basic desktop protection are filled with indignation that someone with a dubious moral compass has dared plant a Trojan on their PC or attempted to steal their bank account details.

In the recent past, two of the reasons oft quoted by businesses and consumers for not taking disaster recovery too seriously have been cost and complexity. And who could argue with the business logic? Purchasing and maintaining a full set of servers to mirror an existing infrastructure could be viewed as an expensive overhead, particularly if considered simply an ‘insurance policy’ that will in all probability never be ‘claimed.’ Many an IT department’s justifiable claims for additional DR Capex have fallen on ‘deaf ears’ over the years - usually dismissed out of hand as ‘expense for expense’s sake.’

Likewise consumers have probably been baffled by the thought of regularly archiving and backing up their treasured holiday snaps. Where to and how?

    As stated earlier, technology does have a major role to play in disaster recovery.

    Cloud computing is now being considered as a genuine viable disaster recovery/business continuity solution at all levels and for all markets. According to the Cloud Convergence Council, cloud service providers are currently reporting that one in four end-users are asking for cloud-based disaster recovery and backup services.

The reason is simple. As mentioned previously, you don’t need DR services until something goes wrong - making them a cost centre, not a profit centre. Moving disaster recovery into the cloud can deliver greater efficiency, security, scalability, and much desired cost savings.

    There can be no doubt that the ability to store, back up and archive data to an offsite third party, via the cloud, is compelling but as with anything ‘risk’ related there are several considerations to make before deciding upon a solution.

You will need to consider whether you want your data to be held within a physical geographical boundary. If you do, you will need to ensure that you contract with a cloud provider who can guarantee that their data centre is within your desired territory. You will also need to know how and where your data is stored, once it is in your provider’s cloud. Will it be stored within one physical location, thereby increasing the risk of a single point of failure, or will it be distributed across nodes in more than one data centre? You may be happy that the provider’s ‘resilience’ simply involves separate racks in separate data halls, in which case you will need to be convinced that their site is served with diverse power supplies, and is free from potential natural hazards, etc.

On the other hand you may feel that this level of cloud service equates to an ‘all eggs in one basket’ approach, in which case you might opt for a cloud supplier with multiple data centres offering multiple resilience options across the DC estate. This will in all likelihood be a more expensive option, but one which is less ‘risky.’

This leads us to the question of cost. Many cloud services are charged on a per MB, GB or TB usage basis, which can make predictable budgeting a challenge. One blue chip company that recently considered moving to the cloud for data replication estimated that, due to the variable nature of the cloud provider’s billing, it would cost them $55,000 more over a period of three years when compared with running a comparable in-house system that would regularly and automatically fail over as required.

    Once again, you should seek a cloud provider that offers some element of inclusive fixed pricing/packaging, or that will agree to a fixed, tiered pricing model.

    Finally, and most importantly, seek a provider that offers cast-iron service level agreements for the cloud. If their marketing blurb states that they can have you ‘fully restored and running’ within an hour, determine exactly what, contractually, ‘fully restored and running’ means - and a definition of an ‘hour’ might also be useful.

    As with any form of insurance policy, the more ticks you place in the boxes - the greater the degree of protection - the higher the premium. You should not expect a full cloud ‘belt and braces’ DR solution to be cheap. Total peace of mind does cost.

    No one has yet stated that the cloud will solve every disaster recovery/business continuity issue, but it should certainly be considered as a workable solution to an age-old problem.

    It would however appear to be somewhat ironic that we are turning to a meteorologically named technological DR solution in a week when we are being advised to plan better as “weather-related disasters are sure to rise in the future.”

    Phil is the Marketing Director of one of the UK's largest managed hosting and cloud computing services companies - iomart Group plc.


    <Return to section navigation list> 

    Cloud Computing Events

    Wely Lau (pictured at right below) reported on 1/27/2011 about Day 1: Windows Azure Introduction to Imagine Cup Students at Kuala Lumpur, Malaysia, which occurred on 1/24/2011, and provided links to his slide deck and demos:

    image Thanks to my buddy Hoong Fai (Web Strategy Lead, Microsoft Malaysia) for flying me to Kuala Lumpur for two events. Although it was very tiring, it was a great experience.

    The first event (Monday, 24 Jan 2011) was an introductory session on Cloud Computing and Windows Azure for Malaysian students who are participating in Imagine Cup 2011. It was attended by more than 70 people, including students and lecturers.

    Introduction to Cloud Computing by Lai Hoong Fai

    Fai started the session by explaining some basic concepts of cloud computing, including:

    • the differences between on-premises, hosting, and cloud computing,
    • the scenarios that are suitable for the cloud,
    • the benefits of moving to the cloud,
    • the categories of cloud computing (IaaS, PaaS, and SaaS), and
    • some of the cloud players in the market
    Windows Azure Platform Overview by me

    After Fai completed the cloud computing portion, which took about 45 minutes, it was time for my session on the Windows Azure Platform.

    I started my session by briefly introducing three main components in Windows Azure Platform (Windows Azure, SQL Azure, and AppFabric), and subsequently explaining more details on each component:

    1. Windows Azure: Operating System on the cloud, what the Fabric Controller is, how it works, compute roles (web and worker), and Windows Azure Storage

    2. SQL Azure: Relational database service on the cloud, what the differences are, and how to migrate from SQL Server to SQL Azure

    3. AppFabric: Service Bus and Access Control Service.

    Since I believe that the audience gains a better understanding and real experience from seeing demos, I delivered more demos than slides.

    Some photos of the event

    [Photos: DSCN1265, DSCN1266, DSCN1268, DSCN1271, DSCN1272, DSCN1274]

    My Slides and Demos: For the audience, feel free to download the slides and demos.

    Wely is a developer advisor in Developer and Platform Evangelism (DPE), Microsoft Indonesia.


    The Federal Cloud Blog reported on 1/23/2011 a What is the government’s role in cloud computing? event to be held on 1/25/2011:

    The Commerce Department and the National Institute of Standards and Technology are trying to figure out what the federal government’s role in cloud computing should be. The agencies are hosting a panel discussion Tuesday with industry leaders and experts from academia to discuss this as well as other national needs.

    A media advisory from the Commerce Department says, “Achieving national priorities – which include a Smart Grid for electricity distribution, electronic health records, cybersecurity, cloud computing and interoperable emergency communications – depends upon the existence of sound technical standards. The standards being developed through public-private partnerships for these new technology sectors are helping to drive innovation, economic growth and job creation.”

    Some of the questions the agencies hope to answer at the event:

    • What is the appropriate role for the federal government in convening industry stakeholders and catalyzing standards development and use?
    • How should the federal government engage in sectors where there is a compelling national interest?
    • How are existing public-private initiatives in standardization working?

    U.S. Commerce Secretary Gary Locke, Federal Chief Technology Officer Aneesh Chopra, and NIST Director Patrick Gallagher will all speak at the event.

    Confirmed panelists include:

    • Mark Chandler, General Counsel, Cisco
    • Arti Rai, Professor of Law, Duke Law School
    • Geoff Roman, Chief Technology Officer, Motorola Mobility
    • Raj Vaswani, Chief Technology Officer, Silver Spring Network
    • Stephen Pawlowski, Senior Fellow and General Manager, Central Architecture and Planning, Intel Corp.
    • Ralph Brown, Chief Technology Officer, CableLabs

    The event will be held from 9:30 a.m. – 12 p.m., Tuesday, January 25, 2011 at the Department of Commerce.

    Fortunately, there’s a Web cast of the event here. Unfortunately, it didn’t work for me and might not work for you.


    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    • Verizon Communications, Inc. prefaced its Verizon to Acquire Terremark, Boosting Cloud Strategy through Powerful Unified Enterprise IT Delivery Platform press release of 1/27/2011 with “Acquisition Will Accelerate 'Everything as-a-Service' Strategy by Leveraging the Companies' Collective Strengths” and “Wholly Owned Subsidiary to Retain Terremark Name, Extend Leadership in Rapidly Evolving Global Cloud Services Market”:

    image NEW YORK and MIAMI, Jan. 27, 2011 /PRNewswire/ -- In a move that will decisively reshape the rapidly evolving global business technology solutions market, Verizon Communications Inc. (NYSE, Nasdaq: VZ) and Terremark Worldwide Inc. (Nasdaq: TMRK) today announced a definitive agreement under which Verizon will acquire Terremark, a global provider of managed IT infrastructure and cloud services, for $19.00 per share in cash, or a total equity value of $1.4 billion. 

    image Pursuant to the agreement, Verizon anticipates that it will commence a tender offer between Feb. 10, 2011, and Feb. 17, 2011, for all shares of common stock of Terremark. The tender offer price constitutes a premium of 35 percent per share over today's closing price. The transaction is subject to the valid tender of a majority of the shares of Terremark; the expiration or early termination of the Hart-Scott-Rodino antitrust waiting period; and other customary closing conditions. The Board of Directors of Terremark has unanimously approved the transaction, and the transaction was unanimously approved by the directors of Verizon present and voting. Verizon has also entered into agreements with three stockholders of Terremark to tender their shares into the offer, representing approximately 27.6 percent of the outstanding voting shares of Terremark. Verizon expects to close the tender offer late in the first quarter of 2011.

    This transaction will accelerate Verizon's "everything-as-a-service" cloud strategy by delivering a powerful portfolio of highly secure, scalable on-demand solutions to business and government customers globally through a unified enterprise IT platform and unique business cloud offerings that leverage the companies' collective strengths.

    Verizon plans to operate the new unit as a wholly owned subsidiary retaining the Terremark name and with Terremark's current management team continuing to manage the company.

    "Cloud computing continues to fundamentally alter the way enterprises procure, deploy and manage IT resources, and this combination helps create a tipping point for 'everything-as-a-service,'" said Lowell McAdam, president and chief operating officer of Verizon.  "Our collective vision will foster innovation, enhance business processes and dynamically deliver business intelligence and collaboration services to anyone, anywhere and on any device."

    Manuel D. Medina, chairman and CEO of Terremark, said: "This transaction, first and foremost, provides Terremark's stockholders with the opportunity for immediate, maximum value and liquidity for their investment in our common stock.  We are very proud of all we've accomplished in building and developing a world-class business that delivers industry-leading services. This agreement represents an exciting opportunity to accelerate our strategy and serve our enterprise and government customers with even greater innovation on a global scale with Verizon's resources and extensive reach. We will continue to work with leading hardware, software, systems integrator and carrier partners to build on our unique business model."

    Headquartered in Miami, Terremark is a widely recognized Infrastructure-as-a-Service leader with a proven track record of delivering cloud-based resources with the highest levels of security and availability in the industry. Operating 13 data centers in the U.S., Europe and Latin America, Terremark combines secure cloud computing, colocation and managed hosting services into a seamless hybrid environment. Its Enterprise Cloud platform provides some of the world's largest companies and U.S. government agencies with on-demand access to secure and reliable computing resources.

    Verizon is a global leader in driving better business outcomes for mid-sized and large enterprises and government agencies.  The company operates more than 220 data centers across 23 countries, including 19 premium centers and five smart centers.  Verizon combines integrated communications and IT solutions, professional services expertise with high-IQ global IP and mobility networks to enable businesses to securely access information, share content and communicate.  Verizon is rapidly transforming to a cloud-based "everything-as-a-service" delivery model that will put the power of enterprise-grade solutions within the reach of every business, wherever and whenever needed.  Find out more at www.verizonbusiness.com.

    Verizon was represented by Goldman Sachs and Weil, Gotshal & Manges, and Terremark was represented by Credit Suisse Securities (USA) LLC and Greenberg Traurig.

    About Verizon

    Verizon Communications Inc. (NYSE, Nasdaq: VZ), headquartered in New York, is a global leader in delivering broadband and other wireless and wireline communications services to mass market, business, government and wholesale customers.  Verizon Wireless operates America's most reliable wireless network, serving 94.1 million customers nationwide.  Verizon also provides converged communications, information and entertainment services over America's most advanced fiber-optic network, and delivers innovative, seamless business solutions to customers around the world.  A Dow 30 company, Verizon employs a diverse workforce of more than 194,000 and last year generated consolidated revenues of $106.6 billion.  For more information, visit www.verizon.com.

    About Terremark

    Terremark Worldwide (Nasdaq: TMRK) is a leading global provider of IT infrastructure services delivered on the industry's most robust and advanced technology platform. Leveraging data centers in the United States, Europe and Latin America with access to massive and diverse network connectivity, Terremark delivers government and enterprise customers a comprehensive suite of managed solutions including managed hosting, colocation, disaster recovery, security, data storage and cloud computing services. Terremark's Enterprise Cloud computing architecture delivers the agility, scale and economic benefits of cloud computing to mission-critical enterprise and Web 2.0 applications and its DigitalOps(R) service platform combines end-to-end systems management workflow with a comprehensive customer portal. More information about Terremark Worldwide can be found at www.Terremark.com.

    Verizon is my mobile voice carrier and I’m pleased with their service. I’m waiting to purchase a Windows Phone 7 until Verizon offers them.


    Dana Gardner claimed “Ariba cloud-based procurement brings savings and visibility to buying process at scale” in an introduction to his Case Study: How Businesses Are Using Cloud Commerce post of 1/27/2011:

    This case study podcast discussion explains how businesses are using cloud commerce to dramatically improve how they procure by better managing the overall buying process.

    Cloud-based procurement services are improving cash management, and helping to gain an analytical edge over spend management while constantly improving company-wide buying efficiency and repeatability.

    One company, California-based First American Financial, has successfully moved its procurement activities to the cloud to save on operational expenses, even while dramatically increasing the purchases managed.

    To learn more about how First American is conducting its businesses better through collaborative cloud commerce, Dana Gardner, Principal Analyst at Interarbor Solutions, interviewed Jeff Nolen, Procurement Solutions Manager at First American Financial Corp.

    Here are some excerpts:

    Nolen: First American has existed for a long, long time. We've been in business since 1889. We're financial services company and we sell a lot of financial products related to residential and commercial real estate transaction, title insurance mainly, other things of that nature -- home warranty insurance and specialty insurance that covers the real estate transaction. It’s a very relationship-driven business.

    One of the things we've been trying to do is standardize and centralize a lot of our administrative functions -- IT, accounting, etc., and procurement falls under that. We started centralizing procurement in 2006. So we managed to survive for 117 years without organized procurement as part of the company.

    Because there was no centralized procurement, people had come up with local solutions. So when we came along in the picture and said, "We're here to save you," they were kind of nonplussed. They said, "We've already solved these problems locally."

    Regional areas
    So we had an issue where we had a lot of regional areas that had solved these problems locally and we needed to help them understand that the way they had solved them wasn’t perhaps the optimal solution, but without doing that in an ego-bruising way. Change management is a huge piece of it.

    In the fall of 2006, we rolled out Ariba Buyer and Ariba Spend Visibility to get control and visibility into what we were spending as a company. And, not only what are we spending, but how can we move it to a smaller number of suppliers, become standardized on items, things of that nature. You can't do without having a tool.

    We were undergoing a lot of standardization and centralization of various functions at our company at the same time. They were going from a distributed accounts payable (AP) processes to a centralized AP model, and that really helped us, because obviously, procurement and AP work hand-in-hand.

    Standardizing the AP process really helped us from a procurement perspective, because then we had one group that we had to work with.

    Once we had a single AP group that we were working with, it was a lot easier for us to standardize our process, because we rolled out Ariba prior to having an ERP system. We would get purchase orders, and they would go to the vendor, but they didn’t go to any accounting system, because there was no one central accounting system to push to.

    Standardizing the AP process really helped us from a procurement perspective, because then we had one group that we had to work with in terms of invoicing issues and things of that nature.

    Generally, we took the low-hanging fruit where it was. Any time that there was a centralized buying group like IT, where everybody bought their hardware through one IT group, it was easy to win those folks over.

    Things that people were already used to buying online, like office supplies, were real quick hits that we could do. Once we had some early gains in those areas and built relationships internally within our company, they started to trust us with more and more areas.
    Now, we see a lot of internal process issues that are much, much better. We have a lot more electronic invoicing. We had virtually no electronic invoicing before. Even in cases when a supplier could give us an electronic invoice, we didn’t have any sort of pre-approval going on. We had paper invoices that came in after the good or service was already consumed. We routed that for approval, but the horse was already out of the barn.

    Now, we have the ability to approve expenses before we have committed anything to a supplier, which is a huge deal. We have the ability to deny. We have the ability to maximize our visibility, so we can find areas where we are not optimizing how we buy a particular good or service.

    This [cloud approach] now allows our procurement group to focus on the things that we're best at, which is dealing with the processes internal to our company, managing the internal politics, and communicating with the vendors. It keeps us out of the application management business, which I have no desire to be in.

    In terms of the internal politics and change management piece, it helps us to some degree, because there is more of a focus on cutting costs in the business.

    In terms of the internal politics and change management piece, it helps us to some degree, because there is more of a focus on cutting costs in the business. So, we get a little more mindshare when it comes to dealing with folks out in the field. Previously, it was about getting new customers and expanding market share.

    There certainly still is that desire to expand market share, but when there is a drastically reduced number of transactions per month, we can't sell our product until somebody buys a piece of property. When fewer people are buying property, we have to find other ways to increase our margins.

    Savings and hard dollars
    Like everybody else, we track savings and hard dollars -- old cost minus new cost times volume equals savings -- fairly simple stuff. We look at cost avoidance figures as well, but that’s a much softer, squishier area. The area that we get credit for is in hard savings dollars.

    We've been doing fairly well. We have had, on average, about $8 million to $9 million of savings credited to our organization for the last five years. We've been very pleased with that, but obviously we want to go further.

    Some of the key metrics we look at -- obviously the percentage compliance, both on a dollar volume basis and on a transaction basis; how many invoices we had; how many of those were on PO, because obviously when you are looking at processing issues, you care about the number of documents more than the number of dollars. From a savings standpoint, you want to get those dollars through the tool, so you can get pre-approval on those.

    There are other areas that we look at in terms of degree of supplier aggregation, like how well are we doing in a given commodity space in getting most of our spend with the smaller number of suppliers, so that we can manage them easier. In some areas this is very easy, because there are only a few players. In overnight shipping, for example, there are only a couple nationwide players in that category. There are other areas like print and promotional, where that’s a much tougher nut to crack.

    [In terms of bargaining] this helps a lot. It increases our ability to increase our share of wallet with the suppliers that are performing well for us. There are some suppliers who grouse about it initially and see this as the enemy, because it’s just going to drive prices down. Well, also, if you are a good partner for us, it helps us communicate better with you. It’s not just about increasing efficiency within the four walls of our company, but it’s also about increasing efficiency between organizations.

    We can do everything right in house, but there’s still a lot of stuff between our company and the companies that we do business with that, if that’s inefficient, then we're not really gaining as much as we should.

    This electronic invoicing initiative that we have. In office supplies, we would receive one paper invoice per purchase order, and that’s thousands of purchase orders for a year. We've reduced that now, where we have summary invoicing and we receive an electronic invoice, basically one electronic invoice for the month. We match that way, and it’s just a lot more efficient than it had been previously.

    [For those beginning such a cloud procurement journey], I would say focus on process first, because the technologies are great. But you can’t think, "We'll just buy the software and that will solve all of our problems," because it won’t. The softwares are malleable and they're flexible. They can do a lot of different things. The question is, what do you want that software to do? And if you have a bad process and you apply technology to it, then you are just accelerating the wrong process.

    The first software anybody should buy is [workflow authoring solution] Visio, and figure out exactly how they want orders and invoices to flow through their organizations.

    Impose clarity
    Another area is to make sure that you're telling your business clearly what you want out of purchase orders, what do you want to go on purchase orders, what do you want to go on a P-Card or credit card. Anywhere where you can impose clarity, it will help you be more successful.

    Also, you can’t ignore the internal change management dynamics, because that’s often the more challenging thing. Traditionally, outside of manufacturing environments, where it probably rides higher than the saddle, procurement is not the sexiest, most respected group in any organization ... but I don’t think we would have lasted another 117 years without some form of centralized standardized procurement.

    You might also be interested in:


    Simon Crosby sheds more light on AWS/Citrix arrangements with a Can They Match This? post of 1/26/2011 to the Citrix Community blog (see later article below):

    image It's been a long time in the making, and the announcement is perhaps predictably short on detail. But it's fantastic to be able to publicly comment for the first time on our partnership with Amazon.

    imageEveryone knows that Amazon (AWS) operates the largest IaaS cloud. With a customer list that reads like the who's who of web brands, AWS knows more about how to securely operate a cloud infrastructure at scale than any enterprise vendor, and has proven to be superbly inventive in adding compelling software services (recent examples include the Elastic Beanstalk and the RDS Relational Database Service) for no incremental charge above storage and compute. In my view AWS is building PaaS the right way - offering highly sticky services that power real world applications. Even luminaries such as the Sheriff of Redwood Valley and Ray Ozzie have recognized their genius.

    VMware has been trying to position AWS as a "consumer cloud" of no relevance to the enterprise, a conjecture elegantly refuted by Randy Bias of Cloudscaling. AWS is very serious about the enterprise cloud segment, and today's announcement is the first of many that we hope to make in this spirit.

    imageCitrix has been working with Amazon for some time to ensure that XenApp and our networking products, such as NetScaler, Branch Repeater and their virtual appliance implementations (VPX), work well in scenarios where customers want to take advantage of cloud-bursting to scale out their Windows or web application delivery infrastructures - effectively permitting them to outsource their DMZ and the app delivery services to the cloud. Amazon has also long been a friend of the Xen community.

    Today's announcement takes our relationship a stage further. Citrix has announced that it will closely partner with Amazon to further our strategic goal of delivering open, inter-operable cloud solutions. We have specifically announced collaboration on Windows, interoperability and development of a rich set of value-added cloud solutions:

    1. To start with, Citrix and Amazon are both committed to running any VM from any hypervisor, whether Windows or Linux based, with unbeatable performance, pay-as-you-go pricing, and elastic scalability,
    2. We are both committed to the continued enrichment of the infrastructure service fabric to meet the security, compliance, performance and SLAs demanded by enterprises for app delivery from the cloud
    3. We will ensure unparalleled portability, security and manageability of application workloads between private and public clouds
    4. We will ensure that AWS runs Windows as well as, or better than your private cloud, enabling you to move more workloads to the cloud sooner, and to maximize price/performance. Citrix has extensive experience in optimizing Windows workloads on XenServer and its natural for Citrix to be the right partner to help optimize these same applications on AWS.
    5. We will continue to collaborate to optimize performance and portability for all workloads, focusing on security by design, and open innovation.

    Here are some key take-aways for Citrix customers:

    1. You can expect seamless manageability for private and hosted workloads, with role-based, end-to-end management, from any enterprise virtualization platform, to the cloud. A Citrix XenServer customer will be able to simply use XenCenter to manage their hosted workloads
    2. This announcement is good news for another great partner: Microsoft. The Azure VM Role thus far offers only an ephemeral instance model for Windows Server 2008 R2 VMs. Our collaboration with Amazon will enable Microsoft's Enterprise customers (and SPLA consumers) to confidently select AWS for their most demanding non-ephemeral workloads.
    3. Citrix is committed to offering customers a set of solutions that embrace cloud computing and seamlessly extend the enterprise into an open, inter-operable and portable cloud environment, including
      1. Citrix OpenCloud Bridge for seamless network connectivity
      2. Citrix OpenCloud Access provides secure, role-based access to cloud hosted app workloads by enterprise administrators
      3. Citrix NetScaler offers secure credential-based access to desktops and cloud based apps
      4. A powerful set of value added solutions that span enterprise and cloud, including Lab Management, Disaster Recovery, Compliance and more.

    It's probably also worth pointing out that if your Linux vendor has been telling you (as I have repeatedly heard) that it's all over for Xen in the cloud you should tell them it RHEL-ly ain't so. I recommend that you evaluate Amazon Linux as an excellent alternative that is compatible, well supported and Enterprise class.

    At the end of the day, I invite you to consider two possibilities for your cloud. The first is a single-vendor, closed solution that operates at modest scale, is not interoperable or compatible with other vendors. Moreover it commands an extraordinary price premium whether you run it privately or consume it from one of a hundred "me too" providers that cannot both offer it and deliver innovative services of their own. It is a one-way trip that you may well regret. The other mandates interoperability, portability, security, performance and pay-as-you-go, and frees you to pick advanced services that set your applications free. It comes from leading vendors and the open source community. It will outlast any proprietary solution. Pick one.

    Simon is the CTO of Citrix’s Data Center and Cloud Division.


    Maureen O’Gara asserted “The effort is supposed to enhance the interoperability and performance of Windows workloads on AWS” as a deck for her Cloud Computing: Citrix to Make Amazon Cloud More Windows-Fit post of 1/27/2011:

    image Citrix is cuddling up closer to Amazon, saying it will do the engineering to optimize its widgetry and Windows apps on Amazon's Xen-based cloud.

    The effort, which Citrix admits is short on detail, is supposed to enhance the interoperability and performance of Windows workloads on AWS.

    image Simon Crosby, the CTO of Citrix' Datacenter and Cloud Division, blogged that Microsoft's "Azure VM Role thus far offers only an ephemeral instance model for Windows Server 2008 R2 VMs. Our collaboration with Amazon will enable Microsoft's enterprise customers (and SPLA consumers) to confidently select AWS for their most demanding non-ephemeral workloads."

    imageOf course Microsoft may argue with Crosby's appreciation of [the] VM Role (http://blogs.msdn.com/b/buckwoody/archive/2010/12/28/the-proper-use-of-the-vm-role-in-windows-azure.aspx).

    Anyway, the deal should mean improved interoperability between Amazon's own Xen hypervisor and Citrix' XenServer hypervisor and XenServer management tools.

    image Citrix says it's supposed to extend whatever optimizations it achieves to on-premise deployments of XenServer, its commercial server virtualization platform, to make it easier for customers to seamlessly migrate workloads between enterprise data centers and EC2. XenServer customers should be able to better connect, migrate and manage virtual machines across both AWS and on-premise XenServer.

    Citrix will also collaborate with Amazon on advanced cloud solutions for the enterprise such as disaster recovery, applications-on-demand, advanced security and compliance.

    Terry Wise, AWS director of partner relations, suggests there's increased demand for running Windows and Citrix workloads on Amazon.

    CIO, CTO & Developer Resources

    Crosby throws in that "VMware has been trying to position AWS as a ‘consumer cloud' of no relevance to the enterprise," implying that Citrix and Windows will lay that libel to rest.

    He said, "We will ensure that AWS runs Windows as well as, or better than your private cloud, enabling you to move more workloads to the cloud sooner, and to maximize price/performance," adding that "It's probably also worth pointing out that if your Linux vendor has been telling you (as I have repeatedly heard) that it's all over for Xen in the cloud you should tell them it RHEL-ly ain't so. I recommend that you evaluate Amazon Linux as an excellent alternative that is compatible, well supported and enterprise class."


    Robert Duffner posted a Thought Leaders in the Cloud: Talking with Randy Bias, Cloud Computing Pioneer and Expert interview on 1/26/2011:

    image Randy Bias [pictured at right] is a cloud computing pioneer and recognized expert in the field.  He has driven innovations in infrastructure, IT, Operations, and 24×7 service delivery since 1990. He was the technical visionary on the executive team of GoGrid, a major cloud computing provider. Prior to GoGrid, he built the world's first multi-cloud, multi-platform cloud management framework at CloudScale Networks, Inc.

    In this interview, we discuss:

    • Cloud isn't all about elasticity.  Internal datacenters run about 100 servers for each admin.  The large cloud providers can manage 10,000 servers per admin.
    • Users can procure cloud resources on an elastic basis, but like power production, the underlying resource isn't elastic, it's just built above demand.
    • Just doing automation inside of your datacenter, and calling it private cloud, isn't going to work in the long term.
    • Laws and regulations are not keeping pace with cloud innovations.
    • Startups aren't building datacenters.  In the early days, companies built their own power generation, but not any more.  Buying compute instead of building compute is evolving the same way.
    • The benefit of cloud isn't in outsourcing the mess you have in your datacenter.  It's about using compute on-demand to do processing that you're not doing today.

    Robert Duffner: Could you take a minute to introduce yourself and your experience with cloud computing, and then tell us about Cloudscaling as well?

    Randy Bias: I'm the CEO of Cloudscaling. Before this, I was VP of Technology Strategy at GoGrid, which was the second to market infrastructure-as-a-service provider in the United States. Prior to that, I worked on a startup, building a cloud management system very similar to RightScale's. 

    I was interested very early in cloud technology and I also started blogging on cloud early in 2007. Prior to cloud I had already amassed a lot of experience building tier-one Internet service providers (ISPs), managed security service providers (MSSPs), and even early pre-cloud technology solutions at Grand Central Communications.

    Cloudscaling was started about a year and a half ago, after I left GoGrid. Our focus is on helping telcos and service providers build infrastructure clouds along the same model as the early cloud pioneers and thought leaders like Amazon, Google, Microsoft, and Yahoo.

    Robert: On your blog, you recently stated that elasticity is not cloud computing. Many people see elasticity as the key feature that differentiates the cloud from hosting. Can you elaborate on your notion that elasticity is really a side effect of something else?

    Randy: We look at cloud and cloud computing as two different things, which is a different perspective from that of most folks. I think cloud is the bigger megatrend toward a hyper-connected "Internet of things." We think of cloud computing as the underlying foundational technologies, approaches, architectures, and operational models that allow us to actually build scalable clouds that can deliver utility cloud services.

    Cloud computing is a new way of doing IT, much in the same way that enterprise computing was a new way of doing IT compared to mainframe computing. There is a clear progression from mainframe to enterprise computing and then from enterprise computing to cloud computing. A lot of the technologies, architectures, and operational approaches in cloud computing were pioneered by Amazon, Microsoft, Google, and other folks that work at a very, very large scale.

    In order to get to a scale where somebody like Google can manage 10,000 servers with a single head count, they had to come up with whole new ways of thinking about IT. In a typical enterprise data center, it's impossible to manage 10,000 servers with a single head count.  There are a number of key reasons this is so.

    As one example, a typical enterprise data center is heterogeneous. There are many different vendors and technologies for storage, networking, and servers. If we look at somebody like Google, they stated publicly that they have somewhere around five hardware configurations for a million servers. You just can't get any more homogeneous than that. So all of these big web operators have had to really change the IT game.

    This highlights how we think of cloud computing as something fundamentally new. One of the side effects of large cloud providers being able to run their infrastructures on a very cost effective basis at large scale is that it enables a true utility business model.

    The cost of storage, network, and computing will effectively be driven toward zero over time. Consumers have the elastic capability to use the service on a metered basis like phone or electric service, even though the actual underlying infrastructure itself is not elastic.

    It's just like an electric utility. The electricity system isn't elastic, it's a fixed load. There's only so much electricity in the power grid. That's why we occasionally get brown-outs or even black-outs when the system becomes overloaded. It's because the system itself is not elastic, it's the usage.

    Robert: That's actually a great analogy, Randy. You mentioned that public cloud is at a tipping point. There are obvious reasons for organizations wanting to go down a private cloud path first. Are you sensing that many organizations will go to the public cloud first? And then re-evaluate to see what makes sense to try internally?

    Randy: In our experience, a typical large enterprise is bifurcated. There is a centralized IT team focused on building internal systems that you could call private cloud as an alternative to the public cloud services. On the other side are app developers in the various lines of business, who are trying to get going and accomplish something today. Those two constituencies are taking different approaches.

    The app developers focus on how to get what they need now, which tends to push them toward public services. The centralized IT departments see this competitive pressure from public services and try to build their own solutions internally.

    We should remember that we're looking at a long term trend, and that it isn't a zero-sum game. Both of these constituencies have needs that are real, and we've got to figure out how to serve both of them.

    We have a nuanced position on this, in the sense that we are neither pro-public cloud nor pro-private cloud. However, we generally take the stance that probably in the long term, the majority of enterprise IT spending and capacity will move to the public cloud. That might be on a 10 to 20 year time-frame.

    If you're going to build a private cloud that will be competitive, you're going to have to take the same approach as Amazon, Google, Microsoft, Yahoo, or any of the big web operators. If you just try to put an automation layer on top of your current systems, you won't ultimately be successful.

    We know the history of trying to do large-scale automation inside our data centers over the past 20 or 30 years. It's been messy, and there's no reason to think that's going to change. You've got to buy into that idea of a whole new way of doing IT. Just adding automation inside your data center and calling it a private cloud won't get you there.

    Robert: Some of the people we've spoken to have expressed the notion that clouds only work at sufficient scale. When we talk about Azure and the cloud in the context of ideal workloads or ideal scenarios, we always talk about this idea of on-and-off batch processing that requires intensive compute or a site that's growing rapidly. And then of course your predictable and unpredictable bursting scenarios. In your experience, is there some minimum size that makes sense for cloud implementation?

    Randy: For infrastructure clouds, there probably is a minimum size, but I think it's a lot lower than most people think. It's about really looking at the techniques that the public cloud providers have pioneered.

    I see a lot of people saying, "Hey, we're going to provide virtual machines on demand. That is a cloud," to which I respond, "No, that's virtual machines on demand." Part of the cloud computing revolution is that providers like Amazon and Google do IT differently, like running huge numbers of servers with much lower head count.

    Inside most enterprises, IT can currently manage around 100 servers per admin. So when you move from 100:1 to, say, 1,000:1, labor opex moves from $75 a month per server to $7.50 per month. And when you get to ten thousand, it's a mere $0.75 a month.

    These are order of magnitude changes in operational costs, or in capital expenditures, or in the overall cost structure. Now what size do you have to be to get these economies? The answer is ... not as big as you think.
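    [A back-of-the-envelope sketch of the arithmetic behind those figures, assuming a fully loaded cost of roughly $90,000 per admin per year - an assumption chosen to reproduce the quoted numbers, not a figure from the interview:]

        # Back-of-the-envelope sketch of the labor-opex figures quoted above.
        # The $90,000 fully loaded annual cost per admin is an assumption chosen
        # to reproduce those numbers; it is not a figure from the interview.
        ANNUAL_ADMIN_COST = 90_000.0

        def monthly_opex_per_server(servers_per_admin):
            return ANNUAL_ADMIN_COST / 12 / servers_per_admin

        for ratio in (100, 1_000, 10_000):
            print(f"{ratio:>6} servers per admin -> "
                  f"${monthly_opex_per_server(ratio):.2f} per server per month")
        # 100 -> $75.00, 1,000 -> $7.50, 10,000 -> $0.75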

    When some people consider economies of scale, they believe it means the ability to buy server hardware cheaply enough. But that's not really very difficult.  You can go direct to Taiwanese manufacturers and get inexpensive commodity hardware that is very reliable.  This hardware has the same components as the hardware you could get from IBM, Dell, or HP today, and is built by the same companies that build those enterprise vendors' hardware.

    For hardware manufacturers, especially the original Taiwanese vendors, there is only so much of a discount they can provide, so Amazon doesn't have significantly more buying power than anybody who's got a few million bucks in their pocket.

    Economies of scale also come from more subtle places, such as the ability to build a rock-star cloud engineering team.  For example, the Amazon Web Services cloud engineering team iterates at a rapid pace, and they have designed software so they can actually manage a very large data system efficiently at scale.

    You could do that with a smaller team and fewer resources, but you've got to be really committed to do it. Also, finding that kind of talent is very difficult.

    Robert: You've also talked about how cloud is fundamentally different from grid and HPC. How do you see that evolving? Do you see them remaining very separate, for separate uses and disciplines? Or do you see the lines blurring as time goes on?

    Randy: I think those lines will blur for certain. As I say in the blog post, I view cloud more as high scalability computing than as high performance computing. That actually means that the non-HPC use cases at the lower end of the grid market already make sense on public clouds today. If you run the numbers and the cost economics make sense, you should embrace cloud-based grid processing today.

    Amazon is building out workload-specific portions of their cloud for high performance computing running on top of cloud. Still, at the very top of the current layer of grid use cases that are HPC, the cost economics for cloud are probably never going to make sense. That may be the case, for example, for a large research institution like CERN or some other large HPC consumer that really needs very low-latency infrastructure for MPI problems.

    Robert: It seems that a lot of issues around the cloud are less associated with technical challenges than they are about law, policy, and even psychology. I'm thinking here about issues of trust from the public sector, for example. Many end customers also currently need to have the data center physically located in their country. How do you see the legal and policy issues evolving to keep up with the technical capabilities of the cloud?

    Randy: It's always hard to predict the future, but some of the laws really need to get updated as far as how we think about data and data privacy. For example, there are regulatory compliance issues that come up regularly when I talk to people in the EU. Every single EU member country has different laws about protecting data and providing data privacy for your users. Yet at the same time, some of that is largely prescriptive rather than requirements-based, like stating that data can't reside outside of a specific country.

    I don't know that that makes as much sense as specifying that you need to protect the data in such a way that you never leave it on the disk or move it over the network in such a way that it can be picked up by an unauthorized party. I think the security, compliance, and regulatory laws really need to be updated to reflect the reality of the cloud, and that will probably happen over time. In the short term, I think we're stuck in a kind of fear, uncertainty, and doubt cycle with cloud security.

    Previously, I spent about seven years as a full-time security person.  What I found is that there is always a fairly large disconnect between proper security measures and compliance. Compliance is the codification in laws to try to enforce a certain kind of security posture.
    But because of the way that data and IT are always changing and moving forward, while political systems take years to formulate laws, there's always a gap between the best practices in security and what the current compliance and regulatory environment is.

    Robert: Now, you mentioned a big cloud project your company did in South Korea. What are some of the issues that are different for cloud computing with customers outside the United States?

    Randy: I think one of the first things is that most folks outside the U.S. are really at the beginning of the adoption cycle, whereas inside the U.S., folks are pretty far along, and they've got more fully formulated strategies. And the second thing is that in many of these markets, since the hype cycle hasn't picked up yet, there are still a lot of questions around whether the business model actually works.

    So for example, in South Korea, the dedicated hosting and web hosting business is very small, because most of the businesses there have preferred to purchase the hardware. It's a culture where people want to own everything that they are purchasing for their infrastructure. So will a public cloud catch on? Will virtualization on demand catch on? I don't know.

    I think it'll be about cost economics, business drivers, and educating the market. So I think you're going to find that similar kinds of issues play out in different regions, depending on what the particulars are there. We're starting to work with folks in Africa and the Middle East, and in many cases, hosting hasn't caught on in any way in those regions.

    At the same time, the business models of Infrastructure-as-a-Service providers in the U.S. don't really work unless you run them at 70 to 80% capacity. It's not like running an enterprise system where you can build up a bunch of extra capacity and leave it there unused until somebody comes along to use it.

    Robert: I almost liken it to when the long-distance companies, because of the breakup of the Bells, started to offer people long distance plans. You had to get your head around what your call volume was going to look like. It was the same when cell phones came out. You didn't know what you didn't know until you actually started generating some usage.

    Randy: I think the providers will have options about how they do the pricing, but the reality is that when you are a service provider in the market, you are relatively undifferentiated. And one of the ways in which you try to achieve differentiation is through packaging and pricing. You see this with telecommunications providers today.

    So we're going to see that play out over the next several years. There will be a lot of attempts at packaging and pricing services to address consumers' usage patterns. I liken it to that experience where you get that sticker shock because you went over your wireless minutes for that month, and then you realize that you need plan B or C, and then you start to use that.

    Or when you, as a business, realize that you need an all-you-can-eat plan for all of your employees, or whatever now works for your business model. Then service providers will come up with a plethora of different pricing and packaging options to try to serve those folks, and that will be more successful.

    Robert: In a recent interview I did with New Zealand's Chris Auld, he said that cloud computing is a model for the procurement of computing resources. In other words it's not a technological innovation as much as a business innovation, in the sense that it changes how you procure computing. What are your thoughts on his point?

    Randy: I am adamantly opposed to that viewpoint. Consider the national power grid; is it a business model or a technology? The answer is that it's a technology. It's a business infrastructure, and there happens to be a business model on top of it with a utility billing model.

    The utility billing model can be applied to anything. We see it in telecommunications, we see it in IT, we see it with all kinds of resources that are used by businesses and consumers today.
    We all want to know, what is cloud computing? Is it something new? Is it something disruptive? Does it change the game?

    Yes, it's something new. Yes, it's something disruptive. Yes, it's changed the game.

    The utility billing model itself has not changed the game.  Neither has the utility billing model as applied to IT, because that has been around for a long time as well. People were talking about and delivering utility computing services ten years ago, but it never went anywhere.

    What has changed the game is the way that Google, Amazon, Microsoft, and Yahoo use IT to run large-scale infrastructure.  As a side effect, because we've figured out how to do this very cost effectively at a massive scale, the utility billing model and the utility model for delivering IT services now actually work. Before, you couldn't actually deliver an on-demand IT service in a way that was more cost-effective than what you could build inside your own enterprise.

    Those utility computing models didn't work before, but now we can operate at scale, and we have ways to be extremely cost-efficient across the board. If we can continue to build on that and improve it over time, we're obviously going to provide a less expensive way to provide IT services over the long run.

    It's really not about the business model. It really is about enabling a new way of doing IT and a new way of computing that allows us to do it at scale, and then, on top of this, providing a utility billing model.

    Robert: Clearly, we're seeing a lot of immediate benefit to startups, for the obvious reason that they don't need to procure all of that hardware. Are you seeing the same thing as well?

    Randy: I've been more interested in talking about enterprise usage of public services lately, but it seems that startups are well into the mature stage, where nobody goes out and builds infrastructure anymore for a new startup. It just doesn't make any sense.

    When folks were first starting to use electricity to automate manufacturing, textiles, and so on, larger businesses were able either to build a power plant, or to put their facility near some source of power, such as a hydroelectric water mill. Smaller businesses couldn't.

    Then when we built a national power grid, suddenly everybody could get electricity for the same cost, and it became very difficult to procure and use electricity for a competitive advantage. We're seeing the same thing here, in the sense that access to computing resources is leveling the playing field. Small businesses and start ups actually have access to the same kinds of resources that very large businesses do now. I think that that really changes the game over the long term.

    You will know we crossed a tipping point when two guys and their dog in a third world country can build the infrastructure to support the next Facebook with a credit card and a little bit of effort.

    Robert: Those are all of the prepared questions I had. Is there anything else you'd like to talk about?

    Randy: There are a few things that I'd like to add, since I have the opportunity. The first thing reaches back to the point I made before, likening the way cloud is replacing enterprise computing to the way client-server or enterprise computing replaced mainframes. What drove the adoption of client-server (enterprise) computing?

    It really wasn't about moving or replacing mainframe applications, but about new applications. And when you look at what's going on today, it's all new applications. It's all things that you couldn't do before, because you didn't have the ability to turn on 10,000 servers for an hour for $100 and use them for something.

    If you look at the way that enterprises are using cloud today, you see use cases like financial services businesses crunching end-of-day trading data, or pharmaceutical companies doing very large sets of calculations overnight, where they didn't have that capability before.

    There's a weird fixation in a lot of the cloud community on enterprise or private cloud systems. They're trying to say that cloud computing is about outsourcing existing workloads and capacity to somebody who maybe doesn't have the same kind of cost efficiencies that Amazon or Google has.

    If you just outsource the mess in your data center to someone else who has the same operational cost economics, it can't really benefit you from a cost perspective. What has made Amazon and others wildly successful in this area is the ability to leverage this new way of doing IT in ways that either level the playing field or otherwise create new revenue opportunities. It's not about bottom line cost optimization.

    If we just continue doing IT the way we already do it today, I think we're going to miss the greater opportunity. On the other hand, you ask your developers, "What can you do for the business if I give you an infinite amount of compute, storage, and network that you can turn on for as little as five minutes at a time?" That's really the opportunity.

    Robert: That's excellent, Randy. I really appreciate your time.

    Randy: Thanks Robert.


    Rob Ashton (@robashton) posted RavenDB vs MongoDB - Why I don't usually bother (making the comparison) on 1/26/2011:

    image I’m constantly asked at my talks and around various community events what the differences between MongoDB and RavenDB are - and when I made a small blog series describing the differences between RavenDB and CouchDB, I was lambasted for deciding that it wasn’t worth my time to cover MongoDB  - so here we go, let’s talk about this.

    In order to talk about Mongo and Raven, we first need to look at the reasons we might choose to ditch our relational database and move to any of the alternative data stores. This is simple – but surprisingly, people seem to skip the obvious and stick NoSQL in one imaginary group marked “scalability” and “not for us”. So let’s do a little bit of catch-up before I begin the comparisons.

    • Relational databases are pieces of mature software that are used as a one size fits all solution
    • NoSQL data stores are typically geared towards a specific sweet spot, and make sacrifices in other areas in order to do that one thing well

    Simple, really - let’s look at an example of this: Solr, an HTTP server built around Lucene.

    A lot of us are already using this, and were using it long before the current “nosql fad” hit.  When writing search into our applications we have a choice – we can use the full text indexing services in our standard database (which can be clunky, technologically awkward and basically keeps the DBA happy), or we can give this job to something that does search well.

    Solr doesn’t really do transactions (although there is a commit, it is global in nature), and it doesn’t do relationships; it just does flattened documents on which you can perform full-text searches, and it also has fantastic horizontal scaling support.  It isn’t really geared towards being a primary data store, but it excels at what it is designed for, which is adding fully-featured search support to applications.
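    To make that concrete, here is a minimal sketch of handing the search job to Solr over its HTTP API; the host, the core name (“products”) and the field name (“title”) are illustrative assumptions rather than anything from the article.

        # Query a Solr core over HTTP and read the JSON response.
        # Assumes a local Solr instance with a "products" core and a "title" field.
        import requests

        SOLR_SELECT = "http://localhost:8983/solr/products/select"

        def search(text, rows=10):
            # Solr's select handler takes a Lucene query string; wt=json asks for JSON.
            params = {"q": 'title:"%s"' % text, "rows": rows, "wt": "json"}
            resp = requests.get(SOLR_SELECT, params=params)
            resp.raise_for_status()
            return resp.json()["response"]["docs"]

        for doc in search("cloud computing"):
            print(doc)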

    Another example of this: contrary to popular belief, Facebook doesn’t use Cassandra for much more than messages between inboxes – instead running most of their data stores off an intelligently partitioned MySQL set-up (because it works, although they have had to think very carefully about how they do that).

    It’s all about sweet spots -  if you have a problem that needs solving – then as a developer you want to first try to find a way to solve that problem without writing any code, and then, if you really have to, solve the problem with as little code as possible and in as efficient a manner as possible.

    This brings us back to our three main contenders for primary data stores in an OLTP environment (which is what most of us work in): Couch, Raven and Mongo. Where are their sweet spots? Let’s look at some main data points across those three and then ask that question again.

    image CouchDB

    • Transactions: Atomic writes per document (bulk operations are not atomic)
    • Consistency: Strong node/eventual cluster (Views are updated when you read from them)
    • Writing: Always write entire documents
    • Reading: Always “querying” materialised views
    • Set-based operations: Nil
    • Durability: About as durable as it gets
    • Ad-hoc queries: Nil
    • “Joins”: You can pull entire documents down with a single HTTP request (not projections)
• Tech: HTTP REST API with JSON documents (see the sketch after this list)
• Summary: Fast writes, fast-ish reads
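A minimal sketch of the HTTP/JSON interaction summarised above, using Python’s requests library against a local CouchDB. The database name, document and view are hypothetical; PUT /db/docid and GET /db/_design/ddoc/_view/name are CouchDB’s standard endpoints.

```python
import requests

COUCH = "http://localhost:5984"
DB = "orders"  # hypothetical database name

# Writing: you always PUT the entire document.
doc = {"customer": "rob", "total": 42.50, "status": "shipped"}
resp = requests.put(f"{COUCH}/{DB}/order-1001", json=doc)
print(resp.json())  # {'ok': True, 'id': 'order-1001', 'rev': '1-...'}

# Reading: you always query a materialised view, which is (re)built when read.
# Assumes a 'by_customer' view already exists in a 'reports' design document
# (defining views is sketched further down the post).
resp = requests.get(
    f"{COUCH}/{DB}/_design/reports/_view/by_customer",
    params={"key": '"rob"', "include_docs": "true"},  # keys are JSON-encoded
)
for row in resp.json()["rows"]:
    print(row["doc"]["total"])
```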

RavenDB

    • Transactions: Atomic bulk operations, supported across server-instances
    • Consistency: Eventual (Views are updated in the background)
    • Writing: Typically write entire documents
    • Reading: Always “querying” materialised views
    • Set-based operations: Updates/Deletes
    • Durability: Effectively a transaction log, data is either written or it is not written
    • Ad-hoc queries: Indexes can be created/managed automatically based on ad-hoc queries
    • “Joins”: Post-query, properties from related documents can be loaded into a projection (live projections)
• Tech: HTTP REST API with JSON documents (see the sketch after this list)
    • Summary: Fast writes, fast reads, stale data from views is allowed
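For symmetry, here is a rough sketch of talking to RavenDB’s HTTP API directly from Python (a .NET shop would normally use the client library instead). The server URL, document ID, Raven-Entity-Name header and the /docs and /indexes/dynamic endpoints are recalled from the circa-2011 HTTP API and should be treated as assumptions rather than a definitive reference.

```python
import requests

RAVEN = "http://localhost:8080"  # assumed default RavenDB port

# Writing: PUT an entire JSON document; the Raven-Entity-Name header is
# assumed to group documents into a collection (1.0-era convention).
doc = {"Customer": "rob", "Total": 42.50, "Status": "Shipped"}
resp = requests.put(
    f"{RAVEN}/docs/orders/1001",
    json=doc,
    headers={"Raven-Entity-Name": "Orders"},
)
print(resp.status_code)

# Reading: query a dynamic index; RavenDB creates and manages the index
# automatically based on the ad-hoc (Lucene-style) query.
resp = requests.get(
    f"{RAVEN}/indexes/dynamic/Orders",
    params={"query": "Customer:rob"},
)
print(resp.json().get("Results", []))
```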

MongoDB

    • Transactions: Atomic writes per document
    • Consistency: Strong master/eventual slave (No pre-computed views)
    • Writing: Amazing array of possibilities, all blindingly fast
    • Reading: Very similar to queries in a relational DB, except querying docs, not tables
    • Set-based operations: Nil
• Durability: At the moment, if the power goes out you lose data; 1.8 adds a journal (transaction log)
    • Ad-hoc queries: Sure, but if you want performance you’re going to have to add indexes to your documents
    • “Joins”: Nil
• Tech: Custom TCP protocol with JSON-style (BSON) documents (see the sketch after this list)
    • Summary: Blindingly fast writes (potentially to a black hole), traditional query-based read-model
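A minimal sketch with the pymongo driver, showing the traditional-feeling query model and the write-acknowledgement knob behind the durability point above. The database, collection and field names are hypothetical, and the w=0/w=1 write-concern API reflects current pymongo rather than the 2011-era safe=True flag.

```python
from pymongo import MongoClient, WriteConcern

client = MongoClient("mongodb://localhost:27017")
orders = client.shop.orders  # hypothetical database and collection names

# Writing: modern drivers wait for acknowledgement by default (w=1);
# w=0 is the old fire-and-forget behaviour the durability bullet refers to.
fire_and_forget = orders.with_options(write_concern=WriteConcern(w=0))
fire_and_forget.insert_one({"customer": "rob", "total": 42.50})

acknowledged = orders.with_options(write_concern=WriteConcern(w=1))
acknowledged.insert_one({"customer": "rob", "total": 19.99})

# Reading: queries look much like relational queries, just over documents;
# add an index if you want them to be fast.
orders.create_index("customer")
for doc in orders.find({"customer": "rob"}).sort("total", -1):
    print(doc["total"])
```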

Anyway, that’s the pointless, missing-a-lot-of-the-salient-points feature comparison out of the way; now I’ll give my very abstract view of the three:

    RavenDB

RavenDB is feature heavy, has separate read/write stores (documents = write store, indexes = query store), and takes a “fast is fast enough” approach to performance. While most operations have been profiled and tweaked, most of the performance gains in OLTP applications come from the materialized map or map/reduce views (pre-computed queries). There is a massive drive towards usability with this project and it shows.
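To make the pre-computed-view idea concrete on the Raven side, here is a rough sketch of defining a static map index over the HTTP API. The /indexes endpoint, the index name and the {"Map": ...} payload shape are recalled from the 1.0-era API and are assumptions; most users would define the index in C# through the client library.

```python
import requests

RAVEN = "http://localhost:8080"  # assumed default RavenDB port

# The map is a LINQ expression evaluated on the server; Raven keeps the
# resulting index up to date in the background as documents change.
index_def = {
    "Map": "from order in docs.Orders select new { order.Customer, order.Total }"
}
requests.put(f"{RAVEN}/indexes/Orders/ByCustomer", json=index_def)

# Queries against the index read the pre-computed results (possibly stale).
resp = requests.get(
    f"{RAVEN}/indexes/Orders/ByCustomer",
    params={"query": "Customer:rob"},
)
print(resp.json().get("Results", []))
```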

CouchDB

CouchDB is a feature-lite but more mature brother to RavenDB, and as such has separate read/write stores (documents = write store, views = query store) and a purer approach to achieving performance targets; apparently one of its big priorities is the ability to run on as many platforms as possible. It is worth noting that performance is gained in a similar manner to RavenDB, by effectively pre-computing queries using map or map/reduce views.
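To show what a pre-computed query looks like in Couch, here is a hedged sketch of PUTting a design document containing a map/reduce view. The database, design document and field names are hypothetical; the map function is JavaScript, as CouchDB expects, and _sum is one of CouchDB’s built-in reduce functions.

```python
import requests

COUCH = "http://localhost:5984"
DB = "orders"  # hypothetical database name

# A design document holds the map/reduce functions CouchDB uses to build
# (and incrementally maintain) the materialised view.
design_doc = {
    "language": "javascript",
    "views": {
        "total_by_customer": {
            "map": "function(doc) { emit(doc.customer, doc.total); }",
            "reduce": "_sum",  # built-in reduce function
        }
    },
}
requests.put(f"{COUCH}/{DB}/_design/stats", json=design_doc)

# Querying the view returns the pre-computed totals, grouped by key.
resp = requests.get(
    f"{COUCH}/{DB}/_design/stats/_view/total_by_customer",
    params={"group": "true"},
)
print(resp.json()["rows"])  # e.g. [{'key': 'rob', 'value': 62.49}, ...]
```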

MongoDB

MongoDB is feature-lite and very similar to a traditional database in that you store documents and add indexes to fields in those documents to make queries over them faster (the indexes effectively mean data from those documents is kept in clever data structures in memory-mapped files). Performance is gained entirely by micro-optimising every level of the pipeline: it uses TCP directly rather than HTTP, its platform drivers pool connections (the higher-level APIs are generally written on top of these drivers), and every bit of code is written to run as fast as it possibly can.

    Where I stand

So, given the above, we can see that Raven and Couch are very different from our traditional databases – they have a big drive towards making reads cheap – and because in most applications we do more reads than writes, and we generally know in advance what queries we’re going to make, setting those queries up as materialized views (automatically in the case of Raven, manually in the case of Couch) just makes sense.

The biggest thing for both of these (and Raven in particular) is not that they’re fast, or scalable – it is that they are both incredibly usable, because they give us a structured framework for creating a basic CQRS-principled OLTP application without having to worry about mapping, query plans, complex architectures, or DBAs who think that SPROCs are the one and only way to do things. They enable us to build applications that work _fast_, and that’s what is important to our customers. This is a sweet spot.

    Let’s talk about Mongo then

Mongo, on the other hand, is very similar to our traditional databases, with the core difference that data is stored as documents instead of being split out (normalised) across multiple tables. The rest of it looks very familiar – except that we lose the ability to do joins across our tables (so complex reporting is out). Reads are fast, but only because Mongo has been micro-optimised to the hilt (just like most database engines); writes are fast, but only because the system doesn’t wait to find out whether a write succeeded before carrying on with the next job.

I don’t see it; I don’t see the sweet spot. Even if the durability issues are sorted out, it’s still just a traditional database with a few fewer features, in the name of gaining a few milliseconds that most of us don’t need.

It achieves fast queries by keeping indexes in memory, which means what Mongo actually gives us is a really slow way to query in-memory objects – and heaven forbid you run out of RAM to hold those indexes on your production servers (à la Foursquare a few months ago). If you’re going to go for an in-memory query store, then you’re probably going to use Redis, because that’s its sweet spot…
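For contrast, a minimal sketch of Redis as that in-memory query store, using the redis-py client (the key and member names are hypothetical, and the zincrby argument order assumes redis-py 3.x). Everything lives in RAM by design, which is exactly the trade-off being chosen.

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Keep each customer's running order total in a sorted set, so the "query"
# (top spenders) is just an in-memory range read.
r.zincrby("order_totals", 42.50, "rob")    # hypothetical data
r.zincrby("order_totals", 19.99, "jane")

top_spenders = r.zrevrange("order_totals", 0, 9, withscores=True)
for customer, total in top_spenders:
    print(customer.decode(), total)
```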

    As a consequence of this, when asked to compare Raven and Mongo, I find it generally hard to answer the question because they are fundamentally very different animals (is Mongo even an animal?), and even lengthy blog entries like this don’t come close to scratching the surface of the differences between the data stores.

In my honest opinion, there are no problems that Mongo solves that Couch, Raven or even MySQL don’t already solve better. There is no sweet spot, apart from maybe as a really fast logging system – and even then I might debate that point, because I value my logs and don’t want to lose them to a random power outage.

Hopefully I’ve explained myself here. It’s not that I dislike Mongo; I just don’t see the point of it – I don’t see what it brings to the table, I don’t see a sweet spot, and I think that a lot of the people using it instead of Couch or Raven don’t really have any credible reason for moving away from a relational store in the first place.

Using Mongo instead of Raven or Couch is like buying the fastest, most tuned-up car you can afford, and then only driving it in reverse when you arrive at the races.


    <Return to section navigation list> 
