Sunday, February 14, 2010

Windows Azure and Cloud Computing Posts for 2/12/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
• Update 2/14/2010 8:00 AM PST: New items on Sunday morning.

• Update 2/13/2010 8:00 AM PST: New items on Saturday morning.

Update 2/12/2010 4:00 PM PST: Live Bing, Silverlight, OGDI, and Azure at the Olympic Games on 2/11/2010. Go here for the live apps that use Azure, Open Government Data Initiative (OGDI) or both.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

  • Azure Blob, Table and Queue Services
  • SQL Azure Database (SADB, formerly SDS and SSDS)
  • AppFabric: Access Control, Service Bus and Workflow
  • Live Windows Azure Apps, Tools and Test Harnesses
  • Windows Azure Infrastructure
  • Cloud Security and Governance
  • Cloud Computing Events

To use the above links, first click the post’s title to display the single post, then click the link for the section you want to read.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the January 4, 2010 commercial release in February 2010. 

Azure Blob, Table and Queue Services

Ryan Dunn delivers an illustrated Windows Azure Storage Overview Webcast linked by the Azure Service Brazil site on 2/12/2010:

A key component of any cloud computing offering is durable storage. Windows Azure provides 3 forms of durable storage: tables, blobs, and queues. Join me as we explore the highlights of these 3 important implementations.

The Webcast is in English, not Brazilian Portuguese.
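
If you haven’t tried the three services yet, here’s a minimal sketch (mine, not Ryan’s) that reaches each of them with the StorageClient library from the Windows Azure SDK; the account name, key, and resource names are placeholders:

```csharp
// Create one client per storage service and a sample resource in each.
// Assumes the Microsoft.WindowsAzure.StorageClient assembly from the
// Windows Azure SDK; the connection-string values are placeholders.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class StorageOverview
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>");

        CloudBlobClient blobs = account.CreateCloudBlobClient();    // unstructured files
        CloudQueueClient queues = account.CreateCloudQueueClient(); // reliable messaging
        CloudTableClient tables = account.CreateCloudTableClient(); // structured entities

        blobs.GetContainerReference("demo").CreateIfNotExist();
        queues.GetQueueReference("demo").CreateIfNotExist();
        tables.CreateTableIfNotExist("demo");
    }
}
```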

Ryan Dunn explains Shared Access Signatures (SAS) for Windows Azure in his Sharing Blobs in Windows Azure post of 2/12/2010:

Windows Azure storage makes use of a symmetric key authentication system.  Essentially, we take a 256-bit key and sign each HTTP request to the storage subsystem.  In order to access storage, you have to prove you know the key.  What this means of course is that you need to protect that key well.  It is an all-or-nothing scenario: if you have the key, you can do anything.  If you don't possess the key, you can do nothing*.

The natural question for most folks when they understand this model is, how can I grant access to someone without compromising my master key?  The solution turns out to be something called Shared Access Signatures (SAS) for Windows Azure.  SAS works by specifying a few query string parameters, canonicalizing those parameters, hashing them and signing the hash in the query string.  This creates a unique URL that embeds not only the required access, but the proof that it was created by someone that knew the master key.  The parameters on the query string are:

  • st - this is the start time at which the signature becomes valid.  It is optional.  If not supplied, then now is implied.
  • se - this is the expiration date and time.  All signatures are time-bound and this parameter is required.
  • sr - this is the resource that the signature applies to and will be either (b)lob or (c)ontainer.  This is required.
  • sp - this is the permission set that you are granting: (r)ead, (w)rite, (d)elete, and (l)ist.  This is required.
  • si - this is a signed identifier or a named policy that can incorporate any of the previous elements.  Optional.
  • sig - this is the signed hash of the query string and URI that proves it was created with the master key.  It is required.

There is one other caveat that is important to mention here.  Unless you use a signed identifier - what I refer to as a policy - there is no way to create a signature that has a lifetime longer than an hour.  This is for good reason.  A SAS URL that was mistakenly created with an extremely long lifetime and without using the signed identifier (policy) could not be revoked without changing the master key on the storage account.  If that URL were to leak, your signed resource would be open to abuse for a potentially long time.  By making the longest lifetime of a signature only an hour, we have limited the window in which you are exposed. …
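
To make the mechanics concrete, here’s a minimal sketch (mine, not from Ryan’s post) that generates a one-hour, read-only SAS URL with the StorageClient library; the account credentials and blob path are placeholders:

```csharp
// Generate a short-lived, read-only Shared Access Signature for one blob.
// The library canonicalizes and signs the st/se/sr/sp parameters described
// above; account name, key, and blob path are placeholders.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class SasSketch
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>");
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlob blob = client.GetBlobReference("container/file.txt");

        string sas = blob.GetSharedAccessSignature(new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read,          // sp=r
            SharedAccessStartTime = DateTime.UtcNow,             // st
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1) // se: one hour, the
                                                                 // max without a policy
        });

        // Hand this URL to the delegate; no master key is needed to use it.
        Console.WriteLine(blob.Uri.AbsoluteUri + sas);
    }
}
```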

Dave Thompson’s Consuming data from Codename “Dallas” tutorial of 2/12/2010 begins:

Codename “Dallas”, part of Windows Azure, is an information service offering which makes it easy to find and consume data from a number of different languages, both managed, such as C# and VB.NET, and other web technologies such as JavaScript and PHP, due to the interface it provides.

A good overview and the developer portal for Dallas can be found here.

In this post I am going to summarise some of the methods of consuming data from Dallas, as mentioned in the Dallas Learning Course on Channel 9 and Zach Owen’s part of the Dallas presentation from PDC 09.

For details on navigating the Dallas Catalog and Datasets the first lab on Channel 9 is very helpful to get to know what is available:

Dave continues with the tutorial following his “So you have found your data source and want to consume some data…” headline.
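
As a flavor of what such tutorials cover, here’s a hedged sketch (not Dave’s code) of invoking a Dallas dataset over raw HTTP from C#; the service URL is a placeholder, and the $accountKey/$uniqueUserID headers follow the convention described for the Dallas CTP:

```csharp
// Call a Codename "Dallas" dataset and dump the ATOM/XML response.
// The dataset URL is a placeholder; copy the real one (and your account
// key) from the Dallas developer portal.
using System;
using System.IO;
using System.Net;

class DallasSketch
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "https://api.sqlazureservices.com/SomeService.svc/SomeDataset"); // placeholder
        request.Headers.Add("$accountKey", "<your Dallas account key>");
        request.Headers.Add("$uniqueUserID", Guid.NewGuid().ToString());

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // parse or proxy-deserialize from here
        }
    }
}
```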

Jerry Huang’s Here Comes Another Azure Drive post of 2/11/2010 explains the differences between using Gladinet Cloud Desktop to map a network drive to the Windows Azure Blob Storage and the beta release of the Windows Azure Drive:

… So what is the difference between the two Azure Drives? The key is to point out where they exist. The Microsoft Azure Drive is not local to your PC while the Gladinet Azure Drive is local to your PC.

The second difference is the purpose of the drive. The Microsoft Azure Drive is for application migration. If you have a Windows app using NTFS APIs, you can continue to use NTFS APIs when the app runs inside the Azure Virtual Machines.  Conceptually it is similar to Amazon’s Elastic Block Store (EBS) for Amazon EC2.  The Gladinet Azure Drive is for data migration. Since the drive is mapped locally, you can drag and drop to transfer files to Azure Blob Storage.

The third difference is the audience. The Microsoft Azure Drive is for developers. You will need to use a specific API (Azure SDK Feb. 2010) to mount the drive inside the Azure VM. On the other hand, the Gladinet Azure Drive is for regular users who need cloud storage to store and back up online content. …
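
For comparison, mounting the Microsoft flavor from role code looks roughly like this hedged sketch against the February 2010 SDK’s CloudDrive API; the connection-string name, local-resource name, blob path, and sizes are placeholders:

```csharp
// Mount a Windows Azure Drive (an NTFS volume backed by a page blob) from
// inside a role instance. Setting names and sizes are placeholders.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public static class AzureDriveSketch
{
    public static string MountDataDrive()
    {
        CloudStorageAccount account =
            CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

        // Dedicate local role storage to the drive's read cache.
        LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");
        CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

        // The drive lives in a page blob; Create() is a one-time step.
        CloudDrive drive = account.CreateCloudDrive("drives/data.vhd");
        try { drive.Create(64); }                  // size in MB
        catch (CloudDriveException) { /* already created */ }

        // Returns the drive letter the role can use with ordinary NTFS APIs.
        return drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);
    }
}
```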

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

•• Josh Holmes’ Getting MySql Running on Azure tutorial of 2/8/2010 begins:

There are a few applications that I’m playing with in Windows Azure that are reliant on MySql for various reasons. For example, I’m working with a group that is doing Joomla development and it’s completely dependent on MySql. Mostly this is due to using MySql native drivers rather than a database-independent layer such as ADO.NET in .NET or PDO in PHP, or leveraging database-specific features that are only found in MySql. Regardless of the reason, for me to run these applications in Windows Azure, I have to get MySql running in Azure. I found that it wasn’t as hard as I initially thought it would be and it’s a technique that I can reuse for a lot of binary executables.

And concludes:

Although I recommend using MySql on Windows Azure only as a stopgap and last resort, the awesome news is that with the Windows Azure MySQL PHP Solution Accelerator it’s actually not that hard to set up and works pretty well.

•• Brian Hitney recommends adding a db_executor role in his SQL Azure Logins post of 2/13/2010:

SQL Azure currently has fairly limited management capabilities.  When you create a database, you receive an administrator account that is tied to your login (you can change the SQL Azure password, though).  Because there is no GUI for user management, there’s a temptation to use this account in all your applications, but I highly recommend you create users for your application that have limited access. 

If you limit access to only stored procedures, you need to specify execute permissions.  Assuming you want your connection to have execute permissions on all stored procedures, I recommend a new role that has execute permissions.  That way, you can simply add users to this role and as you add more stored procedures, it simply works. …

Brian continues with the T-SQL commands to create the role and add users (logins) to it.
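
As a hedged sketch of the pattern (not Brian’s exact commands; the server, database, and user names are placeholders), the role can be created and populated from any ADO.NET connection, which is handy given SQL Azure’s lack of a management GUI:

```csharp
// Create an execute-only role in a SQL Azure database and add an
// application user to it. Connection-string values and names are
// placeholders; CREATE LOGIN must be run separately against master.
using System.Data.SqlClient;

class CreateExecutorRole
{
    static void Main()
    {
        const string connectionString =
            "Server=tcp:myserver.database.windows.net;Database=mydb;" +
            "User ID=admin@myserver;Password=<password>;Encrypt=True;";

        const string sql = @"
            CREATE ROLE db_executor;
            GRANT EXECUTE TO db_executor;            -- covers current and future procs
            CREATE USER appUser FOR LOGIN appLogin;  -- appLogin created earlier in master
            EXEC sp_addrolemember 'db_executor', 'appUser';";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```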

SQL Server guru Bob Beauchemin’s Transactions, isolation, and SQL Azure post of 2/11/2010 reports:

I was trundling through my SQL Azure database today, looking for interesting behaviors. Found one. A "select * from sys.databases"  reveals that both "snapshot_isolation_state" and "is_read_committed_snapshot_on" return 1 (on) for all databases. Because ALTER DATABASE isn't supported, these cannot be changed.

So READ COMMITTED SNAPSHOT is the default transaction isolation level, which may come as a surprise to those applications that depend on the read committed locking behavior of SQL Server, although so far there's been no big hue and cry. The READCOMMITTEDLOCK query hint works as expected, but if you're expecting read locks, you won't get them by default. And the other three locking-based isolation levels are available; SQL Azure is SQL Server after all, just with the two snapshot isolation switches turned on.

Remember also that the SQL Azure session timeout will roll back uncommitted transactions in progress (as it should). I was reminded of that while testing isolation levels and forgetting to commit a transaction.

There isn't much reference to this in the SQL Azure Books Online, and although I did find a reference to this in the SQL Azure FAQ, the FAQ says "snapshot isolation" is the default. Technically it's "read committed snapshot" (known also as "statement-level snapshot") that's the default, although the SQL Server snapshot isolation level (known as "transaction-level snapshot") is available and works as advertised.

This may be for the best, because you can't use either the dynamic management views or sp_lock to observe the locks in your instance/database in any case. A final point of interest is that application locks are supported, but the lack of visibility means they may be difficult to troubleshoot.
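
You can reproduce Bob’s check from ADO.NET with a quick sketch like this (connection values are placeholders):

```csharp
// Read the two snapshot switches Bob mentions from sys.databases.
// Connection-string values are placeholders.
using System;
using System.Data.SqlClient;

class IsolationCheck
{
    static void Main()
    {
        using (var connection = new SqlConnection(
            "Server=tcp:myserver.database.windows.net;Database=mydb;" +
            "User ID=admin@myserver;Password=<password>;Encrypt=True;"))
        using (var command = new SqlCommand(
            "SELECT name, snapshot_isolation_state, is_read_committed_snapshot_on " +
            "FROM sys.databases", connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Both switches report 1 (on) for every SQL Azure database.
                    Console.WriteLine("{0}: snapshot_isolation={1}, rcsi={2}",
                        reader.GetString(0), reader.GetByte(1), reader.GetBoolean(2));
                }
            }
        }
    }
}
```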

Dave Robinson reported on 2/10/2010 that the SQL Azure North Central US datacenter is online in Chicago:

I’m happy to announce that SQL Azure is now available in our North Central US datacenter.

Starting today, when creating a new SQL Azure server, there will now be four options in the region drop down - North Central US, South Central US,  East Asia, and North Europe.

Sorry for the short post. Bigger announcements coming soon… [Emphasis added.]

Perhaps Dave will announce geolocation capability for North Central US backups of South Central SQL Azure databases.

<Return to section navigation list> 

AppFabric: Access Control, Service Bus and Workflow

Carl Franklin and Richard Campbell interview Ron Jacobs on Azure AppFabric on 2/9/2010 in .NET Rocks Show #523:

Ron Jacobs talks to Carl and Richard about the Windows Azure platform AppFabric, which provides secure connectivity as a service to help developers bridge cloud, on-premises, and hosted deployments.

Ron Jacobs is a Sr. Technical Evangelist in the Microsoft Platform Evangelism group based at the company headquarters in Redmond, Washington. Ron's evangelism is focused on Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF). Since 1999 Ron has been a product and program manager on various Microsoft products including the .NET Framework, Windows Communication Foundation and COM+. A top-rated conference speaker, author and podcaster, Ron brings over 20 years of industry experience to his role of helping Microsoft customers and partners build architecturally sound and secure applications.

The Azure AppFabric Team has started a Windows Azure AppFabric Feature Voting Forum that’s similar to those for Windows Azure and SQL Azure. There weren’t many votes there as of 2/12/2010. Follow the team @Azure_AppFabric.

@AppFabric is an alias for both Windows Azure [Platform] AppFabric and Windows [Server] AppFabric. Apparently @Azure_AppFabric is a fork.

Brian Loesgen continues his video series about bridging the gap with BizTalk and Azure with his 00:08:27 Extending the Reach: Using a BizTalk Dynamic Send port to send to Windows Azure platform AppFabric Service Bus segment of 2/11/2010:

This video walks through how to use a BizTalk dynamic send port to send a message to the Windows Azure platform Service Bus. This uses the out-of-the box BizTalk Server 2009 WCF-Custom adapter in conjunction with the AppFabric SDK to achieve interoperability with only a trivial amount of code.

and 00:10:53 Extending the Reach: Using a BizTalk ESB Off-ramp to send to Windows Azure platform AppFabric Service Bus segment of the same date.

<Return to section navigation list>

Live Windows Azure Apps, Tools and Test Harnesses

•• JP Morgenthal’s The Real Impact of IT Owning Your SOA Effort essay of 2/11/2010 begins:

I had a great lunch discussion today regarding the impact of ownership of SOA within the business. As I have discussed multiple times, SOA is about business services, not technical services, but the reality of this point is lost in semantics.  The following illustrates the different perspectives that each have and how that impacts the overall SOA initiative. …

The Windows Azure Team suggests that you Stay Up To Date on Windows Azure Partner Programs and Resources with Windows Azure Platform Partner Hub in this 2/12/2010 post:

Are you a current or prospective Microsoft partner who wants to know more about our partner strategy in the cloud with Windows Azure?  Would you like a way to communicate, collaborate and get feedback to us about Windows Azure? Check out the newly revamped Windows Azure Platform Partner Hub to share stories, learn about tips and resources or get access to simulated custom dev projects and application migrations currently being done by Microsoft partners building on the Windows Azure Platform.  

To make the site more interactive and responsive, the team worked with Slalom Consulting, a Microsoft Gold Certified Partner, to deploy BlogEngine.NET on Windows Azure.  This is one of the first deployments of BlogEngine.NET on Windows Azure and you can read more about the project in the SlalomWorks blog post, New Azure Site Launched: Azurehub.com.

We hope you find the new partner hub website useful; let us know what you think.  What other resources would you like to see?  Please share your thoughts or suggestions by posting a comment; we look forward to hearing from you.

Brian Hitney produced the first two members of an Azure screencast Miniseries:

  • Azure Miniseries #2: Deployment (2/12/2010): In the first screencast, we took a look at getting a simple Azure web application up and running.  Now, we'll take a quick look at deploying your Azure web application and how the staging/production environments work.
  • Azure Miniseries #1: Migration (2/11/2010): This is the first screencast in an open-ended and short form series on Windows Azure.  In this screencast, we'll take a look at starting a new Azure web project by migrating an existing ASP.NET website.

Brian is a Microsoft Developer Evangelist in the DPE East region. Check out his WCF in an Azure WorkerRole blog post of 2/12/2010:

The other day, a colleague got in touch with me looking for help in getting a WCF service working in an Azure WorkerRole. It would work locally, but not deployed in the cloud. This is a common problem I’ve run into – for example, calling Open() on a ServiceHost will work locally, but no[t] in the cloud due to permissions.

I wasn’t much help in getting John’s situation resolved, but he pinged me about it a couple days later with the solution. The first step is to make sure your service has the correct behavior to respond to any address:

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]

The next step is to make sure you explicitly set the SecurityMode in the binding:

NetTcpBinding binding = new NetTcpBinding(SecurityMode.None); 

Web roles are different, as they are hosted in IIS and limited to HTTP.

Also, there are some good demos mentioned in this post on the MSDN forums that point to the Azure All-In-One demos on CodePlex.
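
Putting John’s two fixes together, a hosting sketch might look like the following. This is my illustration, not John’s code; the endpoint name, contract, and address scheme are assumptions:

```csharp
// Host a net.tcp WCF service in a worker role: apply AddressFilterMode.Any
// and an explicit SecurityMode, and listen on the role's declared endpoint.
// The endpoint name "EchoIn" and the IEcho contract are illustrative.
using System;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface IEcho
{
    [OperationContract]
    string Echo(string text);
}

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class EchoService : IEcho
{
    public string Echo(string text) { return text; }
}

public static class WorkerWcfHost
{
    public static ServiceHost OpenHost()
    {
        // The physical address comes from the endpoint declared in the
        // service definition file.
        var endpoint = RoleEnvironment.CurrentRoleInstance
                                      .InstanceEndpoints["EchoIn"].IPEndpoint;

        var host = new ServiceHost(typeof(EchoService));
        host.AddServiceEndpoint(typeof(IEcho),
            new NetTcpBinding(SecurityMode.None),            // explicit SecurityMode
            String.Format("net.tcp://{0}/echo", endpoint));
        host.Open();                                         // now succeeds in the cloud
        return host;
    }
}
```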

Adrian Studer explains why Open Text Corporation, a Canadian content management application developer, chose Windows Azure in this 00:03:04 Webcast:

One of the biggest advantages of Windows Azure is its low cost of operating. Open Text is ready to ship its new Archive Server product and used Windows Azure as a batch storage device instead of hardware. The main advantage of Windows Azure is that it is a complete story from Microsoft.

Adrian is Technical Alliance Manager of Open Text Corporation, Waterloo, Ontario, Canada.

ICS Solutions announced in a 2/12/2010 press release its Azure Advantage Program, which the UK firm describes as follows:

The Azure Advantage Program from ICS Solutions is a flexible framework of options that helps you to plan, build, deploy, manage and support applications in the Cloud and leverage the benefits of Cloud Computing in a flexible, rapid and viable manner.

The flexible suite comprises ten options to ensure your Azure Platform success:

  • Azure Inform: Get ready for Azure – educate, evangelise, justify
  • Azure Staffing: Agile Azure resourcing solutions
  • Azure Consult: Plan your Azure deployment with confidence
  • Azure Training: Flexible Azure learning options
  • Azure Strategy Briefing: Explore what is possible with Cloud Computing
  • Azure Project Accelerator: Accelerate your first Azure pilot
  • Azure .Net Migration: Applying .Net development to the Azure Platform
  • Azure Showcase: Expertise, ready made ISV Cloud solutions
  • Azure Build: Tailor made Azure solutions
  • Azure Support: Managed support service

Voices for Innovation’s Live Bing, Silverlight, OGDI, and Azure at the Olympic Games post of 2/11/2010 points to the live apps that use Azure, the Open Government Data Initiative (OGDI) or both.

Hanu Kommalapati’s Windows Azure Platform for Enterprises article in the February 2010 issue of MSDN Magazine carries this abstract:

Learn all about Microsoft’s Windows Azure platform at the architectural level and how it addresses enterprise cloud computing concerns including economics, security, storage and more. Included are an Azure pricing table and a sample cost calculator.

You can download his Excel-based Azure pricing calculator (part of the article) here.

My Determining Your Azure Services Platform Usage Charges post of 2/11/2010 includes an update for changes to the Online Bill page and the addition of Data Transfer Usage, SQL Azure and Windows Azure Usage Charges pages, as well as a sample monthly charge estimate (click images for full-size screen captures):

The three Usage Charges links lead to individual pages for Data Transfer, SQL Azure, and Windows Azure resources (combined in this screen capture):

Patrick Butler Monterde’s Windows Azure billing overview post of 2/10/2010 received this review from Simon Davies in his Windows Azure Service Administration and Configuration post of 2/12/2010:

Since the commercial release of Windows Azure many questions have been asked about the relationship between the billing portal and the developer portal, for example:

  • How does a subscription in the billing portal relate to the development portal?
  • What is the difference between the Account Owner and the Service Administrator?
  • How many services can I create in one project?
  • How many instances of compute is a service limited to?
  • Which limits are variable and how do I go about changing them?

Patrick Butler Monterde has put together a great post that explains the answers to all this and more. Please let us know if we missed anything.

Patrick’s post begins:

There have been a lot of questions regarding Windows Azure Platform billing and how it works. The diagram below shows the relationship between the components as currently configured; the configuration may change in the future:

[Diagram: Windows Azure Platform billing component relationships]

… A service can have a maximum of five roles per application. This can be any combination of different web and worker roles in the same configuration file, up to a maximum of 5. Each role can have any number of VMs. For example:

Standard 3 tier application: Web-Business-Data Tiers to Windows Azure Roles

[Diagram: three-tier application mapped to Windows Azure roles]

In this example the service has two roles. The Web Role (web tier) takes care of the Web interface, and the Worker Role (business tier) is responsible for the business logic. Each role can have any number of VMs/cores, up to the maximum available on the Project. …

Don’t miss Patrick’s detailed billing analysis for the three-tier application.

The Windows Azure team blog continues its series about early Windows Azure adopters with Real World Windows Azure: Interview with Michael Lucaccini, President of Archetype of 2/11/2010, which begins:

As part of the Real World Windows Azure series, we talked to Michael Lucaccini, President of Archetype, about using the Windows Azure technology platform for the company's media management system. Here's what he had to say:

MSDN: What services does Archetype provide?

Lucaccini: Archetype is an interactive design and engineering firm that specializes in developing Internet applications, branded experiences, and advanced user interfaces.

MSDN: What was the biggest challenge your company faced prior to implementing Windows Azure?

Lucaccini: Our Archetype Media Platform (AMP) system, which is built on the Microsoft .NET Framework, helps customers efficiently manage all of their media assets from upload through deployment. However, accommodating each customer's individual IT infrastructure resulted in time-consuming deployments, sometimes taking months. Adding to that, customers often faced steep entry costs or quickly outgrew initial server configurations.

MSDN: Describe the solution you built with the Windows Azure platform to help you address these common challenges with scalability, time-consuming deployments, and high entry costs for customers.

Lucaccini: To enable massive numbers of users to upload, edit, and redistribute media without a bottleneck on the server side, we migrated our AMP solution to the Windows Azure platform. It is a turnkey solution that helps save our customers a great deal in up-front costs. We're also taking advantage of Microsoft SQL Azure and Blob Storage to persist data. …

Jim Nakashima explains Windows Azure RoleEntryPoint Method Call Order in this 2/11/2010 post:

I saw an internal discussion that had some information I thought would be useful to share.  It’s about how some of the methods in RoleEntryPoint get called.

In the case of a Worker Role, the RoleEntryPoint class is the class you derive to write your code.  When you create a new Worker Role project in Visual Studio, you’ll see that the project contains one code file and in that code file there is a class called WorkerRole that derives from RoleEntryPoint.

Worker Role Call Order:

  1. WaWorkerHost process is started.
  2. Worker Role assembly is loaded and surfed for a class that derives from RoleEntryPoint.  This class is instantiated.
  3. RoleEntryPoint.OnStart() is called.
  4. RoleEntryPoint.Run() is called.
  5. If the RoleEntryPoint.Run() method exits, the RoleEntryPoint.OnStop() method is called.
  6. WaWorkerHost process is stopped.  The role will recycle and start up again.

For step 2 above, Windows Azure only loads one assembly and takes the first class that derives from RoleEntryPoint that it finds.  Visual Studio knows which assembly implements RoleEntryPoint based on the reference to the project under the Roles node. …


Jim continues with more details about the requirement for a RoleEntryPoint derived class.
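
A minimal worker-role skeleton makes the sequence easy to see; the loop body below is a placeholder:

```csharp
// The overridden methods below are invoked in the numbered order above.
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()   // step 3
    {
        // Configuration and diagnostics setup go here; return false to abort startup.
        return base.OnStart();
    }

    public override void Run()       // step 4
    {
        while (true)                 // if Run() returns, the role recycles (steps 5-6)
        {
            // Dequeue and process work here.
            Thread.Sleep(10000);
        }
    }

    public override void OnStop()    // step 5
    {
        // Clean up before the WaWorkerHost process stops.
        base.OnStop();
    }
}
```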

Dan Kasun’s Open Government Data and .NET post of 2/11/2010 describes a new project that ultimately will incorporate data from Microsoft Project “Dallas”:

As mentioned in my previous post [link added], I wanted to test the waters by writing a small app that allowed someone to enter a zip code, find all of their Congressional representatives, then find out the committees they're on, bills they've sponsored, how they vote based on some issues… and then provide me the tools to contact them, as well as enter a daily rating on how I think they're doing.

Despite being snowed in, it was a pretty busy day – so I only had an hour or so of free time to work on it.  Thus, tonight’s goal was just to find the representatives by zip, show their info and a picture, and then list their committees and bill sponsorships.  I’ll work on the rest tomorrow or through the weekend.

My technology choices:

  • .NET and Visual Studio
  • Windows Forms for the UI (simply because it’s fastest for me right now… I’ll switch to WPF or Silverlight once I’m through prototyping)
  • WCF REST Starter Kit – because I really dislike parsing XML.  I just started using it a day or two ago, and one of the most fantastic things it provides is a tool that automatically creates a .NET class from an XML file… so all you need to do is get a sample of the REST response, paste it in, et voila!  It creates a class for you – which you can then use to access the response data (see the sketch after this list).  You can get it here: http://bit.ly/JYxf
  • APIs: I’m going to use both the SunlightLabs API and the GovTrack.us API – they’re actually redundant, since the Sunlight APIs are really just a wrapper around GovTrack.us – but I wanted to mix it up a bit.  Both have REST interfaces – and the specific APIs I’m using are:
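
Here’s a hedged sketch of that pattern using the Starter Kit’s HttpClient; the Sunlight Labs URL parameters and the Legislator type (standing in for a class generated by the paste-XML tool) are hypothetical:

```csharp
// Call a REST API with the WCF REST Starter Kit's HttpClient and
// deserialize the XML response into a generated type. The URL, API key,
// and Legislator class are illustrative stand-ins.
using Microsoft.Http; // ships with the WCF REST Starter Kit

public class Legislator // stand-in for the class "Paste XML as Types" generates
{
    public string firstname { get; set; }
    public string lastname { get; set; }
}

class RestSketch
{
    static void Main()
    {
        using (var http = new HttpClient())
        using (HttpResponseMessage response = http.Get(
            "http://services.sunlightlabs.com/api/legislators.get.xml?zip=20004&apikey=<key>"))
        {
            response.EnsureStatusIsSuccessful();
            Legislator who = response.Content.ReadAsXmlSerializable<Legislator>();
            System.Console.WriteLine("{0} {1}", who.firstname, who.lastname);
        }
    }
}
```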

Dan is a member of the Microsoft Public Sector DPE Team and posted Vote for the BEST Windows Azure Application for State & Local Gov Partners the same day:

Let your voice be heard and spread the word!  The submissions are in and public voting is now open for the Windows Azure Application Development Contest targeted towards State and Local Government agencies.  Click here to vote for the best application!

Voting is open until Feb 19th at 5pm PST. Check out the submissions and don’t miss your chance to vote - Vote Now!

Microsoft Partners are counting on your vote:

  • First Prize will win $10,000 USD, Second Prize will win $5,000 USD and Third Prize will win $2,500 USD.  An additional Bonus Prize will be awarded for the best application that integrates with Dynamics CRM Online and/or Microsoft Business Productivity Online Suite.
  • Winners will be posted online at www.microsoftps.com following the announcement at the Microsoft CIO Summit on February 25, 2010. The list of winners will remain posted on www.microsoftps.com until March 28, 2010.
  • Additional Rules and information for the contest can be found here:  www.microsoft.com/government/azure

Ryan Dunn added a pointer to Karsten Januszewski’s Incarnate application for MIX 2010 in his Do you Incarnate? post of 2/10/2010:

It wasn't too long ago when Karsten Januszewski came to my office looking for a Windows Azure token (back when we were in CTP).  I wondered what cool shenanigans the MIX team had going.  Turns out it was for the Incarnate project (explained here).  In short, this service finds all the different avatars you might be using across popular sites* and allows you to select an existing one instead of having to upload one again and again. …

Since the entire Incarnate service is running in Windows Azure, I was interested in Karsten's experience:

“We chose Windows Azure to host Incarnate because there was a lot of uncertainty in traffic.  We didn't know how popular the service would be and knowing that we could scale to any load was a big factor in choosing it.”

Karsten’s Incarnate page says:

Incarnate is powered in-part by Windows Azure.

<Return to section navigation list> 

Windows Azure Infrastructure

•• James Hamilton’s Scaling FarmVille analysis of 2/13/2010 illustrates the need for instant, gigantic scalability of Facebook apps:

Last week, I posted Scaling Second Life. Royans sent me a great set of scaling stories: Scaling Web Architectures and Vijay Rao of AMD pointed out How FarmVille Scales to Harvest 75 Million Players a Month. I find the Farmville example particularly interesting in that it’s “only” a casual game. Having spent most of my life (under a rock) working on high-scale servers and services, I naively would never have guessed that casual gaming was big business. But it is. Really big business. To put a scale point on what "big" means in this context, Zynga, the company responsible for Farmville, is estimated to have a valuation of between $1.5B and $3B (Zynga Raising $180M on Astounding Valuation) with annual revenues of roughly $250M (Zynga Revenues Closer to $250M).

The Zynga games portfolio includes 24 games, the best known of which are Mafia Wars and FarmVille. The FarmVille scaling story is a great example of how fast internet properties can need to scale. The game had 1M players after 4 days and 10M after 60 days.

In this interview with FarmVille’s Luke Rajich (How FarmVille Scales to Harvest 75 Million Players a Month), Luke talks about scaling what he refers to as both the largest game in the world and the largest application on a web platform. FarmVille is a Facebook application and peak bandwidth between FarmVille and Facebook can run as high as 3Gbps. The FarmVille team has to manage both incredibly fast growth and very spikey traffic patterns. They have implemented what I call graceful degradation mode (Designing and Deploying Internet-Scale Services) and are able to shed load as gaming traffic increases push them towards their resource limits. …

Check out the High Scalability article How FarmVille Scales to Harvest 75 Million Players a Month by Todd Hoff for more details, and see Rick Miller’s article about “Centers launched in low-power circuits, cloud computing” below.

•• The SearchCloudComputing.com staff’s The Daily Cloud: VMware's Jackson takes a shot at Windows Azure article of 2/12/2010 reports:

Does someone need a hug, or perhaps a specialized high-performance virtualization platform to take advantage of their near-monopoly in the enterprise data center? VMware honcho Rick Jackson took the gloves off at the software maker's annual partner event this week, claiming that Microsoft doesn't even use its own hypervisor product, Hyper-V, in its cloud service, Windows Azure.

That's a low blow, although watching VMware and Microsoft squabble over business tactics is like watching a one-legged man in a sack fight with the Jolly Green Giant.

It's also entirely academic, since Microsoft, unlike VMware, isn't trying to sell anyone on buying Azure's technology. It's just trying to get people to use and pay for it, so it could be running KVM on CentOS or Xen, for all users will care.

•• Rick Miller reports “Centers launched in low-power circuits, cloud computing” in his Berkeley discusses progress in parallel programming article of 2/11/2010 for EETimes magazine:

Berkeley professor Michael Franklin formally announced the AMP Lab, a new research center seeking to drive cloud computing to the next level. The center aims to address what Franklin called the scalability problem involving algorithms, machine learning and people.

Machine learning algorithms and data analytics don't scale to increasingly large and complex data sets. Meanwhile cloud services lack crowd-sourcing tools to harness large groups of people over the Internet to tackle shared problems.

The lab is a spin-out of a Berkeley center developing software that will help individuals use cloud computing to launch new Web services. The new lab wants to enable many people to collaborate to collect, generate, clean and make sense of large data sets, he said.

View UC Berkeley’s Reliable Distributed [Systems] Lab (RAD Lab) report: AMPLab: Exploring Big Data with Algorithms, Machines and People presentation of 1/11/2010 by Michael Franklin, Alex Bayen, Armando Fox, Michael Jordan, Anthony Joseph, Randy Katz, Dave Patterson, Scott Shenker and Ion Stoica.

These are most of the folks that brought you the Above the Clouds: A Berkeley View of Cloud Computing paper almost exactly one year ago (2/10/2009). Use this search to find previous references to Above the Clouds in the OakLeaf blog (search the returned posts for responses by Pat Helland, Chris Hoff, Paul Miller (podcast), and David Linthicum (podcast)).

Note: Don’t confuse AMPLab with UC Berkeley’s RAMPLab (Research Accelerator for Multiple Processors) or PARLab (the Parallel Computing Laboratory).

Sam Johnston announced the start of a cloud-computing article syndication service in his Introducing Planet Cloud: More signal, less noise post of 2/9/2010:


As you are no doubt well aware there is a large and increasing amount of noise about cloud computing, so much so that it's becoming increasingly difficult to extract a clean signal. This has always been the case but now that even vendors like Oracle (who have previously been sharply critical of cloud computing, in part for exactly this reason) are clambering aboard the bandwagon, it's nearly impossible to tell who's worth listening to and who's just trying to sell you yesterday's technology under today's label.

It is with this in mind that I am happy to announce Planet Cloud, a news aggregator for cloud computing articles that is particularly fussy about its sources. In particular, unless you talk all cloud, all the time (which is rare - even I take a break every once in a while) then your posts won't be included unless you can provide a cloud-specific feed. Fortunately most blogging software supports this capability and many of the feeds included at launch take advantage of it. You can access Planet Cloud at http://www.planetcloud.org/ or @planetcloud. …

David Linthicum continues his concentration on doubts about the future of Windows Azure with a Can Microsoft Make it in the Cloud? podcast of 2/12/2010. I take issue with most of his conclusions in my An Answer to David Linthicum’s Questions about the Prospects for Windows Azure in his 2/12/2010 Podcast post of 2/12/2010.

Lori MacVittie’s Users use Applications. Applications use Clouds post of 2/12/2010 takes issue with this statement by a Wall Street Journal writer: “Broadly speaking, any service or program sent over an Internet connection can be considered a cloud service.”


Preparing for the upcoming Cloud Connect conference, several speakers and presenters have put forth the proposal that no one should attempt to define cloud yet again. After all, if you’re attending the conference (and you are attending, of course, aren’t you?) then you certainly have a firm understanding of what cloud computing is and what it can do. But most end-users and business stakeholders won’t be attending and don’t have a firm understanding of cloud computing. Even the technology pundits to whom these constituents turn to learn about the technology often fail to really “get” cloud computing, as evinced by the number of “what is cloud computing” articles that have sprung up in mainstream publications lately. Many of these articles define cloud computing necessarily; it’s the focus of the article, after all. But the definitions proffered clearly indicate that there are still equal parts confusion and interest in cloud computing from, well, the mainstream. Take this statement from the Wall Street Journal regarding cloud computing:

“Broadly speaking, any service or program sent over an Internet connection can be considered a cloud service.”

No, actually, it can’t. Or more correctly – because there’s no vocabulary or definition police in the technology sector – it shouldn’t be.

Cloud computing is not a synonym for cloud. And vice-versa. Cloud computing is perhaps the first case of “technology for technology’s sake” that is actually a good thing. That’s because cloud computing is for applications. It’s not for users, it’s for applications. A cloud computing environment without an application is pretty much useless. A dynamic collection of compute resources that remains unfulfilled, idly spinning disks and catching CPU interrupts willy nilly without purpose.

Jeffrey M. Kaplan claims “Microsoft's star has been dimming the past few years, while other companies have been transforming the technology industry with the cloud computing model. Yet Microsoft founder Bill Gates saw the services revolution coming years ago, so it's not as though the company was caught by surprise. What's likely is that Microsoft will once again come late to the party -- and perhaps take it over” in his Microsoft Poised to Regain Momentum in 2010 article of 2/12/2010 for eCommerce Times:

No software company has been more threatened by the rapid growth of Software as a Service (SaaS) and the broader cloud computing phenomenon than Microsoft (Nasdaq: MSFT).…

Microsoft's Azure PaaS solution has also received positive grades from early users and is now available to developers for full-fledged production purposes. Azure may not be the best-of-breed PaaS leader, but Microsoft's market power will make it the preferred development platform for many ISVs and enterprise developers.

I expect Microsoft to push a partner-friendly strategy to promote its Azure PaaS. Its experience working with ISVs can give Microsoft a competitive advantage over Salesforce.com, Google and Amazon, who are still learning how to work with third-party developers and have already encountered conflicts with their new channel partners.

Encouraging third parties to develop SaaS solutions on Azure also permits Microsoft to mitigate the risk of cannibalizing its legacy on-premise applications and to minimize the potential for channel conflicts.

Microsoft received two major endorsements of cloud strategies last month. HP (NYSE: HPQ) and Microsoft announced a US$250 million joint venture to develop and deliver cloud solutions. Intuit (Nasdaq: INTU) also announced that it will team with Microsoft to link their respective PaaS offerings together to give developers a broader set of functional and go-to-market capabilities.

So, once again, Microsoft may be a late entrant in the market with a set of solutions that lag those offered by today's industry innovators, but it is still in a good position to regain its momentum and become a dominant force in the rapidly evolving cloud computing marketplace.

Lori MacVittie claims “If developers will not write “virtualization aware” applications, who will? The future of application development platforms may be at stake…” and predicts the Return of the Web Application Platform Wars in her 2/11/2010 post:

Right now developers are packaging up applications in virtual machines and deploying them. That’s according to, well, every survey you find related to virtualization and cloud computing. Joe McKendrick, citing the latest Evans Data Cloud Development Survey, noted that “sixty-one percent of 400 developers in Evans Data Corp’s recent Cloud Development Survey report that at least some of their IT resources will move to the public cloud within the next year.”

But even given the number of developers deploying applications in virtualized environments, internal and external, it’s probably true that they aren’t writing “virtualization aware” applications.

Why? Because “virtualization aware” applications require some specific development at the operating system/driver layers. Layers that very few developers ever touch, especially those who develop atop existing application “platforms” such as Java EE, Microsoft, a LAMP stack, or Ruby. Developers writing applications that target these environments rarely, if ever, write as low as even the TCP socket layer. Diving deeper into the operating system/driver layer is simply not something they’re likely to do. Yet the advantages of writing to these layers is higher efficiency of the application as well as increased performance, because it effectively bypasses the additional layers of abstraction introduced by virtualization technology. …

InformationWeek Analytics announced on 2/11/2010 the availability of its Informed CIO: Cloud Contracts and SLAs analysis by Jonathan Feldman (US$99.00 for download):

Cloud computing is approximately where the Internet was more than a decade ago: full of both promise and hype, and constantly changing. As companies evaluate cloud services, CIOs must help them reach business objectives and save money, while also serving in a stewardship role.

A stepwise process is what’s needed. IT must assess a business unit’s needs and help business managers come to good decisions, while balancing risk, fiscal impact, and flexibility.

If the decision goes in favor of a cloud approach, you need to figure out how to proceed, and what to do if and when things go wrong. Some of the questions that IT professionals need to ask as they evaluate cloud services include: What’s the use case? What’s the risk/benefit profile? And, what happens if the provider completely fails?

Getting the answers to these and other key questions will help you determine if cloud computing makes sense, evaluate providers, determine if you need an SLA, and, if so, how to craft a strong agreement.

Ray de Pena continues his series on 2/11/2010 with Innovation and Risk in the Clouds (Part 2 of 3):

In part 1 of innovation and risk in the clouds, I focused on the alignment between strategy, organizational culture, innovation, and cloud computing.  In part 2 we will discuss some of the risks and challenges cloud computing presents.

Do you remember the last time you took a risk?  The first time?  Do you remember the first time you saw your company take a meaningful risk?

In business there are some very memorable moments in risk taking.  Which ones do you recall?  One of my favorites is the Apple, Xerox, Microsoft, and IBM story told in the Pirates of Silicon Valley docudrama.

It’s a classic tale of a large enterprise not fully recognizing the value of technology, and a smaller organization being sufficiently bold and nimble to capitalize on that mistake.

As a former project manager, one of the activities I enjoyed most when managing projects was risk planning.  It slowed us down and forced us to think through our activities, identify risks, analyze the risks, and plan appropriately.  Some believe the speed of being a first mover, and its advantage, is enough, but it’s not simply the fastest car that wins the Indianapolis 500, the 24 Hours of Le Mans and the Monaco Grand Prix; it’s the team that best manages the risks inherent in participating at such events.  The risk of a tire blowout is very real, and is a risk that needs to be properly managed.

Similarly, organizations need to be structured with the people, resources, processes, and strategic focus to adopt risk-taking endeavors in an acceptable and measured way.

David Linthicum asserts “Internet-based delays in cloud-delivered apps can occur, but saying that this means the whole cloud concept is bad misses the reality of what the cloud is” in his Hey John C. Dvorak, you're a bit behind on the cloud post of 2/11/2010:

You've got to love John C. Dvorak, the host of the Cranky Geeks podcast and columnist for PC Magazine (for which I used to write as well). I've found John to be typically spot-on when it comes to technology analysis. That said, I figured I would make sure to enlighten him, just a bit, on his recent column entitled "Hey Microsoft, get out of the cloud."

John wrote, "But the cloud stinks. Its applications have always been much slower than their desktop counterparts. Try to get to the end cell of a large cloud-based shreadsheet. You'll long for the desktop version. The whole process is exacerbated by the speed of the Internet. The Internet is also unreliable. A couple of weeks ago, I was down for two hours. A month ago, I lost my connection for 20-plus hours."

John further complains that you're at the mercy of the cloud computing provider, and that it can deny you service at any time. However, he does talk about the potential use of cloud computing to make traditional enterprise IT less expensive. …

Eric Gray prefaces his Hypervisor Footprint Quiz post of 2/10/2010 with “What Windows Azure and VMware vSphere Have in Common?”:

Remember the hypervisor footprint debate?  You know, the one where Microsoft Virtualization declares that it is nothing but VMware FUD to tout the benefits of a small-footprint hypervisor.

I just found another point of view on hypervisor footprint size — take a look at this excerpt:

Small footprint: any features not applicable to our specific … scenarios are removed.  This guarantees that we do not have to worry about updating or fixing unnecessary code, meaning less churning or required reboots for the host.  All critical code paths are also highly optimized for our … scenarios.

Any guesses where that came from?  Must be more of that VMware FUD!

Actually, it was one of the Windows Azure design principles.  Which makes sense if you think about it — vSphere, with small-footprint ESXi,  is the perfect foundation for cloud computing.

Don’t miss the comments to this post.

Brian J. Dooley’s Architectural Requirements Of The Hybrid Cloud article of 2/10/2010 for Information Management begins:

Cloud computing continues to gain momentum as a description of service offerings based on a virtualized data center infrastructure and provided over the Internet on an as-needed basis. Public clouds, such as Amazon EC2, first brought attention to this model, followed by private clouds built within an organization, as exemplified by IBM's Blue Cloud initiative. Both public and private clouds have been found to have advantages for the enterprise, but most analysts now agree that the real power of the cloud concept lies in a marriage between the two -- the hybrid cloud.

A hybrid provides services using a mixture of private and public clouds that have been integrated to optimize service. The promise of the hybrid cloud is to provide the local data benefits of the private clouds with the economies, scalability, and on-demand access of the public cloud. The hybrid cloud remains somewhat undefined because it specifies a midway point between the two ends of the continuum of services provided strictly over the Internet and those provided through the data center or on the desktop. Today, almost every enterprise could be said to have an IT infrastructure containing some elements of both extremes. Meshing them into a common structure is what becomes interesting and offers a range of new possibilities in handling local and cloud-based data, but it also introduces a range of complexities in data transfer and integration.

In its most mature form, the hybrid cloud is a private cloud linked to one or more external cloud services, centrally managed, provisioned as a single unit, and circumscribed by a secure network (see Figure 1). Each cloud will have a similar infrastructure and will be based on standards permitting interoperability, making it possible to optimize processing and data location according to such issues as load-balancing requirements, regulatory and security concerns, efficiency of operation, and data-transfer necessities. Each cloud will be used for different purposes, depending on available services and costs, and movement between clouds will be simple and relatively painless.

Figure 1 -- The hybrid cloud: integrating multiple clouds in a secure network.

This vision of the hybrid cloud is, at present, a projection. Currently, interoperability is somewhat limited at various points, including at the virtualization hypervisor level; data transfer also remains problematic, as is integration between applications in separate clouds. But these problems are being worked on, and market demand will help to ensure integration. …

Brian J. Dooley is a Consultant with Cutter Consortium.

Finance & Commerce “Minnesota’s only business daily” reported on 2/5/2010 Cloud computing a priority in survey:

A survey of 150 IT decision makers at small and large companies across the country found that three-quarters of companies identified the development of cloud computing as a priority for 2010.

ReliaCloud, a national IT infrastructure company based in Minneapolis, found that a majority, 85.3 percent, of decision makers were either currently implementing cloud computing services or had plans to do so within the next 12 months.

A majority (95.4 percent) of IT decision makers believed that cloud computing would either radically shift or have a definite impact on how technology services will be provided within their companies. As for the services that would be best suited to cloud computing, the survey indicated that Web applications, databases and data storage topped the list. 

Issues like up time/high availability, performance and cost savings were cited as the top reasons for using cloud computing, while security and support were cited as the top barriers to adopting the technology. …

This is a much higher percentage of firms implementing or intending to implement cloud computing than I’ve seen in similar surveys with a wider geographical scope. Are Minnesotans early adopters?

<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) announces The Automated Audit, Assertion, Assessment, and Assurance API (A6) Becomes: CloudAudit in this 2/12/2010 post:

I’m happy to announce that the Automated Audit, Assertion, Assessment, and Assurance API (A6) working group is organizing under the brand of “CloudAudit.”  We’re doing so to enable reaching a broader audience, ensure it is easier to find us in searches and generally better reflect the mission of the group.  A6 remains our byline.

We’ve refined how we are describing and approaching solving the problems of compliance, audit, and assurance in the cloud space and part of that is reflected in our re-branding.  You can find the original genesis for A6 here in this series of posts. Meanwhile, you can keep track of all things CloudAudit at our new home: http://www.CloudAudit.org.

The goal of CloudAudit is to provide a common interface that allows Cloud providers to automate the Audit, Assertion, Assessment, and Assurance (A6) of their environments and allow authorized consumers of their services to do likewise via an open, extensible and secure API.  CloudAudit is a volunteer cross-industry effort from the best minds and talent in Cloud, networking, security, audit, assurance, distributed application and system architecture backgrounds.

Our execution mantra is to:

  • Keep it simple, lightweight and easy to implement; offer primitive definitions & language structure using HTTP(S)
  • Allow for extension and elaboration by providers and choice of trusted assertion validation sources, checklist definitions, etc.
  • Not require adoption of other platform-specific APIs
  • Provide interfaces to Cloud naming and registry services …

… If you would like to get involved, please join the CloudAudit Working Group or visit the homepage here.

CloudAudit is a much better name, IMO.

Teresa Carlson interviews Pat Arnold, Microsoft’s Trustworthy Computing Group Chief Technology Officer in her Open Source Security – The Myth of “Many Eyeballs” post of 2/12/2010:

Earlier this year I posted a blog entry on open source software which sparked some vigorous debate.  The post was focused on the need for open standards (regardless of how software is developed), but also touched on the notion that open source code is inherently more secure than proprietary software.  This belief is often described as the “many eyeballs” approach to software security – the more sets of eyes that review the code, the greater the chance that security flaws will be identified and fixed.  To examine this line of thinking, I interviewed Pat Arnold, Microsoft’s Trustworthy Computing Group Chief Technology Officer.

Teresa:  Pat, what are your thoughts on the “many eyeballs” approach to security?

Pat:  Through the years there have been many discussions regarding the security of open source software (OSS) versus that of proprietary commercial software.  The prevailing viewpoint within the OSS community appears to be that thousands of worldwide developers with open access to source code are going to achieve better software security than proprietary development teams.  While a seemingly logical conclusion, alone, the “many eyeballs” approach to software security is simply flawed. …

Teresa is Vice President of Microsoft Federal.

Subra Kumaraswamy offers links to three security-related Webcasts in this recent Cloudbook post:

Subra Kumaraswamy has more than 18 years of engineering and management experience in information security, Internet, and e-commerce technologies. He is currently leading an Identity & Access Management program within Sun Microsystems. Subra has held leadership positions at various Internet-based companies, including Netscape, WhoWhere, Lycos, and Knowledge Networks. He was the cofounder of two Internet-based startups, CoolSync and Zingdata. He also worked at Accenture and the University of Notre Dame in security consulting and software engineering roles. In his spare time, Subra researches emerging technologies such as cloud computing to understand the security and privacy implications for users and enterprises. Subra is one of the authors of Cloud Security and Privacy, which addresses issues that affect any organization preparing to use cloud computing as an option. He's a founding member of the Cloud Security Alliance as well as cochair of the Identity & Access Management and Encryption & Key Management workgroups. Subra has a master's degree in computer engineering and is CISSP certified.

Subra is [currently] Security Manager at Sun Microsystems, Inc. I wonder who his next employer will be.

<Return to section navigation list> 

Cloud Computing Events

Chris Hoff (@Beaker) is Pimping the Security Non-Cons: Troopers 2010 in this 2/12/2010 post:

My friends at ERNW in Germany are putting on another fantastic security conference this year. I was lucky enough to attend Troopers ‘08 in Munich and this year it’s in Heidelberg.  Check out the details here.

“TROOPERS10 – This time it’s a home match.

This year we’re bringing back the action right to the place where everything started: Heidelberg, Germany.

In 2007 the idea of a security conference without the usual product presentations, marketing blabla, and bull*ht-bingo was born – just pure practical IT security. After an enthusiastic response from our audiences in Munich we decided to evolve the concept into a full-blown conference combined with a series of workshops and round tables.

We’re inviting (C)ISOs, IT auditors, sysadmins, security consultants and everyone who is involved with IT security to come to Heidelberg and get in touch with leading experts from all over the world. A number of workshops on monday and tuesday covers highly relevant topics in detail, on wednesday and thursday you’ll learn about the latest developments, threats and achievements from world class security evangelists, experts and hackers. And on friday we seat you on round tables right next to the speakers and fellow experts. You’ll be able to discuss your own strategies and concerns with them face-to-face. You will be listened to, because in the end of the day we’re all the same: TROOPERS in the infosec world.”

I’ll be posting a couple of other excellent conferences shortly.

David Bowermaster announced in a brief Saturday: Brad Smith Talks Cloud Computing on C-SPAN article in Microsoft on the Issues post of 2/12/2010:

As you may have read here recently, Microsoft General Counsel Brad Smith delivered a speech on cloud computing at the Brookings Institution in Washington, D.C. last month.

In his remarks,  Brad described the many ways cloud computing can increase the efficiency and transparency of government and other parts of society, and noted areas where the development of cloud computing must be carefully managed,  particularly when it comes to privacy and data security.   He encouraged industry and policymakers to take action to build confidence in cloud computing, and proposed the Cloud Computing Advancement Act to promote innovation, protect consumers and provide government with new tools to address the critical issues of data privacy and security.

While Brad was in D.C. he stopped by C-SPAN to film a segment of The Communicators, a “weekly series featuring a half-hour interview with the people who shape our digital future.”  Brad spoke with C-SPAN producer Pedro Echevarria and Wyatt Kash, Editor-in-Chief of Government Computer News, about “The Future of Cloud Computing.”

The episode will air on C-SPAN Saturday at 6:30 p.m. ET, and will be rebroadcast Monday on C-SPAN2 at 8:00 a.m. and 8:00 p.m. ET.

Looks to me as if Brad Smith is enjoying his new role as a regulatory evangelist for Windows Azure.

Voices for Innovation reports live Bing, Silverlight, OGDI, and Azure at the Olympic Games on 2/11/2010, describing the live apps that use Azure, the Open Government Data Initiative (OGDI), or both.

The European ISV Convention, run by IT Europa, will be held at The Tower Hotel, London on 2/25/2010. According to a 2/12/2010 press release:

Cloud Computing and its profound impact on European ISVs and the software development community will be a major focus at the third European ISV Convention, run by IT Europa and being held at The Tower Hotel, London on 25th February 2010.

"ISVs and Software Developers have been reeling over the last few years adapting to SOA methodologies of product development and SaaS delivery and business models," said John Chapman, Content Director of the European ISV Convention 2010. "However, now as they start to absorb the implications of Cloud Computing they are rapidly revisiting their business strategy and radically rethinking their go to market plans."

At the European ISV Convention 2010, major organisations involved in Cloud Computing, like IBM, Microsoft, Flexera, Ness Technologies, Azlan, Mitel, Cisco, Motorola and the Software and Information Industry Association (SIIA), will give their views on how this marketplace will evolve, debate these issues with delegates and other sponsors, and meet in boardroom and 1:1-style sessions.

"ISVs and Software Developers have become aware that Cloud Computing rewrites the rules on how business applications will be delivered to enterprises and small to medium size organisations," John Chapman continued. "It enables the development of new business process and mobility applications that will revolutionise the way organisations run their operations. The implications for developers is that they will need new business partners to help them understand these new issues, provide some of the technologies and skills as well as open up new markets and business opportunities." …

Brian Hitney’s MSDN Events and Roadshows Coming Soon… post of 2/11/2010 describes the forthcoming Southern Fried Roadshow:

Top line:  March 3rd, we’ll be in Raleigh, and March 5th, we’ll be in Charlotte for our next MSDN Event and Southern Fried Roadshow. 

This time it’s a full day of Azure – if you have an interest in cloud computing, be sure to come out!  See you then!

MSDN Events presents:  Take Your Applications Sky High with Cloud Computing and the Windows Azure Platform

Join your local MSDN Events team as we take a deep dive into cloud computing and the Windows Azure Platform. We’ll start with a developer-focused overview of this new platform and the cloud computing services that can be used either together or independently to build highly scalable applications. As the day unfolds, we’ll explore data storage, SQL Azure, and the basics of deployment with Windows Azure. Register today for these free, live sessions in your local area.

  1. Overview of Cloud Computing and Windows Azure
  2. Survey of Windows Azure Platform Storage Options
  3. Going Live with your Azure Solution
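
Session 2’s storage survey will presumably cover the table service; as a taste of what that code looks like, here’s a minimal sketch against the StorageClient library that ships with the Windows Azure SDK. The Tasks table and TaskEntity type are hypothetical examples of mine, not session materials.

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// A minimal entity: every table row needs a PartitionKey and a RowKey.
public class TaskEntity : TableServiceEntity
{
    public TaskEntity() { } // parameterless ctor required by the serializer

    public TaskEntity(string project, string taskId)
        : base(project, taskId) { }

    public string Description { get; set; }
}

public class TableStorageSketch
{
    public static void Main()
    {
        // Development storage keeps the sketch self-contained; swap in
        // CloudStorageAccount.Parse("...") for a real storage account.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("Tasks");

        // TableServiceContext is an ADO.NET Data Services context over tables.
        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("Tasks", new TaskEntity("Demo", "0001")
        {
            Description = "First entity written from the sketch"
        });
        context.SaveChanges();
    }
}
```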

Randy Bias continues his story of the Cloud Migrations Track @Cloud_Connect 2010 in this 2/10/2010 post:

I wanted to follow up from yesterday’s post on the upcoming Cloud Connect event.  In particular, I want to talk a bit about the track I’m leading: Cloud Migrations.  The focus of the track is to talk about:

  • How to adopt cloud now
  • Choosing between clouds: internal, external, or both?
  • Real world example use cases
  • Understanding how clouds are built when creating a strategy

There are some great panelists attending, but I wanted to call out a few in particular that might pique your interest.  BTW, all of the panelists on this track are amazing; I’m just picking a few to highlight.

Randy goes on to list the panelists and their qualifications.

Microsoft Technology User Groups Ireland (mtug.ie) announced a Windows Azure Services Platform session by Cormac Keogh on 2/23/2010 at 7:00 PM at the Imperial Hotel, South Mall, Cork:

This session will give an overview of Microsoft’s cloud computing platform offering, Windows Azure, explain what’s included in Windows Azure and the Windows Azure AppFabric, and describe why these developments are important to customers of various sizes, ranging from small Independent Software Vendors (ISVs) to larger enterprises.

Some use cases for Windows Azure will be explored, along with the pricing model.

The session will show how to get started with the Toolkit and get some sample applications deployed to the Platform.
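
As a taste of what getting started looks like, here’s a minimal worker-role sketch along the lines of what the Windows Azure SDK’s Visual Studio templates generate; the trace message and sleep interval are my own placeholder choices.

```csharp
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Wire up diagnostics so Trace output is collected; the connection
        // string name matches the SDK template's default setting.
        DiagnosticMonitor.Start("DiagnosticsConnectionString");
        return base.OnStart();
    }

    public override void Run()
    {
        // The instance is recycled if Run() ever returns, so loop forever.
        while (true)
        {
            Trace.WriteLine("Working", "Information");
            Thread.Sleep(10000);
        }
    }
}
```

From there, the SDK’s cspack and csrun command-line tools (or Visual Studio’s Publish command) package the role and run it in the local development fabric before you upload the resulting .cspkg through the Windows Azure Developer Portal.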

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Reuven Cohen asserts Amazon EC2's Greatest Threat is Cloud Regionalization in this 2/12/2010 post that begins:

Earlier today I tweeted that I believe the biggest threat to Amazon EC2 isn’t found in any single IaaS cloud provider, but instead is collectively the hundreds of regional cloud providers in the midst of launching public services. The tweet started a bit of a storm with those who believe that the economies of scale enjoyed by large IaaS providers will ultimately spell doom for the smaller regional players, who will never be able to compete directly with the IBMs, Googles, Microsofts and Amazons of the world.

I respectfully disagree. First of all, it’s now fairly obvious to most in the web hosting and data center space that the hosting world is moving en masse to cloud-based infrastructure. This isn’t a prognostication; this is happening today, with hosting companies and regional telcos in almost every region of the world either building or launching public cloud offerings. Just one example is Enomaly ECP customer City Networks, which launched late last year. City Networks was an early mover, offering the first cloud service in Sweden, and within the first month of operation they had hundreds of local customers using their service. Providing further proof that location matters, their customers could easily have chosen a broader regional service provider, one that offered an "EU" cloud, but instead they chose a local, smaller regional cloud provider.

More established vendors like Cisco, IBM, EMC, VMware and Microsoft also see the opportunity found within regional cloud service providers and are all now actively going after this market. So, to be as direct as possible: cloud enablement appears to be the biggest market currently for cloud computing, and it’s all about broad federated / distributed scale. Just ask one of my sales guys, who are continuing to see a flood of inbound inquiries for our service provider platform from more than 40 countries in the last few weeks alone. …

Rackspace Hosting says its “… offering includes an intuitive control panel that allows administrators to quickly initiate SharePoint” in a Rackspace Puts Microsoft SharePoint in the Cloud press release of 2/11/2010:

Rackspace Hosting on Thursday announced the launch of hosted Microsoft SharePoint, the collaboration and file sharing platform, backed by Rackspace’s 24x7x365 Fanatical Support. With the addition of hosted Microsoft SharePoint, Rackspace's suite of cloud applications now includes business-class collaboration, email and storage solutions.

“With SharePoint, Rackspace is offering businesses the most powerful file sharing and collaboration software on the market today,” said Pat Matthews, General Manager of Rackspace Email & Apps. “With our cloud delivery model, businesses can take advantage of all the power Microsoft SharePoint has to offer without the hassle of having to manage the software in-house. Microsoft SharePoint backed by Fanatical Support is the recipe our customers have been asking for.”

The Rackspace offering includes an intuitive control panel that allows administrators to quickly initiate SharePoint, add/remove users, and create SharePoint sites with only a few steps. The control panel also gives administrators the ability to manage Rackspace Email & Apps’ full suite of applications, including hosted Microsoft Exchange, Rackspace Email, Email Archiving, and Rackspace Cloud Drive and Server Backup, in one dashboard.

Ed Sperling interviews Salesforce.com CIO Kirsten Wolberg in this Salesforce.com: Working In A Cloud: A peek inside the IT operation at Salesforce.com article of 1/25/2010 for Forbes magazine, which I missed when posted:

Mention cloud computing and software as a service and one name immediately pops up: Salesforce.com.

Since it was created in 1999 as a provider of customer relationship management software, the company has become IT's test bed for the cloud-based model. Forbes caught up with Salesforce.com CIO Kirsten Wolberg to glimpse the internal operation at the world's most established cloud operation and discover what lessons the company has learned over the years. …

The SearchCloudComputing staff reports The Daily Cloud: Engine Yard partners with Terremark to enhance VMware's shine on 2/11/2010:

Ruby on Rails Platform as a Service provider Engine Yard is partnering with Terremark to offer dedicated, fenced-off infrastructure resources for enterprises that want to free up developers but still maintain security.

That means that people who want Engine Yard's features without having to share space with other people's grubby Web apps can go to Terremark and ask them for a slice of Terremark's VMware-powered Enterprise Cloud.

Calling it private cloud is bound to cause heartburn at various levels of the cloud community, from the pinky-out cloud elite who will remonstrate that it's not private unless it's in your data center to the users wondering why it's not called "Engine Yard at Terremark for beaucoup bucks."

Look to VMware for the answer to that. The announcement is another ride in the stable for VMware's vCloud Express, which it hosts on, among other service providers, Terremark. VMware wants to scrub away enterprise users' (highly justified) concerns about cloud computing security and coax them out into lucrative public cloud services. All this while also keeping them in the fold and raking in licensing, rather than having them switch to less hawkish platforms like Amazon Web Services or Rackspace.

VMware is selling the idea that its users can let their virtual machines out to play safely, as long as they don't stray from behind VMware's skirts.

<Return to section navigation list> 
