Monday, April 05, 2010

Windows Azure and Cloud Computing Posts for 4/5/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in April 2010 for the January 4, 2010 commercial release.

Azure Blob, Table and Queue Services

No significant articles yet today.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Ahmed Moustafa’s Relationship links post of 4/5/2010 describes changes proposed to make OData more “hypermedia friendly”:

OData (the protocol used by WCF Data Services) enables you to address the relationships between Entries. This functionality is required to be able to create or change a relationship between two instances, such as an Order_Detail that is related to a given Order. Currently the OData protocol requires clients and servers to agree up front on how to address relationships. For example, most OData implementations today follow the optional URL conventions for addressing links stated in the main OData spec. This convention states that a “$links” URI path segment be used to distinguish the relationship between Entries as opposed to an entry itself. For example, the following URI addresses the relationship between Order 1 and one or more OrderDetail Entries: http://myserver/myODataService/Orders(1)/$links/OrderDetails. Currently many of the OData client libraries rely on this $links convention when manipulating relationships.

In an effort to make the protocol (OData) used by OData services more “hypermedia friendly” and to reduce the coupling between clients and servers (by allowing a client to be fully response-payload driven), we would like to remove the need for a URL convention and have the server state (in the response payload) the URIs that represent the relationships for the Entry represented in a given response.

He continues with a draft of the proposed design and concludes:

Server Rules:

  • The server MAY return a URI that represents the relationship between two Entries.

Client Rules:

  • The client MUST use relationship URIs obtained from the server payload when addressing a relationship between two Entries, if they are present in the server response.
  • If the relationship URI is not present in the server response, the client runtime MAY choose to use the URI construction convention to address the relationship.

Backwards Compatibility:

  • Existing WCF Data Services clients that rely on the URI construction convention to address the relationship between two Entries will continue to work as long as the server supports the convention.

We look forward to hearing what you think of this approach...

Ahmed is a Program Manager, WCF Data Services
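
The client rules above boil down to "prefer what the payload tells you, fall back to the convention." Here is a minimal, hedged sketch of that resolution logic in Python; the myserver/myODataService root comes from the example URI in the post, while the relationship_links field name is purely a placeholder assumption for however a parsed response might expose the proposed payload-supplied relationship URIs:

```python
# Sketch only: resolving the URI that addresses a relationship between two Entries.
SERVICE_ROOT = "http://myserver/myODataService"  # service root from the post's example

def relationship_uri(entry, entity_set, key, nav_property):
    """Return the URI to use when manipulating a relationship.

    Per the proposed client rules: use a relationship URI from the server's
    response payload if one is present; otherwise fall back to the optional
    $links URL convention.
    """
    # "relationship_links" is a placeholder name for wherever a parsed response
    # would surface the proposed payload-supplied relationship URIs.
    from_payload = entry.get("relationship_links", {}).get(nav_property)
    if from_payload:
        return from_payload
    # Fallback: the $links convention, e.g.
    # http://myserver/myODataService/Orders(1)/$links/OrderDetails
    return f"{SERVICE_ROOT}/{entity_set}({key})/$links/{nav_property}"

# The relationship between Order 1 and its OrderDetails, with no payload link present:
print(relationship_uri({}, "Orders", 1, "OrderDetails"))
```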

Tom Conte reported SQL Azure connection problems in his 4/2/2010 thread in the SQL Azure — Getting Started forum:

I get the following error trying to connect to my DB from SSMS: "A connexion was successfully established with the server, but then an error occurred during the login process". (Error: 10054)

It used to work fine yesterday.

When I try to open the SQL Azure server in the portal, I get the following error:

An unexpected error has occurred. Please go back and retry your operation. Please use activity id 'f7a36766-75b0-4bc9-9727-2820282d5789' when contacting customer support.

I tried to create a new server with another Azure account and got the following error:

SQL Azure was unable to create a server for you at this time. Please retry your operation later. Please use activity id '4be4019f-8675-4f7c-be7d-5c2a64533631' when contacting customer support.

Any idea of what is going wrong? Looks like SQL Azure has been "yellow" in the service dashboard for three days now, but this looks more like "red" to me...

All four Microsoft data centers (South Central US, North Central US, North Europe, and Southeast Asia) that support SQL Azure were reporting “SQL Azure Intermittent Authentication Failures” over the Easter weekend:

[Screenshot: SQL Azure Service Dashboard reporting “SQL Azure Intermittent Authentication Failures”]

The SQL Azure Service Dashboard displayed the preceding message when queried on 4/5/2010 at 2:00 PM PDT.

Alex James posted the OData weekly roundup #1 on 3/26/2010 but I didn’t see it until 4/5/2010 due to a feed reader glitch:

It's been a little over a week since the new OData.org went live, and in that time we've had plenty of interesting feedback.

Thank you!

Here are some of the suggestions we've received:

  • Open Source the .NET Server libraries too (we announced that we will release the .NET client libraries under the Apache license soon). For more information, read Miguel's post where he makes his case.
  • Better guidance on how to integrate various Authentication schemes with OData. Alex Hudson's post argues strongly for more help in this area. We hear this particular feedback loud and clear, and are working to create some guidance.
  • Figure out how to use OData to supplement or replace the MetaWeblog API most blog engines use - Sounds like an interesting way to extend the reach of OData.
  • Binary version of the protocol. Mainly for perf/compression reasons - let's face it XML isn't exactly terse. We have been thinking about this, and have started considering things like Binary XML.
  • Create a VBA client library so you can easily process OData inside Office applications.
  • Make it easy to download an OData feed into Excel - not via PowerPivot - directly into Excel.
  • Create some guidance showing how to mock the .NET client library.
  • Add OData support to more Microsoft products like the Dynamics family of products.
  • Work out exactly how OData works with and complements other linked data standards like RDF, OWL and SPARQL.
  • Create an OData wiki. Actually this is something we are already working towards.
  • Adopt a more open licensing model. Perhaps using Creative Commons Public Domain?
  • Consider working with W3C, IETF or OWF to standardize all the OData specifications.
  • Add an about page to the site, so we can find out about the people and companies currently funding and working on OData.

Alex would appreciate your thoughts about these suggestions.
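
Several of the suggestions above are about consuming OData feeds outside the usual .NET tooling (Excel, VBA, mocking the client library). As a rough illustration of how little machinery that takes, here is a hedged sketch that pulls an OData collection into a CSV file Excel can open; the feed URL is hypothetical, and the code assumes the verbose JSON format in which collections arrive wrapped as {"d": [...]} or {"d": {"results": [...]}}:

```python
import csv
import json
import urllib.request

# Hypothetical feed; any OData collection exposing $format=json would work similarly.
FEED_URL = "http://myserver/myODataService/Orders?$format=json"

def fetch_rows(url):
    """Download an OData feed and return its entries as flat dictionaries."""
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    d = payload.get("d", payload)
    # Collections arrive as {"d": [...]} or {"d": {"results": [...]}} in verbose JSON.
    entries = d.get("results", d) if isinstance(d, dict) else d
    # Drop nested objects such as __metadata and deferred navigation properties.
    return [{k: v for k, v in e.items() if not isinstance(v, dict)} for e in entries]

rows = fetch_rows(FEED_URL)
if rows:
    with open("orders.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(rows[0]), extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)
```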

Chris Woodruff’s OData Primer (@ODataPrimer) is a relatively new wiki that bills OData as:

The Open Data Protocol (OData) is an open protocol for sharing data. It provides a way to break down data silos and increase the shared value of data by creating an ecosystem in which data consumers can interoperate with data producers in a way that is far more powerful than currently possible, enabling more applications to make sense of a broader set of data. Every producer and consumer of data that participates in this ecosystem increases its overall value.

I’ve joined. Click here to create an account.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

The Innovation Showcase blog briefly describes Using the Windows Identity Foundation SDK with Visual Studio 2010 in this 4/5/2010 post:

Looking to get started building the next generation of applications using Windows Identity Foundation and Visual Studio 2010?

Windows Identity Foundation enables .NET developers to externalize identity logic from their application, improving developer productivity, enhancing application security, and enabling interoperability.  Enjoy greater productivity, applying the same tools and programming model to build on-premises software as well as cloud services.  Create more secure applications by reducing custom implementations and using a single simplified identity model based on claims.  Enjoy greater flexibility in application deployment through interoperability based on industry standard protocols, allowing applications and identity infrastructure services to communicate via claims.

Windows Identity Foundation is part of Microsoft's identity and access management solution built on Active Directory that also includes:

  • Active Directory Federation Services 2.0 (formerly known as "Geneva" Server): a security token service for IT that issues and transforms claims and other tokens, manages user access and enables federation and access management for simplified single sign-on
  • Windows CardSpace 2.0 (formerly known as Windows CardSpace "Geneva"): for helping users navigate access decisions and developers to build customer authentication experiences for users.

Check out this blog post for some great tips and tricks for working with the new Windows Identity Foundation and Visual Studio 2010 RC.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Mike Norman comments about Developing PHP on Microsoft Azure in this 4/5/2010 post to the Virtualization Practice blog:

As mentioned in a previous post, I’ve been at EclipseCon (the annual conference for the Eclipse Open Source development tool platform), where the Open Source community gathered to discuss how to build the tooling that will allow us to build and target applications directly at the cloud, without concern for the underlying infrastructure.

The key issue is interoperability – are we building applications for The cloud, or for A cloud? For application servers (or indeed .NET), as discussed in our previous piece, the application-level APIs need not change significantly: the set of services provided by the application server is transparently provided in the cloud. Applications written in languages such as PHP that hitherto had run in a web server directly against the operating system (rather than in a container) could be extended to access services inside the Application Server or the .NET framework.

Since there is a broad consensus about the cloud architecture, interoperability for languages like PHP can be achieved through a fairly simple set of APIs that would allow applications to access scalable storage (as BLOB, REST or Queue), and to deploy/provision applications. However, whilst there was momentum around standards at this level in the Fall of 2009 (the Simple Cloud API and VMware’s vCloud), these initiatives seem to have stalled. Now, with the introduction of SQL into the mix of standard cloud services (SQL Azure and Amazon RDS), interoperability across the broad range of cloud APIs may be unattainable, because it would imply a resolution between MySQL and SQL Server which is simply impractical.

That said, you can at least run MySQL in Microsoft Azure, and then use PHP to access that. This means you can run widely-used software such as WordPress in Azure, so we could run The Virtualization Practice website in there.  We don’t.

In a strange twist of the Open Source community model, Microsoft is actually paying a small French/Chinese company called Soyatec to create the open source plugins for Eclipse that make PHP run on Azure, and this (rather than Visual Studio – where you can also buy PHP technology from a third party, Jcx Software) appears to be the preferred IDE for PHP on Azure. …
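
Mike's point about storage interoperability is concrete: Windows Azure blob storage is exposed over plain REST, so a PHP client (or any other HTTP client) can reach it without platform-specific libraries. Here is a minimal sketch of the List Blobs call against a hypothetical, publicly readable container (the account and container names are assumptions; a private container would additionally need a Shared Key signature):

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical storage account and container with public (anonymous) read access.
ACCOUNT = "myaccount"
CONTAINER = "mycontainer"
LIST_URL = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}?restype=container&comp=list"

with urllib.request.urlopen(LIST_URL) as response:
    body = response.read()

# The List Blobs operation returns XML; print each blob's name and URL.
root = ET.fromstring(body)
for blob in root.iter("Blob"):
    print(blob.findtext("Name"), blob.findtext("Url"))
```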

Pat McLoughlin’s Windows Azure Start Up Error post of 4/5/2010 proposes a solution to System.IO.FileLoadException errors during startup:

When starting up my first Windows Azure project, [I] kept receiving a System.IO.FileLoadException. The message said that the path of the temp directory in which the assemblies are built was too long. The error message looked like this:

[Screenshot: “Path too long” exception message]

When starting up, Windows Azure compiles all of the assemblies into a temporary folder. By default the folder is located here: C:\Users\AppData\Local\dftmp. Some of the paths can get very long. If you are getting this message there are really only two things you can do: you can rename all of your projects to shorter names, or you can change the directory that the assemblies are compiled and run from.

An environment variable can be set that overrides the default location. The environment variable is called _CSRUN_STATE_DIRECTORY. This can be set to a shorter path, buying you some more characters.

[Screenshot: setting the _CSRUN_STATE_DIRECTORY environment variable]

After the variable is set, make sure to shut down the development fabric, either through the system tray or at the command line using csrun /devfabric:shutdown. If shortening the path doesn’t do the trick, the only option may be to rename your project and assemblies.
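
For reference, here is a minimal sketch of the workaround, wrapping the two Windows commands in Python purely for illustration; C:\dftmp is just an example of a short writable path, and csrun.exe is assumed to be on the PATH from the Windows Azure SDK:

```python
import subprocess

# Assumption: any short, writable path will do; this one is only an example.
SHORT_PATH = r"C:\dftmp"

# Persist the override so the development fabric uses it the next time it starts.
subprocess.run(["setx", "_CSRUN_STATE_DIRECTORY", SHORT_PATH], check=True)

# Shut down the running development fabric so the new location takes effect
# (equivalent to the system-tray shutdown option mentioned above).
subprocess.run(["csrun", "/devfabric:shutdown"], check=True)
```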

Guy Barrette’s Roger Jennings’ Cloud Computing with the Windows Azure Platform post of 4/4/2010 offers a brief review of my book:

Writing and publishing a book about a technology early in its infancy is cruel.  You’re subjected to many product changes and your book might be outdated the day it reaches the bookstores.  I bought Roger Jennings’ “Cloud Computing with the Windows Azure Platform” book knowing that it was published in October 2009 and that many changes occurred to the Azure platform in 2009.

Right off the bat and from a technology point of view, some chapters are now outdated, but don’t reject this book because of that.  In the first few chapters, Jennings does a great job of explaining Cloud Computing and the Azure platform from a business point of view, something that most Azure articles and blogs fail to do right now.  You may want to wait for the second edition and read Jennings’ outstanding Azure-focused blog in the meantime.

My thanks to Guy for the kind words. Guy is a Microsoft Regional Director for Quebec, Canada.

The Carbon Project updated its Windows Azure CloudSync project in late March 2010. Here’s a description from the project’s home page:

The Carbon Project's CarbonCloud Sync is a Cloud or Web-hosted service that allows synchronization of geospatial content across a federated deployment of services. The system can use any existing deployment of Open Geospatial Consortium (OGC) Web Feature Services with transactional capabilities (WFS-T) without changing the services or their configuration.

This Web application provides the management tools needed to configure the CarbonCloud Sync service operation as well as the administration of users and services.

The CarbonCloud Sync package includes a suite of Gaia Extenders to support the operations synchronization. These extenders add the user interface and functionality to handle the feature operations and synchronization.

The Carbon Project operates The Carbon Portal Forums, an “Open-Geospatial .NET Community” with groups for Carbon Tools PRO, Carbon Tools Lite, Gaia, CarbonArc, and a Gadget for GeoData.gov.

<Return to section navigation list> 

Windows Azure Infrastructure

Toddy Mladenov’s Upgrade Domains and Fault Domains in Windows Azure post of 4/5/2010 begins:

Recently on the Windows Azure Forum I saw the question “what is a fault domain?” a couple of times. The reason people are asking is the following statement in the Windows Azure SLA:

For compute, we guarantee that when you deploy two or more role instances in different fault and upgrade domains your Internet facing roles will have external connectivity at least 99.95% of the time.

Upgrade domains are a fairly well-known concept in Windows Azure; fault domains, however, are not something that customers have a lot of visibility into, and they may need some clarification.

Fault Domain and Upgrade Domain Definitions

Here are the two simple definitions for fault domain and upgrade domain:

  • Fault Domain is a physical unit of failure, and is closely related to the physical infrastructure in the data centers. In Windows Azure a rack can be considered a fault domain; however, there is no 1:1 mapping between fault domains and racks.
    The Windows Azure Fabric is responsible for deploying the instances of your application in different fault domains. Right now the Fabric makes sure that your application uses at least 2 (two) fault domains; however, depending on capacity and VM availability, your application may be spread across more than that.
    Right now you, as a developer, have no direct control over how many fault domains your application will use, but the way you configure it may impact your availability (see below).
  • Upgrade Domain is a logical unit that determines how a particular service will be upgraded.
    The default number of upgrade domains configured for your application is 5 (five). You can control how many upgrade domains your application will use through the upgradeDomain configuration setting in your service definition file (CSDEF). The Windows Azure Fabric ensures that a particular upgrade domain is not within a single fault domain (see the picture below).

[Diagram: fault domains and upgrade domains]

Toddy continues with “How Fault Domains and Upgrade Domains Work?,” “Querying Fault Domain and Upgrade Domain Information” and “Guidelines for Fault Domains and Upgrade Domains” topics.
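
To make the two definitions concrete, here is a toy sketch of spreading role instances across fault and upgrade domains. The round-robin assignment is purely an illustrative assumption (the real Fabric placement algorithm isn't public); the point is only that a single rack failure or a single upgrade step never touches all instances at once:

```python
# Illustrative only: not the actual Windows Azure Fabric placement algorithm.
FAULT_DOMAINS = 2     # the Fabric currently guarantees at least two
UPGRADE_DOMAINS = 5   # the default number per the post

def place(instance_count):
    """Assign instances to (fault domain, upgrade domain) pairs round-robin."""
    return [
        {"instance": i, "fault_domain": i % FAULT_DOMAINS, "upgrade_domain": i % UPGRADE_DOMAINS}
        for i in range(instance_count)
    ]

for slot in place(6):
    print(slot)

# With six instances, losing one fault domain (a rack) takes out at most three
# instances, and one upgrade-domain step touches at most two of the six.
```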

Tatham Oddie cites the lack of free developer accounts as the reason Why Windows Azure Is Not Worth Investing In in this 4/5/2010 post:

It’s pretty obvious that I’m a Microsoft fanboy. This is the story of how they let me down today.

Why I Wanted To Learn Windows Azure

This afternoon I sat down to write a simple web service that would expose my cloud-based Exchange calendar as a public free/busy feed (webcal://tath.am/freebusy).

The hosting requirements:

  • around 1 request every 5 minutes (0.003 requests/second)
  • around 10kb of transfer per request (100 MB/month)
  • no writeable storage or databases (it’s totally stateless)

Cloud hosting sounded like the perfect solution for this. I don’t care where it runs. I don’t care who or what else is running on the box. I don’t have any funky dependencies.

Whenever I go to write something for myself I use it as an opportunity to learn something new.

Today, it seemed like I was going to learn Windows Azure.

Why Azure Needs To Be Free For This Scenario

Being able to scale down a cloud hosting solution is just as important as being able to scale it up. In fact, I’d go as far as suggesting that I think it’s even more important.

Right now, I can’t market myself as an Azure professional. I can’t get up at a community event and evangelise the product. I can’t answer questions about the product on mailing lists or forums. Giving me 25 hours of compute time to try the service isn’t going to let me learn the service.

The free availability of competing services is what lets somebody spin up a political statement in an hour. As well as getting a quick, free and suitably tiny hosting solution, Lachlan and Andy were sharpening their skills and building their own confidence with Heroku.

Be sure to read the numerous comments, most of which support Tatham’s position. As I mentioned in a comment to Tatham’s post:

For the last year, I’ve been agitating for an Azure billing threshold for developers, similar to that offered by Google’s app engine. See Lobbying Microsoft for Azure Compute, Storage and Bandwidth Billing Thresholds for Developers and A Comparison of Azure and Google App Engine Pricing as examples. So far, I’ve seen no interest by Microsoft in adopting such a plan.

Tatham is a Web developer and senior consultant with Readify based in Sydney, Australia.

Andrew R. Hickey asserts Cloud Computing Services Set To Explode: Report in this 4/5/2010 post to the ChannelWeb blog:

The cloud computing services market is expected to balloon to $222.5 billion over the next five years, according to a new report released by Global Industry Analysts.

The new report indicates that the market for cloud computing services will reach that $222.5 billion mark by 2015, fueled by end users modernizing their networking infrastructure, further proliferation of the Internet and the tumultuous economy. Those factors combine to create a perfect storm in which companies will upgrade their networks to cut costs and boost performance, the report indicates.

"Against a background where companies are coerced into recalibrating their communication applications and network infrastructure into cost-effectively supporting distributed IT applications, the importance of cloud computing comes to the fore," Global Industry Analysts noted in the report. "As companies modernize their enterprise networking infrastructure, driven by the need to remain competitive, and retain critical survival capabilities, such as, agility and flexibility in a fast changing marketplace, it is opportunities galore for technologies like cloud computing and virtualization, among others. Simplicity in implementation and low costs are prime factors driving adoption of clouds by large and small enterprises alike."

Solution providers stand to gain from the predicted cloud computing services market explosion, as a large chunk of the $222.5 billion will move through the channel. Global Industry Analysts noted that the cloud computing services charge will be led by marquee cloud vendors including Amazon (NSDQ:AMZN) Web Services (AWS), Google (NSDQ:GOOG), IBM (NYSE:IBM), Microsoft (NSDQ:MSFT), Rackspace, Salesforce.com and many others.

Additional factors driving the staggering growth in cloud computing services include the increased number of vendors and offerings, the push toward more virtualization and green IT efforts and the slumping economy as "revenue starved companies prowl for IT solutions that are cost-effective, require minimum to zero investment and low management of computing resources."

The Global Industry Analysts report comes on the heels of a report from market research firm Sand Hill Group that indicates IT budgets for the cloud are growing. The Sand Hill survey found that 70 percent of respondents currently spend less than 3 percent of their IT budgets in the cloud, while by 2013 80 percent expect to spend between 7 percent and 30 percent.

The recent numbers are a strong indicator of the growth of cloud computing. The $222.5 billion market for cloud computing services is a major jump from the roughly $70 billion that IDC forecast cloud computing services would generate by 2015, growing about 26 percent annually. IDC said 2009's cloud computing services market hit $17.4 billion and will hit $44.2 billion in 2013.

I’ve not heard of Global Industry Analysts before. Have you?

Lori MacVittie asks “What makes a cloud a cloud? The ancient Greek philosopher Plato might tell you “cloudness”, but what exactly does that mean?” as a preface to her If a Cat has Catness Does a Cloud have Cloudness? post regarding the “private cloud” controversy:

Long before human scientists figured out that DNA was the basic building block of everything living, philosophers spent long eons being satisfied with Plato’s (and his equally famous student Aristotle’s) explanation that there is some inherent “ness” in everything that makes it what it is. One of Aristotle’s dialogues deals with the answers to questions like, “What makes a cat a cat? And why does a kitten never have a duck?” as he explains the concept. Retroactively applying our knowledge of DNA to Plato’s principle there’s a certain simple but elegant logic in his answer. DNA carries “catness” or “humanness” as it passes from generation to generation and, thus, his principle is actually a fairly sound one. It’s just not very satisfying if you actually desire any kind of detail.

The answer to “what makes a cloud a cloud” can also be viewed as an Platonic one: cloudness. For many who view the cloud from a black-box perspective, “cloudness” is a good enough answer because all that really matters is that cloud computing fulfills its promises: delivery of applications in an efficient, on-demand fashion for less cost than would be possible in a traditional corporate data center. How that happens is not nearly as important as the end result. But for many it is important because they, too, want to partake of the goodness that is cloud computing but want to do so on their own terms, in their own data centers.

"Our belief is the future of internal IT is very much a private cloud," says Gartner analyst Thomas Bittman. "Our clients want to know 'what is Google's secret? What is Microsoft's secret?' There is huge interest in being able to get learnings from the cloud."  -- Jon Brodkin, Private cloud networks are the future of corporate IT, NetworkWorld

That isn’t a recent question. The article and questions were raised in November 2008. It’s now April of 2010 and the notion of private cloud computing, if not still contentious, has still not really been answered. That’s likely because “cloud computing” isn’t a “thing”, it’s an architecture; it’s an operational model, a deployment model, even a financial model, but it’s not a tangible “thing” with a specific “secret ingredient” that makes it work. Cloudness is, in fact, very much like DNA: it’s the way in which the individual strands of genetic material (infrastructure) are intertwined (processes) and the result of that combination that make something a cat – or a cloud - not the existence of the individual components.

So like Google and Microsoft and Salesforce and a host of other cloud computing providers across the “aaS” spectrum, they all have the same ingredients – they’ve just architected them in different ways to make what we call “cloud computing.” Their secret is ultimately in the operational integration of servers, storage, network, and compute resources smothered in a secret sauce called “orchestration” that gives it cloudness: a dynamic infrastructure.

Lori concludes with an analysis of DYNAMISM across the DATA CENTER.

… But obviously it can be done as it is being done right now by both providers and enterprise organizations alike. No two architectures will be exactly alike because every organization has differential operational parameters under which they operate. Some will be designed for peak performance, others for fault-tolerance, some for secure operations, and others purely for scale and availability. What they all have in common, however, is “cloudness” : a dynamic infrastructure.

John Treadway writes in his Markets for Cloud Tools post of 4/5/2010:

I often get asked where cloud tools and technologies are being sold/implemented today.  Note that cloud tools are different from “clouds” in that you use the tools to build/automate your cloud.  Things like infrastructure stack software, orchestration, configuration management, etc. fall into the category of cloud tools.  There are three primary markets I am seeing today.

  1. Enterprise Private Clouds – This is what most people think of when sizing the market for cloud tools.  Large corporate users take their existing data center automation tools or a new cloud stack like Eucalyptus or CloudBurst (which is just the old tools in a new package) and stand up their own clouds.  This market is still in the VERY early stages, with a lot of poking and prodding, and some specific use cases, such as development/test, getting traction.
  2. Provider Clouds – The big land rush today is to get your cloud service going.  As I have previously written, there are going to be a lot of new entrants in the cloud service provider (CSP) market.  Many will fail, and few will become huge, but that’s not stopping people from jumping in.  These tend to be far more automated than their enterprise counterparts, and have different requirements in support of multi-tenant isolation and security. 
  3. Government Community Clouds - A community cloud is like an invitation-only club.  If you meet the criteria, you can join my cloud.  In a government community cloud, this can be other agencies, other governments, etc.  For example, the State of Michigan has talked about providing cloud services for municipalities and colleges inside the state.  As a general rule government community clouds tend to combine the requirements of both enterprise and provider clouds.  Enterprise because the government is actually using the cloud for their own workloads.  Provider because they are “selling” cloud hosting to other entities.

Today, the market with the most active purchases/implementations is for cloud service providers.  There’s a strong and realistic sense that the market is leaving them behind and catching up is critical for survival.

Dana Blankenhorn asserts “BriefingsDirect analysts pick winners and losers from cloud computing's economic disruption and impact” and asks Will Cloud Computing Models Impact Pricing? in this post of 4/5/2010 to BriefingsDirect:

The latest BriefingsDirect Analyst Insights Edition focuses on cloud computing and dollars and cents. Our panel dives into more than the technology, security, and viability issues that have dominated a lot of cloud discussions lately -- and moves to the economics and the impact on buyers and sellers of cloud services.

When you ask any one person how cloud will affect their costs, you're bound to get a different answer each time. No one really knows, but the agreement comes when the questions move to, "Will cloud models impact how buyers and providers price their technology? And over the long-term what will buyers come to expect in terms of IT value?"
What happens when we move to a cloud-based, pay-per-value approach to pricing, buying, and budgeting for IT? How does the shift to high-volume, low-margin services and/or subscription models affect the IT vendor landscape? How does it affect the pure cloud and software-as-a-service (SaaS) providers, and perhaps most importantly, how do cloud models affect the buy side?

The panel comprises Dave Linthicum, CTO of Bick Group, a cloud computing and data-center consulting firm; Michael Krigsman, CEO of Asuret, a blogger on IT failures for ZDNet, and a writer of analyst reports for IDC; and Sandy Rogers, an independent industry analyst.

Dana continues with excerpts from the panelists’ comments and concludes:

Start now. You need to start committing to this stuff right now and putting some skin in the game, and I think a lot of people in these IT organizations are very politically savvy and want to protect their positions. There are a few of them who want to put that skin in the game right now.

I think we are going to see kind of an unfairness in business. People who are starting businesses these days and building it around cloud infrastructures are learning to accept the fact that a lot of their IT is going to reside out on the Internet and the cost effective nature of that. They're going to have a huge strategic advantage over legacy businesses, people who've been around for years and years and years.

There are going to be a lot of traditional companies out there that are going to be looking at these vendors and learning from them.
As they grow and start to go public and grow as a business, getting up to the half-a-billion mark, they are going to find that they are able to provide a much greater cost and price advantage over their competitors and just eat their lunch ultimately.

We're going to see that, not necessarily now, because those guys are typically smaller and just up and coming, but in five years, as they start to grow up, their infrastructure is just going to be much more cost effective and they are just going to run circles around the competition.

... Ultimately, it would be about the ability to leverage technology that's pervasive around the world. What you're going to find is the biggest uptake of any kind of new technological shift is going to be in the United States or the North American marketplaces. We're seeing that in the U.S. right now.

We could find that the cloud computing advantage it has brought to the corporate U.S. infrastructure is going to be significant in the next four years, based on the European enterprises out there and some of the Asia-Pacific enterprises out there that will play catch-up toward the end.

<Return to section navigation list> 

Cloud Security and Governance

Brian Prince claims “A joint survey from Symantec and the Ponemon Institute painted a less than rosy picture of enterprise approaches to cloud computing. For many, security seems to be on the backburner” as a preface to his Symantec: Cloud Computing Security Approaches Need to Catch up to Adoption post of 4/5/2010 to eWeek’s Security blog:

A new survey of IT professionals has painted a troubling picture of enterprise approaches to cloud computing security.

According to the survey, which was done by Symantec and the Ponemon Institute, many organizations are not doing their due diligence when it comes to adopting cloud technologies – a situation that may partly be due to the ad hoc delegation of responsibilities.

Among the findings: few companies are taking proactive steps to protect sensitive business and customer data when they use cloud services. According to the survey, less than 10 percent said their organization performed any kind of product vetting or employee training to ensure cloud computing resources met security requirements before cloud applications are deployed.

In addition, just 30 percent of the 637 respondents said they evaluate cloud vendors prior to deploying their products, and most (65 percent) rely on word-of-mouth to do so. Fifty-three percent rely on assurances from the vendor. However, only 23 percent require proof of security compliance such as regulation SAS 70.

The researchers speculated this may be due to a gap between the people employees think should be responsible for evaluating cloud vendors and who actually is. For example, 45 percent said that responsibility resides with end users, while 23 percent said business managers. Eleven percent said the burden belonged to the corporate IT team, while nine percent said information security.

However, a total of 69 percent said they would prefer to see the information security (35 percent) or corporate IT teams (34 percent) lead the way in that regard. Most often, security teams are not part of the decision-making process at all when it comes to the cloud. Only 20 percent said their information security teams played a part on a regular basis, and 25 percent said they never do. …

<Return to section navigation list> 

Cloud Computing Events

Judith Hurwitz writes about the “four top things” she learned at CloudConnect in her “Why are we about to move from cloud computing to industrial computing?” post of 4/5/2010:

I spent the other week at a new conference called Cloud Connect. Being able to spend four days immersed in an industry discussion about cloud computing really allows you to step back and think about where we are with this emerging industry. While it would be possible to write endlessly about all the meetings and conversations I had, you probably wouldn’t have enough time to read all that. So, I’ll spare you and give you the top four things I learned at Cloud Connect. I recommend that you also take a look at Brenda Michelson’s blogs from the event for a lot more detail. I would also refer you to Joe McKendrick’s blog from the event.

  1. Customers are still figuring out what Cloud Computing is all about.  For those of us who spend way too many hours on the topic of cloud computing, it is easy to make the assumption that everyone knows what it is all about.  The reality is that most customers do not understand what cloud computing is.  Marcia Kaufman and I conducted a full day workshop called Introduction to Cloud. The more than 60 people who dedicated a full day to a discussion of all aspects of the cloud made it clear to us that they are still figuring out the difference between infrastructure as a service and platform as a service. They are still trying to understand the issues around security and what cloud computing will mean to their jobs.
  2. There is a parallel universe out there among people who have been living and breathing cloud computing for the last few years. In their view the questions are very different. The big issues discussed among the well-connected were focused on a few key issues: is there such a thing as a private cloud?; Is Software as a Service really cloud computing? Will we ever have a true segmentation of the cloud computing market?
  3. From the vantage point of the market, it is becoming clear that we are about to enter one of those transitional times in this important evolution of computing. Cloud Connect reminded me a lot of the early days of the commercial Unix market. When I attended my first Unix conference in the mid-1980s it was a different experience than going to a conference like Comdex. It was small. I could go and have a conversation with every vendor exhibiting. I had great meetings with true innovators. There was a spirit of change and innovation in the halls. I had the same feeling about the Cloud Connect conference. There were a small number of exhibitors. The key innovators driving the future of the market were there to discuss and debate the future. There was electricity in the air.
  4. I also anticipate a change in the direction of cloud computing now that it is about to pass that tipping point. I am a student of history so I look for patterns. When Unix reached the stage where the giants woke up and started seeing huge opportunity, they jumped in with a vengeance. The great but small Unix technology companies were either acquired, got big or went out of business. I think that we are on the cusp of the same situation with cloud computing. IBM, HP, Microsoft, and a vast array of others have seen the future and it is the cloud. This will mean that emerging companies with great technology will have to be both really lucky and really smart.

The bottom line is that Cloud Connect represented a seminal moment in cloud computing. There is plenty of fear among customers who are trying to figure out what it will mean to their own data centers. What will the organizational structure of the future look like? They don’t know and they are afraid. The innovative companies are looking at the coming armies of large vendors and are wondering how to keep their differentiation so that they can become the next Google rather than the next company whose name we can’t remember. There was much debate about two important issues: cloud standards and private clouds.

Are these issues related? Of course. Standards always become an issue when there is a power grab in a market. If a Google, Microsoft, Amazon, IBM, or an Oracle is able to set the terms for cloud computing, market control can shift over night. Will standard interfaces be able to save the customer? And how about private clouds? Are they real? My observation and contention is that yes, private clouds are real. If you deploy the same automation, provisioning software, and workload management inside a company rather than inside a public cloud it is still a cloud. Ironically, the debate over the private cloud is also about power and position in the market, not about ideology.

If a company like Google, Amazon, or name whichever company is your favorite flavor… is able to debunk the private cloud — guess who gets all the money? If you are a large company where IT and the data center is core to how you conduct business — you can and should have a private cloud that you control and manage.

I had the pleasure of meeting Judith at the San Francisco Cloud Computing Club (#SFCloudClub) Meetup of 3/16/2010, which was colocated in the Santa Clara Convention Center with Cloud Connect.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Tom Henderson and Brendan Allen assert “Terremark, Rackspace, BlueLock deliver enterprise cloud services” in their Enterprise cloud put to the test post of 4/5/2010 to Network World’s Data Center blog:  

The potential benefits of public clouds are obvious to most IT execs, but so are the pitfalls – outages, security concerns, compliance issues, and questions about performance, management, service-level agreements and billing. At this point, it's fair to say that most IT execs are wary of entrusting sensitive data or important applications to the public cloud.


But a technology as hyped as cloud computing can't be ignored either. IT execs are exploring the public cloud in pilot programs, they're moving to deploy cloud principles in their own data centers, or they are eyeing an alternative that goes by a variety of names – enterprise cloud, virtual private cloud or managed private cloud.

We're using the term enterprise cloud to mean an extension of data center resources into the cloud with the same security, audit, and management/administrative components that are best practices within the enterprise. Common use cases would be a company that wanted to add systems resources without a capital outlay during a busy time of the year or for a special, resource-intensive project or application.


In this first-of-its-kind test, we invited cloud vendors to provide us with 20 CPUs that would be used for five instances of Windows 2008 Server and five instances of Red Hat Enterprise Linux – two CPUs per instance. We also asked for a 40GB internal or SAN/iSCSI disk connection, and 1Mbps of bandwidth from our test site to the cloud provider. And we required a secure VPN connection. …

The authors continue with details of their test program.

<Return to section navigation list> 
