Sunday, June 28, 2009

Windows Azure and Cloud Computing Posts for 6/22/2009+

Windows Azure, Azure Data Services, SQL Data Services and related cloud computing topics now appear in this weekly series.

••• Updates 6/27 – 6/28/2009: OGDI expansion, other additions and corrections
•• Updates 6/25 – 6/26/2009: Microsoft presentations streamed at GigaOm’s Structure 09 (Cloud Computing Events), Mary Jo Foley’s “All About Azure” audio archive (Cloud Computing Events), Velocity 09 videos, Structure 09 summaries and additions
• Updates 6/23 – 6/24/2009: Additions, typos

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use these links, click the post title to display the single article you want to navigate to.

Azure Blob, Table and Queue Services

<Return to section navigation list> 

Arnon Rotem-Gal-Oz expands on his CRUD is bad for REST thesis in this 6/24/2009 post to Dr. Dobb’s CodeTalk. In brief, Arnon’s position is:

[T]he main reason CRUD is wrong for REST is an architectural one. One of the base characteristics(*) of REST is using hypermedia to externalize the state machine of the protocol (a.k.a. HATEOAS – Hypermedia as the Engine of Application State). The URI-to-URI transition is what makes the protocol tick (the transaction implementation by Alexandros discussed in the previous post shows a good example of following this principle). …

Maarten Balliauw recommends storing MVC views in Azure blobs in his A view from the cloud (or: locate your ASP.NET MVC views on Windows Azure Blob Storage) post of 6/8/2009 (missed at the time).

Hosting and deploying ASP.NET MVC applications on Windows Azure works like a charm. However, if you have been reading my blog for a while, you might have seen that I don’t like the fact that my ASP.NET MVC views are stored in the deployed package as well… Why? If I want to change some text or I made a typo, I would have to re-deploy my entire application for this. Takes a while, application is down during deployment, … And all of that for a typo…

Luckily, Windows Azure also provides blob storage, on which you can host any blob of data (or any file, if you don’t like saying “blob”). These blobs can easily be managed with a tool like Azure Blob Storage Explorer. Now let’s see if we can abuse blob storage for storing the views of an ASP.NET MVC web application, making it easier to modify the text and stuff. We’ll do this by creating a new VirtualPathProvider.

Note that this approach can also be used to create a CMS based on ASP.NET MVC and Windows Azure.
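
Here is a minimal sketch of the custom VirtualPathProvider approach Maarten describes. It is not his implementation: the BlobStore helper and the path test are placeholders, and a real provider would also verify that the blob exists and cache results.

```csharp
using System;
using System.IO;
using System.Web.Hosting;

// Serve ~/Views/* from blob storage by chaining a custom provider
// in front of the default one.
public class BlobViewPathProvider : VirtualPathProvider
{
    public override bool FileExists(string virtualPath)
    {
        // Simplified: a real provider would check the blob's existence.
        return IsViewPath(virtualPath) || Previous.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return IsViewPath(virtualPath)
            ? new BlobVirtualFile(virtualPath)
            : Previous.GetFile(virtualPath);
    }

    private static bool IsViewPath(string virtualPath)
    {
        return virtualPath.StartsWith("~/Views/", StringComparison.OrdinalIgnoreCase)
            || virtualPath.StartsWith("/Views/", StringComparison.OrdinalIgnoreCase);
    }
}

public class BlobVirtualFile : VirtualFile
{
    public BlobVirtualFile(string virtualPath) : base(virtualPath) { }

    public override Stream Open()
    {
        return new MemoryStream(BlobStore.DownloadBlob(VirtualPath));
    }
}

// Placeholder: the real version would call the Azure storage client to
// fetch the blob named after the view path from a "views" container.
public static class BlobStore
{
    public static byte[] DownloadBlob(string virtualPath)
    {
        throw new NotImplementedException("Wire up Azure blob storage here.");
    }
}
```

The provider would be registered at application startup with HostingEnvironment.RegisterVirtualPathProvider(new BlobViewPathProvider()), after which fixing that typo is just a blob upload, with no redeployment.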

Bruno Terkaly continues his series on Azure tables with these three posts of 6/21 – 6/22/2009:

Bruno is a Microsoft Developer Evangelist.

SQL Data Services (SDS)

<Return to section navigation list> 

No significant new articles as of 6/23/2009 10:30 AM PDT

.NET Services: Access Control, Service Bus and Workflow

 <Return to section navigation list>

Brent Stineman’s .NET Services – Introduction to the Service Bus post of 6/26/2009 begins:

Darned if this post hasn’t been rough to write. I don’t know if it’s my continued lack of caffeine (quit it about 10 days ago now) or the constant interruptions. At least the interruptions have been meaningful. But after two days of off-and-on effort, this post is finally done.

As some of you reading this may already be aware, I’ve spent much of my spare time the last several weeks diving into Microsoft’s .NET Services. I’m finally ready to start sharing what I’ve learned in what I hope is a much more easily digestible format. Nothing against all the official documents and videos that are out there. They’re all excellent information. The problem is that there’s simply too much of it. :)

Brent’s summary of the three major .NET Services includes Workflow, the demise of which in Azure v1 I reported on earlier. Workflow won’t return to .NET Services until after .NET 4.0 RTMs.

Vittorio Bertocci promotes a video from his Putting authentication in its place: claim-based identity, services and Geneva TechDays session in his [VIDEO] Putting authentication in its place: claim-based identity, services and Geneva post of 6/25/2009. Here’s the blurb:

The code that takes care of authentication is traditionally one of the nastiest spots of every distributed application. The current situation derives from multiple causes, from tight coupling with specific technologies to trusting non-experts to write security code. Microsoft has been among the thought leaders who proposed a strategic solution to the problem, the Identity Metasystem and its claims-based identities, achieving vast consensus across the industry. Come to this session to learn how you can finally put that vision into practice thanks to the new 'Geneva' product line.

TechDays was held in Antwerp a few months ago.

Dan Guberman describes Improved support for X.509 credential in Information Cards in this 6/23/2009 post to the CardSpaceBlog.

The Beta 2 version of “Geneva” has many features that improve the deployment of the Geneva platform for our enterprise customers, like the Group Policy-driven provisioning of Information Cards or the administrative policy of card usage that we talked about in our previous blog posts.

Another such feature is the enhanced support for X.509 certificate credentials in Information Cards.

Using Information Cards backed by an X.509 certificate provides the added benefit of increased security, and with “Geneva” Server Beta 2 it becomes very easy to provision such a card. Pretty much all that you need to do is check the “Certificate” checkbox in the Information Card Properties dialog in Geneva Server (right-click on the Information Card tab in the navigation pane, and select Properties from the context menu).

However, there’s still no update to this caveat in Vittorio Bertocci’s Claims and Cloud: Pardon our Dust post of 4/1/2009:

[F]or a variety of reasons, an application that takes advantage of the Geneva Framework will not work “as is” when hosted in Windows Azure, including Microsoft products that were written to use the Geneva Framework. You may have heard that the new full trust settings we announced for Windows Azure at MIX would make the above scenario work, however that’s not the case: there is more than full trust for enabling the complete range of possibilities offered by claims based access.

My question about Geneva Beta 2 in a comment to this post remains unanswered.

Live Windows Azure Apps, Tools and Test Harnesses

<Return to section navigation list> 

••• Microsoft’s Open Government Data Initiative (OGDI) site has expanded with more Azure-hosted Washington, D.C. data sets on the Data Page, and details on the OGDI API on the Developers page. According to the Home page:

The Open Government Data Initiative (OGDI) is an initiative led by the Microsoft Public Sector Developer Evangelism team. OGDI uses the Azure Services Platform to make it easier to publish and use a wide variety of public data from government agencies. OGDI is also a free, open source ‘starter kit’ (coming soon) with code that can be used to publish data on the Internet in a Web-friendly format with easy-to-use, open APIs. OGDI-based web APIs can be accessed from a variety of client technologies such as Silverlight, Flash, JavaScript, PHP, Python, Ruby, mapping web sites, etc.
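
Because OGDI exposes its data over plain HTTP in the ADO.NET Data Services style, a consumer can be as simple as the C# sketch below. The service URL, data-set name and query parameters are illustrative, not documented endpoints; check the Developers page for the real ones.

```csharp
using System;
using System.Net;

// Fetch the first five rows of a (hypothetical) OGDI data set as JSON.
class OgdiSample
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            string url = "http://ogdi.cloudapp.net/v1/dc/BankLocations"
                       + "?format=json&$top=5";
            Console.WriteLine(client.DownloadString(url));
        }
    }
}
```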

Ben Riga finishes his five-part series about combining the Microsoft Web Platform with Dynamics CRM to quickly build and deploy self-service solutions hosted on Windows Azure with his Dynamics Duo: Silverlight and Jazz Hands post of 6/18/2009. Following are links to the entire saga:

  1. Dynamics Duo Rides Again
  2. Dynamics Duo: Everybody needs an Identity
  3. Dynamics Duo: Wide World Importers Code
  4. Dynamics Duo: Composition with Third-Party Web Services
  5. Dynamics Duo: Silverlight and Jazz Hands

Keith posted Web-based mapping tool gets government data via API from OGDI: http://visualfusion.cloudapp.net to the Microsoft Public Sector Developer and Platform Evangelism Team Blog on 6/25/2009:

Microsoft partner IDV Solutions has created a terrific mapping overlay tool that can get map data from any KML source.

Since OGDI natively emits KML, it’s a great demonstration of web standards enabling open government data. They’ve included DC data from OGDI, and some national data (parks, earthquakes), but you can easily add any KML data set just by entering a URL.

Go to http://visualfusion.cloudapp.net to see it in action!

Mary Jo Foley reports on 6/24/2009 Five reasons why Microsoft's Hohm is more than just another Web 2.0 service. Following are the first two (and Azure-related) reasons from Troy Batterberry, Microsoft’s Hohm product manager:

1. Hohm is a hosted service running on Azure, Microsoft’s cloud platform. There are relatively few Microsoft services that already are running fully on top of Azure. HealthVault is one; Live Mesh is another. The calculations upon which the Hohm service is built are “really complicated,” Batterberry said, and require historical modeling. By running on Azure, Hohm can be scaled up or down, depending on demand, to use lots of compute cycles during peak demand.

2. Speaking of HealthVault, Hohm was patterned after it and uses the same security and privacy mechanisms that Microsoft’s health-information service uses. While energy consumption data doesn’t seem as in need of guarding as patient health data is, energy usage and pricing are sensitive information to which access needs to be controlled, said Batterberry.

Here’s the full Microsoft Hohm Helps Consumers Save Money and Energy press release.

C. G. Lynch asks How Far Will Microsoft Go with Cloud? in this feature-length article of 6/23/2009 for InfoWorld:

… To date, the majority of Microsoft's software has come paired with servers and hardware that IT departments run and manage in-house. Now, with online services, Microsoft can manage the software in its own data centers while employees at customer companies around the world access applications through a web browser.

According to Microsoft executives, companies can realize huge cost savings by not hiring staff to manage Exchange servers or by reallocating current IT staff to other areas, a refrain software-as-a-service (SaaS) vendors have been pushing for years now.

"IT is dominated by the people cost," says Bob Muglia, president of the Microsoft Server & Tools division. "It's the single largest expense in IT. By leveraging the scale online services can deliver, you can leverage costs and be leaner." …

Ingersoll Rand was running the e-mail system in-house. It had also developed many custom apps on the Lotus Domino server, but the cost was taking its toll, Kalka says. After looking at the on-premise, traditional version of Exchange, Kalka says "the numbers didn't look much better."

Then Microsoft approached him about the online version of Exchange. Kalka saw the cheap per-user price. Coupled with the fact that he didn’t need to manage hardware, he decided to sign up.

"That big e-mail cost went away," he says. "We had e-mail servers all around the world. 95 percent are shut down or re-allocated for something else." …

Lynch goes on to describe SharePoint in the cloud as a “Trickier Decision” and notes that “Microsoft will roll out a fully online version of Office later this year or early next, but it remains unclear how robust the offering will be in comparison to the installed version.”

David Pallmann’s "Joint Venture": New Azure Multi-Enterprise Business Application (MEBA) post of 6/21/2009 describes his:

[L]atest Silverlight-Azure reference application which is called Joint Venture. Joint Venture provides a workspace for cross-business project teams. That is, teams made up of people from more than one business who are working on some kind of business collaboration. This is an example of a Multi-Enterprise Business Application (MEBA), an app used by multiple businesses who have a relationship with each other. The cloud is an ideal place for business collaboration, providing a neutral location that can be easily and universally accessed.

David requests your Azure Developer Contest vote in his Vote for Me! post of the same date.

Azure Infrastructure

<Return to section navigation list> 

••• Chris Hoff (a.k.a. @Beaker) concludes in his Cloud Maturity: Just Like the iPhone, There’s An App For That… post of 6/27/2009:

The thing I love about my iPhone is that it’s not a piece of technology I think about but rather, it’s the way I interact with it to get what I want done. It has its quirks, but it works… for millions of people.

The point here is that Cloud is very much like the iPhone.  As Sir James (Urquhart) says “Cloud isn’t a technology, it’s an operational model.”  Just like the iPhone.

Cloud is still relatively immature and it doesn’t have all the things I want or need yet (and probably never will) but it will get to the point where its maturity and the inclusion of capabilities (such as better security, interoperability, more openness, etc.) will smooth its adoption even further and I won’t feel like we’re settling anymore…until the next version shows up on shelves.

But don’t worry, there’s an app for that.

John Brodkin’s Survey casts doubt on cloud adoption article for NetworkWorld of 6/26/2009 summarizes Laura DiDio’s recent cloud-computing survey for ITIC:

New survey results cast doubt on whether cloud computing adoption will ramp up in the next 12 months, with only 15% of corporate customers having adopted or considering adopting cloud technology over the next year.

A survey of 300 corporations worldwide found that 38% are undecided or unsure about whether they will adopt cloud services, and another 47% said they are not considering implementing cloud in the next year. Security is the biggest roadblock.

“An overwhelming 85% majority of corporate customers will not implement a private or public cloud computing infrastructure in 2009 because of fears that cloud providers may not be able to adequately secure sensitive corporate data,” writes Information Technology Intelligence Corp. principal analyst Laura DiDio in a new report.

Laura’s survey conflicts directly with Harris Interactive’s Microsoft-sponsored analysis reported by Julie Bort (see below).

Stephen Lawson reports in his Cloud is Internet's next generation, HP executive says post of 6/25/2009 that HP Cloud Services CTO Russ Daniels says “the cloud makes the Internet more than an infrastructure for automating business processes or letting people view information.”

The International Supercomputing Conference (ISC) blog’s High Performance Cloud Computing Still an Oxymoron post of 6/25/2009 observes:

There was general agreement [at ISC ‘09 in Hamburg, Germany] on the benefits of cloud computing: elastic capacity, pay-per-use model, platform abstraction, economies of scale, and built-in fault tolerance. Unfortunately -- and maybe significantly -- there didn't seem to be much consensus about whether the clouds would usurp traditional HPC infrastructure as the platform of choice.

C. Burns and B. Guptill’s MIT Cloud Computing Forums: Executives Don’t Know What They Don’t Know research report of 6/24/2009 (site registration required) begins:

CIOs and similar high-ranking user executives see promise in Cloud Computing and, for the most part, believe that they understand what it is, and how to benefit from it. But insights from a recent four-day series of events with CIOs around the US indicate that, in reality, there are multiple definitions of Cloud Computing - and relatively few executives can see the scope of its effects.

From June 11 through June 18, Saugatuck Research VP Charlie Burns took part in four expert panel and networking reception events examining the realities of Cloud Computing, and their effects on user business and IT strategy, planning and management.

And continues with an analysis of “[d]iscussions during the events and private conversations with session attendees.”

Lori MacVittie suggests Five questions you need to ask about load balancing and the cloud on 6/25/2009 and starts with:

Horizontal scaling of applications is a fairly well understood process that involves (old skool) server virtualization of the network kind: making many servers (instances) look like one to the outside world. When you start adding instances to increase capacity for your application, load balancing necessarily gets involved as it’s the way in which horizontal scalability is implemented today. …
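
The core mechanism Lori alludes to, making many instances answer as one, can be reduced to the toy round-robin selector sketched below. This is illustration only; production load balancers layer on health checks, session persistence and weighting, which is exactly where her five questions come in.

```csharp
using System.Threading;

// Toy round-robin: each call hands back the next server in the pool.
public class RoundRobinPool
{
    private readonly string[] _servers;
    private int _counter = -1;

    public RoundRobinPool(params string[] servers)
    {
        _servers = servers;
    }

    public string NextServer()
    {
        // Interlocked keeps the counter safe under concurrent requests;
        // masking keeps the index non-negative after wraparound.
        int n = Interlocked.Increment(ref _counter) & int.MaxValue;
        return _servers[n % _servers.Length];
    }
}
```

A front end would construct one pool, e.g. new RoundRobinPool("10.0.0.1", "10.0.0.2", "10.0.0.3"), and call NextServer() per request; adding capacity is then just adding an entry.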

Joe McKendrick reports in his Survey: Wall Street looks to cloud technology for its next bailout post of 6/25/2009:

A new survey released by IBM and Securities Industry and Financial Markets Association (SIFMA) finds that IT budgets are tight on Wall Street, but things are loosening up, and there’s going to be plenty of demand for new technology initiatives in the near future as firms on the Street look to “transformational” solutions to help better manage risk.

The survey of more than 350 Wall Street IT professionals found a “significant” increase in interest in new technologies and computing models, in particular cloud computing, as firms seek to overcome budgetary restrictions and skills shortages. Almost half of the respondents now see cloud computing as a disruptive force. …

Gartner’s Lydia Leong asks on 6/26/2009 Does Procurement know what you care about? when sourcing cloud computing services:

Increasingly, … procurement is self-educating via the Internet. I’ve been seeing this a bit in relationship to the cloud (although there, the big waves are being made by business leadership, especially the CEO and CFO, reading about cloud in the press and online, more so than Purchasing), and a whole lot in the CDN market, where things like Dan Rayburn’s blog posts on CDN pricing provide some open guidance on market pricing. Bereft of context, and armed with just enough knowledge to be dangerous, purchasing folks looking across a market for the cheapest place to source something, can arrive at incorrect conclusions about what IT is really trying to source, and misjudge how much negotiating leverage they’ll really have with a vendor.

Derrick Harris analyzes cloud-based infrastructure in these recent weekly posts to the GigaOm PRO Beta network:

Derrick is the Infrastructure Curator for the GigaOM Network.

Toby Wolpe’s Gartner rejigs cloud definitions article of 6/24/2009 for ZDNet UK and Chris Talbot’s Gartner identifies ideal attributes of cloud computing post of 6/25/2009 for eChannelLine cover Daryl Plummer’s revamped definition of cloud computing. According to Chris, the five “ideal attributes” are, in brief:

  1. Service-based
  2. Scalable and elastic
  3. Shared
  4. Metered by use
  5. Uses Internet technology

Chris provides more details of the five points, while Toby delivers more background.

Daryl Plummer is a managing vice president and chief Gartner fellow.

Mary Jo Foley’s “All about Azure” Webcast of 6/24/2009 and slides should be available for download here, but the link doesn’t work. I’ll update this post if and when ZDNet fixes it. See the Cloud Computing Events section for more details.

• Julie Bort reports on 6/24/2009 that Many companies say they will adopt cloud computing within two years based on a “Microsoft-sponsored [Harris Interactive] study on IT spending [which] shows green is out, efficiency is in and security is still painful.”

One-third of 1,200 organizations (33%) plan to convert their application environments away from a traditional, client-server model to one based on virtualization and cloud computing over the next two years, according to a study commissioned by Microsoft and released today. The study sought to broadly determine global IT spending priorities.

While the survey was far from comprehensive, it did uncover a few silver-lining facts. IT spending budgets will not be cut, with 98% saying they will generally maintain or increase their planned investment. Nearly 2/3 say the economy has created reason to invest more in one or more areas of technology. And of those, virtualization, security, systems management and cloud computing are the areas of choice. Specifically:

  • 42% plan increased investment in virtualization.
  • 36% plan increased investment in security.
  • 24% plan increased investment in systems management.
  • 16% plan increased investment in cloud computing.

• Dana Gardner chimes in on the Harris Interactive report in his Virtualization and Cloud Computing Get IT Green Light post of 6/24/2009 subtitled “Cloud and upgraded computing future brightens despite overcast economy, Microsoft-sponsored survey finds.” Gardner concludes:

The survey confirmed Microsoft’s in-house belief that IT budgets still have room for investment in infrastructure innovations, he said. The Redmond folks hope that will include convincing corporate IT departments, which pretty much skipped the Vista era, to finally move from Windows XP to Windows 7.

More survey highlights are available at the Microsoft Core Infrastructure Optimization site.

• James Hamilton describes his ISCA 2009 Keynote II: Internet-Scale Service Infrastructure Efficiency session in this 6/24/2009 post:

I presented the keynote at the International Symposium on Computer Architecture 2009 yesterday. Kathy Yelick kicked off the conference with the other keynote on Monday: How to Waste a Parallel Computer.

Thanks to ISCA Program Chair Luiz Barroso for the invitation and for organizing an amazingly successful conference. I’m just sorry I had to leave a day early to attend a customer event this morning. My slides: Internet-Scale Service Infrastructure Efficiency.

Abstract: High-scale cloud services provide economies of scale of five to ten over small-scale deployments, and are becoming a large part of both enterprise information processing and consumer services. Even very large enterprise IT deployments have quite different cost drivers and optimization points from internet-scale services. The former are people-dominated from a cost perspective, whereas internet-scale service costs are driven by server hardware and infrastructure, with people costs fading into the noise at less than 10%.

In this talk we inventory where the infrastructure costs are in internet-scale services. We track power distribution from 115 kV at the property line through all conversions into the data center, tracking the losses to final delivery at semiconductor voltage levels. We track cooling and all the energy conversions from power dissipation through release to the environment outside of the building. Understanding where the costs and inefficiencies lie, we’ll look more closely at cooling and overall mechanical system design, server hardware design, and software techniques including graceful degradation mode, power yield management, and resource consumption shaping.
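
To see why the conversion chain matters, multiply the stage efficiencies. The numbers below are hypothetical, for illustration only, not figures from the talk:

```latex
\eta_{\text{total}} = \prod_i \eta_i
  = \underbrace{0.995}_{\text{HV transformer}}
    \times \underbrace{0.94}_{\text{UPS}}
    \times \underbrace{0.98}_{\text{PDU}}
    \times \underbrace{0.80}_{\text{server PSU}}
  \approx 0.73
```

Even with every stage individually efficient, roughly a quarter of the power crossing the property line never reaches the silicon, and that is before counting cooling.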

• Reuven Cohen complains about fixed software licensing fees for cloud deployment in his Examining Utility Software Licensing post of 6/24/2009. Ruv writes:

Recently I read an article about a traditional enterprise grid computing company that is attempting to enter the nascent cloud computing market. Without naming names, I will say the technology is probably decent; what they seem to lack is any real insight into the cost advantages that cloud computing enables. What I'm getting at is the ability to scale your resources -- hardware and software alike -- as you need them, only paying for what you need, when you need it. This is arguably one of the key advantages of cloud computing, be it a private or public cloud.

My biggest issue with enterprise software companies applying traditional software licensing to cloud infrastructure software is that by charging $1,000 per year / per node, you are in a sense applying a static costing model to a dynamic environment, which basically negates any of the cost advantages that cloud computing brings. It's almost like they're saying this is how we've always done it, so why change? To put it another way, on one hand they're saying "reinvent your datacenter," yet on the other hand they're saying "we don't need to reinvent how we bill you."

Dmitry Sotnikov claims A VM running in EC2 is not SaaS in this 6/23/2009 post:

Just because you have software packaged as a virtual machine and running in Amazon EC2 does not mean you have a “cloud” offering.

As easy as it sounds in most cases when a vendor claims they have their software available as a service/cloud offering – it is just that: a virtual machine image (such as Amazon Machine Image – AMI) and maybe a hosting partner eager to host this virtual machine for you.

Dmitry then goes on to analyze Lydia Leong’s US$95 Gartner report on the topic, “Software on Amazon’s Elastic Compute Cloud: How to Tell Hype From Reality.”

Reuven Cohen’s MIT Technology Review Names Key Cloud Players post of 6/23/2009 provides a brief summary and the following links to articles in the July/August issue of the MIT Technology Review:

10Gen is developing MongoDB, a database for the cloud that supports Ruby, Python, Java, C++, PHP, Perl, and server-side JavaScript and has more features than key-value (Entity-Attribute-Value, EAV) databases.

Here’s the MIT Cloud Stack:

Robin Wauters reports Microsoft Poaches Former Yahoo Exec To Head Up Data Center Services in this 6/22/2009 TechCrunch post:

Acquiring Yahoo, one employee at a time: Microsoft has recruited Kevin Timmons, former lead of Yahoo’s data center team, to head up its Data Center Services organization. Timmons was once director of Operations at GeoCities and worked his way up to VP of Operations at Yahoo, where he led the build-out of the company’s data centers and infrastructure.

Robert L. Scheier’s Busting the nine myths of cloud computing post of 6/22/2009 for InfoWorld’s Cloud Computing column carries this deck:

Vendor hype and IT self-delusion can quickly lead to disappointment. If you're considering a cloud strategy, don't get fooled by these false premises.

And William Hurley asks Will lawyers ruin cloud computing?

Looming legal battles over privacy, security, regulation, and intellectual property have the potential to steal cloud computing's thunder

Brent Stineman asks is Cloud Computing [a] backlash against constraints? in this 6/22/2009 post:

[He cannot] help but ponder if one motivation for moving to the cloud was this “need” to not be limited by existing infrastructure. How many folks will look to the cloud not because of cost, or features, but simply because the near endless resources it brings mean that they are no longer bound by the constraints imposed by their existing infrastructure. They can operate outside of enterprise infrastructure governance and budgeting.

Lydia Leong recommends that cloud-compute vendors avoid Overpromising in this 6/22/2009 post:

I’ve turned one of my earlier blog entries, Smoke-and-mirrors and cloud software into a full-blown research note: “Software on Amazon’s Elastic Compute Cloud: How to Tell Hype From Reality” (clients only). It’s a Q&A for your software vendor, if they suggest that you deploy their solution on EC2, or if you want to do so and you’re wondering what vendor support you’ll get if you do so. The information is specific to Amazon (since most client inquiries of this type involve Amazon), but somewhat applicable to other cloud compute service providers, too.

More broadly, I’ve noticed an increasing tendency on the part of cloud compute vendors to over-promise. It’s not credible, and it leaves prospective customers scratching their heads and feeling like someone has tried to pull a fast one on them. Worse still, it could leave more gullible businesses going into implementations that ultimately fail. This is exactly what drives the Trough of Disillusionment of the hype cycle and hampers productive mainstream adoption. …

Ben Kepes summarizes the first session of the Enterprise 2.0 2009 conference by Alistair Croll in his Cloud Computing – A Real World Guide post of 6/22/2009. Croll is co-author of Complete Web Monitoring and a principal analyst for Bitcurrent.

Reuven Cohen says “On second thoughts, ‘Multiverse’ does little to describe how each of those clouds interact” in his The Cloud Computing Metaverse post of 6/21/2009:

In describing my theory on the Cloud Multiverse, I may have missed the few obvious implications of using the prefix "multi," meaning consisting of more than one part or entity. Although the Cloud Multiverse thesis suggests there will be more than one internet-based platform or cloud to choose from, it does little to describe how each of those clouds interacts. For this we need another way to describe how each of these virtualized interconnected environments interacts with the others.

In place of "multi" I suggest we use the prefix "Meta" (from Greek: μετά = "after", "beyond", "with", "adjacent", "self").

Michelle Munson explains Avoiding Latency in the Cloud in this 6/20/2009 post to the GigaOM blog for the Structure Conference. Michelle begins:

The cloud promises to change the way businesses, governments and consumers access, use and move data. For many organizations, a big selling point in cloud infrastructure services is migrating massive data sets to relieve internal storage requirements, leverage vast computing power, reduce or contain their data center footprint, and free up IT resources for strategic business initiatives. As we move critical and non-critical data to the cloud, reliable, secure and fast access to that information is crucial. But given bandwidth and distance constraints, how do we move and manage that data to and from the cloud, and between different cloud services, in a cost-efficient, scalable manner?

Apprenda, Inc.’s SaaSGrid PaaS offering sounds a bit like Azure:

SaaSGrid℠ is a comprehensive Platform as a Service (PaaS) offering that drastically reduces time-to-market, allows organizations to build complex and powerful SaaS applications and affords them the ability to easily manage their SaaS business. SaaSGrid focuses on reducing the barrier to entry for SaaS by smashing significant technical hurdles like multi-tenancy and by providing "out of the box" application services like monetization and billing, while supplying ongoing value with an arsenal of management tools to manage a SaaS business and associated application maintenance.

Build real enterprise SaaS applications with technologies you already know. SaaSGrid applications are written using Microsoft .NET languages and the simple yet powerful SaaSGrid API. There is no need to learn new programming languages or flashy online 'drag and drop editors' that impose artificial limitations on your business. In fact, with SaaSGrid, the web-based enterprise apps you've already built using .NET are probably closer to SaaS-enabled than you think. SaaSGrid allows you to take advantage of your existing assets and knowledge, and extend them with massive SaaS-focused value. [Emphasis Apprenda’s.]

I’d certainly like to see a point-by-point comparison with Azure WebRoles and .NET Services.

Cloud Security and Governance

<Return to section navigation list>

•• Aliya Sternstein’s Microsoft: Legacy systems not a barrier to [government] cloud computing article of 6/26/2009 for NextGov quotes Susie Adams, Microsoft’s chief technology officer for federal civilian agencies:

"It's an evolution of the industry." And transitioning does not require overhauling all computer programs and hardware. "The first entree from a transparency perspective is to put publicly available data into the cloud. That's the least risky," Adams said.

To ensure Microsoft remains a player in the growing cloud market, company officials are developing software that is interoperable, or able to exchange information among multiple systems and services. "It's all about choices," she said. "It's going to be a hybrid world.”

•• Greg Papadopoulos is quoted in TechPulse360’s Public Computing Clouds Could Be More Secure Than Private Ones post of 6/26/2009:

“Most public clouds are run in a more secure manner than the networks enterprises maintain on their own. Not all private companies maintain the same discipline,” he said Thursday at the Structure 09 conference in San Francisco.

This is a common refrain that few CTOs, CIOs or CISOs appear to believe. Greg is CTO and Executive Vice President of Research and Development at Sun Microsystems.

Reuven Cohen’s IBM Solves Cryptographic Cloud Security post of 6/25/2009 comments on IBM’s purported discovery of “a method to fully process encrypted data without knowing its content. If true, this could greatly further data privacy and strengthen cloud computing security.” Ruv quotes IBM’s press release:

An IBM researcher has solved a thorny mathematical problem that has confounded scientists since the invention of public-key encryption several decades ago. The breakthrough, called "privacy homomorphism," or "fully homomorphic encryption," makes possible the deep and unlimited analysis of encrypted information -- data that has been intentionally scrambled -- without sacrificing confidentiality.

And adds this caveat in an update:

According to a Forbes article, Gentry's elegant solution has a catch: It requires immense computational effort. In the case of a Google search, for instance, performing the process with encrypted keywords would multiply the necessary computing time by around 1 trillion, Gentry estimates. But now that Gentry has broken the theoretical barrier to fully homomorphic encryption, the steps to make it practical won't be far behind, predicts professor Rivest. "There's a lot of engineering work to be done," he says. "But until now we've thought this might not be possible. Now we know it is." [Emphasis added.]
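
For readers new to the term, here is the property schematically (not Gentry’s actual construction): an encryption function E is fully homomorphic when operations on ciphertexts mirror both addition and multiplication of the plaintexts.

```latex
E(m_1) \oplus E(m_2) = E(m_1 + m_2), \qquad
E(m_1) \otimes E(m_2) = E(m_1 \cdot m_2)
```

Textbook RSA, by contrast, is homomorphic for multiplication only, since E(m_1) E(m_2) = (m_1 m_2)^e = E(m_1 m_2) mod n; supporting both operations, composed arbitrarily, is what had remained open for decades.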

Government Information Security Podcasts offers the Audit, Risk Trends: Insights from David Melnick of Deloitte podcast in this 6/22/2009 post:

Audit and enterprise risk: they're inextricably linked. Growing cyber threats, from the inside and out, require organizations and their regulators to pay closer attention to technology and information security.

What are some of the key audit and risk trends to track? David Melnick of Deloitte answers that question in an interview focusing on:

  • Top challenges for financial institutions and government agencies;
  • Successful strategies being deployed to mitigate threats;
  • Trends organizations should track as they eye 2010.

Melnick is a principal in security and privacy services within the audit and enterprise risk services practice in the Los Angeles office of Deloitte and brings more than 17 years of experience designing, developing, managing and auditing large scale secure technology infrastructure. Melnick has authored several technology books and is a frequent speaker on the topics of security and electronic commerce.

Cloud Computing Events

<Return to section navigation list>

•• O’Reilly Media offers videos of sessions from its Velocity 09 Web Performance and Operations Conference, held 6/22 - 6/24/2009 in San Jose, California. Videos with cloud topics include:

•• David Pallmann will be “speaking at So Cal Code Camp this weekend in San Diego. [His] talk is on Azure Design Patterns, Saturday at 2:30:”

This session will present Design Patterns for cloud computing on the Azure platform. Azure provides oodles of functionality that range from application hosting and storage to enterprise-grade security and workflow. Design patterns help you think about these capabilities in the right way and how they can be combined into composite applications. We'll cover design patterns for hosting, data, communication, synchronization, and security as well as composite application patterns that combine them. We'll be doing hands-on code demos of a number of composite applications, including a grid computing application. Azure Design Patterns Web Site.

When: 6/27 and 6/28/2009 9:00 AM to 5:30 PM 
Where: UCSD Extension Complex, 9600 N. Torrey Pines Rd., La Jolla, CA 92037

•• Krishnan Subramanian summarizes the Structure 09 Panel - The Myth Of One Size Fits All Cloud, Structure 09 Panel - Building The Perfect Host for Web Apps and Structure 09 Panel: From Dataspaces To Databases panel discussions of 6/25/2009 from GigaOm’s Structure 09 conference.

•• Brandon Watson reports that GigaOm’s Structure 09, Putting Cloud Computing to Work, will stream on 6/25/2009 the following panels with Microsoft representatives:

  • 11:05 AM: The Myth of the One-Size-Fits-All Cloud (Yousef Khalidi)
  • 1:30 PM: Toward Cloud Computing: Private Enterprise Clouds As A First Step (Brandon Watson)
  • 3:30 PM: Spinning the Web to the Cloud (Brian Goldfarb and Steve Yi)
  • 4:00 PM: On The Shoulders of Giants (Najam Ahmad)

More details about the panels and presentations are here.

Brandon says in his What Is Cloud Computing? post of 6/25/2009 from Structure 09 that “the word ‘cloud’ is catnip for nerds.” … “Next up on the zeitgeist watch?  Attaching the word “scale” to the name of your company.”

John Willis reports that he “will be moderating next week’s Cloud Camp in Columbus, Ohio on Tuesday 6/30/09” in his Cloud Camp Columbus and The IBM Blue Cloud post of 6/24/2009:

I will be giving two session[s] at the conference, “Introduction to Clouds” and “Clouds in the Enterprise”. My “Clouds in the Enterprise” session will cover IBM’s new “Blue Cloud/Cloudbursting” announcement. If you happen to be in the Columbus area next Tuesday you should come and learn more about Cloud Computing. If you have any questions please feel free to contact me. Also, I have reserved extra tickets for Tivoli users.

When: 6/30/2009 5:00 PM to 8:30 PM 
Where: TechColumbus, 1275 Kinnear Rd, Columbus, OH 43212

Mary Jo Foley will “help sort out what Azure is (and what it isn’t) in a live Webcast on Wednesday, June 24 at 1:00 PM ET / 10:00 AM PT / 5:00 PM GMT” according to Jason Hiner. “This is a good opportunity to get up to speed on Azure before Microsoft launches it later this year.”

Jason describes the content:

ZDNet’s “All About Microsoft” blog editor Mary Jo Foley will offer an Azure primer. She’ll explain what Azure is — from the base operating system level, to the higher-level services layers, to the “user experience.” Foley will compare Azure to competing cloud platforms from Amazon, Google and other players. She will discuss how Microsoft is using and plans to use the platform itself. And Foley will differentiate between what we know about Azure from what many are anticipating from the platform.

Even if you’re dragging your heels about moving your apps and data “to the cloud,” it’s not too soon to hear more about Microsoft’s cloud plans. This Webcast will provide a high-level overview of where Microsoft has been and where it’s going in the cloud/utility computing market.

When: 6/24/2009 at 10:00 AM PDT
Where: Internet (Webcast). You should be able to download the audio archive of the All About Azure Webcast from http://bit.ly/uSyYO and the slides here.

Nandita of Microsoft’s Public Sector DPE Team announces Microsoft Developer Dinner Series for Partners Presenting: Microsoft Open Government Data Initiative – Cloud Computing, REST, AJAX, CSS, oh my! - June 24, 2009 - Reston, VA. Speakers will be:

    • Marc Schweigert, Developer Evangelist 
    • James Chittenden, User Experience Evangelist
    • Vlad Vinogradsky, Architect Evangelist

To help public sector entities meet these demands, Microsoft announced the Open Government Data Initiative (OGDI) on May 7, 2009. OGDI provides an Internet-standards-based approach to house existing public government data in Microsoft’s cloud computing platform, called Windows Azure. The approach makes the data accessible in a programmatic manner that uses open, industry-standard protocols and application programming interfaces (APIs).

Typically, federal, state and local government data is available via download from government Web sites, which requires citizen developers to host and maintain the data themselves. Through OGDI, Microsoft is highlighting the importance of programmatic access to government data (versus downloading the data).

Register here.

Here are the follow up links promised:

When: 6/24/2009 from 6:00 PM to 8:00 PM PDT 
Where: Microsoft Innovation & Technology Center, 12012 Sunset Hills Road Reston, VA 20190

Wayne Eckerson will present a Webinar, Making Sense of SaaS BI: The Pros and Cons of Moving BI to the Cloud, on 6/25/2009 at 9:00 AM PDT:

Companies are adopting Software as a Service (SaaS) business intelligence (BI) solutions at a record pace as they upgrade from complex collections of spreadsheets and augment their existing BI deployments. Before your company jumps into the fray of deploying BI using the Cloud computing model, join industry expert Wayne Eckerson, Director of The Data Warehousing Institute (TDWI) Research, for straight talk about pitfalls to avoid and how to achieve a rapid Return on Investment (ROI).

Register here.

When: 6/25/2009 at 9:00 AM PDT
Where: Internet (Webinar)

David Pallman announces the Next Orange County Azure User Group Meeting Thursday 6/25 on Silverlight and Azure in this 6/20/2009 post:

The Orange County Azure User Group next meets on Thursday, June 25 at 6pm. The topic for this month's meeting is Silverlight and Azure. David Pallmann and Richard Fencel will both be presenting.

In David's presentation, you'll learn how to create rich Silverlight applications that are Azure-hosted and take advantage of cloud services. We'll build an Azure-hosted Silverlight application from the ground up that utilizes web services and cloud storage.

When: 6/25/2009 from 6:00 PM to 8:30 PM PDT 
Where: QuickStart Intelligence, 16815 Von Karman Avenue, Suite 100, Irvine, CA 92606

SOA World reports SOA & Cloud Computing To Intersect This Week at SOA World on 6/22 – 6/23/2009 “at The Roosevelt Hotel, the 15th International SOA World Conference & Expo.”

When: 6/22 to 6/23/2009 
Where: Roosevelt Hotel, New York

Other Cloud Computing Platforms and Services

<Return to section navigation list> 

••• John Foley lists 10 Essentials Of IBM's Cloud Computing Strategy in this 6/26/2009 post to InformationWeek’s Cloud Computing Destination. John writes:

IBM recently made its most significant cloud computing announcement to date, which one executive compares to the launch of Big Blue's venerable System/360 mainframe 40 years ago. Following is my list of the top 10 things you need to know about IBM's emerging cloud strategy. …

•• John Willis’ Big 4 Little 4 - Private Clouds post of 6/25/2009 offers brief opinions about what he considers the Big 4 in private clouds:

and the Little 4:

•• Ted Leung and Ashwin Rao contributed the Explaining the Allure of Cloud Computing post of 6/25/2009 to Sun Microsystems’ SystemNews, which summarizes a presentation at JavaOne. Topics covered (briefly) are:

  • Attractions of the Cloud
  • Problems of the Cloud
  • The Current Tool Landscape
  • Sun's Cloud Tools (Kenai, Zembly, O’Malley, and Speedway)

 John Foley’s Oracle Moves A 'Little Bit' Into Cloud Computing post of 6/24/2009 to InformationWeek’s Cloud Computing segment begins:

Following his outburst against cloud computing last year, it appears that Larry Ellison has warmed up to the cloud computing model, if not the buzz phrase itself. Oracle's CEO yesterday said it's a goal to become the software industry's "number one on-demand application company."

Ellison last year lambasted cloud computing, referring to the hype around it as "idiocy," "gibberish," and "crazy." As I pointed out at the time, however, Oracle was moving into cloud computing even as its leader railed against it. During a conference call yesterday with analysts to discuss Oracle's financial results, Ellison provided evidence that Oracle is indeed making progress on this front and has ambitious goals in the software-as-a-service market.

"We think we can be the number one applications company, the number one on-premise application company, and the number one on-demand application company. That's our goal," he said. …

Throughout all of this, Ellison didn't use the term cloud computing, referring instead to on-demand software. One analyst observed, "It sounds like you're getting into cloud computing." To which Ellison, the cloud antagonist, responded: "Little bit."

 John Treadway explains the architecture of Joyent - Yes Virginia, There Is A Hybrid Cloud in this 6/24/2009 post based on John’s conversation with James Duncan and Bryan Bogensberger of Joyent at #e2conf. John concludes:

Effectively, your servers are “joined” to the cloud. This is my “marketecture” view from my conversation with James and Bryan, and what they end up releasing may look very different. But if what they say is true, they may be one of the first to have actually deployed a hybrid cloud into production. That’s huge - like Santa Claus is Real kind of huge!

Rich Miller reports on Yahoo’s new data center in Quincy, Wash. that neighbors Microsoft’s in his oddly titled Yahoo’s Unstealths Its Data Center Efficiency post of 6/24/2009:

When it comes to data center efficiency, Yahoo has maintained a lower profile than rivals Google and Microsoft. But the Yahoo team is building a compelling data center story of its own, with innovations in cooling design and energy efficiency ratings approaching the best that Google has achieved.

Yahoo’s Adam Bechtel began telling the story yesterday at the O’Reilly Velocity 2009 conference in San Jose, Calif. Bechtel, the chief architect of Yahoo’s data center operations, shared details of a patented cold-aisle containment system that integrates an overhead cooling module, building the air conditioning units into the top of a “podule” of cabinets packed with servers. …

Yeshim Dentz’s HP Introduces Cloud Consulting Services post of 6/23/2009 announces “[t]he new offerings, including the HP Cloud Discovery Workshop and HP Cloud Roadmap Service.”

Shannon Williams posted Top 7 Requirements from Infrastructure Cloud Providers to the VMOps blog on 6/24/2009:

Right now, a huge number of service providers are making plans to launch computing clouds, and I thought it would be interesting to outline some of the requirements I often hear from prospective cloud providers here. …

  1. Our clouds need to run on inexpensive storage.
  2. We want to build on an Open-Source Hypervisor.
  3. We need a way to integrate with our Billing & Provisioning apps.
  4. We need to support both Windows and Linux VMs, and that means image based pricing.
  5. We want an API, but also a UI that makes admin simple for end-users.
  6. Cloud images need to be more reliable than dedicated servers.
  7. We want a turn-key solution, not something we have to maintain.

The post includes details of the seven “requirements.” (Apparently, the original post was named “7 Challenges for the Would-Be Cloud Architect.”)

Rich Miller reports on 6/23/2009 that Amazon Adds Cloud Data Center in Virginia:

As Amazon’s cloud continues to grow, the company is investing in real-world brick-and-mortar data centers to provide additional capacity. The retail/infrastructure company recently leased a 110,000 square foot property in northern Virginia to expand its data center footprint.

The additional space will help accommodate dramatic growth for Amazon Web Services, the suite of services that allow companies to run their applications on Amazon’s infrastructure and pay based on usage. More than 500,000 developers are now using AWS, and Amazon’s S3 storage now houses more than 50 billion objects.

Jay Fry restarts the Cassatt Data Center Blog for CA with his A front row seat for the private cloud evolution: our top content post of 6/22/2009 which offers highlights of the blog’s past six months.

David Linthicum claims IBM 'Clouds' Look Like Conventional IT in this 6/22/2009 post to Intelligent Enterprise:

According to this e-Week report, and this report in the New York Times, IBM continues to form its cloud computing strategy, including the definition of some key products. …

The issue here is that cloud computing is really about, well, cloud computing. Existing hardware and software vendors, including Microsoft, Cisco, HP, etc., and of course IBM, seem to find that thought a bit scary and continue to toss traditional hardware and software at the problem. …

I don’t believe Microsoft is throwing the same hardware into its data center as Cisco, HP and IBM want to sell to private cloud wannabes.

Following is IBM’s #CloudComputing Strategy Map #e2conf:

IBM Smart Business Framework

According to John Treadway, who posted the above slide on 6/22/2009:

The diagram above gives a bit of insight into where IBM is today and where they are heading. I posted this last week, but removed the diagram at IBM’s request. Now I’m reposting it after seeing Sean Poulay from IBM present the chart at the Enterprise 2.0 Conference in Boston.

Glenn Brunette describes Sun’s Immutable Service Containers (ISC) in his Project Kenai (Beta) post of 6/22/2009:

Immutable Service Containers (ISC) are an architectural deployment pattern used to describe a foundation for highly secure service delivery. ISCs are essentially a container into which a service or set of services is configured and deployed. First and foremost, ISCs are not based upon any one product or technology. In fact, an actual instantiation of an ISC can and often will differ based upon customer and application requirements. That said, each ISC embodies at its core the key principles inherent in the Sun Systemic Security framework including: self-preservation, defense in depth, least privilege, compartmentalization and proportionality.

Reuven Cohen delivers his two cents’ worth about Sun’s ISC in his Autonomic Cloud Security post of 6/22/2009.

Joe McKendrick asks on 6/22/2009 are Vendors being pushed into cloud, kicking and screaming?

Lately, if you have listened to the pronouncements of vendors large and small, they all are enthusiastically embracing cloud computing as the next wave of software and service delivery.

However, the Wall Street Journal’s Ben Worthen and Justin Scheck have a different take on all this happy cloud talk. The way they see it, the recent economic slump and tighter IT budgets have pushed many vendors into the cloud world, kicking and screaming.  Oracle, HP, IBM, Microsoft, and SAP all run the risk of seeing business move into a lower-margin space, with a longer timeframe to see revenues, they write.

HP Software Chief Tom Hogan even offers an eye-opening comment, admitting to WSJ that the move from traditional to cloud software is “highly disruptive,” and that “shareholders don’t like it, and it’s a real conflict between business strategy and fiduciary duty.” …

Ben Kepes summarizes in this 6/22/2009 post a recent panel discussion about Selecting Cloud Providers. Speakers were:

  • Tony Lucas, CEO, XCalibre
  • Simon West, Chief Marketing Officer, Terremark
  • Alex Barnett, Group Manager, Intuit Partner Platform and IDN, Intuit
  • Jason Hoffman, Founder and CTO, Joyent

James Urquhart’s The new generation of cloud-development platforms post of 6/22/2009 begins:

Software development "in the cloud" has been one of the really interesting developments to come out of the cloud computing market so far. While many early players, such as Zimky and Coghead, died on the vine, there is a pretty robust Platform as a Service (or "PaaS") market out there today, with Google App Engine taking the most visible lead, and a pretty solid stable of Ruby on Rails-based hosting providers telling a compelling story of their own.

Such success is driving some new players to seek the spotlight, however. I wanted to highlight two that I found most interesting. They are very different from one another, but those differences highlight the breadth of opportunity that remains in the PaaS market.

And goes on to describe AppScale, AppEngine, and TIBCO Silver, but not Azure, as PaaS players.

<Return to section navigation list> 

Tuesday, June 23, 2009

LINQ and Entity Framework Posts for 6/15/2009+

Note: This post is updated daily or more frequently, depending on the availability of new articles.

ADO.NET Team Blog posts of 6/22/2009 for EF v2 CTP 1 will be repeated in the 6/22/2009+ issue.

Entity Framework and Entity Data Model (EF/EDM)

Carl Perry’s Announcing: Entity Framework Feature CTP 1 for the .NET Framework 4.0 Beta 1 post of 6/22/2009 notes:

We weren’t able to ship these capabilities in the .NET Framework 4.0 Beta 1 so we’ve decided to release them alongside the Beta.  This CTP is an early preview of these features and as such we’re looking for lots of feedback on these components.  This functionality is currently not scheduled to be part of the .NET Framework 4.0 and we expect to release another CTP of these features based on the feedback we get from you. [Emphasis added.]

Following are CTP1’s new features:

The following posts of 6/22/2009 provide walkthroughs of the new features:

Julie Lerman comments on the ADO.NET Team’s deprecation of System.Data.OracleClient in her Oracle and ADO.NET- Microsoft: Deprecated; Oracle: quiet, DataDirect:Beta and DevArt:Released post of 6/21/2009.

She writes in her EF4: Model-Defined Functions Level 1 & 2 post of the same date:

Model-Defined Functions are a great addition to EF4. They allow you to add functions directly into your model rather than having to place the additional logic into business classes. This not only allows the functions to be “just there,” but you can use them in queries, something that you cannot do with properties that are defined in the classes.
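
A minimal sketch of what Julie describes, with hypothetical model and entity names; the CSDL fragment is shown as a comment so everything stays in one listing:

```csharp
using System;
using System.Data.Objects.DataClasses;

// Hypothetical entity for illustration.
public class OrderLine
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

// The function lives in the conceptual model (CSDL):
//
//   <Function Name="LineTotal" ReturnType="Edm.Decimal">
//     <Parameter Name="item" Type="Model.OrderLine" />
//     <DefiningExpression>item.Price * item.Quantity</DefiningExpression>
//   </Function>
//
// A CLR stub bound to it lets the function appear in LINQ queries:
public static class ModelFunctions
{
    [EdmFunction("Model", "LineTotal")]
    public static decimal LineTotal(OrderLine item)
    {
        // Never executed locally; EF translates calls into the model
        // function's defining expression inside the query.
        throw new NotSupportedException("Call only within LINQ to Entities.");
    }
}
```

A query such as from line in context.OrderLines select ModelFunctions.LineTotal(line) would then evaluate entirely in the store.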

Alex James shows you how to avoid superfluous queries and simplify use of EF in Tip 26 – How to avoid database queries using Stub Entities of 6/19/2009.
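
The gist of the tip, sketched with hypothetical Product/Category classes and a generated context (AddToProducts stands in for the code-generated add method): attach a “stub” that carries only its key, so relating entities needs no round trip.

```csharp
// Stub entity: only the key is set, nothing is queried.
var category = new Category { ID = 5 };
context.AttachTo("Categories", category);   // now tracked as Unchanged

var product = new Product { Name = "Widget" };
product.Category = category;                // relationship without a query
context.AddToProducts(product);
context.SaveChanges();
```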

Beth Massi writes in her WPF Data Binding Samples on Code Gallery post of 6/17/2009:

One of the many samples released for Visual Studio 2010 Beta 1 that you should be aware of are examples of WPF data binding against Entity Data Models. You can find some easy to follow samples here: http://code.msdn.microsoft.com/WPFDatabinding

This sample demonstrates how to create a WPF Forms solution that checks user input with validation code, demonstrates common controls such as DataGrid and ComboBox, and shows typical data manipulation including create, read, update, and delete. The sample solution is available in both Visual Basic and C# and is intended for use with Visual Studio 2010 Beta 1 and with the .NET Framework 3.5. In the future, we will release a sample that performs with the .NET Framework 4.0 Beta.

Simon Segal’s Entity Framework, Fluent Interfaces & Domain Specific Languages Part 2 of 6/16/2009 continues his tiny DSL series:

In the first part of this series I looked at how you might go about building an (incredibly tiny) domain specific language for analysing data. The context I gave was a scenario where project managers were required to work with a continuous stream of data in the form of a known schema. This ‘known’ schema is most commonly used in moving and transforming data between various systems in a domain where the central or end target is a Document Management System. The ‘known’ schema is an agreed format that all systems in this particular industry use to extract and subsequently load. It is common to see the project managers struggling with tools like Access to compose queries to analyse the data before or after these ETL processes, hence the proposition of a DSL.

Matthieu Mezil explains how to implement a “sub EntitySet” property in his SubObjectSet post of 6/16/2009. Matthieu writes:

With EF, when you use TPH or TPC inheritance mapping scenarios, the EntitySet is on the base class.

As I mentioned often in the past with EF v1, you can add a property in your context which returns the EntitySet.OfType<MySubType>().

OK, it’s interesting, but… in EF v1 the EntitySet is an ObjectQuery<T> property, and so is our property; in EF v2, however, the EntitySet is an ObjectSet<T>. This class implements the IObjectSet<T> interface, which has methods to add, attach and delete entities.

One guy tells me that he wants to be able to use these methods directly on the “sub EntitySet” property.

As you would expect, Matthieu provides the implementation.
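
One plausible shape for such a wrapper, a sketch rather than Matthieu’s actual code: query through OfType<TSub>() and delegate the IObjectSet<T> write methods (including Detach) to the base set.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;
using System.Linq.Expressions;

// A "sub EntitySet" that still supports add/attach/delete by
// delegating writes to the base ObjectSet.
public class SubObjectSet<TBase, TSub> : IObjectSet<TSub>
    where TBase : class
    where TSub : class, TBase
{
    private readonly ObjectSet<TBase> _baseSet;
    public SubObjectSet(ObjectSet<TBase> baseSet) { _baseSet = baseSet; }

    // Reads go through OfType so only TSub instances come back.
    private IQueryable<TSub> Query { get { return _baseSet.OfType<TSub>(); } }

    public void AddObject(TSub entity)    { _baseSet.AddObject(entity); }
    public void Attach(TSub entity)       { _baseSet.Attach(entity); }
    public void DeleteObject(TSub entity) { _baseSet.DeleteObject(entity); }
    public void Detach(TSub entity)       { _baseSet.Detach(entity); }

    public IEnumerator<TSub> GetEnumerator() { return Query.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator()  { return GetEnumerator(); }
    public Type ElementType        { get { return Query.ElementType; } }
    public Expression Expression   { get { return Query.Expression; } }
    public IQueryProvider Provider { get { return Query.Provider; } }
}
```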

Faisal Mohamood explains Using Repository and Unit of Work patterns with Entity Framework 4.0 with EF v2 POCO in his detailed post of 6/16/2009. Here are links to his three previous POCO posts:

In this post, Faisal “look[s] at how we might be able to take our example a bit further and use some of the common patterns such as Repository and Unit Of Work so that we can implement persistence specific concerns in our example.”
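
The usual shape of these patterns over EF 4.0’s IObjectSet<T>, sketched here with hypothetical interfaces rather than Faisal’s exact code:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

// Repository: hides IObjectSet<T> behind an intention-revealing API.
public interface IRepository<T> where T : class
{
    IQueryable<T> Find(Expression<Func<T, bool>> predicate);
    void Add(T entity);
    void Remove(T entity);
}

// Unit of Work: one commit boundary, typically wrapping
// ObjectContext.SaveChanges() on the shared context.
public interface IUnitOfWork : IDisposable
{
    IRepository<T> GetRepository<T>() where T : class;
    void Commit();
}
```

A POCO-friendly implementation backs Find with objectSet.Where(predicate), Add with AddObject and Commit with SaveChanges, keeping consumers free of any EF types.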

Jonathan Carter writes in his Gaining some context into ASP.NET AJAX 4’s DataContext… post of 6/18/2009:

The ASP.NET AJAX 4 release has some really cool features in it that can help lower the barrier of entry into developing client-side web applications (jQuery doesn’t hurt either). One of the more compelling new classes is the DataContext. Basically, the DataContext is an object that is capable of consuming a server-side resource that serves JSON data. In its most basic form, you simply give it the URI of a service and the operation name to execute and it handles making the underlying request. If you had an AJAX-enabled ASMX service like so (note: I’m using the Entity Framework)… [Emphasis added.]

and continues with his How the DataContext can change your data and your life (well, sort of, but not really)… post of 6/19/2009.
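
The code sample in that excerpt didn’t survive the trip here, so for readers who want the shape of such a service, here’s a generic sketch (mine, not Jonathan’s) of an AJAX-enabled ASMX service fronting an EF model; ProductService, Product and NorthwindEntities are hypothetical:

    using System.Linq;
    using System.Web.Script.Services;
    using System.Web.Services;

    [WebService(Namespace = "http://tempuri.org/")]
    [ScriptService] // lets ASP.NET AJAX clients call this and get JSON back
    public class ProductService : WebService
    {
        [WebMethod]
        public Product[] GetProducts()
        {
            using (var context = new NorthwindEntities()) // hypothetical EF context
            {
                return context.Products.Take(10).ToArray();
            }
        }
    }

The DataContext is then pointed at the .asmx URI and the operation name (here, GetProducts) and handles the request plumbing from there.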

Himanshu Vasishth’s System.Data.OracleClient Update post of 6/15/2009 announces that Microsoft’s System.Data.OracleClient ADO.NET data provider will be deprecated in favor of third-party Oracle providers in the .NET Framework 4.0:

We learned that a significantly large portion of customers use our partners’ ADO.NET providers for Oracle, with regularly updated support for Oracle releases and new features. In addition, many of the third-party providers are able to consistently provide the same level of quality and support that customers have come to expect from Microsoft. This is a strong testament to our partners’ support for our technologies and the strength of our partner ecosystem. It is our assessment that even if we made significant investments in ADO.NET OracleClient to bring it to parity with our partners’ providers, customers would not have a compelling reason to switch to ADO.NET OracleClient.

Jaroslaw Kowalski explains Using EFProviderWrappers with precompiled views in this 6/15/2009 post, which notes that:

Injecting a provider into provider chains involves changing the SSDL file, and that invalidates the hash.

Jarek describes the workaround.

Craig Lee is a member of the EF Tools team who recently started a blog. Following are his first two posts:

Subscribed. Thanks to Alex James for the heads up.

LINQ to SQL

Damien Guard is the interviewee for HerdingCode - Episode 50: Damien Guard on LINQ to SQL, Entity Framework, and Fontography of 6/21/2009:

This week the guys talk to Damien Guard, a developer working on LINQ to SQL and Entity Framework. After discussing data access for a while, they talk about the programming font Damien publishes, Envy Code R.

The post includes a detailed topic list.

Matt Warren posted the 15th chapter of his IQueryable saga, Building a LINQ IQueryable provider - Part XV (IQToolkit v0.15), on 6/16/2009. Matt says his new IQToolkit version offers these new features:

    • More Providers - MySQL and SQLite join the previous MS-only lineup.
    • Transactions - Use ADO transactions to control the isolation of your queries & updates.
    • Entity Providers - The provider concept is expanded to include tables of entities.
    • Entity Sessions - The session concept adds identity caching, change tracking and deferred updates via SubmitChanges.
    • Provider Factory - Create providers on the fly w/o knowing anything more than the database name and mapping.
    • Madness
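
If you haven’t followed the series, the heart of any such provider is a pair of types: an IQueryable<T> that captures an expression tree and an IQueryProvider that translates and executes it. A stripped-to-the-bone sketch of that skeleton (not IQToolkit code):

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    // The IQueryable<T> half: holds a provider and an expression tree.
    public class Query<T> : IQueryable<T>
    {
        public Query(IQueryProvider provider)
        {
            Provider = provider;
            Expression = Expression.Constant(this);
        }

        public Query(IQueryProvider provider, Expression expression)
        {
            Provider = provider;
            Expression = expression;
        }

        public Type ElementType { get { return typeof(T); } }
        public Expression Expression { get; private set; }
        public IQueryProvider Provider { get; private set; }

        public IEnumerator<T> GetEnumerator()
        {
            return ((IEnumerable<T>)Provider.Execute(Expression)).GetEnumerator();
        }

        IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
    }

    // The IQueryProvider half: all the real work happens in Execute.
    public abstract class QueryProvider : IQueryProvider
    {
        public IQueryable<TElement> CreateQuery<TElement>(Expression expression)
        {
            return new Query<TElement>(this, expression);
        }

        public IQueryable CreateQuery(Expression expression)
        {
            // A full provider reflects over expression.Type for the element type.
            throw new NotImplementedException();
        }

        public TResult Execute<TResult>(Expression expression)
        {
            return (TResult)Execute(expression);
        }

        // Derived classes translate the tree (e.g., to SQL) and run it here.
        public abstract object Execute(Expression expression);
    }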

LINQ to Objects, LINQ to XML, et al.

Jim Wooley writes in his Add Extension Methods in LinqPad post of 6/20/2009:

As we already announced, the samples for chapters 1-8 of our LINQ in Action book are available through LINQPad. This includes the LINQ to Objects and LINQ to SQL samples. I've been working on the LINQ to XML chapters (9-11) and hope that we will add them to the download soon. In the process, I've needed to learn a bit about how LINQPad works under the covers in order to add specialized classes. …

If you need to refer to external methods or add other classes, choose the Program option. This will add a Sub Main method and allow you to add additional methods. …

ADO.NET Data Services (Astoria)

No significant new posts as of 6/23/2009 9:00 AM PDT

ASP.NET Dynamic Data (DD)

No significant new posts as of 6/23/2009 9:00 AM PDT

Miscellaneous (WPF, WCF, MVC, Silverlight, etc.)

David Ebbo announced A new and improved ASP.NET MVC T4 template in his 6/17/2009 post:

A couple weeks ago, I blogged about using a Build provider and CodeDom to generate strongly typed MVC helpers at runtime.  I followed up a few days later with another version that used T4 templates instead, making it easier to customize.

And now I’m back with yet another post on this topic, but this time with a much simpler and improved approach!  The big difference is that I’m now doing the generation at design time instead of runtime.  As you will see, this has a lot of advantages.

Update: current version is now 0.9.0006 (attached to his post as a zip file)

Monday, June 22, 2009

Windows Azure and Cloud Computing Posts for 6/15/2009+

Windows Azure, Azure Data Services, SQL Data Services and related cloud computing topics now appear in this weekly series.

Updated 6/20 – 6/21/2009: Additions
Updated 6/18 – 6/19/2009: Additions and correction of date typos
• Updated 6/16 – 6/17/2009: Microsoft’s Allison Watson on Azure pricing; Robert Le Moine on Taking .NET Development to the Cloud and other additions

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use these links, click the post title to display the single post you want to navigate.

Azure Blob, Table and Queue Services

<Return to section navigation list> 

• Stefan Tilkov discusses REST and Transactions with Arnon Rotem-Gal-Oz’s observations about RETRO: A RESTful Transaction Model in this 6/17/2009 post.

Brent Stineman’s A last word on Azure Queues (Performance) post of 6/14/2009 describes his downloadable Queue performance test app:

Some time ago, someone came by the MSDN Windows Azure forums and asked a question regarding performance of Azure Queues. They didn’t just want to know something simple like call performance, but wanted to know more about throughput, from initial request until final response was received. So over the last month I managed to put together something that lets me create what I think is a fairly solid test sample. The solution involves a web role for initializing the test and monitoring the results, and a multi-threaded worker role that actually performs the test. Multiple worker roles could also have been used, but I wanted to create a sample that anyone in the CTP or using the local development fabric could easily execute. …
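
For flavor, here’s the shape of such a round-trip measurement; SendMessage and ReceiveMessage below are hypothetical stand-ins for whatever queue client you use, not the StorageClient API:

    using System;
    using System.Diagnostics;

    public static class QueueThroughputTest
    {
        // Measures the average round trip: enqueue until the message returns.
        public static void Run(Action<string> sendMessage,
                               Func<string> receiveMessage,
                               int iterations)
        {
            var stopwatch = new Stopwatch();
            for (int i = 0; i < iterations; i++)
            {
                string payload = "msg-" + i;
                stopwatch.Start();
                sendMessage(payload);

                // Poll until our message comes back; a real test would also
                // honor visibility timeouts and delete the received message.
                while (receiveMessage() != payload) { }
                stopwatch.Stop();
            }
            Console.WriteLine("Average round trip: {0:F1} ms",
                stopwatch.ElapsedMilliseconds / (double)iterations);
        }
    }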

Krishnan Subramanian describes Adobe’s new Tables app in his 6/15/2009 Adobe Releases Spreadsheet SaaS Application And Adds Premium Version post:

Let me highlight some of the features of Tables as noted in their announcement

• All users can add data simultaneously - solving one of the biggest problems with shared worksheets. All data is always up to date for everyone.
• Presence - lets you know who else is working on the table and where they are working.
• Private and common views - allow the team to work together, but see the information that is important to each person. Private views let you see information that is important to you, without disturbing others working on the sheet.
• Filtering - is real time, so you can play with the data and adjust your filter without having to open a dialog box for every change.
• Sorting - quick, simple, and always includes all of the data.

The most interesting feature for me is the idea of Private View and Common View. This feature really solves the problem encountered by people collaborating on spreadsheets online. Apart from these features, the functionality is very basic (well, that is the reason this product is still in the labs) and they have promised to add more features in the near future.

SQL Data Services (SDS)

<Return to section navigation list> 

See The Fort Worth .NET User Group’s June 2009 meeting will be Developing Applications Using [SQL] Data Services in the Cloud Computing Events section.

.NET Services: Access Control, Service Bus and Workflow

 <Return to section navigation list>

Vittorio Bertocci’s The Id Element Weekly: Donovan Follette on making the shift from ADFS v1 to Geneva Server post of 5/12/2009 describes the Donovan Follette on making the shift from ADFS v1 to Geneva Server video segment on Channel9:

In this week’s episode of the ID Element Vittorio interviews Donovan Follette… as the guest!

Donovan is a senior technical evangelist and a host for this very show: he has worked on identity since he joined Microsoft in 2005, and is a well-known expert in the ADFS community. In this episode Vittorio talks with Donovan about the relationship between ADFS and Geneva Server: Donovan explains in detail how to map the old terminology to the new concepts introduced in Geneva, focusing on differences and similarities in the two approaches, and in general equipping today’s ADFS expert with everything he or she needs for hitting the ground running with Geneva Server.

•• Matias Woloski describes the Claims-Driven Modifier control’s expressions for ClaimValue, Condition and Mapping, which the designer ordinarily sets for you (see below for more about the control).

Vittorio Bertocci’s Use claims for driving your web UI… without even *seeing* a line of code post of 6/19/2009 describes the new ASP.NET Claims-Driven Modifier server control. Vibro says:

While pretty much everybody can understand (& appreciate) the high-level story about claims, it is not always easy to make it concrete for everybody. The developer who had to deal with code handling multiple credentials, or had to track down where a certain authorization decision happens, sees very clearly where and how claims can make his life easier: UI developers, however, may have found it challenging to bridge the gap between understanding the general story and finding tangible ways in which claims make their work easier. Until now (at least I hope).

We have put together a demo which shows an example of what you could build on top of the Geneva Framework infrastructure and further raise the level of abstraction, to the point that a web developer is empowered to take advantage of the information unlocked by the claims with just a few clicks. This touches on the theme of customization, which somehow gets less attention than authentication and authorization (for obvious reasons) but deserves its place nonetheless. In any case, it’s not rocket science: it is a simple ASP.NET control that can modify the value of properties of other controls on the page, according to the value of the incoming claims. Despite its simplicity, it allows a surprising range of tricks :-) 

The code of the demo is available on code gallery, at http://code.msdn.microsoft.com/ClaimsDrivenControl.
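
The control does the plumbing for you, but the underlying idea is just reacting to incoming claims. A hand-rolled sketch of the same effect in code-behind, assuming the Geneva Framework (Microsoft.IdentityModel) object model; the claim type and WelcomePanel control are hypothetical:

    using System;
    using System.Linq;
    using Microsoft.IdentityModel.Claims;

    public partial class HomePage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            var identity = Page.User.Identity as IClaimsIdentity;
            if (identity == null) return;

            // Pull a (hypothetical) favorite-color claim issued by the STS...
            Claim colorClaim = identity.Claims.FirstOrDefault(
                c => c.ClaimType == "http://contoso.com/claims/favoritecolor");

            // ...and drive a UI property from its value - no authZ code needed.
            if (colorClaim != null)
            {
                WelcomePanel.BackColor =
                    System.Drawing.Color.FromName(colorClaim.Value);
            }
        }
    }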

• Matias Woloski talks about the authorization process in Identity thoughts #2: Level 2 Authorization.

The authorization decision happens near the application or the service because it knows about the resource (each application has a different domain model).

The following figure [in the post] shows a very high-level architecture of the components and their interactions.

• Matias Woloski’s Identity thoughts #1: Analogy between a single app and a federated app post of 6/16/2009 offers a table that “shows an analogy of identity concepts between a single application and a federated application.”

The single app has its own identity silo and the federated app relies on an STS (like Geneva Server). I find this analogy useful to explain how things differ from the non-federated non-claim-based world.

Neil Kidd confirms that .NET Workflow Services will be missing from the RTM of Azure Services Platform v1 in his .NET Services workflow is moving to Fx 4‘s workflow engine, but … post of 6/16/2009.

Neil works for Microsoft in the UK as an Architect in the Microsoft Technology Centre.

• Manuel Meyer delivers an Introduction to Windows Azure Live Mesh/Live Framework Part 1 as the first in a series about Live Mesh, Live Services and the Live Framework.

• Vittorio Bertocci is Announcing FabrikamShipping, in-depth semi-realistic sample for Geneva Framework in this 6/16/2009 post:

Do you remember the PDC session in which Kim announced all the new wave of identity products, including Geneva?

During that session I showed a pretty comprehensive demo, where all the products & services worked together for enabling a fairly realistic end-to-end scenario. You have seen demos based on the same scenario at TechEd EU, TechDays and in many presentations from my colleagues in the various subsidiaries; finally, if you came by the Geneva booth at RSA, chances are that you got a detailed walkthrough of it. Since people liked it so much, we thought it would be nice to extract just the main web application from that scenario and make it available to everyone in the form of an in-depth example. You can find the code in a handy self-installing file on code gallery, at http://code.msdn.microsoft.com/FabrikamShipping (direct link here).

Mary Jo Foley’s Too many .Nets, too little time? gets the word out that the .NET Services team is dropping .NET Workflow Services until .NET 4 releases, as I reported in last week’s post.

Oren Melzer explains Silent Information Card Provisioning with Geneva Server in this 6/15/2009 post:

One obstacle that administrators looking to deploy information cards in an enterprise will inevitably face is getting information cards to their users. Nobody wants to have to send an email to their users saying that in order to access a web service, they’ll need to go to an issuance website and download an information card. Things should just work. With that in mind, the “Geneva” Server and CardSpace teams created Silent Card Provisioning, a feature that uses Group Policy to deploy information cards to domain users automatically.

Leon Welicki’s Sequential and Flowchart modeling styles post of 6/12/2009 begins:

WF 4 ships with an activity palette that consists of many activities – some of these are control flow activities that represent the different modeling styles developers can use to model their business process. Sequence and Flowchart are a couple of modeling styles we ship in WF 4. In this post, we will present these modeling styles, learn what they are, when to use what, and highlight the main differences between them.

Leon Welicki is a Program Manager on Microsoft’s Connected Framework Team.
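
To see the contrast in code rather than on the design surface, here’s a minimal sketch of the same two steps modeled both ways, using the System.Activities API as it eventually shipped in .NET 4 (Beta 1 details may differ):

    using System.Activities;
    using System.Activities.Statements;

    class ModelingStyles
    {
        static void Main()
        {
            // Sequential style: steps run strictly top to bottom.
            var sequence = new Sequence
            {
                Activities =
                {
                    new WriteLine { Text = "Validate order" },
                    new WriteLine { Text = "Ship order" }
                }
            };
            WorkflowInvoker.Invoke(sequence);

            // Flowchart style: explicit nodes and transitions, so loops
            // and jumps back to earlier steps are natural to express.
            var ship = new FlowStep { Action = new WriteLine { Text = "Ship order" } };
            var validate = new FlowStep
            {
                Action = new WriteLine { Text = "Validate order" },
                Next = ship
            };
            WorkflowInvoker.Invoke(new Flowchart { StartNode = validate });
        }
    }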

DotNetBlogger posted Introduction to Workflow Tracking in .NET Framework 4.0 Beta1 on 6/11/2009.

By now you must be aware of the significantly enhanced Windows Workflow Foundation (WF) scheduled to be released with .NET Framework 4.0. The road to WF 4.0 and the .NET Framework 4.0 Beta 1 documentation for WF can give you more details. Being a member of the team responsible for the development of the WF tracking feature, I am excited to discuss the components that constitute this feature.

In a nutshell, tracking is a feature to gain visibility into the execution of a workflow. The WF tracking infrastructure instruments a workflow to emit records reflecting key events during the execution. For example, when a workflow instance starts or completes, tracking records are emitted. Tracking can also extract business-relevant data associated with the workflow variables. For example, if the workflow represents an order-processing system, the order ID can be extracted along with the tracking record. In general, enabling WF tracking facilitates diagnostics or business analytics over a workflow execution. For people familiar with WF tracking in .NET 3.0, the tracking components are equivalent to the tracking service in WF 3. In WF 4.0 we have improved the performance and simplified the programming model for the WF tracking feature.

The post includes a high-level diagram of the tracking components.
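
For a taste of the simplified programming model, here’s a minimal custom tracking participant, again using the System.Activities.Tracking types as they eventually shipped in .NET 4 (Beta 1 naming may differ):

    using System;
    using System.Activities;
    using System.Activities.Statements;
    using System.Activities.Tracking;

    // Receives every tracking record the runtime emits for an instance.
    class ConsoleTrackingParticipant : TrackingParticipant
    {
        protected override void Track(TrackingRecord record, TimeSpan timeout)
        {
            Console.WriteLine("{0}: {1}", record.GetType().Name, record);
        }
    }

    class Program
    {
        static void Main()
        {
            var app = new WorkflowApplication(
                new WriteLine { Text = "Hello, tracking" });
            app.Extensions.Add(new ConsoleTrackingParticipant());
            app.Run();
            Console.ReadLine(); // Run() is asynchronous; wait for the output
        }
    }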

Nuno Filipe Godinho’s Interesting Incubation and Innovation Projects from Microsoft in the Cloud spectrum post of 6/15/2009 describes and links to the following Azure incubation projects, most of which are relatively new:

Live Windows Azure Apps, Tools and Test Harnesses

<Return to section navigation list> 

Bruno Terkaly’s Azure – Rich Client(s) meets Azure Table Data. Smart Grid Sample – Step 01 post of 6/20/2009 starts a series that uses Azure Tables as the data source for an ASP.NET and a Silverlight rich client example:

Azure – Rich Client(s) meets Azure Table Data. Smart Grid Sample – Step 02 explains the components to be used.

Steve Marx posted an Update to The CIA Pickup Source on 6/19/2009 before leaving for his vacation in Moscow (Russia). Steve says:

The new code is already live at www.theciapickup.com.  Please download the source again to pick up the changes, and keep the feedback coming!

Be sure to watch Steve’s “What is Windows Azure?” (a Hand-Drawn Video) of 6/19/2009.

• Robert Le Moine calls Taking .NET Development to the Cloud “a leap of faith” in this 6/17/2009 post that describes how his employer uses the Azure Services Platform as a virtual laboratory for application development:

Cloud computing platforms, such as Microsoft Azure, offer compelling advantages for building new scalable .NET applications. But can the Cloud be used for developing existing .NET applications? In this article, I'll explain how we've made the leap to Cloud-based development for our internal applications and the lessons we've learned along the way. Specifically, I'll describe our checklist for selecting a Cloud vendor and how we've used the virtualization capabilities of the Cloud to improve our agile development process. I'll also outline the quantifiable benefits we've seen, including saving $100,000 in capital expenditure and reducing our iteration cycle times by 25%.

As the development team lead for Buildingi, a corporate real estate consultancy that specializes in back-office technology solutions to manage large portfolios, I'm responsible for building Web-based applications using Visual Studio, the Microsoft .NET Framework, and Silverlight. Last year we started looking at Cloud Computing to gain the advantages of a scalable, virtualized platform for software. …

There are a few use cases where the Cloud is not recommended for testing (see below). These include tests that require specific x86 hardware (e.g., BIOS driver tests) and some types of performance and stress testing. If an application requires an onsite Web Service behind a firewall, this can usually be accessed using a VPN connection. …

• West Monroe Partners created this Windows Azure and Silverlight Interactive Map (http://tasteofchicago09.cloudapp.net/Map.aspx) for the forthcoming Taste of Chicago festival scheduled for 6/26 – 7/5/2009 in Chicago’s Grant Park, Lakefront and Loop neighborhoods.

The West Monroe Partners Launches New Interactive Map for Taste of Chicago press release of 6/16/2009 provides a lengthy description of the project:

In addition to developing the interactive Taste of Chicago map, West Monroe Partners and Microsoft also partnered to create a hosting solution that could handle the web site's user load, including half a million views. The hosted Microsoft Windows Azure solution provides the equivalent capacity of 25 purchased servers, with no infrastructure investment required by the City of Chicago. [Link added.]

• Dan Griffin’s Cloud Backup application “is a disaster recovery solution that allows you to export a Hyper-V virtual machine, archive it in Azure, and later restore it,” which he posted to CodePlex on 6/16/2009. Dan’s Cloud Backup whitepaper, CloudBackup.pdf, is a fully illustrated guide to using his solution.

• Ben Riga and Girish Raja describe a self-service Dynamics CRM project hosted in Windows Azure (Dynamics CRM Online) in their Dynamics Duo Rides again Channel9 video of 6/15/2009:

In this episode we walk through the demo in some detail.  The Wide World Importers Conference site we use here is the main site for a fictitious conference.  The self-service part of this is entirely hosted on Windows Azure.  As we walk through the registration process the information is retrieved and stored directly in Dynamics CRM Online.  Naturally, as we’ve said in the past, Dynamics CRM is great at managing both contact and transactional information.  We also look at how, by using 3rd party web services, we can compose new capabilities into our system.  In this case we show how to integrate an internet flight booking service into the attendee registration process and then store that complex flight booking information in the Dynamics CRM data store.  Finally we show how to use Silverlight to build a compelling user experience for a self-service portal.  This one is pretty slick.

The project also uses SQL Data Services to store data.

For more background and a video about Dynamics CRM and Azure, see Ben’s Self-Service Dynamics CRM solutions fly on Windows Azure post of 1/7/2009.

See Brent Stineman’s A last word on Azure Queues (Performance) post of 6/14/2009, which describes his downloadable Queue performance test app, in the Azure Blob, Table and Queue Services section.

Azure Infrastructure

<Return to section navigation list> 

••• Reuven Cohen’s Hoff's Cloud Metastructure post of 6/20/2009 discusses @Beaker’s Metastructure concept for defining cloud infrastructure:

Recently, Chris Hoff posted an interesting concept for simply defining the logical parts of a cloud computing stack. Part of his concept is something he calls the "Metastructure": "essentially infrastructure is comprised of all the compute, network and storage moving parts that we identify as infrastructure today. The protocols and mechanisms that provide the interface between the infrastructure layer and the applications and information above it".

Actually I quite like the concept and the simplicity he uses to describe it. Hoff's variation is the practical implementation of a meta-abstraction layer that sits nicely between existing hardware and software stacks while enabling a bridge for future, yet undiscovered or undeveloped technologies. The idea of a Metastructure provides an extensible bridge between the legacy world of hardware-based deployments and the new world of virtualized / unified computing. (You can see why Hoff is working at Cisco; he gets the core concepts of unified computing -- one API to rule them all.)

••• Chris Hoff posted Incomplete Thought – Cloudanatomy: Infrastructure, Metastructure & Infostructure on 6/19/2009:

I wanted to be able to take the work I did in developing a visual model to expose the component Cloud SPI layers into their requisite parts from an IT perspective and make it even easier to understand.

Specifically, my goal was to produce another visual and some terminology that would allow me to take it up a level so I might describe Cloud to someone who has a grasp on familiar IT terminology, but do so in a visual way. [The visual is in his post.]

Ron Schmelzer of ZapThink asks Who's Architecting the Cloud? in this 6/19/2009 post:

Will the cloud succumb to the same short-sighted, market pressure that doomed the ASP model and still plagues SaaS approaches?

As the hype cycle for cloud computing continues to gather steam, an increasing number of end users are starting to see the silver lining, while others are simply lost in the fog. It is clear that the debate over the definition, business model, and benefits of cloud will continue for some time, but it is also clear that the sluggish economic environment is increasing the appeal of having someone else pay for the robust infrastructure needed to run one’s applications. Yet, all this talk of leveraging cloud capabilities, or perhaps even building one’s own cloud, whether for public or private consumption, introduces thorny problems. How can we make sure that the cloud will bring us closer to the heavenly vision of IT we search for rather than a fog that hides a complex mess? Who will make sure that the cloud vision isn’t just another reinterpretation of the Software-as-a-Service (SaaS), Application Service Provider (ASP), grid and utility computing model that provided some technical answers but didn’t simplify anything for the internal organization? Who is architecting this mess?

Jonathan Feldman discusses whether single-provider clouds are a “single vulnerability” in his Of Cloud 9 and The Importance of Parachutes post of 6/19/2009 for InformationWeek Analytics:

Back when I did a lot of security work, we used to joke around that single sign on should be called "single vulnerability". Maybe single provider cloud models should be called "single point of failure".

Toodledo went down hard last week. I rely massively on Toodledo to organize my massively complicated work and family life. But I wasn't terribly upset because my data lives in more than one place. I wrote a draft of this blog on the Toodledo site, but I could have easily written it on the equipment that houses the synchronized copy of my notes. The site being down was annoying but not, as we say in the support business, without its workaround. …

Multiple data centers replicating apps and data are the obvious workaround for “single vulnerability” syndrome.

Charles Babcock says Cloud Standards Will Emerge From Current Haze in this 6/18/2009 post for InformationWeek’s Cloud Computing Destination:

What standards do you follow if you're interested in getting started in cloud computing? The short answer is, there are few clearly defined standards in what remains a loosely defined area. Nevertheless, the main outline is clear. Follow the leaders and follow the Web.

In an InformationWeek Webcast on The Cloud and Virtualization June 16, I tried to lay out a few of the standards that will dominate cloud computing. One assumption is that cloud computing will adopt the most efficient paradigms found on the Internet, say the massive and uniformly managed server farms of Google and Amazon.

Dana Gardner’s latest DirectBriefing is EDS’s David Gee on the spectrum of cloud and outsourcing options unfolding before ID architect of 6/19/2009:

HP's purchase last year of EDS came just as talk of cloud computing options ramped up. So how does long-time outsourcing pioneer EDS fit into a new cloud ecology?

Is EDS, in fact, a cloud provider? And how will IT departments properly factor their decisions on what to keep on-premises in data centers versus placing assets and workloads on someone else's cloud infrastructure?

•• Yves Goeleven’s Event driven architecture onto the Azure Services Platform article of 6/19/2009 for the Microsoft Benelux Architect Newsletter begins:

In this article, I will guide you through this new environment and point out some of these design challenges that the cloud presents to us. I will also propose an architectural style, and some additional guidance, that can be used to overcome many of these challenges. Furthermore I'll give you an overview of the tools offered by the Azure cloud platform that can be used to implement such a system.

Yves details three event-processing styles: Simple Event Processing, Stream Event Processing, and Complex Event Processing.


•• Lori MacVittie analyzes Lydia Leong’s post (see below) in her Your Cloud is Not a Precious Snowflake (But it Could Be) post of 6/18/2009:

She lists traits common to most cloud providers: premium equipment, VMware-based, private VLANs, private connectivity, and co-located dedicated gear, but doesn’t really get into what really is – or should be – the focus of cloud offerings: services. To be more specific, infrastructure services.

A cloud provider of course wants a solid, reliable infrastructure. That’s why they tend to use the same set of “premium” equipment. But as Lydia points out, differentiation requires services above and beyond simple hosting of applications in somebody else’s backyard.

Lydia Leong distinguishes Job-based vs. request-based computing in this 6/18/2009 post to the Gartner Blog:

Companies are adopting cloud systems infrastructure services in two different ways: job-based “batch processing”, non-interactive computing; and request-based, real-time-response, interactive computing. The two have distinct requirements, but much as in the olden days of time-sharing, they can potentially share the same infrastructure. …

Observation: Most cloud compute services today target request-based computing, and this is the logical evolution of the hosting industry. However, a significant amount of large-enterprise immediate-term adoption is job-based computing.

Dilemma for cloud providers: Optimize infrastructure with low-power low-cost processors for request-based computing? Or try to balance job-based and request-based compute in a way that maximizes efficient use of faster CPUs?

• Krishnan Subramanian’s Nature’s Attack On Amazon And The Instance Vs Fabric Debate post of 6/17/2009 discusses the pros and cons of instance-based and fabric-based clouds:

Last week, a lightning strike rendered part of Amazon EC2 belonging to a single zone cut off from the real world. I don't want to go into the whether-it-is-an-outage-or-not debate, but toward a different kind of debate. Ever since Cloud Computing started gaining traction, we have had a debate in the industry about whether an instance-based setup is better than a fabric-based one. I thought I would revisit this debate in light of the recent Amazon EC2 "it's not an outage" incident. Let me do a brief recap of the terminologies and, then, see how the debate shapes up in the aftermath of the "Amazon lightning incident". …

• Vivek Kundra answers Cloud Computing: 10 Questions For Federal CIO Vivek Kundra from InformationWeek’s J. Nicholas Hoover. Background:

Federal CIO Vivek Kundra is well known for innovative approaches to government IT. He introduced Google Apps to the city of Washington, D.C. when he was its CTO back in 2007.

He's brought with him to the federal government a philosophy that cloud computing could save money, facilitate faster procurement and deployment of technologies, and allow government agencies to concentrate more on strategic IT projects.

InformationWeek sat down with him at his office last week to discuss his thoughts about cloud computing in government, and what it would take to make cloud technologies easier to adopt in the federal space.

• Jim Metzler and Steve Taylor co-author The hype surrounding cloud computing, the first of two articles for NetworkWorld that compares the hype about cloud computing with that for Asynchronous Transfer Mode (ATM) a few years ago.

• PRNewswire reports that “MSMS Connects With HealthVault to Make Health Management Easier and More Effective” in this Michigan State Medical Society Collaborates With Microsoft to Expand Health Care Technology in Michigan press release of 6/17/2009:

The Michigan State Medical Society (MSMS) today announced a collaboration with Microsoft Corp., Compuware subsidiary Covisint and MedImpact Healthcare Systems, Inc., to be first in the nation to provide statewide connectivity of medical and pharmacy data for Michigan. Patients and physicians who use the medical society's electronic portal, MSMS Connect, will now have access to critical health care data in one location -- Microsoft HealthVault. This new collaboration expands MSMS' nation-leading effort to help implement electronic health care technology statewide. …

When fully implemented into MSMS Connect, the addition of HealthVault will enable patients to store their individual health data, or their whole family's health record, in one location at no cost. Through HealthVault, which is built on a privacy- and security-enhanced foundation, patients will have complete control over their electronic health data and can give permission to their physicians and other health care providers to view it. Patients can access data from their physicians, health plans, and pharmacies, as well as upload information from medical devices that monitor a number of factors including heart rate, blood pressure and blood sugar.

• Ina Fried’s Microsoft to announce Azure business plan next month post of 6/15/2009 quotes from an interview with Microsoft Corporate Vice President Allison Watson:

[T]he company will get concrete about the financial details and say how partners can help sell Azure at Microsoft's Worldwide Partner Conference, which runs July 13-16 in New Orleans.

When Microsoft announced Azure, it said that all of the applications would be run from its data centers. However, Watson said the company is also looking at ways that partners can host cloud-based solutions.

"We've had some interesting conversations," Watson said.

Watson’s comments about enabling partners to host cloud-based solutions bode well for potential on-site (private-cloud) Azure implementations, which would downplay cloud lock-in issues other than a choice of operating system. It’s a foregone conclusion that moving Azure projects to platforms other than Windows Server will be impossible.

Azure watchers have expected details about the Service Level Agreement (SLA) for Azure, but none of the articles about the interview mention SLAs explicitly. However, it’s likely that Azure SLAs and pricing will be interdependent.

Julie Bort chimes in with additional history of Microsoft pricing in her Long-awaited pricing details of Windows Azure expected soon post of 6/15/2009 to NetworkWorld’s Microsoft Subnet.

• Bill Stempf offers comparative cost analyses in The Dollars and Sense of Cloud Computing of 6/16/2009, which claims “Cloud computing makes a lot of sense in this economic environment - Part 2:”

This is about the money. Hate to say it, I really hate to say it, but what is going to make cloud computing take off is the financial - the economic realities of hardware, staff, and power consumption. Each of our money wizards has a perspective on this, and we will take them one at a time.

Here’s a link to Part 1.

• Rob England claims Cloud Computing Outlook [Is] Far From Sunny in this 6/16/2009 contrarian post to ServerWatch:

Cloud computing is "buzz" concept of the year for 2009. It has its place, especially for high-risk/low-capital applications like startups or small business or web sites, but for enterprise computing and — especially for improving existing core applications — I have a more jaundiced view.

As a concept, cloud computing is a pointer to the future, but there is much hype around the present. As James Maguire of Datamation put it recently: "As Cloud computing has emerged as a red hot trend, tech vendors of every stripe have painted the term 'Cloud' on their products, much like food brands all tout that they're 'low fat'."

• ebizQ’s Cloud Computing discussion section seeks answers to Where is the Revenue Stream in Cloud Computing?

This question comes from our cloud computing virtual conference, and asks: Where is the revenue stream in cloud computing? Who controls the money? If you are using services you are not responsible for, how will different providers receive their revenue?

The replies as of 6/15/2009 make an interesting read.

 Marianne Kolbasuk McGee asks Is That A Cloud On Healthcare's Horizon? in her 6/16/2009 post to InformationWeek’s Cloud Computing Destination. Marianne reports:

Cloud models are starting to provide an attractive option for large and influential regional medical centers to get lots of small, local, laggard doctor offices trading in their paper patient files for electronic medical records. Are there clouds in your forecast?

Beth Israel Deaconess Medical Center (BIDMC), together with its Beth Israel Deaconess Physicians Organization (BIDPO), is just one of a handful of large and prestigious health care organizations in the country helping small doctor offices in their region (in this case, the Boston area) to deploy e-medical record systems.

A cloud model allows these doctor offices to use software to manage their practices and patient data, but the servers are located remotely and supported by BIDMC and Concordant, a services provider. BIDMC is covering about 85% of the non-hardware expenses for the practices to deploy the eClinicalWorks software, and the doctor offices pay a monthly subscription fee of between $500 and $600 for support.

A similar cloud plan is also being used by University Health System of Eastern Carolina to get small doctor practices in rural North Carolina using 21st century technology, says CIO Stuart James. "Most providers can't afford to hire IT people to keep these systems running," he says. "This keeps the costs down." …

Greg Ness analyzes Nick Carr's Cloud-Network Disconnect in this 6/15/2009 post that carries “Virtualization and cloud computing are promising to change the way in which IT services are delivered” as its deck.

Nicholas Carr told a recent audience at IDC Directions that "Cloud computing has become the center of investment and innovation." While he is not a technologist, his sometimes shocking insights into the transformation of IT have been prescient, even if he doesn't sweat the details of how complex IT infrastructures can morph into the equivalent of today's public utilities.

To his credit Carr has predicted the rise of the cloud computing press release, multiple cloud conferences and panels and even the SaaS repositioning exercise.  He also foresaw the rise in Amazon and Google cloud announcements, perhaps years ahead of profits and/or material revenue. …

Tom Lounibos claims The Next BIG Cloud Service may be Reliability-as-a-Service in this 6/15/2009 post. Tom writes:

Aggregators …, such as Facebook and Apple, are taking notice of what they are publishing to their sites these days, with a growing concern that their own brand will be affected by poor performance by association. This forces SaaS vendors to look beyond their own cool features and rethink how and with whom they deploy their applications. Even the leading Managed Service Providers (Rackspace, Terremark, and Savvis) and emerging Cloud Platform Providers (Amazon, IBM, and Force.com) are rushing to deliver newer services to assure their customers that they have the most reliable deployment environment for SaaS-based applications. Reliability matters more today than ever!

David Linthicum’s Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide, Rough Cuts is a downloadable version of his book that was published May 6, 2009 by Addison-Wesley Professional as part of the Addison-Wesley Information Technology Series. Here’s the description:

This book is the bible for those looking to take advantage of the convergence of SOA and cloud computing, including detailed technical information about the trend, supporting technology and methods, and a step-by-step guide for doing your own self-evaluation, and, finally, reinventing your enterprise to become a connected, efficient money-making machine. This is an idea-shifting book that sets the stage for the way information technology is delivered. This is more than just a book that defines some technology; this book defines a class of technology, as well as approaches and strategies to make things work within your enterprise.

Author David S. Linthicum has written the book in such a way that IT leaders, developers, and architects will find the information extremely useful. Many examples are included to make the information easier to understand, and ongoing support from the book’s Web site is included.  Prerequisites for this book are a basic understanding of Web services and cloud computing, and related development tools and technologies at a high level. However, the non-technical will find this book just as valuable as a means of understanding this revolution and how it affects your enterprise.

You can read the TOC, but nothing else, at no charge.

Mache Creeger describes his Cloud Computing: An Overview survey article for the Association for Computing Machinery (ACM) Queue magazine as a “summary of important cloud-computing issues distilled from ACM CTO Roundtables.” Topics include:

    • What is Cloud Computing?
    • CapEx vs. OpEx Tradeoff
    • Benefits
    • Use Cases
    • Distance Implications between Computation and Data
    • Data Security
    • Advice
    • Unanswered Questions

I don’t usually include survey articles in my cloud posts, but publication by ACM Queue gives this article higher than average clout.

Randy Bias continues his Cloud Futures posts with Cloud Futures Pt. 3: Focused Clouds of 6/15/2009. Randy classifies Focused Clouds into several categories and recommends “Focus, Focus, Focus.”

Randy’s earlier articles are:

James Hamilton’s PUE and Total Power Usage Efficiency (tPUE) post of 6/14/2009 begins:

I like Power Usage Effectiveness as a coarse measure of infrastructure efficiency. It gives us a way of speaking about the efficiency of the data center power distribution and mechanical equipment without having to qualify the discussion on the basis of the servers and storage used or utilization levels, or other issues not directly related to data center design. But there are clear problems with the PUE metric. Any single metric that attempts to reduce a complex system to a single number is going to fail to model important details and is going to be easy to game. PUE suffers from some of both; nonetheless, I find it useful.

In what follows, I give an overview of PUE, talk about some of the issues I have with it as currently defined, and then propose some improvements in PUE measurement using a metric called tPUE.
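
For reference, the conventional Green Grid definition that Jim starts from is simply:

    PUE = Total Facility Power / IT Equipment Power

A PUE of 1.7 means 0.7 watts of distribution and cooling overhead for every watt that reaches the servers, storage and networking gear. Roughly speaking, Jim’s proposed tPUE pushes the measurement boundary inside the IT equipment, so that losses in server power supplies and fans count as infrastructure overhead instead of flattering the facility’s number.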

Cloud Security and Governance

<Return to section navigation list>

Hartford Financial Services Group, Inc.’s Cyberbuzz page contains links to current articles and podcasts about security and data risk; “data malpractice;” and Web 2.0 defamation lawsuits.

According to NetworkWorld’s Tim Green in his Insurers keep an eye on cloud security threats article of 5/22/2009:

The Hartford has a dedicated insurance offering called CyberChoice that pays off if failure of the IT infrastructure results in liability for loss of personal information, intellectual property and the like. The insurance pays for investigation of the failure and payment of the costs of notifying customers if there is a reportable breach.

Passing the insurance company’s test of whether to insure a business is not easy, says Drew Bartkiewicz, vice president of technology and new media markets for The Hartford. Only a very few corporations – mostly Fortune 500 – even apply for the insurance, and of those who do, two thirds are turned away for coverage because they don’t live up to the requirements.

Chris Hoff wants to See You At Structure09 and Cisco Live! according to his post of 6/18/2009:

I managed to squeak out some additional time at the end of my first docking with the Mothership in San Jose next week such that I can attend Cisco Live!/Networkers the week after.  I’ll be at Live! up to closing on 7/1. …

If you’re going to be there, let’s either organize a tweet-up (@beaker) or a blog-down…

• Eric Chabrow’s NIST Issues Two Reports article of 6/16/2009 for GovInfoSecurity.com provides brief descriptions of:

• David Linthicum “talks about how to use governance to make cloud computing work” in this Governance and Cloud Computing podcast of 6/17/2009.

• Ellen Messmer reports “Microsoft’s “IT Infrastructure Threat Modeling Guide” offers security advice” in her 6/15/2009 Microsoft's threat-modeling guide: Think like an attacker article for NetworkWorld:

Microsoft offers up security advice on how to fend off attacks against corporate IT resources by looking at ways that attackers can undermine an organization in its “IT Infrastructure Threat Modeling Guide” published today.

“Look at it from the perspective of an attacker,” says Russ McRee, senior security analyst for online services at Microsoft, the primary author of the 32-page guide that discusses the fundamentals and tactics of network defense. McRee said the “IT Infrastructure Threat Modeling Guide” is actually the outcome of a lot of thinking about the topic at Microsoft, which itself is using the guide as a reference.

The guide is not about Microsoft products and in fact “needs to be agnostic so it can work for anyone,” says McRee. “An organization has to figure out what their threats are.”

The guide offers ways that IT staff—especially those without formal security training—can analyze their own wired and wireless networks, model them for security purposes, in some cases along the lines of “trust boundaries and levels,” to determine where defenses should be. …

• Craig Balding slams self-serving security audits of PaaS and IaaS vendors in his Stop the Madness! Cloud Onboarding Audits - An Open Question… post of 6/16/2009. Craig writes near the end of his detailed post:

If you’re following along thus far, you’ll also see the possibility for trusted 3rd party auditors to digitally ’sign’ individual policy statements made by cloud providers they have audited. That signature could itself reflect the assurance level you need.  This in turn could help drive the nascent cyberinsurance market for cloud…assuming the auditor is open to counterclaims by the insurer ;-).

Microsoft’s SAS 70 attestations and ISO/IEC 27001:2005 certifications by the British Standards Institution (BSi), as described in Charlie McNerney’s Securing Microsoft’s Cloud Infrastructure post of 5/27/2009, are a step in the right direction.

Joe McKendrick comments on Dana Gardner’s BriefingDirect in his SOA, IT and cloud governance converge into 'total services governance' post of 6/17/2009.

Dana Gardner’s latest BriefingDirect is Hurdles To Cloud Adoption Swirl Around Governance of 6/15/2009:

Our panel of IT analysts discusses the emerging requirements for a new and larger definition of governance. It's more than IT governance, or service-oriented architecture (SOA) governance. The goal is really more about extended enterprise processes, resource consumption, and resource-allocation governance.

In other words, "total services governance." Any meaningful move to cloud-computing adoption, certainly that which aligns and coexists with existing enterprise IT, will need to have such total governance in place. Already, we see a lot of evidence that the IT vendor community and the cloud providers themselves recognize the need for this pending market need and requirement for additional governance.

Kevin Jackson reports on an interchange of Tweets with cloud security expert Chris Hoff (a.k.a. @Beaker) in this Maneuver Warfare in IT: A Cheerleading Pundit post of 6/15/2009. Chris has just taken a high-level job with Cisco.

SuccessFactors announces SuccessFactors Leads Enterprise Cloud Security With Strategic Technology Partnership With WhiteHat Security and Imperva in this 6/15/2009 press release.

Cloud Computing Events

<Return to section navigation list>

UKAzure Net announces The Cumulonimbus Event on 7/29/2009 at Microsoft’s London office:

Our 2nd meeting has been booked!  Our first event was a fantastic success and we hope to emulate this with the next two speakers. 

Richard Godfrey will demonstrate his KoodibooK product and show how it can be scaled using Azure.

Bert Craven will discuss Azure as a technical and commercial proposition for an enterprise such as EasyJet. He will also demonstrate moving a WCF service into the cloud using the .NET Service Bus and Relay Bindings.

When: 7/29/2009 from 6:00 PM to 9:30 PM GMT 
Where: Microsoft, Cardinal Place 100 Victoria Street, London SW1E 5JL, UK 

• Peter Laudati’s Azure Fire Starter Philly – Saturday June 20th post of 3/16/2009 describes the event and its schedule:

Learn about Windows Azure and Azure services which enable developers to easily create or extend their applications and services. From consumer-targeted applications and social networking web sites to enterprise class applications and services, these services make it easy for you to give your applications and services the most compelling experiences and features.

  • 9:00-10:30 Introduction to Azure
  • 10:45-12:15 Azure Storage
  • 12:15-1:15 Working Lunch - Putting it together - Building a simple Azure Application
  • 1:30-2:00 Azure Services (Service Bus, Workflow Services)
  • 2:15-3:45 Azure Services (Access Control Service)
  • 4:00-5:30 Introduction to Live Services

Register free at Microsoft Events.

When: 6/20/2009 from 9:00 AM to 5:30 PM EDT 
Where: Microsoft - Malvern MPR 1 & 2, Great Valley Corporate Center, 45 Liberty Boulevard Suite 210, Malvern, PA 19355

The CloudCamp site announces CloudCamp San Francisco on 6/24/2009 from 5:30 to 10:00 PM at 835 Market Street, Suite 700, San Francisco, CA (Microsoft’s SFO office.)

Tentative Schedule:
5:30 Registration & Networking
6:00 Intro & Welcome to CloudCamp
6:15 Unpanel
7:00 Prepare for Unconference
7:15 Unconference - Sessions 1
8:00 Unconference - Sessions 2
8:45 Unconference - Sessions 3
9:30 Summary of Sessions
9:45 Networking

When: 6/24/2009 from 5:30 to 10:00 PM PDT 
Where: 835 Market Street, Suite 700, San Francisco, CA (Microsoft’s SFO office.)

The World Bank presents Financial Crisis and Cloud Computing: Delivering More for Less, Demystifying Cloud Computing as Enabler of Government Efficiency and Transformation, a Government Transformation Initiative Workshop on 6/16/2009 from 9:00 AM to 1:00 PM EDT in Washington, DC, that features a live Webcast:

The workshop will discuss the emergence of cloud computing and the advantages that it offers, particularly in terms of cost savings. The workshop will also highlight various challenges that need to be addressed with a special focus on connectivity, business models, efficiency, reliability, integration, security, privacy and interoperability issues.

The key objective is to clarify the rather misty concept of cloud computing for both World Bank staff and our country clients. There is a lot of confusion around this idea with over 20 definitions offered so far by various parties. The workshop will also clarify the potential role of the World Bank and other development organizations in helping developing countries to realize this opportunity.

This workshop is organized by the Global ICT Department and other partners as part of the Government Transformation Initiative, a collaboration between World Bank and the private sector aimed at supporting government leaders pursuing ICT-enabled public sector transformation.

Register to confirm your participation in the Webcast.

When: 6/16/2009 9:00 AM to 1:00 PM 
Where: Washington, DC (Internet Webcast)

The Fort Worth .NET User Group’s June 2009 meeting will be Developing Applications Using [SQL] Data Services, presented by Rob Vettor, .NET Architect/Senior Solution Developer at Jack Henry and Associates, on 6/16/2009 at Justin Brands. According to this 6/14/2009 post:

In this session, we’ll…

  • Gain a clear understanding of a data service and how the REST protocol plays a key role
  • Explore local, or “on-premises,” data services implemented with the ADO.NET Data Services Framework
  • Explore Cloud-based data services implemented with SQL Data Services
  • Walk through examples with Silverlight and ASP.NET Ajax
  • Show how the ADO.NET Entity Framework provides an underlying foundation for data services 
  • Contrast the difference between SQL Data Services in the cloud and cloud data storage

You’ll walk away with a clear understanding of how this technology works as well as what is available now and in the near future.

I wonder if Rob has an advance copy of SDS’s first relational CTP.

When: 6/16/2009 
Where: Justin Brands, Ft. Worth, TX

David Linthicum and Ed Horst will conduct a Webinar entitled Managing Business Transactions from the Enterprise to the Cloud and Back on 6/17/2009 at 9:00 AM PDT. AmberPoint is sponsoring the Webinar and Joe McKendrick will moderate it. Details:

In this informative webcast we’ll take you through the basics of implementing SOA systems that leverage cloud computing. We’ll focus on how to manage these systems, taking into account the special requirements posed by transactions flowing from the enterprise to the cloud and back.

SOA and cloud computing expert David Linthicum, author of “Cloud Computing and SOA Convergence in Your Enterprise,” will walk you through the approach of bringing transactional SOA to the clouds, and the best practices in SOA governance. Ed Horst, Vice President of Product Strategy for industry leader AmberPoint, will cover best practices for managing composite applications that leverage cloud computing.

Register here. (Site registration is required)

When: 6/17/2009 9:00 AM PDT 
Where: The Internet

Other Cloud Computing Platforms and Services

<Return to section navigation list> 

••• Greg Ness claims The Intercloud Makes Networks Sexy Again and “Cisco Leads with Vision” in this 6/19/2009 post:

Who knows who created the intercloud term, but it is a major development in articulating the enterprise cloud payoff.  Check out this Cisco blog and intercloud preso.  It is a grand and spectacular vision of where computing needs to go.

Think of the intercloud as an elastic mesh of on demand processing power deployed across multiple data centers.  The payoff is massive scale, efficiency and flexibility.

Just when you thought that Google and Amazon would control the skies, along comes Cisco with a brilliant vision that amplifies the role of the network and offers enterprises a sexy alternative.

Kevin Jackson’s Two Days with AWS Federal post describes an upcoming two days of training with Amazon Web Services (AWS) Federal:

Today, I start two days of training with Amazon Web Services (AWS) Federal. If that's the first time you've ever heard about an AWS Federal division, you're not alone. Held in downtown Washington, DC, the course was invite-only, and attendance was limited to IT services firms that had demonstrated a clear track record of success in the Federal market.

He then goes on to list the companies in attendance, explain AWS’s use of the term “70/30 switch,” and describe the first day’s session contents.

Bernard Gordon recounts the Amazon Web Services Start-Up Event at the PlugandPlayTechCenter in Sunnyvale in his The Cloud as Innovation Platform: Early Examples article of 6/18/2009 for the Norwegian branch of IDG News Service:

… Turning to the Amazon event, four Amazon customers presented and discussed their use of cloud computing (my discussion of the following is from notes and memory, as the slides are not yet available). …

The customers were ShareThis, Pathwork Diagnostics, SmugMug and NetFlix.

David Meyer’s BT moves infrastructure into the cloud post of 6/18/2009 for ZDNet.UK leads with:

BT is about to formally launch a virtualised infrastructure service called BT Virtual Data Centre, which will form the basis of its cloud-computing strategy.

VDC involves the virtualisation of servers, storage, networks and security delivered to customers via an online portal as cloud-based services. On Thursday, BT's Global Services division announced the customer rollout of VDC, which will initially target multinational corporate customers and the public sector.

"VDC is the basis of our cloud-computing offering," Neil Sutton, BT Global Services's product chief, told ZDNet UK on Thursday. "We've begun to deliver communications-as-a-service and hosted services for voice, unified communications and CRM, and we see a roadmap where people want to be able to provision an infrastructure end-to-end. We want to deliver those things as a service in a predictable and flexible manner."

C. Burns and M. West cowrote IBM’s Cloud Takes Shape, But Offering Still Lacks Necessary Guidance, a Saugatuck Research Alert about IBM’s newly announced “Smarter IT” cloud strategy. (Free site registration required.) The report identifies several “areas where user IT organizations will definitely need guidance and services” from IBM.  

• Himanshu Vasishth’s System.Data.OracleClient Update post of 6/15/2009 to the ADO.NET Team Blog announces that the System.Data.OracleClient class will be deprecated in .NET Framework 4.0 in favor of third-party versions:

… We learned that a significantly large portion of customers use our partners’ ADO.NET providers for Oracle, with regularly updated support for Oracle releases and new features. In addition, many of the third-party providers are able to consistently provide the same level of quality and support that customers have come to expect from Microsoft. This is a strong testament to our partners’ support for our technologies and the strength of our partner ecosystem. It is our assessment that even if we made significant investments in ADO.NET OracleClient to bring it to parity with our partners’ providers, customers would not have a compelling reason to switch to ADO.NET OracleClient. …

Himanshu Vasishth is MSFT’s Program Manager, ADO.NET OracleClient (for a while).

Q: Does Oracle’s pending purchase of Sun Microsystems have something to do with the ADO.NET Team’s decision?
A: Probably (see below).

• Dan Woods explains Why Oracle Wants Solaris in this 6/16/2009 article for Forbes magazine. Woods writes:

My guess is that the "Industry in a Box" vision mentioned by Charles Phillips, Oracle's co-president, will actually become the next wave of cloud computing. In a previous column, I recommended that Google (GOOG) get into the appliance business. My guess is Oracle will follow this path with a vengeance. Solaris will power Oracle's cloud offerings, but through appliances, Oracle will bring the cloud to the data center.

Remember that Google, the leading provider of large-scale computing services in the cloud, does so by building its own hardware and software that is integrated and optimized for the task. I believe that Oracle recognizes that there are limits to the amount of enterprise IT that can be put into the cloud. Problems such as security, disaster recovery and moving huge amounts of data are significant barriers to cloud migration. But many of the same economic and operational benefits of the cloud can be achieved through remotely managed appliances that integrate software and hardware in one box. Oracle can run these over the Net using the Smart Services model I wrote about in Mesh Collaboration. The customer gets all the benefits of the cloud without having to move data off premise.

• Lydia Leong discusses differentiation of cloud vendor offerings in her “Enterprise class” cloud post to the Gartner blogs of 6/16/2009:

There seems to be an endless parade of hosting companies eager to explain to me that they have an “enterprise class” cloud offering. (Cloud systems infrastructure services, to be precise; I continue to be careless in my shorthand on this blog, although all of us here at Gartner are trying to get into the habit of using cloud as an adjective attached to more specific terminology.)

If you’re a hosting vendor, get this into your head now: Just because your cloud compute service is differentiated from Amazon’s doesn’t mean that you’re differentiated from any other hoster’s cloud offering. …

Ashlee Vance reports Sun Is Said to Cancel Big Chip Project (the Rock CPU) in his 6/15/2009 article for the NY Times’ Bits column:

Sun has been working on the Rock project for more than five years, hoping to create a chip with many cores that would trounce competing server chips from I.B.M. and Intel. The company has talked about Rock in the loftiest of terms and built it up as a game-changing product. In April 2007, Jonathan Schwartz, the chief executive of Sun, bragged about receiving the first test versions of Rock. …

This marks the second high-end chip in a row that Sun has canceled before its release. These types of products cost billions of dollars to produce, and Sun now has about a 10-year track record of investing in game-changing chips that failed to materialize.

You can bet your children’s college fund that Oracle had something to do with killing Rock.

• David Linthicum says Another Reason to Put Data in the Cloud is Google’s Fusion Tables in his 6/16/2009 post.

Google Labs recently announced Google Fusion Tables, an "experimental system" for fusing data management and collaboration. In other words, it's a means to merge many data sources, including any electronic conversations around data, visualization and data queries. Fusion Tables provide a platform to analyze data along with tools for electronically collaborating about that analysis.

The use cases here are numerous, but the core idea is that users will upload data, and then analyze and visualize the data on Google Maps or mashed up with other APIs, such as the Google Visualization API. Nothing new there, right? Wrong. Fusion Tables also provide for the discussion of data at the row or column level, or even specific data elements... think database and business intelligence meets Google Docs. However, the biggest bang for this new cloud service is the ability to "fuse" multiple sets of data that are logically related and then determine patterns.

This looks to me like the capability that Jon Udell has been seeking for his calendar-curation project for the last several months.
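To make the “database meets Google Docs” idea concrete, here’s a speculative sketch of what querying a Fusion Table programmatically might look like: a SQL-style statement sent over HTTP, returning CSV. The query endpoint, table ID, and column names are all assumptions on my part; Google hadn’t published a developer API for Fusion Tables at the time this was written.

```csharp
// Speculative sketch: a SQL-style query against a Fusion Table over HTTP.
// The endpoint URL, table ID (123456), and column names are assumptions;
// no developer API had been published when this was written.
using System;
using System.Net;

class FusionTablesSketch
{
    static void Main()
    {
        string sql = "SELECT Venue, Attendance FROM 123456 WHERE Attendance > 100";
        string url = "https://www.google.com/fusiontables/api/query?sql="
                     + Uri.EscapeDataString(sql);

        using (WebClient client = new WebClient())
        {
            // Assume the service returns the result set as CSV text.
            string csv = client.DownloadString(url);
            foreach (string row in csv.Split('\n'))
                Console.WriteLine(row);
        }
    }
}
```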

• Stacey Higginbotham’s The GigaOM Interview: Kristof Kloeckner, CTO of IBM Cloud Computing post of 6/15/2009 begins:

IBM’s first true cloud computing products, announced today, consist of workload-specific clouds that can be run by an enterprise on special-purpose IBM gear, by Big Blue building that same cloud on its special-purpose gear running inside a firewall, or by running the workload on IBM’s hosted cloud. The offering seems like a crippled compromise between the scalability and flexibility that true computing clouds offer and what enterprises seem to be demanding when it comes to controlling their own infrastructure. I spoke today with the chief technology officer of IBM’s cloud computing division, Kristof Kloeckner, to learn more. [Emphasis added.]

Reuven Cohen summarizes the “Big Blue Cloud” in his The Big Blue Cloud, Getting Ready for the Zettabyte Age post of 6/16/2009:

Well, IBM has gone and done it: they've announced a cloud offering yet again. Actually, what's interesting about this go-round is not that they're getting into the cloud business (again), but that this time they're serious about it. And like it or not, their approach actually does kind of make sense, assuming you're within their target demographic (the large enterprise looking to save a few bucks).

My summary of the "Big Blue Cloud" is as follows: It's not what you can do for the cloud, but what the cloud can do for you. Or simply, it's about the application, duh? …

James Urquhart’s IBM releases new enterprise cloud portfolio post of 6/15/2009 is another analysis of IBM’s “Big Blue Cloud.”

Jason Hiner attempts to answer Why Microsoft, Google, and Amazon are racing to run your data center in this 6/15/2009 post to ZDNet’s Behind the Lines blog:

The race for your data center has already begun. Google, Microsoft, and Amazon are the leading players in a global data center build-out that has not been slowed by the current economic recession and that over the next decade will change the face of both consumer computing and IT departments.

The reason why these three companies are building out data center capacity around the world at a breakneck pace is that they want to be ready with enough capacity to handle the two big developments that are preparing to transform the technology world:

  1. Cloud computing: Applications and services delivered over the Internet
  2. Utility computing: On-demand server capacity powered by virtualization and delivered over the Internet

With both of these trends, the biggest target is private data centers. Cloud computing wants to run the big commoditized applications (mail, groupware, CRM, etc.) so that an IT department doesn’t have to run them from a private data center. …

Rich Miller’s IBM’s Cloud Gains Definition post of 6/15/2009 to the Data Center Knowledge site reports:

This week IBM is rolling out new products that begin to bring some definition to its cloud computing roadmap. IBM is offering several services enabling public cloud computing. But Big Blue’s sharpest focus is on the private cloud, which presents an opportunity to sell hardware and software rather than monthly subscriptions.

Here’s what IBM is announcing:

Public Cloud: IBM can run your application testbed in its public cloud today, and will soon offer a subscription service to host virtual desktops in its data centers. The IBM Smart Business Test Cloud taps into that test-and-development demand now, while the upcoming IBM Smart Business Desktop Cloud will establish a beachhead for expected future growth in enterprise desktop virtualization as a service delivery strategy. …

Private Cloud: IBM CloudBurst provides customers with a private cloud in a single 42U rack for about $200,000. Included is a WebSphere CloudBurst Appliance that comes pre-loaded with images for quickly deploying application environments based on IBM’s WebSphere software. … 

John Treadway claims that the NYTimes broke IBM’s 6/16/2009 embargo by releasing details a day early, and posted IBM Smart Business #CloudComputing Press Release (DRAFT) on 6/15/2009. John adds more IBM collateral in Tomorrow’s IBM “Smart Business” #CloudComputing Strategy – Today of the same date.

Steve Lohr provides more background on IBM’s offerings in his I.B.M. to Help Clients Fight Cost and Complexity article for the New York Times of 6/14/2009.

According to a Chris Hoff Tweet of the same date, IBM’s private cloud offering will compete with Cisco’s Unified Computing System.

Nicholas Kolakowski reports Salesforce Offers Free Edition of Force.com in the 6/15/2009 eWeek article:

Salesforce.com announced on June 15 the release of the Force.com Free Edition, a stripped-down version of its cloud computing platform for the enterprise. By relying on cloud-based resources, Force.com clients can run Websites and build Web applications without an on-premises infrastructure.

Each client utilizing the free version of Force.com can deploy their newly built Web applications to up to 100 users. In addition, the free edition gives clients access to one Website with up to 250,000 page views per month, 10 custom objects/custom database tables per user, a sandbox development environment, free online training, and a library of sample applications.

eBizQ’s Force.com Sites Expanding Cloud Platform to Deliver Real-Time Web Sites and Web Applications post of 6/15/2009 describes the new Force.com Sites feature:

With the addition of Force.com Sites, companies can now use Force.com to build and run applications for their internal business processes as well as public-facing Web sites - entirely on salesforce.com's real-time cloud computing platform.

Salesforce.com’s press release of 6/15/2009 is here.