Sunday, October 18, 2009

Windows Azure and Cloud Computing Posts for 10/14/2009+

Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.

• Update 10/18/2009: Microsoft: New Windows Azure landing page; Dana Gardner: Forrester’s new “Top 15 Technology Trends” report; Jack Mann: Interview of Ryan Howard, CEO of Practice Fusion; Chris Hoff: Who should bear DDoS attack costs; Wade Wegner: Azure Services Platform slide deck; Lori MacVittie: Putting a price on uptime; Geva Perry: CloudCamp in the Cloud; Ayende Rahien: NHibernate Shards: Progress Report; Reuven Cohen: Anatomy of a Cloud Consultant; Gwen Morton and Ted Alford: Governmental cloud computing economic analysis; Kevin Jackson: Review of the Morton/Alford analysis; Ashlee Vance: Contemplating Microsoft’s future prospects; and others.

• Update 10/15/2009: Liz MacMillan: Non-Profit Group Slams Google; Lori MacVittie: Per-instance-hour pricing issues; Dana Gardner: Roadmap from Virtualization to the cloud; Lori MacVittie: Amazon EC2 Load Balancing; Guy Korland: OOPSLA'09 - Cloud Workshop; Jon Brodkin: Internal Clouds; Nick Barclay: Connecting Performance Point Server to SQL Azure; Dag König: Upgraded SQL Azure Explorer; Simon Guest: Five SQL Azure Screencasts; Teresa Carlson: Microsoft talks cloud security for federal agencies; George Reese: Sys Admins for Cloud Apps; and much more.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the individual post; the section links then navigate within it.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts, Databases, and DataHubs*”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page.
* Content for managing DataHubs will be added when Microsoft releases a CTP of the technology.

Azure Blob, Table and Queue Services

The Data Platform Insider blog reports in its Get ready for SharePoint Conference Next Week! post of 10/16/2009 that SharePoint 2010 will gain a new RESTful Data Service. Watch for sessions named Developing with REST and LINQ in SharePoint 2010 (#SPC359 on Twitter) and Externalizing BLOB Storage in SharePoint 2010 (#SPC399 on Twitter) when new SharePoint 2010 features are announced next week.

In the meantime, check out the conference’s Session Sneak Peek page and use Jeremy Thake’s SPC09 Session codes and titles list to follow Tweets by session code. See Mary Jo Foley’s What makes Microsoft's SharePoint tick? column of 10/16/2009 for more background on SharePoint 2010. Mary Jo describes SharePoint as follows in a comment to her post:

In short, it is six server workloads bundled into a single product. … It does content/document-management; intranet search; enterprise social networking and collaboration; internet/intranet portal creation; business-intelligence; and provides business connectivity and development (that's what "composites" means).

Mike Amundsen claims “HTTP servers are middleware servers. that's it. end of story” in his HTTP, REST, and Middleware post of 10/16/2009:

[W]ell, maybe not the end.

[I]'m not a fan of talk about HTTP apps needing middleware; that's redundant. OOP apps need objects, right?

[S]ee "...Hypertext Transfer Protocol (HTTP) is an application-level protocol..." and, as such, is used as an abstraction layer between clients (common browsers, RIAs, desktop apps, bots, etc.) and server resources (file systems, databases, other servers, etc.).

[A]nd the REST architectural style is an excellent way to implement middleware services on HTTP servers. the defined constraints of this style include well-known ones (Client-Server, Stateless, & Cache) and ones unique to REST (Uniform Interface, Layered System, & Code-On-Demand). …

Microsoft’s new Windows Azure Platform landing page sports Web 2.0 colors and graphics:

 

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

Ayende Rahien’s NHibernate Shards: Progress Report explains how the NHibernate.Shards.Id.ShardedUUIDGenerator class generates GUIDs, such as

  • 00010000602647468c2ef2f10ded039a
  • 000200006ba74626a564d147dc89f9ad
  • 00030000eb934532b828601979036e3c

    in which the first four characters are the ShardId.
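
The scheme is easy to illustrate outside of NHibernate. Here is a minimal Python sketch (not Ayende’s implementation; the helper names are mine) that composes a 32-hex-character key whose first four characters carry the shard id, so any key can be routed back to its shard by inspecting its prefix:

```python
import uuid

def make_sharded_id(shard_id):
    """Prefix a 4-hex-digit shard id onto the remaining 28 hex digits of a random GUID."""
    if not 0 <= shard_id <= 0xFFFF:
        raise ValueError("shard id must fit in four hex digits")
    return "%04x%s" % (shard_id, uuid.uuid4().hex[4:])

def shard_of(sharded_id):
    """Recover the shard id by inspecting the first four characters of a key."""
    return int(sharded_id[:4], 16)

key = make_sharded_id(3)        # e.g. '0003...'
assert shard_of(key) == 3
```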

SQL Azure developers need sharding recommendations from the patterns & practices group before post-v1 distributed queries and transactions arrive.

    • Wade Wegner provides additional details about George Huey’s SQL Migration Wiz (MigWiz) v1.1 of 10/13/2009 in The SQL Azure Migration Wizard will now migrate your data! of 10/15/2009.

    Wade also provides a link to detailed instructions for using MigWiz in the SQL Azure Migration Wizard whitepaper.

• Simon Guest offers one of his Screencasts from "Patterns for Cloud Computing" Presentation in his 10/14/2009 post, this one about SQL Azure:

    The fourth screencast focuses on storing data in the cloud, primarily with SQL Azure.  For this demo, I take the database used in the second demo, and move it from a local instance of SQL to an instance running in the cloud.

    • Dag König has upgraded his SQL Azure Explorer on CodePlex to v0.2.0 with new features for creating and dropping databases, logins and users:

[Screenshot: SQL Azure Explorer v0.2.0]

    Nick Barclay explains how to connect to SADB from Performance Point Server (PPS) in his Cloud-based Tabular Data Sources post of 10/15/2009:

    I realize there are many out there who are sick of the term “the cloud”. Larry Ellison's rant on this topic is great.

Nonetheless I got my Azure invitation yesterday and, for no other reason than it’s geeky, tried to access it from PPS. As I’d hoped, it was simple (as was connecting using SSMS). I set up a sample DB in my allotted condensed water vapor storage area and created a tabular data source using the appropriate connection string. Easy!

[Screenshot: tabular data source connection string]

    Zach Skyles Owens announces Windows Azure Platform Training Kit – October Update is Live in this 10/14/2009 post to his private blog:

    Aligned with the SQL Azure October CTP release today we have published an updated version of the Windows Azure Platform Training Kit here.

While the Download Center servers are being replicated, you will want to make sure the title of the page contains “October Update” and the File Name is “WindowsAzurePlatformKitOctober2009.exe”.

This update contains a few new sections in the Hands On Lab covering the SQL Azure firewall and how to use BCP to move data into SQL Azure. The training kit contains the following SQL Azure-related content:

    Presentations

    • Introduction to SQL Azure
    • Building Applications using SQL Azure
    • Scaling Out with SQL Azure

    Demos

    • Preparing your SQL Azure Account
    • Connecting to SQL Azure
    • Managing Logins and Security in SQL Azure
    • Creating Objects in SQL Azure
    • Migrating a Database Schema to SQL Azure
    • Moving Data Into and Out Of SQL Azure using SSIS
    • Building a Simple SQL Azure App
    • Scaling Out SQL Azure with Database Sharding

    Hands On Labs

    • Introduction to SQL Azure
    • Migrating Databases to SQL Azure
    • Building Your First SQL Azure App

    My SQL Azure Now Feature Complete for PDC 2009 Release post of 10/14/2009 reports:

The SQL Azure team sent the following e-mail on 10/13/2009 at about 7:40 PM to all registrants of the SQL Azure Database August 2009 CTP release. George Huey also released a new v1.0 of the SQL Azure Migration Wizard that supports the Bulk Copy Protocol (BCP), which is described at the end of this post.

and includes an excerpt from the SQL Azure Team’s Updated CTP for SQL Azure Database includes complete feature set for PDC 2009! post of the same date that details the new features.

Abhijit Gadkari reports New CTP release – Firewall Settings and Views in this 10/14/2009 thread in the SQL Azure — Getting Started forum:

I liked the new SQL Azure CTP released yesterday. It has introduced a new firewall security feature – a good move, but kind of a gotcha. Remember to add your host IP address so that login will work. Once you log in, these rules can be checked using:

    • select * from sys.firewall_rules

Here are some other useful commands [use the master database]:

• select * from INFORMATION_SCHEMA.Tables
• select * from sys.database_usage [not sure how to interpret the result]
    • select * from sys.bandwidth_usage
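
For readers who prefer to script the check, here is a hedged sketch in Python with pyodbc (the server, login, and password are placeholders; your client IP must already be covered by a firewall rule or the login itself will be rejected, and the sys.firewall_rules column names should be verified against your server):

```python
import pyodbc

# Placeholder server and credentials; SQL Azure logins take the form user@server.
conn = pyodbc.connect(
    "Driver={SQL Server Native Client 10.0};"
    "Server=tcp:yourserver.database.windows.net;"
    "Database=master;"                      # the firewall views live in master
    "Uid=youruser@yourserver;"
    "Pwd=yourpassword;"
    "Encrypt=yes;")

cursor = conn.cursor()
cursor.execute("select name, start_ip_address, end_ip_address from sys.firewall_rules")
for name, start_ip, end_ip in cursor.fetchall():
    print("%s: %s - %s" % (name, start_ip, end_ip))
```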

Jo Maitland reports on 3/13/2009 from Oracle World in San Francisco that Oracle users balk at cloud computing:

    Oracle users battled through torrential rain this week to attend sessions on the future of database technology. One far-out notion that presenters touted: running Oracle in the cloud.

    Jamie Kinney, a business development manager for Amazon Web Services (AWS), led a session on deploying Oracle TimesTen in-memory database technology on Amazon Elastic Compute Cloud.

    "We can offer you 10 million transactions per day for just under $20," he told a room of about a hundred attendees. The equivalent hardware to drive this kind of performance would cost tens of thousands of dollars.

    Costs, performance keep data close to home
    It might sound cool, but few users seemed ready for cloud-based databases.

    Steven Winter, a senior database engineer at Gracenote, a subsidiary of Sony Corp., said the company's database is the core of Gracenote's business, and for that reason, it will never run on anyone else's infrastructure.

    Gracenote runs a massive database that includes all the metadata for the music stored on iTunes. Every time someone downloads a song, iTunes makes a call to Gracenote's database for the details on that piece of music.

    <Return to section navigation list> 

    .NET Services: Access Control, Service Bus and Workflow

• Simon Guest offers one of his Screencasts from "Patterns for Cloud Computing" Presentation in his 10/14/2009 post, this one about the .NET Service Bus:

    The fifth and final screencast looks at communication using the cloud, specifically using the .NET Service Bus. Instead of exchanging "hello world" type messages, I actually show how the service bus can be used to do protocol mapping between two machines - I think this demonstrates some of the more interesting applications that using the cloud can enable.  In this demo I connect to an instance of SQL Server running on a remote machine using the .NET Service Bus.

    <Return to section navigation list> 

    Live Windows Azure Apps, Tools and Test Harnesses

    •• Sriram Krishnan warns about a Windows Azure Service Management API upgrade (new versioning header required) in this 10/10/2009 thread in the Windows Azure forum:

    We've made some minor fixes to the Windows Azure Service Management API that we launched a few weeks ago.

    All requests for this release will need the versioning header "x-ms-version:2009-10-01" specified. Requests that use the older version "x-ms-version:2009-08-08" will fail and code will need to be changed to use the new versioning header.

    There is a new version of csmanage posted that uses the new versioning header and reflects the API changes at https://code.msdn.microsoft.com/Release/ProjectReleases.aspx?ProjectName=windowsazuresamples&ReleaseId=3233.

    The actual docs for the API, as always, can be found at http://msdn.microsoft.com/en-us/library/ee460799.aspx. We're pushing out new docs and you should see them there in a few hours.
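
As a concrete illustration of the change, here is a minimal Python sketch of the documented List Hosted Services call with the new header (the subscription id and certificate paths are placeholders; the Service Management API authenticates with the management certificate uploaded to the portal):

```python
import httplib  # Python 2's standard HTTPS client

SUBSCRIPTION_ID = "your-subscription-id"          # placeholder

# Client-certificate authentication plus the new required versioning header.
conn = httplib.HTTPSConnection(
    "management.core.windows.net",
    key_file="management-cert-key.pem",           # placeholder paths to the exported cert
    cert_file="management-cert.pem")
conn.request(
    "GET",
    "/%s/services/hostedservices" % SUBSCRIPTION_ID,
    headers={"x-ms-version": "2009-10-01"})       # requests sent with 2009-08-08 now fail
response = conn.getresponse()
print(response.status)
print(response.read())
```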

    Jack Mann interviews Ryan Howard, CEO of Practice Fusion in a Practice Fusion’s Ryan Howard: Five benefits of cloud-based electronic health records (EHRs) post to ExecutiveBiz of 9/30/2009. Jack offers this description:

Since its launch in 2005, the San Francisco-based company has been providing a SaaS-based Electronic Health Record system to the healthcare market. Along the way, the company has become one of the fastest growing electronic health information networks in the United States. Practice Fusion offers a free, web-based solution anchored in cloud computing — a technology that’s been serving as an attractive option to a frequently overlooked sliver of the healthcare market: doctors’ offices with nine or fewer physicians. They constitute 80 percent of the 900,000 physicians nationwide and typically lack sufficient IT resources to spearhead EHR adoption via a traditional (and more costly) software enterprise. Here Howard offers five facts to consider about EHR adoption, based upon a cloud approach. A must-read for any company looking for ideas on how to enlarge its own health IT footprint.

    Here’s an outline of Ryan’s five benefits:

    1. A cloud-based model is cost-effective.
    2. A cloud-based model is secure.
    3. Federal criteria for meaningful use will likely cover three scenarios.
    4. Aligning with a web-based provider is key.
    5. The future belongs to a centralized platform.

    for which Jack provides detailed descriptions.

• Simon Guest offers five Screencasts from "Patterns for Cloud Computing" Presentation in viewers embedded in his 10/14/2009 post:

      1. The first screencast shows a small application called PrimeSolvr, which was created by a couple of colleagues - Wade Wegner and Larry Clarkin.  PrimeSolvr is a simple application that solves prime numbers, running on a 25 node instance using Windows Azure.
  2. The second screencast focuses on multi-tenancy and shows handling data and UI for multi-tenant applications using ASP.NET MVC, which could then be uploaded to the cloud. The demo was inspired by Maarten Balliauw, a Microsoft MVP in Belgium who created a routing mechanism for ASP.NET that uses the domain name (see the host-based routing sketch after this list).
      3. The third screencast, and demo, is a simple implementation of MapReduce running on Windows Azure.  The demo is run locally to demonstrate logging in the fabric, and shows how a MapReduce-like application can run on Windows Azure.
      4. The fourth screencast focuses on storing data in the cloud, primarily with SQL Azure.  For this demo, I take the database used in the second demo, and move it from a local instance of SQL to an instance running in the cloud.
      5. The fifth and final screencast looks at communication using the cloud, specifically using the .NET Service Bus. Instead of exchanging "hello world" type messages, I actually show how the service bus can be used to do protocol mapping between two machines - I think this demonstrates some of the more interesting applications that using the cloud can enable.  In this demo I connect to an instance of SQL Server running on a remote machine using the .NET Service Bus.
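
Maarten’s mechanism is ASP.NET MVC routing, but the underlying idea is language-neutral: resolve the tenant from the request’s Host header before dispatching. A minimal WSGI-style sketch in Python (the host names and tenant keys are illustrative only):

```python
# Illustrative host-name-to-tenant map; real deployments would use per-tenant CNAMEs.
TENANTS = {
    "contoso.cloudapp.example": "contoso",
    "fabrikam.cloudapp.example": "fabrikam",
}

def application(environ, start_response):
    host = environ.get("HTTP_HOST", "").split(":")[0].lower()
    tenant = TENANTS.get(host)
    if tenant is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Unknown tenant"]
    # Downstream data access would filter storage queries by this tenant key.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("Hello, tenant %s" % tenant).encode("ascii")]
```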

    Robert Rowley, MD writes about The emergence of e-patients on 10/14/2009 from the Health 2.0 conference in San Francisco:

    Fresh on the heels of the Health 2.0 conference in San Francisco, a renewed spotlight is shining on empowered “e-patients.” The convention was a showcase for a myriad of online and mobile tools which enable patients to engage in their health and their lives in ways not previously feasible.

But who are these “e-patients,” and do they represent a significant segment of the population? A common belief among physicians is that “e-patients” are a minority of motivated people, but do not represent the bulk of the patients being attended. Is this true? A sentinel report describes e-patients, and what the term means, in some detail.

    A recent report by the Pew Internet Project describes the rise of the e-patient. A rapidly growing segment of the population is using Internet and other e-tools for access to health information: 61% of the total population, and 83% of the online population, use the Internet for querying about health information. It is widespread across genders and ethnicities: 64% of men and 57% of women; 65% of whites, 51% of blacks, and 44% of Hispanics use the Internet this way. There is a skewing toward upscale and educated users, and towards parents of young children.

    Over the past few years, e-patients have become high-speed (using broadband 88% of the time), and mobile (using wireless 89% of the time). E-patients are also more likely to use social media, and build online networks of “patients like me” to discuss their health conditions, find out about resources, or simply vent their frustrations or experiences with the delivery system. …

    Photo credit: Robert Rowley, MD

    Microsoft PressPass reports Caritas Christi Health Care Partners With Microsoft to Connect Physicians and Patients in this 10/14/2009 press release:

    Caritas Christi Health Care, New England’s largest community-based hospital network, has entered into a far-reaching agreement to use Microsoft Corp. health solutions, including Microsoft Amalga Unified Intelligence System (UIS) and Microsoft HealthVault, as the underpinning of an extensive system-wide data and patient relationship strategy designed to improve the health and quality of care for its 1 million patients while reducing overall healthcare costs.

    “The future of healthcare is in providing world-class health care to patients in the communities in which they live — not in having them travel to distant healthcare settings,” said Ralph de la Torre, M.D., CEO of Caritas Christi Health Care. “Our relationship with Microsoft will use technology to integrate our healthcare delivery system, and accelerate our mission of using the full leverage of our six hospitals and 13,000 employees to deliver world-class, community-based care.”

    Caritas has selected Amalga UIS to unlock and aggregate patient data currently stored across multiple independent systems, providing physicians with a real-time and complete picture of a patient’s medical history at the point of care. In addition, Caritas Christi will help enable patients in its communities to take greater control of their health and wellness by delivering unprecedented access to their personal health data both inside and outside the hospital using Microsoft HealthVault, the personal health information platform. …

    In addition, as part of its patient empowerment strategy, Caritas will enable patients and caregivers to access and store their personal health information generated at the hospital and physician offices, including medical summary data, provider information, appointments, and insurance and billing information — all through HealthVault. Patients can then share this information with physicians, family members or anyone they choose, as well as use the information in a wide range of personal health and wellness applications connected to HealthVault. …

    <Return to section navigation list> 

    Windows Azure Infrastructure

    •• Ashlee Vance contemplates Microsoft’s future prospects in her Forecast for Microsoft: Partly Cloudy New York Times feature article of 10/17/2009. Here are brief excerpts from page 3:

    “I would say there’s clearly a change in the fundamental platform of computing,” Mr. Ballmer says. “The cloud is now not just the Internet; it’s really a fundamental computing resource that’s getting thought about and looked at in a different way.”

    But the cloud presents Microsoft with a host of challenges to its time-tested model of selling desktop and computer server software for lucrative licensing fees. Fast-paced rivals like Salesforce, Amazon and Google hope to undercut its prices while adding software features every few weeks or months rather than every few years, as Microsoft has done.

    Microsoft executives acknowledge that the company had perhaps stalled, licking its wounds and trying to figure out how to behave while under scrutiny after years of antitrust court battles.

    “We’ve moved to be a mature company, but maybe too nice a guy in some senses, and not maybe moving fast enough in things,” says Bob Muglia, a 20-year Microsoft employee and president of its server software business. …

    In an effort to continue remaking its image, Microsoft is courting young software developers and cloud computing start-ups. Company executives acknowledge losing touch with these crucial audiences as open-source software turned into the standard for people looking to create the next wave of applications and services.

    Charlton Barreto comments on Ashlee‘s article in his Why Microsoft sees its future in [Rich Services] Cloud of 10/18/2009:

    From health care systems to cellphones, Steve Ballmer wants Microsoft "to invent everything that's important on the planet." …

    Microsoft, if it wants to maintain shareholder value and secure a place in this new world, has to adapt to the reality of the Rich Services Cloud, and both Steve Ballmer and Ray Ozzie have recognised this. They have bet Microsoft's future on the relevance of Rich Services Cloud. Whilst Microsoft will look at a diminution of their revenue streams from the "cash cow" days of Windows OS and Microsoft Office, their role in a world that includes KVM-type platforms (such as the WinXP KVM in Win7) and Rich Services Cloud apps (such as MS Online) will continue to have relevance.

In embracing further value-added apps that utilise the Rich Services Cloud model, Microsoft may find its next "cash cow" in vertical suites. Whether this works for them depends upon whether they can make a splash early on with MS Online, and show that it can engender innovation in office productivity (i.e. compete very favourably with Google Apps), whilst providing the better usage models and user experiences.

    Charlton appears to miss the importance of the Azure Services Platform to Microsoft’s future revenue and earnings.

    •• Reuven Cohen’s Anatomy of a Cloud Consultant post of 10/18/2009 asks:

    So how do you qualify a cloud expert? This is where things start to get complicated. First of all, unlike other areas of IT there is no professional certification for "cloud consultants". So choosing a professional cloud consultant or service firm is a matter of doing your due diligence. To help, I've compiled a brief check list of things you may want to look for when selecting your cloud consultant.

    •• Gwen Morton and Ted Alford detail federal government migration costs in their The Economics of Cloud Computing Analyzed: Addressing the Benefits of Infrastructure in the Cloud post of 10/17/2009. From the References section:

    Our model focuses on the costs that a cloud migration will most likely directly affect; i.e., costs for server hardware (and associated support hardware, such as internal routers and switches, rack hardware, cabling, etc.), basic server software (OS software, standard backup management, and security software), associated contractor labor for engineering and planning support during the transition phase, hardware and software maintenance, IT operations labor, and IT power/cooling costs. It does not address other costs that would be less likely to vary significantly between cloud scenarios, such as storage, application software, telecommunications, or WAN/LAN. In addition, it does not include costs for government staff. Further, for simplicity we removed facilities cost from the analysis. [Emphasis added.]

    Gwen is a Senior Analyst and Ted is an Associate of Booz Allen Hamilton’s economic and business analysis practice.

•• Kevin Jackson’s Government Cloud Economics post of 10/16/2009 effuses over the Booz Allen report (see above) and says it’s “an EXCELLENT economic evaluation of the federal government's push to cloud computing. Anyone interested in this market should definitely read it, analyse it, and believe it !!” and is “Definitely a must read!!” [Emphasis Kevin’s.]

    My advice is to take into consideration what’s not addressed in the report, as emphasized in the preceding post.

    Dana Gardner reviews Forrester Research’s new “The Top 15 Technology Trends EA Should Watch” report in his What's on your watch list? Forrester identifies 15 key technologies for enterprise architects BriefingsDirect post of 10/16/2009. The report’s “Restructured IT services platforms” theme includes the following technologies:

    Note the emphasis on PaaS.

    • George Reese posits Your Cloud Needs a Sys Admin in this 10/15/2009 article for the O’Reilly Media Broadcast blog:

    I've attended a number of CloudCamps around the world, and the question as to whether systems administrators are relevant in the post-cloud world always seems to come up. Let's put this silly question to bed: your cloud needs a sys admin.

    Programmers vs. Sys Admins

    A mature IT ecosystem has both systems administrators and developers. While there's a lot of overlap in the more mundane skills of each, I've rarely seen good sys admins make good programmers. And I've rarely seen good programmers make good sys admins. The cloud, however, has a nasty habit of deluding programmers into thinking they no longer need sys admins.

    Sys admins live and breathe hardware, the OS, and the network.

    They know the right feeds to follow to keep track of security alerts and advancements, and they know when to patch and when to let something slide. They also know how to manage the patching of production environments to minimize the impact on system uptime.

I agree with William Vambenepe’s comment that Sys Admins are much more important for IaaS clouds, such as AWS, than automated PaaS services, such as Windows Azure. Microsoft manages security alerts, OS patches, and OS updates and scales applications up and down with specified numbers of instances. There is very little for a Sys Admin to manage in a Windows Azure implementation. DBAs might be helpful when designing a SQL Azure schema and assisting with SQL query optimization, but DBAs can’t get their hands on the knobs to tune SQL Azure.

    • The Economist’s un-bylined Battle of the clouds story carries this subtitle: “The fight to dominate cloud computing will increase competition and innovation:”

    THERE is nothing the computer industry likes better than a big new idea—followed by a big fight, as different firms compete to exploit it. “Cloud computing” is the latest example, and companies large and small are already joining the fray. The idea is that computing will increasingly be delivered as a service, over the internet, from vast warehouses of shared machines. Documents, e-mails and other data will be stored online, or “in the cloud”, making them accessible from any PC or mobile device. Many things work this way already, from e-mail and photo albums to calendars and shared documents.

    This represents a big shift. If you store more and more things online, and access more and more software through an ordinary web browser, it suddenly matters much less what sort of computer you have, and what kind of software it is running. This means Microsoft, which launches the newest version of its Windows operating system this month, could lose out—unless, that is, the software giant can encourage software developers and users to migrate to its new suite of cloud-based services. Its main rival is Google, which offers its own range of such services, and continues to launch new ones and interlink them more closely. Yahoo!, which is allied with Microsoft, and Apple also offer cloud services for consumers; specialists such as Salesforce and NetSuite do the same for companies. Amazon has pioneered the renting out of cloud-based computing capacity. Some firms will offer large, integrated suites of cloud-based services; others will specialise in particular areas, or provide the technical underpinnings necessary to build and run clouds. But battle has been joined (see article). …

The article referred to is the Clash of the clouds feature story of the same date, sub-titled “The launch of Windows 7 marks the end of an era in computing—and the beginning of an epic battle between Microsoft, Google, Apple and others.” The story begins with the Windows 7 launch and continues with more commentary about competition in the cloud.

    • William Vambenepe posits a Cloud platform patching conundrum: PaaS has it much worse than IaaS and SaaS in this 10/15/2009 post:

    The potential user impact of changes (e.g. patches or config changes) made on the Cloud infrastructure (by the Cloud provider) is a sore point in the Cloud value proposition (see Hoff’s take for example). You have no control over patching/config actions taken by the provider, any of which could potentially affect you. In a traditional data center, you can test the various changes on specific applications; you don’t have to apply them at the same time on all servers; and you can even decide to skip some infrastructure patches not relevant to your application (”if it aint’ broken…”). Not so in a Cloud environment, where you may not even know about a change until after the fact. And you have no control over the timing and the roll-out of the patch, so that some of your instances may be running on patched nodes and others may not (good luck with troubleshooting that).

Unfortunately, this is even worse for PaaS than IaaS. Simply because you sit on a lot more infrastructure that is opaque to you. In an IaaS environment, the only thing that can change is the hardware (rarely a cause of problem) and the hypervisor (or equivalent Cloud OS). In a PaaS environment, it’s all that plus whatever flavor of OS and application container is used. Depending on how streamlined this all is (just enough OS/AS versus a traditional deployment), that’s potentially a lot of code and configuration. Troubleshooting is also somewhat easier in an IaaS setup because the error logs are localized (or localizable) to a specific instance. Not necessarily so with PaaS (and even if you could localize the error, you couldn’t guarantee that your troubleshooting test runs on the same node anyway). …

    • Danny Goodall wants to find out what is The Decision Making Unit for Cloud Computing in this 10/15/2009 post to the REPAMA blog:

    I’m kicking off some research into the Decision Making Unit (DMU) for Cloud Computing services and software.

I’m interested to see how much, if at all, the cloud computing decision making unit differs from that of traditional data centre or infrastructure software sales. And if it does differ (as I suspect it does) then what is the impact on traditional marketing elements like audience, message, value propositions, supporting materials, etc.

    I want to examine the decision making units for each of the different high-level segments in the Lustratus REPAMA market landscape / taxonomy / segmentation model for cloud computing.

    For each of these segments, the basic question I want to answer is:

    “Within a B2B Cloud Computing transaction, what job roles are involved in the decision making process, what do these individuals need in order to arrive at a decision and how does this differ from traditional enterprise software sales?”

    Dana Gardner’s Making the Leap from Virtualization to Cloud Computing: A Roadmap and Guide 10/15/2009 is a “Transcript of a sponsored BriefingsDirect podcast on what enterprise architects need to consider when moving from virtualization to cloud computing:”

    Today, we present a sponsored podcast discussion on making a leap from virtualization to cloud computing. We’ll hammer out a typical road map for how to move from virtualization-enabled server, storage, and network utilization benefits to the larger class of cloud computing agility and efficiency.

    How should enterprise IT architects scale virtualized environments so that they can be managed for elasticity payoffs? What should be taking place in virtualized environments now to get them ready for cloud efficiencies and capabilities later? And how do service-oriented architecture (SOA), governance, and adaptive infrastructure approaches relate to this progression or road map from tactical virtualization to powerful and strategic cloud computing outcomes?

    Here to help you answer these questions and to explain more about properly making a leap from virtualization to cloud computing, we are joined by two thought leaders from Hewlett-Packard: … Rebecca Lawson, director of Worldwide Cloud Marketing at HP … [and] Bob Meyer, the worldwide virtualization lead in HP’s Technology Solutions Group.

    Microsoft’s US Partner Learning Blog lists 4 Essential Azure Resources for Partners in this 10/14/2009 post:

      1. What is Azure? Webcast – For those of you who are just starting to dig in. What is the Azure Services Platform? What will it do for you and your customers? Free and on-demand, so watch it whenever you want.
      2. Azure Step-by-Step Tutorial/Community Technology Preview – This is your chance to put in some elbow grease and understand the nitty gritty of Azure. With the CTP, you can create your own “Hello World” Azure app in 45 minutes.
      3. Pricing Guide – How are Azure costs calculated? This page explains it all.
      4. Cost Calculator – You have the pricing, but what does it all actually mean? How much bandwidth will your customers actually use? What about storage costs? Thanks to Cumulux for this great tool. (A rough cost arithmetic sketch follows this list.)
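
For a rough sense of how the published consumption rates combine, here is a back-of-the-envelope sketch; the rates are the ones Microsoft announced in July 2009 and may change, so treat them as assumptions and confirm them against the pricing page:

```python
# Consumption rates announced in July 2009 (assumed here; confirm on the pricing page):
COMPUTE_PER_HOUR = 0.12        # per compute instance-hour
STORAGE_PER_GB_MONTH = 0.15    # per GB stored per month
TRANSACTIONS_PER_10K = 0.01    # per 10,000 storage transactions
BANDWIDTH_IN_PER_GB = 0.10     # inbound data transfer per GB
BANDWIDTH_OUT_PER_GB = 0.15    # outbound data transfer per GB

def monthly_cost(instances, hours, storage_gb, transactions, gb_in, gb_out):
    return (instances * hours * COMPUTE_PER_HOUR
            + storage_gb * STORAGE_PER_GB_MONTH
            + transactions / 10000.0 * TRANSACTIONS_PER_10K
            + gb_in * BANDWIDTH_IN_PER_GB
            + gb_out * BANDWIDTH_OUT_PER_GB)

# Two always-on web role instances for a 30-day month, 10 GB of blobs,
# a million storage transactions, and modest bandwidth:
print("$%.2f" % monthly_cost(2, 30 * 24, 10, 1000000, 5, 20))   # ~$178.80
```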

Jon Brodkin claims Internal clouds are more than just virtualization in this 10/14/2009 NetworkWorld article:

    Building an internal cloud is as easy as installing VMware, right?

    That's what a lot of customers think, but in reality the virtualization of servers is just one of many required steps for enterprises that want to build a cloud network.

    Last year, Forrester Research asked enterprises in a survey how many of them had built an internal cloud, and about 5% said they had, according to analyst James Staten. But when asked to define the internal cloud, IT executives typically replied "my VMware environment," Staten says.

    In reality, adoption of internal clouds as defined by Forrester is less than 2% of enterprises, and vendors are just beginning to provide the proper tools necessary to build them, he says. …

    James Urquhart continues with Part 4 of his Cloud computing and the big rethink series for CNet News:

    So far in this series, I've described why the very form of application infrastructure delivery will change in the coming years, and why both infrastructure and software development will play a major role in that. These are powerful forces that are already at work, and you are already seeing their effects on the way enterprise IT and consumer Web applications are being operated.

    There is one more key force that will change the way we acquire, build, and consume enterprise application functionality and data, however. It is the very reason that enterprise IT exists. I am speaking, of course, of the users--the business units and individuals that demand IT give them increased productivity and competitive advantage.

    How is it that end users could affect cloud-based architectures? After all, isn't one of the key points about cloud computing that it hides infrastructure and operations from hosted applications and services? The answer is simple: the need for cloud-operated infrastructure comes from the need for more efficient application delivery and operations, which in turn comes from the accelerated need for new software functionality driven by end users. …

Vishwas Lele’s thoughtful and detailed Outlook for Azure – scattered clouds but generally sunny analysis of the Windows Azure Platform covers the following topics and debunks six common cloud-computing concerns:

    • Azure – A Platform as a Service offering
    • Concern #1: “The Car Analogy  -  Azure Pricing model is fundamentally broken”
    • Concern #2: “Azure does not scale dynamically”
    • Concern #4: “I am out if there is no Remote Desktop Access”
    • Concern #5: “You cannot seamlessly move your Azure application back to the datacenter”
    • Concern #6: “Azure is slow compared to EC2 or GAE”

Vishwas is Chief Technology Officer (.NET Technologies) at Applied Information Sciences, Inc. where he’s responsible for assisting organizations in envisioning, designing, and implementing enterprise solutions. Vishwas also serves as the Microsoft Regional Director for the Washington DC area.

    Bill Lodin stars in two new one-hour Windows Azure video training sessions:

Additional Azure training sessions will become available on 10/16, 10/26, 11/30 and 12/14/2009 at msdev.com’s Everything You Need to Know About Azure as a Developer page.

    James Staten takes on Alex Williams in his Cloud Is Defined, Now Stop the Cloudwashing post of 10/14/2009:

    This blog post is a response to an article by Alex Williams on ReadWriteWeb. Thanks for the shout out, Alex, and for bringing more attention to the contentious issue of cloud computing definitions. While Forrester research reports are created exclusively for our clients, our definition is freely available:

    A standardized IT capability (services, software, or infrastructure) delivered via Internet technologies in a pay-per-use, self-service way.

    We first published this definition back in March 2008 in the report, “Is Cloud Computing Ready for the Enterprise,” and have held to that published definition ever since (in fact it has been leveraged in multiple Forrester reports, speeches at industry events, news articles, blog posts and tweets since that original publication). Our definition was also used by NIST and several other Federal government agencies as a resource used to create their definition. …

    In this Tech Radar we lauded NIST for their definition, and contrary to your statement, do not believe we need the “circus” of more or better definitions at this stage. Rather we believe we need broader recognition of what is and what isn’t cloud computing to get past the marketing hype and make it easier for customers to identify and then consume these valuable new service offerings. That’s why we’ve stuck with our definition since 2008 and are glad to see NIST sticking to theirs. [Tech Radar link added.]

    <Return to section navigation list> 

    Cloud Security and Governance

    •• Lori MacVittie’s Putting a Price on Uptime post of 10/16/2009 asks “How do you put a price on uptime and more importantly, who should pay for it?” Lori begins:

    A lack of ability in the cloud to distinguish illegitimate from legitimate requests could lead to unanticipated costs in the wake of an attack. How do you put a price on uptime and more importantly, who should pay for it?

    A “Perfect Cloud”, in my opinion, would be one in which the cloud provider’s infrastructure intelligently manages availability and performance such that when it’s necessary new instances of an application are launched to ensure meeting the customer’s defined performance and availability thresholds. You know, on-demand scalability that requires no manual intervention. It just “happens” the way it should.

    Several providers have all the components necessary to achieve a “perfect cloud” implementation, though at the nonce it may require that customers specifically subscribe to one or more services necessary. For example, if you combine Amazon EC2 with Amazon ELB, Cloud Watch, and Auto Scaling, you’ve pretty much got the components necessary for a perfect cloud environment: automated scalability based on real-time performance and availability of your EC2 deployed application.

The Windows Azure Platform provides “automated scalability based on real-time performance” as a standard feature, so Lori’s issue applies equally to Windows Azure. Lori continues:

    Cool, right?

    Absolutely. Except when something nasty happens and your application automatically scales itself up to serve…no one. …

    The reason the perfect cloud is potentially a danger to the customer’s budget is that it currently lacks the context necessary to distinguish good requests from bad requests. Cloud today, and most environments if we’re honest, lack the ability to examine requests in the context of the big picture. That is, it doesn’t look at a single request as part of a larger set of requests, it treats each one individually as a unique request requiring service by an application.

Microsoft should respond to this issue by describing its security measures against DDoS and similar attacks on Windows Azure and SQL Azure instances, and its policies for billing bandwidth consumed by marauders if such attacks aren’t deflected.

•• Chris Hoff (@Beaker) rebuts in his Amazon Web Services: It’s Not The Size Of the Ship, But Rather The Motion Of the… post of 10/16/2009 the content of Carl Brooks’ (@eekygeeky) 10/14/2009 interview of Peter DeSantis, VP of Amazon Web Services’ EC2:

    … [T]his article left a bad taste in my mouth and invites more questions than it answers.  Frankly I felt like there was a large amount of hand-waving in DeSantis’ points that glossed over some very important issues related to security issues of late.

    DeSantis’ remarks implied, per the title of the article, that to explain the poor handling and continuing lack of AWS’ transparency related to the issues people like me raise,  the customer is to blame due to hype and overly aggressive, misaligned expectations.

    In short, it’s not AWS’ fault they’re so awesome, it’s ours.  However, please don’t remind them they said that when they don’t live up to the hype they help perpetuate.

    You can read more about that here “Transparency: I Do Not Think That Means What You Think That Means…” …

    Chris quotes from Lori MacVittie’s Amazon Elastic Load Balancing Only Simple On the Outside 10/15/2009 post:

    A lack of ability in the cloud to distinguish illegitimate from legitimate requests could lead to unanticipated costs in the wake of an attack. How do you put a price on uptime and more importantly, who should pay for it?

    And from his own earlier warning:

    I quote back to something I tweeted earlier “The beauty of cloud and infinite scale is that you get the benefits of infinite FAIL”

    The largest DDOS attacks now exceed 40Gbps. DeSantis wouldn’t say what AWS’s bandwidth ceiling was but indicated that a shrewd guesser could look at current bandwidth and hosting costs and what AWS made available, and make a good guess.

    Chris continues:

The tests done here showed the capability to generate 650 Mbps from a single medium instance that attacked another instance which, per Radim Marek, was using another AWS account in another availability zone. So if the “largest DDoS attacks now exceed 40 Gbps” and five EC2 instances can handle 5 Gb/s, I’d need 8 instances to absorb an attack of this scale (unknown if this represents a small or large instance). Seems simple, right?

Again, this is about absorbing bandwidth against these attacks, not preventing them or defending against them. This is about not only passing the buck by squeezing more of them out of you, the customer.

    Teresa Carlson’s Secure the Datacenter, Secure the Cloud post of 10/14/2009 to FutureFed the Microsoft Federal Blog describes Microsoft’s compliance with security standards and regulations:

    I’ve talked a lot about the essential role cloud computing can play in creating a more agile, efficient, and cost-effective federal government, but when deciding whether or not to embrace cloud technology, agencies’ biggest questions rightly focus on security and privacy.  That’s why adhering to top line standards in each of those areas is critically important.

    Datacenters are the foundation of any organization’s approach to cloud computing, which is why Microsoft has built its datacenters to comply with the strictest international security and privacy standards, including International Organization for Standardization (ISO) 27001, Health Insurance Portability and Accountability Act (HIPAA), Sarbanes Oxley Act of 2002 and SAS 70 Type 1 and Type II.  The ISO 27001 global certification is particularly important, as the highest international standard for information security.  Todd VanderVen, president of BSI Management Systems discussed ISO 27001 in a recent research report, saying, "As the first major online service provider to earn ISO/IEC 27001:2005 certification, Microsoft is further demonstrating a commitment to making its company more secure and securing the information of its customers.  By formalizing their documentation and processes and using ISO/IEC 27001:2005, Microsoft will be able to improve quality as well as security and continue to raise the bar for the industry, as they have done so well over the years."

    Teresa is vice-president, Microsoft Federal. You can follow @FutureFed on Twitter.

    <Return to section navigation list> 

    Cloud Computing Events

    •• BrightTalk presents an eight-hour Electronic Health Records summit on 10/20/2009 starting at 8:00 AM PT.

  • Click the associated Attend button to register for the session.

•• Tom Bittman’s Confessions of a Gartner Analyst at Symposium post of 10/17/2009 reveals his thoughts before traveling to next week’s Gartner Symposium in Orlando, FL:

    • I love these conferences. During the year, I spend a large percentage of my time on the phone with clients (600 or so calls this year?). I also visit with clients face-to-face throughout the year (I think I visited with perhaps a hundred this year). However, nothing compares with the density of client conversations that take place at Symposium.

      For me, Symposium is about four days of constant client interaction. This year, I’ll deliver two presentations (one on cloud and private cloud computing, one on virtualization), a debate (is private cloud real?), a client roundtable, about 40 one-on-ones, two breakfasts with clients, two lunches with clients, a dinner with one client, and another dinner with a few dozen key CIOs. History says, all remaining open time will disappear as soon as I arrive. This will be solid 7am to 10pm client discussions. [Emphasis added] …

      This year I’ll be active[ly] tweeting during the conference. Of course, nothing confidential about individual clients, but I’ll tweet about the pulse of the market and things that are coming up often (tombitt on Twitter, and I’ll hash my tweets with #GartnerSym).

      •• Geva Perry reports on CloudCamp in the Cloud - Oct 22 in this 10/16/2009 post:

      The organizers of the very successful CloudCamp events have put together a new virtual event called CloudCamp in the Cloud, which takes place next week. I've been to a few physical CloudCamps and they are great events, so I expect this online one to be good as well. Worth attending.

      Here are the details:

CloudCamp, organizer of the community-based cloud computing unconference series, today announced that it’s taking its popular event series virtual with the forthcoming “CloudCamp in the Cloud.” CloudCamp in the Cloud, to be held Thursday, October 22, 2009 from 12 noon to 3 pm Eastern Standard Time, builds upon the original live CloudCamp format, providing a free and open place for the introduction and advancement of cloud computing. Using an online meeting format, attendees will exchange ideas, knowledge and information in a creative and supporting environment, advancing the current state of cloud computing and related technologies.

•• Wade Wegner offers his slide deck from the Day of Cloud conference in Chicago in his Presenting on the Windows Azure Platform at the Day of Cloud post of 10/16/2009. His session covered Windows Azure apps and storage, SQL Azure and .NET Services. The conference also offered sessions about Salesforce.com, Amazon.com, and Google App Engine.

      Guy Korland reports about the OOPSLA'09 - Cloud Workshop on 8/12/2009 to LinkedIn’s Cloud Interoperability Forum:

      OOPSLA’09 will be held on 10/25 to 10/29/2009 at the Disney Contemporary Resort in Orlando, FL.

      Best Practices in Cloud Computing: Designing for the Cloud, 8:30–5:00, Sunday (Oct 25) — Fantasia Ballroom L: Cloud computing is the latest technology evolution and labelled among many as the next potential technology silver bullet. There are both great expectation and fear to what consequences these technologies might cause. We want to engage the development community in a series of workshops at OOPSLA09 to ensure that cloud computing evolves in a meaningful way for those who are likely to develop solutions on the cloud. This workshop will focus on design implications. Although there is already strong support for these technologies from companies such as IBM and Microsoft, there is a need to explore good ways of designing services for the Cloud to ensure quality and productivity. There are movements in the modeling community that require further investigation as well as surviving concepts from the SOA era that need to be captured. Capturing and discussing best practices on these subjects will contribute to a healthy movement in the right direction for those who will develop services for the Cloud.

CloudCamp, 5:30–10:00, Sunday (Oct 25) — Grand Republic Ballroom B: CloudCamp is a [Free] unconference where early adopters of Cloud Computing technologies exchange ideas. With the rapid change occurring in the industry, we need a place we can meet to share our experiences, challenges and solutions. At CloudCamp, you are encouraged to share your thoughts in several open discussions, as we strive for the advancement of Cloud Computing. End users, IT professionals and vendors are all encouraged to participate.

      Best Practices in Cloud Computing: Implementation and Operational Implications for the Cloud, 8:30–5:00, Monday (Oct 26) — Fantasia Ballroom L: Cloud computing is the latest technology evolution and labelled among many as the next potential technology silver bullet. There are both great expectation and fear to what consequences these technologies might cause. We want to engage the development community in a series of workshops at OOPSLA09 to ensure that cloud computing evolves in a meaningful way for those who are likely to develop solutions on the cloud. This workshop will focus on operational implications. Due to the potential rapid availability of services in the cloud it is important to start exploring consequences of using such services, for instance access control, regulatory issues, development practices, security and practical operational issues. Capturing and discussing best practices on these subjects will contribute to a healthy movement in the right direction for those who will develop services for the Cloud.

      Forrester Research’s Selling the Cloud workshop will take place 12/1/2009 at 950 Tower Lane, Suite 1200, Foster City, CA from 8:30 AM to 4:00 PM PT.

The following speakers will cover these topics:

      <Return to section navigation list> 

      Other Cloud Computing Platforms and Services

      •• Chris Hoff (@Beaker) had an Incomplete Thought: The Cloud Software vs. Hardware Value Battle & Why AWS Is Really A Grid… on 10/18/2009:

      Some suggest in discussing the role and long-term sustainable value of infrastructure versus software in cloud that software will marginalize bespoke infrastructure and the latter will simply commoditize.

      I find that an interesting assertion, given that it tends to ignore the realities that both hardware and software ultimately suffer from a case of Moore’s Law — from speeds and feeds to the multi-core crisis, this will continue ad infinitum.  It’s important to keep that perspective.

      In discussing this, proponents of software domination almost exclusively highlight Amazon Web Services as their lighthouse illustration.  For the purpose of simplicity, let’s focus on compute infrastructure.

      Chris continues with four reasons that “pointing to Amazon Web Services (AWS) as representative of all cloud offerings in general to anchor the hardware versus software value debate is not a reasonable assertion” and concludes:

      Comparing AWS to most other IaaS cloud providers is a false argument upon which to anchor the hardware versus software debate. [Emphasis Chris’.]

      This appears to me to be a complete thought.

      James Hamilton’s Jeff Dean: Design Lessons and Advice from Building Large Scale Distributed Systems of 10/17/2009 reviews Jeff Dean of Google’s keynote talk at LADIS 2009. James’ notes on new developments at Google:

      • Working on next generation GFS system called Colossus
      • Metadata management for Colossus in BigTable
      • Working on next generation Big Table system called Spanner

      Spanner characteristics:

      • Similar to BigTable in that Spanner has tables, families, groups, coprocessors, etc.
• But has hierarchical directories rather than rows, fine-grained replication (at the directory level), ACLs
      • Supports both weak and strong data consistency across data centers
      • Strong consistency implemented using Paxos across replicas
      • Supports distributed transactions across directories/machines
      • Much more automated operation
      • Auto data movement and replicas on basis of computation, usage patterns, and failures
      • Spanner design goals: 10^6 to 10^7 machines, 10^13 directories, 10^18 storage, 10^3 to 10^4 locations over long distances
• Users specify required latency, replication factor, and location

      Lori MacVittie claims Amazon Elastic Load Balancing Only Simple On the Outside in her 10/15/2009 post to F5’s DevCentral blog:

      Amazon’s ELB is an exciting mix of well-executed infrastructure 2.0 and the proper application of SOA, but it takes a lot of work to make anything infrastructure look that easy.

      The notion of Elastic Load Balancing, as recently brought to public attention by Amazon’s offering of the capability, is nothing new. The basic concept is pure Infrastructure 2.0 and the functionality offered via the API has long been available on several application delivery controllers for many years. In fact, looking through the options for Amazon’s offering leaves me feeling a bit, oh, 1999. As if load balancing hasn’t evolved far beyond the very limited subset of capabilities exposed by Amazon’s API.

      That said, that’s just the view from the outside.

      Though Amazon’s ELB might be rudimentary in what it exposes to the public it is certainly anything but primitive in its use of SOA and as a prime example of the power of Infrastructure 2.0. In fact, with the exception of GoGrid’s integrated load balancing capabilities, provisioned and managed via a web-based interface, there aren’t many good, public examples of Infrastructure 2.0 in action. Not only has Amazon leveraged Infrastructure 2.0 concepts with its implementation but it has further taken advantage of SOA in the way it was meant to be used.

      NOTE: What follows is just my personal analysis, I don’t have any especial knowledge about what really lies beneath Amazon’s external interfaces. The diagram is a visual interpretation of what I’ve deduced seems likely in terms of the interactions with ELB given my experience with application delivery and the information available from Amazon and should be read with that in mind.

      Her detailed architectural analysis follows.

• Michael Cooney asks What kind of cloud computing project would you build with $32M? in this 10/14/2009 NetworkWorld article and attempts to answer it:

      The US Department of Energy said today it will spend $32 million on a project that will deploy a large cloud computing test bed with thousands of Intel Nehalem CPU cores and explore the work of commercial cloud offerings from Amazon, Microsoft and Google.

      Ultimately the project, known as Magellan, will look at cloud computing as a cost-effective and energy-efficient way for scientists to accelerate discoveries in a variety of disciplines, including analysis of scientific data sets in biology, climate change and physics, the DOE stated.

Magellan will explore whether cloud computing can help meet the overwhelming demand for scientific computing, the DOE stated. Although computation is an increasingly important tool for scientific discovery, and DOE operates some of the world’s most powerful supercomputers, not all research applications require such massive computing power. The number of scientists who would benefit from mid-range computing far exceeds the amount of available resources, the DOE stated.

      Half the amount will go to Lawrence Berkeley National Laboratory (LBNL, affectionately known to East Bay residents as “the Cyclotron”), which is about three air miles from my house. I’m waiting for the DOE to provide me with a 100 Gbps ESnet connection, but I’m not holding my breath.

      • Andre Yee summarizes Google’s 2009 Communications Intelligence Report in his Google Report Reveals SaaS Impact on Enterprise IT post to ebizQ of 10/14/2009:

      Last week, Google released its 2009 Communications Intelligence Report, covering the impact and current state of cloud applications/SaaS on IT. It was based on a survey of 1,125 IT decision makers and, perhaps predictably, it offered a positive commentary and outlook for cloud applications. That said, I think this report raised a number of interesting, sometimes surprising points that are worthy of note:

      • Companies adopt SaaS for a variety of reasons …
      • Cost isn't as significant a factor as you might think …
      • SaaS users are more satisfied with their solution than the non-SaaS users …
      • Security & compliance is a big issue for SMBs …
      • Top 3 cloud applications don’t include CRM …

      Andre expands on each of the “surprising points.”

      Lori MacVittie’s When Cloud Is Both the Wrong and the Right Solution post of 10/14/2009 analyzes the economics of running applications on Amazon Web Services (AWS) versus on-premises servers:

      Cloud offers an appealing “pay only for what you use” model that makes it hard to resist. Paying on a per-usage hour basis sounds like a good deal, until you realize that your site is pretty much “always on” because of bots, miscreants, and users. In other words, you’re paying for 24x7x365 usage, baby, and that’s going to add up. Ironically, the answer to this problem is … cloud.

      Don and I occasionally discuss how much longer we should actually run applications on our own hardware. After all, the applications we’re running are generally pretty light-weight, and only see real usage by “users” about 12-20 hours a week. Yeah, a week. Given that Amazon AWS pricing per hour runs at or below the cost of electricity per kWh necessary to power the hardware, it seems on the surface that it would be cheaper – even when including data transfer costs – to just pack up the servers and deploy those couple of applications “in the cloud.”

      Then I got curious. After all, if I was going to pay only when the system was in use, it behooved me to check first and ensure that usage really was as low as I thought. …

      Lori continues with a detailed analysis of the effect of per-instance-hour pricing versus finer-grained pricing intervals.
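
      Her “always on because of bots” point is easy to quantify. The following back-of-the-envelope Python sketch assumes roughly $0.10 per small-instance-hour (the published EC2 Linux rate at the time) and the 12-20 hours of real weekly usage Lori mentions; data transfer and storage charges are ignored.

      # Back-of-the-envelope comparison of always-on billing vs. true usage-based
      # billing, with assumed numbers (~$0.10/instance-hour, ~16 usage hours/week).
      RATE_PER_INSTANCE_HOUR = 0.10

      hours_per_month_always_on = 24 * 365 / 12     # ~730 billable hours
      hours_per_month_actual_use = 16 * 52 / 12     # midpoint of 12-20 hours/week

      always_on_cost = hours_per_month_always_on * RATE_PER_INSTANCE_HOUR
      usage_based_cost = hours_per_month_actual_use * RATE_PER_INSTANCE_HOUR

      print("Always-on (bots keep the instance billing 24x7): $%.2f/month" % always_on_cost)
      print("If you could pay only for real usage:            $%.2f/month" % usage_based_cost)

      At those assumed numbers the always-on bill is roughly ten times what true usage-based billing would cost, which is exactly the gap that finer-grained pricing intervals would close.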

      Liz MacMillan reports Non-Profit Group Slams Google in this 10/14/2009 post subtitled “Consumer Watchdog highlights Google hypocrisy in differing cloud computing statements:”

      Consumer Watchdog slammed Google for its apparent hypocrisy in marketing its new "cloud computing" products, blandly assuring customers that their data is secure on Google internet servers but at the same time warning shareholders of the security risks posed by swift expansion of its commercial online business. The nonpartisan, nonprofit group sent a letter to a Los Angeles City Councilman showing that Google says one thing when trying to sell its products, but something else in federally required filings aimed at shareholders. Consumer Watchdog also released another round of annotated Google P.R. documents in its Google "Charmwatch" campaign.

      Google's marketing practices were outlined in a letter to Bernard C. Parks, Chairman of Los Angeles City Council's Budget and Finance Committee, which is weighing a $7.25 million contract that would move the city's 30,000 email users to a "cloud computing" system provided by Google. The deal could be considered by the committee next week.

      "The difference in tone between Google's attempts to reassure potential users of its applications about security concerns and its explicit warnings of the applications' risks in communications aimed at shareholders as required by federal law smacks of hypocrisy," wrote John M. Simpson, consumer advocate, in the letter.

      As an example the letter cited a company document titled "Introduction to Google." The document claims, "Google goes to great lengths to protect the data and intellectual property on servers that host user data. These facilities are protected around the clock and we have a dedicated security operations team who focuses specifically on maintaining the security of our environment."

      But in the most recent form 10-Q filed with the Securities and Exchange Commission Google warns:

      "We may also need to expend significant resources to protect against security breaches. The risk that these types of events could seriously harm our business is likely to increase as we expand the number of web based products and services we offer as well as increase the number of countries where we operate...

      "These products and services are subject to attack by viruses, worms, and other malicious software programs, which could jeopardize the security of information stored in a user's computer or in our computer systems and networks.” ...

      Mary Hayes Weier’s The Drama That Didn't Happen At Oracle Open World article of 10/14/2009 for InformationWeek’s Plug into the Cloud blog begins:

      So Marc Benioff trod onto Larry Ellison's turf at Oracle Open World yesterday and acknowledged that sometimes, customers use both Oracle and Salesforce.com. Well, unless you're completely gullible to the attention-loving drama kings of the software industry, this shouldn't come as a surprise. Cloud and traditional computing will forever coexist in an IT architecture world that is far more grey than black and white, and both Larry and Marc know it.

      Yesterday, conference attendees packed into a room in excited anticipation of some trash talk, but what they heard was Benioff talking about Oracle and Salesforce.com's "fantastic relationship." That may be an exaggeration, but it's closer to the truth than all this staged warmongering between Salesforce.com and Oracle. For one thing, Ellison was an early investor in Salesforce.com (a reported $2 million) and is still a shareholder. While the story goes that Benioff kicked Ellison off his board a few years ago after a falling out, it's clear Benioff still has a lot of respect for his former boss. And the guy in the "no software" character suit walking around Salesforce.com's booth at Oracle Open World? Come on, that's just silly. (SpongeBob and Patrick characters would've been funny, though.) As Ellison has pointed out, what's running in Salesforce.com's data centers that run the Force.com cloud computing platform? A lot of Oracle databases.

      • Cloudbook describes the Open Cirrus Cloud Computing Testbed created by HP Labs, Intel Research and Yahoo Research in this undated post:

      Open Cirrus is an open cloud-computing research testbed designed to support research into design, provisioning, and management of services at a global, multi-datacenter scale.

      The open nature of the testbed is designed to encourage research into all aspects of service and datacenter management. In addition, they hope to foster a collaborative community around the testbed, providing ways to share tools, lessons and best practices, and ways to benchmark and compare alternative approaches to service management at datacenter scale.

      There are a number of important and useful testbeds, such as PlanetLab, EmuLab, IBM/Google cluster, and Amazon EC2/S3, that enable researchers to study different aspects of distributed computing. However, no single testbed supports research spanning systems, applications, services, open-source development, and datacenters. Towards this end, we have developed Open Cirrus, a cloud computing testbed for the research community that federates heterogeneous distributed data centers. Open Cirrus offers a cloud stack consisting of physical and virtual machines, and global services such as sign-on, monitoring, storage, and job submission. By developing the testbed and making it available to the research community, we hope to help spur innovation in cloud computing and catalyze the development of an open source stack for the cloud.

      The Open Cirrus testbed is a collection of federated datacenters for open-source systems and services research. The testbed is composed of nine sites in North America, Europe, and Asia. Each site consists of a cluster with at least 1000 cores and associated storage. Authorized users can access any Open Cirrus site using the same login credentials. …

      Cloudbook’s post includes links to a paper, Open Cirrus™ Cloud Computing Testbed, published in June 2009, and a video, Hadoop Scheduling in the Open Cirrus Cloud Testbed, from July 2009.

      <Return to section navigation list> 
