Thursday, April 29, 2010

Windows Azure and Cloud Computing Posts for 4/29/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.
 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the sections below.
To use the section links, first click the post’s title to display the single article you want to navigate.
Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)
Read the detailed TOC here (PDF) and download the sample code here.
Discuss the book on its WROX P2P Forum.
See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.
Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.
You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:
  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”
HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in April 2010 for the January 4, 2010 commercial release. 

Azure Blob, Table and Queue Services

Steve Marx shows you how to Update Your Windows Azure Website in Just Seconds by Syncing with Blob Storage in this 4/29/2010 post:
One of the coolest uses I’ve found for my Windows Azure Hosted Web Core Worker Role is to sync my website with blob storage, letting me change files at will and immediately see the results in the cloud. You can grab the code over on Code Gallery, or a prebuilt package that you can deploy right away.
Here’s a [link] to a 30-second video showing it in action.
Steve goes on to show how the worker role syncs his Website.
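The full implementation is in Steve’s Code Gallery project; the core idea, though, is a loop that compares the blobs in a content container with the files under the web root and downloads anything new or changed. Here is a minimal sketch of that pattern using the StorageClient library of that era; the container handling, target path, and change check are illustrative assumptions, not Steve’s code:

// Minimal sketch (not Steve's implementation): mirror a blob container to a local
// folder so content changes show up without redeploying the role.
using System.IO;
using Microsoft.WindowsAzure.StorageClient;

public static class BlobSync
{
    public static void SyncOnce(CloudBlobContainer container, string localRoot)
    {
        foreach (IListBlobItem item in container.ListBlobs(
            new BlobRequestOptions { UseFlatBlobListing = true }))
        {
            CloudBlob blob = container.GetBlobReference(item.Uri.AbsoluteUri);
            blob.FetchAttributes();

            string localPath = Path.Combine(localRoot,
                blob.Name.Replace('/', Path.DirectorySeparatorChar));

            // Download only when the local copy is missing or older than the blob.
            if (!File.Exists(localPath) ||
                File.GetLastWriteTimeUtc(localPath) < blob.Properties.LastModifiedUtc)
            {
                Directory.CreateDirectory(Path.GetDirectoryName(localPath));
                blob.DownloadToFile(localPath);
            }
        }
    }
}

A worker or web role would call a method like this from its run loop every few seconds, which is what makes the “update in seconds” experience possible.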
Lionel Robinson’s Accessing BLOB Data from External Systems Using Business Connectivity Services in SharePoint Server 2010 post of 4/27/2010 links to an eponymous whitepaper:
Tajeshwar Singh wrote a white paper which shows you how to use Microsoft Business Connectivity Services (BCS) in Microsoft SharePoint Server 2010 to access and surface BLOB data in the SharePoint user interface and search. Check out the overview below taken from the paper.
Link to document: Accessing BLOB Data from External Systems Using Business Connectivity Services in SharePoint Server 2010
Overview of the white paper
Microsoft Business Connectivity Services (BCS) is the new version of Microsoft Office SharePoint Server 2007 Business Data Catalog functionality. New features are added that help retrieve binary large object data (referred to as BLOB data) from external systems and make it available in Microsoft SharePoint Server 2010. This article describes the following:
  • The functionality that is provided by the StreamAccessor stereotype that is introduced in Business Connectivity Services.
  • How to use StreamAccessor to retrieve file attachments from external systems for viewing and indexing.
  • How to write the BDC model that is required to consume BLOB data.
  • The built-in Web Parts behavior for BLOB data, and how BLOB fields can be indexed by SharePoint Server search.
In this article's scenario, the AdventureWorks database that is hosted in Microsoft SQL Server 2008 is used as an external system that contains the binary data. The BDC metadata model is created with a StreamAccessorMethodInstance to retrieve the BLOB field of type varbinary from SQL Server as an external content type. The BLOB fields are modeled as types that can be read in chunks to help Business Connectivity Services read the stream in chunks, and not load the complete content in memory. This can help prevent out-of-memory conditions. An example of such a type is System.IO.Stream in the Microsoft .NET Framework. An External Data Grid Web Part is configured to show the external items with links to download the BLOB. Finally, Search is configured to crawl the BLOBs and show the results in the SharePoint Server search user interface (UI).
There’s no mention of Azure blobs or SQL Azure varbinary data that I can find in the white paper, so I asked both Lionel and Tajeshwar about the applicability.
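For readers who want to see what “reading the stream in chunks” means at the data-access level, here is a hedged ADO.NET sketch of streaming a varbinary(max) column rather than buffering it in memory; the connection string, table, and column names are illustrative assumptions, not the paper’s BDC model code:

// Sketch only: reading a varbinary(max) column as a chunked stream, the same idea
// BCS relies on to avoid loading an entire BLOB into memory.
using System.Data;
using System.Data.SqlClient;
using System.IO;

public static class BlobReader
{
    public static void CopyBlobToStream(string connectionString, int documentId, Stream output)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT FileData FROM Production.Document WHERE DocumentID = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", documentId);
            conn.Open();

            // SequentialAccess streams the column instead of buffering the whole row.
            using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                if (!reader.Read()) return;

                var buffer = new byte[8192];
                long offset = 0;
                long bytesRead;
                while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, (int)bytesRead);
                    offset += bytesRead;
                }
            }
        }
    }
}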
Eugenio Pace comments on Continuation Tokens in Windows Azure Tables – Back and Previous paging in this 4/29/2010 post:
Scott [Densmore] has published the results of his “Continuation Token” spike, which is a critical aspect of dealing with queries against Windows Azure table storage. His findings will make it to the guide, but you can read the essentials here.
The unusual thing that you’ll see in his article is that it shows a way of dealing with forward and backward paging. The trick is storing the Continuation Token you get from Windows Azure in a stack (in session) and then using that to retrieve the right page of data. This is possible because the Continuation Token is serializable and you can persist it somewhere for later use.
There are some interesting implementation details I’d suggest you look at if you have to deal with pagination. 
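Scott’s post has the real code; the pattern, roughly, looks like the following sketch against the StorageClient segmented-query API. The entity constraint, stack handling, and edge cases here are my simplifications, not his implementation:

// Rough sketch of the pattern (not Scott Densmore's code): page forward through a
// table query with continuation tokens, keeping each token on a stack so "previous"
// can pop back to an earlier page. The page size comes from a Take() on the
// underlying query; end-of-data and first-page edge cases are ignored here.
using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.StorageClient;

public class TablePager<T> where T : TableServiceEntity
{
    private readonly Stack<ResultContinuation> _tokens = new Stack<ResultContinuation>();

    public IEnumerable<T> NextPage(CloudTableQuery<T> query)
    {
        // The token on top of the stack marks where the next page starts (null = first page).
        ResultContinuation current = _tokens.Count > 0 ? _tokens.Peek() : null;

        IAsyncResult ar = query.BeginExecuteSegmented(current, null, null);
        ResultSegment<T> segment = query.EndExecuteSegmented(ar);

        // ResultContinuation is serializable, so it can be stashed in session state
        // between requests, as Scott's post describes.
        _tokens.Push(segment.ContinuationToken);
        return segment.Results;
    }

    public void Previous()
    {
        // Discard the token for the upcoming page and the page just shown, so the
        // next NextPage call re-runs the prior page.
        if (_tokens.Count > 0) _tokens.Pop();
        if (_tokens.Count > 0) _tokens.Pop();
    }
}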
<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

David Robinson starts a new series about SQL Azure basics with his Clustered Indexes and SQL Azure post of 4/29/2010:
We are going to start a new series of posts focusing on the basics of SQL Azure and build on top of these to give you more detailed information about building and migrating applications to SQL Azure.
Unlike SQL Server, every table in SQL Azure needs to have a clustered index. A clustered index is usually created on the primary key column of the table. Clustered indexes sort and store the data rows in the table based on their key values (columns in the index). There can only be one clustered index per table, because the data rows themselves can only be sorted in one order.
A simple table with a clustered index can be created like this:
CREATE TABLE Source (Id int NOT NULL IDENTITY, [Name] nvarchar(max), 
CONSTRAINT [PK_Source] PRIMARY KEY CLUSTERED 
(
      [Id] ASC
))
SQL Azure allows you to create tables without a clustered index; however, when you try to add rows to that table it throws this error:
Msg 40054, Level 16, State 1, Line 2
Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again.
SQL Azure does not allow heap tables – a heap table, by definition, is a table without a clustered index. You can read more about SQL Server indexes in this article on MSDN.
Temporary Tables
That is the rule for all permanent tables in your database; however, this is not the case for temporary tables.
You can create a temporary table in SQL Azure just as you do in SQL Server. Here is an example:
CREATE TABLE #Destination (Id int NOT NULL, [Name] nvarchar(max))
-- Do Something
DROP TABLE #Destination
Dave was a technical reviewer for my Cloud Computing with the Windows Azure Platform book.
Marcello Lopez Ruiz explains Layering XML readers for OData in this 4/28/2010 post:
If you've spent any time looking at the new Open Data Protocol Client Libraries on CodePlex, you may have run into the internal XmlWrappingReader class. I'll look into why this was a useful thing to have and what important OData processing aspect it helps with in the future, but for today I want to touch a bit on how and why the layering works.
XmlReader is a great class to wrap with, well, another XmlReader. This seems like a pretty obvious statement, but there are ways of designing APIs that make this easier, and others that make this much harder.
In design-pattern speak, the wrapper we are talking about is more like a decorator that simply forwards all calls to another XmlReader instance. One of the things that makes this straightforward is the relatively "flat" nature of the API. When building a wrapper object such as this one, you'll find that complex object models make the "illusion" of speaking with the original XmlReader harder. …
Thankfully, XmlReader is a simple API that can be easily wrapped, which allows us to overlay behaviors ("decorate it" in design-pattern-speak), and next time we'll see how the Open Data Protocol Client Library puts that to good use.
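To make the decorator idea concrete, here is a stripped-down forwarding reader. It is not the CodePlex XmlWrappingReader itself, just an illustration of how every abstract XmlReader member gets forwarded to an inner instance so a subclass only overrides the behavior it wants to change:

// Stripped-down illustration of the decorator idea (not the actual CodePlex
// XmlWrappingReader): forward every member to an inner XmlReader so a derived
// class only overrides what it needs to change.
using System.Xml;

public class ForwardingXmlReader : XmlReader
{
    protected readonly XmlReader Inner;
    public ForwardingXmlReader(XmlReader inner) { Inner = inner; }

    public override XmlNodeType NodeType { get { return Inner.NodeType; } }
    public override string LocalName { get { return Inner.LocalName; } }
    public override string NamespaceURI { get { return Inner.NamespaceURI; } }
    public override string Prefix { get { return Inner.Prefix; } }
    public override string Value { get { return Inner.Value; } }
    public override int Depth { get { return Inner.Depth; } }
    public override string BaseURI { get { return Inner.BaseURI; } }
    public override bool IsEmptyElement { get { return Inner.IsEmptyElement; } }
    public override int AttributeCount { get { return Inner.AttributeCount; } }
    public override bool EOF { get { return Inner.EOF; } }
    public override ReadState ReadState { get { return Inner.ReadState; } }
    public override XmlNameTable NameTable { get { return Inner.NameTable; } }
    public override bool HasValue { get { return Inner.HasValue; } }

    public override string GetAttribute(string name) { return Inner.GetAttribute(name); }
    public override string GetAttribute(string name, string ns) { return Inner.GetAttribute(name, ns); }
    public override string GetAttribute(int i) { return Inner.GetAttribute(i); }
    public override bool MoveToAttribute(string name) { return Inner.MoveToAttribute(name); }
    public override bool MoveToAttribute(string name, string ns) { return Inner.MoveToAttribute(name, ns); }
    public override bool MoveToFirstAttribute() { return Inner.MoveToFirstAttribute(); }
    public override bool MoveToNextAttribute() { return Inner.MoveToNextAttribute(); }
    public override bool MoveToElement() { return Inner.MoveToElement(); }
    public override bool ReadAttributeValue() { return Inner.ReadAttributeValue(); }
    public override bool Read() { return Inner.Read(); }
    public override string LookupNamespace(string prefix) { return Inner.LookupNamespace(prefix); }
    public override void ResolveEntity() { Inner.ResolveEntity(); }
    public override void Close() { Inner.Close(); }
}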
Stephen O’Grady’s Cassandra and The Enterprise Tension post of 4/28/2010 analyzes forthcoming commercial support for the Cassandra database and the enterprise-versus-web tension facing commercialized “NoSQL stores”:
It was really just a matter of time until someone began offering commercial support for Cassandra, so it was with no surprise that I read that Jonathan Ellis and co. had spun out of Rackspace and spun up Riptano. Given the market opportunities in front of NoSQL generally (coverage), and the interest in Cassandra specifically, this was not just logical but inevitable. What I am fascinated to watch with Riptano, however, as with many of the other commercialized or soon-to-be-commercialized NoSQL stores, is how they manage the enterprise vs web tension.
It’s no secret that web native firms and traditional enterprises are different entities with differing workloads. There is overlap, to be sure, but if the needs of enterprises were aligned with web native firms, we probably wouldn’t have projects like Cassandra in the first place: the relational databases would have long since been adapted to the challenges of multi-node, partitioned systems demanding schema flexibility. But they weren’t, and so we do.
As we’ve documented here before, there is perhaps no bigger embodiment of this tension than MySQL. Originally the default choice of firms built on the web, it gravitated further towards the more traditional enterprise relational database market over the years in search of revenues. In went features such as stored procedures and triggers, and up went the complexity of the codebase. By virtually any metric, the MySQL approach was wildly successful. It remains the most popular database on the planet, but it was versatile enough in web and enterprise markets to command a billion dollar valuation.
It is virtually certain that Riptano and every other NoSQL store will, at some point, face a similar fork in the road. Prioritize web workloads and features, or cater to the needs of enterprises who will be looking for things that users like Facebook and Twitter not only might not benefit from, but actively might not want. While the stated intention of Ellis is to not fork Cassandra, then, I’m curious as to whether or not there will come a time when it will become necessary.
Enterprise users, at some point, will undoubtedly want something added to Cassandra that makes it less attractive to, say, Facebook. At that point Riptano has a choice: add it – they’ll have commit rights, obviously – and trust that the fallout from the unwanted (by one group) feature will be minimal; decline to add it; or maintain a fork. With the latter less logistically expensive these days, perhaps that will become more viable an approach – even in commercial distributions – over time. Assuming the feature is added, Facebook, Twitter et al then have a similar choice: use the project, evolving in a direction inconvenient to them though it may be, fork it, or replace it.
Either way, it will be interesting to watch the developmental tensions play out. Kind of makes you curious as to what Drizzle versions of Cassandra might look like.
MSDN’s SQL Azure Overview topic appears to have been updated recently:
Microsoft SQL Azure Database is a cloud-based relational database service that is built on SQL Server technologies and runs in Microsoft data centers on hardware that is owned, hosted, and maintained by Microsoft. This topic provides an overview of SQL Azure and describes some ways in which it is different from SQL Server.
<Return to section navigation list>

AppFabric: Access Control and Service Bus

No significant articles today.
<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Bruce Kyle suggests that you Add a Windows Azure CloudPoll to Your Facebook Page in this 4/29/2010 post to the US ISV Developer blog:
The CloudPoll Facebook application has launched and is hosted on Windows Azure. CloudPoll is available for free to all Facebook users to create, manage, and evaluate polls for their pages, all using blob storage, SQL Azure, and compute hosted in Windows Azure.
What is CloudPoll?
CloudPoll is live, hosted on Windows Azure, and ready for everyone on Facebook to create funny, serious, strange, informative, silly, and cool polls. Just follow these three simple steps:
1. When signed into Facebook go to http://apps.facebook.com/cloudpoll/Home/Public
2. Click on Create Poll in the top right-hand corner
3. All that’s left to do is to create your poll:
a. Type the question
b. Enter answers to your poll
c. Upload a picture
d. Decide if you want to post to your wall or another page (if it is another page, such as a fan page, then you have to first add the page via the “Add Pages” link in order to be able to select it).
e. Make the poll public, or only visible to your friends
f. Click Create Poll, you are all done
Source Code Included in Windows Azure Toolkit for Facebook
You won’t need source code to use CloudPoll, but if you want to customize the application, the source is available on Codeplex in the Windows Azure Toolkit for Facebook. It is built by Thuzi in collaboration with Microsoft to incorporate best practices, enabling the rapid development of flexible, scalable, and architecturally sound Facebook applications. In addition to the framework, you can download the Simple-Poll sample application that shows how to connect your Facebook application to Windows Azure, blob storage, and SQL Azure.
Check out Thuzi.com CTO Jim Zimmerman’s session, Facebook Apps in Azure, at Mix10 last week where he showcased live Facebook applications for Outback Steakhouse and CloudPoll that were both built using the Windows Azure Toolkit for Facebook.
Download Windows Azure Toolkit for Facebook from Codeplex.
For more information, see Gunther Lenz’s posting Windows Azure Toolkit for Facebook.
Here’s my first CloudPoll:
[Screenshot of the author’s first CloudPoll poll]
The empty choice is to test empty string/null (Don’t know) responses, which work.
Alex Williams reports Frustrated, Three Banks Form Alliance To Forge Ahead Into Cloud Computing in this 4/29/2010 post to the ReadWriteCloud blog:
Frustrated with high maintenance costs, three of the world's largest banks are forming a technology buying alliance to forge ahead into cloud computing.
According to Jeanne Capachin of IDC Insights, Bank of America, Commonwealth Bank of Australia, and Deutsche Bank are forming a technology buying alliance which they see as a way to reduce their infrastructure costs and forge ahead into cloud computing.
Driving the issue are the high maintenance costs that they are charged by technology companies.
The banks believe that by joining together they can "force a change in procurement practices and move to more shared or even open source solutions when they make sense."
The banks believe that traditional technology suppliers are not embracing cloud computing. Instead, they continue to show dependence on decades old revenue structures.
Another driving force in the alliance is the financial crisis and the resulting reduction in revenues.
According to Capachin: "Embracing an increased off-the-shelf approach is a necessary prerequisite for these banks and it sounds like, at least in theory, they are ready."
She says banks have in the past ignored off the shelf services and instead have built their own custom software solutions.
But things are bad enough now that it looks like they are ready to move forward.
The winners? SaaS providers. The losers? The technology giants.
This is one of those events that could have major ramifications. If the banks really are wising up then it should mean considerable disruption in the technology world and the real emergence of cloud computing as the force that will drive innovation and change for many years ahead in the financial services world.
Jim Wooley has started a series of posts about .NET 4’s Reactive Framework (Rx) and LINQ.

Processing data streams (called Complex Event Processing or CEP) is likely to be a popular application for cloud computing and, if data ingress costs aren’t excessive, Windows Azure. Future Rx articles will appear in this Live Windows Azure Apps, APIs, Tools and Test Harnesses section.
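For readers new to Rx, here is a token example of the push-based, LINQ-style composition Jim’s series covers. The namespace and packaging have shifted across Rx releases, so treat this as a sketch against a current Rx build rather than the 2010 preview bits:

// Tiny Rx sketch: treat a stream of events as an IObservable and query it with
// LINQ-style operators. The System.Reactive namespace assumes a current Rx
// package; the early 2010 preview placed Observable elsewhere.
using System;
using System.Reactive.Linq;

class RxSketch
{
    static void Main()
    {
        IObservable<long> ticks = Observable.Interval(TimeSpan.FromSeconds(1));

        // Filter and project the event stream, then react to each element as it arrives.
        using (ticks.Where(t => t % 2 == 0)
                    .Select(t => string.Format("even tick {0}", t))
                    .Subscribe(Console.WriteLine))
        {
            Console.ReadLine(); // keep the subscription alive until Enter is pressed
        }
    }
}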
The Windows Azure team appears to have started the Windows Azure <stories> site as a stealth location for developers to describe commercial applications that use Windows Azure compute, storage, or both.
As of 4/29/2010, there were only two real entries. The remainder were test entries with meaningless content.
MSDN appears to have updated the Windows Azure Troubleshooting Guide’s General Troubleshooting topic:
Following are some tips for troubleshooting during the service development process.
Consider Whether Your Role Requires Admin Privileges
If your role runs in the development environment but is not behaving as expected once published to Windows Azure, the issue may be that your role depends on admin privileges that are present on the desktop but not in the cloud. In the development environment, a role runs with administrative privileges, but in the cloud it runs under a restricted Windows service account.
Run Without the Job Object
A role running in Windows Azure or in the development environment runs in a job object. Job objects cannot be nested, so if your code is running in a job object, it may fail. Try running your code without the job object if it is failing for unknown reasons.
Consider Network Policy Restrictions
In previous versions of Windows Azure, network policy restrictions were enforced by Code Access Security under Windows Azure partial trust. In this environment, a violation of network policy results in a security exception.
Windows Azure full trust now enforces network policy restrictions via firewall rules. Under full trust, a violation of network policy causes the connection to fail without an error. If your connection is failing, consider that your code may not have access to a network resource.
Configure the Default Application Page
A web role is automatically configured to point to a page named Default.aspx as the default application page. If your application uses a different default application page, you must explicitly configure the web role project to point to this page in the web.config file.
Define an Internal Endpoint on a Role to Discover Instances
The Windows Azure Managed Library defines a Role class that represents a role. The Instances property of a Role object returns a collection of RoleInstance objects, each representing an instance of the role. The collection returned by the Instances property always contains the current instance. Other role instances will be available via this collection only if the role has defined an internal endpoint, as this internal endpoint is required for the role's instances to be discoverable. For more information, see the Service Definition Schema.
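In code, once an internal endpoint is declared in the service definition, enumerating a role’s instances looks roughly like the sketch below; the role and endpoint names are placeholders that would have to match your .csdef:

// Sketch: enumerating a role's instances and their internal endpoints with the
// Windows Azure Managed Library. "WorkerRole" and "InternalEndpoint1" are
// placeholder names that must match the service definition.
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class InstanceDiscovery
{
    public static void ListWorkerInstances()
    {
        Role worker = RoleEnvironment.Roles["WorkerRole"];

        // Instances contains more than the current instance only when the role
        // defines an internal endpoint.
        foreach (RoleInstance instance in worker.Instances)
        {
            RoleInstanceEndpoint endpoint = instance.InstanceEndpoints["InternalEndpoint1"];
            Console.WriteLine("{0} -> {1}", instance.Id, endpoint.IPEndpoint);
        }
    }
}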
Re-create ADO.NET Context Object After Unexpected Internal Storage Client Error
If you are using the Windows Azure Storage Client Library to work with the Table service, your service may throw a StorageClientException with the error message "Unexpected Internal Storage Client Error" and the status code HttpStatusCode.Unused. If this error occurs, you must re-create your TableServiceContext object. This error can happen when you are calling one of the following methods:
  • TableServiceContext.BeginSaveChangesWithRetries
  • TableServiceContext.SaveChangesWithRetries
  • CloudTableQuery.Execute
  • CloudTableQuery.BeginExecuteSegmented
If you continue to use the same TableServiceContext object, unpredictable behavior may result, including possible data corruption. You may wish to track the association between a given TableServiceContext object and any query executed against it, as that information is not provided automatically by the Storage Client Library.
This error is due to a bug in the ADO.NET Client Library version 1.0. The bug will be fixed in version 1.5. (Emphasis added.)
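In practice, the workaround amounts to catching the exception and constructing a fresh context from the CloudTableClient instead of reusing the old one. A hedged sketch, with the retry and entity re-attach details left out:

// Sketch of the suggested workaround: if SaveChangesWithRetries throws the
// "Unexpected Internal Storage Client Error", discard the TableServiceContext and
// create a new one rather than reusing the old (possibly corrupted) context.
using System.Net;
using Microsoft.WindowsAzure.StorageClient;

public static class ContextRecovery
{
    public static TableServiceContext SaveWithRecovery(
        CloudTableClient tableClient, TableServiceContext context)
    {
        try
        {
            context.SaveChangesWithRetries();
            return context;
        }
        catch (StorageClientException ex)
        {
            if (ex.StatusCode != HttpStatusCode.Unused) throw;

            // Any unsaved entities would need to be re-attached to the new context.
            return tableClient.GetDataServiceContext();
        }
    }
}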
<Return to section navigation list> 

Windows Azure Infrastructure

Lori MacVittie asks Is PaaS Just Outsourced Application Server Platforms? in this 4/29/2010 post from the Interop 2010 conference:
There’s a growing focus on PaaS (Platform as a Service), particularly as Microsoft has been rolling out Azure and VMware continues to push forward with its SpringSource acquisition. Amazon, though generally labeled as IaaS (Infrastructure as a Service) is also a “player” with its SimpleDB and SQS (Simple Queue Service) and more recently, its SNS (Simple Notification Service). But there’s also Force.com, the SaaS (Software as a Service) giant Salesforce.com’s incarnation of a “platform” as well as Google’s App Engine. As is the case with “cloud” in general, the definition of PaaS is varied and depends entirely on to whom you’re speaking at the moment.
What’s interesting about SpringSource and Azure and many other PaaS offerings is that as far as the customer is concerned they’re very much like an application server platform. The biggest difference being, of course, that the customer need not concern themselves with the underlying management and scalability. The application, however, is still the customer’s problem.
That’s not that dissimilar from what enterprise-class organizations build out in their own data centers using traditional application server platforms like .NET and JavaEE. The application server platform is, well, a platform, in which multiple applications are deployed in their own cozy little isolated containers. You might even recall that JavaEE containers are called, yeah, “virtual machines.” And even though Force.com and Google App Engine are proprietary platforms (and generally unavailable for deployment elsewhere) they still bear many of the characteristic marks of an application server platform.
SO WHAT’S the DIFFERENCE?

In the middle of a discussion on PaaS at Interop I asked a simple question: explain the difference between PaaS and application server platforms. Of course Twitter being what it is, the answer had to be less than 140 characters. Yes, the “Twitter pitch” will one day be as common (or more so) as the “elevator pitch.”
The answers came streaming in and, like asking for a definition of cloud computing, there was one very clear consensus: PaaS and application server platforms are not the same thing, though the latter may be a component of the former. Unlike asking for a definition of cloud computing no one threw any rotten vegetables in my general direction though I apparently risked being guillotined for doing so. Here’s a compilation of the varied answers to the question with some common themes called out.
Lori continues with a compilation of answers from her Connecting On-Premise and On-Demand with Hybrid Clouds co-panelists.
Read Lori’s At Interop You Can Find Out How Five 'Ates' Can Net You Three 'Ables' article:
The biggest disadvantage organizations have when embarking on a “we’re going cloud” initiative is that they’re already saddled with an existing infrastructure and legacy applications. That’s no surprise as it’s almost always true that longer-lived enterprises are bound to have some “legacy” applications and infrastructure sitting around that’s still running just fine (and is a source of pride for many administrators – it’s no small feat to still have a Novell file server running, after all). Applications themselves are almost certainly bound to rely on some of that “legacy” infrastructure and integration and let’s not even discuss the complex web of integration that binds applications together across time and servers.
The “ates” are:
    1. SEPARATE test and development
    2. CONSOLIDATE servers
    3. AGGREGATE capacity on demand
    4. AUTOMATE operational processes
    5. LIBERATE the data center with a cloud computing model
and the “ables” are (as expected):
    1. scalable
    2. reliable
    3. available
Brian Madden prefaces his What the Windows desktop will look like in 2015: Brian's vision of the future post of 4/29/2010 with “In this article I look at what a corporate Windows desktop will look like in five years. (Hint: it's still Windows-based and the "cloud" doesn't impact it in the way you might think it would.)”:
Our future includes Windows and Windows apps
Let's be perfectly clear about one thing right off the bat. The future of desktops is the future of Windows. You can talk all you want about Mac and Linux and Rich Internet Apps and The Cloud and Java and Web 3.0, but the reality is that from the corporate standpoint, we live in a world of Windows apps. The applications are what drive business, and as long as those apps are Windows apps then we're going to have to deal with Windows desktops.
Even though those new technologies might be better in every way, there's a lot of momentum of Windows apps we need to overturn to move away from Windows. I mean just think about how many 16-bit Windows apps are out there now even though 32-bit Windows has been out for fifteen years. Heck, look at how many terminal apps are still out there! I like to joke that if we ever have a nuclear holocaust, the only things that will survive will be cockroaches, Twinkies, and Windows apps.
That said, I love the concept of a world of apps that aren't Windows apps (and therefore I love the concept of a world without Windows.) I love the rich Internet app concept. I love how Apple is shaking things up. I think VMware's purchase of SpringSource was pure brilliance and I can't wait to see that app platform and Azure duke it out.
But in the meantime we're dealing with Windows apps. And Windows apps require a Windows OS. (We tried without.) The unfortunate reality is that the Windows OS was designed to be a single monolithic brick that is installed and run on a single computer for a single user with a single set of apps.
Brian continues with detailed analyses of:
  • The Windows desktop of 2015: Assumptions
  • How these layers will work in 2015
  • What this gives us
  • Who will deliver this Desktop in 2015?
We will.
(P.S. I started trying to create a visual representation of this. Here's my work-in-progress. We'll discuss more at BriForum.)
James Urquhart comments on James Hamilton on cloud economies of scale in this 4/28/2010 essay posted to CNet News’ The Wisdom of Clouds blog:
While it is often cited that cloud computing will change the economics of IT operations, it is rare to find definitive sources of information about the subject. However, the influence of economies of scale on the cost and quality of computing infrastructure is a critical reason why cloud computing promises to be so disruptive.
James Hamilton, a vice president and distinguished engineer at Amazon.com, and one of the true gurus of large-scale data center practices, recently gave a presentation at Mix 10 that may be one of the most informative--and influential--overviews of data center economies of scale to date. Hamilton was clearly making a case for the advantages that public cloud providers such as Amazon have over enterprise data centers, when it comes to cost of operations.
However, as he made his case, he presented a wide variety of observations about everything from data center design to how human resource costs differ between the enterprise and a service provider. …
Graphics credit: James Hamilton
Urquhart continues with highlights of Hamilton’s presentation.
Check out James Hamilton’s Facebook Flashcache post of 4/29/2010:
Facebook released Flashcache yesterday: Releasing Flashcache. The authors of Flashcache, Paul Saab and Mohan Srinivasan, describe it as “a simple write back persistent block cache designed to accelerate reads and writes from slower rotational media by caching data in SSD's.”
There are commercial variants of flash-based write caches available as well. For example, LSI has a caching controller that operates at the logical volume layer. See LSI and Seagate take on Fusion-IO with Flash. The way these systems work is, for a given logical volume, page access rates are tracked. Hot pages are stored on SSD while cold pages reside back on spinning media. The cache is write-back and pages are written back to their disk resident locations in the background.
For benchmark workloads with evenly distributed, 100% random access patterns, these solutions don’t contribute all that much. Fortunately, the world is full of data access pattern skew and some portions of the data are typically very cold while others are red hot. 100% even distributions really only show up in benchmarks – most workloads have some access pattern skew. And, for those with skew, a flash cache can substantially reduce disk I/O rates at lower cost than adding more memory.
What’s interesting about the Facebook contribution is that it’s open source and supports Linux. From: http://github.com/facebook/flashcache/blob/master/doc/flashcache-doc.txt:
Flashcache is a write back block cache Linux kernel module. […] Flashcache is built using the Linux Device Mapper (DM), part of the Linux Storage Stack infrastructure that facilitates building SW-RAID and other components. LVM, for example, is built using the DM.
The cache is structured as a set associative hash, where the cache is divided up into a number of fixed size sets (buckets) with linear probing within a set to find blocks. The set associative hash has a number of advantages (called out in sections below) and works very well in practice.
The block size, set size and cache size are configurable parameters, specified at cache creation. The default set size is 512 (blocks) and there is little reason to change this.
More information on usage: http://github.com/facebook/flashcache/blob/master/doc/flashcache-sa-guide.txt.  Thanks to Grant McAlister for pointing me to the Facebook release of Flashcache. Nice work Paul and Mohan.
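To make the “set associative hash with linear probing” description above concrete, here is a toy lookup routine; this is a conceptual C# sketch only, not the kernel module’s C code, and aside from the 512-block set size taken from the description everything is illustrative:

// Toy illustration of the lookup scheme Flashcache describes: hash a disk block
// number to one of a fixed number of sets, then linearly probe the slots in that
// set. Conceptual sketch only; not the kernel module's implementation.
public class SetAssociativeCache
{
    private const int SetSize = 512;        // blocks per set (the Flashcache default)
    private readonly long[] _blockNumbers;  // disk block cached in each slot (-1 = empty)
    private readonly int _numSets;

    public SetAssociativeCache(int totalBlocks)
    {
        _numSets = totalBlocks / SetSize;
        _blockNumbers = new long[_numSets * SetSize];
        for (int i = 0; i < _blockNumbers.Length; i++) _blockNumbers[i] = -1;
    }

    // Returns the slot index holding the block, or -1 on a cache miss.
    public int Lookup(long diskBlockNumber)
    {
        int set = (int)(diskBlockNumber % _numSets); // pick the set by hashing the block number
        int start = set * SetSize;
        for (int i = 0; i < SetSize; i++)            // linear probe within the set only
        {
            if (_blockNumbers[start + i] == diskBlockNumber) return start + i;
        }
        return -1;
    }
}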
<Return to section navigation list> 

Cloud Security and Governance

F5 Networks, Inc. prefaced its F5 Extends Application and Data Security to the Cloud press release of 4/28/2010 with “New BIG-IP release augments F5’s integrated security services with improved attack protection, simplified access control, and optimized performance”:
F5 Networks, Inc. … today announced enhanced BIG-IP solution capabilities delivering new security services for applications deployed in the cloud. Application security provided by F5 solutions, including BIG-IP® Local Traffic Manager™, BIG-IP Edge Gateway™, and BIG-IP Application Security Manager™, ensures that enterprise applications and data are safe even when deployed in the cloud. The new BIG-IP Version 10.2 software enhances F5’s security offerings, enabling customers to lower infrastructure costs, optimize application access, and secure applications in the enterprise and the cloud.
Details
Cloud environments can be key resources for IT teams leveraging enhanced scalability and lower expenses, but there are security concerns when it comes to moving organizations’ sensitive applications and data into the cloud. With BIG-IP solutions, customers can assign policy-based access permissions based on user, location, device, and other variables. This enables organizations to extend context-aware access to corporate materials while keeping their most valuable assets secure whether the data stored is in the data center, internal cloud, or external cloud. Additional details on how F5 helps organizations securely extend enterprise data center architecture to the cloud can be found in a separate F5 announcement issued earlier this week.
The BIG-IP v10.2 release introduces new security functionality throughout the BIG-IP product family. By unifying application delivery, security, optimization, and access control on a single platform, security capabilities can be extended across data center environments and in the cloud. F5 security solutions provide comprehensive application security, including packet filtering, port lockdown, attack protection, network/administrative isolation, protocol validation, dynamic rate limiting, SSL termination, access policy management, and much more.
The press release continues with additional details about application delivery and security.
<Return to section navigation list>

Cloud Computing Events

Forrester’s IT Forum 2010 titled “The Business Technology Transformation: Making It Real” will take place on 5/26 through 5/28/2010 at The Palazzo hotel in Las Vegas, Nevada:
Transformation: one of the more overused terms in the short history of IT. But as we work our way through the worst economic slump in recent memory, transformation takes on a real meaning. Think of the changes leaders face: customers and employees, demographically and geographically dispersed, creating new ways of doing business; myriad cloud-based or other lighter, fit-to-purpose delivery models available; and of course, fiscal and regulatory pressures not seen in decades.
These trends and others are combining to make technology more ubiquitous and core to all domains of the business, not just the cow paths of old. Thus, leaders outside IT need to engage more inside IT. And as the clouds of recession begin to lift, these stakeholders want to move faster. This time, the need for transformation is real.
But it’s also a time for some soul searching in IT, a time to learn from past lessons and create a better way. Forrester calls this new mandate business technology (BT), where every business activity is enabled by technology and every technology decision hinges on a business need. But like you, we’re asked to make BT more than a name change, to bring the concept down from the ether and into reality.
At this Event, we’ll help each of the roles we serve lead the shift from IT to BT, but we’ll do so in pragmatic, no-nonsense terms. We’ll break the transformation into five interrelated efforts:

  • Connect people more fluidly to drive innovation. You serve a more socially oriented, device-enabled population of both information workers and customers. You want to empower both groups without losing control of costs or hurting productivity.
  • Infuse business processes with business insight. You support structured business processes but lose control as they bump heads with a multitude of unstructured processes. You want to connect both forms of process to actionable data, but you struggle with data quality and silos.
  • Simplify always and everywhere. You have the tools to be more agile, but you face a swamp of software complexity and unnecessary functionality. You want technologies, architectures, and management processes that are more fit-to-purpose.
  • Deliver services, not assets. You want to speak in terms that the business understands, but you find your staff confined to assets and technologies. You want to shift more delivery to balance-sheet-friendly models but struggle to work through vendor or legacy icebergs.
  • Create new, differentiated business capabilities. Underpinning all of these efforts, you want to link every technology thought — from architecture to infrastructure to communities — to new business capabilities valued by your enterprise.
Each of these efforts represents both challenge and immense opportunity. Addressing them collectively rather than independently will help IT resurge and ultimately will enable the BT transformation to take root. At IT Forum 2010, we’ll help you accelerate that journey.
For a conference emphasizing “Transformation,” there are surprisingly few cloud-computing sessions (an average of about one per track).
<Return to section navigation list>

Other Cloud Computing Platforms and Services

William Vambenepe’s PaaS portability challenges and the VMforce example post of 4/29/2010 discusses the portability of Salesforce.com’s Apex applications:
The VMforce announcement is a great step for SalesForce, in large part because it lets them address a recurring concern about the force.com PaaS offering: the lack of portability of Apex applications. Now they can be written using Java and Spring instead. A great illustration of how painful this issue was for SalesForce is to see the contortions that Peter Coffee goes through just to acknowledge it: “On the downside, a project might be delayed by debates—some in good faith, others driven by vendor FUD—over the perception of platform lock-in. Political barriers, far more than technical barriers, have likely delayed many organizations’ return on the advantages of the cloud”. The issue is not lock-in; it’s the potential delays that may come from the perception of lock-in. Poetic.
Similarly, portability between clouds is also a big theme in Steve Herrod’s blog covering VMforce as illustrated by the figure below. The message is that “write once run anywhere” is coming to the Cloud.
Because this is such a big part of the VMforce value proposition, both from the SalesForce and the VMWare/SpringSource side (as well as for PaaS in general), it’s worth looking at the portability aspect in more detail. At least to the extent that we can do so based on this pre-announcement (VMforce is not open for developers yet). And while I am taking VMforce as an example, all the considerations below apply to any enterprise PaaS offering. VMforce just happens to be one of the brave pioneers, willing to take a first step into the jungle.
Beyond the use of Java as a programming language and Spring as a framework, the portability also comes from the supporting tools. This is something I did not cover in my initial analysis of VMforce but that Michael Cote covers well on his blog and Carl Brooks in his comment. Unlike the more general considerations in my previous post, these matters of tooling are hard to discuss until the tools are actually out. We can describe what they “could”, “should” and “would” do all day long, but in the end we need to look at the application in practice and see what, if anything, needs to change when I redirect my deployment target from one cloud to the other. As SalesForce’s Umit Yalcinalp commented, “the details are going to be forthcoming in the coming months and it is too early to speculate”. …
William continues with a description of “what portability questions any PaaS platform would have to address (or explicitly decline to address).”
R “Ray” Wang prefaces yet another News Analysis: Salesforce.com and VMware Up The Ante In The Cloud Wars With VMforce with “VMWare and Salesforce.com Battle For The Hearts And Minds Of Cloud-Oriented Java Developers” on 4/29/2010:
On April 27th, 2010, Salesforce.com, [NYSE: CRM] and VMware, Inc. (NYSE: VMW) formed VMforce, a strategic alliance to create a deployment environment for Java based apps in the cloud.  The Platform-as-a-Service (PaaS) offering builds on Java, Spring, VMware vSphere, and Force.com.  Key themes in this announcement:
  • Growing the developer ecosystem. VMware and Salesforce.com realize that the key to growth will be their appeal to developers.  The VMforce offering courts 6 million enterprise Java developers and over 2 million using SpringSource’s Spring framework with an opportunity to build Cloud 2 applications.  VMware brings application management and orchestration tools via VMware vSphere.  Salesforce.com opens up its applications, Force.com database, Chatter collaboration, search, workflow, analytics and mobile platforms.

    Point of View (POV):
    By betting on Java and the Spring framework for this Cloud2 PaaS, both vendors gain immediate access to one of the largest developer communities in the world.  Salesforce.com developers no longer have to use the highly flexible, but very proprietary APEX code base to create Cloud2 apps.   Java developers can now reach the large base of Salesforce.com customers and use the Salesforce.com apps and Force.com.
  • Creating cloud efficiencies for Java development. VMforce brings global infrastructure, virtualization platform, orchestration and management technology, relational cloud database, development platform and collaboration services, application run time, development framework, and tooling to the cloud.  Organizations can build code in Java and integrate with apps in Salesforce.com without having to retrain existing resources.  Environments can scale as needed and take advantage of the massive economies of scale in the cloud.

    POV:
    As with all PaaS offerings, cost and time savings include not dealing with hardware procurement, pesky systems management software, configuration and tuning, and multiple dev, test, and production environment set up.  Developers can focus on business value not infrastructure.  What will they do with their free time not scaling up databases and app servers?

The Bottom Line For Buyers – Finally, A Worthy Java Competitor To Azure And An Upgrade Path For Force.com [Italic emphasis added.]
Ray continues his thoughtful, comprehensive analysis with a “The Bottom Line For Vendors – Will You Have Your Own PaaS Or Will You Join In?” topic and links to related topics.
Werner Vogels’ Expanding the Cloud - Opening the AWS Asia Pacific (Singapore) Region post of 4/29/2010 begins:
Today Amazon Web Services has taken another important step in serving customers worldwide: the AWS Asia Pacific (Singapore) Region is now launched. Customers can now store their data and run their applications from our Singapore location in the same way they do from our other U.S. and European Regions.
The importance of Regions
Quite often "The Cloud" is portrayed as something magically transparent that lives somewhere in the internet. This portrayal can be a desirable and useful abstraction when discussing cloud services at the application and end-user level. However, when speaking about cloud services in terms of Infrastructure-as-a-Service, it is very important to make the geographic locations of services more explicit. There are four main reasons to do so:
  • Performance - For many applications and services, data access latency to end users is important. You need to be able to place your systems in locations where you can minimize the distance to your most important customers. The new Singapore Region offers customers in APAC lower-latency access to AWS services.
  • Availability - The cloud makes it possible to build resilient applications to make sure they can survive different failure scenarios. Currently, each AWS Region contains multiple Availability Zones, which are distinct locations that are engineered to be insulated from failures in other Availability Zones. By placing instances in different Availability Zones, developers can build systems that can survive many complex failure scenarios. The Asia Pacific (Singapore) region launches with two Availability Zones.
  • Jurisdictions - Some customers face regulatory requirements regarding where data is stored. AWS Regions are independent, which means objects stored in a Region never leave the Region unless you transfer them out. For example, objects stored in the EU (Ireland) Region never leave the EU. Customers thus maintain control and maximum flexibility to architect their systems in a way that allows them to place applications and data in the geographic jurisdiction of their choice.
  • Cost-effectiveness - Cost-effectiveness continues to be one of the key decision making factors in managing IT infrastructure, whether physical or cloud-based. AWS has a history of continuously driving costs down and letting customers benefit from these cost reductions in the form of reduced pricing. Our prices vary by Region, primarily because of varying costs associated with running infrastructure in different geographies; for example, the cost of power may vary quite a bit across different regions, countries, or even cities. We are committed to delivering the lowest cost services possible to our customers based on the cost dynamics of each particular Region.
It appears that Amazon is determined to maintain 1:1 regional parity with Microsoft data centers.
Geva Perry’s Thoughts on VMForce and PaaS post of 4/28/2010 adds to the litany of cloud thought leaders’ musings about this forthcoming Windows Azure competitor:
Back in January I wrote about how VMWare will monetize on the SpringSource acquisition via the cloud, and specifically a Java Platform-as-a-Service. Yesterday, VMWare and Salesforce.com made a big announcement about VMForce - their joint Java Platform-as-a-Service, which leverages the Force.com platform with VMWare virtualization technology and more importantly, the Spring products from VMWare's SpringSource division.
The announcement has been widely covered so I won't go over the details. I embedded the brief VMForce demo from their site.
But I have been thinking about some of the implications of the VMForce offering and announcement and wanted to share those.
  • Platform-as-a-Service is coming of age: It has always been my contention that the end game for cloud computing (and by extension, all of IT) is PaaS and SaaS, and eventually, IaaS will remain a niche business. This move by two major vendors, such as Salesforce (who already had a generic PaaS in Force.com) and VMWare (with both its virtualization technology and SpringSource), is a major step towards that end game with the creation of a mainstream, enterprise-grade (?) Java platform.
  • Developers are the name of the game and will continue to grow in influence. There is an ongoing debate on the roles of operations and development in this brave new world. If PaaS indeed becomes mainstream in the enterprise, there is no doubt that the need for ops personnel in the enterprise is reduced. Some of those jobs will shift to the cloud providers, but some will be entirely and forever lost to automation. On the flip side, the influence of developers (as opposed to both ops and central IT/CIO) is significantly increasing. Platforms such as VMForce further reduce the dependence of the dev team on IT - and they become the de facto purchasing decision makers. Adoption of these technologies, consequently, is happening bottom-up - much as open source software did.
  • What about the LAMP PaaS? Google App Engine was already a Java PaaS, but developers who used it told me it was not a business-grade platform. Now VMForce offers the market what is supposedly an enterprise-class Java PaaS. There are already two credible RoR PaaS offerings - Heroku and Engine Yard. The big stack that is glaringly missing is perhaps the most mainstream web framework - the LAMP stack. No player has yet taken this one up and it remains an opportunity.
  • What else is coming from VMWare? Expect VMWare to offer as part of vCloud, or whatever their latest cloud offering is for service provider and internal clouds, the same Java PaaS capability. Of course, this will not include the functionality provided in VMForce by Force.com, but it will make its Java PaaS offering available to others.
  • Reaffirmation that cloud is the way to monetize open source. As I opened this post, this move once again shows that the way to monetize on widely adopted open source software is via the cloud.
<Return to section navigation list>
