Friday, June 24, 2011

Windows Azure and Cloud Computing Posts for 6/24/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the section links, first click the post’s title to display the single post, then navigate to the article you want.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi announced SQL Azure Blog is moving to the Official Windows Azure Platform Team Blog in a 6/24/2011 post to the SQL Azure blog:

In an effort to make it easier for you to stay on top of the latest product releases, resources, tools and news around the Windows Azure platform, today, we’re excited to announce that the SQL Azure blog is moving into what will now be called the ‘Windows Azure Platform Team Blog’. From now on, you can find all SQL Azure blog posts at www.WindowsAzureBlog.com.

This one-stop-shop blog is where you’ll be able to find the latest information about the Windows Azure platform and all its components, including SQL Azure. This blog will continue to cover the wide range of topics we have always addressed, from product announcements, to technical “how-to” posts, and conversations with our customers. If you’re primarily interested in just the posts about SQL Azure, you can subscribe to a dedicated RSS feed for that. Alternatively, you have the option of getting all the Windows Azure platform posts in one integrated RSS feed.

To continue to provide a single view of the Windows Azure platform, expect to see a similar evolution across Facebook, Twitter, and YouTube over the coming weeks. We’ll be sure to notify you when those properties are integrated. In the meantime, we hope these changes will make your life a little easier by making the Windows Azure Platform blog your single destination for all the latest news and information about SQL Azure and the rest of Windows Azure Platform.


David Cooksey interviewed Cihan Biyikoglu (pictured below) in an SQL Azure Database Scalability with Federations post of 6/23/2011 to the InfoQ Blog:

Cihan Biyikoglu introduced an upcoming feature for scalability in SQL Azure databases called Federations at Tech Ed 2011. In his presentation he explains that Federations are objects inside an Azure database which allow the data they contain to scale. This is done through the use of federation members, which are additional databases, each containing part of the data that the Federation holds. Data is distributed across the Federation members according to the Federation distribution key, which is defined when the Federation is created. A block of data containing the same distribution key is considered an atomic unit and will never be split across multiple federation members. The SPLIT and MERGE commands allow the number of federation members to be increased or decreased at runtime.

While the individual Federation member databases are directly accessible, the model expects that connections will be made through the Federation root database. After the connection is made, the USE FEDERATION statement must be executed in order to tell the system what Federation to run the following queries against. Federation members contain both Federated data and reference data. Federated data is data that is split across multiple federation members while reference data is lookup data that is cloned across all members.
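
For readers who want to see what that model looks like in practice, here is a minimal sketch in Python (via pyodbc) following the syntax described for the CTP. The federation name, distribution key, table, and connection string are placeholder assumptions, not code from the interview, and the T-SQL may change before the feature ships.

```python
# Minimal sketch of working with SQL Azure Federations from Python via
# pyodbc. The federation name (Orders_Fed), key (cust_id), table, and
# connection string are placeholders; the T-SQL follows the syntax
# described for the CTP and may change before release.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=tcp:myserver.database.windows.net;"
    "DATABASE=SalesDB;UID=user@myserver;PWD=placeholder;",
    autocommit=True)
cur = conn.cursor()

# Create a federation keyed on a BIGINT distribution key (run once,
# against the database that becomes the federation root).
cur.execute("CREATE FEDERATION Orders_Fed (cust_id BIGINT RANGE)")

# Route subsequent statements to the member holding cust_id = 42.
# FILTERING = OFF addresses the whole member; ON scopes queries to the
# single atomic unit.
cur.execute("USE FEDERATION Orders_Fed (cust_id = 42) "
            "WITH RESET, FILTERING = OFF")
cur.execute("SELECT COUNT(*) FROM Orders WHERE cust_id = 42")
print(cur.fetchone()[0])

# Repartition online: split the member at cust_id = 1000.
cur.execute("USE FEDERATION ROOT WITH RESET")
cur.execute("ALTER FEDERATION Orders_Fed SPLIT AT (cust_id = 1000)")
```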

We contacted Cihan Biyikoglu to ask him a few questions.

InfoQ: The root db seems like a single point of failure - can the root db be mirrored? Is there any recommended method for boosting reliability with this framework?

Cihan Biyikoglu: In SQL Azure, we already provide HA built in with 3 copies of your database. The root DB is made available by SQL Azure internally keeping 3 copies of the database.

InfoQ: How do operations that act on an entire table function when the table is split across multiple databases?

Cihan Biyikoglu: In the federations model, apps have to be aware of partitioning of data. No doubt about that. That said, there is a safe mode for developers to operate in with federations; that is called an atomic unit. If you target atomic units (AUs) only, you don’t have a problem with any repartitioning operation. We don’t split an atomic unit. However for cases like fan-out querying or schema deployment, you would typically target a larger range of data as opposed to AUs. For these operations, there is some help in v1 through metadata to discover which range you are targeting. A fairly simple piece of code can help you ensure you can safely address ranges of data in the presence of repartitioning operations. However it takes awareness in the app to do this. In the future we plan to enhance this support and make that even easier.
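
To illustrate the fan-out pattern Cihan mentions, here is a rough sketch that continues the pyodbc example above. The metadata view and column names (sys.federation_member_distributions, range_low) are my assumption based on the CTP documentation and may differ in the released feature.

```python
# Rough fan-out sketch continuing the pyodbc example above. The
# metadata view and columns are assumptions based on the CTP docs.
def fan_out_count(cur, federation="Orders_Fed", table="Orders"):
    # Discover each member's low range boundary from the root database.
    cur.execute("USE FEDERATION ROOT WITH RESET")
    cur.execute(
        "SELECT d.range_low "
        "FROM sys.federation_member_distributions d "
        "JOIN sys.federations f ON f.federation_id = d.federation_id "
        "WHERE f.name = ?", federation)
    # Assumption: treat a missing low boundary (first member) as 0.
    lows = [row[0] if row[0] is not None else 0 for row in cur.fetchall()]

    total = 0
    for low in lows:
        # Target one member at a time; re-check the member's range here
        # if a SPLIT or MERGE could be in flight.
        cur.execute("USE FEDERATION %s (cust_id = %d) "
                    "WITH RESET, FILTERING = OFF" % (federation, int(low)))
        cur.execute("SELECT COUNT(*) FROM " + table)
        total += cur.fetchone()[0]
    return total
```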

InfoQ: Is there any way to influence the resources for Federation Members? For example, maybe newer data receives 100x the traffic of older data, so one federation member receives 1 request per second and another receives 100 requests per second.

Cihan Biyikoglu: This is very common for partitioned apps and federations as well; you can accommodate it by setting up your physical distribution accordingly. You get to pick the SPLIT points and the SKU and size of data in each fed member.

InfoQ: Will Azure Federations work out of the box with ORMs such as the Entity Framework, LINQ to SQL, and NHibernate?

Cihan Biyikoglu: Teams are working on this as we speak. We may not have full-blown support for existing versions, but we will lay out how one can work with ORMs’ existing versions with federations, and some will have native support for federations in the future.

InfoQ: After a table is created, can you use ALTER TABLE to change its type from Federated to Reference or Central?

Cihan Biyikoglu: There are no architectural limitations on why we could not do this, but we decided to scope it out of v1. So there is no way to do this in v1, but in the future we may enable it.

Nominations for the Federations technology preview program can be submitted via the link on Cihan’s blog. Cihan can be contacted via email at cihangib@microsoft.com.


James Hamilton posted a trip report for SIGMOD 2011 in Athens on 6/23/2011:

Earlier this week, I was in Athens, Greece attending the annual conference of the ACM Special Interest Group on Management of Data (SIGMOD). SIGMOD is one of the top two database events held each year, attracting academic researchers and leading practitioners from industry.

I kicked off the conference with the Plenary keynote. In this talk I started with a short retrospection on the industry over the last 20 years. In my early days as a database developer, things were moving incredibly quickly. Customers were loving our products, the industry was growing fast and yet the products really weren’t all that good. You know you are working on important technology when customers are buying like crazy and the products aren’t anywhere close to where they should be.

In my first release as lead architect on DB2 20 years ago, we completely rewrote the DB2 database engine process model, moving from a process-per-connected-user model to a single process where each connection only consumes a single thread, supporting many more concurrent connections. It was a fairly fundamental architectural change completed in a single release. And in that same release, we improved TPC-A performance by a booming factor of 10 and then did 4x more in the next release. It was a fun time and things were moving quickly.

From the mid-90s through to around 2005, the database world went through what I refer to as the dark ages. DBMS code bases had grown to the point where the smallest was more than 4 million lines of code, the commercial system engineering teams would no longer fit in a single building, and the number of database companies shrunk throughout the entire period down to only 3 major players. The pace of innovation was glacial and much of the research during the period was, in the words of Bruce Lindsay, “polishing the round ball”. The problem was that the products were actually passably good, customers didn’t have a lot of alternatives, and nothing slows innovation like large teams with huge code bases.

In the last 5 years, the database world has become exciting again. I’m seeing more opportunity in the database world now than at any other time in the last 20 years. It’s now easy to get venture funding to do database products and the number and diversity of viable products is exploding. My talk focused on what changed, why it happened, and some of the technical backdrop influencing these changes.

A background thesis of the talk is that cloud computing solves two of the primary reasons why customers used to be stuck standardizing on a single database engine even though some of their workloads may have run poorly. The first is cost. Cloud computing reduces costs dramatically (some of the cloud economics argument: http://perspectives.mvdirona.com/2009/04/21/McKinseySpeculatesThatCloudComputingMayBeMoreExpensiveThanInternalIT.aspx) and charges by usage rather than via annual enterprise license. One of the favorite lock-ins of the enterprise software world is the enterprise license. Once you’ve signed one, you are completely owned and it’s hard to afford to run another product.

My fundamental rule of enterprise software is that any company that can afford to give you 50% to 80% reduction from “list price” is pretty clearly not a low margin operator. That is the way much of the enterprise computing world continues to work: start with a crazy price, negotiate down to a ½ crazy price, and then feel like a hero while you contribute to incredibly high profit margins.

Cloud computing charges by use in small increments and any of the major database or open source offerings can be used at low cost. That is certainly a relevant reason, but the really significant factor is the offloading of administrative complexity to the cloud provider. One of the primary reasons to standardize on a single database is that each is so complex to administer that it’s hard to have sufficient skill on staff to manage more than one. Cloud offerings like AWS Relational Database Service transfer much of the administrative work to the cloud provider, making it easy to choose the database that best fits the application and to have many specialized engines in use across a given company.

As costs fall, more workloads become practical and existing workloads get larger. For example, if analyzing three months of customer usage data has value to the business and it becomes affordable to analyze two years instead, customers correctly want to do it. The plunging cost of computing is fueling database size growth at a super-Moore pace, requiring either partitioned (sharded) or parallel DB engines.

Customers now have larger and more complex data problems, they need the products always online, and they are now willing to use a wide variety of specialized solutions if needed. Data intensive workloads are growing quickly and never have there been so many opportunities and so many unsolved or incompletely solved problems. It’s a great time to be working on database systems.

The talk video is available but, unfortunately, only to ACM digital library subscribers.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Elise Flasko announced the Windows Azure Marketplace DataMarket Blog is moving! in a 6/24/2011 post:

The Windows Azure Marketplace DataMarket Blog is moving to the official ‘Windows Azure Platform Team Blog’ at www.WindowsAzureBlog.com. This will be the new destination for all product release announcements, resources, tools and news around DataMarket. Please bookmark this new blog location and sign up for the DataMarket RSS feed there.

The Windows Azure Platform Team Blog will be the one-stop-shop blog for all components of Windows Azure platform, including DataMarket. You no longer have to visit multiple blog locations to be on top of all the late-breaking news and information. The Windows Azure Platform Team Blog will continue to cover the wide range of topics we have addressed on this blog, from product announcements to technical “how-to” posts and customer experiences. You can continue to expect the same quality content you have been receiving, on this new integrated blog.

Don’t need to know about everything else going on in the cloud? Don’t worry, you can subscribe to just the DataMarket RSS feed. Alternatively, you have the option of getting all the information about Windows Azure platform in a single integrated RSS feed.

To continue to provide a single view of the Windows Azure platform, expect to see a similar evolution across Facebook and YouTube over the coming weeks. We’ll be sure to notify you when those properties are integrated. In the meantime, we hope these changes will make your life a little easier by making the Windows Azure Platform Team Blog your single destination blog for DataMarket, and the rest of Windows Azure platform.

Thank you,

The DataMarket Team


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Itai Raz of the Windows Azure Team posted See what our customers are already doing with the Windows Azure AppFabric June CTP to the Windows Azure blog on 6/24/2011:

The Windows Azure AppFabric June CTP was released only a few days ago, and we already have great samples built by our customers that show how to use the CTP capabilities.

Alan Smith has created two webcasts on CloudCasts that show how easy it is to develop AppFabric applications using the June CTP capabilities:

Watch these videos to learn more about these capabilities and how to use them. We hope you enjoy them as much as we did!

If you would also like to start using the June CTP, here is what you need to do:

1. To request access to the Application Manager, follow these steps:

  • Sign in to the AppFabric Management Portal at http://portal.appfabriclabs.com/.
  • Choose the entry titled “Applications” under the “AppFabric” node on the left side of the screen.
  • Click on the “Request Namespace” button on the toolbar on the top of the screen.
  • You will be asked to answer a few questions before you can request the namespace.
  • Your request will be in a “pending” state until it gets approved and you can start using the Application Manager capabilities.

2. In order to build applications you will need to install the Windows Azure AppFabric CTP SDK and the Windows Azure AppFabric Tools for Visual Studio. Even if you don’t have access to the Application Manager you can still install the tools and SDK to build and run applications locally in your development environment.

Please don’t forget to visit the Windows Azure AppFabric CTP Forum to ask questions and provide us with your feedback.


The Windows Azure Team announced Windows Azure AppFabric will be available on the Official Windows Azure Platform Team Blog on 6/24/2011:

We are combining all Windows Azure platform related content into one place: the Windows Azure Platform Team Blog at www.WindowsAzureBlog.com. This will be the new destination for all product releases, resources, tools and news around the Windows Azure platform. You can bookmark this new blog location and sign up for the Windows Azure AppFabric RSS feed there.

We will continue to post about Windows Azure AppFabric on this blog, as well as post about Windows Server AppFabric, WCF, WF, development, deployment, and management.

Why are we adding the Windows Azure AppFabric content to the Windows Azure Platform Team Blog? The Windows Azure Platform Team Blog will be the one-stop-shop blog for all components of Windows Azure platform, including Windows Azure, SQL Azure, Windows Azure AppFabric and DataMarket. If you are interested in the Windows Azure platform you will no longer have to visit multiple blog locations to be on top of all the late-breaking news and information. The Windows Azure Platform Team Blog will continue to cover the wide range of topics we have addressed on this blog, from product announcements to technical “how-to” posts. You can continue to expect the same quality content you have been receiving, on this new integrated blog. If you’re primarily interested in posts about Windows Azure AppFabric, you can subscribe to a dedicated RSS feed in the new blog or continue following it on this blog. Alternatively, now you will also have the option to get all the Windows Azure platform information in one integrated RSS feed.

To continue to provide a single view of the Windows Azure platform, expect to see a similar evolution across Facebook, Twitter, and YouTube over the coming weeks. We’ll be sure to notify you when that happens. In the meantime, we hope these changes will make your life a little easier by making the Windows Azure Platform Team Blog your destination for all the latest news and information about the Windows Azure platform.


Bruce Kyle posted Azure Team Introduces Windows Azure AppFabric Applications to the US ISV Evangelism blog on 6/24/2011:

A Windows Azure AppFabric Application is any n-tier .NET application that spans the web, middle, and data tiers, composes with external services, and is inherently written to the cloud architecture for scale and availability.

Build AppFabric Applications using AppFabric Developer Tools, run them in the AppFabric Container service, and manage them using the AppFabric Application Manager.

The goal is to enable both application developers and ISVs to leverage these technologies to build and manage scalable and highly available applications in the cloud. In addition, the goal is to help both developers and IT pros, via the AppFabric Developer Tools and AppFabric Application Manager, manage the entire lifecycle of an application from coding and testing to deploying and managing.

The three key pieces fit together to help you through the lifecycle of AppFabric Applications:

  • Developer Tools
  • AppFabric Container
  • Application Manager

See how they work together in the Azure Team blog post, Introducing Windows Azure AppFabric Applications.

The first Community Technology Preview (CTP) of these capabilities in Windows Azure is now shipping. As announced here, the June CTP of AppFabric is now live, and you can start by downloading the new SDK and Developer Tools and by signing up for a free account at the AppFabric Labs portal.


The Windows Azure AppFabric team posted Windows Azure AppFabric – Service Bus Access Control Federation and Rule Migration on 6/23/2011:

In the Windows Azure AppFabric May 2011 CTP, we released enhancements to Service Bus including the new message-oriented middleware features around Queues and Topics. More details regarding these features can be found here: Introducing the Windows Azure AppFabric Service Bus May 2011 CTP. These enhancements will go live with an update to the commercially available service in a few months.

Currently we have two versions of the Access Control service in our production environment: the January 2010 version and the April 2011 version. More details regarding this can be found here: Windows Azure AppFabric April release now available featuring a new version of the Access Control service!.

Until the production Service Bus is updated, it continues to use the January 2010 version of the Access Control service. But once Service Bus is updated, it will also be able to use the new April 2011 version of the Access Control service. Once the Service Bus update goes live, we strongly recommend that customers update their Service Bus solutions to use the new version of the Access Control service, for two reasons:

  1. Service Bus will cease to use the older version of the Access Control service at some point in the future. In compliance with our lifecycle support policies for Windows Azure, we will provide you with 12 months’ notice before this event.
  2. The April 2011 version of Access Control has great benefits, as noted in the blog post referenced above.

Following are explanations on the current state, what will change once Service Bus gets updated, and how you can prepare.

Current state in the production environment

In the Windows Azure AppFabric Management Portal, the two versions of Access Control are labeled ACSV1 for the January 2010 release and ACSV2 for the April 2011 release.

Today, when you create a new Service Bus namespace, a matching ACSV1 namespace is automatically created with the namespace suffix “-sb”. This ACSV1 namespace is used by Service Bus.

The two versions of the Access Control service support the same protocol and authorization token format that is expected by Service Bus today (OAuth WRAP SWT), and are fully compatible for all runtime operations. However, ACSV2 supports a new, richer management service protocol based on OData. While the two services are runtime-compatible, they are not compatible for code performing automated setup of access control rules and service identities via the management service.
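
To make the runtime compatibility concrete, here is a rough sketch (in Python) of the OAuth WRAP request a Service Bus client sends to its paired “-sb” Access Control namespace to obtain an SWT. The namespace, issuer, and key are placeholders, and the exchange looks the same whether the namespace is ACSV1 or ACSV2.

```python
# Rough sketch of the OAuth WRAP token request a Service Bus client
# makes against its paired "-sb" Access Control namespace. The
# namespace, issuer name, and key are placeholders.
import urllib.parse
import urllib.request

namespace = "contoso"                       # placeholder namespace
issuer, key = "owner", "BASE64_KEY_HERE"    # placeholder credentials

token_url = f"https://{namespace}-sb.accesscontrol.windows.net/WRAPv0.9/"
body = urllib.parse.urlencode({
    "wrap_name": issuer,
    "wrap_password": key,
    "wrap_scope": f"http://{namespace}.servicebus.windows.net/",
}).encode("ascii")

with urllib.request.urlopen(token_url, data=body) as resp:
    fields = urllib.parse.parse_qs(resp.read().decode("utf-8"))

token = fields["wrap_access_token"][0]
# The SWT is then placed in the Authorization header of Service Bus calls:
auth_header = 'WRAP access_token="%s"' % token
print(auth_header[:60] + "...")
```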

Following the Service Bus update

For backwards-compatibility with existing applications, Service Bus namespaces that exist in the production environment at the time of the update will not be automatically switched over to ACSV2, and will continue to use ACSV1. You will need to migrate these namespaces from ACSV1 to ACSV2 on your own, coordinated with updates to your code that calls the management service.

As we update the service in a few months, we will provide tooling and step-by-step guidance for how to perform the migration from ACSV1 to ACSV2.

While we will provide detailed guidance as we release the update, the migration process will generally involve the following test-then-migrate sequence of steps:

  • First, you will create a brand-new, temporary Service Bus namespace that will automatically come paired with an ACSV2 namespace. This namespace will be used for end to end testing of your application with ACSV2, including your updated code to call the management service.
  • Using a to-be-provided command line tool, you will copy the state of your production ACSV1 namespace into the temporary ACSV2 namespace. Your production Service Bus namespace and corresponding ACSV1 namespace will remain untouched during this process. You can now test your application and updated code against the temporary Service Bus and ACSV2 namespace to ensure everything works as expected.
  • After you have verified that your application works as expected, you will use the to-be-provided command line tool to change the DNS name of the temporary ACSV2 namespace to be the name of the ACSV1 namespace associated with your production Service Bus – effectively swapping the V2 namespace into the place of the existing V1 namespace. This completes the migration.

Running applications should not experience any service disruption resulting from this switch.

The version differences and migration will mostly affect applications that automatically provision rules in ACSV1 using the management service. Since the management service differs between ACSV1 and ACSV2, the guidance will suggest a process that can be outlined as follows:

  • As you prepare for the switchover, you should have a mode in your application, or a procedure in your operations process, that allows you to temporarily suspend automated creation of rules. This may be a simple notice to operations staff or may require some work in your application.
  • Immediately before you copy rules to the temporary namespace for the final time before making the DNS name change, you should engage this suspension mode so that no new rules are created in the ACSV1 namespace that end up being orphaned after the data migration.
  • After you have switched to ACSV2, you can resume automated creation of rules by having your code target the new ACSV2 management service.
  • We suggest that you have both the ACSV1 and the ACSV2 client code in your application side by side and that you switch between the code paths using a simple check against an HTTP URL. If a simple GET request against the ACSV2 management endpoint https://tenant.accesscontrol.windows.net/v2/mgmt/service yields an HTTP status code 404, the application runs against ACSV1; otherwise it already runs against ACSV2. A minimal sketch of this check follows below.
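
A minimal sketch of that version probe, assuming a placeholder “-sb” namespace, might look like this in Python:

```python
# Minimal sketch of the ACS version probe described above: a 404 from
# the V2 management endpoint means the namespace is still ACSV1.
# "contoso-sb" is a placeholder Access Control namespace.
import urllib.error
import urllib.request

def uses_acs_v2(acs_namespace: str) -> bool:
    url = f"https://{acs_namespace}.accesscontrol.windows.net/v2/mgmt/service"
    try:
        urllib.request.urlopen(url)
        return True
    except urllib.error.HTTPError as err:
        # 404: no V2 management service yet (still ACSV1); other codes
        # (e.g. 401 unauthorized) mean the V2 endpoint exists.
        return err.code != 404

if uses_acs_v2("contoso-sb"):
    print("Use the ACSV2 (OData) management code path.")
else:
    print("Use the ACSV1 management code path.")
```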

How to prepare

Our LABS environment at http://portal.appfabriclabs.com already includes the enhancements to Service Bus, as well as the new version of Access Control. Hence, you can use this environment, free of charge, to start planning your migration and test it.

In the LABS environment already today, when customers create new Service Bus namespaces the system will generate a new Access Control namespace with the “-sb” suffix, but that namespace is created in ACSV2. This will be the production experience when we release the update to Service Bus in a few months.

You can immediately start developing the new management code path that creates rules against ACSV2, using the versions of Service Bus and Access Control available in the May 2011 CTP environment at http://portal.appfabriclabs.com.


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Updated my (@rogerjenn) Windows Azure Platform Maligned by Authors of NetworkWorld Review post of 6/23/2011 with link to PCWorld copy of the NetworkWorld review and announcement of Windows Azure Wins Best Cloud Service at Cloud Computing World Series Awards:

My updated conclusion:

Tom Henderson provided no citations to support his refutation of my objections.

If the authors “can’t test it until it’s really, really ready” and believe the product isn’t “finished,” I contend NetworkWorld should not have reviewed it and PCWorld should not have published the copy.

The Windows Azure Team reported Windows Azure Wins Best Cloud Service at Cloud Computing World Series Awards in a 6/23/2011 post.


The Windows Azure Team posted Introducing the Consolidated Windows Azure Team Blog on 6/24/2011:

We are continually looking for ways to make it easier for you to stay on top of the latest product releases, resources, tools and news around Windows Azure & SQL Azure. Today, we’re happy to announce another step in that direction with the consolidation of the SQL Azure, Windows Azure AppFabric and Windows Azure Marketplace DataMarket blogs into this Windows Azure blog, which will now be called the ‘Windows Azure Team Blog’.

Starting today, this is where you’ll be able to find the latest information about Windows Azure and all its components. This blog will cover a wide variety of topics, from the latest news and announcements, to technical and “how-to” posts, as well as conversations with customers, partners and Windows Azure experts. If you’re primarily interested in just the posts about Windows Azure, SQL Azure, Windows Azure AppFabric or Windows Azure Marketplace DataMarket, you can opt to subscribe to dedicated RSS feeds or just select one of the product categories on the right. If you subscribe to the main RSS feed for this blog, you’ll get posts covering all of Windows Azure.

Over time, you’ll see a similar evolution across our other social media channels and we’ll be sure to notify you when those happen. In the meantime, we hope these changes will make your life a little easier by making this blog your single destination for all the latest news and information about Windows Azure.

The preceding notice doesn’t include the Windows Azure Connect Team and Windows Azure Storage Team blogs. Will they move too?


Steve Marx (@smarx) posted Cloud Cover Episode 48 - Node.js, Ruby, and Python in Windows Azure to Channel9 on 6/24/2011:

Note: Apologies to Nate! The video has his last name wrong. To be clear, he's Nate Totten. An updated video will come out later with the correct name.

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show at @CloudCoverShow.

In this episode, Nathan Totten joins Steve as they discuss the Smarx Role, an easy way to run Node.js, Ruby, and Python in Windows Azure. Nate also shows off his Windows Azure CDN Helpers, which make optimizing your ASP.NET MVC application for caching static content like JavaScript and CSS a breeze.


Tim Anderson (@timanderson) analyzed the Microsoft partners with Joyent to bring node.js server-side JavaScript to Windows announcement in a 6/24/2011 post:

Microsoft will port node.js to Windows in partnership with Joyent. This will work on Windows Azure as well as other versions of Windows back to Server 2003.

But can you not already run node.js on Windows? This is possible using Cygwin, and instructions are here. Cygwin makes Windows more like Linux by providing familiar Linux tools and a Linux API layer. Cygwin is a great tool, though it can be an awkward dependency; a true Windows port should offer higher performance and be more robust, particularly as the intention is to use the IOCP API. See here for an explanation of IOCP:

With IOCP, you don’t need to supply a completion function, wait on an event handle to signal, or poll the status of the overlapped operation. Once you create the IOCP and add your overlapped socket handle to the IOCP, you can start the overlapped operation by using any of the I/O APIs mentioned above (except recv, recvfrom, send, or sendto). You will have your worker thread block on GetQueuedCompletionStatus API waiting for an I/O completion packet. When an overlapped I/O completes, an I/O completion packet arrives at the IOCP and GetQueuedCompletionStatus returns.

IOCP is the Windows NT Operating System support for writing a scalable, high throughput server using very simple threading and blocking code on overlapped I/O operations. Thus there can be a significant performance advantage of using overlapped socket I/O with Windows NT IOCPs.
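
For readers who have not used completion ports, here is a bare-bones sketch of the worker-thread pattern the quote describes, written with the pywin32 bindings (an assumption; a production server would associate overlapped socket handles with the port instead of posting a packet by hand):

```python
# Bare-bones IOCP worker pattern using pywin32 (Windows only). A real
# server would associate overlapped socket handles with the port; here
# a completion packet is posted by hand just to show the worker thread
# blocking on GetQueuedCompletionStatus until a packet arrives.
import threading
import win32file

# Create a standalone completion port (no file handle associated yet).
port = win32file.CreateIoCompletionPort(
    win32file.INVALID_HANDLE_VALUE, None, 0, 0)

def worker():
    # Blocks until an I/O completion packet is queued to the port.
    rc, num_bytes, completion_key, overlapped = \
        win32file.GetQueuedCompletionStatus(port, 10000)  # 10 s timeout
    print("completion packet:", rc, num_bytes, completion_key)

t = threading.Thread(target=worker)
t.start()

# Simulate a completed I/O so the worker wakes up.
win32file.PostQueuedCompletionStatus(port, 123, 42, None)
t.join()
```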

I was impressed by node.js when I saw it presented by author Ryan Dahl at a pre-Dreamforce event last year. Since then it has become better known. This is an interesting move, particularly in the context of a greater focus on JavaScript in the forthcoming version of Windows known as Windows 8. End to end JavaScript for your next-generation real time networking applications?

Related posts:

  1. Don’t miss Ryan Dahl on Node.js
  2. Speeding page load with dynamic JavaScript
  3. IE9 in Windows Phone will be good for cross-platform JavaScript and HTML5 apps

Alex Handy posted Node.js pushes JavaScript to the server-side on 6/24/2011 to the SD Times on the Web blog:

JavaScript has long held its place as the language of choice for the Web, but it's only recently become fashionable to run it on the server-side. While Sun Microsystems and others mumbled about server-side JavaScript as early as 2006, it is the recent popularity of Node.js that has given the language a foothold on the server. And while Node.js has grown on Unix systems, its creators are now working on bringing it to Windows.

Node.js was created by Ryan Dahl when he was searching for a way to bring event-driven programming to the Web. The project is currently sponsored by hosting company Joyent.

"Ryan is a C developer, and he didn't have any relationship with JavaScript before Node,” said Tom Hughes-Croucher, chief evangelist for Joyent. “He was writing high-performance Web servers in C and C++. He wanted to use this event-driven model more. He saw himself writing the same applications again and again. He played with Twisted, the Python event-driven framework.”

But the existing libraries and languages didn't quite do it for Dahl, said Hughes-Croucher. “The predominant reason is that there is a lot of heritage in server-side programming already, so when he wanted to use some other library, or access a database, or do something that involved input/output, the existing heritage of those languages didn't work very well with an event driven system,” he said.

This is because all the existing libraries were blocking, he said. “The event-driven system requires that it can continue doing other work while it's waiting for a task to be completed. I don't have to wait for the database process to be complete in order to do more work. People had built all this infrastructure that didn't work this way."
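
The blocking-versus-event-driven distinction is easy to demonstrate. The sketch below uses Python's asyncio purely as a stand-in for Node's event loop (Node's own API is JavaScript and differs in detail): three simulated one-second queries take about three seconds sequentially but about one second when they overlap on an event loop.

```python
# Blocking vs. event-driven I/O, illustrated with Python's asyncio as a
# stand-in for Node's event loop (Node's own API is JavaScript).
import asyncio
import time

def blocking_style():
    # Each simulated query stalls the whole thread: ~3 s for 3 queries.
    for _ in range(3):
        time.sleep(1)              # stand-in for a blocking database call

async def fake_query():
    await asyncio.sleep(1)         # stand-in for non-blocking I/O

async def event_driven_style():
    # The event loop keeps working while each query waits: ~1 s total.
    await asyncio.gather(fake_query(), fake_query(), fake_query())

start = time.time(); blocking_style()
print(f"blocking:     {time.time() - start:.1f}s")

start = time.time(); asyncio.run(event_driven_style())
print(f"event-driven: {time.time() - start:.1f}s")
```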

The eventual catalyst for Node.js actually came out of Google. The Google Chrome team opened up its own JavaScript runtime, V8, as an open-source project, and this provided the language runtime Dahl needed to build out an event-driven framework.

But Hughes-Croucher calls the creation of Node.js a perfect-storm situation. He said that the release of V8 was only half of the recipe for success. The other half came from the fact that JavaScript was essentially devoid of server-side code. Dahl would have to write the code for handling essentials like TCP/IP and file access.


<Return to section navigation list> 

Visual Studio LightSwitch

Beth Massi (@bethmassi) announced Three New “How Do I” Videos Released! (Beth Massi) in a 6/24/2011 post to the Visual Studio LightSwitch blog:

We just released three new “How Do I” videos on the LightSwitch Developer Center; check them out:

#14 - How Do I: Modify the Navigation of Screens in a LightSwitch Application?
#15 - How Do I: Open a Screen After Saving Another Screen in a LightSwitch Application?
#16 - How Do I: Connect LightSwitch to an Existing Database?

And if you missed them, you can access all the “How Do I” videos here (like the one we released last week on deploying to Azure):

Watch the LightSwitch How Do I Videos
Watch all the LightSwitch How Do I videos

I’m doing more each week, so keep an eye out for the next ones. I’ll have some out next week on using SharePoint in your LightSwitch applications. (Please note that if the video doesn’t appear right away, try refreshing. You can also download the video in a variety of formats at the bottom of the video page.)


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

Lori MacVittie (@lmacvittie) asserted No, not World of Warcraft “Damage per Second” - infrastructure “Decisions per second” in an introduction to her F5 Friday: Performance, Throughput and DPS post of 6/24/2011 to F5’s DevCentral blog:

Metrics are tricky. Period. Comparing metrics is even trickier. The purpose of performance metrics is, of course, to measure performance. But like most tests, before you can administer such a test you really need to know what it is you’re testing. Saying “performance” isn’t enough and never has been, as the term has a wide variety of meanings that are highly dependent on a number of factors.

The problem with measuring infrastructure performance today – and this will continue to be a major obstacle in metrics-based comparisons of cloud computing infrastructure services – is that we’re still relying on fairly simple measurements as a means to determine performance. We still focus on speeds and feeds, on wires and protocol processing. We look at throughput, packets per second (PPS) and connections per second (CPS) for network and transport layer protocols. While these are generally accurate for what they’re measuring, we start running into real problems when we evaluate the performance of any component – infrastructure or application – in which processing, i.e. decision making, must occur.

Consider the difference in performance metrics between a simple HTTP request / response in which the request is nothing more than a GET request paired with a 0-byte payload response and an HTTP POST request filled with data that requires processing not only on the application server, but on the database, and the serialization of a JSON response. The metrics that describe the performance of these two requests will almost certainly show that the former has a higher capacity and faster response time than the latter. Obviously those who wish to portray a high-performance solution are going to leverage the former test, knowing full well that those metrics are “best case” and will almost never be seen in a real environment because a real environment must perform processing, as per the latter test.

Suggestions for a standardized testing environment, similar to application performance comparisons using the Pet Shop Application, are generally met with a frown because using a standardized application to induce real processing delays doesn’t actually test the infrastructure component’s processing capabilities; it merely adds latency on the back-end and stresses the capacity of the infrastructure component. Too, such a yardstick would fail to really test what’s important – the speed and capacity of an infrastructure component to perform processing itself, to make decisions and apply them on the component – whether it be security or application routing or transformational in nature.

It’s an accepted fact that processing of any kind, at any point along the application delivery service chain induces latency which impacts capacity. Performance numbers used in comparisons should reveal the capacity of a system including that processing impact. Complicating the matter is the fact that since there are no accepted standards for performance measurement, different vendors can use the same term to discuss metrics measured in totally different ways.

THROUGHPUT versus PERFORMANCE

Infrastructure components, especially those that operate at the higher layers of the networking stack, make decisions all the time. A firewall service may make a fairly simple decision: is this request for this port on this IP address allowed or denied at this time? An identity and access management solution must make similar decisions, taking into account other factors, answering the question is this user coming from this location on this device allowed to access this resource at this time? Application delivery controllers, a.k.a. load balancers, must also make decisions: which instance has the appropriate resources to respond to this user and this particular request within specified performance parameters at this time?

We’re not just passing packets anymore, and therefore performance tests that measure only the surface ability to pass packets or open and close connections is simply not enough. Infrastructure today is making decisions and because those decisions often require interception, inspecting and processing of application data – not just individual packets – it becomes more important to compare solutions from the perspective of decisions per second rather than surface-layer protocol per second measurements.


Decision-based performance metrics are a more accurate gauge as to how the solution will perform in a “real” environment, to be sure, as it’s portraying the component’s ability to do what it was intended to do: make decisions and perform processing on data. Layer 4 or HTTP throughput metrics seldom come close to representing the performance impact that normal processing will have on a system, and, while important, should only be used with caution when considering performance.

Consider the metrics presented by Zeus Technologies in a recent performance test (Zeus Traffic Manager – VMware vSphere 4 Performance on Cisco UCS, 2010) and F5’s performance results from 2010 (F5 2010 Performance Report). While showing impressive throughput in both cases, it also shows the performance impact that occurs when additional processing – decisions – are added into the mix.

The ability of any infrastructure component to pass packets or manage connections (TCP capacity) is all well and good, but these metrics are always negatively impacted once the component begins actually doing something, i.e. making decisions. Being able to handle almost 20 Gbps throughput is great but if that measurement wasn’t taken while decisions were being made at the same time, your mileage is not just likely to vary – it will vary wildly.

Throughput is important, don’t get me wrong. It’s part of – or should be part of – the equation used to determine what solution will best fit the business and operational needs of the organization. But it’s only part of the equation, and probably a minor part of that decision at that. Decision based metrics should also be one of the primary means of evaluating the performance of an infrastructure component today. “High performance” cannot be measured effectively based on merely passing packets or making connections – high performance means being able to push packets, manage connections and make decisions, all at the same time.

This is increasingly a fact of data center life as infrastructure components continue to become more “intelligent”, as they become a first class citizen in the enterprise infrastructure architecture and are more integrated and relied upon to assist in providing the services required to support today’s highly motile data center models. Evaluating a simple load balancing service based on its ability to move HTTP packets from one interface to the other with no inspection or processing is nice, but if you’re ultimately planning on using it to support persistence-based routing, a.k.a. sticky sessions, then the rate at which the service executes the decisions necessary to support that service should be as important – if not more – to your decision making processes.

DECISIONS per SECOND

There are very few pieces of infrastructure on which decisions are not made on a daily basis. Even the use of VLANs requires inspection and decision-making to occur on the simplest of switches. Identity and access management solutions must evaluate a broad spectrum of data in order to make a simple “deny” or “allow” decision and application delivery services make a variety of decisions across the security, acceleration and optimization demesne for every request they process.

And because every solution is architected differently and comprised of different components internally, the speed and accuracy with which such decisions are made are variable and will certainly impact the ability of an architecture to meet or exceed business and operational service-level expectations. If you’re not testing that aspect of the delivery chain before you make a decision, you’re likely to either be pleasantly surprised or hopelessly disappointed in the decision making performance of those solutions.

It’s time to start talking about decisions per second and performance of infrastructure in the context it’s actually used in data center architectures rather than as stand-alone, packet-processing, connection-oriented devices. And as we do, we need to remember that every network is different, carrying different amounts of traffic from different applications. That means any published performance numbers are simply guidelines and will not accurately represent the performance experienced in an actual implementation. However, the published numbers can be valuable tools in comparing products… as long as they are based on the same or very similar testing methodology. Before using any numbers from any vendor, understand how those numbers were generated and what they really mean, how much additional processing do they include (if any).

When looking at published performance measurements for a device that will be making decisions and processing traffic, make sure you are using metrics based on performing that processing.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Dana Gardner asked “Where is the golden mean, a proper context for real-world and likely cloud value?” as an introduction to a transcript of a Private Clouds: Debunking the Myths that Can Slow Adoption panel discussion to his Briefings Direct blog on 6/24/2011:

The popularity of cloud concepts and the expected benefits from cloud computing have certainly raised expectations. Forrester now predicts that cloud spending will grow from $40 billion to $241 billion in the global IT market over the next 10 years, and yet, there's still a lot of confusion about the true payoffs and risks associated with cloud adoption. IDC has its own numbers.

Some enterprises expect to use cloud and hybrid clouds to save on costs, improve productivity, refine their utilization rates, cut energy use and eliminate gross IT inefficiencies. At the same time, cloud use should improve their overall agility, ramp up their business-process innovation, and generate better overall business outcomes.

To others, this sounds a bit too good to be true, and a backlash against a silver bullet, cloud hype mentality is inevitable and is probably healthy. Yet, we find that there is also unfounded cynicism about cloud computing and underserved doubt.

So, where is the golden mean, a proper context for real-world and likely cloud value? And, what are the roadblocks that enterprises may encounter that would prevent them from appreciating the true potential for cloud, while also avoiding the risks?

We assembled a panel to identify and debunk myths on the road to cloud-computing adoption. Such myths can cause confusion and hold IT back from embracing the cloud model sooner rather than later. We also define some clear ways to get the best out of cloud virtues without stumbling.

Joining our discussion about the right balance of cloud risk and reward are Ajay Patel, a Technology Leader at Agilysys; Rick Parker, IT Director for Fetch Technologies, and Jay Muelhoefer, Vice President of Enterprise Marketing at Platform Computing. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Let's begin to tackle some of the cloud computing myths.

There's an understanding that virtualization is private cloud and private cloud is virtualization. Clearly, that's not the case. Help me understand what you perceive in the market as a myth around virtualization and what should be the right path between virtualization and a private cloud?

Parker: Private cloud, to put a usable definition to it, is a web-manageable virtualized data center. What that means is that through any browser you can manage any component of the private cloud. That's opposed to virtualization, which could just be a single physical host with a couple of virtual machines (VMs) running on it and doesn't provide the redundancy and cost-effectiveness of an entire private cloud or the ease of management of a private cloud.

So there is a huge difference between virtualization and use of a hypervisor versus an entire private cloud. A private cloud is comprised of virtualized routers, firewalls, switches, in a true data center not a server room. There are redundant environmental systems, like air-conditioning and Internet connections. It’s comprised of an entire infrastructure, not just a single virtualized host.

Moving to a private cloud is inevitable, because the benefits so far outweigh the perceived risks, and the perceived risks are more toward public cloud services than private cloud services.

Gardner: We’ve heard about fear of loss of control by IT. Is there a counter-intuitive effect here that cloud will give you better control and higher degrees of security and reliability?

Redundancy and monitoring

Parker: I know that to be a fact, because the private cloud management software and hypervisors provide redundancy and performance monitoring that a lot of companies don't have by default. You don't get performance monitoring across a wide range of systems just by installing a hypervisor; you get it by going with a private cloud management system and the use of VirtualCenter, which supports live VMotion between physical hosts.

It also provides uptime/downtime type of monitoring and reporting capacity planning that most companies don't even attempt, because these systems are generally out of their budget.

Gardner: Tell us about Fetch Technologies.

Parker: Fetch Technologies is a provider of data as a service, which is probably the best way to describe it. We have a software-as-a-service (SaaS) type of business that extracts, formats, and delivers Internet-scale data. For example, two of our clients are Dow Jones and Shopzilla.

Gardner: Let’s go next to Ajay. A myth that I encounter is that private clouds are just too hard. "This is such a departure from the siloed and monolithic approach to computing that we'd just as soon stick with one server, one app, and one database," we hear. "Moving toward a fabric or grid type of affair is just too hard to maintain, and I'm bound to stumble." Why would I be wrong in assuming that as my position, Ajay?

Patel: One of the main issues that the IT management of an organization encounters on a day-to-day basis is the ability for their current staff to change their principles of how they manage the day-to-day operations.

The training and the discipline need to be changed. The fear of the operations being changed is one of the key issues that IT management sees. They also think of staff attrition as a key issue. By doing the actual cloud assessment, by understanding what the cloud means, it's closer to home to what the IT infrastructure team does today than one would imagine through the myth.

For example, virtualization is a key fundamental need of a private cloud -- virtualization at the servers, network and storage. All the enterprise providers at the servers, networks, and storage are creating a virtualized infrastructure for you to plug into your cloud-management software and deliver those services to an end-user without issues -- and in a single pane of glass.

If you look at the some of the metrics that are used by managed service companies, SIs, and outsourcing companies, they do what the end-user companies do, but they do it much cheaper, better and faster.

More efficient manner

How they do it better is by creating the ability to manage several different infrastructure portfolio components in a much more efficient manner. That means managing storage as a virtualized infrastructure: tiered storage, network, the servers, not only the Windows environment but also the Unix and Linux environments, and putting all of that in the hands of the business owners.

Today, with the money being so tight to come by for a corporation, people need to look at not just a return on investment (ROI), but the return on invested capital.

You can deploy private cloud technologies on top of your virtualized infrastructure at a much lower cost of entry than if you were to just keep expanding islands of test and dev environments built by application and by project.

Gardner: I'd like to hear more about Agilysys? What is your organization and what is your role there as a technology leader?

Patel: I am the technology leader for cloud services across the US and UK. Agilysys is a value-added reseller, as well as a system integrator and professional services organization that services enterprises from Wall Street to manufacturing to retail to service providers, and telecom companies.

Gardner: And do you agree, Ajay, with Forrester Research and IDC, when they show such massive growth, do you really expect that cloud, private cloud, and hybrid cloud are all going to be in such rapid growth over the next several years?

Patel: Absolutely. The only difference between a private cloud and public cloud, based on what I'm seeing out there, is the fear of bridging that gap between what the end-user attains via private cloud being inside their four walled data center, to how the public cloud provides the ability for the end-user to have security and the comfort level that their data is secure. So, absolutely, private to hybrid to public is definitely the way the industry is going to go.

Gardner: Jay at Platform, I'm thinking about myths that have to do with adoption, different business units getting involved, lack of control, and cohesive policy. This is probably what keeps a lot of CIOs up at night, thinking that it’s the Wild West and everyone is running off and doing their own thing with IT. How is that a myth and what does a private cloud infrastructure allow that would mitigate that sense of a lot of loose cannons?

Muelhoefer: That’s a key issue when we start thinking about how our customers look to private cloud. It comes back a little bit to the definition that Rick mentioned. Does virtualization equal private cloud -- yes or no? Our customers are asking for the end-user organizations to be able to access their IT services through a self-service portal.

But a private cloud isn’t just virtualization, nor is it one virtualization vendor. It’s a diverse set of services that need to be delivered in a highly automated fashion, because it's not just one virtualization platform; it's going to be VMware, KVM, Xen, etc.

A lot of our customers also have physical provisioning requirements, because not all applications are going to be virtualized. People do want to tap in to external cloud resources as they need to, when the costs and the security and compliance requirements are right. That's the concept of the hybrid cloud, as Ajay mentioned. We're definitely in agreement. You need to be able to support all of those, bring them together in a highly orchestrated fashion, and deliver them to the right people in a secure and compliant manner.

The challenge is that each business unit inside of the company typically doesn’t want to give up control. They each have their own IT silos today that meet their needs, and they are highly over provisioned.

Some of those can be at 5 to 10 percent utilization, when you measure it over time, because they have to provision everything for peak demands. And, because you have such a low utilization, people are looking at how to increase that utilization metric and also increase the number of servers that are managed by each administrator.

You need to find a way to get all the business units to consolidate all these underutilized resources. By pooling, you could actually get effects just like when you have a portfolio of stocks. You're going to have a different demand curve by each of the different business units and how they can all benefit. When one business unit needs a lot, they can access the pool when another business unit might be low.

But, the big issue is how you can do that without businesses feeling like they're giving up that control to some other external unit, whether it's a centralized IT within a company, or an external service provider? In our case, a lot of our customers, because of the compliance and security issues, very much want to keep it within their four walls at this stage in the evolution of the cloud marketplace.

So, it’s all about providing that flexibility and openness to allow business units to consolidate, but not giving up that control and providing a very flexible administrative capability. That’s something that we've spent the last several years building for our customers.

It’s all about being able to support that heterogeneous environment, because every business unit is going to be a little different and is going to have different needs. Allowing them to have control, but within defined boundaries, you could have centralized cloud control, where you give them their resources and quotas for what they're initially provisioned for, and you could support costing and charge-back, and provide a lot more visibility into what’s happening.

You get all of that centralized efficiency that Ajay mentioned, but also having a centralized organization that knows how to run a larger scale environment. But then, each of the business units can go in and do their own customized self-service portal and get access to IT services, whether it's a simple OS or a VM or a way to provision a complex multi-tier application in minutes, and have that be an automated process. That’s how you get a lot of the cost efficiencies and the scale that you want out of a cloud environment.

Gardner: And, for those business units, they'd also have to watch the cost and maybe have their own P&L. They might start seeing their IT costs as shared services or charge-backs and get out of the capital expense business, so it could actually help them in their business when it comes to cost.

Still in evolution

Muelhoefer: Correct. Most of our customers today are very much still in evolution. The whole trend towards more visibility is there, because you're going to need it for compliance, whether it’s Sarbanes-Oxley (SOX) or ITIL reporting.

Ultimately, the business units of IT are going to get sophisticated enough that they can move from being a cost center to a value-added service center. Then, they can start doing that granular charge-back reporting and actually show at a much more fine level the value that they are adding to the organization.

Parker: Different departments, by combining their IT budgets and going with a single private cloud infrastructure, can get a much more reliable infrastructure. By combining budgets, they can afford SAN storage and a virtual infrastructure that supports live VMotion.
They get a fast response, because by putting a cloud management application like Platform's on top of it, they have much more control; we're providing the interface to the different departments. They can set up servers themselves and manage their own servers. They have a much faster "IT response time,” so they don’t really have to wait for IT’s response through a help desk system that might take days to add memory to a server.

IT gives end-users more control by providing a cloud management application and also gives them a much more reliable, manageable system. We've been running a private cloud here at Fetch for three years now, and we've seen this. This isn’t some pie-in-the-sky kind of thing. This is, in fact, what we have seen and proven over and over.

Gardner: I asked both Ajay and Rick to tell us about their companies. Jay, why don’t you give us the overview of Platform Computing?

Muelhoefer: Platform Computing is headquartered in Toronto, Canada, and is about an 18-year-old company. We have over 2,000 customers, and they're spread out on a global basis.

We have a few different business units. One is enterprise analytics, the second is cloud, and the third is HPC grids and clusters. Within the cloud space, we offer a cloud management solution for medium and large enterprises to build and manage private and hybrid cloud environments.

The Platform cloud software is called Platform ISF. It's all about providing the self-service capability to end-users to access this diverse set of infrastructure as a service (IaaS), and providing the automation, so that you can get the efficiencies and the benefits out of a cloud environment.

Gardner: Rick, let’s go back to you. I've heard this myth that private clouds are just for development, test, and quality assurance (QA). Was cloud really formed by developers and is it getting too much notoriety, or is there something else going on? Is it for test, dev, and a whole lot more?

Beginning of the myth

Parker: I believe that myth just came from the initial availability of VMware and that’s what it was primarily used for. That’s the beginning of that myth.

My experience is that our private cloud isn't built for a specific use case. A well-designed private cloud should and can support any use case. We have a private cloud infrastructure, and on top of this infrastructure we can deliver development resources, test resources, and QA resources, but they're all sitting on top of a base infrastructure of a private cloud.

But, there isn't just a single use case. It’s detrimental to define use cases for private cloud. I don't recommend setting up a private cloud for dev only, another separate private cloud for test, another separate private cloud for QA. That’s where a use case mentality gets into it. You start developing multiple private clouds.

If you combine those resources and develop a single private cloud, that lets you divide up the resources within the infrastructure to support the different requirements. So, it’s really backward thinking, counter-intuitive, to try to define use cases for private cloud.

We run everything on our private cloud. Our goal is 100 percent virtualization of all servers, running everything on our private cloud. That includes back-office corporate IT, Microsoft Exchange, services like domain controllers, and SharePoint; all of these systems run on top of our private cloud out of our data centers.

We don't have any of these systems running out of an office, because we want the reliability and the cost savings that our private cloud gives us by deploying these applications on servers in the data center, where these systems belong.

Muelhoefer: Some of that myth may come from the original evolution of clouds, which started out in the area of very transient workloads. By transient, I mean things like demonstration environments, or somebody who just needs a development environment for a day or two. But we've seen a transition across our customers, where they also have longer-running applications that they're putting into production types of environments, and they don't want to have to over-provision them.

If at the end of the quarter you need a capacity of 10 units, you don’t want to carry those 10 units as resource hogs throughout the entire quarter. You want to be able to flex up and flex down according to the requirements and the demand. Flexing requires a different set of technology capabilities: having the right business policies and defining your applications so they can dynamically scale. I think that’s one of the next frontiers in the world of cloud.
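
What "flexing" might look like in practice is a control loop that compares observed utilization against business-policy thresholds and adds or releases capacity accordingly. The following is a generic, hypothetical sketch of that idea, not Platform ISF's actual policy engine; the thresholds are invented, and `get_utilization` and `scale_to` stand in for whatever monitoring and provisioning interfaces a real cloud manager exposes:

```python
# Hypothetical sketch of a simple flex-up/flex-down policy loop.
import time

MIN_INSTANCES = 2        # capacity kept warm outside of peak periods
MAX_INSTANCES = 10       # e.g., the quarter-end peak of "10 units"
SCALE_UP_AT = 0.75       # add capacity above 75% average utilization
SCALE_DOWN_AT = 0.30     # release capacity below 30%

def flex(current, utilization):
    """Return the instance count the pool should run at."""
    if utilization > SCALE_UP_AT and current < MAX_INSTANCES:
        return current + 1
    if utilization < SCALE_DOWN_AT and current > MIN_INSTANCES:
        return current - 1
    return current

def control_loop(get_utilization, scale_to, interval_sec=300):
    """Poll utilization and hand scaling decisions to the provisioning layer."""
    current = MIN_INSTANCES
    while True:
        target = flex(current, get_utilization())
        if target != current:
            scale_to(target)    # provisioning API supplied by the cloud manager
            current = target
        time.sleep(interval_sec)
```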

We've seen with our customers that there is a move toward different application architectures that can take advantage of that flexing capability in Web applications and Java applications. They're very much in that domain, and we see that the next round of benefits is going to come from the production environments. But it does require you to have a solid infrastructure that knows how to dynamically manage flexing over time.

It’s going to be a great opportunity for additional benefits, but as Rick said, you don't want to build cloud silos. You don't want to have one for dev, one for QA, one for help desk. You really need a platform that can support all of those, so you get the benefits of the pooling. It's more than just virtualization. We have customers that are heavily VMware-centric. They can be highly virtualized, 60 percent-plus, but the utilization isn’t where they need it to be. And it's all about how you can bring that automation and control into that environment.

Gardner: The next myth goes to Ajay. This is what I hear more than almost any other: "There is no cost justification. The cloud is going to cost the same or even more." Why is that cynicism unjustified?

Patel: One of the main things that proves to be untrue is this: when you build a private cloud, you're pooling the capabilities of the IT technology that was building the individual islands of environments, and on top of that, you're increasing utilization. Today, in the industry, I believe overall virtualization is less than 40 percent. If less than 40 percent of the environment is virtualized, the remaining 60 percent is not.

Even if you take 30 percent, which is average utilization -- it's 15-20 percent in the Windows environment -- and put it on a private cloud, you're increasing the utilization to 60, 70, 80 percent. If you can hit 85 percent utilization of the resources, you're buying that much less of every piece of hardware, software, storage, and network.
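
The arithmetic behind that claim is easy to check. A back-of-the-envelope calculation, with illustrative numbers only, shows how the server count for a fixed workload shrinks as average utilization rises:

```python
import math

# Back-of-the-envelope only: how many servers does a fixed workload need at
# different average utilization levels? The workload size is arbitrary.
workload_units = 100          # total work to be hosted, in arbitrary units
capacity_per_server = 1.0     # units of work a fully utilized server can carry

def servers_needed(avg_utilization):
    return math.ceil(workload_units / (capacity_per_server * avg_utilization))

for util in (0.15, 0.30, 0.60, 0.85):
    print(f"{util:.0%} utilization -> {servers_needed(util)} servers")

# 15% utilization -> 667 servers
# 30% utilization -> 334 servers
# 60% utilization -> 167 servers
# 85% utilization -> 118 servers
```

Moving from 15 percent to 85 percent utilization cuts the server count by more than 80 percent, which is where the savings on hardware, software, storage, and networking come from.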

When you pool all the different projects together, you build an environment and put the right infrastructure in place to service the business you run. You end up saving a minimum of 20 percent, even if you just keep the current service level agreements (SLAs) and current deliverables the way they are today.
But, if you retrain your staff to become cloud administrators -- to essentially become more agile in the ability to create workloads that are virtual-capable rather than standalone-capable -- you get much more benefit, and your cost of entry is minimally 20-30 percent lower on day one. Going forward, you can get more than 50 percent lower cost.

[Private cloud] is killing two birds with one stone, because not only can you re-utilize the elasticity of a 100,000-square-foot data center facility, but you can now put in two to three times more compute capacity without breaking the barriers of power, cooling, heating, and all the other components. And by having cloud within your data center, the disaster-recovery capability of cloud failover is inherent in the framework of the cloud.

You no longer have to worry about individual application-based failover. Now, you're looking at failing over an infrastructure instead of applications. And, of course, the framework of the cloud itself gives you much higher availability, from the perspective of hardware uptime and the SLAs, than you can obtain by individually building sets of servers for test, dev, QA, or production.

Days to hours

Operationally, beyond the initial setup of the private cloud environment, the cost to IT and the IT budget go down drastically. Based on our interactions with end users and our cloud providers, deployment time drops from anywhere between 11 and 15 days down to three or four hours.

In the old infrastructure deployment model, those days start with the hardware sitting on the dock; compare that with the cloud model. Break those three to four hours down into individual components: it used to take one to three days just to build the server, rack it, power it, and connect it.

Today, within the private cloud environment, it takes 10 minutes to install the operating system; it used to take one to two days, maybe two-and-a-half, depending on the patches and the add-ons. Provisioning from a template that is available within the private cloud takes 30 to 60 minutes, and setting up the dev environments at the application layer goes down from days to 30 minutes.
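
Summing the figures quoted above shows where the days-to-hours claim comes from. Treat these as rough, illustrative numbers; the split of the old model's days is an assumption, not something stated in the interview:

```python
# Rough totals only. The old-model breakdown below is an assumed split of the
# 11-15 days quoted above; the cloud-model figures come from the same passage.
old_model_days = {
    "procurement and dock time": 6,        # assumed share of the 11-15 days
    "build, rack, power, connect": 3,
    "OS install and patching": 2.5,
    "application/dev environment setup": 2,
}
cloud_model_minutes = {
    "OS install from template": 10,
    "application/dev environment from template": 60,
}

old_total_days = sum(old_model_days.values())
cloud_total_hours = sum(cloud_model_minutes.values()) / 60

print(f"Old model: roughly {old_total_days} days")            # ~13.5 days
print(f"Cloud model: roughly {cloud_total_hours:.1f} hours")  # just over an hour of hands-on work,
# which lands in the three-to-four-hour range once approvals and queueing are included
```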

When you combine all that, the operational efficiency you gain definitely puts your IT staff at a much greater advantage than your competitor.

Gardner: Ajay just pointed out that there is perhaps a business continuity benefit here. If your cloud is supporting infrastructure, rather than individual apps, you can have failover, reliability, redundancy, and disaster recovery at that infrastructure level, and therefore across the board.

What's the business continuity story and does that perhaps provide a stepping stone to hybrid types of computing models?

Parker: To backtrack just a little bit, at Fetch Technologies, we've cut our data-center cost in half by switching to a private cloud. That's just one of the cost benefits that we've experienced.

Going back to the private cloud cost, one of the myths is that you have to buy a whole new set of cloud technology -- cloud hardware -- to create a private cloud. That's not true. In most cases, many of the components of a private cloud are just redeployed existing hardware, because the cloud is more a matter of configuration than of specific cloud hardware.

In other words, you can reconfigure existing hardware into a private cloud. You don't necessarily need to buy anything, and there is really no such thing as specific cloud hardware. There are some hardware systems and models that are more optimal in a private cloud environment, but that doesn't necessarily mean you need to buy them to start. You can use the initial cost savings from virtualization to pay for more optimal hardware later, but you don't have to start with the most optimal hardware to build a private cloud.

As far as the business continuity, what we've found is that the benefit is more for up-time maintenance than it is for reliability, because most systems are fairly reliable. You don't have servers failing on a day-to-day basis.

Zero downtime

We have systems -- at least one server -- that have been up for two years with zero downtime. For updating firmware, we can VMotion virtual machines off to other hosts, upgrade the host, and then VMotion those virtual servers back onto the upgraded host, so we have zero-downtime maintenance. That's almost more important than reliability, because reliability is generally fairly good.

Gardner: Is there another underlying value here -- that by moving to private cloud, you're in a better position to start leveraging hybrid cloud, that is to say more SaaS, using third-party clouds for specific IaaS, or perhaps, over time, moving part of your cloud into their cloud?

Is there a benefit, in terms of getting expertise around private cloud, that sets you up to be in a better position to enjoy some of the benefits of the more expansive cloud models?

Muelhoefer: That's a really interesting question, because one of the main reasons that a lot of our early customers came to us was because there was uncontrolled use of external cloud resources. If you're a financial services company or somebody else who has compliance and security issues and you have people going out and using external clouds and you have no visibility into that, it's pretty scary.

We offer a way to provide a unified view of all your IT service usage, whether it's inside your company being serviced through your internal organization or potentially sourced through an external cloud that people may be using as part of their overall IT footprint. It's really the ability to synthesize and figure out -- if an end user is making a request, what's the most efficient way to service that request?
Is it to serve up something internally or externally, based on the business policies? Is it using very specific customer data that can't go outside the organization? Does the application that goes with it have a latency issue in how it's served? It's about providing a lot of business-policy context on how to best serve that request, whether the objective you're working against is cost, compliance, or security.
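
A toy version of that placement decision could look like the sketch below. This is a generic illustration of policy-driven placement, not Platform ISF's actual model; the request attributes, thresholds, and break-even figure are hypothetical:

```python
# Hypothetical sketch: decide whether a request should be serviced from the
# internal pool or an external cloud, based on simple business policies.
from dataclasses import dataclass

@dataclass
class Request:
    contains_regulated_data: bool   # e.g., customer data that cannot leave the org
    max_latency_ms: int             # latency tolerance of the workload
    duration_days: int              # how long the capacity is needed

EXTERNAL_LATENCY_MS = 40            # assumed round trip to the external cloud
EXTERNAL_BREAK_EVEN_DAYS = 30       # assumed: short bursts are cheaper externally

def place(request: Request) -> str:
    if request.contains_regulated_data:
        return "internal"                        # compliance trumps cost
    if request.max_latency_ms < EXTERNAL_LATENCY_MS:
        return "internal"                        # external cloud is too far away
    if request.duration_days < EXTERNAL_BREAK_EVEN_DAYS:
        return "external"                        # short-lived burst: rent it
    return "internal"                            # long-running: cheaper to own

print(place(Request(contains_regulated_data=False, max_latency_ms=100, duration_days=7)))
# -> "external"
```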

That’s one key thing. Another important aspect we see with our customers is disaster recovery and reliability. We've been working with a lot of our larger customers to develop a unique ability to do Active/Active failover. We actually have customers whose applications run in real time across multiple data centers.

So, in the case of not just the application going down, but an entire data center going down, they would have no loss of continuity of those resources. That’s a pretty extreme example, but it goes to the point of how important meeting some of those metrics is for businesses making that cost justification.

Stepping stone

Gardner: We started out with some cynicism, risk, and myths, but it sounds like private clouds are a stepping stone and, at the same time, attainable. The cost structure sounds very attractive, certainly based on Rick and Ajay’s experiences.

Jay, where do you start with your customers for Platform ISF, when it comes to ease of deployment? Where do you start that conversation? I imagine that they are concerned about where to start. There is a big set of things to do when it comes to moving towards virtualization and then into private cloud. How do you get them on a path where it seems manageable?

Muelhoefer: We like to engage with the customer and understand what their objectives are and what's bringing them to look at private cloud. Is it the ability to be a lot more agile and deliver applications to end users in minutes, is it more on the cost side, or is it a mix of the two? It's engaging with them on a one-on-one basis and/or working with partners like Agilysys, where we can build out that roadmap for success. That typically involves understanding their requirements and doing a proof of concept.

Something that’s very important to building the business case for private cloud is to actually get it installed and working within your own environment. Look at what types of processes you're going to be modifying in addition to the technologies that you’re going to be implementing, so that you can achieve the right set of pooling.

Maybe you're a very VMware-centric shop, but you don’t want to be locked into VMware, so you want to look at KVM or Xen for non-production use cases and what you're doing there. Are you looking at how you can make yourself more flexible and leverage those external cloud resources? How can you bring physical into the cloud and do it at the right price point?

A lot of people are looking at the licensing issue of cloud, and there are a lot of different alternatives, whether it's per VM, which is quite expensive, or per socket, and helping build out that value roadmap over time.

For us, we have a free trial on our website that people can use. They can also go to our website to learn more which is http://www.platform.com/privatecloud. We definitely encourage people to take a look at us. We were recently named the number one private cloud management vendor by Forrester Research. We are always happy to engage with companies that want to learn more about private cloud.


<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) described SecurityAutomata: A Reference For Security Automation… in a 6/24/2011 post to his Rational Survivability blog:

The SecurityAutomata Project is themed toward enabling consumers, service and technology solution providers to collectively share knowledge on how to automate and focus on the programmability of “security” across physical, virtual and cloud environments.

It’s a bit of an experiment, really. I want to enable better visibility into the state-of-the-art (as it were) of security automation by providing a neutral ground to discuss and demonstrate how security can be automated in physical, virtual and cloud computing environments.

There are many solutions available today but it’s often difficult to grasp how the approaches differ from one another and what sort of capabilities must exist to get them to work.

Please help us organize and contribute content to the SecurityAutomata Wiki here.

/Hoff


Maureen O’Gara claimed “NY Times: FBI had confiscated three racks of servers and the equipment plugged into them from space in a data center in Virginia” as a deck for her Perils of the Cloud – FBI Seizure post of 6/23/2011 to the Cloud Security Journal blog:

As Andy Grove would say, "Only the paranoid survive."

After Amazon went down and the general chatter suddenly became shot through with talk of availability zones and redundancy, public cloud users and people hesitating to use a public cloud started articulating another deeply held fear: what if the FBI or the CIA or one of their spook friends seized the multi-tenant machine.

Roundarch, for instance, one of Abiquo's enterprise customers, raised the potential risk with Abiquo CEO Peter Malcolm after the Sony hacks.

At the time - which was right before the recent flood of cyber break-ins started - it seemed a tad far-fetched that an innocent multi-tenant cloud user could be shut down because the government seized the machines he was using.

Au contraire.

News broke Tuesday justifying Roundarch's paranoia.

The New York Times reported that in the wee hours that morning the FBI had confiscated three racks of servers and the equipment plugged into them from space in a data center in Reston, Virginia, leased by a web hoster in Switzerland called DigitalOne.

It caused a number of evidently innocent bystanders, like New York publisher Curbed Networks and Instapaper, to go down or look for lodgings elsewhere.

The Times and then Cnet reported that feds were only after one specific DigitalOne client, still unidentified.

Cnet quoted DigitalOne CEO Sergej Ostroumow as saying, the "FBI was interesting only in one of the clients and it is absolutely unintelligible why they took servers of tens of clients. After FBI's unprofessional ‘work' we can not restart our own servers, that's why our web site is offline and support doesn't work."

Ostroumow told the Times the feds took more servers than they had to even though DigitalOne pinpointed the servers they were looking for through the specific IP address.

The Times got a statement from an unidentified government official that suggested the FBI was hot on the trail of the Lulz Security miscreants and was working in concert with the CIA and their counterparts in Europe.

The Times said DigitalOne had no employees on-site and thought a technical hitch was responsible for the outage until the data center operator explained three hours later that they had been raided.

Peter Malcolm says, "The fact that the FBI appears to have walked in and removed hardware irrespective of what was running on it, or what the consequences would be to entirely innocent third parties, is alarming to say the least."


<Return to section navigation list> 

Cloud Computing Events

My (@rogerjenn) Giga Om Structure Conference 2011 - Links to Archived Videos for 6/23/2011 post of 6/24/2011 provided links to video archives for all but three sessions from the Structure Conference’s second day.

I’ll update the post if and when the missing videos are available. The video archives for Wednesday are at Giga Om Structure Conference 2011 - Links to Archived Videos for 6/22/2011.


Jo Maitland (@JoMaitlandTT) posted Dispatches from Structure cloud conference to the SearchCloudComputing.com blog on 6/24/2011:

For a who's who of cloud industry bigwigs debating how to do cloud computing versus the standard "what is" cloud computing blah-blah, this week's Structure conference was the place to be. Let's face it, that "what is" debate is seriously old.

Unfortunately, Amazon's Werner Vogels didn't get the memo. Or maybe that was a hologram of him on stage from 12 months ago? Same speech, different year. We get it Werner: Cloud, Day 1…

Simon Crosby introduced a new way to resign, on stage, announcing his departure from Citrix Systems where he ran the data center and cloud business. He's started a new company in the security market called Bromium, with his co-founder from XenSource, Ian Pratt, and Gaurav Banga, previously CTO at Phoenix Technologies.

Crosby was tight-lipped about the company's plans, except to say that Bromium will get more granular than existing VM-centric protection strategies. For that, he'll have to take it to the hardware. George Kurtz, CTO and EVP at McAfee/Intel, is on the Bromium board, so the company could be working on some kind of hypervisor plug-in that'll work with TPM/TXT. Meanwhile, Crosby's departure leaves big shoes to fill on the data center and cloud side of the house over at Citrix.

Anyone who thinks cloud isn't displacing jobs should talk with Dries Buytaert, co-founder and CTO of Acquia, a Drupal-based PaaS. During a panel on the future of cloud, he said one of the largest media and entertainment companies has moved a bunch of sites to the Acquia service and let go the "entire IT team" that was running those sites. Word to the wise: If your job title is "Web master" at Acme Corp., watch out.

John Dillon, CEO of Engine Yard, described the current bubble in cloud computing as "funding pollution." We'll see if his company avoids getting lost in the fog. VCs at the conference said that we're in the middle of the cloud bubble now and the inevitable burst is on its way.

Meanwhile, the world's increasing appetite for compute and storage will keep Microsoft afloat as its business model shifts from selling shrink-wrapped software to SaaS. At least according to Microsoft's new Azure chief, Satya Nadella.

"As long as our software operating system at the cloud level or the server level is something customers are willing to write to and use, we will be competitive," he said. How the mighty have fallen, from global IT domination just a few years ago to fighting for a place in line for cloud compute services. And Satya ,in case you're reading, lose the snoozer PR spin. If you want the Valley crowd to look up from TweetDeck, you better say something that's not already plastered across the Web.

Structure talk on taking the next step in cloud
Moving on, there was a lot of talk about the use of solid state drives (SSDs) to accelerate performance. It was all about beefing up boxes and not that interesting. I am intrigued, though, by Nasuni's plans. CEO Andres Rodriguez says that to understand what Nasuni is doing next, look no further than the Apple iCloud logo. It's a simple drawing of a cloud on brushed aluminum, meaning the cloud is not something out there and unknown, like Microsoft's ubiquitous "To the Cloud" campaign suggests. Instead it's right there, in your hand, on your Apple device. Bye bye to fears and anxiety about moving to the cloud.

Nasuni aims to achieve the same effect with its cloud file server by taking full responsibility, through service-level agreements (SLAs), for the entire service, from its filer inside your data center out to either Amazon S3 or Microsoft Azure on the back end. Previously, the company left the cloud component of its product up to the customers to figure out, offering them a choice of cloud providers to connect with. Taking on responsibility for the cloud end, with SLAs around the whole service, will certainly ease enterprise IT fears about moving to the cloud. But unlike Apple, Nasuni doesn't own the cloud it's selling, so it can only hope Amazon S3 and Microsoft Azure do a good job. It's a bold move though, and worth watching.

Another interesting tidbit from a storage luminary came by way of NetApp founder Dave Hitz, who said he's fed up with hearing that nobody's running anything mission critical on the cloud.

"Try turning off payroll and tell me if that's not mission critical," he said. Payroll is perhaps one of the oldest SaaS apps around. He's got a point; thank you, ADP.

And last but not least, there's an "open hardware" movement gathering in cloud that involves bundling the OpenStack software with commodity hardware, which companies in this space say will enable true cloud-to-cloud portability. Randy Bias's company Cloudscaling is working on something in this area, as is former NASA CTO Chris Kemp's company, funded by Kleiner Perkins and possibly called Piston? Ssssh! They are both in stealth mode, and we aren't supposed to know about them yet.

Jo Maitland is the Senior Executive Editor of SearchCloudComputing.com.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


Martin Tantau described The Future of Cloud Computing – 10 Predictions from the Structure 2011 conference in a 6/24/2011 post to the CloudTimes blog:

The cloud is revolutionizing computing as businesses and organizations shift from the client-server model to cloud computing. In the next few years, technology experts and users expect to ‘live mostly in the cloud’ as they work through cyberspace-based applications accessed from networked devices.

GigaOM Pro’s research reveals new opportunities in the cloud, new architectures and startups in the space.

Here are the 10 cloud predictions for the next year.

  1. Large organizations will host important applications with cloud providers like AWS and Rackspace. Also, commodity IaaS providers will build up their services for these enterprises.
  2. There will be an increase in solid-state drives among commodity and enterprise IaaS providers. New classes of applications and services will be made possible to run optimally in the cloud.
  3. In the private cloud space, there will be contraction. The presence of large vendors and OpenStack-based products will push less-successful startups toward acquisition. Specialized private cloud startups won’t have a hard time finding buyers, as larger vendors begin rounding out their private cloud portfolios with acquisitions that deliver specific capabilities.
  4. As large vendors realize the need to stake their claims, PaaS acquisitions and launches will be a profitable market. HP, Dell and even Oracle will facilitate PaaS in their public and/or private offerings.
  5. The convergence of big data will continue. It will result in advanced analytics features, publicly hosted data-crunching services like Amazon Elastic MapReduce, and optimization of private cloud software as it incorporates Hadoop clusters or other parallel-processing systems into the cloud infrastructure.
  6. Bigger revenue will be generated for startups that address data-center-to-cloud latency. Even with the rapid improvement of intra-cloud computing, storage and networking performance, one hindrance to moving some application types to the public cloud will be the large quantities of existing data.
  7. AWS will be launching a partner program to increase integration with private cloud software. There will also be an open source play for the growing OpenStack support.
  8. After a data breach involving an IaaS cloud or cloud storage service, we will see the emergence of a de facto or official cloud security standard. Cloud providers will be driven to agree on a security protocol that is much better than what they currently have.
  9. There will be an increase in PaaS offerings for specialized mobile platforms because of the popularity of Apple’s iCloud and other consumer-focused cloud services. Current PaaS offerings are not well-suited for mobile applications. Developers will look into cloud-based gaming and other mobile applications.
  10. Data virtualization will pick up momentum as data integration gives way. Data virtualization offers the benefits of centralized access without the need to maintain an extract-transform-load (ETL) system or as large a data warehouse; these are critical differences as data sources multiply to include SaaS applications, cloud servers and mobile devices.

Brian Swan (@brian_swan) posted SQL Server JumpIn! Camp Wrap Up on 6/24/2011:

As I arrived yesterday for day 4 (the last day) of the SQL Server JumpIn! Camp, one participant said to me, “I’m starting to feel worn down.” I think that was the general sentiment of everyone that was there…and with good reason. During each day of the camp, PHP developers worked side-by-side with Microsoft developers to add SQL Server and SQL Azure support to their projects, but nearly everyone put in many extra hours late at night (and even early morning!) to add support for other Microsoft technologies (such as IIS, Web Platform Installer, and Windows Azure). The amount of work done by the participants was incredible. You can get a sense of just how much progress was made by this picture of our “progress” board taken on the last day of the camp:

JIC-Work_done

I can’t say thanks enough to all the participants for being 100% invested in the camp and for going the extra mile to investigate how their projects might be able to integrate Microsoft technologies beyond SQL Server and SQL Azure. I know that the entire SQL Server team echoes my thanks.

And now, the hard work for Microsoft developers begins. A primary goal of the camp was for Microsoft developers to learn from PHP developers so that they can build Microsoft products that work better and better with PHP and PHP applications/frameworks. Throughout the camp, we tracked requests by asking participants to post their “wish list” items on a white board. This picture will give you a sense for some of the feedback we received, though we actually got much more feedback in one-on-one conversations:

JIC-Wish_list

So now it’s time to take this feedback, prioritize it, plan, and act on it. We did this after the last SQL Server JumpIn! Camp, and the progress we have made since then was well received at this camp. With more hard work, we will make similar progress by the next camp.

An observation: The camp wasn’t all about progress and feedback. Some of the best conversations centered on why something was the way it was. PHP developers learned why a Microsoft feature or API was designed the way it was, and Microsoft developers learned why those features/APIs might pose hurdles for PHP developers. In some cases, these conversations led to something actionable, but in other cases it just led to understanding. This, IMHO, was one of the most valuable aspects of the camp. That mutual understanding will eventually lead to better interoperability.

Finally, don’t get the impression that the camp was all work. It wasn’t. We had a BBQ competition, dinner at Seattle’s Space Needle, and wine tasting/dinner at the Columbia winery in Woodinville. In addition to all that, everyone found time (at the expense of enough sleep!) to catch up with old friends and make new ones. I, personally, had a great time getting to know new people or people I had only met briefly or know only through Twitter and/or blogs.

I think it’s safe to say that you will see more posts (of a technical nature) that are the fruit of this camp (I learned a ton and want to share what I learned). In the meantime, another HUGE thanks to all the camp participants! I’m looking forward to the next camp.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Matthew Weinberger (@MattNLM) announced DotCloud ‘Next Generation’ PaaS Enters General Availability in a 6/24/2011 post to the TalkinCloud:

DotCloud’s namesake “next-generation” platform-as-a-service offering has left beta and has entered general availability, adding high availability and scaling options. So what’s so futuristic about DotCloud? The provider is boasting it’s one platform that provides automatically managed development and deployment environments for no fewer than 12 stacks and databases. If that sounds good to you, sign-ups are free, with paid plans starting at $99/month. Here are some more details:

According to DotCloud’s press materials, the modern enterprise developer faces the problem of using standardized technologies to minimize risk when building and running applications. But by combining management and monitoring for various stacks into one platform, DotCloud removes the need for expertise in any one scripting language.

The full list of languages and databases supported by DotCloud at launch includes: PHP, Ruby, Python, Perl, Java, Node.JS, MySQL, Redis, RabbitMQ, Solr, MongoDB and PostgreSQL. And those newly available HA and scaling capabilities do exactly what they say on the tin: host your application across several availability zones with automatic failover, and add additional resources for your app on demand as needed, respectively.

Here’s what Solomon Hykes, co-founder and CEO of DotCloud, had to say about the PaaS offering’s value proposition in a prepared statement:

“Cloud computing has had a huge impact on how companies build and manage their infrastructure, but we are still in the early phases of this revolution. With DotCloud, cloud computing and PaaS are making a huge stride forward — for the first time enabling companies to innovate with new development stacks without increasing complexity and cost in their architecture.”

Hykes talks a good game, and no doubt, DotCloud will garner some fans in the cloud ISV world. But with Microsoft Windows Azure recruiting its own platform-as-a-service developer base, can this startup make a dent in the marketplace?


James Niccolai (@jniccolai) reported Four Companies Rethink Databases for the Cloud in a 6/23/2011 post to PCWorld’s Business Center from Giga Om’s Structure conference:

Several companies are developing new database technologies to solve what they see as the shortcomings of traditional, relational database management systems in a cloud environment. Four of them described the approaches they're taking during a panel at the GigaOm Structure conference on Thursday.

The basic problem they're trying to solve is the difficulty of scaling today's RDBMS systems across potentially massive clusters of commodity x86 servers, and doing so in a way that's "elastic," so that an organization can scale its infrastructure up and down as demand requires.

"The essential problem, as I see it, is that existing relational database management systems just flat-out don't scale," said Jim Starkey, a former senior architect at MySQL and one of the original developers of relational databases.

Starkey is founder and CTO of NimbusDB, which is trying to address those problems with a "radical restart" of relational database technology. Its software has "nothing in common with pre-existing systems," according to Starkey, except that developers can still use the standard SQL query language.

NimbusDB aims to provide database software that can scale simply by "plugging in" new hardware, and that allows a large number of databases to be managed "automatically" in a distributed environment, he said. Developers should be able to start small, developing an application on a local machine, and then transfer their database to a public cloud without having to take it offline, he said.

"One of the big advantages of cloud computing is you don't have to make all the decisions up front. You start with what's easy and transition into another environment without having to go offline," he said.

NimbusDB's software is still at an "early alpha" stage, and Starkey didn't provide a delivery date Thursday. The company expects to give the software away free "for the first couple of nodes," and customers can pay for additional capacity, he said. Its product is delivered as software, rather than a service, but not open-source software, Starkey said.

Xeround aims to solve similar problems as NimbusDB but with a hosted MySQL service that's been in beta with about 2,000 customers and went into general availability last week, said CEO Razi Sharir. It, too, wants to offer the elasticity of the cloud with the familiarity of SQL coding.

"We're a distributed database that runs in-memory, that splits across multiple virtual nodes and multiple data centers and serves many customers at the same time," he said. "The scaling and the elasticity are handled by our service automatically."

Xeround is designed for transactional workloads, and the "sweet spot" for its database is between 2GB and 50GB, Sharir said.

Its service is available in Europe and the U.S., hosted by cloud providers including Amazon and Rackspace. While Xeround is "cloud agnostic," cloud database customers in general need to run their applications and database in the same data center, or close to each other, for performance reasons.

"If your app is running on Amazon East or Amazon Europe, you'd better be close to where we're at. The payload [the data] needs to be in the same place" as the application, he said.

Unlike Xeround, ParAccel's software is designed to run analytics workloads, and the sweet spot for its distributed database system is "around the 25TB range," said CTO Barry Zane.

"We're the epitome of big data," he said. ParAccel's customers are businesses that rely on analyzing large amounts of data, including financial services, retail and online advertising companies.

One customer, interclick, uses ParAccel to analyze demographic and click-through data to let online advertising firms know which ads to display to end users, he said. It has to work in near real-time, so interclick runs an in-memory database of about 2TB on a 32-node cluster, Zane said. Other customers with larger data sets use a disk-based architecture.

ParAccel also lets developers write SQL queries, but with extensions so they can use the MapReduce framework for big-data analytics.

"SQL is a really powerful language, it's very easy to use for amazingly sophisticated stuff, but there's a class of things SQL can't do," he said. "So what you've seen occurring at ParAccel, and frankly at our competitors, is the extensibility to do MapReduce-type functions directly in the database, rather than try to move terabytes of data in and out to server clusters."

Cloudant, which makes software for use on-premise or in a public cloud, was the only company on the panel that has developed a "noSQL" database. It was designed to manage both structured and unstructured data, and to shorten the "application lifecycle," said co-founder and chief scientist Mike Miller.

"Applications don't have to go through a complex data modelling phase," he said. The programming interface is HTTP, Miller said. "That means you can sign up and just start talking to the database from a browser if you wanted to, and build apps that way. So, we're really trying to lower the bar and make it easier to deploy."

"We also have integrated search and real-time analytics, so we're trying to bring concepts from the warehouse into the database itself," he said.

The company's software is hosting "tens of thousands of applications" on public clouds run by Amazon EC2 and SoftLayer Technologies, according to Miller.

Cloudant databases vary from a gigabyte all the way to 100TB, he said. Customers are running applications for advertising analytics, "datamart-type applications," and "understanding the connections in a social graph -- not in an [extract, transform and load] workflow kind of way using Hadoop, but in real time," he said.

While cloud databases can solve scaling problems, they also present new challenges, the panelists acknowledged. The quality of server hardware in the public cloud is "often a notch down," said Zane, so companies for whom high-speed analytics are critical may still want to buy and manage their own hardware, he said.

And while many service providers claim to be "cloud agnostic," the reality is often different, Miller said. Cloud software vendors need to do "a lot of reverse engineering" to figure out what the architectures at services like Amazon EC2 look like "behind the curtain," in order to get maximum performance from their database software.

Still, Sharir and Zane were both optimistic that "big data analytics" would be the "killer application" for their products. For Starkey it is simply "the Web."

"Everyone on the Web has the same problem, this very thin pipe trying to get into database systems," he said. "Databases don't scale, and it shows up in a thousand places."


SearchCloudComputing.com asserted “Enterprises have been lagging in adopting cloud computing, but open source is progressively making it more attractive for enterprise users to make the jump into the cloud” as a deck for the 00:17:34 Open source's influence on enterprise cloud CloudCover video segment of 6/22/2011:

Though enterprises have hesitated in adopting the cloud, new innovations in integrating open source into services are making the move more enticing for IT pros. Abiquo's CEO Pete Malcolm explains how the company is bridging problems many enterprises have with the public cloud in this week's episode of Cloud Cover TV.

We discuss:

  • OpenStack's impact in the cloud computing world
  • The community version of Abiquo's Infrastructure as a Service and enterprise growth
  • What features people will be willing to pay for
  • Provisioning virtual networks
  • Why it's taking so long for cloud to take off with enterprises
  • Forrester's research on the private cloud
  • The criteria for scoring private cloud companies in Forrester's study
  • Abiquo's plans for the future with a hypervisor solution
  • Bridging problems enterprises have with public cloud
  • The impact cost has on the cloud computing world

Click here to watch the video segment.

Full disclosure: I’m a paid contributor to the SearchCloudComputing.com blog.


<Return to section navigation list> 
