Friday, December 17, 2010

Windows Azure and Cloud Computing Posts for 12/17/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Updated Valery Mizonov’s Best Practices for Maximizing Scalability and Cost Effectiveness of Queue-Based Messaging Solutions on Windows Azure post in the Azure Blob, Drive, Table and Queue Services section on 12/20/2010 at his request.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are available for HTTP download at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.


Azure Blob, Drive, Table and Queue Services

The Windows Azure Storage Team posted on 12/17/2010 a workaround for ERROR_UNSUPPORTED_OS seen with Windows Azure Drives:

When running in the cloud, a few customers have reported that CloudDrive.InitializeCache() and CloudDrive.Mount() occasionally fail with ERROR_UNSUPPORTED_OS when running with SDK 1.3. This error can occur when your service calls a CloudDrive API before the Azure CloudDrive service has been started by the operating system. In older versions of the SDK, your application was started later, so the Azure CloudDrive service would have already been running.

We will fix this issue in a future release. In the meantime, we recommend working around the issue by retrying the first CloudDrive API your service calls. The following is an example of code that retries the CloudDrive.InitializeCache() operation.   A similar loop should be placed around any CloudDrive APIs that your application may call first, including CloudDrive.Create, CloudDrive.Mount, and/or CloudDrive.GetMountedDrives.

// Retry the first CloudDrive call for up to 30 attempts, waiting 10 seconds
// between attempts while the Azure CloudDrive service finishes starting.
for (int i = 0; i < 30; i++)
{
    try
    {
        CloudDrive.InitializeCache(localCache.RootPath, localCache.MaximumSizeInMegabytes);
        break;
    }
    catch (CloudDriveException ex)
    {
        // Rethrow anything other than the startup race, or give up on the last attempt.
        if (!ex.Message.Equals("ERROR_UNSUPPORTED_OS") || i == 29)
            throw;

        Thread.Sleep(10000);
    }
}


Joe Giardino of the Windows Azure Storage Team provided a workaround for Page Blob Writes in Windows Azure Storage Client Library do not support Streams with non-zero Position on 12/17/2010:

The current Windows Azure Storage Client Library does not support passing in a stream to CloudPageBlob.[Begin]WritePages where the stream position is a non-zero value. In such a scenario the Storage Client Library will incorrectly calculate the size of the data range, which will cause the server to return HTTP 500: Internal Server Error. This is surfaced to the client via a StorageServerException with the message “Server encountered an internal error. Please try again after some time.”

HTTP 500 errors are generally retryable by the client because they relate to an issue on the server side; in this instance, however, it is the client that is supplying the invalid request. As such, this request will be retried by the Storage Client Library N times according to the RetryPolicy specified on the CloudBlobClient (the default is 3 retries with an exponential back-off delay). These retries will not succeed, and all subsequent attempts will fail with the same error.
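
As an aside, the retry behavior mentioned above is configurable. The following is a minimal sketch (assuming the Microsoft.WindowsAzure.StorageClient library of that era) of tightening the policy so a request that is invalid by construction fails fast instead of being retried; the connection-string handling is an assumption for illustration.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class RetryPolicySample
{
    static void Main()
    {
        // Illustrative setup; a real role would read its connection string from configuration.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // The default policy retries 3 times with an exponential back-off. For a request
        // that can never succeed (such as the non-zero stream position case above),
        // retries only add latency, so they can be disabled...
        blobClient.RetryPolicy = RetryPolicies.NoRetry();

        // ...or kept, with an explicit retry count and back-off interval.
        blobClient.RetryPolicy = RetryPolicies.RetryExponential(3, TimeSpan.FromSeconds(2));
    }
}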

Workarounds

In the code below I have included a set of extension methods that provide a safe way to invoke [Begin]WritePages; they throw an exception if the source stream is at a non-zero position. You can alter these methods for your specific scenario, for example if you wish to rewind the stream. Future releases of the Storage Client Library will accommodate this scenario as we continue to expand support for page blobs at the convenience layer.

public static class StorageExtensions
{
    /// <summary>
    /// Begins an asynchronous operation to write pages to a page blob while enforcing that the stream is at the beginning
    /// </summary>
    /// <param name="pageData">A stream providing the page data.</param>
    /// <param name="startOffset">The offset at which to begin writing, in bytes. The offset must be a multiple of 512.</param>
    /// <param name="callback">The callback delegate that will receive notification when the asynchronous operation completes.</param>
    /// <param name="state">A user-defined object that will be passed to the callback delegate.</param>
    /// <returns>An <see cref="IAsyncResult"/> that references the asynchronous operation.</returns>
    public static IAsyncResult BeginWritePagesSafe(this CloudPageBlob blobRef, Stream pageData, long startOffset, AsyncCallback callback, object state)
    {
        if (pageData.Position != 0)
        {
            throw new InvalidOperationException("Stream position must be set to zero!");
        }

        return blobRef.BeginWritePages(pageData, startOffset, callback, state);
    }

    /// <summary>
    /// Writes pages to a page blob while enforcing that the stream is at the beginning
    /// </summary>
    /// <param name="pageData">A stream providing the page data.</param>
    /// <param name="startOffset">The offset at which to begin writing, in bytes. The offset must be a multiple of 512.</param>
    public static void WritePagesSafe(this CloudPageBlob blobRef, Stream pageData, long startOffset)
    {
        if (pageData.Position != 0)
        {
            throw new InvalidOperationException("Stream position must be set to zero!");
        }

        blobRef.WritePages(pageData, startOffset, null);
    }

    /// <summary>
    /// Writes pages to a page blob while enforcing that the stream is at the beginning
    /// </summary>
    /// <param name="pageData">A stream providing the page data.</param>
    /// <param name="startOffset">The offset at which to begin writing, in bytes. The offset must be a multiple of 512.</param>
    /// <param name="options">An object that specifies any additional options for the request.</param>
    public static void WritePagesSafe(this CloudPageBlob blobRef, Stream pageData, long startOffset, BlobRequestOptions options)
    {
        if (pageData.Position != 0)
        {
            throw new InvalidOperationException("Stream position must be set to zero!");
        }

        blobRef.WritePages(pageData, startOffset, options);
    }
}
Summary

The current Storage Client Library requires an additional check prior to passing a Stream to CloudPageBlob.[Begin]WritePages in order to avoid producing an invalid request. Using the code above, or applying similar checks at the application level, can avoid this issue. Please note that other blob types and methods (for example, CloudBlob.UploadFromStream) are unaffected by this issue, and we will be addressing it in a future release of the Storage Client Library.


Valery Mizonov posted Best Practices for Maximizing Scalability and Cost Effectiveness of Queue-Based Messaging Solutions on Windows Azure on 12/14/2010 (updated at Valery’s request on 12/20/2010):

This whitepaper describes several best practices for building scalable, highly efficient and cost effective queue-based messaging solutions on the Windows Azure platform. The intended audience for this whitepaper includes solution architects and developers designing and implementing cloud-based solutions which leverage the Windows Azure platform’s queue storage services.

Introduction

A traditional queue-based messaging solution utilizes the concept of a message storage location known as a message queue, which is a repository for data that will be sent to or received from one or more participants, typically via an asynchronous communication mechanism.

The queue-based data exchange represents the foundation of a reliable and highly scalable messaging architecture capable of supporting a range of powerful scenarios in the distributed computing environment. Whether it’s high-volume work dispatch or durable messaging, a message queuing technology can step in and provide first-class capabilities to address the different requirements for asynchronous communication at scale.

The purpose of this whitepaper is to examine how developers can take advantage of particular design patterns, in conjunction with capabilities provided by the Windows Azure platform, to build optimized and cost-effective queue-based messaging solutions. The whitepaper takes a deeper look at the most commonly used approaches for implementing queue-based interactions in Windows Azure solutions today and provides recommendations for improving performance, increasing scalability and reducing operating expense.

The underlying discussion is mixed with relevant best practices, hints and recommendations where appropriate. The scenario described in this whitepaper highlights a technical implementation that is based upon a real-world customer project.

Customer Scenario

For the sake of a concrete example, we will generalize a real-world customer scenario as follows.

A SaaS solution provider launches a new billing system implemented as a Windows Azure application servicing the business needs for customer transaction processing at scale. The key premise of the solution is centered upon the ability to offload compute-intensive workload to the cloud and leverage the elasticity of the Windows Azure infrastructure to perform the computationally intensive work.

The on-premises element of the end-to-end architecture consolidates and dispatches large volumes of transactions to a Windows Azure hosted service regularly throughout the day. Volumes vary from a few thousand to hundreds of thousands of transactions per submission, reaching millions of transactions per day. Additionally, assume that the solution must satisfy an SLA-driven requirement for a guaranteed maximum processing latency.

The solution architecture is founded on the distributed map-reduce design pattern and is comprised of a multi-instance Worker role-based cloud tier using Windows Azure queue storage for work dispatch. Transaction batches are received by the Process Initiator Worker role instance, decomposed (de-batched) into smaller work items and enqueued into a range of Windows Azure queues for the purposes of load distribution.

Workload processing is handled by multiple instances of the Processing Worker role fetching work items from queues and passing them through computational procedures. The processing instances employ multi-threaded queue listeners to implement parallel data processing for optimal performance.

The processed work items are routed into a dedicated queue from which these are dequeued by the Process Controller Worker role instance, aggregated and persisted into a data store for data mining, reporting and analysis.

The solution architecture can be depicted as follows:

[Diagram: Windows Azure queue-based sample architecture]

The diagram above depicts a typical architecture for scaling out large or complex compute workloads. The queue-based message exchange pattern adopted by this architecture is also very typical for many other Windows Azure applications and services which need to communicate with each other via queues. This enables taking a canonical approach to examining specific fundamental components involved in a queue-based message exchange.

Queue-Based Messaging Fundamentals

A typical messaging solution that exchanges data between its distributed components using message queues includes publishers depositing messages into queues and one or more subscribers intended to receive these messages. In most cases, the subscribers, sometimes referred to as queue listeners, are implemented as single- or multi-threaded processes, either continuously running or initiated on demand as per a scheduling pattern.

At a higher level, there are two primary dispatch mechanisms used to enable a queue listener to receive messages stored on a queue:

  • Polling (pull-based model): A listener monitors a queue by checking the queue at regular intervals for new messages. When the queue is empty, the listener continues polling the queue, periodically backing off by entering a sleep state.

  • Triggering (push-based model): A listener subscribes to an event that is triggered (either by the publisher itself or by a queue service manager) whenever a message arrives on a queue. The listener in turn can initiate message processing thus not having to poll the queue in order to determine whether or not any new work is available.

It is also worth mentioning that there are different flavors of both mechanisms. For instance, polling can be blocking or non-blocking. Blocking keeps a request on hold until a new message appears on a queue (or a timeout is encountered), whereas a non-blocking request completes immediately if there is nothing on a queue. With a triggering model, a notification can be pushed to the queue listeners either for every new message, only when the very first message arrives in an empty queue, or when queue depth reaches a certain level.

Note:
The dequeue operations supported by the Windows Azure Queue Service API are non-blocking. This means that API methods such as GetMessage or GetMessages will return immediately if there is no message found on a queue. By contrast, the Durable Message Buffers (DMB) provided by Windows Azure AppFabric Service Bus accommodate blocking receive operations which block the calling thread until a message arrives on a DMB queue or a specified timeout period has elapsed.

The most common approach to implementing queue listeners in Windows Azure solutions today can be summarized as follows:

  1. A listener is implemented as an application component that is instantiated and executed as part of a worker role instance.

  2. The lifecycle of the queue listener component would often be bound to the run time of the hosting role instance.

  3. The main processing logic is comprised of a loop in which messages are dequeued and dispatched for processing.

  4. Should no messages be received, the listening thread enters a sleep state, the duration of which is often driven by an application-specific back-off algorithm.

  5. The receive loop executes, and the queue is polled, until the listener is notified to exit the loop and terminate.

The following flowchart diagram depicts the logic commonly used when implementing a queue listener with a polling mechanism in Windows Azure applications:

[Flowchart: classic queue listener with a polling mechanism]
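
To make the flowchart concrete, here is a minimal sketch of such a polling listener written against the Microsoft.WindowsAzure.StorageClient queue API of that era; the queue name, delay values and cancellation handling are illustrative assumptions rather than part of the whitepaper's own sample code.

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class ClassicQueueListener
{
    // Illustrative values; a real listener would read these from configuration.
    const string QueueName = "workitems";
    static readonly TimeSpan MinDelay = TimeSpan.FromSeconds(1);
    static readonly TimeSpan MaxDelay = TimeSpan.FromSeconds(30);

    public static void Run(CloudStorageAccount account, CancellationToken cancel)
    {
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference(QueueName);
        queue.CreateIfNotExist();

        TimeSpan delay = MinDelay;

        // Receive loop: dequeue, dispatch for processing, and back off while the queue stays empty.
        while (!cancel.IsCancellationRequested)
        {
            CloudQueueMessage message = queue.GetMessage();

            if (message != null)
            {
                ProcessMessage(message);        // application-specific work
                queue.DeleteMessage(message);   // the second billable transaction of the dequeue
                delay = MinDelay;               // reset the back-off once work is flowing again
            }
            else
            {
                Thread.Sleep(delay);            // idle: sleep, then poll again
                delay = TimeSpan.FromSeconds(
                    Math.Min(delay.TotalSeconds * 2, MaxDelay.TotalSeconds));
            }
        }
    }

    static void ProcessMessage(CloudQueueMessage message)
    {
        // Placeholder for the compute-intensive work described in the customer scenario.
    }
}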

NoteNoBgNote
For purposes of this whitepaper, more complex design patterns, for example those that require the use of a central queue manager (broker) are not used.

The use of a classic queue listener with a polling mechanism may not be the optimal choice when using Windows Azure queues, because the Windows Azure pricing model measures storage transactions in terms of application requests performed against the queue, regardless of whether the queue is empty. The purpose of the next sections is to discuss some techniques for maximizing performance and minimizing the cost of queue-based messaging solutions on the Windows Azure platform.

Best Practices for Performance, Scalability & Cost Optimization

In this section we examine how to improve the relevant design aspects to achieve higher performance, better scalability and cost efficiency.

Perhaps the easiest way to qualify an implementation pattern as a “more efficient solution” is whether its design meets the following goals:

  • Reduces operational expenditures by removing a significant portion of storage transactions that don’t derive any usable work.
  • Eliminates excessive latency imposed by a polling interval when checking a queue for new messages.
  • Scales up and down dynamically by adapting processing power to volatile volumes of work.

The implementation pattern should also meet these goals without introducing a level of complexity that effectively outweighs the associated benefits.

Best Practices for Optimizing Storage Transaction Costs

When evaluating the total cost of ownership (TCO) and return on investment (ROI) for a solution deployed on the Windows Azure platform, the volume of storage transactions is one of the main variables in the TCO equation. Reducing the number of transactions against Windows Azure queues decreases the operating costs as it relates to running solutions on Windows Azure.

In the context of a queue-based messaging solution, the volume of storage transactions can be reduced using a combination of the following methods:

  1. When putting messages in a queue, group related messages into a single larger batch, compress and store the compressed image in a blob storage and use the queue to keep a reference to the blob holding the actual data.

  2. When retrieving messages from a queue, batch multiple messages together in a single storage transaction. The GetMessages method in the Queue Service API enables de-queuing the specified number of messages in a single transaction (see the note below and the code sketch that follows it).

  3. When checking the presence of work items on a queue, avoid aggressive polling intervals and implement a back-off delay that increases the time between polling requests if a queue remains continuously empty.

  4. Reduce the number of queue listeners - when using a pull-based model, use only 1 queue listener per role instance when a queue is empty. To further reduce the number of queue listeners per role instance to 0, use a notification mechanism to instantiate queue listeners when the queue receives work items.

  5. If working queues remain empty for most of the time, auto-scale down the number of role instances and continue monitoring relevant system metrics to determine if and when the application should scale up the number of instances to handle increasing workload.

Most of the above recommendations can be translated into a fairly generic implementation that handles message batches and encapsulates many of the underlying queue/blob storage and thread management operations. Later in this whitepaper, we will examine how to do this.

Important:
When retrieving messages via the GetMessages method, the maximum batch size supported by the Queue Service API in a single dequeue operation is limited to 32. Exceeding this limit will cause a runtime exception.
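
As an illustration of recommendation 2 above, the following is a hedged sketch of batched dequeuing that respects the 32-message limit; the visibility timeout and the per-message processing are assumptions for illustration.

using System;
using Microsoft.WindowsAzure.StorageClient;

static class BatchDequeueSample
{
    // 32 is the maximum batch size accepted by GetMessages in a single call.
    const int MaxBatchSize = 32;

    public static void DrainInBatches(CloudQueue queue)
    {
        while (true)
        {
            // One storage transaction retrieves up to 32 messages; the visibility timeout
            // (assumed here to be 5 minutes) must cover the time needed to process the batch.
            var batch = queue.GetMessages(MaxBatchSize, TimeSpan.FromMinutes(5));

            bool receivedAny = false;
            foreach (CloudQueueMessage message in batch)
            {
                receivedAny = true;
                Process(message);               // application-specific work
                queue.DeleteMessage(message);   // one delete transaction per message
            }

            if (!receivedAny)
            {
                break; // queue is empty; hand control back to the back-off logic
            }
        }
    }

    static void Process(CloudQueueMessage message)
    {
        // Placeholder for de-batching and computational work.
    }
}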

Generally speaking, the cost of Windows Azure queue transactions increases linearly as the number of queue service clients increases, such as when scaling up the number of role instances or increasing the number of dequeue threads. To illustrate the potential cost impact of a solution design that does not take advantage of the above recommendations, we will provide an example backed up by concrete numbers.

The Cost Impact of Inefficient Design

If the solution architect does not implement relevant optimizations, the billing system architecture described above will likely incur excessive operating expenses once the solution is deployed and running on the Windows Azure platform. The reasons for the possible excessive expense are described in this section.

As noted in the scenario definition, the business transaction data arrives at regular intervals. However, let’s assume that the solution is busy processing workload just 25% of the time during a standard 8-hour business day. That results in 6 hours (8 hours * 75%) of “idle time” when there may not be any transactions coming through the system. Furthermore, the solution will not receive any data at all during the 16 non-business hours every day.

During the idle period totaling 22 hours, the solution is still performing attempts to dequeue work, as it has no explicit knowledge of when new data arrives. During this time window, each individual dequeue thread will perform up to 79,200 transactions (22 hours * 60 min * 60 transactions/min) against an input queue, assuming a default polling interval of 1 second.

As previously mentioned, the pricing model in the Windows Azure platform is based upon individual “storage transactions”. A storage transaction is a request made by a user application to add, read, update or delete storage data. As of the writing of this whitepaper, storage transactions are billed at a rate of $0.01 for 10,000 transactions (not taking into account any promotional offerings or special pricing arrangements).

Important:
When calculating the number of queue transactions, keep in mind that putting a single message on a queue is counted as 1 transaction, whereas consuming a message is often a 2-step process involving the retrieval followed by a request to remove the message from the queue. As a result, a successful dequeue operation will incur 2 storage transactions. Please note that even if a dequeue request results in no data being retrieved, it still counts as a billable transaction.

The storage transactions generated by a single dequeue thread in the above scenario will add approximately $2.38 (79,200 / 10,000 * $0.01 * 30 days) to a monthly bill. In comparison, 200 dequeue threads (or, alternatively, 1 dequeue thread in each of 200 Worker role instances) will push the cost to roughly $475.20 per month. That is the cost incurred when the solution was not performing any computations at all, just checking the queues to see if any work items are available. The above example is abstract, as no one would actually implement their service this way, which is why it is important to do the optimizations described next.

Best Practices for Eliminating Excessive Latency

To optimize the performance of queue-based Windows Azure messaging solutions, one approach is to use the publish/subscribe messaging layer provided with the Windows Azure AppFabric Service Bus, as described in this section.

In this approach, developers will need to focus on creating a combination of polling and real-time push-based notifications, enabling the listeners to subscribe to a notification event (trigger) that is raised upon certain conditions to indicate that a new workload is put on a queue. This approach enhances the traditional queue polling loop with a publish/subscribe messaging layer for dispatching notifications.

In a complex distributed system, this approach would necessitate the use of a “message bus” or “message-oriented middleware” to ensure that notifications can be reliably relayed to one or more subscribers in a loosely coupled fashion. Windows Azure AppFabric Service Bus is a natural choice for addressing messaging requirements between loosely coupled distributed application services running on Windows Azure and running on-premises. It is also a perfect fit for a “message bus” architecture that will enable exchanging notifications between processes involved in queue-based communication.

The processes engaged in a queue-based message exchange could employ the following pattern:

[Diagram: queue listener pattern augmented with Service Bus notifications]

Specifically, and as it relates to the interaction between queue service publishers and subscribers, the same principles that apply to the communication between Windows Azure role instances would meet the majority of requirements for push-based notification message exchange. We have already covered these fundamentals in one of our previous posts.

Important:
The usage of the Windows Azure AppFabric Service Bus is subject to a billing scheme that takes into account 2 major elements. First, there are ingress and egress charges related to data transfer in and out of the hosting datacenter. Second, there are charges based on the volume of connections established between an application and the Service Bus infrastructure.

It is therefore important to perform a cost-benefit analysis to assess the pros and cons of introducing the AppFabric Service Bus into a given architecture. Along those lines, it is worth evaluating whether or not the introduction of the notification dispatch layer based on the Service Bus would, in fact, lead to cost reduction that can justify the investments and additional development efforts.

For more information on the pricing model for Service Bus, please refer to the relevant sections in Windows Azure Platform FAQ.

While the impact on latency is fairly easy to address with a publish/subscribe messaging layer, a further cost reduction could be realized by using dynamic (elastic) scaling, as described in the next section.

Best Practices for Dynamic Scaling

The Windows Azure platform makes it possible for customers to scale up and down faster and easier than ever before. The ability to adapt to volatile workloads and variable traffic is one of the primary value propositions of the Cloud platform. This means that “scalability” is no longer an expensive IT vocabulary term, it is now an out-of-the-box feature that can be programmatically enabled on demand in a well-architected cloud solution.

Dynamic scaling is the technical capability of a given solution to adapt to fluctuating workloads by increasing and reducing working capacity and processing power at runtime. The Windows Azure platform natively supports dynamic scaling through the provisioning of a distributed computing infrastructure on which compute hours can be purchased as needed.

It is important to differentiate between the following 2 types of dynamic scaling on the Windows Azure platform:

  • Role instance scaling refers to adding or removing Web or Worker role instances to handle the current workload. This often includes changing the instance count in the service configuration. Increasing the instance count causes the Windows Azure runtime to start new instances, whereas decreasing the instance count causes it to shut down running instances.

  • Process (thread) scaling refers to maintaining sufficient capacity in terms of processing threads in a given role instance by tuning the number of threads up and down depending on the current workload.

Dynamic scaling in a queue-based messaging solution involves a combination of the following general recommendations:

  1. Monitor key performance indicators including CPU utilization, queue depth, response times and message processing latency.

  2. Dynamically increase or decrease the number of worker role instances to cope with the spikes in workload, either predictable or unpredictable.

  3. Programmatically expand and trim down the number of processing threads to adapt to variable load conditions (a minimal sketch follows this list).

  4. Partition and process fine-grained workloads concurrently using the Task Parallel Library in the .NET Framework 4.

  5. Maintain a viable capacity in solutions with highly volatile workload in anticipation of sudden spikes to be able to handle them without the overhead of setting up additional instances.
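
The following sketch illustrates recommendations 3 and 4 in their simplest form using the .NET Framework 4 Task Parallel Library; the heuristic that maps queue depth to a degree of parallelism is purely an assumption for illustration.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.StorageClient;

static class ParallelProcessingSample
{
    // Illustrative heuristic: scale the degree of parallelism with queue depth,
    // capped at the number of logical processors on the role instance.
    static int ChooseDegreeOfParallelism(CloudQueue queue)
    {
        int depth = queue.RetrieveApproximateMessageCount();
        return Math.Max(1, Math.Min(Environment.ProcessorCount, depth / 100 + 1));
    }

    public static void ProcessBatchInParallel(CloudQueue queue, IEnumerable<CloudQueueMessage> batch)
    {
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = ChooseDegreeOfParallelism(queue)
        };

        // Partition the batch across worker threads managed by the TPL.
        Parallel.ForEach(batch, options, message =>
        {
            Process(message);               // application-specific computational procedure
            queue.DeleteMessage(message);   // remove only after successful processing
        });
    }

    static void Process(CloudQueueMessage message)
    {
        // Placeholder for the work applied to each item.
    }
}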

The Service Management APIs make it possible for a Windows Azure hosted service to modify the number of its running role instances by changing deployment configuration at runtime.

Note:
The maximum number of Windows Azure compute instances in a typical subscription is limited to 20 by default. This is intended to prevent Windows Azure customers from receiving an unexpectedly high bill if they accidentally request a very large number of role instances. This is a “soft” limit. Any requests for increasing this quota should be raised with the Windows Azure Support team.

Dynamic scaling of the role instance count may not always be the most appropriate choice for handling load spikes. For instance, a new VM instance can take a few seconds to spin up and there are currently no SLA metrics provided with respect to VM spin-up duration. Instead, a solution may need to simply increase the number of worker threads to deal with temporary workload increase. While workload is being processed, the solution will monitor the relevant load metrics and determine whether it needs to dynamically reduce or increase the number of worker processes.

Important:
At present, the scalability target for a single Windows Azure queue is “constrained” at 500 transactions/sec. If an application attempts to exceed this target, for example by performing queue operations from multiple role instances running hundreds of dequeue threads, the storage service may return an HTTP 503 “Server Busy” response. When this occurs, the application should implement a retry mechanism using an exponential back-off delay algorithm. However, if the HTTP 503 errors are occurring regularly, it is recommended to use multiple queues and implement a sharding-based strategy to scale across them.
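
A minimal sketch of the suggested retry behavior is shown below; the attempt count and delay constants are assumptions, and production code would typically delegate this to a reusable retry-policy component.

using System;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.StorageClient;

static class ServerBusyRetrySample
{
    // Wraps any queue operation with an exponential back-off retry for HTTP 503 responses.
    public static void ExecuteWithBackOff(Action queueOperation)
    {
        const int maxAttempts = 5;                          // illustrative
        TimeSpan delay = TimeSpan.FromMilliseconds(200);    // illustrative initial delay

        for (int attempt = 1; ; attempt++)
        {
            try
            {
                queueOperation();
                return;
            }
            catch (StorageClientException ex)
            {
                // Retry only on 503 (Server Busy); anything else, or exhausted attempts, is rethrown.
                if (ex.StatusCode != HttpStatusCode.ServiceUnavailable || attempt == maxAttempts)
                {
                    throw;
                }

                Thread.Sleep(delay);
                delay = TimeSpan.FromMilliseconds(delay.TotalMilliseconds * 2);
            }
        }
    }
}

A caller would wrap individual operations, for example: ServerBusyRetrySample.ExecuteWithBackOff(() => queue.AddMessage(new CloudQueueMessage("payload")));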

In most cases, auto-scaling the worker processes is the responsibility of an individual role instance. By contrast, role instance scaling often involves a central element of the solution architecture that is responsible for monitoring performance metrics and taking the appropriate scaling actions. The diagram below depicts a service component called Dynamic Scaling Agent that gathers and analyzes load metrics to determine whether it needs to provision new instances or decommission idle instances.

[Diagram: Windows Azure queueing sample architecture with a Dynamic Scaling Agent]

It is worth noting that the scaling agent service can be deployed either as a worker role running on Windows Azure or as an on-premises service. Irrespective of the deployment topology, the service will be able to access the Windows Azure queues.

Now that we have covered the latency impact, storage transaction costs and dynamic scale requirements, it is a good time to consolidate our recommendations into a technical implementation.

Valery continues with detailed “Technical Implementation” sections and concludes:

To maximize the efficiency and cost effectiveness of queue-based messaging solutions running on the Windows Azure platform, solution architects and developers should consider the following recommendations.

As a solution architect, you should:

  • Provision a queue-based messaging architecture that uses the Windows Azure queue storage service for high-scale asynchronous communication between tiers and services in cloud-based or hybrid solutions.

  • Recommend sharded queuing architecture to scale beyond 500 transactions/sec.

  • Understand the fundamentals of Windows Azure pricing model and optimize solution to lower transaction costs through a series of best practices and design patterns.

  • Consider dynamic scaling requirements by provisioning an architecture that is adaptive to volatile and fluctuating workloads.

  • Employ the right auto-scaling techniques and approaches to elastically expand and shrink compute power to further optimize the operating expense.

  • Evaluate the cost-benefit ratio of reducing latency through taking dependency on Windows Azure AppFabric Service Bus for real-time push-based notification dispatch.

As a developer, you should:

  • Design a messaging solution that employs batching when storing and retrieving data from Windows Azure queues.

  • Implement an efficient queue listener service ensuring that queues will be polled by a maximum of 1 dequeue thread when empty.

  • Dynamically scale down the number of worker role instances when queues remain empty for a prolonged period of time.

  • Implement an application-specific random exponential back-off algorithm to reduce the effect of idle queue polling on storage transaction costs.

  • Adopt the right techniques to prevent exceeding the scalability targets for a single queue when implementing highly multi-threaded, multi-instance queue publishers and consumers.

  • Employ a robust retry policy framework capable of handling a variety of transient conditions when publishing and consuming data from Windows Azure queues.

  • Use the one-way multicast eventing capability provided by Windows Azure AppFabric Service Bus to support push-based notifications in order to reduce latency and improve performance of the queue-based messaging solution.

  • Explore the new capabilities of the .NET Framework 4 such as TPL, PLINQ and Observer pattern to maximize the degree of parallelism, improve concurrency and simplify the design of multi-threaded services.

A link to sample code which implements most of the patterns discussed in this whitepaper will be made available in the upcoming weeks as part of a larger reference application. The sample code will also include all the required infrastructure components such as generics-aware abstraction layer for the Windows Azure queue service which were not supplied in the above code snippets.

Additional Resources/References

For more information on the topic discussed in this whitepaper, please refer to the following:

Authored by: Valery Mizonov
Reviewed by: Christian Martinez, Paolo Salvatori, Curt Peterson, Steve Marx, Trace Young, Brad Calder


<Return to section navigation list> 

SQL Azure Database and Reporting

Megan Keller posted an Emerging Database Technologies: Jeremiah Peschka and Kevin Kline on NoSQL interview to SQL Server Magazine’s Database Administration Blog on 12/16/2010:

NoSQL is a topic that seems to pop up in every conversation about current SQL Server trends. When I was at TechEd in June, people were still wondering what exactly NoSQL was, and they were concerned about what it would mean for their jobs as DBAs and developers. Six months later, the SQL Server community seems to have a better grasp on NoSQL and the scenarios that it's best suited for. At PASS Summit 2010, I had the opportunity to sit down with Kevin Kline, strategy manager for SQL Server at Quest Software, and Jeremiah Peschka, emerging technology expert, to discuss the strength in the NoSQL market and how companies are deciding where to implement NoSQL rather than SQL Server.

Megan Keller: Kevin, when you and I spoke with Brent Ozar at TechEd 2010, we discussed the current trends around NoSQL and Azure. A lot has changed in the NoSQL market in the past six months, though. What are some of the trends you're seeing in this market now?

Kevin Kline: Well first of all, in support of the trends we discussed earlier, let me introduce to you Jeremiah Peschka, [Quest Software's] evangelist and technology specialist in new and emerging technologies. If that doesn't validate where we think some trends are going, then I don't know what does. Definitely we're seeing quite a bit of exciting things happening with new and emerging technologies. Jeremiah does have a great deal of depth in development, as well as in SQL Server DBA work, but he also is experienced with all of these really strange sounding things like Hadoop.

Jeremiah Peschka: Sawzall.

Kevin: Sawzall. Voldemort. Cassandra. There’s really a NoSQL database called Voldemort. Lots of interesting things happening, and I’m thankful that’s Jeremiah’s area.

Jeremiah: I do think there’s a lot of strength in the market behind that. You’re seeing a lot of players like Google have started releasing a lot of their tools to the community. Things they’ve built up internally for eight, nine years they’re letting the community actually use now. Yahoo! has been contributing back a lot of the technology they developed to process 22 petabytes of data a day. I think as the amount of data we collect grows, it’s a matter of when you’re going to be switching to using one of these systems. They have a lot of strengths that complement where SQL Server doesn’t do too well.

Megan: Do you see companies implementing both traditional SQL Server systems and NoSQL, all within the same environment?

Jeremiah: That’s exactly what my research is showing; what I’m seeing when talking with people. You can’t get away from all the benefits that a relational database gives you. It’s a known quantity, we know how it performs. But at the same time, there’s a lot of benefit from using batch processing systems like Hadoop, NoSQL. There are areas where SQL Server doesn’t perform quite as well; you have to do a lot of tricks to get it to do things. Whereas with Hadoop it’s built for this out of the box; that’s exactly what it does.

Megan: Are you seeing specific types of workloads being used with NoSQL?

Jeremiah: Definitely. One of the workloads that I see a lot of is batch processing, like image processing. eBay uses NoSQL for a lot of bulk image processing. Yahoo! does a lot of raw analysis of data, and then they push it back into Analysis Services. Or, if you have data that’s very poorly defined, it has to be structured, that’s another good place to use NoSQL, where with a relational database it gets very convoluted. The New York Times uses NoSQL to do a lot of their form building for very loosely defined forms. So it really works well there.

Kevin: I think a really interesting question to look at, too, is how are the mainstream relational database vendors going to address this? There are a lot of different strategies you could take. You could build an extension to your existing product, you could build a brand-new product and try to launch it, you could build a toolkit to utilize an existing open-source kind of code—something like Cloudera has done where they’re building out a lot of offerings around Hadoop.

I'm really keen to see what Dave DeWitt is going to say on Thursday [during his PASS Summit 2010 keynote]. This time last year when he was giving his keynote, he said “I'm going to teach you a little bit about key value stores and column stores, but do not for one second assume that this means there will be anything related to it in any of our products, anywhere.” So what did we see this morning, Jeremiah?

Jeremiah: That would be columnar indexing.

Kevin: A columnar indexing system, isn’t that interesting? So the major vendors recognize that there are simply situations where a relational database, by its very nature, has certain kinds of overhead. And that overhead means that we’re going to guarantee certain levels of service. For example, a transaction is either rolled forward and applied to the system or it is completely rolled back and doesn’t exist in the system. That has overhead; a great deal of overhead. It’s called the ACID property of transactions. We get to skirt all of those rules and all of that overhead with these other high-end systems that are NoSQL systems. So what do you do? Do you build in a NoSQL, no ACID capability, or do you offer a separate product, or do you try to leverage something that already exists out there? Not only are we watching Microsoft and SQL Server, but we’re looking at what is Oracle going to do; what is IBM DB2 going to do. Sybase is doing really interesting stuff.

Megan: Do you see third-party vendors eventually tying this into their products as well?

Jeremiah: That is a good question. Obviously, we can’t talk about future product direction, but I know that other vendors make extensions to MySQL and they’ve started building a lot of different products to go on MySQL’s backend. And I think the market really is too young to speculate what people are actually going to be doing.

Kevin: That’s one of the really interesting things about this broader scene is that it’s still the Wild West. It’s kind of like the turn of the century and the gold rush. We know people are going there, they’re trying to get something out of it, but who’s going to come out on top, we don’t know.

Jeremiah: At the beginning of the year, there was something like 27 different NoSQL database vendors on the market. Several more have come up, several more have folded.

Megan: Is there a NoSQL database vendor that stands out above the rest?

Jeremiah: Cloudera is making a lot of waves, whether or not they have marketing or they’re very, very successful, either way, they’re making a lot of waves; a lot of people are talking about them.

Kevin: I think the Apache implementation of Cassandra is definitely worth keeping your eyes on. Again, it’s still a little too broad to pick your winners, but there are certainly a handful of leaders. I think that one of the other questions that comes to mind is “What is Quest going to do?” Just to speak to that a little bit more, we are definitely observing it very closely, and we are doing some work in that space. We do have a free beta product, Toad for Cloud Databases.

Jeremiah: We also started up NoSQLPedia, in addition to SQLServerPedia and OraDBPedia, where we’re building up a community knowledge base. We have a couple of syndicated bloggers on board helping out with that. And we talk about not just traditional NoSQL databases like Hadoop, but we’re also talking about Azure table services in SQL Azure because a lot of people lump cloud in with NoSQL as well. We’re trying to get that information out there because it’s new and it’s different. A lot of DBAs are like “Is this going to take my job away from me?” Well, no, it’s still a database; you still need to be able to work with it. Someone needs to manage it and understand what’s going on under the hood.

In an upcoming blog post, Jeremiah and Kevin share their thoughts on the growing cloud market.

Related Articles


Gaurav Gupta summarized SSRS 2008/2008 R2 New Features in a 12/16/2010 post:

Data Source

  • Microsoft SQL Azure data source – connects to a SQL Server database in the cloud.
  • Microsoft SQL Server Parallel Data Warehouse data source – connects to SQL Server Parallel Data Warehouse. Haven't really worked with this one yet, so not much of a clue on this one. ;)
  • Microsoft SharePoint List data source – connects to a SharePoint list and pulls information from there. Lots of clients ask for this one, and it really is a charm.

Secondary Axis Charts:

  • Now SSRS reports can have two ax[e]s, which allows us to create two types of chart in a single chart area, like a combination of a bar and a line chart. Many clients have asked for this in the past; I am glad it can now be easily achieved.

Tool tip functionality:

  • Now we can have tooltip on the data point. Gives end user[s the capability to] get the exact number at any data point of a chart.

Sparklines, Data Bars and Indicators

  • Sparklines and data bars are charts that can be used within a table or matrix. This helps to compare information at the same time.
  • Indicators are red, yellow, and green lights or symbols to visualize data in a row.

Text Rotation

  • Text rotation functionality is available; now we can rotate text to 270 degrees to make it fit into our text boxes.

New Charts and Gauges

  • Charts and gauges have been improved. They look more attractive.

Map and Spatial data functionality

  • New Map features allow you to connect your spatial data with Map. Mapping was the key element missing from SQL Server reporting. Including it made SSRS a complete Reporting package.

Integration with SharePoint 2010

  • Integration is more effective. Now subscriptions and drill-through links work directly. End users can access the new Report Builder from SharePoint and create and deploy reports.

Well, that's all for now; I will be working on these individual points and publishing an individual example for each of these features. Thanks.

Gaurav works for Sonata Software Ltd., Bangalore, as a Senior BI Consultant.


<Return to section navigation list> 

Marketplace DataMarket and OData

The Windows Azure Team added a DataMarket for Government page on 12/17/2010:

[Screen capture of the DataMarket for Government page]


PRNewswire published a Verify Identity and Prevent Identity Fraud in Real Time with CDYNE Corporation's Social Security Number Verification Web Service press release on 12/17/2010:

CDYNE Corporation, a leading provider of Data Quality and Communication Web Services, announced today the release of Death Index 3.0 to verify identity and prevent identity fraud in real time. Death Index 3.0 is available at cdyne.com as part of its suite of Data Quality Web Services. Death Index 3.0 was also introduced as one of the first 35 partnership offerings to distribute data sets on the Windows Azure Marketplace.

CDYNE Death Index 3.0 is a hosted, programmable Web Service that validates Social Security Numbers against the Social Security Administration's Death Master File. With access to over 86 million records of deaths that have been reported to the Social Security Administration, Death Index 3.0 will return the Social Security Number, name, date of birth, date of death, state or country of residence, zip code of last residence, and zip code of lump sum payment.

The Death Index 3.0 Web Service provides key business benefits of integrating Social Security Number verification with existing business applications, without the need to maintain and update a large database. Monthly updates are provided automatically and come directly from the United States Social Security Administration.

Web Services are a key point of integration for business applications that exist on different platforms, languages, and systems. IT departments can interface with Death Index 3.0 Web Service using SOAP (Simple Object Access Protocol) or RESTful (Representational State Transfer) endpoints, using JSON (JavaScript Object Notation) or XML formats to build Social Security Number validation features into existing business applications.

"This new release meets the latest standards and requirements in the industry," said Valentin Ivanov, Chief Software Architect of CDYNE Corporation. "We are very excited to be a partner with Microsoft and offer Death Index 3.0 on Windows Azure Marketplace."

CDYNE's partnership with the Windows Azure Marketplace makes it easy for developers to consume CDYNE Death Index 3.0. Windows Azure Marketplace is a data marketplace that matches information providers with end users and developers, and it is a feature of the Windows Azure cloud platform where data sets can be listed, searched for, and consumed.

The Death Index Web Service is backed by a 100% Service Level Agreement (SLA), protecting clients from unscheduled outages. CDYNE's data centers are strategically located in different areas across the United States, providing redundancy and enabling location fail-over.

About CDYNE Corporation

Since 1999, CDYNE has provided enterprise Data Quality and Communications Web Services to solve the business need for real-time communication and data quality verification. Web Services include Phone Notify!, SMS Notify!, Postal Address Verification, Phone Verification, Demographics, Death Index, and IP2Geo. CDYNE billing is transaction-based and post-pay. Clients pay for only what is consumed, eliminating overage charges and unused credits. There are no contracts, startup fees, or cancellation charges. For more information, visit cdyne.com or call 1-800-984-3710.


Alan Earls interviewed Lance Olson, a Group Program Manager at Microsoft, in his OData provides patterns for HTTP, JSON, data access post of 12/15/2010 to SearchSOA.com’s SOA News blog:

Have you been following the evolution of OData? “O,” what, you ask? OData (Open Data Protocol) is a Web protocol for querying and updating data. It provides a way to unlock your data and free it from silos that exist in applications today. It does this by defining a common set of patterns for working with data based on HTTP, AtomPub, and JSON. Started originally at Microsoft, OData has been gaining adherents. For instance, IBM uses OData to link eXtreme Scale REST data service with clients. Meanwhile, eBay is making available an OData API that supports search for items available via the eBay Finding API, using the query syntax and formats in the Open Data Protocol. Other high profile users include Facebook, Netflix, and Zillow.

Lance Olson, Group Program Manager at Microsoft, took time to answer some questions about what's new with OData and where it is going.

Q: Briefly, what is OData, how and when did it get started?

OLSON: OData got started in a project at Microsoft that is now called WCF Data Services. We were looking for an easier way to deal with data that was being passed through a service interface. We noticed that many of the current interfaces, whether based on SOAP, REST, or something else, were all using different patterns for exchanging data. A typical method would return a bunch of Customer records. You might then decide to add paging support. Then you'd want to bind the results to a grid and sort the data. Finally, you'd decide you want to query by something other than, say, Zip code.

This approach has led to a couple of key challenges.

First, there is no way to build generic clients that do much with these APIs, because they don't know about the ordering of the parameters or the pattern being used. Because you can't build a generic client, you have to build clients for each API you want to expose. The simplicity of basic HTTP APIs helps with this, but it still is very costly. The growing diversity of clients that talk to these APIs only exacerbates this problem.

The second problem with this pattern is that it forces the service developer to make a difficult trade-off. How many queries should I expose? You have to do a balancing act between exposing everything you can possibly imagine and exposing too little such that the value of the service is diminished. The former leads to a proliferation of API surface area to manage and the latter results in what is often called a "data silo" where critical data is locked up in a particular pattern which is often unavailable to other applications simply because it doesn't expose the data in quite the way that is needed by that application. Services tend to live a lot longer than a single application, so you really need to design the API in a way that will last; it isn't great if you find that you need to keep adding new versions of the service interface as you build new clients. In many cases the service developer and the client developer aren't even the same person, so making changes to the service interface ranges from difficult to impossible.

With OData we took a very different approach.  Instead of creating custom signatures and parameters we asked the following question: "What would a service interface look like if you treated data sets as resources and defined common patterns for the most frequently used operations such as querying, paging, sorting, creating, deleting, and updating?"  This brought us to the creation of OData.  OData solves the key service interface design challenges that are mentioned above.  Most importantly, OData enables client applications and libraries to be written once and then be reused with any OData service endpoint.  This has enabled a broad ecosystem of clients ranging from .NET on Windows to PHP on Linux, Java, Objective-C on iOS, etc. At Microsoft it has also enabled us to go farther than we could with traditional service interfaces by adding support to products that go beyond developers.
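
To make the idea of common patterns concrete, here is a small, hedged sketch of what such queries look like when expressed as OData URI conventions and consumed from C#; the service address is hypothetical, and only the standard $filter/$orderby/$top/$skip query options are assumed.

using System;
using System.Net;

class ODataQuerySample
{
    static void Main()
    {
        // Hypothetical OData endpoint; any OData service exposes the same query options,
        // which is what allows a generic client to be written once and reused.
        const string serviceRoot = "http://example.com/CustomersService.svc";

        // Filtering, sorting and paging are URI conventions rather than per-method parameters.
        string query = serviceRoot +
            "/Customers?$filter=ZipCode eq '98052'&$orderby=Name&$top=10&$skip=0";

        using (var client = new WebClient())
        {
            // The default response format is an AtomPub feed.
            string atomFeed = client.DownloadString(query);
            Console.WriteLine(atomFeed);

            // Requesting JSON instead is just a header change.
            client.Headers[HttpRequestHeader.Accept] = "application/json";
            string json = client.DownloadString(query);
            Console.WriteLine(json);
        }
    }
}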

Over time, we continued to see customers using the protocol for a broader set of scenarios than what you could do by using WCF Data Services.  It was increasingly clear that we needed to be able to talk about the protocol on its own.  As a result, in the Fall of 2009 we launched the OData site and began talking about OData in Microsoft's Professional Developers conference.  This was followed up with a much larger presence in March of 2010 at Microsoft's MIX10 conference which is when OData really began to get broader coverage on its own.

Q: What is Microsoft's role now?

OLSON: Today Microsoft has published its .NET client for OData under the Apache 2.0 license on the site.  Microsoft has also released the OData protocol specification under its Open Specification Promise encouraging use by anyone who could benefit from it.  Anyone who wants to do so can use OData.  People interested in participating in the design can join in on the discussion via the OData mailing list.

Q: Who seems to care about OData and who should care?

OLSON: If you're using services to exchange data between two endpoints, you should take a look at OData. OData fits best in cases where you have services that share data, like in the GetProductsByZip example. The number of client platforms continues to grow at an increasing rate, and having a great API is becoming a requirement for Web properties that want to be relevant for consumer-facing Web sites. If you're in the enterprise, similar pressure is being felt to create applications that work well for employees, which often means providing key business information on the devices they use. OData applies to a broad set of industry categories ranging from the consumer-facing Web to the enterprise to the public sector. I'm seeing customers use OData across these segments. There is a list of public services and server implementations available, and a list of client/application implementations, which get updated whenever we hear about new implementations.

Q:  How is OData progressing, what has changed, who is involved?

OLSON: OData is officially just over 1 year old. For the first year we've seen amazing uptake as people continue to look for ways to better scale their APIs and deal with the diversity of the client ecosystem. Multiple new open source community implementations have cropped up, giving people fairly broad coverage across languages and platforms. While Microsoft is using OData in many of its products and services, there are also a number of external implementations like IBM WebSphere, Facebook's Insights service, and the Netflix catalog. We're also seeing a lot of growth inside of the firewall.

Q: What is the outlook for the near future? What do you expect to happen in 2011 and what will this mean for the IT community?

OLSON: One of the most significant things we'll see for OData in 2011 is the rollout of support for OData across a number of key server products in the IT space.  That's exciting because it will continue to bring more momentum to the ecosystem, and that creates a positive cycle for the community.  We'll also see more client tools and applications getting on board, like Tableau, which make it easier to visualize, analyze, shape, and combine data from all kinds of different sources to get a better view on the problems we need to solve.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Louis Columbus summarized the latest Windows Azure AppFabric Update in a 12/17/2010 post:

Microsoft's cloud-based middleware platform, Azure AppFabric, is designed to streamline the development, deployment and support of applications on the Windows Azure platform. The Azure AppFabric initiative serves as the foundation of the Platform-as-a-Service (PaaS) offering in the Windows Azure stack as well.

Microsoft is supporting four types of multitenancy with Azure AppFabric. These types of multitenancy are explained in the presentation, with an analysis by Gartner of the multitenancy options also provided. You will find a link to the Gartner research note below.

AppFabric Service Bus Is Key To Integration in Windows Azure. The AppFabric Service Bus is an interesting integration concept Microsoft is working on right now, as its design goal is to connect systems and content outside the firewalls of companies, unifying it with internal, often legacy, systems' data. How the Service Bus will define context has not been shared by Microsoft. That, however, will be interesting to see, as contextual content in this type of configuration has much potential for redefining internal search.

Usability is King. Azure's design objective from a usability standpoint is to deliver content to any device, anywhere in the world, at any time. It is a very ambitious project, and the following presentation does an excellent job of putting Azure AppFabric into context.

Research note from Gartner. Yefim Natis, David Mitchell Smith, and David Cearley have written an insightful research note on AppFabric's current status (as of November 2010) and have also defined its architectural components in detail. Here is the research note: Windows Azure AppFabric: A Strategic Core of Microsoft's Cloud Platform.


MSCerts.net explained Cloud-Enabling the ESB with Windows Azure (part 1) - Receiving Messages from Azure’s AppFabric Service Bus on 12/16/2010:

image722322There are inevitably times when you will need to cross organizational boundaries in order to get trading partners, customers or suppliers integrated into your business processes. Traditionally, in a BizTalk environment, this would mean that you would have Web services hosted in IIS or in the BizTalk process on the BizTalk Servers. You then would reverse-proxy those services to make them available outside the firewall. You will also likely have a load balancer in play for either volume or high-availability purposes.

imageYou would further need to define a security strategy. Crossing a security boundary like this is never easy and commonly introduces moving parts and some degree of risk.

By extending the ESB on-ramp to the Windows Azure platform, we can address several of these concerns. Windows Azure provides the Windows Azure platform Service Bus and the Windows Azure platform Access Control Service. These are both services we can use to extend the BizTalk on-ramp to Windows Azure.

Receiving Messages from Azure’s AppFabric Service Bus

The previously described WCF-Custom adapter allows you to select any bindings that are available on a given machine. When you install the Windows Azure platform AppFabric SDK, you get several new WCF relay bindings that allow you to communicate with the Service Bus. Instead of directly opening our infrastructure to the outside world, we will instead use that relay feature. External partners can publish their messages to the Service Bus, and we will receive them because we are the subscriber.

From an implementation perspective, it is trivial to receive messages from the Service Bus. You just create a new BizTalk receive location, choose one of the relay bindings, set the security credentials, and enable the receive location. Once you have done that, you have created an endpoint in the Service Bus (with an identifying namespace), and the Windows Azure Service Bus will send messages matching that endpoint to you. Figure 1 shows what this receive location looks like.

Figure 1. Notice the WCF-Custom adapter, as well as the URI, which indicates the Service Bus endpoint address.

The receive pipeline being used here (ItineraryReceiveXml) is one of the standard pipelines included with the ESB Toolkit. This means we could potentially implement something like passing a received message into the business rules engine, having a business rules evaluation determine which itinerary to use, retrieving that itinerary from the repository, and stamping it on the message. This is identical to the sort of sequence we might go through if we were picking up a message from a SharePoint document list or from a flat file. The only difference is that we made a couple of minor configuration changes to the WCF-Custom adapter settings, and now we have extended our on-ramp to the cloud.

We have a secured pipe up to the Service Bus because we are the ones that initiated and secured the connection (using standard WCF message and transport security options). In addition, anyone publishing messages intended for the service endpoint will need to be authorized by the Windows Azure platform Access Control Service before they can do so. This secures the link from the external organization to the cloud.
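If you want to see what the AppFabric relay bindings do outside of BizTalk, the following C# sketch self-hosts a relay listener. The service namespace, issuer name, issuer secret, and the IOrderIntake contract are placeholders invented for illustration; in the BizTalk scenario above, the WCF-Custom receive location supplies the equivalent binding and credential settings through configuration rather than code.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IOrderIntake
{
    [OperationContract(IsOneWay = true)]
    void Submit(string orderXml);
}

public class OrderIntake : IOrderIntake
{
    public void Submit(string orderXml)
    {
        Console.WriteLine("Received from the Service Bus: {0}", orderXml);
    }
}

class RelayListener
{
    static void Main()
    {
        // The endpoint lives in your Service Bus namespace, not on your own network.
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "OrderIntake");

        var host = new ServiceHost(typeof(OrderIntake));
        var endpoint = host.AddServiceEndpoint(typeof(IOrderIntake), new NetTcpRelayBinding(), address);

        // Authenticate the listener to the Access Control Service with the namespace credentials.
        var credentials = new TransportClientEndpointBehavior();
        credentials.CredentialType = TransportClientCredentialType.SharedSecret;
        credentials.Credentials.SharedSecret.IssuerName = "owner";
        credentials.Credentials.SharedSecret.IssuerSecret = "yourIssuerKey";
        endpoint.Behaviors.Add(credentials);

        // Opening the host registers the endpoint with the relay; external publishers
        // can now send to the Service Bus address while the firewall stays closed.
        host.Open();
        Console.WriteLine("Listening on {0} - press Enter to exit.", address);
        Console.ReadLine();
        host.Close();
    }
}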


MSCerts.net continued with Cloud-Enabling the ESB with Windows Azure (part 2) - Sending Messages to Azure’s AppFabric Service Bus in a post of the same date:

image722322In addition to extending the ESB on-ramp to the cloud, we can take advantage of the Windows Azure platform Service Bus by sending messages to it from our on-premises ESB. The Windows Azure platform AppFabric SDK gives us the necessary relay bindings for the WCF-Custom adapter provider; we simply need to set the appropriate adapter provider properties.

image From a BizTalk perspective, we can use the WCF-Custom adapter with either a static or dynamic send port. From an ESB perspective, though, the preferred approach would be to use an itinerary that uses a dynamic off-ramp (send port) to send the message. This itinerary would specify the processing steps to receive a message, resolve the destination and adapter properties, and then relay the message on (Figure 2).

Figure 2. A visual representation of a three-step itinerary.

The properties of the Resolve Settings resolver are shown in Figure 3. To keep things simple, this example uses a static resolver (which means that the statically defined settings will be dynamically applied when the itinerary is executed).

Figure 3. Sample Resolve Settings property values.

In order for this to function properly, it is crucial that the Action, Endpoint Configuration and Transport Location properties be set correctly. The Endpoint Configuration properties are now set as shown in Figure 4.

Figure 4. The Endpoint Configuration settings. This example uses the netTcpRelayBinding (one of the bindings you get from installing the Azure AppFabric SDK). There are additional options available to accommodate other messaging patterns, such as multicast and request-and-response.
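For completeness, here is a minimal sketch of the client side of the same relay, reusing the placeholder contract and credentials from the listener example in part 1. This is roughly what the dynamic off-ramp does on our behalf when the resolver settings point it at a netTcpRelayBinding endpoint; the namespace and keys remain illustrative assumptions.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IOrderIntake
{
    [OperationContract(IsOneWay = true)]
    void Submit(string orderXml);
}

class RelaySender
{
    static void Main()
    {
        // Same Service Bus address the on-premises listener registered.
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "OrderIntake");

        var credentials = new TransportClientEndpointBehavior();
        credentials.CredentialType = TransportClientCredentialType.SharedSecret;
        credentials.Credentials.SharedSecret.IssuerName = "owner";
        credentials.Credentials.SharedSecret.IssuerSecret = "yourIssuerKey";

        var factory = new ChannelFactory<IOrderIntake>(new NetTcpRelayBinding(), new EndpointAddress(address));
        factory.Endpoint.Behaviors.Add(credentials);

        IOrderIntake channel = factory.CreateChannel();
        channel.Submit("<Order><Id>42</Id></Order>");   // the relay forwards this to whoever is listening

        ((IClientChannel)channel).Close();
        factory.Close();
    }
}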


 David Chou posted a Windows Azure AppFabric presentation to SlideShare on 12/16/2010:

image  Dave is a Microsoft software architect


Hilton Giesenow posted a 00:24:03 How Do I: Get Started With the Azure Service Bus? MSDN video:

image

image722322The AppFabric services, which form part of the Windows Azure platform, allow for various cloud-based service routing scenarios.

image In this introductory video, Hilton Giesenow, host of The MOSS Show SharePoint podcast (http://www.TheMossShow.com/) shows us how to sign up to the service, create a new project and namespace, and redirect a Windows Communication Foundation client and service to route the conversation via the service bus environment.

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

imageNo significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Wade Wegner recommended Using Web Deploy with Windows Azure for Rapid Development in a 12/17/2010 post:

image While building your web application, have you ever deployed to Windows Azure and then realized you forgot to make a change?  Or forgot to include an update?  Or maybe you deployed and realized you made a simple mistake, and you want to quickly update it?  We all have.  You’ll then find yourself making the change, creating a new package, and uploading/deploying it.  Then you wait.

imageNow, in the grand scheme of things, waiting 10 minutes is not a big deal – think about everything you’re getting that you don’t already have at your disposal.  That said, during development and QA, it can be frustrating to have to wait while your role instance upgrades or restarts.

Fortunately, with the updates provided in the Windows Azure SDK 1.3, we can benefit from an existing technology called Web Deploy to make our lives much easier.

A few caveats first:

  1. This technique should only be used for development purposes.
  2. You can only update a single role instance with this technique.
  3. Since you are not updating the Windows Azure package, you may lose your changes at any time.

Be sure to understand the above caveats.  This is only for development purposes.

Okay, ready to get started?  Here are the steps to follow.

  1. Create a new Windows Azure Project called WindowsAzureWebDeploy.  Add an ASP.NET MVC 2 Web Role called MvcWebRole to the solution.  It’s up to you if you want a unit test project.
  2. Create a folder called Startup in your MvcWebRole project.
  3. Create three files in this folder: CreateUser.cmd, EnableWebAdmin.cmd, and InstallWebDeploy.cmd.  For each of these files, change the Copy to Output Directory value to Copy always.
  4. Within the Startup folder, create a folder called webpicmd.
  5. Download WebPICmdLine here.  For more information, see the MSDN article Using the WebPICmd Command-Line Tool.
  6. Unzip the file webpicmdline_ctp.zip into the webpicmd folder.
  7. In Visual Studio, add the following four files into the solution.    For each of these files, change the Copy to Output Directory value to Copy always.
    • Microsoft.Web.Deployment.dll
    • Microsoft.Web.PlatformInstaller.dll
    • Microsoft.Web.PlatformInstaller.UI.dll
    • WebPICmdLine.exe
  8. Update CreateUser.cmd to include the following code (change “webdeployuser” and “password” to different values if you’d like):

    Code: CreateUser.cmd

    Echo Creating user
    net user webdeployuser password /add
    net localgroup administrators webdeployuser /add
    Echo User creation done
    exit /b 0
  9. Update InstallWebDeploy.cmd to include the following code:

    Code: InstallWebDeploy.cmd

    @echo off
    ECHO "Starting WebDeploy Installation" >> log.txt
    "%~dp0\webpicmd\WebPICmdLine.exe" /XML:http://www.microsoft.com/web/webpi/3.0/beta3/webproductlist.xml /Products:WDeploy /log:webdeploy.txt
    ECHO "Completed WebDeploy Installation" >> log.txt
  10. Update EnableWebAdmin.cmd to include the following code:

    Code: EnableWebAdmin.cmd

    start /w ocsetup IIS-ManagementService
    reg add HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WebManagement\Server /v EnableRemoteManagement /t REG_DWORD /d 1 /f
    net start wmsvc
    sc config WMSVC start= auto
    net start MsDevSvc
    sc config MsDevSvc start= auto
    exit /b 0
  11. Open the ServiceDefinition.csdef file in the WindowsAzureWebDeploy project.
  12. Add a new InputEndpoint named mgmtsvc with the following values:

    Code: InputEndpoint

    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
      <InputEndpoint name="mgmtsvc" protocol="tcp" port="8172" localPort="8172" />
    </Endpoints>
  13. Create three Startup Tasks to call the command files in the MvcWebRole.

    Code: Startup Tasks

    <Startup>
      <Task commandLine="Startup\EnableWebAdmin.cmd" executionContext="elevated" taskType="simple" />
      <Task commandLine="Startup\InstallWebDeploy.cmd" executionContext="elevated" taskType="simple" />
      <Task commandLine="Startup\CreateUser.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
  14. That’s it!

At this point, you are ready to deploy your application.  When the role instance starts up, the three startup tasks will run and 1) create a user, 2) install Web Deploy through the Web Platform Installer, and 3) enable web administration.  This process can take several minutes to complete.

Once deployed, your application should look like this:

Pre-Web Deploy

Now it’s time to set up your MvcWebRole project to publish directly to your role instance through Web Deploy.  First, though, make a quick change to your application so that, after you deploy, you can verify that the new bits are deployed.

Update Views –> Home –> Index.aspx and replace the existing header with:

  <h2>Deployed with Web Deploy</h2>

Now, let’s publish the update.

  1. Right-click MvcWebRole and click Publish….
  2. Create a profile name (e.g. MvcWebRole).
  3. Add your full DNS name as the Service URL (e.g. mywebdeploy.cloudapp.net).
  4. Add the Site/application value.  This is essentially yourrolename_IN_0_Web (e.g. MvcWebRole_IN_0_Web).  Don’t forget the “_Web” at the end.
  5. Check the Mark as IIS application on destination checkbox.
  6. Remove the check from the Leave extra files on the destination (do not delete) checkbox.
  7. Check the Allow untrusted certificate checkbox.
  8. Add your User name (e.g. “webdeployuser”).
  9. Add your Password (e.g. “password”).
  10. Check the Save password checkbox.
  11. Click Publish.

Once complete, your Publish Web window should look something like this:

image

After you get the message Publish succeeded, refresh your page.  It should now look like this:

DeployedWithWebDeploy

And there you have it!  Within seconds you’ve deployed updates to your application running in Windows Azure.

Now, do you remember the caveats above?  If nothing else, remember that this is only for development purposes.  I don’t want to hear that you used this in production and then lost all your changes when a role instance was restarted.

I hope this helps!


GrapeCity announced ActiveReports 6 Silverlight Report Viewer Beta with enhanced Azure reporting in a 12/17/2010 press release:

imageGrapeCity PowerTools today announced the availability of ActiveReports 6 Silverlight Report Viewer Beta Preview for .NET developers. The preview build also includes enhanced support for Windows Azure reporting.

image The new release makes ActiveReports 6 the only .NET reporting tool to provide report viewers for Windows, ASP.NET, Adobe Flash and Microsoft Silverlight, all in one convenient royalty-free package.

Designed for reporting in Microsoft Silverlight 4 and higher, the Silverlight Report Viewer includes these features:

  • Preview reports loaded from a file, document stream, ASPX page, or RPX handler.
  • End-user toolbar with Table of Contents (TOC), Thumbnails, Print, Search, Zoom and Navigation buttons.
  • Printing support with an extra print-to-PDF option.
  • Built-in themes for easy report viewer customization.
  • Custom localization support with Japanese and Chinese included.
  • Enhanced product documentation, samples and walkthroughs.

The Windows Azure reporting enhancements include:

  • Export your reports to Microsoft Excel from within Azure in full trust mode.
  • PDF digital signatures are now supported in Azure full trust mode.
  • ActiveReports also supports Azure in medium trust with standard platform limitations.

The ActiveReports Silverlight Report Viewer will be licensed for use in the ActiveReports 6 Professional Edition. Existing ActiveReports 6 Professional Edition customers will receive the Silverlight Report Viewer control at no additional charge as part of the next Service Pack (SP2) release.

More information is available on the announcement page at http://www.datadynamics.com/forums/137729/ShowPost.aspx


Cory Fowler (@SyntaxC4) started a Setting up RDP to a Windows Azure Instance: Part 1 tutorial on 12/16/2010:

image In my previous post, Export & Upload a Certificate to an Azure Hosted Service, I outlined some of the common tasks which are necessary to RDP into a Windows Azure Hosted Instance. In this post I will outline how to use the tools in Visual Studio to setup the RDP Configuration values.

Part 2 of this Series will outline how to Configure the RDP Manually, using IIS, Powershell and the Service Management API. One final post will outline how to get the RDP Connection launched.

Using Visual Studio to RDP to an Azure Instance

If you’re a Developer, this is most likely the simplest process for you. The following steps explain the process of setting up RDP to the Cloud, so a number of Development Processes, including building out your Website, are *not* covered.

Visual Studio 2010 Cloud Service Project

1. Welcome to the Start Screen.

Open-Visual-Studio-2010

2. Create a New Cloud Service Project.

Create-New-Cloud-Project

3. Select the Required Projects for the Solution.

Add-Your-Roles

Publish the Application to Windows Azure

1. [Time Lapsed: Build Application] Right-Click on Cloud Service Project and select Publish.

Publish-Your-Website

2. Choose your Hosted Service and Storage Account to Deploy to.

Deploy-Windows-Azure-Project-Final

3. Configure Remote Desktop connections.

Default-Remote-Desktop-Configuration-Dialog

4. Create a Certificate (this is used to encrypt the credentials).

Create-Certificate-For-Password-Encryption

5. Create a Username and Password for the RDP Connection.

Fill-Out-RDP-Credentials

6. Export and Upload the Certificate to the Hosted Service.

Certificate-Uploaded

7. Press OK on both the Remote Desktop Setup and Publish Dialog boxes. This will begin the Publish Process.

Azure-Deployment-In-Progress

Next Steps

This concludes the configuration of RDP into a Windows Azure Instance using Visual Studio 2010. The next step would be to connect to the Windows Azure instance. I will be posting another entry to cover the steps to connect, however it will be after I complete my next entry on how to Manually Configure the RDP Connection.


The Windows Azure Team posted Real World Windows Azure: Interview with Wolf Ruzicka, CEO of EastBanc Technologies, and Evgeny Popov, Head of the Microsoft Business Unit at EastBanc Technologies on 12/16/2010:

imageThe Real World Windows Azure series spoke to Wolf Ruzicka and Evgeny Popov at EastBanc Technologies about using the Windows Azure platform to deliver a cloud-based solution for the public transportation industry. Here's what they had to say:

MSDN: Can you give us a quick overview of what EastBanc Technologies does and who your customers are?

image Ruzicka: We provide custom software solutions and systems integration services for public agencies and private organizations that produce technology or use IT to manage their business more effectively. We help our customers succeed by delivering simple-to-use solutions for complex challenges, and then, wherever possible (just as with the topic we are discussing here), we retain the IP necessary to develop repeatable software solutions that help our customers meet common challenges.

MSDN: Was there a particular challenge you or your customers were trying to overcome that led you to develop a cloud-computing solution?

Popov: Public transportation authorities have data on vehicles, routes, and schedules that the technology-development community could use to build applications and services that can enhance the public transit experience. But to expose the data in a way developers can use, the transit agencies would need complex IT infrastructures that require high initial investments. We wanted to lower that barrier by using cloud computing to aggregate transit data from multiple sources and expose it as a service on the web.

Ruzicka: From our perspective, we needed to remove the uncertainty from scalability. When you create a new service, it's too expensive and too risky to build in excess capacity. If the application becomes very popular, you can consume time and money with maintenance and hardware issues as you fight for scalability. We did not want to manage our own infrastructure; we wanted to minimize management and maximize flexibility.

MSDN: Why did you decide to adopt Windows Azure? Did you evaluate other offerings such as Amazon Web Services, Google App Engine, or Salesforce.com?

Ruzicka: We have previously developed applications on all these services. But for this high-profile public-transit project, we immediately eliminated providers that offered solutions based simply on virtual machine hosting, and of all the providers we looked at, Windows Azure was the only one that actually integrated the work we wanted to do with the tools and technologies we use. A big decision factor for us was the Microsoft SQL Azure database management service. With a familiar database engine in the cloud, our developers did not have to adjust to new data management processes.

MSDN: Can you describe the solution you built with Windows Azure?

Ruzicka: We built the Public Transit Data Community (PTDC), an external application programming interface that uses computing and storage resources in Windows Azure and SQL Azure to combine heterogeneous data feeds from transportation agencies around the United States into a variety of open formats. It exposes the data as a web service that developers can use to create desktop, web, and mobile applications like trip planning tools, interactive route maps, and live information and notification services. PTDC is a nationwide one-stop data shop that developers can use to build public transportation applications that can work across multiple geographical areas.

Figure 1: EastBanc Technologies used Windows Azure to build the Public Transit Data Community (PTDC), a data service that developers can use to create applications like trip planning tools and live information services for devices such as Windows Phone 7.

MSDN: How do you and your customers benefit from using Windows Azure?

Popov: We're using Windows Azure to connect public transportation agencies with a development community that creates applications to enhance and promote public transit services. With PTDC, transportation authorities, EastBanc Technologies, other ISVs, and independent developers around the world can build innovative applications for computers, smartphones, and other mobile devices that can streamline urban navigation and give commuters more transit options. These options can help make commutes more predictable, save time, and enhance the commuting experience, which makes public transportation more attractive and increases ridership. 

Ruzicka: For us, our startup, operational, and maintenance costs are low compared to managing an on-premises infrastructure or using a local hosting provider, and we have quick, easy, pay-as-you-go scalability. Best of all, with Windows Azure, our team can concentrate on what they do best (software development) and not on solving maintenance headaches.

To read more Windows Azure customer success stories, visit: www.windowsazure.com/evidence.


The Windows Azure Team reminded developers Now Available: Updated Windows Azure Platform Training Kit Covering All the Latest Features and Enhancements on 12/16/2010:

To help you understand how to use the new Windows Azure features and enhancements, an updated version of the Windows Azure Platform training kit is now available.  This version includes several new and updated hands-on labs (HOLs), demo scripts, and presentations for the Windows Azure SDK and Windows Azure Tools for Visual Studio release 1.3 and the new Windows Azure Management Portal.  You can download the training kit here or walk through the hands-on labs on MSDN here.

image722322This release also includes hands-on labs that were updated in late October and November 2010 to demonstrate some of the new Windows Azure AppFabric services that were announced at PDC10, including Windows Azure AppFabric Access Control, Caching Service, and the Service Bus.

imageIn addition to updating all content to use the new Windows Azure SDK, several new presentations have been added:

  • Identity and Access Control in the Cloud
  • Introduction to SQL Azure Reporting
  • Advanced SQL Azure
  • Windows Azure Marketplace DataMarket
  • Managing, Debugging, and Monitoring Windows Azure
  • Building Low Latency Web Applications
  • Windows Azure AppFabric Service Bus
  • Windows Azure Connect
  • Moving Applications to the Cloud with VM Role

To make it easier for you to review and use content in the training kit without having to download the entire package, we'll continue to publish the HOLs directly to MSDN.  You can browse all of the HOLs here.

The new WAPTK release solved most of the problems reported in the update of 12/16/2010 to my Strange Behavior of Windows Azure Platform Training Kit with Windows Azure SDK v1.3 under 64-bit Windows 7 post.


Christian Weyer explained Sending emails from Windows Azure using Exchange Online web services (BPOS, for the search engines) in a 12/13/2010 post:

image There already have been several blog posts (e.g. 1, 2, 3) about why and how to send emails from an Azure-hosted application.

I just wanted to summarize the essence and show some code on how to send email from Azure code via Exchange Online web services if you have an Exchange Online email subscription.

Turns out I was able to register a nice domain for my Exchange Online trial: windowsazure.emea.microsoftonline.com. 

imageSo, here is the essential code snippet to send an email via EWS (Exchange Web Services) by leveraging the EWS Managed API 1.1 (get the download here):

var service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
service.Url = new Uri("https://red002.mail.emea.microsoftonline.com/ews/exchange.asmx");
service.Credentials = new WebCredentials(userName, password);

var message = new EmailMessage(service);
message.ToRecipients.Add("joe@doe.com");
message.From = new EmailAddress("foobarbaz@windowsazure.emea.microsoftonline.com");
message.Subject = "Hello EMail - from Windows Azure";
message.Body = new MessageBody(BodyType.HTML, "Email from da cloud :)");
message.SendAndSaveCopy();

In the code above I am sending the email via the EWS host for Europe – you may need different URLs for your location:

Asia Pacific (APAC): https://red003.mail.apac.microsoftonline.com
Europe, the Middle East, and Africa (EMEA): https://red002.mail.emea.microsoftonline.com
North America: https://red001.mail.microsoftonline.com

Hope this helps.


<Return to section navigation list> 

Visual Studio LightSwitch

image2224222No significant articles today.

 


<Return to section navigation list> 

Windows Azure Infrastructure

Srinivasan Sundara Rajan asserted “Traditional HA architectures [are] made redundant on cloud” in a preface to his Shrinking World of High Availability Skills post of 12/17/2010:

High Availability in the Data Center
High Availability (HA) is the term used to describe systems that run and are available to customers more or less all the time.

HA is basically meant for failover protection of the systems, whereby if the primary server fails, the standby machine takes over without interruption to the customers.

However, enterprises have spent a lot of money and time architecting HA strategies and implementations, as there are multiple options and each of them differs with respect to the individual operating systems and hardware.

Some of the strategies are :

  • Idle Standby: In this configuration, one system will be the primary server, and the second system will be idle or in standby mode, ready to take over the work load if there is a failure in the primary server.
  • Mutual Takeover: In this configuration, each system acts as an HA alternative of the other system. If there is a failure, the other system should  continue with its primary workload as well as the workload of the failed server.

Complexities in HA Setup in Traditional Data Centers
While high availability has always been available to enterprises, it has come with cost and complexity, as individual vendor configurations are costly and difficult to implement without proper consulting support. Here are some of the popular HA options available to enterprises in a non-cloud world.

High Availability Cluster Multi-Processing (HACMP)
HACMP (High Availability Cluster Multiprocessing) is IBM's solution for high-availability clusters on the AIX Unix IBM System p platforms. While this is a proven and robust platform, it involves a lot of setup tasks, and testing is always complex and time-consuming.

  • The servers need to be set up in HACMP ES clusters so that they exchange heartbeats
  • Disks and adapters need to be mirrored accordingly to support the failover configuration
  • For applications like databases, both servers have to have access to common installation directories and other log files
  • Some manual restarting operations may be needed
  • Configuration files need to be updated with the IP Address and names of servers so that the failover operations can be completed

Microsoft Cluster Service (MSCS)
Microsoft Cluster Service (MSCS) is a feature of Windows Operating systems. It helps to connect multiple servers into a cluster for high availability. It involves the same set of complex tasks:

  • Set up private and public network
  • Set up shared-everything common storage
  • Set up protocol components
  • Install and configure other components

Veritas Cluster Server
Veritas Cluster Server can protect everything from a single critical database instance to very large multi-application clusters in networked storage environments. Like the other clustering products, setting it up involves multiple complex tasks.

  • Create disk groups and shared storage
  • Set up Global Atomic Broadcast facility
  • Set up enterprise agents
  • Create resources, resources types and resource groups

How Cloud Abstracts the Complexities of HA Setup
With rapid provisioning, virtual server management, and migration of VM workload tenants, cloud computing makes much of the complex HA setup work mentioned above redundant and helps organizations concentrate on core business processes rather than on non-functional needs. Let us see how some of the top cloud platforms handle High Availability without much involvement from the cloud consumer.

Amazon Cloud Platform
The Amazon Cloud Platform consists of EC2, EBS, S3 and  has several out-of-the-box features to support high availability. Amazon EC2 provides the ability to place instances in multiple locations. Amazon EC2 locations are composed of Regions and Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same region. By launching instances in separate Availability Zones, you can protect your applications from failure of a single location. Regions consist of one or more Availability Zones, are geographically dispersed, and are in separate geographic areas or countries.

  • The Amazon EC2 Service Level Agreement commitment has 99.95% availability for each Amazon EC2 Region.
  • Amazon EBS volumes offer greatly improved durability over local Amazon EC2 instance stores, as Amazon EBS volumes are automatically replicated on the back end (in a single Availability Zone).
  • You can build an application across multiple Availability Zones that will be protected against the loss of an entire physical location.
  • Other concepts such as Elastic IP and EBS, S3 also support high availability configurations.
  • The following diagram gives a view of High Availability in the Amazon Cloud Platform without any specific setup from the cloud consumer. Diagram courtesy of Amazon.

Windows Azure Platform
imageWindows Azure provides on-demand compute and storage capabilities to host, scale, and manage Web applications and services on the Internet hosted in Microsoft data centers.

  • The physical hardware resources are abstracted away and exposed as compute resources ready to be consumed by cloud applications. Physical storage is abstracted with storage resources and exposed through well-defined storage interfaces.
  • Each instance of the application is monitored for availability and scalability and automatically managed.
  • If an application in an instance goes down, the Fabric controller will be notified and another instance in another virtual machine (VM) will be instantiated with limited impact to end users
  • The foundation of SQL Azure is Microsoft SQL Server - proven enterprise database technology that Microsoft has further enhanced to support a scalable cloud platform. In addition, SQL Azure automatically offers built-in server and storage redundancy, a data replication solution for built-in high availability, and transparent application failover to ensure minimal disruption.

Google Apps Engine for Business
App Engine for Business enables you to build your enterprise applications on the same scalable systems that power Google applications. App Engine for Business provides all the ease of use and flexibility of App Engine with more power to manage enterprise use cases

  • 99.9% Service Level Agreement
  • Bigtable, which is the data storage layer for the Google platform, also provides high availability

How Cloud Abstracts the Complexities of HA Setup
Most of the time-consuming tasks in traditional high-availability architectures become redundant in the cloud; enterprises need not spend time and money on these architectures and can instead concentrate on business capability needs.


Scott Campbell reported IDC: 15 Percent of IT Spending Will Be Tied to Cloud in 2011 in a 12/15/2010 article for Computer Reseller News (CRN):

image Cloud computing will be moving from a talking point to just another way to deliver IT in 2011 as one of the key transformation technologies in the marketplace, according to Stephen Minton, vice president of worldwide IT markets at IDC.

"Mobile apps are taking off but cloud computing is the biggest story for the next 12 months," Minton told an audience of Wall Street investors and channel executives at the 2010 Raymond James IT Supply Chain Conference in New York. "We’ve been talking about cloud for a number of years but it’s moving from early adoption to mainstream adoption and it will create whole new factors like creating data centers and how you analyze the data in the cloud."

IDC estimates that by 2011, 15 percent of all IT revenue will be tied to the cloud, either directly or through supporting infrastructure, Minton said.

"Public cloud services adoption is growing 30 percent and it will also be a pretty big year for private cloud deployment," he said. "We still believe the long term trend favors public, but he reality is for large enterprises concerned about security issues, companies are going to be investing and getting the best of both worlds."

One area to watch is the development of platform-as-a-service (PaaS), which could help one or more companies develop into the "next Microsoft" by delivering best-in-class cloud-based applications, Minton said. "It could also be Microsoft or Oracle or Google," he said. "It's the battle for hybrid cloud management. [Companies are] looking to establish leadership positions that can help customers on the mid and large side intelligently move from initial [private cloud] adoption to leverage the benefits of moving more into a public cloud model."

Meanwhile, the next generation of enterprise software will be designed from the ground up with the cloud in mind, Minton said. "As enterprises develop new applications, they’re doing it with cloud delivery at the forefront," he said.

Overall, IDC projects IT spending to increase 7 percent this year compared to the previous year, a healthy increase from a 4-percent decline in 2009 compared to 2008. A strong fourth-quarter surge could increase that growth to a double-digit percentage increase for the first time since 2000, Minton said.

Hardware sales have increased 13 percent this year, with only half of that due to pent-up demand from the sluggish economy in 2009, Minton said. The remainder is due to the "deluge of data" in which businesses need to better manage the reliability of their networks and increase storage capacity, he said. "And the cloud is a driver itself," he added.

For 2011, IDC projects IT spending growth of 4 percent to 5 percent, Minton said. "Softening indicators are software and services. The good news is the risk of a double-dip recession is significantly lower than it was a few months ago," Minton said.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

image

No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

The Cloud Security Alliance announced CSA Cloud Controls Matrix V1.1 is Released on 12/17/2010:

image The Cloud Security Alliance Cloud Controls Matrix (CCM) is specifically designed to provide fundamental security principles to guide cloud vendors and to assist prospective cloud customers in assessing the overall security risk of a cloud provider. The CSA CCM provides a controls framework that gives detailed understanding of security concepts and principles that are aligned to the Cloud Security Alliance guidance in 13 domains.

The foundations of the Cloud Security Alliance Controls Matrix rest on its customized relationship to other industry-accepted security standards, regulations, and controls frameworks such as the ISO 27001/27002, ISACA COBIT, PCI, and NIST, and will augment or provide internal control direction for SAS 70 attestations provided by cloud providers. As a framework, the CSA CCM provides organizations with the needed structure, detail and clarity relating to information security tailored to the cloud industry.

image The CSA CCM strengthens existing information security control environments by emphasizing business information security control requirements, reduces and identifies consistent security threats and vulnerabilities in the cloud, provides standardized security and operational risk management, and seeks to normalize security expectations, cloud taxonomy and terminology, and security measures implemented in the cloud.

Spreadsheet

The Cloud Controls Matrix is part of the CSA GRC Stack.


<Return to section navigation list> 

Cloud Computing Events

Brian Hitney posted an Azure Firestarter Fall 2010 - Session 3 (of 3) video with links to Session 1 and 2 videos on 12/17/2010:

image

Is cloud computing still a foggy concept for you? Have you heard of Windows Azure, but aren’t quite sure of how it applies to you and the projects you’re working on?

image

Windows Azure was first announced at the Microsoft PDC in 2008.  A year later, shortly after PDC 09, Windows Azure went into production.  At PDC 2010 in Redmond, a whole slew of new features for the Windows Azure platform were announced.

In November & December 2010, the Microsoft US Cloud team hosted a series of Windows Azure Firestarter events in several US East Coast cities.  Brian Hitney, Jim O’Neil, and Peter Laudati combined presentations and hands-on exercises to demystify this disruptive (and super-hyped!) technology and to provide clarity as to where the cloud and Windows Azure can take you.

We recorded the last event on December 9th in the Washington, DC area, and now it is here for your learning pleasure!  So, pop on some headphones and listen & learn at your own pace! 

There are three recorded sessions:

Session 1: Getting Your Head Into the Cloud by Peter Laudati (presented by Brian Hitney)

Ask ten people to define “Cloud Computing,” and you’ll get a dozen responses. To establish some common ground, we’ll kick off the event by delving into what cloud computing means, not just by presenting an array of acronyms like SaaS and IaaS , but by focusing on the scenarios that cloud computing enables and the opportunities it provides. We’ll use this session to introduce the building blocks of the Windows Azure Platform and set the stage for the two questions most pertinent to you: “how do I take my existing applications to the cloud?” and “how do I design specifically for the cloud?”

Session 2: Migrating Your Applications to the Cloud by Brian Hitney

How difficult is it to migrate your applications to the cloud? What about designing your applications to be flexible inside and outside of cloud environments? These are common questions, and in this session, we’ll specifically focus on migration strategies and adapting your applications to be “cloud ready.”

We’ll examine how Azure VMs differ from a typical server – covering everything from CPU and memory, to profiling performance, load balancing considerations, and deployment strategies such as dealing with breaking changes in schemas and contracts. We’ll also cover SQL Azure migration strategies and how the forthcoming VM and Admin Roles can aid in migrating to the cloud.

Session 3: Creating (and Adapting) Applications for the Cloud by Jim O’Neil

Windows Azure enables you to leverage a great deal of your Visual Studio and .NET expertise on an ‘infinitely scalable’ platform, but it’s important to realize the cloud is a different environment from traditional on-premises or hosted applications. Windows Azure provides new capabilities and features – like Azure storage and the AppFabric – that differentiate an application translated to Azure from one built for Azure. We’ll look at many of these platform features and examine tradeoffs in complexity, performance, and costs.

Note: There was an instructor-led hands-on lab at the Firestarter events.  However, we did not record this portion. The lab is based on the Azure @Home application, and features a fun exercise using distributed computing to help with medical research.  You can view previous screencasts about the lab, and follow along at home by visiting: http://distributed.cloudapp.net.

You can find all of the Firestarter session slides at the resource page on the US Cloud Connection site.  US Cloud Connection is our site for staying connected with the Microsoft Evangelists in the US focused on Windows Azure.


Dan Drew posted AzureFest on 12/14/2010:

Last Saturday I attended AzureFest at Microsoft Canada. Big thanks to Corey Fowler at ObjectSharp for the presentation! In addition to learning about Azure and meeting some great people, I was also inspired by the bizarre “development machines” that some of the attendees showed up with…


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

The A NoSQL Summer reading club posted a link on 12/17/2010 to Pat Helland’s Life beyond Distributed Transactions: an Apostate’s Opinion paper from the 3rd Biennial Conference on Innovative Data Systems Research (CIDR 2007) held January 7-10, 2007 at Asilomar, California. From the Abstract:

image Many decades of work have been invested in the area of distributed transactions including protocols such as 2PC, Paxos, and various approaches to quorum. These protocols provide the application programmer a façade of global serializability. Personally, I have invested a non-trivial portion of my career as a strong advocate for the implementation and use of platforms providing guarantees of global serializability.

image My experience over the last decade has led me to liken these platforms to the Maginot Line. In general, application developers simply do not implement large scalable applications assuming distributed transactions. When they attempt to use distributed transactions, the projects founder because the performance costs and fragility make them impractical. Natural selection kicks in …

Instead, applications are built using different techniques which do not provide the same transactional guarantees but still meet the needs of their businesses.

This paper explores and names some of the practical approaches used in the implementations of large-scale mission-critical applications in a world which rejects distributed transactions. We discuss the management of fine-grained pieces of application data which may be repartitioned over time as the application grows. We also discuss the design patterns used in sending messages between these repartitionable pieces of data.

The reason for starting this discussion is to raise awareness of new patterns for two reasons. First, it is my belief that this awareness can ease the challenges of people hand-crafting very large scalable applications. Second, by observing the patterns, hopefully the industry can work towards the creation of platforms that make it easier to build these very large applications.

When he presented the paper, Pat had moved to Amazon.com from a career at Microsoft. He’s been back at Microsoft for more than two years. His current blog is here.


Lee Geishecker, Charlie Burns and Bruce Guptill co-authored a Dell Acquires Compellent: A Compelling Acquisition for a Traditional IT Path Toward Cloud Research Alert of 12/17/2010 for Saugatuck Technology:

image In a move that clearly illustrates the emerging importance of Cloud IT to even the most traditional IT vendors – and another move toward the “Cloud and Hybrid” future for IT and IT providers – Dell Computer Corp. (“Dell”) announced this week that it has agreed to acquire data storage company Compellent Technologies for $27.75 per share in cash; the total equity value of the deal is $960 million, or $820 million net of Compellent's cash. Compellent’s 2009 revenues were $125.3M, a 38 percent increase over the previous year.

image Coming on the heels of Dell’s unsuccessful attempt to acquire storage vendor 3Par, the Compellent move confirms that Dell is jockeying to improve its position not only in storage, but toward a one-stop-shop approach to Cloud IT (“The Roiling Cloud”, 774RA, 24Aug2010).  And it confirms that Dell, along with other traditional IT Master Brands, will continue to pursue a Cloud strategy by following traditional IT / data center paths.

Please view the full article here (site registration required).


Jeff Barr (@jeffbarr) announced on 12/16/2010 that you can Run Oracle Applications on Amazon EC2 Now!:

image Earlier this year I discussed our plans to allow you to run a wide variety of Oracle applications on Amazon EC2 in the near future. The future is finally here; the following applications are now available as AMIs for use with EC2:

  • Oracle PeopleSoft CRM 9.1 PeopleTools
  • Oracle PeopleSoft CRM 9.1 Database
  • Oracle PeopleSoft ELM 9.1 PeopleTools
  • Oracle PeopleSoft ELM 9.1 Database
  • Oracle PeopleSoft FSCM 9.1 PeopleTools
  • Oracle PeopleSoft FSCM 9.1 Database
  • Oracle PeopleSoft PS 9.1 PeopleTools
  • Oracle PeopleSoft PS 9.1 Database
  • Oracle E-Business Suite 12.1.3 App Tier
  • Oracle-E-Business-Suite-12.1.3-DB
  • JD Edwards Enterprise One - ORCLVMDB
  • JD Edwards Enterprise One - ORCLVMHTML
  • JD Edwards Enterprise One - ORCLVMENT

The application AMIs are all based on Oracle Enterprise Linux and run on 64-bit high-memory instances atop the Oracle Virtual Machine (OVM). You can use them as-is or you can create derivative versions tuned to your particular needs. We'll start out in one Region and add more in the near future.

image As I noted in my original post, you can use your existing Oracle licenses at no additional license cost or you can acquire new licenses from Oracle. We implemented OVM support on Amazon EC2 with hard partitioning so Oracle's standard partitioned processor licensing models apply.

All of these applications are certified and supported by Oracle. Customers with active Oracle Support and Amazon Premium Support will be able to contact either Amazon or Oracle for support.

You can find the Oracle AMIs in the Oracle section of the AWS AMI Catalog.


<Return to section navigation list> 
