Sunday, September 26, 2010

Windows Azure and Cloud Computing Posts for 9/23/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

•• Update 9/26/2010: Articles marked •• are new to this update.
• Update 9/25/2010: Articles marked • are new to that update.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available as HTTP downloads at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

•• Ryan Dunn (@dunnry) and Steve Marx (@smarx) produced a 00:41:10 Cloud Cover Episode 27 - Combining Roles and Using Scratch Disk on 9/26/2010:


Join Steve and Ryan [above] each week as they cover the Microsoft cloud. You can follow and interact with the show at @cloudcovershow.

In this episode:  

  • Learn how to effectively use the local disk (scratch disk) on your running instances.
  • Discover how to combine your roles in your Windows Azure services

Show Links:

Announcing the 'Powered by Windows Azure' logo program
Announcing Compute Hours Notifications for Windows Azure Customers
ASP.NET Security Vulnerability
Windows Azure Companion
Maximizing Throughput in Windows Azure Part 2 [by Rob Gillen]
Web Image Capture in Windows Azure

Windows Azure’s local disk (scratch disk, not persistent) isn’t the same as Windows Azure Drive (persistent), but this seemed to me to be the appropriate location for the article.
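For readers wondering how the scratch disk gets used in code, here's a minimal sketch (the resource name, size, and helper are hypothetical, not from the episode): you declare a LocalStorage resource in ServiceDefinition.csdef and resolve its path at run time with RoleEnvironment.GetLocalResource.

// ServiceDefinition.csdef fragment (hypothetical name and size):
//   <LocalResources>
//     <LocalStorage name="ScratchSpace" sizeInMB="1024" cleanOnRoleRecycle="true" />
//   </LocalResources>

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

static class ScratchDisk
{
    public static string GetScratchFilePath(string fileName)
    {
        // Resolve this instance's scratch directory. Its contents are not persisted across
        // role recycles or instance moves; use blob storage or Windows Azure Drive for that.
        LocalResource scratch = RoleEnvironment.GetLocalResource("ScratchSpace");
        return Path.Combine(scratch.RootPath, fileName);
    }
}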


<Return to section navigation list> 

SQL Azure, Codenames “Dallas” and “Houston,” and OData

Eyal Vardi updated the CodePlex download of his OData Viewer Tool on 9/23/2010:

Project Description

I buil[t] this tool to help me build OData URL quer[ies] and see the result.
( for example http://e4d.com/Courses.svc/Courses?$orderby=Date/ )

Please send feedback...

Key Features:

  1. URL IntelliSense
  2. URL Tooltip
  3. Data Grid View
  4. XML Atom View
  5. Data Service Metadata View

Source code is available in a separate download: Src - OData Viewer Tool.


• Kevin Blakey provided on 9/25/2010 the slide deck and sample code for his South West Code Camp - Presentation on OData:

This is the contents of my presentation from the South West Code Camp in Estoro. The presentation is on Building and Consuming OData Custom Providers. In this presentation I demo a basic OData provider using the Entity Framework provider. I show how to construct the provider and then show how easy it is to consume the data via Silverlight. In the second half of the presentation, I demonstrate how to create a provider using the reflection capabilities of .NET. This demo[nstrate]s how to create a provider for data that is not in an EF-compatible database.

Download: OData.zip


• Dan Jones explains frequent release scheduling for SQL Server Management Studio (SSMS) and preventing DOS attacks on Project “Houston” in this 9/24/2010 post to TechNet’s SQL Server Experts blog:

Since my team started work on Project Houston hardly a day goes by that I’m not in some discussion about the Cloud. It also hits me when I go through my RSS feeds each day: cloud this and cloud that. There is obviously a significant new trend here, it may even qualify as a wave; as in the Client/Server Wave or the Internet Wave or the SOA Wave, to name a few – ok, to name all the ones I can name. Almost 100% of what I read is focused on how the Cloud Wave impacts IT and the “business” in general. Don’t get me wrong, I completely agree it’s real and it’s going to have a profound impact. I think this posting by my friend Buck Woody (and fellow BIEB blogger) sums it up pretty succinctly. The primary point being it doesn’t matter what size your business is today or tomorrow, the Cloud will impact you in a very meaningful way.

What I don’t see much discussion about is how the Cloud Wave (is it just me or does that sound like a break dancing move?) changes the way software vendors (ISVs) develop software. In addition to Project Houston my team is responsible for keeping SQL Server Management Studio (SSMS) up with the SQL Azure feature set. If we step back for a second and look at what we have, we have a product that runs as a service on Windows Azure (running in multiple data centers across the globe) and the traditional thick client application that is downloaded, installed, and run locally. These are very different beasts but they are both challenged with keeping pace with SQL Azure.

SSMS has traditionally been on the release rhythm of the boxed product. This means a release every three years. The engineering system we use to develop software is finely tuned to the three year release cycle. The way we package up and distribute the software is also tuned to the three year release cycle. It’s a pretty good machine and by and large it works. When I went to the team who manages our release cycle and explained to them that I needed to release SSMS more frequently, as in at least once a quarter if not more often, they didn’t know what to say. This isn’t to say these aren’t smart people, they are. But they had never thought about how to adjust the engineering system to release a capability like SSMS more often than the boxed product, let alone every quarter. I hate to admit it but it took a couple of months of discussion just to figure out how we could do this. It challenged almost every assumption made about how we develop, package and release software. But the team came through and now we’re able to release SSMS almost any time we want. There are still challenges but at least we have the backing of the engineering system. I’m pretty confident we would have eventually arrived at this solution even without SQL Azure. But given the rapid pace of innovation in the Cloud we were forced to arrive at it sooner.

Project Houston is entirely different. There is no download for Project Houston, it only runs in the Cloud. The SQL Azure team runs a very different engineering system (although it is a derivation) than what we run for the boxed product. It’s still pretty young and not as tuned but it’s tailored to suit the needs of a service offering. When we first started Project Houston we tried to use our current engineering system. During development it worked pretty well. However, when we got to our first deployment it was a complete mess. We had no idea what we were doing. We worked with an Azure deployment team and we spoke completely different languages. It took a few months of constant discussion and troubleshooting to figure out what we were doing wrong and how we needed to operate to be successful. Today we snap more closely with the SQL Azure engineering system and we leverage release resources on their side to bridge the gap between our dev team and the SQL Azure operations team. It used to take us weeks to get a deployment completed. Now we can do it, should we have to, in a matter of hours. That’s a huge accomplishment by my dev team, the Azure ops team, and our release managers.

There’s another aspect to this as well. Releasing a product that runs as a service introduces an additional set of requirements. One in particular completely blindsided us. Sure, when I tell you it’ll be obvious, but it caught my team completely off guard. As we get close to releasing any software (pre-release or GA) we do a formal security review. We have a dedicate[d] team that leads this. It’s a very thorough investigation of the design in an attempt to identify problems. And it works – let me just leave it at that. In the case of Project Houston we hit a situation no one anticipated. The SQL Azure gateway has built-in functionality to guard against DOS (Denial of Service) Attacks. Project Houston is running on the opposite side of the gateway from SQL Azure, it runs on the Windows Azure platform. Since Project Houston handles multiple users connecting to different servers & databases there’s an opportunity for a DOS. During the security review the security team asked how we were guarding against a DOS. As you can imagine, our response was a blank stare and the words D-O-what were uttered a few times.

We had been heads down for 10 months with never a mention of handling a DOS. We were getting really close to releasing the first CTP. We could barely spell D-O-S much less design and implement a solution in the matter of a few weeks. The team jumped on it calling on experts from across the company. We reviewed 5 or 6 different designs each with their own set of pros and cons. The team finally landed on a design and got it implemented. We did miss the original CTP date but not by much.

You’re probably an IT person wondering why this is relevant to you. The point in all this is simple. When you’re dealing with a vendor who claims their product is optimized for the Cloud or designed for the Cloud, do they really know what they’re talking about, or did the marketing team simply change the product name and redesign the logo and graphics to make it appear Cloud enabled? Moving from traditional boxed software to the Cloud is easy. Doing it right is hard – I know, I’m living through it every day.

I’d be interested in the details of the DOS-prevention design and implementation for Codename “Houston” and I’m sure many others would be, too.


Jani Järvinen suggested “Learn about the OData protocol and how you can combine SQL Azure databases with WCF Data Services to create OData endpoints that utilize a cloud-based database” in his Working with the OData protocol and SQL Azure article of 9/24/2010 for the Database Journal:

OData is an interesting new protocol that allows you to expose relational database data over the web using a REST-based interface. In addition to just publishing data over an XML presentation format, OData allows querying database data using filters. These filters allow you to work with portions of the database data at a time, even if the underlying dataset is large.

Although these features – web-based querying and filtering – alone would be very useful for many applications, there is more to OData. In fact, OData allows you to also modify, delete and insert data, if you wish to allow this to your users. Security is an integral part of OData solutions.

In ASP.NET applications OData support is implemented technically using WCF Data Services technology. This technology takes for instance an ADO.NET Entity Framework data model, and exposes it to the web with settings you specify. Then, either using an interactive web browser or a suitable client application, you can access the database data over the HTTP protocol.

Although traditional on-premises ASP.NET and WCF applications will continue to be workhorses for many years, cloud-based applications are galloping into view. When working with Microsoft technologies, cloud computing means the Azure platform. The good news is that you can also expose data sources using OData from within your Azure applications.

This article explores the basics of OData, and more importantly, how you can expose OData endpoints from within your applications. Code shown is written in C#. To develop the applications walked through in this article, you will need Visual Studio 2010 and .NET 4.0, an Azure account, and the latest Azure SDK Kit, currently in version 1.2 (implicitly, this requires Windows Vista or Windows 7). See the Links section for download details.

Introduction to OData

OData (“Open Data”) is, as its name implies, an open standard for sharing database data over the Internet. Technically speaking, it uses the HTTP protocol to allow users and applications to access the data published through the OData endpoint. And like you learned earlier, OData allows intelligent querying of data directly with filters specified in the URL addresses.

In addition to reading, the OData protocol also allows modifying data on the server, according to specified user rights. Applications like Excel 2010 (with its PowerPivot feature) and most web browsers can already access OData directly, and so can libraries in Silverlight 4, PHP and Windows Phone 7, for instance.

All this sounds good, but how do OData based systems tick? Take a look at Figure 1, where a WCF Data Service application written with .NET 4.0 shows an OData endpoint in a browser.


Figure 1: An OData endpoint exposed by a WCF Data Service in a web browser

The application exposes the Northwind sample database along with its multiple tables. For instance, to retrieve all customer details available in the Customers table, you could use the following URL:

http://myserver/NorthwindDataService.svc/Customers

This is a basic query that fetches all records from the table. The data is published in XML based ATOM format, which is commonly used for RSS feeds for example. Here is a shortened example to show the bare essentials:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<feed xml:base="http://myserver/NorthwindDataService.svc/" 
xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" 
xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" 
xmlns="http://www.w3.org/2005/Atom">
  <title type="text">Customers</title>
  <id>http://myserver/NorthwindDataService.svc/Customers</id>
  <updated>2010-09-18T04:12:14Z</updated>
...
    <content type="application/xml">
      <m:properties>
        <d:CustomerID>ALFKI</d:CustomerID>
        <d:CompanyName>Alfreds Futterkiste</d:CompanyName>
        <d:ContactName>Maria Anders</d:ContactName>
        <d:ContactTitle>Sales Representative</d:ContactTitle>
        <d:Address>Obere Str. 57</d:Address>
        <d:City>Berlin</d:City>
        <d:Region m:null="true" />
        <d:PostalCode>12209</d:PostalCode>
        <d:Country>Germany</d:Country>
        <d:Phone>030-0074321</d:Phone>
        <d:Fax>030-0076545</d:Fax>
      </m:properties>
    </content>
...

Using an XML based format means that the data downloaded is quite verbose, using a lot more bandwidth than for example SQL Server’s own TDS (Tabular Data Stream) protocol. However, interoperability is the key with OData.

If the Customers table contained hundreds or thousands of records, retrieving all of them with a single query would become impractical. To filter the data and, for example, return all customers whose company names start with the letter A, use the following URL:

http://myserver/NorthwindDataService.svc/Customers?$filter=startswith(CompanyName, 'A') eq true

In addition to simple queries like this, you can also select, for instance, the top five or ten records, sort based on a given column (both ascending and descending), and use logical operators such as NOT, AND and OR. This gives you a lot of flexibility in working with data sources exposed through the OData protocol.
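For instance, hypothetical variations on the same Northwind service (these URLs are illustrative additions, not from the original article) might look like the following:

http://myserver/NorthwindDataService.svc/Customers?$top=5

http://myserver/NorthwindDataService.svc/Customers?$orderby=CompanyName desc

http://myserver/NorthwindDataService.svc/Customers?$filter=Country eq 'Germany' and City eq 'Berlin'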

Different Ways to Work with OData on the Cloud

When speaking of OData and the cloud, you should be aware that there is no single architecture that alone fits all needs. In fact, you can easily come up with different options, Figure 2 showing you three of them. The two right-most options have one thing in common: they run the actual data source (the SQL Server database) inside the cloud, but the location of the WCF Data Service varies.


Figure 2: Three common architectures for working with OData

That is, the WCF data service can run on your own servers and fetch data from the database on the cloud. This works as long as there is a TCP/IP connection to Microsoft’s data centers from your own location. This is important to keep in mind, as you cannot guarantee connection speeds or latencies when working through the open Internet.

To create a proof-of-concept (POC) application, you could start with the hybrid option shown in the previous figure. In this option, the database sits in the cloud, and your application on your own computers or servers. To develop such an application, you will need to have an active Azure subscription. Using the web management console for SQL Azure (sql.azure.com), create a suitable database to which you want to connect, and then migrate some data to the newly created database (see another recent article “Migrating your database applications to SQL Azure” by Jani Järvinen at DatabaseJournal.com for tips on migration issues).

While you are working with the SQL Azure management portal, take a note of the connection string to your database. You can find the connection string under the Server Administration web management page, selecting the database of your choice and finally clicking the Connection Strings button at the bottom. Usually, this is in the format like the following:

Server=tcp:nksrjisgiw.database.windows.net;Database=Northwind;User ID=username@nksrjisgiw;Password=mysecret;Trusted_Connection=False;Encrypt=True;

Inside Visual Studio, you can use, for instance, the Server Explorer window to test access to your SQL Azure database. Or, you could use SQL Server Management Studio to do the same (Figure 3). With an interactive connection, it is easy to make sure everything (such as Azure firewall policies) is set up correctly.


Figure 3: SQL Server Management Studio 2008 R2 can be used to connect to SQL Azure
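If you prefer to verify connectivity programmatically, a quick check along the following lines also works; the connection string values and the Customers table below are placeholders for your own:

using System;
using System.Data.SqlClient;

class ConnectivityCheck
{
    static void Main()
    {
        // Placeholder SQL Azure connection string; substitute your own server and credentials.
        var connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=Northwind;" +
            "User ID=username@yourserver;Password=password;Encrypt=True;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn))
            {
                Console.WriteLine("Customers: {0}", cmd.ExecuteScalar());
            }
        }
    }
}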

Creating a WCF Data Service Wrapper for an SQL Azure Database

Creating an OData endpoint using WCF technologies is easiest with Visual Studio 2010. Start or open an ASP.NET web project of your choice, for instance a traditional ASP.NET web application or a newer ASP.NET MVC 2 application. A WCF Data Service, which in turn is able to publish an OData endpoint of the given data source, requires a suitable data model within your application.

Although you can use for instance LINQ to Sql as your data model for a WCF Data Service, an ADO.NET Entity Framework data model is a common choice today. An Entity Framework model works well for the sample application, so you should continue by adding an Entity Framework model to your ASP.NET project. Since you want to connect to your SQL Azure database, simply let Visual Studio use the connection string that you grabbed from the SQL Azure web management portal. After this, you should have a suitable data model inside your project (Figure 4).


Figure 4: A simple ADO.NET Entity Framework data model (entity data model) in Visual Studio

After the entity data model has been successfully created, you can publish it through a WCF Data Service. From Visual Studio’s Project menu, choose the Add New Item command, and from the opening dialog box, navigate to the Web category on the left, and choose the WCF Data Service icon (the one with a person in it). After clicking OK, Visual Studio will add a set of files and references to your project. Visual Studio will also open a skeleton code file for you, which you then must edit.

Basically, you must include your data model to the class definition, and then set various options for things like protocol version, security, access control, and so on. Once edited the finished code file should look like the following:

using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.ServiceModel.Web;
using System.Web;

namespace SqlAzureODataWebApplication
{
  // The generic type parameter is the Entity Framework container class generated
  // for the Northwind model (assumed here to be named NorthwindEntities).
  public class NorthwindDataService :
    DataService<NorthwindEntities>
  {
    public static void InitializeService(
      DataServiceConfiguration config)
    {
      config.SetEntitySetAccessRule("*",
        EntitySetRights.AllRead);
      config.DataServiceBehavior.MaxProtocolVersion =
        DataServiceProtocolVersion.V2;
    }
  }
}

At this point, your application is ready to be tested. Run your application, and your web browser should automatically launch. In your browser, you should see an XML presentation of your data, coming straight from the SQL Azure cloud database (Figure 5)!


Figure 5: An example OData endpoint showing data coming directly from SQL Azure [You must turn off Feed Reading in Tools | Options | Content to view OData documents in IE’s XML stylesheet.]

Be also sure to test out URL addresses like the following (note that IE cannot display the first URL directly, but view the source of the resulting page to see the details):

http://localhost:1234/NorthwindDataService.svc/Customers('ALFKI')

http://localhost:1234/NorthwindDataService.svc/Customers?$filter=Country eq 'USA'

http://localhost:1234/NorthwindDataService.svc/Customers?$orderby=City&$top=10

Query strings can have lots of power, can’t they?

Conclusion

OData is a new protocol for sharing data over the web. Although still much in its infancy, the protocol has lots of potential, especially when it comes to Microsoft platforms. For developers, OData gives a cross-platform and cross-language way of sharing relational data using web-friendly technologies such as HTTP, JSON and XML/Atom.

For .NET developers, OData is easy to work with. Publishing an OData endpoint is done using WCF Data Service technology, and with Visual Studio 2010, this doesn’t require many clicks. Internally inside your application, you will need to be using some object-oriented database technology, such as LINQ to SQL or ADO.NET Entity Framework.

On the client side, OData-published data can be explored in multiple ways. The good news is that .NET developers have many options: Silverlight 4 contains direct support for WCF Data Services, and in any other .NET application type, you can import an OData endpoint directly as a web service reference. This allows you to use technologies such as LINQ to query the data. Excel 2010 can do the trick for end-users.
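As a rough illustration (not from the article), consuming the Northwind endpoint from a console application via a generated service reference might look like the following; the NorthwindEntities context name is an assumption based on the generated code:

using System;
using System.Linq;

// Assumes a service reference to the OData endpoint has been added, generating a
// DataServiceContext-derived class named NorthwindEntities with a Customers entity set.
class ODataClient
{
    static void Main()
    {
        var context = new NorthwindEntities(
            new Uri("http://localhost:1234/NorthwindDataService.svc/"));

        // LINQ queries are translated into OData query options ($filter, $orderby, ...).
        var germanCustomers = from c in context.Customers
                              where c.Country == "Germany"
                              orderby c.City
                              select c;

        foreach (var customer in germanCustomers)
            Console.WriteLine("{0} ({1})", customer.CompanyName, customer.City);
    }
}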

Combining OData endpoints with a cloud-based database can give you several benefits. Firstly, excellent scalability allows you to publish large datasets over the OData protocol, meaning that you are not limited to data size of just a gigabyte or two. Secondly, decreased maintenance costs and the possibility to rapidly create databases for testing purposes are big pluses.

In this article, you learned about the OData protocol, and how you can combine SQL Azure databases with WCF Data Services to create OData endpoints that utilize a cloud-based database. You also saw that Visual Studio 2010 is a great tool for building these kinds of modern applications. It doesn’t require many clicks anymore to be able to use cloud databases in your applications, and to publish that data over the web. Happy hacking!

Links


• Channel9 offers an Intro to Dallas course as part of its Windows Azure Training Course with team members David Aiken (@thedavidaiken), Ryan Dunn (@dunnry), Vittorio Bertocci (@vibronet), and Zack Skyles Owens (@ZackSkylesOwens):

In this lab we will preview Microsoft Codename “Dallas”. Dallas is Microsoft’s Information Service offering which allows developers and information workers to find, acquire and consume published datasets and web services. Users subscribe to datasets and web services of interest and can integrate the information into their own applications via a standardized set of APIs. Data can also be analyzed online using the Dallas Service Explorer or externally using the PowerPivot Add-In for Excel.

Objectives

In this Hands-On Lab, you will learn how to:

  • Explore the Dallas Developer Portal and Service Explorer
  • Query a Dallas dataset using a URL
  • Access and consume a Dallas dataset and Service via Managed Code
  • Use the PowerPivot Add-In for Excel to consume and analyze data from Dallas datasets
Exercises

This Hands-On Lab comprises the following exercises:

  1. Exploring the Dallas Developer Portal and Service Explorer
  2. Querying Dallas Datasets via URL’s
  3. Consuming Dallas Data and Services via Managed Code
  4. Consuming Dallas Data via PowerPivot

Estimated time to complete this lab: 60 minutes.


Steve Yi announced patterns & practices’ Developing Applications for the Cloud on Azure online book is now available in this 9/24/2010 post to the SQL Azure team blog:

The Microsoft Patterns & Practices group has published the second free online book volume about Developing Applications for the Cloud on the Microsoft Windows Azure Platform. This book demonstrates how you can create a multi-tenant, Software as a Service (SaaS) application to run in the cloud by using the Windows Azure platform.


The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud.

The Working with Data in the Surveys Application chapter is especially interesting to me as it pertains to SQL Azure. Do you have questions, concerns, or comments? Post them below and we will try to address them.


Liam Cavanagh (@liamca) posted How to Sync Large SQL Server Databases to SQL Azure to the Sync Framework Team blog on 9/24/2010:

Over the past few days I have seen a number of posts from people who have been looking to synchronize large databases to SQL Azure and have been having issues. In many of these posts, people have looked to use tools like SSIS, BCP and Sync Framework and have run into issues such as SQL Azure closing the transaction due to throttling of the connection (because it took too long to apply the data) and occasionally local out-of-memory issues as the data is sorted.

For today’s post I wanted to spend some time discussing the subject of synchronizing large data sets from SQL Server to SQL Azure using Sync Framework. By large database I mean databases that are larger than 500MB in size. If you are synchronizing smaller databases you may still find some of these techniques useful, but for the most part you should be just fine with the simple method I explained here.

For this situation, I believe there are three very useful capabilities within the Sync Framework:

1) MemoryDataCacheSize: This helps to limit the amount of memory that is allocated to Sync Framework during the creation of the data batches and data sorting. This typically helps to fix any out-of-memory issues. In general I typically allocate 100MB (100000) to this parameter as the best place to start, but if you have larger or smaller amounts of free memory, or if you still run out-of-memory, you can play with this number a bit.

RemoteProvider.MemoryDataCacheSize = 100000;

2) ApplicationTransactionSize (MATS): This tells the Sync Framework how much data to apply to the destination database (SQL Azure in this case) at one time. We typically call this Batching. Batching helps us to work around the issue where SQL Azure starts to throttle (or disconnect) us if it takes too long to apply the large set of data changes. MATS also has the advantage of allowing me to tell sync to pick up where it left off in case a connection drops off (I will talk more about this in a future post) and has the advantage that it provides me the ability to get add progress events to help me track how much data has been applied. Best of all it does not seem to affect performance of sync. I typically set this parameter to 50MB (50000) as it is a good amount of data that SQL Azure can commit easily, yet is small enough that if I need to resume sync during a network disconnect I do not have too much data to resend.

RemoteProvider.ApplicationTransactionSize = 50000;

3) ChangesApplied Progress Event: The Sync Framework database providers have an event called ChangesApplied. Although this does not help to improve performance, it does help in the case where I am synchronizing a large data set. When used with ApplicationTransactionSize I can tell my application to output whenever a batch (or chunk of data) has been applied. This helps me to track the progress of the amount of data that has been sent to SQL Azure and also how much data is left.

RemoteProvider.ChangesApplied += new EventHandler<DbChangesAppliedEventArgs>(RemoteProvider_ChangesApplied);

When I combine all of this together, I get the following new code that I can use to create a command line application to sync data from SQL Server to SQL Azure. Please make sure to update the connection strings and the tables to be synchronized.

100+ lines of C# code omitted for brevity.
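Since the full listing is omitted here, the following is only a minimal sketch of how those three settings might fit together in a console application; the connection strings, the scope name, and the assumption that the sync scope is already provisioned are placeholders rather than Liam's actual code:

using System;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

class SyncToSqlAzure
{
    static void Main()
    {
        // Placeholder connection strings; the sync scope ("MyScope") is assumed to be provisioned already.
        var localConn = new SqlConnection(
            "Data Source=.;Initial Catalog=Northwind;Integrated Security=True");
        var azureConn = new SqlConnection(
            "Server=tcp:yourserver.database.windows.net;Database=Northwind;" +
            "User ID=username@yourserver;Password=password;Encrypt=True;");

        var localProvider = new SqlSyncProvider("MyScope", localConn);
        var remoteProvider = new SqlSyncProvider("MyScope", azureConn);

        // 1) Cap the memory used while batching and sorting changes (value in KB).
        remoteProvider.MemoryDataCacheSize = 100000;
        // 2) Apply changes to SQL Azure in roughly 50 MB batches to avoid throttling.
        remoteProvider.ApplicationTransactionSize = 50000;
        // 3) Report progress after each batch is committed.
        remoteProvider.ChangesApplied += (sender, e) =>
            Console.WriteLine("Applied a batch of changes to SQL Azure.");

        var orchestrator = new SyncOrchestrator
        {
            LocalProvider = localProvider,
            RemoteProvider = remoteProvider,
            Direction = SyncDirectionOrder.Upload
        };

        var stats = orchestrator.Synchronize();
        Console.WriteLine("Uploaded {0} changes.", stats.UploadChangesTotal);
    }
}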

Liam Cavanagh currently works as a Sr. Program Manager for Microsoft in the Cloud Data Services group. In this group he works on the Data Sync Service for SQL Azure – enabling organizations to extend on-premises data to the cloud as well as to remote offices and mobile workers to remain productive regardless of network availability.


Zane Adam from the SQL Azure Team described New Features in SQL Azure on 9/23/2010:

As part of our continued commitment to provide regular and frequent updates to SQL Azure, I am pleased to announce Service Update 4 (SU4). SU4 is now live with three new updates - database copy, improved help system, and deployment of our web-based tool Code-Named "Houston" to multiple data centers. Here are the details:

  • Support for database copy: Database copy allows you to make a real-time complete snapshot of your database into a different server in the data center. This new copy feature is the first step in backup support for SQL Azure, allowing you to get a complete backup of any SQL Azure database before making schema or database changes to the source database. The ability to snapshot a database easily is our top requested feature for SQL Azure. Note that this feature is in addition to our automated database replication, which keeps your data always available. The relevant MSDN Documentation is available here, titled: Copying Databases in SQL Azure. (A sketch of the T-SQL involved appears after this list.)
  • Update on "Houston": Microsoft Project Code-Named "Houston" is a light weight web-based database management tool for SQL Azure. Houston, which runs on top of Windows Azure, is now available in multiple datacenters, reducing the latency between the application and your SQL Azure database.
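As a rough sketch of what the copy feature looks like in practice (the server, database, and credential names below are placeholders, not part of the announcement), the copy is started with a single T-SQL statement against the destination server's master database and can be monitored afterward via sys.dm_database_copies:

using System.Data.SqlClient;

class SqlAzureDatabaseCopy
{
    static void Main()
    {
        // Connect to the destination server's master database (placeholder credentials).
        var connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=master;" +
            "User ID=username@yourserver;Password=password;Encrypt=True;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Starts an asynchronous, transactionally consistent copy of the source database.
            using (var cmd = new SqlCommand(
                "CREATE DATABASE Northwind_copy AS COPY OF yourserver.Northwind", conn))
            {
                cmd.ExecuteNonQuery();
            }
            // Copy progress can be checked by querying sys.dm_database_copies on the destination server.
        }
    }
}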


Steve Yi posted Creating Tables with Project Houston as his first contribution to the SQL Azure blog on 9/23/2010:

This is the second post in a series about getting started with Microsoft Project Code-Named “Houston” (Houston) (Part 1). In part 1 we covered the basics, logging in and navigation. In this post we will cover how to use the table designer in Project Houston. As a quick reminder, Microsoft Project Code-Named “Houston” (Houston) is a light weight database management tool for SQL Azure and is a community technology preview (CTP). Houston can be used for basic database management tasks like authoring and executing queries, designing and editing a database schema, and editing table data.

Currently, SQL Server Management Studio 2008 R2 doesn’t have a table designer implemented for SQL Azure. If you want to create tables in SQL Server Management Studio 2008 you have to type a CREATE TABLE script and execute it as a query. However, Project Houston has a fully implemented web-based table designer. Currently, Project Houston is a community technology preview (CTP).

You can start using Houston by going to: http://www.sqlazurelabs.com/houston.aspx (The SQL Azure labs site is the location for projects that are either in CTP or incubation form). Once you have reached the site login to your server and database to start designing a table. For more information on logging in and navigation in Houston, see the first blog post.

Designing a Table

Once you have logged in, the database navigation bar will appear in the top left of the screen. It should look like this:


To create a new table:

  1. Click on “New Table”
  2. At this point, the navigation bar will change to the Table navigation. It will look like this:


  3. A new table will appear in the main window displaying the design mode for the table.


The star next to the title indicates that the table is new and unsaved. Houston automatically adds three columns to your new table. You can rename them or modify the types depending on your needs.

If you want to delete one of the newly added columns, just click on the column name so that the column is selected and press the Delete button in the Columns section of the ribbon bar.

If you want to add another column, just click on the + Column button or the New button in the Columns section of the ribbon bar.

Here is a quick translation from Houston terms to what we are used to with SQL Server tools:

  • Is Required? means that data in this column cannot be null. There must be data in the column.
  • Is Primary Key? means that the column will be a primary key and that a clustered index will be created for this column.
  • Is Identity? means that the column is an auto-incrementing identity column, usually associated with the primary key. It is only available on the bigint, int, decimal, float, smallint, and tinyint data types.
Saving

When you are ready to commit your table to SQL Azure you need to save the table using the Save button in the ribbon bar.


We’re aware of a few limitations with the current offering: creating tables with multiple primary key columns fails, and renaming some tables and columns causes errors when saving. As with any CTP we are looking for feedback and suggestions; please log any bugs you find.

Feedback or Bugs?

Again, since this is a CTP, Project “Houston” is not supported by standard Microsoft support services. For community-based support, post a question to the SQL Azure Labs MSDN forums. The product team will do its best to answer any questions posted there.

To log a bug about Project “Houston” in this release, use the following steps:

  1. Navigate to Https://connect.microsoft.com/SQLServer/Feedback.
  2. You will be prompted to search our existing feedback to verify that your issue has not already been submitted.
  3. Once you verify that your issue has not been submitted, scroll down the page and click on the orange Submit Feedback button in the left-hand navigation bar.
  4. On the Select Feedback form, click SQL Server Bug Form.
  5. On the feedback form, select Version = Houston build – CTP1 – 10.50.9700.8.
  6. On the feedback form, select Category = Tools (Houston).
  7. Complete your request.
  8. Click Submit to send the form to Microsoft.

To provide a suggestion about Project “Houston” in this release, use the following steps:

  1. Navigate to Https://connect.microsoft.com/SQLServer/Feedback.
  2. You will be prompted to search our existing feedback to verify that your issue has not already been submitted.
  3. Once you verify that your issue has not been submitted, scroll down the page and click on the orange Submit Feedback button in the left-hand navigation bar.
  4. On the Select Feedback form, click SQL Server Suggestion Form.
  5. On the feedback form, select Category = Tools (Houston).
  6. Complete your request.
  7. Click Submit to send the form to Microsoft.

If you have any questions about the feedback submission process or about accessing the portal, send us an e-mail message: sqlconne@microsoft.com.

Summary

This is just the beginning of our Microsoft Project Code-Named “Houston” (Houston) blog posts, make sure to subscribe to the RSS feed to be alerted as we post more information.


Kathleen Richards reported Microsoft Kills Key Components of the 'Oslo' Modeling Platform in favor of OData and Entity Data Model v4 in this 9/22/2010 article for Redmond Developer News:

Microsoft is announcing today that key components of its "Oslo" modeling platform are no longer part of its model-driven development strategy. In the on-going battle of competing data platform technologies at Microsoft, the company is focusing its efforts on the Open Data Protocol (OData) and the Entity Data Model, which underlies the Entity Data Framework and other key technologies.

Announced in October 2007, the Oslo modeling platform consisted of the 'M' modeling language, a "Quadrant" visual designer and a common repository based on SQL Server. The technology was initially targeting developers, according to Microsoft, with an eye towards broadening tools like Quadrant to other roles such as business analysts and IT. Alpha bits of some of the components were first made available at the Professional Developers Conference in October 2008. Oslo was renamed SQL Server Modeling technologies in November 2009. The final community technical preview was released that same month and required Visual Studio 2010/.NET Framework 4 Beta 2.

The Quadrant tool and the repository, part of SQL Server Modeling Services after the name change, are no longer on the roadmap. Microsoft's Don Box, a distinguished engineer and part of the Oslo development team, explained the decision in the Model Citizen blog on MSDN:

"Over the past year, we’ve gotten strong and consistent feedback from partners and customers indicating they prefer a more loosely-coupled approach; specifically, an approach based on a common protocol and data model rather than a common store. The momentum behind the Open Data Protocol (OData) and its underlying data model, Entity Data Model (EDM), shows that customers are acting on this preference."

The end of Oslo is not surprising based on the project's lack of newsworthy developments as it was bounced around from the Connected Services division to the Developer division to the Data Platform team. The delivery vehicle for the technology was never disclosed, although it was expected to surface in the Visual Studio 2010 and SQL Server wave of products.

The "M" textual modeling language, originally described as three languages--MGraph, MGrammar and MSchema -- has survived, for now. Microsoft's Box explained:

"While we used "M" to build the "Oslo" repository and "Quadrant," there has been significant interest both inside and outside of Microsoft in using "M" for other applications. We are continuing our investment in this technology and will share our plans for productization once they are concrete."

The Oslo platform was too complex for the benefits that it offered, said Roger Jennings, principal consultant of Oakleaf Systems, "The Quadrant and 'M' combination never gained any kind of developer mindshare."

More and more people are climbing on the OData bandwagon, which is a very useful and reasonably open protocol, agreed Jennings. A Web protocol under the Open Specification Promise that builds on the Atom Publishing Protocol, OData can apply HTTP and JavaScript Object Notation (JSON), among other technologies, to access data from multiple sources. Microsoft "Dallas", a marketplace for data- and content-subscription services hosted on the Windows Azure platform, is driving some of the developer interest in OData, according to Jennings.

Developers may run into problems with overhead when using OData feeds, however. "XML is known as a high-overhead protocol, but OData takes that to an extreme in some cases," said Jennings, who is testing OData feeds with Microsoft's Dynamics CRM 2011 Online beta, the first product to offer a full-scale OData provider[*]. Jennings blogs about OData explorations, including his experiences with the Dynamics CRM Online beta in his Oakleaf Systems blog.

*The last paragraph needs a minor clarification: Dynamics CRM 2011 is the first version of the Dynamics CRM product to offer a full-scale OData provider. Many full-scale OData providers predate Dynamics CRM 2011.

Full disclosure: I’m a contributing editor for Visual Studio Magazine, which is published by 1105 Media. 1105 Media is the publisher of Redmond Developer News.


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

The Windows Azure AppFabric team adds more details in Windows Azure AppFabric SDK September Release available for download of 9/24/2010:

As part of the Windows Azure AppFabric September Release we are now providing both 32- and 64-bit versions of the Windows Azure AppFabric SDK.

In addition to addressing several deployment scenarios for 64-bit computers, Windows Azure AppFabric now enables integration of WCF services that use AppFabric Service Bus endpoints in IIS 7.5, with Windows® Server AppFabric. You can leverage Windows® Server AppFabric functionality to configure and monitor method invocations for the Service Bus endpoints.

Note that the name of the installer has changed:

  • Old Name: WindowsAzureAppFabricSDK.msi
  • New names: WindowsAzureAppFabricSDK-x64.msi and WindowsAzureAppFabricSDK-x86.msi

Any automated deployment scripts you might have built that use the previous name will need to be updated to use the new naming. No other installer option has changed.

The Windows Azure AppFabric SDK September Release is available here for download.


Wes Yanaga announced Windows Azure AppFabric SDK V1.0–September Update Released in a 9/24/2010 post to the US ISV Evangelism blog:

Windows Azure AppFabric provides common building blocks required by .NET applications when extending their functionality to the cloud, and specifically, the Windows Azure platform. The Windows Azure AppFabric is a key component of the Windows Azure Platform. It includes two services: AppFabric Access Control and AppFabric Service Bus.

This SDK includes API libraries for building connected applications with the Windows Azure AppFabric. It spans the entire spectrum of today’s Internet applications – from rich connected applications with advanced connectivity requirements to Web-style applications that use simple protocols such as HTTP to communicate with the broadest possible range of clients.

As part of the Windows Azure AppFabric September Release, we are now providing both 32- and 64-bit versions of the Windows Azure AppFabric SDK.

In addition to addressing several deployment scenarios for 64-bit computers, Windows Azure AppFabric now enables integration of WCF services that use AppFabric Service Bus endpoints in IIS 7.5, with Windows® Server AppFabric. You can leverage Windows® Server AppFabric functionality to configure and monitor method invocations for the Service Bus endpoints.

The Windows Azure AppFabric SDK September Release is available here for download.

The downloadable file is dated 9/23/2010.


Zane Adam posted Announcing BizTalk Server 2010 RTM and General Availability date on 9/23/2010:

Today we are excited to announce that we have released BizTalk Server 2010 to manufacturing (RTM), and it will be available for purchase starting October 1st, 2010.

BizTalk Server 2010 is the seventh major release of our flagship enterprise integration product, which includes new support for Windows Server AppFabric to provide pre-integrated support for developing new composite applications. This allows customers to maximize the value of existing Line of Business (LOB) systems by integrating and automating their business processes, and putting real-time, end-to-end enterprise integration within reach of every customer. All this coupled with the confidence of a proven mission-critical integration infrastructure that is available to companies of all sizes at a fraction of the cost of other competing products.

According to Steven Smith, President and Chief Executive Officer at GCommerce, “GCommerce has bet our mission-critical value-chain functionality on BizTalk Sever 2010 which we use to automate the secure and reliable exchange of information with our trading partners. Additionally, the Windows Azure Platform allows us to extend our existing business-to-business process into new markets such as our cloud-based inventory solution called the Virtual Inventory Cloud (VIC), which is based upon Windows Azure and SQL Azure. This extension to our trading platform allows us to connect our buyers experience from the purchasing process within VIC all the way through to the on-premises business systems built around BizTalk.  This new GCommerce capability drives both top-line revenue as well as reduces bottom line costs, by making special orders faster than before.” Details are available in the GCommerce case study. [Emphasis added.]

The new BizTalk Server 2010 release enables businesses to:

  • Maximize existing investments by using pre-integrated BizTalk Server 2010 support with both Windows Server AppFabric and SharePoint Server 2010 to enable new composite application scenarios;

  • Further reduce total cost of ownership and enhance performance for large and mission-critical deployments through a new pre-defined dashboard that enables efficient performance tuning and streamlining deployments across environments, along with pre-integration with System Center;

  • Efficient B2B integration with highly-scalable Trading Partner Management and advanced capabilities for complex data mapping.

BizTalk Server 2010 also delivers updated platform support for Windows Server 2008 R2, SQL Server 2008 R2, .NET Framework 4 and Visual Studio 2010.

BizTalk Server is the most widely deployed product in the enterprise integration space with over 10,000 customers and 81% of the Global 100. 

This release is another step in the on-going investments we have made in our application infrastructure technologies. Along with the releases earlier this calendar year of .NET Framework 4, Windows Server AppFabric and Windows Azure AppFabric, BizTalk Server 2010 makes it easier for customers to build composite applications that span both on-premises LOB systems and new applications deployed onto the public cloud. 

To learn more regarding the BizTalk Server 2010 release and download our new free developer edition, please visit BizTalk Server website; more detailed product announcements about BizTalk Server 2010 are also contained on the BizTalk Server product team blog.  You can learn more about Microsoft’s Application Infrastructure capabilities by exploring on-demand training at www.appinfrastructure.com.


Brian Loesgen adds download locations for BizTalk Server 2010 Evaluation and Developer editions in this 9/23/2010 post:

The BizTalk team announced today that the latest version of BizTalk Server, BizTalk Server 2010, has been released to manufacturing. It will be available for purchase starting October 1 2010.

The Developer and Evaluation Editions are available effective today at the links below.

For those of you that were working with the public beta, your results may vary, but I was able to do an in-place upgrade from the beta to the RTM with no issues at all; all BizTalk applications/settings/artifacts/BAM/etc. were maintained. Nice!

The Developer Edition (which is now FREE) download is available here:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=938102b8-a677-4c20-906d-f6ae472b3a6a&displaylang=en

The Evaluation Edition download is available here:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=8b1069cf-202b-462b-8d10-bec65d315c65

I won’t re-hash what the team said, you can read *all* about the new release at the official BizTalk team blog post here:

http://blogs.msdn.com/b/biztalk_server_team_blog/archive/2010/09/22/biztalk-server-2010-released-for-manufacturing.aspx

Summary by Charles Young, one of my co-authors on BizTalk Server 2010 Unleashed, is here. http://geekswithblogs.net/cyoung/archive/2010/09/23/biztalk-server-2010-rtm.aspx


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Bart Wullums posted on 9/25/2010 a reminder about the Layered Architecture Sample for Azure on CodePlex:

When browsing through Codeplex, I found the following interesting sample: Layered Architecture Sample for Azure. It is a layered application in .NET ported to the Windows Azure platform to demonstrate how a carefully designed layered application can be easily deployed to the cloud. It also illustrates how layering concepts can be applied in the cloud environment.



•• Jeff Sandquist announced that Channel9 v5 uses Windows Azure, SQL Azure, Azure Table Storage, and memcache in his Welcome to the all new Channel 9 post of 9/17/2010 (missed when posted):

Welcome to the all new Channel 9. This is the fifth major release of Channel 9 since our original launch back on April 6, 2004.

With this major release we focused on top requests from you in the forums along with telemetry data to guide our design. We've made it easier to find content through our browse section and did a dramatic redesign across the board for our popular Channel 9 Shows Area, the forums and learning.

Behind the scenes it has been a complete rewrite of our code, a rebuild of the infrastructure and development methodology.

This fifth edition of Channel 9 is built using [emphasis added]:

  • ASP.NET MVC
  • SparkView engine
  • jQuery
  • Silverlight 4
  • Windows Azure, SQL Azure, Azure Table Storage, and memcache
  • ECN for the Content Delivery Network (videos)

For a deeper dive into how we built the site, watch Mike Sampson and Charles Torre Go Deep on Rev 9. All of these code improvements have resulted in our page load times improving dramatically and a greater simpli[fi]cation of our server environment too.

With well over a year in planning, development and testing, today is the day for us to make the change over to the new Channel 9. We have been humbled by your never ending support, feedback and enthusiasm for Channel 9.

If you have feedback where we can make this a better place for all of us, please leave your feedback over on our Connect Site. We're listening.

On behalf of the entire Channel 9 Team, welcome to Rev 9.
Jeff Sandquist

Note: “Rev 9” appears to be a typo. The current version is Channel9 v5.


John C. Stame reminded readers about Microsoft Patterns and Practices – Moving Applications to the Cloud on Windows Azure:


Microsoft Patterns & Practices provides Microsoft’s applied engineering guidance and includes both production quality source code and documentation. It’s a great resource that has mounds of great architectural guidance around Microsoft platforms.

Back in June, Patterns & Practices released a new book on Windows Azure, titled “Moving Applications to the Cloud”. I recently came across this and found it to be a great resource for my customers and architects that are looking at moving existing workloads in their enterprise to Windows Azure. Download it (it’s free) and check it out.


Maarten Balliauw described Windows Azure Diagnostics in PHP and his Windows Azure Diagnostics Manager for PHP on 9/23/2010:

When working with PHP on Windows Azure, chances are you may want to have a look at what’s going on: log files, crash dumps, performance counters, … All this is valuable information when investigating application issues or doing performance tuning.

Windows Azure is slightly different in diagnostics from a regular web application. Usually, you log into a machine via remote desktop or SSH and inspect the log files: management tools (remote desktop or SSH) and data (log files) are all on the same machine. This approach also works with 2 machines, maybe even with 3. However on Windows Azure, you may scale beyond that and have a hard time looking into what is happening in your application if you would have to use the above approach. A solution for this? Meet the Diagnostics Monitor.

The Windows Azure Diagnostics Monitor is a separate process that runs on every virtual machine in your Windows Azure deployment. It collects log data, traces, performance counter values and such. This data is copied into a storage account (blobs and tables) where you can read and analyze data. Interesting, because all the diagnostics information from your 300 virtual machines are consolidated in one place and can easily be analyzed with tools like the one Cerebrata has to offer.

Configuring diagnostics

Configuring diagnostics can be done using the Windows Azure Diagnostics API if you are working with .NET. For PHP there is also support in the latest version of the Windows Azure SDK for PHP. Both work on an XML-based configuration file that is stored in a blob storage account associated with your Windows Azure solution.

The following is an example on how you can subscribe to a Windows performance counter:

/** Microsoft_WindowsAzure_Storage_Blob */
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';

/** Microsoft_WindowsAzure_Diagnostics_Manager */
require_once 'Microsoft/WindowsAzure/Diagnostics/Manager.php';

$storageClient = new Microsoft_WindowsAzure_Storage_Blob();
$manager = new Microsoft_WindowsAzure_Diagnostics_Manager($storageClient);

$configuration = $manager->getConfigurationForCurrentRoleInstance();

// Subscribe to \Processor(*)\% Processor Time
$configuration->DataSources->PerformanceCounters->addSubscription('\Processor(*)\% Processor Time', 1);

$manager->setConfigurationForCurrentRoleInstance($configuration);

Introducing: Windows Azure Diagnostics Manager for PHP

Just for fun (and yes, I have a crazy definition of “fun”), I started working on a more user-friendly approach for configuring your Windows Azure deployment’s diagnostics: Windows Azure Diagnostics Manager for PHP. It is limited to configuring everything and you still have to know how performance counters work, but it saves you a lot of coding.

Windows Azure Diagnostics Manager for PHP

The application is packed into one large PHP file and coded against every best-practice around, but it does the job. Simply download it and add it to your application. Once deployed (on dev fabric or Windows Azure), you can navigate to diagnostics.php, log in with the credentials you specified and start configuring your diagnostics infrastructure. Easy, no?

Here’s the download: diagnostics.php (27.78 kb)
(note that it is best to get the latest source code commit for the Windows Azure SDK for PHP if you want to configure custom directory logging)


The Windows Azure Team posted Real World Windows Azure: Interview with David Ruiz, Vice President of Products at Ravenflow on 9/23/2010:

As part of the Real World Windows Azure series, we talked to David Ruiz, Vice President of Products at Ravenflow, about using the Windows Azure platform to deliver the company's cloud-based process analysis and visualization solution. Here's what he had to say:

MSDN: Tell us about Ravenflow and the services you offer.

Ruiz: Ravenflow is a Microsoft Certified Partner that uses a patented natural language technology, called RAVEN, to turn text descriptions into business process diagrams. Our customers use RAVEN to quickly analyze and visualize their business processes, application requirements, and system engineering needs.

MSDN: What were the biggest challenges that Ravenflow faced prior to implementing the Windows Azure platform?

Ruiz: We already offered a desktop application but wanted to create a web-based version to give customers anywhere, anytime access to our unique process visualization capabilities. What we really wanted to do was expand the market for process visualization and make Ravenflow easily available for more customers. At the same time, we wanted to deliver a rich user experience while using our existing development skills and deep experience with the Microsoft .NET Framework.

MSDN: Can you describe some of the technical aspects of the solution you built by using the Windows Azure platform to help you reach more customers?

Ruiz: After evaluating other cloud platforms, we chose the Microsoft-hosted Windows Azure platform and quickly created a scalable new service: RAVEN Cloud. It has a rich, front-end interface that we developed by using the Microsoft Silverlight 3 browser plug-in. Once a customer enters a narrative into the interface, Web roles in Windows Azure place the narrative in Queue storage. From there, Worker roles access the narrative and coordinate and perform the RAVEN language analysis. The results are placed in Blob storage where they are collected by a Worker role, aggregated back together, and then returned to the Web role for final processing. RAVEN Cloud also takes advantage of Microsoft SQL Azure to store application logs as well as user account and tracking information.

RAVEN Cloud uses the patented RAVEN natural language technology to generate accurate process diagrams from the text that users enter through a website.

MSDN: What makes your solution unique?

Ruiz: The natural language engine behind RAVEN Cloud and its ability to generate diagrams from text is what makes it really stand out. Natural language analysis and visualization is a complex mathematical operation, and the Windows Azure platform is a natural fit for such compute-heavy processes. The elastic scalability features of Windows Azure allow us to scale with ease as the number of users grows.

MSDN: Are you offering RAVEN Cloud to any new customer segments or niche markets?

Ruiz: Since launching RAVEN Cloud in May 2010, we have served more than 1,000 customers each week, a number that continues to grow. While there is no specific industry segment that stands out, we are finding a very significant audience of business analysts who struggle with process modelling and like the fact that RAVEN Cloud helps them do that automatically.

MSDN: What kinds of benefits is Ravenflow realizing with Windows Azure?

Ruiz: We were able to quickly develop and deploy our natural language engine to the cloud as a service, and we look forward to improved time-to-market for new features and products in the future. By offering our software as a service, we have not only opened new business opportunities and extended the market reach for our process visualization solutions, but we can also maintain high levels of performance for our CPU-intensive application while minimizing operating costs.

Read the full story at: www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000008080

To read more Windows Azure customer success stories, visit: www.windowsazure.com/evidence


John C. Stame described David Chou’s Building Highly Scalable Java Apps on Windows Azure presentation to the JavaOne conference in this 9/23/2010 post:

imageOne of my brilliant colleagues, and friend David Chou (Architect Evangelist) at Microsoft just posted slide deck on SlideShare.net from his talk at JavaOne.  David is a former Java Architect and has been at Microsoft for several years talking about our platform and now specifically our Windows Azure Platform.

I have embedded his presentation [link] and you can also click here to take you to SlideShare, where you can find more of David’s presentations.


Steve Marx (@smarx) explained Web Page Image Capture in Windows Azure in a CloudCover video episode according to this 9/23/2010 post:

image In this week’s upcoming episode of Cloud Cover [see below], Ryan and I will show http://webcapture.cloudapp.net, a little app that captures images of web pages, like the capture of http://silverlight.net you see on the right.

When I’ve seen people on the forum or in email asking about how to do this, they’re usually running into trouble using IECapt or the .NET WebBrowser object. This is most likely due to the way applications are run in Windows Azure (inside a job object, where a number of UI-related things don’t work). I’ve found that CutyCapt works great, so that’s what I used.

Using Local Storage

The application uses CutyCapt, a Qt- and WebKit-based web page rendering tool. Because that tool writes its output to a file, I’m using local storage on the VM to save the image and then copying the image to its permanent home in blob storage.

This is the meat of the backend processing:

// Launch CutyCapt.exe from the role's install directory (%RoleRoot%\approot)
// and have it write the rendered page to outputPath in local storage.
var proc = new Process()
{
    StartInfo = new ProcessStartInfo(Environment.GetEnvironmentVariable("RoleRoot")
                    + @"\approot\CutyCapt.exe",
            string.Format(@"--url=""{0}"" --out=""{1}""",
                url,
                outputPath))
        {
            UseShellExecute = false
        }
};
proc.Start();
proc.WaitForExit();

// If CutyCapt produced an image, copy it to its permanent home in blob storage
// and clean up the local copy.
if (File.Exists(outputPath))
{
    var blob = container.GetBlobReference(guid);
    blob.Properties.ContentType = "image/png";
    blob.UploadFile(outputPath);
    File.Delete(outputPath);
}
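For context, outputPath above points at the role's local storage. A minimal sketch of how such a path might be obtained is below; the resource name "capture" and the helper are assumptions for illustration, not taken from the actual WebCapture source.

// Assumes a <LocalStorage name="capture" ... /> entry in ServiceDefinition.csdef;
// the name and this helper are illustrative only.
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

static string GetScratchPath(string guid)
{
    LocalResource scratch = RoleEnvironment.GetLocalResource("capture");
    return Path.Combine(scratch.RootPath, guid + ".png");
}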
Combining Roles

imageTypically, this sort of architecture (a web UI which creates work that can be done asynchronously) is accomplished in Windows Azure with two roles. A web role will present the web UI and enqueue work on a queue, and a worker role will pick up messages from that queue and do the work.

For this application, I decided to combine those two things into a single role. It’s a web role, and the UI part of it looks like anything else (ASP.NET MVC for the UI, and a queue to track the work). The interesting part is in WebRole.cs. Most people don’t realize that the entire role instance lifecycle is available in web roles just as it is in worker roles. Even though the template you use in Visual Studio doesn’t do it, you can simply override Run() as you do in a worker role and put all your work there. The code that I pasted above is in the Run() method in WebRole.cs.

If I later want to separate the front-end from the back-end, I can just copy the code from Run() into a new worker role.
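To make the pattern concrete, here is a minimal sketch of a web role that also does background work by overriding Run(). It is not the actual WebCapture source; the queue name "workitems" and the DoCapture helper are placeholders, and it assumes the StorageClient/ServiceRuntime APIs of the current SDK:

using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // Read the storage connection string from the service configuration.
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
        queue.CreateIfNotExist();

        while (true)
        {
            var message = queue.GetMessage();
            if (message == null)
            {
                Thread.Sleep(1000); // nothing queued; back off briefly
                continue;
            }

            DoCapture(message.AsString); // placeholder for the CutyCapt processing shown above
            queue.DeleteMessage(message);
        }
    }

    private void DoCapture(string url)
    {
        // ... launch CutyCapt and copy the result to blob storage, as shown earlier ...
    }
}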

Get the Code

You can download the full source code here: http://cdn.blog.smarx.com/files/WebCapture_source.zip, but note that it’s missing CutyCapt.exe. You can download the most recent version of CutyCapt.exe here: http://cutycapt.sourceforge.net. Just drop it in the root of the web role, and everything should build and run properly.

Watch Cloud Cover!

Be sure to watch the Cloud Cover episode about http://webcapture.cloudapp.net, as well as all of our other fantastic episodes.

If you have ideas about other things you’d like to see covered on the show, be sure to ping us on Twitter (@cloudcovershow) to let us know.


Ryan Dunn (@dunnry) and Steve Marx (@smarx) produced a 00:29:47 Cloud Cover Episode 26 - Dynamic Workers on 9/19/2010:

image

In this episode:  

  • Discover how to get the most out of your Worker Roles using dynamic code.
  • Learn how to enable multiple admins on your Windows Azure account.

Show Links:
Maximizing Throughput in Windows Azure – Part 1
Calling a Service Bus HTTP Endpoint with Authentication using WebClient
Requesting a Token from Access Control Service in C#
Two New Nodes for the Windows Azure CDN Enhance Service Across Asia


Steve Sfartz, Vijay Rajagopalan and Yves Yang updated their Windows Azure SDK for Java Developers to v2.0.0.20100913 on 9/13/2010 (missed when updated):

Windows Azure SDK for Java enables Java developers to take advantage of the Microsoft Cloud Services Platform – Windows Azure. It provides Java APIs for Windows Azure Storage (Blobs, Tables & Queues), plus helper classes for HTTP transport, REST, and error management.


Kapil Mehra posted Announcing the [Windows Azure] Group Policy Search Service on 6/24/2010 to TechNet’s Ask Directory Devices Team blog (missed when posted):

imageHello, Kapil here. I am a Product Quality PM for Windows here in Texas [i.e. someone who falls asleep cuddling his copy of Excel - Ned]. Finding a group policy when starting at the "is there even a setting?" ground zero can be tricky, especially in operating systems older than Vista that do not include filtering. A solution that we’ve recently made available is a new service in the cloud:

Group Policy Search

With the help of Group Policy Search you can easily find existing Group Policies and share them with your colleagues. The site also contains a Search Provider for Internet Explorer 7 and Internet Explorer 8 as well as a Search Connector for Windows 7. We are very interested in hearing your feedback (as responses to this blog post) about whether this solution is useful to you or if there are changes we could make to deliver more value.

Note - the Group Policy search service is currently an unsupported solution. At this time the maintenance and servicing of the site (to update the site with the latest ADMX files, for example) will be best-effort only.

Using GPS

image

In the search box you can see my search string “wallpaper” and below that are the search suggestions coming from the database.

On the lower left corner you see the search results and the first result is automatically displayed on the right hand side. The search phrase has been highlighted and in the GP tree the displayed policy is marked bold.

Note: Users often overlook the language selector in the upper right corner, where one can switch the policy results (not the whole GUI itself) to “your” language (sorry for having only UK English and not US English ;-) …

image

Using the “Tree” menu item you can switch to the “registry view”, where you can see the corresponding registry key/value, or you can reset the whole tree to the beginning view:

image

In the “Filter” menu, you can specify for which products you want to search (this means that if you select IE7, it will find all policies which are usable with IE7, not necessarily only those available exclusively for IE7 and not for IE6 or IE8; this is done using the “supported on” values from the policies):

image

In the “copy” menu you can select the value from the results that you want to copy. Usually “URL” or “Summary” is used (of course you can easily select and CTRL+C the text from the GUI as well):

image

In the “settings” menu you can add the search provider and/or Connector.

image

Upcoming features (planned for the next release)

“Favorites” menu, where you can get a list of some “interesting” things like “show me all new policies IE8 introduced”:

image

“Extensions” menu:

image

We will introduce a help page with a description for the usage of the GPS.

GPS was written by Stephanus Schulte and Jean-Pierre Regente, both MS Internet Explorer Support Engineers in Germany. Yep, this tool was written by Support for you. :-)

image

The cool part – it’s all running in Windows Azure.

Kapil “pea queue” Mehra

I would have loved to have had this tool when writing Admin911: Windows 2000 Group Policy 10 years ago:

image


<Return to section navigation list> 

Visual Studio LightSwitch

Beth Massi delivered a 20-foot, lavishly illustrated Deployment Guide: How to Configure a Machine to Host a 3-tier LightSwitch Beta 1 Application on 9/24/2010. Following are the first few feet:

image A lot of people have been asking in the forums about how to deploy a LightSwitch application and there are some really great tutorials out there like: Deploy and Update a LightSwitch (Beta 1) 3-tier Application

There’s also a lot of information in the official documentation on Deployment:

Deploying a LightSwitch application on the same machine as you develop on is pretty easy because all the prerequisites are installed for you with Visual Studio LightSwitch, including SQL Server Express. In this post I’d like to walk you through configuring a clean machine (one that doesn’t have the development environment installed) to host a 3-tier LightSwitch application.

Before I begin, please note: There is NO “go live” license for the LightSwitch Beta. You can deploy your LightSwitch applications to IIS for testing purposes ONLY. Also, the Beta only supports IIS 7 at this time, so you can only use Windows 7 or Windows Server 2008 (not 2003) to test deployment of Beta 1 LightSwitch applications. As you read through this guide I’ll note the sections where the team is still fixing bugs or changing experiences for the final release (RTM). Please be aware that the deployment experience will be much easier and more full-featured for RTM.

In this post we will walk through details of configuring a web server first and then move onto deployment of a LightSwitch Beta 1 application. (BTW, a lot of this information should be useful even if you are creating other types of .NET web applications or services that connect to databases.)

Configuring the server

    • Installing Beta 1 Prerequisites
    • Verifying IIS Settings and Features
    • Configuring Your Web Site for Network Access
    • Configuring an Application Pool and Test Site
    • Add User Accounts to the Database Server

Deploying and testing your LightSwitch application

    • Publishing a LightSwitch Beta 1 Application
    • Installing the LightSwitch Application Package on the Server
    • Using Windows Integrated Security from the Web Application to the Database
    • Launching the LightSwitch Application

So let’s get started!

Installing LightSwitch Beta 1 Prerequisites

You can use the Web Platform Installer to set up a Windows web server fast. It allows you to select IIS 7, the .NET Framework 4 and a whole bunch of other free applications and components, even PHP. All the LightSwitch prerequisites are there as well including SQL Server Express and the LightSwitch middle-tier framework. This makes it super easy to set up a machine with all the components you need.

NOTE: The team is looking at simplifying this process and possibly making the LightSwitch server component pre-reqs go away so this process will likely change for RTM.

To get started, on the Web Platform tab select the Customize link under Tools and check the Visual Studio LightSwitch Beta Server Prerequisites. This will install IIS 7, .NET Framework, SQL Server Express 2008 and SQL Server Management Studio for you so you DO NOT need to select these components again on the main screen.

WebPI1

image

If you already have one or more of these components installed then the installer will skip those. Here's the breakdown of the important dependencies that get installed:

  • .NET Framework 4
  • Middle-tier components for the LightSwitch runtime, for Beta 1 these are installed in the Global Assembly Cache (GAC)
  • IIS 7 with the correct features turned on like ASP.NET, Windows Authentication, Management Services
  • Web Deployment Tool 1.1 so you can deploy directly from the LightSwitch development environment to the server
  • SQL Server Express 2008 (engine & dependencies) and SQL Server Management Studio (for database administration)(Note: LightSwitch will also work with SQL Server 2008 R2 but you will need to install that manually if you want that version)
  • WCF RIA Services Toolkit (middle-tier relies on this)

Click the "I Accept" button at the bottom of the screen and then you'll be prompted to create a SQL Server Express administrator password. Next the installer will download all the features and start installing them. Once the .NET Framework is installed you'll need to reboot the machine and then the installation will resume.

Once you get to the SQL Server Management Studio 2008 setup you may get this compatibility message:

sqlcompat

If you do, then just click "Run Program" and after the install completes, install SQL Server 2008 Service Pack 1.

Plan on about an hour to get everything downloaded (on a fast connection) and installed.

In the next couple sections I'm going to take you on a tour of some important IIS settings, talk you through Application Pools & Identities and how to get a simple test page to show up on another networked computer. Feel free to skip to the end if you know all this already and just want to see how to actually package and deploy a LightSwitch application. :-) …

Beth continues with more details.

I’m glad to see someone else challenge me for the longest blogposts in history prize.

Update 9/24/2010, 4:50 PM PDT Beth replies:

image


<Return to section navigation list> 

Windows Azure Infrastructure

• Dan Grabham asserted “Integrated mobile apps mean a new dawn in automotive design” as he asked Will [auto] sat navs all be cloud-based by 2020? in a 9/24/2010 article for TechRadar.com:

bmw-station

A leading automotive analyst says that by the end of the decade all navigation will be cloud-based. Phil Magney, vice president of Automotive Research at analyst iSuppli, spoke about how mobile apps and the cloud are revolutionising in-car HMI (Human Machine Interface) design.

[Image] The BMW Station will be launched at the Paris Motor Show in October

"What do I use? I use my Android phone. The content is just more relevant. In five years half the navigation users will be cloud-based... by the end of the decade everything will be cloud-based. The general telematics trend is moving [towards having] open platforms and app stores." [Emphasis added.]

"On-board resources are going out in favour of cloud-based resources. No matter what you say, it's all moving to the cloud."

imageMagney was speaking about the changing times in HMI design at the SVOX Forum in Zurich. SVOX is a provider for text-to-speech systems and has been working on more natural speech recognition for in-car use – its partners include Clarion, Microsoft Auto and the Open Handset Alliance (Android). [Emphasis added.]

"TTS (Text To Speech) is very, very important with the emphasis on bringing messaging and email into the car", said Magney. "This heightens the need for TTS."

Mobile apps running on smartphones can provide information or even a skin which runs on the head unit. Mini Connected is an iPhone app which enables you to listen to internet radio through your iPhone but using the controls of your Mini's HMI.

Mini connected

The next stage on from that is to have apps running on the head unit itself, with a smartphone OS like Android inside the car – however, iSuppli warns that would require work on how the apps can be distributed and who gets a share in the revenue.

Connectivity and bandwidth will, however, surely be a major stumbling block with any of these systems. Magney was vague as to how this would be paid for. "I presume they'll go to a tiered pricing plan," he tamely suggested.

Likewise, Magney was also questioned about the quality of service on mobile networks while driving. "I guess it's my belief that LTE comes along and takes care of the issues with regard to bandwidth."

In another talk, BMW's Alexandre Saad said that mobile apps have to be well designed to succeed in-car, not least because of the cycle of car design. "A head unit could be four years old... the apps are not known at the design stage. Applications should be developed independently from car production cycles and other car technology."

Phil Magney also talked about the example of the BMW Station – pictured above – which enables an iPhone to effectively be embedded into the dashboard and - via a BMW app due in early 2011 – control in-car systems. We've also previously seen Audi's Google-based system at CES while Mercedes Benz has also shown a cloud-based head unit.

The article continues with “In car tech: Potential for distraction.”


The Windows Azure team reported Windows Azure Domain Name System Improvements on 9/24/2010:

imageThe Windows Azure Domain Name System (DNS) is moving to a new globally distributed infrastructure, which will increase performance and improve reliability of DNS resolution.  In particular, users who access Windows Azure applications from outside the United States will see a decrease in the time it takes to resolve the applications' DNS names.  This change will take effect on October 5th, at midnight UTC.  Customers don't need to do anything to get these improvements, and there will be no service interruption during the changeover.

Because of the inherently distributed nature of this new global infrastructure, the creation of new DNS names associated with a Windows Azure application can take up to two minutes to be fully addressable on a global basis.  During this propagation time, new DNS records will resolve as NXDOMAIN.  If you attempt to access the new DNS name during this time window, most client computers will cache these negative responses for up to 15 minutes, causing "DNS Not Found" messages.  Under most circumstances you will not notice this delay, but if you promote a Windows Azure staging deployment to production for a brand new hosted service, you may observe a delay in availability of DNS resolution.
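If you script the promotion of a brand-new hosted service, one way to avoid clients caching that negative response is to wait until the new name actually resolves before handing out the URL. The following is only a sketch of that idea, not an official workaround, and the host name is a placeholder:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class DnsWarmup
{
    // Polls until hostName resolves or the timeout expires.
    static bool WaitForDns(string hostName, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            try
            {
                if (Dns.GetHostAddresses(hostName).Length > 0)
                    return true; // the name is now resolvable
            }
            catch (SocketException)
            {
                // Not propagated yet; keep waiting instead of letting a client
                // cache the NXDOMAIN answer for up to 15 minutes.
            }
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
        return false;
    }

    static void Main()
    {
        Console.WriteLine(WaitForDns("myservice.cloudapp.net", TimeSpan.FromMinutes(5))
            ? "DNS is live."
            : "Still not resolvable; try again later.");
    }
}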

James Urquhart asked What is the 'true' cloud journey? in a 9/24/2010 post to C|Net’s The Wisdom of Clouds blog:

image The adoption of cloud computing is happening today, or so say a wide variety of analysts, vendors, and even journalists. The surveys show greatly increased interest in cloud computing concepts, and even increased usage of both public and private cloud models by developers of new application systems.

(Credit: Flickr/thomas_sly)

But does your IT organization really understand its cloud journey?

Friend, colleague, and cloud blogger Chris Hoff wrote a really insightful post today that digs into the reality--worldwide--of where most companies are with cloud adoption today...at least in terms of internal "private cloud" infrastructure. [See the Cloud Security and Governance section below.] In it, he describes the difficult options that are on the table for such deployments:

There is, however, a recurring theme across geography, market segment, culture, and technology adoption appetites; everyone is seriously weighing their options regarding where, how and with whom to make their investments in terms of building cloud computing infrastructure (and often platform) as-a-service strategy. The two options, often discussed in parallel but ultimately bifurcated based upon explored use cases, come down simply to this:

  1. Take any number of available open core or open-source software-driven cloud stacks, commodity hardware and essentially engineer your own Amazon, or

  2. Use proprietary or closed source virtualization-nee-cloud software stacks, high-end "enterprise" or "carrier-class" converged compute/network/storage fabrics and ride the roadmap of the vendors

One option means you expect to commit to an intense amount of engineering and development from a software perspective. The other means you expect to focus on integration of other companies' solutions. Depending upon geography, it's very, very unclear to enterprises [and] service providers what is the most cost-effective and risk-balanced route when use cases, viability of solution providers, and the ultimate consumers of these use-cases are conflated.

Hoff is pointing out that there are no "quick and easy" solutions out there. Even if, say, a vendor solution is a "drop in" technology initially, the complexity and tradeoffs of a long-term dependency on the vendor adds greatly to the cost and complexity.

On the other hand, open-source cloud stacks enable cheaper acquisition and more ways to implement the features that best suit your company's needs, but only at the cost of requiring additional development, engineering, and operations skills to get it working--and keep it working.

All of which leads to the likelihood that, as a whole, global IT will take some time to take private cloud "mainstream." So that means cloud computing isn't as disruptive as it was made out to be, right?

Wrong. Bernard Golden, CEO of Hyperstratus and a leading cloud blogger in his own right, pointed out today that it isn't the CIO or CTO that will control the pace of cloud adoption, but software developers:

The implication for organizations...is that decisions made by developers create commitments for the organizations they are part of--commitments that the organization does not recognize at the time they are made by the developer and may, in fact, be decisions that, had the organization understood them at the time they were made by the developer, it would have eschewed them. The result is that two or three years down the road, these organizations "discover" technology decisions and applications that are based on choices made by developers without organizational review.

This may account for the curious lack of respect given Amazon on the part of IT organizations and vendors. O'Grady addresses this in a second post titled "Hiding in Plain Sight: The Rise of Amazon Web Services." In it, he primarily addresses the fact that most technology vendors evince little fear of Amazon, preferring to focus on private cloud computing environments. He attributes this, in part, to the vendors' desire to keep traditional margins rather than descending into a pricing battle with Amazon.

Golden goes on to point out that, in his opinion, the reason many vendors aren't seeing Amazon as direct competition in many deals is that they aren't talking to the same people:

I might attribute it to a different factor: Vendors primarily seek to talk to senior management, those who control budgets that pay for the vendors' products--and, as we've just noted, those managers often miss the reality of what developers are actually doing. Consequently, they won't be telling vendors how much public cloud is being used, and the vendors will respond with the time-honored "we don't really see them much in competitive situations" (I used to hear this a lot from proprietary software vendors about the open source alternatives to their products).

This, then, gets around to the point I made about the future of IT operations in my last post. If, indeed, cloud computing is an applications-driven operations model, and if application operations is managed separately from service or infrastructure operations, then application operations will almost certainly be developed/configured/whatever on an application-by-application basis.

If that is true, then the operations automation for applications will be:

  • Created as part of the software development process, therefore influenced much more by development decisions than by IT operations decisions, and
  • Will target specific deployment environments (clouds or cloud ecosystems), thereby predestining their ongoing operations requirements.

This is why I have said so many times on this blog that, though I believe that the short-medium term will see slow public cloud adoption, especially for critical workloads with legal compliance consequences, the long term will see an IT landscape in which hybrid IT rules, but public cloud will be a dominant deployment model (or acquisition model, in the case of software as a service).

The economics are just too compelling, and the technical issues just too solvable. Solving the legal issues remains to be seen, but there have been signs in the last year that both the courts and various legislative bodies are understanding the importance of protecting data in third-party environments. The corporate lawyers I've spoken to are reasonably sure that legal issues between cloud providers and their customers will be worked out in the next two to three years.

In the end, the journey to cloud computing won't be a planned one. It will be, as disruptions often are, evolutionary and happen in often unexpected ways. The question is, can your IT culture handle that?

Graphic Credit: Flickr/thomas_sly


Phil Wainewright suggested Walk Away From Your IT Debt With SaaS in this 9/24/2010 post to the Enterprise Irregulars blog:

image A new report from industry analyst Gartner says that IT departments around the world have deferred a collective half-trillion dollars’ worth of projects and upgrades that they really should have done already but didn’t have the time or resources to get around to yet (thanks to Joe McKendrick for highlighting the news).

This ‘technical debt’ is risky, because the skipped investment or upgrades usually means the applications and infrastructure affected is less reliable and up-to-date than it should be. And Gartner warns that, instead of doing something to reduce the exposure, enterprises are going to carry on letting it grow, potentially reaching a mind-boggling $1 trillion by 2015.

I’d say there’s a simple cure for this problem, though — one that Gartner has omitted to mention in its coverage of the report. It’s the IT equivalent of canceling your credit cards or walking away from your mortgage — instead of stumping up yet another downpayment to your conventional IT supplier, why not get rid of the headache and put in a SaaS solution instead?

The continuous upgrade ethos of a true, multitenant SaaS solution means that you’ll always be on the latest version without any extra cost or disruption, and the pay-as-you-go business model means that you can always align your spending with your budget. For many SaaS solutions, the monthly all-in cost is less than the ongoing maintenance payment for an equivalent but outdated conventionally installed application. Implementation doesn’t have to be costly or disruptive either, as it can be phased in over time, allowing an orderly retirement of the system it’s replacing.

The only catch is working out which of your current installed applications should go first. Gartner recommends IT leaders should “produce an annual report on the status of the application portfolio … detailing the number of applications in use, the number acquired, the number decommissioned, and the current and projected costs of both operating and sustaining or improving the integrity of the application assets.” But few organizations currently have that good a handle on the state of their IT infrastructure, and in the time it takes to compile the report, even more IT debt will have accumulated.

A better idea is to rapidly identify the applications that are most underwater in terms of IT debt. Which of them generate the most helpdesk hassle, the most disproportionate maintenance bills, or are running on the least appropriate platforms (for example, those mission critical departmental apps that run on servers under people’s desks — you know which ones they are, don’t you?). Then work out which of them have the most readily deployed SaaS replacements available and get started on handing back the keys on your IT debt mountain.

David Linthicum asserted “Kill enterprise architecture, provide infinite scalability, cost pennies per day -- these are just a few of our overblown expectations for the cloud” as a preface to his What cloud computing can and can't do post of 9/24/2010 to InfoWorld’s Cloud Computing blog:

image A recent post by Deloitte asked whether cloud computing makes enterprise architecture irrelevant: "With less reliance on massive, monolithic enterprise solutions, it's tempting to think that the hard work of creating a sustainable enterprise architecture (EA) is also behind us. So, as many companies make the move to cloud computing, they anticipate leaving behind a lot of the headaches of enterprise architecture."

image In short, we make a lot of money from consulting on enterprise architecture, so please don't take my enterprise architecture away. It's analogous to saying that some revolutionary new building material makes structural engineering irrelevant. Even if that were the case, I still wouldn't go into that building.

I'm disturbed that the question is being asked at all. We should've evolved a bit by now, considering the amount of time cloud computing has been on the scene. However, silly questions such as this will continue to come up as we oversell the cloud; as a consequence of these inflated claims, I expect we'll be underdelivering pretty soon.

Cloud computing does not replace enterprise architecture. It does not provide "infinite scalability," it does not "cost pennies a day," you can't "get there in an hour" -- it won't iron my shirts either. It's exciting technology that holds the promise of providing more effective, efficient, and elastic computing platforms, but we're taking this hype to silly levels these days, and my core concern is that the cloud may not be able to meet these overblown expectations.

It's not politically correct to push back on cloud computing these days, so those who have concerns about the cloud are keeping their opinions to themselves. A bit of healthy skepticism is a good thing during technology transitions, considering that many hard questions are often not being asked. As much as I love the cloud, I'll make sure to hit those debates in this blog.


Jeffrey Schwartz (@JeffreySchwartz) reported Gartner Says Cloud Spending On the Rise in a 9/22/2010 post to the Redmond Developer News blog:

image Cloud computing services accounted for 12.5 percent of overall IT budgets, according to a research report released this week by Gartner.

image The report found that 39 percent of those surveyed allocated portions of their IT budgets for cloud computing, while 44 percent of those surveyed said they procured services from outside providers. Of those, 46 percent said they will increase that spending in the next budget year by an average of 32 percent.

Gartner said it surveyed more than 1,500 IT professionals throughout 40 countries between April and July of this year. Another key finding of the report revealed that one-third of spending came from last year's budget, another third was new spending, and 14 percent was diverted from a different budget category.

"Overall, these are healthy investment trends for cloud computing," said Gartner analyst Bob Igou, in a statement. "This is yet another trend that indicates a shift in spending from traditional IT assets such as the data center assets and a move toward assets that are accessed in the cloud."

Igou, who authored the study, pointed out that 24.1 percent of budgets covered data center systems and business applications, 19.7 percent for PC and related gear, 13.7 percent for telecom costs and 30 percent for IT personnel.

For the next budget year, the report found 40 percent will increase spending on development of cloud-based applications and solutions, while 56 percent will spend the same amount. Forty-three percent said they will increase spending on implementation of cloud computing for internal and/or restricted use, and 32 percent will increase spending on such environments for external and/or public use.

imageFull disclosure: I’m a contributing editor for Visual Studio Magazine, which is published by 1105 Media. 1105 Media is the publisher of Redmond Developer News.


Rob Blackwell suggested that you Telnet to Windows Azure in this 9/23/2010 post:

Debugging Windows Azure applications can be time consuming, particularly if you make a mistake and have to redeploy. If you've done any serious work you'll know that it's easy to write an app that runs in the developer fabric but won't work in production.

Sometimes you just want to log on to the box and have a poke around. We have a rudimentary way of doing that using Telnet...

Telnet to Windows Azure

First install the Windows Telnet client - Control Panel > Programs and Features > Turn Windows features on or off > Telnet Client.

Now you need to install a telnet server in Azure. We've written a rudimentary telnet daemon that's available on GitHub. You can download and compile it with Visual Studio and test it out on your development box.

To get it running on Azure, grab a copy of AzureRunMe

AzureRunMe is a kind of bootstrap program that makes it easy to run EXEs, BATch files and many Windows apps that are copy deployable in Azure.

Upload AzureRunMe.cspkg up to your blob storage account (I recommend CloudStorageStudio for this). I use a container named "packages".

AzureRunMe sets up some nice environment variables for you, including %ipaddress% and %telnet%, which are the IP address and telnet port respectively. Using that information, create a runme.bat file as follows:

Telnetd.exe %ipaddress% %telnet%
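The daemon's only real job is to bind to whatever address and port AzureRunMe hands it. Purely as an illustration (the real Telnetd.exe source is on GitHub; this little echo server is not it), the startup might look something like this:

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class MiniTelnetd
{
    static void Main(string[] args)
    {
        // args[0] = %ipaddress%, args[1] = %telnet%, as passed by runme.bat above.
        var listener = new TcpListener(IPAddress.Parse(args[0]), int.Parse(args[1]));
        listener.Start();

        while (true)
        {
            using (var client = listener.AcceptTcpClient())
            using (var reader = new StreamReader(client.GetStream()))
            using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
            {
                writer.WriteLine("Connected to {0}", Environment.MachineName);
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    writer.WriteLine(line); // a real daemon would hand this to cmd.exe
                }
            }
        }
    }
}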

Zip up the two files - runme.bat and telnetd.exe - together as telnetd.zip and upload that to blob store too.

Now you need an AzureRunMe configuration file, something like this:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="AzureRunMe" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WorkerRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName=YOURACCOUNTNAME;AccountKey=YOURACCOUNTKEY" />
      <Setting name="Packages" value="packages\telnetd.zip" />
      <Setting name="WorkingDirectory" value="$roleroot$" />
      <Setting name="Commands" value="runme.bat" />
      <Setting name="DefaultConnectionLimit" value="12" />
      <Setting name="DiagnosticsConnectionString" value="DefaultEndpointsProtocol=https;AccountName=YOURACCOUNTNAME;AccountKey=YOURACCOUNTKEY" />
      <Setting name="ScheduledTransferLogLevelFilter" value="Verbose" />
      <Setting name="ScheduledTransferPeriod" value="1" />
      <Setting name="TraceConnectionString" value="ServicePath=trace/$roleinstanceid$;ServiceNamespace=YOURNAMESPACE;IssuerName=YOURISSUERNAME;IssuerSecret=YOURISSUERSECRET" />
      <Setting name="LogFormat" value="$computername$: {0:u} {1}" />
      <Setting name="CloudDriveConnectionString" value="DefaultEndpointsProtocol=http;AccountName=YOURACCOUNTNAME;AccountKey=YOURACCOUNTKEY" />
      <Setting name="CloudDrive" value="drives\$computername$.vhd" />
      <Setting name="CloudDriveSize" value="64" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>


You need to fill in your various Azure details of course, and then save the file as telnetd.cscfg and upload it to blob storage too.

Now log onto the Windows Azure portal and provision a new service. Browse for the CSPKG and CSCFG files from blob store. Run the new role.

Deploy to Windows Azure

Optionally you can use the TraceConsole from AzureRunMe to watch the boot-up process.

Trace output via the AppFabric Service Bus

When the instance is up (give it 10 minutes or so), you should be ready to telnet in

> telnet claptrap.cloudapp.net

And away you go. Try typing SET, DIR etc.

Note that the terminal interaction is rudimentary. In particular the delete key doesn't work. It's not really secure enough for production servers like this, but if you need a more robust solution, feel free to hire us.


David Pallman posted My Windows Azure Wish List – The Future Cloud I Hope to See by 2012 on 9/23/2010:

image What will cloud computing be like in a couple of years? I got my first look at Windows Azure 2 years ago, and the rate of progress has been nothing short of amazing--and shows no sign of slowing down. What will the cloud be like in another year or two? Where should it go? Here’s where I’d like to see the cloud go over the next couple of years:

1. Auto-Sizing: Out-of-box Governance

imageMany people don’t seem to be aware that cloud computing brings with it a new management responsibility. A big selling point for the cloud is its elasticity and subsequent cost efficiency—but you only get that if you monitor activity and manage the size of your assets in the cloud. That is not by any means automatic today, so you must elect to do it yourself or through a third-party, either through automated means or human oversight.

We could debate whether this is the cloud provider’s responsibility or the customer’s, and in fact it needs to be a partnership between the two. Since this is something everyone needs to do, however, it seems fair to expect the cloud provider to more than meet us halfway.

In the Future Cloud, I’d like to be able to easily set technical or financial thresholds and have the cloud monitor them for me—notifying me about changes and trends and taking action as per my marching orders.

We may get some of these capabilities as cloud integrations become available to operations monitoring software such as System Center—but that’s not a full realization of this idea. The modern start-up may run 100% in the cloud with no on-premise IT. Those companies need a completely in-cloud way to do governance.

Human beings shouldn’t have to babysit the cloud, at least not beyond an oversight/approval level of involvement. It should watch itself for us, and governance should be an out-of-box cloud service.

2. Auto Shut-off: App and Data Lifetime Management

I don’t know about you, but my house probably would have burned down long ago and my electric bills gone through the roof if it were not for the auto shut-off feature of many household appliances such as irons and coffee-makers. You only have to browse the forums to see the daily postings of people who are in shock because they left the faucet running or didn’t realize other so-called hidden costs of the cloud.

It’s human nature to be forgetful, and in the cloud forgetfulness costs you money. Every application put in the cloud starts a run of monthly charges that will continue perpetually until you step in and remove it someday. Every datum put in the cloud is in the same boat: ongoing charges until you remove it. It’s extremely unwise to do either without thinking about the endgame: when will this application need to come out of the cloud? What is the lifetime for this data? You might think you won’t forget about such things, but think about what it will be like when you are using the cloud regularly and have many applications and data stores online.

What we need to solve this problem is lifetime management for assets in the cloud.

In the Future Cloud, I’d like to see lifetime policies you can specify up-front when putting applications and data into the cloud—with automated enforcement.

You can imagine this including ‘keep it until I delete it’ and ‘keep until [time]’—similar to the options you get on your DVR at home. Auto delete could be dangerous, of course, so we will want more sophisticated options such as an ‘archive’ option, where we take something offline but don’t lose it altogether. Perhaps the best choice we could be given is a lease option, where the app or data’s expiration period gets renewed whenever they are used. This is how auto-shutoff works for many appliances: the shut-off timer gets reset whenever we use them, and only after a certain period of inactivity does deactivation take place.

As with the previous wish list item, this is something everyone needs and is therefore a valid ask of cloud providers. Let us set lifetime policies for our apps and data when we put them in the cloud, and enforce them for us.

3. Mothballing & Auto-Activation: Dehydrate & Rehydrate Apps and Data

As described in the previous wish list item, an ideal implementation of lifetime management for applications and data would include decommissioning and archiving. That is, apps and data that become inactive should be mothballed automatically where they cost us far less than when they are formally deployed.

Along with mothballing comes the need for reactivation. Here I think we can take an idea from workflow technologies such as WF and BizTalk Server, where long-running workflows are dehydrated so that they do not consume finite resources such as threads and memory. They get persisted, and the workflow engine knows what events to look for in order to rehydrate them back into running, active entities.

In the Future Cloud, I’d like apps and data to be dehydrated when inactive and rehydrated when needed again—with greatly reduced costs during the inactive period. We can thus imagine an app that people start to use less and less, and eventually stop using altogether. An example of this might be a health care plan enrollment portal, only used once or twice a year. As the app moves to an inactive state, an expiration policy would cause the cloud to remove all of the server instances. However, the “light would be on”: a future access to the application would bring it back online. We can similarly imagine account data that moves into archive mode when inactive: kept around, but not at the premium rate.

The best realization of this concept would be that mothballed apps and data cost us nothing until they are re-activated. That might be a little unrealistic since the cloud provider is keeping the light on for us, but a mothballed asset should certainly cost a small fraction of an activated one.

4. Automatic Encryption

Most customers go through a period of considering risks and concerns (real or imagined) before they start using the cloud. A common concern that surfaces is the use of shared resources in the cloud and the specter of your critical data somehow falling into the wrong hands. The best way to feel okay about that is to encrypt all data transmitted and stored by your application. That way, if data does fall into the wrong hands—remote as that may be—it won’t be intelligible to them.

In the Future Cloud, I’d like all data I store—database and non-database—to be automatically encrypted. This is another example of something I believe we will all be doing: encryption of data will become a standard practice for all data we put into the cloud. As previously mentioned, whenever there is something everyone wants to do in the cloud it’s fair to ask the cloud provider to provide a service rather than each of us having to separately implement the capability. Naturally, the customer should remain in control of keys and strong encryption methods should be used.
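Until such a service exists, the practice is easy enough to approximate yourself. Here is a minimal sketch using AES from the .NET Framework; the key handling (a key supplied by the caller) is deliberately naive and only an assumption, since in practice keys should remain under the customer's control:

using System.IO;
using System.Security.Cryptography;

static class CloudCrypto
{
    // Encrypts plaintext with AES and prepends the IV so it can be decrypted later.
    // The returned byte array is what you would write to blob or table storage.
    public static byte[] EncryptForCloud(byte[] plaintext, byte[] key)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            aes.GenerateIV();
            using (var ms = new MemoryStream())
            {
                ms.Write(aes.IV, 0, aes.IV.Length);
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(plaintext, 0, plaintext.Length);
                }
                return ms.ToArray();
            }
        }
    }
}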

5. Get Closer to True Consumption-based Pricing

Cloud computing has great appeal because of the consumption-based pricing model and the analogies we can make to electricity and other utilities. However, the implementation of that idea today leaves room for improvement. While we do have consumption-based pricing it’s very coarse-grained.

For example, let’s consider Windows Azure hosting. For each VM you allocate, you are reserving that ‘machine’ and are paying $0.12/hour or more for wall clock time. The actual usage of each VM has nothing to do with your charges. Is this really consumption-based pricing? Yes, but at a coarse level of granularity: you add or remove servers to match your load. Can we imagine something more ideal? Yes, charging for the machine hours used to service actual activity. This would work well in combination with an auto-sizing feature as previously discussed.

We can make the same observation about SQL Azure. Today, you buy a database bucket in a certain size, such as 1GB or 10GB or 50GB. Whether that database is full, half full, or even completely empty does not affect the price you pay. Is this really consumption-based pricing? Yes, but again at a very coarse level. We can imagine a future where the amount of database storage in use drives the price, and we don’t have to choose a size bucket at all.

In the Future Cloud, I’d like to see more granular consumption-based pricing that more naturally lines up with usage and activities the way the customer thinks about them. It’s when the pricing model is at a distance from actual activity that surprises and disappointments come in using the cloud. We’ve already sold the ‘metering’ concept: now we need to give the customer the kind of meter they are expecting and can relate to.

6. Public-Private Portability: Doing Things the Same Way On-Prem or in the Cloud

I’m convinced many, many more businesses would be exploring the cloud right now if they could easily move portable workloads between cloud and on-premise effortlessly. Today, the cloud is a bit of a big step that requires you to change some things about your application. The cloud would be far more approachable if instead of that one big step, an enterprise could take several small, reversible steps.

In the Future Cloud, I’d like to be able to host things the same way in the cloud and on-premise so that I can effortlessly shuttle portable workloads between cloud and on-prem. Portable workloads would be huge. It doesn’t seem realistic that existing enterprise apps are going to just work in the cloud unchanged, because they weren’t designed to take advantage of a cloud environment. What does seem realistic is that you can update your apps to work “the cloud way” but be able to host identical VMs locally or in the cloud, giving you the ability to change your workload split anytime. The advent of private cloud will play a big role in making this possible.

7. Hybrid Clouds: Joining My Network to the Cloud

Today, on-premise and in-cloud are two very separate places separated by big walls. IT assets are either “over here” or “over there”, and special activities are needed to move applications, data, or messages between them. This makes certain scenarios a poor fit for the cloud today. Consider what I call the “Molar” pattern: an application with so many internal integrations that its deep roots make it impractical to extract out of the enterprise and move into the cloud.

In the Future Cloud, I’d like to be able to bridge parts of my local network to my assets in the cloud. The picture of what makes sense in the cloud changes radically if we can make connections between the cloud and our local network. That molar pattern, for example, might now be a suitable thing for the cloud because the in-cloud application now has a direct way to get to the internal systems it needs to talk to.

We know this is coming for Windows Azure. “Project Sydney”, announced at PDC 2009, will provide us with a gateway between our local networks and our assets in the cloud. What we can expect from this is that in addition to the “first wave” of applications that make sense in the cloud now, there will be a second wave.

8. Effortless Data Movement

Moving data to and from the cloud is not particularly hard—if it’s small, and of the type where you have a convenient tool at hand. When working with large amounts of data, your options are reduced and you may find yourself doing a lot of manual work or even creating your own tools out of necessity.

It’s not just moving data into the cloud and out that’s at issue: you may want to copy or move data between projects in the data center; or you may want to copy or move data to a different data center.

In the Future Cloud, I’d like to be able to easily move data between on-premise and cloud data centers around the world, regardless of the amount of data.

9. A Simpler Pricing Model

If you look at Azure ROI Calculators and TCO tools, you’ll see that there are many dimensions to the pricing model. As we continue to get more and more services in the cloud, they will only increase. Although there’s something to be said for the transparency of separately accounting for bandwidth, storage, etc. it certainly puts a burden on customers to estimate their costs correctly. It’s very easy to get the wrong idea about costs by overlooking even one dimension of the pricing model.

In the Future Cloud, I’d like to see a simpler, more approachable pricing model. This might mean a less itemized version of the pricing model where you consume at a simple rate; with the ability to reduce your costs slightly if you are willing to go the itemized route. This would be similar to tax returns, where you can choose between easy and itemized forms.

10. Provide SaaS Services

Software-as-a-Service providers are ISVs who face a common set of challenges: they need to provide multi-tenancy and engineer their solutions in a way that protect tenants well. This includes protection and isolation of data, and may involve customer-controlled encryption keys. SaaS providers also have to deal with provisioning of new accounts, which they would like to be as automated as possible. Change management is another consideration, where there is a tension between the ability to provide customizations and the use of a common deployment to serve all customers.

In the Future Cloud, I’d like to see services and a framework for SaaS functionality. Microsoft themselves are solving this for SaaS offerings such as SharePoint Online and CRM Online. Why not offer provisioning, multi-tenancy, and data isolation services for SaaS ISVs as a general cloud service?

11. E-Commerce Services in the Cloud

In line with the BizSpark program and other Microsoft initiatives to support emerging business, e-commerce services in the cloud would be highly useful. A cloud-based shopping cart and payment service would be an excellent beginning, best implemented perhaps in conjunction with a well-known payment service such as PayPal. For more established businesses, we could imagine a deeper set of services that might include common ERP and commerce engine features.

In the Future Cloud, I’d like to see shopping, payment, and commerce services.

12. Basic IT Services in the Cloud

It may be unrealistic to expect enterprises will put everything they have in the cloud, but start-ups are another matter altogether. For many start-ups, all of their IT will be in the cloud. They won’t have any local IT assets whatsoever beyond laptops. That means the basics, such as email, conferencing, Active Directory, domain management, and backup/restore will need to be in the cloud. We have a start on that today with Exchange Online, Office Communications Online, and Live Meeting in BPOS, but more is needed to complete the picture.

In the Future Cloud, I’d like to see basic IT services provided by the cloud to support the fully-in-the-cloud customer.

Well, there’s my wish list. What do you think needs to be in the future cloud? Send me your comments.


Geva Perry posted Shopping the Cloud: Pricing (or Apples and Oranges in the Cloud) on 9/22/2010:

image In Shopping the Cloud: Performance Benchmarks I listed a number of services and reports that compare cloud provider performance results, but the truth is that in computing (cloud included) you can throw money at almost any performance and scale problem. It doesn't make any sense, therefore, to talk about performance alone; you want to compare price/performance.

But here's the rub: it is becoming increasingly difficult to compare the pricing of the various cloud providers.

Problem #1: Cloud providers use non-standard, obfuscated terminology

About a year and a half ago I wrote What Are Amazon EC2 Compute Units? in which I raised the issue of how difficult it is to know what it is you are actually getting for what you are paying in the cloud. Other vendors use their own terminology, such as Heroku's Dynos. I'm not just picking on these two, everyone has their own system.

Problem #2: Cloud providers use wildly varying pricing schemes

In addition, the pricing schemes by the various vendors include different components. Take storage as the simplest example, which clearly illustrates the point. Here's a screenshot from the Rackspace Cloud Files pricing page:

Rackspace Cloud Files Pricing

It is fairly straightforward, but also contains many elements that are extremely difficult to project (especially for a new application), such as the Bandwidth and Request Pricing. That's OK - you have to make some assumptions.

But here's my main point -- now compare it to Amazon S3 pricing:

Amazon s3 Pricing
Good luck with that.

Problem #3: Not all cloud offerings are created equal

To make things worse, not all cloud storage services were made equal. They have different features, different SLAs, varying levels of API richness, ease-of-use, compliance and on and on.

Problem #4: Cloud computing pricing is fluctuating rapidly

Another big problem with dealing with pricing is that the market is very dynamic and prices change rapidly. Fortunately, most of the movement right now is downwards, due to the increased competitiveness (especially in the IaaS space) and thanks to vendors benefiting from economies of scale and increased efficiency due to innovation.

Andrew Shafer from CloudScaling wrote a blog post a couple of weeks ago in which he shows how Amazon pricing is constantly shrinking. Check out this graphic he created:

AWS-Price-Announcments

What to do about it?

So what do you do in such a complex landscape? There seems to be no escape from creating a test application and running it on multiple services to see where the cost comes out. Then again, that may turn out to be a very time-consuming and expensive effort that may not be worth it -- at least not initially. So you should be prepared to have to move your app across cloud providers if and when the costs become prohibitive (which I am seeing happening to more and more companies).

Hopefully, the cloud benchmark services will also start paying attention to pricing and provide a comparison of price/performance and not just performance.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

imageNo significant articles today.

<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) asked Hack The Stack Or Go On a Bender With a Vendor? in this 9/24/2010 post:

Cloud computing icon

I have the privilege of being invited around the world to talk with (and more importantly) listen to some of the biggest governments, enterprises and service providers about their “journey to cloud computing.”

I feel a bit like Kwai Chang Caine from the old series Kung-Fu at times; I wander about blind but full of self-assured answers to the questions I seek to ask, only to realize that asking them is more important than knowing the answer — and that’s the point.  Most people know the answers, they just don’t know how — or which — questions to ask.

Yes, it’s a Friday.  I always get a little philosophical on Fridays.

In the midst of all this buzz and churn, there’s a lot of talk but depending upon the timezone and what dialect of IT is spoken, not necessarily a lot of compelling action.  Frankly, there’s a lot of analysis paralysis as companies turn inward to ask questions of themselves about what cloud computing does or does not mean to them.

There is, however, a recurring theme across geography, market segment, culture and technology adoption appetites; everyone is seriously weighing their options regarding where, how and with whom to make their investments in terms of building a cloud computing infrastructure (and often platform) as-a-service strategy.  The two options, often discussed in parallel but ultimately bifurcated based upon explored use cases, come down simply to this:

  1. Take any number of available open core or open source software-driven cloud stacks, commodity hardware and essentially engineer your own Amazon, or
  2. Use proprietary or closed source virtualization-nee-cloud software stacks, high-end “enterprise” or “carrier-class” converged compute/network/storage fabrics and ride the roadmap of the vendors

One option means you expect to commit to an intense amount of engineering and development from a software perspective; the other means you expect to focus on integration of other companies’ solutions.  Depending upon geography, it’s very, very unclear to enterprises or service providers which route is the most cost-effective and risk-balanced when use-cases, viability of solution providers and the ultimate consumers of these use-cases are conflated.

There is no one-size-fits-all solution.  There is no “THE Cloud.”

This realization is why most companies are spinning around, investigating the myriad of options they have available and the market is trying to sort itself out, polarized at one end of the spectrum or trying to squeeze out a happy balance somewhere in the middle.

The default position for many is to go with what they know and “bolt on” new technology both tactically (in absence of an actual long-term strategy) to revamp what they already have.

This is where the battle between “public” versus “private” cloud rages — where depending upon which side of the line you stand, the former heralds the “new” realized model of utility computing and the latter is seen as building upon virtualization and process automation to get more agile.  Both are realistically approaching a meet-in-the-middle strategy as frustration mounts, but it’s hard to really get anyone to agree on what that is.  That’s why we have descriptions like “hybrid” or “virtual private” clouds.

The underlying focus for this discussion is, as one might imagine, economics.  What architects (note I didn’t say developers*) quickly arrive at is that this is very much a “squeezing the balloon problem.” Both of these choices hold promise and generally cause copious amounts of iteration and passionate debate centered on topics like feature agility, compliance, liability, robustness, service levels, security, lock-in, utility and fungibility  of the solutions.  But it always comes back to cost.

Hard costs are attractive targets that are easily understood and highly visible.  Soft costs are what kill you.  The models by which the activity and operational flow-through — and ultimate P&L accountability of IT — are still black magic.

The challenge is how those costs are ultimately modeled and accounted for and how to appropriately manage risk. Nobody wants the IT equivalent of credit-default swaps where investments are predicated on a house of cards and hand-waving and at the same time, nobody wants to be the guy whose obituary reads “didn’t get fired for buying IBM.”

Interestingly, the oft-cited simplicity of the “CapEx vs. OpEx” discussion isn’t so simple in hundred-year-old companies whose culture is predicated upon the existence of processes and procedures whose ebb and flow quite literally exist on the back of TPM reports.  You’d think, from the way many of these solutions are marketed — both #1 and #2 above — that we’ve reached some sort of capability/maturity model inflection point where either is out of diapers.

If this were the case, these debates wouldn’t happen and I wouldn’t be writing this blog.  There are many, many tradeoffs to be made here. It’s not a simple exercise, no matter who it is you ask — vendors excluded ;)

Ultimately these discussions — and where these large companies and service providers with existing investment in all sorts of solutions (including previous generations of things now called cloud) are deciding to invest in the short term — come down to the following approaches to dealing with “rolling your own” or “integrating pre-packaged solutions”:

  1. Keep a watchful eye on the likes of mass-market commodity cloud providers such as Amazon and Google. Use (enterprise) and/or emulate the capabilities (enterprise and service providers) of these companies in opportunistic and low-risk engagements which distribute/mitigate risk by targeting non-critical applications and information in these services.  Move for short-term success while couching wholesale swings in strategy with “pragmatic” or guarded optimism.
    .
  2. Distract from the back-end fracas by focusing on the consumption models driven by the consumerization of IT that LOB and end users often define as cloud.  In other words, give people iPhones, use SaaS services that enrich user experience, don’t invest in any internal infrastructure to deliver services and call it a success while trying to figure out what all this really means, long term.
    .
  3. Stand up pilot projects which allow dabbling in both approaches to see where the organizational, operational, cultural and technological landmines are buried.  Experiment with various vendors’ areas of expertise and functionality based upon the feature/compliance/cost see-saw.
    .
  4. Focus on core competencies and start building/deploying the first iterations of “infrastructure 2.0” with converged fabrics and vendor-allied pre-validated hardware/software, vote with dollars on cloud stack adoption, contribute to the emergence/adoption of “standards” based upon use and quite literally *hope* that common formats/packaging/protocols will arrive at portability and ultimately interoperability of these deployment models.
    .
  5. Drive down costs and push back by threatening proprietary hardware/software vendors with the “fact” that open core/open source solutions are more cost-effective/efficient and viable today, whilst trying not to flinch when they bring up item #2, questioning where and how you should be investing your money and what your capabilities really are as it relates to development and support.  React to that statement by threatening to move all your apps atop someone else’s infrastructure. Try not to flinch again when you’re reminded that compliance, security, SLA’s and legal requirements will prevent that.  Rinse, lather, repeat.
    .
  6. Ride out the compliance, security, trust and chasm-crossing comfort gaps, hedging bets.

If you haven’t figured it out by now, it’s messy.

If I had to bet which will win, I’d put my money on…<carrier lost>

/Hoff

*Check out Bernard Golden’s really good post “The Truth About What Really Runs On Amazon” for some insight as to *who* and *what* is running in public clouds like AWS.  The developers are leading the charge.  Often times they are disconnected from the processes I discuss above, but that’s another problem entirely, innit?

Image via Wikipedia


The HPC in the Cloud blog regurgitated a Survey Suggests Multi-Factor Authentication Critical to Widespread Adoption of Cloud Computing press release on 9/23/2010:

image PhoneFactor, the leading global provider of phone-based multi-factor authentication, today released the results of its recent survey on the role of security in cloud computing adoption. The results point to a strong interest in cloud computing, but an equally strong fear about the security implications.

The survey included more than 300 information technology professionals from a wide variety of industries and looked at their organizations' current and planned use of cloud computing, what perceived benefits are driving adoption, and conversely which factors are limiting adoption. Key findings in PhoneFactor's study include:

  • Security is a primary barrier to cloud computing adoption for nearly three-quarters of respondents (73%) followed by Compliance (54%) and Portability/Ownership of Data (48%).
  • 42% of respondents indicated that security concerns had held their company back from adopting cloud computing. 30% were unsure, and only 28% indicated it had not been a deterrent.
  • Leading cloud services were rated only moderately secure or worse: Google Apps, Amazon Web Services, and SalesForce/Force.com were all rated only moderately or less secure by more than 74% of respondents.
  • Preventing unauthorized access to data was the greatest cloud computing security concern. The overwhelming majority of respondents (93%) were at least moderately concerned about preventing unauthorized access to company data in the cloud; more than half (53%) were extremely concerned by it. Fear of the unknown ranked second highest with 89% of respondents indicating they were at least moderately concerned about the inability to evaluate the security of cloud-based systems.
  • What can be done to increase confidence in cloud computing? The top three security measures respondents thought were critical to securing the cloud included: Encryption (84%), Multi-Factor Authentication (81%), and Intrusion Prevention (80%).
  • Reduced cost (65%), Scalability (62%), and Rapid Implementation (50%) are seen as primary benefits to cloud computing. 87% of respondents indicated that they were planning to at least evaluate the use of cloud services.

"Companies are eager to take advantage of the benefits of cloud computing," said Steve Dispensa, PhoneFactor CTO and co-founder. "Demand for cloud computing systems clearly exists. However, survey results indicate that better security, like multi-factor authentication and encryption, are going to be required if cloud computing adoption is going to move forward."

Google's announcement earlier this week that they are adding two-step authentication to Google Apps validates the need for additional security for user access to cloud applications. The Google authentication solution, like that provided by PhoneFactor, leverages an everyday device -- the phone, which is ideal for cloud applications because it mirrors the scalability and cost savings touted as key benefits of cloud computing.

Read more: Page 2 All »


Simon Ellis asked Cloud Security – Is The Cloud Insecure? in this 9/23/2010 post to the CloudTweaks blog:

image Cloud security is at the top of every CIO’s mind. Apparently some people even consider that cloud risks outweigh cloud benefits.  Unfortunately, an overzealous approach to cloud security can lead to arguments that detract from the real issues, with little to no analysis of the specific problems at hand.

Below is a list of cloud security issues that I believe affect large organizations:
  • Separation of duties: Your existing company probably has separate application, networking and platform teams. The cloud may force a consolidation of these user groups. For example, in many companies the EC2 administrators are application programmers, have access to Security Groups (firewall) and can also spin up and take down virtual servers.
  • Home access to your servers: Corporate environments are usually administered on-premise or through a VPN with two-factor authentication. Strict access controls are usually forgotten for the cloud, allowing administrators to access your cloud’s control panel from home and make changes as they see fit. Note further that cloud access keys/accounts may remain available to people who leave or get fired from your company, making home access an even bigger concern…
  • Difficulty in validating security: Corporations are used to stringent access and audit controls for on-premise services, but maintaining and validating what’s happening in the cloud can become a secondary concern. This can lead some companies to lose track of the exact security posture of their cloud environments. (A simple automated check is sketched after this list.)
  • Appliances and specialized tools do not support the cloud: Specialized tools may not be able to go into the cloud. For example, you may have Network Intrusion Detection appliances sitting in front of on-premise servers, and you will not be able to move such specialized boxes into the cloud. A move to Virtual Appliances may make this less of an issue for future cloud deployments.
  • Legislation and Regulations: Cross-border issues are a big challenge in the cloud. Privacy concerns may forbid certain user data from leaving your country, while foreign legislation may become an unneeded new challenge for your business. For example, a European business running systems on American soil may open itself up to Patriot Act regulations.
  • Organizational processes: Who has access to the cloud and what can they do? Can someone spin up an Extra Large machine and install their own software? How do you back up and restore data? Will you start replicating processes within your company simply because you’ve got a separate cloud infrastructure? Many companies are simply not familiar enough with the cloud to create the processes necessary for secure cloud operations.
  • Auditing challenges: Any auditing activities that you normally undertake may be complicated if data is in the cloud. A good example is PCI — can you actually prove that CC data is always within your control, even if it’s hosted outside of your environment somewhere in the cloud ether?
  • Public/private connectivity is a challenge: Do you ever need to mix data between your public and private environments? It can become a challenge to send data between these two environments, and to do so securely. New technologies for cloud impedance matching may help.
  • Monitoring and logging: You will likely have central systems monitoring your internal environment and collecting logs from your servers. Will you be able to achieve those same monitoring and log collection activities if you run servers off-premise?
  • Penetration testing: Some companies run periodic penetration testing directly against their public infrastructure. Cloud providers may not be as amenable to such ‘hacking’-type activities taking place on the infrastructure they provide.
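
One practical answer to the “validating security” and “auditing” items above is to script the checks rather than rely on periodic manual reviews. The sketch below is a minimal example using the Python boto library (my choice for illustration; the article doesn't prescribe a toolkit). It assumes AWS credentials are already configured in environment variables or a boto config file, and the “admin ports” list is just an illustrative starting point.

# Sketch: flag EC2 security group rules that are open to the entire Internet.
# Assumes boto credentials are configured via environment variables or ~/.boto.
import boto

ADMIN_PORTS = {22, 3389, 1433, 3306, 5432}  # SSH, RDP, SQL Server, MySQL, PostgreSQL

def world_open_rules():
    conn = boto.connect_ec2()
    findings = []
    for group in conn.get_all_security_groups():
        for rule in group.rules:
            for grant in rule.grants:
                if grant.cidr_ip == '0.0.0.0/0':
                    findings.append((group.name, rule.ip_protocol,
                                     rule.from_port, rule.to_port))
    return findings

if __name__ == '__main__':
    for name, proto, from_port, to_port in world_open_rules():
        warning = '  <-- admin port open to the world!' if (
            from_port and str(from_port).isdigit() and int(from_port) in ADMIN_PORTS) else ''
        print('%s: %s %s-%s from 0.0.0.0/0%s' % (name, proto, from_port, to_port, warning))

Run on a schedule, a report like this turns “can you prove it?” into a log you can hand to an auditor.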

Simon is the owner of LabSlice, a new startup that allows companies to distribute Virtual Demos using the cloud.


<Return to section navigation list> 

Cloud Computing Events

Andrew reported Cloud CodeRetreat will be held 11/6/2010 from 8:30 AM to 5:30 PM at the Cloudscaling offices in San Francisco, CA:

Cloudscaling and the great crew from Cloudkick want you to come sharpen your cloud coding skills Saturday, Nov 6th, 2010.

Yes, it’s a Saturday. Yes, it’s pretty much all day. Yes, it will be fun.

Bring a laptop and the willingness to teach and learn from each other.

This one will be all about Python (the language of OpenStack).

We’ll bring food.

Please RSVP


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Doug Rehstrom posted Understanding Amazon EC2 Security Groups and Firewalls to the Learning Tree blog on 9/24/2010:

image When launching an Amazon EC2 instance, you need to specify its security group.  The security group acts as a firewall, allowing you to choose which protocols and ports are open to computers over the Internet.  You can choose to use the default security group and then customize it, or you can create your own security group.  Configuring a security group can be done with code or using the Amazon EC2 management console.

If you choose to use the default security group it will initially be configured as shown below:

image The protocols to configure are TCP, UDP and ICMP.  (ICMP is used for ping.)  There is also a range of ports for each protocol.  (ICMP uses no port, which is why the range is -1 to -1.)  Lastly, the source allows you to open the protocols and ports to either a range of IP addresses or to members of some security group.

The default security group above may be a little confusing.  It appears everything is wide open.  In fact everything is closed.  The default group, by default, opens all ports and protocols only to computers that are members of the default group (if that makes any sense).  Anyway, no computer across the Internet can access your EC2 instance at that point.

Most likely, you’ll need to open some protocols and ports to the outside world.  There are a number of common services preconfigured in the Connection Method dropdown as shown below.

As an example, if you are configuring an EC2 instance to be a Web server you’ll need to allow the HTTP and HTTPS protocols.  Select them from the list and the security group would be altered as shown below.

The most important thing to note is the Source IP.  When you specify “0.0.0.0/0” that really means you’re allowing every IP address to access the specified protocol and port range.  So in the example, TCP ports 80 and 443 are open to every computer on the Internet.
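
As noted at the start of the article, the same configuration can be done in code instead of the console. Here is a minimal sketch using the Python boto library (an illustrative choice, not something the article prescribes); credentials are assumed to be configured outside the script.

# Sketch: create a "web" security group and open HTTP/HTTPS to every address.
# Assumes boto credentials come from environment variables or ~/.boto.
import boto

conn = boto.connect_ec2()
web = conn.create_security_group('web', 'Front-end web servers')

# 0.0.0.0/0 = every IPv4 address; reasonable for public HTTP/HTTPS, nothing else.
web.authorize(ip_protocol='tcp', from_port=80, to_port=80, cidr_ip='0.0.0.0/0')
web.authorize(ip_protocol='tcp', from_port=443, to_port=443, cidr_ip='0.0.0.0/0')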

You might also want to allow services to manage the server, upload files and so on.  For example, if I were configuring a Windows server I’d want to use Remote Desktop, which would require me to enable RDP, which uses TCP port 3389.  However, I’d only want my IP address to have access to that protocol.  It would be crazy to allow every computer in the world access to services like RDP, FTP, database services, etc. See the screenshot below.

Now RDP is enabled on TCP port 3389, but only for the IP address 75.88.111.9.  Note that after the IP address you don’t specify “/0”.  If you did, every computer in the world would have access to that port.  To restrict access to a single address, specify “/32” after the IP.  If you want to know why, see the following article: http://en.wikipedia.org/wiki/CIDR.
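
If the difference between “/32” and “/0” seems opaque, the number after the slash is just the CIDR prefix length: how many leading bits of the address are fixed. Counting the addresses each notation covers (here with Python's standard ipaddress module, reusing the article's example address) makes it obvious:

# The slash suffix is a CIDR prefix length; fewer fixed bits = more addresses matched.
from ipaddress import ip_network

print(ip_network('75.88.111.9/32').num_addresses)   # 1          -> exactly one host
print(ip_network('0.0.0.0/0').num_addresses)        # 4294967296 -> every IPv4 address
print(ip_network('75.88.111.0/24').num_addresses)   # 256        -> a typical subnet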

You may also need to know what your public IP address is.  Search Bing for “My IP address” and a number of Web sites will come up that will tell you.

For an easy tool to test whether a port is open try paping from Google.
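
If you just need a quick yes/no answer rather than paping's latency statistics, a few lines of Python's standard socket module will do; the hostname below is a hypothetical EC2 public DNS name, not a real instance.

# Sketch: quick "is this port reachable?" check using only the standard library.
import socket

def port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = 'ec2-203-0-113-10.compute-1.amazonaws.com'  # hypothetical instance DNS name
print('HTTP open:', port_open(host, 80))
print('RDP open: ', port_open(host, 3389))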

To learn more about EC2 and cloud computing, come to Learning Tree’s Cloud Computing course.


Alex Popescu posted Graph Databases: More than An Introduction on 9/23/2010 to his myNoSQL blog:

image Found a very informative and detailed presentation about graph databases from Marko Rodriguez, covering:

  • graph structures, algorithms, and algebras
  • graph databases and the property graph
  • TinkerPop open-source graph product suite
  • real-time, real-world use cases for graphs

Make sure to set aside enough time to go through the 120+ slides; they are definitely worth it.

Graph Databases: Trends in the Web of Data
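
If the “property graph” term in the topic list above is new to you: it is simply a graph whose vertices and edges each carry a map of key/value properties. A toy in-memory version (my own illustration, not code from the presentation or from TinkerPop) shows the idea:

# Toy property graph: both vertices and edges carry arbitrary key/value properties.
# Purely illustrative; real graph databases add indexes, transactions and query languages.
vertices = {
    1: {'type': 'person', 'name': 'marko'},
    2: {'type': 'software', 'name': 'gremlin'},
}
edges = [
    {'out': 1, 'in': 2, 'label': 'created', 'weight': 1.0},
]

def outgoing(vertex_id, label=None):
    """Edges leaving a vertex, optionally filtered by label -- the basic traversal step."""
    return [e for e in edges
            if e['out'] == vertex_id and (label is None or e['label'] == label)]

print([vertices[e['in']]['name'] for e in outgoing(1, 'created')])  # ['gremlin']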


Ayende Rahien (@ayende) posted RavenDB: Replicating to a relational database on 9/22/2010:

image I just finished implementing a very cool feature for RavenDB: the Index Replication bundle, which allows you to replicate an index to a relational database.

What does this mean? Well, consider the following document:

var q = new Question
{
    Title = "How to replicate to SQL Server?",
    Votes = new[]
    {
        new Vote { Up = true, Comment = "Good!" },
        new Vote { Up = false, Comment = "Nah!" },
        new Vote { Up = true, Comment = "Nice..." },
    }
};

And this index:

from q in docs.Questions
select new
{
    Title = q.Title,
    VoteCount = q.Votes.Count
}

With the aid of the Index Replication bundle, that index will be replicated to a relational database, giving us:

image

You can find full documentation for this feature here, and the bundle itself is part of RavenDB’s unstable builds as of build 159.


<Return to section navigation list> 

 
