Wednesday, December 21, 2011

Windows Azure and Cloud Computing Posts for 12/21/2011+

A compendium of Windows Azure, Service Bus, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Jaikumar Vijayan (@jaivijayan, pictured below) explained “Hadoop's creator discusses how the technology is making its presence felt industrywide” in a deck for his The Grill: Doug Cutting interview of 12/19/2011 for ComputerWorld:

Doug Cutting, creator of the open-source Hadoop framework that allows enterprises to store and analyze petabytes of unstructured data, led the team that built one of the world's largest Hadoop clusters while he was at Yahoo. Formerly an engineer at Excite, Apple and Xerox PARC, Cutting also developed Lucene and Nutch, two open-source search engine technologies now being managed by the Apache Foundation.

Cutting is now an architect at Cloudera, which sells and supports a commercial version of Hadoop. Here he talks about the reasons for the surging enterprise interest in Hadoop.

Doug Cutting:
  • The most interesting thing people don't know about you: One summer I worked in a salmon cannery 14 hours a day while camping in a swamp.
  • Favorite technology: The bicycle derailleur.
  • Favorite nonwork pastime: Walking, cycling, skiing or swimming with friends.
  • Favorite vice: It's a tie between an espresso at 9 a.m. and a beer at 5 p.m.
  • Four people you'd invite to dinner together: Thomas Pynchon (author), Bootsy Collins (musician), John Muir (naturalist) and my wife.
  • Best movie ever: Once Upon a Time in the West (Sergio Leone, 1968).

How would you describe Hadoop to a CIO or a CFO? Why should enterprises care about it?
At a really simple level, it lets you affordably save and process vastly more data than you could before. With more data and the ability to process it, companies can see more, they can learn more, they can do more. [With Hadoop] you can start to do all sorts of analyses that just weren't practical before. You can start to look at patterns over years, over seasons, across demographics. You have enough data to fill in patterns and make predictions and decide, "How should we price things?" and "What should we be selling now?" and "How should we advertise?" It is not only about having data for longer periods, but also richer data about any given period.

What are Hive and Pig?
Hive gives you [a way] to query data that is stored in Hadoop. A lot of people are used to using SQL and so, for some applications, it's a very useful tool. Pig is a different language. It is not SQL. It is an imperative data flow language. It is an alternate way to do higher-level programming of Hadoop clusters. There is also HBase, if you want to have real-time [analysis] as opposed to batch. There is a whole ecosystem of projects that have grown up around Hadoop and that are continuing to grow. Hadoop is the kernel of a distributed operating system, and all the other components around the kernel are now arriving on the stage.

Why do you think there's so much interest in Hadoop right now?
It is a relatively new technology. People are discovering just how useful it is. I think it is still in a period of growth where people are finding more and more uses for it. To some degree, software has lagged hardware for some years, and now we are starting to catch up. We've got software that lets companies really exploit the hardware they can afford.

What is it about relational database technologies that makes them unsuitable for some of the tasks that Hadoop is used for?
Some of it is technological challenges. If you want to write a SQL query that has a "join over tables" that are petabytes [in size] -- nobody knows how to do that. The standard way you do things in a database tops out at a certain level. [Relational databases] weren't designed to support distributed parallelism, to the degree that people now find affordable. You can buy a Hadoop-based solution for a 10th of the price [of conventional relational database technology]. So there is the affordability. Hadoop is a fairly crude tool, but it does let you really use thousands of processors at once running over all of your data in a very direct way.

What are enterprises using Hadoop for?
Well, we see a lot of different things, industry by industry. In the financial industry, people are looking at fraud detection, credit card companies are looking to see which transactions are fraudulent, banks are looking at credit worthiness -- deciding if they should give someone a loan or not. Retailers are looking at long-term trends, analyzing promotions, analyzing inventory. The intelligence community uses this a lot for analyzing intelligence.

Are those users replacing relational databases, or just supplementing them?
They are augmenting and not replacing. There are a lot of things I don't think Hadoop is ever going to replace, things like doing payroll, the real nuts-and-bolts things that people have been using relational databases for forever. It's not really a sweet spot for Hadoop.

Microsoft, Oracle, IBM and other big vendors have all begun doing things with Hadoop these days. What do you think about that trend?
It's a validation that this is real, that this is a real need that people have. I think this is good news.

What advice would you give to enterprises considering Hadoop?
I think they should identify a problem that, for whatever reason, they are not able to address currently, and do a sort of pilot. Build a cluster, evaluate that particular application and then see how much more affordable, how much better it is [on Hadoop]. I think you can do bakeoffs, at least for some initial applications. There is a real synergy when you get more data into a Hadoop cluster. Hadoop lets you get all of your data in one place so you can do an analysis of it together and combine it.

Where do you see Hadoop five years from now?
It is going to start to be a real established part of IT infrastructure. Right now, these things from Oracle and Microsoft are experiments. I think they are trying to tinker with it. I think in five years those won't be experiments. [Hadoop] will be the incumbent.

My hope is to build something that is loosely coupled enough that it can evolve and change and we can replace component by component [so] there doesn't need to be a revolution again anytime soon.

This version of this story was originally published in Computerworld's print edition. It was adapted from an article that appeared earlier on Computerworld.com.


<Return to section navigation list>

SQL Azure Database and Reporting

Paras Doshi (@paras_doshi) concluded his SQL Azure series with Getting started with SQL Azure - Part 10 B: Conclusion on 12/20/2011:

[The a]im of the “Getting started with SQL Azure” series is to offer you a set of brief articles that can act as a launchpad for your journey of exploring Microsoft’s cloud-based database solution, SQL Azure.

First, a summary of the previous blog posts:

  1. Part 1: We defined SQL Azure and discussed its advantages
  2. Part 2: We created an Azure account and created our very first SQL Azure database
  3. Part 3: We discussed the provisioning and billing model of SQL Azure
  4. Part 4: We discussed the SQL Azure architecture
  5. Part 5: We discussed the SQL Azure security model
  6. Part 6: We discussed how to migrate databases to SQL Azure
  7. Part 7: We discussed how to improve the performance of a SQL Azure database and options for planning backup and restore strategies
  8. Part 8: We discussed administrative tasks related to SQL Azure
  9. Part 9: We discussed developing SQL Azure applications
  10. Part 10: We discussed SQL Azure Data Sync and SQL Azure Reporting
Conclusion:

As summarized above, that list covers the set of topics you need to know to get started with SQL Azure. And I hope it gave you an idea of what SQL Azure has to offer. Using this knowledge, I would like to encourage you to explore the use cases of SQL Azure. For instance, see this case study: http://www.microsoft.com/casestudies/Windows-Azure/Flavorus/Ticketing-Company-Scales-to-Sell-150-000-Tickets-in-10-Seconds-by-Moving-to-Cloud-Computing-Solution/4000011072.

In a couple of sentences – what they do is spawn a large number of databases that are used for a small amount of time by an application that requires high throughput. So it’s a perfect example of how SQL Azure opens up a new set of possibilities while, at the same time, SQL Azure tries its best to support traditional SQL Server-centric scenarios. To better understand the difference between SQL Azure and SQL Server conceptually, the following image may help:

Img 1: SQL Azure vs. SQL Server

Source: http://parasdoshi.com/2011/07/21/diagrammatic-representation-of-sql-azure-vs-sql-server/

Both solutions are essentially Microsoft RDBMSs, but they differ in certain ways. I have tried to point out the differences during the series – and I hope you understand that there are differences from an architecture standpoint as well as from a technical (developer, DBA) standpoint.

And when the series was being written, SQL Azure Federations was not publicly available, but the good news is that it is now open to the public! Basically, SQL Azure Federations provides an out-of-the-box solution for “scaling out” your databases. To learn more about this unique feature, you can go to http://msdn.microsoft.com/en-us/library/windowsazure/hh597452.aspx

Also, I would say that SQL Azure has rapid service update cycles, so within a year you may see 4-5 service updates. To keep track of what’s new in SQL Azure, please visit: http://beyondrelational.com/whatisnew/sqlserver/azure.aspx. Also, a few upcoming projects are posted on SQL Azure Labs – check those projects out! Link: http://www.microsoft.com/en-us/sqlazurelabs/default.aspx

And have a look at these SQL Azure interview Q&As – I hope you are now able to answer the questions! http://blog.sqlauthority.com/2011/07/25/sql-server-azure-interview-questions-and-answers-guest-post-by-paras-doshi-day-25-of-31/.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

David Ramel (@dramel) asserted Microsoft's Windows Azure Leads the Data Revolution in a 12/20/2011 to Redmond Developer News’ DataDriver column:

It was about two years ago when I first wrote about the exciting development possibilities of "Mining the Cloud," with new data markets such as the "Dallas" project on Windows Azure.

Well, Dallas has matured into the Windows Azure Marketplace, and at least one forward-looking research organization is predicting the fruition of that effort into something really big. One of O'Reilly Radar's "Five big data predictions for 2012" published last week is the "Rise of data marketplaces." It reads:

"Your own data can become that much more potent when mixed with other datasets. For instance, add in weather conditions to your customer data, and discover if there are weather related patterns to your customers' purchasing patterns. Acquiring these datasets can be a pain, especially if you want to do it outside of the IT department, and with some exactness. The value of data marketplaces is in providing a directory to this data, as well as streamlined, standardized methods of delivering it. Microsoft's direction of integrating its Azure marketplace right into analytical tools foreshadows the coming convenience of access to data."

Indeed, from the "dozens of feeds" I discovered in my initial exploration of Dallas, Windows Azure Marketplace now boasts "thousands of subscriptions and trillions of data points," with more coming online regularly, such as historical weather data and a "Stock Sonar Sentiment Service" added last month.

Two years ago I demonstrated how easy it was to subscribe to a data feed and incorporate it into custom reports and visualizations. Imagine what developers can do now.

While Microsoft may be the vanguard of new data-centric initiatives, it's not alone, of course. ReadWriteWeb summarized the emerging data market ... uh, market that developers might tap into in this July piece, and reviewed some of the other players such as Datamarket.com, Factual, CKAN Data Hub and Kasabi. But it looks like Microsoft is indeed the frontrunner. The site even wondered "Is Microsoft's Future in Data-as-a-Service?"

But one worrisome trend that could curtail this movement is the possible loss of hundreds of thousands of raw data sources that come from the federal government as the tanking economy threatens to impose cost-cutting measures that will eliminate or severely curtail services such as Data.gov. "When the current budget cuts were revealed to include cuts to the e-government fund that supports Data.gov, everyone starting questioning Data.gov's value," reads a blog posting from the Sunlight Foundation last April when budget cuts were announced. "The cuts could spell the end of Data.gov," warned a Washington Post blog at the time. And this is with a Democrat in the White House!

The site is still up for the time being, but it's somewhat alarming that the last blog posting on the Data.gov site's Open Data section announced the resignation of the program executive last summer. And there's little activity on the forums in the "Developer's Corner" section of the site.

But with demand, there will be supply, of course, so data markets such as Windows Azure Marketplace will continue to provide valuable information that can be incorporated into exciting new development opportunities -- you just might have to pay more for less. But that's nothing new these days.

Full disclosure: I’m an occasional paid contributor to 1105 Media’s Redmond Developer News and a contributing editor of 1105’s Visual Studio Magazine.


<Return to section navigation list>

Windows Azure Access Control, Service Bus and Workflow

Jim O’Neil continued his Service Bus series with Photo Mosaics Part 8: Caching on 12/21/2011:

My previous post in this series - on the use of the Service Bus within the Azure Photo Mosaics application – for all intents and purposes completed the explanation of all of the application features, but there’s an alternative implementation that I had planned for, specifically to demonstrate the (then new) feature of Caching. And that’s what I’ll focus on for this post.

Caching Overview

Windows Azure Caching provides a distributed in-memory cache as a service that you can leverage via applications running in the Windows Azure cloud. The cache is essentially a massive property bag of name/value pairs where data is stored across multiple nodes (machines) in the Windows Azure data center and managed on your behalf. Caching is a true service, since the only thing you have to do to set it up is pick which data center will host it, specify the size of the cache, and pick an endpoint name within the Windows Azure Portal. There are a discrete number of cache sizes available (from 128MB to 4GB), and although you pay the same amount for the cache whether it’s 0 or 100% utilized, you can increase or decrease its size when needed (although just once per day).

You may have noticed too that a similarly named feature, Windows Server AppFabric Caching, exists for providing an analogous caching capability for on-premises applications. Although Windows Azure Caching shares a common core (and genesis from the project codenamed Velocity), there are some notable differences that you should be aware of when developing applications that run both on-premises and in the cloud.

Usage Scenario

Within the Azure Photo Mosaics application, I included a configuration option to enable Caching for the storage of the image library tiles. If you recall the flow of the application (below), when the user creates a photomosaic, one of the inputs is a library of images (say from Flickr, his or her own vacation pictures, or what have you) that have been stored in a Windows Azure blob container. Those images are raw images and not yet resized to the dimensions requested for the final image – that after all is another input variable - so the same base images might be used to generate one mosaic with 16x16 pixel tiles and then another with 32x32. Rather than store versions of the same tile for all the available tile sizes, the tiles are generated dynamically by the ImageProcessor Worker Role.

Photo Mosaics architecture

With the default implementation, each instance of the ImageProcessor Worker Role creates an in-memory “Image Library” that contains each of the tiles, resized from the original image in the selected image library. Although the in-memory implementation works fine for the application, there are a couple of drawbacks:

Scalability – since the entire image library is held in the RAM of the virtual machine hosting the given Worker Role, there’s an absolute limit on how large an image library can be. If the storage requirements for the generated tiles exceed the RAM allocation, your only option is to scale the application up by selecting a larger VM size, say medium versus small, doubling your RAM allocation to 3.5GB. You can only scale up so far, however. Recall that Windows Azure data centers house homogeneous, commodity hardware, so once you reach the largest option (currently extra-large with 14GB) there’s nowhere else to go!

Performance – each instance of an ImageProcessor Worker Role creates an in-memory tile library that it uses to generate a slice of the final image, and that complete library is re-created for each slice. So, for instance, if you generate a mosaic for a given image and specify you want it processed into six slices, then six tasks will be queued, and the processing for each task will involve recreating the image library. This seeming redundancy is required, since the application is multi-tenanted and stateless, so you cannot rely on the same instance of a worker role processing all of the slices for a given image.

Enter the caching implementation (zoomed from the overall architecture diagram above):

Here, the ImageProcessor first consults the cache to see if a tile for the requested image library in the requested dimension exists. If so, it uses that cached image rather than regenerating it anew (and recalculating its average color). If the tile is not found in the cache, then it does have to retrieve the original image from the image library blob container and resize it to the requested dimensions. At that point it can be stored in the cache so the next request will have near-immediate access to it.

Creating a Cache

Creating a cache with the Windows Azure Management Portal is quite simple and straightforward. When you login to the portal, select the Service Bus, Access Control & Caching option on the left sidebar and then the Caching service at the top left. You’ll then get a list of the existing Caching namespaces:

Cache Configuration

The properties pane to the right shows information on existing caches, including the current size and peak sizes over the past month and year. To create a new cache, simply select the New option on the ribbon, and you’ll be prompted for four bits of information:

  1. a namespace, which is the first part of the URL by which your cache is accessed. The URL is {namespace}.cache.windows.net, and so {namespace} must be unique across all of Windows Azure,
  2. The region where your cache is located; you’ll pick one of the six Windows Azure data centers here,
  3. The Windows Azure subscription that owns this cache,
  4. The cache size desired, ranging from 128MB to 4GB in six discrete steps (multiples of two).

Cache creation dialog

Configuring the Cache in Code

The easiest way to leverage a Windows Azure Cache in your code is via configuration. You can generate the necessary entries from the Windows Azure Management Portal (as shown below), and simply cut-and-paste the configuration into the web.config file of your Web Role or the app.config file of your Worker Role.

Cache configuration markup

What you do in configuration, you can of course do programmatically – with a bit more elbow grease.
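As a rough sketch of that programmatic route, using the caching classes described in the next section: this is illustrative rather than the Photo Mosaics code, and the cache namespace and authentication token shown are placeholders for the values your own portal displays.

    ' Requires Imports System.Security and Imports Microsoft.ApplicationServer.Caching
    Dim endpoints As New List(Of DataCacheServerEndpoint)()
    ' {yournamespace} is a placeholder for the cache namespace created in the portal
    endpoints.Add(New DataCacheServerEndpoint("{yournamespace}.cache.windows.net", 22233))

    ' wrap the authentication token from the portal in a SecureString
    Dim token As New SecureString()
    For Each c As Char In "{your-authentication-token}"
        token.AppendChar(c)
    Next
    token.MakeReadOnly()

    Dim config As New DataCacheFactoryConfiguration()
    config.Servers = endpoints
    config.SecurityProperties = New DataCacheSecurity(token)

    Dim theCache As DataCache = New DataCacheFactory(config).GetDefaultCache()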

Programming Against the Cache

The Microsoft.ApplicationServer.Caching namespace hosts the classes you’ll need to interface with the Windows Azure cache. Note that this namespace is also used for accessing Windows Server AppFabric caches, and so contains classes and properties which will not apply to Windows Azure Caches (such as DataCacheNotificationProperties). The two primary classes you will use for your Windows Azure caches are:

  • DataCacheFactory which is used to configure the cache client (via DataCacheFactoryConfiguration) and return a reference to a DataCache. Named caches are not supported in Windows Azure, so you’ll get the reference to the default cache via code similar to the following:
    Dim theCache As DataCache
    Try
        theCache = New DataCacheFactory().GetDefaultCache()
    Catch ex As Exception
        Trace.TraceWarning("Cache could not be instantiated: {0}", ex.Message)
        theCache = Nothing
    End Try
  • DataCache which is a reference to the cache itself, and includes the Add, Get, Put, and Remove methods to manipulate objects in the cache. Each of these methods deals with the cached item as a System.Object; if you want to retrieve the object as well as additional metadata like the timeout and version, you can use the GetCacheItem method to return a DataCacheItem instance (a short sketch follows below).
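For instance, here is a minimal sketch of pulling back that metadata, assuming theCache was obtained as shown above and using an illustrative key:

    Dim item As DataCacheItem = theCache.GetCacheItem("sometile")
    If item IsNot Nothing Then
        ' Value comes back as System.Object; Version and Timeout carry the extra metadata
        Trace.TraceInformation("Version: {0}, Timeout: {1}", item.Version, item.Timeout)
    End If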

Within the Azure Photo Mosaics application, the following code is used to retrieve the tile thumbnail images from the default cache (Line 5). If an image is not found in the cache, the thumbnail is generated (Line 18) and then stored in the cache (Line 21), ready to service the next request.

   1: Dim cachedTile As CacheableTile
   2:  
   3: Me.Library.ImageRequests += 1
   4: Try
   5:     cachedTile = CType(Me.Library.Cache.Get(Me.TileUri.ToString()), CacheableTile)
   6: Catch
   7:     cachedTile = Nothing
   8: End Try
   9:  
  10: If (cachedTile Is Nothing) Then
  11:     Me.Library.ImageRetrieves += 1
  12:     Trace.TraceInformation(String.Format("Cache miss {0}", Me.TileUri.ToString()))
  13:  
  14:     Dim fullImageBytes As Byte() = Me.Library.TileAccessor.RetrieveImage(Me.TileUri)
  15:  
  16:     cachedTile = New CacheableTile() With {
  17:             .TileUri = Me.TileUri,
  18:             .ImageBytes = ImageUtilities.GetThumbnail(fullImageBytes, Me.Library.TileSize)
  19:         }
  20:  
  21:     Me.Library.Cache.Put(Me.TileUri.ToString(), cachedTile)
  22: Else
  23:     Trace.TraceInformation(String.Format("Cache hit {0}", Me.TileUri.ToString()))
  24: End If

Monitoring the Cache

The Windows Azure Management Portal includes some high level statistics about your cache, namely the current size, peak size for the month, and peak size for the year; however, these statistics are not real time. Additionally there is no way to determine transaction, bandwidth, or connection utilization. Given the fact that access to your cache can be throttled, you need to program defensively and handle DataCacheExceptions. For a quota exception, for instance, the SubStatus value will be set to DataCacheErrorSubStatus.QuotaExceeded. See the Capacity Planning for Caching in Windows Azure whitepaper for additional insight into effective use of your caches.
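As a small, hedged example of that defensive pattern (reusing the theCache and cachedTile names from the code above; the key is a placeholder), a quota-throttled Put could be handled like this:

    Try
        theCache.Put("sometile", cachedTile)
    Catch ex As DataCacheException
        If ex.SubStatus = DataCacheErrorSubStatus.QuotaExceeded Then
            ' throttled by quota: skip caching this item, or back off and retry later
            Trace.TraceWarning("Cache quota exceeded: {0}", ex.Message)
        Else
            Throw
        End If
    End Try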

Windows Server AppFabric Caching provides additional transparency into cache utilization for on-premises applications; for more information, see the Windows Server AppFabric Caching Capacity Planning Guide.

If you do want to collect additional metrics on cache utilization, consider wrapping the Add, Put, Get, and other relevant methods of DataCache to maintain counters of utilization. In the Azure Photo Mosaics application, I added some simple properties to track cache hits and misses in each of the ImageProcessor Worker Roles:

Public Property ImageRequests As Int32 = 0
Public Property ImageRetrieves As Int32 = 0
Public Property ColorValueRequests As Int32 = 0
Public Property ColorValueRetrieves As Int32 = 0

The “requests” variables indicate the number of times an item was requested (tile thumbnail or color value), and “retrieves” indicate the number of times the item was retrieved from the original source (the same as a cache miss). Cache hits are calculated as requests – retrieves.
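If you would rather centralize that bookkeeping instead of incrementing counters at each call site, a minimal sketch of a wrapper over DataCache (the names here are illustrative, not from the Photo Mosaics source) could look like this:

    Public Class CountingCache
        Private ReadOnly _cache As DataCache
        Public Property Requests As Int32 = 0
        Public Property Misses As Int32 = 0

        Public Sub New(cache As DataCache)
            _cache = cache
        End Sub

        Public Function GetItem(key As String) As Object
            ' count every request; a Nothing result is a cache miss
            Requests += 1
            Dim result As Object = _cache.Get(key)
            If result Is Nothing Then Misses += 1
            Return result
        End Function
    End Class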

In the next post, I’ll continue on this theme of caching by comparing several implementations of the ImageProcessor component that use various approaches to caching.


Some FAQs about Windows Azure Caching

Can I control how long an item will be cached? By default, items expire in 48 hours. You cannot override the expiration policy for a cache (in Windows Azure); however, you can specify eviction times on an item by item basis when adding them to the cache. There is no guarantee an item will be cached for the duration requested, since memory pressure will always push out the least recently used item.
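For example, a per-item lifetime can be requested on the Put call; a quick sketch, reusing the names from the code above (the 30-minute value is arbitrary):

    ' ask for a 30-minute lifetime; the item may still be evicted earlier under memory pressure
    theCache.Put("sometile", cachedTile, TimeSpan.FromMinutes(30))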

Can I clear the cache manually or programmatically? Windows Azure Caching does not provide this capability at this time.

Is there a limit on what can be cached? Items are cached in a serialized XML format (although you can provide for custom serialization as well) and must be 8KB or less after serialization.

How much does caching cost? The costs for caching are wholly based upon the size of the cache (but do not include data transfer rates out of the data center, if applicable). As of this writing (December 2011) the cost schedule is as follows:

[Cache pricing table]

Beyond cache size are there other constraints? Yes, each cache size designation comes with an associated amount of bandwidth, transaction, and connection limits. Since caching is a shared resource, it’s possible that your usage will be throttled to fall within the limitations listed below (current as of December 2011):

[Cache quota and throttling limits table]

Is there guidance on how to select the right cache size for my application? Yes, see the Capacity Planning for Caching in Windows Azure whitepaper.

Where can I read about those differences between Windows Azure Caching and Windows Server AppFabric Caching? The MSDN article Differences Between Caching On-Premises and in the Cloud covers that topic.


Sam Vanhoutte (@SamVanhoutte) described How the Azure ServiceBus EAI & EDI Labs SDK impacts Microsoft integration on 12/19/2011:

From private to public CTP

Last weekend, a big new step was taken for companies in the Microsoft integration space, as the Microsoft Azure team released the first public CTP of the ServiceBus EAI & EDI Labs SDK. This was formerly known as Integration Services. In September, we got a chance to play with a private CTP of this Azure component and since then we have provided a lot of feedback, some of which has already been taken up in the current release. The current release seems much more stable and the installation experience was great. (I installed the CTP on various machines without a hiccup.)

It wasn’t surprising to see the BizTalk community picking up this CTP and starting to try out various typical BizTalk scenarios in the cloud:

  • Rick Garibay discussed the various components in his blog post.
  • Kent Weare posted two articles (introduction & mapper) on his blog.
  • Mikael Håkansson showed the content based routing capabilities in this article.
  • Steef-Jan Wiggers added an overview on the TechNet wiki.
  • Harish Agarwal wrote an overview on EAI bridges.
Multi-tenancy & high-density

A huge difference with this model is that our pipelines, mappings and message flows are running on shared resources in the Windows Azure data center, in some kind of high-density / multi-tenant container. This introduces an entirely new architecture and concept for isolation. That also seems like the main reason why the following capabilities are not yet possible or available:

  • It is not possible (yet) to develop custom pipeline components and have them running in the Azure runtime.
  • The mapper does not support (yet) custom XSLT or Scripting functoids.
  • Message flow does not contain workflow or custom endpoints.

This is feedback that the Microsoft team has received and is well aware of. Things just get more complex and risky when running custom applications or components in a shared environment. You don’t want my memory leak causing a downgrade, or even failure, in your processes, right? Based on feedback I gave on the Connect web site, I really have the feeling these items will be available over time, so being patient will be important here.

The various components

This CTP contains a nice set of components:

Service Bus Connect

Service Bus Connect introduces an on-premises service that exposes local LOB applications (like SAP, SQL, Oracle…) over the Service Bus Relay to the cloud. This is a more lightweight service compared with BizTalk and gives customers the ability to have a more lightweight and cheaper solution than buying BizTalk Server for LOB connectivity. It also makes sure we no longer have to write custom services, or even worse, console applications to expose LOB adapter services over the Service Bus.

Service Bus Connect leverages the LOB Adapter SDK and the available adapters and exposes these endpoints over the Relay services of the Service Bus. The local endpoints are hosted in Windows Server AppFabric and we can manage these endpoints in various ways:

  • In Visual Studio Server Explorer (ServiceBus Connect Servers). This makes me think back to the BizTalk Explorer in Visual Studio, which took 2 or 3 releases to disappear. Admins still want MMC, in my opinion.
  • Over a management service, exposed as a WCF service. One of the things that we want to do for our Codit Integration Cloud, is exposing this management service over the service bus, so that we can maintain and operate the on premises services from our Cloud management portal.
  • Through powershell

EDI processing

It took Microsoft 5 releases of BizTalk Server to provide a full-fledged EDI solution in BizTalk Server. Therefore, it’s great to see these capabilities are already available in this release, because EDI is not disappearing and doesn’t seem likely to disappear any time soon, however much we would all like it to... Support for X12 is available and also for AS/2 (the exchange protocol). What is missing for us Europeans is EDIFACT. Again, some patience will be rewarded over time. What is very promising for migration or hybrid scenarios is the fact that the provided EDI/X12 schemas look the same as those that were provided with BizTalk Server. That will make migration and exchange of these messages from cloud to on-premises BizTalk much easier.

The EDI Trading Management portal allows you to define partners and agreements, linked with various messaging entities on the service bus. All of this is provided through a nice and friendly metro-style UI (what else).

I believe this is one of the most suitable scenarios for cloud integration. B2B connectivity has been very expensive for years, because of the big EDI Value Added Networks and this is, after AS/2, a new step forward in making things cheaper and more manageable.

EAI processing

One of the strengths and differentiators of BizTalk compared to its competition was that it was one generic suite of components that could be used for EAI, B2B and ESB solutions. One product, one tool set, one deployment model. That’s why it worried me a little to see that EDI and EAI are being treated as two totally different things in this release. Looking a little deeper shows that the EDI portal is mainly configuration of partners and endpoints, in order to link messages to the right Service Bus or messaging entities. Still, I think the same portal concepts can become valid for non-EDI integration, where companies just exchange messages in different formats (flat file, XML…) over various protocols (mail, file, FTP, AS/2). I hope to see that we will be able to configure all of these connections in the same and consistent way.

Looking at the EAI capabilities, there are some nice new and refreshed items available:

  • Mapper: this looks like the BizTalk mapper, but is much different. One of the biggest differences is that it does not create or support XSLT, nor does it support custom code.
  • XML Bridges: these are the new pipelines. A bridge is a head in front of the service bus that is exposed as an endpoint. Various options are available and are discussed in the posts mentioned above. The biggest missing link here is extensibility with custom components. At this moment, it doesn’t look possible to have (de)batching, custom validation, zipping and all these other things we do in BizTalk pipeline components.
    • Another interesting thing is that a new concept of pub/sub is introduced here. You can use filters in the message flows; those filters are applied in memory, and the first matching filter will make sure the incoming request gets routed to the right endpoint. The pub/sub here is in memory and is not durable. Making it durable requires routing to either a queue or a topic. So, this routing model looks much more similar to the WCF Routing Service and less to the durable BizTalk or Service Bus topics.

Conclusion

This release is much more stable than the private CTP and looks promising. To be able to deliver real-world integration projects with this, a lot is still needed around customization and extensibility, and around management and monitoring. The components are also targeted at real technical/developer-oriented people. With Codit Integration Cloud, we provide the level of abstraction, monitoring and extensibility, while using these EDI & EAI capabilities under the hood.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Brent Stineman (@BrentCodeMonkey) finally continued his PHP series with Windows Azure & PHP (for nubs)–Part 2 of 4 on 12/21/2011:

imageOk, took a bit longer to get back to this than planned, but we’re here. So lets get stated.

In part one of this series, I talked about the tooling and getting a simple test app running in Windows. In this edition, we’ll take that to the next step and make it a Windows Azure web role and get that deployed to the local development emulator.

Solution Scaffolds

Unlike the previous Eclipse toolkit I had used, the new Windows Azure SDK for PHP is entirely command-line based. This will be a bit of a shock for those of you used to the integrated experience offered by Visual Studio. But for those familiar with PowerShell and, heaven forbid, older devs like me that used to get by on Perl or KornShell, it’s a pretty comfortable place to be.

This command line centers around the use of scaffolds, or templates. You can use the bare-bones default template or, if you’re more ambitious, create your own custom scaffolds. I’ll leave the topic of creating custom templates to those more experienced (and interested). So let’s just focus on the default template in the SDK and what it gives us.

Ben Lobaugh of Microsoft has a great set of tutorials over at the interoperability bridges site, including one on setting up your first Windows Azure and PHP project. As this points out, to create our first, bare bones app we just open up the command prompt and type this:

scaffolder run -out="C:\temp\WindowsAzurePHPApp"

This creates an initial project at the location provided, and by default it’s a Windows Azure web role. Ben’s tutorial also breaks down the contents of that location (the solution files, subdirectories, etc…). But let’s look at that command a bit more closely.

The “scaffolder” utility has two commands, ‘run’ which executes a scaffold (or template) and ‘build’ which is used to create a new scaffold (again, we’re going to skip creating a custom scaffold). The ‘run’ command supports two parameters, –OutputPath (aka –Out) and –Scaffolder (aka –s). We specified the out location and since we didn’t specify a scaffolder, it used the default “Scaffolders/DefaultScaffolder.phar” which sits just under where the scaffolder program sits (which on my machine is C:\Program Files\Windows Azure SDK for PHP). At least that’s what the ‘scaffolder’ command’s help tells us. But that’s not the entire story. The DefaultScaffolder.phar file is really just a php script, and it defines additional parameters (you can open it with the text editor of your choice).

For example, take this comment block from that phar file:

/**
* Runs a scaffolder and creates a Windows Azure project structure which can be customized before packaging.
*
* @command-name Run
* @command-description Runs the scaffolder.
*
* @command-parameter-for $scaffolderFile Argv –Phar Required. The scaffolder Phar file path. This is injected automatically.
* @command-parameter-for $rootPath Argv|ConfigFile –OutputPath|-out Required. The path to create the Windows Azure project structure. This is injected automatically.
* @command-parameter-for $diagnosticsConnectionString Argv|ConfigFile|Env –DiagnosticsConnectionString|-d Optional. The diagnostics connection string. This defaults to development storage.
* @command-parameter-for $webRoleNames Argv|ConfigFile|Env –WebRoles|-web Optional. A comma-separated list of names for web roles to create when scaffolding. Set this value to an empty parameter to skip creating a web role.
* @command-parameter-for $workerRoleNames Argv|ConfigFile|Env –WorkerRoles|-workers Optional. A comma-separated list of names for worker roles to create when scaffolding.
*/

In this we see parameters such as “–WebRoles”, “–WorkerRoles”, and even “–DiagnosticsConnectionString” (along with short names for each). And if we go look at this tutorial by Brian Swan (another great Windows Azure and PHP resource), you can see him using some of these parameters.

So what we see here is that the scaffolder can be used to quickly create various project templates for us. And if we customize the scaffolds (the php archive or .phar files), we can tailor them to our own unique needs. Powerful, if you don’t mind doing some scripting of your own.
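For example, a command along these lines (the role names are made up purely for illustration) would scaffold a project with both a web role and a worker role, using the short parameter names from the comment block above:

scaffolder run -out="C:\temp\WindowsAzurePHPApp" -web="MyWebRole" -workers="MyWorkerRole"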

Putting the “Azure” in

Alright, back to the bare bones sample project we created. We can easily put the same “phpinfo();” command into it, but that does us nothing. At least in a “Windows Azure” sense. What we really want to do is start leveraging the “Windows Azure” part of the SDK to put some cloud flavor into it.

First off, we need to locate the SDK location. In my case, it’s “C:\Program Files\Windows Azure SDK for PHP”. From there we go down a level into the “library” folder and copy the “Microsoft” folder from that location. We don’t HAVE to copy everything in the Microsoft folder, but I’d rather not get dragged down into mapping dependencies just yet. Suffice it to say that this location contains all the php include files and binary proxies we need to interact with most areas of the Windows Azure platform.

At the root of these files is AutoLoader.php, which helps resolve class names at runtime. This also helps remove the need for you as a developer to juggle lots of includes. If we drill down into the WindowsAzure folder, we’ll also find items like a SessionHandler, a proxy class for RoleEnvironment, and sub-libraries for things like Azure Storage.

But I digress; we’re going to copy the Microsoft folder into our sample web role. I put this folder right into the .Web folder, the same place my default index.php and Web.config files are located. I then update my index.php file to look like this:

<?php
    require_once('Microsoft/AutoLoader.php');

    $val = Microsoft_WindowsAzure_RoleEnvironment::isAvailable();

    if ($val)
        echo "in environment";
    else
        echo "not in environment";
?>

What this is doing is referencing the AutoLoader, using it to pull in a reference to the SDK’s RoleEnvironment class, and calling a static method of that class, isAvailable, to determine if we are running in the Windows Azure environment or not. Now at this point we could load the site using the virtual directory we created last night, or we could deploy the project to the local Windows Azure development emulator with a command like this:

package create -in="C:\temp\WindowsAzurePHPApp" -out="C:\temp\WindowsAzurePHPApp\build" -dev=true

This command uses the “package” program of the SDK to create a cloud service deployment package for Windows Azure using the project files at the location specified by the ‘-in’ parameter. Then, by specifying ‘-dev=true’, we tell the utility to deploy that package to the Windows Azure development fabric. If we load our index.php file from the resulting URI, we should (hopefully, see note below) see that we’re now running in the Windows Azure environment.

Note: isAvailable does not always work properly in the local development emulator. It appears to be a bug that creeps up occasionally but is difficult to reproduce. It can also change from fixed to broken depending on the version of the emulator you’re using.

Some debugging tips

Now if you’re like me and a bit of a PHP nub (and you likely are if you’re reading this), you’re going to have some PHP parser errors when working up this sample. PHP syntax is so close to C# .NET that it is easy to mis-type things when moving back and forth between the languages. When this happens, you’ll get a blank page. The reason for this is that by default PHP doesn’t display detailed errors, to help protect your sites. Detailed error messages can provide information that could make it easier for people to hack into a web site.

So the first thing we need to do is locate the php.ini file that controls this setting. The easiest way to do this (thanks for this tip, Maarten) is to create a page that has “phpinfo();” in it and then load it. Check the “Loaded Configuration File” location and that will show you which file you need to update. Open it with your favorite text editor and track down the “display_errors” setting and set it from “off” to “on”.
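Once you find it, the change amounts to something like the following couple of lines in that php.ini (for local debugging only – you wouldn’t want detailed errors displayed by a public site):

; show parser and runtime errors while developing locally
display_errors = On
error_reporting = E_ALL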

Now this change only impacts your local environment. If you look at the template project you created using the scaffold, it includes a php folder with a php.ini file already in it. It’s fairly bare bones, but this file will be used when you deploy your project to Windows Azure. So any tweaks you make to the way PHP is configured for your solution need to be added to this file as well if you want them to be applied when it’s deployed out to the Microsoft datacenters.

And next time on Script Adventures

So that’s it for part 2. Next time, we’ll explore the deployment process and what happens inside Windows Azure. If I’m lucky, I’ll even be able to show you how to set up remote desktop so we can peek into the virtual machine hosting our application and look at how the deployment process was executed.


Bruce Kyle posted an ISV Case Study: Add Geo-location with iLink GIS Framework on Windows Azure on 12/21/2011:

By Suresh Warrier

Frameworks are pre-built code libraries and well defined business processes that are designed to facilitate software development and implementation of business processes. Developers can use frameworks to speed up the development of solutions and products. iLink Systems provides solution frameworks that are nearly complete solutions of pre-built code that can be used to reduce time to market, support best practices, reduce delivery cost and improve ROI.

Once a potential need has been pinpointed, these frameworks can be leveraged individually, in parts or in combination with other frameworks. Frameworks are consumable by ASP.NET, Silverlight apps, or Web Services and tie in data from various formats, including Windows Azure data storage and SQL Azure. iLink’s Geographical Information System (GIS) Framework enables you to add geolocation to solutions and products by offering rich visualization of geography-sensitive data.

The Issue

Imagine a user who can geographically filter complex data by merely navigating around the map. Most existing solutions with geography sensitive data make it a cumbersome process to apply geographical filters. And these geographical filters are often limited to the span of dropdown entries. The natural way to filter is for the user to just click into a particular geographic region and see detailed information. Or a user could click on the information and see all the information located nearby.

Geo-sensitive information can be of many types - aggregate information such as summarized metrics for sales or operations in a region, or presence and status information such as that of assets, service personnel, etc. For many applications, geo-sensitive status information is critical for success, such as the status of a field service agent in a location of crisis.

To be of greatest value, the product or solution should offer an easy-to-use geographic filter. This would allow the user to view a particular geographic region and see detailed information, or look for local information nearby. As another example, a sales manager may want to see sales by region and then drill into a specific sale item, or click on a slice of a pie chart that shows the relative position of each sales region and see the map zoom in on the sales activity in that region.

The Solution

The iLink Systems Geographical Information System (GIS) Framework provides connections between multiple data providers and consumers. This means that the framework can access data from one or more providers such as Web Services, SharePoint, Facebook, or SQL Azure and combine the data for dashboards in Silverlight, SharePoint, or ASP.NET pages.

The GIS Framework, used in a sales dashboarding application written in Silverlight, offers rich reporting capabilities on sales data and Key Performance Indicators at various geographical levels. Drilling down into different regions is achieved by just clicking through the map, which seamlessly updates the KPIs being reported. Clicking a region in a chart will also update the map to drill down or up to the right geographical level in the map.

[Sales dashboard screenshot]

Application Architecture and High Level Design

[Application architecture and high-level design diagrams]

The front end is configured so that it can be hooked into any thick or thin client along with the map layer. The map layer provides all the features to add custom push pins and routes.

The Geographic Information Service is divided into a Map Service, a Custom Service and a Metric Service. The Custom and Metric Services are written so that they can be expanded as needed in the future. The web methods are developed so that they can be re-used by any number of applications with minimal modifications.

For example, the Get Service Location method returns the Latitude and Longitude for a service based on the Service ID. The Service ID can point to any service like a gas station, hospital, or restaurant. Similarly, all the web methods are developed so that they can be re-used.

The Data Access layer is developed so that it can fetch data from any data provider. Some of the data providers we have used to get the data from are SQL 2008, SharePoint 2010, and Microsoft CRM.

The iLink Framework can be incorporated either into a Web Service hosted on Azure or into a thicker client in a Silverlight or WCF application. Communication happens through a set of well-defined interfaces that facilitates easy creation of new data providers and consumers.

Reusability

The Geographic Information Service is built so that it can be re-used in various areas. At iLink, developers are trained to build applications so that we can plug and play as much code as possible. Re-use is our key. This has resulted in a number of frameworks in different areas. Similarly, the GIS Framework is built so that it can be re-used. The core of the framework can be used in multiple areas of interest with very minimal change, and it can be consumed by any thick or thin client such as Silverlight, WPF, SharePoint and so on.

Apart from the services, which are globalized and can be used in multiple verticals, the map itself is built as a control so that it can be plugged into any application. There are different layers in the map, like the route layer, content popup layer, push pin layer and the map itself as another layer. Depending on the needs, the layers can be enabled or disabled. There are interfaces written corresponding to these layers so that the consumer can use those interfaces to connect to the actual application.

Following are some of the applications where our framework is used extensively.

Service Locator: This is a Silverlight application with a very rich UI, with SQL Server 2008 as the data provider. It is built to locate different services in a given area. The user can select services like hospitals, gas stations, and restaurants based on different categories and sub-categories. The user is initially asked to select the area in which he wants to look up services. The control is just added to the Silverlight application and the respective interfaces are used to implement the boundary service, search service, push pin service and content popup service.

[Service Locator application screenshot]

Flu Shot Locator: This is a WPF application, and SQL Server is used in the backend to provide the data. The user is asked to select the category and sub-category of the disease. Based on that, the map locates the areas where the flu shot is offered. On selecting a particular location, it shows details about who is offering the shot, the number of people affected by that flu and the full address of the location. All the code is re-used from the framework.

[Flu Shot Locator application screenshot]

Sales & Company Performance: Another very good example of where our GIS framework is used extensively is measuring sales performance across the country for a multinational company. Instead of selecting categories and sub-categories, the user is asked to select the time frame and team, and the map locates the places and the sales in each location. Along with the sales information, it also gives information on how the company is performing in that particular area as KPIs with respective colors using customized pushpins.

[Sales & Company Performance dashboard screenshot]

Capabilities

iLink GIS Framework can:

  • Show intelligent business data, patterns and KPIs on top of a map.
  • Facilitate decision making
  • Enable streamlined communication between geographically dispersed teams
  • Consolidate data from multiple sources
  • Plot areas where flu shots are offered, filtered based on the disease.
  • Indicate oil rigs across the whole country.
  • Search for service locations such as schools, hospitals, gas stations across a country or state of interest.
  • Draw routes from one point to the other based on the search location.
  • List directions, which can be filtered by walking, bus or public transit.
Reusability Enablers

The GIS Framework integrates providers of data from SQL Server, SharePoint, Azure, or other Web services, and provides the data to consumers of the data, such as Silverlight, ASP.NET, or SharePoint applications.

The Framework is a set of code and binaries built to include:

  • Best practices such as error handling, retries
  • Database model
  • Providers & Consumers agnostic – open standards based

Developers use GIS Framework for either:

  • Out-of-box adoption in the areas for which iLink has already written providers/services
  • Customize existing providers/services - New providers/services can be written
Technologies Used
  • Microsoft Bing API
  • Silverlight
  • Windows Azure
  • Microsoft SharePoint Server 2010
About iLink

GIS Framework is one of the many frameworks offered by iLink to develop custom solutions. Frameworks are pre-built code libraries and well defined business processes that are designed to facilitate software development and implementation of business processes. For More information contact Ravi Mallikarjuniah, Practice Manager Commercial and ISV, email ravim@ilink-systems.com, phone +1-425-677-4424 or visit the website http://www.ilink-systems.com/ThinkingBeyond/iLinkFrameworks.aspx

iLink Systems is an ISO and CMMI certified global software solutions provider, managed Microsoft Independent Software Vendor (ISV) and Systems Integration (SI) Partner, winner of Microsoft's 2009 Public Sector, Health Partner of the Year award and Mobile Partner of the Year for 2011.

iLink integrates software systems and develops custom applications, components, and frameworks on the Microsoft platform for IT departments, application services provides and independent software vendors. iLink solutions are used in a broad range of industries and functions, including healthcare, telecom, government, education, and life sciences. iLink's expertise also includes mobile and embedded systems, business intelligence, portals and collaboration, and user experience design.


Himanshu Singh (pictured below) posted a Real-World Windows Azure: Interview with Ernest Kwong, Director of Technology Solutions at Designercity mini-case study on 12/21/2011:

As part of the Real World Windows Azure interview series, I talked to Ernest Kwong, director of technology solutions at Designercity, about using Windows Azure for its interactive karaoke application. Here’s what he had to say.

Himanshu Kumar Singh: Tell me about Designercity.

Ernest Kwong: Designercity was founded in the United Kingdom in 1995 as a web consulting firm. In 2000, we reestablished our headquarters in Hong Kong, China and began focusing on broader, integrated digital solutions. We’re now an independent digital solution consultant and software vendor and a Microsoft Partner Network member, developing mobile applications, multi-touch solutions, and out-of-home applications.

Our success is built on identifying and filling gaps in the market with innovative technology solutions. We have a high level of technological expertise and make it a focus to adopt new technology as it emerges and apply it to real-life business situations. As a result of this approach, we’ve won mandates to develop digital solutions for prominent organizations, including the Hong Kong International Airport and the Hong Kong Tourist Board.

HKS: What kinds of solutions are your customers asking for?

EK: We increasingly get requests from customers to provide hosting for their applications. Customers want highly available, scalable applications but do not want to manage infrastructure.

One such customer was Red MR, a subsidiary of bma Investment Group, which has interests in the sports, hospitality, and entertainment and media industries. Red MR is a new karaoke brand that operates two popular karaoke complexes in Hong Kong. The karaoke market in Hong Kong was in decline and had seemingly lost touch with the ‘digital generation’. Red MR wanted an interactive karaoke experience for its customers but was hesitant to invest significant capital resources to build an infrastructure to host the application it imagined amidst the declining market.

HKS: How did you respond to Red MR’s request?

EK: We wanted to tackle this market with a new spirit but we did not want to build a data center of our own. Data centers and infrastructure are not our core business. We’re looking for solutions that can provide the infrastructure components of solutions we develop.

HKS: What did you develop for the customer?

EK: In October 2010, we started developing a revolutionary karaoke concept for Red MR using Microsoft Surface to develop the multiuser, multi-touch karaoke application and the Microsoft Silverlight 4 browser plug-in and Windows Presentation Foundation to deliver a rich user interface. Customers who visit the Red MR karaoke complexes can use the Microsoft Surface unit to select karaoke songs, play games, upload photos, and even order food and beverages as part of a complete entertainment experience.

We used web roles in Windows Azure to host the front-end application and worker roles to handle background tasks, tapping Blob Storage in Windows Azure to store binary files, including photos, videos, and songs. Windows Azure is the backbone of our solution, providing the information hub that brings all of the content together.

We also developed an app for Windows Phone 7 as a complement to the Microsoft Surface application, enabling unique features that integrate customers’ smartphones with the Microsoft Surface unit. Customers can play games on the unit by using their Windows Phone as a controller, for instance.

HKS: How has the customer responded to this solution?

EK: We finished the karaoke application by the end of December 2010, and Red MR implemented the solution at two complexes. Each complex has 10 Microsoft Surface units, one for each of the complexes’ private suites, which accommodate four to 40 people. By June 2011, the karaoke experience was so popular that Red MR was preparing to open two more complexes and to increase the number of Microsoft Surface units it uses to 100.

HKS: What benefits have you seen as a result of using Windows Azure?

EK: By using the Red MR project as a springboard, we’ve developed an extensible framework that we can reuse to create other interactive solutions for customers in the entertainment industry. We’ve evolved over the last decade, and, with Windows Azure and Microsoft Surface, we’re able to take the next step in that evolution. We charge business customers a flat rate for each Microsoft Surface unit installed, including software, and the customers pay Microsoft directly for their own monthly Windows Azure subscription.

We’re also confident that we can expand our business operations by using Windows Azure. In the future, our focus will be to serve customers throughout the greater Asia-Pacific region, and Windows Azure gives us the confidence to design distributed, highly scalable applications and serve a larger region outside of Hong Kong.

Finally, by using Windows Azure, our customers can run the karaoke application and deliver engaging content without investing in and managing a physical server infrastructure—a considerable cost savings. Our customers can easily save 10 to 20 percent of the initial IT investment by using a cloud-based infrastructure built on Windows Azure. When we save customers that much money, our business becomes even more valuable.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Beth Massi (@bethmassi) continued her introductory LightSwitch series with Beginning LightSwitch Part 6: I Feel Pretty! Customizing the "Look and Feel" with Themes on 12/21/2011:

imageWelcome to Part 6 of the Beginning LightSwitch series! In parts 1 thru 5 we built an Address Book application and learned all about the major building blocks of a Visual Studio LightSwitch application -- entities, relationships, screens, queries and user permissions. If you missed them:

image222422222222In this post I want to talk about themes. Themes allow you to change the colors, fonts and styles of all the visual elements in the user interface. Now that our Address Book application is complete, we’re almost ready to get this in front of real users. But before we do, it would be really nice to apply a different look-and-feel to our application in order to make it stand out above the rest. Visual Studio LightSwitch comes with only one theme out of the box, but you can download more. In fact, there are all kinds of extensions you can download to enhance what LightSwitch can do out of the box, not just themes.

Visual Studio LightSwitch Extensibility

LightSwitch provides an entire extensibility framework so that professional developers can write extensions to enhance the LightSwitch development experience. Many third-party component vendors as well as the general community have released all sorts of extensions for LightSwitch. These include custom controls, business types, productivity libraries and designers, and of course themes. Check out some of the featured extensions from our partners. If you are a code-savvy, professional .NET developer with Visual Studio Professional or higher, then you can create your own extensions. For more information on creating extensions see the Extensibility section of the LightSwitch Developer Center.

Downloading and Enabling Extensions

Luckily, you don’t need to be a hardcore programmer to use extensions. They are easy to find and install. Just open the Extension Manager from the Tools menu.

image

The Extension Manager will come up and display all your installed extensions. Select the “Online Gallery” tab to choose from all the LightSwitch extensions from the Visual Studio Gallery. (Note: if you have Visual Studio Professional or higher and not just the LightSwitch edition installed, then you will need to filter the Extension Manager on “LightSwitch” to see those extensions.)

image

You can also download these extensions directly from the Visual Studio Gallery. Select the extension you want and click the download button to install.

For our Address Book application I’m going to apply the LightSwitch Metro Theme which is one of the most popular extensions (at the time of this writing) so it’s right at the top. Once you install the extension, you’ll need to restart Visual Studio LightSwitch. After extensions are installed, you need to enable them on a per-project basis. Open the project properties by right-clicking on the project in the Solution Explorer and select “Properties”. Then select the “Extensions” tab to enable the extension.

image

For our Address Book application, enable the Metro Theme extension. Also notice the “Microsoft LightSwitch Extensions” entry in this list, which is always enabled and used in new projects. This is an extension that is included with the LightSwitch install and contains the business types Email Address, Phone Number, Money and Image that you can use when defining your data entities, as we did in Part 1. You should never have to disable it.

Applying a Theme

Now that the theme extension is installed and enabled, you can apply the theme by selecting the “General Properties” tab and then choosing the LightSwitch Metro Theme.

image

Then just build and run (F5) the application to see how it looks!

image

For more information on the Metro Theme extension (and source code) see the LightSwitch Metro Theme Extension Sample.

Some More Cool LightSwitch Themes

Besides the Metro Theme, there are a lot of other nice looking themes available, some for free and some for a small fee. If you open the Extension Manager to the Online Gallery tab and enter the search term “theme” you will see a long list of them. Here are the top 5 most popular on the gallery (at the time of this writing):

  1. LightSwitch Metro Theme
  2. Spursoft LightSwitch Extensions
  3. Themes by Delordson (LightSwitchExtras.com)
  4. Luminous Themes
  5. VS Dark Blue Theme

Also check out Paul Patterson’s “Uber Theme Resource” which provides more screenshots and reviews of all the themes on the gallery!

Wrap Up

As you can see, it’s easy to download and enable themes in Visual Studio LightSwitch in order to change the look-and-feel of your business applications. LightSwitch provides an entire extensibility model that allows the community to create not only themes but all kinds of extensions that enhance the LightSwitch development experience. If you’re a code-savvy developer who wants to create your own themes, head to the Extensibility section of the LightSwitch Developer Center to get set up and then read the Creating a Theme Extension walkthrough.

Well that wraps up what I planned for the Beginning LightSwitch Series! I hope you enjoyed it and I hope it has helped you get started building your own applications with Visual Studio LightSwitch. For more LightSwitch training please see the Learn section of the LightSwitch Developer Center. In particular, I recommend going through my “How Do I” videos next.

Now I’m going to go enjoy some well-earned Christmas vacation. I’ll be back in a couple weeks. Happy Holidays LightSwitch-ers! Enjoy!

You deserve a vacation, Beth, after these six monster tutorials.


Jan van den Haegen (@janvanderhaegen) reported Microsoft LightSwitch Black Theme: the mystery solved… on 12/21/2011:

imageOver three months ago, we discovered the LightSwitch crew had a hidden theme in their Microsoft.LightSwitch.Client.Internal assembly, called the BlackTheme.

Without documentation, and with no way to select it from the normal LightSwitch development environment, we were left with no other option than to guess the reason for its existence… Well, we could have just asked the LightSwitch crew… But where’s the fun in that?

image222422222222After countless sleepless nights, wondering, pondering, trying to get a grasp on this deep, dark secret they have hidden from us, I tried to move on with my life, never to think about it again. Until recently, when – purely by accident – I discovered that the BlackTheme actually has a very obvious use…

When you select the default – Blue Glossy – LightSwitch theme to use in your applications, the framework checks to see if your Windows machine is running in high contrast or not. If it isn’t, the default theme is applied, but when it is, the framework applies the BlackTheme instead. The BlueTheme has its brushes “hardcoded”, but the BlackTheme uses the system brushes instead… That explains why I couldn’t find much of a difference when I used my ExtensionsMadeEasy to force the BlackTheme, apart from the blue (Windows default) selection color…

Really short post this time, because I’m digging into another LightSwitch feature I didn’t know about (and can’t find any posts about on the web either), and can’t wait to get back to it.
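To make Jan’s high-contrast explanation concrete, here’s a rough sketch in WPF terms of the kind of check he describes. The LightSwitch Silverlight client does this internally; the class name, theme IDs and the hard-coded color below are my own illustrative assumptions, not LightSwitch’s code:

    using System.Windows;
    using System.Windows.Media;

    static class ThemeResolver
    {
        // Pick the theme the way Jan describes: BlackTheme when Windows is
        // running in high-contrast mode, the default Blue Glossy theme otherwise.
        public static string ResolveThemeId()
        {
            return SystemParameters.HighContrast ? "BlackTheme" : "BlueTheme";
        }

        // The BlueTheme hard-codes its brushes; the BlackTheme maps them to the
        // system palette, which is why only the selection color stood out.
        public static Brush SelectionBrush()
        {
            return SystemParameters.HighContrast
                ? SystemColors.HighlightBrush                                   // follows the OS high-contrast palette
                : new SolidColorBrush(Color.FromArgb(0xFF, 0x33, 0x66, 0x99));  // example hard-coded value
        }
    }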


Kostas Christodoulou discussed Custom Controls vs. Extensions in a 12/21/2011 post:

Did you know that, although I have previously voted for extensions, my experience with LightSwitch has helped me realize that in some cases custom controls can be more effective than extensions, or can even handle issues that extensions are not meant to handle?

image222422222222When special manipulation and user interaction have to be handled (like drag-and-drop, especially between differently typed collections), extensions can be employed, but IMHO the plumbing needed to achieve the desired goal has to be taken into serious consideration. In general, the question to answer when deciding whether to implement an Extension Control or a Custom Control is whether the effort needed to extract the general behaviors and characteristics required by the entities to be handled, and thus generalize the control, is worth it. And by “worth it” I mean: what are the chances that you will be able to reuse it?

There are cases where very special data manipulation requirements need to be met (once, and maybe never again). In these cases a “Project” Custom Control (proprietary terminology) is the best solution. By “Project” I mean a custom control that is aware of the domain, created in the Client project or in any other project with a reference to the Client project. The Client project, apart from being aware of the domain, also has a direct reference to the concrete client LightSwitchApplication, with all the plumbing required to access the concrete DataWorkspace.

One thing is for sure: as soon as you have a custom control operating without a concrete reference to your domain and application, you have a perfect candidate for a Control Extension, and you should implement it as one as soon as possible.
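Here’s a minimal sketch of what a “Project” custom control in Kostas’ sense might look like: a Silverlight UserControl that is free to know about the domain because it lives in (or references) the Client project. The class name is hypothetical; the IContentItem cast follows the usual LightSwitch custom-control pattern:

    using System.Windows.Controls;
    using Microsoft.LightSwitch.Presentation;   // IContentItem lives here

    public class ContactCard : UserControl
    {
        public ContactCard()
        {
            Loaded += (sender, e) =>
            {
                // A LightSwitch screen hands its bound value to a custom control
                // through IContentItem on the DataContext.
                var contentItem = DataContext as IContentItem;
                if (contentItem == null) return;

                // Domain-aware: in a real "Project" control you would cast Value to the
                // concrete generated entity type (for example, a Contact from the Address
                // Book sample above) and manipulate it directly, instead of generalizing
                // the control for reuse.
                object entity = contentItem.Value;
            };
        }
    }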


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Judith Hurwitz (@jhurwitz) published Predicting 2012: What’s old is new again – or is it? on 12/21/2011:

imageMaybe I have been around the technology market too long but it appears to me that there is nothing new under the sun. The foundational technologies that are the rage today all have their roots in technology that has been around for decades. That is actually a good thing. Simply put, a unique technology concept often will not be commercially viable for at least a decade. So, as I look into 2012, it is clear to me that we are at a tipping point where technologies that have been incubating for many years are customer requirements. Some, like cloud computing are just emerging from the early adopter phase. Others like big data are still in early hype mode. It will be an interesting year. Here are some of my predictions for 2012.

  1. Cloud computing is the new Internet. Only ten years ago companies talked about having an Internet strategy – today Internet is simply part of the fabric of organizations. Likewise, I am predicting that within 10 years we won’t hear customers talk about their cloud strategy — it will simply be the way business is done.
  2. Analytics is one of the most important trends – being able to anticipate what will happen next – whether it is a retailer trying to determine what products will be hot or a business trying to anticipate where a problem will emerge.
  3. The best new ideas are old ideas re-envisioned. The greatest and hottest companies will be based on taking existing ideas and products and revamping them with newer technologies and inventive go to market initiatives.
  4. Platform as a Service (PaaS) is the next hot thing in cloud this coming year. This market will have a rocky evolution since existing Infrastructure as a Service vendors and Software as a Service vendors will all create their version of PaaS tied to their existing offerings. Needless to say, this will confuse customers.
  5. Service Management becomes the defining differentiator for all types of cloud service providers. Especially as the hybrid cloud becomes the norm, providers will have to provide a required level of privacy, security, and governance depending on the customer need. I expect to see a flood of cloud service management products.
  6. While 2011 was the year of innovation, 2012 will be the year of the customer experience. Apple’s success and huge market cap can be, at least in part, attributed to its ability to delight customer with well-executed and well-designed products. The best new products do not simply give customers what they said they wanted, but anticipate what they didn’t know they wanted until it becomes available.
  7. The number of vendors focused on cloud security, governance, and compliance will expand dramatically. The successful companies will focus on protecting all end points – including mobile.
  8. Big data will be the most important silver bullet of 2012. It will indeed be overhyped to the point where every aspect of computing will be tagged as a part of the big data ecosystem. With that said, it is one of the most important trends because it will provide a set of techniques to enable companies to gain knowledge and insight in new ways from the huge volumes of unstructured data. The successful companies are those that create solutions focused on solving specific industry problems. One size does not fit all.
  9. Many copycat companies will tank. Companies like Groupon and Zynga simply do not have the depth of technology or differentiation to have sustainability. But that will not stop hundreds or perhaps thousands of copycat companies with equally weak value propositions from setting up shop.
  10. It is probably redundant to say that this will be the year of the mobile app. In fact, the same could be said for 2011. But there is a subtle difference. This year will see sophisticated developers focusing on creating applications that can easily move across tablets, laptops, and phones. The successful companies will make it easy to synchronize data across these devices as well as to integrate with related applications. A healthy ecosystem will make the difference between success and failure.

I republished Judith’s predictions because I wish I had made all 10.


Himanshu Singh belatedly reported Windows Azure Achieves ISO 27001 Certification from the British Standards Institute in a 12/19/2011 post to the Windows Azure blog:

imageOn November 29, 2011, Windows Azure obtained ISO 27001 certification for its core services following a successful audit by the British Standards Institute (BSI). You can view details of the ISO certificate here, which lists the scope as: “The Information Security Management System for Microsoft Windows Azure including development, operations and support for the compute, storage (XStore), virtual network and virtual machine services, in accordance with Windows Azure ISMS statement of applicability dated September 28, 2011. The ISMS meets the criteria of ISO/IEC 27001:2005 ISMS requirements Standard.”

The ISO certification covers the following Windows Azure features:

  • Compute (includes Web and Worker roles)
  • Storage (includes Blobs, Queues, and Tables)
  • Virtual Machine (includes the VM role)
  • Virtual Network (includes Traffic Manager and Connect)

Included in the above are Windows Azure service management features and the Windows Azure Management Portal, as well as the information management systems used to monitor, operate, and update these services.

In our next phase, we will pursue certification for the remaining features of Windows Azure, including SQL Azure, Service Bus, Access Control, Caching, and the Content Delivery Network (CDN).

Microsoft’s Global Foundation Services division has a separate ISO 27001 certification for the data centers in which Windows Azure is hosted.

Learn more about ISO 27001 certification. View the certificate for Windows Azure.

See my Windows Azure and Cloud Computing Posts for 12/3/2011+ of 12/5/2011 for early details of the ISO 27001 certification.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Kristian Nese (@KristianNese) listed Required Management Packs for the Cloud Process Pack in a 12/21/2011 post to his Virtualization and Some Coffee blog:

imageOK, I’ve done this a couple of times now, so I want to share it.

First, configure the SCVMM/SCOM integration (http://kristiannese.blogspot.com/2011/12/integrate-scvmm-2012-with-scom-2012.html) and import all the required management packs during that process.

imageOnce this is done, you have to import several management packs into System Center Service Manager 2012 before you’re able to install the Cloud Process Pack.

Tip: Download all the management packs with Operations Manager to a folder, and then import the following management packs into Service Manager 2012:

  • Microsoft.SQLServer.Library.mp
  • Microsoft.SQLServer.2008.Monitoring.mp
  • Microsoft.Windows.InternetInformationServices.2003.mp
  • Microsoft.Windows.InternetInformationServices.2008.mp
  • Microsoft.Windows.InternetInformationServices.CommonLibrary.mp
  • Microsoft.Windows.Server.2008.Discovery.mp
  • Microsoft.Windows.Server.2008.Monitoring.mp

It’s important that you import the MPs above prior to the MPs below, since many of the latter have the former as prerequisites.

In Service Manager 2012, import all the management packs from these locations:

  • From your SCVMM install: %Programfiles%\Microsoft System Center 2012\Virtual Machine Manager\ManagementPacks
  • From your Service Manager 2012 install: %Programfiles%\Microsoft System Center 2012\Service Manager 2012\Operations Manager Management Packs
  • Also from your Service Manager 2012 install: %Programfiles%\Microsoft System Center 2012\Service Manager 2012\Operations Manager 2012 Management Packs

<Return to section navigation list>

Cloud Security and Governance

No significant articles today.


<Return to section navigation list>

Cloud Computing Events

Richard Santalesa announced a two-hour webcast about Contracting for Cloud Computing Services on 2/14/2012 at 9:00 AM PST:

imageThe Knowledge Group/The Knowledge Congress Live Webcast Series, a leading producer of regulatory focused webcasts, has announced that InfoLawGroup attorney, Richard Santalesa, will be speaking at the Knowledge Congress’ webcast entitled: “Contracting for Cloud Computing Services: What You Need to Know” scheduled for February 14, 2012 from 12:00 PM to 2:00 PM ET.

For more details and to register for this event, please visit the event homepage: http://www.knowledgecongress.org/event_2012_Cloud_Computing.html.

Richard is Senior Counsel in Information Law Group's east coast office, based in Fairfield, Connecticut and New York City. He focuses on representing clients on electronic commerce and Internet issues, software and content licensing, privacy and data security, outsourcing, software and website development transactions, and other commercial arrangements involving intellectual property and technology-savvy companies.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Bloomberg Businessweek reported Oracle Falls Most Since 2002 After Missing Profit Estimates on 12/21/2011:

Dec. 21 (Bloomberg) -- Oracle Corp. dropped the most in more than nine years, dragging down other software makers, after it reported quarterly sales and profit that missed analysts' estimates in a sign companies are spending less on programs that help them manage operations.

The second-largest software maker said profit before some costs in the fiscal second quarter ended Nov. 30 was 54 cents a share, on revenue excluding certain items of $8.81 billion, according to a statement yesterday. Analysts had projected profit of 57 cents on sales of $9.23 billion, the average of estimates compiled by Bloomberg.

Oracle, based in Redwood City, California, and other business-software companies are taking longer to close deals as companies gird for slow economic growth in the U.S. and the possibility of a recession in Europe next year, said Rick Sherlund, an analyst at Nomura Holdings Inc. New software licenses, an indicator of future revenue, rose less than Sherlund projected, and sales of hardware acquired through the Sun Microsystems deal fell more than expected.

“The economy got a little harder for them,” Pat Walravens, an analyst at JMP Securities in San Francisco, said in an interview on Bloomberg Television's “Bloomberg West.”

“In that situation you need to manage your sales force a little more carefully. They were not doing that this quarter.” Walravens has a “market outperform” rating on Oracle shares.

Share Price Drop

The stock tumbled 14 percent to $25.20 at 12:55 p.m. New York time, after earlier declining as much as 15 percent for the biggest intraday drop since March 2002. Before today, Oracle was down 6.8 percent this year. The company also said it will buy back as much as $5 billion in shares.

Read more.


Oracle also won first and fifth places in Charles Babcock’s 5 Worst Cloud Washers Of 2011 article of 12/13/2011 for InformationWeek:

imageOn Wednesday, Dec. 14, cloud consulting company Appirio will host the Cloudwashies—an "awards" event similar to the Razzies awards for bad movies. I've got my own list of nominees for top cloud washers--those parties who have done the most to try to make a previous generation product look and sound like cloud.

1. Oracle Exalogic Elastic Cloud

imageAt Oracle, you couldn't find someone willing to use the word "cloud" in a complete sentence before Oracle OpenWorld in September, 2010, when Larry Ellison did an about-face and finally started using the term himself. After that, the needle on the chutzpah meter went off the scale and one Oracle product after another not only had cloud attributes but also was suddenly "the cloud." Until then, Ellison had been denouncing cloud computing--and even in the turnaround speech, he offered instruction on how to detect a cloud washer at work.

So my number one candidate for cloud washing is the Oracle Exalogic Elastic Cloud, a name that contains so many contradictions of the definition of cloud computing that it threatens to render the term meaningless. It's an old fashioned appliance that's been renamed "a cloud in a box," when we thought the cloud couldn't be put in a box. Granted, some automated administration applies, thanks to all that highly engineered integration, but where is the end user self provisioning, charging based on use, those economies of scale that are supposed to be part of the cloud environment, and the escape from lifetime licenses? What's in the box, judging by the price tag, is a whole bunch of lifetime licenses for previous generation software. This is pre-cloud middleware and applications wrapped up in a cloud bow.

2. Lawson Cloud Services

The warning flags should have been unfurled when Lawson Software senior VP Jeff Comport took to the YouTube airwaves in April, 2010, to announce that Lawson had full cloud services, unlike those vendors relying on multi-tenant architecture, such as Salesforce.com. Whenever you hear a putdown of the capabilities of Amazon.com, Microsoft, Rackspace, Terremark, or Salesforce.com, cloud washing is sure to follow.

What Comport was saying was that Lawson's financial, budgeting and other core applications had been made available in the Amazon cloud as Lawson Cloud Services. They had not been reduced to lowest common denominator application services, as Salesforce.com's were, he claimed: "In order to pursue multi-tenancy, they've had to commoditize the application, dumb it down…"

This was similar to Larry Ellison's attempt to attack Salesforce.com for offering "insecure" multi-tenant database services (even though Salesforce.com uses the Oracle database.) It is now established that the boundaries set by the machine in a multi-tenant environment are sufficiently hard to protect the privacy of data and business processes of users, as they use the same server and software.

But cloud washers, lacking other cloud features, trumpet on-premise applications made available in an Amazon data center as the real cloud thing. It doesn't work that way. If anything, it's much more difficult to architect cloud applications that take full advantage of multi-tenancy and pass along to customers the economies of scale that follow.

3. Storage Vendors Who Don't Come Clean

Storage was in many ways the first cloud service and major cloud providers made storage a part of their service portfolio at an early date. Hence, I was surprised to see two years of tests of 16 top cloud storage services offering a range of variable reliability. The two years of tests were conducted by Nasuni, a supplier of storage-as-a-service through various cloud vendors. Amazon Web Services' S3 service was ranked as most reliable.

According to Nasuni, Amazon's S3 was available 100% of the time on average each month during its two-year test period. In fact, S3 experienced 1.43 outages a month, said Nasuni, but they were of such short duration that they were unlikely to be noticed by a customer.

Likewise, three other companies were tied at 99.9% availability, Microsoft Azure, Rackspace, and Peer 1 Hosting, with Nirvanix nearly in the same league at 99.8% availability, according to Nasuni. Number six was AT&T Synaptic storage-as-a-service at 99.5%. AT&T had few outages per month "but their duration impacted its availability," the study reported.

Nasuni tested 16 services but only disclosed the results on the top six. The gap is big enough (100% to 99.5%) between the top six to make me wonder what the bottom six look like.

In other words, some storage services listing themselves as using the cloud approach have not yet mastered something that should be taken for granted in the cloud: reliability. Part of the cloud computing mystique is that cloud services are resilient. Hardware devices fail, but the service continues running without loss of data.

If we took 99.5% reliability as the minimum to be expected of a cloud storage service (that's 3.75 hours of downtime in January), what should we call the 10 vendors below the minimum? And who are they, I asked Nasuni. "We don't want to disclose that information," was the response. Well, Nasuni leaves a lot of legitimate cloud storage participants that it may not have tested looking like suspects. How about full disclosure next time, to protect the innocent?
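Babcock's 99.5% figure is easy to sanity-check with a bit of arithmetic; here's the calculation as a tiny C# snippet (31-day month assumed):

    class AvailabilityCheck
    {
        static void Main()
        {
            double hoursInMonth = 31 * 24;                   // 744 hours in January
            double downtime = (1.0 - 0.995) * hoursInMonth;  // ~3.72 hours, in line with the 3.75 quoted above
            System.Console.WriteLine("{0:F2} hours of downtime per month", downtime);
        }
    }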

4. HP Cloud System

Former HP CEO Leo Apotheker showed a certain proclivity for commissioning grand cloud initiatives, while behind the scenes, HP was busy rolling up previous generations of software into some things with cloud names. On March 15, Apotheker described an Internet service integration platform for the 100 million WebOS devices that HP would sell over the coming year. The mobile platform was an integral part of his strategic vision for an "HP Cloud."

Only a few weeks later, the first HP WebOS device, the TouchPad tablet, was launched, then soon abandoned. Other WebOS devices failed to make it to the launch pad. WebOS has been recently jettisoned as open source code donated to the community, and Apotheker moved on to new challenges, outside of HP at the board's request, as it replaced him with Meg Whitman.

His platform in the sky as a strategic vision came back down to earth as simpler and plainer cloud services, wrapped up in what's become known at various stages in 2011 as the HP Cloud System. In fact, many HP cloud operations depend on pieces of its existing system management software.

Cloud Service Automation, part of Cloud System, does include gleanings from the 2010 Stratavia acquisition, which tracks configuration changes in deployed systems, useful in launching cloud workloads. But much of Cloud Service Automation is a reassembly of predecessor products, such as HP Network Management Center, the former OpenView, and HP Performance Center. Together, they provide configuration management, monitoring, and deployment in the cloud, as they did earlier for physical assets in the data center. HP's services consultants say they want to do more Cloud Discovery Workshops for prospects. Maybe a place to begin would be HP's own marketing department, so that it learns to blur the line less between old and new.

5. Oracle Public Cloud

imageYes, Oracle deserves a second entry in my list. Oracle Public Cloud is a service that uses cloud in its name in such a way that tends to render the term meaningless. (See number 1.) The public cloud does not consist of highly engineered appliances that run vendor specific systems, on which only one vendor's software may be installed.

Anyone tried to run Red Hat JBoss or IBM WebSphere middleware on the Oracle Public Cloud recently? It's as if Mr. Ellison came across a whole warehouse of unsold Exadata and Exalogic machines and said, "What's this? I've been telling Wall Street we sold all we could build. Get rid of 'em." The result: Oracle Public Cloud, built out of Oracle appliances.

imageAs a development platform, I get it--that part makes sense--and software as a service consisting of Oracle applications and middleware. That's fine, also. Just don't call it public cloud. In real "public" cloud computing, choice of and control over what's running has passed to the end user.

See Oracle Offers Database, Java and Fusion Apps Slideware in Lieu of Public Cloud post of 12/14/2011 for my take on Oracle’s cloudwashing.


<Return to section navigation list>

Technorati Tags: Windows Azure, Windows Azure Platform, Azure Services Platform, Azure Storage Services, Azure Table Services, Azure Blob Services, Azure Drive Services, Azure Queue Services, Azure Service Broker, Azure Access Services, SQL Azure Database, Open Data Protocol, OData, Cloud Computing, Visual Studio LightSwitch, LightSwitch, Amazon Web Services, AWS, Oracle, Cloudwashing, Windows Azure Service Bus EAI and EDI, ISO 27001

1 comments:

Anonymous said...

Thank you for aggregating blog posts! helps a lot! :)