Wednesday, November 27, 2013

Links to My Cloud Computing Tips at TechTarget’s SearchCloudComputing and Other Search… Sites

I’m a regular contributor of tips and techniques articles for Windows Azure apps development and DevOps strategy to TechTarget’s (@TechTarget) SearchCloudComputing site, and an occasional contributor of articles to their SearchSQLServer and SearchCloudApplications sites. Articles require free registration to read in full.

The following table lists the topics I’ve covered to date:

Date Title
11/27/2013 Visual Studio 2013 launch brings free Azure-based VSO preview: Visual Studio Online
10/31/2013 Five things every cloud developer needs to know about Windows Azure: Best practices for Azure developers
10/5/2013 Microsoft continues to improve Windows Azure autoscaling features, catches up with Amazon Web Services
8/15/2013 Unlock Windows Azure Development in Visual Studio 2013 Preview (.NET SDK 2.1 for Windows Azure)
8/??/2013 Integrate Private, Hybrid and Public Clouds with Windows Azure Pack’s Management Portal (in editing)
7/15/2013 Pay-As-You-Go Windows Azure BizTalk Services Changes EAI and EDI (“Changes” was “Democratizes” in my original version.)†
4/30/2013 Windows Azure competes with AWS, pushes more frequent, granular updates (Windows Azure IaaS GA, 4/26/2013 updates, SDK for .NET 2.0)
4/10/2013 HDInsight Service Preview for Azure debunks Hadoop's big data analytics angle (Store Hadoop data in Azure Storage Vault instead of HDFS)
1/22/2013 Build Device-Agnostic Data-Driven Apps for the Cloud with Visual Studio LightSwitch (HTML Client Preview 2, SharePoint Online Apps and Windows Azure Web Sites)
10/31/2012 Windows Azure Mobile Services creates backends for Windows 8, iPhone
9/13/2012 Windows Azure Services allows multi-tenant IaaS cloud (Windows Azure Services for Windows Server)
8/24/2012 Windows Azure updates create full-service cloud (New “Spring Wave” features)
7/19/2012 Windows Azure Active Directory enables single sign-on with cloud apps
7/12/2012 Tips for deploying SQL Azure Federations (for SearchCloudApplications.com)
6/25/2012 Analyzing 'big data' with Microsoft [Codename] Cloud Numerics 
6/18/2012 Choosing SaaS applications for PaaS clouds* (Added-value/third-party apps)
6/18/2012 The battle for cloud services: Microsoft vs. Amazon* (Hadoop implementations)
6/11/2012 Windows Azure updates create full-service cloud (“Spring Wave” Upgrade)
5/1/2012 Big data buzz gets louder with Apache Hadoop and Hive (Hadoop on Azure)
4/5/2012 Manage, query SQL Azure Federations using T-SQL (for SearchSQLServer.com)
3/28/2012 Tips for deploying SQL Azure Federations (for SearchSQLServer.com)
3/24/2012 Examining the state of PaaS in the year of ‘big data’
1/24/2012 Microsoft cloud service lets citizen developers crunch big data (“Data Explorer”)
12/1/2011 Microsoft tests Social Analytics experimental cloud (Codename “Social Analytics”)
11/7/2011 Google, IBM, Oracle want piece of big data in the cloud
9/15/2011 Developments in the Azure and Windows Server 8 pairing (from //BUILD/)
9/8/2011 DevOps: Keep tabs on cloud-based app performance (Resources links)
8/2011 Microsoft's, Google's big data [analytics] plans give IT an edge (Resources links)
7/2011 Connecting cloud data sources with the OData API
7/2011 Sharding relational databases in the cloud
6/2011 Choosing a cloud data store for big data
4/2011 Microsoft brings rapid application development to the cloud
3/2011 How DevOps brings order to a cloud-oriented world
2/2011 Choosing from the major Platform as a Service providers
2/2011 How much are free cloud computing services worth?

* For SearchCloudApplications.techtarget.com.
† For SearchWinDevelopment.techtarget.com

Updated 4/5/2012 for the second SQL Federations article on SearchSQLServer.com.

I’ll update this table with new articles as the SearchCloudComputing editors post them.

Links to my cover stories for 1105 Media’s Visual Studio Magazine from November 2003 to the present, including My “Big Data in the Cloud” Cover Article for Visual Studio Magazine’s July 2012 Issue, are here.

Sunday, November 24, 2013

Windows Azure and Cloud Computing Posts for 11/18/2013+

Top Stories This Week:

A compendium of Windows Azure, Service Bus, BizTalk Services, Access Control, Caching, SQL Azure Database, and other cloud-computing articles.


‡ Updated 11/22/2013 with new articles marked ‡.
• Updated 11/22/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:


Windows Azure Blob, Drive, Table, Queue, HDInsight and Media Services

<Return to section navigation list>

The Windows Azure Storage Team reported Windows Azure Tables Breaking Changes (November 2013) on 11/23/2013:

In preparation for adding JSON support to Windows Azure Tables, we are pushing an update that introduces a few breaking changes for Windows Azure Tables. We strive hard to preserve backward compatibility and these changes were introduced due to dependencies we have on WCF Data Services. [Emphasis added.]

There are some changes in the WCF Data Services libraries which should not break XML parsers and HTTP readers written to standards. However, custom parsers may have taken certain dependencies on our previous formatting of the responses, and the following breaking changes might impact them. Our recommendation is to treat XML content according to the standard, as valid parsers do, and not to take a strong dependency on line breaks, whitespace, ordering of elements, etc.

Here is a list of the changes:

  • The AtomPub XML response in the new release does not have line breaks and whitespace between the XML elements; it is in a compact form, which helps reduce the amount of data transferred while staying equivalent to the XML generated prior to the service update. Standard XML parsers are not impacted by this, but customers have reported breaks in custom logic. We recommend that clients that roll their own parsers be compatible with the XML specification, which handles such changes seamlessly.
  • AtomPub XML response ordering of xml elements (title, id etc.) can change. Parsers should not take any dependency on ordering of elements.
  • A “type” placeholder has been added to the Content-Type HTTP header. For example, for a query response (not point query) the content type will have “type=feed” in addition to charset and application/atom+xml.
    • Previous version: Content-Type: application/atom+xml;charset=utf-8
    • New version:       Content-Type: application/atom+xml;type=feed;charset=utf-8
  • A new response header is returned: X-Content-Type-Options: nosniff to reduce MIME type security risks.

Please reach out to us via forums or this blog if you have any concerns.


The Windows Azure Storage Team described Windows Azure Storage Known Issues (November 2013) on 11/23/2013: 

In preparation for a major feature release (CORS, JSON, etc.), we are pushing an update to production that introduced some bugs. We were notified recently about these bugs and plan to address them in an upcoming hotfix. We will update this blog once the fixes are pushed out.

Windows Azure Blobs, Tables and Queue Shared Access Signature (SAS)

One of our customers reported an issue in which SAS requests with version 2012-02-12 failed with HTTP status code 400 (Bad Request). Upon investigation, the issue is caused by a change in how “//” gets interpreted when such a sequence of characters appears before the container name.

Example: http://myaccount.blob.core.windows.net//container/blob?sv=2012-02-12&si=sasid&sx=xxxx

Whenever it received a SAS request with version 2012-02-12 or earlier, the previous version of our service collapsed the ‘//’ into ‘/’ and hence things worked fine. However, the new service update returns 400 (Bad Request) because it interprets the above Uri as if the container name were null, which is invalid. We will be fixing our service to revert to the old behavior and collapse ‘//’ into ‘/’ for the 2012-02-12 version of SAS. In the meantime, we advise our customers to refrain from sending ‘//’ at the start of the container name portion of the URI.
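Until that fix ships, client code can defensively normalize the URI before the SAS request is sent. The following is only a sketch of that advice (not an official SDK helper); the account, container, and token values are placeholders:

// Minimal sketch: collapse an accidental "//" before the container name so a
// 2012-02-12 SAS request is not rejected with 400 (Bad Request). Requires System.
static Uri NormalizeBlobUri(string rawUri)
{
    var uri = new Uri(rawUri);
    // Rebuild the path without empty segments ("//" produces an empty segment).
    string cleanPath = "/" + string.Join("/",
        uri.AbsolutePath.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries));
    return new Uri(uri.GetLeftPart(UriPartial.Authority) + cleanPath + uri.Query);
}

// Usage:
// var safeUri = NormalizeBlobUri(
//     "http://myaccount.blob.core.windows.net//container/blob?sv=2012-02-12&si=sasid&sig=xxxx");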

Windows Azure Tables

Below are two known issues that we intend to hotfix either on the service side or as part of our client library, as noted below:

1. When clients define DataServiceContext.ResolveName and provide a type name other than <Account Name>.<Table Name>, the CUD operations will return 400 (Bad Request). This is because, as part of the new update, the ATOM “category” element’s “term” attribute must either be omitted or be equal to <Account Name>.<Table Name>. The previous version of the service used to ignore any type name being sent. We will be fixing this to again ignore what is being sent, but until then client applications will need to consider the workaround below. ResolveName is not required for Azure Tables, and client applications can remove it to ensure that OData does not send the “category” element.

Here is an example of a code snippet that would generate a request that fails on the service side:

CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
tableServiceContext.ResolveName = delegate(Type entityType)
{
    // This causes the class name to be sent as the value of "term" in the category
    // element, and the service returns Bad Request.
    return entityType.FullName;
};

SimpleEntity entity = new SimpleEntity("somePK", "someRK");
tableServiceContext.AddObject("sometable", entity);
tableServiceContext.SaveChanges();

To mitigate the issue on the client side, remove the “tableServiceContext.ResolveName” delegate assignment shown above.
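For comparison, here is the same snippet with the delegate removed; this is simply the code above minus the ResolveName assignment, so OData does not emit a mismatched “category” element and the insert succeeds:

CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
TableServiceContext tableServiceContext = cloudTableClient.GetDataServiceContext();
// No ResolveName delegate is assigned, so OData omits the "category" element
// and the service accepts the insert.

SimpleEntity entity = new SimpleEntity("somePK", "someRK");
tableServiceContext.AddObject("sometable", entity);
tableServiceContext.SaveChanges();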

We would like to thank restaurant.com for bringing this to our attention and helping us in investigating this issue.

2. The new .NET WCF Data Services library used on the server side as part of the service update rejects empty “cast” as part of the $filter query with 400 (Bad Request) whereas the older .NET framework library did not. This impacts Windows Azure Storage Client Library 2.1 since the IQueryable implementation (see this post for details) sends the cast operator in certain scenarios.

We are working on fixing the client library to match .NET’s DataServiceContext behavior which does not send the cast operator and this should be available in the next couple of weeks. In the meantime we advise our customers to consider the following workaround.

This client library issue can be avoided by ensuring you do not constrain the entity type parameter to the ITableEntity interface, but instead use the exact type that needs to be instantiated.

The current behavior is described by the following example:

static IEnumerable<T> GetEntities<T>(CloudTable table) where T : ITableEntity, new()
{
    IQueryable<T> query = table.CreateQuery<T>().Where(x => x.PartitionKey == "mypk");
    return query.ToList();
}

With the 2.1 storage client library’s IQueryable implementation, the above code will dispatch a query that looks like the Uri below, which is rejected by the new service update with 400 (Bad Request).

http://myaccount.table.core.windows.net/invalidfiltertable?$filter=cast%28%27%27%29%2FPartitionKey%20eq%20%27mypk%27&timeout=90 HTTP/1.1

As a mitigation, consider replacing the above code with the below query. In this case the cast operator will not be sent.

IQueryable<SimpleEntity> query = table.CreateQuery<SimpleEntity>().Where(x => x.PartitionKey == "mypk");
return query.ToList();

The Uri for the request looks like the following and is accepted by the service.

http://myaccount.table.core.windows.net/validfiltertable?$filter=PartitionKey%20eq%20%27mypk%27&timeout=90

We apologize for these issues and we are working on a hotfix to address them.


‡ David Hardin described AzCopy and the Azure Storage Emulator on 11/22/2013:

AzCopy is now part of the Azure SDK and can copy files to and from Azure Storage. I couldn't find any examples of it copying to the Storage Emulator, but it works. Here is an example batch file I use:

set Destination=http://127.0.0.1:10000/devstoreaccount1/
set DestinationKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

set AzCopy="%ProgramFiles(x86)%\Microsoft SDKs\Windows Azure\AzCopy\AzCopy.exe"

%AzCopy% FooFolder %Destination%FooContainer /S /BlobType:block /Y /DestKey:%DestinationKey%
%AzCopy% C:\BarFolder %Destination%BarContainer /S /BlobType:block /Y /DestKey:%DestinationKey%

The first command copies all files and subfolders from FooFolder into FooContainer; the second copies all files and subfolders from C:\BarFolder into BarContainer.
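To copy in the opposite direction (downloading from the emulator back to disk), the same source/destination pattern applies. The line below is only a sketch that assumes this AzCopy release's /SourceKey option and a hypothetical C:\FooDownload target folder:

%AzCopy% %Destination%FooContainer C:\FooDownload /S /SourceKey:%DestinationKey%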


‡ Nuno Filipe Godinho (@NunoGodinho) listed Windows Azure Storage Performance Best Practices in an 11/22/2013 post:

Windows Azure storage is a very important part of Windows Azure, and most applications leverage it for several different things, from storing files in blobs and data in tables to messages in queues. Those are all very interesting services provided by Windows Azure, but there are some performance best practices you can use in order to make your solutions even better.

To help you do this and speed up your learning process, I decided to share some of the best practices you can use.

Here is a list of those best practices:

1. Turn off Nagling and Expect100 on the ServicePointManager

By now you might be wondering what Nagling and Expect100 are. Let me help you better understand them.

1.1. Nagling

“The Nagle algorithm is used to reduce network traffic by buffering small packets of data and transmitting them as a single packet. This process is also referred to as "nagling"; it is widely used because it reduces the number of packets transmitted and lowers the overhead per packet.”

So, now that we understand the Nagle algorithm, should we turn it off?

Nagle is great for big messages, and when you care less about latency than about optimizing the protocol and what is sent over the wire. For small messages, or when you want to send something immediately, the Nagle algorithm creates overhead because it delays sending the data.

1.2. Expect100

“When this property is set to true, 100-Continue behavior is used. Client requests that use the PUT and POST methods will add an Expect header to the request if the Expect100Continue property is true and ContentLength property is greater than zero or the SendChunked property is true. The client will expect to receive a 100-Continue response from the server to indicate that the client should send the data to be posted. This mechanism allows clients to avoid sending large amounts of data over the network when the server, based on the request headers, intends to reject the request.” from MSDN

There are two ways to turn these off:

// Disables Nagling and Expect100 for all the endpoints (Table/Blob/Queue)
ServicePointManager.Expect100Continue = false;
ServicePointManager.UseNagleAlgorithm = false;

// Disables them for only the Table endpoint
var tableServicePoint = ServicePointManager.FindServicePoint(account.TableEndpoint);
tableServicePoint.UseNagleAlgorithm = false;
tableServicePoint.Expect100Continue = false;

Make sure this is done before creating the client objects, or it won’t have any effect on performance. That means setting these before you use any of the following:

    account.CreateCloudTableClient();

    account.CreateCloudQueueClient();

    account.CreateCloudBlobClient();

2. Turn off the Proxy Auto Detection

By default, proxy auto-detection is on in Windows Azure, which means that each connection takes a bit more time because the proxy still needs to be resolved for every request. For that reason it is important to turn it off.

To do that, make the following change in the web.config / app.config file of your solution (the defaultProxy element goes inside the system.net section):

<system.net>
  <defaultProxy>
    <proxy bypassonlocal="True" usesystemdefault="False" />
  </defaultProxy>
</system.net>

3. Adjust the DefaultConnectionLimit value of the ServicePointManager class

“The DefaultConnectionLimit property sets the default maximum number of concurrent connections that the ServicePointManager object assigns to the ConnectionLimit property when creating ServicePoint objects.” from MSDN

To optimize your default connection limit, you first need to understand the conditions under which the application actually runs. The best way to do this is to run performance tests with several different values and then analyze the results.

ServicePointManager.DefaultConnectionLimit = 100;

Hope this helps you the way it helped me.


<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

Carlos Figueira (@carlos_figueira) described New tables in Azure Mobile Services: string id, system properties and optimistic concurrency on 11/22/2013:

We just released an update to Azure Mobile Services in which new tables created in the service have a different layout than what we have had until now. The main change is that they now have ids of type string (instead of integers, which is what we’ve had so far), which has been a common feature request. Tables also have, by default, three new system columns, which track the date each item in the table was created or updated, and its version. With the table version the service also supports conditional GET and PATCH requests, which can be used to implement optimistic concurrency. Let’s look at each of the three changes separately.

String ids

The type of the ‘id’ column of newly created tables is now string (more precisely, nvarchar(255) in the SQL database). Not only that, the client can now specify the id in the insert (POST) operation, so that developers can define the ids for the data in their applications. This is useful in scenarios where the mobile application wants to use arbitrary data as the table identifier (for example, an e-mail address), make the id globally unique (not only for one mobile service but for all applications), or is offline for certain periods of time but still wants to cache data locally; when it goes online it can perform the inserts while maintaining the row identifier.

For example, this code used to be invalid until yesterday, but it’s perfectly valid today (if you update to the latest SDKs):

private async void Button_Click(object sender, RoutedEventArgs e)
{
    var person = new Person { Name = "John Doe", Age = 33, EMail = "john@doe.com" };
    var table = MobileService.GetTable<Person>();
    await table.InsertAsync(person);
    AddToDebug("Inserted: {0}", person.EMail); // EMail is mapped to the "id" column
}

public class Person
{
    [JsonProperty("id")]
    public string EMail { get; set; }

    [JsonProperty("name")]
    public string Name { get; set; }

    [JsonProperty("age")]
    public int Age { get; set; }
}

If an id is not specified during an insert operation, the server will create a unique one by default, so code which doesn’t really care about the row id (only that it’s unique) can still be used. And as expected, if a client tries to insert an item with an id which already exists in the table, the request will fail.
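When the app does not have a natural key such as an e-mail address and only needs a unique, client-generated identifier, a GUID string works well. Here is a minimal sketch, assuming the usual quickstart-style TodoItem class and MobileService client field:

public class TodoItem
{
    public string Id { get; set; }       // string id, new-style table
    public string Text { get; set; }
    public bool Complete { get; set; }
}

private async Task InsertWithClientGeneratedId()
{
    var item = new TodoItem
    {
        Id = Guid.NewGuid().ToString("N"),   // client-supplied id
        Text = "Buy bread",
        Complete = false
    };
    await MobileService.GetTable<TodoItem>().InsertAsync(item);
    // Inserting a second item with the same Id would fail, since ids must be unique.
}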

Additional table columns (system properties)

In addition to the change in the type of the table id column, each new table created in a mobile service will have three new columns:

  • __createdAt (date) – set when the item is inserted into the table
  • __updatedAt (date) – set anytime there is an update in the item
  • __version (timestamp) – a unique value which is updated any time there is a change to the item

The first two columns just make it easier to track some properties of the item, and many people used custom server-side scripts to achieve it. Now it’s done by default. The third one is actually used to implement optimistic concurrency support (conditional GET and PATCH) for the table, and I’ll talk about it in the next section.

Since those columns provide additional information which may not be necessary in many scenarios, the Mobile Services runtime will not return them to the client unless it explicitly asks for them. So the only change in the client code necessary to use the new style of tables is really to use string as the type of the id property. Here’s an example. If I insert an item into my table using a “normal” request:

POST https://myservice.azure-mobile.net/tables/todoitem HTTP/1.1
User-Agent: Fiddler
Content-Type: application/json
Host: myservice.azure-mobile.net
Content-Length: 37
x-zumo-application: my-app-key

{"text":"Buy bread","complete":false}

This is the response we’ll get (some headers omitted for brevity):

HTTP/1.1 201 Created
Cache-Control: no-cache
Content-Length: 81
Content-Type: application/json
Location: https://myservice.azure-mobile.net/tables/todoitem/51FF4269-9599-431D-B0C4-9232E0B6C4A2
Server: Microsoft-IIS/8.0
Date: Fri, 22 Nov 2013 22:39:16 GMT
Connection: close

{"text":"Buy bread","complete":false,"id":"51FF4269-9599-431D-B0C4-9232E0B6C4A2”}

No mention of the system properties. But if we go to the portal we’ll be able to see that the data was correctly added.

[Screenshot: the new system properties shown in the portal]

If you want to retrieve the properties, you’ll need to request those explicitly, by using the ‘__systemProperties’ query string argument. You can ask for specific properties or use ‘__systemProperties=*’ for retrieving all system properties in the response. Again, if we use the same request but with the additional query string parameter:

POST https://myservice.azure-mobile.net/tables/todoitem?__systemProperties=createdAt HTTP/1.1
User-Agent: Fiddler
Content-Type: application/json
Host: myservice.azure-mobile.net
Content-Length: 37
x-zumo-application: my-app-key

{"text":"Buy bread","complete":false}

Then the response will now contain that property:

HTTP/1.1 201 Created
Cache-Control: no-cache
Content-Length: 122
Content-Type: application/json
Location: https://myservice.azure-mobile.net/tables/todoitem/36BF3CC5-E4E9-4C31-8E64-EE87E9BFF4CA
Server: Microsoft-IIS/8.0
Date: Fri, 22 Nov 2013 22:47:50 GMT

{"text":"Buy bread","complete":false,"id":"36BF3CC5-E4E9-4C31-8E64-EE87E9BFF4CA","__createdAt":"2013-11-22T22:47:51.819Z"}

You can also request the system properties in the server scripts themselves, by passing a ‘systemProperties’ parameter to the ‘execute’ method of the request object. In the code below, all insert operations will now return the ‘__createdAt’ column in their responses, regardless of whether the client requested it.

function insert(item, user, request) {
    request.execute({
        systemProperties: ['__createdAt']
    });
}

Another aspect of the system columns is that they cannot be sent by the client. For new tables (i.e., those with string ids), if an insert or update request contains a property which starts with ‘__’ (two underscore characters), the request will be rejected. The ‘__createdAt’ property can, however, be set in the server script (although if you really don’t want that column to represent the creation time of the object, you may want to use another column for that); the code below shows one way this (rather bizarre) scenario can be accomplished. If you try to update the ‘__updatedAt’ property, it won’t fail, but by default that column is updated by a SQL trigger, so any updates you make to it will be overridden anyway. The ‘__version’ column uses a read-only type in the SQL database (timestamp), so it cannot be set directly.

function insert(item, user, request) {
    request.execute({
        systemProperties: ['__createdAt'],
        success: function () {
            var created = item.__createdAt;
            // Set the created date to one day in the future
            created.setDate(created.getDate() + 1);
            item.__createdAt = created;
            tables.current.update(item, {
                // the properties can also be specified without the '__' prefix
                systemProperties: ['createdAt'],
                success: function () {
                    request.respond();
                }
            });
        }
    });
}

Finally, although those columns are added by default and have some behavior associated with them, they can be removed from any table which you don’t want. As you can see in the screenshot of the portal below, the delete button is still enabled for those columns (the only one which cannot be deleted is the ‘id’).

[Screenshot: deleting a system column in the portal]

Conditional retrieval / updates (optimistic concurrency)

Another feature we added in the new style tables is the ability to perform conditional retrieval or updates. That is very useful in the case where multiple clients are accessing the same data, and we want to make sure that write conflicts are handled properly. The MSDN tutorial Handling Database Write Conflicts gives a very detailed, step-by-step description of how to enable this scenario (currently only the managed client has full support for optimistic concurrency and system properties; support for the other platforms is coming soon). I’ll talk here about the behind-the-scenes of how this is implemented by the runtime.

The concept of conditional retrieval is this: if you have the same version of the item which is stored in the server, you can save a few bytes of network traffic (and time) by having the server reply with “you already have the latest version, I don’t need to send it again to you”. Likewise, conditional updates work by the client sending an update (PATCH) request to the server with a precondition that the server should only update the item if the client version matches the version of the item in the server.

The implementation of conditional retrieval / updates is based on the version of the item, from the system column ‘__version’. That version is mapped in the HTTP layer to the ETag header responses, so that when the client receives a response for which it asked for that system property, the value will be lifted to the HTTP response header:

GET /tables/todoitem/2F6025E7-0538-47B2-BD9F-186923F96E0F?__systemProperties=version HTTP/1.1
User-Agent: Fiddler
Content-Type: application/json
Host: myservice.azure-mobile.net
Content-Length: 0
x-zumo-application: my-app-key

The response body will contain the ‘__version’ property, and that value will be reflected in the HTTP header as well:

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Length: 108
Content-Type: application/json
ETag: "AAAAAAAACBE="
Server: Microsoft-IIS/8.0
Date: Fri, 22 Nov 2013 23:44:48 GMT

{"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","__version":"AAAAAAAACBE=","text":"Buy bread","complete":false}

Now, if we later want to re-retrieve that record (for example, to check whether it changed), we can make a conditional GET request to the server by using the If-None-Match HTTP header:

GET /tables/todoitem/2F6025E7-0538-47B2-BD9F-186923F96E0F?__systemProperties=version HTTP/1.1
User-Agent: Fiddler
Content-Type: application/json
Host: myservice.azure-mobile.net
If-None-Match: "AAAAAAAACBE="
Content-Length: 0
x-zumo-application: my-app-key

And, if the record had not been modified in the server, this is what the client would get:

HTTP/1.1 304 Not Modified
Cache-Control: no-cache
Content-Type: application/json
Server: Microsoft-IIS/8.0
Date: Fri, 22 Nov 2013 23:48:24 GMT

If, however, the record had been updated, the response will contain the updated record and the new version (ETag) for the item.

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Length: 107
Content-Type: application/json
ETag: "AAAAAAAACBM="
Server: Microsoft-IIS/8.0
Date: Fri, 22 Nov 2013 23:52:01 GMT

{"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","__version":"AAAAAAAACBM=","text":"Buy bread","complete":true}

Conditional updates are similar. Let’s say the user wanted to update the record shown above but only if nobody else had updated it. So they’ll use the If-Match header to specify the precondition for the update to succeed:

PATCH /tables/todoitem/2F6025E7-0538-47B2-BD9F-186923F96E0F?__systemProperties=version HTTP/1.1
User-Agent: Fiddler
Content-Type: application/json
Host: myservice.azure-mobile.net
If-Match: "AAAAAAAACBM="
Content-Length: 71
x-zumo-application: my-app-key

{"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","text":"buy French bread"}

And assuming that it was indeed the correct version, the update would succeed, and change the item version:

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Length: 98
Content-Type: application/json
ETag: "AAAAAAAACBU="
Server: Microsoft-IIS/8.0
Date: Fri, 22 Nov 2013 23:57:47 GMT

{"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","text":"buy French bread","__version":"AAAAAAAACBU="}

If another client which had the old version tried to update the item:

PATCH /tables/todoitem/2F6025E7-0538-47B2-BD9F-186923F96E0F?__systemProperties=version HTTP/1.1
User-Agent: Fiddler
Content-Type: application/json
Host: ogfiostestapp.azure-mobile.net
If-Match: "AAAAAAAACBM="
Content-Length: 72
x-zumo-application: wSdTNpzgPedSWmZeuBxXMslqNHYVZk52

{"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","text":"buy two baguettes"}

The server would reject the request (and return to the client the actual version of the item on the server):

HTTP/1.1 412 Precondition Failed
Cache-Control: no-cache
Content-Length: 114
Content-Type: application/json
ETag: "AAAAAAAACBU="
Server: Microsoft-IIS/8.0
Date: Sat, 23 Nov 2013 00:19:30 GMT

{"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","__version":"AAAAAAAACBU=","text":"buy French bread","complete":true}

That’s how conditional retrieval and updates are implemented in the runtime. In most cases you don’t really need to worry about those details – as can be seen in the tutorial on MSDN, the code doesn’t need to deal with any of the HTTP primitives, and the translation is done by the SDK.
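On the managed client, the 412 response above surfaces as an exception rather than raw HTTP. The sketch below follows the pattern in the MSDN conflict-handling tutorial mentioned earlier; treat the exact exception type and the __version-mapped Version property on TodoItem as assumptions rather than an excerpt from the SDK docs:

private async Task UpdateWithConflictHandling(TodoItem item)
{
    var todoTable = MobileService.GetTable<TodoItem>();
    try
    {
        await todoTable.UpdateAsync(item);
    }
    catch (MobileServicePreconditionFailedException<TodoItem> conflict)
    {
        // The server returned 412 Precondition Failed because our __version was stale.
        // conflict.Item carries the server's current copy, including its new version.
        item.Version = conflict.Item.Version;   // assumed __version-mapped property
        await todoTable.UpdateAsync(item);      // naive "client wins" retry
    }
}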

Creating “old-style” tables

OK, those are great features, but suppose you really don’t want to change anything in your code: you still want to use integer ids, and you need to create a new table that uses them. That cannot be done via the Windows Azure portal, but you can still do it via the Cross-platform Command Line Interface, with the “--integerId” modifier on the “azure mobile table create” command:

azure mobile table create --integerId [servicename] [tablename]

And that will create an “old-style” table, with the integer id and none of the system properties.
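For example, with hypothetical service and table names:

azure mobile table create --integerId contosomobile olditems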

Next up: clients support for the new features

In this post I talked about the changes in the Mobile Services runtime (and in its HTTP interface) with the new table style. In the next post I’ll talk about the client SDK support for them – both system properties and optimistic concurrency. And as usual, please don’t hesitate to send feedback via comments or our forums about these features.


‡ Hasan Khan (@AzureMobile) posted Welcome to the Windows Azure Mobile Services team blog! on 11/19/2013:

Since the Mobile Services preview in late August of last year, we have greatly appreciated the overwhelming response from and interaction with the developer community. Many team members have benefited from your feedback on their individual blogs, so we decided to extend it to a team blog. Here, we hope to give you quick updates on new features that we ship and solicit your feedback.

This blog is complementary to existing discussion sites and venues such as:

We'll also post updates on Twitter at @AzureMobile. Feel free to leave your comments and suggestions on Twitter and this blog. We're looking forward to your valuable feedback.

Azure Mobile Services Team



<Return to section navigation list>

Windows Azure Marketplace DataMarket, Cloud Numerics, Big Data and OData

Glenn Gailey (@ggailey777) asserted Time to get the WCF Data Services 5.6.0 RTM Tools in an 11/18/2013 post:

Folks who have recently tried to download and install the OData Client Tools for Windows Store apps (the one that I have been using in all my apps, samples, and blog posts to date) have been blocked by a certificate issue that prevents the installation of this client and tools .msi. While a temporary workaround (hack, really) is to simply set the system clock back by a few weeks to trick the installer, this probably means that it’s time to start using the latest version of WCF Data Services tooling.

Besides actually installing, the WCF Data Services 5.6.0 Tools provide the following benefits:

  • Support for Visual Studio 2013 and Windows Store apps in Windows 8.1.
  • Portable libraries, which enable you to code both Windows Store and Windows Phone apps
  • New JSON format

You can find a very detailed description of the goodness in this version at the WCF Data Services team blog.

Get it from: http://www.microsoft.com/en-us/download/details.aspx?id=39373

Why you need the tools and not just the NuGet Packages

Besides the 5.6.0 tools, you can also use NuGet.org to search for, download, and install the latest versions of WCF Data Services (client and server). This will get you the latest runtime libraries, but not the Visual Studio tooling integration. Even if you plan to move to a subsequent version of WCF Data Services via NuGet, make sure to first install these tools—with them you get these key components to the Visual Studio development experience:

  • Add Service Reference

Without the tools, the Add Service Reference tool in Visual Studio will use the older .NET Framework version of the client tools, which means that you get the older assemblies and not the latest NuGet packages with all the goodness mentioned above.

  • WCF Data Services Item Template

Again, you will get the older assemblies when you create a new WCF Data Service by using the VS templates. With this tools update, you will get the latest NuGet packages instead.

Once you get the 5.6.0 tools installed on your dev machine, you can go back to using the goodness of NuGet to support WCF Data Services in your apps.

My one issue with the new OData portable client library

The coolness of a portable client library for OData is that both Windows Store and Windows Phone apps use the same set of client APIs, which live in a single namespace—making it easier to write common code that will run on either platform. However, for folks who have gotten used to the goodness of using DataServiceState to serialize and deserialize DataServiceContext and DataServiceCollection<T> objects, there’s some not great news. The DataServiceState object didn’t make it into the portable version of the OData client. Because I’m not volunteering to write a custom serializer to replace DataServiceState (I heard it was a rather tricky job), I’ll probably have to stick with the Windows Phone-specific library for the time being. I created a request to add DataServiceState to the portable client (I think that being able to cache relatively stable data between executions is a benefit, even on a Win8 device) on the feature suggestions site (please vote if you agree): Add state serialization support via DataServiceState to the new portable client.

WCF Data Services and Entity Framework 6.0

The Entity Framework folks have apparently made some significant changes to things in Entity Framework 6.0, so significant, in fact, that they break WCF Data Services. There are a few options to work around this.

For details about all of this, see Using WCF Data Services 5.6.0 with Entity Framework 6+.

I actually came across this EF6 issue when I was trying to re-add a bunch of NuGet packages to a WCF Data Services project. When you search for NuGet packages to add to your project, the Manage NuGet Packages wizard will only install the latest version (pre-release or stable) of a given package. I found that to install an older version of a package, I had to use the Package Manager Console instead. One tip is to watch out for the Package Source setting in the console, as this may be filtering out the version you are looking for.
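For example, pinning a project to a specific earlier package version from the Package Manager Console looks like the line below; the package id and version shown are only illustrative, so substitute whatever your project actually needs:

Install-Package Microsoft.Data.Services.Client -Version 5.6.0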

No significant articles so far this week.

<Return to section navigation list>

Windows Azure Service Bus, BizTalk Services and Workflow

‡ Morten la Cour (@mortenlcd) described Installing Windows Azure BizTalk Services in an 11/23/2013 post to the Vertica.dk blog:

Today, Windows Azure BizTalk Services (WABS) was released for General Availability (GA). In this blog post I will show how to set up a Windows Azure BizTalk Service in Windows Azure after the GA. Several things have changed from the Preview bits, and the setup experience has changed quite a bit: you no longer need to create your own certificate or an Access Control namespace.

To follow this blog post, you will need a Windows Azure account. If you do not already have this go to this site for further details:

https://account.windowsazure.com

Setup a new BizTalk Service in Azure

Before setting up WABS, we need to consider the following:

  • A SQL Azure server and a database will be required for tracking. Should we use an existing one or let the setup wizard create a new one?
  • An Azure Storage account will also be required. (It can also be created from within the setup wizard.)

It is recommended that the BizTalk Service, the SQL server, and the storage account all reside in the same region.

Setting up WABS

Now let’s start setting up a new BizTalk Service in Azure.

  1. In the Windows Azure Portal, choose + NEW | APP SERVICES | BIZTALK SERVICE | CUSTOM CREATE. Choose a BIZTALK SERVICE NAME (making sure that the name is unique by verifying that a green check mark appears), choose an appropriate EDITION and REGION, and select either an existing Azure SQL database or have the wizard create one for you.

[Screenshot: Create BizTalk Service wizard, page 1]

The edition choice will have an impact on how much you are charged for WABS. See the pricing model for further details:

http://www.windowsazure.com/en-us/pricing/details/biztalk-services/

  2. Click Next (right arrow).
  3. Specify your SQL credentials and the name of the new database if needed, then click Next.

[Screenshot: Create BizTalk Service wizard, page 2]

  4. Choose a Storage Account or have the wizard create a new one.
  5. If you choose to create a new Storage Account, a name will also be required.
  6. Click Complete.

The BizTalk Service will take approx. 10 minutes to be created.

Once created, we need to fetch some information about our newly created WABS.

Retrieve the WABS certificate

First we need to download a public version of the certificate used for HTTPS communication with our BizTalk Service.

  1. In the Windows Azure Portal, select BizTalk Services, and select your new BizTalk Service.
  2. A Dashboard should now appear:

[Screenshot: the WABS dashboard]

Note: A new Access Control namespace has been created automatically together with our WABS.

  3. Select Download SSL Certificate, and save the .cer file for later usage. (This file will be needed once we start deploying and submitting to our BizTalk Service.)

Register the Portal

Now all we need to do is to register our new BizTalk Service.

  1. Select CONNECTION INFORMATION at the bottom of the Windows Azure Portal, and copy the three values to notepad for later usage:

[Screenshot: WABS connection information]

  2. Select MANAGE.
  3. A Register New Deployment form should appear.
  4. Fill in your BizTalk Service Name, the ACS Issuer name (owner), and the ACS Issuer secret (the DEFAULT KEY fetched from CONNECTION INFORMATION earlier).

[Screenshot: registering the deployment in the WABS portal]

Click REGISTER and the WABS Portal should now appear.

[Screenshot: the WABS portal]

Note: In my screenshot, I have deployed a BRIDGE, so I have a 1 where you will probably have a 0. In the next blog we will look at how to set up the development environment for WABS, and later on how to deploy artifacts (such as BRIDGES), so stay tuned…


• Brian Benz (@bbenz) reported AMQP 1.0 is one step closer to being recognized as an ISO/IEC International Standard in an 11/21/2013 post to the Interoperability @ Microsoft blog by Ram Jeyaraman, Senior Standards Professional, Microsoft Open Technologies, Inc. and co-Chair of the OASIS AMQP Technical Committee:

Microsoft Open Technologies is excited to share the news from OASIS that the formal approval process is now underway to transform the AMQP 1.0 OASIS Standard to an ISO/IEC International Standard.

The Advanced Message Queuing Protocol (AMQP) specification enables interoperability between compliant clients and brokers. With AMQP, applications can achieve full-fidelity message exchange between components built using different languages and frameworks and running on different operating systems. As an inherently efficient application layer binary protocol, AMQP enables new possibilities in messaging that scale from the device to the cloud.

Submission for approval as an ISO/IEC International Standard builds on AMQP’s successes over the last 12 months, including AMQP 1.0 approval as an OASIS Standard in October 2012 and the ongoing development of extensions that greatly enhance the AMQP ecosystem.

The ISO/IEC JTC 1 international standardization process is iterative, and consensus-driven. Its goal is to deliver a technically complete standard that can be broadly adopted by nations around the world.

Throughout the remainder of this process, which may take close to a year, the MS Open Tech standards team will continue to represent Microsoft and work with OASIS to advance the specification.

You can learn more about AMQP and get an understanding of AMQP’s business value here. You can also find a list of related vendor-supported products, open source projects, and details regarding customer usage and success on the AMQP website: http://www.amqp.org/about/examples.

If you’re a developer getting started with AMQP, we recommend that you read this overview. For even more detail and guidance, here’s a Service Bus AMQP Developer's Guide, which will help you get started with AMQP for the Windows Azure Service Bus using .NET, Java, PHP, or Python. Also, have a look at this recent blog post from Scott Guthrie, called Walkthrough of How to Build a Pub/Sub Solution using AMQP.
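As a quick orientation (a minimal sketch, not an excerpt from the guide), the .NET Service Bus client switches to AMQP 1.0 when TransportType=Amqp is added to the connection string; the namespace, key, and queue name below are placeholders:

// Requires the WindowsAzure.ServiceBus NuGet package (Microsoft.ServiceBus.Messaging).
string connectionString =
    "Endpoint=sb://contoso-ns.servicebus.windows.net/;" +
    "SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>;" +
    "TransportType=Amqp";

var factory = MessagingFactory.CreateFromConnectionString(connectionString);
var sender = factory.CreateMessageSender("myqueue");
sender.Send(new BrokeredMessage("Hello over AMQP 1.0"));
factory.Close();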

Whether you’re a novice user or an active contributor to the community, we’d like to hear from you! Let us know how your experience with AMQP has been so far by leaving comments here. As well, we invite you to connect with the community and join the conversation on LinkedIn, Twitter, and Stack Overflow.


• Nick Harris (@cloudnick) and ChrisRisner (@chrisrisner) produced Cloud Cover Episode 120: Service Agility with the Service Gateway for Channel9  on 11/21/2013:

In this episode Nick Harris and Chris Risner are joined by James Baker, Principal SDE on the Windows Azure Technical Evangelism team. James goes over the Service Gateway project. The Service Gateway provides an architectural component that businesses can use for composition of disparate web assets. Using the gateway, an IT pro can control the configuration of:

  • Roles
  • AuthN/AuthZ
  • A/B Testing
  • Tracing

You can read more about the Service Gateway and access the source code for it here.


Damir Dobric (@ddobric) described Windows Azure Service Bus Connection Quotas in an 11/18/2013 post to the Microsoft MVP Award Program blog:

The Service Bus quotas table contains, in fact, everything you want to know about quotas. But if you don’t know how to deal with TCP connections to Service Bus, and if you don’t know the meaning of “connection link”, the table will not help much.

The “number of subsequent requests for additional connections” quota is in fact the number of “connection links” that can be established to a single messaging entity (queue or topic).

In other words, if you have one queue or topic, you can create at most 100 MessageSenders/QueueClients/TopicClients that send messages to it. This Service Bus quota is independent of the number of Messaging Factories used behind the clients. If you are now asking yourself why the Messaging Factory is important at all, given that the quota is limited by the number of “connection links” (clients), you are right: there is no correlation between the Messaging Factory and the quota of 100 connections.
Remember, the quota is limited by the number of “connection links”. The Messaging Factory helps you increase throughput, but not the number of concurrent connections.
The following picture illustrates this:

The picture above shows the maximum of 100 clients (senders, receivers, queue clients, topic clients) created on top of two Messaging Factories. Two clients use one Messaging Factory and 98 clients use the other Messaging Factory.

Altogether, there are two TCP connections and 100 “connection links” shared across two connections.
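A rough code sketch of what the picture shows (the connection string and queue name are placeholders): each MessagingFactory owns one TCP connection, and every sender created from it counts as one more “connection link” against the queue's 100-link quota.

// Requires Microsoft.ServiceBus.Messaging and System.Collections.Generic.
// Two factories => two TCP connections to the namespace.
var factory1 = MessagingFactory.CreateFromConnectionString(connectionString);
var factory2 = MessagingFactory.CreateFromConnectionString(connectionString);

// Each sender is one "connection link" to the queue; the per-entity quota of 100
// is counted across both factories.
var senderA = factory1.CreateMessageSender("myqueue");
var senderB = factory1.CreateMessageSender("myqueue");

var moreSenders = new List<MessageSender>();
for (int i = 0; i < 98; i++)
{
    moreSenders.Add(factory2.CreateMessageSender("myqueue"));
}
// 2 + 98 = 100 links in total: the limit from the quota table.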

How to handle a huge number of devices?

Probably the most important question in this context is how to handle a huge number of devices (clients) if the per-entity limit is that low (remember, 100). To be able to support, for example, 100,000 devices, you will have to create 1,000 messaging entities, assuming that each device creates one “connection link” through one physical connection.


That means that if you want to send messages from 100,000 devices, you need 1,000 queues or topics to receive and aggregate the messages.
The quota table shown above also defines the limit on “concurrent receive requests”, which is currently 5,000. This means you can create a maximum of 100 receivers (QueueClients or SubscriptionClients) per entity and have up to 5,000 concurrent receive requests shared across those 100 receivers. For example, you could create 100 receivers and concurrently call Receive() in 50 threads each, or you could create one receiver and concurrently call Receive() 5,000 times.

But again, if devices have to receive messages from a queue, then only 100 devices can be concurrently connected to that queue.

If each device has its own subscription, then you will probably not have a quota issue on the topic subscription, because one subscription usually serves one device. But if all devices are concurrently receiving messages, there is a limit of 5,000 on the topic level (across all subscriptions). Another quota can also be important here: the number of subscriptions per topic is limited to 2,000.

If your goal is, for example, to use fewer queues, then HTTP/REST might be a better solution than SBMP or AMQP. If the send operations are not executed frequently (not very chatty), then you can use HTTP/REST. In this case the number of concurrent “connection links” statistically decreases, because HTTP does not rely on a permanent connection.

How about Windows RT and Windows Phone?

Please also note that Windows 8 messaging is implemented in the WindowsAzure.Messaging assembly, which uses HTTP/REST as the protocol. This is because RT devices are mobile devices, which are typically limited to HTTP on port 80. In this case Windows 8 will not establish a permanent connection to Service Bus as described above, but it will activate HTTP polling if you use the message-pump pattern (OnMessage is used instead of invoking ReceiveAsync on demand). That means permanent connections to Service Bus will not be created, but the network pressure remains due to the polling process, whose timeout is currently set to 2 minutes: Windows RT sends a receive request to Service Bus and waits up to two minutes to receive a message. If a message is not received within the timeout period, the request times out and a new request is sent. By using this pattern the Windows RT device is permanently in receive mode.

In an enterprise it can happen that many devices are polling for messages. If this becomes a problem because of a huge number of devices on a specific network segment, you can use dedicated ReceiveAsync() calls instead of OnMessage. The ReceiveAsync() operation connects on demand and, after receiving the response, simply closes the connection. In this way you can dramatically reduce the number of connections.
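The two receive styles discussed above look roughly like this in code (a sketch only; the queue name, timeout, and ProcessMessage helper are placeholders). OnMessage keeps a long-polling receive request outstanding, while an on-demand ReceiveAsync call waits briefly and then releases the connection:

// Sketch only: requires Microsoft.ServiceBus.Messaging; ProcessMessage is a placeholder.

// Style 1: message pump - the client keeps a receive request pending (long polling).
static void StartMessagePump(QueueClient client)
{
    var options = new OnMessageOptions { AutoComplete = false };
    client.OnMessage(message =>
    {
        ProcessMessage(message);   // placeholder for application logic
        message.Complete();
    }, options);
}

// Style 2: on-demand receive - connect, wait up to 10 seconds, then release.
static async Task ReceiveOnDemandAsync(QueueClient client)
{
    BrokeredMessage message = await client.ReceiveAsync(TimeSpan.FromSeconds(10));
    if (message != null)
    {
        ProcessMessage(message);   // placeholder for application logic
        message.Complete();
    }
}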



<Return to section navigation list>

Windows Azure Access Control, Active Directory, Identity and Workflow

The Microsoft Download Center made Forefront Identity Manager Connector for Windows Azure Active Directory available for download on 11/21/2013:

Overview

Forefront Identity Manager Connector for Windows Azure Active Directory helps you synchronize identity information to Azure Active Directory.

System Requirements
  • Supported operating systems: Windows Server 2008; Windows Server 2008 R2; Windows Server 2012

Minimum Requirements

  • FIM Synchronization Service (FIM2010 R2 hotfix 4.1.3496.0, or later)
  • Microsoft .NET 4.0 Framework
  • Microsoft Online Services Sign-In Assistant

The Windows Azure Active Directory Connector for FIM 2010 R2 Technical Reference provides additional details:

The objective of this document is to provide you with the reference information that is required to deploy the Windows Azure Active Directory (AAD) connector for Microsoft® Forefront® Identity Manager (FIM) 2010 R2.

Overview of the AAD Connector

The AAD connector enables you to connect to one or multiple AAD directories from FIM2010. AAD is the infrastructure backend for Office 365 and other cloud services from Microsoft.

The connector is available as a download from the Microsoft Download Center.

From a high level perspective, the following features are supported by the current release of the connector:

  • FIM version: FIM 2010 R2 hotfix 4.1.3493.0 or later (2906832)
  • Connect to data source: Windows Azure Active Directory
  • Scenario: Object Lifecycle Management; Group Management. (Note: The Password Hash Sync feature available in DirSync is not supported with FIM 2010 and the AAD Connector.)
  • Operations: The following operations are supported: Full import, Delta import, Export. (Note: This connector does not support any password management scenarios.)
  • Schema: The schema is fixed in the AAD connector and it is not possible to add additional objects and attributes.

Connected Data Source Requirements

In order to manage objects using a connector, you need to make sure that all requirements of the connected data source are fulfilled. This includes tasks such as opening the required network ports and granting the necessary permissions. The objective of this section is to provide an overview of the requirements of a connected data source to perform the desired operations.

Connected Data Source Permissions

When you configure the connector, in the Connectivity section, you need to provide the credentials of an account that is a Global Administrator of the AAD tenant you wish to synchronize with. This account can be either a managed (i.e. username/password) or federated identity.

Important: When you change the password associated with this AAD administrator account, you must also update the AAD connector in FIM 2010 to provide the new password.

Ports and Protocols

The AAD Connector communicates with AAD using web services. For additional information which addresses are used by AAD and Office 365, please refer to Office 365 URLs and IP address ranges. …

The Technical Reference continues with detailed deployment instructions.


Steven Martin (@stevemar_msft) posted Announcing the General Availability of BizTalk Services, Azure Active Directory and Traffic Manager, and Preview of Azure Active Directory Premium to the Windows Azure blog on 11/21/2013:

In addition to economic benefits, cloud computing provides greater agility in application development, which translates into competitive advantage. We are delighted to see over 1,000 new Windows Azure subscriptions created every day and even more excited to see that half of our customers are using higher-value services to build new modern business applications. Today, we are excited to announce general availability and preview of several services that help developers better integrate applications, manage identity and enhance load balancing.

Windows Azure Active Directory

We are thrilled to announce the general availability of the free offering of Windows Azure Active Directory.

As applications get increasingly cloud based, IT administrators are being challenged to implement single sign-on (SSO) against SaaS applications and ensure secure access. With Windows Azure Active Directory, it is easy to manage user access to hundreds of cloud SaaS applications like Office 365, Box, GoToMeeting, DropBox, Salesforce.com and others. With these free application access enhancements you can:

  • Seamlessly enable single sign-on to many popular pre-integrated cloud apps for your users using the new application gallery on the Windows Azure portal
  • Provision (and de-provision) your users' identities into selected featured SaaS apps
  • Record unusual access patterns to your cloud-based applications via predefined security reports
  • Assign cloud-based applications to your users so they can launch them from a single web page, the Access Panel

These features are available at no cost to all Windows Azure subscribers. If you are already using application access enhancements in preview, you do not have to take any action. You will be automatically transitioned to the generally available service.

Windows Azure Active Directory Premium Offering

The Windows Azure Active Directory Premium offering is now available in public preview. Built on top of the free offering of Windows Azure Active Directory, it provides a robust set of capabilities to empower enterprises with more demanding needs on identity and access management.

In its first milestone, the premium offer enables group-based provisioning and access management to SaaS applications, customized access panel, and detailed machine learning-based security reports. Additionally, end-users can perform self-service password resets for cloud applications.

Windows Azure Active Directory Premium is free during the public preview period and will add additional cloud focused identity and access management capabilities in the future.

At the end of the free preview, the Premium offering will be converted to a paid service, with details on pricing being communicated at least 30 days prior to the end of the free period.

We encourage you to sign up for Windows Azure Active Directory Premium public preview today.

Windows Azure BizTalk Services

Windows Azure BizTalk Services is now generally available. While customers continue to invest in cloud based applications, they need a scalable and reliable solution for extending their on-premises applications to the cloud. This cloud based integration service enables powerful business scenarios like supply chain, cloud-based electronic data interchange (EDI) and enterprise application integration (EAI), all with a familiar toolset and enterprise grade reliability.

If you are already using BizTalk Services in preview, you will be transitioned automatically to the generally available service and new pricing takes effect on January 1, 2014.

To learn more about the services and new pricing, visit the BizTalk Services website.

Windows Azure Traffic Manager

Windows Azure Traffic Manager service is now generally available. Leveraging the load balancing capabilities of this service, you can now create highly responsive and highly available production grade applications. Many customers like AccuWeather are already using Traffic Manager to boost performance and availability of their applications. We are also delighted to announce that Traffic Manager now carries a service level agreement of 99.99% and is supported through your existing Windows Azure support plan. We are also announcing new pricing for Traffic Manager. Free promotional pricing will remain in effect until December 31, 2013.

If you already are using Traffic Manager in preview, you do not have to take any action. You will be transitioned automatically to the generally available service, and the new pricing will take effect on January 1, 2014.

For more information on using Traffic Manager, please visit the Traffic Manager website.

For further details on these services and other enhancements, visit Scott Guthrie's blog.


Scott Guthrie (@scottgu) added details in his Windows Azure: General Availability Release of BizTalk Services, Traffic Manager, Azure AD App Access + Xamarin support for Mobile Services post of 11/21/2013:

This morning we released another major set of enhancements to Windows Azure. Today’s new capabilities include:

  • BizTalk Services: General Availability Release!
  • Traffic Manager: General Availability Release!
  • Active Directory: General Availability Release of Application Access Support!
  • Mobile Services: Active Directory Support, Xamarin support for iOS and Android with C#, Optimistic concurrency
  • Notification Hubs: Price Reduction + Debug Send Support
  • Web Sites: Diagnostics Support for Automatic Logging to Blob Storage
  • Storage: Support for alerting based on storage metrics
  • Monitoring: Preview release of Windows Azure Monitoring Service Library

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

BizTalk Services: General Availability Release

I’m excited to announce the general availability release of Windows Azure BizTalk Services.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

Windows Azure BizTalk Services enables powerful business scenarios like supply chain and cloud-based electronic data interchange and enterprise application integration, all with a familiar toolset and enterprise grade reliability.  It provides built-in support for managing EDI relationships between partners, as well as setting up EAI bridges with on-premises assets – including built-in support for integrating with on-premises SAP, SQL Server, Oracle and Siebel systems.  You can also optionally integrate Windows Azure BizTalk Services with on-premises BizTalk Server deployments – enabling powerful hybrid enterprise solutions. 

Creating a BizTalk Service

Creating a new BizTalk Service is easy – simply choose New->App Services->BizTalk Service to create a new BizTalk Service instance:

image

Windows Azure will then provision a new high-availability BizTalk instance for you to use:

image

Each BizTalk Service instance runs in a dedicated per-tenant environment. Once provisioned, you can use it to better integrate your business with your supply chain, enable EDI interactions with partners, and extend your on-premises systems to the cloud to facilitate EAI integration.

Changes between Preview and GA

The team has been working extremely hard in preparing Windows Azure BizTalk Services for General Availability.  In addition to finalizing the quality, we also made a number of feature improvements to address customer feedback during the preview.  These improvements include:

  • B2B and EDI capabilities are now available even in the Basic and Standard tiers (in the preview they were only in the Premium tier)
  • Significantly simplified provisioning process – ACS namespace and self-signed certificates are now automatically created for you
  • Support for worldwide deployment in Windows Azure regions
  • Multiple authentication IDs & multiple deployments are now supported in the BizTalk portal.
  • Backup and restore are now supported to enable business continuity

If you are already using BizTalk Services in preview, you will be transitioned automatically to the GA service and new pricing will take effect on January 1, 2014.

Getting Started

Read this article to get started with provisioning your first BizTalk Service.  BizTalk Services supports a Developer Tier that enables you to do full development and testing of your EDI and EAI workloads at a very inexpensive rate. To learn more about the services and new pricing, read the BizTalk Services documentation.

Traffic Manager: General Availability Release

I’m excited to announce that Windows Azure Traffic Manager is also now generally available.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

Windows Azure Traffic Manager allows you to control the distribution of user traffic to applications that you host within Windows Azure. Your applications can run in the same data center or be distributed across regions around the world.  Traffic Manager works by applying an intelligent routing policy engine to the Domain Name Service (DNS) queries on your domain names, and maps the DNS routes to the appropriate instances of your applications.

You can use Traffic Manager to improve application availability - by enabling automatic customer traffic fail-over in the event of issues with one of your application instances.  You can also use Traffic Manager to improve application performance - by automatically routing your customers to the application instance nearest them (e.g. you can set up Traffic Manager to route customers in Europe to a European instance of your app, and customers in North America to a US instance of your app).
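Because the routing decision is made at the DNS level, a quick way to see it in action is simply to resolve the Traffic Manager domain from different locations. Here is a minimal C# sketch (the contoso.trafficmanager.net name is hypothetical); the addresses it prints depend on the routing policy in effect and on where the query originates:

    using System;
    using System.Net;

    class TrafficManagerDnsCheck
    {
        static void Main()
        {
            // Hypothetical Traffic Manager domain; substitute your own *.trafficmanager.net name.
            // Traffic Manager answers the DNS query with the endpoint its routing policy selects,
            // so the result can differ by policy and by client location.
            IPHostEntry entry = Dns.GetHostEntry("contoso.trafficmanager.net");
            foreach (IPAddress address in entry.AddressList)
            {
                Console.WriteLine(address);
            }
        }
    }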

Getting Started

Setting up Traffic Manager is easy to do.  Simply choose New->Network Services->Traffic Manager within the Windows Azure Management Portal:

image

When you create a Windows Azure Traffic Manager you can specify a “load balancing method” – this indicates the default traffic routing policy engine you want to use. Above I selected the “failover” policy. 

image

Once your Traffic Manager instance is created you can click the “endpoints” tab to select application or service endpoints you want the traffic manager to route traffic to.  Below I’ve added two virtual machine deployments – one in Europe and one in the United States:

image

Enabling High Availability

Traffic Manager monitors the health of each application/service endpoint configured within it, and automatically re-directs traffic to other application/service endpoints should any service fail.

In the following example, Traffic Manager is configured in a ‘Failover’ policy, which means by default all traffic is sent to the first endpoint (scottgudemo11), but if that app instance is down or having problems (as it is below) then traffic is automatically redirected to the next endpoint (scottgudemo12):

image

Traffic Manager allows you to configure the protocol, port and monitoring path used to monitor endpoint health. You can use any of your web pages as the monitoring path, or you can use a dedicated monitoring page, which allows you to implement your own custom health check logic:

image
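A dedicated monitoring page only needs to return an HTTP 200 when the instance is healthy; anything else marks the endpoint as degraded. Here is a minimal sketch of such a page as an ASP.NET handler; the check methods are hypothetical placeholders for whatever your app considers "healthy":

    using System.Web;

    // Hypothetical custom monitoring endpoint, e.g. mapped to /health/check.ashx and
    // configured as the Traffic Manager monitoring path.
    public class HealthCheckHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Replace with real checks (database reachable, queue depth acceptable, etc.).
            bool healthy = DatabaseIsReachable() && DependenciesAreHealthy();

            context.Response.StatusCode = healthy ? 200 : 503;
            context.Response.Write(healthy ? "OK" : "Unhealthy");
        }

        private bool DatabaseIsReachable() { return true; }     // placeholder
        private bool DependenciesAreHealthy() { return true; }  // placeholder
    }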

Enabling Improved Performance

You can deploy multiple instances of your application or service in different geographic regions, and use Traffic Manager’s ‘Performance’ load-balancing policy to automatically direct end users to the closest instance of your application. This improves performance for end users by reducing the network latency they experience:

image

In the Traffic Manager instance we created earlier, we have a VM deployment in both the West Europe and West US regions of Windows Azure:

image

This means that when a customer in Europe accesses our application, they will automatically be routed to the West Europe application instance.  When a customer in North America accesses our application, they will automatically be routed to the West US application instance. 

Note that endpoint monitoring and failover is a feature of all Traffic Manager load-balancing policies, not just the ‘failover’ policy.  This means that if one of the above instances has a problem and goes offline, the traffic manager will automatically direct all users to the healthy instance.

Seamless application updates

You can also explicitly enable and disable each of your application/service endpoints in Traffic Manager.  To do this simply select the endpoint, and click the Disable command:

image

This doesn’t stop the underlying application - it just tells Traffic Manager to route traffic elsewhere. This enables you to migrate traffic away from a particular deployment of an application/service while it is being updated and tested, and then bring it back into rotation, all with just a couple of clicks.

General Availability

As Traffic Manager plays a key role in enabling high availability applications, it is of course vital that Traffic Manager itself is highly available. That’s why, as part of general availability, we’re announcing a 99.99% uptime SLA for Traffic Manager.

Traffic Manager has been available free of charge during preview. Free promotional pricing will remain in effect until December 31, 2013. Starting January 1, 2014, the following pricing will apply:

  • $0.75 per million DNS queries (reducing to $0.375 after 1 billion queries)
  • $0.50 per service endpoint/month.
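Under this pricing, for example, an application with two endpoints serving 10 million DNS queries a month would run roughly 2 × $0.50 + 10 × $0.75 = $8.50 per month.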

Full pricing details are available on the Windows Azure Web Site.  Additional details on Traffic Manager, including a detailed description of endpoint monitoring, all configuration options, and the Traffic Manager management REST APIs, are available on MSDN.

Active Directory: General Availability of Application Access

This summer we released the initial preview of our Application Access Enhancements for Windows Azure Active Directory, which enable you to securely implement single sign-on (SSO) support against SaaS applications as well as LOB (line-of-business) applications. Since then we’ve added SSO support for more than 500 applications (including popular apps like Office 365, SalesForce.com, Box, Google Apps, Concur, Workday, DropBox, GitHub, etc).

Building upon the enhancements we delivered last month, with this week’s release we are excited to announce the general availability release of the application access functionality within Windows Azure Active Directory. These features are available for all Windows Azure Active Directory customers, at no additional charge, as of today’s release:

  • SSO to every SaaS app we integrate with
  • Application access assignment and removal
  • User provisioning and de-provisioning support
  • Three built-in security reports
  • Management portal support

Every customer can now use the application access features in the Active Directory extension within the Windows Azure Management Portal.

Getting Started

To integrate your active directory with either a SaaS or LOB application, navigate to the “Applications” tab of the Directory within the Windows Azure Management Portal and click the “Add” button:

image

Clicking the “Add” button will bring up a dialog that allows you to select whether you want to add a LOB application or a SaaS application:

image

Clicking the second link will bring up a gallery of 500+ popular SaaS applications that you can easily integrate your directory with:

image

Choose an application you wish to enable SSO with and then click the OK button.  This will register the application with your directory:

image

You can then quickly walk through setting up single sign-on support and enable your Active Directory to automatically provision accounts with the SaaS application.  This will enable employees who are members of your Active Directory to easily sign in to the SaaS application using their corporate/active directory account.

In addition to making it more convenient for the employee to sign in to the app (one less username/password to remember), this SSO support also makes the company’s data even more secure.  If the employee ever leaves the company and their active directory account is suspended/deleted, they will lose all access to the SaaS application.  The IT administrator of the Active Directory can also optionally choose to enable the Multi-Factor Authentication support that we shipped in September to require employees to use a second form of authentication when logging into the SaaS application (e.g. a phone app or SMS challenge) for even more secure identity access.  The Windows Azure Multi-Factor Authentication Service composes really nicely with the SaaS support we are shipping today – you can literally set up secure support for any SaaS application (complete with multi-factor authentication support) for your entire enterprise within minutes.

You can learn more about what we’re providing with Azure Directory here, and you can ask questions and provide feedback on today’s release in the Windows Azure AD Forum.

Mobile Services: Active Directory integration, Xamarin support, Optimistic concurrency

Enterprises are increasingly going mobile to deliver their line of business apps. Today we are introducing a number of exciting updates to Mobile Services that make it even easier to build mobile LOB apps.

Preview of Windows Azure Active Directory integration with Mobile Services

I am excited to announce the preview of Windows Azure Active Directory support in Mobile Services.  Using this support, mobile business applications can now use the same easy Mobile Services authentication experience to allow employees to sign into their mobile applications with their corporate Active Directory credentials.

With this feature, Windows Azure Active Directory is now supported as an identity provider in Mobile Services, alongside the other identity providers we already support (Microsoft Accounts, Facebook ID, Google ID, and Twitter ID).  You can enable Active Directory support by clicking the “Identity” tab within a mobile service:

image

If you are an enterprise developer interested in using the Windows Azure Active Directory support in Mobile Services, please contact us at mailto:mobileservices@microsoft.com to sign up for the private preview.

Cross-platform connected apps using Xamarin and Mobile Services

We earlier partnered with Xamarin to deliver a Mobile Services SDK that makes it easy to add capabilities such as storage, authentication and push notifications to iOS and Android applications written in C# using Xamarin. Since then, thousands of developers have downloaded the SDK and enjoyed the benefits of building cross platform mobile applications in C# with Windows Azure as their backend.  More recently as part of the Visual Studio 2013 launch, Microsoft announced a broad collaboration with Xamarin which includes Portable Class Library support for Xamarin platforms.

With today’s release we are making two additional updates to Mobile Services:

  • Delivering an updated Mobile Services Portable Class Library (PCL) SDK that includes support for both Xamarin.iOS and Xamarin.Android
  • New quickstart projects for Xamarin.iOS and Xamarin.Android exposed directly in the Windows Azure Management Portal

These updates make it even easier to build cloud connected cross-platform mobile applications.

Getting started with Xamarin and Mobile Services

If you navigate to the quickstart page for your Windows Azure Mobile Service you will see there is now a new Xamarin tab:

image

To get started with Xamarin and Windows Azure Mobile Services, all you need to do is click one of the links circled above, install the Xamarin tools, and download the Xamarin starter project that we provide directly on the quick start page above:

image

After downloading the project, unzip and open it in Visual Studio 2013. You will then be prompted to pair your instance of Visual Studio with a Mac so that you can build and run the application on iOS. See here for detailed instructions on the setup process.

Once the setup process is complete, you can select the iPhone Simulator as the target and then just hit F5 within Visual Studio to run and debug the iOS application:

image

The combination of Xamarin and Windows Azure Mobile Services makes it incredibly easy to build iOS and Android applications using C# and Visual Studio.  For more information check out our tutorials and documentation.

Optimistic Concurrency Support

Today’s Mobile Services release also adds support for optimistic concurrency. With optimistic concurrency, your application can now detect and resolve conflicting updates submitted by multiple users. For example, if one user retrieves a record from a Mobile Services table to edit and another user updates that record in the meantime, then without optimistic concurrency support the first user may overwrite the second user’s update. With optimistic concurrency, conflicting changes can be caught, and your application can either give the user a choice to manually resolve the conflicts or implement a resolution behavior.

When you create a new table, you will notice three system property columns added to support optimistic concurrency: (1) __version, which holds the record’s version, (2) __createdAt, the time the record was inserted, and (3) __updatedAt, the time the record was last updated.

image

You can use optimistic concurrency in your application by making two changes to your code:

First, add a version property to your data model as shown in the code snippet below. Mobile Services will use this property to detect conflicts while updating the corresponding record in the table:

public class TodoItem
{
    public string Id { get; set; }

    [JsonProperty(PropertyName = "text")]
    public string Text { get; set; }

    [JsonProperty(PropertyName = "__version")]
    public byte[] Version { get; set; }
}

Second, modify your application to handle conflicts by catching the new MobileServicePreconditionFailedException exception. Mobile Services sends back this error, which includes the server version of the conflicting item. Your application can then decide which version to commit back to the server to resolve the detected conflict.
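As a rough sketch of what that conflict handling can look like in client code (assuming the TodoItem class shown above and a MobileServiceClient for your service; exact member names may vary slightly by SDK version):

    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.MobileServices;

    public static class TodoSync
    {
        public static async Task SaveWithConflictHandlingAsync(MobileServiceClient client, TodoItem item)
        {
            var table = client.GetTable<TodoItem>();
            try
            {
                await table.UpdateAsync(item);
            }
            catch (MobileServicePreconditionFailedException<TodoItem> ex)
            {
                // ex.Item carries the server's current copy of the conflicting record.
                TodoItem serverItem = ex.Item;

                // Resolve however makes sense for your app: here we keep our text but adopt
                // the server's version marker so that retrying the update succeeds.
                item.Version = serverItem.Version;
                await table.UpdateAsync(item);
            }
        }
    }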

To learn more about optimistic concurrency, review our new Mobile Services optimistic concurrency tutorial.  Also check out the new custom ID property support we are adding with today’s release – it makes it much easier to handle a variety of richer data modeling scenarios (including sharding support).

Notification Hubs: Price Reduction and Debug Send Improvements

In August I announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency to any mobile device (including Windows Phone, Windows 8, iOS and Android devices). Notification hubs can be used with any mobile app back-end (including ones built using Windows Azure Mobile Services) and can also be used with back-ends that run in the cloud as well as on-premises.

Pricing update: Removing Active Device limits from Notification Hubs paid tiers

To simplify the pricing model of Notification Hubs and pass on cost savings to our customers, we are removing the limits we previously had on the number of Active Devices allowed.  For example, the consumption price for the Notification Hubs Standard Tier now simply becomes $75 for 1 million pushes per month, and $199 for 5 million pushes per month (prorated daily).

These changes and price reductions will be available to all paid tiers starting Dec 15th.  More details on the pricing can be found here.

Troubleshooting Push Notifications with Debug Send

Troubleshooting push notifications can sometimes be tricky, as there are many components involved: your backend, Notification Hubs, platform notification service, and your client app.

To help with that, today’s release adds the ability to easily send test notifications directly from the Windows Azure Management Portal. Simply navigate to the new DEBUG tab of any Notification Hub, specify whether you want to broadcast to all registered devices or provide a tag (or tag expression) to target only specific devices or groups of devices, specify the notification payload you wish to send, and then hit “Send”.  For example, below I am choosing to send a test notification message to all of my users who have the iOS version of my app and who have registered to subscribe to “sport-scores” within my app:

image

After the notification is sent, you will get a list of all the device registrations that were targeted by your notification, along with the outcome of each delivery as reported by the corresponding platform notification service (WNS, MPNS, APNS, or GCM). This makes it much easier to debug issues.
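For comparison, the equivalent targeted send from your own backend using the Notification Hubs SDK looks roughly like this; the hub name, connection string and tag are hypothetical:

    using System.Threading.Tasks;
    using Microsoft.ServiceBus.Notifications;

    public static class ScoreNotifier
    {
        public static async Task SendTestScoreAsync()
        {
            // Hypothetical hub name and connection string.
            NotificationHubClient hub = NotificationHubClient.CreateClientFromConnectionString(
                "<DefaultFullSharedAccessSignature connection string>", "myhub");

            // The second argument is a tag (or tag expression) that limits delivery to devices
            // registered with it - here, users subscribed to "sport-scores", as in the Debug Send above.
            string alert = "{\"aps\":{\"alert\":\"Final score: 3-2\"}}";
            await hub.SendAppleNativeNotificationAsync(alert, "sport-scores");
        }
    }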

For help on getting started with Notification Hubs, visit the Notification Hubs documentation center.

Web Sites: Diagnostics Support for Automatic Logging to Blob Storage

In September we released an update to Windows Azure Web Sites that enables you to automatically persist HTTP logs to Windows Azure Blob Storage.

Today we also updated Web Sites to support persisting a Web Site’s application diagnostic logs to Blob Storage.  This makes it really easy to persist your diagnostic logs as text blobs that you can store indefinitely (since storage accounts can maintain huge amounts of data) and later use for rich data mining and analysis.  It also makes it much easier to quickly diagnose and understand issues you might be having within your code.

Adding Diagnostics Statements to your Code

Below is a simple example of how you can use the built-in .NET Trace API within System.Diagnostics to instrument code within a web application.  In the scenario below I’ve added a simple trace statement that logs the time it takes to call a particular method (which might call out to a remote service or database and take a while):

image

Adding instrumentation code like this makes it much easier for you to quickly determine the cause of a slowdown in a production application.  Logging the performance data also makes it possible to analyze performance trends over time (e.g. what the 99th-percentile latency is).
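The instrumented code itself appears above only as a screenshot, but a minimal sketch of that kind of trace statement (the class and method names here are hypothetical stand-ins) could look like this:

    using System.Diagnostics;

    public class OrderService
    {
        public void DoSomething()
        {
            // Time a potentially slow call and write the latency to the application
            // diagnostics log, which Web Sites can persist to blob storage.
            Stopwatch stopwatch = Stopwatch.StartNew();

            CallRemoteServiceOrDatabase();

            stopwatch.Stop();
            Trace.TraceInformation("DoSomething took {0} ms", stopwatch.ElapsedMilliseconds);
        }

        private void CallRemoteServiceOrDatabase()
        {
            // Placeholder for the slow remote call or database query being measured.
        }
    }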

Storing Diagnostics Log Files as Blobs in Windows Azure Storage

To enable diagnostic logs to be automatically written directly to blob storage, simply navigate to a Web Site using the Windows Azure Management Portal and click the CONFIGURE tab.  Then navigate to the APPLICATION DIAGNOSTICS section within it.  Starting today, you can now configure “Application Logging” to be persisted to blob storage.  To do this, just toggle the button to be “on”, and then choose the logging level you wish to persist (error, verbose, information, etc):

image

Clicking the green “manage blob storage” button brings up a dialog that allows you to configure which blob storage account you wish to store the diagnostics logs within:

image

Once you are done just click the “ok” button, and then hit “save”.  Now when your application runs, the diagnostic data will automatically be persisted to your blob storage account. 

Looking at the Application Diagnostics Data

Diagnostic logging data is persisted almost immediately as your application runs (we have a trace listener that automatically handles this within Web Sites and allows you to write thousands of diagnostic messages per second).

You can use any standard tool that supports Windows Azure Blob Storage to view and download the logs.  Below I’m using the CloudXplorer tool to view my blob storage account:

image

The application diagnostic logs are persisted as .csv text files.  Windows Azure Web Sites automatically persists the files within sub-folders of the blob container that map to the year->month->day->hour of the web-site operation (which makes it easier for you to find the specific file you are looking for).

Because they are .csv text files you can open/process the log files using a wide variety of tools or custom scripts (you can even spin up a Hadoop cluster using Windows Azure HDInsight if you want to analyze lots of them quickly).  Below is a simple example of opening the above diagnostic file using Excel:

image

Notice above how the date/time, information level, application name, web server instance ID, event tick, and process and thread IDs were all persisted in addition to my custom message, which logged the latency of the DoSomething method.
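If you would rather pull the logs down programmatically than open them in Excel, a rough sketch using the Windows Azure Storage client library might look like the following; the account credentials, container name and blob path are hypothetical:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class LogReader
    {
        static void Main()
        {
            // Hypothetical storage account and log blob; Web Sites writes the application logs
            // into year/month/day/hour sub-folders of the container you configured above.
            CloudStorageAccount account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
            CloudBlobContainer container = account.CreateCloudBlobClient()
                .GetContainerReference("mywebsite-logs");

            CloudBlockBlob blob = container.GetBlockBlobReference(
                "mywebsite/2013/11/21/14/applicationLog.csv");

            // Each line is a CSV record: date/time, level, application name, instance ID,
            // event tick, process and thread IDs, message, ...
            foreach (string line in blob.DownloadText().Split('\n'))
            {
                Console.WriteLine(line);
            }
        }
    }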

Running with Diagnostics Always On

Today’s update now makes it super easy to log your diagnostics trace messages to blob storage (in addition to the HTTP logs that were already supported).  The above steps are literally the only ones required to get started.

Because Windows Azure Storage Accounts can store 100 TB each, and Windows Azure Web Sites provides an efficient way to persist the logs to them, it is now also possible to always leave diagnostics on in production and log everything your application does.  Having this data persisted makes it much easier for you to understand the health of your applications, debug them when there are issues, and analyze them over time to make them even better.

Storage: Support for Alerting based on Storage metrics

With today’s release we have added support to enable threshold based alert rules for storage metrics. If you have enabled storage analytics metrics, you can now configure alert rules on these metrics.

You can create an alert rule on storage metrics by navigating to the Management Services -> Alert tab in the Windows Azure Management Portal. Click the Add Rule button, then in the rule creation dialog select Storage as the service type, select the storage account you want to enable alerts on, and then choose the storage service (blob, table, or queue).

image

Then select the blob service metric and configure the threshold value and the email address to which the notification should be sent:

image

Once set up and enabled, the alert will be listed in the Alerts tab:

image

The rule will then be monitored against the storage metric. If the metric crosses the configured threshold, an alert email will automatically be sent.

Monitoring: Preview release of Windows Azure Monitoring Service Library

Today we are releasing a preview of our new Windows Azure Monitoring Service Library. This library allows you to get monitoring metrics and to programmatically configure alerts and autoscale rules for your services.

The list of monitoring services clients that we are shipping today include:

image

Let’s walk through an example of creating an alert rule using the AlertsClient library. To create an alert rule, you need to specify the service you are creating the alert on and the metric the rule operates on. In addition, you need to specify the rule settings for the condition and for the action taken when the alert threshold is reached.  The code below shows how to do this programmatically:

image

Once the code above executes, our monitoring alert rule is configured without us ever having to do anything manually in the management portal.  You can write similar code now to retrieve operational metrics about a service and set up autoscale rules as well.  This makes it really easy to fully automate tasks.

Installing via NuGet

The monitoring service library is available via NuGet. Since it is still in preview form, you’ll need to add the -IncludePrerelease switch when you go to retrieve the package.

image

Documentation

The alerts, autoscale and metrics client API documentation can be accessed here.

Summary

Today’s release includes a bunch of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.


Alex Simons (@Alex_A_Simons) described App Access Enhancements GA + Azure AD Premium Preview on 11/21/2013:

It's a BIG day here in the Active Directory team.

As ScottGu announced in his blog post, we've just GA'd the Application Access Enhancements for Windows Azure Active Directory. This is a HUGE milestone for us on our path to delivering the world's richest cloud based identity management service.

In addition to these GA features, today we've also turned on the first preview of Windows Azure Active Directory Premium, a version of Azure AD designed to meet the identity needs of enterprises.

Application Access Enhancements is now GA

As I blogged about yesterday, we've been working hard to integrate with more and more SaaS applications. Since July we've completed integrations of Windows Azure AD with more than 500 applications and we are now adding 3-4 new applications a day. In addition, we've also completed our early customer previews and end-to-end testing and certification.

Now that we've reached this point, we're making application access generally available. Starting today, every organization in the world can manage access to their SaaS apps, all at no charge.

These enhancements include:

  • SSO to the 500+ apps we integrate with
  • Application access assignment and removal
  • User provisioning and de-provisioning
  • Basic security reporting
  • Our Application Access Panel

Windows Azure Active Directory Premium

As I mentioned above, today we're starting the public preview of Windows Azure Active Directory Premium.

Our goal with Windows Azure Active Directory Premium is to provide a robust set of capabilities tailored to meet the demanding identity and access management needs of enterprises. This is the first of several previews of Windows Azure AD Premium and includes:

  • Self-service password reset for users: Whenever employees forget their password, Windows Azure AD gives them a self-service way to reset their password rather than having to call your helpdesk.

  • Group-based provisioning and access management to SaaS apps: You can leverage existing groups that have been synced in from your on-premises Active Directory to assign users access in bulk to SaaS apps and to automate the ongoing assignment of users to apps.
  • Customizable access panel: Organizations can now customize the app access panel for their employees with company logos and color schemes.
  • Machine learning-based security monitoring and reports: Azure AD Premium uses advanced machine learning systems to monitor and protect access to your cloud applications and provides detailed security reports showing anomalies and inconsistent access patterns.

    You can view logins by users who logged in from unknown sources, logins that occurred after multiple failures and logins from multiple geographies in short timespans. Security reports will help you gain new insights to improve access security and respond to potential threats.

And this is only the first preview so this is not an exhaustive list of features -- Windows Azure Active Directory Premium will continue to grow and evolve to embrace the hybrid world of devices and cloud services.

During this public preview we're starting, Windows Azure Active Directory Premium features are available at no charge. At the end of the preview the Premium offering will be converted to a paid service. And we'll let you know the final pricing at least 30 days prior to the end of the free public preview period. Of course the basic Windows Azure Active Directory will continue to remain free.

You can log on to and sign up for Windows Azure Active Directory and start using these features in preview at no charge. To evaluate this preview, navigate to Windows Azure Preview Feature page and add Windows Azure Active Directory Premium to your subscription by clicking "try it now", selecting the "Free Trial" subscription and confirming by clicking on the check on the bottom right.

Figure 4: Opting into the Azure AD Premium Preview

Then, in the Windows Azure Management Portal, select a directory where you want to use the Windows Azure Active Directory Premium features. (You can add the features to multiple directories if you wish).

Figure 5: Navigating in a specific directory

On the Configure tab of the directory, move the slider for Premium features to Enabled.

Figure 6: Enabling Azure AD Premium in a directory

This will cause new premium features, such as enabling a password reset policy for end users, to be enabled on that directory.

Figure 7: Password reset policy enabled.

There are a lot of new capabilities going into Windows Azure AD Premium. In upcoming posts we'll cover more details on self-service password reset, tenant branding, assigning users, the advanced reports and additional features – so stay tuned! In the meantime, let us know if you have any questions, and you can give us your feedback at the Windows Azure AD Forum.


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

No significant articles so far this week.



<Return to section navigation list>

Windows Azure Cloud Services, Caching, APIs, Tools and Test Harnesses

• Nick Harris (@cloudnick) and Chris Risner (@chrisrisner) produced Cloud Cover Episode 121: New Relic on 11/22/2013:

In this episode, Nick Harris and James Baker are joined by Nick Floyd, an Agent Developer at New Relic, who will walk through:

  • Signing up for a free New Relic subscription in the Windows Azure Portal
  • Connecting the subscription to a new web site in Azure
  • Adding the New Relic NuGet package to a web site in Visual Studio
  • Navigating to the New Relic portal from the Windows Azure Portal
  • Viewing different performance metrics for the web site
  • Viewing performance metrics on a site and its connected SQL Database

• Scott Hanselman (@shanselman) posted Introducing node.js Tools for Visual Studio on 11/21/2013:

Just when you thought it couldn't be crazier in Redmond, today they are introducing node.js Tools for Visual Studio!

node.js and Express running in VS

NTVS runs inside VS2012 or VS2013. Some node.js enthusiasts had forked PTVS and begun some spikes of node tools for VS. At the same time the PTVS team was also working on node.js integration, so they all joined forces and made NTVS a community project. NTVS was developed by the same team that brought you PTVS with help from friends like Bart Read from Red Gate (he did the npm GUI), and Dmitry Tretyakov from Clickberry for several debugger fixes & features.

NTVS is open source from the start, and has taken contributions from the very start. It supports Editing, Intellisense, Profiling, npm, Debugging both locally and remotely (while running the server on Windows/MacOS/Linux), as well as publishing to Azure Web Sites and Cloud Services.

It's actually pretty freaking amazing how they did it, so I encourage you to download it and give it a try because some of the stuff (even given this is an alpha) is very very clever.

Blank Express Application

Node.js Tools for Visual Studio takes advantage of the V8 Profiling APIs as well as Visual Studio's Reporting features to give you a sense of where your program is spending its time.

NOTE: See that File | New Project dialog up there? Visual Studio organizes things by language, so node.js is under JavaScript. But you've also got Python and Django, iOS and Android via C#, TypeScript, VB, F#, all in Visual Studio.

One of the things that's impressed me about the way they integrated node.js into Visual Studio was that they didn't try to recreate or re-do things that already worked well. It's node, it runs node.exe, it uses the V8 debugger, it uses the V8 profiler because that's what people use. Duh. But, for example, NTVS can take the output from the V8 profiler and display it using the Visual Studio Profiler Reporting Tools. No need to reinvent the wheel, just use the right tool for the job.

Hacking on the Ghost blogging engine with node.js for Visual Studio

Let's look at an example.

From within Visual Studio, go File | New Project, click JavaScript, then "From Existing Node.js code."

From Existing node.js Code

Point NTVS to your Ghost folder.

Create from Existing Code

Then tell node.js for VS that the startup file is index.js, hit Next, save the project file and Finish.

Create New Project from Existing Code

At this point, you've got Ghost inside VS.

Random: since I have Web Essentials I also get a nice split-screen markdown editor as well.

From here, just hit F5 to Debug, or Ctrl-F5 to start without Debugging. Also notice the properties of the Project in the lower right corner there showing the node path and port as well as the Startup File. You can change these, of course.

Ghost inside Visual Studio with NTVS

Here's me running Ghost locally. You can see the path to node, the ghost.js file and my browser.

Running Ghost in VS with node for VS

You'll get good intellisense for completions and help for method signatures.

Intellisense example

Debugging

Node.js Tools for Visual Studio includes complete support for debugging node apps. This includes support for Stepping, Breakpoints, "Break on exception", as well as Locals, Watch, Immediate and Call Stack tool windows.

You can manage Exceptions just like any other language service. See in the dialog below node.js exceptions are listed along with other exceptions in managed and unmanaged code.

Managing Exceptions in node.js for Visual Studio

The debugging still happens like it always has, with the node V8 debugger, except Visual Studio connects to the debugger over another socket (remember, you can even debug node.js remotely running on a Linux or Mac like this!) and translates how V8 thinks into how Visual Studio thinks about debugging. The experience is seamless.

See in this screenshot, you can see node.exe is being debugged, I'm running Ghost. You can see my Call Stack, and the Locals in the Watch Window. I can inspect variables, step around and do everything you'd want to do when debugging a Web App.

Debugging Session of Ghost in VS with Node Tools for Visual Studio

npm in Visual Studio

The npm experience is pretty cool as well. Node.js Tools for Visual Studio is always watching the file system, so you are more than welcome to run npm from the command line or from within the node immediate window, and Visual Studio will see the changes.

You can also use the npm Package Management dialog and search the repository and install packages graphically. It's up to you.

npm package management within VS

Here's a package installing...

Installing a module

The physical node_modules and how modules are handled is pure node...VS doesn't touch it or care. However, the Solution Explorer in Visual Studio also presents a logical view on top of the physical view.

image

NOTE: I really like this. I think it has potential and I'd even like to see references in .NET treated like this. The physical and the logical, along with a dependency tree showing NuGet packages. It helped me understand the project much better.

There's lots more. There's a REPL interactive window, and you can just publish like any other web project using the same Publish Wizard that ASP.NET projects use. You can publish node.js apps directly to Azure as well, either with Git or with Visual Studio publishing.

You can also remotely debug node instances running on other machines by starting node with the included Remote Debugging Proxy.

image

node.exe RemoteDebug.js -machineport 5860 script.js

As mentioned, you can do remote debugging between Visual Studio and node running on any server OS.

Conclusion

I'm personally pretty happy with the way that Visual Studio is turning (in a short amount of time, seems to me) into quite the competent language and environment factory.

Node.js Tools for Visual Studio is entirely open source under the Apache license and they welcome contributions and bug reports. It's Alpha and it's early but it's awesome. Go get it. Big congrats to all involved!


Sponsor: Thanks to Red Gate for sponsoring the feed this week! Easy release management: Deploy your .NET apps, services and SQL Server databases in a single, repeatable process with Red Gate’s Deployment Manager. There’s a free Starter edition, so get started now!

Disclosure: FYI, Red Gate does advertise on this blog, but it was a total coincidence that a Red Gate employee helped with node.js Tools for VS. I just found that out today. They are very nice people.


App Dynamics (@AppDynamics) sponsored an Optimizing application performance in Windows Azure post to the GigaOm blog on 11/19/2013:

Macmillan English, a global publishing company based in the U.K., decided to migrate its online learning applications to Windows Azure in order to better serve students in India, Singapore and other parts of the world far from the company’s data center in the U.K. However, it found that the monitoring tools built into Windows Azure were not enough to provide the level of visibility it needed into the performance of these applications. In order to get the visibility it needed to monitor, optimize and scale its applications, it chose AppDynamics for Windows Azure, an application performance management (APM) solution available as an add-on in the Windows Azure store. Read the full case study to learn how Macmillan English optimizes the performance of its Windows Azure applications with AppDynamics.

AppDynamics is an application performance management (APM) solution that allows organizations like Macmillan to understand how application performance affects their end-user experience at the code level. AppDynamics is available as an add-on for Windows Azure and can be installed and deployed with your solution as a NuGet package in a matter of minutes. Get AppDynamics in the Windows Azure store.

Full disclosure: I’m a registered GigaOm Analyst.


• Haishi Bai (@HaishiBai2010) explained WACEL + Windows Azure Cache vs. Kinesis: Implementing data acquisition scenarios on Windows Azure in an 11/17/2013 post:

This week, Amazon just announced their data ingress service, Kinesis, for data acquisition tasks in Big Data scenarios. Coincidentally, my open source library, Windows Azure Cache Extension Library (WACEL), had been released just two weeks before the announcement, and data acquisition is one of the key scenarios supported by WACEL. This post introduces how to use WACEL + Windows Azure Cache to implement data acquisition scenarios on Windows Azure. It then compares the differences between WACEL and Kinesis. Before I continue, I need to clarify that WACEL is not an official product and this post reflects only my personal opinions. The post doesn’t reflect or suggest Microsoft’s product roadmaps in any way.

The problem

Both WACEL + Cache and Kinesis attempt to solve the same problem: to provide an easy-to-use, high-throughput, scalable and available data acquisition solution for Big Data scenarios. Data acquisition is often the starting point of solving (or creating) a Big Data problem. A feasible solution not only needs to provide efficient, high-throughput data pipelines that allow large amounts of data to be pumped into the system, but also needs to provide mechanisms for backend data processors to handle the data, often at a much slower rate than it comes in, without being overwhelmed by the data flood.

Kinesis focuses on providing a general, hosted solution for high-throughput data ingress. However, WACEL + Cache is built with sensor data collection in mind. For a large number of sensors, having a single fat data pipeline is not necessarily desirable: a data processor has to filter through the large data stream and group information by sensor. When scaling out, this means either a large number of processors scanning the same stream, with each taking only a very small portion of it, or some sort of data filtering and dispatching mechanism has to be built.

WACEL examines sensor data acquisition more closely. It classifies sensor data into two broad categories: streams and events, each with different characteristics and processing requirements, as summarized in the following table:

image

In the case of streams, because sensor data is constantly pumped into the system, higher and more stable throughput is required. Examples of sensor streams include GPS sensors and various environmental sensors. In the case of events, because events only happen occasionally, the throughput requirement is not as high in most cases. Examples of sensor events include motion detections and other threshold violations. Streamed data is often temporal, which means that if it's not examined within a short period of time, it loses its value quickly. For example, the coordinate reported by a GPS a minute ago is most likely irrelevant to an auto-piloting program. On the other hand, events may or may not be time-sensitive. It's often acceptable to lose some of the streamed data; however, it's usually unacceptable to lose events. And for streamed sensor data, it's often pointless to record the raw data for extended periods of time; instead, you record characteristics abstracted from the raw data, such as moving averages, trends, and summaries.

WACEL provides two data structures for ingesting data from sensors: a circular buffer that keeps only the latest n items, which is ideal for handling streams; and a queue, which is ideal for handling events. Unlike Kinesis, you don't have to pump data through a few fat data pipelines. WACEL allows you to create and tear down pipelines instantly whenever you like. You can have a separate data pipeline for a group of sensors, or even for each individual sensor. And for different types of sensors, you can pick between circular buffers and queues to better fit your scenarios. Separate data pipelines also simplify data processing: you can attach processors to specific data streams instead of having to filter the data yourself.

Getting Started: A Simple Scenario

Now let’s see a simple scenario. In this scenario, a GPS sensor is connected to a client computer and it sends standard NMEA statements. The client computer listens to the stream and forward all $GPGAA statements to a backend server. The GPS data is kept in a circular buffer, which keeps the last 5 position fixes. On the server side, a data processor gets the average location from the last 5 statements (as moving averages often provide better positioning results by reducing drifting).

Client implementation
  1. In your client program, add a reference to the WACEL NuGet package.
  2. Modify your application configuration file to point to a Windows Azure Cache cluster. Many WACEL data structures support either Windows Azure Cache or Windows Azure Table Storage as the backend (with extensibility to support other backends). In this case we'll use a cache cluster.
  3. Create a new CachedCompressedCircularBuffer instance and start sending data. That's really all you need to do; WACEL takes care of everything else, including batching, compression, handling transient errors, and, in the near future, offline support.
     CachedCompressedCircularBuffer buffer = new CachedCompressedCircularBuffer("mygps", 5, batchSize: 5);
     buffer.Add("$GPGAA,151119.00,4307.0241,N,07729.2249,W,1,06,03.2,+00125.5,M,,,,*3F");

Note that although we've specified a 5-item batch, the client code doesn't need to explicitly manage batches. It simply adds an item (or items) to the buffer. WACEL automatically batches the items up, compresses them, and sends them to the server – this is one of the benefits of using a client library. On the other hand, if you want to commit a batch before it's filled up, you can call the Flush() method at any time to commit a partially filled batch.

Server implementation

The server implementation is just as easy:

  1. Add a reference to WACEL NuGet package.
  2. Modify your application configuration to point to the same Windows Azure Cache cluster.
  3. Create a new CachedCompressedCircularBuffer instance and start reading data. WACEL gives you two easy ways to consume data from a circular buffer: first, you can use the Get() method to get all data items currently in the buffer; second, you can simply use an indexer to access individual items in the buffer – for instance, buffer[0] for the latest item, buffer[-1] for the second latest, and so on. Note that WACEL uses negative indexes for older items to reflect the fact that you are tracing back to older data. In this case, we'll simply read all the statements in the buffer and calculate the average lat/long location.
    CachedCompressedCircularBuffer buffer = new CachedCompressedCircularBuffer("mygps", 5, batchSize: 5);
    string[] coordinates = buffer.Get(0);
    //parse statements and calculate average lat/long

The above Get() call returns all 5 $GPGAA statements in the buffer, and the processor can calculate the average lat/long based on these statements.
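As a rough sketch of that parse-and-average step, assuming the statements follow the field layout of the sample shown earlier (latitude in fields 2-3, longitude in fields 4-5); real code would also validate checksums and discard bad fixes:

    using System;
    using System.Linq;

    static class NmeaAverager
    {
        public static void PrintAverage(string[] statements)
        {
            var positions = statements.Select(Parse).ToArray();
            Console.WriteLine("Average position: {0}, {1}",
                positions.Average(p => p.Item1),
                positions.Average(p => p.Item2));
        }

        static Tuple<double, double> Parse(string statement)
        {
            string[] fields = statement.Split(',');
            double lat = ToDegrees(fields[2], degreeDigits: 2) * (fields[3] == "N" ? 1 : -1);
            double lon = ToDegrees(fields[4], degreeDigits: 3) * (fields[5] == "E" ? 1 : -1);
            return Tuple.Create(lat, lon);
        }

        // NMEA encodes coordinates as [d]ddmm.mmmm; convert to decimal degrees.
        static double ToDegrees(string value, int degreeDigits)
        {
            double degrees = double.Parse(value.Substring(0, degreeDigits));
            double minutes = double.Parse(value.Substring(degreeDigits));
            return degrees + minutes / 60.0;
        }
    }

You could pass this helper the string[] returned by the buffer.Get(0) call above.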

Throughput and scaling

The throughput of WACEL is bound by your bandwidth. When a 5-item batch is used, the above code provides roughly 50 tps, which is far more than sufficient for capturing and transferring data from a single GPS sensor. If you choose a larger batch size such as 50, the code allows 470+ tps, which is roughly 33K of data per second. And remember, this is only from a single thread on a single client. The WACEL library itself is stateless, so it scales as you scale out your applications, and its throughput is determined by your network speed and how fast Windows Azure Cache can handle data operations. WACEL's automatic batching and compression also improve throughput. In a load test (CompressedCircularPerfTest.AddWithLargeBatch), I was able to achieve 6M/sec (and 2,000 tps) using a single client. Both metrics exceed Kinesis's throughput promises, btw. Of course, that data was acquired under optimum conditions. You can find the source code for these tests on WACEL's CodePlex site – look for the CompressedCircularPerfTest class under Microsoft.Ted.Wacel.TestConsole.

WACEL + Windows Azure Cache vs. Kinesis

The following table is a quick comparison between the WACEL + Windows Azure Cache solution and Kinesis.

image



Return to section navigation list>

Windows Azure Infrastructure and DevOps

Charles Babcock (@babcockcw) reported Windows Azure Gaining 1000 Customers Per Month in an 11/20/2013 article for Information Week:

imageIaaS for Azure and inclusion of cloud use in enterprise agreements has helped Microsoft's cloud services gain momentum, says general manager Mike Neil.

Microsoft's Mike Neil said the Windows Azure cloud may have joined the infrastructure-as-a-service (IaaS) competition only last April, but he said the chart of Azure IaaS growth "looks more like a flagpole than a hockey stick."

Azure had been primarily a platform-as-a-service (PaaS), a developer's platform with Visual Studio and other tools available in Hyper-V and Windows Server settings. Azure IaaS came out of beta only seven months ago, while the PaaS version became available in preview form in the fall of 2009. Perhaps it's because supported IaaS has been available such a short time that its use "looks more like a flagpole" -- a spike represents both pent-up demand and a relatively small base on which to start adding customers. Now, Neil said Azure is adding 1,000 customers a day and Azure revenues are up 100% year over year; that includes the PaaS revenues as well as IaaS.

Neil is the executive who helped establish the Microsoft Hyper-V hypervisor and who is now general manager of Azure. When there was a slowdown or freeze up in Azure's staging API at the end of October, it was Neil who reported to Qi Lu, president of Online Services, and Satya Nadella, president of the Server and Tools business, on what went wrong.

Neil says that reporting process is a routine that Microsoft has incorporated into its culture. The point of it is to continuously improve Azure's uptime, not place blame on individuals. "It's not a blaming process. Our culture says an incident is a learning experience. I should fix this before it bites me in the butt. We take these learnings and translate them for customer use as well," he said in an interview earlier this month.

Implicit in the comment is the fact that Microsoft is running its own technologies to power Azure, Hyper-V, Windows Server, System Center with its Azure release pack, and its own Azure cloud software, and therefore it should be able to get the maximum out of them. It wants these components to provide highly reliable operations on the new scale of cloud computing. That scale dwarfs the enterprise operations using Microsoft that preceded it. According to documents filed in locations where it has built cloud datacenters, Microsoft invests $450 million to $500 million in a major new data center, such as the one outside Chicago, designed to hold 300,000 servers.

The more reliably it operates its cloud datacenters, the greater the chance it has to convince customers to engage in two-tiered, hybrid cloud computing through Microsoft. In fact, on the reliability front, Microsoft's record, while shorter, is as good as anyone's. One of its few major outages was the eight-hour Leap Day failure on Feb. 29, 2012. It published a full disclosure of what had gone wrong with misdated security certificates setting off a cascading shutdown of servers.

Neil [pictured at right] said Microsoft will complete root cause analysis of its end of October service slowdowns and disruptions and publish a post mortem on it as well. "We've been pretty forthright with customers. Enterprise customers appreciate that. Our competitors have been a little more opaque," he said.

In an attempt to further engage enterprise customers, Microsoft is including whatever hours of Azure use that a customer wants in its Enterprise agreements. "That has greatly reduced friction for customers" getting started with Azure, he said. They have a predictable cloud bill stated in their overall Microsoft contract, which they prefer to a bill that dips to a $10,000 low one month and a $100,000 high the next. "It shouldn't be like your cell phone bill after you come back from Europe," says Neil, with a wry nod to roaming charges. …

Read page 2 of Babcock’s story


<Return to section navigation list>

Windows Azure Pack, Hosting, Hyper-V and Private/Hybrid Clouds

Brad Anderson (@InTheCloudMSFT) continued his series with Success with Hybrid Cloud: The Components of a Hybrid Cloud on 11/18/2013:


For a lot of IT pros, the last two posts (here and here) have been a nice overview of the Hybrid Cloud model, but they've been waiting to get technical. Starting with this post, I'll begin a deep look at the specific technology that supports and maximizes Hybrid Clouds, as well as the tools available for use in this environment. A lot of IT pros are looking for how to build this, and in this post I'll identify what those component pieces are, and take a look at the work Microsoft is doing to provide support for these hybrid scenarios.

In terms of pure component pieces, a Microsoft Hybrid Cloud is pretty simple:

  • Windows Server for your on-prem cloud resources.
  • Windows Azure for your public cloud resources.
  • System Center manages and monitors both of these environments and the apps running in them.
  • Windows Azure Pack integrates with System Center and Windows Server to create a self-service portal to manage sites, VMs, and Service Bus.

To start, it’s important to understand how each of these four pieces operate individually and how they then interoperate to create the Hybrid environment.

Windows Server

I’ve posted dozens of times about the enterprise-class, simple, cost-effective, app-focused, user-centric cloud-optimized business solutions offered by Windows Server. What isn’t talked about as often is that Windows Server is the on-prem foundation of the Hybrid Cloud. Because Windows Server can take advantage of the latest and greatest datacenter-class hardware available, building and operating private and public clouds is easy.

With Windows Server as a foundation, Windows Azure, System Center, and Windows Azure Pack each amplify the inherent capabilities of Windows Server in a different way – and these capabilities are really exciting for enterprises and service providers.

(As a quick refresher, the seven key capabilities that we focus on with Windows Server are: 1) Server virtualization, 2) Storage, 3) Networking, 4) Server management & automation, 5) Web & application platform, 6) Access & information protection, and 7) Virtual desktop infrastructure. You’ll see each of these capabilities represented throughout this Hybrid Cloud series.)

Windows Server means Power for your datacenter and your cloud.

Windows Azure

With Windows Server as the on-prem foundation of the Hybrid Cloud, Windows Azure is the critical other half – the 100% cloud-based part of the Hybrid Cloud. Azure and Windows Server are equals in this Hybrid model, and we’ve worked to make the user experience of these two environments consistent. Azure is open and flexible, and it enables quick development, deployment and management of applications. This application support is a huge benefit – you can build, test, deploy, and maintain apps built using any language, any tool, and/or any framework.

Creating a connection between Windows Server, System Center, and Windows Azure Pack from Windows Azure is possible with these configuration steps.

Creating a private cloud with Windows Server, System Center, and the Windows Azure Pack is possible with these configuration steps.

With Windows Azure, you have Flexibility.

System Center

System Center can be described in two simple words: Unified Management. With System Center, you get management, not only for on-prem resources, but also across the entire Hybrid Cloud. This management is so thorough and so powerful that a truly Hybrid environment is impossible without a world-class management solution like System Center. The ability to provision, automate, self-serve, and monitor Windows Server and Azure through a consistent interface and with a consistent management experience simply cannot be overstated – and only System Center can do it.

(One other quick refresher – the five key capabilities delivered by System Center are pretty straightforward: 1) Infrastructure provisioning, 2) Infrastructure monitoring, 3) Automation and self-service, 4) Application performance monitoring, and 5) IT service management. Each of these capabilities is delivered consistently across the entire Microsoft Hybrid Cloud.)

With System Center you get Control of your Hybrid Cloud.

Windows Azure Pack

Historically, private and public cloud consumption have been dramatically different. The Windows Azure Pack (WAP) significantly reduces the user interface differences between Azure and private clouds. WAP enables customers to embrace public and private clouds by integrating directly with System Center and Windows Server, and it ships with built-in familiarity and a gentle learning curve.

WAP provides IT pros with multiple management avenues for their environment depending on their needs and how they access the data, and it also provides a consistent experience between System Center and Azure.

In conjunction with System Center, WAP provides a self-service portal for managing many of the moving parts found in any cloud infrastructure: Websites, Virtual Machines and Networks, Service Bus, Usage reporting, Automation tasks, Users, and many other cloud resources. It brings the power of Windows Azure into the on-prem IT environment, and it enables multi-tenant cloud management with a Windows Azure-consistent experience.

How does our competition stack up? With public cloud-only vendors like Amazon, customers need to spend time setting up and maintaining a patchwork of partner solutions to manage across public and private clouds – and no matter how many vendor solutions are used, any admin running a hosted cloud will be hard pressed to find something like the Windows Azure Pack which has been specifically built to provide customers with a consistent management experience across public and private environments.

This kind of Visibility means you can better control, understand, and capitalize on your IT resources.

To recap: Power (Windows Server), Flexibility (Windows Azure), Control (System Center), and Visibility (Windows Azure Pack) are the strengths combined in a Hybrid Cloud. This Hybrid environment effectively crosses conventional boundaries to maximize traditional resources and enable a truly modern datacenter.

When public and private environments are proficiently managed, the extensibility and functionality of a Hybrid Cloud environment can be uniquely powerful.

What I think is especially cool about Microsoft’s approach to this Hybrid Cloud model is that our support for this environment is not limited to the compute, network, and storage aspects – we have also created an awesome array of tools to make your cloud more efficient, productive, and economical.

These tools cover capabilities like: Infrastructure Provisioning, Infrastructure Monitoring, Application Performance Monitoring, Automation & Self-Service, and IT Service Management.

To really understand how these tools operate within a Hybrid Cloud, I want to explain what they do and how enterprises and service providers use them on a daily basis.

Infrastructure Provisioning

The capability to provision physical, virtual, and cloud infrastructures to effectively manage workload scale/performance, multi-tenancy, and chargeback is handled simply and efficiently with Virtual Machine Manager and Windows Server/Windows PowerShell. An example of Infrastructure Provisioning at work is the deployment of any of the out-of-the-box tenant-provisioning workloads (like Exchange, SharePoint, or Lync). In fact, the majority of the IaaS/PaaS/SaaS configuration and deployment steps fall into this category, too.

For more information on how these tools enable Infrastructure Provisioning, you can check out some deep, technical content by referencing the following:

Infrastructure & Application Performance Monitoring

Operations Manager provides cloud admins with the ability to do two critical things: ensure reliable performance/availability for the delivery of underlying business and operational SLAs, and provide incredibly deep and detailed insight that enables predictable application SLAs for application owners and business stakeholders. This builds upon the work we have done with the Azure Management Pack, which allows users to monitor hybrid environments from the System Center solutions they already know and love.

Tools like infrastructure monitoring and application performance monitoring are in action whenever there is live application monitoring/debugging, or whenever environment health-state awareness with actionable remediation steps is required.

For more information on how Operations Manager enables Infrastructure and Application Performance Monitoring, you can read more here:

Automation and Self-Service

For centralized visibility and control over all of the datacenter infrastructure used to host applications and resources, Automation and Self-Service are achieved with Windows PowerShell and System Center + Windows Azure. Common scenarios that call for these tools include automated VM migration and management, automated app management (like updates, remediation, and rollback), and self-service/automated provisioning of resources, services, or apps.

For more information on how these tools enable Automation and Self-Service, check out these overviews:

IT Service Management

IT Service Management covers processes like ITIL/MOF enablement, service and release management, and things like incident, change, problem, and event management.

For more information about how these tools enable IT Service Management, you can reference the following:

Taken together, these tools and products offer something unique in the tech industry.

One of the biggest values to enterprise customers and service providers, as noted above, is the user experience with the Microsoft Hybrid Cloud. Some vendors (Amazon, for example) put the burden of integrating public and private clouds on the customer, while others (VMware, for example) simply do not have anything that equals our experience managing large-scale public clouds – and this limits their ability to anticipate and plan for customer needs in this space. Microsoft’s experience managing large public clouds has directly led to the things we’ve developed in Windows Azure, and what’s now available in the Windows Azure Pack.

With these component pieces of the Microsoft Hybrid Cloud in mind, later this week I’ll start discussing in detail the best practices for planning, building, deploying, and maintaining a Hybrid Cloud.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

• Paul van Bladel (@PaulBladel) described Embedding LightSwitch in Enterprise Services (Part 2/N): Architecture for Scaling Out Long-Running Processes in an 11/22/2013 post:

In this first article on embedding LightSwitch in enterprise services, I will focus on using a WCF service as a vehicle for encapsulating a “business engine” that can work seamlessly with the online processing of a LightSwitch application. For now, I’ll leave open exactly what this business engine could be used for; we’ll just focus on the architecture. Obviously, in later articles in this series I’ll provide hands-on examples in Visual Studio. For the moment, think of a long-running process as a kind of side-effect processing triggered by things happening in the request-response pipeline of the online transaction processing application. That could be simply sending out thousands of mails, making a calculation that depends on other services in your enterprise, orchestrating other services, you name it. …

Encapsulating such complex or heavy processing in something like a “business engine” makes sense not only from a performance perspective, but also for being able to apply unit and integration testing.

What is the problem with invoking long running processes directly in LightSwitch?

I have seen various attempts by people to invoke a long-running process directly inside the request-response pipeline of the LightSwitch application by

  • spinning up a background worker process or
  • by using the “async pattern” in one way or another.

All this is, in my view, a genuine anti-pattern! Why? Well, it might work for processing that does not require access to the LightSwitch data (e.g. simply sending mails), but from the moment you need data access, the following issues arise:

  • How do you hook up a ServerApplicationContext on another thread?
  • Where is the response handled?

So, although you might succeed in not blocking the request-response pipeline of the LightSwitch server application, the overall performance of the LightSwitch server application is still impacted.

My view on this is simple: the request-response pipeline of the LightSwitch application (both OData and Web API) should remain lean, light and responsive.

That’s why we need something better for handling long running processes.

Let’s introduce WCF

The top-of-mind solution for a business engine that runs long-running processes is WCF.

Ok, great! But we adopted LightSwitch because we wanted to get rid of these hard-core .NET artifacts, so there is a huge risk of quickly getting involved in a highly complex architecture. That’s definitely a path I want to avoid!

Why WCF?

WCF allows you to dispatch a long-running process to another service very elegantly, in a completely “fire and forget” manner, by means of a “one way” service call. That’s something that is not possible with, for example, a REST-based Web API call. Furthermore, WCF also introduces the concept of message queues. I’ll elaborate more on this in later articles.
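
To make the “one way” idea concrete, here is a minimal sketch of what such a service contract could look like in C#. The contract and operation names (IBusinessEngine, ProcessOrderBatch, CalculateDiscount) are hypothetical placeholders, not part of the actual solution:

using System.ServiceModel;

// Hypothetical contract for the "business engine": one fire-and-forget entry point
// for long-running work, plus a classic request-response operation for inline calculations.
[ServiceContract]
public interface IBusinessEngine
{
    // IsOneWay = true: the caller gets control back as soon as the message is
    // accepted by the channel; no reply message is ever sent.
    [OperationContract(IsOneWay = true)]
    void ProcessOrderBatch(int batchId);

    // Regular request-response operation, for results the LightSwitch pipeline
    // needs before it can answer the client.
    [OperationContract]
    decimal CalculateDiscount(int customerId, decimal orderTotal);
}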

Another level of complexity that such a WCF service can introduce is data access. What will we do when the service operation itself needs access to the LightSwitch domain model? Will we set up our own Entity Framework model, our own repository pattern, etc., for this?

Well, I can imagine that in certain conditions we have no other option, but I tend to use the “LightSwitch goodness” as far as possible.

The overall architecture

Let’s first look at the architecture of a normal LightSwitch application:

[Diagram: basic LightSwitch architecture]

Pretty simple: the Silverlight or HTML5 client communicates, for its normal REST-based handling, over an OData pipeline, potentially complemented with a Web API pipeline handling “commands” for more RPC (remote procedure call) style calls.

Let’s introduce now our WCF service encapsulating the business engine:

[Diagram: LightSwitch architecture with the WCF business engine]

Ok, don’t get intimidated by this more complex picture.

Obviously, the Silverlight or HTML5 client still communicates with the LightSwitch server app running on IIS. It’s called “LightSwitch Server App Copy 1”. The reason why it’s called copy 1 will become clear in a minute or two.

When a long-running process needs to be invoked during the request-response pipeline, we call the WCF service. This WCF service might orchestrate other services, make a complex calculation, whatever.

But from the moment the WCF service needs access to the domain-model logic of the application, we make a Web API call to another instance of the LightSwitch application, which is an exact copy of LightSwitch Server App copy 1.

So, LightSwitch Server App copy 1 is our normal LightSwitch app and serves the online transaction processing.

LightSwitch Server App copy 2 will never be used for online transaction processing; it is only used in the scenario where the WCF service needs access to the LightSwitch domain model.

This is really the base for scaling out long running processes.

Anyhow, deploying a second instance of the LightSwitch app is very simple: one additional press on the publish button (after changing the name of the app). You could decide to deploy LightSwitch server app 2 in “service only” mode. Also deploying a WCF service is a piece of cake.

The WCF calls in the above picture are of two types: “fire and forget” and the more regular “request-response”. Fire and forget is the one we want to use for long-running processes, where the request-response pipeline doesn’t need an immediate “answer” from the service. For, let’s call them, “inline calculations”, where the result of the calculation matters for the LightSwitch response towards the HTML5/Silverlight client, we will use classic request-response WCF service calls.

Can we still debug all this in Visual Studio in an end-to-end way?

Indeed, we can still have a very nice debugging experience (in Visual Studio). The only thing that matters is adding the WCF service to the LightSwitch solution. Obviously, what will not work in debug mode is calling the second LightSwitch instance. But that’s not a big deal: the WCF service will simply call back to the running LightSwitch instance in debug. As a result, we’ll miss the scalability gain in debug, but we can debug the whole thing in an end-to-end manner, and that’s what really matters.

Depending on which WCF bindings we want to use, we’ll need to host the WCF service in debug either in IIS Express (which is used for the LightSwitch app) or in the local full IIS.

Full IIS will be needed when we use bindings based on message queues, or in the scenario where we want to use “netTcpBinding” as the WCF binding protocol. More on that later.

All this seems to generate a lot of additional “traffic”?

Not at all. Unless you deploy the service to another server, everything happens in “localhost” mode. If you are worried about speed, use the netTcpBinding, which is almost as fast as an in-process call.
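
As a rough sketch (the net.tcp address is an assumption, and IBusinessEngine is the hypothetical contract sketched earlier), a fire-and-forget call over netTcpBinding from server-side code could look like this:

using System.ServiceModel;

public static class BusinessEngineClient
{
    // Dispatches a one-way call over netTcpBinding. The call returns as soon as the
    // message is handed to the channel, so the LightSwitch request-response pipeline
    // is not blocked by the long-running work.
    public static void ProcessOrderBatch(int batchId)
    {
        var binding = new NetTcpBinding();
        var address = new EndpointAddress("net.tcp://localhost:8523/BusinessEngine"); // assumed endpoint
        var factory = new ChannelFactory<IBusinessEngine>(binding, address);
        var channel = factory.CreateChannel();
        try
        {
            channel.ProcessOrderBatch(batchId); // one-way operation, no reply expected
        }
        finally
        {
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}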

Is LightSwitch intended for this?

Up to you to judge, but we are not at all doing “exotic” things. The only “special thing” we use here is the ServerApplicationContext, when the WCF service calls the LightSwitch app. But by now that’s really mainstream functionality.
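
For illustration, here is a minimal sketch of the kind of Web API controller that LightSwitch Server App copy 2 could expose to the WCF service. The controller name, entity set (Customers), and property are hypothetical; ServerApplicationContext.CreateContext() is the standard LightSwitch way to reach the domain model from server-side code:

using System;
using System.Web.Http;
using Microsoft.LightSwitch.Server;

public class RecalculateController : ApiController
{
    // Called by the WCF business engine. Going through the server application
    // context means all LightSwitch business logic and validation still runs.
    [HttpPost]
    public IHttpActionResult Post(int customerId)
    {
        using (var context = ServerApplicationContext.CreateContext())
        {
            var customer = context.DataWorkspace.ApplicationData
                .Customers_SingleOrDefault(customerId); // hypothetical entity set
            if (customer == null)
                return NotFound();

            customer.LastRecalculatedOn = DateTime.Now; // hypothetical property
            context.DataWorkspace.ApplicationData.SaveChanges();
        }
        return Ok();
    }
}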

I like the approach because it allows us to use the LightSwitch application exactly for what it is meant for: doing online transaction processing. So, in my view, the WCF approach simplifies things, at least when you have service processes alongside the online processing.

What are the infrastructural options?

We can run the two instances of the LightSwitch application and the WCF service on one IIS server. But what we definitely need to do is give them all different application pools. By doing so, we can give lower priority to LightSwitch app 2, so that long-running processes will not have a performance impact on the online transaction processing. As you might know, IIS (Internet Information Services) is a complex and powerful piece of software. It really specializes in separating different processes into completely isolated sandboxes with advanced memory-management capabilities.

Another scalability option is, of course, to run the WCF service and the second LightSwitch instance on a dedicated IIS server. Then you will notice what real scalability means.

But isn’t our service logic now spread between the WCF service and the second LightSwitch instance?

Well, only when the service needs access to the LightSwitch domain model.

But the fact that things are spread out is not bad at all when you think about (unit) testability. Unit tests for the WCF service can simply mock out the data-access logic towards the LightSwitch app.

Of course, not all WCF service operations will need data access. Moreover, some service operations might need data access without using the LightSwitch server. For example, for more administrative tasks it might be better to use Entity Framework and connect directly to the database from the WCF service, or to use stored procedures.

What’s next?

A lot. :-)

In the next article, I’ll start by setting up a WCF service in a robust way in our LightSwitch solution. For the first articles we’ll skip both security and transaction management. We’ll take baby steps. The precise content isn’t entirely clear to me yet either.

No significant Entity Framework articles so far this week.

 


<Return to section navigation list>

Cloud Security, Compliance and Governance

No significant articles so far this week.

 


<Return to section navigation list>

Cloud Computing Events

‡ Steve Plank (@Plankytronixx) announced on 11/21/2013 Azure Bootcamp (UK): Building out Windows Azure Infrastructures and Services (Saturday, 14th December 2013, Central London):

Targeting the IT pro, DevOps practitioner or developer, this bootcamp will go into the intricacies of deploying a network infrastructure to Windows Azure. It will cover all of the various tools and ways of deployment (APIs, SDKs, CLI and PowerShell) as well as the considerations of what you can deploy. And, if you’re busy with fee-paying clients in the week, this is the session for you – it runs on a Saturday!

Register @ https://ukwaug-infra-dec2013.eventbrite.com/


<Return to section navigation list>

Other Cloud Computing Platforms and Services

‡ Matt Asay (@mjasay) asserted “One analyst is projecting Amazon's cloud business to be worth $50 billion by 2015. That valuation may not be nearly optimistic enough” in a deck for his Amazon Web Services Worth $50 Billion By 2015, And That May Be Too Low article of 11/20/2013 for the ReadWrite blog:

For a company that charges so little for its products and services, Amazon sure is worth a lot. Nowhere is this more true than in its Amazon Web Services (AWS) business unit. Despite dropping prices dozens of times over the past few years, one analyst is now projecting the value of the AWS business to top $50 billion by 2015, driven by success in its Marketplace.

At a mere 6X multiple on an estimated $8 billion in 2015 revenues, $50 billion may actually undervalue Amazon’s cloud business.

Selling Cloudy 1s And 0s

Other businesses command much higher revenue multiples than Evercore Partners gives AWS. Dropbox, for example, is seeking an $8 billion valuation, which represents a 34X multiple on 2012 revenues. And in Sillycon Valley, revenue-free Snapchat gets a 3,000,000,000X multiple.

And yet which of these can claim to be the future of enterprise computing, as AWS can?

AWS would almost certainly get a more bubble-esque valuation multiple if it operated more profitably. But as Amazon has done in retail, it generally favors pricing that gives it slim profits but a fat market share. It is pricing for future domination, in other words. So far this approach has ensured rapidly growing revenues, as Macquarie Capital estimates suggest:

Not that Amazon completely eschews profits. For example, like Apple and others it has introduced an app store of sorts, allowing vendors to sell digital services through the AWS Marketplace. As Evercore analyst Ken Sena explains in a research note: Amazon’s AWS “Marketplace [is] an important source of growth and margin expansion for AWS as Amazon collects 20% on the software billings, similar to its third party retail business with virtually all of it dropping to its bottom line.”

Sena goes on in his note to project Marketplace contributing $1 billion in the next two years, which represents 13% of projected AWS revenues, up from an estimated 5% today. Ultimately Evercore sees Marketplace accounting for 40% of AWS’ valuation. …

Read the rest of the article here.


• David Linthicum (@DavidLinthicum) asserted “Enterprises need multiple cloud options, which no single provider has -- and AWS competitors need to focus on its gaps” in his AWS may dominate, but it's not necessarily best for business article of 11/22/2013 for InfoWorld’s Cloud Computing blog:

Coming off its record-breaking re:Invent show last week, there isn't much you can find wrong with Amazon Web Services (AWS) these days. Don't think that hasn't been noticed by the other public cloud providers, which are looking at AWS's dominant market position as a scenario they never actually thought would happen.

These days, I hear a lot of snarky remarks from the other cloud providers and, in some cases, downright anger as the scope of that dominance becomes clear.

InfoWorld's Eric Knorr saved me some effort by providing key stats on the growth of Amazon Web Services:

  • "According to a recent Gartner report, AWS has five times the compute capacity of its nearest 14 cloud competitors combined. And it's growing that capacity at a prodigious rate."
  • "At re:Invent, James Hamilton, an AWS vice president and distinguished engineer, claimed that every day Amazon adds to AWS the equivalent of the infrastructure necessary to power its $7 billion e-commerce business."

In addition, last week, Morgan Stanley analyst Scott Devitt went as far as to predict that AWS's revenue could increase from its current $3 billion level to $30 billion by 2022, GigaOm reported.

Traditional enterprise technology providers such as Hewlett-Packard, IBM, Microsoft, and Oracle -- which believe they own the enterprise -- are now seeing AWS penetrate their enterprise territory. In response, they are rapidly ramping up public cloud infrastructure, spending billions of dollars on acquisitions, and building new cloud services. However, enterprises of all sizes keep moving steadily toward AWS -- and not paying much attention to the other options.

Most enterprises aren't really selecting the right public cloud provider. Instead, most are picking the most popular provider, which happens to be AWS. Effective enterprise cloud solutions should be made up of many types of cloud technologies, including private and public IaaS, private and public PaaS, cloud management platforms, and use-based accounting.

Rarely does a single provider offer all the required technology, which is why it doesn't make sense to pick one vendor, even one with AWS's scope. If a single provider does provide all the required technology, perhaps you don't understand the entirety of your needs.

I advise enterprises to focus on getting the right solution, not going for what seems to be popular. And I urge competitors to focus on providing the best cloud solution that they can, filling in the gaps in the marketplace rather than trying to replicate AWS. Trust me: There's plenty of cloud to go around.


Derek Lyon described Amazon EC2 Resource-Level Permissions for RunInstances on 11/20/2013:

I am happy to announce that Amazon EC2 now supports resource-level permissions for the RunInstances API. This release enables you to set fine-grained controls over the AMIs, Snapshots, Subnets, and other resources that can be used when creating instances and the types of instances and volumes that users can create when using the RunInstances API.

This release is part of a larger series of releases enabling resource-level permissions for Amazon EC2, so let’s start by taking a step back and looking at some of the features that we already support.

EC2 Resource-Level Permissions So Far
In July, we announced the availability of Resource-level Permissions for Amazon EC2. Using the initial set of APIs along with resource-level permissions, you could control which users were allowed to do things like start, stop, reboot, and terminate specific instances, or attach, detach or delete specific volumes.

Since then, we have continued to add support for additional APIs, bringing the total up to 19 EC2 APIs that currently support resource-level permissions, prior to today's release. The additional functionality that we have added allows you to control things like which users can modify or delete specific Security Groups, Route Tables, Network ACLs, Internet Gateways, Customer Gateways, or DHCP Options Sets.

We also provided the ability to set permissions based on the tags associated with resources. This in turn enabled you to construct policies that would, for example, allow a user the ability to modify resources with the tag “environment=development” on them, but not resources with the tag “environment=production” on them.

We have also provided a series of debugging tools, which enable you to test policies by making “DryRun” API calls and to view additional information about authorization errors using a new STS API, DecodeAuthorizationMessage.

Resource-level Permissions for RunInstances
Using EC2 Resource-level Permissions for RunInstances, you now have the ability to control both which resources can be referenced and used by a call to RunInstances, and which resources can be created as part of a call to RunInstances. This enables you to control the use of the following types of items:

  • The AMI used to run the instance
  • The Subnet and VPC where the instance will be located
  • The Availability Zone and Region where the instance and other resources will be created
  • Any Snapshots used to create additional volumes
  • The types of instances that can be created
  • The types and sizes of any EBS volumes created

You can now use resource-level permissions to limit which AMIs a user is permitted to use when running instances. In most cases, you will want to start by tagging the AMIs that you want to whitelist for your users with an appropriate tag, such as “whitelist=true.” (As part of the whitelisting process, you will also want to limit which users have permission to call the tagging APIs; otherwise a user can add or remove this tag.) Next, you can construct an IAM policy for the user that only allows them to use an AMI for running instances if it has your whitelist tag on it. This policy might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
       {
          "Effect": "Allow",
          "Action": "ec2:RunInstances",
          "Resource": [
             "arn:aws:ec2:region::image/ami-*"
          ],
          "Condition": {
             "StringEquals": {
                "ec2:ResourceTag/whitelist": "true"
             }
          }
       },
       {
          "Effect": "Allow",
          "Action": "ec2:RunInstances",
          "Resource": [
             "arn:aws:ec2:region:account:instance/*",
             "arn:aws:ec2:region:account:security-group/sg-1a2b3c4d"
          ]
       }
    ]
}

Or, if you want to grant a user the ability to run instances in a certain subnet, you can do this with a policy that looks like this:

{
    "Version": "2012-10-17",
    "Statement": [
       {
          "Effect": "Allow",
          "Action": "ec2:RunInstances",
          "Resource": [
             "arn:aws:ec2:region:account:subnet/subnet-1a2b3c4d"
          ]
       },
       {
          "Effect": "Allow",
          "Action": "ec2:RunInstances",
          "Resource": [
             "arn:aws:ec2:region:account:instance/*",
             "arn:aws:ec2:region::image/*",
             "arn:aws:ec2:region:account:security-group/sg-1a2b3c4d"
          ]
       }
    ]
}

If you want to set truly fine-grained permissions, you can construct policies that combine these elements. This enables you to set fine-grained policies that do things like allow a user to run only m3.xlarge instances in a certain Subnet (i.e. subnet-1a2b3c4d), using a particular Image (i.e. ami-5a6b7c8d) and a certain Security Group (i.e. sg-11a22b33). The applications for these types of policies are far-reaching and we are excited to see what you do with them.

Because permissions are applied at the API level, any users that the IAM policy is applied to will be restricted by the policy you set, including users who run instances using the AWS Management Console, the AWS CLI, or AWS SDKs.
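
As a hedged illustration (a sketch using the AWS SDK for .NET; the AMI, subnet, and security-group IDs are the placeholder values from the policies above), a RunInstances call made through the SDK is evaluated against the same policy, and a disallowed combination surfaces as an UnauthorizedOperation error:

using System;
using System.Collections.Generic;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

class RunInstancesSample
{
    static void Main()
    {
        // Credentials are resolved from the SDK's default credential profile/store.
        var ec2 = new AmazonEC2Client(RegionEndpoint.USEast1);

        var request = new RunInstancesRequest
        {
            ImageId = "ami-5a6b7c8d",        // AMI permitted by the policy
            InstanceType = "m3.xlarge",      // instance type permitted by the policy
            SubnetId = "subnet-1a2b3c4d",    // permitted subnet
            SecurityGroupIds = new List<string> { "sg-11a22b33" },
            MinCount = 1,
            MaxCount = 1
        };

        try
        {
            ec2.RunInstances(request);
            Console.WriteLine("RunInstances call succeeded.");
        }
        catch (AmazonEC2Exception ex)
        {
            // "UnauthorizedOperation" means the IAM policy denied the call; the encoded
            // message can be inspected with the STS DecodeAuthorizationMessage API.
            Console.WriteLine(ex.ErrorCode + ": " + ex.Message);
        }
    }
}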

You can find a complete list of the resource types that you can write policies for in the Permissions section of the EC2 API Reference. You can also find a series of sample policies and use cases in the IAM Policies section of the EC2 User Guide.

-- Derek Lyon, Principal Product Manager


<Return to section navigation list>