Friday, July 12, 2013

Windows Azure and Cloud Computing Posts for 7/8/2013+

Top news this week:

A compendium of Windows Azure, Service Bus, BizTalk Services, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

Updated 7/13/2013 with new articles marked.
  ‡ Updated 7/12/2013 with new articles marked ‡.
  • Updated 7/11/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:


Windows Azure Blob, Drive, Table, Queue, HDInsight and Media Services

Gaurav Mantri (@gmantri) explained Uploading Large File By Splitting Into Blocks In Windows Azure Blob Storage Using Windows Azure SDK For PHP in a 7/13/2013 post:

In this blog post, we will see how you can upload a large blob in blob storage using the Windows Azure SDK for PHP. I must state that I don't know anything about PHP and did this exercise in order to help somebody out on StackOverflow. What helped me in the process is the excellent documentation on PHP's website and my knowledge of how the Windows Azure Blob Storage Service REST API works.

What I realized during this process is that it's fun to get out of one's comfort zone (.Net for me) once in a while. It's extremely rewarding when you accomplish something.

Since I'm infamous for writing really long blog posts, if you're interested in seeing the final code, either scroll down to the bottom of this post or head over to StackOverflow. Otherwise, please read on.

Since we're uploading really large files, the procedure is to split the file into chunks (blocks), upload these chunks, and then commit them.

Getting Started

I'm assuming you have installed the Windows Azure SDK for PHP. If not, you can download it from here: http://www.windowsazure.com/en-us/downloads/?sdk=php. This SDK depends on some external packages. For this blog post, the only thing we need to install is the HTTP_Request2 PEAR package, which you can download from here: http://pear.php.net/package/HTTP_Request2.

Add Proper Classes

We just have to ensure that we have referenced all the classes we need in our code

<?php 
require_once 'WindowsAzure/WindowsAzure.php';
use WindowsAzure\Common\ServicesBuilder;
use WindowsAzure\Common\ServiceException;
use WindowsAzure\Blob\Models\Block;
use WindowsAzure\Blob\Models\BlobBlockType;
?>
Get Azure Things in place

This would mean creating an instance of the "BlobRestProxy" class in our code. For the purpose of this blog post, I'm uploading a file to the storage emulator.

	$connectionString = "UseDevelopmentStorage=true";
	$instance = ServicesBuilder::getInstance();
	$blobRestProxy = $instance -> createBlobService($connectionString);
	$containerName = "[mycontainer]";
	$blobName = "[myblobname]";

Here are the operations we need to perform:

Read file in chunks

To read the file in chunks, first we'll define the chunk size

define('CHUNK_SIZE', 1024*1024);//Block Size = 1 MB

Then we’ll get the file handler by specifying the file name and opening the file

$handler = fopen("[full file path]", "r");

and now we’ll read the file in chunks

	while (!feof($handler))
	{
		$data = fread($handler, CHUNK_SIZE);
	}
	fclose($handler); 
Prepare blocks

Before this, there are a few things I want to mention about blocks:

  • A file can be split into a maximum of fifty thousand (50,000) blocks.
  • Each block must be assigned a unique id (block id).
  • All block ids must have the same length. I would encourage you to read my previous blog post for more details regarding this.
  • When sending to Windows Azure, each block id must be Base64 encoded.

Based on this, what we’re going to do is assign each block an incrementing integer value and to keep block id length the same, we’ll pad it with zeros so that the length of the block id is 6 characters.

		$counter = 1;
		$blockId = str_pad($counter, 6, "0", STR_PAD_LEFT);

Then we’ll create an instance of “Block” class and add that block id there with type as “Uncommitted”.

		$block = new Block();
		$block -> setBlockId(base64_encode($blockId));
		$block -> setType("Uncommitted");

Then we add this block to an array. This array will be used in the final step for committing the blocks (chunks).

		$blockIds = array();
		array_push($blockIds, $block);
Upload Blocks

Now that we have the chunk content ready, we just need to upload it. We will make use of “createBlobBlock” function in “BlobRestProxy” class to upload the block.

		$blobRestProxy -> createBlobBlock($containerName, $blobName, base64_encode($blockId), $data);

We would need to do this for each and every block we wish to upload.

Committing Blocks

This is the last step. Once all the blocks have been uploaded, we need to tell Windows Azure Blob Storage to create a blob by adding all blocks we've uploaded so far. We will make use of the "commitBlobBlocks" function, again in the "BlobRestProxy" class, to commit the blocks.

	$blobRestProxy -> commitBlobBlocks($containerName, $blobName, $blockIds);

That’s it! You should be able to see the blob in your blob storage. It’s that easy.

Complete Code

Here’s the complete code:

<?php 
require_once 'WindowsAzure/WindowsAzure.php';
use WindowsAzure\Common\ServicesBuilder;
use WindowsAzure\Common\ServiceException;
use WindowsAzure\Blob\Models\Block;
use WindowsAzure\Blob\Models\BlobBlockType;
define('CHUNK_SIZE', 1024*1024);//Block Size = 1 MB
try {
	
	$connectionString = "UseDevelopmentStorage=true";
	$instance = ServicesBuilder::getInstance();
	$blobRestProxy = $instance -> createBlobService($connectionString);
	$containerName = "[mycontainer]";
	$blobName = "[myblobname]";
	$handler = fopen("[full file path]", "r");
	$counter = 1;
	$blockIds = array();
	while (!feof($handler))
	{
		$blockId = str_pad($counter, 6, "0", STR_PAD_LEFT);
		$block = new Block();
		$block -> setBlockId(base64_encode($blockId));
		$block -> setType("Uncommitted");
		array_push($blockIds, $block);
		$data = fread($handler, CHUNK_SIZE);
		echo " \n ";
		echo " -----------------------------------------";
		echo " \n ";
		echo "Read " . strlen($data) . " of data from file";
		echo " \n ";
		echo " -----------------------------------------";
		echo " \n ";
		echo "Uploading block #: " . $blockId . " into blob storage. Please wait.";
		echo " \n ";
		echo " -----------------------------------------";
		echo " \n ";
		$blobRestProxy -> createBlobBlock($containerName, $blobName, base64_encode($blockId), $data);
		echo "Uploaded block: " . $blockId . " into blob storage.";
		echo " \n ";
		echo " -----------------------------------------";
		echo " \n ";
		$counter = $counter + 1;
	}
	fclose($handler); 
	echo "Now committing block list. Please wait.";
	echo " \n ";
	echo " -----------------------------------------";
	echo " \n ";
	$blobRestProxy -> commitBlobBlocks($containerName, $blobName, $blockIds);
	echo "Blob created successfully.";
}
catch(Exception $e){
    // Handle exception based on error codes and messages.
    // Error codes and messages are here: 
    // http://msdn.microsoft.com/en-us/library/windowsazure/dd179439.aspx
    $code = $e->getCode();
    $error_message = $e->getMessage();
    echo $code.": ".$error_message."<br />";
}
?>
Summary

As you saw, it is insanely easy to upload a large file to blob storage using the PHP SDK. I didn't (and still don't) know anything about PHP, but I was able to put this code together in a matter of hours. I think if you're a PHP developer, you should be able to do so in minutes. I hope you've found this information useful. As always, if you find any issues with the post, please let me know and I'll fix them ASAP.


Joe Giardino of the Windows Azure Storage Team posted Introducing Storage Client Library 2.1 RC for .NET and Windows Phone 8 on 7/11/2013:

We are pleased to announce the public availability of the 2.1 Release Candidate (RC) build for the storage client library for .NET and Windows Phone 8. The 2.1 release includes expanded feature support, which this blog will detail.

Why RC?

We have spent significant effort in releasing the storage clients on a more frequent cadence as well as becoming more responsive to client feedback. As we continue that effort, we wanted to provide an RC of our next release, so that you can provide us feedback that we might be able to address prior to the "official" release. Getting your feedback is the goal of this release candidate, so please let us know what you think.

What’s New?

This release includes a number of new features, many of which have come directly from client feedback (so please keep it coming), which are detailed below.

Async Task Methods

Each public API now exposes an Async method that returns a task for a given operation. Additionally, these methods support pre-emptive cancellation via an overload which accepts a CancellationToken. If you are running under .NET 4.5, or using the Async Targeting Pack for .NET 4.0, you can easily leverage the async / await pattern when writing your applications against storage.
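
For illustration only, here is a minimal sketch of what this async surface looks like when uploading a blob with await and a CancellationToken; the container, blob, and file names are placeholders of mine, and the exact overload shapes are assumed from the description above rather than copied from the RC.

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class AsyncUploadSample
{
    // Uploads a local file to a block blob using the Task-returning methods,
    // cancelling the operation if it has not completed within 30 seconds.
    static async Task UploadWithCancellationAsync()
    {
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("mycontainer");
        await container.CreateIfNotExistsAsync();

        CloudBlockBlob blob = container.GetBlockBlobReference("myblob.txt");

        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)))
        using (FileStream stream = File.OpenRead(@"C:\data\myfile.txt"))
        {
            // The CancellationToken overload enables pre-emptive cancellation.
            await blob.UploadFromStreamAsync(stream, cts.Token);
        }
    }
}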

Table IQueryable

In 2.1 we are adding IQueryable support for the Table Service layer on desktop and phone. This will allow users to construct and execute queries via LINQ similar to WCF Data Services, however this implementation has been specifically optimized for Windows Azure Tables and NoSQL concepts. The snippet below illustrates constructing a query via the new IQueryable implementation:

var query = from ent in currentTable.CreateQuery<CustomerEntity>()
            where ent.PartitionKey == "users" && ent.RowKey == "joe"
            select ent;

 

The IQueryable implementation transparently handles continuations, and has support to add RequestOptions, OperationContext, and client-side EntityResolvers directly into the expression tree. To begin using this, please add a using statement for the Microsoft.WindowsAzure.Storage.Table.Queryable namespace and construct a query via the CloudTable.CreateQuery<T>() method. Additionally, since this makes use of existing infrastructure, optimizations such as IBufferManager, Compiled Serializers, and Logging are fully supported.
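
To make the query snippet above compile end to end, here is a self-contained sketch; the CustomerEntity type, table name, and development-storage connection are placeholders of mine and not part of the original post.

using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;
using Microsoft.WindowsAzure.Storage.Table.Queryable;

// Placeholder entity used by the LINQ query.
public class CustomerEntity : TableEntity
{
    public string Email { get; set; }
}

class TableQueryableSample
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudTableClient tableClient = account.CreateCloudTableClient();
        CloudTable currentTable = tableClient.GetTableReference("customers");
        currentTable.CreateIfNotExists();

        // Construct the query via CreateQuery<T>(); continuation tokens are
        // handled transparently while the results are enumerated.
        var query = from ent in currentTable.CreateQuery<CustomerEntity>()
                    where ent.PartitionKey == "users" && ent.RowKey == "joe"
                    select ent;

        foreach (CustomerEntity customer in query)
        {
            Console.WriteLine("{0} {1}", customer.RowKey, customer.Email);
        }
    }
}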

Buffer Pooling

For high scale applications, Buffer Pooling is a great strategy to allow clients to re-use existing buffers across many operations. In a managed environment such as .NET, this can dramatically reduce the number of cycles spent allocating and subsequently garbage collecting semi-long lived buffers.

To address this scenario each Service Client now exposes a BufferManager property of type IBufferManager. This property will allow clients to leverage a given buffer pool with any associated objects to that service client instance. For example, all CloudTable objects created via CloudTableClient.GetTableReference() would make use of the associated service client's BufferManager. The IBufferManager is patterned after the BufferManager in System.ServiceModel.dll to allow desktop clients to easily leverage an existing implementation provided by the framework. (Clients running on other platforms such as WinRT or Windows Phone may implement a pool against the IBufferManager interface.)
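
As a rough sketch of how this could be wired up, the adapter below exposes the WCF BufferManager (System.ServiceModel.dll) through the storage client's IBufferManager interface and assigns it to a blob client; the interface member names (TakeBuffer, ReturnBuffer, GetDefaultBufferSize) and the settable BufferManager property are assumed from the description above.

using System.ServiceModel.Channels;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Thin adapter over the WCF BufferManager; requires a reference to System.ServiceModel.dll.
public class WcfBufferManagerAdapter : IBufferManager
{
    private readonly BufferManager manager;
    private readonly int defaultBufferSize;

    public WcfBufferManagerAdapter(BufferManager manager, int defaultBufferSize)
    {
        this.manager = manager;
        this.defaultBufferSize = defaultBufferSize;
    }

    public byte[] TakeBuffer(int bufferSize)
    {
        return this.manager.TakeBuffer(bufferSize);
    }

    public void ReturnBuffer(byte[] buffer)
    {
        this.manager.ReturnBuffer(buffer);
    }

    public int GetDefaultBufferSize()
    {
        return this.defaultBufferSize;
    }
}

class BufferPoolingSample
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // 64 KB buffers drawn from a pool of up to 16 MB; every container and blob
        // object created from this client instance will share the pool.
        const int bufferSize = 64 * 1024;
        blobClient.BufferManager = new WcfBufferManagerAdapter(
            BufferManager.CreateBufferManager(16 * 1024 * 1024, bufferSize), bufferSize);
    }
}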

Multi-Buffer Memory Stream

During the course of our performance investigations we have uncovered a few performance issues with the MemoryStream class provided in the BCL (specifically regarding Async operations, dynamic length behavior, and single byte operations). To address these issues we have implemented a new Multi-Buffer memory stream which provides consistent performance even when length of data is unknown. This class leverages the IBufferManager if one is provided by the client to utilize the buffer pool when allocating additional buffers. As a result, any operation on any service that potentially buffers data (Blob Streams, Table Operations, etc.) now consumes less CPU, and optimally uses a shared memory pool.

.NET MD5 is now default

Our performance testing highlighted a slight performance degradation when utilizing the FISMA-compliant native MD5 implementation compared to the built-in .NET implementation. As such, for this release the .NET MD5 is now used by default; any clients requiring FISMA compliance can re-enable the native implementation as shown below:

CloudStorageAccount.UseV1MD5 = false;

Client Tracing

The 2.1 release implements .NET tracing, allowing users to enable log information regarding request execution and REST requests (See below for a table of what information is logged). Additionally, Windows Azure Diagnostics provides a trace listener that can redirect client trace messages to the WADLogsTable if users wish to persist these traces to the cloud.

To enable tracing in .NET you must add a trace source for the storage client to the app.config and set the verbosity

 

<system.diagnostics>
  <sources>
    <source name="Microsoft.WindowsAzure.Storage">
      <listeners>
        <add name="myListener"/>
      </listeners>
    </source>
  </sources>
  <switches>
    <add name="Microsoft.WindowsAzure.Storage" value="Verbose" />
  </switches>
</system.diagnostics>

The application is now set to log all trace messages created by the storage client up to the Verbose level. However, if a client wishes to enable logging only for specific clients or requests they can further configure the default logging level in their application by setting OperationContext.DefaultLogLevel and then opt-in any specific requests via the OperationContext object:

// Disable default logging
OperationContext.DefaultLogLevel = LogLevel.Off;

// Configure a context to track my upload and set logging level to verbose
OperationContext myContext = new OperationContext() { LogLevel = LogLevel.Verbose };

blobRef.UploadFromStream(stream, myContext);

New Blob APIs

In 2.1 we have added Blob Text, File, and Byte Array APIs based on feedback from clients. Additionally, Blob Streams can now be opened, flushed, and committed asynchronously via new Blob Stream APIs.
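
A short, hedged sketch of what these convenience calls look like in use; the method names (UploadText, DownloadText, UploadFromFile, UploadFromByteArray) are taken from the feature description above, and the container, blob, and file path are placeholders.

using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class NewBlobApiSample
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("mycontainer");
        container.CreateIfNotExists();

        CloudBlockBlob blob = container.GetBlockBlobReference("notes.txt");

        // Text APIs
        blob.UploadText("Hello from the 2.1 client");
        string text = blob.DownloadText();

        // File API
        blob.UploadFromFile(@"C:\data\notes.txt", FileMode.Open);

        // Byte array API
        byte[] bytes = Encoding.UTF8.GetBytes(text);
        blob.UploadFromByteArray(bytes, 0, bytes.Length);
    }
}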

New Range Based Overloads

In 2.1, Blob upload APIs include an overload which allows clients to upload only a given range of the byte array or stream to the blob. This feature allows clients to avoid potentially pre-buffering data prior to uploading it to the storage service. Additionally, there are new download range APIs for both streams and byte arrays that allow efficient, fault-tolerant range downloads without the need to buffer any data on the client side.
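
A minimal sketch of the range-based calls described above, assuming an UploadFromByteArray(buffer, index, count) overload and a DownloadRangeToByteArray(target, index, blobOffset, length) method; the sizes and offsets are illustrative only.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class RangeOverloadSample
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("mycontainer");
        container.CreateIfNotExists();
        CloudBlockBlob blob = container.GetBlockBlobReference("ranges.bin");

        // Upload only a 2 MB slice of an existing buffer, starting at offset 1024,
        // without copying it into a separate array first.
        byte[] source = new byte[4 * 1024 * 1024];
        new Random().NextBytes(source);
        blob.UploadFromByteArray(source, 1024, 2 * 1024 * 1024);

        // Download just the first 512 KB of the blob into a pre-allocated buffer.
        byte[] target = new byte[512 * 1024];
        int bytesRead = blob.DownloadRangeToByteArray(target, 0, 0, target.Length);
        Console.WriteLine("Downloaded {0} bytes", bytesRead);
    }
}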

IgnorePropertyAttribute

When persisting POCO objects to Windows Azure Tables, in some cases clients may wish to omit certain client-only properties. In this release we are introducing the IgnorePropertyAttribute to allow clients an easy way to simply ignore a given property during serialization and de-serialization of an entity. The following snippet illustrates how to ignore the FirstName property of an entity via the IgnorePropertyAttribute:

public class Customer : TableEntity
{
    [IgnoreProperty]
    public string FirstName { get; set; }
}

Compiled Serializers

When working with POCO types previous releases of the SDK relied on reflection to discover all applicable properties for serialization / de-serialization at runtime. This process was both repetitive and expensive computationally. In 2.1 we are introducing support for Compiled Expressions which will allow the client to dynamically generate a LINQ expression at runtime for a given type. This allows the client to do the reflection process once and then compile a lambda at runtime which can now handle all future read and writes of a given entity type. In performance micro-benchmarks this approach is roughly 40x faster than the reflection based approach computationally.

All compiled expressions for read and write are held in static concurrent dictionaries on TableEntity. If you wish to disable this feature, simply set TableEntity.DisableCompiledSerializers = true;

Easily Serialize 3rd Party Objects

In some cases clients wish to serialize objects in which they do not control the source, for example framework objects or objects from 3rd party libraries. In previous releases clients were required to write custom serialization logic for each type they wished to serialize. In the 2.1 release we are exposing the core serialization and de-serialization logic for any CLR type via the static TableEntity.[Read|Write]UserObject methods. This allows clients to easily persist and read back entity objects for types that do not derive from TableEntity or implement the ITableEntity interface. This pattern can also be especially useful when exposing DTO types via a service as the client will no longer be required to maintain two entity types and marshal between them.
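
The sketch below shows one plausible way these static helpers could be combined with a DynamicTableEntity; the GpsReading type is a placeholder, and the exact signatures of TableEntity.WriteUserObject / ReadUserObject are assumed rather than confirmed, so treat this as illustrative only.

using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// A third-party type we do not control and cannot derive from TableEntity.
public class GpsReading
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

class ThirdPartySerializationSample
{
    static void Main()
    {
        var reading = new GpsReading { Latitude = 47.6, Longitude = -122.3 };
        var context = new OperationContext();

        // Flatten the POCO into table properties...
        IDictionary<string, EntityProperty> properties =
            TableEntity.WriteUserObject(reading, context);

        // ...wrap them in a DynamicTableEntity so they can be inserted as usual...
        var entity = new DynamicTableEntity("readings", "001") { Properties = properties };

        // ...and later rehydrate the POCO from properties read back from the table.
        var restored = new GpsReading();
        TableEntity.ReadUserObject(restored, entity.Properties, context);
    }
}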

Numerous Performance Improvements

As part of our ongoing focus on performance we have included numerous performance improvements across the APIs including parallel blob upload, table service layer, blob write streams, and more. We will provide more detailed analysis of the performance improvements in an upcoming blog post.

Windows Phone

The Windows Phone client is based on the same source code as the desktop client, however there are 2 key differences due to platform limitations. The first is that the Windows Phone library does not expose synchronous methods in order to keep applications fast and fluid. Additionally, the Windows Phone library does not provide MD5 support as the platform does not expose an implementation of MD5. As such, if your scenario requires it, you must validate the MD5 at the application layer. The Windows Phone library is currently in testing and will be published in the coming weeks. Please note that it will only be compatible with Windows Phone 8, not 7.x.

Summary

We have spent considerable effort in improving the storage client libraries in this release. We welcome any feedback you may have in the comments section below, the forums, or GitHub.


• Eric D. Boyd (@EricDBoyd) described My Windows Azure Data Services Session at WPC 2013 in a 7/8/2013 post:

This afternoon, I have the privilege of joining Scott Klein and Joanne Marone on stage at the 2013 Microsoft Worldwide Partner Conference. If you want to learn more about key Windows Azure scenarios that we see with our customers and how Windows Azure Data Services help you drive more opportunities in these key scenarios, you will not want to miss this interactive session. You will have the chance to get involved and ask questions. The when, where and what for this session is below.

Drive Opportunities with Windows Azure Data Services

When: Monday, July 8th @ 4:30 PM
Session Code: SC27i
Room: GRBCC: 372 A

Create new business opportunities with Windows Azure, which enables partners to mix-and-match cloud-based data management services to reimagine application design and IT solutions. In this session, you will be exposed to a variety of real-world scenarios that can be used to solve today’s real-world challenges and, based on Microsoft experience, you’ll see where the hidden revenue potential lies.




<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

‡ Mike Taulty (@mtaulty) described Azure Mobile Services–Custom APIs and URI Paths in a 7/12/2013 post:

In the previous post where I was taking a look at Custom URIs in Azure Mobile Services I’d got to the point where I was defining server-side handlers in JavaScript for specific HTTP verbs.

For instance, I might have a service at http://whatever.azure-mobile.net/myService (I don’t have this service by the way) and I might want to accept actions like;

  • GET /myService
  • GET /myService/1
  • POST /myService
  • PATCH /myService/2
  • DELETE /myService/3

In my previous post I didn’t know how to do this – that is, I didn’t know how I could enable Mobile Services such that I could add something like a resource identifier as the final part of the path and so I ended up using a scheme something like:

  • GET /myService?id=1
  • DELETE /myService?id=2

and so on.

Since that last post, I’ve been educated and I learned a little bit more by asking the Mobile Services team. I suspect that my ignorance largely comes from not having a real understanding of node.js and also of the express.js library that seems to underpin Mobile Services. Without that understanding of the surrounding context I feel I’m poking around in the dark a little when it comes to figuring out how to tackle things that aren’t immediately obvious.

Since asking the Mobile Services team, I’ve realised that the answer was already “out there” in the form of this blog post which covers similar ground to mine but comes from a more trusted source so you should definitely take a look at that and the section called “Routing” is the piece that I was missing (along with this link to the routing feature from express.js).

Even with that already “out there”, I wanted to play with this myself so I set up a service called whatever.azure-mobile.net and added a custom API to it called myService.

When you do that via the Azure management portal, some nice bit of code somewhere throws in a little template example for you which looks like this;

exports.post = function(request, response) {
    // Use "request.service" to access features of your mobile service, e.g.:
    //   var tables = request.service.tables;
    //   var push = request.service.push;

    response.send(200, "Hello World");
};

and what I hadn’t appreciated previously was that this is in some ways a shorthand way of using the register function from express.js as in a longer-hand version might be;

exports.register = 
    function(api)
    {
        console.log("Hello");
        api.get('*', 
            function(request, response) 
            {    
                response.send(200, "Hello World");
            });
    };

Now, when I first set about trying to use that, I crafted a request in Fiddler;

and I got a 404 until I realised that I needed to append the trailing slash - I’m unsure of exactly how that works;

but this does now mean that I can pick up additional paths from the URI and do something with them and there’s even some pattern matching built-in for me which is nice. For instance;

exports.register = 
    function(app)
    {
        app.get('/:id',
            function(request, response)
            {
                response.send(200, "Getting record id " + request.params.id);
            });
            
        app.get('*', 
            function(request, response) 
            {    
                response.send(200, "Getting a bunch of data");
            });
            
        app.delete('/:id',
            function(request,response)
            {
                console.log("Deleting record " + request.params.id);
                response.send(204);  
            });     
    };

and that’s the sort of thing that I was looking for in the original post – from trying this out empirically, it does seem to be important to add the more specific routes prior to the less specific routes. That seemed to make a difference to handling /myService/ versus /myService/1.


• Glenn Gailey (@ggailey777) described Great new samples and tutorials for Mobile Services integration scenarios in a 7/11/2013 post:

I just wanted to let everyone know that Paolo Salvatori, one of the rock stars on the Customer Advisory Team (CAT), has just published a suite of integration tutorials for Windows Azure Mobile Services, including sample project code, covering the following key Windows Azure enterprise and LOB scenarios:

This is just super content that Mobile Services has been needing for a while. Great job Salvatori!

See Paolo’s article at the end of this section.


Jim O’Neil (@jimoneil) produced a Practical Azure #24: Windows Azure Mobile Services video on 7/8/2013:

Well, this one went a bit longer than most, but only because Windows Azure Mobile Services has so many cool features that are relevant to just about anyone building mobile applications today - regardless of platform. Join me for this pre-Build 2013 session, in which I build a Windows Phone 8 application complete with data, authentication, and push notifications hosted on the Windows Azure Mobile Services Backend-as-a-Service offering.

Note too that there were some significant announcements at Build 2013 regarding Windows Azure Mobile Services, so be sure to check out these sessions as well:

  1. Mobile Services - Soup to Nuts
  2. Building Cross-Platform Apps with Windows Azure Mobile Services
  3. Protips for Windows Azure Mobile Services
  4. Connected Windows Phone Apps Made Easy with Azure Mobile Services
  5. Securing Windows Store Applications and REST Services with Active Directory
  6. Going Live and Beyond with Windows Azure Mobile Services

Download: MP3 | MP4 (iPod, Zune HD) | High Quality MP4 (iPad, PC) | Mid Quality MP4 (WP7, HTML5) | High Quality WMV (PC, Xbox, MCE)

And here are the Handy Links for this episode:


Steven Martin (@stevemar_msft) announced a new Premium offer for Windows Azure SQL Database in a 7/8/2013 post to the Windows Azure Team blog:

… To further advance Windows Azure’s platform services for business-critical applications in the cloud, we are excited to announce a new Premium offer for Windows Azure SQL Database.  Available via a limited preview in a few weeks, the Premium offer will help deliver greater performance for cloud applications by dedicating a fixed amount of reserved capacity for a database including its built-in secondary replicas. [Emphasis added.] …

See the Windows Azure Access Control, Active Directory, and Identity section below for details.


The SQL Server Team (@SQLServer) posted A Closer Look at the Premium Offer for Windows Azure SQL Database on 7/9/2013:

As part of the main keynote yesterday at the Worldwide Partner Conference (WPC) in Houston, Texas, Satya Nadella, Server and Tools President, discussed partner and customer innovations around modern business applications built with the Windows Azure platform. As part of this cloud momentum, Satya announced a new Premium offer for Windows Azure SQL Database that delivers more predictable performance. With the limited preview for this new Premium database offer coming in a few weeks, we wanted to take a closer look at the additional value SQL Database will deliver.

This new capability enables both customers and Service Integrator (SI), ISV, and CSV partners to raise the bar on the types of modern business application services and products they can build and offer to customers. The Premium offer for Windows Azure SQL Database will help deliver more powerful and predictable performance for cloud applications by dedicating a fixed amount of reserved capacity for a database including its built-in secondary replicas. Reserved capacity is ideal for cloud-based applications with the following requirements:

  • Sustained resource demand
  • Many concurrent requests
  • Predictable latency

As part of our product testing, customers with business-critical cloud applications are already experiencing tremendous value from using the Premium offer with reserved capacity:

  • MYOB: “The Premium offer from Windows Azure SQL Database plays a valuable role in MYOB’s cloud solutions offering. We moved across only a few weeks ago and are already enjoying the benefits of a more insulated infrastructure environment. Our many AccountRight Live clients rely on Windows Azure, where the reservation service sits, as the first step in accessing the cloud file that stores their critical business financial data. It supports significant traffic – our clients log in thousands of times each day.” James Scollay - MYOB GM, Business Division
  • easyJet: “We use Windows Azure SQL Database for our seating selection tool online at easyJet. The reserved capacity with the new SQL Database Premium offer has enabled us to add an extra facet of performance predictability during important periods of peak workload where customer demands are high. This reliable, business-grade solution has also allowed us to gather telemetry against a known, fixed resource so we can better benchmark and capacity plan our solutions moving forward.”  Bert Craven, Enterprise Architect Manager

Initially, we will offer two reservation sizes: P1 and P2. P1 offers better performance predictability than the SQL Database Web and Business editions, and P2 offers roughly twice the performance of P1 and is suitable for applications with greater peaks and sustained workload demands. Premium databases are billed based on the Premium database reservation size and storage volume of the database.

Premium Database:


Storage: $0.095 per GB per month (prorated daily)

If you are as excited as we are and want a heads up when the preview is available, sign up for Microsoft Cloud OS Bits Alert—we’ll send you an email when signups can begin! Preview is initially limited and subscriptions on a free trial are not eligible for a Premium database.


Paolo Salvatore (@babosbird) explained How to integrate a Mobile Service with a SOAP Service Bus Relay Service in a 7/8/2013 post:

Introduction

This sample demonstrates how to integrate a Windows Azure Mobile Service with a line-of-business application running on-premises via Service Bus Relayed Messaging. The Access Control Service is used to authenticate Mobile Services against the underlying application.

Scenario

A mobile service receives CRUD operations from an HTML5/JavaScript and Windows Store app, but instead of accessing data from a SQL Azure database, it invokes a LOB application running in a corporate data center. The LOB system uses a WCF service layer to expose its functionality via SOAP to external applications. In particular, the WCF service uses a BasicHttpRelayBinding endpoint to expose its functionality via a Service Bus Relay Service. The endpoint is configured to authenticate incoming calls using a relay access token issued by ACS. The WCF service accesses data from the ProductDb database hosted by a local instance of SQL Server 2012. In particular, the WCF service uses the new asynchronous programming feature provided by ADO.NET 4.5 to access data from the underlying database.

Architecture

The following diagram shows the architecture of the solution.


Message Flow

  1. The HTML5/JavaScript site or Windows Store app sends a request to a user-defined custom API of a Windows Azure Mobile Service via HTTPS. The HTML5/JS application uses the invokeApi method exposed by the HTML5/JavaScript client for Windows Azure Mobile Services to call the mobile service. Likewise, the Windows Store app uses the InvokeApiAsync method exposed by the MobileServiceClient class (a C# sketch of this client-side call appears after this list). The custom API implements CRUD methods to create, read, update and delete data. The HTTP method used by the client application to invoke the user-defined custom API depends on the invoked operation:
    • Read: GET method
    • Add: POST method
    • Update: POST method
    • Delete: DELETE method
  2. The custom API sends a request to the Access Control Service to acquire a security token necessary to be authenticated by the underlying Service Bus Relay Service. The client uses the OAuth WRAP Protocol to acquire a security token from ACS. In particular, the server script sends a request to ACS using a HTTPS form POST. The request contains the following information:
    • wrap_name: the name of a service identity within the Access Control namespace of the Service Bus Relay Service (e.g. owner)
    • wrap_password:  the password of the service identity specified by the wrap_name parameter.
    • wrap_scope: this parameter contains the relying party application realm. In our case, it contains the http base address of the Service Bus Relay Service (e.g. http://paolosalvatori.servicebus.windows.net/)
  3. ACS issues and returns a security token. For more information on the OAuth WRAP Protocol, see How to: Request a Token from ACS via the OAuth WRAP Protocol.
  4. The mobile service user-defined custom API extracts the wrap_access_token from the security token issued by ACS. The custom API uses a different function to serve the request depending on the HTTP method and parameters sent by the client application:
    • getProduct: this function is invoked when the HTTP method is equal to GET and the querystring contains a productid or id parameter. This method calls the GetProduct operation exposed by the underlying WCF service.
    • getProducts: this function is invoked when the HTTP method is equal to GET and the querystring does not contain any parameter. This method calls the GetProducts operation exposed by the underlying WCF service.
    • getProductsByCategory: this function is invoked when the HTTP method is equal to GET and the querystring contains a category parameter. This method calls the GetProductsByCategory operation exposed by the underlying WCF service.
    • addProduct: this function is invoked when the HTTP method is equal to POST and the request body contains a new product in JSON format. This method calls the AddProduct operation exposed by the underlying WCF service.
    • updateProduct: this function is invoked when the HTTP method is equal to PUT or PATCH and the request body contains an existing product in JSON format. This method calls the UpdateProduct operation exposed by the underlying WCF service.
    • deleteProduct: this function is invoked when the HTTP method is equal to DELETE and the querystring contains a productid or id parameter. This method calls the DeleteProduct operation exposed by the underlying WCF service.
    Each of the above functions performs the following actions:
    • Creates a SOAP envelope to invoke the underlying WCF service. In particular, the Header contains a RelayAccessToken element which in turn contains the wrap_access_token returned by ACS in base64 format. The Body contains the payload for the call. The node-uuid Node.js module is used to generate a unique id for the security token at each call. This module is downloaded from Git using NPM (Node Package Manager) and then uploaded to the mobile service again using Git. See below to see more on how to accomplish this task.
    • Uses the https Node.js module to send the SOAP envelope to the Service Bus Relay Service.
  5. The Service Bus Relay Service validates and removes the security token, then forwards the request to one of the listeners hosting the WCF service.
  6. The WCF service uses the new asynchronous programming feature provided by ADO.NET 4.5 to access data from the ProductDb database. For demo purpose, the WCF service runs in a console application, but the sample can easily be modified to host the service in IIS.
  7. The WCF service returns a response message to the Service Bus Relay Service.
  8. The Service Bus Relay Service forwards the message to the mobile service. The custom API performs the following actions:
    • Uses the xml2js Node.js module to change the format of the response SOAP message from XML to JSON.
    • Flattens the resulting JSON object to eliminate unnecessary arrays.
    • Extracts data from the flattened representation of the SOAP envelope and creates a response message as a JSON object.
  9. The mobile service returns data in JSON format to the client application.
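
As referenced in step 1 above, here is a minimal sketch of how the Windows Store client side of this flow might call the products custom API through MobileServiceClient.InvokeApiAsync; the service URL, application key, Product DTO, and the exact overloads used are assumptions for illustration, not taken from Paolo's sample.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

// Placeholder DTO matching the JSON returned by the 'products' custom API.
public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

class ProductsClient
{
    // The mobile service URL and application key are placeholders.
    private static readonly MobileServiceClient Client =
        new MobileServiceClient("https://yourservice.azure-mobile.net/", "YOUR_APPLICATION_KEY");

    // GET /api/products?category=Bikes -> served by getProductsByCategory on the server.
    public static Task<IEnumerable<Product>> GetProductsByCategoryAsync(string category)
    {
        var parameters = new Dictionary<string, string> { { "category", category } };
        return Client.InvokeApiAsync<IEnumerable<Product>>("products", HttpMethod.Get, parameters);
    }

    // DELETE /api/products?id=42 -> served by deleteProduct on the server.
    public static Task DeleteProductAsync(int id)
    {
        var parameters = new Dictionary<string, string> { { "id", id.ToString() } };
        return Client.InvokeApiAsync("products", HttpMethod.Delete, parameters);
    }
}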

NOTE: the mobile service communicates with client applications using a REST interface and messages in JSON format, while it communicates with the Service Bus Relay Service using the SOAP protocol.

Access Control Service

The following diagram shows the steps performed by a WCF service and client to communicate via a Service Bus Relay Service. The Service Bus uses a claims-based security model implemented using the Access Control Service (ACS). The service needs to authenticate against ACS and acquire a token with the Listen claim in order to be able to open an endpoint on the Service Bus. Likewise, when both the service and client are configured to use the RelayAccessToken authentication type, the client needs to acquire a security token from ACS containing the Send claim. When sending a request to the Service Bus Relay Service, the client needs to include the token in the RelayAccessToken element in the Header section of the request SOAP envelope. The Service Bus Relay Service validates the security token and then sends the message to the underlying WCF service. For more information on this topic, see How to: Configure a Service Bus Service Using a Configuration File.

Prerequisites
Building the Sample

Proceed as follows to set up the solution.

Create the Mobile Service

Follow the steps in the tutorial to create the mobile service.

  1. Log into the Management Portal.
  2. At the bottom of the navigation pane, click +NEW.

  3. Expand Mobile Service, then click Create.

    This displays the New Mobile Service dialog.

  4. In the Create a mobile service page, type a subdomain name for the new mobile service in the URL textbox and wait for name verification. Once name verification completes, click the right arrow button to go to the next page.

    This displays the Specify database settings page. NOTE: as part of this tutorial, you create a new SQL Database instance and server. However, the database is not used by the present solution. Hence, if you already have a database in the same region as the new mobile service, you can instead choose Use existing Database and then select that database. The use of a database in a different region is not recommended because of additional bandwidth costs and higher latencies.

  5. In Name, type the name of the new database, then type Login name, which is the administrator login name for the new SQL Database server, type and confirm the password, and click the check button to complete the process.

Define the custom API
  1. Log into the Windows Azure Management Portal, click Mobile Services, and then click your app.
  2. Click the API tab, and then click Create a custom API. This displays the Create a new custom API dialog.
  3. Enter products in the API NAME field. Select Anybody with the Application Key permission for all the HTTP methods and then click the check button. This creates the new custom API.
  4. Click the new products entry in the API table.
  5. Click the Scripts tab and replace the existing code with the following: …

Paolo continues with hundreds of lines of source code.



<Return to section navigation list>

Windows Azure Marketplace DataMarket, Power BI, Big Data and OData

Max Uritsky (@max_data) reported Windows Azure Marketplace is available in 50 additional countries and features new exciting content including our own Bing Optical Character Recognition service in a 7/11/2013 post:

Hello Windows Azure Marketplace users,

We have some very exciting news to share with you – we are now commercially available in 50 additional countries, taking the total number of supported countries to 88! We also added some new and exciting content to the Marketplace, including Microsoft's Optical Character Recognition service recently announced at the //build conference, new data services from D&B, French post office locations directly from La Poste and great UK location services from MapMechanics.

1)     Customers from the following countries can now purchase and consume qualified Marketplace offers:

Algeria, Argentina, Bahrain, Belarus, Bulgaria, Croatia, Cyprus, Dominican Republic, Ecuador, Egypt, El Salvador, Estonia, Guatemala, Iceland, India, Indonesia, Jordan, Kazakhstan, Kenya, Kuwait, Latvia, Liechtenstein, Lithuania, Macedonia, Malta, Montenegro, Morocco, Nigeria, Oman, Pakistan, Panama, Paraguay, Philippines, Puerto Rico, Qatar, Russia, Saudi Arabia, Serbia, Slovakia, Slovenia, South Africa, Sri Lanka, Taiwan, Thailand, Tunisia, Turkey, Ukraine, United Arab Emirates, Uruguay, Venezuela.

Already registered in a different market?

You can migrate your account to your native market in a few simple steps:

  • Go to https://datamarket.azure.com/account
  • Cancel any existing application or data subscriptions
  • Click “Edit” on “Account Information” page      
  • Select the region associated with your Microsoft Account
  • Click “Save”
  • Subscribe to all your favorite offers again and enjoy new features like local currency pricing and billing

2)     Bing OCR (Optical Character Recognition) Control  is now available on the Windows Azure Marketplace – you can now leverage Microsoft’s cloud-based optical character recognition capabilities and integrate it into your Windows 8 and Windows 8.1 apps. Click here to check out the offer and here to learn more about the OCR service.

3)     We are pretty pumped to have a dataset from the prestigious French Postal Office, La Poste. Please check out the offer here

Here is a snapshot of the offer in our Service Explorer:

4)    D&B (Dun & Bradstreet), the company known as the leading source of commercial data and insight on businesses, has added six new APIs to the growing list of data offers on the Windows Azure Marketplace.  Check out the new offerings from D&B:

Identification:

Enrichment:

Discovery:

  • D&B Business Insight (AKA Company Prospect Builder) - same access methods as above.

To get a full list of data services, please click here and to get a full list of all the applications available through the Windows Azure Marketplace, please click here


The Microsoft Business Intelligence Team (@MicrosoftBI) reported Power BI for Office 365 Revealed at Microsoft Worldwide Partner Conference (WPC) on 7/8/2013:

Today, we are pleased to announce a new offering that builds on our self-service BI story -- Power BI for Office 365. Unveiled this morning by Satya Nadella, President of the Server and Tools Business, during his keynote at our annual Worldwide Partner Conference, Power BI for Office 365 is a complete self-service BI solution delivered as part of Excel and as an offer for Office 365.

Power BI provides everyone with powerful new ways to work with data in Excel and Office 365. Search, discover, and access data from public and internal sources and then transform and shape that data from within Excel. Analyze and create bold interactive visualizations to uncover insights to then share and collaborate out through new BI experiences for Office 365.

For more information, see the Power BI team blog post, “Announcing Power BI for Office 365”.  This post will give you more information on the features and benefits, some great examples and screenshots, and links to additional resources.  Sound interesting?  The public preview of Power BI for Office 365 will be available this summer. You can sign up now at http://www.office.com/powerbi to be notified when the preview is available

For more information, see the new Power BI blog at http://blogs.msdn.com/b/powerbi/.

The Microsoft “Data Explorer” Preview for Excel Team (@DataExplorer) reported "Data Explorer" is now Microsoft Power Query for Excel in a 7/6/2013 post:

Along with the Power BI related announcements at the Worldwide Partner Conference on July 8, 2013, we are pleased to announce that Microsoft Power Query for Excel (previously known as codename "Data Explorer") has reached General Availability and is now available for download. Please follow the Power BI blog for future updates and how-to articles on Power Query. We also announced the upcoming preview of Power BI for Office 365; to learn more, visit us at office.com/powerbi.


<Return to section navigation list>

Windows Azure Service Bus, BizTalk Services and Workflow

No significant articles today.



<Return to section navigation list>

Windows Azure Access Control, Active Directory, Identity and Workflow

Francois Lascelles (@flascelles) explained Federation Evolved: How Cloud, Mobile & APIs Change the Way We Broker Identity in a 7/9/2013 post:

The adoption of cloud by organizations looking for more efficient ways to deploy their own IT assets or as a means to offset the burden of data management drives the need for identity federation in the enterprise. Compounding this is the mobile effect from which there is no turning back. Data must be available any time, from anywhere and the identities accessing it must be asserted on mobile devices, in cloud zones, always under the stewardship of the enterprise.

APIs serve federation by enabling lightweight delegated authentication schemes based on OAuth handshakes using the same patterns as used by social login. The standard specifying such patterns is OpenID Connect, where a relying party subjects a user to an OAuth handshake and then calls an API on the identity provider to discover information about the user, thus avoiding having to set up a shared secret with that user – no identity silo. This new type of federation using APIs is easier to implement for the relying party as it avoids parsing and interpreting complex SAML messages with XML digital signatures, both of which tend to suffer from interoperability challenges.

Now, let’s turn this around, sometimes what needs to be federated is the API itself, not just the identities that consume it. For example, consider the common case of a cloud API consumed by a social media team on behalf of an organization. When the social media service is consumed from mobile apps, the cloud API is consumed directly and the enterprise has no ability to control or monitor information being posted on its behalf.

Cloud api consumption by mobile - not federated

In addition to this lack of control, this simplistic cloud API consumption on behalf of an organization by a group of users requires that they share the organization account itself, including the password associated to it. The security implications of shared passwords are often overlooked. Shared service accounts multiply the risk of a password being compromised. There are numerous recent examples of enterprise social media being hacked with disastrous PR consequences. Famous examples from earlier this year include Twitter hacks of the Associated Press leading to a false report of explosions at the White House and Burger King promoting competitor McDonalds.

Federating such cloud API calls involves the applications sending the API calls through an API broker under the control of the organization. Each of these API calls is made through an enterprise identity context, that is, each user signs in with its own enterprise identity. The API broker then ‘converts’ these API calls into API calls to the cloud provider using the identity context of the organization.

Cloud api, federated

In this case, federating the cloud API calls means that the enterprise controls the organization’s account. Its password is not shared or known by anybody outside of an administrator responsible for maintaining a session used by an API broker. Users responsible for acting on that cloud service on behalf of the organization can do so while mobile but are authenticated using their enterprise credentials. The ability of a specific user to act on behalf of an organization is controlled in real time. This can for example be based on attributes read from a user directory or pre-defined white list in the broker itself.

By configuring policies in this broker, the organization has the ability to filter the information sent to and received from the cloud provider. The use of the cloud provider is also monitored and the enterprise can generate its own metrics and analytics relating to this cloud provider.

On July 23, I will be co-presenting a Layer 7 webinar with CA’s Ehud Amiri titled Federation Evolved: How Cloud, Mobile & APIs Change the Way We Broker Identity. In this webinar, we will examine the differences between identity federation across Web, cloud and mobile, look at API specific use cases and explore the impact of emerging federation standards.


Steven Martin (@stevemar_msft) posted Going ‘Cloud First’ with Windows Azure from Microsoft’s Worldwide Partners Conference on 7/8/2013:

There’s no doubt that the cloud is providing tremendous opportunity to our partners, with businesses forecast to spend $98 billion on public cloud services by 2016 worldwide, according to IDC. Today at the Worldwide Partner Conference in Houston, we’re excited to connect with our partners to talk about new ways they can seize this opportunity to help their customers thrive. We are committed to helping our partners and customers embrace cloud computing, using a “cloud-first” approach to all we do, from our partner programs to our engineering principles to our product innovations.

Windows Azure is a key ingredient of this approach, and today we’re highlighting a number of ways for partners to integrate with the cloud through new offerings for both platform and infrastructure services.

First, we’re pleased to announce that new application access enhancements for Windows Azure Active Directory are now available in Preview at no additional cost.   These new enhancements enable integration of identities across both Microsoft and third-party SaaS applications, making it super easy for users to get their work done without having to remember individual user ids and passwords for each online app.  You don’t need to recreate all of your corporate identities; you can easily synch the on-premise Active Directory with Windows Azure Active Directory and be up and running in no time.  If you are running Office 365 you are already running an Azure cloud directory! (Emphasis added.)

Here are some of the key capabilities you’ll find in the Preview:

  • Seamlessly enable single sign-on for many popular pre-integrated cloud apps
  • Add and remove identities from the top SaaS apps such as Office365, Box, Salesforce and others using the Azure Management Portal
  • Report on unusual application access patterns using predefined security reports
  • Launch assigned apps from a single web page called the Access Panel.

Windows Azure Active Directory provides unmatched integration and global scale. There isn’t another cloud provider in the market that delivers the level of identity integration across platform and infrastructure services, third party SaaS apps, and on-premise directories via a single identity store. We’ll continue to innovate in this space in the coming months including more pre-integrated apps so keep an eye out for future updates.   For a deeper dive on the new enhancements read Alex’s blog post.

To further advance Windows Azure’s platform services for business-critical applications in the cloud, we are excited to announce a new Premium offer for Windows Azure SQL Database.  Available via a limited preview in a few weeks, the Premium offer will help deliver greater performance for cloud applications by dedicating a fixed amount of reserved capacity for a database including its built-in secondary replicas. [Emphasis added.]

At our Build developer conference just a few weeks ago we introduced new investments in monitoring, alerting and autoscale.  Starting today, you can also use autoscale with Windows Azure Mobile Services Standard and Premium tiers, enabling you to automatically scale your mobile applications based on demand. There is no additional cost for this feature while in preview. Check it out on the Mobile Services page.

Also at Build we shared that SQL Server 2014 and Windows Server 2012 R2 preview images are ready for use with Windows Azure Infrastructure Services.  Today we are also adding the Windows HPC Pack to this list of supported workloads. Whether you are testing the latest parallel algorithm you have written for that risk modeling app or need infrastructure to offload your periodic batch processing jobs, you can simply deploy Windows HPC Pack in Azure and let Windows Azure do the heavy lifting for you.

In addition to building great products and services just about everything that we do at Microsoft involves our deep partner ecosystem and we’re thankful for the many great companies that we’ve had the opportunity to integrate with over the years across the many products we produce, both on premises and in the cloud.  Let’s take a closer look at the opportunities that partners have to integrate with Windows Azure.

If you are a solution integrator there are some great opportunities to help customers and the cloud opens many new opportunities.  Customers everywhere are evaluating how the cloud can help them to get ahead of their competition, innovate more quickly, and scale globally all while benefiting from cloud economics.  Here are a few ways that you can work with your customers right now.

  • Assist customers with their strategy for integrating their existing on premises identities with the new opportunities that the cloud brings to the market for accessing Saas Apps.  Help them to design, integrate and operationalize the new world of identity and access management.
  • Assist customers with their strategy to integrate Windows Azure Infrastructure Services for development and test scenarios, migrating apps to take advantage of cloud scale and to integrate the cloud with on-premises infrastructure.

For example, Microsoft Partner Skyline Technologies is moving their business forward with new service offerings based on Microsoft’s cloud services and they recently helped Trek to take their business to the next level using Azure.

“Microsoft’s Azure and Cloud offerings have resulted in two new practice areas for Skyline – Azure Services and Office 365. These new services have created better alignment with our customers by allowing us to develop cloud strategies that provide holistic solutions while reducing cost and time to market.” – Kenny Young - Director, Cloud Computing & Development at Skyline Technologies

If you’re an ISV, Windows Azure provides you with a global cloud platform for developing your apps and it’s available at your fingertips.  You can take your app from local to global in no time.  CaptiveLogix is taking advantage of cloud innovation to expand their business and help their customers develop new applications.

“Windows Azure has allowed CaptiveLogix to grow our business by providing us a platform to incubate customers and deliver proof of concepts on Azure Infrastructure Services, while also offering the full elasticity of Windows Azure platform services for new applications and systems integrations. We are in a place where cloud has quickly become a viable option for most organizations and Azure has positioned CaptiveLogix well to offer custom solutions that meet the needs of customers today and into the foreseeable future.  With a clear roadmap and access to pre-release technology we are able to continually offer our customers growth opportunities.” Tim Fernandes, CaptiveLogix

Partners also have a great opportunity to integrate with the Windows Azure Store to raise visibility to the solutions you’ve built on the platform as well as an additional channel for monetizing your offerings.

These partners see the opportunity, innovation and new capabilities the cloud can offer.  They are working to evolve their businesses, are helping customers on the journey to the cloud and are seeing tremendous opportunity ahead of them.  But don’t take it from us, try Windows Azure for yourself and see how you can help to move your business and customers forward.



The Office 365 Team offers an Identity and Authentication in the cloud: Office 2013 and Office 365 (Poster) in *.pdf format. Sample illustration:



Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Craig Kitterman (@craigkitterman, pictured below) posted Byron Tardiff’s Scaling Up and Scaling Out in Windows Azure Web Sites article on 7/11/2013:

imageWhen you are starting on a new web project, or even just beginning to develop web sites and applications in general, you might want to start small.  But you probably won’t stay that way.  You might not want to expend resources on a new web farm when you are entering the proof-of-concept phase, but when things get going, you also wouldn’t launch a major marketing campaign on a server under your desk.  Developing and deploying in the cloud on Windows Azure Web Sites is no different.

In this blog post, I’ll take you through the ways that you can develop, test, and go live, all while staying within budgeted time and costs.

Standard, Free, and Shared modes in Windows Azure Web Sites

imageOne of the most important considerations when deploying your web site is choosing the right pricing tier for your site.  Once you exit the development and test cycle, this will be your corporate web presence site, or an important new digital marketing campaign. or line-of-business app.  So you want to make sure that your site is as available and responsive as your business needs demand, while staying comfortably within budget.

Your choice depends on a number of factors like:

  • How many individual sites are you planning to host?  For example, a digital marketing campaign might include a social-media page for each service you are using, and a different landing page for each target segment.
  • How popular do you expect the sites to be?   When do you expect traffic levels to change?   Your estimate might be based on the number of employees for a line-of-business application, or the number of Twitter followers and Facebook Likes for a campaign site.  You may also expect differences in traffic due to seasonality or demand-generation activities such as social media pushes and ads.
  • How much CPU, memory, and bandwidth will the sites consume? 

One of the best things about Windows Azure Web Sites is that you do not need to be able to answer these questions at the time you launch your web apps and web sites into production.   Using the scale options provided in the Management Portal, you can scale your site on the fly according to user demand and your business goals.

Site Modes in Windows Azure Web Sites

Windows Azure Web Sites (WAWS) offers 3 modes:  Standard, Free, and Shared.

Each of these modes (Standard, Shared, and Free) offers a different set of quotas that control how many resources your site can consume, and provides different scaling capabilities.  These quotas are summarized in a chart in the original post.

Standard mode & Service Level Agreement (SLA)

Standard mode runs on dedicated instances, making it different from the other ways to buy Windows Azure Web Sites. It also comes with no limits on CPU usage and the largest amount of included storage of the three modes.  See the table in the original post for details.

Standard also has some important capabilities worth highlighting:

  • No data egress bandwidth limit – The first 5 GB of data served by the site is included; additional bandwidth is priced at the “pay-as-you-go” rate.
  • Custom DNS Names – Free mode does not allow custom DNS.  Standard allows CNAME and A Records.

Standard mode carries an enterprise-grade SLA (Service Level Agreement) of 99.9% monthly, even for sites with just one instance.  Windows Azure Web Sites can provide this SLA on a single-instance site because of our design, which includes on-the-fly site provisioning functionality.  Provisioning happens behind the scenes without the need to change your site, and happens transparently to any site visitor.  By doing this we eliminate availability concerns as part of the scale equation.

Shared and Free modes

Simply put, Shared and Free modes do not offer the scaling flexibility of Standard, and they have some important limits.

Free mode runs on shared Compute resources with other sites in Free or Shared mode, and has an upper limit capping the amount of CPU time that that site (and the other Free sites under the subscription) can use per quota interval.   Once that limit is reached, the site – and the other Free sites under the subscription – will stop serving up content/data until the next quota interval.  Free mode also has a cap on the amount of data the site can serve to clients, also called “data egress”.

Shared mode, just as the name states, also uses shared Compute resources, and also has a CPU limit – albeit a higher one than Free, as noted in the original post’s table.   Shared mode also allows 5 GB of data egress, with “pay-as-you-go” rates beyond that.

So, while neither Free nor Shared is likely to be the best choice for your production environment due to the limits above, they are useful.   Free is fine for limited scenarios like trying out and learning Windows Azure Web Sites, such as learning how to set up a publish config, connect to Visual Studio, or deploy with TFS, Git, or other deployment tools.  Shared has additional capabilities vs. Free that make it great for development and testing your site under limited, controlled load.  For more serious production environments, Standard has much more to offer.

Scale operations, your code, and user sessions/experience

Scale operations have no impact on existing user sessions, beyond improving the user experience when the operation scales the site up or scales the site out.

Additionally, each scale operation happens quickly – typically within seconds – and does not require changes to your site’s code, nor a redeployment of your site.

Next we’ll discuss what it means to “scale up” and to “scale out.”

Windows Azure Web Sites Scaling Dynamics

Windows Azure Web Sites offers multiple ways to scale your website using the Management Portal.  These operations are also available if you are managing your site via Microsoft Visual Studio 2012, as detailed in our service documentation.

Scale Up

A scale up operation is the Azure Web Sites cloud equivalent of moving your non-cloud web site to a bigger physical server.   So, scale up operations are useful to consider when your site is hitting a quota, signaling that you are outgrowing your existing mode or options.  In addition, scaling up can be done on virtually any site without worrying about the implications of multi-instance data consistency.

Two  examples of scale up operations in Windows Azure Web Sites are:

  • Changing the site mode:  If you choose Standard mode, for example, your web site will have no quotas imposed on CPU usage, and data egress will be limited only by the cost of egress beyond the included 5 GB/month.
  • Instance Size in Standard mode: In Standard mode, Windows Azure Web Sites allows a choice of different instance sizes: Small, Medium, and Large.  This is also analogous to moving to a bigger physical server with increasing numbers of cores and amounts of memory:
    • Small: 1 core, 1.75 GB memory
    • Medium: 2 cores, 3.5 GB memory
    • Large: 4 cores, 7 GB memory

Scale Out

A scale out operation is the equivalent of creating multiple copies of your web site and adding a load balancer to distribute the demand  between them. When you scale out a web site in Windows Azure Web Sites there is no need to configure load balancing separately since this is already provided by the platform.

To scale out your site in Windows Azure Web Sites, you use the Instance Count slider to change the instance count between 1 and 6 in Shared mode and between 1 and 10 in Standard mode. This generates multiple running copies of your website and handles the load-balancing configuration necessary to distribute incoming requests across all instances.

To benefit from scale OUT operations your site must be multi-instance safe. Writing a multi-instance safe site is beyond the scope of this posting, but please refer to MSDN resources for .NET languages, such as http://msdn.microsoft.com/en-us/library/3e8s7xdd.aspx
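
To illustrate what “multi-instance safe” means, here is a brief, hypothetical C# sketch (not from the article) contrasting per-instance in-memory state, which breaks once requests are load balanced across instances, with state kept behind an interface backed by a shared store that every instance reads and writes:

using System.Collections.Concurrent;

// NOT multi-instance safe: this counter lives in one instance's memory, so
// with two or more instances each one reports its own, different value.
public static class LocalVisitCounter
{
    private static readonly ConcurrentDictionary<string, int> Counts =
        new ConcurrentDictionary<string, int>();

    public static int RecordVisit(string page)
    {
        return Counts.AddOrUpdate(page, 1, (key, current) => current + 1);
    }
}

// Multi-instance safe: every instance shares one external store (for example
// Windows Azure SQL Database, Table storage, or a distributed cache).
public interface ISharedVisitCounter
{
    int RecordVisit(string page); // implemented against the shared store
}

In-memory session state has the same problem; with more than one instance, use a shared session-state provider or keep the site stateless.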

Scale up and scale out operations can be combined for a website to provide hybrid scaling. The same considerations about multi-instance safety apply to this scenario.

Autoscaling and Scaling in Windows Azure PowerShell

In this blog post, I discussed the concepts involved in scaling up and scaling out in Windows Azure Web Sites, focusing on doing these tasks manually via the Management Portal; similar manual settings are also available in Visual Studio.

We have also added Autoscaling to Windows Azure Web Sites, allowing unattended changes to the scale up/scale out settings on your web site in response to demand.

In addition, Windows Azure PowerShell allows some scaling operations as well as versatile control of your site and subscription.

Final Thoughts

Windows Azure Web Sites allows you to develop, deploy, and test a web site or web app at low, or even no, cost, seamlessly scale that site all the way up to a more production-ready configuration, and then scale further in a cost-effective way.

I focused in this blog post on scaling up and scaling out your web site, but keep in mind that your site is potentially only a portion of a more complex application that uses other components such as databases, data feeds, storage, or third-party Web APIs.  Each of these components will have its own scale operations and should be taken into consideration when evaluating your scaling options.

Scaling your web sites will have cost implications, of course.  An easy way to estimate your costs and the impact a given scale operation will have on your wallet is to use the Azure Pricing Calculator.


• Dave Bost (@davebost) continued his series with Moving a WordPress Blog to Windows Azure – Part 5: Moving From a Subfolder to the Root on 7/11/2013:

In Part 1, I created a new WordPress site hosted on Windows Azure. In Part 2, I transferred all of the relevant content from my old WordPress site to my new one hosted on Windows Azure. In Part 3, we made the necessary configuration changes from my domain registrar to Windows Azure to have my custom domain (http://davebost.com) direct people to my new blog site on Windows Azure. In Part 4, I had to define some URL Rewrite rules in a web.config file to handle my custom permalinks in my WordPress blog.

In four (somewhat) short steps, I’ve completely moved my WordPress blog over to Windows Azure! However, I’m not done yet. As part of this transition, I made the decision to finally move my blog off of a subfolder of my site (http://www.davebost.com/blog) to the root of my domain (http://www.davebost.com).

This made me nervous as I imagined all of the links scattered across the Interwebs that I’ve built up for the past 10 years suddenly blowing up. However, with a little plug-in magic and a new URL Rewrite redirection rule I was able to finally check off this task that’s been on my list for several years!

The Database Search/Replace Magic

WARNING! Before you do anything, make sure you have a backup of your WordPress site and WordPress database. See WordPress Backups for steps on how to protect yourself. I am not responsible for any catastrophic events that may take place.

The good news is you are migrating from an existing site, so you should be covered. Plus, short of completely deleting  your database without a backup, any changes made should be easily remedied. But…you’ve been warned!

My WordPress content database is littered with relics of the past, namely various references to my old blog address (http://davebost.com/blog), from various WordPress configuration settings to my post content. The recommended approach found within the Moving WordPress documentation is to search for all references to the past and replace them with the new. According to this document, there are no fewer than 15 (!) steps to accomplish this task. Thankfully, the great folks over at interconnect/it have created a Search and Replace for WordPress Databases Script to accomplish this feat for us.

Download the script zip file.

If you’re running Windows, I recommend ‘unblocking’ the zip file once it’s downloaded and before you unzip it. Open up Windows Explorer and navigate to the folder containing the downloaded zip file. Right-click on the zip file name and select Properties. On the ‘General’ tab, click the ‘Unblock’ button and click ‘OK’.

Unzip the file and upload the ‘searchreplacedb2.php’ script file to the root of your WordPress site.

Some helpful steps on how to upload files to your WordPress site using FTP can be found in Part 2 and Part 4.

Run the script by opening your favorite web browser and navigating to the location of the script file (e.g., http://davebost.com/searchreplacedb2.php).


Click the ‘Submit’ button to have the script retrieve your database connection strings as defined in your wp-config file.

On the ‘Tables’ step, I kept the defaults and selected Continue.

For my purposes, on the ‘What to replace?’ step I chose to replace my old blog address (http://davebost.com/blog) with the new root address (http://davebost.com).

Thankfully, I didn’t encounter any errors.

Don’t forget: once the script has finished running, DELETE THE SCRIPT FILE FROM YOUR SITE!

After a cursory scan of my blog content, everything seems to be in working order!

Handling 301 Redirects with a URL Rewrite Rule

Now that my site content has been updated with the new permalink content, what about all of those dangling links to my content scattered across the Internet? Are they forever broken? Thankfully, with a little URL Rewrite magic, they’re not. The recommended approach to notify the various search engines of this change and handle the redirection from existing links is to use a 301 Redirect.

To handle this for my purposes, I added a Redirect rule to my <system.webServer> configuration section in my web.config file:

<system.webServer>
   <rewrite>
      <rules>
         <rule name="RedirectRule" stopProcessing="true">
             <match url="^blog/?(.*)$" ignoreCase="true" />
             <action type="Redirect" url="http://www.davebost.com/{R:1}"
                         redirectType="Permanent" />
          </rule>
         …
      </rules>
   </rewrite>
</system.webServer>

Setting ‘redirectType’ to ‘Permanent’ identifies this as a 301 Permanent Redirect. In short, the match regular expression looks for the string ‘blog’ at the start of the URL, captures everything following ‘blog/’, and substitutes that captured value for {R:1} in the <action> element’s URL. My rule defined in Part 4 for handling the custom permalinks in my WordPress site follows this RedirectRule definition.
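
To see what the {R:1} back-reference would contain, here is a small illustrative C# sketch using the .NET Regex class with the same pattern (IIS evaluates the pattern itself inside the rewrite module; this snippet and its sample paths are only for illustration):

using System;
using System.Text.RegularExpressions;

class RedirectRuleDemo
{
    static void Main()
    {
        // Same pattern as the "RedirectRule" match element above.
        var pattern = new Regex("^blog/?(.*)$", RegexOptions.IgnoreCase);

        // Hypothetical request paths, including one that should not redirect.
        string[] paths = { "blog/2013/07/10/my-blog-post-entry", "blog", "about" };

        foreach (var path in paths)
        {
            Match m = pattern.Match(path);
            if (m.Success)
            {
                // m.Groups[1].Value is what the rule substitutes for {R:1}.
                Console.WriteLine("{0} -> http://www.davebost.com/{1}", path, m.Groups[1].Value);
            }
            else
            {
                Console.WriteLine("{0} -> no redirect (rule does not match)", path);
            }
        }
    }
}

The first path redirects to the same post at the domain root, the bare "blog" path redirects to the root itself, and anything that does not start with "blog" is left alone.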

WE DID IT! In a few short steps we migrated a WordPress site over to Windows Azure with a couple of additional tweaks required for my particular purposes to move from a subfolder to the root of my domain. FINALLY!

What I’ve learned is that within a matter of a minute or so, I can stand up a WordPress site on Azure. And within an hour or two, I can migrate all of my data over from my old WordPress site to my new site hosted on Windows Azure.

I hope these instructions were valuable in your pursuit. Let me know how everything turns out in the comments.

GOOD LUCK!


Dave Bost (@davebost) continued his series with Moving a WordPress Blog to Windows Azure – Part 4: Pretty Permalinks and URL Rewrite rules on 7/11/2013:

We’re down to the last few, albeit most important, steps in this little project.

In Part 1, we created a new WordPress site hosted on Windows Azure. In Part 2, we transferred the content from our old site to the new one. In Part 3, we set up our custom domain with some DNS wizardry.

In this fourth step, I’m going to walk through the steps to make sure my custom permalinks are handled correctly in Windows Azure.

As with most blogs, I want to have “pretty permalinks”. I’m opting for something like http://mysite.com/2013/07/10/my-blog-post-entry over the less friendly http://mysite.com/?p=123. Chances are your permalinks are configured just the way you want them. After all, all of your existing WordPress site options and configurations were moved over as part of the data content transfer in Part 2.

However, if you’ve elected to go with any custom permalink format in WordPress other than the default, you’re going to need to set up some URL Rewrite rules.

Although your WordPress site hosted on Windows Azure is running PHP, it’s running PHP on an instance of Internet Information Services (IIS). Setting up URL Rewrite rules on an IIS server is slightly different from modifying an .htaccess file on an Apache instance.

URL Rewrite rules are defined in the web.config file in IIS.

That’s right. Even though we’re running a PHP website, web.config is still utilized for any server configuration items, including URL Rewrite rules.

Open up your favorite text editor and create a ‘web.config’ file with the following contents:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <rule name="Main Rule" stopProcessing="true">
                    <match url=".*" />
                    <conditions logicalGrouping="MatchAll">
                        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    </conditions>
                    <action type="Rewrite" url="index.php" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>

Save the ‘web.config’ file and upload it to the root folder of your WordPress application. For my purposes this is the same as the root of my website. As stated in Part 2, an easy method to upload files to your website is through FTP. The FTP HOST NAME and DEPLOYMENT / FTP USER information is available on the website’s dashboard in the Windows Azure Management Portal. You may have to reset your FTP login credentials if you haven’t already done so. You can do this by clicking on the ‘Reset your deployment credentials’ link on your website’s Dashboard page.

More information on creating this URL Rewrite rule is available on the IIS Product site.

Assuming you have custom permalinks for your website, you should be up and running with your WordPress on Windows Azure now. CONGRATULATIONS!

However, that’s not the end of this article series. During this transition to running my WordPress blog on Windows Azure, I chose to take a big leap and finally move my blog off of a subfolder (http://davebost.com/blog) to the root of my domain (http://davebost.com). We’ll cover that adventure next.


My (@rogerjenn) Uptime Report for My Live Windows Azure Web Site: June 2013 = 99.34% post of 7/9/2013 begins:

My Android MiniPCs and TVBoxes blog runs WordPress on WebMatrix with Super Cache on a Windows Azure Web Site (WAWS) Preview and ClearDB’s MySQL database (Venus plan) in Microsoft’s West U.S. (Bay Area) data center. Service Level Agreements aren’t applicable to the Web Sites Preview; only sites with two or more Reserved Web Site instances qualify for the usual 99.95% uptime SLA.

I use Windows Live Writer to author posts that provide technical details of low-cost MiniPCs with HDMI outputs running Android JellyBean 4.1+. The site emphasizes high-definition 1080p video recording and rendition.


The site commenced operation on 4/25/2013. To improve response time, I implemented WordPress Super Cache on May 15, 2013.

Pingdom’s summary report for June 2013 appears in the original post, which continues with detailed downtime and response time reports.



<Return to section navigation list>

Windows Azure Cloud Services, Caching, APIs, Tools and Test Harnesses

‡ The Windows Azure Customer Advisory Team (@WindowsAzureCAT) posted Telemetry – Application Instrumentation CSF Blog to the Windows Azure Team blog on 7/11/2013:

In Telemetry Basics and Troubleshooting we introduced the basic principles around monitoring and application health by looking at fundamental metrics, information sources, tools, and scripts that the Windows Azure platform provides. We showed how you can use these to troubleshoot a simple solution deployed on Windows Azure (a few compute node instances, a single Windows Azure SQL Database instance).  In this post we expand on that entry and cover the application instrumentation aspects of the telemetry system that was implemented in the Cloud Service Fundamentals in Windows Azure code project.  In the detailed wiki entry that accompanies this blog, we show how you can use the CSF instrumentation framework, which integrates with Windows Azure Diagnostics (WAD), to provide a consistent instrumentation experience for your application.  The techniques we have implemented in the CSF application have been proven on large-scale Azure deployments.

The best source of information about your applications is the applications themselves. However, while good tools and a robust telemetry system make acquiring information easier, if you don’t instrument your application in the first place you cannot get at that information at all. In addition, if you don’t consistently instrument across all your application components, you are unlikely to achieve operational efficiency when you begin scaling in production. (Troubleshooting problems becomes far more complex than individuals – or even teams – can tackle in real time.) Consistent, application-wide instrumentation and a telemetry system that can consume it are the only way to extract the information you need to keep your application running well, at scale, with relative efficiency and ease.

CSF provides a number of components that you can use to quickly instrument your application and build an effective telemetry system:

  • A data access layer that implements retry logic and provides sensible retry policies designed for scale.
  • A logging framework built on top of NLog.
  • A custom configuration for WAD that supports scaling.
  • A data pipeline that collects and moves this information into a queryable telemetry system.
  • A sample set of operational telemetry reports you can use to monitor your application.

By adopting these practices and using the components and configuration we have provided, you can help your system scale and gain the insight to target your development effort more precisely and improve your operational efficiency -- which ultimately makes your customers happier while consuming fewer resources. This allows you to provide a high-quality user experience and identify upcoming problems before your users do. There is a corresponding wiki article that goes deeper into “Telemetry: Application Instrumentation”.

It’s very easy to read this and yet be just too busy growing your user base and deploying new code features.

Distrust this feeling. Many, many companies have had a hot product or service that at some point couldn’t scale and experienced one -- or more -- extended outages. Users often have little loyalty to any system that is unreliable; they may choose to just move elsewhere -- perhaps to the upstart that is chasing on your heels and ready to capture your market.

Of course some of you may already have built your own application instrumentation framework and implemented many of the best practices.  For that reason we have provided the CSF application in whole including all the telemetry components as source code on the MSDN Code Gallery.  Some of the key things to remember as you implement instrumentation in your application:

  • Create separate channels for chunky (high-volume, high-latency, granular data) and chatty (low-volume, low-latency, high-value data) telemetry.
  • Use standard Windows Azure Diagnostics sources, such as performance counters and traces, for chatty information.
  • Log all API calls to external services with context, destination, method, timing information (latency), and result (success/failure/retries). Use the chunky logging channel to avoid overwhelming the telemetry system with instrumentation information.
  • Log the full exception details, but do not use exception.ToString()
  • Data written into table storage (performance counters, event logs, trace events) are written in a temporal partition that is 60 seconds wide. Attempting to write too much data (too many point sources, too low a collection interval) can overwhelm this partition. Ensure that error spikes do not trigger a high volume insert attempt into table storage, as this might trigger a throttling event.
  • Collect database and other service response times using the stopwatch approach (a brief sketch follows this list).
  • Use common logging libraries, such as the Enterprise Application Framework Library, log4net or NLog, to implement bulk logging to local files. Use a custom data source in the diagnostic monitor configuration to copy this information periodically to blob storage.
  • Do not publish live site data and telemetry into the same storage account. Use a dedicated storage account for diagnostics.
  • Choose an appropriate collection interval (5 min – 15 min) to reduce the amount of data that must be transferred and analyzed, for example, “PT5M”.
  • Ensure that logging configuration can be modified at run-time without forcing instance resets. Also verify that the configuration is sufficiently granular to enable logging for specific aspects of the system, such as database, cache, or other services.
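
To make the “chunky vs. chatty” channels and the stopwatch approach more concrete, here is a minimal C# sketch of how an external call might be timed and logged through NLog. It is not the CSF implementation itself (see the CSF source on the MSDN Code Gallery for that); the logger name and the TelemetryTimer helper are illustrative assumptions.

using System;
using System.Diagnostics;
using NLog;

// Illustrative helper (not part of CSF): times an external call and logs
// destination, method, latency, and result to a bulk ("chunky") logging channel.
public static class TelemetryTimer
{
    // A dedicated logger name lets NLog route this chunky output to a local
    // file that WAD later copies to blob storage, separate from chatty traces.
    private static readonly Logger ChunkyLog = LogManager.GetLogger("Telemetry.Chunky");

    public static T TimeCall<T>(string destination, string method, Func<T> call)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            T result = call();
            stopwatch.Stop();
            ChunkyLog.Info("dest={0} method={1} latencyMs={2} outcome=success",
                destination, method, stopwatch.ElapsedMilliseconds);
            return result;
        }
        catch (Exception ex)
        {
            stopwatch.Stop();
            // Log structured exception details rather than calling ex.ToString().
            ChunkyLog.Error("dest={0} method={1} latencyMs={2} outcome=failure type={3} message={4}",
                destination, method, stopwatch.ElapsedMilliseconds, ex.GetType().Name, ex.Message);
            throw;
        }
    }
}

A caller would wrap a database or Web API invocation in TelemetryTimer.TimeCall("SQLDB", "GetCustomer", () => repository.GetCustomer(id)), with the NLog configuration routing the Telemetry.Chunky logger to the bulk channel. The names used here (repository, GetCustomer) are placeholders, not CSF APIs.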

Thank you for taking the time to read this blog post. To learn more about how to implement the CSF instrumentation components in your application there is a corresponding wiki article that goes deeper into “Telemetry: Application Instrumentation”.  In the next article in this series we will explore the data pipeline that we have implemented to provide a comprehensive view of the overall CSF application and its performance characteristics; including how we capture this information in a relational operational store and provide you with an overall view across the Azure platform.


Mark Gayler (@MarkGayler) reported More W3C Pointer Events Implementations with Dojo and IE11 in a 7/8/2013 post to the Interoperability @ Microsoft blog:

The W3C Pointer Events emerging standard continues to gain traction, advancing support for interoperable mouse, touch, and pen interactions across the web. Further to our previous blog post, where we highlighted the work the Dojo team is doing with Pointer Events, we can now confirm that an implementation of Pointer Events has been added to the patch list for Dojo Toolkit 2.0.

Pointer Events makes it easier to support a variety of browsers and devices by saving Web developers from writing unique code for each input type. The specification has earned positive feedback from the developer community -- many are already embracing it as a unified model for cross-browser multi-modal input.

In our previous blog post on W3C Pointer Events, we highlighted feedback shared by members of the jQuery, Cordova, and Dojo communities. The team at Nokia is also excited about progress with the Pointer Events standardization work, as Nokia’s Art Barstow, Chair of the W3C’s Pointer Events Working Group, noted:

Google, Microsoft, Mozilla, jQuery, Opera and Nokia are among the industry members working on the Pointer Events standard in the W3C's Pointer Events Working Group. Pointer Events is designed to handle hardware-agnostic, multi-user input from devices like a mouse, pen, or touchscreen and we are pleased to see it achieve Candidate Recommendation status in W3C. Pointer Events is a great way for developers to enable better user interaction with the mobile Web and we are excited to see the various implementations around the Web that are already underway. Web developers can start coding with Pointer Events today and we look forward to further progress with the standard and adoption within the Web community.

Pointer Events at //Build 2013

During the recent //Build 2013 event, Jacob Rossi of the Internet Explorer (IE) team presented Lighting Your Site Up on Windows 8.1, which included guidance on how Web developers can use the capabilities of Pointer Events to make web sites ‘shine’ across many devices and input types (touch, mouse, and pen), high-resolution screens, and screen sizes from phones to desktops, taking advantage of sensors and other hardware innovations. The Internet Explorer 11 Preview implementation has been updated from Internet Explorer 10 to include the latest Candidate Recommendation specification for W3C Pointer Events - see Pointer Events updates in the IE11 Developer Guide for further details.

As we continue to work with the vibrant Web community, we look forward to seeing even more Pointer Events support across a growing number of JavaScript libraries and frameworks – there’s more to come! To learn more about using and implementing Pointer Events, feel free to check out and contribute to the Pointer Events Wiki on Web Platform Docs which includes community generated polyfills, tests, demos, and tutorials, or join the discussion at #PointerEvents. Point. Click. Touch.



<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• Travis Wright (@radtravis, pictured below) described What’s New in System Center 2012 R2? Catch the In the Cloud Blog Series! in a 7/10/2013 post to the System Center blog:

Brad Anderson, Corporate Vice President for Windows Server and System Center, continued his blog series introducing Windows Server 2012 R2 and System Center 2012 R2 today with the second blog post in the nine-part series.  You can follow the entire series on the In the Cloud blog.  Each Wednesday for the next couple of months, Brad will publish a blog post on each of the themes of the 2012 R2 wave.  At the end of each of Brad’s posts, there will be links to more detailed, technical blog posts from the engineers on the Windows Server and System Center engineering blogs.

In the blog post today, Brad talks about Microsoft’s vision for People-Centric IT (PCIT):

What’s New in 2012 R2: Making Device Users Productive and Protecting Corporate Information

In case you missed it, you can read the introductory post to this series:

What’s New in 2012 R2: Beginning and Ending with Customer-specific Scenarios

You can also follow Brad on Twitter here:

https://twitter.com/InTheCloudMSFT


• Kenneth van Surksum (@kennethvs) reported Citrix XenDesktop 7 available on Windows Azure in a 7/9/2013 post:

Citrix has announced that version 7 of its Virtual Desktop Infrastructure (VDI) product XenDesktop can now be deployed on top of a Windows Azure virtual machine. This is now possible because Microsoft has made Remote Desktop Services (RDS) Subscriber Access Licensing (SAL) available on Azure, paving the way to install XenDesktop on a VM running on Azure.

To support this announcement, Citrix published two new Design Guides detailing how to design a VDI environment on top of Windows Azure.


<Return to section navigation list>

Windows Azure Pack, Hosting, Hyper-V and Private/Hybrid Clouds

‡ Rob Tiffany (@RobTiffany) published Keeping Windows 8 Tablets in Sync with SQL Server 2012: Private and Hybrid Cloud Solutions for the Mobile Enterprise [Kindle Edition] to Amazon.com (free to borrow for Kindle owners who are Amazon Prime subscribers).

From the Book Description:

Give your company a competitive advantage by creating mobile business apps for the world’s first enterprise-class tablet.

What you’ll Learn

  • Use Hyper-V to create a virtualized SQL Server, Active Directory, and IIS infrastructure that runs either in a private cloud or Windows Azure to support public and hybrid cloud scenarios
  • Microsoft wireless data sync and mobile database technologies accelerate your time to market while reducing risk to your solutions
  • Rapidly create immersive Windows 8 tablet apps with .NET 4.5, C# and Visual Studio 2012
  • Modern UI design principles empower you to deliver an aesthetically pleasing UX intended for touch

The Value Prop

  • Unlock the value of your company’s back end systems by securely delivering critical data to employees in the field so they can make decisions at the point of activity
  • Security is ensured at every tier of the solution with password protected and encrypted data-at-rest, TLS protected data-in-transit, plus corporate network authentication and authorization via Active Directory
  • Companies save money by leveraging their existing Microsoft server assets, IT Pro training, and the same .NET development skills used to build Windows and Windows Mobile apps over the last decade
  • Corporate investment in Windows 7 laptops and tablets is secured since the touch-first apps described in this book are backwards-compatible and work perfectly with a keyboard and mouse


Rob’s book is based on an application implemented by the Société Nationale des Chemins de fer français (SNCF), the French National Railway Corporation.


• Dattatrey Sindol (@dattatreysindol) described BI on Cloud using SQL Server on IaaS in a 7/7/2013 post to the Aditi Technologies blog:

Today’s Business Intelligence (BI) systems are analyzing huge volumes of data, which is growing at a rapid pace, requiring organizations to scale their hardware/infrastructure at the same pace to continue to do BI and make the best use of the available data.

Procuring and adding hardware/infrastructure is a time-consuming process in any organization and hence leads to a delay in getting access to the right information at the right time. To overcome this and the other limitations of on-premises BI systems, BI applications can be hosted in the cloud, and the various benefits of the cloud can be leveraged to deliver efficient and effective BI solutions to the organization’s decision makers.

Download the BI on Cloud using SQL Server on IaaS whitepaper, which focuses on hosting a Microsoft (SQL Server) Business Intelligence (BI) application in the cloud using Microsoft’s IaaS offering, “Windows Azure Virtual Machines”. Since Windows Azure VMs are currently in preview, these VMs or the steps outlined in this white paper should not be used in a production/mission-critical environment.



Cisco Systems (@Cisco) asserted “Cisco Teams With Microsoft on Go-to-Market Acceleration Initiatives to Drive Worldwide Demand for Integrated Data Center Solutions” in a deck for its Cisco to Accelerate Microsoft-Based Private Cloud Deployments press release of 7/8/2013:

SAN JOSE, Calif. – July 8, 2013:  Today at the Microsoft Worldwide Partner Conference Cisco announced it would team with Microsoft to accelerate the deployment of private and hybrid cloud infrastructure worldwide.  The companies will invest in initiatives to align sales teams and channel partners, while helping customers modernize their data centers with private cloud solutions.

Over the past two years, Cisco and Microsoft have invested significant development resources to integrate best-in-class data center technologies.  By combining the innovative Cisco® Unified Data Center architecture with Windows Server 2012 Hyper-V and System Center 2012, Cisco and Microsoft are helping customers simplify data center operations, boost IT productivity, and improve data center economics.

Highlights:

  • Go-to-Market Acceleration:  Cisco and Microsoft plan to invest together to stimulate hundreds of private cloud and hybrid cloud deployments worldwide.  Building on a successful pilot with channel partners over the last 10 months, the planned joint initiative will be rolled out globally in conjunction with Microsoft's Cloud OS Accelerate program, which provides funds to help customers offset the services cost of deploying these Microsoft solutions, accelerate deals, and drive competitive wins.

  • Integrated Solutions: By bringing together innovative technology such as the Cisco Unified Computing System™, Cisco UCS® Manager, and Cisco Nexus® switching, with Microsoft Windows Server 2012 Hyper-V and System Center 2012, Cisco and Microsoft are helping customers drive out the complexity from data center operations.  Key integrated solutions include:

Executive Quotes:

Padmasree Warrior, chief technology and strategy officer, Cisco

"In the mobile-cloud era IT organizations need an integrated approach, incorporating technology architectures that enable business agility, operational simplicity and improved application performance. Over the last two years Cisco and Microsoft engineering teams have worked diligently to combine our industry-leading technology into integrated data center solutions.  Now, we're taking the next step by investing together to rapidly accelerate demand for our private cloud solutions."

Satya Nadella, president, Server & Tools Business, Microsoft
"Microsoft's Cloud OS, built on Windows Server 2012 and Windows Azure, provides a consistent platform that spans our customers data centers, service providers' data centers and Windows Azure.  We are excited to work with Cisco to help our joint customers implement proven private and hybrid cloud solutions and improve their data center operations."

About Cisco

Cisco (NASDAQ: CSCO) is the worldwide leader in IT that helps companies seize the opportunities of tomorrow by proving that amazing things can happen when you connect the previously unconnected. For ongoing news, please go to http://thenetwork.cisco.com.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

•• Michael Washington (@ADefWebserver) published Creating Web Pages Using the LightSwitch HTML Client In Visual Studio 2012 as a US$9.99 ebook:

Visual Studio LightSwitch 2012 is a development tool that provides the easiest and fastest way to create forms-over-data, line-of-business applications for the desktop and the cloud. It allows you to quickly and easily define and connect to your data, program your security and business rules, and expose your data via OData to practically any client, such as mobile devices and web pages.

This book will demonstrate its use and explain, with examples, important concepts of its API (Application Programming Interface).

Creating Web Pages Using the LightSwitch HTML Client In Visual Studio 2012 screen shot

Creating Web Pages Using the LightSwitch HTML Client In Visual Studio 2012 Table Of Contents

I bought a copy.

Raghuveer Gopalakrishnan concluded the series with Team Development Series–Part 2: Best Practices for Source Code Control (Raghuveer Gopalakrishnan) on 7/11/2013:

In this post, we will look at some of the best practices and patterns that enable multiple developers to collaborate efficiently on a LightSwitch project. In Visual Studio 2013, there are several new features geared toward enabling team development, such as partitioned model files, a flat project structure, and better integration with Source Code Control providers. Let us look at some of the highlights before diving into best practices.

Partitioned Model Files

In Visual Studio 2013 there is now a separate model file for every entity, screen, query, and data source. This dramatically reduces the probability of two developers stepping on each other’s changes and encountering merge conflicts during development -- especially if they’re working on unrelated entities/screens.


Flat Project Structure

Getting Changes from a Source Code Control Server (Repository) is easier and the changes propagate correctly to all the client projects, even when the project is open in the Solution explorer. So any changes pulled from the SCC server are immediately visible in the Solution Explorer.

More Support for Version Control Providers

Developers can choose from Team Foundation Server’s version control for on premises (TFS on-premises), in the cloud (TFS online), and Git. All source code control providers supported by Visual Studio can be used for LightSwitch projects.


To use TFS online, you can sign up for a free account at http://tfs.visualstudio.com/. Git is also an option. To start using Git with Visual Studio, I would highly recommend reading Scott Hanselman’s blog post: Git support for Visual Studio - Git, TFS, and VS put into Context

Source Code Control Best Practices

The following are some of the patterns and practices which can help in making team collaboration on LightSwitch projects easier.

  1. Focus on developing the Data Model and its associated Schema before designing Screens.
  2. It is a best practice to have developers divide the tasks so that each developer is responsible for a set of Entities, Screens, and Queries. For instance, if there are two Entities, namely Customers and Products, Developer 1 should work on the Entity/Screens/Queries for Customers while Developer 2 should focus on the Products entity and its associated screens and queries.

  3. Once the initial data model is ready and checked-in, try to keep subsequent changes as atomic as possible. Incremental and frequent check-ins are preferred over bulk and infrequent check-ins. This practice also helps in resolving manual merge conflicts easily.
  4. When checking in, it’s necessary to check in all the changes to related model files at once rather than single files at a time. This is necessary to keep the overall model (combination of all the lsml files) consistent.
  5. Sync to latest changes and sync often. Also sync latest/resolve merge conflicts before checking in changes.
  6. If you need to roll back any changes, make sure to roll back complete changesets rather than specific files. It’s critical to keep the underlying model consistent and making sure your changes are atomic (sync, roll back, undo, etc.).
Handling Merge Conflicts

Merge conflicts are bound to occur when two developers try to check in conflicting changes or sync to the server version of the source code control repository while having pending changes locally. There are two possible outcomes for this scenario.

  1. Auto Resolved: This happens most often when the changes are in different files or the changes involve actions like the addition of files. Developers do not have to manually resolve any conflicts in this case. This is one of the most noticeable impacts of having partitioned model files in Visual Studio 2013. The net result is that both changes get checked in seamlessly.
  2. Resolved Manually: There will be times when developers make changes to the same files. There are three possible ways to resolve the conflicts.

Take Server Version: If this option is selected, all the conflicting local changes are overwritten with the server version.

Keep Local Version: If this option is selected, the local changes are persisted. Note that after this step, the changes still need to be checked in and are listed under "Pending changes". A subsequent check-in operation will replace the conflicting server files with the changes that got checked in.

Manual Merge: By selecting this option, we intend to resolve each conflict by deciding which portions of the server changes to take and which portions of the local changes to retain. A merge tool is provided by Visual Studio to assist in merging the conflicts.


In this example, we would like to select both Entities, namely User Profile from Server and Gadget from Local workspace. To merge correctly, first check the box to the left of Server. Then, at the end, select Gadget from Local workspace. The result gets populated in the result box.


NOTE: There are a few known issues for LightSwitch in Visual Studio 2013 Preview that we're still working on. You can find those in our release notes located here: LightSwitch in Visual Studio 2013 Preview Release Notes

Conclusion

We hope that with the Visual Studio 2013 preview release of LightSwitch, you will find collaboration on LightSwitch projects to be a smooth experience. The best practices and patterns mentioned above will hopefully help you in planning for multiple developers to work productively while collaborating on LightSwitch projects.

We hope you will try out these features by downloading the Visual Studio 2013 preview and let us know your feedback by adding a comment below or visiting the LightSwitch forum.


Peter Hauge started a series with Team Development Series–Part 1: Introduction on 7/9/2013:

We recently announced a new version of LightSwitch available in Visual Studio 2013 Preview!  In this post we’ll discuss Team Development as one of our major investment areas for Visual Studio 2013 and how the latest version dramatically improves the experience of working in teams on your LightSwitch project.

Visual Studio LightSwitch is a rapid development environment for building line of business applications for the desktop and mobile devices.  As more enterprises have adopted LightSwitch we’ve received lots of feedback that building larger & more complex line of business apps is critical.  To scale out the development experiences to better align with the complexity of applications, we needed to rethink the way teams interact when working in Visual Studio.

The changes necessary to truly enable multi-developer teams working on the same LightSwitch project required architectural changes to the product.  You’ve seen some of the key ones already with our prior post on Solution Explorer Enhancements.  A quick note on upgrading: you can bring forward your Visual Studio 2012 projects automatically!  Just open the projects in Visual Studio 2013 and we’ll make the changes for you so you can take advantage of the new features!  (NOTE:  Upgrading a project created in a prior version of Visual Studio is a one-way transform; you won’t be able to continue working on this project with prior versions of Visual Studio after upgrade.)

Key Architecture Changes supporting Team Development

Flat Project Structure:  We used to have a ‘nested’ project structure, where the Server & Client projects were sub-projects of the main LightSwitch project.  We’ve found that this is a very unusual project hierarchy in Visual Studio and there were many features (and 3rd party tools) that didn’t work well with this structure.  Source Code Control was a key area impacted by our nested projects, where none of the 3rd party SCC add-ins worked properly with common operations (check in, get latest, etc.).  Now that we have a flat project structure, the SCC providers work as expected, and other built-in Visual Studio tools like FxCop and Code Map work as well!

Breaking up the model:  If you’re not familiar with our model, take a look at the Architectural Overview by Stephen Provine.  In Visual Studio 2012, the definitions and information for all the entities, queries, screens, etc., are stored in only a few files.  The challenge for Source Code Control operations is that having too much information in single files (that multiple people modify) leads to frequent manual merge operations that can go awry.  In Visual Studio 2013, we’ve divided the model so that each screen, entity, and query has its own model file, dramatically reducing the potential for merge conflicts.  When conflicts arise, they’re easier to understand and manage, so it’s less likely that a merge goes wrong.

In the series of posts we’ll talk about the following areas:

Now that Visual Studio 2013 Preview is available, please give it a spin and send us your feedback by visiting the LightSwitch forum or sending us a smile or frown from the feedback button within the product.  I’m especially interested in any feedback (smile or frown!) if you have multiple developers working on the same LightSwitch project!

No significant Entity Framework articles today.

 


<Return to section navigation list>

Cloud Security, Compliance and Governance

• David Linthicum (@DavidLinthicum) asserted “With the recent NSA blowback in Europe, we will likely see the privacy battles heat up in the United States as well” in a deck for his The cloud privacy wars are coming article of 7/9/2013 for InfoWorld’s Cloud Computing blog:

Germany’s interior minister, Hans-Peter Friedrich -- the country’s top security official -- cautioned privacy-conscious residents and organizations to steer clear of U.S.-based service companies, according to the Associated Press. As InfoWorld’s Ted Samson has reported, "Friedrich is by no means the first E.U. politician to issue this type of warning, and as details continue to emerge about the U.S. government's widespread surveillance programs, such warnings are certain to garner greater attention."

The blowback in Europe around NSA surveillance is no surprise. Privacy has always been a huge issue in Europe, as demonstrated by confrontations with Google, among others.

However, the real privacy wars in the cloud have yet to be fought, both in the United States and in Europe. This battle will likely occur in courtrooms and in government regulatory agencies.

The reality is that people working with cloud-based platforms won't stop using those platforms -- but they will get much better at security and privacy. With such improvements in security and privacy, law enforcement and government agencies won't have ready access to some data. That means legal battles will occur in many countries, with the use of remote data hosting services, such as cloud services, in the middle of those frays.

One result of businesses taking steps to ensure that their data won't be monitored by government agencies will be wider use of both encryption and physical restrictions on access. However, if the government wants to see the data and obtains a court order (sometimes in secret), it will want access to that data. To get that encrypted or restricted-access data in the cloud, the government will need an additional court order to gain access. That's when lawsuits will be filed and all hell breaks loose.

Some people believe these issues can be avoided by not using public cloud providers. But that's naive. If the government wants your data and there is cause to support its concerns to a judge, it will come after that data whether it's in your closet or a cloud. Welcome to the new world order.


<Return to section navigation list>

Cloud Computing Events

•• James Watters (@wattersjames) reported that the 3rd Annual [Executive] Cloud Club Stinson Beach Day will occur this year at 203 Dipsea Rd., Stinson Beach, CA 94970 on Saturday, 7/20/2013 at 1:00 PMish:

Yes, this is the real deal. The Club is being called back into action for the 3rd annual executive cloud beach day at Rodrigo place in Stinson.
203 Dipsea on the lagoon towards Ocean side 1 pm-ish.

Questions, hit up @RFFlores on Twitter!

Bring warm clothes. It can be cold as hell at Stinson Beach, even in mid-summer.


My (@rogerjenn) Windows Azure Sessions at Microsoft’s Wordwide Partner Conference 2013 post of 6/7/2013 begins:

Microsoft is putting the heat on partners to resell Windows Azure cloud services. Following are sessions with Cloud tags and the “Azure” keyword, organized by audience in the following categories:

  • Microsoft Dynamics GP
  • Independent Software Vendor
  • Large Account Reseller and EA Direct Advisor
  • Partner
  • Server and Cloud
  • System Integrator
  • U.S. Subsidiary

<Return to section navigation list>

Other Cloud Computing Platforms and Services

‡ Jeff Barr (@jeffbarr) reported AWS Elastic MapReduce Adds MapR M7 for 24x7 HBase Applications in a 7/12/2013 post:

I'm happy to announce that Elastic MapReduce now includes the option to choose MapR M7, an enterprise-grade, high-performance platform for HBase and Hadoop applications.

We joined forces with MapR Technologies last June to deliver enterprise-grade Hadoop on EMR with their M5 and M3 Editions. Today we're making MapR's M7 Edition available on EMR, enabling users to run 24x7 HBase applications in addition to their Hadoop ones. The M7 architecture provides the following advantages for HBase users:

MapR is the only distribution that enables Linux applications and commands to access data directly in the cluster via the NFS interface that is available with all MapR editions. MapR M7 has been optimized for cloud deployments, including high-performance instance types such as High Storage and High I/O.

To launch an M7 cluster, select the MapR M7 Edition in the EMR New Job Flow Wizard:

You can also use the elasticmapreduce CLI. To launch the latest version of M7 on EMR, use the following command:

./elastic-mapreduce --create --alive --instance-type hi1.4xlarge --num-instances 5 --supported-product mapr --args "--edition,m7"

Visit the MapR M7 documentation to learn more.

-- Jeff;

PS - If you don't need the full feature set offered by M7, you can now run MapR M5 at prices that have been reduced by 13% to 45%, depending on instance size.


• Jeff Barr (@jeffbarr) described a substantial EC2 Dedicated Instance Price Reduction in a 7/10/2013 post:

I'm happy to announce that we are reducing the prices for Amazon EC2 Dedicated Instances.

Launched in 2011, Dedicated Instances run on hardware dedicated to a single customer account. They are ideal for workloads where corporate policies or industry regulations dictate physical isolation from instances run by other customers at the host hardware level.

Like our multi-tenant EC2 instances, Dedicated Instances let you take full advantage of On-Demand and Reserved Instance purchasing options. Today’s price drop continues the AWS tradition of innovating to reduce costs and passing on the savings to our customers. This reduction applies to both the dedicated per region fee and the per-instance On-Demand and Reserved Instance fee across all supported instance types and all AWS Regions. Here are the details:

  • Dedicated Per Region Fee – An 80% price reduction from $10 per hour to $2 per hour in any Region where at least one Dedicated Instance of any type is running.
  • Dedicated On-Demand Instances – A reduction of up to 37% in hourly costs. For example the price of an m1.xlarge Dedicated Instance in the US East (Northern Virginia) Region will drop from $0.840 per hour to $0.528 per hour.
  • Dedicated Reserved Instances – A reduction of up to 57% on the Reserved Instance upfront fee and the hourly instance usage fee. Dedicated Reserved Instances also provide additional savings of up to 65% compared to Dedicated On-Demand instances.

These changes are effective July 1, 2013 and will automatically be reflected in your AWS charges.

To launch a Dedicated Instance via the AWS Management Console, simply choose a target VPC and select the Dedicated Tenancy option when you configure your instance. You can also create a Dedicated VPC to ensure that all instances launched within it are Dedicated Instances.

To learn more about Dedicated Instances and to see a complete list of prices, please visit the Dedicated Instances page.


• Jeff Barr (@jeffbarr) explained Amazon Elastic Transcoder - Watermarking, Bit / Frame Rate Control in a 7/9/2013 post:

We added a big batch of features to the Amazon Elastic Transcoder just a couple of months ago. Let's do it again!

Today we are adding three new features that will give you additional control of the appearance, bit rate, and frame rate of the videos that you transcode. As you can see from the screen shots below, you can access all three of these features from the AWS Management Console. Here's the scoop:

Visual Watermarking allows you to overlay up to four still images (PNG or JPG format) on your output video, with full control over the position, size, scale, and opacity. You can use this to add a logo, legend, or other identifying information to your video.

Maximum Bit Rate Control lets you limit the instantaneous bit rate of your output video.  You can use this setting to ensure that your video meets the playback specifications and bandwidth requirements of your desired output devices.

Maximum Frame Rate control lets you specify a maximum frame rate for your output video. This is useful when you wish to maintain the frame rate of the source media except in cases where it would otherwise exceed a certain frame rate threshold.

As always, these features are available now and you can start using them today! Read the Elastic Transcoder Documentation to learn more.


• Werner Vogels (@werner) explained Exerting Fine Grain Control Over Your Cloud Resources in a 7/7/2013 post:

I am thrilled that both Amazon EC2 and Amazon RDS now support resource-level permissions. As customers move increasing amounts of compute and database workloads over to AWS, they have expressed an increased desire for finer-grained control over their underlying resources. You can now use these new features to define the permissions your AWS IAM users (and applications) have to perform actions on specific Amazon EC2 and Amazon RDS resources, or on groups of them.

You can apply user-defined tags to your EC2 and RDS resources to help organize resources according to whatever schema is most relevant for a particular organization – be it an application stack, an organization unit, a cost center, or any other schema that might be appropriate. These user-defined tags can already be used to generate detailed chargeback reports that provide a view into the costs associated with these resources. And now these user-defined tags can also be used to create AWS IAM policies to define which users have permissions to use the resources that have certain tags associated with them.

For example, you can mandate that only Senior Database Administrators in your company can modify “production” Amazon RDS DB instances. You do this by first tagging the relevant Amazon RDS DB instance resources as “production” instances, then creating an AWS IAM policy that permits the modify action on these “production” instances, and finally assigning the AWS IAM policy to your group of AWS IAM users who are Senior Database Administrators.

Additionally, you can set policies such as the following:

  • Only certain users can terminate “production” EC2 or RDS instances (a sketch of such an EC2 policy follows this list)
  • Only certain EBS volumes can be attached or detached from certain EC2 instances
  • Users can only stop or terminate EC2 instances that are tagged with their username
  • Only certain users can create larger RDS instances (e.g. M2.4Xlarge)
  • Only certain database engines, parameter groups and security groups can be used by users when they create RDS DB instances
  • Only certain users can create RDS instances that are Multi-AZ and PIOPs enabled

Because AWS provides customers with fundamental infrastructure building blocks, there are a wide range of additional policy scenarios that you can support using tools like IAM, tags, and resource-level permission. And our development teams are already hard at work on the next wave of features to extend our support for setting and managing resource-level permissions, so expect even more tools to help control your AWS resources soon.


<Return to section navigation list>

0 comments: