A crash course for Windows Azure IaaS deployment


Windows Azure has both Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) offerings. A friend of mine on the Windows Azure team recommended that I use PaaS:

PaaS is built on top of IaaS: Web Sites and Cloud Services run in dedicated VMs just as in IaaS, but PaaS handles all the management overhead, such as Windows updates and failover, for you.

There are still use cases where IaaS outshines PaaS: for example, when you want to squeeze more out of Azure VMs by deploying more services on them, or when you are prototyping the deployment plan for your own data center. Before we dive into all the Azure jargon, let's step back and review what we want from an IaaS offering:

In Windows Azure, these requirements are met with the following configurations:

Here are some best practices, in my humble opinion, for deploying on IaaS:

For any public service, we first create a cloud service and an availability set, then create each virtual machine with the following settings specified:

Then we create an endpoint and a load-balanced set to expose the public service.


Assume we are building a service code-named Garnet. We need to deploy a LEMP stack for the web service, an ELK stack for log/metrics analysis, and a Salt master for orchestration.

Garnet Network Diagram


SaltStack has some limited support for Azure in Salt Cloud, so I decided to use azure-cli to provision the environment until Salt Cloud matures.

Set up the credentials

azure account download
# ... a browser is launched to download the .publishsettings file
azure account import /path/to/publishsettings
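After importing, we can confirm that the CLI recognizes the subscription. In the classic azure-cli this walkthrough uses, the account subcommand lists the imported subscriptions:

```shell
# List the subscriptions known to the CLI; the one we just
# imported should show up as current.
azure account list
```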

Use the existing OpenSSH key to create the Azure compatible certificate:

openssl req -x509 -key ~/.ssh/id_rsa -nodes -days 365 -newkey rsa:2048 -out ~/.ssh/cert.pem
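Before uploading, it does no harm to sanity-check the resulting certificate; printing its subject and validity window catches a malformed or expired certificate early:

```shell
# Print the subject and validity period of the certificate
# generated from the OpenSSH key above.
openssl x509 -in ~/.ssh/cert.pem -noout -subject -dates
```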

First, create an affinity group, garag, to exploit locality:

azure account affinity-group create garag -l 'West US' -d 'The Garnet Project.'

Create a storage account, garstor, in the garag affinity group:

azure storage account create garstor --disable-geoReplication -a garag

And configure it as the default storage account.

azure config set defaultStorageAccount garstor

Then create a virtual network, garnet, in the management console with three subnets: service, infra, and analytics. We export the network configuration for future reference, so that we can recreate the network later with:

azure network import NetworkConfig.xml
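The export mentioned above can also be scripted; in the classic azure-cli, the counterpart of network import is network export, which writes the current virtual network configuration to a file:

```shell
# Dump the current virtual network configuration so the garnet
# network can be recreated later with `azure network import`.
azure network export NetworkConfig.xml
```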

Deploying web servers

The name of the command vm create is a little misleading: it essentially gets or creates a cloud service, then creates a VM inside it. You cannot create a virtual machine without binding it to a cloud service. Thus we can create the cloud service, the availability set, and the VM in one shot:

azure vm create garweb \
b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04-LTS-amd64-server-20140416.1-en-us-30GB \
alice -e -P -t  ~/.ssh/cert.pem \
-n web1 -w garnet -b service \
-z small -a garag -A webas

This command instantiates a cloud service garweb with an availability set webas that contains only one instance, web1. web1 is a Small (A1) VM deployed in the service subnet of the garnet virtual network, under the garag affinity group. We also instruct Azure to create a privileged (sudo) user, alice, and configure sshd to use public key authentication only. It is also worth noting that an endpoint, ssh, is implicitly created to allow remote access.
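At this point it is worth confirming the deployment; azure vm show prints the VM's status, size, and IP addresses (the exact field names vary between CLI versions):

```shell
# Inspect the freshly created VM: status, size, internal IP,
# virtual network and endpoint information.
azure vm show web1
```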

We can add the second instance, web2, to the garweb cloud service as follows:

azure vm create garweb \
b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04-LTS-amd64-server-20140416.1-en-us-30GB \
alice -e 2222 -P -t  ~/.ssh/cert.pem \
-n web2 -w garnet -b service \
-z small -a garag -A webas -c garweb

This command is almost identical to the one we used to create web1, with a subtle twist: the VM is named web2, obviously, and it connects to the existing garweb cloud service via -c garweb. We also move the public SSH endpoint to port 2222 to avoid a port collision with web1.

Endpoint Management

It is pointless to run a web server without exposing HTTP service ports.

azure vm endpoint create web1 80 80 \
--endpoint-name webep \
--lb-set-name weblb \
--probe-port 80 \
--endpoint-protocol tcp

This command creates an endpoint webep for VM web1 within a load-balanced set, weblb; it maps internal port 80 to external port 80 and uses probe port 80 for the heartbeat check. The same applies to web2:

azure vm endpoint create web2 80 80 \
--endpoint-name webep \
--lb-set-name weblb   \
--probe-port 80 \
--endpoint-protocol tcp
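To confirm that both VMs have joined the weblb load-balanced set, we can list the endpoints on each VM:

```shell
# Both VMs should now expose port 80 through the webep endpoint
# in the weblb load-balanced set.
azure vm endpoint list web1
azure vm endpoint list web2
```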

It is generally bad practice to expose the SSH port directly to the external world. However, it seems impossible to import an SSH certificate without creating an SSH endpoint. You can always delete the ssh endpoint after deploying an SSH bouncer (jump host), or recreate it on demand:

azure vm endpoint delete web1 ssh
azure vm endpoint delete web2 ssh
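Recreating the ssh endpoints on demand could look like the following; the port numbers mirror the earlier setup (22 for web1, 2222 for web2), though double-check the positional argument order (public port, then VM port) against your CLI version:

```shell
# Re-expose SSH when needed; public port first, VM port second.
azure vm endpoint create web1 22 22 --endpoint-name ssh
azure vm endpoint create web2 2222 22 --endpoint-name ssh
```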

Name resolution

The last missing piece is name resolution. There are two options: the Azure built-in name resolution, or bringing your own DNS (BYOD) servers.

The caveat is that if you want to use the Azure built-in name resolution, all VMs MUST reside in the same cloud service, which renders the virtual network almost pointless. There are still some hoops to jump through for the BYOD solution:

Neither of them is a real issue if the VMs run in a 24x7, customer-facing fashion, and static IP allocation comes to the rescue here. But just be aware that Azure networking is significantly limited:
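The static allocation mentioned above is available in newer builds of the classic azure-cli as the static-ip subcommand; the address below is a hypothetical one picked from the service subnet:

```shell
# Pin web1 to a fixed internal IP inside the service subnet.
# 10.0.0.10 is a made-up address for illustration.
azure vm static-ip set web1 10.0.0.10
```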

Please read the Virtual Network FAQ thoroughly for more information.