A crash course for Windows Azure IaaS deployment

devops

Overview

Windows Azure has both Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) offerings. A friend of mine at Windows Azure recommended that I use PaaS:

PaaS is built on top of IaaS: Web Sites and Cloud Services run in dedicated VMs just as in IaaS, but PaaS handles all the management overhead, such as Windows updates and failover, for you.

There are still some use cases where IaaS outshines PaaS: for example, when you want to squeeze more out of Azure VMs by packing more services onto them, or when you are prototyping the deployment plan for your own data center. Before we dive into all the Azure jargon, let’s step back and review what we want from an IaaS offering:

  • a handful of running virtual machines (VMs) to host our services
  • safe storage to persist our data
  • a private network to prevent eavesdropping
  • an endpoint to expose our services to the wild web
  • a load balancer for high availability and scalability
  • and everything should be fast, lightning fast.

In Windows Azure, these requirements are met with the following configurations:

  • Affinity Group: an affinity group instructs the platform to place resources as close together as possible to minimize network latency, so everything runs faster.

  • Storage Account: a storage account supports various replication options to keep your data safe. You can also place it in an affinity group to keep the storage close to your servers.

  • Virtual Network: the concept is the same as a VPC; all your VMs are deployed in an isolated network for additional privacy and security.

  • Endpoint: an endpoint exposes a VM’s port to the external world, and you may bind it to a load-balanced set for high availability.

  • Load Balanced Set: a load-balanced set is a simple round-robin load balancer provided by Azure.

  • Availability Set: VMs in the same availability set are guaranteed to be deployed on different hosts to avoid a single point of hardware failure.

Here, in my humble opinion, are some best practices for deploying IaaS:

  • bootstrap the environment: import the publish settings, configure the certificate, and so on.
  • create an affinity group
  • create a storage account in the above affinity group
  • set the newly-created storage account as the default storage account to ensure all storage lands in the same affinity group as the virtual machines.
  • create a virtual network in the management console, and dump the NetworkConfig.xml for future reference.
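
Each of these steps can be verified from the CLI before moving on. The following is a quick sanity-check sketch; the exact sub-commands and output formats may differ between azure-cli versions:

azure account affinity-group list   # the affinity group exists
azure storage account list          # the storage account is created
azure config list                   # defaultStorageAccount points to the new account
azure network vnet list             # the virtual network and its subnets are defined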

For any public service, we first create a cloud service and an availability set, then create each virtual machine with the following settings specified:

  • affinity group
  • virtual network and subnet
  • cloud service
  • availability set

Then we create an endpoint and load-balanced set to expose the public service.
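
In azure-cli terms, those settings map directly onto vm create flags. The skeleton below is only a cheat sheet using the same short options as the real commands later in this post; the angle brackets are placeholders:

azure vm create <cloud-service> <image> <admin-user> -P -t <cert.pem> \
  -a <affinity-group> -w <virtual-network> -b <subnet> \
  -A <availability-set> -n <vm-name> -z <size>
# -a: affinity group; -w/-b: virtual network and subnet; -A: availability set;
# the cloud service is the first positional argument (add -c to join an existing one)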

Architecture

Assume we are building a service code-named Garnet: we need to deploy a LEMP stack for the web service, an ELK stack for log / metrics analysis, and a Salt master for orchestration.

Garnet Network Diagram

Bootstrap

SaltStack has only limited support for Azure in SaltCloud, so I decided to use azure-cli to provision the environment until SaltCloud matures.

Set up the credentials

azure account download
# ... the browser is launched to download the .publishsettings file
azure account import /path/to/publishsettings

Use the existing OpenSSH key to create an Azure-compatible certificate:

openssl req -x509 -key ~/.ssh/id_rsa -nodes -days 365 -newkey rsa:2048 -out ~/.ssh/cert.pem
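
To double-check that the certificate really wraps the existing key, compare the RSA moduli of the two files; the digests should be identical (a quick sanity check, not something Azure requires):

openssl x509 -noout -modulus -in ~/.ssh/cert.pem | openssl md5
openssl rsa -noout -modulus -in ~/.ssh/id_rsa | openssl md5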

Create an affinity group, garag, first to exploit locality.

azure account affinity-group create garag -l 'West US' -d 'The Garnet Project.'

Create a storage account, garstor, in the garag affinity group.

azure storage account create garstor --disable-geoReplication -a garag

And configure it as the default storage account.

azure config set defaultStorageAccount garstor

Then create a virtual network, garnet, in the management console with three subnets: service, infra, and analytics. We export the network config for future reference, so that the network can be recreated later with:

azure network import NetworkConfig.xml
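
The NetworkConfig.xml itself can also be produced from the CLI; assuming your azure-cli version ships the network export sub-command, the dump is simply:

azure network export NetworkConfig.xml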

Deploying web servers

The name of the vm create command is a little misleading: it essentially gets or creates a cloud service, then creates a VM inside it to implement the service. You cannot create a virtual machine without binding it to a cloud service. Thus we can create the cloud service, availability set, and VM in one shot:

azure vm create garweb \
b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04-LTS-amd64-server-20140416.1-en-us-30GB \
alice -e -P -t  ~/.ssh/cert.pem \
-n web1 -w garnet -b service \
-z small -a garag -A webas

This command instantiates a cloud service, garweb, with an availability set, webas, which for now contains only one instance, web1. web1 is a small (A1) VM deployed in the garnet virtual network, service subnet, under the garag affinity group. We also instruct Azure to create an administrative user, alice, and configure sshd to use public-key authentication only. It is also worth noting that an endpoint named ssh is implicitly created to allow remote access.
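
Before adding more instances, it is worth confirming what actually got created. A quick check (output layout varies across azure-cli versions):

azure vm list       # web1 shows up under the garweb cloud service
azure vm show web1  # size, virtual network, IP address and the implicit ssh endpoint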

We can add a second instance, web2, to the garweb cloud service like so:

azure vm create garweb \
b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04-LTS-amd64-server-20140416.1-en-us-30GB \
alice -e 2222 -P -t  ~/.ssh/cert.pem \
-n web2 -w garnet -b service \
-z small -a garag -A webas -c

This command is almost identical to the one we just used to create web1, with a subtle twist: the VM is named web2, obviously, and the -c flag connects it to the existing garweb cloud service instead of creating a new one. We also map the SSH endpoint to public port 2222 to avoid a port collision with web1.
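
With both instances up, you can reach them through the cloud service’s public DNS name (classic cloud services are published under cloudapp.net); the remapped port routes you to web2:

ssh alice@garweb.cloudapp.net          # web1, default ssh endpoint on port 22
ssh -p 2222 alice@garweb.cloudapp.net  # web2, remapped ssh endpoint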

Endpoint Management

It is pointless to run a web server without exposing HTTP service ports.

azure vm endpoint create web1 80 80 \
--endpoint-name webep \
--lb-set-name weblb   \
--probe-port 80 \
--endpoint-protocol tcp

This command creates an endpoint, webep, for VM web1 in a load-balanced set, weblb; it maps the internal port 80 to the external port 80 and uses probe port 80 for the heartbeat check. The same applies to web2:

azure vm endpoint create web2 80 80 \
--endpoint-name webep \
--lb-set-name weblb   \
--probe-port 80 \
--endpoint-protocol tcp
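
To confirm that both VMs joined the load-balanced set, list the endpoints on each instance (a sketch; the column layout differs across azure-cli versions):

azure vm endpoint list web1
azure vm endpoint list web2
# both should report the webep endpoint in the weblb load-balanced set on port 80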

It is generally bad practice to expose the SSH port directly to the external world. However, it seems impossible to import an SSH certificate without creating an SSH endpoint. You can always delete the ssh endpoint after deploying an SSH jump host, or recreate it on demand:

azure vm endpoint delete web1 ssh
azure vm endpoint delete web2 ssh
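
Recreating the endpoints later follows the same endpoint create syntax used above. The sketch below assumes the original port mapping (public 22 for web1, public 2222 for web2, both forwarding to the VM’s port 22):

azure vm endpoint create web1 22 22 --endpoint-name ssh --endpoint-protocol tcp
azure vm endpoint create web2 2222 22 --endpoint-name ssh --endpoint-protocol tcp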

Name resolution

The last missing piece is name resolution. There are two options:

  • Azure built-in name resolution
  • Your own DNS service.

The caveat is that if you want to use the Azure built-in name resolution, all VMs MUST reside in the same cloud service, which renders the virtual network almost pointless. There are still some hoops to jump through for the bring-your-own-DNS solution:

  • You may want to deploy the DNS server as the very first VM in its subnet, so it gets a predictable IP address.
  • Every VM leases an IP address from the Azure platform, and it will lose its network configuration, and therefore its IP address, if it is shut down from the portal.

Neither of these is a real issue if the VMs run 24x7 in a customer-facing fashion, and otherwise static IP allocation comes to the rescue (see the sketch after the list below). But just be aware that Azure networking is significantly limited:

  • you cannot assign more than one internal IP address to a VM, so bye-bye Ethernet aliases. The built-in internal load balancer may help, though.
  • you cannot assign more than one public IP address to a cloud service, so you may have to use the SNI TLS extension.
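
For the DNS scenario above, newer azure-cli releases expose static-ip sub-commands that pin a VM’s internal address inside its subnet. The VM name dns1 and the address below are hypothetical, and the sub-command may not exist in older CLI versions:

azure vm static-ip show dns1           # inspect the current allocation
azure vm static-ip set dns1 10.0.2.4   # pin dns1 to a fixed address in its subnet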

Please read the Virtual Network FAQ thoroughly for more information.