For the next big project I'm working on (a web-based service), I have been looking at how to efficiently build a backbone that can handle many clients, how to enable proper scaling, and how to get the most bang for each buck.

There are a lot of exciting things going on in the world of the cloud - the big players, Amazon and Azure, are starting to lower their prices, and newer companies like Digital Ocean and Linode are pushing simpler products at an attractive price.
Yet there is reason to be alert - not everything is as clear as it seems, and unless you don't care about pricing at all, there are quite a few gaps to fall into.

So I'm going to note here a few design decisions, methods of analysis and comparisons I stumbled across while designing my application.

In almost any case, the first thing you need is a web server.
At least one web server node needs to be always on to serve the site, and as load increases, more might be needed.
Immediately the cloud sounds like a good fit - providers can spin up new instances within a few minutes, and some even have built-in load balancers. Nice and easy: Amazon is the uncrowned king of the hill here, so we pick that one and we are done... or are we?
A web server mostly needs CPU power and, to a lesser degree, RAM, so let's see what a decent server costs these days, with an apples-to-apples comparison, as far as that is possible.
We know we need to handle quite a load, so we want a quad core with at least 6GB RAM as our base unit.
  • Amazon m3.xlarge - 4 vCPU, 15GB RAM = $231/month Linux, $399/month Windows
  • Azure Large (A3) - 4 vCPU, 7GB RAM = $179/month Linux, $268/month Windows
  • Linode - 6 cores, 8GB RAM = $80/month - Linux only
  • Digital Ocean - 4 cores, 8GB RAM = $80/month - Linux only
  • Hetzner - 4 cores, 32GB RAM = $54/month Linux, $86/month Windows

Now that's quite the difference... take a moment to really let the numbers sink in.
Amazon costs 427% of the Hetzner price, and still 289% of what Linode and Digital Ocean charge.
There is a caveat, though - Hetzner delivers a real old-school hardware server, not a virtual one. That means it takes longer for them to provision the server, and you have to install the images manually, as there is no central image database you can instantiate from.
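Those ratios are easy to verify. Here is a minimal sketch in Python, using the monthly Linux prices from the list above (note that 427% of the price is the same as "4.28 times as expensive"):

```python
# Monthly Linux prices (USD) from the comparison above.
prices = {
    "Amazon m3.xlarge": 231,
    "Azure A3": 179,
    "Linode": 80,
    "Digital Ocean": 80,
    "Hetzner": 54,
}

amazon = prices["Amazon m3.xlarge"]
for name, price in sorted(prices.items(), key=lambda kv: kv[1]):
    # how many percent of the competitor's price Amazon charges
    print(f"Amazon costs {amazon / price * 100:.0f}% of the {name} price")
```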

Now, I would have imagined the CPU performance of each server to be roughly the same, core for core, but benchmarks tell a different story. Take a look:
  • 8 vCPUs of Amazon's Intel Xeon E5-2670 v2 2.5 GHz give a DaCapo time of 118.35 seconds
  • Azure Extra Large (A4) - DaCapo time of 170.58 seconds
  • My 4-core i7-2600K gives a DaCapo time of 95 seconds
  • The 4 cores of the i7-4770 3.5 GHz (Hetzner) do the same in 76.2 seconds
All in all, this means 8 vCPUs roughly equal 4 real cores on both Azure and Amazon... so with the 4 vCPU m3.xlarge we are essentially paying $231 for a dual core.
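To put a price on that, here is a rough sketch that converts each benchmark result into "effective cores". It assumes throughput is inversely proportional to DaCapo time, takes my i7-2600K (4 cores, 95 seconds) as the reference, and prices 8 cloud vCPUs as two of the 4 vCPU instances from the list above - all of which are my own simplifying assumptions:

```python
# "Effective cores" from the DaCapo times above, assuming throughput is
# inversely proportional to benchmark time. Reference: 4-core i7-2600K
# at 95 seconds. Prices: two 4 vCPU instances = 8 vCPUs.

REF_CORES, REF_TIME = 4, 95.0  # i7-2600K

# (name, DaCapo seconds for the benchmarked machine, monthly Linux price USD)
servers = [
    ("Amazon 8 vCPU (2x m3.xlarge)", 118.35, 2 * 231),
    ("Azure 8 vCPU (2x A3)", 170.58, 2 * 179),
    ("Hetzner i7-4770", 76.2, 54),
]

for name, seconds, price in servers:
    effective = REF_CORES * REF_TIME / seconds  # i7-2600K-equivalent cores
    print(f"{name}: ~{effective:.1f} effective cores, "
          f"${price / effective:.0f}/month per effective core")
```

Under those assumptions, Amazon lands at roughly $144 per effective core per month and Hetzner at roughly $11 - which is the real gap behind the headline prices.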

Let's fix the pricing to reflect this knowledge, doubling up the Amazon and Azure instances to get 8 vCPUs (roughly 4 real cores):
  • Amazon m3.xlarge x2 - 8 vCPU, 30GB RAM = $462/month Linux, $798/month Windows
  • Azure Large (A3) x2 - 8 vCPU, 14GB RAM = $358/month Linux, $536/month Windows
  • Linode - 6 cores, 8GB RAM = $80/month - Linux only
  • Digital Ocean - 4 cores, 8GB RAM = $80/month - Linux only
  • Hetzner - 4 cores, 32GB RAM = $54/month Linux, $86/month Windows

I'm really not sure how to say this, except: don't choose Amazon or Azure for raw CPU time on always-on instances.

There is one BIG difference, though - it shows once we start leveraging what the cloud was actually made for. Imagine we need to complete a certain number of tasks within a short time frame, and the load for that duration warrants some new instances.
Let's say it's a start-of-day / end-of-day event, so two really big spikes of about 20 minutes each.
Price of a 20-minute instance:
  • Amazon m3.xlarge x2 - 8 vCPU, 30GB RAM = $0.616 Linux, $1.064 Windows
  • Azure Large (A4) - 8 vCPU, 14GB RAM = $0.144 Linux, $0.216 Windows
  • Linode - 6 cores, 8GB RAM = $0.12 Linux
  • Digital Ocean - 4 cores, 8GB RAM = $0.12 Linux
  • Hetzner - monthly instances only

Notice the sharp drop for Azure - that's because Azure only bills you for the 20 minutes you actually use, while the others round up to a full hour... quite important to take into consideration.
This should give you an overview of the CPU time prices of the different providers. In the next part I will talk a bit about how to optimize the servers for streaming large amounts of data, and how design decisions affect the price calculation at the cloud providers.
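The rounding effect compounds over a month. A minimal sketch (the hourly rates are derived from the 20-minute prices above; the two-spikes-a-day schedule is the scenario described earlier, and the helper function is my own illustration, not any provider's API):

```python
import math

def cost_per_run(minutes, hourly_rate, per_minute_billing):
    """Cost of one burst, with or without rounding up to a full hour."""
    if per_minute_billing:
        return hourly_rate * minutes / 60
    return hourly_rate * math.ceil(minutes / 60)

# Hourly Linux rates implied by the 20-minute prices above:
# Amazon bills $0.616 for 20 minutes because it rounds up to one hour;
# Azure's $0.144 for 20 minutes implies a $0.432 hourly rate.
amazon = cost_per_run(20, 0.616, per_minute_billing=False)  # $0.616
azure = cost_per_run(20, 0.432, per_minute_billing=True)    # $0.144

# Two 20-minute spikes a day, 30 days a month = 60 bursts:
print(f"Amazon: ${60 * amazon:.2f}/month, Azure: ${60 * azure:.2f}/month")
```

For this burst pattern the per-minute billing alone makes Azure roughly four times cheaper than Amazon, even at a comparable hourly rate.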
Instead of adding a long list of previous projects on the homepage (which would be a boring read), we created this blog, which will explain a bit about what's going on at the moment, why it's important, and the challenges it contains.

First off are a few projects we are currently developing for Nautronic:

Some background:
The NauCon-1000 is a 5" touch screen controller for 15+ sport types. It features:
  • Standard RS485 for special cases
  • A long-range wireless 2.4 GHz transceiver - using a unified protocol stack designed for high throughput, low jitter and efficient use of transmit time (thus leaving the channel open for longer)
  • A USB bootloader to easily update the software in the field (of course, settings and game data can also be exported to or loaded from USB)
  • All coded in plain C, with the CrossWorks tasking library used as an "OS"

After the release of the NauCon-1000 in 2013, which now controls all of their new scoreboards, there was a need to make its data available to third parties like TV stations (you can't have, say, the game time on a TV screen differ from what's in the game), and also to let a PC take over the role of controlling the displays (either for sports or for custom industrial controls).

We continued to develop that protocol access component and are now proud to present a Virtual Scoreboard.
You can now get the look and feel of a scoreboard on the big screens.

To give you a feel for how large these systems really are, take a look at the gallery below.