<![CDATA[BL-Logic - Blog]]>Fri, 21 Jul 2017 01:30:43 +0100Weebly<![CDATA[Misc... a bit of everything]]>Tue, 15 Sep 2015 22:47:02 GMThttp://bl-logic.dk/1/post/2015/09/misc-a-bit-of-everything.html
First off: Intelligent Road Guides.

The new LED-GUIDE design uses a long-range 868 MHz radio (2 km) and a high-power LED (1.2 A), and is built for harsh environments - like being glued onto the road surface.

The new development here is that they need to be programmed remotely, i.e. from the internet - for that we made an internet gateway, seen in the image on the left, based on our custom long-range radio and a standard Raspberry Pi ( indoor use only ).

On top of that we made a mobile-first web app ( Bootstrap ) hosted in the Azure cloud.
The trick here is to get good reaction times without incurring high costs from excessive polling, especially since very little time may pass between a car being detected and its guidance needing to be ready.

For this we used WebSockets, which gave very quick reaction times - down to the actual round-trip time - and also proved very effective at reducing jitter, which was a requirement for making the flashing visually pleasing.
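To give an idea of the push pattern, here is a minimal sketch - the post only says WebSockets, so the SignalR hub and the names below are assumptions, not the actual gateway protocol:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Events are pushed to subscribed clients the moment they happen,
// so the reaction time is one round trip instead of a polling interval.
public class GuideHub : Hub
{
    // Clients join the group for the lane / site they care about.
    public Task Subscribe(string laneId)
    {
        return Groups.Add(Context.ConnectionId, laneId);
    }

    // Called when a car has been detected; every subscriber gets it immediately.
    public void CarDetected(string laneId)
    {
        Clients.Group(laneId).carDetected(laneId);
    }
}
```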

The web application also runs timer schedules on a separate thread, so any user can specify a program to be run during certain time periods.
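A rough sketch of how such a scheduler can run separately from the request handling - the class and method names here are illustrative, not from the actual application:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class ScheduleRunner
{
    private class Slot { public TimeSpan Start, End; public string Program; }

    private readonly List<Slot> _slots = new List<Slot>();
    private readonly Timer _timer;

    public ScheduleRunner()
    {
        // A timer on its own thread-pool thread checks once a minute whether a program should run.
        _timer = new Timer(_ => Tick(), null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    public void Add(TimeSpan start, TimeSpan end, string program)
    {
        lock (_slots) _slots.Add(new Slot { Start = start, End = end, Program = program });
    }

    private void Tick()
    {
        var now = DateTime.Now.TimeOfDay;
        lock (_slots)
            foreach (var slot in _slots)
                if (now >= slot.Start && now < slot.End)
                    ActivateProgram(slot.Program);
    }

    // Hypothetical hook into the gateway / radio layer that actually switches programs.
    private void ActivateProgram(string program) { }
}
```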



Cash Registers:

Too many changes to cover in one go, but some highlights:

* Support for the next generation of wireless payments, like meewallet and s5.

* Improved statistics for handling discounts - discounts can now be tracked down to individual lines on each bill, meaning total control over how coupons are used, which combinations are most popular and how effectively the coupons / discounts are used in general. This also gives much better traceability of abuse and "theft" by employees.

* Improved reporting regarding customer cards and VIP cards, including tracking of income vs. discounts given.

* Improved stock management in high-throughput situations ( like stadiums with dozens of cash registers running concurrently at high intensity )

* Overall much improved speed of transactions




]]>
<![CDATA[Music Streaming - Done Right?]]>Thu, 30 Apr 2015 13:41:43 GMThttp://bl-logic.dk/1/post/2015/04/music-streaming-done-right.html
It's already been half a year, and the music service is out in the open.

The initial reception of the new in-store music player has been very positive.
Being a Windows universal app with an emphasis on drag and drop + touch gestures makes organizing your music and playlists quick, easy and intuitive.





The server API was made in C# with NancyFX's "super-duper-happy-path" - which it indeed is.
It's clean and concise - making changes is a breeze, so you can focus on the real issues at hand.
http://nancyfx.org/
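For anyone who hasn't seen Nancy before, a module looks roughly like this - the route and the FindSong helper are illustrative only, not the real API of the service:

```csharp
using Nancy;

public class SongsModule : NancyModule
{
    public SongsModule() : base("/api/songs")
    {
        // GET /api/songs/{id} - look the song up and return it as JSON.
        Get["/{id}"] = parameters =>
        {
            var song = FindSong((string)parameters.id);   // hypothetical lookup against the database
            if (song == null)
                return HttpStatusCode.NotFound;
            return Response.AsJson(song);
        };
    }

    private object FindSong(string id)
    {
        // Sketch only - the real service queries Couchbase / Elasticsearch here.
        return null;
    }
}
```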


Couchbase was selected for the database backend, as it easily scales with the expected load, integrates with Elasticsearch and is easy to administrate as a high-availability service.
The N1QL previews were not stable enough when we evaluated the database, so we had to opt out of that... we will look into it again once Couchbase 4 has been released and patched a few times based on the reports from the first adopters.

So instead of SQL and its full-text search we opted to use Elasticsearch as the full-text search engine to match songs against search queries, giving good fuzziness and fewer frustrated users who cannot find songs they accidentally misspelled.
Luckily it's also fast - results are updated on the client almost in real time as the user types.
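A minimal sketch of that kind of fuzzy title search, assuming the NEST .NET client and a hypothetical Song document with a Title property:

```csharp
using Nest;

public class Song
{
    public string Title { get; set; }
    public string Artist { get; set; }
}

public static class SongSearch
{
    public static ISearchResponse<Song> Find(IElasticClient client, string text)
    {
        // Fuzziness.Auto lets Elasticsearch tolerate small misspellings in the query.
        return client.Search<Song>(s => s
            .Query(q => q
                .Match(m => m
                    .Field(f => f.Title)
                    .Query(text)
                    .Fuzziness(Fuzziness.Auto))));
    }
}
```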

At the moment we are working on improving the backend around it - subscription handling, homepage design etc. - so we are ready to start the PR engine for real.
Work is being done to release a private version later this year, so feel free to give your criticism at that point on what can be improved.

]]>
<![CDATA[Computing on the Cloud Part 2]]>Wed, 29 Oct 2014 22:50:06 GMThttp://bl-logic.dk/1/post/2014/10/computing-on-the-cloud-part-2.htmlI have been sidetracked by a lot of projects since the last post.
But let me share a project with you that builds on the knowledge from the last post.

I have been working on reworking / redesigning our current in-store music streaming system.
While it works well, it's not really easily scalable.
It's based on a Windows server containing our application services & MySQL, plus standard Linux web hosts for streaming the actual music.

The issues here are:
  • MySQL is an issue: a single point of failure ( unless you build a large and complicated deployment ), a bottleneck ( it doesn't scale well with writes ), and data loss is likely ( we had a few DB failures & corrupt tables over the years ).
  • Standard web hosting required us to manually upload the files by FTP to each server once they were tagged and indexed.

The Goals Were:
  • High reliability - no single point of failure
  • Automatic Failover in case a server dies
  • No administration
  • Easily scalable

Those who read part 1 of this series will know that any machine that is required to be always on should not be placed on Azure / Amazon AWS - you simply get too little for your money.
And since the bare-metal servers we already have are so powerful, they have lots of capacity to spare - so the decision was made to run on our existing hardware from Hetzner and Leaseweb ( giving geo redundancy ) - servers that really were just bored with their current jobs.

The Search For a MySQL Replacement
After trying out pretty much all free DBs available, I found just ONE(!) that actually works out of the box on Windows, can deliver high availability, doesn't require the servers to be on a LAN, and doesn't involve a lot of manual settings like names & IP addresses that need to be updated each time a server is decommissioned / instanced.
It's also the only one with a fully working web interface that didn't crumble under my testing abuse :)
The last candidate standing was: Couchbase
( although I did find a memory leak in Couchbase when abusing it the way I did - they are looking into it )

So now we have one cluster of 2 quad-core / 32GB RAM servers as a testing base, with a second cluster as backup & replication.
So what does that give us?
  • There is very little management - all needed actions & viewing current status can be done at any time from a good web interface.
  • Low Cost - you can use consumer hardware, and they don't need any special connections to work as a cluster.
    In this case I'm just using the RAM & CPU cycles that were free on the servers to begin with.

  • Couchbase automatically rebalances & replicates data between nodes; there is no master, so there is no single point of failure
  • Easily scalable - queries are load balanced between the servers ( adding more servers -> quicker response times for ALL queries )
All in all a good match - we have the database covered ( the issues I had with not being able to rely on SQL are another matter entirely - be ready to rethink how you query / structure the data ).
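To show how little ceremony the document model needs, here is a minimal sketch assuming the Couchbase .NET SDK - the node address, bucket and key names are made up for the example:

```csharp
using System;
using System.Collections.Generic;
using Couchbase;
using Couchbase.Configuration.Client;

class CouchbaseDemo
{
    static void Main()
    {
        // The SDK discovers the remaining nodes by itself, so no per-node
        // configuration is needed when servers are added or removed.
        var cluster = new Cluster(new ClientConfiguration
        {
            Servers = new List<Uri> { new Uri("http://db1.example.com:8091") }
        });

        using (var bucket = cluster.OpenBucket("music"))
        {
            // Documents are plain JSON; keys are chosen by the application.
            bucket.Upsert("song::42", new { Title = "Example Song", DurationSeconds = 215 });

            var result = bucket.Get<dynamic>("song::42");
            Console.WriteLine(result.Value.Title);
        }
    }
}
```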

How to handle streaming and administration of files
To have a zero-administration system you need a reliable backend... we already have Couchbase - how about using that as a file store?
Sadly that won't do - Couchbase has a document size limit of 20MB, so that's a no-go.
The second option would be to use Amazon S3 or Azure Storage, and just let them handle the sharding and balancing.
While it works perfectly even in high-stress situations, the high bandwidth pricing ( it costs more to move a file once than to store it for a full month ) makes it a really bad deal.
So what I did was create a hybrid approach:
The program that reads the music tags from the files automatically splits, obfuscates and uploads them to Azure Storage ( it might just as well be S3 ).
Then I created a lightweight cache web server that really only does two things: when a file is requested it looks in the local cache, and if the file is not there it requests it from Azure and stores it locally; once that is done, streaming starts.
Usually with a 4MB MP3 that's over in 1-2 seconds ( while these servers have a high upload load, the download bandwidth is free to fetch new cache files even during peak hours ).
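A minimal sketch of that cache-or-fetch logic - the base URL is a stand-in for the real blob container, and the real server additionally handles the obfuscation and concurrent requests:

```csharp
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

public class CachedFileStore
{
    private static readonly HttpClient Http = new HttpClient();
    private readonly string _cacheDir;
    private readonly string _blobBaseUrl;   // e.g. the ( SAS-protected ) URL of the storage container

    public CachedFileStore(string cacheDir, string blobBaseUrl)
    {
        _cacheDir = cacheDir;
        _blobBaseUrl = blobBaseUrl;
    }

    // Returns a readable stream for the requested file, pulling it from Azure
    // into the local cache first if it is not already there.
    public async Task<Stream> OpenAsync(string fileName)
    {
        var localPath = Path.Combine(_cacheDir, fileName);

        if (!File.Exists(localPath))
        {
            using (var remote = await Http.GetStreamAsync(_blobBaseUrl + fileName))
            using (var local = File.Create(localPath))
            {
                await remote.CopyToAsync(local);
            }
        }

        return File.OpenRead(localPath);
    }
}
```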

So what does that give us?
  • There is no management - KISS
  • In case a server dies - all other can still handle any request - no single point of failure
  • Easily scalable - just rent some new servers and copy-paste a single exe... it's ready for use.
  • As Azure only functions as a backup / replication service we can easily set up new servers & new files without having to worry about how many replicas are available - and we don't pay the high bandwidth fees for our normal usage.
    This can easily be extended so servers try to get the files from each other before asking Azure, reducing the costs for new server instances even further.



]]>
<![CDATA[Computing on the Cloud part 1]]>Fri, 13 Jun 2014 21:45:13 GMThttp://bl-logic.dk/1/post/2014/06/1.htmlFor the next big project I'm working on ( a web-based service ) I have been looking at how to efficiently create a backbone that can handle many clients, how to enable proper scaling, and how to get the most bang for each buck.

There are a lot of exciting things going on in the world of the cloud - the big guys Amazon and Azure are starting to lower their prices, and new players like Digital Ocean and Linode are pushing simpler products at an attractive price.
Yet there is reason to be alert - not everything is as clear as it seems... and unless you don't care about pricing at all, there are quite a few gaps you can fall into.

So I'm going to note down a few design decisions, methods of analysis and comparisons I stumbled across while designing my application.

In almost any case, the first thing you need is a web server...
At least one web server node needs to be always on to serve the site, and as load increases more might be needed.
Immediately the cloud sounds like a good solution - providers can scale up within a few minutes and some even have built-in load balancers... nice and easy; Amazon is the uncrowned king of the hill here, so we pick that one and we are done... or are we?
For a web server we mostly need CPU power and, to a lesser degree, RAM, so let's see what a decent server costs these days, with an apples-to-apples comparison as far as that is possible.
We know we need to handle quite a load, so we want a quad core with at least 6GB RAM as our base unit.
  • Amazon m3.xlarge - 4 vCPU, 15GB RAM = 231$ / month Linux, 399$ / month Windows
  • Azure Large (A3) - 4 vCPU, 7GB RAM = 179$ / month Linux, 268$ / month Windows
  • Linode - 6 cores, 8GB RAM = 80$ / month - Linux only
  • Digital Ocean - 4 cores, 8GB RAM = 80$ / month - Linux only
  • Hetzner - 4 cores, 32GB RAM = 54$ / month Linux, 86$ / month Windows

Now that's quite the difference... take a moment to really let the numbers sink in.
Amazon costs 427% of the Hetzner price ( more than four times as much ), and still 289% of what Linode and Digital Ocean charge.
There is a caveat though - Hetzner delivers a real old-school hardware server, not a virtual one... which means it takes longer for them to provision the server, and you have to install the VM images manually as there is no central image database you can instantiate from.

Now I would have imagined the CPU performance of each server to be roughly the same core for core, but the benchmarks tell a different story, take a look:
  • 8 vCPU of Amazon's Intel Xeon E5-2670 v2 2.5 GHz: DaCapo time of 118.35 seconds
  • Azure Extra Large (A4), 8 vCPU: DaCapo time of 170.58 seconds
  • My 4-core i7-2600K: DaCapo time of 95 seconds
  • The 4 cores of the i7-4770 3.5 GHz (Hetzner): 76.2 seconds
All in all it means that 8 vCPU ≈ 4 real cores on both Azure and Amazon... so essentially we are paying 231$ / month for a dual core.

Let's adjust the pricing to reflect this knowledge:
  • Amazon m3.2xlarge - 8 vCPU, 30GB RAM = 462$ / month Linux, 798$ / month Windows
  • Azure Extra Large (A4) - 8 vCPU, 14GB RAM = 358$ / month Linux, 536$ / month Windows
  • Linode - 6 cores, 8GB RAM = 80$ / month - Linux only
  • Digital Ocean - 4 cores, 8GB RAM = 80$ / month - Linux only
  • Hetzner - 4 cores, 32GB RAM = 54$ / month Linux, 86$ / month Windows


I'm really not sure how to put this, except: don't choose Amazon or Azure for CPU time / always-on instances.

There is one BIG difference though - once we start leveraging what the cloud was made for. Imagine we need to run a certain amount of tasks within a short time frame, and the load for that duration warrants some new instances...
Let's say it's a start-of-day / end-of-day event, so two really big spikes within 20 minutes.

Price of a 20-minute instance:
  • Amazon m3.2xlarge - 8 vCPU, 30GB RAM = 0.616$ Linux, 1.064$ Windows
  • Azure Extra Large (A4) - 8 vCPU, 14GB RAM = 0.144$ Linux, 0.216$ Windows
  • Linode - 6 cores, 8GB RAM = 0.12$ Linux
  • Digital Ocean - 4 cores, 8GB RAM = 0.12$ Linux
  • Hetzner ( only monthly instances )

Notice the sharp drop for Azure - that's because Azure bills you for just the 20 minutes instead of rounding up to a full hour like the others... quite important to take into consideration.
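To make the difference concrete, here is a toy cost calculation ( the hourly rate is illustrative; only the rounding rule matters ):

```csharp
using System;

class BurstCostDemo
{
    // Cost of a short burst: per-minute billing vs. rounding up to whole hours.
    static decimal BurstCost(decimal hourlyRate, int minutesUsed, bool billedPerMinute)
    {
        if (billedPerMinute)
            return hourlyRate * minutesUsed / 60m;            // Azure-style: pay only for the minutes used
        return hourlyRate * Math.Ceiling(minutesUsed / 60m);  // hour-rounded: 20 minutes is billed as a full hour
    }

    static void Main()
    {
        // With a ( hypothetical ) 0.60$ / hour instance and a 20 minute spike:
        Console.WriteLine(BurstCost(0.60m, 20, billedPerMinute: true));   // 0.20$
        Console.WriteLine(BurstCost(0.60m, 20, billedPerMinute: false));  // 0.60$ - three times as much
    }
}
```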
This should give you an overview of the CPU time prices of the different providers. In the next part I will talk a bit about how to optimize the servers for streaming large amounts of data, and how the design decisions affect the price calculation with the different cloud providers.
]]>
<![CDATA[First development diary]]>Fri, 13 Jun 2014 14:39:21 GMThttp://bl-logic.dk/1/post/2014/06/11.htmlInstead of adding a long list of previous projects to the homepage ( which would be a boring read ), we created this blog, which will explain a bit about what is going on at the moment, why it is important, and the challenges it involves.

First off are a few projects we currently develop for Nautronic: http://www.nautronic.com/

Some Background: 
NauCon-1000 is a 5" touch screen controller for 15+ sport types.
Features:
  • Standard RS485 for special cases
  • Long-range wireless 2.4 GHz transceiver - using a unified protocol stack designed for high throughput, low jitter and effective use of transmit time ( thus leaving the channel open for longer )
  • USB bootloader to easily update software in the field ( of course, settings and game data can also be exported to or loaded from USB )
  • All coded in plain C, with the CrossWorks tasking library used as an "OS".

After the release of the NauCon-1000 in 2013, which now controls all of their new scoreboards, there was a need to make its data available to third parties like TV stations ( you can't have, say, the game time on TV differ from what's on the scoreboard ), and also to let a PC take over the role of controlling the displays ( either for sports or for custom industrial controls ).

We continued to develop that protocol access component and are now proud to present a Virtual Scoreboard.
You can now get the look and feel of a scoreboard on the big screens.

To give you a feel for how large these systems really are, take a look at the gallery below.

]]>
<![CDATA[New Site up]]>Wed, 05 Mar 2014 01:37:53 GMThttp://bl-logic.dk/1/post/2014/03/new-site-up.htmlFinally got the new site up - it has been a long time between updates.
]]>