Thursday, November 5, 2015

60 messages per second

Kx 1.3.2 is the next release of Kronometrix for the x64 platform. We are currently busy testing some new features in our QA environment: we are sending data from 750 data sources, delivered by our DataCenter Simulator, with each system delivering 5 messages.

Since yesterday we have processed more than 5 million messages at 60% usage.

Appliance Usage

These are the main screens of the appliance administration, which show the utilization and throughput of our system in real time. We also include other information, such as how many users, subscriptions, and data messages we are processing.



And here is the system utilization:



This runs our latest Kronometrix on a *very* old hardware appliance, on purpose, to test and stress our software on an older spec. The red and blue pipelines are working just fine.

Kronometrix can consume events from different sources in real time: IT computer systems, weather stations, data loggers, and applications. This means we can process and display data as soon as it arrives at our machinery, and continuously prepare the information for consumption by different dashboards. But not all data can be dispatched and displayed as soon as it arrives. Here is why, and how we do it.


The red (hot) pipeline

Inside our kernel we process all incoming data, the raw data, on a main line: the red pipeline. This is the most important pipeline within our analytics kernel. Here we transform raw data into statistics, such as MIN, MAX, and AVG, for charting across different time intervals.

All sorts of UI dashboards are attached to the red pipeline, to display data as soon as it has been processed. On this line a high level of redundancy and efficiency is enforced: the kernel must filter and compute at a very fast rate, so we spend a lot of time optimizing and testing it.
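To give an idea of the kind of work the red line does, here is a minimal sketch in Python of streaming MIN/MAX/AVG statistics per data source. The class and field names are ours for illustration only; they are not the actual Kronometrix kernel code.

```python
from collections import defaultdict

class IntervalStats:
    """Running MIN/MAX/AVG for one metric over one time interval."""
    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.min = float("inf")
        self.max = float("-inf")

    def update(self, value):
        # one pass per incoming message: no raw samples are stored
        self.count += 1
        self.total += value
        self.min = min(self.min, value)
        self.max = max(self.max, value)

    @property
    def avg(self):
        return self.total / self.count if self.count else 0.0

# one stats object per (source, metric) pair
stats = defaultdict(IntervalStats)
for source, metric, value in [("web01", "cpu", 42.0),
                              ("web01", "cpu", 58.0),
                              ("web01", "cpu", 50.0)]:
    stats[(source, metric)].update(value)

s = stats[("web01", "cpu")]
print(s.min, s.max, s.avg)   # 42.0 58.0 50.0
```

The point is that each message is folded into the running statistics and then discarded, which is what lets this line keep up with the incoming rate.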


The blue (cold) pipeline

On a different line we compute the summary of summaries, both numerical and inventory: top-level aggregates that should live their own life and not cause any trouble to the main incoming red line. Some examples of such aggregates:

  • Top 10 systems by CPU utilization
  • Average and maximum CPU utilization across all computers in a data center, over different time intervals
  • Total number of network cards across all computer systems
  • Operating system distribution across the data center
  • Disk kIOPS across all systems

We call this the blue line. Here things are aggregated at a slower rate, but they are still available for UIs and dashboards. This line should not use as much computing power as the red line, and it should be possible to shut it down and run Kronometrix without it, if we want to.
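A blue-line aggregate like "top 10 systems by CPU utilization" can be sketched in a few lines, assuming the red line has already produced per-system averages. The dictionary of hosts and values below is made up for illustration.

```python
import heapq

# hypothetical per-system CPU averages, already produced by the red line
cpu_avg = {
    "web01": 81.5, "web02": 12.3, "db01": 95.0,
    "db02": 77.1, "app01": 64.8, "app02": 33.2,
}

# blue-line aggregate: top 3 systems by average CPU utilization
top3 = heapq.nlargest(3, cpu_avg.items(), key=lambda kv: kv[1])
print(top3)  # [('db01', 95.0), ('web01', 81.5), ('db02', 77.1)]
```

Because this works on summaries rather than raw messages, it can run at whatever pace suits the dashboards that consume it.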

A top goal was to make the entire processing as modular as possible, and to ensure that a failure in one part of the kernel will not bring down the entire kernel.


Top goals

  • modular design
  • the red pipeline must always be ON and highly available
  • red is hot: it will always run and use more computing resources
  • the blue pipeline can be stopped
  • blue is cold: it should not use much computing power
  • it must be easy to stop the blue pipeline without disrupting the red pipeline
  • dashboards can be bound to red or blue

With these goals met, we can say we efficiently process data in real time, display it as soon as it arrives, and summarize it as we want, without sacrificing performance or uptime.
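The decoupling between the two lines can be sketched with two worker threads and a queue between them: the red line feeds summaries into a buffer, and the blue line can be stopped at any time while the red line keeps running. This is a toy model of the design goal, not how the kernel is actually built.

```python
import queue
import threading
import time

raw = queue.Queue()          # incoming raw messages
summaries = queue.Queue()    # red-line output, also consumed by the blue line

def red_pipeline(stop):
    # hot line: always on, folds every raw message into a summary
    while not stop.is_set():
        try:
            msg = raw.get(timeout=0.1)
        except queue.Empty:
            continue
        summaries.put(msg)   # stand-in for the real statistics step

def blue_pipeline(stop):
    # cold line: drains summaries, can be shut down independently
    while not stop.is_set():
        try:
            summaries.get(timeout=0.1)
        except queue.Empty:
            continue

red_stop, blue_stop = threading.Event(), threading.Event()
threading.Thread(target=red_pipeline, args=(red_stop,), daemon=True).start()
blue = threading.Thread(target=blue_pipeline, args=(blue_stop,), daemon=True)
blue.start()

raw.put({"source": "web01", "cpu": 42.0})
time.sleep(0.5)
blue_stop.set()              # shut down the blue line only
blue.join()

raw.put({"source": "web01", "cpu": 58.0})   # red line keeps working
time.sleep(0.5)
print(summaries.qsize())     # the second message was still summarized
```

The only coupling is the summaries queue, so stopping the blue consumer never blocks the red producer.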

We are the makers of Kronometrix. Come and join us!