Monday, December 28, 2015

Web Performance, tell me the story

Suppose you would like to know how your web site behaves, in terms of response time, for clients in two different geographic locations: for example, Finland and Ireland. How quickly can you see and visually understand what is going on?

Do you need to improve something? Change your network provider? Tune your applications? Spend more money? Start from a top-level view, understand the big picture, and then dive in and analyze at the detail level. Finally, can you visually tell the story of your web application in a short chart?

Enter Kronometrix Web Performance ...


Finland

  • Something happened between 1:00 and 1:15 AM
  • We can see more than one request affected by this event
  • Overall, requests execute fast, below 0.5 seconds
  • There are some exceptions that take longer than 0.5 seconds

Ireland

  • The same view from Ireland
  • The picture is different: requests usually take longer to execute
  • Some requests are much slower than in Finland


Then we want to know more: check individual requests and observe their evolution over time.




And then we can dive and analyze even further ...

Kronometrix has to be simple and self-explanatory. We believe our implementation, based on the streamgraph, can very quickly identify what is slow, show its evolution over time, and then, in one click, dive into another level of detail. It is like telling the story of your application over time.




Thursday, December 10, 2015

Web Performance subscription

We have been busy adding Web Performance support to our appliance. That means anyone running an HTTP application can send data to Kronometrix for analysis. Our idea of monitoring Web applications is a simple one: it starts from the operating system, the HTTP server and the application itself. We report the most important metrics, including the application's response times. To make things even easier, we want to support complete solution stacks, like LAMP (there is still a lot of work to fully support them).

To have a complete picture of the Web service, we have introduced the workload management concept inside Kronometrix: it gathers and reports data from one or many data sources and binds them to an SLA for upper management reporting. Nice and easy.

Here are some early development snapshots from our implementation. Let's first switch to night mode; it is 23:10 here in Finland. So, here you go:

All requests dashboard


This shows a number of HTTP requests gathered over time, as a stream graph.

All requests as a stream
Then, for instant consumption, we switch to something simpler: a bar chart with breakdowns for each request:

Instant response time

Simple to see all requests as a continuous stream of data over time


Per-request dashboard


A simple breakdown reporting the following metrics:

  #01 timestamp : seconds since Epoch, time

  #02 request   : the HTTP request name

  #03 ttime     : total time the entire operation lasted, seconds

  #04 ctime     : connect time, from the start until the TCP
                  connect to the remote host (or proxy) completed, seconds

  #05 dnstime   : name lookup time, from the start until name
                  resolving completed, seconds

  #06 ptime     : protocol time, from the start until the file
                  transfer was just about to begin, seconds

  #07 pktime    : first packet time, from the start until the first
                  byte was just about to be transferred, seconds

  #08 size      : page size, the total number of bytes downloaded

  #09 status    : response status code, the numerical code
                  found in the last retrieved HTTP(S) transfer

looking like this:

Per-request response time

Easy to break down at the request level, including outliers
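The timing fields above line up closely with the variables curl can report through its `-w` write-out option, so a similar record can be collected from the command line. A minimal sketch, not the actual recorder, with a placeholder URL:

```shell
#!/bin/sh
# Collect one request record using curl's write-out variables.
# Illustrative only: the URL is a placeholder, not a real endpoint.
URL=${1:-https://example.com/}
printf '%s ' "$(date +%s)"   # timestamp, seconds since Epoch
curl -s -o /dev/null -w \
  '%{time_total} %{time_connect} %{time_namelookup} %{time_pretransfer} %{time_starttransfer} %{size_download} %{http_code}\n' \
  "$URL"
```

Here time_pretransfer and time_starttransfer correspond to ptime and pktime in the breakdown above.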


Next to these dashboards are the operating system metrics, giving a complete picture of the running system. Later we will show how we define and report all resources combined as a complete workload. Stay tuned and join our discussion group, here


We are the makers of Kronometrix, come and join us !




Thursday, November 5, 2015

60 messages per second

Kx 1.3.2 is our next release of Kronometrix for the x64 platform. Currently we are busy testing some new features in our QA environment. We are sending data from 750 data sources, delivered by our DataCenter Simulator, each system delivering 5 messages.

Since yesterday we have processed more than 5 million messages at 60% usage.
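The headline figure follows directly from the test parameters; a quick sanity check of the arithmetic:

```shell
# 750 data sources x 5 messages each, delivered once per minute
sources=750; messages=5; interval=60
echo $(( sources * messages / interval ))   # prints 62, roughly the 60/s in the title
```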

Appliance Usage

These are the main screens of the appliance administration, which show the utilization and throughput of our system in real time. We also include other information, such as how many users, subscriptions and data messages we are processing.



and here is the system utilization:



This runs our latest Kronometrix on a *very* old hardware appliance, on purpose, to test and stress our software on older hardware. The red and blue pipelines are working just fine.

Kronometrix can consume events from different sources in real time: IT computer systems, weather stations, loggers, applications. This means we can process and display data as soon as it arrives at our machinery, and we can prepare the information for consumption on different dashboards, continuously. But not all data can be dispatched and displayed as soon as it arrives. Here is why, and how we do it.


The red (hot) pipeline

Inside our kernel we process all incoming raw data on a main line, the red pipeline. This is the most important pipeline within our analytics kernel. Here we transform raw data into statistics, like MAX, MIN and AVG, for charting across different time intervals.

All sorts of UI dashboards are attached to the red pipeline, to display data as soon as it has been processed. On this line a high level of redundancy and efficiency is enforced. Here the kernel must filter and calculate at a very fast rate, so we spend a lot of time optimizing and testing.


The blue (cold) pipeline

On a different line we compute the summary of summaries, numerical and inventory: top-level aggregates that live a different life and should not cause any trouble to the main incoming red line. Some examples of such aggregates:

  • Top 10 systems consuming highest CPU utilization 
  • Avg, Max CPU Utilization across all computers in a data center on different time intervals
  • Total number of network cards across all computer systems
  • Operating System distribution across data center
  • Disk kIOPS across all systems

We call this the blue line. Here things are aggregated at a slower rate but are still available for UIs and dashboards. This line should not use as much computing power as the red line, and it should be possible to shut it down and run Kronometrix without it, if we want to.
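As an illustration of the kind of work the blue line does, a top-N aggregate over a flat sample file can be sketched with standard tools. The input format "hostname cpu_pct" is assumed for illustration, not the actual raw data layout:

```shell
#!/bin/sh
# Hypothetical blue-pipeline aggregate: top 3 systems by CPU utilization.
# Input format is assumed: "hostname cpu_pct", one sample per line.
cat > /tmp/cpu_samples.txt <<'EOF'
web01 87.5
db01 92.1
app01 45.0
web02 63.2
EOF
sort -k2 -rn /tmp/cpu_samples.txt | head -3   # highest utilization first
```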

A top goal was to make the entire processing as modular as possible and to ensure that a failure in one part of the kernel will not bring down the entire kernel.


Top goals

  • modular design
  • red pipeline must be always ON and highly available
  • red is hot, will always run and use more computing resources
  • blue pipeline can be stopped
  • blue is cold, and should not use lots of computing power
  • easy to stop blue pipeline without disrupting the red pipeline
  • dashboards can be bound to red or blue

With these top goals we can efficiently process data in real time, display it as soon as it arrives, and summarize it as we want, without sacrificing performance and uptime.

We are the makers of Kronometrix, come and join us

Thursday, August 27, 2015

The STALL data filter

Currently Kronometrix supports two types of raw data filters: the STALL and RANGE filters.

A raw data filter is a mechanism to ensure that incoming raw data follows certain rules or conditions before numerical processing and data visualization within Kronometrix. For example, the STALL filter ensures raw data is arriving properly at our appliance, without delays. The RANGE filter ensures incoming raw data is sane and valid, staying within a certain range of values: for example, air temperature is valid as long as it is between -50C and 50C, and the CPU utilization of a Windows server must be between 0 and 100%.
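The RANGE idea can be sketched in a few lines of shell; the bounds and values are the examples from above, and the real filter of course runs inside the appliance, not as a script:

```shell
#!/bin/sh
# Hedged sketch of a RANGE check: a value is valid only while it stays
# inside the configured bounds (here air temperature, -50C..50C).
in_range() { awk -v v="$1" -v lo="$2" -v hi="$3" 'BEGIN { exit !(v >= lo && v <= hi) }'; }
in_range 23.4 -50 50 && echo valid || echo invalid   # prints valid
in_range 72.0 -50 50 && echo valid || echo invalid   # prints invalid
```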


The STALL Filter

Defined in messages.json, part of the Library of Monitoring Objects, the STALL filter is part of the data message definition. For example, this is how to define the STALL for a data message called sysrec, which belongs to the Linux data source type (a computer system running the CentOS Linux operating system):

stall: seconds

The STALL filter is defined in seconds, describing the time to wait before triggering a STALL warning in the Kronometrix event management console:
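The essence of the STALL check can be sketched as comparing a raw-data file's last update time against the configured window. The path and threshold below are illustrative, and the real filter runs inside the appliance, driven by messages.json:

```shell
#!/bin/sh
# Hedged STALL sketch: warn when a raw-data file has not been updated
# within the stall window. Path and threshold are illustrative.
FILE=${1:-/opt/kronometrix/log/current/sysrec.krd}
STALL=${2:-120}   # seconds, as it would be configured in messages.json
now=$(date +%s)
last=$(stat -c %Y "$FILE" 2>/dev/null || stat -f %m "$FILE")   # GNU stat, then BSD stat
age=$(( now - last ))
[ "$age" -gt "$STALL" ] && echo "STALL: no update for ${age}s" || echo "OK"
```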



Turn them off

Starting with Kronometrix 1.2 we can turn the STALL detector ON or OFF per subscription. That means if we have several data subscriptions, we can decide for which subscriptions the filter should be ON or OFF, no matter what messages.json has configured.

See here:



The dcsim and mix subscriptions require some updates: some computer systems need maintenance work, and during this period we don't want to receive warnings and alerts from any sources belonging to these two subscriptions. So we turned OFF the STALL filters for both.


Why the STALL filter is important and you should use it

  • because we want to see and be notified when we are not receiving data from sensors, computer hosts, weather stations, etc.
  • because we want to keep a close eye on how often data is missing or delayed
  • because regulations and laws may force us to monitor and report these delays (for example, airports and air traffic control)

Sunday, July 26, 2015

Windows, Linux, FreeBSD all welcome

By default, for the IT business, Kronometrix supports monitoring objects from Linux and FreeBSD data sources. Recently we have been porting our data recorders to the Windows operating system and have started to offer ready-made objects for it.

For example, in the latest Kronometrix 1.2 we plan to support Linux, FreeBSD and the 64-bit Windows Server 2008 and 2012 editions. Below, several data sources within Kronometrix:





and then we can drill down per OS, for example by clicking centos:



That's all. This is part of the executive dashboard view.

Tuesday, July 21, 2015

Programming Republic of Perl, the Windows story

Task: port Kronometrix from Linux and FreeBSD to the Windows platform, including all data recorders and the transporter. Preferably use a programming language like Perl, to ease the porting process and reuse whatever we already have in Kronometrix.

Timeline: ASAP

Open Source: Yes


Goals

Some top rules for developing the new recorders:
  • all recorders must be coded in a scripting language
  • preferably, all recorders must work both as CLI tools and as Windows services
  • all raw data should be easy to access, via C:\Kronometrix\log\ , no more mysteries about the AppData directory
  • the transporter should be built the same way, coded in a scripting language
  • memory footprint: 64MB RAM

Perl5

We had previously experimented with C/C++ for Kronometrix on Windows. There is nothing wrong with C/C++, except that for every small change we had to do a lot of work and testing. We looked at PowerShell and other languages, but nothing came closer and felt more like home than Perl.

All our data recorders are simple Perl5 citizens already, so why not have Kronometrix on Windows done in Perl too?!

After some research and coding we found a very powerful module, Win32, capable of speaking WMI and accessing almost anything on a running Windows system. That's it: enter Perl. We selected the ActiveState PDK to compile each .pl into a Win32 executable service. Nice and easy.

A simple Win32 service sample, in Perl5 using PDK:

Win32 Perl Service


Windows

Here, two main data recorders and sender, the transporter, running as services on top of Windows Server 2008:


Kronometrix Windows Services


Source Code

Visit our repository to check out our Windows data recorders. This is a work in progress; more recorders will soon be published and released.

Saturday, July 18, 2015

Asus Zenbook UX32VD and FreeBSD 11, part two

This is a short description of how I got FreeBSD 11-CURRENT running on my Asus Zenbook UX32VD laptop. I'm very happy with the current setup, but of course there is room for improvement in many areas. Having DTrace, ZFS and the other goodies makes FreeBSD a really good candidate for a mobile environment. This is a short update regarding a regression in the drm2 module which breaks Xorg.

Last Updated: 18 July 2015

Zenbook UX32VD Configuration


CPU: Intel(R) Core(TM) i7-3517U CPU @ 1.90GHz
Memory: 6 GB RAM
Storage: SanDisk SSD i100 24GB, Samsung SSD 840 PRO 500GB
Video: Intel HD Graphics 4000, Nvidia GT620M
Display: 13" IPS Full HD anti-glare
Wifi: Intel Centrino Advanced-N 6235
3 x USB 3.0 port(s) 
1 x HDMI 
1 x Mini VGA Port 
1 x SD card reader
Note: the laptop's internal hard drive has been replaced by a Samsung SSD.

Wednesday, June 24, 2015

The Night Shift

The TV series ... Night Shift? Nope, this is the Kronometrix appliance for night workers, operators, sleepless system administrators and the rest of us working late. An appliance? What do you mean?

To be serious about monitoring you had better have an application running on a cloud platform, right? Public or private, whatever that is: Google, Amazon, Azure, or an internal network powered by OpenStack or VMware ... preferably deployed over a large number of nodes, using large memory configurations and offering advanced dashboards to keep track of hundreds of metrics collected from machines, storage, network devices and applications, all available and ready to be ... read and digested by anyone.

The duty admin is confused: what dashboards are needed for basic performance monitoring and SLAs, and what metrics should these dashboards include?

We took a simpler approach with Kronometrix, which can be deployed on a cloud computing platform if really needed, and which was designed to be:
  • easy to install, manage and administer, aiming for zero administration
  • ready for computer performance analysis, including the essential performance metrics
  • self-maintained and automated, with the majority of tasks pre-configured and already set
  • simple to read and understand, using clear UI dashboards
  • available for operations people, ready for large screens, for example 51"


Night Mode

Lights off, please. A simple Ops dashboard designed to display vital performance indicators for a host: CPU, memory, disk I/O and NIC I/O utilization, over two time ranges, 5 and 30 minutes, along with the system run queue length. In the center of the same dashboard we display the same time series data, represented as a chart at different time resolutions. A very simple control lets us put the dashboard into night monitoring mode.

Operational Dashboard,  5 and 30 minutes

Night Mode, Zoom-In


Operational Dashboard 5 and 30 minutes, zoom in


Kronometrix Advantages

  • top essential performance metrics included, per host
  • clear and simple UI, with no confusing labels, charts or extra information
  • two time ranges, allowing operators and sysadmins to check current and past activity
  • the ability to drill and zoom in, using the time series charts
  • direct links to console events, alerts and thresholds
  • designed for large screens, e.g. 51", in night and day mode

Friday, June 12, 2015

One Tool to rule them all ...


Data Recording Module: five default, main data recorders, written in Perl 5, responsible for collecting and reporting overall system utilization and queueing, per-CPU statistics, per-disk I/O throughput and errors, per-NIC I/O throughput, and the computer system inventory data: sysrec, cpurec, diskrec, nicrec and hdwrec.

But why not have a single process, simple to start, stop and operate? Does it matter, anyway?


One tool cannot perform all jobs. We believe that.

We have assigned different tasks to different recorders, making it easy and simple to update one data recorder without breaking the others. We also wanted to separate functionality based on the four main system resources: CPU, memory, disk and network.
Additionally, we think it is very important to have a description of the hardware and software running on the computer system, a sort of inventory, so that everybody easily understands what we are looking at. With that said, we ended up with the following data recorders:
  • sysrec: overall system CPU, MEM, DISK, NIC utilization and saturation
  • cpurec: per-CPU statistics
  • diskrec: per-DISK statistics, throughput in KBytes and IOPS
  • nicrec: per-NIC statistics, throughput in KBytes and packets, along with link saturation and errors
  • hdwrec: hardware and software inventory, like the number of CPUs, total physical RAM installed, number of disks


Footprint

How about the system utilization of Kronometrix itself? This is the general footprint in terms of CPU, memory and disk used:
  • CPU: on an idle host, all data recorders use less than 0.5%; on a busy system at 95-100%, the data recorders use up to 3%
  • Memory: all default data recorders use up to 64MB RAM, including the transport utility; the Windows data recorders use up to 128MB RAM
  • Disk: the default installation, without raw data, uses up to 75MB of disk space, and the data recorders are not disk-I/O-intensive applications

Keep it simple

We can add or change a data recorder within minutes. Having data recorders based on Perl 5 allows us to change or add new functions very easily. Additionally, we can run recorders at different time resolutions if needed. And in case we don't need certain functions, for example network traffic per NIC, we simply shut down nicrec without affecting the other recorders. So it's easy and simple.

Raw Data

A single recorder means lots of metrics to report. Say we had agent_one, a single main data recorder that looked at the overall system, the CPUs, the disks and so on. Its payload would grow accordingly. And since we want to store the collected data, we would need to split it and analyse the CPU, disk and other data separately anyway.

The Package

Once upon a time Kronometrix used no more than 1MB of disk space, and we were happy like that. But soon we discovered that people in the financial and banking sector were not happy changing or installing new libraries on their systems to let Kronometrix run. Worse, such sites usually have very strict requirements about which operating system packages may be installed and which may not.
So we needed to rethink and adopt another mechanism to deploy Kronometrix on such networks. We ended up shipping our own Perl distribution with Kronometrix, plus OpenSSL. This way we were able to survive without any extra dependencies from customers and keep running.

One Tool to rule them all

Our approach is simple, easy, and offers flexibility on different networks. The Kronometrix data recording package is automated for the majority of operating systems out there: FreeBSD, RedHat, CentOS, ClusterLinux, OpenSUSE, Debian, Ubuntu, Solaris, Windows.



Wednesday, June 10, 2015

Who are you ? The story about DBus and cloning virtual machines

Kronometrix Data Recording Module, ships with a transport utility called sender, which ensures all raw data is shipped to one or many data analytic appliances over HTTP or HTTPS protocols. 

sender, a Perl5 citizen, checks all raw data updates and relies on a host UUID identifier from the operating system to deliver that raw data. If such a host UUID is found, it is used for the entire duration of execution. If no host UUID is found, sender generates a new one and stores it in its configuration file, kronometrix.json. The analytics appliance relies on each data source being unique and valid, properly checked or generated by the transport utility, sender.

But what happens when this does not really work?


Data Source Id

Kronometrix uses the data source id (DSID) concept to identify a data source point from which one or many data messages are received. For example, for IT computer performance data, a DSID identifies a computer system host, physical or virtual, connected or not to a TCP/IP network.

The DSID is obtained from operating system core functions. For example:
  • on Linux platforms we speak to DBus and try to obtain it via the machine-id file
  • on FreeBSD we ask the sysctl interface for kern.hostuuid
  • otherwise we compute one, using the UUID::Tiny Perl5 module
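The lookup order above can be mirrored in a few lines of shell; the real sender is Perl, and the paths and the uuidgen fallback here are illustrative:

```shell
#!/bin/sh
# Hedged sketch of the DSID lookup order; the actual sender is Perl.
if [ -r /etc/machine-id ]; then                     # Linux: DBus machine-id
  dsid=$(cat /etc/machine-id)
elif sysctl -n kern.hostuuid >/dev/null 2>&1; then  # FreeBSD host uuid
  dsid=$(sysctl -n kern.hostuuid)
else                                                # last resort: generate one
  dsid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)
fi
echo "dsid: $dsid"
```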


Who are you ?

Working closely with one of our customers, we saw that they were not receiving data from a number of virtual machines where we had previously installed Kronometrix. Looking into this, we discovered that sender was producing the same data source id across a number of virtual machines:

"dsid" : "96d5b4a4-d0fa-54a8-ba74-14cc978041f1"

So to the Kronometrix analytics appliance all these hosts were more or less the same, having the same DSID. Not good.


What's wrong?

It was as simple as this: one VM had been used to clone other VMs, and by mistake the machine-id file was cloned as well. DBus on CentOS 6.x did produce a sane and valid machine-id, but then the VM configuration was taken and cloned, including the machine-id. So at the operating system level all hosts were identified identically, as having the same host UUID. No software reported or complained about this malfunction.

Our system immediately discovered this trouble, and we proposed a fix to the Operations Center group. Later, the cloning procedure was fixed to ensure the machine-id on Linux is no longer cloned, and data was finally flowing to our appliance. Nice and easy.

No more clones :)

Tuesday, May 5, 2015

Kronometrix Analytics Appliance: Xen vs ESXi

Latest Update: Tue May  5 15:54:10 EEST 2015

We use Redis heavily as part of our Kronometrix analytics software. For those who do not know what Redis is, see here.

Recently we upgraded our kernel from the Redis 2.8.x branch to 3.x, the latest release. We were curious to see what improvements the latest Redis release brings for our application on ESXi and Xen. As already said, the Kronometrix Appliance can work as a:
  • physical appliance: operating system + analytics software on bare metal
  • virtual appliance: running as one or many virtual machines under Xen or ESXi

We plan to check the redis-server process utilization and extract a stack trace of redis-server on both hypervisors.

Xen

Kronometrix 1.0.0 on Xen

Generic X86_64 Server
Ubuntu Server 12.04.5 LTS, Xen 4.2.2 amd64
1 x Intel(R) Core(TM) i7 CPU 930 @ 2.80GHz
Hyperthreading: Available, ON
24 GB RAM
1 TB internal storage for dom0, 2TB NAS for domU




We have configured dom0 to boot and have 2 assigned vCPUs and 6 GB RAM:  GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=6500M dom0_max_vcpus=2 dom0_vcpus_pin"


Kronometrix Kernel Build 97, Redis 3.0.1, 450 DS

We are testing the Kronometrix kernel against 450 data sources on Xen; a data source here means a computer system, Linux or FreeBSD, running as a virtual machine and sending 5 data messages per host, every minute.

CPU Utilization:
Xen redis-server process, 18% CPU usage



ESXi

Kronometrix 1.0.0 on ESXi

Dell PowerEdge 2950
ESXi 5.5.0 Update 2
1 x Intel(R) Xeon Core(TM) E5430 @ 2.66GHz
Hyperthreading: Not Available, OFF
20 GB RAM, 2 x 1 TB internal storage RAID 1 via Dell PERC 6/i (LSI)


We have configured the 2 x internal disks as RAID 1 mirror where we have installed the ESXi hypervisor and have the datastorage for VMs.


Kronometrix Kernel Build 97, Redis 3.0.1, 450 DS

We are testing the Kronometrix kernel against 450 data sources on ESXi; a data source here means a computer system, Linux or FreeBSD, running as a virtual machine and sending 5 data messages per host, every minute.

CPU Utilization:
ESXi, redis-server process 70% CPU usage


Appendix

1. DTrace 60-second stack trace dump:

dtrace -x ustackframes=100 -n 'profile-997 /pid == 3263/ { @[ustack()] = count(); } tick-60s { exit(0); }' -o stacks

    

Monday, May 4, 2015

The Appliance ...


Our analytics software, Kronometrix, runs on top of an operating system tuned and configured for data analysis, bundled as a ready image which can be downloaded and installed. The idea is simple: we wanted to offer something already tested, tuned and configured for hundreds or thousands of data sources. We call this the Kronometrix Appliance, and we want it to be ready to:
  • work with or without WAN connection
  • work on batteries and solar panels
  • handle large or small volume of data in real-time
  • analyse data from different data sources, different industries
  • have zero-administration and no maintenance
  • be affordable

 

Memory is the new hard disk


Another PHP or Java application on top of MySQL? Or maybe PostgreSQL? Right? Boringggg.

When we started Kronometrix, we basically cleared the table and wanted to see things aligned with the future, not with 1978. The first question was where to store all the time series data and how to access it. We went full speed into the future: a NoSQL database mapped entirely to RAM, fast and efficient, able to store any sort of data structure we might need. Then came persistence to disk, since any system might go south, so persistence was an important factor. So we went for Redis: a solid NoSQL database with persistence on disk, fast, efficient, with lots of data structures to choose from, running across a number of Linux and UNIX operating systems.

Note: at that time we evaluated dedicated time series databases, like InfluxDB, but we found it complex, buggy, and hard to maintain and operate unless you want to deploy 20 computing nodes, with poor built-in time-series analysis capabilities.

It's Samba time


Then came the language and the web framework around it. Java, PHP, Python, Ruby, all with their own pros and cons. But this time we looked further and stopped at the Pontifical Catholic University of Rio de Janeiro in Brazil. Welcome Lua, a very powerful, fast, lightweight, embeddable scripting language, running on OpenResty, one of the fastest web frameworks on planet Earth.



Kronometrix has been designed to work out of the box: no complicated installation, no extra tuning, no hours spent adding more database cluster nodes. It just works, as a physical or virtual appliance.


1. Physical appliance


Suppose you want to receive data every minute from 1500 hosts in a data center and process up to 100 performance metrics, each metric computing MIN, MAX, COUNT, SUM, AVG and LAST across 10 predefined time range intervals. That takes some effort, and if you look at even denser data centers with hundreds of racks, the physical appliance is there to help. For those ready to digest data in real time from thousands of data sources, with minimal storage latency and full speed, we have a ready appliance which can easily be racked and powered on.
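A back-of-the-envelope calculation with the numbers above shows the scale involved:

```shell
# 1500 hosts x 100 metrics x 6 statistics (MIN MAX COUNT SUM AVG LAST)
# x 10 time range intervals, refreshed every minute
hosts=1500; metrics=100; stats=6; intervals=10
echo $(( hosts * metrics * stats * intervals ))   # prints 9000000 statistics per minute
```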

Or think about a remote facility, a warehouse or factory, which needs to be continuously monitored over a private LAN, without a permanent WAN connection. Worse, the electricity supply at that location is not stable. There, a light and mobile appliance can be installed to receive data from dozens of environmental sensors placed around the factory.


Another example: a small airfield, a one-runway airport, which would like to share meteorological data with instructor pilots and local operators, all over the private LAN. Moreover, they would like to keep the meteorological raw data and analyse it every 6 months to see how many cancellations were made due to bad weather and poor visibility. For such a setup, the Kronometrix Mobile Appliance helps gather data from all weather stations and nicely display it to pilots, operators and personnel.


2. Virtual appliance


Kronometrix can easily be installed as a virtual image under any modern hypervisor, Xen, KVM or ESX, or with public cloud providers like Amazon EC2, Rackspace, Joyent or any other public cloud operator. The same top features are available in the virtual appliance, focusing on zero administration, no maintenance and easy deployment.

Using cloud or virtualization infrastructure software allows Kronometrix to quickly expand and use more virtual machines to handle larger amounts of data with minimal effort.

The software is available as a ready to be installed ISO image.


Thursday, April 23, 2015

See it all: IT, Meteorology, Weather in a single package


When we started to work on Kronometrix, we redefined our requirements in such a way that we would be able to process data from a computer system, a weather station, an environmental monitoring sensor, you name it ... as long as the data comes as a time series.

And we did it: Information Technology, Aviation Meteorology and Weather data all on the same analytics software.


Multiple data subscription within Kronometrix

Thanks to our Library of Monitoring Objects, we can easily plug different objects into our software to describe the real world and display it. And all these dashboards are available in a single click, as a simple Kronometrix user.

Wednesday, March 4, 2015

Kronometrix Data Recording 1.0 - Linux Edition

Kronometrix for the Information Technology industry includes utilities to measure computer system performance. We are working hard to expand this towards the data center facility itself, and to monitor the environment around the data center: cooling devices, air temperature and pressure, humidity, smoke and other important metrics.

Kronometrix includes two main things:
  • data recording module
  • data analytics module

Kronometrix 1.0 supports two main flavors of Linux-based systems: CentOS and Debian. We build our packages on CentOS 5 and Debian 7, in 32- and 64-bit versions. Additionally, we provide support for ARM, via the Raspbian operating system for Raspberry Pi hardware. To make things simpler we have packaged all the software in rpm and deb package formats, and all required initial steps are included in these packages.


Package Content


The Kronometrix 1.0 Linux edition contains all data recorders, the transport utility and additional weather data recorders. Along with these come the manual pages and documentation, all for free:
  • agents: sysrec, cpurec, diskrec, nicrec, hdwrec, procrec, jvmrec, netrec, wsrec, xenrec
  • transport: sender, which at the moment uses the HTTP protocol
  • all needed libraries: openssl, libcurl, perl
  • documentation: manual pages
The agents are simple CLI-based utilities designed to run interactively from the command line or in the background. When started as services, these utilities keep running in the background. However, these agents are deliberately not designed as daemons, since they are used heavily as command line utilities.

The transport part is designed as a daemon, which does not require interactivity. Its main purpose is to detect whether new data has been recorded and to transport it to one or many analytics appliance back ends.
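The "detect new data" part can be sketched with a marker file and find. The paths are illustrative, and the real transporter is a Perl daemon, not this script:

```shell
#!/bin/sh
# Hedged sketch: list raw-data files updated since the last shipment.
LOG=${1:-/opt/kronometrix/log/current}
MARK="$LOG/.last_shipped"
[ -f "$MARK" ] || touch -t 197001010000 "$MARK"   # first run: ship everything
find "$LOG" -name '*.krd' -newer "$MARK" -print   # candidates to transport
touch "$MARK"                                     # remember this shipment
```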


Download


Download Kronometrix for Linux here: www.kronometrix.org/download/get-linux.html

 

Installation


The initial installation will create a directory prefix and everything needed to properly install and use the software: a user account with uid 5000, the home directory, and all the cron job tasks needed.

 

  • RPM based systems, 32/64 bit
# rpm -ihv /var/tmp/kronometrix-1.0.9-i386.rpm
Preparing...            ########################################### [100%]
  1:kronometrix       ########################################### [100%]

# rpm -ihv /var/tmp/kronometrix-1.0.9-x86_64.rpm
Preparing...            ########################################### [100%]
  1:kronometrix       ########################################### [100%]


  • DEB based systems, 32/64 bit
# dpkg -i /var/tmp/kronometrix-1.0.9-debian7.8-amd64.deb
Selecting previously unselected package kronometrix.
(Reading database ... 31233 files and directories currently installed.)
Unpacking kronometrix (from .../kronometrix-1.0.9-debian7.8-amd64.deb) ...
Setting up kronometrix (1.0.9) ...

Uninstall


Using the package management software lets us simplify the removal of Kronometrix: it correctly shuts down all data recorders and the transport part, removes all cron job tasks, and finally removes the software from disk.

Note: the removal process will not touch the raw data logs or any modified configuration files, in case you still need them.


  • RPM based systems, 32/64 bit
# rpm -e kronometrix

  • DEB based systems, 32/64 bit
# dpkg -P kronometrix
(Reading database ... 33706 files and directories currently installed.)
Removing kronometrix ...
dpkg: warning: while removing kronometrix, directory '/opt/kronometrix/log/current' not empty so not removed

What's next?


First, check that all data recorders are running and that data is being recorded to disk:


# ps -o uid,pid,cmd -u krmx
  UID   PID CMD
 5000 12605 /opt/kronometrix/perl/bin/perl /opt/kronometrix/bin/sysrec 60
 5000 12621 /opt/kronometrix/perl/bin/perl /opt/kronometrix/bin/cpurec 60
 5000 12637 /opt/kronometrix/perl/bin/perl /opt/kronometrix/bin/diskrec 60
 5000 12653 /opt/kronometrix/perl/bin/perl /opt/kronometrix/bin/nicrec 60
 5000 12669 /opt/kronometrix/perl/bin/perl /opt/kronometrix/bin/hdwrec 60

# ls -lrt /opt/kronometrix/log/current/
total 20
-rw-r--r-- 1 krmx krmx 2144 Mar  4 14:33 sysrec.krd
-rw-r--r-- 1 krmx krmx  624 Mar  4 14:33 cpurec.krd
-rw-r--r-- 1 krmx krmx  477 Mar  4 14:33 diskrec.krd
-rw-r--r-- 1 krmx krmx  612 Mar  4 14:33 nicrec.krd
-rw-r--r-- 1 krmx krmx 1130 Mar  4 14:33 hdwrec.krd
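The .krd files are plain-text raw data logs. As a quick sanity check you can look at the age of the newest record; this sketch only assumes (verify against your own files) that each record line begins with a Unix timestamp:

```shell
#!/bin/sh
# Print the age, in seconds, of the newest record in a raw data log.
# Assumption (check your files): each line starts with a Unix
# timestamp as its first field; the delimiter after it does not matter.

record_age() {
    last_ts=$(tail -1 "$1" | sed 's/[^0-9].*//')  # keep leading digits only
    echo $(( $(date +%s) - last_ts ))
}

# usage: record_age /opt/kronometrix/log/current/sysrec.krd
# a value close to the recording interval (60) means data is flowing
```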

When you are ready to ship data to one or more Kronometrix appliances, you will need to configure sender via kronometrix.json, the main configuration file. In the next blog entry we will cover kronometrix.json in detail, as part of the Kronometrix Data Recording module.
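As a purely illustrative sketch, the transport settings in such a file might look along these lines; the field names here are hypothetical, not the actual schema:

```json
{
  "log": { "base_path": "/opt/kronometrix/log" },
  "transport": {
    "destination": [
      { "host": "appliance.example.com", "port": 80, "protocol": "http" }
    ]
  }
}
```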


Saturday, January 17, 2015

The son of the shoemaker has no shoes


Everybody knows Google's PageSpeed testing tool. Well, at least I thought so. Today I wanted to see how www.kronometrix.org was doing against PageSpeed, looking in particular at how fast and well optimized the site was after switching to the latest Bootstrap library and applying some fixes.

I got something like: Mobile 64/100, Desktop 93/100. Not bad. See below:

Google PageSpeed 


Then I wanted to see how my site compared against bigger names in the Finnish IT industry: Tieto, IBM, Oracle, Supercell, Rovio, SSH, and F-Secure, for example. The results were... wow. Maybe these companies haven't heard of Google PageSpeed, or don't know how their main sites really behave. One might argue that PageSpeed is not directly useful or valuable, but in fact it lists and analyzes some very important things that any web site should properly implement and support:
  • HTTP Compression
  • Server Response times
  • Image compression
  • HTML, CSS, JS optimizations
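The first item, HTTP compression, is easy to check by hand: request gzip and inspect the Content-Encoding response header. A small sketch (the helper name is mine, not part of any tool):

```shell
#!/bin/sh
# has_gzip: read HTTP response headers on stdin and succeed only if
# the server answered with gzip content encoding
has_gzip() {
    grep -qi '^Content-Encoding:.*gzip'
}

# usage, against a live site:
#   curl -sI -H 'Accept-Encoding: gzip' http://www.kronometrix.org/ \
#       | has_gzip && echo "compression enabled" || echo "no compression"
```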

Client: Firefox 34.0.5 on FreeBSD amd64.

Note: this was a simple test with a single execution, similar to a user opening a web browser and visiting the site. The experiment is my own; it has nothing to do with my current employer, SDR Dynamics Oy.


Site | Mobile | Desktop | User Experience
tieto.fi
f-secure.fi
ssh.com
supercell.com
rovio.fi
ibm.fi
oracle.fi
redhat.fi
google.fi

These are all average to low scores, nothing amazing. I was expecting more from top IT companies specialized in web technologies. Here are the top issues found; hopefully somebody will take these as constructive feedback and fix the sites:
  • Enable compression
  • Eliminate render-blocking JavaScript and CSS in above-the-fold content
  • Leverage browser caching
  • Optimize images

Future

I got the idea of building a recorder which, once per week, reviews my site, kronometrix.org, and sends the Google PageSpeed results to a Kronometrix appliance as a data message. That would help me keep track of how well my site is performing for my clients.

This means adding support for a new BA (business analytics) monitoring object within LMO, and monitoring PageSpeed within Kronometrix.
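A first sketch of such a weekly recorder could be a cron-driven shell script against the PageSpeed Insights REST API (v2 at the time of writing); the record format and output file name below are hypothetical:

```shell
#!/bin/sh
# Weekly PageSpeed recorder sketch, e.g. from cron: 0 6 * * 1 ...
# Uses the Google PageSpeed Insights v2 REST API; the output record
# format (epoch,score) and file name are my own invention.

fetch_pagespeed() {
    url=$1
    curl -s "https://www.googleapis.com/pagespeedonline/v2/runPagespeed?url=${url}&strategy=desktop"
}

# extract_score: pull the first numeric "score" field out of the JSON
# reply (a naive grep/sed parse, good enough for a fixed field name)
extract_score() {
    grep -o '"score": *[0-9]*' | head -1 | sed 's/[^0-9]*//'
}

# usage sketch, appending one record per run:
#   score=$(fetch_pagespeed www.kronometrix.org | extract_score)
#   printf '%s,%s\n' "$(date +%s)" "$score" \
#       >> /opt/kronometrix/log/current/pagespeedrec.krd
```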