Thursday, November 6, 2014

The Library of Monitoring Objects, part 2

To process data from different industries and businesses we need a way to define and describe such data in a manner the analytic software will understand. Kronometrix uses a simple and powerful concept for object definition, called the Library of Monitoring Objects (LMO).


Suppose we want to handle data coming from Information Technology or Healthcare. Within these domains, we might have different sub-domains describing different types of analytic business. See how different domains and sub-domains map to the LMO:

The Library of Monitoring Objects

For each industry we are interested in, we need to develop and write a simple, formal definition of how data is expected to arrive from the devices and sensors in that field. Example:

We plan to gather information for the Information Technology, System Performance domain. For that we need to define and describe all the messages expected from this field in a definition file; this file describes all the metrics needed from the field.


When we know what we plan to do, and where our data will come from, we need to define a number of host types, which describe where the data is generated. For example, when fetching data from a number of computer systems, we need to define each type of system we plan to collect from: a Linux or Windows computer system, or a FreeBSD server:

Host types

These are the host types: linux, freebsd, sunos and windows. If, for example, we plan to collect data from a new type of server, let's say AIX, we should add it to the LMO as well.

Data Messages

Knowing the hosts finally gets us to the metrics, the parameters we plan to monitor and observe. A host, for example FreeBSD, can have one or many types of metrics we plan to record. We organize these metrics into messages, a group of metrics coming from a data source.

A host can have one or many data messages; FreeBSD, for example, has 5 data messages: sysrec, cpurec, diskrec, nicrec and hdwrec.

Message types

The Metrics

For each message type, for example cpurec, we define a number of metrics which we want to monitor. Fields 0 and 1 are metadata fields required for handling authentication and the host range definition. The next one, field 2, is the defined timestamp, followed by a device_id field describing what part of the system we plan to monitor, in this case each CPU, cpuid.

Below is a complete message definition for cpurec:

cpurec data message
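To make the field layout concrete, here is a small sketch in Python of how a cpurec message definition and record parser might look. Beyond the fields described above (authentication, host range, timestamp, cpuid), the field names and the sample record are illustrative assumptions, not the actual LMO schema.

```python
# Illustrative sketch of a cpurec message definition. Only the first four
# field positions follow the description above; the rest are assumptions.
CPUREC = {
    "name": "cpurec",
    "fields": [
        "auth_token",   # field 0: metadata, authentication
        "host_range",   # field 1: metadata, host range definition
        "timestamp",    # field 2: UNIX time of the sample
        "cpuid",        # field 3: device_id, which CPU this record describes
        "userpct",      # per-CPU user time, percentage (assumed name)
        "syspct",       # per-CPU system time, percentage (assumed name)
        "idlepct",      # per-CPU idle time, percentage (assumed name)
    ],
}

def parse_record(line, definition):
    """Split a comma-separated raw record and map it onto the definition."""
    values = line.split(",")
    if len(values) != len(definition["fields"]):
        raise ValueError("field count mismatch for %s" % definition["name"])
    return dict(zip(definition["fields"], values))

rec = parse_record("tok1,host1,1415232000,0,12.5,3.1,84.4", CPUREC)
print(rec["cpuid"], rec["userpct"])   # prints: 0 12.5
```

A record with the wrong number of fields is rejected, which is exactly the kind of check a formal message definition makes possible.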

Next time, we will talk about summary statistics within LMO.

Friday, October 24, 2014

Raspberry Pi and Redis

This summer we did something amazing: we took our enterprise appliance, a very powerful server, and tried to run the same software, the same analytic kernel and everything else on something smaller and a lot simpler, like this:

Light, mobile appliance

And we did it ! Powered by OpenResty and Lua, we were able to size our appliance easily and be up and running quickly. Small form factor, low power consumption, very cheap: these ARM based devices are getting more and more attention and are becoming more popular nowadays:

What ? Analytics on a Raspberry PI ... !?

Yep. First of all, we wanted to experiment with ARM based boards using a small number of hosts and a few data messages per host as input. The immediate trouble was that we had to fit all our software and customer data into 512 MB of RAM. And these boards don't run the latest Intel hardware, but something like a 700 MHz CPU as a system on chip, with a 100 Mbps NIC and shared USB ports.

But still, we were thrilled by the idea of having a few hosts and messages per appliance, and being able to deploy such an appliance quicker than a larger one for different setups. For example:
  • Enterprise Appliance: up to 5,000 hosts, 5 messages per host, 25,000 messages
  • PI Appliance: up to 5 hosts, 2 messages per host, 10 messages
And we went Raspberry Pi. Nothing different on the appliance itself, except some configuration changes and lots of testing. And here comes the fun...

Redis on Raspberry PI

A central part of our analytic software is Redis, a very popular and powerful in-memory NoSQL data structure server. It runs on the majority of Linux distributions, *BSD and many other UNIXes. It has a nice and friendly community, and you get lots of help around. For our analytic software Redis was a natural choice: several ways to structure data in memory, very fast, no threading, and simple.

The RBPi model B+ used for this test:

As storage we selected these cards: a Kingston 16 GB, a SanDisk 16 GB and a Transcend 32 GB, all Class 10. We used the default ext4 file system on each card, with default options.

Kingston 16GB
Class 10
[    2.112778] mmc0: SDHCI controller on BCM2708_Arasan [platform] using platform's DMA
[    2.123339] mmc0: BCM2708 SDHC host at 0x20300000 DMA 2 IRQ 77
[    2.233281] Waiting for root device /dev/mmcblk0p2...
[    2.284667] mmc0: read SD Status register (SSR) after 3 attempts
[    2.305649] mmc0: new high speed SDHC card at address 0007
[    2.319414] mmcblk0: mmc0:0007 SD16G 14.9 GiB 
[    2.327567]  mmcblk0: p1 p2

SanDisk 16GB
Class 10
[    2.131742] mmc0: SDHCI controller on BCM2708_Arasan [platform] using platform's DMA
[    2.142791] mmc0: BCM2708 SDHC host at 0x20300000 DMA 2 IRQ 77
[    2.262561] Waiting for root device /dev/mmcblk0p2...
[    2.293773] mmc0: read SD Status register (SSR) after 2 attempts
[    2.316876] mmc0: new high speed SDHC card at address aaaa
[    2.332197] mmcblk0: mmc0:aaaa SU16G 14.8 GiB
[    2.345618]  mmcblk0: p1 p2

Transcend 32GB
Class 10 MLC
[    2.167920] mmc0: SDHCI controller on BCM2708_Arasan [platform] using platform's DMA
[    2.180580] mmc0: BCM2708 SDHC host at 0x20300000 DMA 2 IRQ 77
[    2.358084] mmc0: read SD Status register (SSR) after 2 attempts
[    2.381255] Waiting for root device /dev/mmcblk0p2...
[    2.392431] mmc0: new high speed SDHC card at address 59b4
[    2.418636] mmcblk0: mmc0:59b4 USDU1 29.7 GiB 
[    2.430805]  mmcblk0: p1 p2

After several weeks of testing and testing again, we found several issues with Redis on Raspberry Pi when using the Kingston cards:
  • slow storage
  • lots of Lua client errors
  • high CPU utilization

Types of Errors

2014/09/11 11:53:57 [error] 2590#0: *43736 lua tcp socket read timed out, client:, server: , request: "POST /api/private/send_data HTTP/1.0", host: ""
2014/09/11 11:53:57 [error] 2590#0: *43736 attempt to send data on a closed socket: u:B6925060, c:00000000, ft:0 eof:0, client:, server: , request: "POST /api/private/send_data HTTP/1.0", host: ""
[3305] 18 Oct 12:52:31.024 * Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.
2014/10/26 21:17:58 [error] 2712#0: *14598 lua tcp socket read timed out, client:, server: , request: "POST /api/private/send_data HTTP/1.0", host: ""
2014/10/26 21:17:58 [error] 2712#0: *14598 [lua] messages.lua:154: deposit_stats_data(): Error setting expiration for key: stats:3f87c584-2312-42e5-67e4-93005181870d:eth0:cpd-linux-nicrec:rxpkt:2338:10800:MIN | expire at: 1415232000 | err: timeout, client:, server: , request: "POST /api/private/send_data HTTP/1.0", host: ""
2014/10/26 21:17:58 [error] 2712#0: *14598 attempt to send data on a closed socket: u:B5FA41B8, c:00000000, ft:0 eof:0, client:, server: , request: "POST /api/private/send_data HTTP/1.0", host: ""
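The Redis keys in the error above follow a colon-separated naming scheme. This is a sketch of how such a stats key could be built; the meaning of each part is our reading of the log, not a documented schema.

```python
# Build a stats key like the one seen in the timeout error. The part names
# (subscription, device, message, metric, seq, interval, stat) are inferred
# from the log line, not from any official documentation.
def stats_key(subscription, device, message, metric, seq, interval, stat):
    return ":".join(["stats", subscription, device, message,
                     metric, str(seq), str(interval), stat])

key = stats_key("3f87c584-2312-42e5-67e4-93005181870d",
                "eth0", "cpd-linux-nicrec", "rxpkt", 2338, 10800, "MIN")
print(key)
```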

Latency Doctor

Redis has something amazing called the Latency Doctor. All things are explained here: Redis latency monitoring. Using it, we were able to see that the majority of our timeouts were caused by slow storage media, in fact the SD and MicroSD cards. And we started asking the doctor:

1. aof-write: 160 latency spikes (average 1312ms, mean deviation 726ms, period 19.63 sec). Worst all time event 5686ms. 
2. aof-write-alone: 160 latency spikes (average 1355ms, mean deviation 749ms, period 20.49 sec).  Worst all time event 5686ms. 
3. aof-write-active-child: 11 latency spikes (average 1088ms, mean deviation 656ms, period 614.18 sec). Worst all time event 4160ms.

After all the tuning we were still seeing events like these and lots of timeouts on our Lua clients. CPU utilization on the appliance was quite high for 3-5 messages per second.

Kingston 16GB, 3-5 messages per second

We changed cards. This is the SanDisk 16GB. This time we were able to run without any Lua timeouts or other errors in the error log. Still visible was a high CPU utilization:

SanDisk 16GB, 3-5 messages per second

Final configuration

The final OS and Redis settings for the Raspberry Pi B+ model with a SanDisk 16 GB MicroSD:

Virtual Memory Tuning
/dev/mmcblk0p2 root

Redis configuration:
port 0
unixsocket kernel.sock
unixsocketperm 755
appendfsync no
no-appendfsync-on-rewrite yes
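For the virtual memory side, common practice for Redis on Linux means a couple of kernel sysctl settings. This is a sketch of what such tuning usually looks like, an assumption on our part rather than the exact values used on the appliance:

```
# /etc/sysctl.conf -- common Linux VM settings for Redis (illustrative)
vm.overcommit_memory = 1   # allow Redis to fork for AOF rewrites without ENOMEM
vm.swappiness = 0          # keep Redis memory off the slow SD card
```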


Running on an RBPi board is cool, since we can look even deeper into how things could be improved and tuned for better performance and reduced utilization. We learned:
  • Running Redis on MicroSD or SD cards is a bit complicated
  • You need to measure what is slow; welcome, Latency Doctor
  • To measure where we waste time: Lua, Redis, or both ?
  • To profile all our Lua scripts

Thursday, October 23, 2014

Monitor your computer performance, the weather ...

Our new web real-time analytic software has been rewritten from Perl to Lua. Nothing wrong with Perl, but we found Lua and NGINX amazing in terms of speed, stability and system utilization. So with our new analytic software came a new authentication and authorization layer, written, of course, in Lua.

Since we have been talking a lot about allowing different types of data into our analytics, we designed and implemented new types of subscriptions, where people can open and send data feeds to our analytics nice and easy.

Say you are a meteorological institute and you have around 100 weather stations that you want to analyze with Kronometrix. You can easily configure your stations to send HTTP data to our analytics, open a subscription for the correct type of data analysis, and you are all set.

Subscriptions types:
  • Aviation Weather Data - designed for aviation weather analysis, gathers data from weather stations and aggregates data for aviation
  • Weather and Climate data - designed for climate and meteorological data
  • Computer Performance Data - designed for computer performance systems, our classic data analysis usage

Each of these domains links to a set of monitoring objects defined under our Library of Monitoring Objects, a standard set of objects used to describe metrics and their data types for the data collection and analysis process.

Under our new UI, you can see how easily you can open a new subscription and be ready for auto-provisioning:

Next time we will talk about tokens and subscriptions and how they are used to handle multiple points for data collection and transport.

The Appliance

Our next version of analytic software for computer performance data will have a totally new architecture and design, to support large or small customer accounts using the same stock base software. From Perl to Lua, from a time series data store to a very powerful in-memory data structure server, all these changes were made to support more devices, use the computing power efficiently and deliver value. So what we did:
  • we dropped Perl and FastCGI
  • we dropped RRDtool
  • we started to use the Lua programming language
  • we moved to OpenResty for fast and robust HTTP processing
  • we switched to Redis for in-memory statistics
  • and we are still designing a new exploratory data module, for direct interaction with raw data
And the results are amazing. We have a more powerful architecture which lets us build nice, ready data appliances for large data-center customers and for small and medium sized businesses.

We are using the Data-Driven Documents JS library, D3, a fresh and powerful data visualization library based on standard HTML, SVG and CSS. So there are lots of new things versus our old analytic release 0.70. And we expect to have the first demo somewhere in late October for certain key accounts.

During our development process we have been reading and researching heavily about in-memory data structure servers, web analytics and real-time data processing. Two reference books we are using heavily for our software development and data streaming:

"Redis In Action", by Josiah L. Carlson
a must-have about Redis and data analytic software implemented in Python or Lua. The author is a master of Redis and a frequent contributor to the Redis community. Many thanks, Josiah, for your support and advice on our analytic software product.

Another good book about real-time web analytic software is "Real-Time Analytics" by Byron Ellis, a nice description of data analytic software technologies, methods and techniques to develop and support such applications. It covers Redis, but other databases too: MongoDB and Cassandra.

The Library of Monitoring Objects, part 1

Suppose you plan to collect data from one or many computer systems you have in your data center. What data would you collect, and what summary statistics would you store for those metrics ? Would it be OK to sample data from each host every 60 seconds, or would you need to sample at the one-second level ? How about keeping all metrics as statistics, aggregated over time, or would you want to keep the raw data associated with those statistics as well ?

All these questions need to be answered when you set up performance monitoring for a site.


So, what metrics do you need ? You have lots of different systems: RedHat Enterprise Linux 5.x and 6.x, Solaris 10 and 11, lots of Windows servers. So how do you know which metrics are useful and which are simply waste ... and how will you be able to understand and explain what is going on with your computing infrastructure ?

Short answer: it depends what you plan to do. If you want to monitor all hosts for their availability only, you probably don't need much. But if you want to be able to answer questions about current utilization levels, per system resource, across a large number of operating systems, you will need to record and store a lot more data from each system. A sampling interval of 60 seconds would be plenty. And still, you won't need thousands of metrics to understand when you are short of memory, need to add another CPU, or have a disk I/O issue. However, if you need to go deeper, being able to answer why some Java application is slow between 4 and 6 PM, you will probably need to record even more data, including data from the application itself, such as Java Virtual Machine GC statistics.


The library

Start with the operating system. There are lots of performance metrics for understanding and checking what's going on. But which metrics are good enough to reveal an underutilized system, or a system running out of memory ? Enter the LMO, a simple repository of the most needed operating system performance metrics, across a variety of operating systems, designed for computer performance data.


What is it ?

It is a list of metrics and summary statistics, specific to different data collection domains, for example computer performance data, organized as message types based on data fields with different data types. The purpose is to describe, define and more or less standardize these metrics, so they can easily be used by software products like real-time web analytics, data analytics or any data reporting software, available as JSON or XML formats.

For example, 5 different message types from a FreeBSD system: sysrec, cpurec, diskrec, nicrec and hdwrec, each having a structure and a number of data fields:

What metrics per message ? See below:

Next time, we will show you one complete message type for collecting per-CPU statistics from a Linux, FreeBSD and Solaris host.

Facts about Raw Data

Dr. Rufus Pollock, founder and co-Director of the Open Knowledge Foundation said about raw data and fancy GUIs:

"one thing I find remarkable about many data projects is how much effort goes into developing a shiny front-end for the material. Now I’m not knocking shiny front-ends, they’re important for providing a way for many users to get at the material ... think what a website designed five years ago looks like today (hello css). Then think about what will happen to that nifty ajax+css work you’ve just done. By contrast ascii text, csv files and plain old sql dumps (at least if done with some respect for the ascii standard) don’t date — they remain forever in style."

Amen to that. 

Nothing will compete with the simplicity of storing raw data as CSV files. I'm amazed to see how complex and complicated businesses are nowadays, and what enormous amounts of money people pour in to maintain such complex systems. But what do people in fact know about raw data ? Are you storing the data from your hosts, sensors and weather stations somewhere close to you, in a format which will easily let you carry out a simple time series analysis ? What on earth is raw data ?

What is it ?

We call raw data the data collected from a source: an operating system, an application process, a meteorological weather station or a sensor, for example, without changes or modifications of any kind, numerical or otherwise. Sometimes this set of data is called the primary data. Raw data can have any format, binary or text, record-oriented or not, time series or not. Usually raw data is found in a simple text-oriented format, CSV, where data is presented as records, each record having comma-separated fields.
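As a minimal sketch, a raw record can be written and read back in CSV form with nothing but the standard library. The record below is a hypothetical sysrec-style sample, timestamp first; the field values are illustrative, not an actual SDR record.

```python
import csv
import io

# A hypothetical raw-data record: timestamp followed by a few metrics.
record = ["1415232000", "0.45", "12.5", "3.1", "84.4"]

# Write the record as one CSV line...
buf = io.StringIO()
csv.writer(buf).writerow(record)
line = buf.getvalue().strip()
print(line)   # prints: 1415232000,0.45,12.5,3.1,84.4

# ...and read it back, unchanged: that is the whole point of raw data.
fields = next(csv.reader([line]))
assert fields == record
```

Because the record round-trips byte for byte, any tool that reads CSV, from R to a plain shell script, can consume it without being restricted to one application.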

Do I need this ?

It is important to collect and store raw data from your hosts or other types of devices somewhere safe and easy to access. This will be your centralized, consistent data point for all data recorded from your network of sensors or data-centre hosts. From this centralized point you can easily inspect, browse and conduct any type of data analysis or visualization you like, without being restricted to a particular software application. So yes, you will need access to raw data.

Who else is collecting and using raw data ?

Any statistical, numerical or visualization data processing will require access to raw data. From financial markets, with their raw market data feeds, to medical, space, aeronautical and biochemical engineering, they all make heavy use of raw data.


Enter OpenResty

Wondering if your applications will scale as more users come and visit your site ? How about the system's resources: CPU, memory ? Probably you will need to add more and more capacity every 4-6 months !? And how quickly can you add or change things in your web application platform to keep up with the competition ?
You need something fast, simple, easy to manage, simple to learn and develop. Enter OpenResty.

What is it ?

A web development platform based on Lua and the NGINX HTTP server, including various modules to speed up web development. Think of it as a web application server with lots of ready modules to make your life easier. But it is not Java, PHP, Perl or Ruby. It's Lua.

"By taking advantage of various well-designed Nginx modules, OpenResty effectively turns the nginx server into a powerful web app server, in which the web developers can use the Lua programming language to script various existing nginx C modules and Lua modules and construct extremely high-performance web applications that are capable to handle 10K+ connections."


Is it fast ? 

Damn fast. Take a look below. We have been porting our authentication application from Perl to Lua and NGINX and see lots of improvements, with no need to buy new machines:

NGINX + Perl (Plack)
N=150, R=418ms, X=117req/sec, Util=75% (2vcpus)

NGINX + Perl (Mojo)
N=150, R=326ms, X=126req/sec, Util=63% (2vcpus) 

NGINX + Lua (OpenResty)
N=150, R=27ms, X=180req/sec, Util=12% (2vcpus)

N = number virtual users
R = response time, ms
X = throughput, req/sec
Util = CPU Utilization across all CPU, percentage

Push more users and you will see the light.

So the winner is ... Lua and NGINX. Using the Lua programming language we can craft nice web applications, powered by the NGINX HTTP server, that sustain lots of users on decent computer systems without fear of upgrading every 3 months.

Big thanks to all the OpenResty members who made this happen.

FreeBSD 11 spellchecker packages

Missing spell checking on FreeBSD 10 or 11 ?

Are you using LibreOffice or Sylpheed and can't spell check your emails or documents ? Keep reading ... You need additional packages for spell checking to function correctly in your application.

Simple fix

Make sure you have installed all required packages: textproc/en-hunspell and textproc/en-aspell


From my running FreeBSD 11 system, these are all the packages needed to have proper spell checking in place for LibreOffice, an email client or any other editor, or when using command line utilities like aspell on your document files.

$ pkg info | grep spell

aspell- Spelling checker with better suggestion logic than ispell
aspell-ispell- Ispell compatibility script for aspell
en-aspell-7.1.0_1 Aspell English dictionaries
en-hunspell-7.1_1 English hunspell dictionaries
enchant-1.6.0_3 Dictionary/spellchecking framework
gtkspell-2.0.16_5 GTK+ 2 spell checking component
gtkspell-reference-2.0.16_1 Programming reference for textproc/gtkspell
hunspell-1.3.2_4 Improved spell-checker for Hungarian and other languages

Asus Zenbook and FreeBSD 11

This is a short description of how I got FreeBSD 11-CURRENT running on my Asus Zenbook UX32VD laptop. I'm very happy with the current setup, but of course there is room for improvement in many areas. Having DTrace, ZFS and the other goodies makes FreeBSD a really good candidate for a mobile environment.

Last Updated: 16 March 2015

Zenbook UX32VD Configuration

CPU: Intel(R) Core(TM) i7-3517U CPU @ 1.90GHz
Memory: 6 GB RAM
Storage: SanDisk SSD i100 24GB, Samsung SSD 840 PRO 500GB
Video: Intel HD Graphics 4000, Nvidia GT620M
Display: 13" IPS Full HD anti-glare
Wifi: Intel Centrino Advanced-N 6235
3 x USB 3.0 port(s) 
1 x HDMI 
1 x Mini VGA Port 
1 x SD card reader
Note: The laptop's internal hard drive has been replaced by a Samsung SSD.

FreeBSD 10 - ZFS, DTrace welcome back

For our internal testing and product development at SDR Dynamics, we are using FreeBSD 10. And DTrace is there, along with ZFS; both work out of the box, nothing to add or recompile. A nice thing to have these ported to the BSD world from Solaris.

I did a quick update of my laptop, an Asus Zenbook UX32VD, to the latest FreeBSD-CURRENT, version 11, very curious to play around with DTrace.

root@nereid:~ # uname -a
FreeBSD nereid 11.0-CURRENT FreeBSD 11.0-CURRENT #0 r265628: Thu May  8 05:26:05 UTC 2014  amd64


List all probes:

root@nereid:~ # dtrace -l | wc -l

Syscalls by application name:

root@nereid:~ #  dtrace -n 'syscall:::entry { @[execname] = count(); }'
dtrace: description 'syscall:::entry ' matched 1072 probes
  wpa_supplicant                                                    1
  gvfsd-trash                                                       3
  syslogd                                                           3
  xfsettingsd                                                       4
  sendmail                                                          6
  xfce4-session                                                    20
  xscreensaver                                                     24
  xfce4-panel                                                      44
  powerd                                                           46
  xfdesktop                                                        52
  dtrace                                                          127
  ftp                                                             174
  wrapper                                                         355
  firefox                                                         364
  xfwm4                                                          1571
  xfce4-terminal                                                 3800
  Xorg                                                          21548

Top Calls

root@nereid:~ # dtrace -n 'syscall:::entry { @[probefunc] = count(); }'
dtrace: description 'syscall:::entry ' matched 1072 probes
  getpid                                                            1
  access                                                            2
  sigreturn                                                         2
  posix_fadvise                                                     3
  fcntl                                                             4
  fstat                                                             4
  getfsstat                                                         4
  sigaction                                                         6
  nanosleep                                                        12
  mmap                                                             27
  __sysctl                                                         32
  munmap                                                           33
  open                                                             60
  _umtx_op                                                         67
  write                                                           351
  select                                                          546
  readv                                                           596
  kevent                                                          699
  writev                                                          716
  setitimer                                                       966
  lseek                                                          1513
  poll                                                           1528
  sigprocmask                                                    1586
  recvmsg                                                        2044
  read                                                           2529
  ioctl                                                         24473

Looks good and nice. Start drilling :)


SystemDataRecorder offers several data recorders for different jobs: overall system utilization, per-CPU and per-NIC utilization, along with many others. On systems using virtualization, we can in general monitor the guests directly; if we want more accurate numbers, we need to monitor the host. The purpose of this short article is to show how you can use SystemDataRecorder to record Xen performance metrics.

Xen Hypervisor

Xen is an open-source type-1 or baremetal hypervisor which has the following structure: 

The Xen Hypervisor is an exceptionally lean, less than 150,000 lines of code, software layer that runs directly on the hardware and is responsible for managing CPU, memory, and interrupts. It is the first program running after the boot-loader exits. The hypervisor itself has no knowledge of I/O functions such as networking and storage.

Xen dom0

The Control Domain, or Domain 0, is a specialized Virtual Machine that has special privileges, like the capability to access the hardware directly; it handles all access to the system’s I/O functions and interacts with the other Virtual Machines. It also exposes a control interface to the outside world, through which the system is controlled. The Xen hypervisor is not usable without Domain 0, which is the first VM started by the system.

Xen domU

Guest Domains/Virtual Machines are virtualized environments, each running their own operating system and applications. Xen supports two different virtualization modes: Paravirtualization (PV) and Hardware-assisted or Full Virtualization (HVM). Both guest types can be used at the same time on a single Xen system. It is also possible to use techniques used for Paravirtualization in an HVM guest: essentially creating a continuum between PV and HVM. This approach is called PV on HVM. Xen guests are totally isolated from the hardware: in other words, they have no privilege to access hardware or I/O functionality. Thus, they are also called unprivileged domain, or DomU. 


SystemDataRecorder agents can be installed on dom0 or on domU guests. To have the best visibility and accuracy, we need to place all data recorders on dom0. There have been some comparative measurements with SDR between dom0 and domU, where we can see the difference between guest and host regarding data recording.

However, during these measurements one missing part was the possibility to report per-guest metrics directly from dom0. Welcome xenrec, a simple utility based on xentop, a standard Xen administration tool.

Why xenrec ?

In short, because xentop does not record time series data in a CSV format simple enough to be consumed by systems like RRDtool or R. Moreover, xentop is an interactive tool, designed to be run from a terminal so you can visually check the results. You could run xentop and use other tools like awk or sed to parse and store data on disk. But we want something simple and easy to use.

So, we did use xentop to record domain statistics and handle all output using Perl, with final results:

Tip: we used xentop -b -d1 -i2 inside our Perl agent. Why not -d0 ? Take a look:
xentop, 9 columns output, delay 0 seconds

Using a delay of 0 seconds adds overhead on dom0 for xentop and xenrec. We don't want that. Increasing the delay to 1 second, for example, makes things different:
xentop, 9 columns output, delay 1 second
It is very clear that xentop -b -d1 -i2 will do the job nicely.
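To give an idea of what the Perl agent does with the batch output, here is a Python sketch that turns xentop-style lines into timestamped CSV records. The sample output and the six-column layout are illustrative assumptions; real xentop output has more columns than shown here.

```python
# Illustrative xentop batch output: a header line followed by one line
# per domain. Column layout here is an assumption, not real xentop output.
SAMPLE = """\
      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)
  Domain-0 -----r       1234    5.3     524288   12.5
     guest --b---        567    1.2     262144    6.2
"""

def xentop_to_csv(text, timestamp):
    """Parse batch output lines into timestamped CSV records."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0] == "NAME":   # skip empty and header lines
            continue
        name, state, cpusec, cpupct, memk, mempct = parts[:6]
        rows.append(",".join([str(timestamp), name,
                              cpusec, cpupct, memk, mempct]))
    return rows

for row in xentop_to_csv(SAMPLE, 1415232000):
    print(row)
```

Each run of the agent appends one such record per domain, which is exactly the record-oriented raw data that RRDtool or R can consume directly.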

Performance Metrics

xenrec will record all xentop reported parameters, which unfortunately are not properly documented in the xentop manual page or help system. That's another reason we wanted to add xenrec to SystemDataRecorder: better documentation.

The metrics below, using the help function, -h:
xenrec, help usage function
We will soon add xenrec to our data recorders and make the source code available in the SystemDataRecorder repository.


cpuplayer - multiprocessor player

Problem solving is a very important skill for any system administrator, performance analyst or even a system manager. Sometimes you try to solve a problem by building a visual model of it and trying to see it. But can geometry, in general, help in understanding how a workload executes on a 72-CPU server ? It seems it can.

Welcome to problem solving and computer graphics, a land where geometry meets performance analysis, troubleshooting and even capacity planning. Using the power of geometric figures we can build a model of our original problem, simulate the conditions and see the results, letting the computer do all the work for us in a graphical representation that is easy for us to digest. cpuplayer is such a tool, combining problem solving, geometry and performance analysis in one thing.

Using Barycentric coordinates, the player displays the CPU transition states from IDLE to USER or SYSTEM time. This animation shows CPU utilization for a 72-way E15K multiprocessor as it ramps up to steady state and executes a network-based transaction workload on ORACLE DB 10g.
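The barycentric mapping itself is simple: each CPU's (user, system, idle) split, which sums to 100%, selects a point inside a triangle whose corners represent the three pure states. A minimal Python sketch; the corner placement below is our arbitrary choice, not necessarily cpuplayer's:

```python
# Map a CPU's (user, sys, idle) percentages to a 2D point inside a
# triangle, using barycentric coordinates. Corner positions are an
# illustrative choice.
CORNERS = {
    "user": (0.0, 0.0),   # bottom-left corner: 100% user time
    "sys":  (1.0, 0.0),   # bottom-right corner: 100% system time
    "idle": (0.5, 1.0),   # top corner: 100% idle
}

def to_point(user, sys, idle):
    """The point is the average of the corners, weighted by each state."""
    total = user + sys + idle
    w = (user / total, sys / total, idle / total)
    corners = (CORNERS["user"], CORNERS["sys"], CORNERS["idle"])
    x = sum(wi * cx for wi, (cx, _) in zip(w, corners))
    y = sum(wi * cy for wi, (_, cy) in zip(w, corners))
    return (x, y)

# A fully idle CPU sits at the idle corner; a 50/50 user/sys split sits
# midway along the bottom edge. Watching all 72 points move frame by
# frame is what the animation above shows.
print(to_point(0, 0, 100))   # prints: (0.5, 1.0)
print(to_point(50, 50, 0))   # prints: (0.5, 0.0)
```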

Read the full article here