4.7. Reporting

This chapter describes PerfCake's reporting capabilities. Reporting is configured using the <reporting> element in the scenario definition.

The configuration consists of one or more reporters, each with one or more destinations.

A reporter represents a particular type of report, such as average throughput or memory usage. By configuring its destinations you tell the reporter where its output should be directed (e.g. the console, a CSV file, etc.).

A reporter can publish multiple results (e.g. the current value, the average value, etc.), each with a particular name. The actual names of the results are described with each particular reporter.

When specifying the reporter class, unless you enter a fully qualified class name, the default package org.perfcake.reporting.reporter is assumed. For the destination class, the default package is org.perfcake.reporting.destination.

Example 4.41. An example reporting configuration:

   <reporting>
      <reporter class="ThroughputStatsReporter">
         <property name="minimumEnabled" value="false"/>
         <property name="maximumEnabled" value="false"/>
         <destination class="ConsoleDestination">
            <period type="time" value="1000"/>
         </destination>
         <destination class="CsvDestination">
            <period type="time" value="2000"/>
            <property name="path" value="test-average-throughput.csv"/>
         </destination>
      </reporter>

      <reporter class="MemoryUsageReporter">
         <property name="agentHostname" value="localhost"/>
         <property name="agentPort" value="8850"/>
         <destination class="CsvDestination">
            <period type="time" value="2000"/>
            <property name="path" value="test-memory-usage.csv"/>
         </destination>
      </reporter>
   </reporting>

With this configuration two reporters are specified - ThroughputStatsReporter and MemoryUsageReporter. The first one reports to the console every second and to a CSV file every 2 seconds, while the second one reports the memory usage of the tested system into a CSV file every 2 seconds.

Each reporter can be enabled or disabled by the optional boolean attribute called enabled. A disabled reporter is ignored by PerfCake as if it were not present at all. If the attribute is not specified, the reporter is enabled by default.

Example 4.42. An example of a disabled reporter:

   <reporting>
      <reporter class="ThroughputStatsReporter">
         ...
      </reporter>

      <reporter class="MemoryUsageReporter" enabled="false">
         ...
      </reporter>
   </reporting>

In the example above there are two reporters configured, ThroughputStatsReporter which is enabled and MemoryUsageReporter which is disabled.

The following sections contain a description of reporters and destinations that can be used in PerfCake.


4.7.1. Reporters

ClassifyingReporter

The reporter can monitor a selected message attribute (i.e. a sequence), classify its values and count the number of appearances of each individual value. The counts are then reported under the class name with an optional prefix.

Property name | Description | Required | Default value
attribute | The name of the message attribute to classify. | Yes | -
prefix | A prefix used in the result map for individual class names. | No | class_

Table 4.30. ClassifyingReporter properties


Example 4.43. An example of ClassifyingReporter configuration

   <sequences>
      <sequence class="ThreadIdSequence" id="threadId" />
   </sequences>
   ...
   <reporting>
      <reporter class="ClassifyingReporter">
         <property name="attribute" value="threadId" />
         <property name="prefix" value="thread_" />
         <destination class="ConsoleDestination">
            <period type="iteration" value="1000"/>
         </destination>
      </reporter>
   </reporting>
   ...

In the example above a ClassifyingReporter is configured to report the utilization of the individual threads used to send messages. The number of classes is equal to the number of configured threads. The sum of the values is the total number of iterations passed.

An example output of the above configuration follows.

Example 4.44. An example of the output when ClassifyingReporter is used

[0:00:00][60000 iterations][60%] [warmUp => false] [Threads => 10] [failures => 0] [thread_22 => 4381]↵
[thread_23 => 3734] [thread_24 => 7607] [thread_14 => 8016] [thread_15 => 3338] [thread_16 => 3204]↵
[thread_17 => 7790] [thread_18 => 7215] [thread_19 => 8493] [thread_21 => 6222]

GeolocationReporter

The reporter figures out geo-location information from a 3rd party service (http://ipinfo.io) and stores the returned values in the results. The values are obtained just once for the whole test execution, so it does not make much sense for this reporter to report more than once. This can be achieved by setting the reporting period to a very large number - the first iteration is always reported and no others will be.

The reporter has just a single configuration property. It allows you to switch to a different geo-location service provider. However, its usefulness is questionable because the reporter expects very specific output from the 3rd party service. The best possibility is to provide your own service with the exact same JSON result format.

PerfCake needs internet access for this reporter to work. As a bonus, the reporter counts the average iterations per second in the same way as IterationsPerSecondReporter does.

Property name | Description | Required | Default value
serviceUrl | The location of the 3rd party geo-location service. | No | http://ipinfo.io/json

Table 4.31. GeolocationReporter properties


The following table describes result names of GeolocationReporter:

Result name | Description
Result | The current throughput in iterations/s.
ip | The public IP address of the host where PerfCake runs (or your provider's IP address).
hostname | Your or your provider's hostname.
city | Estimated city where PerfCake runs.
region | Estimated region where PerfCake runs.
country | Estimated country where PerfCake runs.
lat | Estimated latitude (+ means north, - means south).
lon | Estimated longitude (+ means east, - means west).

Table 4.32. GeolocationReporter result names


Example 4.45. An example of GeolocationReporter configuration

  1    <reporter class="GeolocationReporter">
  2       ...
  3       (destinations)
  4       ...
  5    </reporter>
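
Because the geo-location values are constant for the whole run, a destination period far larger than the expected number of iterations effectively limits the output to the first (always published) result, as suggested above. A minimal sketch of this approach, assuming console output is sufficient:

   <reporter class="GeolocationReporter">
      <destination class="ConsoleDestination">
         <!-- only the first iteration gets reported; the period is never reached again -->
         <period type="iteration" value="1000000000"/>
      </destination>
   </reporter>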

IterationsPerSecondReporter

The reporter reports a plain high-level throughput (in terms of the number of iterations per second) from the beginning of the measurement to the moment when the results are published. The result is computed from the number of iterations processed at the moment of publishing the result and the time elapsed since the beginning of the test (or since the end of the warm-up).

The reporter does not have any specific properties.

The following table describes result names of IterationsPerSecondReporter:

Result name | Description
Result | The current throughput in iterations/s.

Table 4.33. IterationsPerSecondReporter result names


Example 4.46. An example of IterationsPerSecondReporter configuration

  1    <reporter class="IterationsPerSecondReporter">
  2       ...
  3       (destinations)
  4       ...
  5    </reporter>

In the example above an IterationsPerSecondReporter is configured to report the overall throughput in iterations per second.


MemoryUsageReporter

The reporter is able to report the current memory usage of the tested system at the moment when the results are published. It requires the PerfCake agent to be installed in the tested system's JVM.

To be able to use MemoryUsageReporter you need to attach the PerfCake agent to the tested system's JVM. The PerfCake agent is a part of the binary distribution (the PerfCake Agent's JAR archive). The agent is configurable with the following properties:

Property name | Description | Required | Default value
hostname | An IP address or hostname where the PerfCake agent listens. | No | localhost
port | A port number on which the PerfCake agent listens. | No | 8850

Table 4.34. PerfCake agent properties


To attach the agent to the tested system's JVM, append the following JVM argument to the executing java command or use the JAVA_OPTS environment variable. Since Java 7 it is also possible to attach the agent to an already running JVM (supposing you have tools.jar on the classpath).

Example 4.47. JVM argument to attach PerfCake agent to the tested JVM

"... -javaagent:<perfcake_agent_jar_path>=hostname=<hostname>,port=<port>"

Example 4.48. PerfCake JVM argument example

JAVA_OPTS="... -javaagent:$PERFCAKE_HOME/lib/perfcake-agent-7.x.jar=port=8850"

Example 4.49. Attaching to running JVM

java -cp $JAVA_HOME/lib/tools.jar:perfcake-agent-7.x.jar \
  org.perfcake.agent.PerfCakeAgent <PID> hostname=<hostname>,port=<port>

Once you have started the tested system up, you should see the following line in the system's console output:

...
PerfCakeAgent > Listening at localhost on port 8850
...

Once you have the PerfCake agent attached and the tested system is up and running you can use the MemoryUsageReporter to measure the memory usage of the tested system.

MemoryUsageReporter is capable of detecting a possible memory leak. The detection is disabled by default. Once enabled, the reporter periodically gathers the memory usage from the tested system in the ordinary way (using the PerfCakeAgent) and remembers a window of the N last measured values. Once the window is filled, the reporter uses a linear regression analysis over the data in the time window to compute a used-memory trend. A possible memory leak is considered detected when the slope of the memory trend exceeds the specified slope threshold. The gathering period, the window size and the slope threshold are all configurable via the reporter's properties.

The reporter is also able to dump the memory when a possible memory leak is detected. The feature can be enabled by the memoryDumpOnLeak property and the memory dump is then saved to a file which can be specified by the memoryDumpFile property. If not specified, the dump name is generated as "dump-" + System.currentTimeMillis() + ".bin" in case of the Java agent that is a part of PerfCake.

The reporter can ask the agent to perform a garbage collection each time the memory usage of the tested system is measured and published. Since garbage collection is a CPU intensive operation, be careful about enabling it and about how often the memory usage is measured, because it will have a significant impact on the measured system and, naturally, on the measured results too.

The reporter has the following properties:

Property name | Description | Required | Default value
agentHostname | An IP address or hostname where the PerfCake agent is listening. | No | localhost
agentPort | A port number where the PerfCake agent is listening. | No | 8850
memoryDumpOnLeak | Makes a memory dump when a possible memory leak is detected. The MemoryUsageReporter sends a command to the PerfCake agent that creates a heap dump. | No | false
memoryDumpFile | The name of the memory dump file. A full "file:" URI is supported. | No | -
memoryLeakDetectionEnabled | Enables or disables the memory leak detection. | No | false
memoryLeakDetectionMonitoringPeriod | A time period in ms in which the memory leak detection mechanism gathers memory usage data. | No | 500
memoryLeakSlopeThreshold | The used-memory trend slope threshold in bytes per second. | No | 1024
performGcOnMemoryUsage | Enables/disables performing a garbage collection each time the memory usage of the tested system is measured and published. See the warning above about the performance impact of this feature. | No | false
usedMemoryTimeWindowSize | The used memory time window size (the number of records in the memory data set used for the statistical analysis). | No | 100

Table 4.35. MemoryUsageReporter properties


The following table describes result names of MemoryUsageReporter:

Result name | Description
Used | The amount of currently used memory in the Java Virtual Machine.
Total | The total amount of memory in the Java Virtual Machine in MiB.
Max | The maximum amount of memory that the Java Virtual Machine will attempt to use in MiB.
UsedTrend | The memory usage regression line slope in B/s.
MemoryLeak | A boolean value indicating whether a possible memory leak has been detected yet.

Table 4.36. MemoryUsageReporter result names


Example 4.50. An example of MemoryUsageReporter configuration

  1    <reporter class="MemoryUsageReporter">
  2       <property name="agentHostname" value="localhost"/>
  3       <property name="agentPort" value="8850"/>
  4       ...
  5       (destinations)
  6       ...
  7    </reporter>
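
A minimal sketch of the memory leak detection described above (the property values are only illustrative and the dump file name is arbitrary):

   <reporter class="MemoryUsageReporter">
      <property name="agentHostname" value="localhost"/>
      <property name="agentPort" value="8850"/>
      <!-- gather memory usage every 500 ms and analyze a window of the last 100 samples -->
      <property name="memoryLeakDetectionEnabled" value="true"/>
      <property name="memoryLeakDetectionMonitoringPeriod" value="500"/>
      <property name="usedMemoryTimeWindowSize" value="100"/>
      <!-- consider a possible leak when used memory grows faster than 1024 B/s -->
      <property name="memoryLeakSlopeThreshold" value="1024"/>
      <!-- dump the heap of the tested JVM when a possible leak is detected -->
      <property name="memoryDumpOnLeak" value="true"/>
      <property name="memoryDumpFile" value="possible-leak.bin"/>
      <destination class="ConsoleDestination">
         <period type="time" value="1000"/>
      </destination>
   </reporter>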

ResponseTimeHistogramReporter

Reports the response time in milliseconds using HDR Histogram, which can computationally correct the coordinated omission problem.

The following paragraphs are based on the HDR Histogram documentation.

This reporter depends on the features introduced by HDR Histogram to correct coordinated omission. This problem occurs when all sending threads are blocked waiting for a response from a system under test that suddenly stops responding for a relatively long time (longer than it did in the past). Under these conditions, no additional bad results with high response times are recorded while the system is still blocked. To have a balanced result, we should have approximately the same number of measurements for each time interval during the test execution.

To compensate for the loss of sampled values when a recorded value is larger than the expected interval between value samples, HDR Histogram will auto-generate an additional series of decreasingly-smaller value records. The values go down to the expectedValue in case of the user correction mode, or down to the average response time in case of the auto correction mode.

The reporter tries to divide the time range between shortest and longest response time into intervals of similar length and calculate the percentiles for the intervals.

For example, the reporter can be configured to track the counts of observed response times in milliseconds between 0 and 3,600,000 (maxExpectedValue) while maintaining a value precision of 3 (precision) significant digits across that range. Value quantization within the range will thus be no larger than 1/1,000th (or 0.1%) of any value. This example reporter could be used to track and analyze the counts of observed response times ranging between 1 millisecond and 1 hour in magnitude, while maintaining a value resolution of 1 millisecond (or better) up to one second, and a resolution of 1 second (or better) up to 1,000 seconds. At its maximum tracked value (1 hour), it would still maintain a resolution of 3.6 seconds (or better).

Property name | Description | Required | Default value
precision | Precision of the resulting histogram (number of significant digits) in range 0 - 5. This determines the memory used by the reporter. Also, for a low precision, numbers are recorded in less precise ranges. | No | 2
detail | Detail level of the result (the number of iteration steps per half-distance to 100%). Must be greater than 0. | No | 2
maxExpectedValue | The maximum expected value used to better organize the data in the histogram. The reported response time must never exceed this value, otherwise the result is skipped, an error is reported and the output will be invalid. -1 turns the optimization off. It is valuable to set a reasonable number like 3,600,000, which corresponds to the resolution from 1 millisecond to 1 hour. | No | -1 (unspecified)
correctionMode | The correction of coordinated omission in the resulting histogram. auto is the default value and means that the histogram is corrected according to the average measured value. In the user correction mode, the values are corrected according to the expectedValue specified by the user; this is useful when you know the expected response time in advance. The correction for coordinated omission is turned off by setting this property to none. | No | auto
expectedValue | The normal/typical/expected response time in ms used to correct the histogram when the user correction mode is turned on. | Only when correctionMode is set to user, however the default value can still be used. | 1
prefix | String prefix used in the result map for histogram entries. This prefix is followed by the percentile for the corresponding range cumulatively. E.g. perc0.98834000=14 means that in 98.834% of measurements the response time was 14 or better. | No | perc
filter | When true, the reporter tries to minimize the number of reported values while keeping the same level of information. For instance, instead of reporting perc0.0=2, perc0.5=2, perc0.75=2, perc0.882=3, just the values perc0.75=2, perc0.882=3 are reported. It is then obvious that all percentiles under 0.75 are equal to 2. | No | false

Table 4.37. ResponseTimeHistogramReporter properties


ResponseTimeHistogramReporter can be best used with CsvDestination and ChartDestination as you can see in the following example.

Example 4.51. An example of ResponseTimeHistogramReporter configuration

  1    <reporter class="ResponseTimeHistogramReporter">
  2       <property name="detail" value="1" />
  3       <property name="precision" value="1" />
  4       <property name="maxExpectedValue" value="100" />
  5       <property name="correctionMode" value="user" />
  6       <property name="expectedValue" value="2" />
  7       ...
  8       <destination class="ChartDestination">
  9          <period type="time" value="500"/>
 10          <property name="yAxis" value="HDR Response time [ms]"/>
 11          <property name="group" value="${perfcake.scenario}_hdr_resp"/>
 12          <property name="name" value="HDR Response Time (${threads:25} threads)"/>
 13          <property name="attributes" value="*, warmUp"/>
 14          <property name="autoCombine" value="false" />
 15          <property name="chartHeight" value="1000" />
 16          <property name="outputDir" value="target/${perfcake.scenario}-charts"/>
 17       </destination>
 18       <destination class="CsvDestination">
 19          <period type="time" value="500"/>
 20          <property name="expectedAttributes" value="*" />
 21          ...
 22       </destination>
 23       ...
 24    </reporter>

In the example above an instance of ResponseTimeHistogramReporter is configured to report at a slightly lower precision and detail level than the default, with correction for coordinated omission expecting the system under test to respond within 2 ms, and with no response time larger than 100 ms expected. The results will be written as a chart and into a CSV file.


ResponseTimeStatsReporter

The reporter is able to report response time statistics - the current, minimal, maximal and average value of the response time (in milliseconds) - either from the beginning of the measurement to the moment when the results are published (the default) or in a specified window. The default result of this reporter is the current response time.

Property name | Description | Required | Default value
minimumEnabled | Enables minimal value measuring. | No | true
maximumEnabled | Enables maximal value measuring. | No | true
averageEnabled | Enables average value measuring. | No | true
requestSizeEnabled | Enables measuring of the total size of requests sent. | No | true
responseSizeEnabled | Enables measuring of the total size of responses received. | No | true
windowSize | A window the data for the statistics are taken from. The value unit depends on the window type specified by the windowType property. | No | Integer.MAX_VALUE
windowType | The type of the window. It is either a number of last iterations or an amount of time in milliseconds. The values iteration and time are supported. | No | iteration
histogram | A comma separated list of values where the histogram is split into individual ranges. | No | -
histogramPrefix | String prefix used in the result map for histogram entries. This prefix is followed by the mathematical representation of the particular range. | No | in

Table 4.38. ResponseTimeStatsReporter properties


The following table describes result names of ResponseTimeStatsReporter:

Result name | Description
Result | The current response time in ms - of the latest iteration.
Minimum | The minimal response time in ms measured so far (in a given sliding window).
Maximum | The maximal response time in ms measured so far (in a given sliding window).
Average | The average response time in ms measured so far (in a given sliding window).
RequestSize | The size of all requests sent so far (in a given sliding window).
ResponseSize | The size of all responses received so far (in a given sliding window).
${histogramPrefix}<from:to) | If a histogram is used, there is a result with the histogram value for each range. Example: in<100.0:200.0) for a value range between 100.0 and 200.0 and histogramPrefix set to "in".

Table 4.39. ResponseTimeStatsReporter result names


Example 4.52. An example of ResponseTimeStatsReporter configuration

   <reporter class="ResponseTimeStatsReporter">
      <property name="minimumEnabled" value="false"/>
      <property name="maximumEnabled" value="false"/>
      ...
      (destinations)
      ...
   </reporter>

Example 4.53. An example of ResponseTimeStatsReporter configuration with histogram

  1    <reporter class="ResponseTimeStatsReporter">
  2       <property name="histogram" value="100,200"/>
  3       <property name="histogramPrefix" value="in"/>
  4 
  5       <destination class="ConsoleDestination">
  6          <period type="time" value="5000" />
  7       </destination>
  8       ...
  9       (destinations)
 10       ...
 11    </reporter>

In the example above a ResponseTimeStatsReporter is configured to report all statistics with the following output:

2016-06-13 23:20:22,158 INFO  {org.perfcake.ScenarioExecution} === Welcome to PerfCake 7.5 ===
2016-06-13 23:20:22,159 INFO  {org.perfcake.util.TimerBenchmark} Benchmarking system timer resolution...
2016-06-13 23:20:22,160 INFO  {org.perfcake.util.TimerBenchmark} This system is able to differentiate up to 356ns. A single thread is now able to measure maximum of 2808988 iterations/second.
2016-06-13 23:20:22,177 INFO  {org.perfcake.message.generator.DefaultMessageGenerator} Starting to generate...
[0:00:00][1 iterations][0%] [1.084454 ms] [warmUp => false] [Threads => 10] [ResponseSize => 256.00 B] [Minimum => 1.084454 ms] [Maximum => 1.084454 ms] [failures => 0] [RequestSize => 256.00 B] [Average => 1.084454 ms]
2016-06-13 23:20:22,201 INFO  {org.perfcake.message.generator.DefaultMessageGenerator} Reached test end. All messages were prepared to be sent.
2016-06-13 23:20:22,201 INFO  {org.perfcake.message.generator.DefaultMessageGenerator} Waiting for all messages to be sent...
[0:00:00][100 iterations][10%] [1.059404 ms] [warmUp => false] [Threads => 10] [ResponseSize => 25.00 KiB] [Minimum => 1.009627 ms] [Maximum => 2.501367 ms] [failures => 0] [RequestSize => 25.00 KiB] [Average => 1.11484413 ms]
[0:00:00][200 iterations][20%] [1.057651 ms] [warmUp => false] [Threads => 10] [ResponseSize => 50.00 KiB] [Minimum => 1.008045 ms] [Maximum => 3.143239 ms] [failures => 0] [RequestSize => 50.00 KiB] [Average => 1.1479294649999998 ms]
[0:00:00][300 iterations][30%] [1.06057 ms] [warmUp => false] [Threads => 10] [ResponseSize => 75.00 KiB] [Minimum => 1.008045 ms] [Maximum => 3.143239 ms] [failures => 0] [RequestSize => 75.00 KiB] [Average => 1.12146921 ms]
[0:00:00][400 iterations][40%] [1.063183 ms] [warmUp => false] [Threads => 10] [ResponseSize => 100.00 KiB] [Minimum => 1.008045 ms] [Maximum => 3.143239 ms] [failures => 0] [RequestSize => 100.00 KiB] [Average => 1.1061472125 ms]
[0:00:00][500 iterations][50%] [1.114687 ms] [warmUp => false] [Threads => 10] [ResponseSize => 125.00 KiB] [Minimum => 1.008045 ms] [Maximum => 3.143239 ms] [failures => 0] [RequestSize => 125.00 KiB] [Average => 1.1018137200000009 ms]
[0:00:00][600 iterations][60%] [1.031199 ms] [warmUp => false] [Threads => 10] [ResponseSize => 150.00 KiB] [Minimum => 1.008045 ms] [Maximum => 3.988934 ms] [failures => 0] [RequestSize => 150.00 KiB] [Average => 1.1051423050000004 ms]
[0:00:00][700 iterations][70%] [1.272101 ms] [warmUp => false] [Threads => 10] [ResponseSize => 175.00 KiB] [Minimum => 1.008045 ms] [Maximum => 3.988934 ms] [failures => 0] [RequestSize => 175.00 KiB] [Average => 1.1001457528571437 ms]
[0:00:00][800 iterations][80%] [1.059498 ms] [warmUp => false] [Threads => 10] [ResponseSize => 200.00 KiB] [Minimum => 1.007744 ms] [Maximum => 3.988934 ms] [failures => 0] [RequestSize => 200.00 KiB] [Average => 1.0959077025000006 ms]
[0:00:00][900 iterations][90%] [1.079325 ms] [warmUp => false] [Threads => 10] [ResponseSize => 225.00 KiB] [Minimum => 1.007744 ms] [Maximum => 6.986077 ms] [failures => 0] [RequestSize => 225.00 KiB] [Average => 1.1047625722222227 ms]
[0:00:00][1000 iterations][100%] [1.05882 ms] [warmUp => false] [Threads => 10] [ResponseSize => 250.00 KiB] [Minimum => 1.007744 ms] [Maximum => 6.986077 ms] [failures => 0] [RequestSize => 250.00 KiB] [Average => 1.104236193000001 ms]
2016-06-13 23:20:23,310 INFO  {org.perfcake.reporting.ReportManager} Checking whether there are more results to be reported...
2016-06-13 23:20:23,313 INFO  {org.perfcake.ScenarioExecution} === Goodbye! ===

ThroughputStatsReporter

The reporter is able to report throughput statistics - the current, minimal, maximal and average value of the pure throughput (in terms of the number of iterations per second) - either from the beginning of the measurement to the moment when the results are published (the default) or in a specified window. The default result of this reporter is the current pure throughput.

The pure throughput is how many iterations per second the tested system would be able to process under the current load if the overhead were zero. It is computed from the response time simply by inverting the value of the response time and multiplying it by the number of threads.
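
For illustration (with arbitrarily chosen numbers), 10 threads and an average response time of 2 ms per iteration yield a pure throughput of 10 × (1000 / 2) = 5,000 iterations per second.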

Property name | Description | Required | Default value
minimumEnabled | Enables minimal value measuring. | No | true
maximumEnabled | Enables maximal value measuring. | No | true
averageEnabled | Enables average value measuring. | No | true
requestSizeEnabled | Enables measuring of the total size of requests sent. | No | true
responseSizeEnabled | Enables measuring of the total size of responses received. | No | true
windowSize | A window the data for the statistics are taken from. The value unit depends on the window type specified by the windowType property. | No | Integer.MAX_VALUE
windowType | The type of the window. It is either a number of last iterations or an amount of time in milliseconds. The values iteration and time are supported. | No | iteration
histogram | A comma separated list of values where the histogram is split into individual ranges. | No | -
histogramPrefix | String prefix used in the result map for histogram entries. This prefix is followed by the mathematical representation of the particular range. | No | in

Table 4.40. ThroughputStatsReporter properties


The following table describes result names of ThroughputStatsReporter:

Result name | Description
Result | The current throughput in iterations/s - of the latest iteration.
Minimum | The minimal throughput in iterations/s measured so far (in a given time window).
Maximum | The maximal throughput in iterations/s measured so far (in a given time window).
Average | The average throughput in iterations/s measured so far (in a given time window).
RequestSize | The size of all requests sent so far (in a given sliding window).
ResponseSize | The size of all responses received so far (in a given sliding window).
${histogramPrefix}<from:to) | If a histogram is used, there is a result with the histogram value for each range. Example: in<100.0:200.0) for a value range between 100.0 and 200.0 and histogramPrefix set to "in".

Table 4.41. ThroughputStatsReporter result names


Example 4.54. An example of ThroughputStatsReporter configuration with a sliding window over last 30 iterations

  1    <reporter class="ThroughputStatsReporter">
  2       <property name="minimumEnabled" value="false"/>
  3       <property name="maximumEnabled" value="false"/>
  4       <property name="windowSize" value="30"/>
  5       ...
  6       (destinations)
  7       ...
  8    </reporter>

In the example above there is a ThroughputStatsReporter configured to report the current and the average value of the throughput in a sliding window of 30 iterations.

Example 4.55. An example of output with the above configuration

[0:00:01][50 iterations][10%] [68.56845152517322 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 56.58369666441687 iterations/s]
[0:00:02][125 iterations][20%] [72.44633269853925 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 71.35868254120271 iterations/s]
[0:00:03][188 iterations][30%] [73.01081433991678 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 68.11154264877203 iterations/s]
[0:00:04][253 iterations][40%] [72.21612698050471 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 71.15376567483213 iterations/s]
[0:00:05][309 iterations][50%] [72.30052923842801 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 72.50585514834 iterations/s]
[0:00:06][351 iterations][60%] [74.03760369891869 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 73.84493230828234 iterations/s]
[0:00:07][390 iterations][70%] [73.47836429196032 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 73.86340270431613 iterations/s]
[0:00:08][426 iterations][80%] [73.44844565461368 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 74.35713799467462 iterations/s]
[0:00:09][464 iterations][90%] [75.16546154258003 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 73.90578951833827 iterations/s]
[0:00:10][499 iterations][100%] [75.45269086855991 iterations/s] [warmUp => false] [Threads => 10] [ResponseSize => 19.24 KiB] [failures => 0] [RequestSize => 650.00 B] [Average => 73.98895045521625 iterations/s]

WarmUpReporter

The reporter is able to determine when the tested system is warmed up. The warm-up is enabled/disabled simply by the presence of an enabled WarmUpReporter in the scenario. It does not publish any results to destinations. The minimal iteration count and the minimal warm-up duration can be tweaked by the respective properties minimalWarmUpCount (with the default value of 10,000 iterations) and minimalWarmUpDuration (with the default value of 15,000 ms).

The reporter internally keeps track of the current throughput - every second it checks the number of processed iterations and computes the current throughput as the difference in the number of iterations per checking period (a second). It also remembers the current throughput from the previous checking period to calculate the difference in throughput. The throughput is considered NOT to be changing much in time when the relative difference in the current throughput between the current checking period and the previous one is less than the relativeThreshold value, or the absolute difference between the two is less than the absoluteThreshold value.

Normally, the maximal length of the warm-up period is determined by the length of the performance test itself. It can be further limited (using the maximalWarmUpDuration and maximalWarmUpType properties) so that the test does not waste time when the system under test cannot get warmed up within a reasonable time frame.

The system is considered warmed up when all of the following conditions are satisfied: the current throughput is not changing much over time, the minimal iteration count has been reached, and the minimal duration from the very start has been exceeded.

Property name | Description | Required | Default value
minimalWarmUpDuration | A minimal amount of time (in milliseconds) of the warm-up period. | No | 15000
minimalWarmUpCount | A minimal number of iterations in the warm-up period. | No | 10000
relativeThreshold | A relative difference threshold to determine whether the throughput is not changing much. | No | 0.002
absoluteThreshold | An absolute difference threshold to determine whether the throughput is not changing much. | No | 0.2
maximalWarmUpDuration | Maximal tolerance of waiting for the end of the warm-up period. If we run out of this time/percentage/iteration count (determined by maximalWarmUpType), we simply break the test and do not waste any more time. -1 means that the check is disabled. | No | -1
maximalWarmUpType | The unit in which we measure the maximal warm-up count. Can be iteration, time, or percentage. | No | iteration

Table 4.42. WarmUpReporter properties


Example 4.56. An example of WarmUpReporter configuration

   <reporter class="WarmUpReporter" enabled="true">
      <property name="minimalWarmUpCount" value="1000"/>
      <property name="minimalWarmUpDuration" value="10000"/>
      <property name="relativeThreshold" value="0.005"/> <!-- 0.5% -->
      <property name="absoluteThreshold" value="0.5"/>
   </reporter>

In the example above the system would be considered warmed up when at least 1,000 iterations have been processed AND the scenario has been running for at least 10 seconds AND either the relative change in throughput is less than 0.5% or the absolute change in throughput is less than 0.5 iterations per second.
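
The maximal warm-up limit mentioned above can be added on top of this; a sketch with illustrative values follows. Here the test is aborted if the system does not get warmed up within the first 20% of the scenario run.

   <reporter class="WarmUpReporter">
      <property name="maximalWarmUpDuration" value="20"/>
      <property name="maximalWarmUpType" value="percentage"/>
   </reporter>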


RawReporter

This is a very specific reporter for advanced users. RawReporter simply stores the complete test execution reporting data in a given file. The file can later be replayed with any provided scenario. The replay just reuses the reporting configuration in a scenario and emulates the test execution with the previously recorded data. RawReporter does not support any destinations to be added to it.

The results file is a compressed (gzip) serialization of RunInfo as a header and all MeasurementUnits received by the reporter.

The following table describes configuration parameters of RawReporter:

Property name | Description | Required | Default value
outputFile | The file where the results will be recorded. | No | perfcake-measurement-${timestamp}.raw

Table 4.43. RawReporter properties


Example 4.57. An example of RawReporter configuration

  1    <reporter class="RawReporter">
  2       <property name="outputFile" value="results.raw" />
  3    </reporter>

In the example above a RawReporter is configured to report the results to the results.raw file. There are no other configuration possibilities and no destinations can be specified.


4.7.2. Destinations

A destination represents a place where the measurements from the reporters are published. Each destination is configured to publish the results of the reporter's measurements periodically during the scenario execution, with a period specified by the period element in the scenario definition. A destination can have multiple periods, but each destination has to have at least one period configured.

The following table shows the destination period options:

Destination period type | Value description
time | Time period in milliseconds
iteration | Number of iterations
percentage | The relative percentage of the scenario run

Table 4.44. Destination period options


Example 4.58.  An example of the period configuration in a destination:

   <destination class="...">
      <period type="time" value="1000"/>
      ...
      (properties)
      ...
   </destination>
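
Since a destination may have multiple periods, several period elements can be combined in a single destination. A sketch with illustrative values - results are published every 5 seconds and additionally every 1000 iterations:

   <destination class="...">
      <period type="time" value="5000"/>
      <period type="iteration" value="1000"/>
      ...
      (properties)
      ...
   </destination>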

The following sections describe the destinations that can be used by reporters.

ChartDestination

Creates nice charts from the results using the C3.js library. The charts are quite powerful and we recommend reading this section thoroughly to fully discover their capabilities.

A user typically needs to specify the location where the chart(s) will be stored. This is set by the outputDir property. Next, a list of attributes that should be put in the chart is specified in the attributes property. This is a comma separated list of attributes whose values are numbers. The charts cannot work with enumerations and text attributes. The names of the attributes can be seen in the console using ConsoleDestination. For example, in the listing below you can see the following attributes listed (not all of them need to be used in ChartDestination, you can select any suitable subset): Threads, Minimum, Maximum, failures, Average.

[0:00:08][97112 iterations][97%] [2.068165 ms] [warmUp => false] [Threads => 25] [Minimum => 2.004637 ms] [Maximum => 13.024963 ms] [failures => 0] [Average => 2.1204630348576172 ms]

Each chart also has some basic configuration properties specifying its name, the descriptions of the X and Y axes, and the height of the chart image in pixels. The output is written as HTML files. The data files created are:

  • ${outputDir}/data/${name}-${timeStamp}.* - containing chart meta data in JSON format, chart data in a JavaScript file and a preview HTML file for each chart,

  • ${outputDir}/index.html - the final report generated at the end of a performance test,

  • ${outputDir}/src/* - HTML resources needed to render the charts even in the offline mode.

The destination can work in two modes. The first is that all data are immediately written to the file system and you can find a preview file (see the list above) of the current state during the performance testing. Once you open the preview HTML file in the browser, it is automatically reloaded every 5 seconds. This mode can only be used when we specify the precise names of the attributes to be recorded.

There is also an option to use a prefix wildcard in the list of attributes. This is very useful when used with ResponseTimeHistogramReporter, which reports attributes in the format <some prefix><percentile value> (e.g. perc0.0100000, perc0.2500000...). For this particular use case, we need to specify the attribute as perc*. We can also use just * to record all the available attributes. However, because we do not know all the attributes in advance, the preview is not available and no data are written to the file system until the performance test is completed.

One attribute that cannot be replaced with a wildcard is warmUp. ChartDestination can be configured to completely ignore the warm-up period of the test when the warmUp attribute is not specified in the list of attributes. Otherwise it records all attributes specified in the list during both the warm-up and the normal test phase. During the warm-up phase, the values are recorded into separate data series with the _warmUp suffix.

For example, for the attribute list Result, Average, warmUp, the following data series will be created (supposing the test has WarmUpReporter configured): Result_warmUp, Average_warmUp, Result, Average. If we set the attribute list to *, the following data series would be created: Result, Average, Threads (supposing there are no other attributes reported). If we wanted to create data series even for the warm-up phase, we would need to specify the attributes list as *, warmUp.
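
For illustration, a ChartDestination recording all available attributes during both the warm-up and the normal phase might look roughly like this (a sketch only; remember that the wildcard disables previews, and see Example 4.51 for a complete configuration):

   <destination class="ChartDestination">
      <period type="time" value="1000"/>
      <property name="attributes" value="*, warmUp"/>
      <property name="outputDir" value="perfcake-chart"/>
   </destination>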

It is possible to keep recording new charts to the same directory location as was used for previous performance test runs. In this case, ChartDestination can automatically create combined charts comparing the same data series (according to their name) from all the charts recorded so far and having the same group property value. This behavior can be switched off by setting the autoCombine property to false.

Please use the charts with caution, as a big number of results or charts recorded in the same report can take a long time to load in the browser.

The following table describes the ChartDestination properties:

Property name | Description | Required | Default value
attributes | Attributes that will be stored in the chart. Each attribute is a result name of the reporter from which the results are published. Prefix wildcards (e.g. perc*) can be used; using wildcards turns off chart previews. To record during the warm-up phase, the warmUp attribute needs to be specified explicitly. | Yes | -
name | Name of the chart for this measurement. There must not be two charts with the same name. | No | PerfCake Results
group | Group of this chart. Charts in the same group can be later matched for the column names. The group name can contain only alphanumeric characters and underscores and must not begin with a digit. If the group name does not follow these naming conventions, it is converted to do so. | No | default
xAxis | X axis legend. | No | Time
yAxis | Y axis legend. | No | Iterations
type | The chart can have two visual types - a line chart or a bar chart. The bar chart is not recommended for reporting many values; it is more suitable for a few (or even a single) records (e.g. an HDR histogram). Possible values are line and bar. | No | line
outputDir | The name of the directory where the charts are stored. | No | perfcake-chart
chartHeight | The height in pixels of each individual chart graphics in the HTML report. This is useful when the legend is too long. | No | 400
autoCombine | Specifies whether the newly created chart should be automatically combined with the previously recorded data. | No | true

Table 4.45. ChartDestination properties


Example 4.59. An example of ChartDestination configuration

  1    <reporter class="MemoryUsageReporter">
  2       ...
  3       <destination class="ChartDestination">
  4          <period type="time" value="1000"/>
  5          <property name="name" value="Memory Usage"/>
  6          <property name="group" value="${perfcake.scenario}_memory"/>
  7          <property name="yAxis" value="Memory Usage [MiB]"/>
  8          <property name="outputDir" value="${perfcake.scenario}-charts"/>
  9          <property name="attributes" value="Used,Total"/>
 10       </destination>
 11    <reporter>

In the example above a MemoryUsageReporter is configured to publish a memory usage report into a chart. The memory usage data are gathered every second and the chart shows the used and total memory (results taken from the Used and Total attributes of the MemoryUsageReporter). The resulting chart is shown in Figure 4.2, “ChartDestination example chart”.


Figure 4.2. ChartDestination example chart


ConsoleDestination

A simple destination that appends the measurements to the PerfCake's console output.

Warning

Console output is not written into PerfCake log file and is lost once you close your terminal. If you want to keep the output, use the section called “Log4jDestination” or redirect PerfCake output to a file.

It is possible to set up a prefix for each line of the output to differentiate between several instances of this destination in a single performance test scenario. The ConsoleDestination can also send ANSI codes to change the output color; however, this works only on certain operating systems and terminals. For example, Microsoft Windows is known to support this feature since version 10.

The color codes are determined by the terminal configuration and can vary on different platforms.

The following table lists the typical colors, two values (normal and bright) each.

Code | Color
0 and 8 | Black
1 and 9 | Red
2 and 10 | Green
3 and 11 | Yellow
4 and 12 | Blue
5 and 13 | Magenta
6 and 14 | Cyan
7 and 15 | White

The following table lists the available properties of ConsoleDestination.

Property name | Description | Required | Default value
prefix | A prefix string that is written to each output line. | No | -
foreground | Output foreground color as a number in range 0 - 15. The real color depends on the terminal configuration. The range 8 - 15 means a bold or bright version of the same colors as 0 - 7. | No | -
background | Output background color as a number in range 0 - 7. The real color depends on the terminal configuration. Background does not support bold/bright colors. | No | -

Table 4.46. ConsoleDestination properties


Example 4.60. An example of ConsoleDestination configuration

  1    <destination class="ConsoleDestination">
  2       <period type="time" value="1000"/>
  3       <property name="prefix" value="===[Throughput]===>"/>
  4       <property name="foreground" value="11"/>
  5    </destination>

CsvDestination

This destination can be used to publish the measurements into a CSV file. Each result in the measurement is treated as a column in the file and the name of the result is used to name the column.

CsvDestination in its minimal configuration simply streams out all the measured data during the test execution and the CSV result file is immediately available. It always writes out Time, Iteration and Result attributes. These attributes cannot be requested in the scenario configuration and cannot be removed.

To get a better idea of which attributes can be requested in your scenario configuration, use ConsoleDestination, which always outputs all of them. You can then pick those that suit your needs.

By default, CsvDestination writes out all the attributes observed in the first measurement it receives. Changing the CSV result file to add a data column while the performance test is in progress would cause too much overhead, so the file header, once written, remains unchanged and attributes added to the measurements later are ignored. This is especially the case when it takes very long for the first result to arrive while the CsvDestination already reports to the CSV file; such results would practically be blocked from being reported.

To handle this situation, it is possible to specify wildcards in the form of <prefix>*. This is very useful when used with histogram reporters where we do not know the names of the attributes in advance. It is also possible to use just * as a wildcard. It is not possible to use the asterisk wildcard in the middle of an attribute name, and the wildcard does not replace the warmUp attribute. However, such a configuration leads to storing all the results in memory and writing the final CSV result file only after the test has successfully finished. In case of a failure, no results are written.

If any attributes are missing from the records, CsvDestination can either skip such a record or fill in the missing values with null, as controlled by the missingStrategy property.
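
A sketch combining these options (the file name is illustrative): the destination expects all attributes starting with perc and skips records in which some of them are missing.

   <destination class="CsvDestination">
      <period type="time" value="1000"/>
      <property name="path" value="histogram.csv"/>
      <property name="expectedAttributes" value="perc*"/>
      <property name="missingStrategy" value="skip"/>
   </destination>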

The following table describes the configuration properties of CsvDestination.

Property name | Description | Required | Default value
path | A path to the output CSV file. | No | perfcake-results-${perfcake.run.timestamp}.csv
delimiter | A CSV record delimiter. | No | ;
appendStrategy | A strategy that is used in case the output file already exists. overwrite means that the file is overwritten, rename means that the current output file is renamed by adding a number-based suffix, and append appends new results to the original file. | No | rename
expectedAttributes | A comma separated list of attributes to be recorded in the CSV result file. This is useful when the attributes are not present in every measurement or when it takes too long for the first measurement to arrive. | No | Empty string. The attributes reported by default are Time, Iteration, Result and all others observed in the first measurement (supposing it arrives sooner than the CsvDestination reports for the first time).
missingStrategy | Specifies what to do in case of a missing attribute value. The value can either be replaced with null when the strategy is set to null, or the whole record can be skipped when set to skip. | No | null
linePrefix | The prefix prepended to all lines in the CSV result file. This can facilitate creating JSON-like records, for example. | No | (empty string)
lineSuffix | The suffix appended to all lines in the CSV result file. This can facilitate creating JSON-like records, for example. | No | (empty string)
lineBreak | The line separator used to add a new entry to the CSV result file. | No | \n (new line character)
skipHeader | Skips writing the header in the CSV result file if set to true. | No | false

Table 4.47. CsvDestination properties


Example 4.61. An example of CsvDestination configuration

  1    <destination class="CsvDestination">
  2       <period type="time" value="1000"/>
  3       <property name="path" value="${perfcake.scenario}-output.csv"/>
  4       <property name="appendStrategy" value="overwrite"/>
  5    </destination>

Example 4.62. Sample output of CsvDestination

Time;Iterations;Result;warmUp;Threads;ResponseSize;Minimum;Maximum;failures;RequestSize;Average
0:00:00;1;1.306239;false;10;0;1.306239;1.306239;0;0;1.306239
0:00:00;1000;1.069117;false;10;0;1.014015;10.468919;0;0;1.1494209689999997
0:00:00;2000;1.059257;false;10;0;1.012308;10.468919;0;0;1.1295661935000005
0:00:00;3000;1.069961;false;10;0;1.012308;10.468919;0;0;1.1466806060000005

ElasticsearchDestination

This destination stores results in an Elasticsearch database. The reported data carry information about the test progress (time in milliseconds since start, percentage and iteration), the real time of each result, and the complete results map. Quantities are stored without their unit so that they can be parsed as numbers.

To properly search through the data, we need to set the mapping (to be able to interpret time as time). However, this needs to be done just once for each index and type. Further attempts to set the mapping lead to an error from the server (this does not break the test execution) because Elasticsearch cannot change an existing mapping.

Property name | Description | Required | Default value
serverUrl | Comma separated list of Elasticsearch servers including protocol and port number. Port is typically 9292. | Yes | -
index | Elasticsearch index name. | No | perfcake
type | Elasticsearch type name. | No | results
tags | Comma separated list of tags to be added to results. This is useful to differentiate results from multiple test runs, for example. | No | -
userName | Elasticsearch user name. Authentication is only used when the userName property is specified. | No | -
password | Elasticsearch password. | No | -
timeout | Elasticsearch client timeout in milliseconds. When getting too many missed records in the log, try increasing this value. | No | 3000
configureMapping | True when the mapping should be configured prior to writing any data. This needs to be done only once for each index and type. | No | true
keyStore | Enables an SSL connection to the server. Sets the location of the key store created with the Java keytool. The default location is specified by the system property perfcake.keystores.dir which defaults to resources/keystores. | No | -
keyStorePassword | Password to the key store. | No | -
trustStore | See the keyStore property. The only difference is that this is for the trust store. | No | -
trustStorePassword | Password to the trust store. | No | -

Table 4.48. ElasticsearchDestination properties


Example 4.63. An example of an ElasticsearchDestination configuration

  1    <destination class="ElasticsearchDestination">
  2       <period type="iteration" value="500"/>
  3       <property name="serverUri" value="http://localhost:9292" />
  4       <property name="index" value="perfcake" />
  5       <property name="tags" value="tag1,tag2" />
  6       <property name="timeout" value="5000" />
  7    </destination>

InfluxDbDestination

This destination stores results in an InfluxDb database. The reported data carry information about the test progress (time in milliseconds since start, percentage and iteration), the real time of each result, and the complete results map. Quantities are stored without their unit so that they can be parsed as numbers. The destination supports an SSL connection and the database is created on connection by default.

Property name | Description | Required | Default value
serverUri | InfluxDb server including protocol and port number. Supports SSL. Port is typically 8086. | Yes | -
database | InfluxDb database. | No | perfcake
measurement | InfluxDb measurement (serves as a database table). | No | results
tags | Comma separated list of tags to be added to results. This is useful to differentiate results from multiple test runs, for example. | No | -
userName | InfluxDb user name. There always needs to be a user; InfluxDb does not support an empty field. However, you can configure your server to let anybody in. | Yes | admin
password | InfluxDb password. There always needs to be a password; InfluxDb does not support an empty field. However, you can configure your server to let anybody in. | Yes | admin
createDatabase | True when the database should be created on connection. If the database already exists, nothing happens (all data and tables remain there). | No | true
keyStore | Enables an SSL connection to the server. Sets the location of the key store created with the Java keytool. The default location is specified by the system property perfcake.keystores.dir which defaults to resources/keystores. | No | -
keyStorePassword | Password to the key store. | No | -
trustStore | See the keyStore property. The only difference is that this is for the trust store. | No | -
trustStorePassword | Password to the trust store. | No | -

Table 4.49. InfluxDbDestination properties


Example 4.64. An example of an InfluxDbDestination configuration

  1    <destination class="InfluxDbDestination">
  2       <period type="iteration" value="500"/>
  3       <property name="serverUri" value="http://localhost:8086" />
  4       <property name="database" value="perfcake" />
  5       <property name="tags" value="tag1,tag2" />
  6       <property name="userName" value="admin" />
  7       <property name="password" value="abc123" />
  8    </destination>

Log4jDestination

The destination appends the measurements to Log4j under the category org.perfcake.reporting.destination.Log4jDestination. The appropriate configuration to customize its output should be done in your Log4j setup; for instance, you can configure a separate appender just for this category (see the sketch at the end of this section). The logging level can be set through the level property.

The following table describes the Log4jDestination's properties

Property name | Description | Required | Default value
level | The logging level for the destination. | No | INFO

Table 4.50. Log4jDestination properties


Example 4.65. An example of Log4jDestination configuration

  1    <destination class="Log4jDestination">
  2       <period type="time" value="1000"/>
  3       <property name="level" value="INFO"/>
  4    </destination>