The key to running any performance benchmark is measuring performance. To measure performance, timing data for each individual operation needs to be collected, organized, and reported. The Faban driver framework has a rich set of data collection and reporting facilities, and the good news is that most, if not all, of it happens automatically.
Collection and reporting of general metrics happens automatically. With automatic timing, the operation does not need to deal with data collection at all. With manual timing, the operation needs to call DriverContext.recordTime() and, optionally, DriverContext.pauseTime(); all data collection is then done automatically by the driver framework.
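As an illustration, here is a minimal sketch of a manually timed operation. The operation name, the max90th value, and the placement of the calls are assumptions for illustration; the exact start/stop and pause semantics of recordTime() and pauseTime() should be checked against the DriverContext javadoc.

import com.sun.faban.driver.BenchmarkOperation;
import com.sun.faban.driver.DriverContext;
import com.sun.faban.driver.Timing;

public class ManualTimingSketch {    // driver class annotations omitted

    DriverContext ctx = DriverContext.getContext();

    // Hypothetical operation; timing = Timing.MANUAL tells the framework
    // that this method reports its own timestamps.
    @BenchmarkOperation(name = "doBrowse", max90th = 2.0, timing = Timing.MANUAL)
    public void doBrowse() {
        ctx.recordTime();    // marks the start of the timed section
        // ... issue the request to the system under test here ...
        ctx.recordTime();    // marks the end of the timed section
        // ctx.pauseTime() can be used to exclude local processing that
        // should not count toward the operation's response time.
    }
}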
The Faban driver framework automatically collects the standard metrics, such as operation counts and response times, for every operation in the run.
After the run, the master automatically postprocesses the data and generates a summary report and a detail report. The summary report is in XML format and can easily be read and reformatted to fit specific reporting needs; Faban also provides a stylesheet for formatting the summary report in a web browser.
The detail report is in a text format compatible with FenXi, the tool we use for analysis and graph generation, and it is also quite readable on its own.
While most of the standard data collection and reporting is handled by Faban, it is not uncommon for a driver to collect additional data. The Faban driver framework provides a mechanism to collect such additional metrics by attaching a CustomMetrics implementation to the DriverContext. This is typically done in the driver's constructor. There are two method signatures for attaching custom metrics: one without a title and one that names the summary report section the results go into.
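A minimal constructor sketch of both attach forms follows; ShoppingMetrics is a hypothetical CustomMetrics implementation (sketched further below), and the titled form is inferred from the usage described in this section, so verify the exact signature against the DriverContext javadoc.

import com.sun.faban.driver.DriverContext;

public class WebDriver {    // driver class annotations omitted

    DriverContext ctx;
    ShoppingMetrics metrics;    // hypothetical CustomMetrics implementation

    public WebDriver() {
        ctx = DriverContext.getContext();
        metrics = new ShoppingMetrics();

        // Untitled form: results appear in the default custom metrics
        // section of the summary report.
        ctx.attachMetrics(metrics);

        // Titled form (assumed from the description above): results are
        // reported in a section identified by the given title.
        // ctx.attachMetrics("Shopping Stats", metrics);
    }
}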
Each CustomMetrics can represent all the data items a particular driver needs to keep track of. At reporting time, each line item is reported using a CustomMetrics.Element instance. The results are then reported in the section of the summary report identified by the given title.
Besides the standard reporting format for CustomMetrics, some workloads may need to report arbitrary tables in the summary report. This can be achieved by attaching a CustomTableMetrics implementation with DriverContext.attachMetrics(String, CustomTableMetrics). CustomTableMetrics is an open interface used for collecting statistics just like CustomMetrics, but it requires the result to be a TableModel. The TableModel is then used to report the result table.
While very flexible, CustomTableMetrics has one limitation: it cannot mark a run as passed or failed, because a TableModel has no way to express such a result explicitly. It is used purely for reporting and does not affect the pass/fail status of the run.
Implementing CustomMetrics is rather straightforward. In Java terms, it is done by creating a class that implements the CustomMetrics interface. The class can have any name, live in any package, and use any kind of constructor. The driver needs to instantiate the CustomMetrics at construction time.
A typical CustomMetrics implementation keeps the collected data in its instance variables, and the driver updates this data during the run. To decide whether or not to record data, the driver can check the current time by calling DriverContext.getTime() and check whether it is in steady state by calling DriverContext.isTxSteadyState().
The form in which the data is kept is not prescribed and can range from individual instance variables to arrays, collections, and so on. Since a CustomMetrics instance is attached to each driver instance, i.e. each thread simulating a user, the instances need to be aggregated for the run. The implementor must therefore implement the add() and clone() methods used for aggregation, which requires knowing the characteristics of the data: counts and times are generally added directly, minimum and maximum values are compared and the appropriate one selected, while averages and variances need correspondingly more elaborate treatment.
Once all CustomMetrics instances are aggregated, the driver framework calls getResults() on the aggregated instance to obtain the final results. This method returns an array of Elements, each describing one metric to report. At a minimum, the description field must not be null; that is sufficient for metrics that have no effect on the pass or fail status of the run. If a metric does affect the pass or fail status, the passed field needs to be set as well. The target and allowedDeviation fields are optional and are useful for reporting targets and allowed deviations. All fields except passed are of type java.lang.String to allow flexibility in data type and formatting; the data appears in the summary report exactly as it is formatted in the string.
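Putting these pieces together, the hypothetical ShoppingMetrics used above might look like the following sketch. The use of a no-arg Element constructor and the result field follow the pattern seen in Faban sample code but are assumptions to verify against the CustomMetrics javadoc.

import java.io.Serializable;

import com.sun.faban.driver.CustomMetrics;

public class ShoppingMetrics implements CustomMetrics, Serializable {

    int purchaseCount = 0;    // purchases observed during steady state
    int cancelCount = 0;      // cancellations observed during steady state

    // Called by the driver's operations; the driver should only call
    // these while DriverContext.isTxSteadyState() is true.
    public void recordPurchase() { ++purchaseCount; }
    public void recordCancel()   { ++cancelCount; }

    // Aggregates another thread's instance into this one.
    public void add(CustomMetrics other) {
        ShoppingMetrics o = (ShoppingMetrics) other;
        purchaseCount += o.purchaseCount;
        cancelCount += o.cancelCount;
    }

    public Object clone() {
        ShoppingMetrics copy = new ShoppingMetrics();
        copy.purchaseCount = purchaseCount;
        copy.cancelCount = cancelCount;
        return copy;
    }

    // Called once on the fully aggregated instance at reporting time.
    public Element[] getResults() {
        Element cancels = new Element();
        cancels.description = "Cancelled purchase orders";
        cancels.result = String.valueOf(cancelCount);    // field name assumed
        return new Element[] { cancels };
    }
}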
A CustomTableMetrics implementation is very similar: we implement the CustomTableMetrics interface, collect whatever data is needed, and keep that data in the instance. It also gets aggregated in the same way as CustomMetrics. The only difference is that at reporting time it has to produce a TableModel instead of an array of CustomMetrics.Element objects. This table is placed in the summary report under the title given when attaching the CustomTableMetrics, as in the sketch below.
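A rough sketch under stated assumptions: the add() parameter type and the TableModel construction calls (a column-count constructor, setHeader(), and newRow()) follow patterns seen elsewhere in Faban code but should be verified against the CustomTableMetrics and com.sun.faban.common.TableModel javadocs; OrderTableMetrics is a hypothetical name.

import java.io.Serializable;

import com.sun.faban.common.TableModel;
import com.sun.faban.driver.CustomTableMetrics;

public class OrderTableMetrics implements CustomTableMetrics, Serializable {

    int purchaseCount = 0;
    int cancelCount = 0;

    // Aggregation, assumed to mirror CustomMetrics.add().
    public void add(CustomTableMetrics other) {
        OrderTableMetrics o = (OrderTableMetrics) other;
        purchaseCount += o.purchaseCount;
        cancelCount += o.cancelCount;
    }

    public Object clone() {
        OrderTableMetrics copy = new OrderTableMetrics();
        copy.purchaseCount = purchaseCount;
        copy.cancelCount = cancelCount;
        return copy;
    }

    // Produces the table reported under the title given at attach time.
    public TableModel getResults() {
        TableModel table = new TableModel(2);    // two columns, API assumed
        table.setHeader(0, "Order Type");
        table.setHeader(1, "Count");

        Comparable[] row = table.newRow();
        row[0] = "Purchases";
        row[1] = purchaseCount;

        row = table.newRow();
        row[0] = "Cancellations";
        row[1] = cancelCount;

        return table;
    }
}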
com.sun.faban.driver.util.ContentSizeStats is a convenient CustomMetrics implementation provided for collecting and averaging content sizes. It is one of the simplest cases of collecting custom metrics and is provided as part of the fhb facility and some Faban sample code. If you just want to know the average content size, ContentSizeStats is right for you. To use it, do the following:
// Driver fields and constructor: attach ContentSizeStats, sized by
// the number of operations in this driver.
ContentSizeStats contentStats = null;
ctx = DriverContext.getContext();
contentStats = new ContentSizeStats(ctx.getOperationCount());
ctx.attachMetrics(contentStats);

// In each operation, after the response arrives, record the content
// size only during steady state.
if (ctx.isTxSteadyState())
    contentStats.sumContentSize[ctx.getOperationId()] += contentSize;
You'll find the content sizes for each operation reported in the miscellaneous statistics section of the summary report.
In many cases, the custom metrics need to be calculated from the run results. For instance, the percentage of cancelled purchase orders (a custom metric) is calculated from the number of cancelled purchase orders (collected by the CustomMetrics) and the total number of purchase orders (a run result). The getResults() implementation therefore needs a way to know the number of purchase order transactions during steady state.
The Result class is provided for this purpose. It gives the CustomMetrics.getResults() implementation access to the run results and metrics. It is only available to the getResults() method at reporting time, since getResults() is called after the results have been established. The getResults() method may access the Result object by calling Result.getInstance() inside the method implementation.
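As a hedged sketch of this pattern, the getResults() method of the hypothetical ShoppingMetrics above could compute a cancellation percentage as follows. Only Result.getInstance() is taken from the text; the package of Result, the getOpsCountSteady(String) accessor, and the result field on Element are assumptions to check against the javadoc.

import com.sun.faban.driver.Result;    // package assumed

// Inside ShoppingMetrics (from the earlier sketch):
public Element[] getResults() {
    Result runResult = Result.getInstance();    // only valid at reporting time

    // Hypothetical accessor for the steady-state count of the
    // "Purchase" operation; take the real method name from the javadoc.
    int purchaseOps = runResult.getOpsCountSteady("Purchase");

    Element cancelRate = new Element();
    cancelRate.description = "Cancelled purchase orders (%)";
    if (purchaseOps > 0)
        cancelRate.result = String.format("%.2f", 100.0 * cancelCount / purchaseOps);
    else
        cancelRate.result = "n/a";    // no purchases in steady state
    return new Element[] { cancelRate };
}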