The benchmark class, also referred to as a "harness hook," is the program or script that defines the process of executing the benchmark. Every benchmark deployment requires a benchmark class. However, if your benchmark is written using the Faban harness, the class com.sun.faban.harness.DefaultFabanBenchmark2 serves as the default harness hook and a custom benchmark class is optional. DefaultFabanBenchmark2 defines how to run a benchmark written using the Faban driver framework. A benchmark class is marked up with annotations, which are defined in the package com.sun.faban.harness. Since annotations are not inherited by subclasses, you must explicitly annotate the relevant methods if you extend DefaultFabanBenchmark2.
Your own benchmark class is only required if you are using a non-Faban benchmark or if you want to customize and control the behavior of a Faban benchmark beyond the default. The sample web101 benchmark includes a hook in src/samples/web1/harness/WebBenchmark.java. The hook is a Java class that extends com.sun.faban.harness.DefaultFabanBenchmark2 and adds annotated methods to configure and post-process the benchmark run. You can customize, configure, or subclass the other annotated methods to include any additional tasks that need to be done before or after these operations. For example, if the benchmark requires that some script be executed (say, to clean out some files) before the start of a run, the start() method can be modified to execute the script before calling super.start(). This can also be done in the method annotated with @Configure (see the next section).
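A sketch of such a hook follows. The class name and script path are hypothetical; the example assumes DefaultFabanBenchmark2 provides the start() implementation being extended, and the annotation is repeated because annotations are not inherited:

```java
package sample.harness;

import com.sun.faban.common.Command;
import com.sun.faban.harness.DefaultFabanBenchmark2;
import com.sun.faban.harness.RunContext;
import com.sun.faban.harness.StartRun;

public class WebBenchmark extends DefaultFabanBenchmark2 {

    // Re-annotated because annotations are not inherited from the superclass.
    @StartRun
    public void start() throws Exception {
        // Hypothetical cleanup script, executed before the driver starts.
        RunContext.exec(new Command("/export/bench/bin/cleanup.sh"));
        super.start();   // let the default class start the Faban driver
    }
}
```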
The benchmark class can be created in the package of your choice and must be placed into the Java source tree of your project. The provided build script will pick up all classes in the source tree and compile and package them properly for deploying into the Faban harness.
The benchmark class is implemented as a plain old Java object (POJO) with annotated methods and referenced in the benchmark deployment descriptor benchmark.xml, explained later under Benchmark Deployment. The benchmark annotations used for the benchmark class are defined in com.sun.faban.harness. The following are the annotations used:
The method annotated with @Validate is called to validate the benchmark configuration file. It is the first method called after initialization of the benchmark class when the benchmark starts. No other facility or remote Faban agents are started before or while the validate method is in progress. The validate method is used to validate the benchmark configuration to ensure that it is not only syntactically valid (done by the XML parser) but also semantically valid. It is also the only place you may edit, change, or complete the configuration file.
For integrations of existing benchmarks or workloads, it is common practice to use XSL stylesheets to translate the configuration file into the target benchmark's configuration file. Thus it makes sense to perform this translation in the validate method, which is also a way to ensure the configuration file is correct. Such code can be found in the SPECWeb2005 integration example provided in the samples directory.
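A minimal @Validate method might look like the following sketch. The XPath and parameter name are hypothetical, and the ParamRepository class (described further below) is assumed:

```java
import com.sun.faban.harness.ParamRepository;
import com.sun.faban.harness.RunContext;
import com.sun.faban.harness.Validate;

public class MyBenchmark {

    @Validate
    public void validate() throws Exception {
        ParamRepository params = RunContext.getParamRepository();
        // Hypothetical parameter: ensure a positive scale was configured.
        String scale = params.getParameter("fa:runConfig/fa:scale");
        if (scale == null || Integer.parseInt(scale.trim()) < 1)
            throw new Exception("Invalid scale: " + scale);
    }
}
```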
The method annotated with @Configure is called after the Faban infrastructure, including remote agents, is set up. Having these in place allows you to make remote calls to configure or set up the benchmarking environment. This method is, however, called before the services, if any, are started. The method implementation will commonly contain logic to prepare and reconfigure remote servers or other supporting processes to run the benchmark. For benchmarks written using the Faban driver framework, you may want to subclass DefaultFabanBenchmark2 and provide a method with this annotation to prepare your rig.
The method annotated with @PreRun is called just before starting the run. The method implementation will commonly contain calls to prepare and reload data required to run the benchmark. These tasks need all the services to be up and available; off-line data reloads may be better done in the configure method. For benchmarks written using the Faban driver framework, you may want to subclass DefaultFabanBenchmark2 and provide a method with this annotation.
The method annotated with @StartRun actually starts the benchmark driver. You will need to ensure that all the driver processes on all driver systems get started and, if feasible, enter the ramp-up state before returning from this method, as the tools timer is started immediately after this method terminates.
Generally, benchmarks written using the Faban driver framework will not need to override this method. However, in rare circumstances, such as when starting processes not controlled by the driver framework (e.g., emulators), you will want to override it. Just make sure you call super.start().
The method annotated with @EndRun is called sometime after the run terminates. The implementation of the end method must wait for the driver processes to terminate before proceeding.
The method annotated with @PostRun is called after the end method. It may do some post-processing of the benchmark and/or tools results. For benchmarks written using the Faban driver framework, you may want to subclass DefaultFabanBenchmark2 and provide a method with this annotation.
The method annotated with @KillRun is used by the Faban harness to signal premature termination of the benchmark. It provides the opportunity to clean up after the benchmark run. Most commonly, the kill method is left empty. Processes started by Faban's execution facilities are automatically terminated, except for daemons or other server processes over which Faban does not have control; such processes may require explicit termination calls.
The RunContext is the central point for getting information about the run. All methods in RunContext are static. We suggest a static import of all methods in this class:
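```java
import static com.sun.faban.harness.RunContext.*;
```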
Then you'll be able to call RunContext methods without referring to the RunContext class, so the calls read like plain function calls. You can use the RunContext both to obtain information about the benchmark run and to execute commands on local or remote agents.
The ParamRepository represents the XML configuration file and allows obtaining configuration parameters using an XPath pointing to the parameter. It also allows manipulating and modifying the configuration file (effective only in the validate method) and saving it back to the run using the setParameter and save methods. To obtain the ParamRepository, just call getParamRepository(), a method of RunContext. With the static import above, the benchmark class can call this method without referencing the RunContext class.
The ParamRepository provides several methods to access the XML structure within it. Most accesses require an XPath string to locate the parameter. It also provides methods that automatically parse lists of strings in the elements, or even parse host:port pairs in the hostport fields frequently used for addressing server instances. Please refer to the ParamRepository javadocs for more detail, and to the configuration file documentation for specific fields used by different Faban components, such as the host:port fields.
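ParamRepository lookups are plain XPath queries against the configuration document. Their semantics can be sketched with the JDK's own XPath API; the element names below are hypothetical, not a real Faban configuration:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class ParamLookupDemo {

    // Evaluate an XPath expression against an XML string, analogous to
    // what ParamRepository.getParameter(String) does for the config file.
    public static String lookup(String xml, String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        return XPathFactory.newInstance().newXPath().evaluate(xpath, doc);
    }

    public static void main(String[] args) throws Exception {
        String config = "<webServer><host>web1</host><port>80</port></webServer>";
        System.out.println(lookup(config, "/webServer/host"));  // prints web1
    }
}
```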
The RunContext provides two sets of methods for execution: 1) The exec methods can be used to execute any operating system or shell command. 2) The java methods are used to execute java commands, inheriting the classpath and other java command-line parameters from the Faban environment. Both sets of methods make use of the Faban command infrastructure to remotely start processes and execute commands. To construct a command, you will need to create a new Command object. The Command object constructor takes 3 types of parameters:
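A sketch of both styles follows; the host name, commands, and driver class are hypothetical, and the static import of RunContext shown earlier is assumed:

```java
import com.sun.faban.common.Command;
import com.sun.faban.common.CommandHandle;
import static com.sun.faban.harness.RunContext.*;

public class ExecExamples {

    public void runCommands() throws Exception {
        // Run an OS command locally on the master.
        CommandHandle local = exec(new Command("uptime"));

        // Run an OS command on a remote system in the rig (hypothetical host).
        CommandHandle remote = exec("db1", new Command("sync"));

        // Run a java command; classpath and JVM arguments are
        // inherited from the Faban environment.
        java(new Command("com.example.LoadGenerator", "-warmup"));
    }
}
```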
For any given command, the execution search path for scripts and binaries is as follows, in order:
Please also note that commands may be re-mapped by an OS-specific command map infrastructure. If a command is mapped, the result follows the search path above.
The command map allows mapping commands called by the Faban harness and the benchmark class to OS-specific commands. The command map is an XML file located at $FABAN_HOME/config/<OSName>/cmdmap.xml. The following is an example command map file we use for Solaris; it resides at $FABAN_HOME/config/SunOS/cmdmap.xml:
<!-- The command map file allows commands to run with a
     specific path and specific prefix and relieves users
     from specifying all the detail. These commands are
     used by Faban or user commands or tools. -->
<prefix sequence="2">priocntl -e -c RT</prefix>
<prefix sequence="2">priocntl -e -c RT</prefix>
In this example, you can see that commands issued by the Faban harness or the benchmark class can be mapped to specific paths, can be given multiple prefixes, or can be mapped to entirely different commands on different operating systems.
In many instances, the benchmark class needs to process files, copy files to different systems in the rig, or get some files from those systems. The RunContext class provides static utility methods for file processing. We address just a few here; please see the javadoc pages for a complete list.
pushFile(String file, String destHost, String destFile)
Copies a file over to a remote host.
getFile(String srcHost, String srcFile, String destFile)
Copies a file from a remote host.
isFile(String host, String file)
Checks existence of the remote file.
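A usage sketch, assuming the static import of RunContext shown earlier; the host and file names are hypothetical:

```java
import static com.sun.faban.harness.RunContext.*;

public class FileOps {

    public void stage() throws Exception {
        // Push a local file to a remote system in the rig.
        pushFile("/tmp/schema.sql", "db1", "/tmp/schema.sql");

        // Check that the remote file exists before using it.
        if (!isFile("db1", "/tmp/schema.sql"))
            throw new Exception("schema.sql missing on db1");

        // Fetch a server log back to the master for post-processing.
        getFile("db1", "/var/log/app.log", "/tmp/app.log");
    }
}
```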
For benchmarks built using the Faban driver framework, Faban provides a default benchmark class, DefaultFabanBenchmark2, that understands how to start the Faban driver framework implicitly. This class also contains more complex load-balancing mechanisms for balancing the driver agents among the driver and client systems. DefaultFabanBenchmark2 deprecates DefaultFabanBenchmark, which is interface-based and was used in earlier versions of Faban; the latter is maintained for backward compatibility only.
Many complex benchmarks using the Faban driver framework will want to extend DefaultFabanBenchmark2 to provide server control tasks such as starting and stopping the web server and database server, clearing temporary directories on the server, and even collecting server configuration information. To do so, just provide your own benchmark class extending DefaultFabanBenchmark2 and implementing or overriding methods. The most common method to implement is configure(). The postRun() method handles further processing after the benchmark run is stopped.
One general guideline for overriding the methods is to always call the superclass' method from inside the override, so as not to lose any functionality provided by DefaultFabanBenchmark2. In addition, note that, since annotations are not inherited, you need to explicitly annotate the overriding methods.
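Putting this guideline together, a sketch of such a subclass follows; the server-control details are omitted, and the annotations are repeated because they are not inherited:

```java
import com.sun.faban.harness.Configure;
import com.sun.faban.harness.DefaultFabanBenchmark2;
import com.sun.faban.harness.PostRun;

public class MyBenchmark extends DefaultFabanBenchmark2 {

    // Re-annotated: annotations are not inherited from the superclass.
    @Configure
    public void configure() throws Exception {
        super.configure();   // keep the default configuration behavior
        // ... start the web server and database server here ...
    }

    @PostRun
    public void postRun() throws Exception {
        super.postRun();     // keep the default post-processing
        // ... collect server configuration information here ...
    }
}
```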