Performance Testing With Maddox
Developers have been using unit tests for years now. One of the reasons unit tests are so powerful is that they provide a quick feedback loop to let us know if we have broken our code. Maddox introduces the same concept for performance testing.
There are many ways to run performance tests on your code, but the style that Maddox introduces is a little different. Maddox allows you to see incremental performance gains and losses with any code change, while keeping the same quick feedback loop that is so advantageous in unit testing.
Marking A Test For Performance Execution
Using Maddox performance testing is simple. All you need to do is mark each Scenario that you would like to use as a performance test by adding the '.perf()' function call to that Scenario. All Scenarios that use the '.perf()' function will be executed during the performance test execution; all Scenarios that do NOT use the '.perf()' function will NOT be executed.
An example of a Scenario that is marked as a performance test can be found below.
describe("When using Maddox Performance Testing, it", () => {
  let context; // Assumed to hold the entry point, input params, and expected response for this test.

  it("should run performance testing if the perf function is called.", function (done) {
    new Scenario(this)
      .mockThisFunction("DbProxy", "save", DbProxy)
      .withEntryPoint(context.entryPointObject, context.entryPointFunction)
      .withInputParams([context.inputParams])
      .shouldBeCalledWith("DbProxy", "save", [param1, Maddox.constants.IgnoreParam, param3])
      .doesReturnWithPromise("DbProxy", "save", Maddox.constants.EmptyResult)
      // The perf function marks this test as a perf test. When we execute Maddox
      // performance testing, this test will be included.
      .perf()
      .test(function (err, response) {
        Maddox.compare.equal(err, undefined);
        Maddox.compare.equal(response, context.expectedResponse);
        done();
      })
      .catch(function (err) {
        done(err);
      });
  });
});
Executing Performance Tests
Let's say that you have your unit specs in the './specs/unit' directory. When executing your unit tests, you will be running a Mocha command similar to the one below.
mocha ./specs/unit --recursive
When you want to run your performance tests, all you need to do is run the following.
maddox ./specs/unit
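If you want to tune a run, the maddox command accepts the options shown in the help output below. For example (hypothetical flag values, sketching a shortened run):

```shell
# Hypothetical example: take 10 samples of 500 ms each, and print
# this run's requests-per-second numbers.
maddox ./specs/unit -s 10 -l 500 -p
```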
You can see all of the available options with the help command.
$ maddox --help
  -h, --help            Show this help message and exit.
  -v, --version         Show program's version number and exit.
  -t TIMEOUT, --TIMEOUT TIMEOUT
                        How long a test has (ms) to finish before timing out.
                        (default: 30000)
  -u UI, --UI UI        Specify user-interface (bdd tdd qunit exports).
                        (default: bdd)
  -p , --PRINT          Print all num requests per second. (default: null)
  -P , --PRINT_ALL      Print all saved statistics. (default: null)
  -m MAX_RESULTS, --MAX_RESULTS MAX_RESULTS
                        Only keep this many historical results. Will delete
                        results if the number is less than the current count.
                        (default: 10)
  -n , --DO_NOT_SAVE_RESULTS
                        Do NOT save results of this run. (default: null)
  -d [TEST_DIR [TEST_DIR ...]], --TEST_DIR [TEST_DIR [TEST_DIR ...]]
                        Save results of this run. (default: null)
  -r , --REMOVE_EXISTING
                        Remove all existing results. (default: null)
  -s NUM_SAMPLES, --NUM_SAMPLES NUM_SAMPLES
                        How many times to run the perf test. (default: 20)
  -l SAMPLE_LENGTH, --SAMPLE_LENGTH SAMPLE_LENGTH
                        The length (in millis) to run each individual perf
                        test. (default: 1000)
  -c NUM_CONCURRENT, --NUM_CONCURRENT NUM_CONCURRENT
                        How many samples to run concurrently. (default: 20)
  -o , --ONLY_95        Remove all existing results that are not within the
                        95th percentile. WARNING: Don't use with small sample
                        sizes or if you have changed your code. (default:
                        null)
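To make the NUM_SAMPLES and SAMPLE_LENGTH options concrete, here is a minimal conceptual sketch (not Maddox's actual implementation) of sampled throughput measurement: run the code under test repeatedly for SAMPLE_LENGTH milliseconds, count the iterations, and repeat that NUM_SAMPLES times. The function names and values here are illustrative only.

```javascript
// Conceptual sketch of sampled throughput measurement. For each sample,
// call the function under test in a tight loop until sampleLengthMs has
// elapsed, then normalize the iteration count to requests per second.
function measureThroughput(fnUnderTest, numSamples, sampleLengthMs) {
  const samples = [];
  for (let i = 0; i < numSamples; i += 1) {
    const end = Date.now() + sampleLengthMs;
    let count = 0;
    while (Date.now() < end) {
      fnUnderTest();
      count += 1;
    }
    // Normalize this sample's count to requests per second.
    samples.push(count * (1000 / sampleLengthMs));
  }
  return samples;
}

// Example: measure a trivial synchronous function, using small values
// so the demo finishes quickly.
const perSecond = measureThroughput(() => JSON.stringify({ a: 1 }), 3, 50);
console.log(perSecond.length); // 3
```

Maddox additionally runs samples concurrently (NUM_CONCURRENT) and keeps historical results (MAX_RESULTS) so you can compare runs over time.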