# TestArmada Manifest

Magellan sends information about system status and test runs using events. That's great, but events are not the most convenient thing to report results against. Manifest is a helper class for Magellan that tracks suite runs and builds an easy-to-use, DOM-style structure with tests, environments, individual test runs, and so on.
## Creating the SuiteRunResult object

All the information about a Magellan suite run is stored in a `SuiteRunResult` object. To create one:

```js
const SuiteRunResult = require("testarmada-manifest");
const srr = new SuiteRunResult();
```
## Connecting to Magellan

There are two entry-point methods you need to connect up to Magellan. The first handles global messages:
```js
// `Q` is the promise library used here; `srr` is the SuiteRunResult created above.
const Q = require("q");

Reporter.prototype.initialize = function (magellanGlobals) {
  const analytics = magellanGlobals.analytics;
  const deferred = Q.defer();
  deferred.resolve();

  // Forward any global messages analytics has already collected...
  analytics.sync().forEach((message) => srr.globalMessage(message));

  // ...and forward new global messages as they arrive.
  analytics.getEmitter().addListener("message", (message) => {
    srr.globalMessage(message);
  });

  return deferred.promise;
};
```
This example is from a Magellan reporter.
The second hooks up to test messages:

```js
Reporter.prototype.listenTo = function (testRun, test, source) {
  if (test && testRun) {
    // Messages tied to a specific test run.
    source.addListener("message", (message) => {
      srr.testRunMessage(testRun, test, message);
    });
  } else {
    // Messages with no associated test run are recorded as global messages.
    source.addListener("message", (message) => {
      srr.globalMessage(message);
    });
  }
};
```
## Events from SuiteRunResult

There are some important high-level events you should pay attention to:

- `start` and `end` are sent when the suite run starts and ends. They carry no arguments.
- `testRunStart` and `testRunEnd` are sent when a test run starts and ends. They carry one argument, the testRun object.
- `newTest` is sent when a new test is encountered that was not previously seen. It carries one argument, the test object.

Other events are emitted for the status of workers, for when Magellan is idle and busy, and for events from Magellan that the SuiteRunResult object doesn't understand. That last category is mostly useful for system diagnostics.
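For example, a reporter might subscribe to these events like this (a minimal sketch, assuming `SuiteRunResult` exposes the standard Node `EventEmitter` subscription methods; the handler bodies are placeholders):

```js
// Sketch only: assumes srr is the SuiteRunResult instance created earlier
// and that it exposes the standard EventEmitter interface.
srr.addListener("start", () => console.log("suite run started"));
srr.addListener("end", () => console.log("suite run finished"));

srr.addListener("testRunStart", (testRun) => console.log("test run started", testRun));
srr.addListener("testRunEnd", (testRun) => console.log("test run ended", testRun));

srr.addListener("newTest", (test) => console.log("new test seen:", test.name));
```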
## Structure of the SuiteRunResult

The SuiteRunResult is a hierarchy:

```
SuiteRunResult -> Tests -> Environments -> TestRuns
```

The suite run contains a set of tests; each test is run against different environments, and each environment can be run several times, with a test run recorded for each attempt (see the sketch below).
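A sketch of walking that hierarchy, using the property names documented in the API sections below (`tests`, `environments`, and `attempts`):

```js
// Walk SuiteRunResult -> Tests -> Environments -> TestRuns.
srr.tests.forEach((test) => {
  Object.keys(test.environments).forEach((envId) => {
    const environment = test.environments[envId];
    environment.attempts.forEach((testRun) => {
      console.log(test.name, envId, testRun.status);
    });
  });
});
```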
## SuiteRunResult API
Once you have the events hooked up you'll want to inspect the result of the suite run.
| Property | Description |
|----------|-------------|
| `tests` | The array of test objects |
| `passed` | True if all the tests passed |
| `failed` | True if a single test failed |
| `retried` | The total number of retries |
| `testRuns` | An array of all the test runs |
| `timePassing` | The total time spent on passing tests |
| `timeFailing` | The total time spent on failing tests |
| `timeRetrying` | The total time spent retrying tests |
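For example, once the suite run has ended you might summarize it like this (a sketch using the properties above; the `end` subscription again assumes the `EventEmitter` interface):

```js
srr.addListener("end", () => {
  console.log("All tests passed:", srr.passed);
  console.log("Total test runs:", srr.testRuns.length);
  console.log("Total retries:", srr.retried);
  console.log("Time spent on passing tests:", srr.timePassing);
  console.log("Time spent retrying:", srr.timeRetrying);
});
```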
There are also high level report properties that you can inspect.
| Property | Description |
|----------|-------------|
| `testsByNameReport` | High level metrics by test name |
| `testsByEnvironment` | An object that contains arrays of test runs organized by environment |
| `testsByEnvironmentAndTest` | An object that returns all of the test runs organized by environment first, then by test name |
| `testsByEnvironmentReport` | High level metrics organized by environment and test name |
| `reports` | Returns a `Report` object for this suite run (see below) |
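A sketch of using `testsByEnvironment`, which (per the description above) maps each environment to an array of test runs:

```js
const byEnv = srr.testsByEnvironment;
Object.keys(byEnv).forEach((envId) => {
  console.log(envId + ":", byEnv[envId].length, "test runs");
});
```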
Here are some deeper properties you may or may not need.
| Property | Description |
|----------|-------------|
| `satisfiedWorkers` | An array of all the satisfied workers |
| `satisfiedWorkersTime` | Total time spent on satisfied workers |
| `unsatisfiedWorkers` | An array of all the unsatisfied workers |
| `unsatisfiedWorkersTime` | Total time spent on unsatisfied workers |
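For example, to compare how much time went to satisfied versus unsatisfied workers (a sketch using the properties above):

```js
console.log("Satisfied workers:", srr.satisfiedWorkers.length,
  "time:", srr.satisfiedWorkersTime);
console.log("Unsatisfied workers:", srr.unsatisfiedWorkers.length,
  "time:", srr.unsatisfiedWorkersTime);
```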
There are methods associated with tagging tests:
| Method | Description |
|--------|-------------|
| `setTags(test, tags)` | Sets the array of tags for the test with the given name |
| `testsByTag(tag)` | Returns the array of tests for a given tag |
| `passedByTag(tag)` | The total number of tests with that tag that passed |
| `failedByTag(tag)` | The total number of tests with that tag that failed |
| `timeElapsedByTag(tag)` | The total amount of time spent on tests with that tag |
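A sketch of tagging and then querying by tag; the test name `"login"` and the tag `"smoke"` are hypothetical:

```js
// Tag a test by name, then query aggregate results for that tag.
srr.setTags("login", ["smoke"]);

console.log("Smoke tests:", srr.testsByTag("smoke").length);
console.log("Passed:", srr.passedByTag("smoke"));
console.log("Failed:", srr.failedByTag("smoke"));
console.log("Time elapsed:", srr.timeElapsedByTag("smoke"));
```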
## Test API
Each test has these properties:
| Property | Description |
|----------|-------------|
| `name` | The name of the test |
| `environments` | An object with a key for each environment, pointing to an `Environment` object |
| `testRuns` | An array of all of the test runs across all the environments |
| `passed` | True if the test passed across all the environments |
| `retried` | The total number of retries across the environments |
| `timeRetrying` | The time spent retrying |
| `timeElapsed` | The total time elapsed across all the environments |
| `timeForPassed` | The total time for all the passed tests, counting only the passing test run |
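For example, a per-test summary might look like this (a sketch using the properties above):

```js
srr.tests.forEach((test) => {
  console.log(test.name,
    test.passed ? "passed" : "failed",
    "retries:", test.retried,
    "time elapsed:", test.timeElapsed);
});
```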
## Environment API
Each environment object has these properties:
| Property | Description |
|----------|-------------|
| `id` | The environment ID |
| `test` | The associated test object |
| `attempts` | The test runs |
| `retries` | The retry test runs |
| `passingAttempt` | The passing attempt, if there was one |
| `passed` | True if this test passed on this environment |
| `timeRetrying` | The time spent retrying |
| `timeElapsed` | The total time elapsed across all the test runs |
| `timeForPassed` | The total time for all the passed tests, counting only the passing test run |
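A sketch that uses these properties to find environments where a test only passed after retrying:

```js
srr.tests.forEach((test) => {
  Object.keys(test.environments).forEach((envId) => {
    const env = test.environments[envId];
    if (env.passed && env.retries.length > 0) {
      console.log(test.name, "on", env.id, "passed after",
        env.retries.length, "retries,", env.timeRetrying, "spent retrying");
    }
  });
});
```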
## TestRun API
| Property | Description |
|----------|-------------|
| `test` | The associated test object |
| `environment` | The environment object |
| `status` | The status |
| `passed` | True if this test run passed |
| `metadata` | Metadata for the test run |
| `attemptNumber` | The attempt number |
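For example, listing every individual test run (a sketch using the properties above):

```js
srr.testRuns.forEach((run) => {
  console.log(run.test.name, "on", run.environment.id,
    "attempt", run.attemptNumber, "status:", run.status);
});
```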
## Reports API
| Property | Description |
|----------|-------------|
| `timeElapsed.top` | The top 10 tests sorted by least time elapsed |
| `timeElapsed.bottom` | The bottom 10 tests sorted by least time elapsed |
| `timeElapsed.complete` | All the tests sorted by least time elapsed |
| `timeElapsed.average` | The average time elapsed |
| `timeElapsed.stddev` | The standard deviation of the time elapsed |
| `timeForPassed.top` | The top 10 tests sorted by least time for the passing test run |
| `timeForPassed.bottom` | The bottom 10 tests sorted by least time for the passing test run |
| `timeForPassed.complete` | All the tests sorted by least time for the passing test run |
| `timeForPassed.average` | The average time for the passing test run |
| `timeForPassed.stddev` | The standard deviation of the time for the passing test run |
| `byEnvironment[env].min` | The minimum time for the passing test run on the given environment |
| `byEnvironment[env].max` | The maximum time for the passing test run on the given environment |
| `byEnvironment[env].average` | The average time for the passing test run on the given environment |
| `byEnvironment[env].stddev` | The standard deviation of the time for the passing test run on the given environment |
| `byEnvironment[env].avgRetries` | The average number of retries for the passing test run on the given environment |
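For example, a sketch of pulling timing statistics out of the report object; the `"chrome"` environment key here is hypothetical:

```js
const report = srr.reports;

console.log("Average time elapsed:", report.timeElapsed.average);
console.log("Std dev:", report.timeElapsed.stddev);
console.log("Fastest tests:", report.timeElapsed.top);

// Per-environment stats for the passing runs.
const chrome = report.byEnvironment["chrome"];
if (chrome) {
  console.log("chrome min/max/avg:", chrome.min, chrome.max, chrome.average);
}
```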
## License

All code not otherwise specified is Copyright Wal-Mart Stores, Inc. Released under the MIT License.