@reportportal/testcafe-reporter-agent-js-testcafe
Agent to integrate TestCafe with ReportPortal.
- More about TestCafe
- More about ReportPortal
Installation
Install the agent in your project:
npm install --save-dev testcafe-reporter-agent-js-testcafe@npm:@reportportal/testcafe-reporter-agent-js-testcafe
Note: This package is namespaced, so the command above installs it under the alias testcafe-reporter-agent-js-testcafe, which lets TestCafe detect the reporter. (Related issue in the TestCafe repository.)
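After installation, the alias shows up in package.json roughly like the sketch below (the version range is an assumption based on the current release):
"devDependencies": {
  "testcafe-reporter-agent-js-testcafe": "npm:@reportportal/testcafe-reporter-agent-js-testcafe@^5.1.0"
}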
Configuration
1. Create an rp.json file with the ReportPortal configuration:
{
  "apiKey": "<API_KEY>",
  "endpoint": "https://your.reportportal.server/api/v1",
  "project": "YourReportPortalProjectName",
  "launch": "YourLauncherName",
  "attributes": [
    {
      "key": "YourKey",
      "value": "YourValue"
    },
    {
      "value": "YourValue"
    }
  ],
  "description": "Your launch description"
}
| Option | Necessity | Default | Description |
|-----------------------|------------|-----------|-------------|
| apiKey | Required | | User's ReportPortal token from which you want to send requests. It can be found on the profile page of this user. |
| endpoint | Required | | URL of your server. For example, 'https://server:8080/api/v1'. |
| launch | Required | | Name of the launch at creation. |
| project | Required | | The name of the project in which the launches will be created. |
| attributes | Optional | [] | Launch attributes. |
| description | Optional | '' | Launch description. |
| rerun | Optional | false | Enable rerun. |
| rerunOf | Optional | Not set | UUID of the launch you want to rerun. If not specified, ReportPortal will update the latest launch with the same name. |
| mode | Optional | 'DEFAULT' | Launch mode. 'DEFAULT' - results will be submitted to the Launches page, 'DEBUG' - results will be submitted to the Debug page (values must be upper case). |
| debug | Optional | false | This flag allows seeing the logs of the client-javascript. Useful for debugging. |
| restClientConfig | Optional | Not set | axios-like HTTP client config. May contain an agent property for configuring the http(s) client, and other client options such as proxy and timeout. For debugging and displaying logs the debug: true option can be used. Visit client-javascript for more details. |
| headers | Optional | {} | The object with custom headers for the internal HTTP client. |
| launchUuidPrint | Optional | false | Whether to print the current launch UUID. |
| launchUuidPrintOutput | Optional | 'STDOUT' | Launch UUID printing output. Possible values: 'STDOUT', 'STDERR', 'FILE', 'ENVIRONMENT'. Works only if launchUuidPrint is set to true. File format: rp-launch-uuid-${launch_uuid}.tmp. Env variable: RP_LAUNCH_UUID (note that the env variable is only available in the reporter process; it cannot be obtained from tests). |
| skippedIssue | Optional | true | ReportPortal provides a feature to mark skipped tests as not 'To Investigate'. The option accepts boolean values: true - skipped tests are considered issues and will be marked as 'To Investigate' on ReportPortal; false - skipped tests will not be marked as 'To Investigate'. |
| token | Deprecated | Not set | Use apiKey instead. |
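For example, a minimal rp.json sketch that enables a few of the optional settings above (the specific values, such as the 30-second timeout and the 'FILE' output, are illustrative assumptions rather than recommendations):
{
  "apiKey": "<API_KEY>",
  "endpoint": "https://your.reportportal.server/api/v1",
  "project": "YourReportPortalProjectName",
  "launch": "YourLauncherName",
  "mode": "DEBUG",
  "restClientConfig": {
    "timeout": 30000
  },
  "launchUuidPrint": true,
  "launchUuidPrintOutput": "FILE"
}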
2.1 Create a .testcaferc.json TestCafe configuration file and add agent-js-testcafe to the reporter property:
{
  "browsers": "chrome",
  "src": "./tests/**/*.js",
  "screenshots": {
    "path": "./screenshots/"
  },
  "reporter": [
    {
      "name": "list"
    },
    {
      "name": "agent-js-testcafe"
    }
  ],
  "takeScreenshotsOnFails": true
}
Run tests via the testcafe command.
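If you prefer running tests through an npm script, an entry like the following (the script name is just an example) picks up the same configuration file:
"scripts": {
  "test": "testcafe"
}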
2.2 Alternatively, if you are using the TestCafe API, you can create a testcafe.js file and use the reporter with the provided config manually:
const createTestCafe = require('testcafe');
const { createReporter } = require('testcafe-reporter-agent-js-testcafe/build/createReporter');
const rpConfig = require('./rp.json');

async function start() {
  const testcafe = await createTestCafe('localhost');
  const runner = testcafe.createRunner();
  await runner.reporter(createReporter(rpConfig)).run(); // or just set 'agent-js-testcafe'
  await testcafe.close();
}

start();
Run tests via node testcafe.js.
Note: TestCafe options from .testcaferc.json can be overwritten programmatically in testcafe.js.
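For example, a sketch of overriding the test sources and browsers from testcafe.js (the glob and the headless browser choice below are illustrative, not required):
const createTestCafe = require('testcafe');
const { createReporter } = require('testcafe-reporter-agent-js-testcafe/build/createReporter');
const rpConfig = require('./rp.json');

async function start() {
  const testcafe = await createTestCafe('localhost');
  const runner = testcafe.createRunner();

  await runner
    .src(['./tests/smoke/**/*.js']) // overrides "src" from .testcaferc.json
    .browsers(['chrome:headless']) // overrides "browsers" from .testcaferc.json
    .reporter(createReporter(rpConfig))
    .run();

  await testcafe.close();
}

start();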
Reporting
This reporter provides a Reporting API that can be used directly in tests to send additional data to the report.
There are two ways to add additional data to the tests.
Using the TestCafe meta method:
1. For fixture:
fixture`Getting Started`.page`http://devexpress.github.io/testcafe/example`
  .meta({
    description: 'Suite description',
    attributes: [{ key: 'page', value: 'testCafeExample' }, { value: 'sample' }],
  });
2. For test:
test('My first test', async (page) => {
  await page
    .typeText('#developer-name', 'John Smith')
    .click('#submit-button')
    .expect(Selector('#article-header').innerText)
    .eql('Thank you, John Smith!');
}).meta({
  description: 'Test form behavior',
  attributes: [{ key: 'test', value: 'form' }],
});
Only the attributes and description properties are supported.
Using ReportingApi:
To start using the ReportingApi in tests, just import it from the installed reporter package:
const { ReportingApi } = require('testcafe-reporter-agent-js-testcafe/build/reportingApi');
Reporting API methods
The API provides methods for attaching data (logs, attributes, testCaseId, status).
addAttributes
Add attributes (tags) to the current test. Should be called inside the corresponding test or fixture.
ReportingApi.addAttributes(attributes: Array<Attribute>);
required: attributes
Example:
test('should have the correct attributes', async (t) => {
  ReportingApi.addAttributes([
    {
      key: 'testKey',
      value: 'testValue',
    },
    {
      value: 'testValueTwo',
    },
  ]);
  await t.expect(true).eql(true);
});
setTestCaseId
Set the test case ID for the current test. Should be called inside the corresponding test or fixture.
ReportingApi.setTestCaseId(id: string);
required: id
If testCaseId is not specified, it will be generated automatically.
Example:
test('should have the correct testCaseId', async (t) => {
  ReportingApi.setTestCaseId('itemTestCaseId');
  await t.expect(true).eql(true);
});
log
Send logs to ReportPortal for the current test. Should be called inside the corresponding test or fixture.
ReportingApi.log(level: LOG_LEVELS, message: string, file?: Attachment);
required: level, message
where level can be one of the following: TRACE, DEBUG, WARN, INFO, ERROR, FATAL
Example:
const fs = require('fs');
const path = require('path');

test('should contain logs with attachments', async (page) => {
  const fileName = 'test.jpg';
  const fileContent = fs.readFileSync(path.resolve(__dirname, './attachments', fileName));
  const attachment = {
    name: fileName,
    type: 'image/jpg',
    content: fileContent.toString('base64'),
  };
  ReportingApi.log('INFO', 'info log with attachment', attachment);
  await page.expect(true).eql(true);
});
info, debug, warn, error, trace, fatal
Send logs with the corresponding level to ReportPortal for the current test. Should be called inside the corresponding test or fixture.
ReportingApi.info(message: string, file?: Attachment);
ReportingApi.debug(message: string, file?: Attachment);
ReportingApi.warn(message: string, file?: Attachment);
ReportingApi.error(message: string, file?: Attachment);
ReportingApi.trace(message: string, file?: Attachment);
ReportingApi.fatal(message: string, file?: Attachment);
required: message
Example:
test('should contain logs with attachments', async (page) => {
  ReportingApi.info('Log message');
  ReportingApi.debug('Log message');
  ReportingApi.warn('Log message');
  ReportingApi.error('Log message');
  ReportingApi.trace('Log message');
  ReportingApi.fatal('Log message');
  await page.expect(true).eql(true);
});
launchLog
Send logs to ReportPortal for the current launch. Should be called inside of any test or fixture.
ReportingApi.launchLog(level: LOG_LEVELS, message: string, file?: Attachment);
required: level, message
where level can be one of the following: TRACE, DEBUG, WARN, INFO, ERROR, FATAL
Example:
const fs = require('fs');
const path = require('path');

test('should contain logs with attachments', async (page) => {
  const fileName = 'test.jpg';
  const fileContent = fs.readFileSync(path.resolve(__dirname, './attachments', fileName));
  const attachment = {
    name: fileName,
    type: 'image/jpg',
    content: fileContent.toString('base64'),
  };
  ReportingApi.launchLog('INFO', 'info log with attachment', attachment);
  await page.expect(true).eql(true);
});
launchInfo, launchDebug, launchWarn, launchError, launchTrace, launchFatal
Send logs with the corresponding level to ReportPortal for the current launch. Should be called inside of any test or fixture.
ReportingApi.launchInfo(message: string, file?: Attachment);
ReportingApi.launchDebug(message: string, file?: Attachment);
ReportingApi.launchWarn(message: string, file?: Attachment);
ReportingApi.launchError(message: string, file?: Attachment);
ReportingApi.launchTrace(message: string, file?: Attachment);
ReportingApi.launchFatal(message: string, file?: Attachment);
required: message
Example:
test('should contain logs with attachments', async (page) => {
  ReportingApi.launchInfo('Log message');
  ReportingApi.launchDebug('Log message');
  ReportingApi.launchWarn('Log message');
  ReportingApi.launchError('Log message');
  ReportingApi.launchTrace('Log message');
  ReportingApi.launchFatal('Log message');
  await page.expect(true).eql(true);
});
setStatus
Assign corresponding status to the current test item.
ReportingApi.setStatus(status: string);
required: status
where status must be one of the following: passed, failed, stopped, skipped, interrupted, cancelled, info, warn
Example:
test('should have status FAILED', async (page) => {
  ReportingApi.setStatus('failed');
  await page.expect(true).eql(true);
});
setStatusFailed, setStatusPassed, setStatusSkipped, setStatusStopped, setStatusInterrupted, setStatusCancelled, setStatusInfo, setStatusWarn
Assign corresponding status to the current test item.
ReportingApi.setStatusFailed();
ReportingApi.setStatusPassed();
ReportingApi.setStatusSkipped();
ReportingApi.setStatusStopped();
ReportingApi.setStatusInterrupted();
ReportingApi.setStatusCancelled();
ReportingApi.setStatusInfo();
ReportingApi.setStatusWarn();
Example:
test('should call ReportingApi to set statuses', async (page) => {
  ReportingApi.setStatusFailed();
  ReportingApi.setStatusPassed();
  ReportingApi.setStatusSkipped();
  ReportingApi.setStatusStopped();
  ReportingApi.setStatusInterrupted();
  ReportingApi.setStatusCancelled();
  ReportingApi.setStatusInfo();
  ReportingApi.setStatusWarn();
});
setLaunchStatus
Assign corresponding status to the current launch.
ReportingApi.setLaunchStatus(status: string);
required: status
where status must be one of the following: passed, failed, stopped, skipped, interrupted, cancelled, info, warn
Example:
test('launch should have status FAILED', async (page) => {
  ReportingApi.setLaunchStatus('failed');
  await page.expect(true).eql(true);
});
setLaunchStatusFailed, setLaunchStatusPassed, setLaunchStatusSkipped, setLaunchStatusStopped, setLaunchStatusInterrupted, setLaunchStatusCancelled, setLaunchStatusInfo, setLaunchStatusWarn
Assign corresponding status to the current launch.
ReportingApi.setLaunchStatusFailed();
ReportingApi.setLaunchStatusPassed();
ReportingApi.setLaunchStatusSkipped();
ReportingApi.setLaunchStatusStopped();
ReportingApi.setLaunchStatusInterrupted();
ReportingApi.setLaunchStatusCancelled();
ReportingApi.setLaunchStatusInfo();
ReportingApi.setLaunchStatusWarn();
Example:
test('should call ReportingApi to set launch statuses', async (page) => {
  ReportingApi.setLaunchStatusFailed();
  ReportingApi.setLaunchStatusPassed();
  ReportingApi.setLaunchStatusSkipped();
  ReportingApi.setLaunchStatusStopped();
  ReportingApi.setLaunchStatusInterrupted();
  ReportingApi.setLaunchStatusCancelled();
  ReportingApi.setLaunchStatusInfo();
  ReportingApi.setLaunchStatusWarn();
});
Integration with Sauce Labs
To integrate with Sauce Labs, just add attributes to the test case:
[{
  "key": "SLID",
  "value": "# of the job in Sauce Labs"
}, {
  "key": "SLDC",
  "value": "EU (your job region in Sauce Labs)"
}]
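For example, a sketch of attaching these attributes from within a test via ReportingApi.addAttributes (the job ID and data center values below are placeholders):
test('reports the Sauce Labs job', async (t) => {
  // Placeholders: use the real Sauce Labs job ID and region of your run.
  ReportingApi.addAttributes([
    { key: 'SLID', value: '0123456789abcdef0123456789abcdef' },
    { key: 'SLDC', value: 'EU' },
  ]);
  await t.expect(true).eql(true);
});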