
accessibility-insights-scan

v3.0.1


ai-scan

AI-Scan is a Command Line Interface (CLI) tool that implements automated web accessibility checks in a local environment. The tool currently provides the following capabilities:

  • Single URL Scan: Run automated checks against a single URL.
  • Batch Scan: Run automated checks against a file that contains a list of URLs, one per line.
  • Scan & Crawl: Run automated checks against one URL, crawl that URL, and run automated checks against all discovered URLs.

Installation

This package is available on npm as accessibility-insights-scan.

  npm install -g accessibility-insights-scan

When installing the package on Windows Subsystem for Linux (WSL), use the following command instead.

  npm install --unsafe-perm=true -g accessibility-insights-scan

Example Usage

Single URL Scan

  • The --url parameter is required and specifies the URL to scan.
  • An HTML report will be generated in the output folder; the previous result for the same URL will be overwritten. A combined example using the options below follows the option list.
  npx ai-scan --url https://www.example.com/

Options

  • url: --url
type: boolean
describe: The URL to scan for accessibility issues.
  • output: --output
type: string
describe: Output directory. If not set, the default is ./ai_scan_cli_output. If you use the same output directory for different runs, an existing result might be overwritten.
default: './ai_scan_cli_output'
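
For example, a single URL scan that writes its report to a custom directory (the directory name here is a placeholder) might look like:

  npx ai-scan --url https://www.example.com/ --output ./example-report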

Batch Scan

  • The --inputFile option is required and specifies a file with the list of URLs to scan, one per line (a combined example follows the option list below).
  • A summary HTML report will be generated in the output folder; the previous result will be overwritten.
  • An error log will be generated if any errors occur.
  npx ai-scan --inputFile 'input file path'

Options

  • inputFile: --inputFile
type: string
describe: Path to a file that contains the list of URLs (one per line) to scan for accessibility issues.
  • output: --output
type: string
describe: Output directory. If not set, the default is ./ai_scan_cli_output. If you use the same output directory for different runs, an existing result might be overwritten.
default: './ai_scan_cli_output'
  • keepUrlFragment: --keepUrlFragment
type: boolean
describe: Keep the hash fragment in URLs. If set to false, the hash fragment is removed from each URL; for example, http://www.example.com/#foo is treated as http://www.example.com.
default: false
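
As a sketch, assume a hypothetical file named urls.txt in the current directory that lists one URL per line:

  https://www.example.com/
  https://www.example.com/about

A batch scan of those URLs that writes to a custom output directory (also a placeholder name) could then be invoked as:

  npx ai-scan --inputFile ./urls.txt --output ./batch-results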

Scan & Crawl

  • The --crawl and --url options are required and specify the URL to be crawled and scanned.
  • A summary HTML report will be generated in the output folder; the previous result will be overwritten if --restart is true.
  • An error log will be generated if any errors occur.
  • The crawler will start with the base URL specified on the command line and progressively discover links (URLs) to be crawled and scanned.
  • The base URL to crawl is defined by the URL host and should not include query parameters. Combined examples follow the option list below.
  npx ai-scan --crawl --url https://www.example.com/

Options

  • crawl: --crawl
type: boolean
describe: Crawl the website under the provided URL.
default: false
  • url: --url
type: boolean
describe: The URL to scan/crawl for accessibility issues.
  • simulate: --simulate
type: boolean
describe: Simulate user clicks on elements that match the specified selectors.
default: false
  • selectors: --selectors
type: array
describe: List of CSS selectors to match against. Default selector is 'button'.
default: ['button']
  • output: --output
type: string
describe: Output directory. Defaults to the value of CRAWLEE_STORAGE_DIR if set, otherwise ./ai_scan_cli_output. If you use the same output directory for different runs, an existing result might be overwritten.
default: './ai_scan_cli_output'
  • maxUrls: --maxUrls
type: number
describe: Maximum number of pages that the crawler will open. The crawl will stop when this limit is reached.
Note that in cases of parallel crawling, the actual number of pages visited might be slightly higher than this value.
default: 100
  • restart: --restart
type: boolean
describe: When set to true, clear the pending crawl queue and start the crawl from the provided URL; otherwise, resume the crawl from the last request in the queue.
default: false
  • continue: --continue
type: boolean
describe: Continue the crawl using the pending crawl queue. Use this option to continue when the previous scan was terminated.
Note that the --url option will be ignored and the previous value will be used instead.
default: false
  • snapshot: --snapshot
type: boolean
describe: Save a snapshot of each crawled page. Enabled by default if the simulate option is set; otherwise false.
  • memoryMBytes: --memoryMBytes
type: number
describe: The maximum number of megabytes to be used by the crawler.
  • silentMode: --silentMode
type: boolean
describe: When set to false, a browser window is opened while crawling.
default: true
  • inputFile: --inputFile
type: string
describe: Path to a file that contains a list of URLs (one per line) to scan in addition to the URLs discovered by crawling the provided URL.
  • inputUrls: --inputUrls
type: array
describe: List of URLs to crawl in addition to URLs discovered from crawling the provided URL.
  • discoveryPatterns: --discoveryPatterns
type: array
describe: List of RegEx patterns to crawl in addition to the provided URL.
  • baselineFile: --baselineFile
type: string
describe: Baseline file path. If specified, scan results will be compared to baseline results and the summary report will denote which results are new.
If the results do not match the baseline file, a new baseline will be written to the output directory. To update the existing baseline file instead, use --updateBaseline.
  • updateBaseline: --updateBaseline
type: boolean
describe: Use with --baselineFile to update the baseline file in-place, rather than writing any updated baseline to the output directory.
  • singleWorker: --singleWorker
type: boolean
describe: Uses a single crawler worker.
  • debug: --debug
type: boolean
describe: Enables crawler engine debug mode.
  • authType: --authType
type: string
describe: For sites with authenticated pages, specify the authentication type. The CLI currently supports "AAD" (Azure Active Directory). Use with --serviceAccountName and --serviceAccountPassword.
  • serviceAccountName: --serviceAccountName
type: string
describe: For sites with authenticated pages, set the email address for the non-people service account.
  • serviceAccountPassword: --serviceAccountPassword
type: string
describe: For sites with authenticated pages, set the password for the non-people service account.
  • userAgent: --userAgent
type: string
describe: The custom value of the User-Agent HTTP request header. Defaults to the value of the USER_AGENT environment variable. This option takes precedence over the environment variable.
  • httpHeaders: --httpHeaders
type: string
describe: The custom HTTP header(s) to be sent on each crawl request. Accepts a JSON-formatted string like {"name": "value"}.
  • adhereFilesystemPattern: --adhereFilesystemPattern
type: boolean
default: true
describe: Adhere to the convention that URLs with a trailing slash indicate a directory and URLs without a trailing slash denote a file.
The URL folder is the resource location equal to the base URL up to the last forward slash in the specified base URL. For example:
    -   If the base URL is specified as https://www.example.com/bar/foo, URLs under the https://www.example.com/bar/ folder will be considered for crawling and scanning.
    -   If the base URL is specified as https://www.example.com/bar/foo/, only URLs under the https://www.example.com/bar/foo/ folder will be considered for crawling and scanning.
  • browserOptions: --browserOptions
type: array
describe: List of Chrome command-line options to pass on browser start. Can be used to disable CORS to scan a protected page: --browserOptions disable-web-security
  • keepUrlFragment: --keepUrlFragment
type: boolean
describe: Keep the hash fragment in URLs. If set to false, the hash fragment is removed from each URL; for example, http://www.example.com/#foo is treated as http://www.example.com.
default: false
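
The crawl options above can be combined in a single invocation. The commands below are illustrative sketches; the URL, output directory, baseline file path, account name, and password are placeholders, not recommended values.

A crawl that restarts the queue, limits the number of pages, and compares results to a baseline:

  npx ai-scan --crawl --url https://www.example.com/ --restart --maxUrls 50 --baselineFile ./example.baseline --output ./crawl-results

A crawl of a site behind Azure Active Directory authentication, using a service account:

  npx ai-scan --crawl --url https://intranet.example.com/ --authType AAD --serviceAccountName scanner@example.com --serviceAccountPassword <password>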