scrapfly-fetch
v0.1.4
SDK for Scrapfly.io web scraping service (using fetch for Cloudflare Workers)
Scrapfly SDK
npm install scrapfly-sdk-fetch
Typescript/NodeJS SDK for the Scrapfly.io web scraping API, which allows you to:
- Scrape the web without being blocked.
- Use headless browsers to access JavaScript-powered page data.
- Scale up web scraping.
- ... and much more!
For web scraping guides, see our blog and the #scrapeguide tag for how to scrape specific targets.
Quick Intro
- Register a Scrapfly account for free
- Get your API Key on scrapfly.io/dashboard
- Start scraping: 🚀
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk-fetch';

const key = 'YOUR SCRAPFLY KEY';
const client = new ScrapflyClient({ key });
const apiResponse = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // optional parameters:
        // enable JavaScript rendering
        render_js: true,
        // set proxy country
        country: 'us',
        // enable anti-scraping protection bypass
        asp: true,
        // use residential proxies
        proxy_pool: 'public_residential_pool',
        // etc.
    }),
);

console.log(apiResponse.result.content); // HTML content

// Parse HTML directly with the SDK (through cheerio)
console.log(apiResponse.result.selector('h3').text());
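The optional parameters above correspond to Scrapfly API query parameters, which the SDK assembles for you. As a rough illustration only (`buildQuery` is a hypothetical helper, not part of the SDK), serializing such a config could look like:

```typescript
// Hypothetical sketch: how ScrapeConfig-style options might map to
// Scrapfly API query parameters (the real SDK handles this internally).
type ScrapeOptions = {
    url: string;
    render_js?: boolean;
    country?: string;
    asp?: boolean;
    proxy_pool?: string;
};

function buildQuery(key: string, opts: ScrapeOptions): string {
    // key and url are always required
    const params = new URLSearchParams({ key, url: opts.url });
    // optional flags are only sent when explicitly set
    if (opts.render_js !== undefined) params.set('render_js', String(opts.render_js));
    if (opts.country) params.set('country', opts.country);
    if (opts.asp !== undefined) params.set('asp', String(opts.asp));
    if (opts.proxy_pool) params.set('proxy_pool', opts.proxy_pool);
    return params.toString();
}
```

For example, `buildQuery('YOUR SCRAPFLY KEY', { url: 'https://web-scraping.dev/product/1', render_js: true })` produces a query string with the URL percent-encoded and `render_js=true` appended.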
For more examples, see the /examples directory.
For more on the Scrapfly API, see our getting started documentation.
For Python, see the Scrapfly Python SDK.
Debugging
To enable debug logs, set Scrapfly's log level to "DEBUG":
import { log } from 'scrapfly-sdk-fetch';
log.setLevel('DEBUG');
Additionally, set debug=true in ScrapeConfig to access debug information in the Scrapfly web dashboard:
import { ScrapeConfig } from 'scrapfly-sdk-fetch';

new ScrapeConfig({
    url: 'https://web-scraping.dev/product/1',
    debug: true,
    // ^ enable debug information - this will show extra details on the web dashboard
});
Development
Install and setup environment:
$ npm install
Build and test:
$ npm run build
$ npm run tests