

Crawlbase node

A dependency-free module for scraping and crawling websites using the Crawlbase API.

Installation

Install using npm

npm i crawlbase

Require the API class you need in your project.
You can get your free Crawlbase token from the Crawlbase website.

const { CrawlingAPI, ScraperAPI, LeadsAPI, ScreenshotsAPI } = require('crawlbase');

Crawling API usage

Initialize with one of your account tokens, either the normal or the JavaScript token, then make GET or POST requests accordingly.

const api = new CrawlingAPI({ token: 'YOUR_TOKEN' });
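If you prefer not to hard-code the token, one common pattern (not specific to this library) is to read it from an environment variable, for example:

// CRAWLBASE_TOKEN is a variable name chosen for this example, not one the library looks for.
const api = new CrawlingAPI({ token: process.env.CRAWLBASE_TOKEN });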

GET requests

Pass the URL you want to scrape, plus any of the options available in the API documentation.

api.get(url, options);

Example:

api.get('https://www.facebook.com/britneyspears').then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(error => console.error(error));
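If you prefer async/await over promise chaining, the same request can be written like this (a minimal sketch built only on the promise the library already returns):

// Same request using async/await instead of .then/.catch chaining.
async function fetchPage() {
  try {
    const response = await api.get('https://www.facebook.com/britneyspears');
    if (response.statusCode === 200) {
      console.log(response.body);
    }
  } catch (error) {
    console.error(error);
  }
}

fetchPage();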

You can pass any of the options supported by the Crawlbase API.

Example:

api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', {
  userAgent: 'Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/30.0',
  format: 'json'
}).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(error => console.error(error));
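When you request format: 'json', the body comes back as JSON rather than raw HTML (see the Crawlbase documentation for the exact shape); assuming it arrives as a JSON string, you can parse it before use:

api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', { format: 'json' }).then(response => {
  if (response.statusCode === 200) {
    // Assumption: with format 'json' the body is a JSON string that can be parsed.
    const data = JSON.parse(response.body);
    console.log(data);
  }
}).catch(error => console.error(error));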

POST requests

Pass the URL you want to scrape, the data you want to send (either a JSON object or a string), plus any of the options available in the API documentation.

api.post(url, data, options);

Example:

api.post('https://producthunt.com/search', { text: 'example search' }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(error => console.error(error));

You can send the data as application/json instead of x-www-form-urlencoded by setting the postType option to json.

api.post('https://httpbin.org/post', { some_json: 'with some value' }, { postType: 'json' }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(error => console.error(error));

PUT requests

Pass the URL you want to scrape, the data you want to send (either a JSON object or a string), plus any of the options available in the API documentation.

api.put(url, data, options);

Example:

api.put('https://producthunt.com/search', { text: 'example search' }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(error => console.error(error));

JavaScript requests

If you need to scrape a website built with JavaScript (React, Angular, Vue, etc.), just pass your JavaScript token and use the same calls. Note that only .get is available for JavaScript requests, not .post.

const api = new CrawlingAPI({ token: 'YOUR_JAVASCRIPT_TOKEN' });
api.get('https://www.nfl.com').then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(error => console.error(error));

In the same way, you can pass additional JavaScript options.

api.get('https://www.freelancer.com', { pageWait: 5000 }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(error => console.error(error));

Original status and Crawlbase status

You can always get the original status and the Crawlbase status from the response. Read the Crawlbase documentation to learn more about these statuses.

api.get('https://craigslist.com').then(response => {
  console.log(response.originalStatus, response.cbStatus);
}).catch(error => console.error(error));

Scraper API usage

Initialize the Scraper API and use it in the same way as the Crawling API (see above). Use it with your normal token.

const api = new ScraperAPI({ token: 'YOUR_TOKEN' });

api.get('https://www.amazon.com/Halo-SleepSack-Swaddle-Triangle-Neutral/dp/B01LAG1TOS').then(response => {
  if (response.statusCode === 200) {
    console.log(response.json);
  }
}).catch(error => console.error(error));

Leads API usage

Initialize with your Leads API token and call the getFromDomain method.

const api = new LeadsAPI({ token: 'YOUR_TOKEN' });

api.getFromDomain('somesite.com').then(response => {
  console.log(response.leads);
});
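Assuming each entry in response.leads exposes an email field (check the Crawlbase Leads API documentation for the exact shape), you could, for example, list the emails:

api.getFromDomain('somesite.com').then(response => {
  // Assumption: each lead object has an `email` property.
  response.leads.forEach(lead => console.log(lead.email));
}).catch(error => console.error(error));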

Screenshots API usage

Initialize with your Screenshots API token and call the get method, then do whatever you need with the binary content; for example, save it to a file.

You can pass any of the available parameters.

const fs = require('fs');
const api = new ScreenshotsAPI({ token: 'YOUR_TOKEN' });

api.get('https://www.amazon.com').then(response => {
  // response.body holds the binary image data, so write it with binary encoding.
  fs.writeFileSync('amazon.jpg', response.body, { encoding: 'binary' });
}).catch(error => console.error(error));

// Example with parameters
api.get('https://www.amazon.com', { device: 'mobile' }).then(response => {
  fs.writeFileSync('amazon-mobile.jpg', response.body, { encoding: 'binary' });
}).catch(error => console.error(error));

If you have questions or need help using the library, please open an issue or contact us.


Copyright 2024 Crawlbase