scraper_api
ScraperApi - JavaScript client for scraper_api

The scraping SaaS platform provides a RESTful API for developers to perform web scraping tasks. Users can submit scraping tasks, monitor task status, retrieve scraped data, and manage their account through the API.

This SDK is automatically generated by the OpenAPI Generator project:
- API version: 1.0.0
- Package version: 1.0.0
- Generator version: 7.6.0
- Build package: org.openapitools.codegen.languages.JavascriptClientCodegen
Installation
For Node.js
npm
To publish the library as an npm package, please follow the procedure in "Publishing npm packages".
Then install it via:
npm install scraper_api --save
Finally, you need to build the module:
npm run build
Local development
To use the library locally without publishing to a remote npm registry, first install the dependencies by changing into the directory containing package.json (and this README). Let's call this JAVASCRIPT_CLIENT_DIR. Then run:
npm install
Next, link it globally in npm with the following, also from JAVASCRIPT_CLIENT_DIR:
npm link
To use the link you just defined in your project, switch to the directory you want to use your scraper_api from, and run:
npm link /path/to/<JAVASCRIPT_CLIENT_DIR>
Finally, you need to build the module:
npm run build
git
If the library is hosted in a git repository, e.g. https://github.com/GIT_USER_ID/GIT_REPO_ID, then install it via:
npm install GIT_USER_ID/GIT_REPO_ID --save
For browser
The library also works in the browser environment via npm and browserify. After following the above steps with Node.js and installing browserify with npm install -g browserify, perform the following (assuming main.js is your entry file):
browserify main.js > bundle.js
Then include bundle.js in the HTML pages.
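For reference, the entry file might look like this minimal sketch; it reuses the client setup shown in the Getting Started section below (main.js is the assumed name from the step above):

// main.js - a minimal sketch of a browserify entry file
var ScraperApi = require('scraper_api');

// Same client setup as in the Getting Started section below
var key = ScraperApi.ApiClient.instance.authentications['key'];
key.apiKey = 'YOUR API KEY';

// ...then call the API exactly as shown in Getting Started,
// e.g. new ScraperApi.APIKeysApi().keysDelete(body, callback);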
Webpack Configuration
Using Webpack you may encounter the following error: "Module not found: Error: Cannot resolve module". In that case you most likely need to disable the AMD loader. Add/merge the following section into your webpack config:
module: {
  rules: [
    {
      parser: {
        amd: false
      }
    }
  ]
}
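For context, a complete webpack config with this section merged in might look like the following sketch (the entry and output names are assumptions, not part of the generated client):

// webpack.config.js - a hedged sketch; entry/output names are assumptions
module.exports = {
  entry: './main.js',
  output: { filename: 'bundle.js' },
  module: {
    rules: [
      {
        // Disabling the AMD parser avoids the "Cannot resolve module" error above
        parser: { amd: false }
      }
    ]
  }
};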
Getting Started
Please follow the installation instructions and execute the following JS code:
var ScraperApi = require('scraper_api');

var defaultClient = ScraperApi.ApiClient.instance;
// Configure API key authorization: key
var key = defaultClient.authentications['key'];
key.apiKey = "YOUR API KEY";
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//key.apiKeyPrefix = "Token";

var api = new ScraperApi.APIKeysApi();
var body = new ScraperApi.DeleteApiKeyRequest(); // {DeleteApiKeyRequest}
var callback = function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('API called successfully. Returned data: ' + data);
  }
};
api.keysDelete(body, callback);
Documentation for API Endpoints
All URIs are relative to https://scraper.datachaser.local/api/v1
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
ScraperApi.APIKeysApi | keysDelete | DELETE /keys | Delete API key by id
ScraperApi.APIKeysApi | keysGet | GET /keys | Retrieve API keys list
ScraperApi.APIKeysApi | keysPost | POST /keys | Create new API key
ScraperApi.AuthenticationApi | accountPut | PUT /account | Update user account
ScraperApi.DataApi | dataGet | GET /data | Get all data
ScraperApi.DataApi | dataIdDelete | DELETE /data/{id} | Delete data by id
ScraperApi.DataApi | dataJobIdGet | GET /data/job/{id} | Get data by job id
ScraperApi.DataApi | dataUserIdPost | POST /data/user/{id} | Webhook to receive data, by user id, once scraping is done
ScraperApi.JobsApi | jobAsyncPost | POST /job/async | Receives a job and processes it asynchronously
ScraperApi.JobsApi | jobGet | GET /job | Get jobs
ScraperApi.JobsApi | jobIdDelete | DELETE /job/{id} | Delete existing job
ScraperApi.JobsApi | jobIdGet | GET /job/{id} | Retrieve specific job status
ScraperApi.JobsApi | jobIdPut | PUT /job/{id} | Update existing job
ScraperApi.JobsApi | jobPost | POST /job | Create new job
ScraperApi.JobsApi | jobSyncPost | POST /job/sync | Receives a job and processes it synchronously
ScraperApi.LogsApi | logJobIdGet | GET /log/job/{id} | Get log by job id
ScraperApi.LogsApi | logsGet | GET /logs | Get all logs
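As an illustrative sketch combining these endpoints, assuming the client is configured as in Getting Started (the request fields and the response's id property are assumptions that depend on your spec's JobCreateRequest/JobCreateResponse schemas):

// A hedged sketch: create a job, then fetch its status
var ScraperApi = require('scraper_api');
var jobsApi = new ScraperApi.JobsApi();

var request = new ScraperApi.JobCreateRequest(); // populate per the JobCreateRequest schema
jobsApi.jobPost(request, function (error, created, response) {
  if (error) { return console.error(error); }
  // Assumes the JobCreateResponse exposes the new job's id
  jobsApi.jobIdGet(created.id, function (err, status) {
    if (err) { return console.error(err); }
    console.log('Job status: ' + JSON.stringify(status));
  });
});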
Documentation for Models
- ScraperApi.Actions
- ScraperApi.Data
- ScraperApi.DataToScrape
- ScraperApi.DeleteApiKeyRequest
- ScraperApi.ErrorResponse
- ScraperApi.Frecuency
- ScraperApi.Job
- ScraperApi.JobCreateRequest
- ScraperApi.JobCreateResponse
- ScraperApi.Key
- ScraperApi.KeyCreateRequest
- ScraperApi.Log
- ScraperApi.ProcessedJob
- ScraperApi.ProxyRegions
- ScraperApi.RateLimit
- ScraperApi.Selector
- ScraperApi.UserAccountCreateResponse
- ScraperApi.UserAccountUpdateRequest
Documentation for Authorization
Authentication schemes defined for the API:
key
- Type: API key
- API key parameter name: Key
- Location: HTTP header
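In client terms (see Getting Started above), this scheme means the configured apiKey is sent in a Key HTTP header on every request; a minimal sketch:

// a minimal sketch: the API key travels in the "Key" request header
var ScraperApi = require('scraper_api');
var key = ScraperApi.ApiClient.instance.authentications['key'];
key.apiKey = 'YOUR API KEY';   // sent as:  Key: YOUR API KEY
key.apiKeyPrefix = 'Token';    // optional; sent as:  Key: Token YOUR API KEY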