NodeJS Twitter Crawler
Crawl Twitter users and user tweets using multiple credentials. Credentials are used in round-robin mode.
Using the component
NodeJS Twitter Crawler is implemented using promises. Use the promise pattern to attach callbacks to crawler method invocations.
var TwitterCrawler = require('twitter-crawler');

var crawler = new TwitterCrawler(credentials);

crawler.getUser(/* CrawlerParameters */)
  .then( /* Success Callback */ )
  .catch( /* Error Callback */ );

crawler.getTweets(/* CrawlerParameters */, { limit: /* Desired limit, you can omit this */ })
  .then( /* Success Callback */ )
  .catch( /* Error Callback */ );
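The credentials argument is an array of Twitter API credential objects, one per set of keys, which the crawler rotates through in round-robin order. The field names and environment variables below are assumptions for illustration only; check your credential source for the exact keys. A minimal sketch:

// Hypothetical credentials array; field names are assumed, not confirmed by this package.
var credentials = [
  {
    consumer_key: process.env.TWITTER_CONSUMER_KEY_1,
    consumer_secret: process.env.TWITTER_CONSUMER_SECRET_1,
    access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY_1,
    access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET_1
  },
  // Additional credential objects are used in round-robin order by the crawler.
  {
    consumer_key: process.env.TWITTER_CONSUMER_KEY_2,
    consumer_secret: process.env.TWITTER_CONSUMER_SECRET_2,
    access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY_2,
    access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET_2
  }
];

var crawler = new TwitterCrawler(credentials);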
API Methods
The available methods are:
getUser :: CrawlerParameters -> Promise
- Obtains a user's information from Twitter by calling the users/show method of the Twitter API. The then callback receives the user information.

getTweets :: (CrawlerParameters[, CrawlerOptions]) -> Promise
- Obtains user tweets by calling the statuses/user_timeline method of the Twitter API. The then callback receives a list of tweets.
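A hedged example of calling both methods with a Twitter handle; the handle, the limit value, and the logged fields are illustrative only:

// Look up a user by handle; the resolved value is the Twitter user object.
crawler.getUser('nodejs')
  .then(function (user) {
    console.log('Found user:', user.screen_name, '(' + user.followers_count + ' followers)');
  })
  .catch(function (err) {
    console.error('getUser failed:', err);
  });

// Fetch the user's timeline; the resolved value is an array of tweets.
crawler.getTweets('nodejs', { limit: 100 })
  .then(function (tweets) {
    console.log('Collected', tweets.length, 'tweets');
  })
  .catch(function (err) {
    console.error('getTweets failed:', err);
  });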
Definitions
CrawlerParameters can be a TwitterID or a TwitterParameters object.

TwitterID is the numeric Twitter ID or the Twitter handle.

TwitterParameters is an object with parameters to be passed to the Twitter API. For example, the Twitter API documentation shows that GET statuses/user_timeline can receive parameters such as user_id or exclude_replies.

Promise is a promise as defined by the Bluebird package.

CrawlerOptions is an object containing options for the crawl, with the following attributes:
limit: sets the maximum number of tweets to collect.
min_tweets: forces a minimum tweet count. If set and not satisfied, the promise is rejected.
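A sketch combining a TwitterParameters object with CrawlerOptions; the screen name, the 200/50 thresholds, and the log messages are arbitrary examples:

// Pass Twitter API parameters directly instead of a plain ID or handle.
crawler.getTweets(
  { screen_name: 'nodejs', exclude_replies: true },
  { limit: 200, min_tweets: 50 }   // stop at 200 tweets; reject if fewer than 50 are collected
)
  .then(function (tweets) {
    console.log('Collected', tweets.length, 'tweets');
  })
  .catch(function (err) {
    // Reached when min_tweets is not satisfied or the Twitter API call fails.
    console.error('Crawl rejected:', err);
  });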