# ghminer

ghminer is a command-line dataset miner that aggregates a set of public GitHub repositories via the GitHub GraphQL API and writes the result into CSV and JSON files. This tool is based on the ksegla/GitHubMiner prototype.
Read this blog post about using ghminer as a dataset miner from GitHub for your research.
**Motivation.** For our research we require reasonably large datasets in order to properly analyze GitHub repositories and their metrics, and to build them we need to aggregate repositories somehow. The default GitHub Search API does not help much here, since it is limited to 1000 repositories per query. ghminer uses the GitHub GraphQL API instead and can utilize multiple GitHub PATs in order to automate the build of such a large dataset and increase research productivity.
## How to use

First, install it from npm:

```shell
npm install -g ghminer
```

then execute:

```shell
ghminer --query "stars:2..100" --start "2005-01-01" --end "2024-01-01" --tokens pats.txt
```
Once it is done, you should have a `result.csv` file with all the GitHub repositories that were created in the provided date range.
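As a quick sanity check, the rows in `result.csv` can be loaded with plain Node. A minimal sketch, assuming illustrative column names (`repo`, `branch`, `commits` — the actual columns come from your parsing schema) and comma-free field values; real data may need a proper CSV parser:

```javascript
// A naive split-based parse, sufficient only when no field contains commas.
// The sample below mimics the shape of a result.csv produced by ghminer.
const sample = [
  'repo,branch,commits',
  'octocat/Hello-World,master,42',
  'torvalds/linux,master,100',
].join('\n');

function parseCsv(text) {
  const [header, ...rows] = text.trim().split('\n');
  const keys = header.split(',');
  return rows.map((row) => {
    const values = row.split(',');
    // Pair each header key with the value in the same column.
    return Object.fromEntries(keys.map((k, i) => [k, values[i]]));
  });
}

const repos = parseCsv(sample);
console.log(repos[0].repo); // octocat/Hello-World
```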
## CLI Options

| Option | Required | Description |
|--------------|----------|-------------|
| `--query` | ✅ | GitHub Search API query. |
| `--graphql` | ✅ | Path to the GitHub API GraphQL query; default is `ghminer.graphql`. |
| `--schema` | ✅ | Path to the parsing schema; default is `ghminer.json`. |
| `--start` | ✅ | The start date to search the repositories, in ISO format; e.g. `2024-01-01`. |
| `--end` | ✅ | The end date to search the repositories, in ISO format; e.g. `2024-01-01`. |
| `--tokens` | ✅ | Name of a text file containing a number of GitHub PATs, separated by line breaks. These are used to work around GitHub API rate limits; add as many tokens as needed, considering the amount of data. |
| `--date` | ❌ | The type of the date field to search on: `created`, `updated`, or `pushed`. The default is `created`. |
| `--batchsize` | ❌ | Request batch-size value in the range `10..100`. The default value is `10`. |
| `--filename` | ❌ | The name of the output file for the found repos (CSV and JSON files). The default is `result`. |
| `--json` | ❌ | Also save the found repos as a JSON file. |
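The file passed to `--tokens` is just one PAT per line. A minimal sketch of preparing such a file (the token values here are placeholders, not real PATs):

```shell
# Write one GitHub personal access token per line; placeholders shown.
printf '%s\n' "ghp_token_one_example" "ghp_token_two_example" > pats.txt

# Then point ghminer at it (requires ghminer installed globally):
# ghminer --query "stars:2..100" --start "2005-01-01" --end "2024-01-01" --tokens pats.txt
wc -l < pats.txt   # 2 tokens available for rotating past rate limits
```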
## GraphQL Query

Your query, provided via `--graphql`, can include any GitHub-supported fields you want. However, to keep the query running until it has collected all possible repositories, ghminer requires it to have the following structure:

* `search` with the `$searchQuery`, `$first`, and `$after` attributes;
* `pageInfo` with the `endCursor` and `hasNextPage` attributes;
* the `repositoryCount` field.
Here is an example:

```graphql
query ($searchQuery: String!, $first: Int, $after: String) {
  search(query: $searchQuery, type: REPOSITORY, first: $first, after: $after) {
    repositoryCount
    ...
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}
```
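The required `pageInfo` fields are what drive cursor pagination. A minimal sketch of the loop this structure enables, with `runQuery` standing in for the real GraphQL call (the function and dataset are invented for illustration; this is not ghminer's actual code):

```javascript
// In-memory "search results" so the pagination loop can run standalone.
const all = ['a/x', 'b/y', 'c/z', 'd/w'];

// Fake GraphQL call: returns one page plus pageInfo, like the query above.
async function runQuery({ first, after }) {
  const start = after === null ? 0 : Number(after);
  const nodes = all.slice(start, start + first);
  return {
    repositoryCount: all.length,
    nodes,
    pageInfo: {
      endCursor: String(start + nodes.length),
      hasNextPage: start + nodes.length < all.length,
    },
  };
}

// Keep requesting pages, feeding endCursor back in, until hasNextPage is false.
async function collectAll() {
  const repos = [];
  let after = null;
  let hasNextPage = true;
  while (hasNextPage) {
    const page = await runQuery({ first: 2, after });
    repos.push(...page.nodes);
    ({ endCursor: after, hasNextPage } = page.pageInfo);
  }
  return repos;
}

collectAll().then((repos) => console.log(repos.length)); // 4
```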
## Parsing Schema

To parse the response generated by the GraphQL query, you should provide a parsing schema. Its keys are the desired metadata field names, and its values are the paths to the corresponding data in the response.

For instance:
```json
{
  "repo": "nameWithOwner",
  "branch": "defaultBranchRef.name",
  "readme": "defaultBranchRef.target.repository.object.text",
  "topics": "repositoryTopics.edges[].node.topic.name",
  "lastCommitDate": "defaultBranchRef.target.history.edges[0].node.committedDate",
  "commits": "defaultBranchRef.target.history.totalCount",
  "workflows": "object.entries.length"
}
```
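A minimal sketch of how such paths might be resolved against a response, assuming the `[]` suffix maps the rest of the path over an array and `[0]` indexes into one (an illustration of the path syntax, not ghminer's implementation):

```javascript
// Resolve a dot-separated schema path against a nested response object.
function extract(obj, path) {
  const parts = path.split('.');
  let current = obj;
  for (let i = 0; i < parts.length; i++) {
    if (current == null) return undefined;
    const part = parts[i];
    const mapMatch = part.match(/^(.*)\[\]$/);   // "key[]"  -> map over array
    const idxMatch = part.match(/^(.*)\[(\d+)\]$/); // "key[0]" -> index array
    if (mapMatch) {
      const rest = parts.slice(i + 1).join('.');
      const arr = current[mapMatch[1]] ?? [];
      return rest ? arr.map((el) => extract(el, rest)) : arr;
    }
    if (idxMatch) {
      current = (current[idxMatch[1]] ?? [])[Number(idxMatch[2])];
    } else {
      current = current[part];
    }
  }
  return current;
}

// Trimmed-down stand-in for a GraphQL repository node.
const response = {
  nameWithOwner: 'octocat/Hello-World',
  repositoryTopics: {
    edges: [
      { node: { topic: { name: 'api' } } },
      { node: { topic: { name: 'demo' } } },
    ],
  },
};

console.log(extract(response, 'nameWithOwner')); // octocat/Hello-World
console.log(extract(response, 'repositoryTopics.edges[].node.topic.name')); // [ 'api', 'demo' ]
```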
## How to contribute

Fork the repository, make changes, and send us a pull request. We will review your changes and apply them to the `master` branch shortly, provided they don't violate our quality standards. To avoid frustration, before sending us your pull request please run the full npm build:

```shell
npm test
```

You will need Node 20+ installed.