ghminer is a command-line dataset miner that aggregates a set of public GitHub repositories from the GitHub GraphQL API and flushes the result into CSV and JSON files. The tool is based on the ksegla/GitHubMiner prototype.
Read this blog post about applying ghminer as a dataset miner from GitHub to your research.
Motivation. For our research we require reasonably large datasets in order to properly analyze GitHub repositories and their metrics. To build them, we need to aggregate repositories somehow. The default GitHub Search API does not help much, since it is limited to 1000 repositories per query. Our tool uses the GitHub GraphQL API instead, and can utilize multiple GitHub PATs in order to automate the building of such a large dataset and increase research productivity.
First, install it from npm:

```shell
npm install -g ghminer
```

Then execute:

```shell
ghminer --query "stars:2..100" --start "2005-01-01" --end "2024-01-01" --tokens pats.txt
```
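The `pats.txt` file referenced above is a plain text file with one GitHub personal access token per line. A sketch of its layout (the tokens below are placeholders, not real values):

```text
ghp_exampleTokenAAAAAAAAAAAAAAAAAAAAAAAA
ghp_exampleTokenBBBBBBBBBBBBBBBBBBBBBBBB
```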
You should also have two files: `ghminer.graphql` with the GraphQL query, and `ghminer.json` with the schema for parsing the response from the GitHub API. The GraphQL query can request any fields GitHub supports. However, to keep the query paginating until all possible repositories are collected, ghminer requires it to have the following structure:

* a `search` field with the `$searchQuery`, `$first`, and `$after` arguments;
* a `pageInfo` field with the `endCursor` and `hasNextPage` attributes;
* a `repositoryCount` field.
Here is an example:
```graphql
query ($searchQuery: String!, $first: Int, $after: String) {
  search(query: $searchQuery, type: REPOSITORY, first: $first, after: $after) {
    repositoryCount
    nodes {
      ... on Repository {
        nameWithOwner
        defaultBranchRef {
          name
        }
        licenseInfo {
          spdxId
        }
      }
    }
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}
```
and `ghminer.json`:

```json
{
  "repo": "nameWithOwner",
  "branch": "defaultBranchRef.name",
  "license": "licenseInfo.spdxId"
}
```
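Each key in the schema names a CSV column, and each value is a dot-separated path into the GraphQL response node. A minimal sketch of how such a mapping could be resolved in Node.js (the `resolvePath` helper and the sample data are illustrative, not part of ghminer's actual internals):

```javascript
// Resolve a dot-separated path like "defaultBranchRef.name"
// against one node of the GraphQL search response.
function resolvePath(node, path) {
  return path.split('.').reduce(
    (value, key) => (value == null ? undefined : value[key]),
    node
  );
}

// A sample node, shaped like one entry of search.nodes:
const repo = {
  nameWithOwner: 'octocat/Hello-World',
  defaultBranchRef: { name: 'main' },
  licenseInfo: { spdxId: 'MIT' }
};

const schema = {
  repo: 'nameWithOwner',
  branch: 'defaultBranchRef.name',
  license: 'licenseInfo.spdxId'
};

// Build one CSV row object from the schema:
const row = Object.fromEntries(
  Object.entries(schema).map(([column, path]) => [column, resolvePath(repo, path)])
);
console.log(row); // { repo: 'octocat/Hello-World', branch: 'main', license: 'MIT' }
```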
Once it is done, you should have a `result.csv` file with all the GitHub repositories that were created in the provided date range.
Consider this more complicated example, demonstrating how to fetch various fields from a GitHub repository. `ghminer.graphql`:
```graphql
query ($searchQuery: String!, $first: Int, $after: String) {
  search(query: $searchQuery, type: REPOSITORY, first: $first, after: $after) {
    repositoryCount
    nodes {
      ... on Repository {
        nameWithOwner
        description
        defaultBranchRef {
          name
          target {
            repository {
              object(expression: "HEAD:README.md") {
                ... on Blob {
                  text
                }
              }
            }
            ... on Commit {
              history(first: 1) {
                totalCount
                edges {
                  node {
                    committedDate
                  }
                }
              }
            }
          }
        }
        repositoryTopics(first: 10) {
          edges {
            node {
              topic {
                name
              }
            }
          }
        }
        issues(states: [OPEN]) {
          totalCount
        }
        pullRequests {
          totalCount
        }
        object(expression: "HEAD:.github/workflows/") {
          ... on Tree {
            entries {
              name
              object {
                ... on Blob {
                  byteSize
                }
              }
            }
          }
        }
      }
    }
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}
```
`ghminer.json`:

```json
{
  "repo": "nameWithOwner",
  "description": "description",
  "branch": "defaultBranchRef.name",
  "readme": "defaultBranchRef.target.repository.object.text",
  "topics": "repositoryTopics.edges[].node.topic.name",
  "issues": "issues.totalCount",
  "pulls": "pullRequests.totalCount",
  "commits": "defaultBranchRef.target.history.totalCount",
  "lastCommitDate": "defaultBranchRef.target.history.edges[0].node.committedDate",
  "workflows": "object.entries.length"
}
```
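This schema also uses array syntax in its paths: `edges[]` apparently maps over an array and collects the rest of the path from each element, while `edges[0]` indexes a single element. A hedged Node.js sketch of those semantics (the `resolve` helper and sample data are hypothetical, not ghminer's actual implementation):

```javascript
// Resolve schema paths that may contain array syntax:
//   "edges[].node.x"  -> map over the array, collect x from each element
//   "edges[0].node.x" -> take element 0, then continue down the path
function resolve(node, path) {
  const segments = path.split('.');
  let value = node;
  for (let i = 0; i < segments.length; i++) {
    if (value == null) return undefined;
    const seg = segments[i];
    const mapMatch = seg.match(/^(\w+)\[\]$/);
    const idxMatch = seg.match(/^(\w+)\[(\d+)\]$/);
    if (mapMatch) {
      // Recurse into every element with the remaining path.
      const rest = segments.slice(i + 1).join('.');
      const arr = value[mapMatch[1]] || [];
      return arr.map((item) => (rest ? resolve(item, rest) : item));
    } else if (idxMatch) {
      value = value[idxMatch[1]]?.[Number(idxMatch[2])];
    } else {
      value = value[seg];
    }
  }
  return value;
}

// A sample node, shaped like part of the response above:
const repo = {
  repositoryTopics: {
    edges: [
      { node: { topic: { name: 'java' } } },
      { node: { topic: { name: 'testing' } } }
    ]
  },
  defaultBranchRef: {
    target: {
      history: { edges: [{ node: { committedDate: '2024-01-01T00:00:00Z' } }] }
    }
  }
};

console.log(resolve(repo, 'repositoryTopics.edges[].node.topic.name'));
// -> [ 'java', 'testing' ]
console.log(resolve(repo, 'defaultBranchRef.target.history.edges[0].node.committedDate'));
// -> 2024-01-01T00:00:00Z
```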
Also, check this repo, where ghminer is used to collect Java repositories from GitHub for a research experiment.
| Option | Required | Description |
|---|---|---|
| `--query` | ✅ | GitHub Search API query. |
| `--graphql` | ✅ | Path to the GitHub API GraphQL query; the default is `ghminer.graphql`. |
| `--schema` | ✅ | Path to the parsing schema; the default is `ghminer.json`. |
| `--start` | ✅ | The start date to search the repositories, in ISO format, e.g. `2024-01-01`. |
| `--end` | ✅ | The end date to search the repositories, in ISO format, e.g. `2024-01-01`. |
| `--tokens` | ✅ | Name of a text file that contains a number of GitHub PATs, separated by line breaks. They are used to get around GitHub API rate limits; add as many tokens as needed, considering the amount of data. |
| `--date` | ❌ | The type of date field to search on: one of `created`, `updated`, and `pushed`; the default is `created`. |
| `--batchsize` | ❌ | Request batch-size value in the range `10..100`. The default is `10`. |
| `--filename` | ❌ | The name of the output file for the found repos (CSV and JSON files). The default is `result`. |
| `--json` | ❌ | Also save the found repos as a JSON file. |
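For example, a run that uses the optional flags from the table above might look like this (the query, dates, and file names are illustrative):

```shell
ghminer --query "language:java stars:>10" \
  --graphql ghminer.graphql \
  --schema ghminer.json \
  --start "2019-01-01" --end "2024-01-01" \
  --tokens pats.txt \
  --date pushed --batchsize 100 \
  --filename java-repos --json
```

With `--filename java-repos` and `--json`, the found repositories would be saved to both `java-repos.csv` and `java-repos.json`.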
Fork the repository, make your changes, and send us a pull request. We will review your changes and apply them to the `master` branch shortly, provided they don't violate our quality standards. To avoid frustration, before sending us your pull request please run the full npm build:

```shell
npm test
```

You will need Node 20+ installed.