Add pre-commit config and GitHub Actions job (#1688)
* Add pre-commit config and GitHub Actions job

Contains the following hooks:
* buildifier - for formatting and linting Bazel files.
* mypy, ruff, isort, black - for Python type checking, static analysis,
import hygiene, and formatting.

The pylint CI job was replaced with a pre-commit CI job, in which pre-commit
is bootstrapped via Python.

Pylint is currently no longer part of the
code checks, but can be re-added if requested. The reason for dropping it is
that it does not play nicely with pre-commit, and much of its functionality
and responsibilities are already covered by ruff.

* Add dev extra to pyproject.toml for development installs
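
Such an extra typically lives under `[project.optional-dependencies]` in
`pyproject.toml`. A minimal sketch is shown below; the exact package list is
an assumption, not the committed one:

```toml
[project.optional-dependencies]
# Hypothetical sketch of the dev extra; the committed extra may pin
# additional or different tools.
dev = [
    "pre-commit>=3.3.3",
]
```

With an extra like this, `pip install -e ".[dev]"` installs pre-commit
alongside the package itself, which is how the CI job below bootstraps it.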

* Clarify that pre-commit contains only Python and Bazel hooks

* Add one-line docstrings to Bazel modules

* Apply buildifier pre-commit fixes to Bazel files

* Apply pre-commit fixes to Python files

* Supply --profile=black to isort to prevent conflicts

* Fix nanobind build file formatting

* Add tooling configs to `pyproject.toml`

In particular, the line length is set to 80 for all Python files.
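
A rough sketch of the corresponding tool tables in `pyproject.toml` is shown
below; the option names are the tools' documented keys, but the committed
values may differ beyond the shared line length:

```toml
# Sketch only: the committed configuration may set additional options.
[tool.black]
line-length = 80

[tool.isort]
line_length = 80

[tool.ruff]
line-length = 80
```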

* Reformat all Python files to line length 80, fix return type annotations

Also ignores the `tools/compare.py` and `tools/gbench/report.py` files
for mypy, since they emit a barrage of errors which we can deal with
later. The errors are mostly related to dynamic classmethod definition.
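
One way to express such an exclusion in `pyproject.toml` is a per-module mypy
override; the mechanism and module names below are an assumption about the
approach, not necessarily the committed configuration:

```toml
# Assumed approach: silence mypy for the two legacy tools modules for now.
[[tool.mypy.overrides]]
module = [
    "compare",
    "gbench.report",
]
ignore_errors = true
```
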
nicholasjng authored Oct 30, 2023
1 parent b219e18 commit b93f5a5
Showing 22 changed files with 1,628 additions and 1,080 deletions.
39 changes: 39 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -0,0 +1,39 @@
name: python + Bazel pre-commit checks

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    env:
      MYPY_CACHE_DIR: "${{ github.workspace }}/.cache/mypy"
      RUFF_CACHE_DIR: "${{ github.workspace }}/.cache/ruff"
      PRE_COMMIT_HOME: "${{ github.workspace }}/.cache/pre-commit"

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.11
          cache: 'pip'
          cache-dependency-path: pyproject.toml
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e ".[dev]"
      - name: Cache pre-commit tools
        uses: actions/cache@v3
        with:
          path: |
            ${{ env.MYPY_CACHE_DIR }}
            ${{ env.RUFF_CACHE_DIR }}
            ${{ env.PRE_COMMIT_HOME }}
          key: ${{ runner.os }}-${{ hashFiles('.pre-commit-config.yaml') }}-linter-cache
      - name: Run pre-commit checks
        run: |
          pre-commit run --all-files --verbose --show-diff-on-failure
28 changes: 0 additions & 28 deletions .github/workflows/pylint.yml

This file was deleted.

26 changes: 26 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,26 @@
repos:
  - repo: https://github.com/keith/pre-commit-buildifier
    rev: 6.3.3.1
    hooks:
      - id: buildifier
      - id: buildifier-lint
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.6.1
    hooks:
      - id: mypy
        types_or: [ python, pyi ]
        args: [ "--ignore-missing-imports", "--scripts-are-modules" ]
  - repo: https://github.com/psf/black
    rev: 23.10.1
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
      - id: isort
        args: [--profile, black]
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.1.3
    hooks:
      - id: ruff
        args: [ --fix, --exit-non-zero-on-fix ]
193 changes: 99 additions & 94 deletions .ycm_extra_conf.py
@@ -1,25 +1,30 @@
import os

import ycm_core

# These are the compilation flags that will be used in case there's no
# compilation database set (by default, one is not set).
# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
flags = [
'-Wall',
'-Werror',
'-pedantic-errors',
'-std=c++0x',
'-fno-strict-aliasing',
'-O3',
'-DNDEBUG',
# ...and the same thing goes for the magic -x option which specifies the
# language that the files to be compiled are written in. This is mostly
# relevant for c++ headers.
# For a C project, you would set this to 'c' instead of 'c++'.
'-x', 'c++',
'-I', 'include',
'-isystem', '/usr/include',
'-isystem', '/usr/local/include',
"-Wall",
"-Werror",
"-pedantic-errors",
"-std=c++0x",
"-fno-strict-aliasing",
"-O3",
"-DNDEBUG",
# ...and the same thing goes for the magic -x option which specifies the
# language that the files to be compiled are written in. This is mostly
# relevant for c++ headers.
# For a C project, you would set this to 'c' instead of 'c++'.
"-x",
"c++",
"-I",
"include",
"-isystem",
"/usr/include",
"-isystem",
"/usr/local/include",
]


@@ -29,87 +34,87 @@
#
# Most projects will NOT need to set this to anything; you can just change the
# 'flags' list of compilation flags. Notice that YCM itself uses that approach.
compilation_database_folder = ''
compilation_database_folder = ""

if os.path.exists( compilation_database_folder ):
  database = ycm_core.CompilationDatabase( compilation_database_folder )
if os.path.exists(compilation_database_folder):
    database = ycm_core.CompilationDatabase(compilation_database_folder)
else:
  database = None
    database = None

SOURCE_EXTENSIONS = [".cc"]

SOURCE_EXTENSIONS = [ '.cc' ]

def DirectoryOfThisScript():
  return os.path.dirname( os.path.abspath( __file__ ) )


def MakeRelativePathsInFlagsAbsolute( flags, working_directory ):
  if not working_directory:
    return list( flags )
  new_flags = []
  make_next_absolute = False
  path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]
  for flag in flags:
    new_flag = flag

    if make_next_absolute:
      make_next_absolute = False
      if not flag.startswith( '/' ):
        new_flag = os.path.join( working_directory, flag )

    for path_flag in path_flags:
      if flag == path_flag:
        make_next_absolute = True
        break

      if flag.startswith( path_flag ):
        path = flag[ len( path_flag ): ]
        new_flag = path_flag + os.path.join( working_directory, path )
        break

    if new_flag:
      new_flags.append( new_flag )
  return new_flags


def IsHeaderFile( filename ):
  extension = os.path.splitext( filename )[ 1 ]
  return extension in [ '.h', '.hxx', '.hpp', '.hh' ]


def GetCompilationInfoForFile( filename ):
  # The compilation_commands.json file generated by CMake does not have entries
  # for header files. So we do our best by asking the db for flags for a
  # corresponding source file, if any. If one exists, the flags for that file
  # should be good enough.
  if IsHeaderFile( filename ):
    basename = os.path.splitext( filename )[ 0 ]
    for extension in SOURCE_EXTENSIONS:
      replacement_file = basename + extension
      if os.path.exists( replacement_file ):
        compilation_info = database.GetCompilationInfoForFile(
          replacement_file )
        if compilation_info.compiler_flags_:
          return compilation_info
    return None
  return database.GetCompilationInfoForFile( filename )


def FlagsForFile( filename, **kwargs ):
  if database:
    # Bear in mind that compilation_info.compiler_flags_ does NOT return a
    # python list, but a "list-like" StringVec object
    compilation_info = GetCompilationInfoForFile( filename )
    if not compilation_info:
      return None

    final_flags = MakeRelativePathsInFlagsAbsolute(
      compilation_info.compiler_flags_,
      compilation_info.compiler_working_dir_ )
  else:
    relative_to = DirectoryOfThisScript()
    final_flags = MakeRelativePathsInFlagsAbsolute( flags, relative_to )

  return {
    'flags': final_flags,
    'do_cache': True
  }
    return os.path.dirname(os.path.abspath(__file__))


def MakeRelativePathsInFlagsAbsolute(flags, working_directory):
    if not working_directory:
        return list(flags)
    new_flags = []
    make_next_absolute = False
    path_flags = ["-isystem", "-I", "-iquote", "--sysroot="]
    for flag in flags:
        new_flag = flag

        if make_next_absolute:
            make_next_absolute = False
            if not flag.startswith("/"):
                new_flag = os.path.join(working_directory, flag)

        for path_flag in path_flags:
            if flag == path_flag:
                make_next_absolute = True
                break

            if flag.startswith(path_flag):
                path = flag[len(path_flag) :]
                new_flag = path_flag + os.path.join(working_directory, path)
                break

        if new_flag:
            new_flags.append(new_flag)
    return new_flags


def IsHeaderFile(filename):
    extension = os.path.splitext(filename)[1]
    return extension in [".h", ".hxx", ".hpp", ".hh"]


def GetCompilationInfoForFile(filename):
    # The compilation_commands.json file generated by CMake does not have entries
    # for header files. So we do our best by asking the db for flags for a
    # corresponding source file, if any. If one exists, the flags for that file
    # should be good enough.
    if IsHeaderFile(filename):
        basename = os.path.splitext(filename)[0]
        for extension in SOURCE_EXTENSIONS:
            replacement_file = basename + extension
            if os.path.exists(replacement_file):
                compilation_info = database.GetCompilationInfoForFile(
                    replacement_file
                )
                if compilation_info.compiler_flags_:
                    return compilation_info
        return None
    return database.GetCompilationInfoForFile(filename)


def FlagsForFile(filename, **kwargs):
    if database:
        # Bear in mind that compilation_info.compiler_flags_ does NOT return a
        # python list, but a "list-like" StringVec object
        compilation_info = GetCompilationInfoForFile(filename)
        if not compilation_info:
            return None

        final_flags = MakeRelativePathsInFlagsAbsolute(
            compilation_info.compiler_flags_,
            compilation_info.compiler_working_dir_,
        )
    else:
        relative_to = DirectoryOfThisScript()
        final_flags = MakeRelativePathsInFlagsAbsolute(flags, relative_to)

    return {"flags": final_flags, "do_cache": True}
27 changes: 15 additions & 12 deletions BUILD.bazel
@@ -45,36 +45,39 @@ cc_library(
"include/benchmark/benchmark.h",
"include/benchmark/export.h",
],
linkopts = select({
":windows": ["-DEFAULTLIB:shlwapi.lib"],
"//conditions:default": ["-pthread"],
}),
copts = select({
":windows": [],
"//conditions:default": ["-Werror=old-style-cast"],
}),
strip_include_prefix = "include",
visibility = ["//visibility:public"],
# Only static linking is allowed; no .so will be produced.
# Using `defines` (i.e. not `local_defines`) means that no
# dependent rules need to bother about defining the macro.
linkstatic = True,
defines = [
"BENCHMARK_STATIC_DEFINE",
] + select({
":perfcounters": ["HAVE_LIBPFM"],
"//conditions:default": [],
}),
linkopts = select({
":windows": ["-DEFAULTLIB:shlwapi.lib"],
"//conditions:default": ["-pthread"],
}),
# Only static linking is allowed; no .so will be produced.
# Using `defines` (i.e. not `local_defines`) means that no
# dependent rules need to bother about defining the macro.
linkstatic = True,
strip_include_prefix = "include",
visibility = ["//visibility:public"],
deps = select({
":perfcounters": ["@libpfm//:libpfm"],
":perfcounters": ["@libpfm"],
"//conditions:default": [],
}),
)

cc_library(
name = "benchmark_main",
srcs = ["src/benchmark_main.cc"],
hdrs = ["include/benchmark/benchmark.h", "include/benchmark/export.h"],
hdrs = [
"include/benchmark/benchmark.h",
"include/benchmark/export.h",
],
strip_include_prefix = "include",
visibility = ["//visibility:public"],
deps = [":benchmark"],
14 changes: 10 additions & 4 deletions MODULE.bazel
@@ -1,11 +1,16 @@
module(name = "google_benchmark", version="1.8.3")
module(
name = "google_benchmark",
version = "1.8.3",
)

bazel_dep(name = "bazel_skylib", version = "1.4.2")
bazel_dep(name = "platforms", version = "0.0.6")
bazel_dep(name = "rules_foreign_cc", version = "0.9.0")
bazel_dep(name = "rules_cc", version = "0.0.6")

bazel_dep(name = "rules_python", version = "0.24.0", dev_dependency = True)
bazel_dep(name = "googletest", version = "1.12.1", repo_name = "com_google_googletest", dev_dependency = True)
bazel_dep(name = "googletest", version = "1.12.1", dev_dependency = True, repo_name = "com_google_googletest")

bazel_dep(name = "libpfm", version = "4.11.0")

# Register a toolchain for Python 3.9 to be able to build numpy. Python
@@ -18,7 +23,8 @@ python.toolchain(python_version = "3.9")

pip = use_extension("@rules_python//python/extensions:pip.bzl", "pip", dev_dependency = True)
pip.parse(
hub_name="tools_pip_deps",
hub_name = "tools_pip_deps",
python_version = "3.9",
requirements_lock="//tools:requirements.txt")
requirements_lock = "//tools:requirements.txt",
)
use_repo(pip, "tools_pip_deps")