
Releases: BerkeleyLibrary/.github

v1.1.0: Tag images with the literal git-tag

29 Jan 19:58

Previously, only semver-like git tags were parsed and applied as image tags on built Docker images. As of v1.1.0, any git tag is applied verbatim as a tag on the built image: for example, tagging a commit production now tags the resulting image as production.

v1.0.0: Reproducible CI using docker-compose

24 Jan 21:37
3492b02

This initial release allows applications to (fairly) easily set up Docker-driven pipelines by adding a few simple files to their repos:

  1. docker-compose.ci.yml: An override file, layered atop the default docker-compose.yml, which defines the CI test environment. Applications must name the application service "app", clear its build definition, set image: ${DOCKER_APP_IMAGE}, and add an artifacts:/opt/app/artifacts volume.
  2. setup: An executable on the app service's PATH which performs initial setup. If this fails, the pipeline fails immediately. Use this to do things like scaffold your database.
  3. test: An executable on the app service's PATH which performs tests. You're free to perform as many tests as you like, in whatever manner you like, provided the script exits non-zero on failure (however you define that). Test results should be written to the /opt/app/artifacts directory (volume) in your app container to be archived and visible in the GitHub Actions UI.

Future releases will strive to make this less prescriptive and invasive, but for now you'll have to hew to these guidelines and restrictions.

Examples
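
Calling the reusable workflow

These notes assume the pipeline is driven from GitHub Actions (test results are archived and surfaced in the Actions UI), but they don't show the workflow file itself. The sketch below is one plausible caller that delegates to a reusable workflow published from this repository; the workflow path, filename, and ref are illustrative assumptions, not part of this release, so substitute whatever this repository actually publishes.

# .github/workflows/ci.yml -- illustrative only; the reusable-workflow
# path and filename below are assumptions, not part of this release
name: CI
on:
  push:
    branches: [main]
    tags: ["*"]
  pull_request:

jobs:
  build-and-test:
    # hypothetical workflow name; check BerkeleyLibrary/.github for the real one
    uses: BerkeleyLibrary/.github/.github/workflows/build-test-push.yml@v1.0.0
    secrets: inherit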

Preparing an app service

For an app service to work with this pipeline, it needs to use the image given in $DOCKER_APP_IMAGE, not have a build section, include an artifacts:/opt/app/artifacts volume for storing test results and other build artifacts, and eliminate any host-mounted volumes (to avoid permissions issues).

# docker-compose.ci.yml
services:
  app:
    build: !reset null      # drop the build section from the base file
    image: ${DOCKER_APP_IMAGE}
    volumes: !override      # replace the base file's volume list entirely
      - artifacts:/opt/app/artifacts

volumes:
  artifacts:
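
A note on the merge tags: !reset null discards the build section inherited from the base docker-compose.yml, and !override replaces the base file's volume list outright, so host-mounted volumes never reach CI. Both tags require a reasonably recent Docker Compose release.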

Scaffolding a Solr core

Many library applications require a customized Solr core, which can't be created at runtime due to Solr's API limitations. The solution is to commit your core configurations under ./solr/{coreName} and build a simple custom Solr image on top of them:

# solr/Dockerfile
FROM solr:8
COPY --chown=solr . /var/solr/data/

# docker-compose.ci.yml
services:
  solr:
    build: solr
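
Note that only the app service has to give up its build section; auxiliary services such as solr can still be built from the repository during CI.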

Initializing the CI environment

Initialization tasks should be written into a setup executable (i.e. script) on your app service's PATH. For example, in a Rails app:

#!/usr/bin/env ruby
require "fileutils"

APP_ROOT = File.expand_path("..", __dir__)

def system!(*args)
  system(*args) || abort("\n== Command #{args} failed ==")
end

FileUtils.chdir APP_ROOT do
  puts "\n== Preparing database =="
  system! "bin/rails db:prepare RAILS_ENV=test"
  system! "bin/rails db:prepare"

  puts "\n== Seeding Solr index =="
  system! "bin/rails geoblacklight:index:seed"

  puts "\n== Removing old logs and tempfiles =="
  system! "bin/rails log:clear tmp:clear"
end
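
Because setup runs inside the app container, the Rails commands above run against whatever database and Solr services the compose files define for the CI environment.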

Running tests

Tests are run much like setup tasks, but via a test script. Note that artifacts are written to the /opt/app/artifacts directory in the Rails example below:

#!/usr/bin/env ruby
require "fileutils"

APP_ROOT = File.expand_path("..", __dir__)

# Check test coverage when running rspec
ENV['COVERAGE'] = '1'

# Test commands (run in order)
TESTS = [
  %w(rspec -f html --out artifacts/rspec.html),
  %w(rubocop -f html --out artifacts/rubocop.html),
]

FileUtils.chdir APP_ROOT do
  exit TESTS.reduce(true) { |passed, test| system(*test) && passed }
end
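
The reduce here keeps running the remaining test commands even after one fails, while still exiting non-zero overall if any of them failed, so the build is marked as failed but all of the HTML reports still land in artifacts/ for archiving.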