For a decade, I’ve been building and maintaining container images (ranging from lightweight Ubuntu bases to full Android/Node/Cordova/Ionic stacks) and publishing them to Docker Hub. That journey has taught me what scales across architectures, what keeps images trustworthy, and how to run pipelines that fail fast when something drifts.
Thank you, Docker Hub, for reliably hosting multiple terabytes of images and experiments over these years.
## Core principles that aged well
- **Ship multi-arch by default.** End users and CI runners span `amd64`, `arm64`, and (depending on the base) `arm/v7`, `s390x`, `ppc64le`. Build once, publish once, let the registry serve the correct manifest.
- **Prefer `latest` bases + frequent rebuilds.** Instead of pinning base images, I intentionally consume fresh bases (e.g., `ubuntu:latest` distro LTS tracks) and rebuild often. This lets upstream fixes land quickly, provided the pipeline also validates and scans on every build. Docker's own docs stress "rebuild your images often" to ensure updated dependencies and security fixes.
- **Automate verification, not hope.** Every build runs linting (Hadolint), image validation (Container Structure Tests), and vulnerability analysis (Docker Scout) before or alongside publishing. Trust is a pipeline outcome, not a tag name.
- **Keep images lean and purposeful.** Minimal base, minimal layers, `.dockerignore`, one concern per image.
- **Make provenance obvious.** I tag with CalVer (e.g., `v2025.08.22`) and keep `latest` moving. The date in the tag answers "when was this built?" without opening the registry UI; a quick client-side check is sketched below.
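For instance, `docker buildx imagetools inspect` shows which platforms a published tag serves and whether the CalVer tag and `latest` still point at the same digest (`beevelop/your-image` is the placeholder name used in the workflow below):

```bash
# List the per-platform manifests behind a published multi-arch tag
docker buildx imagetools inspect beevelop/your-image:latest

# Confirm the dated CalVer tag and latest resolve to the same digest
docker buildx imagetools inspect beevelop/your-image:v2025.08.22
```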
## A GitHub Actions pipeline that earns trust
Below are the building blocks I actually run across repos like docker-base, docker-android, docker-cordova, and docker-ionic. They are battle-tested patterns, not lab recipes.
Single-job, multi-arch builds with Buildx + QEMU are the simplest way to ship a manifest list:

```yaml
name: Docker Image

on:
  push:
    branches: [ "latest" ]
    tags: [ "v*.*.*" ]
  pull_request:
    branches: [ "latest" ]
  schedule:
    - cron: "0 10 * * *" # daily rebuild

env:
  IMAGE: beevelop/your-image # derived from repo name in my repos
  PLATFORMS: linux/amd64,linux/arm64/v8

jobs:
  build:
    runs-on: ubuntu-22.04
    permissions:
      contents: read
      packages: write
      pull-requests: write
      security-events: write # required for the SARIF upload below
    steps:
      - uses: actions/checkout@v5
      - name: Compute CalVer
        run: echo "CALVER=$(date -u +'%Y.%m.%d')" >> $GITHUB_ENV
      - name: Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.IMAGE }}
          # I keep 'latest' + CalVer; sha/ref tags are optional
          tags: |
            type=raw,value=latest
            type=raw,value=v${{ env.CALVER }}
            type=sha,format=short
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Lint Dockerfile early
      - name: Hadolint
        uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: ./Dockerfile
      # Build & push multi-arch. I keep cache in GHA to speed rebuilds.
      - name: Build & push
        id: build
        uses: docker/build-push-action@v6
        with:
          platforms: ${{ env.PLATFORMS }}
          push: ${{ github.event_name != 'pull_request' }}
          pull: true
          context: .
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          sbom: true
          provenance: true
      # Vulnerability analysis (comment on PRs, upload SARIF on branch builds)
      - name: Docker Scout (PR compare)
        if: ${{ github.event_name == 'pull_request' }}
        uses: docker/scout-action@v1
        with:
          command: compare
          # compare one concrete tag (the multi-line `tags` output is not
          # a valid image ref) against the published latest; assumes the
          # tag is available (pushed or loaded locally)
          image: ${{ env.IMAGE }}:v${{ env.CALVER }}
          to: ${{ env.IMAGE }}:latest
          only-severities: critical,high
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Docker Scout (CVEs to code scanning)
        if: ${{ github.event_name != 'pull_request' }}
        uses: docker/scout-action@v1
        with:
          command: cves
          image: ${{ env.IMAGE }}:latest
          sarif-file: scout.sarif
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload SARIF
        if: ${{ github.event_name != 'pull_request' }}
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: scout.sarif
```

**Why this works well**
- `pull: true` pulls fresh bases so `latest` actually means "up-to-date."
- Daily schedule + CalVer tag makes freshness obvious.
- Hadolint + Scout stop "works-on-my-machine" leaks and unvetted packages. Hadolint action reference; Docker Scout GHA docs.
- If single-job emulated builds get slow, split into per-platform jobs and stitch the manifest list with `docker buildx imagetools` in a follow-up job (see the sketch below). Docker's multi-platform docs show both approaches.
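A minimal sketch of that follow-up job, assuming hypothetical per-platform jobs `build-amd64` and `build-arm64` have already pushed single-arch tags (the `:latest-amd64` / `:latest-arm64` suffixes are placeholder names):

```yaml
  # Hypothetical follow-up job: merge per-platform tags into one manifest list
  merge:
    runs-on: ubuntu-22.04
    needs: [build-amd64, build-arm64] # assumed per-platform build jobs
    steps:
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Create multi-arch manifest
        run: |
          docker buildx imagetools create \
            --tag beevelop/your-image:latest \
            beevelop/your-image:latest-amd64 \
            beevelop/your-image:latest-arm64
```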
## Test-driven containers with Container Structure Tests (CST)

CST treats images like software: you write tests for file presence, metadata, ports, commands, env, etc., and fail the pipeline if the image drifts. The upstream project is in maintenance mode but remains very useful; there’s also a small GHA wrapper.
Example `tests/container.yaml`:

```yaml
schemaVersion: "2.0.0"
metadataTest:
  env:
    - key: ANDROID_HOME
      value: /opt/android
  # exposedPorts belongs under metadataTest in CST's schema
  exposedPorts: ["8080"]
  labels:
    - key: org.opencontainers.image.licenses
      value: MIT
fileExistenceTests:
  - name: "Gradle present"
    path: "/usr/share/gradle/bin/gradle"
    shouldExist: true
commandTests:
  - name: "Java available"
    command: "java"
    args: ["-version"]
    exitCode: 0
```

Run CST in the workflow (after a local build or against a pushed tag):
```yaml
      - name: Local test build for CST
        if: ${{ github.event_name == 'pull_request' }}
        uses: docker/build-push-action@v6
        with:
          context: .
          load: true
          push: false
          pull: true
          tags: ${{ env.IMAGE }}:pr
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Container Structure Tests
        if: ${{ github.event_name == 'pull_request' }}
        uses: plexsystems/container-structure-test-action@v0.3.0
        with:
          image: ${{ env.IMAGE }}:pr
          config: tests/container.yaml
```
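The same suite runs locally with the `container-structure-test` CLI; a quick sketch, assuming the image was built and tagged like the PR build above:

```bash
# Build locally, then run exactly what CI runs
docker build -t beevelop/your-image:pr .
container-structure-test test \
  --image beevelop/your-image:pr \
  --config tests/container.yaml
```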
## Dockerfile habits that kept images reliable

These reflect patterns across my repos (docker-base, docker-android, docker-android-nodejs, docker-cordova, docker-ionic); a sketch combining several of them follows the list:

- **Stay minimal.** Start from a distro LTS or purpose-built base. Remove build tools once finished (multi-stage builds help).
- **Use `.dockerignore`.** Exclude VCS junk, build outputs, docs.
- **Consolidate `apt` steps.** `apt-get update && apt-get install -y --no-install-recommends ...`, then clean up with `rm -rf /var/lib/apt/lists/*`.
- **Label everything.** Use OCI labels (`org.opencontainers.image.*`) for title, description, source, docs, authors, licenses - CST can assert these.
- **Deterministic order.** Sort multi-line package lists to reduce diff noise and simplify reviews.
- **Ephemeral containers.** Keep runtime state out of the image; use volumes/config for mutables.
- **Platform awareness.** Some stacks only ship `linux/amd64` (e.g., Android tooling). Base images can be multi-arch even when the final app isn’t (just set `PLATFORMS` accordingly per repo).
(Docker’s best-practices guide still nails the fundamentals - layers, cache behavior, etc.)
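A condensed sketch of several of these habits in one Dockerfile; the base, package list, and label values are illustrative placeholders, not lifted from a specific repo:

```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:latest

# One consolidated apt layer: sorted packages, no recommends, clean lists
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        unzip \
    && rm -rf /var/lib/apt/lists/*

# OCI labels for provenance - CST can assert these later
LABEL org.opencontainers.image.title="your-image" \
      org.opencontainers.image.source="https://github.com/beevelop/docker-base" \
      org.opencontainers.image.licenses="MIT"
```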
## Lint early with Hadolint
Hadolint encodes Dockerfile best practices as rules (naming, layer hygiene, security suggestions). I run it on every PR and push:
```yaml
      - name: Hadolint
        uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: ./Dockerfile
```

Add a repository-level `.hadolint.yaml` to tune or suppress rules for special cases (e.g., Android SDK quirks).
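A minimal sketch; the waived rule (DL3008, apt version pinning) is an illustrative choice that happens to match the no-pinning stance below:

```yaml
# .hadolint.yaml
ignored:
  - DL3008 # "pin versions in apt-get install" - waived by design
failure-threshold: warning
```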
## Make releases legible with Calendar Versioning (CalVer)
I’ve found CalVer to be the most effective scheme for Docker images. It answers the practical question users have - “how old is this image?” - and aligns perfectly with frequent rebuilds:
- **Tag shape:** `vYYYY.MM.MICRO` (e.g., `v2025.08.1`); the workflow above uses the UTC build date (`v2025.08.22`) as an equivalent day-based variant.
- **Publish policy:** Always publish `latest` and the CalVer tag together.
- **Discovery:** `docker pull beevelop/...:latest` for most users; CalVer for audits and rollbacks.
This keeps the “what/when” visible in the tag while preserving the ergonomics of `latest`.
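In practice that makes rollbacks a tag away (using the placeholder image name from earlier):

```bash
# Day-to-day: follow the moving tag
docker pull beevelop/your-image:latest

# Audit or rollback: pin the exact dated build
docker pull beevelop/your-image:v2025.08.22
```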
## Maintenance routines that don’t crumble at scale
- **Daily rebuilds on a schedule + `pull: true`.** This pulls in upstream CVE fixes and base improvements automatically.
- **Fail-fast gates:** Hadolint (lint), CST (behavior/metadata), Docker Scout (CVEs). PRs show diffs and comments automatically.
- **Cache wisely:** `cache-from`/`cache-to: type=gha` keeps rebuilds fast even with daily jobs. Inline or registry caches (e.g., in ECR) can accelerate builds even further; see the sketch after this list.
- **SBOM & provenance:** Turn them on in `build-push-action` for traceability.
- **Targeted platforms per repo:** e.g., docker-base publishes broad multi-arch; docker-android may stick to `linux/amd64`.
- **Transparent deprecations:** Leave final CalVer tags in place; move `latest` forward; update READMEs with support notes instead of deleting tags.
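A hedged sketch of the registry-cache variant. The ECR repository ref is a placeholder; `type=registry`, `mode=max`, and the `image-manifest`/`oci-mediatypes` attributes are standard Buildx cache options (the latter two are commonly needed for ECR compatibility):

```yaml
      - name: Build & push with registry cache
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ env.IMAGE }}:latest
          # placeholder cache repository; inline caching would instead use
          # cache-to: type=inline
          cache-from: type=registry,ref=123456789012.dkr.ecr.eu-west-1.amazonaws.com/your-image:buildcache
          cache-to: type=registry,ref=123456789012.dkr.ecr.eu-west-1.amazonaws.com/your-image:buildcache,mode=max,image-manifest=true,oci-mediatypes=true
```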
## Why I don’t pin bases (and sleep fine)
Pinning digests improves reproducibility, but it also defers fixes until you remember to bump them. My approach is the inverse: accept the churn of `latest` bases and neutralize the risk with:
- Scheduled rebuilds,
- Automated lint/validation/scanning, and
- CalVer tags to make timelines obvious.
What started as a few base images grew into a long-lived, multi-arch catalog that others depend on. The through-line wasn’t clever Dockerfile tricks; it was discipline in CI: build for every platform you care about, rebuild often, lint and test every change, scan every artifact, and tag with the date so users instantly understand what they’re pulling.



