CI/CD Examples
GitHub Self-Hosted Runner
To integrate the Cuica into a cloud-based GitHub CI/CD pipeline, a self-hosted runner must be instantiated on an on-premises server with network access to both GitHub and the Cuica. For this example, Docker Compose is used to instantiate the runner.
All placeholder values below are enclosed in <>.
Prerequisites are:
- Both docker and the compose plugin are installed on the on-premises machine
- A GitHub repository has been created and the actions configured to use a self-hosted runner
- A token has been generated for the Cuica and the token registered as a GitHub secret (a sketch of registering the secrets follows this list)
- The Cuica hostname has been registered as a GitHub secret
- The GitHub runner token is stored in a secret file located in .secrets/runner_token
- An actual device-under-test (DUT) is connected electrically to the Cuica via either the Squeak Board or the 20-pin directly
- The deployment configuration is set on the Cuica
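The runner token file and GitHub secrets can be prepared ahead of time. The following is a minimal sketch, assuming the gh CLI is authenticated and <OWNER>/<REPO> is the target repository; the secret names match those referenced by the workflow further below.
# Store the GitHub runner registration token where the compose file expects it
mkdir -p .secrets
echo "<RUNNER_TOKEN>" > .secrets/runner_token
# Register the Cuica token and hostname as GitHub Actions secrets
gh secret set CUICA_API_TOKEN --repo <OWNER>/<REPO> --body "<CUICA_API_TOKEN>"
gh secret set CUICA_API_HOSTNAME --repo <OWNER>/<REPO> --body "<CUICA_HOSTNAME>"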
The following files illustrate how Docker Compose can be used to run the runner service on the on-premises machine.
In this example Dockerfile, the build environment for an RP2040 application is installed, along with the cuica CLI application, a custom Root CA, and the GitHub Actions runner.
FROM debian:bullseye-20230612-slim
ARG builder_uid="1000"
ARG builder_gid="1000"
# Install needed build tools
RUN apt-get update && \
apt-get install -y \
build-essential \
cmake \
gcc-multilib \
libtool \
pkg-config \
curl \
g++ \
git \
ca-certificates \
libstdc++-arm-none-eabi-newlib \
libnewlib-arm-none-eabi \
ninja-build \
jq \
python3-minimal \
pylint && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install required source deps and install cuica CLI app
# 6a7db34ff63345a7badec79ebea3aaef1712f374 is tag 1.5.1 for pico-sdk
# adfa11cf7584ae3c57cb77489b5af1681002f47d is tag 1.6.1 for cpplint
RUN git clone --depth 1 https://github.com/raspberrypi/pico-sdk.git /deps/src/pico-sdk && \
git -C /deps/src/pico-sdk fetch --depth 1 origin 6a7db34ff63345a7badec79ebea3aaef1712f374 && \
git -C /deps/src/pico-sdk checkout 6a7db34ff63345a7badec79ebea3aaef1712f374 && \
git clone --depth 1 https://github.com/cpplint/cpplint.git /deps/src/cpplint && \
git -C /deps/src/cpplint fetch --depth 1 origin adfa11cf7584ae3c57cb77489b5af1681002f47d && \
git -C /deps/src/cpplint checkout adfa11cf7584ae3c57cb77489b5af1681002f47d && \
curl -o /usr/bin/cuica -L https://uatha.net/cuica-client-0.8.20 && \
echo "2e5e7ac6b537315776c053d348783400e24a146e7d305af79322f710faa67d0c /usr/bin/cuica" | shasum -a 256 -c && \
chmod 700 /usr/bin/cuica && \
cuica --version && \
curl -o /usr/local/share/ca-certificates/uatha-root-ca.crt -L http://uatha.net/uatha-root-ca.pem && \
echo "02257660f40d40121652f888bad3a268c2ea22c81ccbcce69210191fd3f3143c /usr/local/share/ca-certificates/uatha-root-ca.crt" | shasum -a 256 -c && \
curl -o /usr/local/share/ca-certificates/uatha-intermediate-ca.crt -L http://uatha.net/uatha-intermediate-ca.pem && \
echo "9259a361bf4c57cd8d15d6a6210f94e0aecc813ff24431ca17ead84fd4ddbe6f /usr/local/share/ca-certificates/uatha-intermediate-ca.crt" | shasum -a 256 -c && \
update-ca-certificates
# NOTE: If you have your own certificates, add them like this:
# COPY my-cert.pem /usr/local/share/ca-certificates/my-cert.crt
# RUN update-ca-certificates
# Create a local container user & create mount points and chown them to builder since that'll be the active user
RUN addgroup -gid $builder_gid builder && \
useradd -u $builder_uid -g $builder_gid -ms /bin/bash builder && \
mkdir -p /bld /home/builder/actions-runner && \
chown builder:builder /bld /home/builder/actions-runner
WORKDIR /home/builder
USER builder
SHELL ["/bin/bash", "-c"]
# Download the latest GitHub Actions runner
RUN cd actions-runner && \
curl -o actions-runner-linux-x64.tar.gz -L https://github.com/actions/runner/releases/download/v2.319.1/actions-runner-linux-x64-2.319.1.tar.gz && \
echo "3f6efb7488a183e291fc2c62876e14c9ee732864173734facc85a1bfb1744464 actions-runner-linux-x64.tar.gz" | shasum -a 256 -c && \
tar xzf actions-runner-linux-x64.tar.gz && \
rm actions-runner-linux-x64.tar.gz
# Copy entrypoint script
COPY --chown=builder:builder entrypoint.sh /home/builder/entrypoint.sh
RUN chmod +x /home/builder/entrypoint.sh
# Set the entrypoint
ENTRYPOINT ["/home/builder/entrypoint.sh"]
To build and run the service, Docker Compose is used. This compose file mounts the GitHub runner token secret so the container can register itself, applies container hardening options (read-only root filesystem, dropped capabilities, no privilege escalation, resource limits), and mounts named volumes for the build workspace and the runner configuration.
services:
rp2040-build-environment:
read_only: true
cap_drop:
- ALL
security_opt:
- no-new-privileges:true
deploy:
resources:
limits:
cpus: '1.0'
memory: 1024M
tmpfs:
- /tmp
image: rp2040-build-environment:latest
build:
context: .
container_name: rp2040-build-environment
restart: unless-stopped
tty: true
environment:
REPO_URL: "<REPO_URL>"
RUNNER_NAME: "rp2040-demo-runner"
RUNNER_LABELS: "self-hosted,docker"
secrets:
- RUNNER_TOKEN
volumes:
- .:${PWD}
- runner_work:/bld
- runner_actions:/home/builder/actions-runner
secrets:
RUNNER_TOKEN:
file: ./.secrets/runner_token
volumes:
runner_work:
runner_actions:
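With the Dockerfile, docker-compose.yml, entrypoint.sh, and .secrets/runner_token in place, the service can be built and started as follows. This is a minimal sketch; the service name matches the compose file above.
# Build the image and start the runner service in the background
docker compose build
docker compose up -d
# Follow the logs to confirm the runner registered with GitHub
docker compose logs -f rp2040-build-environment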
This entrypoint.sh script is what starts the GitHub runner. It uses the named volumes from the docker-compose.yml file to persist the runner configuration between service runs; otherwise the runner would appear to GitHub as a new instance on every start and would have to be re-registered with a fresh token.
#!/bin/bash
set -e
# Configuration variables
RUNNER_HOME="/home/builder/actions-runner"
RUNNER_WORKDIR="/bld"
# Read the GitHub Runner Token from the Docker secret file
if [ -f "/run/secrets/RUNNER_TOKEN" ]; then
RUNNER_TOKEN=$(cat /run/secrets/RUNNER_TOKEN)
if [ "${RUNNER_TOKEN}" == "NO_RUNNER" ]; then
# If it's desired to just build with this container
# interactively, sleep instead of running the runner
echo "Starting without GitHub Actions runner..."
exec sleep infinity
fi
else
echo "ERROR: Runner token not found in /run/secrets/RUNNER_TOKEN"
exit 1
fi
# Reuse the existing runner configuration if one was persisted in the named volume
if [ -f "${RUNNER_HOME}/.runner" ]; then
echo "Reusing previous runner configuration..."
else
# Configure the runner
echo "Configuring the GitHub Actions runner..."
${RUNNER_HOME}/config.sh \
--unattended \
--url "${REPO_URL}" \
--token "${RUNNER_TOKEN}" \
--name "${RUNNER_NAME}" \
--labels "${RUNNER_LABELS}" \
--work "${RUNNER_WORKDIR}"
fi
# Run the runner
echo "Starting the GitHub Actions runner..."
exec ${RUNNER_HOME}/run.sh
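As the entrypoint shows, placing the sentinel value NO_RUNNER in the secret file skips runner registration, which is useful when the container is only wanted as an interactive build environment. A minimal sketch of that mode:
# Skip runner registration and keep the container idle
echo "NO_RUNNER" > .secrets/runner_token
docker compose up -d
# Open a shell in the build environment for interactive builds
docker compose exec rp2040-build-environment bash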
Once the runner is online, the following GitHub Actions workflow builds the embedded software, bundles the Binary Image and a Test Script into a Test Suite, runs the Test Suite, and archives the test results.
name: Build and Test rp2040-demo
on:
push:
branches:
- '**'
pull_request:
branches: [ main ]
workflow_dispatch:
inputs:
enable_verbose:
description: 'Enable verbose logging (true or false)'
required: true
default: 'false'
jobs:
build:
runs-on: self-hosted
env:
CUICA_API_TOKEN: ${{ secrets.CUICA_API_TOKEN }}
CUICA_API_HOSTNAME: ${{ secrets.CUICA_API_HOSTNAME }}
outputs:
bundle_name: ${{ steps.set_output.outputs.bundle_name }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Initialize Build
run: |
mkdir -p bld
cmake -G Ninja -DREPO_ROOT="${GITHUB_WORKSPACE}" \
-DCMAKE_TOOLCHAIN_FILE="${GITHUB_WORKSPACE}/cmake/arm_pi_pico_gcc_toolchain.cmake" -B bld -S src
- name: Build Application
run: |
cmake --build bld
- name: Test Cuica Connection
run: cuica system info
- name: Create Test Suite Bundle
id: set_output
run: |
binary_image="bld/gpio_read.elf"
test_script="test/read.py"
bundle_tar="rp2040-demo-${GITHUB_SHA:0:8}.tar"
bundle_name="${bundle_tar}.gz"
# Build cuica suite
set -ex
cuica suite build --label rp2040-demo-read --scripts $test_script --binaries $binary_image -o $bundle_tar
gzip $bundle_tar
echo "BUNDLE_NAME=$bundle_name" >> $GITHUB_ENV
echo "bundle_name=$bundle_name" >> $GITHUB_OUTPUT
# Archive the bundle as an artifact so it is retained and the test job can download it
- name: Archive Bundle
uses: actions/upload-artifact@v4
with:
name: ${{ steps.set_output.outputs.bundle_name }}
path: ${{ env.BUNDLE_NAME }}
test:
runs-on: self-hosted
needs: build
env:
CUICA_API_TOKEN: ${{ secrets.CUICA_API_TOKEN }}
CUICA_API_HOSTNAME: ${{ secrets.CUICA_API_HOSTNAME }}
outputs:
results_bundle_name: ${{ steps.set_output.outputs.results_bundle_name }}
steps:
# Pull in the bundle from the other job
- name: Download artifact
uses: actions/download-artifact@v4
with:
name: ${{ needs.build.outputs.bundle_name }}
# Remove all cuica suites with matching tag
- name: Clean Cuica
run: cuica suite remove --tag rp2040-demo
- name: Add Test Suite
run: |
set -euo pipefail
echo "Bundle name: ${{ needs.build.outputs.bundle_name }}"
# Add test suite and get testSuiteUuid
test_suite_uuid=$(cuica suite add --bundle ${{ needs.build.outputs.bundle_name }} | jq -r -c .TestSuitesAdded.testSuiteUuids[0])
echo "Test Suite UUID: $test_suite_uuid"
# Tag the suite so it can be easily cleaned up
cuica suite tag $test_suite_uuid --add rp2040-demo
# Set the test suite uuid in the environment for use in subsequent steps
echo "TEST_SUITE_UUID=$test_suite_uuid" >> $GITHUB_ENV
- name: Run Test Suite
run: |
set -euo pipefail
verbose=$([ "${{ github.event.inputs.enable_verbose }}" = "true" ] && echo "-vvv" || echo "")
# Run the test suite and get testSuiteRunUuid
test_suite_run_uuid=$(cuica ${verbose} suite run $TEST_SUITE_UUID | jq -rc .TestSuiteRunCompleted.testSuiteRunUuid)
echo "Test Suite Run UUID: $test_suite_run_uuid"
# Fetch and output test results
cuica ${verbose} suite-run get $test_suite_run_uuid
cuica suite-run get $test_suite_run_uuid | tee run_data.json
# Determine the result of the test
result=$(jq -r '.testSuiteRuns[] | .result' run_data.json)
if [[ "$result" != "pass" ]]; then
echo "Test failed with result: $result"
exit 1
fi
# Extract out the script output and program console output
script_run_uuid=$(jq -r '.testSuiteRuns[] | .testCaseRuns[1] | .testScriptRunUuid' run_data.json)
binary_program_uuid=$(jq -r '.testSuiteRuns[] | .testCaseRuns[1] | .binaryImageProgramUuid' run_data.json)
echo "Found Script Run UUID: $script_run_uuid, Binary Program UUID: $binary_program_uuid"
# Fetch script run and results, and fetch binary program output
cuica script-run stdout $script_run_uuid | tee script_run_output.txt
cuica binary-program stdout $binary_program_uuid | tee binary_program.txt
results_bundle_name="rp2040-demo-${GITHUB_SHA:0:8}-results.tgz"
tar czf ${results_bundle_name} script_run_output.txt binary_program.txt
echo "RESULTS_BUNDLE_NAME=$results_bundle_name" >> $GITHUB_ENV
- name: Archive Results Bundle
uses: actions/upload-artifact@v4
with:
name: ${{ env.RESULTS_BUNDLE_NAME }}
path: ${{ env.RESULTS_BUNDLE_NAME }}
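Beyond pushes and pull requests, the workflow can also be started by hand with verbose logging enabled. A minimal sketch using the gh CLI, assuming the workflow above is committed under .github/workflows/ and <OWNER>/<REPO> is the repository:
# Trigger a manual run with verbose cuica output
gh workflow run "Build and Test rp2040-demo" --repo <OWNER>/<REPO> -f enable_verbose=true
# List recent runs of the workflow to check status
gh run list --repo <OWNER>/<REPO> --workflow "Build and Test rp2040-demo"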
Jenkins Agent
To integrate the Cuica into an on-premises Jenkins CI/CD pipeline, a Jenkins agent must be running on a server with network access to both Jenkins and the Cuica.
Prerequisites are:
- The Jenkins Docker plugin has been installed and the Jenkins agent configured
- The <CUICA_API_TOKEN_CREDENTIALS_ID> has been registered as a secret text credential in Jenkins
- The code to build this example exists in <REGISTRY_URL>
- A build environment docker image containing both the toolchain and cuica CLI application has been pushed to the <REGISTRY_URL> docker registry (see the Dockerfile below and the push sketch after this list)
- An actual DUT is connected electrically to the Cuica via either the Squeak Board or the 20-pin directly
- The deployment configuration is set on the Cuica
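Building and publishing the build environment image referenced above can be done as follows. This is a minimal sketch; <REGISTRY_URL> and <BUILD_ENVIRONMENT_IMAGE> are the same placeholders used in the Jenkinsfile below, and the Dockerfile is the one at the end of this page.
# Build the toolchain + cuica image
docker build -t <REGISTRY_URL>/<BUILD_ENVIRONMENT_IMAGE> .
# Authenticate against the registry and push the image
docker login <REGISTRY_URL>
docker push <REGISTRY_URL>/<BUILD_ENVIRONMENT_IMAGE>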
The following Jenkinsfile illustrates how a Test Suite Bundle is built and tested against a target DUT.
node {
try {
stage('Checkout') {
gitSHA = checkout([$class: 'GitSCM', ...)
}
lock("<CUICA_LOCK>") {
stage('Test Connectivity') {
docker.withRegistry('<REGISTRY_URL>', '<REGISTRY>') {
docker.image('<BUILD_ENVIRONMENT_IMAGE>').inside {
withCredentials([string(credentialsId: '<CUICA_API_TOKEN_CREDENTIALS_ID>', variable: 'CUICA_API_TOKEN')]) {
sh """set -e
cuica --version
cuica -v system info
"""
}
}
}
}
}
stage('Build') {
docker.withRegistry('<REGISTRY_URL>', '<REGISTRY>') {
docker.image('<BUILD_ENVIRONMENT_IMAGE>').inside {
sh 'rm -rf bld && mkdir bld'
dir ('bld') {
sh "cmake -G Ninja -DCMAKE_TOOLCHAIN_FILE=toolchain.cmake ../src"
sh "cmake --build ."
}
}
}
}
lock("<CUICA_LOCK>") {
stage("Test") {
docker.withRegistry('<REGISTRY_URL>', '<REGISTRY>') {
docker.image('<BUILD_ENVIRONMENT_IMAGE>').inside {
withCredentials([string(credentialsId: '<CUICA_API_TOKEN_CREDENTIALS_ID>', variable: 'CUICA_API_TOKEN')]) {
def gitSHAShortened = gitSHA.substring(0, 8)
def label = "${config.dutFunctionName}-${gitSHAShortened}"
// Create the bundle
def binaryImage = 'my-binary.elf'
def testScript = 'my-script.py'
def bundleName = "bundle-${label}.tar"
sh """set -e
cuica suite build --label ${label} --scripts ${testScript} --binaries ${binaryImage} -o ${bundleName}
# Clean off Cuica
cuica suite remove --all
gzip ${bundleName}
"""
echo "Created ${bundleName}"
// Save bundle as artifact
archiveArtifacts artifacts: '*.tar.gz', fingerprint: true
// Import the Test Suite Bundle
def testSuiteUuid = sh(script: """set -e
cuica suite add --bundle ${bundleName}.gz | \
jq -r -c .TestSuitesAdded.testSuiteUuids[0]
""", returnStdout: true).trim()
// Add a tag
sh """set -e
cuica suite tag ${testSuiteUuid} --add ${config.dutFunctionName}
"""
// Run the Test Suite and get the Test Suite Run UUID
def testSuiteRunUuid = sh(script: """set -e
cuica -v suite run ${testSuiteUuid} | \
jq -rc .TestSuiteRunCompleted.testSuiteRunUuid
""", returnStdout: true).trim()
// Fetch and output the Test Suite Run results
sh """set -e
cuica -v suite-run get ${testSuiteRunUuid}
cuica suite-run get ${testSuiteRunUuid} | tee run_data.json
"""
// If the result doesn't match the expected result, or no expected result is
// defined and the result is anything but 'pass', mark the build as unstable
def result = sh(script: "jq -r '.testSuiteRuns[] | .result' run_data.json", returnStdout: true).trim()
if (((result != config.expectedResult) && (config.expectedResult != null))
|| (result != 'pass' && config.expectedResult == null)) {
unstable("Unexpected test result: ${result}, was expecting ${config.expectedResult}, setting result to UNSTABLE")
} else if (result == 'abort' || result == 'unfinished') {
error("Test aborted or unfinished, setting result to FAILURE")
}
// Lastly, gather the output and the results file
def scriptRunUuid = sh(script: "jq -r '.testSuiteRuns[] | .testCaseRuns[1] | .testScriptRunUuid' run_data.json", returnStdout: true).trim()
def binaryProgramUuid = sh(script: "jq -r '.testSuiteRuns[] | .testCaseRuns[1] | .binaryImageProgramUuid' run_data.json", returnStdout: true).trim()
echo "Found: ${scriptRunUuid}, ${binaryProgramUuid}"
sh """set -e
cuica script-run stdout ${scriptRunUuid}
cuica script-run results ${scriptRunUuid}
cuica binary-program stdout ${binaryProgramUuid}
"""
}
}
}
}
}
}
catch (Exception err) {
error(err as String)
} finally {
// cleanup
}
}
The following Dockerfile is essentially the same as the GitHub one, minus the installation of the runner. It simply provides the toolchain for building the Binary Image and installs the cuica CLI application.
FROM debian:bullseye-20230612-slim
ARG builder_uid="1000"
ARG builder_gid="1000"
# Install needed build tools
RUN apt-get update && \
apt-get install -y \
build-essential \
cmake \
gcc-multilib \
libtool \
pkg-config \
curl \
g++ \
git \
ca-certificates \
libstdc++-arm-none-eabi-newlib \
libnewlib-arm-none-eabi \
ninja-build \
jq \
python3-minimal \
pylint && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install required source deps and install cuica CLI app
# 6a7db34ff63345a7badec79ebea3aaef1712f374 is tag 1.5.1 for pico-sdk
# adfa11cf7584ae3c57cb77489b5af1681002f47d is tag 1.6.1 for cpplint
RUN git clone --depth 1 https://github.com/raspberrypi/pico-sdk.git /deps/src/pico-sdk && \
git -C /deps/src/pico-sdk fetch --depth 1 origin 6a7db34ff63345a7badec79ebea3aaef1712f374 && \
git -C /deps/src/pico-sdk checkout 6a7db34ff63345a7badec79ebea3aaef1712f374 && \
git clone --depth 1 https://github.com/cpplint/cpplint.git /deps/src/cpplint && \
git -C /deps/src/cpplint fetch --depth 1 origin adfa11cf7584ae3c57cb77489b5af1681002f47d && \
git -C /deps/src/cpplint checkout adfa11cf7584ae3c57cb77489b5af1681002f47d && \
curl -o /usr/bin/cuica -L https://uatha.net/cuica-client-0.8.20 && \
echo "2e5e7ac6b537315776c053d348783400e24a146e7d305af79322f710faa67d0c /usr/bin/cuica" | shasum -a 256 -c && \
chmod 700 /usr/bin/cuica && \
cuica --version && \
curl -o /usr/local/share/ca-certificates/uatha-root-ca.crt -L http://uatha.net/uatha-root-ca.pem && \
echo "02257660f40d40121652f888bad3a268c2ea22c81ccbcce69210191fd3f3143c /usr/local/share/ca-certificates/uatha-root-ca.crt" | shasum -a 256 -c && \
curl -o /usr/local/share/ca-certificates/uatha-intermediate-ca.crt -L http://uatha.net/uatha-intermediate-ca.pem && \
echo "9259a361bf4c57cd8d15d6a6210f94e0aecc813ff24431ca17ead84fd4ddbe6f /usr/local/share/ca-certificates/uatha-intermediate-ca.crt" | shasum -a 256 -c && \
update-ca-certificates
# NOTE: If you have your own certificates, add them like this:
# COPY my-cert.pem /usr/local/share/ca-certificates/my-cert.crt
# RUN update-ca-certificates
# Create a local container user & create mount points and chown them to builder since that'll be the active user
RUN addgroup -gid $builder_gid builder && \
useradd -u $builder_uid -g $builder_gid -ms /bin/bash builder && \
mkdir -p /bld /home/builder/actions-runner && \
chown builder:builder /bld /home/builder/actions-runner
WORKDIR /home/builder
USER builder
SHELL ["/bin/bash", "-c"]
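Before wiring the image into Jenkins, it can be built and sanity-checked locally. A minimal sketch, assuming the image is tagged with the same <BUILD_ENVIRONMENT_IMAGE> placeholder used above:
# Build the image and confirm the cuica CLI and build tools are present
docker build -t <BUILD_ENVIRONMENT_IMAGE> .
docker run --rm <BUILD_ENVIRONMENT_IMAGE> cuica --version
docker run --rm <BUILD_ENVIRONMENT_IMAGE> cmake --version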