CodeBuild might not be the continuous integration server that everyone moves to in droves, especially as a standalone service, but I think there is a use case for it beyond test and build infrastructure: short-lived compute instances.

CodeBuild forces users to run build steps in short-lived Docker containers, and it’s more straightforward to work with than the EC2 Container Service, especially when it comes to configuring tasks for short-lived containers.

CodeBuild provides the containers without requiring you to configure a virtual private cloud or a security group. It fits nicely between Lambda and EC2. In Lambda, Amazon manages most of the system, including how to connect to it, while locking down the ability to modify it. EC2 requires full management, as if the machine were your laptop. CodeBuild provides a container with an isolated file system. AWS still handles how to connect to the container while giving users what feels like full access to the operating system.

For most use cases the containers act like newly provisioned virtual machines that come online almost instantly and in parallel. Since a CodeBuild project is configured with an assume-role policy and its containers have a fully writable file system, it can do some real work, including accessing other internal or external resources. When the work is done, the container goes away, guaranteeing that the next process begins from the same starting point as the last one.

CodeBuild builds respect a file known as buildspec.yml. For GitHub projects, that file must exist in the repo’s root.

buildspec.yml

version: 0.2

phases:
  build:
    commands:
      - echo "Doing some work" > workdone.txt
artifacts:
  files:
    - workdone.txt
  discard-paths: yes

Sometimes a buildspec.yml in the root doesn’t fit. In those cases, the build commands can be configured inline in the project’s config file or through the console, neither of which is especially readable. However, the main point stands: the buildspec defines the commands that make up the “build”.

The thing that makes CodeBuild a little clunky to use purely as an on-demand container service is that it forces users to define a source repo to download into the working directory. It’s not hard to envision a use case where cloning the repo is purely ceremonial. Perhaps a user only wants to execute code that already lives on the image, or must pull from a code repository that Amazon doesn’t let us configure (which is anything other than GitHub, S3, or CodeCommit).
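In that case, the buildspec can simply ignore whatever source CodeBuild checked out and fetch the real code itself. A hypothetical sketch, where the repository URL and the `run.sh` entry point are placeholders (credentials handling is omitted):

```yaml
version: 0.2

phases:
  install:
    commands:
      # The checked-out source is ceremonial; pull the real code ourselves
      - git clone https://example.com/some/private-repo.git app
  build:
    commands:
      - cd app && ./run.sh
```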

CodeBuild can run up to 20 builds in parallel with any number of input parameters, and you’re only charged for machine execution time, unlike the EC2 Container Service, which requires managing server uptime even if the machines mostly sit idle.
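Fanning work out is then just repeated calls to `start_build` with per-run environment overrides. A sketch using boto3, where the project name `worker` and the `TASK_ID` variable are assumptions, not anything CodeBuild defines for you:

```python
# Sketch: launch one CodeBuild run per task, parameterized through
# environment variable overrides. The "worker" project is hypothetical.

def task_overrides(task_id):
    """Build the environmentVariablesOverride list for one task."""
    return [{"name": "TASK_ID", "value": str(task_id), "type": "PLAINTEXT"}]

def launch_tasks(client, task_ids, project="worker"):
    """Start one build per task id; returns the started build ids."""
    build_ids = []
    for task_id in task_ids:
        resp = client.start_build(
            projectName=project,
            environmentVariablesOverride=task_overrides(task_id),
        )
        build_ids.append(resp["build"]["id"])
    return build_ids

if __name__ == "__main__":
    import boto3  # imported here so the helpers above need no AWS setup
    print(launch_tasks(boto3.client("codebuild"), range(5)))
```

Each container reads its `TASK_ID` from the environment inside the buildspec, does its slice of work, and disappears.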

If you have any interest in seeing what it’s like to spin up containers on demand with CodeBuild, you can clone this sample project, which uses Terraform to set up a CodeBuild project and an S3 bucket in which to place an artifact that results from the build.

As input, you’ll just have to provide a bucket name and run from a machine with AWS credentials configured properly. An easy way to get a valid configuration is through Python. I’d recommend setting up and sourcing a virtual environment before doing:

$(venv)> pip install awscli
$(venv)> aws configure

The credentials you give it should have permission to create resources in AWS. After configuring the credentials, running terraform apply should create a CodeBuild project that can be launched manually or invoked by another service such as Lambda or a CloudWatch cron task.
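A minimal Lambda handler to kick the project off on a schedule might look like the sketch below. The default project name `codebuild-demo` is a placeholder for whatever the Terraform config actually created:

```python
# Sketch of a Lambda handler that starts a CodeBuild run when invoked,
# e.g. by a CloudWatch scheduled event. Project name is a placeholder.
import os

PROJECT = os.environ.get("PROJECT_NAME", "codebuild-demo")

def handler(event, context, client=None):
    """Start a build and return its id (client is injectable for testing)."""
    if client is None:
        import boto3  # deferred so the module imports without boto3 present
        client = boto3.client("codebuild")
    resp = client.start_build(projectName=PROJECT)
    return {"buildId": resp["build"]["id"]}
```

The Lambda’s execution role would need `codebuild:StartBuild` on the project for this to work.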