After yesterday’s post, this site is hosted in Google Cloud Storage without incurring any cost. Every good project needs its own CI/CD pipeline, so I decided to find a way to build and publish the site automatically. And I got it to work, using Google Cloud Build.

As with every little project, everything starts with research. I found dozens of promising leads, but nothing that was exactly what I was looking for. Besides the official GCP docs, these two are the pages I used to create this post:

Connecting GitHub and Google Cloud Build

Let’s get our hands dirty. The first thing was creating a trigger that fires every time there is a new push to the repository. For that, you need to use the Cloud Build product and make sure the right project is selected in the GCP console. In my case, my code is hosted on GitHub and not in GCP, which requires an extra step: connecting your GitHub repository to Google Cloud Build. There are multiple steps, but the user flow is quite detailed:

  1. Click on the Triggers section in Cloud Build
  2. Click on “Connect to repository” in the center of the page
  3. Add your GitHub repository URL (the HTTPS link works fine)
  4. Authorize Cloud Build in your GitHub account
  5. Install the Google Cloud Build app, at least in the repository you are going to use
  6. Read and acknowledge the “risks” of what you just did ;)

Creating the trigger and configuring the build

Next is creating the trigger, either right after the connection is done or from the Triggers section. It is relatively straightforward; your main job is to find a decent unique name. Add the name and the description, pick the repository, make sure “push to a branch” is selected and the branch regexp is correct, and click on create.
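If you prefer the terminal, the trigger can also be created with the gcloud CLI. This is just a sketch, not my exact setup: the owner, repository, and branch pattern below are placeholders you need to adjust:

```shell
# Placeholder values: change owner, repo, and branch pattern to match your setup
gcloud beta builds triggers create github \
  --repo-owner="your-github-user" \
  --repo-name="your-repo" \
  --branch-pattern="^master$" \
  --build-config="cloudbuild.yaml"
```
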

Before you continue, don’t skip one of the steps mentioned in the pages I listed above: you need to enable the Cloud Build API to be able to use Cloud Build. To do so, go to Settings in Cloud Build, follow the link, and enable the API. There you can also see the free tier: 120 build minutes per day, more than enough if you are only running a little static site.
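The same can be done from the terminal with a single command, assuming gcloud is already pointed at the right project:

```shell
# Enable the Cloud Build API for the currently selected project
gcloud services enable cloudbuild.googleapis.com
```
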

The next step is to create and add the build configuration file. I used what I learned from the pages referenced above to compose mine:

steps:
  - id: Update Permissions
    name: ''
    entrypoint: 'chmod'
    args: ['-v', '-R', 'a+rw', '.']
  - id: Build Jekyll Site
    name: 'jekyll/jekyll'
    args: ['jekyll', 'build']
    env: ['JEKYLL_ENV=production']
  - id: Deploy to Cloud Storage
    name: ''
    args: ['rsync', '-d', '-r', './_site', 'gs://']
timeout: '10m'

There are 3 build steps: making all files in the build workspace accessible to all users, building the static site, and uploading the files to the right storage bucket. To better understand the file syntax and what happens, the best option is to read the Cloud Build documentation and check the logs once the first build runs. The first step is needed because the second step runs as a non-root user, and the files won’t be readable otherwise.

Add your configuration to cloudbuild.yaml in the root of your repository, commit, and push. That should trigger your first build. Go to the History section, click on the build id, and enjoy it ;)
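You can also follow builds from the terminal. Assuming gcloud is authenticated against the right project, something like this shows recent builds and tails a log (BUILD_ID is a placeholder for a real build id):

```shell
# List the most recent builds with their status and duration
gcloud builds list --limit=5

# Stream the log of a specific build (replace BUILD_ID with a real id)
gcloud builds log --stream BUILD_ID
```
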


I don’t use Docker locally to build my Jekyll site. I tried it, and it works, but it makes things a bit slower. Regardless of that little inconvenience, it is a good way to stay “environment independent”. This is the command I used locally:

docker run --rm \
  -p 4000:4000 \
  --volume="$PWD:/srv/jekyll" \
  -it jekyll/jekyll:4 \
  jekyll serve

I want to see if I can modify or create a similar image that doesn’t need to update/install gems on each run, although it is nice that the default image uses the Gemfile in your folder. In Cloud Build, I can see that a more “precise” image would help with build time (the build itself takes a second, but the step takes close to a minute).
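A minimal sketch of what such a custom image could look like, assuming a standard Gemfile in the repository root (untested; the paths and permissions may need adjusting for the jekyll/jekyll base image):

```dockerfile
# Hypothetical custom image: bake the gems in at image build time so
# `jekyll build` doesn't have to resolve and install them on every run
FROM jekyll/jekyll:4
COPY Gemfile Gemfile.lock /srv/jekyll/
WORKDIR /srv/jekyll
RUN bundle install
```
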

In the current state, I found 2 minor issues: it looks like the png asset from the first post always gets uploaded to the bucket even if nothing has changed, and I’m unsure whether the environment is being picked up in the build step. I will see if adding -c to the last step, so it uses checksums instead of timestamps, helps with the first issue, and I will add some production-only part to the site to verify the latter.
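For the environment check, Jekyll exposes JEKYLL_ENV to templates as jekyll.environment in Liquid, so a small production-only fragment in a layout should confirm whether the variable reaches the build. An illustrative sketch:

```liquid
{% if jekyll.environment == "production" %}
  <!-- Only rendered when the site is built with JEKYLL_ENV=production -->
  <meta name="build-env" content="production">
{% endif %}
```
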