GitHub Actions to Bitbucket Pipelines and minimizing the pain of the journey
Recently I’ve had to move my mental model from the paradigms of GitHub Actions to Bitbucket Pipelines, and boy howdy, it’s been kinda disorienting.
Bitbucket is not exactly a popular choice compared to GitHub, and at first I struggled with the monolithic bitbucket-pipelines.yml, the differences in syntax between GHA and Bitbucket Pipelines, and its sometimes arcane API. I’m in a better place now with Bitbucket, we’ve become friends, but the first time I tried to replicate authenticating to GCP with WIF – something easy on GHA – in Bitbucket, I was left spinning.
This is the first post in a four-part series on friction I’ve hit using Bitbucket Pipelines after years on GitHub Actions. While I feel I have material to dunk on Bitbucket, I’d rather share what I’ve learned the hard way so the next person coming from Actions doesn’t lose time and pieces of sanity to the same gotchas I did.
The mental-model gap
Honestly, the friction I’ve experienced doesn’t come from Bitbucket Pipelines being altogether worse than GitHub Actions. Bitbucket is just built differently, with different objectives and architectural bets. If you want Actions with a different flavor, you’re not going to get that from Bitbucket Pipelines. But if you’re willing to endure some pain, you can lean into what works and navigate around the rest.
Here’s the shortest version I can give:
- Bitbucket Pipelines has one `bitbucket-pipelines.yml` at the repo root, while GHA can have many workflow files.
- GHA has a huge marketplace of Actions; add in composite actions and reusable workflows and you’ve got a pretty hefty amount of support for reusable units. Bitbucket Pipelines has Pipes, with a small official catalog plus anything you can spin up in a Docker image yourself.
- GHA has jobs and steps; Bitbucket Pipelines has steps and scripts. The terminology and concepts overlap, and it can be a chore to rearrange them in your brain.
- GHA handles cross-step data better through outputs. Bitbucket suffers here when moving data between steps.
- Marketplace actions for AWS/GCP/Azure with OIDC built in make cloud authentication easy-ish on GHA. Bitbucket involves a lot of wiring up token exchange and other nitty-gritty that I’ll get into, because I care about this bit a lot.
- GHA handles filtering on triggers (with `on:`) better than the top-level keys of Bitbucket Pipelines such as `default`, `branches`, `tags`, etc.

The platforms are doing roughly the same job, just differently.
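To make that last point concrete, here’s roughly the same branch filter expressed in both systems. The branch names and script are illustrative, and the two snippets would of course live in separate files on their respective platforms:

```yaml
# GitHub Actions: filtering lives on the trigger itself
on:
  push:
    branches: [main, "release/**"]

# Bitbucket Pipelines: the branch pattern IS the top-level key
pipelines:
  branches:
    "{main,release/*}":
      - step:
          name: CI
          script:
            - ./ci.sh
```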
One config file to rule them all
The single most disorienting thing for me coming from Actions was the single-file deal. In a GitHub repo I can have `ci.yml`, `release.yml`, `nightly.yml`, reusable workflows, and more. In Bitbucket, everything lives in `bitbucket-pipelines.yml`.
For small repos this is fine, perhaps even (arguably) better for its simplicity. For anything non-trivial it gets gnarly real fast: the file gets long, there’s a temptation to copy-paste across steps, and it’s harder to reason about.
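One partial workaround worth knowing up front: Bitbucket’s `custom:` section gives you named pipelines inside the one file, runnable from the Pipelines UI, a schedule, or the API. It’s the closest you get to separate workflow files. The pipeline names and scripts here are illustrative:

```yaml
pipelines:
  default:
    - step:
        name: CI
        script:
          - ./ci.sh
  custom:
    nightly:          # run via schedule, the UI, or the API, never on push
      - step:
          name: Nightly build
          script:
            - ./nightly.sh
    release:
      - step:
          name: Cut a release
          script:
            - ./release.sh
```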
Here’s some balm for that pain, in three parts:
1. Use `definitions:` with YAML anchors
Please, I cannot emphasize this enough: if you have a long pipeline file, move repeated step config into a `definitions:` block and pull it back in with YAML anchors. This is basic, plain YAML, nothing Bitbucket-specific, but it makes `bitbucket-pipelines.yml` a lot more readable.
```yaml
definitions:
  caches:
    pip-cache: ~/.cache/pip
  steps:
    - step: &run-tests
        name: Run tests
        image: python:3.12-slim
        caches:
          - pip-cache
        script:
          - pip install -e ".[test]"
          - pytest -q

pipelines:
  default:
    - step: *run-tests
  branches:
    main:
      - step: *run-tests
      - step:
          name: Deploy
          deployment: production
          script:
            - ./deploy.sh
```
Here the anchor (`&run-tests`) declares the reusable step once, and the alias (`*run-tests`) drops it in wherever it’s needed. Sadly, this is about the closest thing Bitbucket has to a reusable workflow.
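Anchors also compose with standard YAML merge keys (`<<:`), which, in my experience, Bitbucket’s parser accepts, so you can reuse a step’s body while overriding individual fields. A sketch, reusing the `&run-tests` anchor from above (the coverage variant is hypothetical):

```yaml
pipelines:
  branches:
    main:
      - step:
          # merge in the anchored step's fields, then override some of them
          <<: *run-tests
          name: Run tests (with coverage)
          script:
            - pip install -e ".[test]"
            - pytest -q --cov
```

One caveat: merge keys replace scalar and list values wholesale rather than merging them, so an overridden `script:` must repeat the full command list, as above.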
2. Use `changesets.includePaths` for monorepo-ish behavior
If you’ve ever wished you could run “only the steps that touch this directory,” Bitbucket actually has good support here!
```yaml
pipelines:
  default:
    - step:
        name: Test API
        condition:
          changesets:
            includePaths:
              - "services/api/**"
        script:
          - cd services/api && pytest
    - step:
        name: Test web
        condition:
          changesets:
            includePaths:
              - "apps/web/**"
        script:
          - cd apps/web && npm test
```
Steps run based on which files changed. This is legit nicer than the equivalent in Actions, where `tj-actions/changed-files` was a go-to action for years precisely because it handled this logic for you.
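For contrast, the nearest built-in GHA feature filters at the workflow level rather than per job, which is part of why third-party actions filled the gap. A sketch, with illustrative paths:

```yaml
# GitHub Actions: paths filtering decides whether the whole workflow runs,
# not which individual jobs run
on:
  push:
    paths:
      - "services/api/**"

jobs:
  test-api:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cd services/api && pytest
```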
3. When you really do need multiple pipelines, call them
Sometimes you actually want different pipelines for different concerns, and one `bitbucket-pipelines.yml` won’t do. Two options!
- The `trigger-pipeline` Pipe, which allows one pipeline to kick off another (potentially in a different repo) via Bitbucket’s API. Useful for build-then-deploy patterns where the deploy lives elsewhere, or vice versa.
- A webhook out to your own service, which then turns around and hits Bitbucket’s pipelines API to start a custom pipeline by name. Kinda gnarly, right?
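For the first option, here’s a sketch of a step using the `atlassian/trigger-pipeline` Pipe. The version tag, repo name, and variable values are illustrative; check the Pipe’s README for the current version and the full variable list:

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Kick off deploy in another repo
          script:
            - pipe: atlassian/trigger-pipeline:5.0.0     # pin to the current release
              variables:
                BITBUCKET_USERNAME: $TRIGGER_USERNAME    # app-password credentials
                BITBUCKET_APP_PASSWORD: $TRIGGER_APP_PASSWORD
                REPOSITORY: "deploy-repo"                # hypothetical target repo
                CUSTOM_PIPELINE_NAME: "deploy"           # a custom: pipeline, by name
                WAIT: "true"                             # block until it finishes
```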
But then, hey, you’ve got two pipelines, and you’d better have logging that correlates them. Use these when anchors and changesets aren’t enough – like when you need to straight-up run code out of a different repo. I’ll come back to the API mechanics of triggering pipelines from scripts in post three of this series.
What’s coming in the rest of the series
The single-file constraint is somewhat easy to get used to. The next three posts are about trickier bits:
- Post 2 — The Pipes gap, with a working GCP OIDC example. Bitbucket supports keyless auth to cloud providers, but you wire up the token exchange yourself. I’ll walk through a complete Workload Identity Federation setup against GCP and contrast it with what that beautiful friend of ours, `google-github-actions/auth`, does for you in Actions.
- Post 3 — The Bitbucket API UUID slog. Updating a deployment variable from a pipeline takes four chained API calls, because almost nothing in Bitbucket’s data model is addressable by name, for SOME reason, don’t ask me. I’ll walk through the chain with a working bash script.
- Post 4 — Three gotchas. The `pull-request-fulfilled` trigger (and others) silently skipping after merge, weird URL encoding for branch names with forward slashes, and Bitbucket Pipelines’ default shallow clone biting anything that needs real git history.
Jump around if something addresses your specific concern, or read in order – I’m not your mom, I don’t care.
Next in series: The Pipes Gap — getting secure GCP auth working in Bitbucket Pipelines