An introduction to Tilt, its concepts such as smart rebuilds and live updates for OpenFaaS functions.
A note from Alex:
Kevin Lindsay is a Full Stack Web Developer at Surge and an OpenFaaS community member. Whilst building OpenFaaS functions, he wanted to customise his local workflow and decided on Tilt. Learn how Kevin uses Tilt with his functions to get live updates and smart rebuilds of his code.
Creating a robust workflow for a software project is hard; when working with a new technology, it’s not uncommon to see people create entire software projects whose sole purpose is making the development experience as clean and easy as possible, such as Create React App or Serverless.
When I initially started using Kubernetes, I took a hard look at the ecosystem, made a few proof-of-concepts, and landed on Tilt. Similarly, when I first started looking at OpenFaaS, I looked at the development workflow offered by the documentation and the workshops, and determined that using Tilt would be an improvement to the experience. Here I’d like to show you a basic setup that uses a few of the concepts that Tilt offers, with a few extra tools that really make the development experience flexible.
What’s Tilt?
Kubernetes for Prod, Tilt for Dev
Tilt is a local development workflow automation toolkit targeted at developers using local Kubernetes clusters. It’s actually a generic workflow tool, similar to Jenkins and the like, allowing you to define a DAG whose nodes are referred to as resources, and it can be used to create workflows without ever touching a Kubernetes cluster, but it’s primarily focused on allowing developers to create applications using a local Kubernetes cluster, such as kind, docker for desktop, microk8s, minikube, etc.
Why should I consider using it?
Well, Kubernetes’ target audience is more devops focused, and its users typically need a certain amount of traditionally “devops” knowledge in order to be effective using it directly. Kubernetes’ APIs are largely concerned with CRUD operations on various generic resources that can be used to cleanly abstract most concepts traditionally in the “infrastructure” realm. In general, the Kubernetes ecosystem has leaned further and further over the years towards a declarative approach, meaning that you provide more code in exchange for running fewer commands. At the end of the day, though, the main Kubernetes APIs aren’t really designed for rapid iteration; they’re geared towards stable iteration. Kubernetes favors many commands that each have a very specific effect and compose well with other commands, rather than a few heavily opinionated commands.
In short, Kubernetes sits at a very low level of abstraction, allowing you to interact with the primitives in the framework. Tilt, on the other hand, sits at the highest level of abstraction, allowing you to spin up and actively develop on an entire stack using one command, and spin it back down with another single command.
Layers of Abstraction
When working on a project, especially when designing a repeatable set of architectural patterns, it can be easy to fall into the habit of using a single familiar tool that gets the job done, even if it’s harder to work with. On the flip side, if you find a shiny new tool, it’s easy to try to apply it to everything and provide your own API and various abstractions that let you “do everything with one command”. But once you start using that abstraction all over the place, especially on a team, you find that you have to turn every little thing into a parameter, and your “1 line command” becomes a 50 line command with newlines and environment variables scattered throughout it. Typically, I find that when this happens, the initial goal of being generic ruins what I believe are the more important goals of being simple and stable; it is far easier to opt in to abstraction than to opt out.
I’ve found that this problem can occur with just about everything related to software development, and over the years I’ve steadily grown fonder of the old adage “duplication is better than the wrong abstraction”. As a result, I generally avoid creating libraries, utility functions, external charts, and especially parameters set via environment variables until I’ve more-or-less exhausted all of the reasonable possibilities across my many projects and my team’s opinionated way of working.
This project relies on multiple tools, each of which could easily be used to expose numerous ways of abstracting away tuning (and potentially complexity), allowing you to change a single argument that trickles down the whole stack. For the purposes of keeping things simple, and making it clear which tools are doing what, this project doesn’t really use a ton of tunable parameters, and instead encourages you to experiment and form your own opinions.
The Tools
This project uses the tools described below, each picked specifically to handle a different layer of abstraction, ordered from least abstract to most abstract. The following sections describe what each tool operates on, the main declarative way to configure it, what abstractions it can expose, and the workflow advantages of the tool from a development standpoint. Since abstractions can be good, bad, or both, I will not be offering opinions on what to abstract.
Kubernetes - resources
Kubernetes, as described above, creates resources such as pods, load balancers, deployments, services, and so on, with the goal of providing a declarative way to describe application infrastructure in a generic (and therefore necessarily verbose) way.
Kubernetes uses yaml files that look something like this:
# ./k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  ...
spec:
  ...
in order to describe things that, when working together, can be thought of as an “application stack”; on a workflow level, however, they’re treated separately and just so happen to reference one another.
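For example, a Service doesn’t contain a Deployment; it just selects pods by label, so the two resources only reference each other indirectly. A minimal sketch (the app: my-app label is an assumption about how the deployment above labels its pods):
# ./k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app # assumes the deployment's pods carry this label
  ports:
    - port: 8080
      targetPort: 8080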
Workflow Advantages
Kubernetes allows you to effectively remove the “works on my machine” class of bugs, and it frees you from manually restarting failed applications, building your own load balancers, creating your own infrastructure-as-code language, and much, much more.
Helm - charts
Helm is a package manager for Kubernetes. Just like other package managers, Helm is used to abstract away the manual process of fetching something and putting it in the right place. In Kubernetes, this means fetching a series of resources and putting them into the cluster.
Helm’s packages are called charts, which are basically one or more folders of Kubernetes resources plus a values.yaml file, which exposes parameters that can be used in the resources via helm templates. A chart that has been installed is called a release. In normal Kubernetes, you just have resources; with Helm, you use the same yaml files, but you can expose parameters and run them through Go’s standard templating engine, with a few extra functions like those in sprig. Helm templates operate with roughly the same workflow as classics like handlebars.
Your previous yaml would generally move directly into a chart’s templates directory, and gain parameters like so:
# ./k8s/charts/app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  ...
spec:
  ...
And now you can fill the name parameter in with a default value via values.yaml:
# ./k8s/charts/app/values.yaml
name: my-app
This parameter can now be overridden by the consumer of the chart when creating a release.
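For example, someone installing the chart directly with Helm could override it on the command line (the release name here is made up):
helm install my-release ./k8s/charts/app --set name=custom-name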
Workflow Advantages
Helm allows you to install entire configurable stacks with just some config and a few commands. Need a mysql server for local development? Helm can handle that for you.
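Something like the following would do it; the Bitnami repository is just one common source of a mysql chart, not the only option:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-mysql bitnami/mysql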
Helmfile - releases
Helmfile is a tool that allows you to declaratively deploy Helm charts, with tons of added utilities that a lot of users really want, but which are outside the scope of what Helm tries to do. Helm is focused on giving you a declarative way to create a chart, whereas Helmfile is focused on a declarative way to install the chart.
Helmfile adds a helmfile.yaml, which describes an installation of one or more charts, and can then be used to do things like environment configuration (local vs develop vs prod), secrets management (putting API keys in encrypted files), showing you the diff between installations, utility functions like requiredEnv, automatically adding and updating chart repositories, and more.
A helmfile.yaml is at minimum one or more releases, but it’s pretty common to define your different environments and put the different values in them, for instance:
# ./k8s/helmfile.yaml
environments:
  local:
    values:
      - name: my-app-local
  prod:
    values:
      - name: my-app-prod
---
releases:
  - name: my-app
    chart: ./charts/app
    values:
      - name: {{ .Values.name }}
When running helmfile apply against this, you would specify which environment to use, such as -e local, and it would use the corresponding set of values. Helmfile also allows you to use .yaml.gotmpl files rather than simple .yaml, so that you can use more advanced templates once your team settles on some opinionated patterns. In practice, I typically use .yaml.gotmpl files for everything but helmfile.yaml, which is currently only available as .yaml.
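As a rough sketch of what a .yaml.gotmpl file buys you, a values file can demand settings from the environment and fail fast when they’re missing (the file path and the DB_PASSWORD variable here are hypothetical):
# ./k8s/environments/local.yaml.gotmpl
db:
  password: {{ requiredEnv "DB_PASSWORD" }} # aborts the render if DB_PASSWORD is unset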
Workflow Advantages
Pretty much everything that Helmfile does is expressly workflow related, but one of the main points is that Helmfile allows developers to have different flavors of a deployment. For example, instead of locally building a Docker container in the standard local environment, you could create your own custom variant such as local-prod, which uses the local parameters, such as environment variables, number of replicas, and CPU/memory limits (so you don’t blow up your laptop), but uses the production image from your container repository, so that you don’t have to spend time rebuilding the Docker container.
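A sketch of what such a variant might look like in a helmfile.yaml; the image names and tags are purely illustrative, not from the example project:
# ./k8s/helmfile.yaml (excerpt)
environments:
  local:
    values:
      - image: my-app:dev # built locally by Tilt
  local-prod:
    values:
      - image: registry.example.com/my-app:1.2.3 # pulled from the registry, never rebuilt locally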
Additionally, Helmfile acts as a single point of installation for both local development and devops installation, meaning developers and devops can use the exact same command to install the exact same application stack. This immensely reduces the number of things like environment variables needed in CI/CD tools like Jenkins, and allows dev and devops teams to focus on their respective concerns. It has the added benefit of creating an environment where developers may more easily encounter issues that would normally only be noticed once the project goes down a CI/CD pipeline, effectively lowering the devops triage workload rather than raising it.
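Concretely, a developer’s laptop and a CI/CD job can run the exact same command and differ only in the environment flag:
helmfile -e local apply # on a developer's machine
helmfile -e prod apply  # in the CI/CD pipeline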
Tilt - stacks
As stated at the beginning of this article, Tilt is a local development workflow toolkit targeted at developers using local Kubernetes clusters.
With Tilt, you provide a Tiltfile (case-sensitive), generally placed at the root of your project, written in Starlark, a slightly stricter flavor of Python intended to give those who create build pipelines and workflow scripts a high-level programming language that is easy to read for both devops and developers. Starlark removes certain features from Python and treats certain cases (such as equality operations) differently in order to fulfill its design goals of simplicity and stability.
Tilt provides a number of globally-scoped functions such as k8s_resource and local_resource that allow you to define things that need to be executed as quickly as possible, without having to describe exactly the order in which the steps happen; this type of solution is generally an implementation of a DAG. Instead of defining an explicit order of operations, you declare what a particular step needs; for example, a k8s_resource would probably need a Docker image, so it would have a resource_dep on a docker_build step.
For example:
# ./Tiltfile

# Build: tell Tilt what images to build from which directories
docker_build(
    'build',
    context='.',
    dockerfile='./Dockerfile'
)

# ...

# Deploy: tell Tilt what YAML to deploy
k8s_yaml('./k8s/deployment.yaml') # just a plain k8s resource, not a chart

# Watch: tell Tilt how to connect locally (optional)
k8s_resource(
    'my-app', # this is the `metadata.name` of the `deployment`
    port_forwards=8080,
    resource_deps=['build']
)
This small example would build a Docker image needed by a k8s Deployment, apply the Kubernetes yaml, and then attach to the deployment’s pod and port-forward automatically, so that you can access the deployment right from the Tilt UI.
You use tilt up to run a Tiltfile and create resources, and tilt down to re-run the same Tiltfile and remove resources.
It doesn’t stop at just an entire programming language, though: you can also expose configuration and validation via the config API, and then use a tilt_config.json file (or command-line arguments) to customize your local stack.
# ./Tiltfile
config.define_string('environment')
// ./tilt_config.json
{
  "environment": "local"
}
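The Tiltfile then reads the merged settings (file plus any command-line arguments) back with config.parse(); the 'local' fallback below is my own assumption, not something the config API imposes:
# ./Tiltfile
config.define_string('environment')
cfg = config.parse()
environment = cfg.get('environment', 'local') # fall back to 'local' if nothing was provided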
Not only that, but since you have the ability to parse arbitrary JSON and YAML in this language, you can create your own style of config if the default options aren’t powerful enough, giving you full flexibility over your configuration options.
For example, if I run a Tiltfile that runs a child project’s Tiltfile, it’s pretty easy to set it up to treat the child project’s tilt_config as reasonable defaults, and use the current project’s tilt_config as overrides, similar to how you override reasonable defaults with Helm.
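A rough sketch of that pattern, assuming a child project checked out at ./child; the paths and the merge logic are illustrative, not prescriptive:
# ./Tiltfile
# treat the child's config as defaults, and ours as overrides
settings = {}
settings.update(read_json('./child/tilt_config.json', default={}))
settings.update(read_json('./tilt_config.json', default={}))

include('./child/Tiltfile') # run the child project's Tiltfile as part of this one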
Note
At this time, Tilt does not actually use Helm releases, for implementation reasons (for example, Tilt needs to know what’s going on in a pod); you can still use Helm’s templating features with Tilt, but right now it just applies the resources directly without using Helm releases, which means that if you change your Tiltfile and remove a resource, cleaning up becomes a manual process.
As such, I do not recommend using tilt up on a production cluster; instead I recommend using it only locally and getting into the habit of completely resetting your local Kubernetes cluster if things get too messy for your liking. You can clean up manually, but it’s generally faster and simpler to just reset. There’s a todo out there to support Helm releases.
Workflow Advantages
Tilt is basically an entire build pipeline tool aimed at developers. As a result, you can make a project, give it to another developer, and all they have to do is run tilt up to have the same exact stack that would be running in production.
I’m not going to get into it within the scope of this example project, but my workflow for Tilt involves nesting the Tiltfiles of entirely separate projects, each with their own Tiltfile, so that you can spin up a project in isolation, but then, at the flip of a switch, spin up the entire business’ application stack, or any slice of it you want.
This means that when I start work for the day, I run tilt up, get my morning coffee, and when I come back in 5 minutes, I have our entire platform running locally, and can watch code and live update (meaning no Docker rebuild) anywhere across my stack.
It’s not only cool, it’s fast, and it makes stability easy by making it trivial to test complex interactions between different applications. For some teams with limited resources, the advantages of this workflow could be the make-or-break factor in deciding whether or not a monolithic stack is acceptable for the sake of speed and stability (yes, I have actually seen modern monoliths intentionally implemented at some companies).
Try it out
Requirements
- Set up a local kubernetes cluster.
  - I recommend docker for desktop for new users, and consulting tilt’s documentation if you have any issues or would like to see more options.
  - Test that it’s installed and that you have a cluster up and running via kubectl get all -A.
- Install helm.
  - Verify that it’s installed by running helm.
- Install helmfile.
  - Verify that it’s installed by running helmfile.
- Install tilt.
  - Verify that it’s installed by running tilt.
Download the project
Set it up
- Run watch "kubectl get all -A" to see what’s going on in Kubernetes.
- Type tilt up to have Tilt create the resources.
- Press space to open the Tilt UI.
- Watch your resources spin up.
Note
You may need to restart the gateway resource, as it has to wait for OpenFaaS’ gateway to be up and running. I don’t actually recommend exposing that resource in that manner, or installing OpenFaaS via Tilt, in a normal workflow, mainly because I might have multiple instances of Tilt running at a time and can’t bind to that port more than once; I’m only doing it here for the sake of this example.
Test it out
- Go to the gateway resource and copy the admin password.
- Click the link to the locally forwarded port.
- Input the username admin and paste the password.
- Click the echo function.
- Type in whatever you want and press Invoke.
- You should see your input echoed by the function.
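If you prefer a terminal, you should be able to invoke the same function through the forwarded gateway port (assuming the gateway is forwarded to 8080, as in the async example further down):
curl http://127.0.0.1:8080/function/echo -d "test"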
Play around
- Change code in src/handler.go, watch it update in Tilt.
  - Note: I’m not using live updates for this example; a partial Docker rebuild will occur.
- Go into tilt_config.json and change the environment to prod.
- Navigate to k8s/environments, look around, and play with environment variables.
- Go into k8s/environments/default.yaml.gotmpl and set $useCustomQueue to true.
  - Notice the queue-worker that was created specifically for this function in the Tilt UI.
  - Open up a new terminal and send curl http://127.0.0.1:8080/async-function/echo -d "test" -H 'X-Callback-Url: https://webhook.site/<your webhook id>'
  - You should see the queue worker invoke the function, and you should see the result at your webhook URL.
To tear everything down, just run:
tilt down
helm delete -n openfaas openfaas
kubectl delete ns openfaas-fn
In closing
Although there was a fair amount of text in this article, as you can see there’s really not much to a simple workflow. I hope I’ve given you enough information to see the possibilities of these tools and give you some ideas for how you can use them in your own projects.
Tweet to @openfaas with your comments, questions and suggestions.
You can find out more about Tilt on their website at: https://tilt.dev/