Section B: Improving the Workflows
This is Part 2 of a series illustrating how to write Kratix Promises.
👉🏾 Previous: Writing your first Promise
👉🏾 Next: Accessing secrets and storing state
In the previous tutorial, you wrote your first Promise, delivering Apps-as-a-service. In this section, you will dive deeper into Promise Workflows.
You will:
- learn about Workflow design patterns
- test-drive your Workflows locally
Workflow design patterns
To recap, a Kratix Promise is configured with a collection of Workflows that can be triggered at different stages of the Promise or Resource lifecycle. In the previous section, you defined a Workflow for the resource.configure
action. In that workflow, you defined a collection of Kubernetes Resources that needed to be created to fulfill the user's request.
However, there's virtually no limit to what you can do in a Workflow. You can, for example, download software, run imperative commands, wait for manual approvals, and more. Pipeline stages are also meant to be reusable: you can define a stage once and use it across multiple Promises.
To make that possible, it's important to follow some design patterns when writing Workflows. In this section, you will learn about some of these design patterns.
Reusability
Workflows are a great place to validate and enforce common requirements. For example, if you write a stage that can check for necessary compliance requirements, that stage can be used by all applicable Pipelines. In addition, you can write stages to check for excess costs, labelling conventions, security risks, and more.
While most Workflows will have at least one stage with logic unique to that Promise, building the Kratix Pipeline stages with reusability in mind is a great way to make your platform extensible.
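For instance, a compliance stage can be as simple as a script that inspects the incoming request and fails fast when a convention is violated. The sketch below is illustrative only: it assumes yq is available in the pipeline image, and the required team label is a hypothetical convention, not something Kratix mandates.

```bash
#!/usr/bin/env sh
# Hypothetical reusable stage: reject requests missing a required label.
# Assumes yq is installed in the pipeline image.
set -eu

# Read the label from the incoming request; default to empty if absent.
team=$(yq '.metadata.labels.team // ""' /kratix/input/object.yaml)

if [ -z "$team" ]; then
  echo "request rejected: metadata.labels.team is required" >&2
  exit 1
fi

echo "compliance check passed for team: $team"
```

Because the stage only reads from /kratix/input, it can be dropped into any Promise's Pipeline without modification.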
Idempotency
An idempotent Workflow guarantees that running it multiple times results in the same outcome. This matters because Workflows are automatically re-run as part of ongoing reconciliation.
Kubernetes controllers reconcile their objects in three scenarios:
- Object change
- Controller restart
- Default cadence
This means that, yes, the Workflow will run on every request for a Resource. But it will also run any time the controller restarts, as well as on a default cadence of every 10 hours.
You will therefore need to write your Workflows so that rerunning them does not cause adverse side effects. This is especially important when your Workflow interacts with external systems. In the next tutorials you will learn some strategies for making your Workflows idempotent.
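As a preview, the usual strategy is to have each stage check current state before acting. The sketch below is a hypothetical stage that provisions a bucket in an external object store; it assumes the MinIO client (mc) is installed in the image and that an alias named minio has already been configured. Because it only creates the bucket when it is missing, re-running the Workflow is harmless.

```bash
#!/usr/bin/env sh
# Hypothetical idempotent stage: create a bucket only if it does not exist.
# Assumes the MinIO client (mc) is installed and an alias "minio" is configured.
set -eu

bucket="my-app-assets"

if mc ls "minio/${bucket}" > /dev/null 2>&1; then
  echo "bucket ${bucket} already exists; nothing to do"
else
  mc mb "minio/${bucket}"
fi
```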
🤔 Wondering when to use Workflows versus creating a new Promise?
Platform design requires thinking about how to divide platform offerings into right-sized Promises and evaluating options for reusability and composability.
Each Promise is the encapsulation of something as-a-Service. But that doesn't mean that all platform users will want or need all types of Promises. It can be extremely helpful to create lower-level Promises for services that are composed into a number of higher-level offerings. For example, a Kubernetes Promise may never be something an application developer requests directly, but a number of software Promises like "environment" or "data store" may depend on a Kubernetes cluster that can be provisioned using such a Promise.
Promises are not the only way to create reusable components when designing your platform with Kratix. You can also create reusable Pipeline stages that can be run in a number of different Promise Workflows. For example, you may want to add default labels to certain types of resources. You can create a Pipeline stage which evaluates the resources set to be declared at the end of the Workflow and applies consistent labelling before writing.
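As an illustration, such a stage might run a small script over everything previous stages have written to /kratix/output. This sketch assumes yq is available in the image, and the label it applies is a hypothetical convention:

```bash
#!/usr/bin/env sh
# Hypothetical reusable stage: apply a default label to every document
# already written to /kratix/output. Assumes yq is installed.
set -eu

for file in /kratix/output/*.yaml; do
  yq -i '.metadata.labels."workshop.kratix.io/managed" = "true"' "$file"
done
```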
Since both Promises and Workflows can be reused, you may wonder when to use each. The best rule of thumb is to ask if you are describing a noun or a verb.
Nouns are most easily described as things. A database is a thing, so is a cluster, or an application, or any number of software offerings your platform may support. If you are trying to provide something as-a-Service you should be thinking about creating a Promise.
Verbs can be described as actions. Labelling, notifying, or scanning are all actions you may want to take rather than things you want to create. These actions can often be applied across multiple things, e.g. you may want to label both databases and queues. When you are trying to take action to fulfil a cross-cutting concern, a Workflow stage is the better fit.
Like all rules of thumb, this should be treated as a guide. When it comes to system design, what matters is that the design works for your context, and the Syntasso team is happy to work with you as you approach these discussions.
Testability
One of the advantages of the Kratix Workflow model is that it makes your Workflows easy to test: you can run them in isolation, without deploying them to a cluster, giving you a fast feedback loop while you develop.
The next section goes into more detail about how to test your workflows.
Test driving your workflows
In the previous tutorial you created a simple workflow that generated the Kubernetes resources needed to fulfill the user's request. You tested it by installing the Promise and then sending a resource request to the Platform.
A faster way to test your Workflows is to execute the container directly in your local environment. It's relatively easy to do, and it will save you a lot of time as you develop your Workflows.
Prepare your environment
When Kratix starts your workflow, it will mount the following three volumes:
- /kratix/input: Kratix adds the user's request into this directory as object.yaml.
- /kratix/output: the files in this directory will be scheduled to a matching Destination.
- /kratix/metadata: a directory where you can write documents with special meaning to Kratix; you will cover this in the next tutorials.
That means that, to test the Workflow locally, you will need to provide the three volumes to your Docker command. Note that you will also need to write the object.yaml containing an example request in the input directory.
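Concretely, the invocation you are building up to has this shape (the directories are created in the next step, and you will wrap the full command in a helper script shortly):

```bash
docker run --rm \
  --volume "$PWD/test/input":/kratix/input \
  --volume "$PWD/test/output":/kratix/output \
  --volume "$PWD/test/metadata":/kratix/metadata \
  kratix-workshop/app-pipeline-image:v1.0.0 sh -c "resource-configure"
```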
Still in the app-promise
directory, run:
mkdir -p test/{input,output,metadata}
touch test/input/object.yaml
Later, when you run docker run, you will mount the three directories into the container. But first, you need to create the object.yaml file in the input directory. Create a file called test/input/object.yaml with the following content:
apiVersion: workshop.kratix.io/v1
kind: App
metadata:
name: my-app
namespace: default
spec:
image: example/image:v1.0.0
service:
port: 9000
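Optionally, since the Promise from the previous section is already installed on the platform cluster, you can sanity-check this file against the App API before feeding it to the pipeline. This is a suggestion rather than a workshop step, and it assumes your kubectl context for the KinD cluster is named kind-platform:

```bash
kubectl --context kind-platform apply --dry-run=server --filename test/input/object.yaml
```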
With that in place, you can go ahead and test your pipeline! Since the next tutorials will ask you to rebuild and retest the image, it's a good idea to write simple helper scripts that you can re-run in the future. Run the following:
mkdir -p scripts
cat << 'EOF' > scripts/build-pipeline
#!/usr/bin/env bash
set -eu -o pipefail

# Build the pipeline image and load it onto the KinD platform cluster
docker build --tag kratix-workshop/app-pipeline-image:v1.0.0 workflows/resource/configure/mypipeline/kratix-workshop-app-pipeline-image
kind load docker-image kratix-workshop/app-pipeline-image:v1.0.0 --name platform
EOF
cat <<'EOF' > scripts/test-pipeline
#!/usr/bin/env bash
set -eu -o pipefail

scriptsdir=$(cd "$(dirname "$0")"; pwd)
testdir=$(cd "$(dirname "$0")"/../test; pwd)
inputDir="$testdir/input"
outputDir="$testdir/output"
metadataDir="$testdir/metadata"

# Rebuild the image, then start from a clean output directory
"$scriptsdir"/build-pipeline
rm -rf "${outputDir:?}"/*

command=${1:-"resource-configure"}

docker run \
    --rm \
    --volume ~/.kube:/root/.kube \
    --network=host \
    --volume "${outputDir}":/kratix/output \
    --volume "${inputDir}":/kratix/input \
    --volume "${metadataDir}":/kratix/metadata \
    --env MINIO_USER=minioadmin \
    --env MINIO_PASSWORD=minioadmin \
    --env MINIO_ENDPOINT=localhost:31337 \
    kratix-workshop/app-pipeline-image:v1.0.0 sh -c "$command"
EOF
chmod +x scripts/*
If you are running the workshop on a Mac, you may need to update the MINIO_ENDPOINT
above to host.docker.internal:31337
.
These scripts do the following:
- build-pipeline codifies the image build and loads the container image onto the KinD cluster.
- test-pipeline calls build-pipeline and then runs the image, allowing you to verify the created files in the test/output directory.
At this stage, your directory structure should look like this:
📂 app-promise
├── README.md
├── dependencies
│   └── dependencies.yaml
├── example-resource.yaml
├── promise.yaml
├── scripts
│   ├── build-pipeline
│   └── test-pipeline
├── test
│   ├── input
│   │   └── object.yaml
│   ├── metadata
│   └── output
└── workflows
    └── resource
        └── configure
            └── mypipeline
                └── kratix-workshop-app-pipeline-image
                    ├── Dockerfile
                    ├── resources
                    └── scripts
                        └── resource-configure
Run the tests
To execute the test, run the script with the following command:
./scripts/test-pipeline resource-configure
This should build and run the image. Once the execution completes, check the test/output directory. You should see the following files:
📂 app-promise
├── ...
└── test
    ├── input
    │   └── object.yaml
    └── output
        ├── deployment.yaml
        ├── ingress.yaml
        └── service.yaml
Take a look at the files and verify their contents. You can see where the user's inputs are being used, and how that is reflected in the created resources. You can poke around and make changes to both the input and the Workflow, and see how the output reflects them.
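For example, you can confirm that values from the test request made it through to the generated manifests. The yq queries below are a suggestion and assume the output follows standard Deployment and Service shapes; plain grep works just as well:

```bash
# The image from spec.image should appear in the Deployment...
yq '.spec.template.spec.containers[0].image' test/output/deployment.yaml

# ...and the port from spec.service.port in the Service.
yq '.spec.ports[0].port' test/output/service.yaml
```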
The ability to treat images as independent pieces of software with their own development lifecycle (fully testable, easy to execute locally, released independently) allows platform teams to move faster, sharing and reusing images across their Promises.
Next, you will explore how to access secrets and store state from within a workflow.
🎉  Congratulations!
You iterated on the Promise and its Workflow! Congratulations 🎉
To recap what you achieved:
- ✅  You got a deeper understanding of Workflows.
- ✅  You tested your Resource Workflow.
👉🏾  Let's explore accessing secrets and storing state in workflows.