
Accessing Secrets and storing state

This is Part 3 of a series illustrating how to write Kratix Promises.

👈🏾 Previous: Improving Promise Workflows
👉🏾 Next: Surfacing information to users


In the previous tutorial, you learned about the different lifecycle hooks you can define in your Promise workflows. In this section, you will go over some common use cases, like accessing Secrets from within your Promise workflows.

You will:

  1. Learn how to access Kubernetes Secrets from a Pipeline.
  2. Implement a pipeline stage that creates a bucket with Terraform.
  3. Make the new stage idempotent by storing and retrieving state.

To illustrate the concepts above, you will update your Promise so that it creates a bucket in the MinIO server that's deployed in your Platform cluster. You will create the bucket using Terraform.

Secrets

A common need for Workflows is to access Secrets. In your Promise, you will need access to the MinIO credentials to be able to create a bucket. You could pull the credentials in different ways, but the Kratix Pipeline kind provides a convenient way to access Secrets, similar to how you would access them in a Kubernetes Pod.

To start, create a new Secret in your Platform cluster:

cat <<EOF | kubectl --context $PLATFORM apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: app-promise-minio-creds
  namespace: default
type: Opaque
stringData:
  username: minioadmin
  password: minioadmin
  endpoint: minio.kratix-platform-system.svc.cluster.local
EOF
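If you want to sanity-check the Secret before wiring it into the Pipeline (an optional step, not part of the original workshop instructions), you can read one of its keys back:

```shell
# Decode one key to confirm the Secret was stored correctly
kubectl --context $PLATFORM get secret app-promise-minio-creds \
  --namespace default \
  --output=jsonpath='{.data.endpoint}' | base64 --decode
```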

Next, update your Promise with the new create-bucket step. From within this step, you will load the Secret defined above as environment variables:

app-promise/promise.yaml
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: app
spec:
  api: # ...
  workflows:
    promise: # ...
    resource:
      configure:
        - apiVersion: platform.kratix.io/v1alpha1
          kind: Pipeline
          metadata:
            name: resource-configure
          spec:
            containers:
              - name: create-resources
                image: kratix-workshop/app-pipeline-image:v1.0.0
                command: [ resource-configure ]
              - name: create-bucket
                image: kratix-workshop/app-pipeline-image:v1.0.0
                command: [ create-bucket ]
                env:
                  - name: MINIO_ENDPOINT
                    valueFrom:
                      secretKeyRef:
                        name: app-promise-minio-creds
                        key: endpoint
                  - name: MINIO_USER
                    valueFrom:
                      secretKeyRef:
                        name: app-promise-minio-creds
                        key: username
                  - name: MINIO_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: app-promise-minio-creds
                        key: password

You can now add the create-bucket script to your Pipeline image. Create the script in the workflows directory, alongside a directory for the Terraform configuration:

touch ./workflows/create-bucket
chmod +x ./workflows/create-bucket
mkdir -p workflows/terraform
touch ./workflows/terraform/terraform.tf

Next, create the Terraform configuration that the create-bucket script will apply:

app-promise/workflows/terraform/terraform.tf
terraform {
  required_providers {
    minio = {
      source  = "aminueza/minio"
      version = "2.0.1"
    }
  }
}

variable "bucket_name" {
  type = string
}

resource "minio_s3_bucket" "state_terraform_s3" {
  bucket = var.bucket_name
  acl    = "public"
}

output "minio_id" {
  value = minio_s3_bucket.state_terraform_s3.id
}

output "minio_url" {
  value = minio_s3_bucket.state_terraform_s3.bucket_domain_name
}

Make sure to add the following lines to the Dockerfile:

app-promise/workflows/Dockerfile
COPY ./terraform /terraform
COPY create-bucket /scripts/create-bucket

Finally, create the create-bucket script:

app-promise/workflows/create-bucket
#!/usr/bin/env bash

set -euxo pipefail

name=$(yq '.metadata.name' /kratix/input/object.yaml)
namespace=$(yq '.metadata.namespace' /kratix/input/object.yaml)

cd /terraform
terraform init
terraform apply -auto-approve --var="bucket_name=${name}.${namespace}"
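To make the naming concrete: the script joins the resource's name and namespace with a dot, so a resource named my-app in the default namespace yields the bucket my-app.default. A standalone sketch, with hard-coded sample values standing in for the yq lookups:

```shell
# Stand-ins for the values yq extracts from /kratix/input/object.yaml
name="my-app"
namespace="default"

bucket_name="${name}.${namespace}"
echo "${bucket_name}"   # prints "my-app.default"
```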

With all of that in place, you can test the Workflow. Before running the test script, you will need to expose the environment variables defined in the create-bucket step. Open the scripts/test-pipeline script and update the docker run command to look like this:

scripts/test-pipeline
docker run \
    --rm \
    --volume ~/.kube:/root/.kube \
    --network=host \
    --volume ${outputDir}:/kratix/output \
    --volume ${inputDir}:/kratix/input \
    --volume ${metadataDir}:/kratix/metadata \
    --env MINIO_USER=minioadmin \
    --env MINIO_PASSWORD=minioadmin \
    --env MINIO_ENDPOINT=localhost:31337 \
    kratix-workshop/app-pipeline-image:v1.0.0 bash -c "$command"
⚠️ Warning: If you are running the workshop on a Mac, you may need to update the MINIO_ENDPOINT to host.docker.internal:31337.

With that change in place, you can run the test script:

./scripts/test-pipeline create-bucket

If everything works well, you should see a new bucket created in your MinIO server:

mc ls kind/

The output should look like this:

[2024-01-26 15:33:03 GMT]     0B kratix/
[2024-01-31 11:44:55 GMT]     0B my-app.default/

Great! That proves your pipeline stage works end-to-end. But what happens if you try to re-run the tests?

./scripts/test-pipeline create-bucket

You should see the following error:

minio_s3_bucket.state_terraform_s3: Creating...

│ Error: [FATAL] bucket already exists! (my-app.default): <nil>

│ with minio_s3_bucket.state_terraform_s3,
│ on terraform.tf line 14, in resource "minio_s3_bucket" "state_terraform_s3":
│ 14: resource "minio_s3_bucket" "state_terraform_s3" {

The create-bucket step is not idempotent: when Kratix tries to reconcile the resource again, the step fails because the bucket already exists.

To make it idempotent, you need to store and retrieve the terraform state from the previous run, and use it if it already exists. Hop on to the next section to learn how to do that.

State

There are many ways to store and retrieve state from within a pipeline. You could, for example, push the resulting state to a remote repository, rely on third-party services like Terraform Cloud, or use Kubernetes resources like ConfigMaps. For simplicity, we will use ConfigMaps to store and retrieve the state.

To store the terraform state in a ConfigMap, the first step is to give the Kratix Pipeline Service Account the ability to create and retrieve ConfigMaps. The Service Account Kratix creates for the Resource Pipeline follows the format <promise name>-resource-pipeline.

Create the following ClusterRole and ClusterRoleBinding in your Platform cluster:

kubectl --context $PLATFORM create clusterrole promise-configmap \
  --verb=get,list,create,update,patch,watch,delete \
  --resource=configmaps

kubectl --context $PLATFORM create clusterrolebinding promise-configmap \
  --clusterrole=promise-configmap \
  --serviceaccount=default:app-resource-pipeline
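To confirm the binding took effect, kubectl auth can-i can impersonate the Pipeline's Service Account (an optional verification step, not part of the original workshop instructions):

```shell
# Should print "yes" once the ClusterRoleBinding is in place
kubectl --context $PLATFORM auth can-i create configmaps \
  --as=system:serviceaccount:default:app-resource-pipeline
```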

Next, you need to update the create-bucket script to both store the state and retrieve any existing state. Open the create-bucket script and update it to look like this:

app-promise/workflows/create-bucket
#!/usr/bin/env bash

set -euxo pipefail

name=$(yq '.metadata.name' /kratix/input/object.yaml)
namespace=$(yq '.metadata.namespace' /kratix/input/object.yaml)

cd /terraform
terraform init

# Check if the state exists and retrieve it if so
if kubectl get configmap ${name}-state; then
  kubectl get configmap ${name}-state \
    --output=jsonpath='{.data.tfstate}' \
    > state.tfstate
fi

terraform apply \
  -auto-approve \
  --var="bucket_name=${name}.${namespace}" \
  -state=state.tfstate

# Store the state in a ConfigMap
kubectl create configmap ${name}-state \
  --from-file=tfstate=state.tfstate \
  --dry-run=client \
  --output=yaml > configmap.yaml
kubectl replace --filename configmap.yaml --force

Before re-running the test, make sure to delete the previously created bucket, since the create-bucket script will try to create a bucket with the same name:

mc rb kind/my-app.default

Now you can run the test script again:

./scripts/test-pipeline create-bucket

You should see from the logs that the bucket got created and that the state got persisted in a ConfigMap. You can validate with the following command:

mc ls kind/

The above command should show you the buckets, like last time:

[2024-01-26 15:33:03 GMT]     0B kratix/
[2024-01-31 11:44:55 GMT]     0B my-app.default/

As for the ConfigMap, you can retrieve it with the following command:

kubectl --context $PLATFORM get configmap my-app-state --output=jsonpath='{.data.tfstate}'

The output should look like this:

{
  "version": "4",
  ...
}
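If you want to go one step further and confirm the stored state actually tracks the bucket, you can search it for the provider's resource type (this assumes Terraform's default pretty-printed state layout):

```shell
# The stored tfstate should reference the minio_s3_bucket resource
kubectl --context $PLATFORM get configmap my-app-state \
  --output=jsonpath='{.data.tfstate}' \
  | grep '"type": "minio_s3_bucket"'
```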

Excellent. Now try re-running the test script:

./scripts/test-pipeline create-bucket

This time it should just work! You can see from the logs (snippet below) that the state got retrieved from the ConfigMap and that no changes were applied. The test log should include the following lines:

minio_s3_bucket.state_terraform_s3: Refreshing state... [id=my-app.default]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Awesome! Your pipeline stage is now idempotent, so go ahead and apply the Promise with the new stage to your Platform:

kubectl --context $PLATFORM apply --filename promise.yaml

As soon as the Promise is applied, Kratix will trigger an update for the existing application. Once the pipeline completes, you should see a new bucket on your MinIO server:

mc ls kind/

The output should look like this:

[2024-01-26 15:33:03 GMT]     0B kratix/
[2024-01-31 11:44:55 GMT]     0B my-app.default/
[2024-01-31 11:44:55 GMT]     0B todo.default/

You should also see the ConfigMap for the todo app:

kubectl --context $PLATFORM get configmap todo-state --output=jsonpath='{.data.tfstate}'

The output should look like this:

{
  "version": "4",
  ...
}

To validate the idempotency, you can force the resource to reconcile by setting the kratix.io/manual-reconciliation label to true. Kratix listens to that label and, when it detects it, forces a reconciliation for the resource. Trigger a reconciliation:

# trigger the reconciliation
kubectl --context $PLATFORM label apps.workshop.kratix.io todo kratix.io/manual-reconciliation=true

Once the pipeline completes, you can check the logs and verify how it reused the state from the ConfigMap:

pod_name=$(kubectl --context $PLATFORM get pods --sort-by=.metadata.creationTimestamp -o jsonpath="{.items[-1:].metadata.name}")
kubectl --context $PLATFORM logs $pod_name --container create-bucket

If you trigger the reconciliation again, you should see the Pipeline logs indicating that no changes were applied, just as you observed in the test.

Bonus Challenge

You may have noticed that, at this stage, the bucket the Pipeline creates won't be removed when the user deletes their App request. As an extra challenge, try to implement a delete lifecycle hook for your resource that deletes the bucket. Take a look at the Workflow reference docs to find out more (tip: check the workflows.resource.delete property).

🎉   Congratulations!

You successfully implemented a stateful Pipeline stage, using Terraform to create external resources.

To recap what you achieved:

  1. ✅  Learned how to access Kubernetes secrets from a Pipeline.
  2. ✅  Implemented a pipeline stage to create a bucket with Terraform.
  3. ✅  Made your new stage idempotent by storing and retrieving state.

👉🏾   Next, let's explore how to surface information to users.