Accessing Secrets and storing state
This is Part 3 of a series illustrating how to write Kratix Promises.
👈🏾 Previous: Improving Promise Workflows
👉🏾 Next: Surfacing information to users
In the previous tutorial, you learned about the different lifecycle hooks you can define in your Promise workflows. In this section, you will work through some common use cases, such as accessing Secrets from within your Promise workflows.
You will:

- Learn how to access Kubernetes Secrets from within a Pipeline.
- Implement a Pipeline step that creates a MinIO bucket with Terraform.
- Make that step idempotent by storing and retrieving its state.

To illustrate the concepts above, you will be updating your Promise so that it creates a bucket in the MinIO server that's deployed in your Platform cluster. You will create the bucket using terraform.
Secrets
A common need for Workflows is to access secrets. In your Promise, you will need access to the MinIO credentials to be able to create a bucket. You could pull the credentials in different ways, but the Kratix Pipeline kind provides a convenient way to access Secrets, similar to how you would access them from a Kubernetes Pod.
To start, create a new Secret in your Platform cluster:
cat <<EOF | kubectl --context $PLATFORM apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: app-promise-minio-creds
  namespace: default
type: Opaque
stringData:
  username: minioadmin
  password: minioadmin
  endpoint: minio.kratix-platform-system.svc.cluster.local
EOF
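Optionally, you can double-check that the Secret was created before wiring it into the Pipeline:

kubectl --context $PLATFORM get secret app-promise-minio-creds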
Next, update your Promise with the new create-bucket step. From within this step, you will load the Secret defined above as environment variables:
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: app
spec:
  api: #...
  workflows:
    promise: # ...
    resource:
      configure:
        - apiVersion: platform.kratix.io/v1alpha1
          kind: Pipeline
          metadata:
            name: mypipeline
          spec:
            containers:
              - image: kratix-workshop/app-pipeline-image:v1.0.0
                name: kratix-workshop-app-pipeline-image
                command: [ resource-configure ]
              - name: create-bucket
                image: kratix-workshop/app-pipeline-image:v1.0.0
                command: [ create-bucket ]
                env:
                  - name: MINIO_ENDPOINT
                    valueFrom:
                      secretKeyRef:
                        name: app-promise-minio-creds
                        key: endpoint
                  - name: MINIO_USER
                    valueFrom:
                      secretKeyRef:
                        name: app-promise-minio-creds
                        key: username
                  - name: MINIO_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: app-promise-minio-creds
                        key: password
status: {}
You can now add the create-bucket script to your Resource Workflow Pipeline image. Create an empty create-bucket script in the workflows/resource/configure/mypipeline/kratix-workshop-app-pipeline-image/scripts/ directory, along with an empty terraform.tf file under resources/:
touch ./workflows/resource/configure/mypipeline/kratix-workshop-app-pipeline-image/scripts/create-bucket
touch ./workflows/resource/configure/mypipeline/kratix-workshop-app-pipeline-image/resources/terraform.tf
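Depending on how your Dockerfile copies the scripts into the image, you may also need to make the new script executable, as with the scripts from the previous parts of this series:

chmod +x ./workflows/resource/configure/mypipeline/kratix-workshop-app-pipeline-image/scripts/create-bucket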
Next, create the Terraform configuration that the create-bucket script will apply:
terraform {
  required_providers {
    minio = {
      source  = "aminueza/minio"
      version = "2.0.1"
    }
  }
}

variable "bucket_name" {
  type = string
}

resource "minio_s3_bucket" "state_terraform_s3" {
  bucket = var.bucket_name
  acl    = "public"
}

output "minio_id" {
  value = minio_s3_bucket.state_terraform_s3.id
}

output "minio_url" {
  value = minio_s3_bucket.state_terraform_s3.bucket_domain_name
}
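Optionally, if you have the Terraform CLI installed locally, you can sanity-check this configuration before baking it into the image (init -backend=false downloads the provider without configuring any state backend):

terraform -chdir=./workflows/resource/configure/mypipeline/kratix-workshop-app-pipeline-image/resources init -backend=false
terraform -chdir=./workflows/resource/configure/mypipeline/kratix-workshop-app-pipeline-image/resources validate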
Update your Dockerfile to include the Terraform CLI:
- RUN apk update && apk add --no-cache yq kubectl
+ RUN apk update && apk add --no-cache yq kubectl curl
+ RUN curl https://releases.hashicorp.com/terraform/1.7.1/terraform_1.7.1_linux_amd64.zip -o terraform.zip && \
+ unzip terraform.zip && \
+ mv terraform /usr/local/bin/terraform && \
+ rm terraform.zip
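After updating the Dockerfile, rebuild your Pipeline image so the Terraform CLI is available on the next run. If you have been building the image directly with Docker, the rebuild looks something like this (the build context path is an assumption based on the directory layout above):

docker build \
  --tag kratix-workshop/app-pipeline-image:v1.0.0 \
  ./workflows/resource/configure/mypipeline/kratix-workshop-app-pipeline-image

If your Platform cluster runs in kind, remember to also load the rebuilt image into the cluster, as you did in the earlier parts of this series.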
Finally, create the create-bucket script:
#!/usr/bin/env sh

set -euxo pipefail

# Read the resource name and namespace from the Kratix input object
name=$(yq '.metadata.name' /kratix/input/object.yaml)
namespace=$(yq '.metadata.namespace' /kratix/input/object.yaml)

# Create a bucket named after the resource
cd resources
terraform init
terraform apply -auto-approve --var="bucket_name=${name}.${namespace}"
With all of that in place, you can test the Workflow. Before running the test script, ensure kubectl is pointing at your Platform cluster:
kubectl config use-context $PLATFORM
With that change in place, you can run the test script:
./scripts/test-pipeline create-bucket
If everything works well, you should see a new bucket created in your MinIO server:
mc ls kind/
The output should look like this:
[2024-01-26 15:33:03 GMT] 0B kratix/
[2024-01-31 11:44:55 GMT] 0B my-app.default/
Great! That proves your pipeline stage works end-to-end. But what happens if you try to re-run the tests?
./scripts/test-pipeline create-bucket
You should see the following error:
minio_s3_bucket.state_terraform_s3: Creating...
╷
│ Error: [FATAL] bucket already exists! (my-app.default): <nil>
│
│ with minio_s3_bucket.state_terraform_s3,
│ on terraform.tf line 14, in resource "minio_s3_bucket" "state_terraform_s3":
│ 14: resource "minio_s3_bucket" "state_terraform_s3" {
The create-bucket step is not idempotent. That means that, when Kratix tries to reconcile the resource again, it will fail because the bucket already exists.
To make it idempotent, you need to store and retrieve the terraform state from the previous run, and use it if it already exists. Hop on to the next section to learn how to do that.
State
There are many ways to store and retrieve state from within a pipeline. You could, for example, push the resulting state to a remote repository, rely on third-party services like Terraform Cloud, or use Kubernetes resources like ConfigMaps. For simplicity, we will use ConfigMaps to store and retrieve the state.
To store the terraform state in a ConfigMap, the first step is to give the Kratix Pipeline Service Account the ability to create and retrieve ConfigMaps. There are a few ways to grant the Kratix Pipeline Service Account the permissions it needs; on this occasion, you will specify the additional permissions required by the Workflow via spec.rbac.permissions in the Pipeline specification.
In your promise.yaml, find the Workflow for the resource.configure Pipeline. Edit the Workflow's Pipeline spec to have the following:
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: app
spec:
  api: #...
  workflows:
    promise: # ...
    resource:
      configure:
        - apiVersion: platform.kratix.io/v1alpha1
          kind: Pipeline
          metadata:
            name: mypipeline
          spec:
            rbac:
              permissions:
                - apiGroups: [""]
                  verbs: ["*"]
                  resources: ["configmaps"]
            containers: #...
# rest of the file
This updates the permissions of the Kratix Pipeline Service Account, allowing it to create and retrieve ConfigMaps. You can read more about how to configure RBAC permissions in Pipelines in the Kratix reference documentation.
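Once the updated Promise is applied, you can optionally verify the permissions with kubectl auth can-i. Note that the Service Account name below is a guess for illustration; Kratix derives it from the Promise and Pipeline names, so check kubectl get serviceaccounts in the resource's namespace for the real name:

kubectl --context $PLATFORM auth can-i create configmaps \
  --as=system:serviceaccount:default:app-resource-configure-mypipeline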
Next, you need to update the create-bucket script to both store the state and retrieve any existing state. Open the create-bucket script and update it to look like this:
#!/usr/bin/env sh

set -euxo pipefail

name=$(yq '.metadata.name' /kratix/input/object.yaml)
namespace=$(yq '.metadata.namespace' /kratix/input/object.yaml)

cd resources
terraform init

# Check if the state exists and retrieve it if so
if kubectl get configmap ${name}-state; then
  kubectl get configmap ${name}-state \
    --output=jsonpath='{.data.tfstate}' \
    > state.tfstate
fi

terraform apply \
  -auto-approve \
  --var="bucket_name=${name}.${namespace}" \
  -state=state.tfstate

# Store the state in a ConfigMap; `replace --force` recreates the
# ConfigMap whether or not it already exists
kubectl create configmap ${name}-state \
  --from-file=tfstate=state.tfstate \
  --dry-run=client \
  --output=yaml > configmap.yaml
kubectl replace --filename configmap.yaml --force
Before re-running the test, make sure to delete the previous bucket, since the create-bucket script will try to create a bucket with the same name:
mc rb kind/my-app.default
Now you can run the test script again:
./scripts/test-pipeline create-bucket
You should see from the logs that the bucket got created and that the state got persisted in a ConfigMap. You can validate with the following command:
mc ls kind/
The above command should show you the buckets, like last time:
[2024-01-26 15:33:03 GMT] 0B kratix/
[2024-01-31 11:44:55 GMT] 0B my-app.default/
As for the ConfigMap, you can retrieve it with the following command:
kubectl --context $PLATFORM get configmap my-app-state --output=jsonpath={.data.tfstate}
The output should look like this:
{
"version": "4",
...
}
Excellent. Now try re-running the test script:
./scripts/test-pipeline create-bucket
This time it should just work! You can see from the logs (snippet below) that the state got retrieved from the ConfigMap and that no changes were applied. The test log should include the following lines:
minio_s3_bucket.state_terraform_s3: Refreshing state... [id=my-app.default]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Awesome! Your pipeline stage is now idempotent, so go ahead and apply the Promise with the new stage to your Platform:
kubectl --context $PLATFORM apply --filename promise.yaml
As soon as the Promise is applied, Kratix will trigger an update for the existing application. Once the pipeline completes, you should see a new bucket on your MinIO server:
mc ls kind/
The output should look like this:
[2024-01-26 15:33:03 GMT] 0B kratix/
[2024-01-31 11:44:55 GMT] 0B my-app.default/
[2024-01-31 11:44:55 GMT] 0B todo.default/
You should also see the ConfigMap for the todo app:
kubectl --context $PLATFORM get configmap todo-state --output=jsonpath={.data.tfstate}
The output should look like this:
{
"version": "4",
...
}
To validate the idempotency, you can force the resource to reconcile by setting the kratix.io/manual-reconciliation label to true. Kratix listens for that label and, when it detects it, forces a reconciliation of the resource. Trigger a reconciliation:
# trigger the reconciliation
kubectl --context $PLATFORM label apps.workshop.kratix.io todo kratix.io/manual-reconciliation=true
Once the pipeline completes, you can check the logs and verify how it reused the state from the ConfigMap:
pod_name=$(kubectl --context $PLATFORM get pods --sort-by=.metadata.creationTimestamp -o jsonpath="{.items[-1:].metadata.name}")
kubectl --context $PLATFORM logs $pod_name --container create-bucket
If you trigger the reconciliation again, you should see the Pipeline logs indicating that no changes were applied, just as you observed in the test.
Bonus Challenge
You may have noticed that, at this stage, the bucket the Pipeline creates won't be removed when the user deletes their App request. As an extra challenge, try to implement a delete lifecycle hook for your resource that deletes the bucket. Take a look at the Workflow reference docs to find out more (tip: check the workflows.resource.delete property). A possible starting point is sketched below.
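If you want a head start, here is a rough sketch of what the delete step's script could look like. This is an illustrative sketch only, not the reference solution: it assumes the delete Pipeline reuses the same image, receives the same /kratix/input/object.yaml, and has the same MinIO credentials and ConfigMap RBAC wiring as the configure Pipeline:

#!/usr/bin/env sh

set -euxo pipefail

name=$(yq '.metadata.name' /kratix/input/object.yaml)
namespace=$(yq '.metadata.namespace' /kratix/input/object.yaml)

cd resources
terraform init

# Retrieve the state stored by the configure step, then destroy the bucket
kubectl get configmap ${name}-state \
  --output=jsonpath='{.data.tfstate}' > state.tfstate
terraform destroy -auto-approve \
  --var="bucket_name=${name}.${namespace}" \
  -state=state.tfstate

# Clean up the state ConfigMap too
kubectl delete configmap ${name}-state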
🎉 Congratulations!
You successfully implemented a stateful Pipeline stage, using terraform to create external resources.
To recap what you achieved:
- ✅ Learned how to access Kubernetes secrets from a Pipeline.
- ✅ Implemented a pipeline stage to create a bucket with Terraform.
- ✅ Made your new stage idempotent by storing and retrieving state.
👉🏾 Next, let's explore how to surface information to users.