
Why use Kyverno?

Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. As one of the leading orchestration tools, Kubernetes enables the efficient management of container-based applications in dynamic and scalable environments. By using Kubernetes, developers and operators can package applications into containers, deploy them on a cluster of machines, automatically scale them, and efficiently manage resources.

As Kubernetes environments grow in complexity, the need to effectively implement and monitor policies and security measures increases as well. This is where Policy as Code (PaC) comes into play – a best practice for defining and managing policies through code. In this context, Kyverno has emerged as a powerful policy engine for Kubernetes, enabling the implementation and management of policies in the form of Kubernetes resources.

In the following, we will take a closer look at Kyverno and examine how this policy engine differs from other tools with similar functionality. We will then demonstrate practical examples of using Kyverno in a local Kubernetes cluster and discuss how Kyverno can be successfully integrated into projects.

 

Kyverno vs Kubernetes Native Policies vs OPA Gatekeeper

In addition to Kyverno, there are other tools for enforcing policies in a Kubernetes cluster. These include the built-in Kubernetes admission controllers (PodSecurity, ResourceQuota, ValidatingAdmissionPolicy) and the Open Policy Agent Gatekeeper (OPA Gatekeeper). This section provides a brief overview of the characteristics and differences between these solutions.

For policies that constrain specific fields of Kubernetes resources, Kubernetes provides the ValidatingAdmissionPolicy. It is itself a Kubernetes resource, but the actual policy logic is written in CEL (Common Expression Language), a DSL (Domain Specific Language). The policy must then be bound to specific resources, such as a namespace, using a ValidatingAdmissionPolicyBinding.
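As an illustration (the policy name and the checked label are made up for this sketch; the v1 API requires Kubernetes 1.30 or newer), a ValidatingAdmissionPolicy with a CEL expression and its binding could look roughly like this:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-team-label            # hypothetical example policy
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated against the incoming object
    - expression: "has(object.metadata.labels) && 'team' in object.metadata.labels"
      message: "Deployments must carry a 'team' label."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-team-label-binding
spec:
  policyName: require-team-label
  validationActions: ["Deny"]         # reject violating requests
```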

Additionally, Kubernetes allows policies to be enforced via a service over HTTP. To do this, a ValidatingWebhookConfiguration or MutatingWebhookConfiguration is created to perform either validation or mutation on a manifest. This enables the implementation of complex validations or mutations. Both Kyverno and Open Policy Agent Gatekeeper are based on this functionality. Dynamic Admission Controllers can either be implemented independently or a Dynamic Admission Controller Engine can be used. This engine builds upon the solutions provided by Kubernetes and addresses various related challenges.
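The webhook mechanism itself is plain Kubernetes configuration. The following sketch (service name, namespace, and path are illustrative, not from this article) registers a hypothetical in-cluster HTTP service as a validating webhook:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validator
webhooks:
  - name: validate.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    clientConfig:
      # the service implementing the HTTP callback;
      # caBundle for TLS verification is omitted in this sketch
      service:
        name: example-validator
        namespace: validators
        path: /validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
```

Engines like Kyverno and OPA Gatekeeper manage such configurations, certificates, and callback services for you.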

Kubernetes therefore divides admission control into two categories: Admission Controllers and Dynamic Admission Controllers. The built-in Admission Controllers implement features such as ResourceQuota or ValidatingAdmissionPolicy; a complete list can be found in the Kubernetes documentation. The Dynamic Admission Controllers are responsible for executing the webhooks registered via ValidatingWebhookConfiguration and MutatingWebhookConfiguration.

The tools provided with Kubernetes offer many ways to define policies and maintain control over the cluster, but the configuration and usage are not uniform.

The Open Policy Agent Gatekeeper (OPA Gatekeeper) is a policy engine developed for Kubernetes, making it a Dynamic Admission Controller. The major difference compared to the policies provided with Kubernetes is that all policies are implemented in the Rego language. Gatekeeper has the advantage that all policies can be tested outside the Kubernetes cluster. Since the learning curve for Rego and OPA in general is quite steep, the use of Gatekeeper in Kubernetes is especially suitable where the tool is already in use elsewhere, as OPA can also be applied outside of Kubernetes.

Kyverno is also a policy engine (and thus a Dynamic Admission Controller) for Kubernetes. Unlike OPA Gatekeeper, Kyverno uses YAML to define policies. This means that there is no need to learn a new language, and the already familiar Kubernetes tools for managing and installing resources can be reused for policies. Furthermore, Kyverno integrates well with tools like cosign to check whether images are signed. Kyverno can also auto-generate rules for related Kubernetes resources: when a policy is defined for a Pod, corresponding rules are generated for the Pod template in Deployment, StatefulSet, DaemonSet, Job, and CronJob. This reduces the number of policies that the administrator needs to define. Kyverno is also capable of continuously applying policies in the background, and policies can be tested outside the Kubernetes cluster.

Thus, Kyverno has several advantages over the policies provided by Kubernetes and other tools like OPA Gatekeeper.

 

Setting Up Kyverno

In this article, as mentioned earlier, Kyverno will be demonstrated on a local Kubernetes cluster. This will be done using minikube. However, it is not strictly necessary to create the Kubernetes cluster with minikube. It is up to the reader’s preference.

Before the first Kyverno policies are introduced and explained, it is a good idea to set up a local Kubernetes cluster with Kyverno and install the Kyverno command-line tool. Therefore, a cluster can be created and set up as follows:

$ minikube start
$ helm repo add kyverno https://kyverno.github.io/kyverno/
$ helm repo update
$ helm install kyverno kyverno/kyverno -n kyverno --create-namespace

The official installation guide for the Helm chart is available in the Kyverno documentation.

After Kyverno has been installed with Helm, the following output can be seen:

Thank you for installing kyverno! Your release is named kyverno.

The following components have been installed in your cluster:
- CRDs
- Admission controller
- Reports controller
- Cleanup controller
- Background controller


WARNING: Setting the admission controller replica count below 3 means Kyverno is not running in high availability mode.

Note: There is a trade-off when deciding which approach to take regarding Namespace exclusions. Please see the documentation at https://kyverno.io/docs/installation/#security-vs-operability to understand the risks.

Since this is a test environment, the warning can safely be ignored. In a production cluster, the individual Kyverno components should be run with multiple instances. The tool integrates its own Leader Election mechanism to select a new leading instance in case of failure, ensuring high availability.

The note at the end of the output can initially be ignored. It will be addressed later in the article.

The command-line tool kyverno can be installed in multiple ways. For example, it can be installed using the brew package manager on macOS. Other installation methods are listed in the official documentation.

 

First Encounter with Kyverno Policies

A Kyverno policy is a Kubernetes resource. Fundamentally, a Kyverno policy consists of rules. These rules, in turn, consist of conditions that determine when they can be applied and the actual rule that Kubernetes resources must comply with.

The following code listing shows a simple Kyverno policy that requires a namespace to have a description. For this, a ClusterPolicy named require-namespace-description-annotation is defined. It enforces that the description annotation of a Kubernetes namespace is populated with a value. If the annotation is not set, the error message "Namespaces must have a description annotation" is returned.

The behavior of the policy is controlled using the validationFailureAction parameter, which accepts two values: Enforce and Audit. With Enforce, the policy is strictly enforced: the manifest is rejected and the error message is displayed. With Audit, the policy violation is logged and the error message is displayed, but the manifest is still admitted to the Kubernetes cluster. For this article, configuring policies with Enforce is particularly useful to receive concrete feedback from Kyverno. In a later section, Audit will be explored in more detail, along with a proposed plan for rolling out Kyverno on a production cluster.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-namespace-description-annotation
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-namespace-description-annotation-rule
      match:
        any:
          - resources:
              kinds:
                - Namespace
      validate:
        message: 'Namespaces must have a "description" annotation.'
        pattern:
          metadata:
            annotations:
              description: "*"

The following two code listings each show a namespace. The first namespace is invalid according to the previously defined policy and cannot be applied to the cluster. The second namespace, however, includes the required field specified by the policy and can be installed on the cluster.

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  annotations:
    description: "I have interesting things"

The following code listing shows that the Kyverno policy is installed on the Kubernetes cluster, and then the invalid namespace is attempted first, followed by the valid namespace. As mentioned earlier, the first namespace was not installed on the cluster, and the expected error message was displayed. Additionally, the corresponding kubectl apply command exited with code 1, indicating a failure. In contrast, the second namespace was successfully installed on the cluster, which was verified using kubectl describe.

$ kubectl apply -f manifests/namespace-example/require-namespace-description-annotation.yaml
$ kubectl get clusterpolicies.kyverno.io
NAME                                       ADMISSION   BACKGROUND   VALIDATE ACTION   READY   AGE   MESSAGE
require-namespace-description-annotation   true        true         Enforce           True    32s   Ready

$ kubectl apply -f manifests/namespace-example/invalid-namespace.yaml
Error from server: error when creating "manifests/namespace-example/invalid-namespace.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

resource Namespace//my-namespace was blocked due to the following policies

require-namespace-description-annotation:
  require-namespace-description-annotation: 'validation error: Namespaces must have
    a "description" annotation. rule require-namespace-description-annotation failed
    at path /metadata/annotations/'

$ kubectl apply -f manifests/namespace-example/valid-namespace.yaml
namespace/my-namespace created

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   31m
kube-node-lease   Active   31m
kube-public       Active   31m
kube-system       Active   31m
kyverno           Active   30m
my-namespace      Active   10s

$ kubectl describe namespaces my-namespace
Name:         my-namespace
Labels:       kubernetes.io/metadata.name=my-namespace
Annotations:  description: I have interesting things
Status:       Active

No resource quota.

No LimitRange resource.

This example was chosen to provide an introduction to Kyverno policies. However, it could also be implemented using Kubernetes policies (ValidatingAdmissionPolicy).

Finally, the created Kubernetes resources can be removed using the following two commands.

$ kubectl delete -f manifests/namespace-example/valid-namespace.yaml
$ kubectl delete -f manifests/namespace-example/require-namespace-description-annotation.yaml

 

Kyverno Policies for Pods

In the following examples, policies are defined for Pods. First, the origin of a container image is restricted to a specified image registry. For this, a Kyverno policy with one rule is implemented, as shown in the following code listing. The policy is named disallow-unspecified-image-registries, and the rule within the policy is called validate-registries. It enforces that containers, initContainers, and ephemeralContainers in Pods come from the specified image registry by requiring that their images match the pattern ghcr.io/iits-consulting/*. The syntax =(initContainers) acts as a conditional anchor: if the initContainers field exists in the manifest, its images are evaluated against the pattern. This is necessary for initContainers and ephemeralContainers because these fields are optional. If an image does not match the required pattern, the rule (and thus the policy) is violated, the Pod is rejected, and the error message Pod references image from disallowed registry is returned.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-unspecified-image-registries
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: validate-registries
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pod references image from disallowed registry"
        pattern:
          spec:
            =(ephemeralContainers):
              - image: "ghcr.io/iits-consulting/*"
            =(initContainers):
              - image: "ghcr.io/iits-consulting/*"
            containers:
              - image: "ghcr.io/iits-consulting/*"

A valid Pod therefore contains only images that start with ghcr.io/iits-consulting. The following is an example scenario with two Pods: one that violates the policy and one that complies with it.

The following code listing shows a Pod that violates the policy because the image field does not match the required pattern. As a result, the Pod is not installed on the Kubernetes cluster due to the violation of the disallow-unspecified-image-registries policy.

apiVersion: v1
kind: Pod
metadata:
  name: "myapp"
spec:
  containers:
    - name: myapp
      image: "nginx:1.24.0-alpine-slim"

In contrast, the following code listing shows another Pod that complies with the policy. This can be identified by the fact that the image field matches the pattern ghcr.io/iits-consulting/* defined in the policy. As a result, this Pod can be installed on the cluster. The Docker image was copied to a different image registry as part of the preparation for this article, a process known as vendoring.

apiVersion: v1
kind: Pod
metadata:
  name: "myapp"
spec:
  containers:
    - name: myapp
      image: "ghcr.io/iits-consulting/demo/nginx:1.24.0-alpine-slim"

The following code listing shows the output from the kubectl command after the invalid Pod and then the valid Pod were attempted to be installed. It can be seen here that the invalid Pod was indeed not installed on the Kubernetes cluster.

$ kubectl apply -f manifests/pod-example/disallow-unspecified-image-registries.yaml
$ kubectl apply -f manifests/pod-example/invalid-pod.yaml
Error from server: error when creating "manifests/pod-example/invalid-pod.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

resource Pod/default/myapp was blocked due to the following policies

disallow-unspecified-image-registries:
  validate-registries: 'validation error: Pod references image from disallowed registry.
    rule validate-registries failed at path /spec/containers/0/image/'

$ kubectl apply -f manifests/pod-example/valid-pod.yaml
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
myapp   1/1     Running   0          6s

This rightfully raises the question of why images should be restricted to an internal registry at all. One reason could be that the client's cybersecurity department imposes this requirement in the form of security and data protection policies. The practice also offers other advantages, such as reducing dependency on external systems, which contributes to system stability, and giving Kubernetes cluster administrators insight into which images are running on the cluster. However, private registries also come with disadvantages, such as increased storage costs for the duplicated images and the effort required for setup and maintenance (if a registry is not already in place).

With the current definition of the disallow-unspecified-image-registries Kyverno policy, it is enforced in every namespace that the image must come from a specified registry. This can cause issues with namespaces such as kube-system, kube-public, and kube-node-lease. In a managed Kubernetes cluster, operational images are typically sourced from the cloud provider. Therefore, these namespaces can be excluded from the policy by specifying the namespaces to be excluded within the policy rule. In the following code listing, the disallow-unspecified-image-registries policy has been modified. The rule validate-registries has been extended to exclude namespaces using the exclude keyword.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-unspecified-image-registries-improved
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: validate-registries
      match:
        any:
          - resources:
              kinds:
                - Pod
      exclude:
        any:
          - resources:
              namespaces:
                - kube-system
                - kube-public
                - kube-node-lease
      validate:
        message: "Pod references image from disallowed registry"
        pattern:
          spec:
            =(ephemeralContainers):
              - image: "ghcr.io/iits-consulting/*"
            =(initContainers):
              - image: "ghcr.io/iits-consulting/*"
            containers:
              - image: "ghcr.io/iits-consulting/*"

This policy can also be extended to allow multiple registries. For this, the expressions in the image field of the policy need to be adjusted. To additionally allow images from registry.mycompany.com to run on the Kubernetes cluster, the expression must be changed from image: "ghcr.io/iits-consulting/*" to image: "ghcr.io/iits-consulting/* | registry.mycompany.com/*". Similarly, the use of the latest tag can be prohibited by creating a new policy in which the image expression of the rule is set to image: "!*:latest".
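Sketched as validate rules (the rule names are illustrative), the two variants could look like this:

```yaml
rules:
  - name: validate-registries
    match:
      any:
        - resources:
            kinds:
              - Pod
    validate:
      message: "Pod references image from disallowed registry"
      pattern:
        spec:
          containers:
            # "|" separates alternative patterns (logical OR)
            - image: "ghcr.io/iits-consulting/* | registry.mycompany.com/*"
  - name: disallow-latest-tag
    match:
      any:
        - resources:
            kinds:
              - Pod
    validate:
      message: "Using the latest tag is not allowed"
      pattern:
        spec:
          containers:
            # "!" negates the pattern
            - image: "!*:latest"
```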

This ensures that all images in the specified namespaces come from the defined registries. However, it is not always guaranteed that images from third-party Helm charts point to an allowed registry, and some Helm charts do not let administrators override the image repository. The corresponding lines in the chart's manifests can be adjusted manually, but this can be very time-consuming. With the current policies, such a Helm chart could only be deployed by excluding its namespace, which is neither optimal nor necessary: Kubernetes also allows manifests to be modified via Mutating Webhooks before they are installed on the cluster. For this scenario, the default Helm chart will be used, i.e. a new chart scaffolded with the Helm CLI tool.

Another interesting aspect of this scenario is that the policy was defined for Pods, but Kyverno also applies it to Deployments. This is possible because Kyverno generates additional rules for all Kubernetes resources that create Pods via a PodTemplate: Deployment, StatefulSet, DaemonSet, Job, and CronJob.

The following code listing shows that three commands were executed. First, the default Helm chart is created, which deploys an nginx web server. Furthermore, it displays the policies present on the cluster, and finally, the Helm chart is installed on the Kubernetes cluster. During the installation attempt, an error message appears. As expected, the Kyverno policy disallow-unspecified-image-registries was violated, preventing the Helm chart from being deployed on the Kubernetes cluster.

$ helm create test-deploy

Creating test-deploy

$ kubectl get clusterpolicies.kyverno.io
NAME                                    ADMISSION   BACKGROUND   VALIDATE ACTION   READY   AGE   MESSAGE
disallow-unspecified-image-registries   true        true         Enforce           True    42h   Ready

$ helm install test-deploy test-deploy
Error: INSTALLATION FAILED: 1 error occurred:
	* admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/default/test-deploy was blocked due to the following policies

disallow-unspecified-image-registries:
  autogen-validate-registries: 'validation error: Pod references image from disallowed
    registry. rule autogen-validate-registries failed at path /spec/template/spec/containers/0/image/'

Before defining a Kyverno policy, it is important to understand the order of Admission Controller execution in Kubernetes. The following image, taken from the Kubernetes blog, illustrates the phases involved. After the request is authenticated and authorized, the Mutating Webhooks are executed first. Kubernetes then validates the schema, followed by the execution of the Validating Webhooks. This means that manifests can be modified by policies but must still be valid Kubernetes manifests, and that mutation always occurs before validation.

[Image: diagram from the Kubernetes blog post "A Guide to Kubernetes Admission Controllers" showing the admission phases]

The correct image reference in a Pod can be set using a new policy. The following code listing shows a new policy named prepend-registry with a rule called prepend-registry-containers. Two new fields are used: the preconditions field, which selects the appropriate Pods and acts as an additional filter, and the mutate field, which takes the place of the validate field. Under mutate, the desired mutation for a Pod is defined. Within the patchStrategicMerge field, the image field is modified accordingly. This modification is conditional: only if the specified condition is met are the remaining key-value pairs applied. This prevents image references that already point to the correct registry from being modified again. For example, the image nginx:1.24 is transformed into the image reference ghcr.io/iits-consulting/demo/nginx:1.24. This example has been shortened for clarity; in practice, initContainers and ephemeralContainers should also be covered, which can be done analogously to the containers field.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: prepend-registry
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: prepend-registry-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      preconditions:
        all:
          - key: "{{request.operation || 'BACKGROUND'}}"
            operator: AnyIn
            value:
              - CREATE
              - UPDATE
      mutate:
        foreach:
          - list: "request.object.spec.containers"
            patchStrategicMerge:
              spec:
                containers:
                  - (image): "!ghcr.io/iits-consulting/demo/*"   # conditional anchor: only containers not already in the target registry
                    name: "{{ element.name }}"
                    image: ghcr.io/iits-consulting/demo/{{ images.containers."{{element.name}}".path}}:{{images.containers."{{element.name}}".tag}}

The following section demonstrates the installation of the new policy. Using an incorrect Pod as an example, it will be shown that this Pod can now be deployed in the cluster according to the new policy. The Pod’s events indicate that it has been affected by the mutating policy. To verify the correct image of the Pod, its image reference is also displayed. It becomes clear that the image has been successfully updated to the desired one.

$ kubectl apply -f manifests/pod-example/mutate-prepend-image-registry.yaml
clusterpolicy.kyverno.io/prepend-registry created

$ kubectl get clusterpolicies.kyverno.io
NAME                                    ADMISSION   BACKGROUND   VALIDATE ACTION   READY   AGE     MESSAGE
disallow-unspecified-image-registries   true        true         Enforce           True    47h     Ready
prepend-registry                        true        false        Enforce           True    4m23s   Ready

$ kubectl apply -f manifests/pod-example/invalid-pod.yaml
pod/myapp configured

$ kubectl describe clusterpolicies.kyverno.io prepend-registry | tail -5
    Message:
Events:
  Type    Reason         Age   From               Message
  ----    ------         ----  ----               -------
  Normal  PolicyApplied  29s   kyverno-admission  Pod default/myapp is successfully mutated

$ kubectl get pods myapp -o json | jq ".spec.containers[0].image"
"ghcr.io/iits-consulting/demo/nginx:1.24.0-alpine-slim"

$ kubectl delete -f manifests/pod-example/invalid-pod.yaml
pod "myapp" deleted


$ kubectl apply -f manifests/pod-example/valid-pod.yaml
pod/myapp unchanged

$ kubectl get pods myapp -o json | jq ".spec.containers[0].image"
"ghcr.io/iits-consulting/demo/nginx:1.24.0-alpine-slim"

By changing the image reference to a different registry, it may also become necessary to provide imagePullSecrets. These can likewise be attached to the Pod or PodTemplate using Kyverno policies. However, this article omits that step to keep all examples executable. Attaching imagePullSecrets is already a solved problem, and a code snippet is available in the Kyverno policy library, which also lists many other ready-made policies addressing common user issues. It is therefore recommended to first check the policy library to see if a suitable policy has already been defined.
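A minimal sketch of such a mutation, modeled after the snippet in the Kyverno policy library (the secret name regcred is a placeholder and must exist in the target namespace):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-imagepullsecrets
spec:
  rules:
    - name: add-imagepullsecret
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          spec:
            imagePullSecrets:
              - name: regcred   # placeholder secret with the registry credentials
```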

Kyverno offers a total of four types of policies. This article has introduced two of these types: validating and mutating policies. The other policy types are not covered in this article. However, for completeness, they are listed here.

  • validate: Policies that only validate. For example, disallow-unspecified-image-registries. These check whether the images of a Pod meet a specific condition.
  • mutate: Policies that mutate the Kubernetes resource during installation on the Kubernetes cluster. For example, prepend-registry. These mutate manifests when they (do not) meet certain conditions.
  • generate: Policies that create Kubernetes resources when another Kubernetes resource is installed in the cluster.
  • cleanup: Policies that can delete Kubernetes resources.
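As an illustration of the generate type (not used elsewhere in this article), the following sketch, based on the well-known sample from the Kyverno policy library, creates a default-deny NetworkPolicy in every newly created namespace:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy
spec:
  rules:
    - name: default-deny
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny
        # generate the resource inside the namespace that triggered the rule
        namespace: "{{request.object.metadata.name}}"
        synchronize: true   # keep the generated resource in sync with the rule
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
```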

Kyverno also offers good integration with cosign. This tool is used to sign container images. These signatures can be verified by Kyverno before the pod with the image is even running in the cluster. To sign your own images, cosign can be used in the CI/CD pipeline after a container image is built. This method can prevent container images that were built outside the CI/CD pipeline from running on the Kubernetes cluster. To demonstrate this functionality, two images have been provided: one, ghcr.io/iits-consulting/demo/nginx:1.24.0-alpine-slim, which is signed, and ghcr.io/iits-consulting/demo/nginx:1.23-alpine-slim, which is not signed.

To verify the signature with Kyverno, a policy must be created. This is a validating policy, but it does not use the validate field. Instead, it uses the verifyImages field to check the image signature. The following code snippet shows another Kyverno policy, verify-image, which verifies the signature of container images from the ghcr.io/iits-consulting registry using the public part of the key pair that was used for signing. If the verification succeeds, the image may run in the Kubernetes cluster; otherwise, an error message is displayed and the image is blocked.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: verify-image
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/iits-consulting/*"
          mutateDigest: true
          attestors:
            - entries:
                - keys:
                    publicKeys: |
                      -----BEGIN PUBLIC KEY-----
                      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE2QenMJhzn+zirp2EHGc5I7ShKepy
                      9gCY9PueG/syqGFRrgIfU1HyGieZVMEHrlYLCKE76nexFygDOag8wKbLcA==
                      -----END PUBLIC KEY-----

The following Pod myapp-unsigned is blocked from being admitted to the cluster by the new policy verify-image. This is demonstrated in the following code listing. The valid Pod, which has been used as a continuous example, uses a signed image and is therefore allowed to be admitted to the cluster without any issues.

apiVersion: v1
kind: Pod
metadata:
  name: "myapp-unsigned"
spec:
  containers:
    - name: myapp
      image: "ghcr.io/iits-consulting/demo/nginx:1.23-alpine-slim"

$ kubectl apply -f manifests/pod-example/disallow-unsigned-image.yaml
clusterpolicy.kyverno.io/verify-image configured

$ kubectl apply -f manifests/pod-example/unsigned-pod.yaml
Error from server: error when creating "manifests/pod-example/unsigned-pod.yaml": admission webhook "mutate.kyverno.svc-fail" denied the request:

resource Pod/default/myapp-unsigned was blocked due to the following policies

verify-image:
  verify-image: 'failed to verify image ghcr.io/iits-consulting/demo/nginx:1.23-alpine-slim:
    .attestors[0].entries[0].keys: no matching signatures'

$ kubectl apply -f manifests/pod-example/valid-pod.yaml
pod/myapp created

With policies, requirements specified by the project and other restrictions for Kubernetes resources can be implemented using Kyverno. The examples have only considered Namespaces and Pods, but it is certainly possible to define policies for other Kubernetes resources, such as Service, IngressRoute, or even CRDs.

 

Monitoring & Testing

Before Kyverno can be implemented for a cluster, the topics of monitoring and testing policies should be considered.

To initially roll out Kyverno on the Kubernetes cluster, it is recommended to set all policies to Audit. This way, the entire cluster is not blocked if a policy is violated. Administrators can work on resolving policy violations or even adjusting them, whether through relaxation or tightening. Monitoring policies is also possible with Kyverno. Without additional tools, this is kept simple, as Kyverno will indicate which policies have been violated, breaking it down by the respective namespaces.

To demonstrate monitoring with Kyverno, the policies in use will be switched from Enforce to Audit. Additionally, the prepend-registry policy will be modified to not apply in the policy-reporter namespace, as a Helm chart is installed there that has not been vendored.

In the following code listing, it is shown that the three policies on the cluster have been updated, and an unsigned pod has been installed. Due to the Audit mode of the policies, these violations are now only reported. This is evident from the last command and the events of the respective Kubernetes resource. myapp-unsigned has a 1 in the PASS column and a 1 in the FAIL column.

$ kubectl apply -f manifests/pod-example/disallow-unspecified-image-registries-audit.yaml

$ kubectl apply -f manifests/pod-example/disallow-unsigned-image-audit.yaml

$ kubectl apply -f manifests/pod-example/mutate-prepend-image-registry-ignore-webui-ns.yaml

$ kubectl apply -f manifests/pod-example/unsigned-pod.yaml
Warning: policy verify-image.verify-image: failed to verify image ghcr.io/iits-consulting/demo/nginx:1.23-alpine-slim: .attestors[0].entries[0].keys: no matching signatures
Warning: policy verify-image.verify-image: missing digest for ghcr.io/iits-consulting/demo/nginx:1.23-alpine-slim
pod/myapp-unsigned created

$ kubectl get policyreports.wgpolicyk8s.io
NAME                                   KIND   NAME             PASS   FAIL   WARN   ERROR   SKIP   AGE
cba77f20-c96c-4244-8de3-432cfe23d9cd   Pod    myapp            2      0      0      0       0      36m
f9db84ef-3b3b-4ab1-8557-d145133239dd   Pod    myapp-unsigned   1      1      0      0       0      4m23s

$ kubectl describe pod myapp-unsigend | tail -n 9
Events:
  Type     Reason           Age                    From               Message
  ----     ------           ----                   ----               -------
  Warning  PolicyViolation  8m54s (x2 over 8m54s)  kyverno-admission  policy verify-image/verify-image fail: failed to verify image ghcr.io/iits-consulting/demo/nginx:1.23-alpine-slim: .attestors[0].entries[0].keys: no matching signatures
  Normal   Scheduled        8m55s                  default-scheduler  Successfully assigned default/myapp-unsigend to minikube
  Normal   Pulling          8m54s                  kubelet            Pulling image "ghcr.io/iits-consulting/demo/nginx:1.23-alpine-slim"
  Normal   Pulled           8m51s                  kubelet            Successfully pulled image "ghcr.io/iits-consulting/demo/nginx:1.23-alpine-slim" in 2.947s (2.947s including waiting)
  Normal   Created          8m51s                  kubelet            Created container myapp
  Normal   Started          8m51s                  kubelet            Started container myapp

The Kyverno ecosystem also offers another tool, Policy Reporter, which provides a web UI to visualize policies and their violations. Additionally, this tool can expose metrics for Prometheus to scrape, enabling, for example, Grafana dashboards or alert notifications via Alertmanager.

Policy Reporter and its UI can be installed with Helm, as shown in the following code listing. The UI can then be accessed at http://localhost:8082, where the following results will be visible:

$ helm repo add policy-reporter https://kyverno.github.io/policy-reporter
$ helm repo update
$ helm install policy-reporter policy-reporter/policy-reporter -n policy-reporter --set ui.enabled=true --set metrics.enabled=true --set rest.enabled=true --create-namespace
$ kubectl port-forward service/policy-reporter-ui 8082:8080 -n policy-reporter
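With metrics.enabled=true, Policy Reporter exposes per-result metrics that Prometheus can scrape. As a sketch, an alert on failing policy results could look like the following PrometheusRule. This assumes the Prometheus Operator is installed; the metric name policy_report_result and its labels should be verified against the deployed Policy Reporter version:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kyverno-policy-violations
  namespace: policy-reporter
spec:
  groups:
    - name: kyverno-policies
      rules:
        - alert: KyvernoPolicyFailures
          # policy_report_result is the gauge Policy Reporter exposes per result
          expr: sum by (policy) (policy_report_result{status="fail"}) > 0
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: 'Kyverno policy {{ $labels.policy }} has failing results'
```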

It is also important to test policies to ensure they work correctly, ideally outside of the cluster. The Kyverno command-line tool offers an excellent way to do this: it allows developers and administrators to test policies locally before rolling them out to a production environment. This proactive step helps identify and fix potential errors early, which greatly improves the stability and security of the cluster. Additionally, Kyverno provides comprehensive documentation and active community support to assist users in using the policy engine effectively. Two important testing commands in the Kyverno CLI are kyverno apply and kyverno test.

With kyverno apply, one or more policies can be applied locally to Kubernetes resource manifests. This makes it possible to check, for example, whether a Pod uses an image that does not come from a specified registry, without needing a Kubernetes cluster. The following code listing demonstrates this scenario:

$ kyverno apply manifests/pod-example/disallow-unspecified-image-registries.yaml --resource manifests/pod-example/invalid-pod.yaml

Applying 3 policy rule(s) to 1 resource(s)...

pass: 0, fail: 1, warn: 0, error: 0, skip: 2
Error: exit as fail or error count > 0

In addition, the Kyverno command-line tool provides a test command, with which scenarios combining different policies and Kubernetes resources can be defined and then executed. Each test scenario is described by a kyverno-test.yaml file. This makes it possible to test each policy in isolation first, and then together with other policies in a larger test scenario.

The following code listing shows a kyverno-test.yaml file and illustrates its structure. The file defines a resource of kind Test with fields such as policies, resources, and results. The policies and resources fields contain relative paths to the files being used. The results field specifies the expected outcome for a given policy, rule, and resource. This list does not have to be exhaustive, as some combinations can be omitted.

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: kyverno-test
policies:
  - ../../manifests/pod-example/disallow-unspecified-image-registries.yaml
  - ../../manifests/pod-example/disallow-unsigned-image.yaml
  - ../../manifests/pod-example/mutate-prepend-image-registry.yaml
resources:
  - ../../manifests/pod-example/invalid-pod.yaml
  - ../../manifests/pod-example/valid-pod.yaml
results:
  - policy: prepend-registry
    rule: prepend-registry-containers
    resources:
      - default/myapp-ghcr
    patchedResource: ./mutated-invalid-pod.yaml
    kind: Pod
    result: pass

  - policy: disallow-unspecified-image-registries
    isValidatingAdmissionPolicy: true
    rule: validate-registries
    resources:
      - default/myapp-ghcr
    kind: Pod
    result: pass

  - policy: verify-image
    isValidatingAdmissionPolicy: true
    rule: verify-image
    resources:
      - default/myapp
    kind: Pod
    result: pass

  - policy: disallow-unspecified-image-registries
    isValidatingAdmissionPolicy: true
    rule: validate-registries
    resources:
      - default/myapp
    kind: Pod
    result: pass

The following code listing shows the output of the kyverno test command.

$ kyverno test manifests/testing/
Loading test  ( manifests/testing/kyverno-test.yaml ) ...
  Loading values/variables ...
  Loading policies ...
  Loading resources ...
  Applying 3 policies to 2 resources ...
  Checking results ...

│────│───────────────────────────────────────│─────────────────────────────│────────────────────────│────────│────────│
│ ID │ POLICY                                │ RULE                        │ RESOURCE               │ RESULT │ REASON │
│────│───────────────────────────────────────│─────────────────────────────│────────────────────────│────────│────────│
│ 1  │ prepend-registry                      │ prepend-registry-containers │ default/Pod/myapp-ghcr │ Pass   │ Ok     │
│ 2  │ disallow-unspecified-image-registries │ validate-registries         │ default/Pod/myapp-ghcr │ Pass   │ Ok     │
│ 3  │ verify-image                          │ verify-image                │ default/Pod/myapp      │ Pass   │ Ok     │
│ 4  │ disallow-unspecified-image-registries │ validate-registries         │ default/Pod/myapp      │ Pass   │ Ok     │
│────│───────────────────────────────────────│─────────────────────────────│────────────────────────│────────│────────│


Test Summary: 4 tests passed and 0 tests failed

 

Conclusion

With Kyverno, policies can be implemented directly in Kubernetes clusters to ensure that security requirements and operational policies are met. Furthermore, Kyverno provides an integrated test suite that allows users to quickly and efficiently verify their policies before they are deployed in production. This simplifies troubleshooting and ensures smooth policy implementation.

In addition, Kyverno offers various monitoring capabilities to continuously track policy compliance and identify potential issues early. Users can either rely on the built-in policy reports via the command line or deploy the separate Policy Reporter Helm chart to gain detailed insights into the compliance and security of their Kubernetes cluster.

Overall, Kyverno provides a comprehensive solution for implementing, validating, and monitoring policies in Kubernetes clusters, enabling users to operate their clusters securely and efficiently.

If you want to know more about our cloud services, click here.

Zeljko Bekcic

Zeljko Bekcic is a DevOps Engineer at iits-consulting, specializing in Kubernetes, Azure, IoT, and Linux. Beyond his professional expertise, he is passionate about nerdy pursuits like video games, homelabbing, and building and programming keyboards.