# About Hull

## What is Hull?

Hull is a Go testing framework for writing comprehensive tests on Helm charts.
## How Do Helm Charts Work?

At its core, every Helm chart is just a set of Go templates listed under a `templates/` directory that are rendered based on Helm's built-in `RenderValues` struct, which takes input from the `values.yaml` file declared with the chart.

Note: Helm places the chart's metadata (i.e. the `Chart.yaml`) under `.Chart` and the `values.yaml` under `.Values`. For subcharts, it places all overrides provided by the main chart under `.Values.<subchart>`.
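To make this concrete, here is a minimal, self-contained sketch of the core mechanism: rendering a Go template against a values-like context. Helm layers much more on top of this (the full `RenderValues` struct built by `helm.sh/helm/v3/pkg/chartutil`, sprig functions, subchart handling), so the snippet below is illustrative rather than Helm's actual code.

```go
package main

import (
    "os"
    "text/template"
)

func main() {
    // A single line of a chart template, as it would appear under templates/.
    podTemplate := `image: {{ .Values.image }}`
    tmpl := template.Must(template.New("pod").Parse(podTemplate))

    // A simplified stand-in for Helm's render context; the real one also
    // carries .Chart, .Release, .Capabilities, and more.
    renderValues := map[string]interface{}{
        "Values": map[string]interface{}{"image": "nginx:latest"},
    }

    // Prints: image: nginx:latest
    if err := tmpl.Execute(os.Stdout, renderValues); err != nil {
        panic(err)
    }
}
```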
Upon rendering, each file is expected to produce a Kubernetes manifest, i.e. a list of one or more Kubernetes resources (where the type of each YAML document produced is identified by its `apiVersion` and `kind` fields).

Note: The combined list of all resources in the Kubernetes manifests produced by all template files in a chart constitutes a single Helm release.
For example, take a simple Helm chart that has this single file in its `templates/` directory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: {{ .Values.image }}
    ports:
    - containerPort: 80
```
By providing a `values.yaml` that contains `image: nginx:latest` to this Helm chart, or by running the `helm install` or `helm upgrade` command with `--set image=nginx:latest`, Helm will produce the following manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
```
If you were to provide `image: nginx:1.2.3` instead, you would get a different manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.2.3
    ports:
    - containerPort: 80
```
If you are specifically running a `helm install` (non-dry-run) or `helm upgrade`, Helm will then take this produced manifest and do the equivalent of a `kubectl apply` onto the cluster.

Note: It's not exactly correct to say that Helm does a `kubectl apply`, since Helm supports Chart Hooks, each of which applies resources in order based on the weight of the hook annotations attached to them. A normal `kubectl apply` would apply all resources in a given manifest in one shot.
## How Does Hull Support Testing Helm Charts?

Hull seeks to enforce comprehensive unit testing by helping you:

- Encode `test.Case`s that cover all possible valid configurations of the chart: each `test.Case` corresponds to a single Helm template / release. This means each `test.Case` is tied to a single generated Kubernetes manifest, rendered based on a given `values.yaml` (provided via `chart.TemplateOptions`) and the templates contained within the chart
- Run "checks" on all generated Helm releases: these should be generic tests (i.e. `test.NamedCheck`s) that run on all templates generated from the `test.Case`s. These tests can be parametrized by the `values.yaml` that was used to render a given `chart.Template` using the `checker.RenderValue` / `checker.MustRenderValue` helper functions
## Testing With and Without Hull

Let's say you wanted to manually test the `.Values.image` field above without Hull.

The intended behavior of the field is to override the contents of `.spec.containers[0].image` with whatever value is provided.

Therefore, there are two checks you would probably write here:

- `helm template <default-chart-release-name> -n <default-chart-namespace> | yq e 'select(.kind == "Pod") | .spec.containers[0].image'`: ensure that the output is the default value of `nginx:latest`
- `helm template <default-chart-release-name> -n <default-chart-namespace> --set image=nginx:1.2.3 | yq e 'select(.kind == "Pod") | .spec.containers[0].image'`: ensure that the output is the provided value `nginx:1.2.3`
For a simple field like this, these two checks would be sufficient to fully test this field.
On the other hand, in Hull you would encode this as two `test.Case`s and a `test.NamedCheck` that runs on the Helm templates produced from those options:
```go
var ChartPath = utils.MustGetPathFromModuleRoot("path", "to", "chart")

var (
    DefaultReleaseName = "<default-chart-release-name>"
    DefaultNamespace   = "<default-chart-namespace>"
)

var suite = test.Suite{
    ChartPath: ChartPath,

    Cases: []test.Case{
        {
            Name:            "Using Defaults",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace),
        },
        {
            Name: "Setting nginx:1.2.3 image",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
                SetValue("image", "nginx:1.2.3"),
        },
    },

    NamedChecks: []test.NamedCheck{
        {
            Name:   "Has correct value for Pod's image",
            Covers: []string{".Values.image"},
            Checks: test.Checks{
                // TODO: some check(s) that ensure .spec.containers[0].image for the nginx workload == .Values.image
            },
        },
    },
}

func TestChart(t *testing.T) {
    opts := test.GetRancherOptions()
    suite.Run(t, opts)
}
```
While Hull may seem more verbose, there are three distinct advantages to encoding those same manual checks in Hull:

- It runs along with your normal CI; just by enforcing that a `go test -count=1 <testing-module>` passes in your repository, you know that all written checks pass on all recorded `test.Case`s used with your chart, so a change introduced to the chart does not break an existing use case
- You can encode checks that span all of the templates by declaring only a single `test.NamedCheck`; for example, a check that verifies all generated Pods have `nodeSelector`s and `toleration`s to support being deployed in a cluster with Windows nodes would apply to all `values.yaml` configurations encoded in the `test.Case`s
- Hull's coverage check will ensure you have at least one test case covering every field used from the `values.yaml`, so you'll be able to use it to identify any gaps in CI
Note: Why do we run `go test` with `-count=1`?

Go normally caches the results of tests to avoid re-running them when the underlying Go code has not been modified. However, that cache is not invalidated when only your Helm chart changes; passing `-count=1` tells `go test` to always run every test at least once, ignoring the cached results.
## A Simple Introduction To Hull

Let's take a look at the anatomy of a simple Hull testing file:

```go
var ChartPath = utils.MustGetPathFromModuleRoot("..", "testdata", "charts", "simple-chart")

var (
    DefaultReleaseName = "simple-chart"
    DefaultNamespace   = "default"
)

var suite = test.Suite{
    ChartPath: ChartPath,

    Cases: []test.Case{
        {
            Name:            "Using Defaults",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace),
        },
    },
}

func TestChart(t *testing.T) {
    suite.Run(t, nil)
}
```
Here, we're defining a `test.Suite` that applies to the chart located at `../testdata/charts/simple-chart`.

On running `go test -count=1 -v <path-to-module>`, we get the following output:
```
=== RUN   TestChart
=== RUN   TestChart/Using_Defaults
=== RUN   TestChart/Using_Defaults/HelmLint
    template.go:171: [INFO] Chart.yaml: icon is recommended
=== RUN   TestChart/Coverage
    suite.go:208:
        Error Trace:    path/to/hull/examples/tests/workspace/suite.go:208
        Error:          Not equal:
                        expected: 1
                        actual  : 0
        Test:           TestChart/Coverage
        Messages:       The following field references are not tested:
                        - {{ .Values.data }} : templates/configmap.yaml
                        - {{ .Values.shouldFail }} : templates/configmap.yaml
                        - {{ .Values.shouldFailRequired }} : templates/configmap.yaml
--- FAIL: TestChart (0.01s)
    --- PASS: TestChart/Using_Defaults (0.01s)
        --- PASS: TestChart/Using_Defaults/HelmLint (0.01s)
    --- FAIL: TestChart/Coverage (0.00s)
FAIL
FAIL    path/to/hull/examples/tests/workspace   1.643s
FAIL
```
Note: If you look at the output above, you'll notice that each `test.Case` is run as a separate Go subtest and that the tests within it (i.e. `HelmLint`) are run as subtests of it, so it's possible to run individual cases for charts on a `go test`!
The reason why you get this error is that, if you look at the contents of `testdata/charts/simple-chart/templates/configmap.yaml`, you will find that your Go template uses `{{ .Values.data }}` to populate the contents of the ConfigMaps deployed alongside the chart. It also uses `{{ .Values.shouldFail }}` and `{{ .Values.shouldFailRequired }}` for other custom logic.

These field usages are picked up by the logic in `pkg/tpl`, which analyzes each file in the Helm chart provided to `suite.ChartPath` and automatically figures out the ground that needs to be covered to fully test the chart.

To resolve these issues, you will need to add a `test.Case` to this example suite that uses each of these fields and include a `test.NamedCheck` for each of those fields to run the logical checks.

Alternatively, you can write a `test.FailureCase` if the chart is expected not to render when some configuration is provided; this is how we will test `.Values.shouldFail` and `.Values.shouldFailRequired`.
### Adding a `test.Case`

To resolve the issue that came up in coverage, we first need to introduce a new test case that overrides `.Values.data`.

To do this, we will add a `test.Case`, which makes our `test.Suite` object look as follows:
```go
var suite = test.Suite{
    ChartPath: ChartPath,

    Cases: []test.Case{
        {
            Name:            "Using Defaults",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace),
        },
        {
            Name: "Override .Values.data",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
                SetValue("data.hello", "cattle"),
        },
    },
}
```
As you can see above, the `chart.TemplateOptions` provided differ between these two cases; the template that Hull renders on parsing the second `test.Case` is equivalent to the one produced by running `helm template simple-chart -n default --set data.hello=cattle ../testdata/charts/simple-chart`, based on the `TemplateOptions` provided.

It is also possible to provide the identical configuration in the following way:
```go
var suite = test.Suite{
    ChartPath: ChartPath,

    Cases: []test.Case{
        {
            Name:            "Using Defaults",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace),
        },
        {
            Name: "Override .Values.data",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
                Set("data", map[string]string{
                    "hello": "cattle",
                }),
        },
    },
}
```
Or even like this:

```go
type MyStruct struct {
    Hello string `json:"hello"`
}

var suite = test.Suite{
    ChartPath: ChartPath,

    Cases: []test.Case{
        {
            Name:            "Using Defaults",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace),
        },
        {
            Name: "Override .Values.data",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
                Set("data", MyStruct{
                    Hello: "cattle",
                }),
        },
    },
}
```
Regardless of the object provided, `Set` will always marshal the given content into JSON and pass it to Helm as if it had been received via the `--set-json` command-line option.

Therefore, the second `test.Case` in both of the examples above is equivalent to the one produced by running `helm template simple-chart -n default --set-json '{"data": {"hello": "cattle"}}' ../testdata/charts/simple-chart`, based on the `TemplateOptions` provided.
Note: `chart.TemplateOptions` allows you to configure the underlying `RenderValues` struct that is passed into Helm when it renders your template; it corresponds directly to all flags providable in a `helm install`, `helm upgrade`, or `helm template` call that affect rendering.

The utility functions `Set` and `SetValue` are intentionally written to mimic the way that Helm itself receives these arguments (for the convenience of the tester)! However, you could also pass in a custom `TemplateOptions` struct for more complex cases (such as if you want to modify the `Capabilities` or `Release` options provided to the template call), as sketched below.

See `pkg/chart/template_options.go` to view the `TemplateOptions` struct and see what options it takes!
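For instance, a case that simulates a specific Kubernetes version might tweak `Capabilities` roughly as follows. The exact field names on `TemplateOptions` are assumptions based on the description above and Helm's `chartutil` types, so check `pkg/chart/template_options.go` for the real definition.

```go
import (
    "github.com/rancher/hull/pkg/chart"

    "helm.sh/helm/v3/pkg/chartutil"
)

// A hedged sketch: start from the convenience constructor, then modify the
// Capabilities field directly. Whether TemplateOptions exposes Capabilities
// exactly like this is an assumption, not confirmed Hull API.
func windowsClusterOptions() *chart.TemplateOptions {
    opts := chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace)
    opts.Capabilities = &chartutil.Capabilities{
        KubeVersion: chartutil.KubeVersion{Version: "v1.25.0"},
    }
    return opts
}
```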
### Running a `test.Case` in Hull

On executing each `test.Case`, Hull automatically takes the following actions:

- A `helm lint` is run on the rendered Kubernetes manifest
- Each of the `suite.NamedChecks` that is not listed in `case.OmitNamedChecks` is run on the rendered Kubernetes manifest
- Coverage is checked; if the chart is not fully covered by the `test.Suite`, the test will fail
Note: Since our current suite has no `test.NamedCheck`s, no checks will be run on the template yet.

Therefore, you will only see 6 tests run: the root test for the overall chart, coverage for the overall chart, and 2 tests per case corresponding to the case's root test and the output from running `helm lint`.

Once we add checks, you will see 1 additional subtest per `test.Case` per `test.NamedCheck` added to the total number of tests that are run.
We can see that tests for our new `test.Case` have been added to the output on running `go test -count=1 -v <path-to-module>`:
```
=== RUN   TestChart
=== RUN   TestChart/Using_Defaults
=== RUN   TestChart/Using_Defaults/HelmLint
    template.go:171: [INFO] Chart.yaml: icon is recommended
=== RUN   TestChart/Override_.Values.data
=== RUN   TestChart/Override_.Values.data/HelmLint
    template.go:171: [INFO] Chart.yaml: icon is recommended
=== RUN   TestChart/Coverage
    suite.go:208:
        Error Trace:    path/to/hull/examples/tests/workspace/suite.go:208
        Error:          Not equal:
                        expected: 1
                        actual  : 0
        Test:           TestChart/Coverage
        Messages:       The following field references are not tested:
                        - {{ .Values.data }} : templates/configmap.yaml
                        - {{ .Values.shouldFail }} : templates/configmap.yaml
                        - {{ .Values.shouldFailRequired }} : templates/configmap.yaml
--- FAIL: TestChart (0.01s)
    --- PASS: TestChart/Using_Defaults (0.01s)
        --- PASS: TestChart/Using_Defaults/HelmLint (0.01s)
    --- PASS: TestChart/Override_.Values.data (0.00s)
        --- PASS: TestChart/Override_.Values.data/HelmLint (0.00s)
    --- FAIL: TestChart/Coverage (0.00s)
FAIL
FAIL    path/to/hull/examples/tests/workspace   0.845s
FAIL
```
Note: Hull adds additional linting for Rancher charts, such as validating that certain annotations exist in the correct format. This can be enabled by supplying additional options in the second argument of the `suite.Run` call, but it is disabled by default.

To encode additional linting, Hull uses the same underlying mechanism as Helm does, as seen in `pkg/chart/template_lint.go`.

Feature requests to add additional custom linters (or the ability to supply custom linters that organizations can "plug in" to the `helm lint` action) are welcome!
Note: If you set `suiteOptions.YAMLLint.Enabled` to true (default is false), Hull will also run `yamllint` on the YAML generated for each `test.Case`, based on the configuration in `pkg/chart/configuration/yamllint.yaml` (or whatever you provide to `suite.YAMLLint.Configuration`).

However, this gives Hull an external dependency: you will need `yamllint` installed on your machine (or in the container you are using to run Hull) to run this check.
Note: If you see a lint failure and want to debug where it is coming from, Hull natively supports outputting a Markdown file to a location identified by the environment variable `TEST_OUTPUT_DIR`.

When this environment variable is set, Hull will create a file at `${TEST_OUTPUT_DIR}/test-${UNIX_TIMESTAMP}.md` on every failed test execution that formats all the test errors in a human-readable way.
Our next step is to add a check!
### Adding a `test.NamedCheck`

To ensure that we can pass coverage for `.Values.data`, we will introduce a no-op `test.NamedCheck` to our test suite.

On adding it, our test suite should look as follows:
```go
var suite = test.Suite{
    ChartPath: ChartPath,

    Cases: []test.Case{
        {
            Name:            "Using Defaults",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace),
        },
        {
            Name: "Override .Values.data",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
                Set("data", map[string]string{
                    "hello": "cattle",
                }),
        },
    },

    NamedChecks: []test.NamedCheck{
        {
            Name:   "Test",
            Covers: []string{".Values.data"},
            Checks: test.Checks{},
        },
    },
}
```
If you modify the suite to match the above, you will find that `.Values.data` now passes coverage!

This is because there exists at least one `test.Case` whose `TemplateOptions` modify `.Values.data` in some way and at least one `test.NamedCheck` that runs against that `test.Case` and covers that particular value; as long as this requirement is satisfied, Hull is happy to let coverage pass for that field.
### What are `test.Checks`?

While it's great that the tests are passing in our example above, we're still passing them as a false positive; we need to actually execute a check on the generated manifest to truly cover this field of the chart.

To do this, you can specify `test.Checks`, which is a slice of `checker.ChainedCheck`s; each `checker.ChainedCheck` produces a function that runs on the Kubernetes objects contained in a rendered Kubernetes manifest generated from a Helm chart.

Each `chart.Template` can run the `test.Checks` by running `template.Check(testingT, checker.NewCheckFunc(checks))`; this is precisely what the `test.Suite` does on a `Run`, with a couple of extra configuration options.
### What should a `checker.ChainedCheck` do?

Examples of what you may want to encode in a `checker.ChainedCheck` function (see the sketch after this list) include:

- Resources follow best practices that allow them to be deployed onto specialized environments (i.e. Deployments have `nodeSelector`s and `toleration`s for Windows)
- Resources meet other business-specific requirements (i.e. all images start with `rancher/mirrored-` unless they belong on a special allow-list of Rancher original images)
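For instance, the image-prefix rule above might be encoded roughly like this, using the `checker.PerResource` helper that appears later in this document; the allow-list contents and the image-parsing logic here are illustrative, not Rancher's real policy.

```go
import (
    "strings"

    "github.com/rancher/hull/pkg/checker"
    corev1 "k8s.io/api/core/v1"
)

// A hedged sketch of a business-specific image check; the allow-list below is
// hypothetical, and a real rule would likely cover Deployments, DaemonSets,
// and other workload types rather than bare Pods.
var imagesAreMirrored = checker.PerResource(func(tc *checker.TestContext, pod *corev1.Pod) {
    allowList := map[string]bool{
        "rancher/rancher-agent": true, // hypothetical Rancher-original image
    }
    for _, container := range pod.Spec.Containers {
        repo := strings.SplitN(container.Image, ":", 2)[0]
        if !strings.HasPrefix(container.Image, "rancher/mirrored-") && !allowList[repo] {
            tc.T.Errorf("Pod %s: image %s is neither mirrored nor allow-listed", pod.Name, container.Image)
        }
    }
})
```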
This is the core of what a chart owner will want to use Hull for; therefore, by design, Hull is extremely permissive with respect to how a chart developer can choose to write their checks!
### Writing a `checker.ChainedCheck`

To create a `checker.ChainedCheck`, you can use any of the helper functions defined in `pkg/checker/loop.go` (or any of the other files in the `checker` package); these helper functions are intended to simplify the workflow of defining a `checker.ChainedCheck` by allowing a user to provide functions based on Go generics to do common actions on resources.

All `checker.ChainedCheck`s (including those produced by the helper functions) take a function that receives the `checker.TestContext`, a construct that allows checks to perform contextual actions (see the sketch after this list), such as:

- Running `checker.RenderValue[myType](tc, ".Values.something.i.want")` to extract a value from the rendered values of the `chart.Template`, such as `.Chart.Name` or `.Values.data`
- Running `checker.Store("Some Value I Store", "myValue")` / `checker.Get[string, string]("Some Value I Store")` to set arbitrary values for future checks in the chain to use
- Running `checker.MapSet` / `checker.MapGet` to set groupings that you can later iterate through using `checker.MapFor`; this is useful if you want to do something like find all workloads running Windows workloads (`MapSet`) in one check and verify that they have `nodeSelector`s set for Windows (`MapFor`) in another
- Running `checker.HasLabels(obj, expectedLabels)` or `checker.HasAnnotations(obj, expectedAnnotations)` to simplify checks that would otherwise need to be written for any arbitrary `metav1.Object`; contributions are welcome for more such functions!
- Running `checker.ToYAML` or other functions that handle performing basic transformations from objects to string representations for you
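As a small illustration of these contextual actions, here is roughly what a check using `checker.RenderValue` looks like; this sketch leans on the `checker.NewChainedCheckFunc` helper described further below, and both the struct field name and the label being checked are illustrative.

```go
import (
    "github.com/rancher/hull/pkg/checker"

    corev1 "k8s.io/api/core/v1"
)

// A hedged sketch: extract .Chart.Name from the render context and assert
// that every rendered ConfigMap carries it as a label. RenderValue's
// (value, ok) return shape follows the note later in this document.
var configMapsCarryChartName = checker.NewChainedCheckFunc(func(tc *checker.TestContext, objs struct {
    ConfigMaps []*corev1.ConfigMap
}) {
    chartName, ok := checker.RenderValue[string](tc, ".Chart.Name")
    if !ok {
        tc.T.Error("could not render .Chart.Name")
        return
    }
    for _, cm := range objs.ConfigMaps {
        if cm.Labels["app.kubernetes.io/name"] != chartName {
            tc.T.Errorf("ConfigMap %s is not labeled with chart name %s", cm.Name, chartName)
        }
    }
})
```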
### Writing a custom `checker.ChainedCheck`

You can also define your own `checker.ChainedCheck` in Hull; to do this, you will need to define a function that takes in a `checker.TestContext` and returns a `CheckFunc`. However, if you look at the type definition of a `CheckFunc`, you may notice that it is an empty `interface{}`!

This is because, under the hood, Hull validates the `CheckFunc`'s function signature at runtime based on logic encoded in `pkg/checker/internal/do.go`.

Note: The logic used for the checker is heavily tested in `pkg/checker/internal/do.go` with multiple extremely complex examples. However, contributions for any testing gaps are welcome!

In essence, what Hull looks for in a `CheckFunc` is any object matching a signature of `func(*testing.T, someRuntimeObjectStruct)`, where `someRuntimeObjectStruct` is "any struct that contains only slices of objects that implement `k8s.io/apimachinery/pkg/runtime.Object` somewhere in the struct".

For example, a function with a signature like `func(t *testing.T, cms struct{ ConfigMaps []*corev1.ConfigMap })` is perfectly acceptable! Another perfectly acceptable function would be:
```go
type myStruct struct {
    CronJobs     []*batchv1.CronJob
    DaemonSets   []*appsv1.DaemonSet
    Deployments  []*appsv1.Deployment
    Jobs         []*batchv1.Job
    StatefulSets []*appsv1.StatefulSet
}

func MyCheck(t *testing.T, workloads myStruct) {
}
```
On introspecting the function's signature, Hull will automatically place the applicable rendered template objects into your provided struct to execute each test!

In this way, Hull encourages chart developers to think of changes applied to the `values.yaml` as changes to sub-manifests: specific sets of resource types, encoded in `someRuntimeObjectStruct`, that you, the chart tester, care about when writing the test.

For example, in the above `MyCheck` function that takes in all the workload types, you may want to encode a check that determines whether all those workloads have resource requests and/or limits set. This is fairly easy to do in Hull: just loop through each of the objects in your desired struct and execute the check, emitting a `t.Error` if it fails.

If you need to do something like this, you may want to define such a custom check using `checker.NewChainedCheckFunc`, which simplifies the declaration of such a function: you put your custom struct in the signature of the function passed into `checker.NewChainedCheckFunc`, and the struct's type will be inferred as the value of the type parameter `S`, as sketched below.
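Here is a minimal sketch of that resource-requests check, assuming the `myStruct` definition above and the `checker.NewChainedCheckFunc` behavior just described (the helper's exact signature should be confirmed against `pkg/checker`):

```go
import (
    "github.com/rancher/hull/pkg/checker"

    corev1 "k8s.io/api/core/v1"
)

// hasResources flags containers with neither requests nor limits set.
func hasResources(c corev1.Container) bool {
    return len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0
}

// A hedged sketch: loop through one workload type and emit a t.Error per
// offending container. A real check would walk every field of myStruct
// (CronJobs, DaemonSets, Jobs, StatefulSets) the same way.
var workloadsHaveResources = checker.NewChainedCheckFunc(func(tc *checker.TestContext, workloads myStruct) {
    for _, deployment := range workloads.Deployments {
        for _, container := range deployment.Spec.Template.Spec.Containers {
            if !hasResources(container) {
                tc.T.Errorf("Deployment %s/%s: container %s has no resource requests or limits",
                    deployment.Namespace, deployment.Name, container.Name)
            }
        }
    }
})
```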
### Dealing With Custom Resources

If you plan to write a `checker.ChainedCheck` on a custom resource in Kubernetes, the major caveat is that you need to ensure that the custom resource's Go types are added to the `checker.Scheme` defined in `pkg/checker`.

If you have developed Kubernetes controllers before, this will look familiar: the usual way to do this is by calling the `AddToScheme` function generated by most Go-based Kubernetes controller frameworks.

For example, here is how you would add the `corev1` Go types to ensure that Hull can correctly identify that a YAML object with `apiVersion: v1` and `kind: ConfigMap` should be unmarshalled into the `*corev1.ConfigMap` struct:
```go
import (
    "github.com/rancher/hull/pkg/checker"

    corev1 "k8s.io/api/core/v1"
)

func init() {
    if err := corev1.AddToScheme(checker.Scheme); err != nil {
        panic(err)
    }
}
```
If you also need your `CheckFunc` to work with other workload types, you may need to import `appsv1` and `batchv1` as well, so you can modify this accordingly:

```go
import (
    "github.com/rancher/hull/pkg/checker"

    appsv1 "k8s.io/api/apps/v1"
    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
)

func init() {
    if err := appsv1.AddToScheme(checker.Scheme); err != nil {
        panic(err)
    }
    if err := batchv1.AddToScheme(checker.Scheme); err != nil {
        panic(err)
    }
    if err := corev1.AddToScheme(checker.Scheme); err != nil {
        panic(err)
    }
}
```
Note: Why is this necessary?

Without adding a type to the underlying `checker.Scheme`, Hull will not understand how to convert a YAML object it sees with a given `apiVersion` and `kind` into the corresponding Go struct in your function signature.

If a type cannot be found in the `checker.Scheme`, the object is assumed to be of type `*unstructured.Unstructured`.

You can also use `*unstructured.Unstructured` as a catch-all for objects of all types in `checker.CheckFunc`s; for example, if you want to check that all objects, regardless of type, have the expected labels, this is the type to use (see the sketch below).
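A catch-all label check over every rendered object might look roughly like this; the struct field name and the specific label are illustrative, and the helper follows the `checker.NewChainedCheckFunc` pattern described above.

```go
import (
    "github.com/rancher/hull/pkg/checker"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// A hedged sketch of a catch-all check: since *unstructured.Unstructured
// matches objects of any type, this slice receives every rendered resource.
var allObjectsAreLabeled = checker.NewChainedCheckFunc(func(tc *checker.TestContext, objs struct {
    All []*unstructured.Unstructured
}) {
    for _, obj := range objs.All {
        if _, ok := obj.GetLabels()["app.kubernetes.io/managed-by"]; !ok {
            tc.T.Errorf("%s %s is missing the app.kubernetes.io/managed-by label",
                obj.GetKind(), obj.GetName())
        }
    }
})
```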
### Adding a real `test.NamedCheck`

Now that we've gone over what `test.Checks` are, we can fix our previous suite to have a real check contained in our `suite.NamedChecks`. Let's also rename the check accordingly:
```go
var suite = test.Suite{
    ChartPath: ChartPath,

    Cases: []test.Case{
        {
            Name:            "Using Defaults",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace),
        },
        {
            Name: "Override .Values.data",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
                Set("data", map[string]string{
                    "hello": "cattle",
                }),
        },
    },

    NamedChecks: []test.NamedCheck{
        {
            Name:   "ConfigMaps have expected data",
            Covers: []string{".Values.data"},
            Checks: test.Checks{
                checker.PerResource(func(tc *checker.TestContext, configmap *corev1.ConfigMap) {
                    assert.Contains(tc.T,
                        configmap.Data, "config",
                        "%T %s does not have 'config' key", configmap, checker.Key(configmap),
                    )
                    if tc.T.Failed() {
                        return
                    }
                    assert.Equal(tc.T,
                        checker.ToYAML(checker.MustRenderValue[map[string]string](tc, ".Values.data")), configmap.Data["config"],
                        "%T %s does not have correct data in 'config' key", configmap, checker.Key(configmap),
                    )
                }),
            },
        },
    },
}
```
If we run this, we should find that the `simple-chart` passes this test! Now we're ready to move on to our next check.
Note: `checker.MustRenderValue` is a generic function, so it can return any type that you expect to find at the provided path.

Here, we expect a `map[string]string`, but it would be valid to provide a struct here, or even a pointer to a struct.

The only caveat is that `checker.MustRenderValue` cannot return `nil`. Therefore, in cases where the value can be `nil`, you should use `val, ok := checker.RenderValue(...)`, where `!ok` signifies that either nothing was found at that path (it was never set) or the value at that path is set to `nil`.

Here, I'm assuming it's safe to use `MustRenderValue` since the `simple-chart` always provides `.Values.data` as a default value, but a user setting `.Values.data` to `null` in the `values.yaml` can cause this check to panic (not fail) due to the use of `MustRenderValue` instead of `RenderValue`. You can try this out by adding a `test.Case` that does this!
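If you want to try that experiment, a hypothetical additional entry for the `Cases` list might look like the snippet below; whether `SetValue("data", "null")` translates to a YAML `null` exactly the way `--set data=null` does is an assumption worth verifying.

```go
// A hedged sketch of the suggested experiment: with a case like this added,
// the MustRenderValue-based check above should panic rather than fail.
{
    Name: "Null .Values.data",
    TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
        SetValue("data", "null"),
},
```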
Note: How did I know to `import corev1 "k8s.io/api/core/v1"` to get the type for a `*corev1.ConfigMap`?

For most built-in Kubernetes resources, the definition for the type can be found at `k8s.io/api/GROUP/VERSION`; since `ConfigMap` is in the default group (`""`, which translates to `core`) and the version of the group I'm looking for is `v1`, I found it at `k8s.io/api/core/v1`.

Similarly, a `ClusterRole` belongs to the API group `rbac` at version `v1`, so `import rbacv1 "k8s.io/api/rbac/v1"` allows me to use `*rbacv1.ClusterRole`.

Occasionally, when dealing with `metadata.*` fields in resources, you may also need to import `metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"`; this library contains the structs commonly used by all Kubernetes resources.

Remember to always use the pointer (`*corev1.ConfigMap`), not the value (`corev1.ConfigMap`), since the pointer is what implements the `runtime.Object` and `metav1.Object` interfaces, not the value!
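For reference, a test file touching ConfigMaps, RBAC resources, and object metadata would carry imports like the following (these import paths are real; the aliases are the conventional ones):

```go
import (
    corev1 "k8s.io/api/core/v1"                   // ConfigMap, Pod, Service, ...
    rbacv1 "k8s.io/api/rbac/v1"                   // ClusterRole, RoleBinding, ...
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" // ObjectMeta, LabelSelector, ...
)
```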
### Adding a `test.FailureCase`

Sometimes you want to test conditions where you expect a template failure, such as when you have a `fail` or `required` block in your templates; in these cases, you don't need to run any checks, but you do want to ensure that a user gets the right error.

To do this, you can add a `test.FailureCase`; it's fairly similar to a `test.Case`, but with the ability to declare coverage directly (since there is no produced manifest to run `suite.NamedChecks` on) and the ability to assert that a specific failure message is received.

Let's finish up our coverage by adding the last two `test.FailureCase`s, covering our full `values.yaml`:
```go
var suite = test.Suite{
    ChartPath: ChartPath,

    Cases: []test.Case{
        {
            Name:            "Using Defaults",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace),
        },
        {
            Name: "Override .Values.data",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
                Set("data", map[string]string{
                    "hello": "cattle",
                }),
        },
    },

    NamedChecks: []test.NamedCheck{
        {
            Name:   "ConfigMaps have expected data",
            Covers: []string{".Values.data"},
            Checks: test.Checks{
                checker.PerResource(func(tc *checker.TestContext, configmap *corev1.ConfigMap) {
                    assert.Contains(tc.T,
                        configmap.Data, "config",
                        "%T %s does not have 'config' key", configmap, checker.Key(configmap),
                    )
                    if tc.T.Failed() {
                        return
                    }
                    assert.Equal(tc.T,
                        checker.ToYAML(checker.MustRenderValue[map[string]string](tc, ".Values.data")), configmap.Data["config"],
                        "%T %s does not have correct data in 'config' key", configmap, checker.Key(configmap),
                    )
                }),
            },
        },
    },

    FailureCases: []test.FailureCase{
        {
            Name: "Set .Values.shouldFail",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
                SetValue("shouldFail", "true"),
            Covers: []string{
                ".Values.shouldFail",
            },
            FailureMessage: ".Values.shouldFail is set to true",
        },
        {
            Name: "Set .Values.shouldFailRequired",
            TemplateOptions: chart.NewTemplateOptions(DefaultReleaseName, DefaultNamespace).
                SetValue("shouldFailRequired", "true"),
            Covers: []string{
                ".Values.shouldFailRequired",
            },
            FailureMessage: ".Values.shouldFailRequired is set to true",
        },
    },
}
```
Coverage should now be passing for the full chart!

You can see the full working example at `../examples/tests/simple`, or a more complex example at `../examples/tests/example` that does not currently have full coverage (this is left as an exercise to the reader).
## Should I Use Hull?

### Is this the best Helm testing framework for me?

It depends on your use case!

Hull is great for organizations that:

- Need a fairly robust Helm template testing tool with easy extensibility
- Work primarily in Go (such as those that develop Go-based Kubernetes operators and ship them in Helm charts)

For example, Hull is targeted for use in rancher/charts today, a repository that maintains a large number of highly complex charts that primarily deploy operators built in Go.