Creating a new task

Alan Cha edited this page Feb 4, 2022 · 2 revisions

Iter8's gen-load-and-collect-metrics task enables load testing of HTTP services and is built on top of Fortio. However, while Fortio has a gRPC health-check capability, it cannot load test arbitrary gRPC services. The excellent ghz project does have this capability.

This documentation describes the process used to develop the gen-load-and-collect-metrics-grpc task, which enables load testing of gRPC services and is built on top of ghz.

1. Scaffolding

We will create the logic for the gen-load-and-collect-metrics-grpc task in the file base/collect_grpc.go as follows.

package base

import (
	"errors"
)

// collectGRPCInputs holds all the inputs for this task
type collectGRPCInputs struct {
}

const (
	// CollectGRPCTaskName is the name of this task which performs load generation and metrics collection for gRPC services.
	CollectGRPCTaskName = "gen-load-and-collect-metrics-grpc"
)

// collectGRPCTask enables load testing of gRPC services.
type collectGRPCTask struct {
	TaskMeta
	With collectGRPCInputs `json:"with" yaml:"with"`
}

// initializeDefaults sets default values for the collect task
func (t *collectGRPCTask) initializeDefaults() {
}

// validate task inputs
func (t *collectGRPCTask) validateInputs() error {
	return nil
}

// Run executes this task
func (t *collectGRPCTask) Run(exp *Experiment) error {
	// 1. validate inputs
	var err error

	err = t.validateInputs()
	if err != nil {
		return err
	}

	// 2. initialize defaults
	t.initializeDefaults()

	// 3. collect raw results from ghz for each version

	// 4. The inputs for this task determine the number of versions participating in the experiment.
	// Hence, init insights with num versions

	// 5. Populate all metrics collected by this task
	return errors.New("not implemented")
}

At this point, the above file is mostly stubbed, and the collectGRPCTask is unimplemented. However, we have a valid task definition, which we can use to make progress.

2. Unmarshaling

The ExperimentSpec struct has a custom unmarshaller in the file base/experiment.go which we will extend as follows.

// some stuff above
// the unmarshaling code we're adding is as follows...
case CollectGRPCTaskName:
	cgt := &collectGRPCTask{}
	// check the unmarshaling error instead of silently ignoring it
	if err := json.Unmarshal(tBytes, cgt); err != nil {
		return err
	}
	tsk = cgt
// more stuff below

At this point, you have a fully defined task. Unfortunately, using this task in an experiment will cause the experiment to fail, since the task is not implemented and running it will result in an error. We will continue its development as follows.

3. Defining Inputs

The struct holding the inputs to this task was stubbed out in step 1. It is now time to define the inputs properly, as follows.

A key thing to remember in this step is to properly document all inputs.

Please refer to the base/collect_grpc.go file for the inputs (TBD: show marked up code in GitHub)

4. Implementing Run

Time to implement the Run method for this task. Please refer to the base/collect_grpc.go file for the detailed implementation (TBD: show marked up code in GitHub). There are two areas in this file that deserve special attention.

Init insights

This might be the first task executed in the experiment, and it populates insights. Hence, initializing insights is necessary in this task.

// 4. The inputs for this task determine the number of versions participating in the experiment.
// Hence, init insights with num versions
err = exp.Result.initInsightsWithNumVersions(len(t.With.VersionInfo))
if err != nil {
	return err
}
in := exp.Result.Insights

Update metrics

// here, i is the version index and gr holds the per-version raw results from ghz
m := iter8BuiltInPrefix + "/" + gRPCRequestCountMetricName
mm := MetricMeta{
	Description: "number of gRPC requests sent",
	Type:        CounterMetricType,
}
in.updateMetric(m, mm, i, float64(gr[i].Count))

Use the above pattern whenever you create or update metrics; a single method handles both cases. Refer to base/collect_grpc.go for more context.

5. Testing the task

You have two options for testing a task, both of which you will exercise at some stage.

  1. Write the unit test that programmatically tests your task (see examples of unit tests for tasks in the base package)
  2. Create an experiment.yaml file with your task in it. Create a chart with this experiment.yaml as the "experiment template". You can do this as follows.
iter8 hub load-test
cd load-test
echo '{{ define "load-test.experiment" -}}' > templates/_experiment.tpl
cat <your experiment.yaml> >> templates/_experiment.tpl
echo '{{ end }}' >> templates/_experiment.tpl

You can now do

iter8 run

This will run the experiment containing your new task.