Testing - jboursiquot/go-for-experienced-programmers GitHub Wiki

This section introduces testing in Go, which, unsurprisingly, is very much like writing any other kind of Go code.

In this section you'll learn how to…

  • use the testing package
  • use table-driven tests for multiple scenarios under the same test
  • do setup/teardown for your tests

The testing package

Go's standard library comes equipped with all you need to test your programs, mainly through the testing package. Let's start with the simplest example of this section.

package main

import (
	"testing"
	"unicode"
)

func TestRuneIsDigit(t *testing.T) {
	c := '4'
	if !unicode.IsDigit(c) {
		t.Error("expected rune to be a digit")
	}
}

ok  	github.com/jboursiquot/go-in-3-weeks/testing	0.009s	coverage: 0.0% of statements
Success: Tests passed.

The output you see above is from running go test inside of the folder where the test is located. The following is an example directory listing containing go files and matching test files:

├── somefile.go
├── somefile_test.go
├── main.go

In the listing above, somefile.go contains some form of logic while, by convention, somefile_test.go contains the code that tests that functionality. Regardless of how you name the file, as long as it ends with _test.go, the Go toolchain (go test) will treat it as a test file and include it in test runs.

Exercise 1: Using the testing package

For this exercise, we'll reach back to the library we created during the Getting Started section and write tests for the stringutils package. Here's the set of functions again:

package stringutils

import "strings"

// Upper returns the uppercase of the given string argument.
func Upper(s string) string {
	return strings.ToUpper(s)
}

// Lower returns the lowercase of the given string argument.
func Lower(s string) string {
	return strings.ToLower(s)
}

  1. Create an appropriately-named file for this package (Hint: it needs to end with _test.go).
  2. Write a test for each function in the package.
  3. Use the go toolchain on the command line to run the tests and see the results.

What you need to know

Table-Driven Tests

When you have multiple scenarios you'd like to verify within the same test, you can use a technique called "table-driven tests."

Consider the following:

package main

import (
	"fmt"
	"reflect"
	"testing"
)

func greeting(name string) string {
	return fmt.Sprintf("Hello, %s!", name)
}

func TestGreeting(t *testing.T) {
	tests := []struct {
		input string
		want  string
	}{
		{input: "Johnny", want: "Hello, Johnny!"},
		{input: "世界", want: "Hello, 世界!"},
	}

	for _, tc := range tests {
		got := greeting(tc.input)
		if !reflect.DeepEqual(tc.want, got) {
			t.Fatalf("expected: %v, got: %v", tc.want, got)
		}
	}
}

Table-Driven Subtests

  • Run a group of tests within a single test function.
  • Useful for running related tests together, especially for multiple input scenarios to the same function.
  • Allow for parallelized tests within a single test function.
  • Low overhead to add new scenarios.
  • Easy to reproduce reported issues.
  • Tip: Use a map[string]... instead of []struct{...} for the test cases to use map keys as test names in output.
package main

import "testing"

func Sum(nums []int) int {
	total := 0
	for _, num := range nums {
		total += num
	}
	return total
}

func TestSumParallel(t *testing.T) {
	tests := map[string]struct {
		nums     []int
		expected int
	}{
		"positive numbers":             {nums: []int{1, 2, 3}, expected: 6},
		"negative numbers":             {nums: []int{-1, -2, -3}, expected: -6},
		"mix of positive and negative": {nums: []int{-1, 2, -3, 4}, expected: 2},
		"zero values":                  {nums: []int{0, 0, 0}, expected: 0},
		"empty slice":                  {nums: []int{}, expected: 0},
	}

	for name, tt := range tests {
		tt := tt
		t.Run(name, func(t *testing.T) {
			t.Parallel()
			result := Sum(tt.nums)
			if result != tt.expected {
				t.Errorf("Sum(%v) = %d; expected %d", tt.nums, result, tt.expected)
			}
		})
	}
}

=== RUN   TestSumParallel
=== RUN   TestSumParallel/positive_numbers
=== PAUSE TestSumParallel/positive_numbers
=== RUN   TestSumParallel/negative_numbers
=== PAUSE TestSumParallel/negative_numbers
=== RUN   TestSumParallel/mix_of_positive_and_negative
=== PAUSE TestSumParallel/mix_of_positive_and_negative
=== RUN   TestSumParallel/zero_values
=== PAUSE TestSumParallel/zero_values
=== RUN   TestSumParallel/empty_slice
=== PAUSE TestSumParallel/empty_slice
=== CONT  TestSumParallel/positive_numbers
=== CONT  TestSumParallel/zero_values
=== CONT  TestSumParallel/empty_slice
=== CONT  TestSumParallel/mix_of_positive_and_negative
=== CONT  TestSumParallel/negative_numbers
--- PASS: TestSumParallel (0.00s)
    --- PASS: TestSumParallel/positive_numbers (0.00s)
    --- PASS: TestSumParallel/zero_values (0.00s)
    --- PASS: TestSumParallel/empty_slice (0.00s)
    --- PASS: TestSumParallel/mix_of_positive_and_negative (0.00s)
    --- PASS: TestSumParallel/negative_numbers (0.00s)

What's with the PAUSE and CONT?

When you run tests with subtests marked to run in parallel, the PAUSE and CONT steps show how the test framework schedules and coordinates their concurrent execution.


PAUSE Indicates that the subtest has been initialized and is ready to run, but is temporarily paused. This happens because the subtest is marked to run in parallel using t.Parallel(). Here's what happens during the PAUSE step:

  • Initialization: The test framework initializes the subtest and sets up its context.
  • Pause: The test framework initializes all parallel subtests and pauses them.
  • Scheduling: The test framework decides when to run the subtests, parallelizing their execution while maintaining proper synchronization and resource management.

CONT Indicates that the previously paused subtest is now being continued and executed. This is the point where the subtest actually runs its test logic. Here’s what happens during the CONT step:

  • Continuation: The test framework resumes the execution of the subtest. This happens after all parallel subtests have been initialized and paused.
  • Execution: The subtest runs its test logic, checking conditions, and making assertions.
  • Completion: Once the subtest finishes its execution, it reports the result (pass or fail).

Exercise 2: Using table-driven testing

Modify your solution to Exercise 1 to test at least a couple of different scenarios in each of your tests.


  1. Use a "table" of scenarios to test multiple cases within each test.
  2. Understand and make use of sub-tests.
  3. Make your subtests run in parallel.


You may be used to setup and teardown before and after each test from other languages and frameworks. Go supports a package-level version of this concept through the TestMain function from the testing package: code before m.Run() acts as setup for the whole test run, and code after it acts as teardown.

Here's how it looks:

func TestMain(m *testing.M) {
	// setup
	code := m.Run()
	// teardown
	os.Exit(code)
}
The idea is to perform any setup you need before calling on m.Run() and performing any teardowns after it. In the snippet above, we capture and pass an exit code to os.Exit.

We terminate the test run (and programs in general) with a zero (0) exit status to indicate success, or a non-zero status to indicate failure.

Exercise 3: Using setup/teardown

Modify your solution to Exercise 2 to add a TestMain function. In it, you can set up your test "table" for use within the other test functions and avoid repeating yourself.

What you need to know

Intro to Benchmarks

Go's testing package has built-in support for benchmarking the performance of Go code. In this section, we'll see how to write some simple benchmarks.

Benchmarking plays a part in performance tuning which is covered in the Advanced portion of this section. If you're interested in diving deeper, start with Profiling Go Programs.

Take the common recursive fibonacci implementation below:

func fib(n int) int {
	if n < 2 {
		return n
	}
	return fib(n-1) + fib(n-2)
}

To benchmark this function, our code, living in something like fib_test.go, will look something like this:

func benchFib(i int, b *testing.B) {
	for n := 0; n < b.N; n++ {
		fib(i)
	}
}

func BenchmarkFib1(b *testing.B)  { benchFib(1, b) }
func BenchmarkFib10(b *testing.B) { benchFib(10, b) }
func BenchmarkFib20(b *testing.B) { benchFib(20, b) }

To run our benchmarks, we invoke go test with the -bench flag like so:

$ go test -bench=.   
goos: darwin
goarch: amd64
pkg: github.com/jboursiquot/go-in-3-weeks/testing/benchmarks
BenchmarkFib1-12        742379979                1.62 ns/op
BenchmarkFib10-12        3690924               299 ns/op
BenchmarkFib20-12          30712             37152 ns/op
ok      github.com/jboursiquot/go-in-3-weeks/testing/benchmarks 4.615s

Some observations:

  1. Unlike regular tests that start with Test, benchmarks start with Benchmark.
  2. The benchmark functions run the target code b.N times; the testing framework adjusts b.N until the benchmark runs long enough to be timed reliably.
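One practical note: when a benchmark needs expensive setup, the testing package's b.ResetTimer lets you exclude that setup from the measurement. A sketch, reusing the fib function from above (the "setup" here is just a stand-in):

```go
package main

import "testing"

func fib(n int) int {
	if n < 2 {
		return n
	}
	return fib(n-1) + fib(n-2)
}

func BenchmarkFib20WithSetup(b *testing.B) {
	n := 20        // stand-in for setup work you don't want timed
	b.ResetTimer() // discard the time spent on setup above
	for i := 0; i < b.N; i++ {
		fib(n)
	}
}
```

Without the b.ResetTimer call, any work done before the loop would be counted against the first timing window and skew the ns/op result.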

Exercise 4: Writing benchmarks

Write benchmark tests for your stringutils package's Upper and Lower functions.


In this section you learned how to use the testing package to test your Go code. You picked up some efficiency tricks with table-driven tests and learned how to do setup/teardown for each test.