
Go

Language characteristics:

  • Strong, static type system
  • C-inspired syntax
  • Multi-paradigm (procedural and object-oriented)
  • Garbage collector
  • Fully compiled
  • Rapid compilation
  • Single binary (however, intermediate libraries, plug-in systems are available)
  • Composite type system

Use cases:

  • Web services (moving data between servers)
  • Web applications (end user, html, etc)
  • DevOps space
  • GUI
  • Machine learning

Semicolons are automatically inserted by the compiler at the end of a line when the last token can end a statement (e.g. an identifier, a literal, or a closing bracket).

Sample program:

package main

import (
	"fmt"
)

func main() {
	fmt.Println("Hello world!")
}

For the build tool to produce an executable, there must be a main function, which serves as the entry point, and it must reside in the main package.

Data types

Go is a statically typed language. Variables can be declared explicitly or with the implicit initialization syntax (the compiler determines the type).

A good rule of thumb is to use var when declaring variables that will be initialized to their zero value, and the short variable declaration := when providing an explicit initial value or the result of a function call.

Simple types

Go builtin package documentation - about simple types and more.

Declaring a variable:

package main

import "fmt"

func main() {
    // Verbose
    var i int
    i = 42

    // One line
    var f float32 = 3.14

    // Implicit initialization syntax
    var name = "Mike"
    // Short declaration syntax ":="
    firstName := "Name"

    // Complex type
    c := complex(3, 4)
    // Split real and imaginary parts; similarly declare multiple variables at
    // once
    re, im := real(c), imag(c)
    a, b := 3, 5

    // Variable block (does not require short declaration syntax `:=`):
    var (
        index = 1
    )

    // Use the variables so the example compiles (unused locals are an error)
    fmt.Println(i, f, name, firstName, re, im, a, b, index)
}

Division of two integers is always integer division (the remainder is dropped); division of floating-point numbers produces a floating-point result.

rune (type rune = int32) is a special type for characters. It was designed to cover Unicode code points, whose UTF-8 encoding varies from 1 to 4 bytes. A rune is therefore treated as a logical character instead of raw bytes. Looping over a string with range yields runes.
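A short illustration of the difference between byte length and ranging over runes (a minimal sketch; the sample string is arbitrary):

package main

import "fmt"

func main() {
    s := "héllo"
    // len counts bytes; "é" takes 2 bytes in UTF-8
    fmt.Println(len(s)) // 6
    // range decodes the string rune by rune and yields the byte index of each
    for i, r := range s {
        fmt.Printf("%d: %c\n", i, r)
    }
}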

Strings

String types:

  • "example" (with quotes) - interpreted string, handles escape characters as well
  • `example` (backticks) - raw string
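For example:

s1 := "line one\nline two" // \n is interpreted as a newline
s2 := `line one\nline two` // backslash and n stay literal characters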

Numbers

Numeric types include integers, unsigned integers, floating point numbers, and complex numbers. Common examples are int, uint, float32 and float64, complex64, complex128 respectively.


Boolean

Simply true or false value.


Error types

error is the conventional interface for representing an error condition, with the nil value representing no error. Any type that implements the Error() string method satisfies the interface, meaning it can report what the actual error was.

Idiomatic error handling: the most common pattern is to return a value along with an error from a function (or simply an error, if no value needs to be returned), so that the error value indicates whether the function was successful:

func main() {
    value, err := someFunction()
    if err != nil {
        fmt.Printf("An error occured: %s\n", err)
        return
    }
    println(value)
}

Pointer

An uninitialized pointer equals nil. A value can not be assigned through an uninitialized pointer; it should first be initialized with the new() function or by taking the address of an existing variable.

package main

import "fmt"

func main() {
    var firstName *string = new(string)
    *firstName = "Name"
    fmt.Println(*firstName)

    foo := "bar"
    ptr := &foo
    *ptr = "updated_bar"
    fmt.Println(ptr, *ptr)
}

Constant

A constant must be initialized at declaration time (same line), and its value must be known at compile time. Implicitly typed constants (the type is reevaluated for each use) can also be declared at package level (outside of any function), either standalone or in a const block.

An implicitly typed constant is treated as an untyped literal, which allows it to be used in other contexts, e.g. assigning an integer constant to a floating-point variable. An explicitly typed constant can only be used where that specific type is allowed.

A constant declared without a value repeats the constant expression of the previous constant in the block, e.g. 2 * 5.

package main

import "fmt"

const(
    foo = "bar"
    b // "bar"
)

// Implicitly typed
const pi = 3.14

func main() {
    const pi float32 = 3.14

    const c = 3
    fmt.Println(c + 2)
    fmt.Println(c + 1.5)
}

iota

Used only in the context of constant declarations. iota represents an expression based on the position within the block (starting at 0), not a literal value, and it can be used in expressions. The value of iota resets with each new const block. The expression is repeated for every subsequent constant in the block until the block ends or a new assignment is found.

package main

const(
    a = iota     // 0
    b            // 1
    c = 3 * iota // 6
)

const(
    d = iota // 0
)

const(
    e = 5
    f = iota // 1
)

Aggregate data types

Array

A fixed-size collection (value type). All items must be of the same type. Two arrays can be compared with the == operator: length and element type must match, then each item is compared for equality. When passed to a function, the entire array is copied over (prefer passing a pointer to the array in such cases).

var array [3]int
array[0] = 1

// Array literal - provide values on the same line
array := [3]int{1, 2, 3}
// Let Go automatically determine the length needed
array := [...]int{1, 2, 3}
// Initialize specific elements, the rest will be initialized to zero value
// for the type
array := [3]int{1: 5, 2: 10}

fmt.Println(len(array))

// Arrays are copied by value. array2 is a distinct array with exactly same
// value. Type must be the same, which is both the element's type and array
// length. For pointer arrays the pointer value itself is copied, not what it
// points to.
array2 := array

// Access pointer element
array := [5]*int{0: new(int), 1: new(int)}
*array[0] = 1


// Multidimensional array. For comparison and assignment to work, both the
// outer length and the inner element length must match.
array := [3][2]int{{1, 2}, {3, 4}, {5, 6}}

Slice

A dynamically sized collection (reference type); it acts as an abstraction over an underlying array, providing the same benefits of a contiguous block (indexing, iteration) plus garbage collection. The slice itself holds 3 pieces of data: a pointer to the underlying array, the current length (what the slice has access to at the moment), and the total size of the array (capacity). For slices, the len function returns the length, and the cap function returns the capacity. Passing a slice to a function copies only those 3 values: pointer to the array, length, and capacity.

// Second argument - length, third - capacity.
slice := make([]int, 3, 5)
// If capacity is omitted, it is the same as length.
slice := make([]int, 3)

// Slice literal - initial length and capacity are determined by Go.
slice := []int{1, 2, 3}
// The following would create a slice with length and capacity of 10
slice := []int{9: 9}

// nil slice - there is no underlying array, length and capacity are 0
var slice []int
// empty slice behaves almost identical to nil slice. Both work the same with
// len, append and cap functions.
slice := make([]int, 0)
slice := []int{}

The append function increases the length of a slice. Capacity may also increase, depending on the available capacity of the source slice. It accepts a source slice and the item(s) to append, and returns a new slice. If no capacity is available, append allocates a new array and copies the existing values over. Capacity is doubled while the existing capacity is under 1,000 elements; above that it grows by a factor of about 25% (the algorithm may change).

slice := append(slice, 4)

// Append all elements from the second slice
s1 := []int{1, 2}
s2 := []int{3, 4}
// Spread operator
s3 := append(s1, s2...)

// Remove elements from the slice; remove indices 1 and 2
// Requires experimental slices package as of 1.19
slice = slices.Delete(slice, 1, 3)

New slices can be created that share the same underlying array, using any existing capacity. Two slices can not be compared with ==.

// Starting from 1st element
s2 := slice[1:]
// Up to but not including
s3 := slice[:2]
// Starting and ending indexes
s4 := slice[1:2]

arr := [4]int{1, 2, 3, 4}

// Slice operator. Any changes to array are reflected in a slice and vice versa.
// Slice is sort of pointing to the array.
slice := arr[:]
// Length of 2, capacity of 3.
slice := arr[1:3]

Calculating the length and capacity of a new slice slice[i:j], taken from a slice of capacity k:

  • Length: j - i
  • Capacity: k - i

A third index in the slice operation (slice[i:j:k]) limits the capacity of the new slice to k - i. This gives more control over the slice creation process, e.g. restricting the new slice's capacity to equal its length forces the next append to create a separate underlying array with the same values.

source := []int{1, 2, 3, 4}
slice := source[1:2:2]

Slices can be nested. The same append rules apply.

slice := [][]int{{1}, {2,3}}
slice[0] = append(slice[0], 4)

Map

Similar to a slice, a map is a reference type - the actual data is stored elsewhere. Two maps can not be compared. A map key can be any built-in or struct type as long as it can be used with the == operator; slices, functions, and struct types that contain slices can not be used as keys.

m := make(map[string]int)
// Map literal
m := map[string]int{"foo": 42}

// Update existing item or create new
m["bar"] = 21

// Remove key/value pair
delete(m, "foo")

// nil map. Can not be used to store data.
var m map[string]int

// Get map size
len(m)

Queries always return a result; for a non-existing key the zero value for the type is returned, e.g. 0 for int. The comma ok syntax can be used to determine whether the value actually came from the map. ok is just a conventional name for the variable in this context; it is of type bool.

v, ok := m["foo"]

User-defined types

The type keyword is used to define a custom structure, which is basically defining a new type. It can also declare a new type represented by an already existing type, built-in or reference. As Go does not implicitly convert types, the new type and its backing type are not compatible and are completely distinct.

type Duration int64

// Code below will not work
var dur Duration
dur = int64(10)

Since the compiler only allows methods to be added to user-defined named types, it is common to declare a new type based on an already existing type. The new type gets its own copy of the backing type's fields.

type IP []byte

A simpler version of the above is a type alias. The new name behaves exactly as the backing type and can be used interchangeably with it. However, it isn't possible to extend such a type, as that would be the same as extending the built-in type, which is not allowed; use the type declaration above for that. An alias shares the fields and method set of the original type.

type UserID = string

Struct

Fields can be of any type, but are fixed at compile time. Declaration and definition must be separate. Each field is initialized to its zero value. A struct is a value type, like arrays, which means an assignment operation creates a separate copy of the value.

// Anonymous structure
var s struct {
    name string
}

// Custom structure
type user struct {
	ID int
	firstName string
	lastName string
}

The var keyword is a good way to indicate that the variable is being set to its zero value. Otherwise, use the short variable declaration operator.

var u user
u.ID = 1
u.firstName = "Alex"

// With field name: order of the fields does not matter. Trailing comma on each
// line.
u2 := user{
    firstName: "Alex",
    ID: 1,
    lastName: "Dot",
}
// Without field names, must follow order in the type declaration. Traditionally
// put on the same line; no trailing comma.
u3 := user{2, "John", "Comma"}

Initializing nested types:

// user type from above
type admin struct {
    person user
    level string
}

u := admin {
    person: user{
        ID: 1,
        firstName: "Alex",
        lastName: "Dot",
    },
    level: "super",
}

Method

Provides a way to add behavior to user-defined types.

type someStruct struct {}

func (ss someStruct) func_name(param1 int, param2 string) {}

var newStruct someStruct
newStruct.func_name(35, "string")

A method receiver (defined between func keyword and the name of the function) binds a function to specific type (a function with a receiver becomes a method). Now this method only works in the context of a specific type.

Receiver could be of type value or pointer. When data has to be shared between the caller and a method, a pointer should be used. Best practice is to declare pointer receiver, since the method most likely will need to manipulate the state; value receiver can be declared if no manipulation is needed.

type user struct {
  id    int
  name  string
}

// Method operates against a copy of a value that was used to make the call.
func (u user) String() string {
  return fmt.Sprintf(..)
}

// The value that was used to make the call is shared with the method.
func (u *user) UpdateName(name string) {
    u.name = name
}

For reference type values use value type receiver, since reference type value should not be shared - copy of the value will point to the same underlying data structure, thus, sharing is done intrinsically.

Go automatically takes the address of a value when calling a method with a pointer receiver: user.UpdateName() is implicitly converted to (&user).UpdateName(). However, it does not do this for interfaces:

type Printer interface {
    Print()
}

type MyStruct struct{}

func (m *MyStruct) Print() {
    fmt.Println("Hello from pointer receiver")
}

func main() {
    var p Printer

    p = MyStruct{} // Compile error; the user must be explicit: p = &MyStruct{}
	p.Print()

    k := MyStruct{}
	k.Print()
}

If one or more methods are defined with pointer receivers, it is common to transform all methods to pointer receiver type for consistency, even though it is not required.

Interface

A type that declares behavior but never implements it; the behavior takes the form of methods. When a user-defined (concrete) type implements those methods, a value of that type can be assigned to a value of the interface type; only the interface methods will be visible through it.

Internally an interface type variable consists of 2 parts: first one is pointer to internal table, iTable, which contains type information about stored value, second one is pointer to stored value. Whether value or pointer type has been assigned to an interface value will be reflected in iTable.

Method sets define the rules around interface compliance. In short: methods declared with a value receiver can be called through interface values that contain either values or pointers; methods declared with a pointer receiver can be called only through interface values that contain pointers. The restriction comes from the fact that it is not always possible to take the address of a value, e.g. some_function(value).

type Reader interface {
  Read([]byte) (int, error)
}

type File struct {...}
func (f File) Read(b []byte) (n int, err error)

type TCPConn struct {...}
func (t TCPConn) Read(b []byte) (n int, err error)

var f File
var t TCPConn

var r Reader
r = f
r.Read(...)

r = t
r.Read(...)
var f File
var r Reader = f
// A compile time error, since Go doesn't know the underlying type of an
// interface in general.
var f2 File = r

// Type assertion - tell explicitly what type the variable contains. If
// wrong, a runtime panic occurs.
f2 = r.(File)
// Safe type assertion
f2, ok := r.(File)

To test an interface for multiple underlying types use type switch:

var f File
var r Reader = f

switch v := r.(type) {
case File:
  // v is now a File object
case TCPConn:
  // v is now a TCPConn object
default:
  // if v is none of the objects types above
}

Naming convention for interfaces: if there is only one method, the name should end with er, otherwise name should relate to generic behavior.

Type embedding

Builds on top of the struct and method notions. It allows declaring a given type inside another type, creating an inner and outer type relationship. To embed a type, simply declare it inside another. The outer type has all components of the inner type, fields and methods; inner type methods are promoted to the outer type - they can be called directly on the outer type or through the inner type by accessing it first.

type user struct {
    name string
    email string
}

type admin struct {
    user
    level string
}

func (u *user) notify() {
    fmt.Println("Sending message to %s", u.name)
}

func main() {
    ad := admin{
        user: user{
            name: "John",
            email: "[email protected]"
        },
        level: "super",
    }

    // Access inner method
    ad.user.notify()
    // Access promoted method
    ad.notify()
}

The outer type can still declare its own behavior and fields if needed, thereby overriding the promoted methods and fields. The inner members remain accessible directly through the inner type.

Promoted method also works with interfaces: outer type will be considered as matching the interface, even if the latter is declared with inner type.

If a struct embeds 2 interfaces that declare the same method, calling that method on the outer type is a conflict, as the compiler would not know which one to execute. In such cases a wrapper method should be created that explicitly uses one of the embedded interfaces, as shown below.
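A minimal sketch of such a conflict and its resolution (the interface and type names are made up for illustration):

type Greeter interface {
    Hello() string
}

type Welcomer interface {
    Hello() string
}

type host struct {
    Greeter
    Welcomer
}

// Calling Hello() on a host value would be ambiguous without this wrapper;
// it explicitly delegates to the Greeter implementation.
func (h host) Hello() string {
    return h.Greeter.Hello()
}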

Interface embedding

Similarly to struct embedding, interfaces can be embedded into structs, directly exposing the methods they declare. These methods can be overridden by the struct simply by defining its own method.

Interfaces can also be embedded into other interfaces. Outer interface would have direct access to methods inner interface declares.

type Identifiable interface {
    ID() string
}

type Citizen interface {
    Identifiable
    Country() string
}

type euIdentifier struct {
	id string
	country string
}

func (eui euIdentifier) ID() string {
	return eui.id
}

func (eui euIdentifier) Country() string {
	return fmt.Sprintf("EU: %s", eui.country)
}

func NewEuIdentifier(id, country string) Citizen {
	return euIdentifier{
		id:      id,
		country: country,
	}
}

type Name struct {
	First string
	last string
}

type Person struct {
    Name
    Citizen
}

Comparing types

Struct objects can be comparable (==, !=):

  • both have to be of the same type
  • type has to have a predictable memory layout: no slices, maps, or functions. This also makes the type hashable (it can be used as a key in maps)

Values of the same interface type can also be compared. This allows values with different underlying struct types to be compared (the result is simply false when the dynamic types differ), even though direct comparison of structs of different types is not possible. If the underlying type contains an incomparable field, the compiler will not catch this, as it only checks the interface signature (methods, return types, etc.); the comparison will fail at runtime. See the sketch below.
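A small sketch of both cases (the types are illustrative):

type point struct{ x, y int }

p1 := point{1, 2}
p2 := point{1, 2}
fmt.Println(p1 == p2) // true, compared field by field

var a, b interface{} = p1, "not a point"
fmt.Println(a == b) // false, dynamic types differ

var c interface{} = []int{1}
var d interface{} = []int{1}
fmt.Println(c == d) // compiles, but panics at runtime: slices are not comparable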

Code structure

All init functions in any source file that is part of a program are called before the main function.

The embed package allows external files to be made available to the program natively by including them in the binary.
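A minimal sketch, assuming a version.txt file sits next to the source file:

package main

import (
    _ "embed"
    "fmt"
)

//go:embed version.txt
var version string

func main() {
    fmt.Print(version)
}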

Function

func function_name(parameters) (return values) {
  function_body
}

Factory functions (which create and return a value of a specific type) are called New in Go by convention. A common practice is to declare fields as private and provide functions (with parameters if needed) that create and return an object (NewSmth), modify fields, or simply expose them, as in the sketch below.
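A minimal sketch of the convention (package, type, and field names are illustrative):

package user

// User has only private fields, so other packages must go through New.
type User struct {
    name string
}

// New is the factory function for the package's main type.
func New(name string) *User {
    return &User{name: name}
}

// Name exposes the private field read-only.
func (u *User) Name() string {
    return u.name
}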

Parameters

Function parameters can stay unused - this won't cause compile error. Parameters of the same type can be listed under single type as a list.

func function_name(param1 int, param2 string) {
}

func function_name(param1, param2 int) {
}

func function_name(param1, param2 int) bool {
	return true
}

A variadic parameter transforms the type into a collection; within the function it acts as a slice of the specified type. Only one variadic parameter can be specified, and it must be last. A function call accepts multiple values separated by commas.

func print_names(names ...string) {
    for _, n := range names {
        println(n)
    }
}

func main() {
    print_names("Alex", "Max", "Sam")

    // Unpack slice to match function signature
    names := []string{"John", "Luke"}
    print_names(names...)
}

Return value

Return values are optional. Multiple values can be returned. A common practice is to return a value alongside an error in case the function encounters a problem. To ignore a return value, use the blank identifier.

import "errors"

func function_name(param1, param2 int) error {
	return errors.New("smth went wrong")
}

func function_name(param1, param2 int) (int, error) {
	return 1, nil
}

// Ignore a return value with the blank identifier
_, err = function_name()

Return values can also be named; in that case a naked return statement can be used (a normal return statement still works as well); rarely used:

func function_name(param1, param2 int) (result int, err error) {
    result = 1
    err = nil
	return // naked return, values can be optionally specified
    // return result, err
}

// Return as soon as err is set, before the division is attempted
func divide(x, y float64) (result float64, err error) {
    if y == 0 {
        err = errors.New("can't divide by 0")
        return
    }
    result = x / y
    return
}

Anonymous function

Can accept and return values just like regular functions:

func main() {
    func() {
        println("Inside anonymous function!")
    }()

    a := func(str string) {
        fmt.Println("Inside anonymous function assigned to a variable!")
        fmt.Printf("Passed in parameter: %s\n", str)
    }

    a("foo")
}

This also allows functions to be returned from a function and passed in as parameters:

func main() {
    f := getFunction()
    println(f(3, 5))
    acceptFunction(1, 2, f)
}

func getFunction() func(int, int) int {
    return func(a int, b int) int {
        return a + b
    }
}

func acceptFunction(a int, b int, f func(int, int) int) {
    println("Accepted a function!")
    result := f(a, b)
    println(result)
}

A stateful function (closure) can store information between invocations, similar to a static variable:

func main() {
    f := getFunc()
    value := f()
    println(value)
    value = f()
    println(value)
}

func getFunc() func() int64 {
    x := 1
    return func() int64 {
        x += 1
        return x
    }
}

Package

A package is a directory located in the same space as go.mod, which contains collection of source files that are compiled together (at least one file). Packages can also be nested - another directory within a package that has at least one source file is also a package (subpackage).

Every source file starts with a package <value> statement, the package declaration. All source files in a directory must use the same package name; by convention that is either main or the name of the directory. Names should be short, concise, and lowercase; prefer single-word nouns, and for multi-word names chain the words without spaces or underscores, all lowercase.

main package signals to Go command that it should be compiled into executable; an executable must have a main package, which in turn must have a main() function.

Visibility mode

Functions, types, variables, and constants defined in source files are members of a package.

There are 2 visibility modes: package and public. The visibility mode determines what importing code can see from a package. Package mode, the default, makes members visible only within the package. Capitalized members, e.g. type User struct, have public visibility and become part of the public API of the package; public members are visible to importing packages. The same applies to struct fields.

Private members can still be utilized via a public function. Even though a variable of a private type can not be declared explicitly outside the package, the short variable declaration operator can create one.

// ./foo
package foo

type bar int

func New(value int) bar {
    return bar(value)
}

// .
package main

import "project/foo"

func main() {
    v := foo.New(10)
    println(v)
}

Inside a struct each field can be declared as public or private. Private fields are not accessible outside the package. The code below will not compile, as the main function can not reference the email field.

// ./foo
package foo

type User struct {
    Name string
    email string
}

// ./
package main

import (
    "fmt"
    "project/foo"
)

func main() {
    u := foo.User{
        Name: "John",
        email: "[email protected]",
    }
    fmt.Printf("User: %v\n", u)
}

An embedded (inner) type can declare fields as public, which makes them accessible as long as the outer type is public.

// ./foo
package foo

type user struct {
    Name string
    Email string
}

type Admin struct {
    user
    level string
}

// ./
package main

import (
    "fmt"
    "project/foo"
)

func main() {
    // level is private and can not be set from here
    ad := foo.Admin{}
    // Promoted public fields of the embedded type are accessible
    ad.Name = "John"
    ad.Email = "[email protected]"
    fmt.Printf("Admin: %v\n", ad)
}

Package identifier

Name of directory becomes the name of the package. Package can be referenced from the root package by appending it to path after slash.

The Fully Qualified Package Name is used for imports; check also identifiers. To import a bar package located in the root of a module named foo (see the go.mod file), the identifier foo/bar is used.


Imports

Standard library functions are located where Go is installed, GOROOT; custom packages are located inside GOPATH, path to your workspace. Given GOPATH is set to /home/project:/home/libraries, and Go is installed under foo/bar the lookup order for net/http package would be:

/foo/bar/src/pkg/net/http
/home/project/src/net/http
/home/libraries/src/net/http

Remote imports can fetch packages from DVCS like GitHub. go build command would still search packages under GOPATH, while go get fetches remote package and places it on local disk.

import "github.com/spf13/viper"

A named import gives a package a new name; useful if there is more than one package with the same name (last folder name).

import (
    "fmt"
    myfmt "mylib/fmt"
)

A blank identifier before a package in an import block allows the initialization (init functions) of that package to occur even if none of its identifiers are used.

import(
  _ "github.com/sample"
)

Init

Each package can define one or more init() functions that are executed before the main function; a good way to initialize variables, set up packages, and perform other bootstrapping. The function takes no arguments and returns no values. Multiple init functions within a single file are executed in the order they are defined; across multiple files - based on file name (alphabetically).

An init function can use imported packages and package-level variables, which means those are initialized before the init function runs.

To force init function execution from a package that is not otherwise referenced, a blank identifier import can be used.


Internal package

Members are only accessible by the parent package and its children. Allows better organization without leaking details. Declared by creating an internal directory within a package.

Module

A module is a collection of related Go packages that are released together.

# Create go.mod file
$ go mod init <name>

# Add dependencies. Also creates go.sum file, if it didn't exist and updates
# go.mod. Version is optional.
$ go get <package_name>@<version>
# Upgrade all dependencies
$ go get -u

# Remove no longer used packages
$ go mod tidy

Module versioning follows v<major>.<minor>.<patch>-<pre_release> model.

  • v0.x.x denotes module in development

Vendoring

Basically managing all dependencies within the project instead of downloading them to shared location. This means entire code can be built in isolated environment.

# Enable vendoring. Creates "vendor" directory, where external dependencies
# will be stored. "vendor/modules.txt" lists all dependencies.
$ go mod vendor -v

Documentation

Comments follow C notation:

// single line comment
var i int // same line comment

/*
multiline comment
*/

Documentation is expected in certain places of any Go program. First, explain what the package provides and what it is used for; start the comment with Package <name>.

// Package user provides ...
package user

Any public members of a package are also expected to be documented, such as variables and functions. Start with the name of a member, e.g. GetByID searches...:

// MaxUsers controls how many user can be handled at once.
const MaxUsers = 100

// GetByID searches for users using their employee ID.
func GetByID(id int) (User, bool) {}

Program flow

If statements. An initializer can optionally be specified; it is valid only on the first if clause when multiple branches are present.

i := 1
j := 2

if i == j {
	println("equal")
} else if i < 2 {
	println("less")
} else {
	println("more")
}

if k := 1; k == 1 {
    println("if with initializer")
}

Switch

Break is implicit. Multiple case expressions can be specified for a single branch, in which case OR logic is applied. Initializer syntax can be used here as well, similarly to if statements.

method := "POST"
switch method {
case "POST":
    println("POST")
case "GET", var:
    println("GET")
    fallthrough
case "PUT":
    println("PUT")
default:
    println("default")
}

A logical switch is one that has the test expression set to true, so each case checks whether it evaluates to true. Since this is a common use case, true can be omitted, in which case it is implied.

switch i := 2; true {
case i < 5:
    println("i is less that 5")
case i < 10:
    println("i is less that 10")
default:
    println("i is greater or equal to 10")
}

A switch statement can also work with types; store the result of the type assertion in a variable to be able to reuse it later. Structs are accepted as well.

// id basically matches anything, in real code use specific interface or
// generic.
func example(id interface{}) {
    switch v := id.(type) {
    case string:
        println("It is a string")
    case int:
        println("It is a string")
        println(strconv.Itoa(v)
    default:
        panic("Unsupported type")
    }
}

Loops

Basic loops:

  • infinite loop
     var i int
     for {
     	if i == 5 {
     		break
     	}
     	println(i)
     	i += 1
     }
  • loop till condition
     var i int
     for i < 5 {
     	i += 1
     	if i == 1 {
     		continue
     	}
     	if i == 3 {
     		break
     	}
     }
  • loop till condition with post clause
     for i := 0; i < 5; i++ {
     	...
     }

Looping over collections follows the form for key, value := range collection { ... }. For slices and arrays the key is an index. For a slice the value is a copy of the element, not a reference to it; basically it's a local variable.

slice := []int{1, 2, 3}
// range keyword tells that the following identifier represents
// a collection, and returns key/value pair on each iteration
for i, v := range slice {
    println(i, v)
}

m := map[string]int{"one": 1, "two": 2}
for k, v := range m {
    println(k, v)
}
// Retrieve only keys
for k := range m {
    println(k)
}
// Retrieve only values using blank identifier
for _, v := range m {
    println(v)
}

Deferred function

The defer keyword schedules a statement to be executed after the surrounding function returns; it runs even if the function panics or terminates unexpectedly. Commonly used when working with files. Deferred statements are executed in reverse order - the last one registered runs first.

func main() {
    println("main 1")
    defer println("defer 1")
    println("main 2")
    defer println("defer 2")
}

/*
main 1
main 2
defer 2
defer 1
*/

Practical example:

func readFile() error {
    file, err := os.Open("example.txt")
    if err != nil {
        return err
    }
    defer file.Close()

    // work with the file
    return nil
}

Panic

The panic keyword immediately stops the execution of the current function and returns control to the caller. If no recovery steps are present, this repeats until the program ultimately exits. Deferred functions still get called after a panic has occurred. Any type of data can be passed to the panic function, e.g. a string, slice, or object.

func example() {
    println("One")
    panic("Error message")
    println("Two")
}

Recovery is done with defer and the recover function. The latter returns whatever was passed to panic. Once recover is called, Go assumes the panic was handled and continues normal execution of the calling function. panic() can be used again in the recovery logic if an unexpected panic occurred.

func example() {
    defer func() {
        // Standard recovery pattern
        if p := recover(); p != nil {
            fmt.Println(p)
        }
    }()
    fmt.Println("One")
    panic("Error message")
    fmt.Println("Two")
}

The anonymous deferred function executes even if no panic occurred. recover returns nil if no panic happened, which can be used in an if statement to decide whether to perform recovery steps.

Goto

func example() {
    i := 10
    if i < 15 {
        goto my_label
    }
my_label:
    j := 42
    for ; i < 15; i++ {
        ...
    }
}

goto statement in Go follows 4 rules:

  1. Can leave a block, e.g. if statement, for loop
  2. Can jump to a containing block, e.g. function
  3. Can not jump after variable declaration - if variable was not declared as part of normal sequential execution, it will not be known.
  4. Can not jump directly into another block, e.g. any block at the same level as the block that was left.

Object orientation and polymorphism

Generic

An interface value loses the original identity of an object, requiring a type assertion to access the other properties of the concrete type. A generic function instead allows an object to be temporarily treated as a more general type inside the function while keeping its original type outside of it.

The type parameter list is defined after the function name, in square brackets, before the regular parameter list. any is a built-in constraint, which represents an interface with 0 or more methods.

// Accepts a slice of any element type
func clone[V any](s []V) []V { ... }

For a generic function that deals with maps, any can not be used for the key type, as keys must be usable with ==; the comparable constraint represents such types.

// Accepts a map with any comparable key type and any value type
func clone[K comparable, V any](m map[K]V) map[K]V { ... }

Custom type constraint

Some operations, such as addition, can not be performed on all types (any). Thus, an explicit interface (constraint) must be created listing each allowed type, so Go can check that the type supports the operation. Adding bool to the interface below would cause a compile error, since += is not defined for it.

type addable interface {
  int | float64
}

func add[V addable](s []V) V {
  var result V
  for _, v := range s {
    result += v
  }
  return result
}

The golang.org/x/exp/constraints package contains more constraints that are not built in.

Concurrency

Communicating sequential processes (CSP) paradigm provides concurrency synchronization. CSP is a message-passing model that works by communicating data between goroutines instead of locking data to synchronize access. Go uses a channel for synchronizing and passing messages between goroutines.

The operating system schedules threads to run against physical processors; the Go runtime schedules goroutines (potentially millions) to run against logical processors, each of which is individually bound to a single OS thread. The default is one logical processor per physical processor.

A special tool can detect race conditions in the code (it only detects races that actually occur during execution).

// Also works with go run
$ go build --race

Goroutine

A goroutine is a function that runs concurrently with other goroutines, including the entry point of the program. Multiple goroutines can run on a single thread. Both threads and goroutines have their own execution stack; while a thread stack is typically around 1MB, a goroutine stack varies from 2KB up to about 2GB (typically much lighter weight than a thread stack, but able to cover special use cases). The Go runtime automatically schedules the execution of goroutines on the configured logical processors (OS threads).

runtime package provides support for changing Go runtime configuration. Can also be set as environment variable, e.g. GOMAXPROCS, which sets number of logical processors.

runtime.GOMAXPROCS(1)

// NumCPU returns the number of logical CPUs available to the process
runtime.GOMAXPROCS(runtime.NumCPU())

Scheduler can also stop a running goroutine and reschedule it to run again; that makes sure other goroutines can run as well, and that no goroutine blocks logical processor, e.g. syscall, sleeping, network, blocking (channel operation), etc.

Once the main function returns, the program terminates and the Go runtime terminates any remaining goroutines. Thus, it's best to cleanly terminate all goroutines before letting main return.

Return values are not available when goroutine terminates.

func log(msg string) {
... some logging code here
}

// Elsewhere in our code after we've discovered an error.
go log("something dire happened")

sync.WaitGroup is a counting semaphore that can be used to keep track of running goroutines. While the counter is greater than 0, the Wait() method blocks. The Done() method decrements the counter (call it in the goroutines). To pass around/share a WaitGroup, create/pass a pointer to it, as it has an internal counter.

var waitGroup sync.WaitGroup

waitGroup.Add(count) // count represents number of goroutines
waitGroup.Done() // decrement
waitGroup.Wait() // block execution until value is 0

runtime.Gosched() yields the thread and gives other goroutine(s) chance to run.

Shared resources

Go provides several ways to safely work with shared resources across goroutines.

Atomic

Atomic functions from sync/atomic package provide low-level locking mechanism to safely work with integers and pointers. Goroutines using atomic functions are automatically synchronized.

import "sync/atomic"

var counter int64

func increment() {
    atomic.AddInt64(&counter, 1)
}

The LoadInt64 and StoreInt64 functions provide safe read and write operations; they can be used to set up a synchronization flag, e.g. to signal that it's time to terminate:

import (
    "sync"
    "sync/atomic"
    "time"
)

var (
    flag int64
    wg   sync.WaitGroup
)

func main() {
    wg.Add(2)

    go foo("A")
    go foo("B")

    time.Sleep(1 * time.Second)

    atomic.StoreInt64(&flag, 1)
    wg.Wait()
}

func foo(name string) {
    defer wg.Done()

    for {
        time.Sleep(250 * time.Millisecond)

        if atomic.LoadInt64(&flag) == 1 {
            break
        }
    }
}

Mutex

A mutex (mutual exclusion) is used to create a section of code that can only be accessed (executed) by a single goroutine at a time. Braces can optionally be used to clearly mark the protected part of the code.

import "sync"

var mutex sync.Mutex

func incCounter(id int) {
    defer wg.Done()

    mutex.Lock()
    {
       // access shared resource
    }
    mutex.Unlock()
}

Common pattern is to lock at the start of the function and defer unlocking.

var m sync.Mutex

func main() {
    go func() {
        m.Lock()
        defer m.Unlock()
        // do smth
    }()
}

sync.RWMutex is a special read/write mutex that allows multiple readers. The regular Lock() and Unlock() methods lock for both write and read operations; RLock() and RUnlock() allow concurrent read operations, but block if Lock is held elsewhere.

Channel

A channel is a data structure that enables safe data communication between goroutines. In principle, one goroutine passes data through a channel while another goroutine waits for it; the exchange is synchronized and both goroutines know the exchange has been made. Channels don't provide data access protection: if pointers to data are passed, each goroutine still needs to synchronize its reads and writes.

Working with channels

The make function is required to create a channel. The first argument has to be the channel type, chan followed by the data type to be exchanged; the optional second argument specifies the size of a buffered channel.

// Unbuffered channel of integers.
unbuffered := make(chan int)
// Buffered channel of strings.
buffered := make(chan string, 10)

<- operator is used to send/receive value or pointer:

// Send a string
buffered <- "Gopher"

// Receive a string from the channel.
value := <-buffered

// Check if channel is closed
value, ok := <-buffered
if !ok {
   println("Channel was closed")
   return
}

The close function closes a channel (it basically signals that no more messages are going to arrive on the channel; it doesn't free resources). There is no direct mechanism to check whether a channel is closed, but the comma ok syntax can be used - if the second value is false, the channel is empty and closed. An attempt to send a message to a closed channel triggers a panic.

A for loop can be used with a channel (using the range keyword). The loop runs until close(ch) is called on the sending end; without it, a deadlock condition arises:

for msg := range ch {
    fmt.Println(msg)
}

A special control flow construct, select, can only be used with channels; it works similarly to switch. If any of the channels is ready for the specified operation, the corresponding block executes; if more than one is ready, one is selected at random. An optional default block can be added to make select non-blocking; otherwise the goroutine is blocked until one of the cases matches.

ch1 := make(chan int)
ch2 := make(chan string)
select {
    case i := <-ch1:
        ...
    case ch2 <- "example":
        ...
}

Channel types

An unbuffered channel doesn't have capacity to hold any value before it is received, thus both the sending and the receiving goroutines must be ready at the same time for the send/receive operation to happen. If both goroutines aren't ready, the channel makes the faster goroutine wait.

A buffered channel has capacity to hold one or more values before they are received, thus the 2 goroutines don't have to be ready at the same time. A receive blocks only if there are no values present in the channel; a send blocks only if there is no available capacity. Receives are still possible after closing a channel (sends are not). Receiving from a channel that is both empty and closed returns immediately with the zero value.

func main() {
    tasks := make(chan string, taskLoad)
    wg.Add(2)
    for gr := 1; gr <= 2; gr++ {
        go worker(tasks, gr)
    }

    for post := 1; post <= taskLoad; post++ {
        tasks <- fmt.Sprintf("Task : %d", post)
    }

    // Close the channel so the goroutines will quit
    // when all the work is done.
    close(tasks)

    wg.Wait()
}

func worker(tasks chan string, worker int) {
    defer wg.Done()

    for {
        task, ok := <-tasks
        if !ok {
            // This means channel is closed, no work left
            fmt.Printf("Worker: %d : Shutting Down\n", worker)
            return
        }
        fmt.Printf("Worker: %d : Started %s\n", worker, task)
    }
}

Channels can also be restricted on what operations can be applied to it. Thus, channel can be bidirectional, send-only, or receive-only. By default, any newly created channel is bidirectional. Channel type can be updated in the function signature (when channel is passed in):

func example(ch chan int) {...} // bidirectional
func example(ch chan<- int) {...} // send-only
func example(ch <-chan int) {...} // receive-only

Patterns

At their core, all concurrency patterns consist of one or more of the following messaging cases:

  • single producer - single consumer
  • single producer - multiple consumers
  • multiple producers - single consumer
  • multiple producers - multiple consumers

The multiple consumers pattern is implemented simply by spawning multiple goroutines on the receiving end (buffered or unbuffered channel) and closing the channel on the sending end. The receiving for loops exit safely once the channel is closed.

The multiple producers pattern requires an additional managing goroutine that ensures all producers are done and only then closes the channel; completion can be synchronized via a WaitGroup, as in the sketch below.
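A sketch of the multiple producers case (names are illustrative): a managing goroutine waits for all producers and only then closes the channel, so the consumer's range loop can exit safely.

package main

import "sync"

func main() {
    ch := make(chan int)
    var wg sync.WaitGroup

    // Multiple producers
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            ch <- id
        }(i)
    }

    // Managing goroutine closes the channel once all producers are done
    go func() {
        wg.Wait()
        close(ch)
    }()

    // Single consumer; the loop ends when the channel is closed
    for v := range ch {
        println(v)
    }
}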

Runner

The runner pattern can be used to monitor the amount of time a program is running and terminate it if it runs too long; useful for programs scheduled to run in the background, e.g. cron jobs.

Track goroutine execution: a single time.Time value is sent to the channel after the specified duration has elapsed.

timeout <-chan time.Time // struct field holding the timeout channel

timeout = time.After(d) // time.After returns a <-chan time.Time
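A minimal sketch of the timeout check, assuming a complete channel (set up elsewhere) signals that the work finished:

select {
case <-complete:
    // work finished in time
case <-timeout:
    // took too long, terminate
    os.Exit(1)
}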

Pooling

The pooling pattern can be used when there is a static set of resources to share, e.g. database connections or memory buffers. A goroutine uses a shared resource on demand: it acquires it, uses it, then returns it to the pool, which is backed by a buffered channel. A sketch follows.
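A sketch of a pool backed by a buffered channel (the resource type is illustrative):

type conn struct{ id int }

// Fill the pool up front
pool := make(chan *conn, 3)
for i := 0; i < 3; i++ {
    pool <- &conn{id: i}
}

// Acquire a resource, use it, then return it to the pool
c := <-pool
// ... use c ...
pool <- c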

Work

The general idea is to create an unbuffered channel to distribute work between goroutines - this blocks the goroutine that submits work until a worker goroutine receives it, which is exactly what is needed, as it guarantees the work has actually been picked up. A sketch follows.
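A sketch (names are illustrative); in real code a WaitGroup would also be used to wait for the workers to finish:

work := make(chan string) // unbuffered

for i := 0; i < 4; i++ {
    go func() {
        for task := range work {
            println("processing", task)
        }
    }()
}

work <- "task-1" // blocks until some worker has actually received it
work <- "task-2"
close(work)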

Standard library

Standard library contains over 100 packages organized into categories (some categories are packages themselves). Source code for standard library is located at $GOROOT/src; packages are compiled on demand and stored under $GOCACHE.

net and net/http

package main

import (
    "io"
    "net/http"
    "os"
)

func main() {
    http.HandleFunc("/", Handler)
    http.ListenAndServe(":3000", nil)
}

func Handler(w http.ResponseWriter, r *http.Request) {
    f, _ := os.Open("./file.txt")
    defer f.Close()
    io.Copy(w, f)
}

bufio

Read string from user:

package main

import(
    "bufio"
    "fmt"
    "os"
)

func main() {
    fmt.Println("Enter text")

    // Wrap raw Stdin with reader decorator
    reader := bufio.NewReader(os.Stdin)
    // Accepts a delimiter, but also includes it in the text
    text, _ := reader.ReadString('\n')
    fmt.Println(text)
}

Read one line at a time from file. scanner.Scan() returns true if it was able to read next token (by default a line, as new line is a delimiter), and false if it encountered an error or it has reached end of file. The actual content can be retrieved by scanner.Text().

package main

import(
    "bufio"
    "fmt"
    "os"
)

func main() {
    file, err := os.Open("file.txt")
    if err != nil {
        fmt.Println(err)
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        fmt.Println(scanner.Text())
    }

    if err := scanner.Err(); err != nil {
        fmt.Println(err)
    }
}

context

Designed to cover the case when goroutines spawn more goroutines and the original caller needs to cancel all of them. A context encapsulates a single goroutine together with all the goroutines it spawns, immediate and nested.

Context is an interface; the intent is that the user derives contexts to replicate the nested goroutine structure. The context package contains 2 functions that provide a parent context and behave the same, differing only in intended use: context.Background() represents a context that exists throughout program execution (it is cancelled when the program stops), while context.TODO() is a placeholder indicating that a proper parent context should be provided later.

By convention the context variable is either named ctx (if there is only one) or prefixed with ctx; contexts are passed into goroutines and should be the first parameter of a function.

import "time"

int main() {
    ctx, cancel := context.Cancel(context.Background())
    // Takes second argument, duration, and cancels context after it expires.
    // cancel() should be called at least once, in this case we simply call it
    // last. time.Sleep() and cancel() at the end won't be needed anymore.
    // ctx, cancel := context.WithTimeout(context.Background(), 2 * time.Second)
    // defer cancel()
    var wg sync.WaitGroup
    wg.Add(1)
    go func(ctx context.Context) {
        defer wg.Done()
        // Interval example
        for range time.Tick(500 * time.Millisecond) {
            if ctx.Err() != nil {
                log.println(ctx.Err())
                return
            }
            fmt.Println("tick")
        }
        wg.Done()
    }(ctx)

    time.Sleep(2 * time.Second)
    cancel() // it is a second return value from context.Cancel()
    wg.Wait()
}

cancel() (the second return value from context.WithCancel()) doesn't immediately stop all goroutines, as they might be in the middle of a critical section (allow clean exit, etc.); subgoroutines need to check whether the context has been cancelled.

flag

import(
    "fmt"
    "flag"
)

func main() {
    // 1 - name of the flag
    // 2 - default value
    // 3 - description
    choice := flag.String("fruit", "apple", "Provide fruit name")

    flag.Parse()

    switch *choice {
        case "apple":
            fmt.Println("Favorite fruit is apple")
        case "peach":
            fmt.Println("Favorite fruit is peach")
    }
}

-h is automatically generated.

fmt

Implementing Stringer interface in fmt package allows printing out customized message for custom types:

import "fmt"

type foo struct {
	x int
	y string
}

func (f foo) String () string {
	return fmt.Sprintf("Custom message: %v, %v\n", f.x, f.y)
}

func main() {
	f := foo{1, "bar"}
	fmt.Print(f)
}

Printf works like the printf function in C: it accepts a format specifier string and a variable number of parameters of any type; it returns the number of bytes written and any error encountered during the write. The %v placeholder tells Go to infer the formatting from the variable's type. Other common format specifiers:

  • %s - string, %q - quoted string
  • %d - decimal integers
  • %g - floating-point numbers
  • %b - base 2 numbers
  • %o - base 8 numbers
  • %x - base 16 numbers
  • %t - boolean
  • %T - type of variable

Sprintf is the same as Printf, but returns the string instead of printing it.
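For example:

name, count := "gopher", 3
fmt.Printf("%s has %d items (type %T)\n", name, count, count) // gopher has 3 items (type int)
s := fmt.Sprintf("%d item(s)", count)                         // s == "3 item(s)"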

Color output:

const(
    blue = "\033[1;34m%s\033[0m"
    yellow = "\033[1;33m%s\033[0m"
    red = "\033[1;31m%s\033[0m"
)

func main() {
   fmt.Printf(blue, "sample text")
}

http

Simple web server:

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/foo", Foo)

    http.HandleFunc("/bar", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("Hello, world!"))
    })

    err := http.ListenAndServe(":3000", nil)
    if err != nil {
        log.Fatal(err)
    }
}

func Foo(rw http.ResponseWriter, r *http.Request) {
    data := struct {
        FieldOne string
        FieldTwo string
    }{
        FieldOne: "foo",
        FieldTwo: "bar",
    }

    rw.Header().Set("Content-Type", "application/json")
    rw.WriteHeader(200)
    json.NewEncoder(rw).Encode(&data)
}

io

io package works around values that implement io.Writer and io.Reader interface. By doing so it enables simple single purpose components where input of one components can be an output of another, while the knowledge of data types or how data is written or read is not needed, as long as these data types implement io interfaces.

An implementation of io.Writer must accept a byte slice and write it to the underlying data stream, return the number of bytes written (which can not be more than the length of the byte slice), and a non-nil error if fewer bytes were written than the length of the slice passed in. The byte slice must never be modified. A minimal Writer is sketched after the Reader rules below.

Implementation of io.Reader must accept a byte slice where data should be placed, and return number of bytes read and error value.

  • Must read up to the byte slice length; even if fewer bytes are returned, all of the slice space may be used; if less than the full length is available, the Reader should return what it has instead of waiting for more.
  • If EOF is reached after reading more than 0 bytes, the number of bytes read is returned; the non-nil error (EOF) may be returned on the same call, or the subsequent call should return 0 bytes and EOF.
  • Callers must process the returned bytes before checking the error value.
  • May not return 0 bytes and a nil error; that should be treated as a no-op.
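A minimal io.Writer implementation following the contract above (a sketch; the type name is made up):

import "fmt"

// byteCounter counts how many bytes are written to it.
type byteCounter int

func (c *byteCounter) Write(p []byte) (int, error) {
    *c += byteCounter(len(p))
    return len(p), nil
}

func main() {
    var c byteCounter
    // Fprintf accepts any io.Writer
    fmt.Fprintf(&c, "hello, %s", "gopher")
    fmt.Println(c) // 13
}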

io.Copy(dest, source) can be used to copy data from a source to a destination (both implementing the respective interfaces from the io package).

io.MultiWriter accepts variadic number of values that implement Writer interface, and returns a single one that bundles all together. This combined object can be used by all other methods that accept Writer interface value.

file, _ := os.OpenFile("sample.txt", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
io.MultiWriter(file, os.Stdout)

json

Tags (strings declared at the end of each field) provide necessary metadata between JSON document and Go struct. Without tags decoding and encoding is matched against case-insensitive fields names directly.

type Contact struct {
    Name string `json:"name"`
    LastName string `json:"last_name"`
    Address struct {
        Street string `json:"home"`
        Number string `json:"cell"`
    } `json:"address"`
}

The json package provides the NewDecoder function, which accepts a Reader interface value and returns a Decoder; Decode in turn accepts a value of any type and initializes it (reflection is used to inspect the type information of the value passed in).

resp, _ := http.Get(uri)
var contact *Contact
err := json.NewDecoder(resp.Body).Decode(&contact)

Unmarshal method works with strings, which first need to be converted into byte slice.

var JSON = `{
    "name": "John",
    "last_name": "Smith",
    "address": {
        "streen": "1st Avenue",
        "number": "5"
    }
}`

var contact Contact
json.Unmarshal([]byte(JSON), &contact)
fmt.Println(contact)

MarshalIndent can be used for the opposite - transforming data into a JSON string (marshaling). Marshal is the identical function without the additional 2 parameters.

contact := make(map[string]interface{})
contact["name"] = "John"
contact["last_name"] = "Smith"
contact["address"] = map[string]interface{}{
    "street": "1st Avenue",
    "number": "5",
}

// Return byte slice and error.
// Accepts value of any type, prefix, indent sequence
data, _ := json.MarshalIndent(contact, "", " ")
fmt.Println(string(data))

log

In general, if a program writes both output and logs, common practice is to write all logs to stderr, and outputs to stdout. If a program only writes logs, then general logs go to stdout, while errors and warnings go to stderr.

log package supports concurrency, and by default writes logs to stderr device. Simple example:

import(
  "log"
  "os"
)

func writeLog() {
  file, err := os.OpenFile("logs.txt", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
  if err != nil {
    log.Fatal(err)
  }
  // Accepts anything with Writer interface
  log.SetOutput(file)

  log.Println("sample log entry")
}

Methods to write logs; all have *ln and *f versions as well:

  • Print - write a log
  • Fatal - Print followed by os.Exit(1)
  • Panic - Print followed by panic()

init() is a common place to configure logger.

func init() {
    // Prefix to apply to every log entry. Commonly used to distinguish logs
    // from outputs, traditionally in capital letters.
    log.SetPrefix("TRACE: ")
    // Configure logger behavior, see log package for available flags
    log.SetFlags(log.Ldate | log.Lmicroseconds)
}

Custom logger

To create a custom logger, e.g. to support different logging levels, one has to create own Logger type values. Each logger can be configured with a different destination, flags, and prefix.

import(
    "io"
    "log"
    "os"
)

func main() {
    file, err := os.OpenFile("errors.txt", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
    if err != nil {
        log.Fatalln("Failed to open error log file:", err)
    }

    var (
        Info  *log.Logger
        Error *log.Logger
    )

    Info = log.New(os.Stdout, "INFO: ", log.Ldate|log.Ltime)
    Error = log.New(io.MultiWriter(file, os.Stderr), "ERROR: ", log.Ldate|log.Ltime)

    Info.Println("Sample log entry")
    Error.Println("Sample error message")
}

To temporarily disable a logger, the Discard variable from io/ioutil (io.Discard since Go 1.16) can be used. It is backed by a Writer implementation whose Write does nothing.

import(
    "io/ioutil"
    "log"
)

func main() {
    Info = log.New(ioutil.Discard, "INFO: ", log.Ldate|log.Ltime)
}

os

import(
    "fmt"
    "os"
)

func main() {
    // Slice of command line arguments
    args := os.Args

    // index 0 is the name of executable
    fmt.Println(args)
}

reflect

Reflection is the ability of application to examine and modify its own structure and behavior (look at own data and modify it at runtime).

Examine types:

import(
    "fmt"
    "reflect"
)

func main() {
    type foo struct {
        ...
    }

    f := foo{...}

    // Prints main.foo
    fmt.Printf("Type is %v\n", reflect.TypeOf(f))
    // Prints foo, works with structs, slice would have no name printed
    fmt.Printf("Type name is %v\n", reflect.TypeOf(f).Name())
    // Prints struct, which is reflect.Struct
    fmt.Printf("Kind is %v\n", reflect.TypeOf(f).Kind())
    // Print value of each field
    fmt.Printf("Value is %v\n", reflect.ValueOf(f))
}

Create types:

// Create slice
names := make([]string, 3)

nType := reflect.TypeOf(names)
newNames := reflect.MakeSlice(nType, 0, 0) // type, length, capacity
newNames = reflect.Append(newNames, reflect.ValueOf("John"))

Empty interface is one way to accept any type of parameter, thus, creating an abstract function:

func example(t interface{}) {
    switch reflect.TypeOf(t).Name() {
        ...
    }
}

Learning to Use Go Reflection.

regexp

s := "..."
r, _ := regexp.Compile(`([a-z]+)`)

r.MatchString(s) // returns boolean
r.FindAllString(s, count) // returns a slice of matching terms, count -1 means return all
r.FindStringIndex(s) // returns a slice with 2 items: starting index, ending index
r.ReplaceAllString(s, replaceTerm) // returns a new string

string

String is a read-only slice of bytes (not some character set). When a modification is made a new slice is created, and the string variable value is updated to point to it.

Debug non-printable characters:

str := "..."
for i := 0; i < len(str); i++ {
    fmt.Printf("%q\n", str[i])
}

Strings are indexed by byte (a simple index operation returns an integer byte value); to extract a character as a string use a slice expression such as str[0:1], or range over the string to get runes.
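A small sketch of the difference between indexing, slicing and ranging (the sample string contains a multi-byte character on purpose):

package main

import "fmt"

func main() {
    str := "héllo"

    // Indexing returns a single byte (uint8)
    fmt.Println(str[1]) // 195 - first byte of the two-byte 'é'

    // Slicing returns a string (here both bytes that encode 'é')
    fmt.Println(str[1:3]) // é

    // Ranging yields runes and their starting byte index
    for i, r := range str {
        fmt.Printf("%d: %c\n", i, r)
    }
}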

Subset of useful methods in the strings package (a short usage sketch follows the list):

  • Compare(str1, str2) returns an integer (may sometimes be faster than the comparison operator):
    • 0 - strings are identical
    • 1 - strings do not match, str1 sorts after str2 in lexicographic order
    • -1 - strings do not match, str1 sorts before str2 in lexicographic order
  • Split(sampleString, delimiter) returns a slice; SplitAfter(sampleString, delimiter) also returns a slice, where items include the delimiter as well
  • Contains(sampleString, searchTerm), HasPrefix(sampleString, searchTerm), HasSuffix(...) return a boolean
  • Replace(sampleString, searchTerm, replaceTerm, count) returns a new string; count specifies how many times the replacement should take place, -1 means replace all
  • TrimSpace(sampleString) returns a new string with surrounding whitespace removed; TrimLeft(sampleString, trimCharacters) does the same from the left side only; TrimPrefix(sampleString, trimTerm) accepts a string instead of a character set as the second argument
  • Fields(sampleString) returns a slice of words
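A minimal usage sketch of the methods above (sample values are arbitrary):

package main

import (
    "fmt"
    "strings"
)

func main() {
    s := "  foo,bar, baz  "

    fmt.Println(strings.Compare("abc", "abd"))          // -1
    fmt.Println(strings.Split("foo,bar", ","))          // [foo bar]
    fmt.Println(strings.Contains(s, "bar"))             // true
    fmt.Println(strings.HasPrefix("foobar", "foo"))     // true
    fmt.Println(strings.Replace("a-b-c", "-", "+", -1)) // a+b+c
    fmt.Println(strings.TrimSpace(s))                   // foo,bar, baz
    fmt.Println(strings.Fields("one two  three"))       // [one two three]
}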

sync

A sync.Once object guarantees that a piece of code is executed only once, even if it is called multiple times. sync.Once.Do() takes a function that has no arguments and returns no values.

import "sync"

var once sync.Once

func main() {
    Run()
    Run()
}

func Run() {
    once.Do(func() {
        //initialize database connection
    })
}

time

Wall clock is the time provided by the OS to keep track of the current time of day. It can be set to anything (NTP can keep it accurate). Great for telling what time it is, but not for measuring it.

Monotonic clock is like a stopwatch. It isn't affected by adjustments, time zone changes and so on, but is only meaningful within the scope of the process that read it.

time.Now() returns both wall clock and monotonic time.

t := time.Now()
// t.Year()
// t.Month()
// t.Day()

time package also provides some standard formats:

t := time.Now()
fmt.Println(t.Format(time.ANSIC))
fmt.Println(t.Format(time.RFC3339))
fmt.Println(t.Format(time.UnixDate))
fmt.Println(t.Format(time.Kitchen))

To create a custom time format a specific time must be used as a template:

t := time.Now()
// Mon Jan 2 15:04:05 MST 2006
fmt.Println(t.Format("Monday, January 2 in the year 2006"))

Create an object with a custom date:

start := time.Date(year, month, day, hours, minutes, seconds, nanoseconds, location)
start := time.Date(2010, 02, 03, 4, 05, 06, 00, time.UTC)

Time spans

// Halt execution of the program for the specified duration
time.Sleep(nanoseconds)
time.Sleep(time.Second * 2)

Calculate elapsed time:

start := time.Date(...)
elapsed := time.Since(start)

fmt.Printf("H: %v, M: %v, S: %v\n", elapsed.Hours(), elapsed.Minutes(), elapsed.Seconds())

Calculate future date:

today := time.Now()
future := today.AddDate(0, 2, 0) // Arguments: years, months, days
past := today.AddDate(0, -2, 0)

// More granular version
future = today.Add(time.Hour * 2)

Find out how much time is left:

date := time.Date(...)
time_remaining := time.Until(date).Hours()

Testing

Ideally test output should document why test exists, what is being tested, and the result in clear complete sentences.

Test types in Go:

  • (regular) test - typical unit test, integration, end-to-end
  • benchmark - performance profiling, testing part of code where efficiency is crucial
  • example - documentation extension

A test must receive a testing.T pointer, which provides useful methods, such as error reporting. Methods are split into 2 categories: ones that stop the execution of the current test function, and ones that report an error but continue. If none of the functions below is called, the test is considered passing.

Immediate failure     Just error reporting
FailNow()             Fail()
Fatal(), Fatalf()     Error(), Errorf()

Log() (and Logf()) functions provide a way to write a message to the test output. These are shown if a test fails, or if the -v flag is passed to go test.

Skip(), Skipf(), SkipNow() methods allow to temporarily skip some tests - simply call them inside a test function to mark it as skipped.
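A minimal sketch combining logging and skipping (TestFeatureFlag and the -short condition are only illustrative):

func TestFeatureFlag(t *testing.T) {
    if testing.Short() {
        // Marks the test as skipped and stops its execution
        t.Skip("skipping in -short mode")
    }

    // Shown when the test fails or when go test -v is used
    t.Log("running the full scenario")
}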

Simple example:

package main_test

import "testing"

func TestAddition(t *testing.T) {
    result := 2 + 2
    expected := 4
    if result != expected {
        t.Errorf("Result did not match")
    }
}

A basic test exercises a specific part of code for a single set of parameters and an expected result. A table test exercises the same part of code for multiple parameters and results - basically a pattern to evaluate a piece of code over multiple inputs and check each of them for the expected output.

func TestExclamationTable(t *testing.T) {
    scenarios := []struct {
        input  string
        output string
    }{
        {input: "foo", output: "foo!"},
        {input: "bar", output: "bar!"},
    }
    for _, s := range scenarios {
         result := Exclamation(s.input)
         if result != s.output {
             // It's important to provide a verbose error message, as the test
             // runner doesn't report which scenario has failed
             t.Errorf("Input: %v, Expected: %v, Received: %v\n", s.input, s.output, result)
         }
    }
}

Tests can be nested for organizational purposes, e.g. to group related preparation code. In that case execute the test via the Run() method, which accepts a name for the subtest and a function literal implementing the test. Subtests can be nested multiple levels deep.

func TestMyFunction(t *testing.T) {
    t.Run("WithValidInput", func (t *testing.T) {
        result := MyFunction("valid input")
        // Check results
    })
}

Related standard library packages:

  • testing
  • testing/quick - for testing packages with unknown internals
  • testing/iotest - provides simple Readers and Writers to test common scenarios working with inputs and outputs
  • net/http/httptest - API to simulate requests, response recorders, test servers

Community frameworks:

If the same package name is used, e.g. package main for main_test.go, then tests have access to all members and therefore perform whitebox testing. If a different package name is used, e.g. package main_test, then only public members are available - blackbox testing.

If TestMain is defined, it becomes the entrypoint for testing, and the other test functions only run when m.Run() is called. This pattern helps to isolate/separate common preparation and cleanup.

func TestMain(m *testing.M) {
    log.Println("Preparation")
    // Executes all tests
    exitVal := m.Run()
    log.Println("Cleanup")

    os.Exit(exitVal) // Required to indicate whether the test run passed or failed
}

Mocking calls

net/http/httptest helps mock HTTP-based web calls so that tests can run in an environment with no internet access.

Below is a mock server that is able to simulate a response from a real server.

var data = `{
    "foo": "bar"
}`

func mockServer() *httptest.Server {
    // Same signature as http.HandlerFunc
    f := func(w http.ResponseWriter, r *http.Request) {
        // Headers must be set before WriteHeader is called
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(200)
        fmt.Fprintln(w, data)
    }

    return httptest.NewServer(http.HandlerFunc(f))
}

Here is how a mock server is used within a unit test. The mock server URL is localhost with a random high-numbered port, provided by the httptest package.

func TestDownload(t *testing.T) {
    statusCode := http.StatusOK

    server := mockServer()
    defer server.Close()

    // http.Get thinks it makes a web call, instead anonymous function from
    // above is executed
    resp, err := http.Get(server.URL)
    if err != nil {
        ...
    }

    defer resp.Body.Close()
    if resp.StatusCode != statusCode {
        ...
    }
}

Endpoint testing

When package name ends with _test the test code can only access exported identifiers, even if test code and the code to be tested are in the same folder.

// Testing endpoint for simple webserver example in http section
package endpoint_test

func init() {
    http.HandleFunc("/foo", Foo)
}

func TestSendJSON(t *testing.T) {
    req, err := http.NewRequest("GET", "/foo", nil)
    if err != nil {
        ...
    }

    // Create a new http.ResponseRecorder; it and the http.Request (above) are
    // passed to the default server multiplexer, which mocks a call to the
    // server as if it was made from an external source.
    rw := httptest.NewRecorder()
    http.DefaultServeMux.ServeHTTP(rw, req)

    if rw.Code != 200 {
        ...
    }

    data := struct {
        FieldOne string
        FieldTwo string
    }{}

    if err := json.NewDecoder(rw.Body).Decode(&data); err != nil {
       ...
    }

    if data.FieldOne != "foo" {
        ...
    }
    if data.FieldTwo != "bar" {
        ...
    }
}

Benchmarking

Benchmarking is testing code performance, for instance to compare different solutions to a given problem, identify CPU or memory issues, or test concurrency patterns for the best configuration.

Similarly to the testing setup, benchmarking functions must be defined in *_test.go files. Functions start with Benchmark and accept a testing.B pointer as the single parameter. Since the same code must run multiple times for accurate benchmark results, a for loop is used - all benchmark code must be inside the loop body, and the loop must be bounded by testing.B.N (the number of times the benchmarking framework will run the code).

func BenchmarkFoo(b *testing.B) {
    // Some initialization code can go here, e.g. building test inputs

    // Use ResetTimer after the initialization code to get accurate results
    b.ResetTimer()

    for i := 0; i < b.N; i++ {
        Foo()
    }
}

testing.B also has StartTimer, StopTimer, and ResetTimer methods, which help establish the correct time frame for the test, e.g. eliminate database connection setup from test results; by default the timer starts and stops with the benchmark function.
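A sketch of excluding per-iteration setup from the measurement (buildLargeInput and Process are hypothetical placeholders):

func BenchmarkWithSetup(b *testing.B) {
    for i := 0; i < b.N; i++ {
        // Exclude the expensive setup from the measured time
        b.StopTimer()
        input := buildLargeInput() // hypothetical setup helper
        b.StartTimer()

        Process(input) // hypothetical function under test
    }
}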

By default go test does not run benchmark tests; user needs to specify -bench <pattern> option to include those. Benchmarking runs by default for at least one second; -benchtime <duration> option can be specified to extend total duration (more than 3 seconds tends not to increase accuracy in most cases).

# -run="none" turns off unit tests
$ go test -v -run="none" -bench .

-benchmem option will also display additional information on number and size of memory allocations.

CLI

$ go version

# initialise project
$ go mod init

# run single file
$ go run main.go

Most go subcommands accept a package identifier, both local and from the standard library; a few example invocations are shown after the list below.

  • env prints available env vars; can also modify, unset and add more
  • fmt (calls gofmt) applies a predetermined layout; just like vet it is usually run before commit or even on save. More info with go doc cmd/gofmt.
  • get downloads and installs (by default) packages (also modifies go.mod)
  • generate looks for //go:generate ... directive in source code and invokes specified command line commands. Check impl for examples.
  • list lists packages, -f flags accepts a template to output additional info (check with go help list), e.g. go list -f {{.Deps}} net/http. Use -e flag to force command to succeed (output empty array) if some error occurred, e.g. package was not found.
  • vet checks for common errors and best practices; a good check to run before commit. More info with go doc cmd/vet. Some common checks:
    • Bad parameters in Printf-style function calls
    • Method signature errors for common method definitions
    • Bad struct args
    • Unkeyed composite literals
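A few example invocations of the subcommands above:

# Print a single environment variable
$ go env GOPATH

# Run all //go:generate directives in the current module
$ go generate ./...

# List dependencies of a standard library package
$ go list -f '{{.Deps}}' net/http

# Check the whole module for common mistakes before committing
$ go vet ./...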

Build and run

Go run command builds and runs an application in place without leaving a binary on disk. Provide the path to the source file as a command parameter. Imported dependencies must be included in the module's go.mod file.

Go build compiles the packages - binaries are saved, but libraries are discarded by default. However, this behavior can be changed with various flags.

# Save library after build
$ go build -a package_name.a package_name

# Save intermediate dependencies (pkg directory)
$ go build -i binary_name

# See escape analysis and other memory related stuff
$ go build -gcflags="-m" main.go

$ go build main.go # binary will be called main
$ go build . # binary will be called the name of the module
# Package identifier can be omitted, which defaults to the current package
$ go build <package_identifier>

$ go clean # remove all build artifacts

The name of the binary is the name of the directory where the main package is declared.

doc

# Install the godoc tool (used for the browsable session below)
$ go install golang.org/x/tools/cmd/godoc@latest

# Pull docs in terminal
$ go doc <package_name>
$ go doc <package_name>.<symbol>
$ go doc cmd/<cli_subcommand>

# Start browsable session locally. Provides documentation for std library
# functions and source code under GOPATH.
$ godoc -http=:6060

Comments that follow specific conventions will also be included in the generated documentation. For example, comments directly above identifiers such as packages, functions, types, and global variables. Both double-slash and slash-asterisk comment styles work.

// Some helpful description
func Example() (foo, bar) {
// ...
}

For a large block of text documenting a package, a separate doc.go file, which declares the same package, can be added. The package comment needs to be placed directly above the package declaration in that file.
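A minimal doc.go sketch (the package name is just an example):

// Package mathutil provides small numeric helpers used across the project.
//
// A longer description of the package, its conventions and usage examples
// can be placed here; it is rendered as the package overview in godoc.
package mathutil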

Functions with the Example prefix are shown as part of the documentation, as example code. An example must also be based on an exported function or method, e.g. SendData with ExampleSendData. The go test command can also verify that an example is valid - go test -v -run=ExampleSendData (-run accepts a regular expression matching both test functions and example functions to run). The Output marker designates the expected output of the function and also documents it; if the actual output doesn't match, the test fails:

func ExampleSendData() {
    var some_variable string
    ...
    // Use fmt to write to stdout to check the output.
    fmt.Println(some_variable)
    // Output:
    // "foo"
}

test

Go testing tool only looks for files that end with _test.go, and for testing functions within these files - function name must start with Test (typically followed by capital letter, e.g. TestFoo) and accept a pointer of testing.T type and return no value.

# By default only current directory tests are executed
$ go test
$ go test <pkg1> <pkg2>
# Current and all descendant directories
$ go test ./...
# Run specific tests - match test names (function name) with a regular
# expression; replace -run with -list to simply list those
$ go test -run <regex>
# See docs on all flags for test command
$ go help testflags

# Record execution trace to specified file
$ go test -trace <trace.out>
$ go tool trace <trace.out>

Coverage report

The coverage tool modifies the source code, so in case of failures line reporting might be off. Because of these modifications overall performance is also affected, thus do not combine the -cover option with benchmarking tests.

# Simple terminal summary
$ go test -cover
# Implicitly enabled -cover option, also generates a report file (provide a path)
$ go test -coverprofile <report_path>
# Read coverage report generated in the previous step
$ go tool cover -func <report_path>
# Starts a browser page and outlines coverage in the code
$ go tool cover -html <report_path>

Profiling

# Generate requested type of profile - block, cover, cpu, mem, mutex
$ go test -<type>profile <profile.out>
# Supply report to performance profiling tool, svg command can generate a graph
$ go tool pprof <profile.out>

For memory profiling there could be a case where a test runs too fast for the profiling tool to capture any meaningful information. In such a case, increase the profiling rate; -memprofilerate 1 is the most granular.

# For memory profiling Go also generates a compiled version of the tests, which
# needs to be passed to pprof
$ go test -memprofile mem.out
$ go tool pprof -web <package>.test mem.out

$ go test -memprofile mem.out -memprofilerate 1

For CPU profiling, in case the function runs too fast, supply the -count <n> option (about a million) to give the profiling tool enough time to capture data.
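A sketch of such a run (TestFoo is a placeholder name):

# Rerun the test many times so the profiler can capture enough samples
$ go test -run TestFoo -count 1000000 -cpuprofile cpu.out
$ go tool pprof cpu.out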

IDE

VSCode

Shift + CMD + P -> Go: Install/Update Tools (needs gopls first - language server)

Debugging

To be able to pass input to the program and debug it add the following to launch configuration:

{
    "configurations": [
        {
            "console": "integratedTerminal"
        }
    ]
}

Delve.

# Start debugging main package in the current directory
$ dlv debug .

Some often used commands (a short sample session follows the list):

  • continue
  • help
  • restart
  • list <file_name>:<line_number> - show contents of a file around the indicated line number
  • break <file_name>:<line_number> - set breakpoint
  • breakpoints - list currently set breakpoints; dlv also has some preset breakpoints, which will be listed
  • step - step into underlying function call
  • args - list function argument values
  • locals - list local variable values
  • next - move to the next line
  • stack - output complete stack trace
  • threads - list available threads (to later switch between them if needed)
  • stepout - continue execution until it is out of current function
  • clear <number> - remove specific breakpoint
  • clearall - remove all breakpoints
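A short sample session (file name and line number are arbitrary):

(dlv) break main.go:15
(dlv) continue
(dlv) locals
(dlv) next
(dlv) stepout
(dlv) clearall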

Debugging containers with VS Code:

FROM golang

WORKDIR /usr/src/app

RUN go install github.com/go-delve/delve/cmd/dlv@latest

COPY go.mod go.* ./
RUN go mod download && go mod verify

COPY . .
RUN go build -gcflags="all=-N -l" -v -o /usr/local/bin/app ./...

CMD ["dlv", "debug", "--listen=:1234", "--headless=true", "--api-version=2", "--accept-multiclient"]

launch.json:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Connect to server",
      "type": "go",
      "request": "attach",
      "mode": "remote",
      "remotePath": "/usr/src/app/",
      "port": 1234,
      "host": "127.0.0.1",
      "cwd": "${workspaceFolder}",
      "trace": "verbose"
    }
  ]
}
# Port 1234 is the debugger port, 5678 is the application port
$ docker run --security-opt="seccomp=unconfined" \
    --cap-add=SYS_PTRACE \
    -p 1234:1234 \
    -p 5678:5678 \
    godebug

Frameworks

Network services

  • Go kit - comprehensive microservice framework
  • Gin - fast, lightweight web framework
  • Gorilla Toolkit - collection of useful tools

Command Line

  • Cobra - framework for building CLI apps

Resources


