sync

Low-level synchronization primitives: mutexes, WaitGroups, Once, Pool, Map, Cond.

Goroutine coordination primitives. Reach for channels first; reach for sync when channels would obscure intent (counters, lazy init, simple shared state).

Wait for N goroutines
var wg sync.WaitGroup
for _, item := range items {
    wg.Add(1)
    go func(it Item) {
        defer wg.Done()
        process(it)
    }(item)
}
wg.Wait()
One-time init
var once sync.Once
once.Do(setup)
Mutex around shared state
var mu sync.Mutex
mu.Lock()
counter++
mu.Unlock()
Read-mostly map
var mu sync.RWMutex
mu.RLock(); v := m[k]; mu.RUnlock()
Pool for reusable buffers
var bufPool = sync.Pool{
    New: func() any { return new(bytes.Buffer) },
}

Mutex and RWMutex

Guard shared data. The Go proverb prefers channels ("share memory by communicating"), but a mutex is perfectly idiomatic — use channels or a mutex, whichever fits.

Mutex — one-at-a-time access

Zero value is an unlocked mutex. Never copy a Mutex after first use — put it in a pointer or on a struct kept by pointer.

type Counter struct {
    mu sync.Mutex
    n  int
}
func (c *Counter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.n++
}

RWMutex — many readers, one writer

Reach for RWMutex only when reads vastly outnumber writes. For contention-light cases a plain Mutex is faster.

var mu sync.RWMutex

// reader side
mu.RLock()
v := cache[k]
mu.RUnlock()

// writer side
mu.Lock()
cache[k] = v
mu.Unlock()

WaitGroup — wait for a set of goroutines

Classic fan-out/wait

Call Add BEFORE starting the goroutine. The common bug is calling Add inside the goroutine, which races with Wait — Wait can return before Add runs.

var wg sync.WaitGroup
for _, u := range urls {
    wg.Add(1)
    go func(u string) {
        defer wg.Done()
        fetch(u)
    }(u)
}
wg.Wait()

WaitGroup.Go (Go 1.25+)

Go 1.25 added wg.Go(func()) which does Add+go+Done for you. Cleaner and harder to misuse.

var wg sync.WaitGroup
for _, u := range urls {
    wg.Go(func() { fetch(u) })
}
wg.Wait()

Once — run exactly once

Lazy initialization

var (
    once sync.Once
    cfg  *Config
)
func get() *Config {
    once.Do(func() { cfg = load() })
    return cfg
}

OnceFunc / OnceValue (1.21+)

Newer typed wrappers — often clearer than a manual sync.Once + package-level var.

var loadConfig = sync.OnceValue(func() *Config {
    return load()
})

cfg := loadConfig()  // computed once, cached forever
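When initialization can fail, sync.OnceValues (also 1.21+) caches a (value, error) pair. A sketch — parsePort and the "8080" literal are made-up stand-ins for reading real config:

```go
package main

import (
    "fmt"
    "strconv"
    "sync"
)

// parsePort runs its function once; every later call returns the
// cached (value, error) pair without re-parsing.
var parsePort = sync.OnceValues(func() (int, error) {
    return strconv.Atoi("8080") // imagine: read from env or a config file
})

func main() {
    p, err := parsePort()
    fmt.Println(p, err) // 8080 <nil>
}
```

Note the error is cached too: a transient failure sticks for the lifetime of the process, so use this only when retrying wouldn't help.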

Pool — reuse allocations

A free list of reusable objects. Great for large buffers that get created and thrown away in hot paths.

Pool of []byte buffers

Items in the pool may be GC'd at any time — don't rely on them sticking around. Never use an object after you Put it, and reset its contents before reuse so stale data doesn't leak between users.

var bufPool = sync.Pool{
    // Store *[]byte, not []byte: putting a bare slice into the pool's
    // `any` allocates on every Put (go vet flags this).
    New: func() any { b := make([]byte, 0, 4096); return &b },
}

bp := bufPool.Get().(*[]byte)
buf := (*bp)[:0] // reset length, keep capacity

buf = append(buf, "hello"...)
// ... use buf ...

*bp = buf // keep the (possibly grown) backing array
bufPool.Put(bp)

Map — concurrent map

Avoid by default. A normal map + sync.RWMutex is simpler and usually faster. sync.Map shines only for two specific patterns: (1) keys written once and read many times; (2) disjoint keys per goroutine.

Basic usage

var m sync.Map
m.Store("a", 1)
v, ok := m.Load("a")
m.Range(func(k, v any) bool {
    fmt.Println(k, v)
    return true   // return false to stop
})
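For pattern (1) — keys written once, read many times — LoadOrStore gives an atomic get-or-insert. A minimal sketch; lookup is a made-up helper, not part of sync.Map:

```go
package main

import (
    "fmt"
    "sync"
)

// cache is written once per key, then read many times —
// the access pattern sync.Map is optimized for.
var cache sync.Map

// lookup returns the cached value for key, inserting fallback
// atomically if the key is absent. The first store for a key wins;
// later calls see the existing value.
func lookup(key, fallback string) string {
    v, _ := cache.LoadOrStore(key, fallback)
    return v.(string)
}

func main() {
    fmt.Println(lookup("a", "first"))  // first — stored
    fmt.Println(lookup("a", "second")) // first — existing value wins
}
```

Caveat: the fallback is computed before the call even when the key already exists; if it's expensive to build, store a per-key sync.OnceValue instead.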

Atomic — lock-free primitives

sync/atomic provides atomic loads, stores, adds, CAS. Use for simple counters where a Mutex would be overkill.

atomic.Int64 — typed, easy to use

var n atomic.Int64
var wg sync.WaitGroup
for i := 0; i < 2; i++ {
    wg.Add(1)
    go func() { defer wg.Done(); n.Add(1) }()
}
wg.Wait() // without this, Load races ahead of the goroutines
fmt.Println(n.Load()) // 2