Most memory in Zig is ordinary memory.
You read it. You write it. The compiler is allowed to optimize those reads and writes as long as the program behaves the same according to normal language rules.
But some memory is special.
Sometimes a memory address is connected to hardware. Sometimes another thread can read or write the same memory at the same time. Sometimes a value must be loaded again even if it looks unchanged.
For those cases, Zig gives you tools such as volatile access and atomic operations.
Ordinary Memory
Start with a normal variable:
```zig
var x: u32 = 10;
x = 20;
const y = x;
```
This is ordinary memory.
The compiler may optimize the code. If it can prove that a load or store is unnecessary, it may remove it or reorder it in ways that preserve normal single-threaded behavior.
For normal variables, this is good. Optimization makes programs faster.
But not all memory behaves like a normal local variable.
Volatile Memory
Volatile memory access tells the compiler:
This load or store must actually happen.
This is useful for memory-mapped hardware registers.
For example, an embedded program may have a device register at a fixed address. Reading that address may receive hardware status. Writing that address may control a device.
The compiler must not remove those accesses just because they look redundant.
Conceptually:
```zig
const register: *volatile u32 = @ptrFromInt(0x1000_0000);
```
This says `register` points to a volatile `u32`.
Reading it:
```zig
const status = register.*;
```
must perform a real load.
Writing it:
```zig
register.* = 1;
```
must perform a real store.
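To make this concrete, here is a sketch of a polling loop for a hypothetical memory-mapped device. The addresses, the register layout, and the ready bit are invented for illustration; in real code they would come from the hardware's datasheet.

```zig
// Hypothetical device registers; the addresses are invented for illustration.
const STATUS: *volatile u32 = @ptrFromInt(0x1000_0000);
const DATA: *volatile u32 = @ptrFromInt(0x1000_0004);

fn writeByte(b: u8) void {
    // Every iteration performs a real load because the pointer is volatile.
    // Without volatile, the compiler could hoist the load out of the loop
    // and spin forever on a stale value.
    while (STATUS.* & 0x1 == 0) {}
    DATA.* = b; // a real store that reaches the device
}
```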
Volatile Is Not Thread Safety
Volatile is often misunderstood.
Volatile does not make shared data safe between threads.
It does not create locks.
It does not prevent data races.
It does not give you atomic increments.
It mainly controls compiler optimization around specific memory accesses.
Use volatile for memory-mapped I/O and special hardware-like memory.
Use atomics for shared memory between threads.
Atomic Memory
Atomic operations are used when multiple threads access the same memory.
Suppose two threads both increment the same counter. This is not safe with an ordinary load, add, and store.
```zig
counter += 1;
```
That operation looks simple, but it has several steps:
1. load `counter`
2. add 1
3. store `counter`

Two threads can interleave those steps and lose an update.
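One possible interleaving that loses an update:

```
Thread A: load counter      (reads 0)
Thread B: load counter      (reads 0)
Thread A: add 1, store 1
Thread B: add 1, store 1    (A's update is lost; counter is 1, not 2)
```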
Atomic operations perform the whole load, modify, and store as one indivisible step, with well-defined rules for concurrent access.
Atomic Load and Store
Zig provides atomic builtins for low-level atomic operations.
A basic atomic load looks like:
```zig
const value = @atomicLoad(u32, &counter, .seq_cst);
```
A basic atomic store looks like:
```zig
@atomicStore(u32, &counter, 10, .seq_cst);
```
The final argument is the memory ordering.
For beginners, .seq_cst is the easiest to understand. It means sequentially consistent ordering, the strongest and simplest ordering model.
It is not always the fastest, but it is the clearest starting point.
Atomic Read Modify Write
For counters, you usually want an atomic read-modify-write operation.
```zig
_ = @atomicRmw(u32, &counter, .Add, 1, .seq_cst);
```
This atomically adds 1 to `counter`.
No update is lost because the read, modification, and write happen as one atomic operation from the point of view of other threads.
Example:
```zig
const std = @import("std");

var counter: u32 = 0;

fn incrementMany() void {
    var i: usize = 0;
    while (i < 1000) : (i += 1) {
        _ = @atomicRmw(u32, &counter, .Add, 1, .seq_cst);
    }
}

pub fn main() !void {
    const t1 = try std.Thread.spawn(.{}, incrementMany, .{});
    const t2 = try std.Thread.spawn(.{}, incrementMany, .{});
    t1.join();
    t2.join();
    std.debug.print("counter = {}\n", .{counter});
}
```
The expected final value is:
```
counter = 2000
```
Without atomic increment, two threads could overwrite each other's updates.
Compare Exchange
Another important atomic operation is compare exchange.
It means:
Compare the current value with an expected value. If they match, replace it with a new value.
This is used to build lock-free data structures and state transitions.
Conceptually:
```zig
const old = @cmpxchgStrong(
    u32,
    &state,
    0,
    1,
    .seq_cst,
    .seq_cst,
);
```
This tries to change `state` from 0 to 1.
If the current value is 0, the swap happens and `@cmpxchgStrong` returns `null`.
If the current value is not 0, the operation fails and returns the value it actually observed, so `old` has type `?u32`.
Beginners do not need to use compare exchange often, but it is important to know it exists.
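As a sketch of the idea, the following complete program has two threads race to change a shared `state` from 0 to 1. The name `tryClaim` and the `won` out-parameter are invented for this example; the key point is that `@cmpxchgStrong` returns `null` exactly when the swap succeeds.

```zig
const std = @import("std");

var state: u32 = 0;

// Each thread tries to claim the state by swapping it from 0 to 1.
fn tryClaim(won: *bool) void {
    // null means the compare-exchange succeeded.
    won.* = @cmpxchgStrong(u32, &state, 0, 1, .seq_cst, .seq_cst) == null;
}

pub fn main() !void {
    var a = false;
    var b = false;
    const t1 = try std.Thread.spawn(.{}, tryClaim, .{&a});
    const t2 = try std.Thread.spawn(.{}, tryClaim, .{&b});
    t1.join();
    t2.join();
    std.debug.print("a won = {}, b won = {}\n", .{ a, b });
}
```

No matter how the threads are scheduled, exactly one of them wins the transition; the other observes a non-zero value and fails.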
Memory Ordering
Memory ordering controls how atomic operations relate to other memory operations.
This is a deep topic. For now, keep the beginner model simple.
Use .seq_cst when learning.
It gives the strongest ordering and the simplest mental model.
Later, performance-sensitive concurrent code may use weaker orderings such as `.acquire`, `.release`, or `.monotonic` (Zig's name for relaxed). Those require careful reasoning.
Do not guess memory orderings. A wrong ordering can create rare bugs that are difficult to reproduce.
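For a taste of what a deliberately chosen weaker ordering looks like, here is a sketch of the classic release/acquire handoff: the producer writes `data`, then publishes it with a release store of a flag, and the consumer's acquire load of that flag guarantees it sees the write. The names are invented for this example.

```zig
const std = @import("std");

var data: u32 = 0;
var ready: bool = false;

fn producer() void {
    data = 42; // plain store
    // Release: all earlier writes become visible to any thread
    // whose acquire load observes ready == true.
    @atomicStore(bool, &ready, true, .release);
}

fn consumer() void {
    // Spin until the flag is set.
    while (!@atomicLoad(bool, &ready, .acquire)) {
        std.atomic.spinLoopHint();
    }
    // Safe to read data here: the acquire load synchronized
    // with the release store.
    std.debug.print("data = {}\n", .{data});
}

pub fn main() !void {
    const t = try std.Thread.spawn(.{}, consumer, .{});
    producer();
    t.join();
}
```

Using `.seq_cst` in both places would also be correct; the weaker pair is an optimization that must be justified by exactly this kind of reasoning.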
Volatile vs Atomic
Volatile and atomic solve different problems.
| Feature | Volatile | Atomic |
|---|---|---|
| Main purpose | Force real memory access | Coordinate shared memory between threads |
| Common use | Hardware registers | Counters, flags, lock-free data |
| Prevents the compiler from removing accesses | Yes | Yes, as part of atomic semantics |
| Prevents data races | No | Yes, when used correctly |
| Provides memory ordering | No | Yes |
| Typical beginner use | Embedded or OS code | Thread-safe counters and flags |
Use volatile when the memory itself has side effects.
Use atomic when multiple threads share memory.
Mutexes Are Often Better
Atomics are low-level. They are easy to misuse.
For most shared data, a mutex is clearer.
```zig
const std = @import("std");

var mutex = std.Thread.Mutex{};
var counter: u32 = 0;

fn incrementMany() void {
    var i: usize = 0;
    while (i < 1000) : (i += 1) {
        mutex.lock();
        counter += 1;
        mutex.unlock();
    }
}
```
A mutex protects a section of code. Only one thread can run that section at a time.
Use atomics for simple shared values or carefully designed concurrent structures.
Use mutexes for larger shared state.
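The mutex version can be completed into a runnable program in the same shape as the earlier atomic example. One detail worth noting: `defer mutex.unlock()` right after `lock()` is a common Zig idiom that guarantees the unlock runs even if the protected code returns early.

```zig
const std = @import("std");

var mutex = std.Thread.Mutex{};
var counter: u32 = 0;

fn incrementMany() void {
    var i: usize = 0;
    while (i < 1000) : (i += 1) {
        mutex.lock();
        // Runs at the end of each loop iteration's block,
        // so the mutex is released on every pass.
        defer mutex.unlock();
        counter += 1;
    }
}

pub fn main() !void {
    const t1 = try std.Thread.spawn(.{}, incrementMany, .{});
    const t2 = try std.Thread.spawn(.{}, incrementMany, .{});
    t1.join();
    t2.join();
    std.debug.print("counter = {}\n", .{counter});
}
```

As with the atomic version, the final value is 2000.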
Common Mistake: Using Volatile for Threads
This is wrong as a thread-safety strategy:
```zig
var done: bool = false;
// Treating done as volatile does not make all shared access safe.
```
A volatile load may force the compiler to reload the value, but it does not give the full synchronization rules needed between threads.
For thread communication, use atomics, mutexes, condition variables, channels, or another synchronization design.
Common Mistake: Non-Atomic Shared Increment
This is unsafe when used by multiple threads:
```zig
counter += 1;
```
The operation can lose updates.
Use an atomic operation:
```zig
_ = @atomicRmw(u32, &counter, .Add, 1, .seq_cst);
```
Or use a mutex:
```zig
mutex.lock();
counter += 1;
mutex.unlock();
```
Common Mistake: Guessing Memory Order
This is risky:
```zig
@atomicStore(u32, &value, 1, .monotonic);
```
A weaker ordering may be correct in some cases, but it must be chosen for a reason.
For learning and simple code, use .seq_cst.
Optimize memory ordering only after you understand the concurrency design.
The Main Idea
Volatile memory access is for memory where the act of reading or writing matters, such as hardware registers.
Atomic memory operations are for shared memory accessed by multiple threads.
Volatile tells the compiler not to remove or merge certain accesses.
Atomic operations provide thread-safe access and memory ordering.
For normal Zig code, you rarely need either. For embedded programming, operating systems, drivers, and concurrent programs, they become important tools.