Reading a file means asking the operating system for bytes stored on disk.
A file may contain text, JSON, source code, an image, a database page, or any other data. At the lowest level, a file is just a sequence of bytes.
In Zig, file reading teaches several important ideas at once:
You must open the file.
You must handle errors.
You must close the file.
You must decide where the bytes go.
You must decide whether to read the whole file or read it piece by piece.
Zig 0.16 introduced major I/O changes with the newer std.Io system. The official 0.16 documentation shows main can accept std.process.Init, which gives access to the program’s I/O context, and the release notes describe the new I/O implementations such as Io.Threaded.
The Smallest Useful Example
Create a file named hello.txt:
```
Hello from a file.
```

Now create main.zig:

```zig
const std = @import("std");

pub fn main(init: std.process.Init) !void {
    const io = init.io;

    const file = try std.Io.Dir.cwd().openFile(io, "hello.txt", .{});
    defer file.close(io);

    var buffer: [1024]u8 = undefined;
    const n = try file.read(io, &buffer);

    std.debug.print("{s}\n", .{buffer[0..n]});
}
```

Run it:

```
zig build-exe main.zig
./main
```

Output:

```
Hello from a file.
```

This program does four things.
First, it gets the I/O context:
```zig
const io = init.io;
```

In Zig 0.16, many operations that can block, such as file operations, use this I/O context.
Second, it opens the file:
```zig
const file = try std.Io.Dir.cwd().openFile(io, "hello.txt", .{});
```

std.Io.Dir.cwd() means “the current working directory.”
openFile asks the operating system to open hello.txt.
The try means opening the file can fail. The file might not exist. The program might not have permission. The path might refer to a directory instead of a file.
Third, it makes sure the file is closed:
```zig
defer file.close(io);
```

defer runs when the current scope exits. This is the usual Zig pattern for cleanup.
Fourth, it reads bytes into a buffer:
```zig
var buffer: [1024]u8 = undefined;
const n = try file.read(io, &buffer);
```

The buffer has room for 1024 bytes. The call to read returns how many bytes were actually read.
Then we print only the part of the buffer that was filled:
```zig
buffer[0..n]
```

That slice means: start at index 0, stop before index n.
Files Are Bytes, Not Automatically Strings
This line prints the bytes as text:
```zig
std.debug.print("{s}\n", .{buffer[0..n]});
```

The {s} formatter treats the slice as a string-like sequence of bytes.
But Zig does not automatically promise that file data is valid text. A file can contain any bytes.
For beginner examples, reading a .txt file and printing it is fine. In real programs, you often need to decide what the bytes mean.
For example:
```zig
const data: []const u8 = buffer[0..n];
```

This says: data is a slice of bytes.
It might be UTF-8 text.
It might be JSON.
It might be a binary format.
Zig keeps that distinction clear.
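One concrete way to make that decision is to check whether the bytes are valid UTF-8 before treating them as text. This sketch uses std.unicode.utf8ValidateSlice from the standard library; the describeBytes helper is a made-up name for illustration:

```zig
const std = @import("std");

// Hypothetical helper: classify a slice of file bytes.
// std.unicode.utf8ValidateSlice returns true only when the
// bytes form a valid UTF-8 sequence.
fn describeBytes(data: []const u8) []const u8 {
    if (std.unicode.utf8ValidateSlice(data)) {
        return "valid UTF-8 text";
    }
    return "binary data";
}
```

A file read as bytes can then be routed: print it when it validates as text, or dump it as hex when it does not.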
Why the Buffer Is undefined
This line may look strange:
```zig
var buffer: [1024]u8 = undefined;
```

It creates an array of 1024 bytes, but does not fill it with zeros.
That is safe here because we only use the part of the buffer that read writes into.
```zig
const n = try file.read(io, &buffer);
std.debug.print("{s}\n", .{buffer[0..n]});
```

We do not print the whole buffer. We print only buffer[0..n].
This matters. The rest of the buffer still contains undefined data.
Wrong:
```zig
std.debug.print("{s}\n", .{&buffer});
```

This tries to print all 1024 bytes, even though only part of the array was filled.
Correct:
```zig
std.debug.print("{s}\n", .{buffer[0..n]});
```

Always use the length returned by read.
Reading May Return Less Than You Asked For
A common beginner mistake is assuming one read gets the whole file.
This line asks for up to 1024 bytes:
```zig
const n = try file.read(io, &buffer);
```

It does not guarantee 1024 bytes.
It also does not guarantee the whole file.
It returns the number of bytes actually read.
For a small local file, one read may get everything. But you should not build your understanding around that accident.
A file can be larger than the buffer. A stream can provide data in smaller pieces. A read operation can return early.
So the general rule is:
Use one read only when you know one read is enough.
Use a loop when you want to read until the end.
Reading a File in a Loop
Here is a simple loop that reads and prints the file chunk by chunk:
```zig
const std = @import("std");

pub fn main(init: std.process.Init) !void {
    const io = init.io;

    const file = try std.Io.Dir.cwd().openFile(io, "hello.txt", .{});
    defer file.close(io);

    var buffer: [64]u8 = undefined;
    while (true) {
        const n = try file.read(io, &buffer);
        if (n == 0) break;
        std.debug.print("{s}", .{buffer[0..n]});
    }
}
```

This version uses a small 64-byte buffer so the pattern is easy to see.
The important part is:
```zig
while (true) {
    const n = try file.read(io, &buffer);
    if (n == 0) break;
    std.debug.print("{s}", .{buffer[0..n]});
}
```

A read length of 0 means there is no more data to read.
The loop stops at that point.
This is a core pattern in systems programming:
Read a chunk.
Process the chunk.
Repeat until the input ends.
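The processing step can be any function that accepts one chunk. As a sketch, here is a hypothetical countNewlines helper that could be called on buffer[0..n] inside the read loop to count lines without holding the whole file in memory:

```zig
const std = @import("std");

// Hypothetical chunk processor: count '\n' bytes in one chunk.
// Called once per read, it works even when a line is split
// across two chunks, because it only inspects single bytes.
fn countNewlines(chunk: []const u8) usize {
    var count: usize = 0;
    for (chunk) |byte| {
        if (byte == '\n') count += 1;
    }
    return count;
}
```

Because the running state is a single integer, the same buffer can be reused for every chunk.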
Reading the Whole File into Memory
Sometimes you want the whole file as one slice.
That is useful for small configuration files, JSON files, source files, templates, and tests.
To do that, you need an allocator, because the file size is only known at runtime.
```zig
const std = @import("std");

pub fn main(init: std.process.Init) !void {
    const io = init.io;

    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const file = try std.Io.Dir.cwd().openFile(io, "hello.txt", .{});
    defer file.close(io);

    const stat = try file.stat(io);
    const data = try allocator.alloc(u8, stat.size);
    defer allocator.free(data);

    const n = try file.readAll(io, data);
    std.debug.print("{s}\n", .{data[0..n]});
}
```

This example follows a clear sequence.
Open the file.
Ask for its size.
Allocate a buffer large enough.
Read the bytes into that buffer.
Free the buffer later.
This line gets file metadata:
```zig
const stat = try file.stat(io);
```

This line allocates exactly enough space for the file size:

```zig
const data = try allocator.alloc(u8, stat.size);
```

This line reads into the allocated memory:

```zig
const n = try file.readAll(io, data);
```

The result n is the number of bytes read.

Even when you expect the full file, it is still good practice to use the returned length:

```zig
data[0..n]
```

Stack Buffer vs Heap Buffer
There are two common ways to hold file bytes.
A stack buffer has a fixed size known at compile time:
```zig
var buffer: [1024]u8 = undefined;
```

This is simple and fast. It works well when you read in chunks.
A heap buffer is allocated at runtime:
```zig
const data = try allocator.alloc(u8, size);
```

This is useful when the size is not known at compile time, or when the file is too large for a fixed stack buffer.
Use a stack buffer when you can process the file piece by piece.
Use a heap buffer when you truly need all data in memory at once.
Handling “File Not Found”
Opening a file can fail.
This program catches one specific error:
```zig
const std = @import("std");

pub fn main(init: std.process.Init) !void {
    const io = init.io;

    const file = std.Io.Dir.cwd().openFile(io, "missing.txt", .{}) catch |err| switch (err) {
        error.FileNotFound => {
            std.debug.print("file not found\n", .{});
            return;
        },
        else => return err,
    };
    defer file.close(io);

    var buffer: [1024]u8 = undefined;
    const n = try file.read(io, &buffer);
    std.debug.print("{s}\n", .{buffer[0..n]});
}
```

This part matters:
```zig
catch |err| switch (err) {
    error.FileNotFound => {
        std.debug.print("file not found\n", .{});
        return;
    },
    else => return err,
}
```

It says:
If the error is FileNotFound, print a friendly message and stop.
For every other error, return the error to the caller.
This is a good pattern. Handle the error you understand. Propagate the rest.
Reading Lines
Many programs want to read a text file line by line.
A line-oriented API may be convenient, but the core idea is still byte processing. A line is a sequence of bytes ending in \n.
For beginners, it is useful to first understand a simple manual version:
```zig
const std = @import("std");

fn printLines(data: []const u8) void {
    var start: usize = 0;
    for (data, 0..) |byte, i| {
        if (byte == '\n') {
            std.debug.print("line: {s}\n", .{data[start..i]});
            start = i + 1;
        }
    }
    if (start < data.len) {
        std.debug.print("line: {s}\n", .{data[start..]});
    }
}
```

This function receives all file data as a slice.
It walks through each byte.
When it sees \n, it prints one line.
The variable start stores where the current line begins.
This is not the most advanced way to read lines, but it teaches what line reading really means.
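For comparison, the standard library already provides this splitting as an iterator, std.mem.splitScalar. A sketch of the same function using it (note that this version skips empty pieces, including the one produced by a trailing newline, which differs slightly from the manual version on blank lines):

```zig
const std = @import("std");

// Same idea as the manual version, using the standard library's
// split iterator instead of tracking indices by hand.
fn printLinesIdiomatic(data: []const u8) void {
    var it = std.mem.splitScalar(u8, data, '\n');
    while (it.next()) |line| {
        // Skip empty pieces, such as the final one a trailing '\n' produces.
        if (line.len == 0) continue;
        std.debug.print("line: {s}\n", .{line});
    }
}
```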
A Complete “Read Then Split Lines” Program
```zig
const std = @import("std");

fn printLines(data: []const u8) void {
    var start: usize = 0;
    for (data, 0..) |byte, i| {
        if (byte == '\n') {
            std.debug.print("line: {s}\n", .{data[start..i]});
            start = i + 1;
        }
    }
    if (start < data.len) {
        std.debug.print("line: {s}\n", .{data[start..]});
    }
}

pub fn main(init: std.process.Init) !void {
    const io = init.io;

    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const file = try std.Io.Dir.cwd().openFile(io, "hello.txt", .{});
    defer file.close(io);

    const stat = try file.stat(io);
    const data = try allocator.alloc(u8, stat.size);
    defer allocator.free(data);

    const n = try file.readAll(io, data);
    printLines(data[0..n]);
}
```

With this hello.txt:

```
red
green
blue
```

The output is:

```
line: red
line: green
line: blue
```

Reading Files Is Also Resource Management
A file is not just data. It is also an operating system resource.
When you open a file, the operating system gives your program access to it. When you finish, you should close it.
That is why this pattern is important:
```zig
const file = try std.Io.Dir.cwd().openFile(io, "hello.txt", .{});
defer file.close(io);
```

Put cleanup immediately after setup.
This makes the code safer. You do not need to remember to close the file at every return point.
If the function returns normally, the file closes.
If the function returns an error, the file still closes.
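That guarantee can be demonstrated without touching the file system. This sketch uses a made-up Resource type whose close method records that cleanup ran, on both the normal path and the error path:

```zig
const std = @import("std");

// Hypothetical resource that records whether close was called.
var closed: bool = false;

const Resource = struct {
    fn close(self: Resource) void {
        _ = self;
        closed = true;
    }
};

fn useResource(fail: bool) !void {
    const r = Resource{};
    defer r.close(); // runs on normal return AND on error return
    if (fail) return error.Oops;
}
```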
The Core Pattern
Most file reading code follows this shape:
```
const file = try open_the_file;
defer close_the_file;

var buffer = make_some_buffer;
while (true) {
    const n = try read_into_the_buffer;
    if (n == 0) break;
    use(buffer[0..n]);
}
```

In real Zig code, the names are more specific, but the structure is the same.
Open.
Defer close.
Read.
Use only the bytes that were read.
Repeat if needed.
Handle errors explicitly.
What You Should Remember
Reading a file in Zig is explicit.
The file may not exist.
Opening can fail.
Reading can fail.
Reading returns a byte count.
A buffer may be only partly filled.
The file should be closed.
Large files should usually be read in chunks.
Small files can be read fully into memory when that makes the code simpler.
This explicit style is the point. Zig does not hide file I/O behind a large abstraction. It shows you the actual steps, and the code becomes easier to reason about once those steps feel familiar.