Using Fio for Read/Write Performance Testing on Network Attached Storage
When configuring Network Attached Storage (NAS) over SMB and NFS, it's essential to assess read and write performance so you can optimize for specific use cases. One tool I find highly effective for this purpose is fio (Flexible I/O Tester), which can simulate a variety of read/write workloads and give you a detailed picture of your NAS's performance.
Below are some fio command examples I used to test different read/write patterns, with both random and sequential workloads.
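If fio isn't already installed, it's generally available from your distribution's package manager; on a Debian/Ubuntu-style system (used here purely as an example) that looks like:
sudo apt-get install fio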
Random read performance
Random reads are useful for scenarios where data is accessed non-sequentially, such as database workloads. Random access tends to be slower due to disk seek times or network latency, making this a critical metric.
fio --randrepeat=1 --ioengine=libaio --direct=1 \
--gtod_reduce=1 --name=test --filename=test \
--bs=4k --iodepth=64 --size=4G --readwrite=randread
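One caveat: --filename=test creates the test file in the current working directory, so make sure that directory sits on the NAS mount rather than a local disk. You can also point --filename directly at the share; the /mnt/nas path below is just an assumed mount point:
fio --randrepeat=1 --ioengine=libaio --direct=1 \
--gtod_reduce=1 --name=test --filename=/mnt/nas/test \
--bs=4k --iodepth=64 --size=4G --readwrite=randread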
Random write performance
Random writes often reflect write-intensive applications like log systems or databases. Random writes are generally slower, so benchmarking here helps to identify performance bottlenecks.
fio --randrepeat=1 --ioengine=libaio --direct=1 \
--gtod_reduce=1 --name=test --filename=test \
--bs=4k --iodepth=64 --size=4G --readwrite=randwrite
This test uses the same block size and depth as the random read test to provide consistent comparison between read and write speeds.
Random read/write performance (75% read, 25% write mix)
This test mimics a workload that involves a mix of reads and writes, which is common in many real-world applications, such as databases that frequently read and write data simultaneously.
fio --randrepeat=1 --ioengine=libaio --direct=1 \
--gtod_reduce=1 --name=test --filename=test \
--bs=4k --iodepth=64 --size=4G --readwrite=randrw \
--rwmixread=75
Mix ratio (--rwmixread=75): Configured to simulate a read-heavy workload with 75% reads and 25% writes.
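The same flag can be adjusted to approximate other workloads; for instance, a write-heavy variant with 30% reads and 70% writes (the ratio here is purely illustrative):
fio --randrepeat=1 --ioengine=libaio --direct=1 \
--gtod_reduce=1 --name=test --filename=test \
--bs=4k --iodepth=64 --size=4G --readwrite=randrw \
--rwmixread=30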
Sequential read performance
Sequential reads are important for applications that read large, contiguous files, such as media streaming or large file transfers. Network-attached storage systems often perform well here since there’s less overhead than with random access.
fio --randrepeat=1 --ioengine=libaio --direct=1 \
--gtod_reduce=1 --name=test --filename=test \
--bs=64k --iodepth=64 --size=4G --readwrite=read
Larger block size (--bs=64k): Reflects the need to handle larger chunks of data, which is common in sequential access scenarios.
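If the goal is peak streaming throughput rather than typical file access, the block size can be pushed further still; a 1M variant of the sequential read test is a reasonable sketch (the exact size is a judgment call, not a rule):
fio --randrepeat=1 --ioengine=libaio --direct=1 \
--gtod_reduce=1 --name=test --filename=test \
--bs=1M --iodepth=64 --size=4G --readwrite=read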
Sequential write performance
Similar to the sequential read test, this focuses on writing large, contiguous blocks of data to the NAS, which is important for tasks like backups or large file uploads.
fio --randrepeat=1 --ioengine=libaio --direct=1 \
--gtod_reduce=1 --name=test --filename=test \
--bs=64k --iodepth=64 --size=4G --readwrite=write
Sequential writes are typically faster than random writes, especially with large block sizes.
Insights and Next Steps
I/O Caching: With direct I/O enabled (--direct=1), the results exclude caching effects and show raw performance. For real-world scenarios, disabling direct I/O may better reflect actual user experience, since caches play an important role in performance.
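To observe cached behavior, the same random read test can be rerun with buffered I/O (--direct=0). Dropping the client's page cache between runs keeps the comparison honest; the drop_caches step assumes Linux and root privileges:
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
fio --randrepeat=1 --ioengine=libaio --direct=0 \
--gtod_reduce=1 --name=test --filename=test \
--bs=4k --iodepth=64 --size=4G --readwrite=randread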