I had spoken of a penalty when adding a drive to an existing raidz vdev, but what is the actual penalty? Does the added stripe actually help?
So I decided to find out.
Here is the benchmark for the 4-wide raidz1 after adding a drive via the new zfs add command:
root@deepblue-mi6-puffin:/backup# fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.38
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [W(1)][11.7%][w=62.0MiB/s][w=62 IOPS][eta 00m:53s]
Jobs: 1 (f=1): [W(1)][21.7%][w=41.0MiB/s][w=41 IOPS][eta 00m:47s]
Jobs: 1 (f=1): [W(1)][31.7%][w=62.0MiB/s][w=62 IOPS][eta 00m:41s]
Jobs: 1 (f=1): [W(1)][41.7%][w=61.0MiB/s][w=61 IOPS][eta 00m:35s]
Jobs: 1 (f=1): [W(1)][51.7%][w=62.1MiB/s][w=62 IOPS][eta 00m:29s]
Jobs: 1 (f=1): [W(1)][61.7%][w=62.1MiB/s][w=62 IOPS][eta 00m:23s]
Jobs: 1 (f=1): [W(1)][71.7%][w=58.1MiB/s][w=58 IOPS][eta 00m:17s]
Jobs: 1 (f=1): [W(1)][81.7%][w=48.0MiB/s][w=48 IOPS][eta 00m:11s]
Jobs: 1 (f=1): [W(1)][91.7%][w=56.1MiB/s][w=56 IOPS][eta 00m:05s]
Jobs: 1 (f=1): [W(1)][100.0%][w=62.1MiB/s][w=62 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=73625: Fri Feb 7 22:03:01 2025
write: IOPS=58, BW=58.3MiB/s (61.1MB/s)(3500MiB/60017msec); 0 zone resets
slat (msec): min=9, max=124, avg=17.12, stdev= 6.13
clat (usec): min=14, max=948748, avg=525743.99, stdev=89245.60
lat (msec): min=17, max=966, avg=542.87, stdev=90.72
clat percentiles (msec):
| 1.00th=[ 271], 5.00th=[ 447], 10.00th=[ 481], 20.00th=[ 493],
| 30.00th=[ 502], 40.00th=[ 506], 50.00th=[ 506], 60.00th=[ 510],
| 70.00th=[ 518], 80.00th=[ 527], 90.00th=[ 659], 95.00th=[ 726],
| 99.00th=[ 827], 99.50th=[ 877], 99.90th=[ 944], 99.95th=[ 944],
| 99.99th=[ 953]
bw ( KiB/s): min= 8192, max=122880, per=100.00%, avg=60235.29, stdev=11999.43, samples=119
iops : min= 8, max= 120, avg=58.82, stdev=11.72, samples=119
lat (usec) : 20=0.06%
lat (msec) : 20=0.06%, 50=0.09%, 100=0.20%, 250=0.51%, 500=31.06%
lat (msec) : 750=65.34%, 1000=2.69%
cpu : usr=0.43%, sys=40.66%, ctx=3606, majf=0, minf=10
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=98.2%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,3500,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=58.3MiB/s (61.1MB/s), 58.3MiB/s-58.3MiB/s (61.1MB/s-61.1MB/s), io=3500MiB (3670MB), run=60017-60017msec
-------------------
Now a read
-------------------
root@deepblue-mi6-puffin:/backup# fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.38
Starting 1 process
Jobs: 1 (f=1): [R(1)][30.4%][r=480MiB/s][r=480 IOPS][eta 00m:16s]
Jobs: 1 (f=1): [R(1)][56.5%][r=430MiB/s][r=430 IOPS][eta 00m:10s]
Jobs: 1 (f=1): [R(1)][86.4%][r=479MiB/s][r=479 IOPS][eta 00m:03s]
Jobs: 1 (f=1): [R(1)][100.0%][r=371MiB/s][r=371 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=75798: Fri Feb 7 22:04:05 2025
read: IOPS=453, BW=454MiB/s (476MB/s)(10.0GiB/22575msec)
slat (usec): min=1084, max=175367, avg=2186.75, stdev=2221.82
clat (usec): min=12, max=272976, avg=67070.10, stdev=11356.27
lat (usec): min=1472, max=293874, avg=69256.85, stdev=11792.90
clat percentiles (msec):
| 1.00th=[ 45], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 62],
| 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 67],
| 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 79], 95.00th=[ 86],
| 99.00th=[ 106], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 155],
| 99.99th=[ 155]
bw ( KiB/s): min=198656, max=544768, per=99.77%, avg=463439.64, stdev=60117.25, samples=45
iops : min= 194, max= 532, avg=452.58, stdev=58.71, samples=45
lat (usec) : 20=0.05%
lat (msec) : 2=0.05%, 4=0.05%, 10=0.15%, 20=0.20%, 50=0.57%
lat (msec) : 100=97.59%, 250=1.35%, 500=0.01%
cpu : usr=0.80%, sys=17.17%, ctx=10247, majf=0, minf=8202
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=454MiB/s (476MB/s), 454MiB/s-454MiB/s (476MB/s-476MB/s), io=10.0GiB (10.7GB), run=22575-22575msec
Next I destroyed the pool and reassembled it, this time 5 wide from the start, with the exact same settings but without the add command…
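For reference, the rebuild step might look something like this (pool and device names here are hypothetical, not taken from my actual setup):

```shell
# WARNING: zpool destroy wipes all data on the pool -- hypothetical names throughout.
zpool destroy backup

# Create the pool 5 wide from scratch instead of expanding a 4-wide vdev.
zpool create backup raidz1 sda sdb sdc sdd sde
```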
root@deepblue-mi6-puffin:/backup# fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.38
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [W(1)][11.7%][w=64.1MiB/s][w=64 IOPS][eta 00m:53s]
Jobs: 1 (f=1): [W(1)][21.7%][w=63.1MiB/s][w=63 IOPS][eta 00m:47s]
Jobs: 1 (f=1): [W(1)][31.7%][w=63.1MiB/s][w=63 IOPS][eta 00m:41s]
Jobs: 1 (f=1): [W(1)][41.7%][w=20.0MiB/s][w=20 IOPS][eta 00m:35s]
Jobs: 1 (f=1): [W(1)][51.7%][w=53.1MiB/s][w=53 IOPS][eta 00m:29s]
Jobs: 1 (f=1): [W(1)][61.7%][w=61.0MiB/s][w=61 IOPS][eta 00m:23s]
Jobs: 1 (f=1): [W(1)][71.7%][w=63.0MiB/s][w=63 IOPS][eta 00m:17s]
Jobs: 1 (f=1): [W(1)][81.7%][w=65.1MiB/s][w=65 IOPS][eta 00m:11s]
Jobs: 1 (f=1): [W(1)][91.7%][w=63.1MiB/s][w=63 IOPS][eta 00m:05s]
Jobs: 1 (f=1): [W(1)][100.0%][w=63.0MiB/s][w=63 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=10532: Fri Feb 7 22:19:35 2025
write: IOPS=56, BW=56.1MiB/s (58.9MB/s)(3369MiB/60008msec); 0 zone resets
slat (msec): min=9, max=634, avg=17.78, stdev=22.23
clat (usec): min=15, max=2779.3k, avg=546122.72, stdev=251609.58
lat (msec): min=15, max=2805, avg=563.91, stdev=257.89
clat percentiles (msec):
| 1.00th=[ 266], 5.00th=[ 435], 10.00th=[ 464], 20.00th=[ 485],
| 30.00th=[ 489], 40.00th=[ 493], 50.00th=[ 498], 60.00th=[ 498],
| 70.00th=[ 506], 80.00th=[ 510], 90.00th=[ 651], 95.00th=[ 693],
| 99.00th=[ 2123], 99.50th=[ 2198], 99.90th=[ 2769], 99.95th=[ 2769],
| 99.99th=[ 2769]
bw ( KiB/s): min= 2048, max=126976, per=100.00%, avg=57980.77, stdev=19150.32, samples=119
iops : min= 2, max= 124, avg=56.62, stdev=18.70, samples=119
lat (usec) : 20=0.06%
lat (msec) : 20=0.03%, 50=0.12%, 100=0.18%, 250=0.56%, 500=61.44%
lat (msec) : 750=33.93%, 1000=0.06%, 2000=2.32%, >=2000=1.31%
cpu : usr=0.43%, sys=39.05%, ctx=3510, majf=1, minf=10
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=98.2%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,3369,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=56.1MiB/s (58.9MB/s), 56.1MiB/s-56.1MiB/s (58.9MB/s-58.9MB/s), io=3369MiB (3533MB), run=60008-60008msec
--------------------------
Now a read
-------------------------
root@deepblue-mi6-puffin:/backup# fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.38
Starting 1 process
Jobs: 1 (f=1): [R(1)][43.8%][r=669MiB/s][r=668 IOPS][eta 00m:09s]
Jobs: 1 (f=1): [R(1)][81.2%][r=676MiB/s][r=676 IOPS][eta 00m:03s]
Jobs: 1 (f=1): [R(1)][93.8%][r=629MiB/s][r=629 IOPS][eta 00m:01s]
TEST: (groupid=0, jobs=1): err= 0: pid=12748: Fri Feb 7 22:20:56 2025
read: IOPS=647, BW=647MiB/s (679MB/s)(10.0GiB/15818msec)
slat (usec): min=891, max=157161, avg=1527.68, stdev=1685.54
clat (usec): min=14, max=239810, avg=46904.92, stdev=5508.57
lat (usec): min=1222, max=241820, avg=48432.60, stdev=5881.90
clat percentiles (usec):
| 1.00th=[28443], 5.00th=[42206], 10.00th=[43779], 20.00th=[44827],
| 30.00th=[45351], 40.00th=[46400], 50.00th=[46924], 60.00th=[47449],
| 70.00th=[47973], 80.00th=[49021], 90.00th=[50070], 95.00th=[51643],
| 99.00th=[67634], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974],
| 99.99th=[84411]
bw ( KiB/s): min=327680, max=727040, per=99.65%, avg=660579.10, stdev=65219.16, samples=31
iops : min= 320, max= 710, avg=645.10, stdev=63.69, samples=31
lat (usec) : 20=0.05%
lat (msec) : 2=0.05%, 4=0.04%, 10=0.23%, 20=0.32%, 50=88.01%
lat (msec) : 100=11.29%, 250=0.01%
cpu : usr=1.11%, sys=22.51%, ctx=10252, majf=0, minf=8202
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=647MiB/s (679MB/s), 647MiB/s-647MiB/s (679MB/s-679MB/s), io=10.0GiB (10.7GB), run=15818-15818msec
From my viewpoint it does make a difference. If one is purely seeking performance, building the pool at full width from the start would matter. Yet it does not undercut the value for those who just need to add a drive or two to an existing raidz vdev for increased storage on the fly…
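To put rough numbers on that difference, here is a quick back-of-the-envelope comparison of the two run summaries above (the bandwidth figures are copied from the fio output; the percentages are my arithmetic, not fio's):

```shell
#!/bin/sh
# Sequential bandwidth (MiB/s) from the fio run summaries above.
expanded_read=454    # 4-wide raidz1 expanded to 5 wide, read
native_read=647      # pool created 5 wide from scratch, read
expanded_write=58.3
native_write=56.1

awk -v a="$native_read" -v b="$expanded_read" \
    'BEGIN { printf "native pool read advantage: %.0f%%\n", (a/b - 1) * 100 }'
awk -v a="$native_write" -v b="$expanded_write" \
    'BEGIN { printf "native vs expanded write:   %.0f%%\n", (a/b - 1) * 100 }'
```

That works out to roughly a 43% read advantage for the natively built 5-wide pool, while writes are essentially a wash, consistent with the often-cited caveat that an expanded raidz vdev does not behave identically to one created at full width.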
@demyers
Standby, I’ll retry that test. I did attempt it earlier (attach, but I may have left off -f) and may have messed up the syntax… although the guidance I hit on said add, not attach, but that was bad advice from a different site…
bad site
(And naturally, if it does assemble, I’ll benchmark it as well.)
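For anyone following along, the raidz expansion syntax I’ll be retrying is along these lines (pool, vdev, and device names here are hypothetical; check `zpool status` for the real vdev name):

```shell
# Expand an existing raidz1 vdev by one disk (OpenZFS 2.3+ raidz expansion).
# "raidz1-0" is the vdev name as reported by `zpool status`.
# -f forces the attach if zpool objects to the new device.
zpool attach -f backup raidz1-0 sdf
```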