As the NAS zpool responsible for storing media and documents has been running out of space lately, it was time to do something about it. This is how it looked previously.
# zpool list san
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
san   3.62T  2.93T   710G  80%  1.00x  ONLINE  -
# zpool status san
pool: san
state: ONLINE
scan: scrub repaired 0 in 7h3m with 0 errors on Sat Jan 26 00:54:57 2013
config:
NAME          STATE     READ WRITE CKSUM
san           ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    ada4      ONLINE       0     0     0
    ada5      ONLINE       0     0     0
  mirror-1    ONLINE       0     0     0
    ada6      ONLINE       0     0     0
    ada7      ONLINE       0     0     0
errors: No known data errors
The disks in the pool are all the same model: WD Black 2 TB.
ada4: <WDC WD2002FAEX-007BA0 05.01D05> ATA-8 SATA 3.x device
ada4: 33.300MB/s transfers (UDMA2, PIO 8192bytes)
ada4: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
Also added were two 90 GB OCZ Vertex 3 SSDs that will serve as a mirrored ZIL (log) and as L2ARC (cache).
ada8: <OCZ-VERTEX3 2.15> ATA-8 SATA 3.x device
ada8: 33.300MB/s transfers (UDMA2, PIO 8192bytes)
ada8: 85857MB (175836528 512 byte sectors: 16H 63S/T 16383C)
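Before partitioning the SSDs it can be worth double-checking what sector size the kernel reports for them, since that matters for the 4k alignment further down. A quick check (just the command, no output from this box shown):
# diskinfo -v ada8 | grep -E 'sectorsize|stripesize'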
Adding the disks to the pool
zpool add san mirror ada0 ada1 mirror ada2 ada3
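Since zpool add is effectively irreversible for regular vdevs, a dry run with -n first is a cheap sanity check; it only prints the layout the pool would get, without changing anything:
zpool add -n san mirror ada0 ada1 mirror ada2 ada3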
Now the zpool looks like this instead.
# zpool list san
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
san   7.25T  2.93T  4.32T  40%  1.00x  ONLINE  -
# zpool status san
pool: san
state: ONLINE
scan: scrub repaired 0 in 7h3m with 0 errors on Sat Jan 26 00:54:57 2013
config:
NAME          STATE     READ WRITE CKSUM
san           ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    ada4      ONLINE       0     0     0
    ada5      ONLINE       0     0     0
  mirror-1    ONLINE       0     0     0
    ada6      ONLINE       0     0     0
    ada7      ONLINE       0     0     0
  mirror-2    ONLINE       0     0     0
    ada0      ONLINE       0     0     0
    ada1      ONLINE       0     0     0
  mirror-3    ONLINE       0     0     0
    ada2      ONLINE       0     0     0
    ada3      ONLINE       0     0     0
errors: No known data errors
Partitioning the SSDs
The idea is to mirror the log but keep the cache unmirrored. ZFS will never need a log device larger than roughly half of the system's RAM, so a small log partition is plenty and the rest of each SSD can be dedicated to cache. Performance can be seriously harmed if the partitions are not properly 4k aligned.
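To get a feel for that upper bound, check how much RAM the box has; the value below is just a hypothetical example, but with something like 16 GiB of RAM an 8 GB log partition per SSD is more than enough:
# sysctl -n hw.physmem
17179869184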
gpart create -s gpt ada8
gpart create -s gpt ada9
gpart add -t freebsd-zfs -b 2048 -a 4k -l log0 -s 8G ada8
gpart add -t freebsd-zfs -b 2048 -a 4k -l log1 -s 8G ada9
gpart add -t freebsd-zfs -a 4k -l cache0 ada8
gpart add -t freebsd-zfs -a 4k -l cache1 ada9
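Before handing the partitions to ZFS it doesn't hurt to verify that the labels and alignment came out as intended (assuming the commands above succeeded):
gpart show -l ada8
gpart show -l ada9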
Add them to the zpool.
zpool add san log mirror gpt/log0 gpt/log1
zpool add san cache gpt/cache0 gpt/cache1
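To confirm that the log and cache devices are actually seeing traffic, zpool iostat can break activity down per vdev, for example sampling every five seconds:
zpool iostat -v san 5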
The new and improved zpool.
pool: san
state: ONLINE
scan: scrub repaired 0 in 7h3m with 0 errors on Sat Jan 26 00:54:57 2013
config:
NAME            STATE     READ WRITE CKSUM
san             ONLINE       0     0     0
  mirror-0      ONLINE       0     0     0
    ada4        ONLINE       0     0     0
    ada5        ONLINE       0     0     0
  mirror-1      ONLINE       0     0     0
    ada6        ONLINE       0     0     0
    ada7        ONLINE       0     0     0
  mirror-2      ONLINE       0     0     0
    ada0        ONLINE       0     0     0
    ada1        ONLINE       0     0     0
  mirror-3      ONLINE       0     0     0
    ada2        ONLINE       0     0     0
    ada3        ONLINE       0     0     0
logs
  mirror-4      ONLINE       0     0     0
    gpt/log0    ONLINE       0     0     0
    gpt/log1    ONLINE       0     0     0
cache
  gpt/cache0    ONLINE       0     0     0
  gpt/cache1    ONLINE       0     0     0
errors: No known data errors
Some statistics
This is thanks to the magic of ZFS. Writes are first buffered (synchronous writes land on the SSD log) and then committed to the spinning disks every few seconds. As long as data is in the cache it is read ridiculously fast, and since the cache is self-learning and quite large, most of the commonly used data will eventually end up there.
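If you want to watch the cache earning its keep, FreeBSD exposes the ARC and L2ARC hit/miss counters as sysctls (a minimal sketch; the exact counter names can vary slightly between releases):
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses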
Writing at 269 MB/s!
# dd if=/dev/zero of=/export/Documents/slask bs=1024000 count=10240
10240+0 records in
10240+0 records out
10485760000 bytes transferred in 37.173678 secs (282074860 bytes/sec)
Reading at 635 MB/s!
# dd if=/export/Documents/slask of=/dev/null bs=1024000
10240+0 records in
10240+0 records out
10485760000 bytes transferred in 15.740916 secs (666146747 bytes/sec)
Now I seriously need to consider 10 Gb Ethernet in order to make full use of this speed 🙂