{"id":176,"date":"2013-02-22T23:55:57","date_gmt":"2013-02-22T22:55:57","guid":{"rendered":"\/wordpress\/?p=176"},"modified":"2013-03-06T19:19:40","modified_gmt":"2013-03-06T18:19:40","slug":"expanding-zpool","status":"publish","type":"post","link":"\/wordpress\/zfs-2\/expanding-zpool\/","title":{"rendered":"Expanding a zpool and adding ZIL (log) and L2ARC (cache)"},"content":{"rendered":"<p>As the NAS zpool responsible for storing media and documents has been running out of space lately, it was time to do something about it. This is how it looked previously.<\/p>\n<pre><code># zpool list san\r\nNAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT\r\nsan   3.62T  2.93T   710G    80%  1.00x  ONLINE  -\r\n# zpool status san\r\n  pool: san\r\n state: ONLINE\r\n  scan: scrub repaired 0 in 7h3m with 0 errors on Sat Jan 26 00:54:57 2013\r\nconfig:\r\n\r\n        NAME        STATE     READ WRITE CKSUM\r\n        san         ONLINE       0     0     0\r\n          mirror-0  ONLINE       0     0     0\r\n            ada4    ONLINE       0     0     0\r\n            ada5    ONLINE       0     0     0\r\n          mirror-1  ONLINE       0     0     0\r\n            ada6    ONLINE       0     0     0\r\n            ada7    ONLINE       0     0     0\r\n\r\nerrors: No known data errors\r\n<\/code><\/pre>\n<p>The disks in the pool are all the same model: 
WD Black 2TB.<\/p>\n<pre><code>ada4: &lt;WDC WD2002FAEX-007BA0 05.01D05&gt; ATA-8 SATA 3.x device\r\nada4: 33.300MB\/s transfers (UDMA2, PIO 8192bytes)\r\nada4: 1907729MB (3907029168 512 byte sectors: 16H 63S\/T 16383C)\r\n<\/code><\/pre>\n<p>Also added were two OCZ Vertex3 90GB SSDs that will become the mirrored ZIL (log) and L2ARC (cache).<\/p>\n<pre><code>ada8: &lt;OCZ-VERTEX3 2.15&gt; ATA-8 SATA 3.x device\r\nada8: 33.300MB\/s transfers (UDMA2, PIO 8192bytes)\r\nada8: 85857MB (175836528 512 byte sectors: 16H 63S\/T 16383C)\r\n<\/code><\/pre>\n<h3>Adding the disks to the pool<\/h3>\n<pre><code>zpool add san mirror ada0 ada1 mirror ada2 ada3\r\n<\/code><\/pre>\n<p>Now the zpool looks like this instead.<\/p>\n<pre><code># zpool list san\r\nNAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT\r\nsan   7.25T  2.93T  4.32T    40%  1.00x  ONLINE  -\r\n# zpool status san\r\n  pool: san\r\n state: ONLINE\r\n  scan: scrub repaired 0 in 7h3m with 0 errors on Sat Jan 26 00:54:57 2013\r\nconfig:\r\n\r\n        NAME        STATE     READ WRITE CKSUM\r\n        san         ONLINE       0     0     0\r\n          mirror-0  ONLINE       0     0     0\r\n            ada4    ONLINE       0     0     0\r\n            ada5    ONLINE       0     0     0\r\n          mirror-1  ONLINE       0     0     0\r\n            ada6    ONLINE       0     0     0\r\n            ada7    ONLINE       0     0     0\r\n          mirror-2  ONLINE       0     0     0\r\n            ada0    ONLINE       0     0     0\r\n            ada1    ONLINE       0     0     0\r\n          mirror-3  ONLINE       0     0     0\r\n            ada2    ONLINE       0     0     0\r\n            ada3    ONLINE       0     0     0\r\n\r\nerrors: No known data errors\r\n<\/code><\/pre>\n<h3>Partitioning the SSDs<\/h3>\n<p>The idea is to mirror the log but keep the cache unmirrored. ZFS will never use more than half the available RAM for the log device, so there is no point in making the log partitions any larger than that; the rest of each SSD will be dedicated to cache. 
Performance can be seriously harmed if the partitions are not properly 4k aligned. <\/p>\n<pre><code>gpart create -s gpt ada8\r\ngpart create -s gpt ada9\r\ngpart add -t freebsd-zfs -b 2048 -a 4k -l log0 -s 8G ada8\r\ngpart add -t freebsd-zfs -b 2048 -a 4k -l log1 -s 8G ada9\r\ngpart add -t freebsd-zfs -a 4k -l cache0 ada8\r\ngpart add -t freebsd-zfs -a 4k -l cache1 ada9\r\n<\/code><\/pre>\n<p>Add them to the zpool.<\/p>\n<pre><code>zpool add san log mirror gpt\/log0 gpt\/log1\r\nzpool add san cache gpt\/cache0 gpt\/cache1\r\n<\/code><\/pre>\n<p>The new and improved zpool.<\/p>\n<pre><code>  pool: san\r\n state: ONLINE\r\n  scan: scrub repaired 0 in 7h3m with 0 errors on Sat Jan 26 00:54:57 2013\r\nconfig:\r\n\r\n        NAME          STATE     READ WRITE CKSUM\r\n        san           ONLINE       0     0     0\r\n          mirror-0    ONLINE       0     0     0\r\n            ada4      ONLINE       0     0     0\r\n            ada5      ONLINE       0     0     0\r\n          mirror-1    ONLINE       0     0     0\r\n            ada6      ONLINE       0     0     0\r\n            ada7      ONLINE       0     0     0\r\n          mirror-2    ONLINE       0     0     0\r\n            ada0      ONLINE       0     0     0\r\n            ada1      ONLINE       0     0     0\r\n          mirror-3    ONLINE       0     0     0\r\n            ada2      ONLINE       0     0     0\r\n            ada3      ONLINE       0     0     0\r\n        logs\r\n          mirror-4    ONLINE       0     0     0\r\n            gpt\/log0  ONLINE       0     0     0\r\n            gpt\/log1  ONLINE       0     0     0\r\n        cache\r\n          gpt\/cache0  ONLINE       0     0     0\r\n          gpt\/cache1  ONLINE       0     0     0\r\n\r\nerrors: No known data errors\r\n<\/code><\/pre>\n<h3>Some statistics<\/h3>\n<p>This is thanks to the magic of ZFS: written data is first buffered on the SSD log and then committed to the spinning disks every few seconds. 
As long as data is in the cache it will be read ridiculously fast, and since the cache is self-learning and quite large, most of the commonly used data will eventually end up there.<\/p>\n<p>Writing at 269 MB\/s!<\/p>\n<pre><code># dd if=\/dev\/zero of=\/export\/Documents\/slask bs=1024000 count=10240\r\n10240+0 records in\r\n10240+0 records out\r\n10485760000 bytes transferred in 37.173678 secs (282074860 bytes\/sec)\r\n<\/code><\/pre>\n<p>Reading at 635 MB\/s!<\/p>\n<pre><code># dd if=\/export\/Documents\/slask of=\/dev\/null bs=1024000\r\n10240+0 records in\r\n10240+0 records out\r\n10485760000 bytes transferred in 15.740916 secs (666146747 bytes\/sec)\r\n<\/code><\/pre>\n<p>Now I need to seriously consider 10Gb Ethernet in order to be able to fully use this speed \ud83d\ude42<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As the NAS zpool responsible for storing media and documents has been running out of space lately, it was time to do something about it. This is how it looked previously. 
# zpool list san NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT san 3.62T 2.93T 710G 80% 1.00x ONLINE &#8211; # zpool status <span class=\"ellipsis\">&hellip;<\/span> <span class=\"more-link-wrap\"><a href=\"\/wordpress\/zfs-2\/expanding-zpool\/\" class=\"more-link\"><span>Read More &rarr;<\/span><\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[14,6],"tags":[31,7,8,36,2,9],"_links":{"self":[{"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/posts\/176"}],"collection":[{"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/comments?post=176"}],"version-history":[{"count":13,"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/posts\/176\/revisions"}],"predecessor-version":[{"id":198,"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/posts\/176\/revisions\/198"}],"wp:attachment":[{"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/media?parent=176"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/categories?post=176"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wp.strahlert.net\/wordpress\/wp-json\/wp\/v2\/tags?post=176"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}