Having decided to upgrade my NAS from FreeBSD 9.2-RELEASE to 10.2-RELEASE, I wanted to try out the new native iSCSI target, ctld.
As my iSCSI network is physically separated on its own VLAN, I don't bother with CHAP authentication; I allow the entire network to connect to the presented targets.
The only part of the old istgt configuration that couldn't be migrated, due to lack of support in ctld, was QueueDepth, which defaults to 32. My use of iSCSI is presenting LUNs to an ESXi installation that would ideally like a higher queue depth, but there currently doesn't seem to be any way to change this in ctld. As noted in a forum post, I also see it fall back to 32 once the LUN is being accessed.
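For reference, this is roughly what the relevant part of the old istgt.conf looked like; only the QueueDepth line matters here, the surrounding values are illustrative:

[LogicalUnit1]
  TargetName esxiL1
  Mapping PortalGroup1 InitiatorGroup1
  QueueDepth 64
  LUN0 Storage /dev/zvol/san/volumes/esxiL1 Auto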
/etc/ctl.conf
portal-group pg1 {
    discovery-auth-group no-authentication
    listen 192.168.100.13:3260
}

auth-group ag1 {
    auth-type none
    initiator-portal 192.168.100.0/24
}

target iqn.2004-06.net.strahlert.home:esxiL1 {
    portal-group pg1
    auth-group ag1
    lun 0 {
        path /dev/zvol/san/volumes/esxiL1
        option rpm 7200
    }
}

target iqn.2004-06.net.strahlert.home:esxiL2 {
    portal-group pg1
    auth-group ag1
    lun 0 {
        path /dev/zvol/san/volumes/esxiL2
        option rpm 7200
    }
}

target iqn.2004-06.net.strahlert.home:esxiL3 {
    portal-group pg1
    auth-group ag1
    lun 0 {
        path /dev/zvol/san/volumes/esxiL3
        option rpm 7200
    }
}

target iqn.2004-06.net.strahlert.home:esxiL4 {
    portal-group pg1
    auth-group ag1
    lun 0 {
        path /dev/zvol/san/volumes/esxiL4
        option rpm 7200
    }
}
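With the configuration in place, starting the target and checking that all four LUNs actually get exported looks roughly like this (output formats vary a bit between releases):

# enable ctld at boot and start it
sysrc ctld_enable="YES"
service ctld start

# list the exported LUNs and their backing zvols
ctladm lunlist
ctladm devlist

# after editing /etc/ctl.conf, apply changes without restarting the daemon
service ctld reload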
ZFS properties
The following ZFS properties should be set on the parent dataset so that they are inherited by each respective child dataset.
NAME         PROPERTY     VALUE  SOURCE
san/volumes  mountpoint   none   local
san/volumes  compression  lz4    local
san/volumes  atime        off    local
san/volumes  volmode      dev    local
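On this system that boils down to a handful of zfs set commands on the parent dataset; any zvol created under it afterwards inherits the values:

zfs set mountpoint=none san/volumes
zfs set compression=lz4 san/volumes
zfs set atime=off san/volumes
zfs set volmode=dev san/volumes

# verify, recursively
zfs get -r mountpoint,compression,atime,volmode san/volumes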
Caveats
The LUN IDs of the targets differ between istgt and ctl. After shutting down istgt, rescanning all HBAs in ESXi, starting ctld and rescanning all HBAs again, the datastores became unavailable. VMware knowledge base articles didn't offer any solution, and I ended up having to reboot the ESXi server in order to detach the unavailable datastores.
The LUNs had to be re-added because of this. Simply add storage as Disk/LUN in the vSphere client and select the LUN. It recognised that a VMFS filesystem already existed, and by resignaturing the LUN the datastore came back online with its data intact. It had to be renamed, as it was assigned a randomized prefix.
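The resignaturing can also be done from the ESXi shell instead of the vSphere client; a rough sketch, where the volume label is a placeholder and the exact flags should be treated as an assumption for your ESXi version:

# list unresolved VMFS copies seen by the host
esxcli storage vmfs snapshot list

# resignature a copy so it can be mounted as a new datastore
esxcli storage vmfs snapshot resignature -l old-datastore-label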
The VM inventory was still pointing to the old datastore, and I ended up having to re-add every VM by right-clicking each VM's vmx file and selecting Add to inventory. I also had to remove the old statically discovered targets from the iSCSI software adapter properties.
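Re-adding the VMs can likewise be scripted from the ESXi shell; a minimal sketch, assuming the resignatured datastore is mounted under /vmfs/volumes/ (the datastore name below is a placeholder):

# register every vmx found on the renamed datastore
for vmx in /vmfs/volumes/esxi-datastore/*/*.vmx; do
    vim-cmd solo/registervm "$vmx"
done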
When using ZFS as backend storage, the LUN will present itself as an SSD unless an rpm option greater than 1024 is set per LUN. Presenting non-SSD storage as SSD (or the other way around) is generally bad practice; with VMware as the front end it opens the door to faulty configurations such as using non-SSD storage as host cache.
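To double-check what ESXi actually sees after setting the rpm option, the device list on the host includes an Is SSD field:

# on the ESXi host
esxcli storage core device list | grep -E 'Display Name|Is SSD'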
Also, when using ZFS as backend storage, be sure to set zfs_enable="YES" in /etc/rc.conf, or ctld will fail to open the zvols.
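So on the storage host, /etc/rc.conf ends up containing at least these two lines:

zfs_enable="YES"
ctld_enable="YES"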
Take-aways
I'd recommend documenting which VM belongs to which resource pool/folder before attempting this. Failing that, comparing the output of vim-cmd vmsvc/getallvms | grep Skipping with the contents of /etc/vmware/hostd/vmInventory.xml proved useful in finding out where the VMs used to live.
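For the record, these are the two things I compared on the ESXi host:

# VMs that hostd skips after the datastore swap
vim-cmd vmsvc/getallvms | grep Skipping

# the inventory file that still holds the old datastore paths
cat /etc/vmware/hostd/vmInventory.xml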
The new iSCSI stack performs a lot better in my environment. The time the snapshot backups take has more than halved, and transfer rates are now about 2.2 times what they used to be.