Friday, 24 January 2014

Objective 1.1 - Implement and Manage Complex Storage Solutions

Knowledge - Identify RAID levels
Read Intensive RAID 5 or 6
Write Intensive RAID 1 or 1+0 

wikipedia - Standard RAID Levels

Knowledge - Identify supported HBA types
An HBA (Host Bus Adapter) is the physical adapter which connects a server to a SAN via FC fabric, iSCSI or FCoE. Software adapters for iSCSI etc., while providing similar functionality, are not referred to as HBAs.

The HBAs supported for use with vSphere can be found in the VMware HCL

Knowledge - Identify virtual disk format types
Use vmkfstools -t0 <vmdisk>.vmdk to determine the type of provisioning

Thick Provision Lazy Zeroed
Space required for the virtual disk is allocated during creation. Any data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine.

Thick Provision Eager Zeroed
Space required for the virtual disk is allocated at creation time. In contrast to the flat format, the data remaining on the physical device is zeroed out during creation. It might take much longer to create disks in this format than to create other types of disks.

Thin Provision
At first, a thin provisioned disk uses only as much datastore space as the disk initially needs. If the thin disk needs more space later, it can grow to the maximum capacity allocated to it.
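The three formats above can be created from the ESXi command line with vmkfstools. A brief sketch (the datastore path and 10G size are examples only):

```shell
# Thin provisioned disk
vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/vm1/vm1.vmdk

# Thick provision lazy zeroed (zeroedthick is the default format)
vmkfstools -c 10G -d zeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk

# Thick provision eager zeroed
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk

# Inflate an existing thin disk to eager zeroed thick
vmkfstools -j /vmfs/volumes/datastore1/vm1/vm1.vmdk
```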

Independent - Persistent
Disks in persistent mode behave like conventional disks on your physical computer. All data written to a disk in persistent mode is written permanently to the disk.

Independent - Nonpersistent
Changes to disks in nonpersistent mode are discarded when you power off or reset the virtual machine. With nonpersistent mode, you can restart the virtual machine with a virtual disk in the same state every time. Changes to the disk are written to and read from a redo log file that is deleted when you power off or reset.

RDM - Raw Device Mapping
You can store virtual machine data directly on a SAN LUN instead of storing it in a virtual disk file. This ability is useful if you are running applications in your virtual machines that must detect the physical characteristics of the storage device.

Skills and Abilities - Determine use cases for and configure VMware DirectPath I/O
DirectPath I/O allows a VM to access PCI(e) I/O devices on the physical server natively; as the name suggests, the VM has a direct path to the hardware. To achieve this the hardware of the physical server must support it (Intel VT-d or AMD-Vi) and it must be enabled in the BIOS. You would use DirectPath I/O to take full advantage of devices such as 10GbE NICs.

The downsides to using DirectPath: you lose the ability to vMotion, svMotion, use FT, or suspend and resume your VM. As such, use it where performance is needed but high availability is less important.

Configuration Examples and Troubleshooting for VMDirectPath

Skills and Abilities - Determine requirements for and configure NPIV
N-Port ID Virtualization (NPIV) is used to present multiple World Wide Names (WWNs) to a SAN fabric through one physical adapter. NPIV is an extension of the Fibre Channel protocol. All fabric switches and HBAs need to support NPIV, and all HBAs in the hosts need to be of the same type. NPIV does not support Storage vMotion. NPIV is used in solutions such as Cisco UCS.

As well as using NPIV for presenting LUNs to the host, if the host supports NPIV and you are using FC storage, then an RDM can be mapped and the VM can be configured with a virtual NPIV HBA with its own WWNs.

vSphere Storage - ESXi 5.0 - vSphere Storage p41

Skills and Abilities - Determine appropriate RAID level for various Virtual Machine workloads
Read Intensive RAID 5 or 6
Write Intensive RAID 1 or 1+0

To identify the workload of a running VM you can use vscsiStats
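A hedged sketch of a typical vscsiStats session on the ESXi host (the world group ID 12345 is an example; take it from the output of the list command):

```shell
# List running VMs and their world group IDs
vscsiStats -l

# Start collecting statistics for a specific VM
vscsiStats -s -w 12345

# Print histograms, e.g. I/O length and latency
vscsiStats -p ioLength -w 12345
vscsiStats -p latency -w 12345

# Stop collection when finished
vscsiStats -x -w 12345
```

The histograms make it easy to see whether a workload is dominated by small random I/O or large sequential I/O, which feeds directly into the RAID level choice above.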

Skills and Abilities - Apply VMware storage best practices
Best Practices for Running VMware vSphere on iSCSI 
Best Practices for running VMware vSphere on Network Attached Storage

Skills and Abilities - Understand use cases for Raw Device Mapping
Raw device mappings to a VM can be great but have limitations; on most general occasions the VM doesn't need to write directly to the LUN, it writes to its virtual hard disk (VMDK) file.

Occasions I have used RDMs:
  • Virtual servers running SAN management software which need to issue commands to the storage array directly via the LUN, e.g. HDS Command Devices and EMC Gatekeeper Devices.
  • Virtual servers running as nodes within a Veritas Cluster containing physical servers.
  • Virtual servers which require array-level snapshots to be taken, or presented when snapped from physical servers.
The other benefit you can gain from using RDMs is if you need to present a very large LUN to a VM and the size exceeds the maximum allowable VMDK size; in practical terms this has never occurred for me.... yet!

RDM limitations
In FC-only environments each RDM must be presented as a LUN to the ESXi hosts within the cluster so it can be mapped to the VM as an RDM. If RDMs are used extensively, the LUN and LUN path maximums for the ESXi hosts can soon be reached.

If you are aiming to configure your VMs using software-defined declarative methods such as vCloud Director or Puppet Enterprise, the mapping required to present the LUN from the storage array, present it to the ESX cluster, then map it as an RDM to the VM can become very complicated very quickly. The use of VMware snapshots is also affected by RDMs, as RDMs in physical compatibility mode do not fall under the control of the snapshot process.

ptRDMs (physical compatibility mode RDMs) are not supported for vMotion or svMotion when in use with Windows Clustering within the VM, due to its use of SCSI persistent reservations, link

Skills and Abilities - Configure vCenter Server storage filters
 
There are four storage filters, all of which are applied by default.
  • VMFS Filter: filters out storage devices or LUNs that are already used by a VMFS datastore
  • RDM Filter: filters out LUNs that are already mapped as an RDM
  • Same Host and Transports Filter: filters out LUNs that can't be used as a VMFS datastore extent.
    • Prevents you from adding as an extent LUNs not exposed to all hosts that share the original VMFS datastore.
    • Prevents you from adding as an extent LUNs that use a storage type different from the original VMFS datastore.
  • Host Rescan Filter: automatically rescans and updates VMFS datastores after you perform datastore management operations
vSphere Client ->  Administration -> vCenter Server -> Settings -> Advanced Settings
To disable a filter, add the corresponding key if not already there and set it to false.
config.vpxd.filter.vmfsFilter (VMFS Filter)
config.vpxd.filter.rdmFilter (RDM Filter)
config.vpxd.filter.SameHostAndTransportsFilter (Same Host and Transports Filter)
config.vpxd.filter.hostRescanFilter (Host Rescan Filter)


Skills and Abilities - Understand and apply VMFS re-signaturing
Resignaturing of LUNs is required when two LUNs contain the same disk signature. The most common occurrence I have come across is where a LUN is backed up by the storage array by use of a snapshot; when you need to restore a VM from this, you present the snapped copy of the LUN back to the cluster to recover a single VM back to the original LUN.

Various scenarios can lead to the requirement to resignature LUNs, but the same fix applies: writing a new disk signature while leaving the data intact.

This used to be an arduous task in VI3, but since vSphere 4 and then 5 it is now easily done via the vSphere Client GUI. A useful description of the commands to run on each edition can be found here

You may also have the occasion where a LUN which has been used before, and still has obsolete data on it, is presented. In this case you may not want to resignature; you may want to wipe all the data and start fresh. To do this you would select Format in the GUI.

Skills and Abilities - Understand and apply LUN masking using PSA-related commands 
LUN masking is typically done at the array or FC switch level; however, this can also be done at the Pluggable Storage Architecture (PSA) level of the ESXi host.

Obtain the channel (C), target (T) and LUN (L) numbers and the vmhba of the device you want to mask
   esxcfg-scsidevs -m


Claim rules can be viewed and changed using this namespace
   esxcli storage core claimrule
Using this information, assign the device to the MASK_PATH plugin
   esxcli storage core claimrule add -r 500 -t location -A vmhba35 -C <x> -T <y> -L <z> -P MASK_PATH
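Adding the rule alone does not mask anything; a sketch of the remaining steps to make it take effect (rule 500, vmhba35 and the device id are examples):

```shell
# Load the new claim rule from the config file into the VMkernel
esxcli storage core claimrule load

# Unclaim the paths to the device so the MASK_PATH plugin can claim them
esxcli storage core claiming reclaim -d <device id>

# Run the loaded claim rules
esxcli storage core claimrule run

# Verify the device is no longer visible
esxcfg-scsidevs -m
```

To unmask, remove the rule with esxcli storage core claimrule remove -r 500, then load and run the rules again.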

Skills and Abilities - Analyze I/O workloads to determine storage performance requirements
To identify the workload of a running VM you can use vscsiStats
 
In addition to vscsiStats you can also use esxtop / resxtop to gather I/O statistics and latency information

esxtop (d disk mode)

esxtop (v vdisk mode)
  • MBREAD/s — megabytes read per second
  • MBWRTN/s — megabytes written per second
  • KAVG — latency generated by the ESXi kernel
  • DAVG — latency generated by the device driver
  • QAVG — latency generated from the queue
  • GAVG — latency as it appears to the guest VM (KAVG + DAVG)
  • AQLEN – storage adapter queue length (amount of I/Os the storage adapter can queue)
  • LQLEN – LUN queue depth (amount of I/Os the LUN can queue)
  • %USD – percentage of the queue depth being actively used by the ESXi kernel (ACTV / QLEN * 100%)
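For trending these counters over time rather than watching them live, both tools support batch mode; a sketch (the sample interval, count, output file and host name are examples):

```shell
# Capture 6 samples at 10-second intervals for offline analysis
esxtop -b -d 10 -n 6 > esxtop-stats.csv

# Or remotely against a host with resxtop
resxtop --server esxi01.example.com -b -d 10 -n 6 > esxtop-stats.csv
```

The resulting CSV can be opened in perfmon or a spreadsheet to chart DAVG/KAVG over the capture window.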
More reading

Skills and Abilities - Identify and tag SSD devices
To view tag,
GUI - Select host > Configuration >  Storage > Drive Type column of your datastore.
CMD - esxcli storage core device list

If the SSD tag is missing,
CMD - esxcli storage nmp satp rule add -s <SATP name> -d <device id> -o enable_ssd
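The rule only applies once the device is reclaimed. A sketch of the full sequence (the SATP and device id are examples; use the SATP that currently claims your device, shown in the device list output):

```shell
# Add a claim rule tagging the device as SSD
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxxx -o enable_ssd

# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim -d naa.xxxx

# Confirm the device now reports "Is SSD: true"
esxcli storage core device list -d naa.xxxx
```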


Skills and Abilities - Administer hardware acceleration for VAAI
To view state,
GUI - Select host > Configuration >  Storage > Hardware Acceleration column of your datastore.
CMD FC - esxcli storage core device list -d <device id>
CMD NFS - esxcli storage nfs list
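To see which individual VAAI primitives a block device supports, and which claim rules attach the VAAI filter and plugins, a sketch (the device id is an example):

```shell
# Show VAAI primitive support (ATS, Clone, Zero, Delete) for a device
esxcli storage core device vaai status get -d <device id>

# List the VAAI filter and plugin claim rules
esxcli storage core claimrule list -c Filter
esxcli storage core claimrule list -c VAAI
```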

Skills and Abilities - Configure and administer profile-based storage
Home > Management > VM Storage Profiles > Enable VM Storage Profile (choose clusters to enable)
Home > Management > VM Storage Profiles > Create VM Storage Profile (set name, description, attributes)
Home > Inventory > VMs and Templates > Right click VM and select VM Storage Profiles > Manage Profiles
Home > Inventory > VMs and Templates > Right click VM and select VM Storage Profiles > Check Profiles Compliance

Skills and Abilities - Prepare storage for maintenance
GUI - Select host > Configuration >  Storage > Datastore - Right Click Unmount
CMD - esxcli storage filesystem unmount -l <datastore name>

GUI - Select host > Configuration >  Storage > Datastore - Right Click Mount
CMD - esxcli storage filesystem mount -l <datastore name>

Skills and Abilities - Upgrade VMware storage infrastructure
GUI - Select host > Configuration >  Storage > Datastore - Right Click Upgrade to VMFS-5
CMD - esxcli storage vmfs upgrade -l <datastore name>
