This article describes Virtual SAN (VSAN), VMware's software-defined storage solution.
VMware introduced Virtual SAN, a new software-defined storage solution that is fully integrated with vSphere. Virtual SAN aggregates the locally attached disks of the hosts in a vSphere cluster into a storage solution that can be rapidly provisioned from VMware vCenter during virtual machine provisioning operations.
It’s a solution in which storage and compute for virtual machines are combined on the same hosts, with storage being provided by the hypervisor itself.
This provides central storage for virtual machines, with its capabilities exposed through the Storage Policy-Based Management (SPBM) platform. SPBM and virtual machine storage policies simplify virtual machine storage placement decisions for vSphere administrators.
Virtual SAN is fully integrated with core vSphere enterprise features such as VMware vSphere High Availability (vSphere HA), VMware vSphere Distributed Resource Scheduler (vSphere DRS), and VMware vSphere vMotion. This provides both high availability and scale-out storage functionality. It can also be considered in the context of quality of service (QoS), because virtual machine storage policies can be created to define the levels of performance and availability required on a per-virtual-machine basis.
- Virtual SAN can scale out to 32 hosts
- Managed via vCenter Server
Virtual SAN requires vCenter Server running version 5.5 Update 1 or higher. Virtual SAN can be managed by both the Windows version of vCenter Server and the vCenter Server Appliance (VCSA). Virtual SAN is configured and monitored via the vSphere Web Client, which must also be version 5.5 Update 1 or higher.
Requirements to configure VSAN
You require at least three vSphere hosts, each contributing local storage, in order to form a supported Virtual SAN cluster.
This allows the cluster to tolerate one host failure and meet the minimum availability requirement. The basic requirement is that the vSphere hosts must be running vSphere 5.5 Update 1 at a minimum. Note: if you run with fewer hosts, the availability of virtual machines is at risk if a single host goes down. The maximum number of hosts supported is 32.
Each vSphere host in the cluster that contributes local storage to Virtual SAN must have at least one hard disk drive (HDD) and at least one solid state disk drive (SSD).
- A SAS/SATA controller (storage controller in “pass-through” or “RAID0” mode) is required
- A combination of HDD and SSD devices is required (a minimum of 1 HDD and 1 SSD, SAS or SATA); SSDs are expected to make up at least 10% of total storage capacity
- The SSD will provide both a write buffer and a read cache. The more SSD capacity in the host, the greater the performance since more I/O can be cached.
- Not every node in a Virtual SAN cluster needs to have local storage. Hosts with no local storage can still leverage the distributed Virtual SAN datastore. However, the recommendation is to keep hosts similarly configured for optimal balancing of resource utilization.
- Each vSphere host must have at least one network interface card (NIC). The NIC must be 1Gb capable. However, as a best practice, VMware recommends 10Gb network interface cards.
- A Distributed Switch can be optionally configured between all hosts in the Virtual SAN cluster, although VMware Standard Switches (VSS) will also work.
- A Virtual SAN VMkernel port must be configured for each host. With a Distributed Switch, Network I/O Control (NIOC) can also be enabled to dedicate bandwidth to the Virtual SAN network.
The VMkernel port is labeled Virtual SAN. This port is used for inter-cluster node communication, and also for reads and writes when one of the vSphere hosts in the cluster owns a particular virtual machine but the actual data blocks making up the virtual machine files are located on a different vSphere host in the cluster. In this case, I/O needs to traverse the network configured between the hosts in the cluster.
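As an alternative to the Web Client steps below, the Virtual SAN VMkernel tagging can also be done from the ESXi command line. A minimal sketch using esxcli, assuming a VMkernel interface (vmk2 here is a placeholder) already exists on the VSAN port group:

```shell
# Tag an existing VMkernel interface for Virtual SAN traffic
# (vmk2 is a placeholder -- substitute your interface name).
esxcli vsan network ipv4 add -i vmk2

# Verify which VMkernel interfaces carry Virtual SAN traffic.
esxcli vsan network list
```

This is environment-specific and must be run on each ESXi host in the cluster; the Web Client workflow below achieves the same result per host.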
How to configure:
1. First, open the vSphere Web Client.
2. Select the Home tab > double-click Hosts & Clusters.
3. Expand the navigation on the left side and click vCenter.
4. With an ESXi host selected, navigate to Manage > Networking > VMkernel adapters.
5. You must now add a VMkernel adapter for the Virtual SAN traffic. Click the icon to add a new adapter.
6. Select VMkernel Network Adapter and click Next.
7. We have already attached each host to a Distributed Switch and created a VSAN port group. You must select the port group to use for this host. Click Browse.
8. Select the VSAN Network and click OK.
9. Keep the default settings, but select Virtual SAN traffic. Click Next.
10. Keep the default settings. Click Next.
11. Review the summary and click Finish.
12. You should now see the new VMkernel adapter added for the VSAN Network. Now enable Virtual SAN.
13. Once your network adapters are in place, we can turn on Virtual SAN at the cluster level.
Select the cluster, then navigate to Manage > Settings > Virtual SAN > General > Edit.
14. Check “Turn ON Virtual SAN” and click OK. I am going to keep “Manual” selected for this lab; the Automatic option will add all empty disks on the hosts to be claimed by Virtual SAN.
15. Create a disk group for Virtual SAN. From here we will create a new disk group that will use all eligible disks. Select the cluster > Manage > Settings > Virtual SAN > Disk Management > Claim Disks.
16. Click “Select all eligible disks”.
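If you prefer the command line, manual disk-group creation and verification can be sketched with esxcli as well. The device identifiers below are placeholders; you would look up the real SSD and HDD names on each host first:

```shell
# List local devices to identify the SSD (cache) and HDD (capacity)
# device names on this host.
esxcli storage core device list

# Create a disk group from one SSD and one HDD
# (the naa.* identifiers are placeholders).
esxcli vsan storage add -s naa.ssd_device_id -d naa.hdd_device_id

# Confirm the disks were claimed and check cluster membership.
esxcli vsan storage list
esxcli vsan cluster get
```

These commands are per-host and environment-specific; the Web Client's "Claim Disks" workflow above performs the equivalent operation across the cluster.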