vSAN or "Virtual SAN" is an SDS (software-defined storage) or hyper-converged infrastructure solution from VMware. It's an alternative to FC- or iSCSI-based shared storage. It uses a distributed file system called vDFS on top of object storage, and management is done through vCenter. For details there are the VMware docs, and of course many blogs have been written over the years. What follows below are a few things to take into consideration from an architecture and operations PoV.
Small to mid-sized environments (single to low double digit host counts) with a shared storage requirement and no dedicated storage expertise, where you want to make full use of the ESXi hosts' hardware.
vSAN is not just "storage" but a clustered solution which fully integrates with vSphere.
While COTS hardware can be used, all devices have to be certified for and compatible with vSAN per VMware's requirements. Storage will usually be (a combination of) SSD/flash-based capacity devices, optionally with a dedicated cache device.
Network-wise there are no special requirements beyond the usual storage-related considerations about latency and bandwidth. Since vSAN 7 U2 it is possible to use vSAN over RDMA ("vSANoRDMA") instead of TCP, offering lower latency and higher performance. Besides compatible NICs, RDMA (RoCE v2) requires a network configured for lossless traffic, which most recent network devices should support.
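A quick way to sanity-check the RDMA and vSAN networking prerequisites is from the ESXi shell (this sketch assumes SSH access to the hosts; output naturally varies per environment):

```shell
# List RDMA-capable devices on this host; for vSAN over RDMA the
# devices must support RoCE v2.
esxcli rdma device list

# Show which vmkernel interfaces are tagged for vSAN traffic.
esxcli vsan network list
```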
You will need a minimum of 3 hosts. The default setup is a single-site cluster. A stretched-cluster configuration is also possible, with 2 sites replicating each other's data. As it's a cluster, consider failure domains, split-brain/node-isolation scenarios, quorum, the witness, and FTT (Failures to Tolerate).
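The cluster state and the FTT setting can both be inspected per host from the ESXi shell, which helps when reasoning about quorum and failure domains (a hedged sketch; exact output differs per version):

```shell
# This host's view of the vSAN cluster: member UUIDs, master/backup
# roles and local node state -- useful when thinking about quorum.
esxcli vsan cluster get

# The default storage policies, including hostFailuresToTolerate (FTT).
# E.g. FTT=1 with RAID-1 mirroring needs 2n+1 = 3 hosts, which is where
# the 3-host minimum comes from.
esxcli vsan policy getdefault
```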
Data can be encrypted both in transit and at rest.
After enabling vSAN, vSphere HA will use the vSAN network for its heartbeats instead of the usual Management Network.
Although VMware tells you vSAN will work fine without vCenter (which is technically true), you should be prepared to fix issues while vCenter is unreachable. There are catch-22 situations where vCenter has no storage because vSAN is unavailable, yet you want to use vCenter to fix exactly that. Which could mean you have to (temporarily) set up a new vCenter, or fall back to the CLI.
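When vCenter is down, basic triage can be done per host over SSH. A minimal sketch of useful first commands (availability of some subcommands depends on the ESXi/vSAN version):

```shell
# Cluster membership and this node's state.
esxcli vsan cluster get

# Disks claimed by vSAN on this host.
esxcli vsan storage list

# Per-check health summary, without needing vCenter.
esxcli vsan health cluster list

# Accessibility summary of vSAN objects (are VM objects reachable?).
esxcli vsan debug object health summary get
```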
While it's possible to store files directly on a vSAN-backed datastore, you should instead set up "File Services", which offers SMB and NFS shares. It does this by placing an agent VM on each node.
Note that with RDMA, LACP and IP-hash-based NIC teaming are not supported.
While "selective" data replication using SPBM (Storage Policy Based Management) is possible, this can quickly get complicated (e.g. having to micromanage storage policies per VM).
Since 7 U2, data-at-rest encryption can be set up without an external KMS by using vSphere as a Native Key Provider (NKP). Besides having a sane key-management policy, this requires UEFI Secure Boot and TPM 2.0 to be enabled on the hosts first.
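The host-side prerequisites can be checked and set from the ESXi shell. A hedged sketch (verify the exact flags against your ESXi version before running the `set`):

```shell
# Show the host's encryption settings: the mode (e.g. TPM) and whether
# Secure Boot enforcement is enabled.
esxcli system settings encryption get

# Switch to TPM-backed encryption mode and require Secure Boot.
esxcli system settings encryption set --mode=TPM --require-secure-boot=true
```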
Before a host can enter Maintenance Mode, vSAN may need to perform an extra step: migrating data to the remaining hosts in the cluster.
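From the ESXi shell the data-handling choice is explicit, mirroring the options the vSphere Client offers (sketch; mind free capacity before a full evacuation):

```shell
# Enter maintenance mode, choosing how vSAN handles this host's data:
#   ensureObjectAccessibility - move only what is needed to keep
#                               objects accessible (default, fastest)
#   evacuateAllData           - full evacuation (needs free capacity)
#   noAction                  - move nothing (risks reduced availability)
esxcli system maintenanceMode set -e true -m ensureObjectAccessibility
```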
Data integrity has been excellent for me, but accessibility? Maybe not so much. I tested with lost network connectivity, removed hosts from the cluster, and broke the cluster in other ways by changing the vSAN configuration and removing/redeploying vCenter. I was always able to get full access back to the vSAN datastore without any data corruption. However, getting there took a considerable amount of time rebuilding the cluster with esxcli, which meant downtime because VMs had no storage.
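The manual rebuild mentioned above roughly revolves around cluster membership commands like these (a sketch; `<cluster-uuid>` is a placeholder you would take from a surviving host's `esxcli vsan cluster get` output):

```shell
# Remove this host from its (broken) vSAN cluster membership...
esxcli vsan cluster leave

# ...and rejoin it to the cluster with the known sub-cluster UUID.
esxcli vsan cluster join -u <cluster-uuid>

# Without vCenter, hosts may also need their unicast agent lists
# inspected/restored so they can find each other again.
esxcli vsan cluster unicastagent list
```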