Virtualization is, basically, taking very powerful computers and carving them up into chunks that can then be used to run many individual virtual (not physically real) computers, typically referred to as VMs, or Virtual Machines. A virtualization stack is made up of four main components: compute, storage, network, and management.
Compute
Compute refers to the actual hardware platform that provides the CPU and memory for VMs, along with the software that manages those resources and divides them up into virtual machines. That software is referred to as the hypervisor. There are two main types of hypervisors. Type-1 hypervisors, also known as native or bare-metal hypervisors, run directly on the hardware, generally as part of the operating system itself. Common Type-1 hypervisors include VMware ESXi, Nutanix AHV, Microsoft Hyper-V, KVM, and Xen. Type-2 hypervisors, also known as nested or desktop hypervisors, run as processes within an operating system. Common examples include VirtualBox and VMware Workstation. Type-1 hypervisors are much more performant than Type-2 hypervisors, as they are able to give their virtual machines much more direct access to the hardware.
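If you want a quick feel for whether a given Linux host could run a Type-1 hypervisor such as KVM, a minimal, Linux-specific sketch like the one below checks the CPU feature flags and whether the kvm device node is present. The paths and flag names are standard on Linux, but treat this as an illustration rather than a supported tool.

    # Rough check for hardware virtualization support on a Linux host (illustrative only).
    import os

    def cpu_has_virt_extensions() -> bool:
        # Intel exposes VT-x as the "vmx" flag; AMD exposes AMD-V as "svm".
        with open("/proc/cpuinfo") as f:
            flags = f.read()
        return "vmx" in flags or "svm" in flags

    def kvm_device_present() -> bool:
        # The KVM kernel module exposes this device node when it is loaded.
        return os.path.exists("/dev/kvm")

    if __name__ == "__main__":
        print("CPU virtualization extensions:", cpu_has_virt_extensions())
        print("/dev/kvm present:", kvm_device_present())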
Storage
Hypervisors need storage on which to store the files that define the virtual machines. They can either use local storage (installed directly in the hypervisor host) or shared storage connected to multiple hypervisors. Shared storage is the most common in production virtualization environments, as it enables VMs to move between hypervisors (for example, via live migration) as long as those hypervisors have access to the same shared storage.
Shared storage is traditionally provided by either a SAN (Storage Area Network) or a NAS (Network-Attached Storage). SANs and NASs are both purpose-built hardware intended to hold and manage many physical disks, pooling them together in a manner that provides large logical disks while also protecting the data from being lost in the event of an individual drive failure. The key differences between a SAN and a NAS are the technology used to present the storage and the expandability of the platform. SANs typically present block storage via special storage protocols such as iSCSI (which leverages traditional IP/Ethernet) or Fibre Channel (which leverages optical networking via hardware called HBAs, which connect to specialized Fibre Channel switches). NASs, meanwhile, tend to present file storage via protocols more traditionally associated with file sharing, such as NFS, CIFS, or SMB. SANs also tend to be expandable by adding additional physical enclosures to hold additional disks, sometimes called shelves, whereas NASs tend to be non-expandable.
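To make the block-versus-file distinction concrete, here is a small sketch from a hypervisor's point of view. The device and mount paths (/dev/sdb for an iSCSI-presented LUN, /mnt/nfs_datastore for an NFS export) are made-up examples for illustration only.

    # Illustrative only: the paths below are hypothetical examples.

    # A SAN presents raw block devices; the hypervisor puts its own
    # filesystem (or raw VM disks) on top of them.
    with open("/dev/sdb", "rb") as block_device:       # e.g. a LUN presented over iSCSI
        first_sector = block_device.read(512)          # raw bytes, no files or directories

    # A NAS presents a filesystem over the network; the hypervisor simply
    # reads and writes files, such as VM disk images.
    with open("/mnt/nfs_datastore/vm01/disk0.img", "rb") as vm_disk:   # e.g. an NFS mount
        header = vm_disk.read(512)                     # ordinary file I/O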
VMware and Nutanix, among others, offer proprietary storage technologies that, instead of leveraging an external storage device, pool and manage the locally attached storage within the hypervisors themselves to provide shared storage. VMware refers to this technology as vSAN, whereas Nutanix has a concept we will discuss later called the Distributed Storage Fabric.
Network
Hypervisors manage and present network interaction between VMs and the physical network infrastructure. Most hypervisors connect to the physical infrastructure in similar ways: uplink NICs on the hypervisors are attached to switch ports configured as trunks, and the VLANs needed by VMs are added to those trunk ports. Virtual switches then present virtual switchports to the VMs' virtual NICs.
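As a purely conceptual illustration (not any vendor's actual API), the relationship between uplinks, port groups, VLANs, and virtual NICs can be modeled roughly like this; all of the names are illustrative.

    from dataclasses import dataclass, field

    # Conceptual model only; names are illustrative, not a real hypervisor API.

    @dataclass
    class PortGroup:
        name: str
        vlan_id: int              # VLAN tag applied to traffic from attached vNICs

    @dataclass
    class VirtualSwitch:
        name: str
        uplinks: list             # physical NICs cabled to trunk ports on the switches
        port_groups: list = field(default_factory=list)

    vswitch = VirtualSwitch(name="vSwitch0", uplinks=["vmnic0", "vmnic1"])
    vswitch.port_groups.append(PortGroup(name="App-Servers", vlan_id=20))
    vswitch.port_groups.append(PortGroup(name="Databases", vlan_id=30))

    # A VM's virtual NIC "plugs into" a port group; the VLAN it lands on
    # must also be allowed on the trunks carried by the uplink NICs.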
VMware and Nutanix both offer extensions to the network stack, typically called software-defined networking or network virtualization, that expand the switching capabilities and add routing and security capabilities to the hypervisor. These platforms, VMware NSX and Nutanix Flow, are key components that enable us to present our services in a way that lets end users provision and manage their own networking. We will be covering these platforms in great depth.
Management
While an individual hypervisor generally can present a GUI and/or CLI enabling management of itself, a larger management layer is required to manage multiple hypervisors as an aggregate. Each vendor has a very different approach to this, and to how they aggregate and pool hypervisors.
Virtual Machines
It’s worth taking some time to explain what a virtual machine actually is. A virtual machine is best thought of as a logical representation of a physical computer. It acts like a physical computer. It has the same “parts” as a physical computer. Virtually, it is composed of two main pieces. The first is the definition of the hardware specifications of the VM, exactly as you would describe your home computer. It defines processors, RAM, optical drives and their connectivity (IDE or SATA), hard disk controllers, hard disks and their size and connectivity, video card (yes, this matters), network cards, and other hardware we are never concerned about, as well as how all of this is plugged in to the virtual motherboard. Adding a NIC, for instance, simply adds a line to the definition that says, “I’m installing this brand of NIC in PCI slot X”. The second main piece is one (or more) disk image files, representing the hard drives installed in the virtual machine. VMware and Nutanix each have their own way of storing these definitions and disk images.
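As a purely conceptual sketch (not the actual file format either vendor uses), a VM definition boils down to a structured description of the virtual hardware plus pointers to the disk image files. The field names and paths below are invented for illustration.

    # Conceptual sketch of a VM definition; field names and paths are illustrative,
    # not VMware's or Nutanix's actual format.
    vm_definition = {
        "name": "web01",
        "cpu": {"sockets": 1, "cores_per_socket": 4},
        "memory_mb": 8192,
        "disks": [
            # Each entry points at a disk image file and says how it is "cabled" in.
            {"controller": "scsi0", "unit": 0, "image": "/storage/web01/disk0.img", "size_gb": 60},
        ],
        "nics": [
            # Adding a NIC is just another line in the definition: model + where it plugs in.
            {"model": "virtio", "port_group": "App-Servers", "pci_slot": 5},
        ],
        "optical_drives": [
            {"bus": "sata0", "iso": None},   # empty virtual DVD drive
        ],
        "video": {"vram_mb": 16},
    }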
Hyperconverged Platform
Originally, virtualization used fully discrete and separate compute, storage, and networking. You had your hypervisors, your SANs, and all the networking was handled by the physical infrastructure. When all three of these are integrated into the same platform, it is referred to as hyperconverged. Today, VMware can combine two or even all three of these by way of vSAN for storage and NSX for networking. Nutanix is considered a hyperconverged platform by way of AHV, the Distributed Storage Fabric, and Flow virtual networking. I do not know which marketing intern developed this term, or when, but I suspect that the marketing executive that took credit for it probably has a very large yacht.
That all being said…
As for how it changes all that networking stuff? It doesn’t, at least from the perspective of the virtual machine. Virtual machines follow the same rules. They ARP and respond to ARP, they send and receive packets and frames, and their virtual NICs plug into virtual switchports in a port group on a virtual switch, which has uplink ports to the physical switches. All of these things behave exactly as you would expect their physical counterparts to, and because they are connected to the physical infrastructure, if layer 2 connectivity is in place, VM-to-VM and VM-to-physical communication behaves the same as physical-to-physical. We’ll get into virtual switches and how they are implemented in the specific sections for VMware/NSX and Nutanix/Flow, but from the perspective of the VM, it’s all the same.
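To underline the point: from inside a Linux guest, a virtual NIC is indistinguishable at this level from a physical one. A trivial check like the following (a Linux-oriented sketch, nothing more) just sees ordinary network interfaces.

    import socket

    # Inside a guest OS, virtual NICs show up as ordinary interfaces (Linux example).
    for index, name in socket.if_nameindex():
        print(index, name)   # e.g. "1 lo", "2 eth0" -- nothing here marks eth0 as virtual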