
VMware VCP-DCV: vSphere Administration, Storage, and High Availability

The VMware Certified Professional — Data Center Virtualization (VCP-DCV) is VMware's most widely held certification. It validates your ability to install, configure, manage, and troubleshoot vSphere environments — the virtualisation platform behind a large share of enterprise datacentre workloads worldwide. Understanding vSphere is fundamental for any infrastructure engineer: even as organisations migrate to cloud, vSphere environments persist, and offerings such as VMware Cloud on AWS bridge on-premises and cloud.


vSphere Architecture: ESXi, vCenter, and the Management Stack

vSphere is VMware's server virtualisation platform. Core components: ESXi (Type 1 bare-metal hypervisor — runs directly on hardware, provides the virtualisation layer, and is managed remotely with no persistent management OS) and vCenter Server (centralised management for all ESXi hosts — deployed as the vCenter Server Appliance (VCSA), a Linux-based virtual appliance that runs as a VM on an ESXi host). vCenter services: vSphere Client (HTML5 web UI for all vSphere management), vSphere API (REST and SOAP APIs for automation), vSphere Lifecycle Manager (vLCM — manages ESXi host updates and upgrades via baselines or images), vSphere High Availability (HA), and vSphere Distributed Resource Scheduler (DRS). The vSphere infrastructure hierarchy: Datacenter (logical container) > Cluster (group of ESXi hosts sharing resources) > Host (ESXi server) > VM. Resource pools: logical groupings of resources within a cluster — set a reservation (guaranteed minimum), a limit (maximum ceiling), and shares (relative priority during contention). Cluster features like HA, DRS, and vMotion require shared storage — hosts must see the same datastores.
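
To make the hierarchy concrete, here is a minimal sketch that walks the vCenter inventory (clusters, hosts, VMs) through the vSphere Automation REST API. It assumes vCenter 7.0 or later and its /api endpoints; the VCSA hostname and credentials are placeholders, and TLS verification is disabled for a lab only:

    import requests

    VCENTER = "vcsa.example.local"                      # hypothetical VCSA hostname
    BASE = f"https://{VCENTER}/api"
    AUTH = ("administrator@vsphere.local", "password")  # placeholder credentials

    # Authenticate: a successful POST to /api/session returns a session token string.
    token = requests.post(f"{BASE}/session", auth=AUTH, verify=False).json()
    headers = {"vmware-api-session-id": token}

    def get(path):
        # Small helper: GET a vCenter collection and return the parsed JSON list.
        return requests.get(f"{BASE}{path}", headers=headers, verify=False).json()

    # Datacenter > Cluster > Host > VM: one list call per level of the hierarchy.
    for cluster in get("/vcenter/cluster"):
        print(f"Cluster {cluster['name']}: HA={cluster['ha_enabled']}, DRS={cluster['drs_enabled']}")
    for host in get("/vcenter/host"):
        print(f"Host {host['name']} ({host['connection_state']})")
    for vm in get("/vcenter/vm"):
        print(f"VM {vm['name']}: power={vm['power_state']}, vCPUs={vm['cpu_count']}")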

Virtualisation Concepts: VMs, Templates, and Snapshots

Virtual Machine components: virtual CPU (vCPU — mapped to physical CPU threads by the hypervisor scheduler), vRAM (allocated from host physical RAM), virtual disks (VMDK files stored on datastores), virtual NICs (connected to port groups on virtual switches), and VM hardware version (determines which virtual hardware features are available — newer versions add features, older versions preserve backward compatibility). VM creation: deploy from scratch (manual configuration), deploy from template (cloned from a master VM, the fastest deployment — convert a VM to a template to prevent accidental modification), or clone from an existing VM (copy with a new identity). Snapshots: preserve VM state at a point in time — snapshot file (delta disk capturing all changes since the snapshot), snapshot tree (chain of delta disks). Snapshots are not backups — they grow indefinitely and degrade performance as the chain lengthens. Delete snapshots promptly: commit the changes or revert to a clean state. Guest customisation specifications (Sysprep for Windows, cloud-init for Linux): applied during template deployment to customise hostname, IP address, and domain join — automated guest OS configuration without manual setup.
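
As a concrete illustration, the sketch below takes and later consolidates a snapshot with pyVmomi, VMware's Python SDK for the vSphere SOAP API. The VCSA hostname, credentials, and VM name are placeholders; CreateSnapshot_Task and RemoveAllSnapshots_Task are standard vSphere API methods:

    import ssl
    from pyVim.connect import SmartConnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnect(host="vcsa.example.local",                  # placeholder VCSA
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=ssl._create_unverified_context())  # lab only
    content = si.RetrieveContent()

    # Find the VM by name using a container view over the whole inventory.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")          # hypothetical VM

    # Quiesced snapshot (no memory state) before a guest OS change.
    WaitForTask(vm.CreateSnapshot_Task(name="pre-patch",
                                       description="before guest OS patching",
                                       memory=False, quiesce=True))

    # Once the change is verified, remove all snapshots to commit the delta
    # disks back into the base VMDKs and keep the chain from growing.
    WaitForTask(vm.RemoveAllSnapshots_Task())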

vSphere Networking: Virtual Switches and Distributed Switching

vSphere networking uses virtual switches that connect VMs to physical networks. Standard vSwitch (vSS): configured per host, not centralised — each host needs its own switch configuration (labour-intensive for large environments). Distributed vSwitch (VDS): configured once in vCenter and pushed to all attached hosts — centralised management, consistent configuration, and advanced features (port mirroring, NetFlow, per-VM traffic shaping, LACP support). Port groups: named network configurations applied to a vSwitch or VDS — VMs connect to port groups, not directly to the switch. VLAN tagging on a port group: VLAN ID 0 = no tagging (EST — the physical switch access port handles VLAN assignment), a specific VLAN ID (1–4094) = VST (Virtual Switch Tagging — the vSwitch tags outbound frames and strips tags inbound), VLAN ID 4095 = VGT (Virtual Guest Tagging — the VM handles VLAN tags itself, useful for network appliance VMs). VMkernel (VMK) adapters: interfaces for ESXi host traffic — dedicate a VMK to each function: management (vSphere Client access), vMotion (live migration traffic), vSAN, iSCSI, NFS, and Fault Tolerance logging. NIC teaming: multiple physical uplinks on a vSwitch or VDS — load-balancing policies include route based on originating virtual port ID (the default), route based on IP hash (required for link aggregation/LACP), and explicit failover order (active/standby).
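
For example, creating a port group with a specific VLAN ID (VST) on a host's standard vSwitch can be scripted with pyVmomi roughly as follows; the host name, vSwitch name, port group name, and VLAN ID are all placeholders:

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcsa.example.local", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Pick the target ESXi host from the inventory.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.local")   # hypothetical host

    # Port group on an existing standard vSwitch; the vSwitch tags/untags frames (VST).
    spec = vim.host.PortGroup.Specification()
    spec.name = "prod-vlan100"         # VMs attach their vNICs to this port group
    spec.vswitchName = "vSwitch0"      # existing vSS on the host
    spec.vlanId = 100                  # 0 = untagged (EST), 1-4094 = VST, 4095 = VGT
    spec.policy = vim.host.NetworkPolicy()

    host.configManager.networkSystem.AddPortGroup(portgrp=spec)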

vSphere Storage: VMFS, NFS, vSAN, and iSCSI

vSphere storage options. VMFS (Virtual Machine File System): clustered filesystem on block storage (FC SAN, iSCSI SAN, FCoE) — multiple ESXi hosts can mount the same VMFS datastore simultaneously. VMFS versions: VMFS 6 is current (supports 4K-native drives and automatic space reclamation). NFS: file-based storage shared from a NAS (NetApp, EMC Isilon) — ESXi mounts NFS exports as datastores — simpler to manage than block storage, with performance dependent on the array and network. vSAN: hyper-converged storage — aggregates local NVMe/SSD drives across cluster hosts into a single shared distributed datastore, with no external storage array required. vSAN is policy-based: VM Storage Policies define FTT (Failures to Tolerate — RAID-1 mirroring or RAID-5/6 erasure coding), the number of disk stripes, and an IOPS limit per VMDK. vSAN requires a minimum of 3 hosts for RAID-1 FTT=1, 4 hosts for RAID-5 erasure coding, and 6 hosts for RAID-6. iSCSI: block storage over an IP network — ESXi includes a software iSCSI initiator; use dedicated VMK adapters for iSCSI traffic and bind them to specific NICs to segregate storage traffic.
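
The policy you choose directly determines how much raw vSAN capacity a VM consumes. A back-of-the-envelope sketch of the standard overheads (the host minimums match the figures above):

    # Raw vSAN capacity consumed per GiB of VM data under common policy choices.
    OVERHEAD = {
        "RAID-1 FTT=1": (2.0, 3),    # two full mirrors, minimum 3 hosts
        "RAID-1 FTT=2": (3.0, 5),    # three mirrors, minimum 5 hosts
        "RAID-5 FTT=1": (4 / 3, 4),  # 3 data + 1 parity, minimum 4 hosts
        "RAID-6 FTT=2": (1.5, 6),    # 4 data + 2 parity, minimum 6 hosts
    }

    def vsan_raw_usage(vm_data_gib: float, policy: str) -> float:
        factor, _min_hosts = OVERHEAD[policy]
        return vm_data_gib * factor

    for policy in OVERHEAD:
        print(f"{policy}: 100 GiB of VM data consumes ~{vsan_raw_usage(100, policy):.0f} GiB raw")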

vSphere High Availability, DRS, and vMotion

vSphere cluster features for availability and efficiency. vSphere HA (High Availability): automatically restarts VMs on surviving cluster hosts if a host fails — admission control reserves enough spare capacity to restart the VMs of a failed host (dedicated failover hosts, slot policy, or a configured percentage of cluster resources). HA uses network heartbeats (over the management VMK) plus datastore heartbeating to distinguish a host that is merely isolated from the network from one that has actually failed. vMotion (live migration): move a running VM from one host to another with no downtime — requires shared storage (the same datastore visible to both hosts, unless combined with Storage vMotion), compatible CPUs (same vendor and feature set, or Enhanced vMotion Compatibility mode), and VMK adapters enabled for vMotion on both hosts. Storage vMotion: migrate a VM's disks between datastores while the VM is running — no shared storage required. DRS (Distributed Resource Scheduler): automatic load balancing of VMs across cluster hosts — migrates VMs via vMotion when hosts are overloaded, balancing CPU and memory utilisation. DRS automation levels: Manual (recommendations only, you approve), Partially Automated (initial placement automatic, migrations require approval), Fully Automated (all DRS decisions executed automatically). Affinity rules: keep VMs together (same host) or apart (separate hosts) — use anti-affinity for availability (database primary and replica on different hosts).
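
The arithmetic behind percentage-based admission control is simple; the sketch below (host sizes and the reserved percentage are made-up numbers) checks whether the reserved headroom is enough to restart everything from the largest host:

    # Illustrative admission-control check: does the reserved percentage of
    # cluster memory cover the largest host's capacity if that host fails?
    hosts_gib = {"esxi01": 256, "esxi02": 256, "esxi03": 256, "esxi04": 256}
    reserved_pct = 25                      # roughly "tolerate one host out of four"

    total = sum(hosts_gib.values())
    headroom = total * reserved_pct / 100
    largest = max(hosts_gib.values())

    print(f"Cluster memory {total} GiB, HA headroom {headroom:.0f} GiB, largest host {largest} GiB")
    print("Covers largest-host failure" if headroom >= largest else "Does NOT cover a host failure")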

Key exam facts — VCP-DCV

  • VCSA is a Linux appliance VM that hosts vCenter Server — manages all ESXi hosts centrally
  • vSphere VDS (Distributed vSwitch): centralised configuration in vCenter, distributed to all attached hosts
  • Snapshots are not backups — delta disks grow indefinitely and degrade performance over time
  • vSAN: hyper-converged storage using local NVMe/SSD — requires minimum 3 hosts, policy-driven
  • vMotion requires shared storage, compatible CPUs, and enabled VMK vMotion adapters
  • HA restarts VMs on surviving hosts after a host failure — requires spare capacity via admission control
  • DRS automatically balances VMs via vMotion — Fully Automated mode executes migrations without approval
  • VMFS: clustered block storage filesystem — multiple hosts mount the same datastore simultaneously
  • VLAN 4095 on port group = VGT mode — VM OS handles VLAN tagging directly
  • VM Storage Policies define FTT (fault tolerance) and stripe width for vSAN objects

Common exam traps

vSphere snapshots should be used as a backup strategy

Snapshots capture VM state at a point in time but are not backups. Delta disks grow indefinitely, consume datastore space, and degrade VM performance. Use a purpose-built backup product that integrates with the vSphere Storage APIs for Data Protection (for example, Veeam) for actual backup and recovery.

vCenter is required to run VMs on ESXi

VMs run on ESXi hosts independently — a standalone ESXi host runs VMs without vCenter, managed via the ESXi Host Client. vCenter adds centralised management and is required only for the advanced cluster features (HA, DRS, vMotion, VDS).

More vCPUs always improves VM performance

Over-assigning vCPUs can hurt performance — the hypervisor must co-schedule a VM's vCPUs onto free physical CPU threads, so VMs with excessive vCPU counts spend more time waiting in the scheduler (visible as CPU ready time), which increases latency. Right-size the vCPU count to the actual workload.
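
One way to see this in practice is the CPU ready counter. The sketch below converts vCenter's real-time ready value (milliseconds accumulated over a 20-second sample) into a per-vCPU percentage; the VM numbers and the rule-of-thumb threshold are illustrative, not VMware limits:

    def cpu_ready_pct(ready_ms: float, num_vcpus: int, interval_s: int = 20) -> float:
        # Ready time is reported in ms accumulated over the sample interval,
        # summed across vCPUs; normalise it to a per-vCPU percentage.
        return ready_ms / (interval_s * 1000 * num_vcpus) * 100

    # Hypothetical 8-vCPU VM reporting 12,000 ms of ready time in a 20 s sample:
    pct = cpu_ready_pct(12_000, 8)
    print(f"CPU ready per vCPU: {pct:.1f}%")   # ~7.5%, a sign the VM may be over-sized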
