Server virtualization definition
Server virtualization is the process of dividing one physical server into multiple virtual servers (virtual machines, or VMs) through a software layer known as a hypervisor. The underlying hardware resources stay the same; the hypervisor simply allocates the CPU, memory, storage, and network capacity among the VMs. Each virtual machine operates as an independent system with its own operating system, applications, and resources. As a result, server virtualization maximizes server resource utilization and simplifies IT management.
See also: host virtual machine, micro-virtual-machine
Benefits of server virtualization
- Efficient use of hardware and reduced server sprawl.
- Lower hardware, power, and cooling costs.
- Simplified management and more efficient use of IT staff time.
- Quick provisioning and server configuration.
- Effective backup and recovery solutions, leading to less downtime in case of failure.
- Easy scalability for changing business needs.
- An ideal environment for testing and development.
Types of server virtualization
- Full virtualization. Full server virtualization relies on a hypervisor. The hypervisor monitors each virtual machine's resource usage, such as disk space and CPU, and allocates physical resources accordingly. The key aspect of full virtualization is that the separate server instances are unaware of each other and of the virtualization layer. As a result, each VM can run an unmodified operating system.
- Para-virtualization. Unlike in full virtualization, the guest operating system is aware of the virtual environment and interacts with the hypervisor directly to optimize performance. Because the guest and hypervisor cooperate, the hypervisor carries less overhead than in full virtualization, but each guest OS must be modified to support it.
- OS-level virtualization. This is the simplest type of server virtualization. The physical server's operating system itself manages and isolates the instances, so there is no need for a hypervisor. However, every instance must run the same operating system as the host.
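The practical difference between the types above can be condensed into one rule of thumb. The sketch below is a deliberately simplified illustration (the function name and OS labels are invented for the example, not part of any real API): a full or para-virtualized hypervisor presents virtual hardware and can therefore host any guest OS, while OS-level virtualization shares the host kernel, so the guest must match the host:

```python
def can_host(virtualization_type, host_os, guest_os):
    """Toy rule of thumb for which guest OS each virtualization type supports."""
    if virtualization_type in ("full", "para"):
        # The hypervisor exposes (virtual) hardware, so any guest OS can boot.
        return True
    if virtualization_type == "os-level":
        # Instances share the host kernel, so the guest OS must match the host.
        return guest_os == host_os
    raise ValueError(f"unknown type: {virtualization_type}")

print(can_host("full", "Linux", "Windows"))      # True
print(can_host("os-level", "Linux", "Windows"))  # False
print(can_host("os-level", "Linux", "Linux"))    # True
```

Note that para-virtualization still requires the guest OS to be modified to cooperate with the hypervisor, a constraint this simple sketch does not model.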