FaultVisor2: Testing Hypervisor Device Drivers Against Real Hardware Failures
Published in | 2018 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), pp. 204 - 211
---|---
Main Authors |
Format | Conference Proceeding
Language | English
Publisher | IEEE
Published | 01.12.2018
Summary: Hardware failures are inevitable, especially in cloud environments where there are many hardware devices. To improve hypervisor reliability, hypervisor device drivers must handle hardware failures appropriately. Our goal is to allow cloud vendors to test closed-source hypervisor device drivers against failures of their real hardware. Previous studies either require source code, can only test against virtual hardware, or cannot be applied to hypervisors. In this paper, we propose FaultVisor2, a hypervisor device driver testing framework that combines fault injection and nested virtualization. To test closed-source hypervisor device drivers, we inject pseudo faults into the I/O data returned from the hardware to the hypervisor device drivers. To test against real hardware, we give the target hypervisor pass-through access to the physical hardware and manipulate the I/O data of the target devices by intercepting I/O accesses. To apply the approach to hypervisors, we exploit nested virtualization and run a small hypervisor underneath the target hypervisor to inject the pseudo faults. We omit some nested-virtualization functions, including nested paging virtualization, to keep the execution environment close to the real one and to reduce runtime overhead. In our experiment with the VMware ESXi hypervisor, we found three types of errors that led to critical system failures.
ISSN: 2380-8004
DOI: 10.1109/CloudCom2018.2018.00048
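
As a rough illustration of the mechanism the abstract describes (corrupting the I/O data that pass-through hardware returns to the target hypervisor's device driver, from a small hypervisor running underneath it), the C sketch below shows one way such an injection hook could look at an MMIO-read intercept. All type names, function names, and fault models here are hypothetical assumptions for illustration, not the actual FaultVisor2 interface.

```c
/*
 * Hypothetical sketch: a thin hypervisor layer intercepts MMIO reads made
 * by the target hypervisor's device driver and corrupts the returned I/O
 * data according to a configured fault model. Names are illustrative only.
 */
#include <stdint.h>
#include <stdlib.h>

/* Fault models one might inject into I/O data returned from hardware. */
enum fault_type {
    FAULT_NONE,
    FAULT_BIT_FLIP,     /* flip a random bit in the read value            */
    FAULT_STUCK_BITS,   /* return all-ones, as a surprise-removed PCIe
                           device typically reads back 0xFFFFFFFF          */
    FAULT_STALE_VALUE,  /* return a previously observed value              */
};

struct fault_policy {
    enum fault_type type;
    uint64_t target_gpa_start;  /* MMIO range of the device under test     */
    uint64_t target_gpa_end;
    unsigned inject_one_in_n;   /* probabilistic injection rate            */
    uint32_t last_value;        /* remembered value for FAULT_STALE_VALUE  */
};

/*
 * Called from the MMIO-read intercept path of the small hypervisor that
 * runs underneath the target hypervisor. 'gpa' is the guest-physical
 * address the driver read, 'value' is what the pass-through hardware
 * actually returned. The return value is what the driver will observe.
 */
static uint32_t inject_on_mmio_read(struct fault_policy *p,
                                    uint64_t gpa, uint32_t value)
{
    if (gpa < p->target_gpa_start || gpa >= p->target_gpa_end)
        return value;                       /* not the device under test  */

    if (p->inject_one_in_n && (rand() % p->inject_one_in_n) != 0) {
        p->last_value = value;              /* remember for stale faults  */
        return value;                       /* no fault injected this time */
    }

    switch (p->type) {
    case FAULT_BIT_FLIP:
        return value ^ (1u << (rand() % 32));
    case FAULT_STUCK_BITS:
        return 0xFFFFFFFFu;
    case FAULT_STALE_VALUE:
        return p->last_value;
    case FAULT_NONE:
    default:
        return value;
    }
}
```

Under this assumed design, the fault policy is the only state the injection layer needs, which is consistent with the paper's goal of keeping the underlying hypervisor small and the execution environment close to a native, non-nested setup.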