Introduction

These are my notes and brain-farts during the investigation and setup of a 2-node KVM "warm standby" system. These notes are primarily for my own record ... but ... I'm hoping it's an interesting read and maybe even useful to others (i.e. you, the reader). That would be a great bonus :-)

Goal

My goal is to have a shared-nothing architecture with 2 hypervisor servers running KVM virtual machines. The VMs themselves must be synchronized onto both machines' direct-attached storage.
In case of a problem or planned server maintenance, the VMs should be live-migrated to the other node. In case of a total server failure, the VMs should be manually started on the surviving node.

What I am NOT building

To be clear: this is not a Highly Available setup. It has no cluster features and does not support automatic failover of the virtual machine resources when one of the nodes fails. Why not, you ask?? Well, I'm a guy that really likes to Keep It Simple, Stupid. Adding cluster services into the mix for only 2 servers dramatically increases the complexity of the total setup, with all the headaches and possible problems that come with it. Oh.. yeah... and this way I don't need fencing mechanisms (which are a requirement for a proper cluster). Yay!

Challenges before I even started the real setup

1. I wanted to test this setup on a single test PC first. OK, no problem... I loaded CentOS on a PC, but then I ran into nested virtualization on CentOS 6.
Once I figured that out, it was a joy though! Works like a charm to prove hypervisor functionality and experiment with virsh.
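For reference, enabling nested virtualization on CentOS 6 with an Intel CPU comes down to a single module option. This is a sketch of what worked for me conceptually; the config file name is just a convention, and on AMD CPUs the module is kvm-amd (where nesting is enabled by default):

```shell
# /etc/modprobe.d/kvm-intel.conf -- enable nested VMX on the test PC
options kvm_intel nested=1
```

After reloading the module (`modprobe -r kvm_intel && modprobe kvm_intel`), you can verify the setting with `cat /sys/module/kvm_intel/parameters/nested`, which should report Y (or 1). With that in place, the guest hypervisor sees the virtualization CPU flags and can run its own KVM guests.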
2. After I was happy with my test environment, it was time to go bare metal.
But the storage controller confronted me with the next puzzle to solve: installing CentOS 6.3 on Adaptec RAID ... to be continued ...
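For the record, the two "warm standby" operations described under Goal map onto plain virsh invocations; the hostname (node2) and guest name (vm1) below are made up for illustration, and the qemu+ssh:// transport assumes root SSH access between the hypervisors:

```shell
# Planned maintenance: live-migrate a running guest to the other node.
virsh migrate --live vm1 qemu+ssh://node2/system

# Total failure of the first node: manually start the guest on the
# surviving node (possible because the disk image is synchronized
# onto both machines' direct-attached storage).
virsh start vm1
```

No cluster manager is involved in either step, which is exactly the point of this setup: the failover decision stays with the admin, so no fencing is needed.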