Friday, August 3, 2012

IT At the LHC — Managing a Petabyte of Data Per Second

VMware is pretty widely recognized as the king of virtualization -- at least so long as you aren't concerned with money. Its overhead is far smaller than its competitors', especially when dealing with huge numbers of connections, and it simply has more features than they do.

Which doesn't mean those features are implemented well.

Not so long ago, I built an automated QA platform on top of Qumranet's KVM. Partway through the project, my employer was bought by Dell, a VMware licensee. As such, we ended up putting software through automated testing on VMware, manual testing on Xen (legacy environment, pre-acquisition), and deployment to a mix of real hardware and VMware.

In terms of accurate hardware implementation, KVM kicked the crap out of what VMware (ESX) shipped with at the time. We had software break because VMware didn't implement some very common SCSI mode pages (which the real hardware and QEMU both did), we had software break because of funkiness in their PXE implementation, and we otherwise just plain had software *break*. I sometimes hit bugs in the QEMU layer KVM uses for hardware emulation, but when those happened, I could fix them myself half the time, and get good support from the dev team and mailing list otherwise. With VMware, I just had to wait and hope that they'd eventually get around to it in some future release.
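For readers who haven't run into "mode pages": a guest OS discovers a disk's capabilities by sending a SCSI MODE SENSE command naming a page code, and software breaks when a hypervisor's virtual disk never returns the page it asked for. Here's a minimal sketch of that exchange -- the function names are my own, and the byte layout follows the SCSI Primary Commands spec, not any particular hypervisor:

```python
def mode_sense_10_cdb(page_code, alloc_len=252):
    """Build a 10-byte MODE SENSE(10) CDB requesting one mode page."""
    cdb = bytearray(10)
    cdb[0] = 0x5A                     # MODE SENSE(10) opcode
    cdb[2] = page_code & 0x3F         # PC = current values, page code in low 6 bits
    cdb[7] = (alloc_len >> 8) & 0xFF  # allocation length, MSB
    cdb[8] = alloc_len & 0xFF         # allocation length, LSB
    return bytes(cdb)

def find_mode_page(data, page_code):
    """Scan a MODE SENSE(10) response for the requested page.

    Returns the page's bytes, or None when the device (or the
    hypervisor emulating it) never reported that page -- the
    failure mode described above.
    """
    if len(data) < 8:                 # 8-byte mode parameter header comes first
        return None
    block_desc_len = (data[6] << 8) | data[7]
    i = 8 + block_desc_len            # pages start after any block descriptors
    while i + 2 <= len(data):
        code = data[i] & 0x3F
        page_len = data[i + 1] + 2    # page-length byte excludes the first 2 bytes
        if code == page_code:
            return bytes(data[i:i + page_len])
        i += page_len                 # skip to the next page
    return None
```

In the real world the CDB goes out through an ioctl (or the hypervisor's emulation layer) rather than being parsed by hand like this; the point is just that a guest asking for, say, the Caching page (0x08) gets back either the page or nothing, and plenty of storage software treats "nothing" as a hard error.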

"King of virtualization"? Bah.

Source: http://rss.slashdot.org/~r/Slashdot/slashdotScience/~3/dS3uT_R34eM/it-at-the-lhc-managing-a-petabyte-of-data-per-second

