With part one of my whole home computing make-over complete, it’s time to finish up the lab!  There are tons of other great blogs out there that talk about home labs (like this one, and this one, and this one, and this one, and this one…), and I definitely recommend you go through and pick what makes sense for you.  There are definitely wrong ways to build a vSphere lab, but there are also many, many right ways to build one.  Don’t get hung up on host count, NIC density, switch models and case types; just find something that fits into your budget, fits into your environment and does what you need it to do.

In my lab (as you may remember from this post) there are a total of five hosts.  One host does nothing but management functions (vCenter, AD, etc…) and the other four are completely configurable.

In my case, I built a list of use cases that I wanted to support:

  1. I wanted to stay away from virtualized hosts, if possible.  This is 100% a personal preference, because when I’m buzzing around in the lab I don’t want to worry about how many layers of nesting I’m in.  There are still situations when I’ll use them, but one or two physical hosts wasn’t going to cut it for me.
  2. I wanted the hosts to be as close to 100% supported on the VMware HCL as possible.  I hate troubleshooting some strange performance issue only to find out the hardware isn’t supported by VMware.  I want to get the NICs and CPUs as right as I can up-front.
  3. I want to be able to simulate multiple sites.  Whether it’s SRM, storage replication, application testing or what have you, being able to simulate real-life network conditions was a must for me. If you want to simulate WAN conditions in the lab, there are a couple of tools out there.  WANem has both a bootable ISO version as well as a virtual appliance, and both work pretty well.  I’ve used both (I have a couple of old beater Dell PE650 servers with dual NICs that work great for this); if there’s any interest, maybe I can put up a blog post on it…
  4. I wanted to be able to use VLANs to segment the different setups in the lab, but still be able to route between them. This was especially critical for the vCloud Director and multi-site labs (SRM, RecoverPoint, etc…), but in a classic case of wanting to have my cake and eat it too, I needed something that was quiet and that I could manage with my existing Cisco skill-set.
  5. I needed to be able to support multiple hypervisors and switch the lab hosts from one to another quickly.
  6. The whole setup needed to be small enough to fit on an existing shelf in my office, it needed to run off a single UPS, and it needed to be quiet/cool enough to not get my wife upset at me. :-)

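Since point 3 comes up a lot: WANem is essentially a front-end for the Linux netem queueing discipline, so you can get similar results on any spare Linux box with two NICs sitting between your "sites". Here’s a rough sketch; the interface names and the delay/loss/rate numbers are made-up examples (not my actual setup), and it needs root plus the bridge-utils package:

```shell
# Bridge the two NICs so the box sits transparently between the "sites"
brctl addbr wan0
brctl addif wan0 eth0
brctl addif wan0 eth1
ip link set wan0 up

# Impose WAN-like conditions on traffic leaving eth1:
# 80ms delay with 10ms jitter and 0.5% packet loss...
tc qdisc add dev eth1 root handle 1:0 netem delay 80ms 10ms loss 0.5%

# ...then rate-limit the same path to something T1/DSL-ish
tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 10mbit buffer 1600 limit 3000
```

Apply the same qdiscs to eth0 if you want the conditions in both directions; netem only shapes egress traffic on the interface it’s attached to.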
So the hardware order was a go, and if you want to see that again here’s the link to the NewEgg.com shopping cart. I went over the rationale for the hardware in my previous post, so we’ll skip to the actual Lab build process.

First, I love these cases.  They are small, they look good, they have two USB ports on the front and they are easy to work with internally, which isn’t always the case with micro-ATX cases.  The edges are nicely rounded, so no blood was shed, and the drive cage at the front disengages and slides out to make everything easy to get to.  There’s also enough slack in the cables that I was able to do a decent job of cleaning things up.  Finally, the BIOS has a “green” setting which lets you turn off the annoying LED lights on the board, so once it’s up it’s completely dark.  The single 80mm fan on the side is OK, although it’s a little loud.  If I can find a good deal on some silent fans I might replace these and see if it helps any.  Honestly, one of them running isn’t a big deal, but five of them side-by-side is louder than I’d like.  Overall, I was very impressed with the cases and the ease of construction.

In fact, after building the first one and figuring out the cabling between the case and the motherboard, I decided to see if the kids wanted to help.  My little boy is 4 and his sister is 3, so there was definitely the potential for disaster, but they did awesome.  Both built a whole PC from scratch (mount the motherboard, plug in the power cables, seat the CPU and RAM), with Dad helping with the CPU cooler spring mount and the little case cables.  I was very impressed, and both got to write their names inside the case so everyone knows they built it.  My daughter was particularly excited about this and asked if I could take a picture of her!  I’ll have to start teaching her how to deal with stupid IT boys…

The switches are sweet.  Completely fan-less, able to do ACLs and layer-3 routing, and with a Cisco logo on them for less than $250, they may be the best lab switches I’ve ever had.  The version of “IOS” that they run is…ummm…different, but the web interface is actually very, very good, so that balances things out.
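To give a feel for that "different" CLI, here’s a hedged sketch of basic inter-VLAN routing on Cisco’s small-business switch CLI. The VLAN IDs and addresses are invented for illustration, and the exact commands vary by model and firmware, so verify against your switch’s admin guide before trusting any of this:

```
! Put the switch into layer-3 (router) mode first; on these boxes
! that typically wipes the config and reboots, so do it before anything else.
set system mode router

! Create two VLANs and give each a routed interface,
! so hosts in 10 and 20 can reach each other via the switch
vlan database
 vlan 10,20
exit
interface vlan 10
 ip address 192.168.10.1 255.255.255.0
exit
interface vlan 20
 ip address 192.168.20.1 255.255.255.0
exit
```

Point each lab host’s default gateway at the VLAN interface for its segment and the switch handles the routing, no external router required.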

The NICs have also been perfect.  I was worried about spending more for the dual-port Intel cards, but after hearing about the issues some of my friends have had, I’m glad I did.  They have been 100% solid from day one, support wake-on-LAN and haven’t given me a bit of trouble.  I like the dual-port card as well, since there aren’t a ton of available slots on the motherboard.

After putting everything together on the rack and cleaning up the cabling, I have to say I’m pretty happy.  Here are a couple of pictures so you can see how it looks, and then we’ll talk about the VMware side of things. The rack is one I’ve had forever, and it fits perfectly under the desk in my downstairs office.  It has three shelves, but I’ve removed the middle one to be able to fit all of the hosts in on their sides.  The single fan on the side of each host works great here! A liberal application of zip ties and everything is in place.  On top, you can see the lab switches, and then the firewall, internal LAN switch and one of the wireless access points used in the house.


The only real hole in the lab right now is the storage. I have an old ReadyNAS NV+ array that doesn’t do iSCSI and that gets easily crushed by too many VMs. I’m going to have to figure something out there, but I’ve been looking longingly at the Synology DS1511+ as a replacement. We’ll have to see how it goes.

If you looked at the pictures, you may be asking what the USB keys are doing plugged into the front of the hosts.  That was my answer to being able to quickly switch between hypervisors!  I have figured out how to boot almost every hypervisor I need from USB, so if I want to switch from one version to another, I just reboot the host and swap out the USB key!  I have all the standards, including vSphere 4.1, XenServer, Xen 4.1.1 and Microsoft Hyper-V Server.  I’ve also got some new VMware goodness in there, and some stuff I can’t even talk about, but having everything on USB makes it super convenient.  Want to see how different hypervisors work together?  Want to split the four hosts into two sites?  Now it’s easy!  There’s more than enough horsepower to go around, so I can do what I want to do.
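If you want to try the USB-boot trick yourself with ESXi 4.x, one widely documented approach is to pull the embedded disk image out of the installer ISO and write it straight to the key. A sketch, assuming a Linux box; the ISO and image file names here are from memory, so verify them against your actual media, and triple-check the of= device, because dd will happily destroy the wrong disk:

```shell
# Mount the ESXi 4.x installable ISO and locate the embedded disk image
mkdir -p /mnt/esxi
mount -o loop VMware-VMvisor-Installer-4.1.0.iso /mnt/esxi

# Decompress the image straight onto the USB key
# (this destroys everything already on the key!)
bzcat /mnt/esxi/imagedd.bz2 | dd of=/dev/sdX bs=1M

umount /mnt/esxi
```

Keys as small as 1GB work for ESXi; Hyper-V Server wants considerably more room, as one of the comments below notes.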

So now we’ve gone through the home PC and the home lab, so what else is there?  Oh boy, now it gets crazy.  I don’t want to give too much away, but the next post in this series will detail… the offsite lab?!? Stay tuned…



5 Responses to PCs and Home Labs and Data Centers, Oh My… (Part 2)

  1. Shaunguthrie says:

    Very cool and very clean. I like that. Wish I was at that stage in my career again to build something like this. As I said before, when we move I’ll set something up in the new place and have it for a home system and tinkering.

  2. The good thing about the lab is that it doesn’t have to be big, or cutting edge. You can put something together on your laptop, if you want. Or buy one host and build up from there. When I was asking how to justify the cost of the home lab earlier this year, Chad Sakac told me that everything he’d ever spent on lab gear had been a good investment, and I think he’s right. You don’t need all of this to get started, just get what you need!

  3. I think I’m going to start with two of these hosts for my lab, and possibly just one of the Cisco switches. Do you think I can get away without having a dedicated management server, but still be able to test clustering on all three hypervisors? If not, I may go for three, since my computer donating model leaves me without old machines to scavenge.
    Also, I’d read an MSDN blog about making a Hyper-V R2 USB key and it recommended 8GB or larger. If I’m going to get an 8GB key for that, I may just pick up 3 at that size, even if Xen & ESXi don’t require that much room.
    The main pain point for me is the NICs, although $160 isn’t bad for dual-port NICs from what I could see. I think your effort to stay completely within the HCL is a smart thing.
    With just 2, maybe 3 hosts, do you think I’ll be OK with just a single Cisco switch for now?
    Great post – really appreciate the perspective. Oh, and I think the pic of your daughter is helping me convince the CFO, since she thinks it is adorable.

  4. To answer my own question about doing without a dedicated management server, after waking up I see the main benefit of that being the ability to maintain the management VMs independent of which hypervisor I’m using on the other nodes. Without that, I imagine I would need to keep a separate mgmt environment running under each hypervisor on at least one of the cluster nodes.

  5. You are exactly right, Mike. I have AD, all of my vCenter instances, monitoring and the like inside the single-host management cluster. That way everything else can leverage those apps without me having to rebuild/redeploy every time. For some of the lab setups, like multi-tenant vCD, I’ll have separate AD instances, but there’s still one to rule them all.