While in Toronto recently, we rebuilt our terminal server cluster to take advantage of a new VMware host server. Previously we had two physical servers in our TS cluster, linked up with Microsoft's NLB (Network Load Balancing). We provisioned three new VMs (the new host has plenty of headroom, so we can pass that capacity on to our users) and set about configuring the NLB service to cluster them. We quickly ran into some odd behaviour, so I'm going to list a couple of things to be aware of.
- NLB can run in unicast or multicast mode. Each method has drawbacks and benefits, but if you're running VMs then you'll essentially HAVE to use multicast. This is due to the way unicast load balancing works: NLB overwrites each adapter's MAC address with a single shared virtual MAC, and counts on the switch flooding that traffic to every node. When the adapters themselves are virtual, the hypervisor's virtual switch knows exactly which MAC belongs to which VM, so that trick breaks down and traffic doesn't get distributed properly. Multicast mode leaves each adapter's own MAC alone and adds a shared multicast MAC on top, which lets you get around the problem.
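If your nodes are on Server 2008 R2 or later, you can check and flip the mode from PowerShell with the NetworkLoadBalancingClusters module. A rough sketch; the interface name "LAN" is an assumption, so substitute whatever your NLB-bound adapter is called:

```powershell
# Load the NLB cmdlets (they ship with the NLB feature)
Import-Module NetworkLoadBalancingClusters

# Check the cluster's current operation mode on this node
Get-NlbCluster -InterfaceName "LAN" | Select-Object Name, OperationMode

# Switch the cluster over to multicast mode
Get-NlbCluster -InterfaceName "LAN" | Set-NlbCluster -OperationMode Multicast
```

On older boxes the same change can be made through the Network Load Balancing Manager GUI under the cluster's properties.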
- Changing between unicast and multicast can cause all manner of problems. If your cluster has been set up for unicast and you want to change it to multicast, make sure to remove the cluster IP (the one virtual IP that all members of the cluster share) from each node's IP address mappings BEFORE you change to multicast mode. Otherwise you'll get IP conflicts and your network adapters will disable themselves.
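On each node, pulling the shared cluster IP off the adapter first looks something like this with netsh (the adapter name and IP here are made up for the example; on newer Windows versions the context is `interface ipv4`):

```
rem Show the IPs currently bound to the NLB adapter
netsh interface ip show config "Local Area Connection"

rem Remove the shared cluster IP before flipping the mode
netsh interface ip delete address "Local Area Connection" addr=192.168.1.50
```

Once every node has dropped the shared IP, change the mode, then add the cluster IP back.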
- When using multicast, you may run into problems getting your terminal services cluster onto the internet. In multicast mode the cluster IP resolves to a multicast MAC address, and many firewalls and routers refuse to learn an ARP entry that maps a unicast IP to a multicast MAC. We use pfSense, and it does not handle this traffic. Clients who try to connect from outside the LAN to the terminal server cluster behind the firewall don't get anywhere; the connection just times out, because the firewall drops the packets addressed to the cluster's multicast MAC. To get around this, we configured the firewall to send non-LAN clients to one specific member of the cluster. It's not ideal, but it does the trick until pfSense bakes in support for this kind of traffic.
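In pfSense this is just a port forward set up in the GUI (Firewall > NAT), but the underlying pf rule it generates amounts to something like the following; the interface name and addresses are invented for illustration:

```
# Forward inbound RDP (TCP 3389) arriving on the WAN straight to one
# cluster node's real IP, bypassing the cluster's multicast address
rdr on em0 proto tcp from any to (em0) port 3389 -> 192.168.1.11 port 3389
```

The obvious trade-off is that external clients get no load balancing and no failover: if that one node goes down, outside users are cut off until the forward is pointed at another member.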