|TOPIC REVIEW
||Posted - 03/11/2012 : 4:30:56 PM
Hey all --
Playing with Win 8's NIC Teaming feature. I've not worked with teamed NICs, so I have no idea what these guys are talking about. Any chance anyone's got an insight?
There's a "Teaming Mode" with options "switch independent," "static teaming," and "LACP." I'm guessing that has to do with the new MAC address that I imagine gets created?
Then there's "Load distribution mode," and the options are "Address hash" and "Hyper-V Port." I'm guessing that says to let the Hyper-V virtual switch do the load balancing? Or does it only make sense on a VM with virtual NICs?
Any insights would be very much appreciated. Thanks!
|20 LATEST REPLIES (Newest First)
||Posted - 03/18/2012 : 11:26:55 PM
Yes James, leaving aside what you haven't told us about where the traffic goes once it hits those two switches ;-)
I have moved two of our production Hyper-V hosts to the Windows Server 8 beta and it is working great (I'm not completely nuts - each of them has a partner machine still running 2008 R2 - these are relatively small offices with redundancy provided by one DC and one file server on each Hyper-V host). One is a Dell PowerEdge 2900 (Broadcom NICs) and the other is an HP DL380 G5 (Intel NICs). I'm using switch-independent teaming with one adapter set as standby (one gigabit NIC is plenty for a DC and file server in an office of fewer than 50 users).
Our production clusters are where it will be nice to no longer have to worry about tightly matching hardware (although of course it still pays to do so wherever possible). Currently we run a couple of 2-node clusters on PowerEdge T710s - so four built-in Broadcom NICs. In 2008 R2 teaming is of course still left up to the vendors, and Broadcom's Hyper-V support was lagging behind Intel's when we first built the clusters - so we put two quad-port Intel PCIe NICs in each node (eBay FTW!).
One Broadcom NIC for management, one for CSV (crossover cable), one for Live Migration (crossover cable), and one spare (this could be teamed with the first for HA on the management interface, but we haven't done that, as we try to avoid having more than one third-party NIC teaming package on the machines; another place where inbuilt teaming is great - still matching hardware within a team, but no extra bloatware). The two Intel NICs each have one port teamed with one port on the other to create four distinct physical networks with HA (untrust, trust, DMZ, and vault in our case). This keeps complexity down for my staff, as the VLANning and trunking only happens on the Cisco switches. Intel offers a virtualization-aware teaming mode that distributes VMs across the available ports, and it has worked fine for us.
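For reference, a minimal PowerShell sketch of that switch-independent team with a standby member, using the inbox NetLbfo cmdlets (the adapter names are placeholders for whatever Get-NetAdapter reports on the host):
    # Create a switch-independent team from two adapters (placeholder names)
    New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
    # Mark the second adapter as standby so it only carries traffic if the active member fails
    Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Standby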
||Posted - 03/16/2012 : 06:13:14 AM
Can I bump this and ask Mark's opinion, please?
I think this option is for High Availability.
I have four NICs and I team them into two pairs.
One pair of NICs handles Management and Networking (I know that's not ideal, but I'm trying to keep my example simple)
One pair of NICs handles iSCSI traffic to my central storage.
I have four switches and they are stacked into two pairs.
One pair handles Management and Networking.
One pair handles iSCSI Traffic.
So, if one switch fails the other switch keeps the network alive.
I have my Windows 8 networking set to switch independent, with one Ethernet link from each NIC pair going into a port on each of the four switches.
Do I have High Availability in my network design?
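If it helps to make the design concrete, here is a rough PowerShell sketch of those two teams as I read your layout (team and adapter names are placeholders, not anything prescribed):
    # Management/networking team: one member cabled to each switch in the first stacked pair
    New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
    # iSCSI team: one member cabled to each switch in the second stacked pair
    New-NetLbfoTeam -Name "iSCSITeam" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent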
||Posted - 03/15/2012 : 09:38:08 AM
EDIT - Sorry - this is the same link as Michael's above.
It took me some digging around, but this is how Microsoft are defining it.
Network Adapter Teaming Configurations
There are two basic sets of algorithms that are used for network adapter teaming:
Algorithms that require the switch to participate in the teaming, also known as switch-dependent modes. These algorithms usually require all the network adapters of the team to be connected to the same switch.
Algorithms that do not require the switch to participate in the teaming, also referred to as switch-independent modes. Because the switch does not know that the network adapter is part of a team, the team network adapters can be connected to different switches. Switch-independent modes do not require that the team members connect to different switches; they merely make it possible.
There are two common choices for switch-dependent modes of network adapter teaming:
Generic or static teaming (IEEE 802.3ad draft v1). This mode requires configuration on the switch and the computer to identify which links form the team. Because this is a statically configured solution, no additional protocol assists the switch and the computer to identify incorrectly plugged cables or other errors that could cause the team to fail. This mode is typically supported by server-class switches.
Dynamic teaming (IEEE 802.1ax, LACP). IEEE 802.1ax uses the Link Aggregation Control Protocol (LACP) to dynamically identify links between the computer and a specific switch. This enables the automatic creation of a team and, in theory, the expansion and reduction of a team simply by the transmission or receipt of LACP from the peer network adapter. Typical server-class switches support IEEE 802.1ax, but most switches require manual administration to enable LACP on the port.
IEEE 802.1ax is also commonly known as IEEE 802.3ad because it was developed by the IEEE 802.3ad committee before being published as IEEE 802.1ax.
Both modes should result in inbound and outbound traffic approaching the practical limits of the aggregated bandwidth because the pool of links in the team functions as a single pipeline.
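For what it's worth, those choices map directly onto the -TeamingMode parameter of the new inbox teaming cmdlets. A rough sketch, with placeholder adapter names, showing the same pair of adapters teamed in each of the three modes (you would pick just one):
    # Switch-independent: no switch configuration needed; members may even connect to different switches
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
    # Static (802.3ad draft v1): the matching switch ports must also be statically aggregated
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Static
    # LACP (802.1ax): the switch ports must have LACP enabled so the aggregation negotiates itself
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp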
||Posted - 03/14/2012 : 6:16:24 PM
Oh, and as far as LACP vs. static goes, those are both configurations done on the switch end. LACP is preferred if the switch supports it; it is "smarter" about balancing than static teaming, particularly when links are added or removed.
||Posted - 03/14/2012 : 6:12:49 PM
Here's a discussion of the concept:
I am not 100% sure how switch independent works, but from reading that I think it means your virtual machines on a particular virtual switch would be divided evenly between the two physical NICs. So half your VMs would use one physical NIC and the other half would use the other. If one NIC went down, the VMs on that NIC would be reconnected to the other physical NIC. The advantage is that you could plug into two switches and still balance the load; however, any particular VM could only use one physical NIC at a time. With static teaming or LACP, both the server and the switch know about the teaming and are able to combine the bandwidth, so all VMs share the combined bandwidth of all links.
So if you had 4 1Gbps links, with static or LACP, a VM could get up to 4Gbps bandwidth, whereas with switch independent, 1 VM could only get 1Gbps. At least that's my guess <g>
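As a rough sketch of how that guess translates to the cmdlets (team and switch names are placeholders): the load distribution choice is the -LoadBalancingAlgorithm parameter, and Hyper-V Port pins each virtual switch port, i.e. each VM's virtual NIC, to one team member:
    # Switch-independent team that spreads VMs across members per Hyper-V switch port
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
    # Bind the external virtual switch to the team interface instead of a single physical NIC
    # (the team interface name defaults to the team name, hence "VMTeam")
    New-VMSwitch -Name "External" -NetAdapterName "VMTeam" -AllowManagementOS $false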
||Posted - 03/14/2012 : 5:25:59 PM
Apologies, Jetze, I only saw one slide and it didn't really answer my question.
For "Teaming Mode," "LAPC" clearly means -- as Michael said -- that you've got a switch with a particular set of abilities. Is "switch independent" slower but always works? And what's "static?" I guess I don't know what kind of problem they're trying to solve here.
Also, I still don't have any clues on what the various load balancing options are... any help would be much appreciated. Thanks!
||Posted - 03/14/2012 : 3:30:42 PM
Yeah, the virtual switch would be for the guests. That is the most important one; you are correct.
And yeah I forgot about iSCSI, which would be another pair of NICs.
Still, having redundancy for the other networks, while not necessarily required, would be important for a large environment.
||Posted - 03/14/2012 : 12:37:53 PM
Oh, and do not forget iSCSI; you do not want that mixed up with your guest network.
||Posted - 03/14/2012 : 12:36:50 PM
And where do you connect the guests, Michael? The virtual switch, I presume?
That's the NIC I'd prefer to have redundant. The rest is just management and failover traffic; you do not need those to keep the Hyper-V host working.
||Posted - 03/14/2012 : 12:06:14 PM
I'd use "server" NICs regardless, but I see it as a big plus combining two different NICs in a team.
Four NIC ports could be used this way (someone jump in if I don't have this right!):
1 - Hyper-V host management
2 - Virtual Switch
3 - Cluster Heartbeat
4 - CSV redirection / Live Migration
With a 4-port add-in NIC, you could team all four of those between onboard and add-in, and they would all be highly available even in the event of an issue with either the onboard NIC or the add-in NIC. Granted, NICs rarely have a problem, but it is still nice to have the added redundancy.
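A minimal sketch of one of those pairings, assuming the ports show up as "Onboard2" and "AddIn2" (placeholder names), teamed for the virtual switch role; the other three roles would follow the same pattern:
    # Team one onboard port with one add-in port so the virtual switch survives either card failing
    New-NetLbfoTeam -Name "VSwitchTeam" -TeamMembers "Onboard2","AddIn2" -TeamingMode SwitchIndependent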
||Posted - 03/14/2012 : 06:02:45 AM
You're right on the servers, my mistake. Anyway, with 4 NICs you still need expansion for an HA environment.
||Posted - 03/14/2012 : 05:57:15 AM
Both the DL360 and DL380 G7 have four NICs by default.
It is arguably the most important component in production virtual environments - the Network Card.
It is a nice feature for Hyper-V, but I am still not sold on the idea that if it works with Windows it is supported and will work!
||Posted - 03/14/2012 : 05:04:34 AM
Don't know about the 4+ NICs; the HP DL360 and DL380 have two NICs as standard, and the DL380 is the one I work with most.
And there is an HCL, but it is very simple: if it works with Windows, it works with Hyper-V. If the hardware is supported in Windows, it's supported in Hyper-V. This is a big advantage over VMware, because VMware doesn't support many well-known, enterprise-level A-brand devices, most noticeably built-in SATA controllers. I don't see the advantage in that approach; I see it as a limitation. Maybe this is because VMware doesn't build operating systems and Microsoft does.
||Posted - 03/14/2012 : 04:40:44 AM
Yes, but most Servers come with 4+ NICs built in these days.
It is a nice advantage for older hosts.
I still don't know if I am sold on Hyper-V being able to install on any hardware without an HCL.
I quite like VMware's HCL, as it gives reassurance that the hardware has been tested.
I do see the advantage for smaller shops, labs, and legacy hardware, but the cynic in me says you should be running your production virtual infrastructure on supported, server-class hardware.
||Posted - 03/14/2012 : 03:23:15 AM
Most of us would use enterprise-class hardware, but you're not required to. As long as the NIC has a driver, you can add it to your team. This is a great advantage, especially for production use. Now you can add the on-board NICs to the same team as the PCI expansion card NICs.
||Posted - 03/14/2012 : 03:14:02 AM
In a production environment - you would not want to do that though - right?
You want to know your NICs are server class and of the same type/model. Correct?
What is best practice?
||Posted - 03/12/2012 : 10:06:10 AM
That's right, James! Hyper-V has native support for NIC teaming now, just like that other product. :) You can mix NICs of different types and even from different vendors.
||Posted - 03/12/2012 : 09:57:32 AM
This .pdf discusses HP Proliant use of NIC Teaming with 2008.
Might be of help?
Aidan's document is big news to me. Does this mean I could team any pair of physical NICs in Windows 8? No need for an HCL or HW vendor support?
||Posted - 03/12/2012 : 09:52:32 AM
LACP is a switch standard (Link Aggregation Control Protocol) designed to negotiate multiple ethernet links into one. If the switch you are connecting to supports LACP, I would think that would be the best option to use.
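Once the switch ports have LACP enabled, you can sanity-check the negotiation from the Windows side. A quick sketch using the inbox cmdlets (the team name is a placeholder):
    # Confirm the teaming mode and overall status (should report Lacp / Up once negotiation succeeds)
    Get-NetLbfoTeam -Name "Team1" | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Status
    # Check that each member is active rather than faulted
    Get-NetLbfoTeamMember -Team "Team1" | Format-Table Name, AdministrativeMode, OperationalStatus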
||Posted - 03/12/2012 : 05:20:09 AM
You should ask Aidan; he gave a great presentation about Server 8 networking for the Dutch hyper-v.nu user group. Slides here: http://www.slideshare.net/hypervnu/windows-server-8-hyper-v-networking-aidan-finn