Our new VMware design relies heavily on Cisco's OTV: four blades in each datacenter run the same cluster of virtual machines. We had it tested and working with the secondary DC moved to its new location, but over the weekend we fired up the new primary datacenter and moved the Nexus 7Ks to it, while keeping the 6509s in our old primary DC. After that, no VC-to-ESX communication worked.
Now, last week we discovered that OTV adds packet overhead to communications (we knew it did, but didn't realize the repercussions). VMware's secure communications to the VC server are already close to the default maximum frame size (1500 bytes). When we tried to add a host to the cluster while the VC was connected to a VLAN carried over OTV, the host would send its SHA thumbprint info, but the communication would time out after that. That's because OTV adds roughly 42 bytes of encapsulation. Ordinary pings still worked, but using the size option (-s or -l, depending on the client) we found that pings with a 1430-byte payload worked, and 1431-byte payloads didn't.
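Working backward from those ping results makes the overhead visible; a quick sketch of the arithmetic (the ping "size" is the ICMP payload, so the on-wire packet adds 8 bytes of ICMP header and 20 bytes of IP header):

```python
# Infer the OTV encapsulation overhead from the ping tests described above.
DEFAULT_MTU = 1500
ICMP_HEADER = 8
IP_HEADER = 20

largest_working_payload = 1430  # largest ping -s / -l size that succeeded
largest_working_packet = largest_working_payload + ICMP_HEADER + IP_HEADER
otv_overhead = DEFAULT_MTU - largest_working_packet

print(largest_working_packet)  # 1458
print(otv_overhead)            # 42 bytes consumed by the OTV encapsulation
```

That 42-byte figure matches the outer Ethernet header (14), outer IP header (20), and OTV shim (8) that the encapsulation adds.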
So after discovering this we played around with resizing the MTU on the VMware hosts and the VC, but decided instead that all the switches should have their MTU raised. The network team fixed the MTU on the 7Ks, but the 6509s will throw OSPF adjacency errors if the MTU isn't the same on all the switches. That means a big outage, so we're scheduling one.
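The switch-side fix is a per-interface MTU change along the OTV path; a rough sketch of what that looks like (interface names and the 9216 value are illustrative, not our actual config):

```
! Sketch only -- interface names and values are illustrative.
! NX-OS (Nexus 7K): raise the MTU on interfaces in the OTV transport path
interface Ethernet1/1
  mtu 9216
!
! IOS (6509): the equivalent per-interface command
interface TenGigabitEthernet1/1
  mtu 9216
```

Since OSPF neighbors compare interface MTU during database exchange, both ends of each adjacency have to change together, which is why this forces an outage window.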
So, why did it work during testing? Because before the move the 7Ks could talk directly to each other without going through a router (the 6509), and afterwards they couldn't. D'oh.
So, why then couldn't the VC server, which was hosted in the backup DC, communicate even with the ESX host it resided on? Because of OTV domain ownership. The 7Ks in the primary datacenter own all the OTV VLANs, and because the 7Ks could no longer talk to each other directly, the OTV VLANs in the backup DC are broken until the 6509 reboots. Big d'oh.
8 comments:
You can use TCP MSS adjustment instead of changing the MTU to avoid the OSPF issues.
You can use tcp adjust-mss to avoid changing the MTU, and hence the OSPF issues.
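The suggestion in the two comments above can be sketched like this (interface name and value are illustrative; the MSS must leave room for the 40 bytes of TCP/IP headers plus the OTV overhead, so with 42 bytes of encapsulation the ceiling is 1500 - 42 - 40 = 1418):

```
! Sketch only -- interface name and MSS value are illustrative.
interface Vlan100
  ip tcp adjust-mss 1418
```

Note this clamps the MSS only for TCP sessions crossing that interface; non-TCP traffic (like the large pings in the post) is unaffected.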
We're having the same issue now. Do you have to set the MTU on all the VLANs and L3 interfaces?
Do you also have to enable it on the ESX boxes? I can't seem to set the MTU on the port where the ESX box is connected. Not sure if it's a FEX limitation.
Running into the same issue with OTV between two sites connected by 7Ks. We tried dropping the MTU at the vSwitch level, but still had problems. We ended up having to change the MTU on each guest VM. Any thoughts on why this might be? Shouldn't changing the vSwitch to 1400 have avoided us having to do that on each VM?
You could tell OSPF to ignore the MTU mismatch... no need for an outage.
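Roughly what that comment is suggesting (interface name illustrative; note this only suppresses OSPF's MTU check during database exchange, it doesn't fix the underlying mismatch, and it's typically configured on both neighbors):

```
! Sketch only -- interface name is illustrative.
interface Vlan100
  ip ospf mtu-ignore
```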