fryguypa

Archive for September, 2010 | Monthly archive page

Still here, just busy. . .

In Uncategorized on September 28, 2010 at 17:34

I am still here, just been a little busy for the past few days.  I took some time off from work to get some things done on the house, that is all.

I am planning to blog about 10Gbps cabling and the associated GBICs soon, as well as some more Nexus material in the coming month.  If there is anything that you would like me to blog about, please do not hesitate to suggest it – I am here to help you, and it gives me the opportunity to learn more as well.

Thanks,

Jeff


..and so the studying begins (again)

In Uncategorized on September 20, 2010 at 16:28

Argh, time to get studying again. Now that the Nexus installation is behind me, I need to get my butt in gear (thanks, Carl) and hit the books again. So, here we go again!

Perhaps I will blog some SP stuff as well – it would probably be good to start that soon.

Quote…

In Uncategorized on September 17, 2010 at 12:38

Saw a quote today that I wanted to share. Not sure who said it, but it is one to remember.

“Life’s not about the breaths we take, but the moments that take our breath away.”

What is the difference between the Nexus 7010 and 7018?

In Uncategorized on September 15, 2010 at 20:21

I was chatting with one of my old co-workers the other day and he mentioned that most people (Exec Level) do not always understand the difference between the Nexus 7010 and 7018 switches.  So, based on that chat conversation, I figured I would post some information on the similarities and the differences between the two.

First difference: size and weight.  The Nexus 7010 is 36.5″ (92.7 cm) tall and can weigh up to 516 lb (235 kg), whereas the Nexus 7018 is 43.5″ (110.5 cm) tall and can weigh up to 696 lb (316 kg) fully loaded.  That equates to the Nexus 7010 being about 21RU and the 7018 about 25RU.  Both are also the full depth of a standard 4-post rack – so you need to be aware of quite a bit when designing the space layout around them.  Below is an image from Cisco’s web site ( http://www.cisco.com/en/US/products/ps9402/index.html ) that shows the 7018 on the left and the 7010 on the right.

The next difference is slot capacity and performance.  The Nexus 7010 is a 10-slot chassis, but slots 5 and 6 are reserved for supervisors, so the real capacity is 8 line cards.  The 7018 is an 18-slot chassis with slots 9 and 10 reserved for supervisors, thus providing 16 available slots for line cards.  With the currently available supervisors and cards, the Nexus 7010 can handle 480 Mpps and the 7018 can support twice that, 960 Mpps.  In both the Nexus 7010 and 7018 chassis you can have up to 5 fabric modules installed, but the fabric modules differ between the two chassis: in the 7010 the line cards mount vertically and the fabric modules horizontally, whereas in the 7018 the line cards are horizontal and the fabric modules vertical.  The fabric module is the only card that I know of which is unique to each chassis.  All the other cards (supervisors, 1G and 10G line cards, etc.) are interchangeable between the two.
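If you want to see the slot layout from the CLI, “show module” makes the supervisor slot reservations obvious.  The output below is abbreviated and illustrative (the line card shown and its status are made up for the example), but the slot numbering matches a 7010 with supervisors in slots 5 and 6:

     N7K1# show module
     Mod  Ports  Module-Type                      Model              Status
     ---  -----  -------------------------------  -----------------  ----------
     1    32     10 Gbps Ethernet Module          N7K-M132XP-12      ok
     5    0      Supervisor module-1X             N7K-SUP1           active *
     6    0      Supervisor module-1X             N7K-SUP1           ha-standby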

When you look at the Nexus 7010, the line cards are vertical, so in order for a fabric card to “touch” all the line cards it has to be mounted horizontally.  In the 7018 the line cards are horizontal, so the fabric card needs to be vertical.  It is just how the backplane connections work in these chassis.  At least, that is my understanding of why they are different.

The Nexus 7018 fabric module (N7K-C7018-FAB-1) is pictured on the left and the 7010 fabric module (N7K-C7010-FAB-1) is on the right.

Like the fabric cards, the fan trays found on the back are unique to each switch.  The Nexus 7018 fan tray (N7K-C7018-FAN) is larger and contains more fans, as the airflow on that chassis is side-to-side, while the 7010 fan tray (N7K-C7010-FAN) is smaller, as that chassis airflow is front-to-back.  The 7018 fan tray is pictured below on the left and the 7010 fan tray is on the right.  When it comes to the Nexus 7018 airflow, Panduit actually makes a special rack just for this switch to accommodate the airflow in a cold aisle/hot aisle data center. More information can be found on Panduit’s website at http://www.panduit.com/groups/MPM-BR/documents/InstallationInstruction/CMSCONT_035895.pdf

The final difference that I am aware of is the power supplies.  Today there are three power supplies for the Nexus 7000 switches: two AC and one DC.   There is a 6.0 kW AC (N7K-AC-6.0KW), a 7.5 kW AC (N7K-AC-7.5KW), and finally a 6.0 kW DC (N7K-DC-6.0KW) power supply.  In the Nexus 7010 you can have up to three power supplies installed, and in the 7018 you can have up to four. With the 6.0 kW AC power supply, the cable is a standard C19 to a country-specific plug type.  On the 7.5 kW one, the cable is hard-wired to the power supply and must be ordered with the correct plug type.   When it comes to the DC power supply, there is also a DC Power Interface Unit that you will need in order to feed DC power to the power supply.
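As a side note, “show environment power” will show you what is installed and how much capacity each supply reports.  The output below is abbreviated and illustrative (the draw numbers are invented), but it shows how a 6.0 kW supply reports its capacity in watts:

     N7K1# show environment power
     Power Supply:
     PS  Model                Actual Output  Total Capacity  Status
     --  -------------------  -------------  --------------  ------
     1   N7K-AC-6.0KW               726 W          6000 W    Ok
     2   N7K-AC-6.0KW               744 W          6000 W    Ok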

Pictures of the power supplies are below, with the 6.0 kW AC on the left, the 7.5 kW AC on the right, and the DC one below.

As for the similarities, most everything else is the same – Supervisors, Line Cards, configurations, NX-OS, etc.

Hope that this post helps you understand that, barring a few hardware differences, these switches are the same.

About me…

In Uncategorized on September 14, 2010 at 15:06

Just a quick note: I have updated the About Fryguy page.  If you are interested (or bored), please feel free to check it out.

So, what are you waiting for?  Click that About Fryguy’s link under the FryGuy’s Blog title!

Cisco Data Center Architecture announcement today (Nexus 7000 and FEX?)

In Uncategorized on September 14, 2010 at 13:19

Just a quick post on something I noticed in today’s Cisco Data Center Architecture announcement. The slide below got my attention, and quick!  If you look at what I have circled, you will see a reference to a Nexus 2248TP FEX being supported on the Nexus 7000 switch.

When I checked the Cisco website ( http://www.cisco.com/en/US/products/ps10783/ ) for the Nexus 2248TP, it also referenced the Nexus 7000 as a parent switch.  I have heard rumors of this for quite some time, but had never seen anything official on it until today.  Perhaps I have had my head in the sand because of the Nexus install that I was working on.

Oh yeah, notice the Cisco 6513 there – 2 Tbps+ switching capacity?  Hmm… a new supervisor with 2 Tbps coming?

LACP Configuration and multi-chassis Etherchannel on Nexus 7000 with vPC, Part 2 of 2

In Nexus on September 13, 2010 at 19:47
This is the second part in a two-part post on etherchannel on the Nexus 7000.  In the first part I covered how to configure vPC on the Nexus 7000; here I will cover what it takes to get a remote switch to uplink to the Nexus 7000 core switches using vPC/multi-chassis etherchannel.

Here is a diagram depicting the layout that we are using.  For this part of the post, we will focus on the blue line that is connecting both Nexus switches to the 3750 stack.


On the Cisco 3750 switches (they are in a stack configuration of two switches) we need to configure the interfaces to be in a channel-group – for this example I am using channel-group 6 (the switch is actually named StackSwitch06). You will also notice that you configure the 3750 stack just as if it were connected to a single switch: one port-channel that consists of all the ports connected to both Nexus switches.

For this example we are using ports G1/0/1, G1/0/24, G2/0/1, and G2/0/24. One thing I want to mention: when you are thinking about your uplinks to your core switches, be aware of the switch ASIC layout.  I say this because I have seen many companies use ports 23 and 24 to uplink to a core switch. The problem with this is that:
 1) The same ASIC is probably controlling both ports, and if
    it goes bad your uplinks are gone and your switch is
    isolated.
 2) You have a better chance of oversubscribing the ASIC
    before the uplink when utilization is high on the channel.

Now, onto the configuration, first up the Cisco 3750s.
    interface GigabitEthernet1/0/1
     description [----[ Uplink to N7K1 - E9/10 ]----]
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 6 mode active
    interface GigabitEthernet1/0/24
     description [----[ Uplink to N7K2 - E9/10 ]----]
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 6 mode active
    interface GigabitEthernet2/0/1
     description [----[ Uplink to N7K1 - E10/10 ]----]
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 6 mode active
    interface GigabitEthernet2/0/24
     description [----[ Uplink to N7K2 - E10/10 ]----]
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 6 mode active
    
Once the interfaces are assigned to the channel-group, we can configure the etherchannel interface on the Cisco 3750s. Notice that there is no vPC information, nor anything else that indicates this port-channel is connected to two switches.
     interface Port-channel6
      switchport trunk encapsulation dot1q
      switchport mode trunk
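
Before moving on, it is worth verifying the bundle from the 3750 side with “show etherchannel summary”.  The output below is abbreviated and illustrative (legend trimmed) – once the Nexus side configured below is up, all four ports should show as bundled (P) in Po6:

     StackSwitch06#show etherchannel summary
     Flags:  D - down        P - bundled in port-channel
             [------ SNIP - Legend omitted! ------]
     Group  Port-channel  Protocol    Ports
     ------+-------------+-----------+---------------------------------------
     6      Po6(SU)         LACP      Gi1/0/1(P)  Gi1/0/24(P)  Gi2/0/1(P)
                                      Gi2/0/24(P)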


Now, on the Nexus side we need to do some configuration as well. Both Nexus switches are configured identically here, so there are no differences between the switch configs.
     interface Ethernet9/10
       description [----[ StackSwitch6-1 ]----]
       switchport
       switchport mode trunk
       channel-group 6 mode active
       no shutdown

     interface Ethernet10/10
       description [----[ StackSwitch6-2 ]----]
       switchport
       switchport mode trunk
       channel-group 6 mode active
       no shutdown

Now, when it comes to configuring the etherchannel on the Nexus switches, it is configured the same except for the addition of a vPC identifier. I recommend using the same number that you used for the port-channel for easy identification, but that is up to you.
   interface port-channel6
     description [----[ LACP EtherChannel for StackSwitch6 ]----]
     switchport
     switchport mode trunk
     vpc 6
Once you have it configured on the Nexus, make sure it is up and in the vPC correctly.

     N7K1# sh int port-channel 6  
     port-channel6 is up
     vPC Status: Up, vPC number: 6
     Hardware: Port-Channel, address: 5475.d04f.1165 (bia 5475.d04f.1165)  
     Description: [----[ LACP EtherChannel for StackSwitch6 ]----]
     Members in this channel: Eth9/10, Eth10/10  
     N7K1#
Once you have confirmed that all is working correctly, you can check the StackSwitch spanning-tree information:

     StackSwitch06#sh spanning-tree interface port-channel 6 
     Vlan             Role Sts Cost      Prio.Nbr Type
     ---------------- ---- --- --------- -------- --------------------------------
     VLAN0001         Root FWD 3         128.656  P2p
     VLAN0002         Root FWD 3         128.656  P2p
     VLAN0003         Root FWD 3         128.656  P2p
     VLAN0004         Root FWD 3         128.656  P2p
     VLAN0005         Root FWD 3         128.656  P2p
     StackSwitch06#
You will see that even though you are connected to two switches, the port-channel is seen as a single spanning-tree path to the root.
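
On the Nexus side, “show port-channel summary” gives the same at-a-glance view of the bundle; “show vpc consistency-parameters interface port-channel 6” is also worth a look, since a Type-1 parameter mismatch between the vPC peers will suspend the vPC.  The output below is abbreviated and illustrative:

     N7K1# sh port-channel summary
     Flags:  D - Down        P - Up in port-channel (members)
             [------ SNIP - Legend omitted! ------]
     Group Port-       Type     Protocol  Member Ports
           Channel
     --------------------------------------------------------------------
     6     Po6(SU)     Eth      LACP      Eth9/10(P)   Eth10/10(P)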

Dirty Chai… what a wonderful drink!

In Uncategorized on September 13, 2010 at 12:53

Well, just to mix this up a bit – decided to post on my new favorite drink from Starbucks.  To be honest, the credit goes to Jennifer (@jenniferlucille on Twitter) for turning me onto this drink.

This year at Cisco Live 2010,  Jennifer told me to try a Dirty Chai the next time I was at Starbucks, and I initially hesitated.  Took me about a month or so to finally order one – I was in an airport and saw a Starbucks – and I have no idea why I waited so long.  They are sooooo good.

So, what is a Dirty Chai?  Simply a Chai Latte with a shot of Espresso added (many people recommend ordering it with Soy – I have yet to try that).  It is funny how many places have not heard of it, but I will tell you this – they learn quick after a few times.

Next time you are at a Starbucks, go for it – you will not regret it!


LACP Configuration and multi-chassis Etherchannel on Nexus 7000 with vPC, Part 1 of 2

In Nexus on September 13, 2010 at 11:21
The other day I received a question on etherchannel and the Nexus 7000 – based on the question, I felt it would also be good to include the information here.

This will be a two-part post: the first part covers the Nexus configuration for vPC, and the second post will cover the multi-chassis etherchannel configuration on the 3750 as well as on the Nexus 7000 switches.

What are the benefits of multi-chassis (vPC) etherchannel? Basically, all the uplinks from your switches are in FORWARDING mode – nothing is blocking in your spanning-tree domain. What this means is that you have a loop-free topology in your data center and all links can be utilized.

Below is the diagram of the configuration that I will be showing here.  There will be a Layer 2 etherchannel vPC peer-link between Nexus 7010-1 and Nexus 7010-2 (orangish line), a Layer 3 etherchannel for the vPC keep-alive (red line), as well as a multi-chassis (vPC) etherchannel from a 3750 stack to Nexus 7010-1 and Nexus 7010-2 with all links in a single etherchannel bundle.



Configuration for both of the Nexus switches is the same except where noted.


Configuration for the Nexus switches
First thing to do is enable the vPC feature:
      feature vpc
 
Once you have enabled the vPC feature, you should create your keep-alive links. Here I create a port-channel via LACP over ports 9/1 and 10/1.  You will also notice that I have spread the channel over two line cards; this has been done to help ensure maximum redundancy.  If a card were to go bad, the other card would still be active in the port-channel.
     interface Ethernet9/1
       description [----[ vPC KeepAlive to CoreSwitch2 ]----]
       channel-group 101 mode active  ! Assign port to port-channel 101 via LACP
       no shutdown
     interface Ethernet10/1
       description [----[ vPC KeepAlive to CoreSwitch2 ]----]
       channel-group 101 mode active
       no shutdown

Now we can create the VRF for the keep-alive link.  I suggest using a dedicated VRF for security and sanity purposes.  This VRF will not participate in your global routing table, allowing for more stability and also preventing duplicate IP addresses in the network.
     vrf context VPC100_KA

Now we can create the Layer 3 interface on the port-channel and assign it to the new VRF, VPC100_KA.
     interface port-channel101
       description [----[ vPC Keep-Alive link between CoreSwitches ]----]
       vrf member VPC100_KA  ! Assign this interface into the appropriate VRF
       ip address 10.10.10.1/30  ! The other side of the link is .2/30
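
Before pointing the peer-keepalive at these addresses, it is worth a quick sanity check that the two switches can reach each other inside the keep-alive VRF.  A hedged, illustrative example (output abbreviated):

     N7K1# ping 10.10.10.2 vrf VPC100_KA
     PING 10.10.10.2 (10.10.10.2): 56 data bytes
     64 bytes from 10.10.10.2: icmp_seq=0 ttl=254 time=0.610 ms
     [------ SNIP - Output omitted! ------]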


Now you can configure the vPC peer links (orangish lines).  Since I am using 10G links for this connection, I have set the rate mode to dedicated, which prevents any chance of oversubscription on the 10G port.  It also disables the other three ports in the port group, so you need to keep that in mind when you are designing your deployment.
     interface Ethernet7/1
       description [-[ vPC Connection to Nexus 7010-2 - E7/1 ]-]
       switchport
       switchport mode trunk  ! Set the mode to trunk
       rate-mode dedicated force ! Force the rate-mode
       mtu 9216
       udld enable ! Since this is also fiber, enable UDLD
       channel-group 100 mode active ! Assign to port-channel 100
       no shutdown
     !
     interface Ethernet8/1
       description [-[ vPC Connection to Nexus 7010-2 - E8/1 ]-]
       switchport
       switchport mode trunk
       rate-mode dedicated force
       mtu 9216
       udld enable
       channel-group 100 mode active
       no shutdown
     !

Now to configure the port-channel as a vPC peer-link, as well as the vPC domain information.
     interface port-channel100
       description [-[ vPC Peer-Link between Nexus Switches ]-]
       switchport
       switchport mode trunk
       vpc peer-link ! Assign this port-channel as a vpc peer-link
       spanning-tree port type network
       mtu 9216
     !
     vpc domain 100
       role priority 16000  ! Hard-code switch 1 as the vPC primary; switch 2 was left at the default
       peer-keepalive destination 10.10.10.2 source 10.10.10.1 vrf VPC100_KA
       ! On the other switch, the source and destination IP addresses are reversed
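
To confirm that the role priority took effect, “show vpc role” shows the local priority alongside the elected role.  Abbreviated, illustrative output:

     N7K1# sh vpc role

     vPC Role status
     ----------------------------------------------------
     vPC role                        : primary
     vPC system-priority             : 32667
     vPC local role-priority         : 16000
     [------ SNIP - Output omitted! ------]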

Let's check the port-channel and make sure it is up with the appropriate members. As you can see from the output, Eth7/1 and Eth8/1 are members of the channel.

     N7K1# sh int port-channel 100
      port-channel100 is up
      [------ SNIP - Output omitted! ------]
      Members in this channel: Eth7/1, Eth8/1
     N7K1#

Also check the vPC and the vPC keep-alive link:
     N7K1# sh vpc
      Legend:
             (*) - local vPC is down, forwarding via vPC peer-link
      vPC domain id                        : 100 
      Peer status                          : peer adjacency formed ok      
      vPC keep-alive status                : peer is alive                 
      Configuration consistency status     : success 
      Type-2 consistency status            : success 
      vPC role                             : primary, operational secondary
      Number of vPCs configured            : 9   
      Peer Gateway                         : Disabled
      Dual-active excluded VLANs           : -
      vPC Peer-link status
      ---------------------------------------------------------------------
      id   Port   Status Active vlans    
      --   ----   ------ --------------------------------------------------
      1    Po100  up     1-224

     N7K1# sh vpc peer-keepalive

      vPC keep-alive status           : peer is alive                
      --Peer is alive for             : (1486816) seconds, (684) msec
      --Send status                   : Success
      --Last send at                  : 2010.09.11 12:38:36 872 ms
      --Sent on interface             : Po101
      --Receive status                : Success
      --Last receive at               : 2010.09.11 12:38:36 872 ms
      --Received on interface         : Po101
      --Last update from peer         : (0) seconds, (161) msec

      vPC Keep-alive parameters
       --Destination                   : 10.10.10.2
      --Keepalive interval            : 1000 msec
      --Keepalive timeout             : 5 seconds
      --Keepalive hold timeout        : 3 seconds
      --Keepalive vrf                 : VPC100_KA
      --Keepalive udp port            : 3200
      --Keepalive tos                 : 192
     N7K1#

As of now, both switches are connected via vPC.  

This concludes the first post; the second post will be up shortly and will focus on the Cisco 3750 configuration as well as the associated configs on the Nexus 7000 switches.

The week after the installation. . .

In Nexus, Uncategorized on September 6, 2010 at 17:59

Ok, the Nexus switches have been installed and running for over a week now with no further problems, and there has been no fallout that I need to address prior to this post.  Everyone at work took the change well and understood the issues that we ran into as well as how we addressed them.

Just to recap what we did and the thoughts around why…

– Location had two Cisco 6509 switches running Sup2/MSFC2 as well as 6548 line cards – running for over 8 years
– Switches had started to show line-card failures on a more regular basis, chalked up to the age of the equipment
– When line cards failed, spanning-tree loops were introduced, which had the ability to severely impact the site
– Recently installed a large VM environment in the location, with the understanding of DMZ requirements in the near future
– This is a data center location, so data center level hardware was required (10G capabilities and beyond)
– In a single night, removed both Cisco 6509 switches, reconnected about 250 servers, and moved to LACP etherchannel on all rack switches in STP forwarding mode
– Also built a temporary network to maintain customer traffic through the site

Now, these requirements might not scream Nexus 7000 hardware, but we do not change core data center hardware very often and wanted to install a switch that had more “future proofing” built in than other switches.  The Cisco 6509E chassis in VSS mode has many of these features, but Cisco is investing money in the Nexus line and we felt that this was the proper way to go.  Also, with a potential web presence imminent, the VDC and OTV capabilities of the Nexus are a perfect fit.

To be honest, the installation went really well.  We were able to remove both Cisco 6509 switches in about an hour (they were DC-powered, so an electrician was required) and get the new Nexus 7010s shoe-horned into their place.  The Nexus are some heavy beasts – these were north of 500 lbs each – so I highly recommend removing the power supplies if possible.  That said, the way Cisco has designed these, they rack easily.  Just like the Cisco 6500, the Nexus sits on a shelf for support and then gets screwed to the rack at the face.   We had the new network up and running, ready for cut-over, by around 5 AM – 5 hours after we started.

So, what problems did we encounter?  That is where the fun begins.  What is funny is that only one of the problems would I consider a network design issue; the others were the typical “oh, we did not know that” or “whoops, typo in the default gateway IP address” variety.

So, the first problem, which actually was a design issue, was L3 neighbor routing over vPC – even though Cisco’s documentation does not come right out and say it does not work, trust me – it does not.  Per Cisco’s doc ( http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/interfaces/configuration/guide/if_vPC.pdf ):

Configuring VLAN Interfaces for Layer 3 connectivity
You can use VLAN network interfaces on the vPC peer devices to link to Layer 3 of the network for such
applications as HSRP and PIM. However, we recommend that you configure a separate Layer 3 link for
routing from the vPC peer devices, rather than using a VLAN network interface for this purpose.

Now, as you can see from the above picture, we have an EIGRP neighbor relationship between all the routers and the core switches (R1-N7K1, R1-N7K2, R2-N7K2, R2-N7K1).  Typically this is fine in a normal spanning-tree network, but what happens when using a vPC is something different.  When a packet is received on R1 and R1 decides that the next hop should be N7K2, but the device that R1 is trying to reach is attached to N7K1 (directly or via etherchannel), the packet is dropped, as it would need to traverse the vPC peer-link twice.  N7K2 sees the packet and just drops it.  The switch actually sets a bit on the packet when it is received over the vPC peer-link so that it is not re-transmitted back over the link.  This is a loop-prevention mechanism, and that is a good thing, as you can guess.

In order for us to fix this design flaw, we just had to make the link between R1 and N7K1 a Layer 3 interface, as well as the link between R2 and N7K2.  We actually talked about doing this prior to the Nexus being installed, but chose to wait as we did not want to change too many things at one time.  The final design looks like this.
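For anyone curious what that looks like on the Nexus side, here is a minimal sketch of a routed uplink running EIGRP – the interface, addressing, and EIGRP AS number are hypothetical, not taken from our actual change:

     feature eigrp
     router eigrp 100
     !
     interface Ethernet9/5
       description [----[ L3 Uplink to R1 ]----]
       no switchport             ! Routed port instead of a vPC VLAN
       ip address 10.20.20.1/30  ! Hypothetical /30 for the point-to-point link
       ip router eigrp 100       ! Join the interface to the EIGRP process
       no shutdown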

Another problem we encountered, this one hardware related, was a bad supervisor module (the backup supervisor, actually) that was causing high CPU usage on the box.  The first Nexus was running at 10-20% CPU whereas the second Nexus was running at 90-100% CPU.  This was a little more difficult to track down as there were no errors in the log, but the way we figured it out was that one line card was stuck with a “downgrade in progress” message on some of the ports.  That message showed up because we had downgraded from 5.0.3 to 5.0.2 to see if a bug in the code was causing the CPU issues.  I will admit, Cisco had a new supervisor as well as a line card to us about 2-3 hours after we figured that out.

The last two problems that we encountered were out of our control.  One problem that we experienced was bad default gateways on devices.  I do not know how they were working prior to the upgrade, as they had a non-existent IP address configured for the default gateway.  Perhaps they had a static route that disappeared when the network link went down – that is the only logical explanation that I can figure.  We had a few devices that did this, so it may actually be a “feature” in their code.  Luckily, those devices have now been fixed.

The last one took us a bit longer to figure out – and it turns out that a “socks and sandals” person from the vendor had to get on the phone.  We had a device that plays audio messages to end-users, and since the installation of the Nexus that feature was no longer working.  It struck us as odd and out of character for this to be related to the new core switches, but since it broke after the install, we kept at it until we figured it out.  It turns out the Nexus was receiving the packet ( it is the default gateway ) and dropping it.  Why, you ask?  Because the vendor wrote their application to use the default gateway to loop the packet – i.e., the same source and destination are in the packet, and it is just routed through the default gateway.  The Nexus, like most any other security-conscious device, drops that packet as it is viewed as a spoofed packet.  Reviewing the logs in the main VDC, I could see the following error message:   2010 Aug 26 12:49:07 N7K1 %EEM_ACTION-6-INFORM: Packets dropped due to IDS check address reserved on module 9. Once we disabled that feature, everything started to work.
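For reference, the IDS checks on the Nexus 7000 live under the “hardware ip verify” family of commands, and which check you relax depends on the log message you see (ours referenced “address reserved”).  An illustrative sketch – verify the exact syntax against your NX-OS release, and weigh the security trade-off before disabling any check:

     ! Illustrative only - relaxing an IDS check is a security trade-off
     no hardware ip verify address reserved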

So what is the moral of all of this?  Well, it is good to know how all the software on your network is configured – but that is honestly almost impossible to do.  What does help is speaking with the vendors before the change so they are aware of what you are doing – and we did; we actually had proactive tickets open with all vendors for the change.   I will also say that getting everyone on the phone (TAC, the vendor, etc.) makes a huge difference.  We had TAC on the phone (actually, they usually had two TAC Nexus engineers on the calls) along with the vendor, and they worked out all the communications between devices and figured it out.  But in the end, what did it was the “socks and sandals” person from the vendor who said “you know, it does this…” to get the light bulb to click.

I just want to say, all-in-all this installation went very well – a few bumps in the road, but they were to be expected.  It helps to have a good team of people who you can count on when you are doing this, and thankfully I have that.
