Showing posts with label multicast.
Sunday, February 10, 2013
Troubleshooting High CPU due to Multicast
A great video on troubleshooting high CPU due to multicast. Note that this video is geared toward IOS, not NX-OS.
https://supportforums.cisco.com/community/netpro/network-infrastructure/switching/blog/2012/12/12/troubleshooting-high-cpu-due-to-multicast
Location:
Avenel, NJ 07001, USA
Thursday, January 31, 2013
Cisco Multicast Security - IOS Base
A must-read on multicast security. One thing I really like about this article is the illustrations. Most documents talk about these feature sets but never illustrate how they work, and for me the visuals make these topics much easier to grasp.
http://www.cisco.com/web/about/security/intelligence/multicast_toolkit.html
Location:
Avenel, NJ 07001, USA
Tuesday, November 27, 2012
PIM Sparse-Mode: Register via one link and SPT cutover via another link.
Figure 1
Let's enable multicast routing on all the devices, sparse-mode on the interfaces, and configure the RP details.
R1 - R2 - R3
ip multicast-routing
!
interface range fa0/0 - 1
ip pim sparse-mode
!
ip pim rp-address 1.1.1.1 multicast_groups override
!
ip access-list standard multicast_groups
permit 233.54.1.1
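The ACL scopes the static RP mapping to just our group, and the override keyword makes this mapping win over any dynamically learned one. As a quick sanity check outside the router, here is a small Python sketch (purely illustrative, using IOS-style wildcard-mask semantics) that confirms whether a group address matches the ACL's permit entries:

```python
import ipaddress

# Permit entries from the multicast_groups ACL: (address, wildcard-mask).
# A wildcard of 0.0.0.0 means "match this exact host address".
acl = [("233.54.1.1", "0.0.0.0")]

def acl_permits(group, entries):
    """Return True if the group matches any permit entry."""
    g = int(ipaddress.IPv4Address(group))
    for addr, wildcard in entries:
        a = int(ipaddress.IPv4Address(addr))
        w = int(ipaddress.IPv4Address(wildcard))
        # Bits set in the wildcard mask are "don't care" bits.
        if (g & ~w) & 0xFFFFFFFF == (a & ~w) & 0xFFFFFFFF:
            return True
    return False

print(acl_permits("233.54.1.1", acl))  # True: group maps to RP 1.1.1.1
print(acl_permits("239.1.1.1", acl))   # False: no static RP mapping
```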
Now let's assign IP addresses to the interfaces and set up static routes.
R3:
interface FastEthernet0/0
ip address 192.168.100.1 255.255.255.0
ip pim sparse-mode
speed 100
full-duplex
!
interface FastEthernet0/1
ip address 192.168.101.1 255.255.255.0
ip pim sparse-mode
speed 100
full-duplex
!
ip route 1.1.1.1 255.255.255.255 FastEthernet0/0 192.168.100.2 name RP
ip route 2.2.2.2 255.255.255.255 FastEthernet0/1 192.168.101.2 name 233_54_1_1_Source
R2:
interface FastEthernet0/0
ip address 192.168.103.2 255.255.255.0
ip pim sparse-mode
speed 100
full-duplex
!
interface FastEthernet0/1
ip address 192.168.101.2 255.255.255.0
ip pim sparse-mode
speed 100
full-duplex
ip route 1.1.1.1 255.255.255.255 FastEthernet0/0 192.168.103.1 name RP
ip route 2.2.2.2 255.255.255.255 FastEthernet0/0 192.168.103.1 name 233_54_1_1_Source
R1:
interface FastEthernet0/0
ip address 192.168.103.1 255.255.255.0
ip pim sparse-mode
speed 100
full-duplex
!
interface FastEthernet0/1
ip address 192.168.100.2 255.255.255.0
ip pim sparse-mode
speed 100
full-duplex
Now we need to configure the loopback interfaces on R1 for the RP address and the source address. Also enable sparse-mode on the loopback interfaces; if you don't, this won't work.
R1:
interface Loopback1
ip address 1.1.1.1 255.255.255.255
ip pim sparse-mode
!
interface Loopback2
ip address 2.2.2.2 255.255.255.255
ip pim sparse-mode
Let's set up a dummy loopback interface on R3 and statically join our multicast group on it. Since I don't have an actual host available to join the group, this will take its place.
R3:
interface Loopback0
ip address 3.3.3.3 255.255.255.255
ip pim sparse-mode
ip igmp static-group 233.54.1.1
Now let's issue this command from R1: "ping 233.54.1.1 source loopback 2 repeat 3". I am just going to show you how R2 looks after we issue the command.
Currently R2 has no multicast state until the ping command is issued on R1, because R3 sends its Join only toward R1, not R2, due to how the routing is set up by design. When we added the static join to the R3 loopback interface, it let R1 know that R3 was interested in joining this group and in learning the group's sources. Since no data was being published at that time, R1 never sent anything toward R2. Once we issue the command above, R3 learns the source (2.2.2.2), sees that it has to go through R2 to reach that source, and in turn tells R2 that it wants to join (2.2.2.2, 233.54.1.1).
R2#
*Mar 1 01:42:52.495: PIM(0): Check RP 1.1.1.1 into the (*, 233.54.1.1) entry
*Mar 1 01:42:52.551: PIM(0): Received v2 Join/Prune on FastEthernet0/1 from 192.168.101.1, to us
*Mar 1 01:42:52.551: PIM(0): Join-list: (2.2.2.2/32, 233.54.1.1), S-bit set
*Mar 1 01:42:52.555: PIM(0): Add FastEthernet0/1/192.168.101.1 to (2.2.2.2, 233.54.1.1), Forward state, by PIM SG Join
*Mar 1 01:42:52.555: PIM(0): Insert (2.2.2.2,233.54.1.1) join in nbr 192.168.103.1's queue
*Mar 1 01:42:52.559: PIM(0): Building Join/Prune packet for nbr 192.168.103.1
*Mar 1 01:42:52.559: PIM(0): Adding v2 (2.2.2.2/32, 233.54.1.1), S-bit Join
R2#
*Mar 1 01:42:52.559: PIM(0): Send v2 join/prune to 192.168.103.1 (FastEthernet0/0)
R2#
On R3 we can see the (*,G) and (S,G) entries have two different incoming interfaces. This is expected, by design, because of how we have this set up.
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 233.54.1.1), 00:51:56/stopped, RP 1.1.1.1, flags: SJC
Incoming interface: FastEthernet0/0, RPF nbr 192.168.100.2
Outgoing interface list:
Loopback0, Forward/Sparse, 00:51:56/00:01:44
(2.2.2.2, 233.54.1.1), 00:00:03/00:02:56, flags: J
Incoming interface: FastEthernet0/1, RPF nbr 192.168.101.2
Outgoing interface list:
Loopback0, Forward/Sparse, 00:00:03/00:02:56
R3#
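The split shown above comes down to two independent RPF lookups on R3: the (*,G) entry follows the route toward the RP (1.1.1.1 via Fa0/0), while the (S,G) entry follows the route toward the source (2.2.2.2 via Fa0/1). A minimal Python sketch of that RPF decision, using R3's static routes from earlier (illustrative only, not how IOS implements it):

```python
import ipaddress

# R3's unicast routes (prefix -> outgoing interface), as configured above.
routes = {
    "1.1.1.1/32": "FastEthernet0/0",   # toward the RP
    "2.2.2.2/32": "FastEthernet0/1",   # toward the source
}

def rpf_interface(address, table):
    """Longest-prefix match: the interface used to reach 'address' is the
    RPF interface, i.e. where we expect that traffic to arrive from."""
    addr = ipaddress.IPv4Address(address)
    best = None
    for prefix, iface in table.items():
        net = ipaddress.IPv4Network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface)
    return best[1] if best else None

# (*,G) joins go out the RPF interface toward the RP...
print(rpf_interface("1.1.1.1", routes))  # FastEthernet0/0
# ...while (S,G) joins go out the RPF interface toward the source.
print(rpf_interface("2.2.2.2", routes))  # FastEthernet0/1
```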
Labels:
multicast,
pim,
routing,
Sparse-mode
Location:
Avenel, NJ 07001, USA
Tuesday, April 3, 2012
Multicast Links
If you are interested in learning multicast, review these in the order listed. All of these articles are Cisco-centric, and one thing to keep in mind as you read is the architecture of your own equipment.
Internet Protocol Multicast
Multicast Routing - PIM
Financial Services Design for High Availability
Configuring IP Multicast Routing
Labels:
cisco,
dense-mode,
multicast,
pim
Location:
Perth Amboy, NJ 08861, USA
Sunday, March 4, 2012
Multicast - Input queue drops/flushes
Below I will describe the steps I took to troubleshoot input queue drops/flushes on a Cisco 6509E device.
1: I tried increasing the input queue from the default of 75 to 1000, 2000, and then 4096, but that still did not stop the drops/flushes. This does work on occasion, but it depends on the input rate of the data.
Input queue drops are counted by the system if the number of packet buffers allocated to the interface is exhausted or reaches its maximum threshold. The maximum queue value can be increased by using the hold-queue <value> command for each interface (the queue length value can be 0-4096; the default value is 75).
Each interface owns an input queue onto which incoming packets are placed to await processing by the Routing Processor (RP). Frequently, the rate of incoming packets placed on the input queue exceeds the rate at which the RP can process the packets.
http://www.cisco.com/en/US/products/hw/modules/ps2643/products_tech_note09186a0080094a8c.shtml
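The effect is easy to model: if packets arrive faster than the RP drains them, a fixed-length queue eventually fills and everything beyond the limit counts as a drop, no matter how large you make the queue. A toy Python simulation of that idea (illustrative only, not an IOS internal; the rates are hypothetical):

```python
from collections import deque

def simulate(arrivals_per_tick, serviced_per_tick, queue_max, ticks):
    """Count input-queue drops when the arrival rate exceeds the service rate."""
    queue = deque()
    drops = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(queue) >= queue_max:
                drops += 1          # queue full: packet is dropped
            else:
                queue.append(object())
        for _ in range(min(serviced_per_tick, len(queue))):
            queue.popleft()         # the RP processes what it can
    return drops

# Roughly 3614 pkt/s coming in; suppose the RP only keeps up with 3000 pkt/s.
# A bigger hold-queue delays the drops, it does not eliminate them.
print(simulate(3614, 3000, 75, 10))    # drops with the default queue depth
print(simulate(3614, 3000, 4096, 10))  # fewer drops, but still nonzero
```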
6509E#show int gi 4/7
GigabitEthernet4/7 is up, line protocol is up (connected)
Hardware is C6k 1000Mb 802.3, address is 0017.0f9b.0800 (bia 0017.0f9b.0800)
Description: multicast-unicast
Internet address is 192.168.7.94/31
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseT
input flow-control is off, output flow-control is off
Clock mode is auto
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 00:20:24
Input queue: 0/2000/5/5 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 6004000 bits/sec, 3614 packets/sec
30 second output rate 0 bits/sec, 0 packets/sec
L2 Switched: ucast: 480 pkt, 33840 bytes - mcast: 145 pkt, 13669 bytes
L3 in Switched: ucast: 0 pkt, 0 bytes - mcast: 7135671 pkt, 1481502786 bytes mcast
L3 out Switched: ucast: 0 pkt, 0 bytes mcast: 0 pkt, 0 bytes
7064317 packets input, 1466661500 bytes, 0 no buffer
Received 7063839 broadcasts (6970308 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
1296 packets output, 93739 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
6509E#
6509E#show int gi 4/7 switching
GigabitEthernet4/7 multicast-unicast
Throttle count 0
Drops RP 5 SP 0
SPD Flushes Fast 5 SSE 0
SPD Aggress Fast 0
SPD Priority Inputs 84 Drops 0
Protocol Path Pkts In Chars In Pkts Out Chars Out
Other Process 2682 210124 0 0
Cache misses 0
Fast 0 0 0 0
Auton/SSE 0 0 0 0
IP Process 716508 63616061 713076 92495576
Cache misses 0
Fast 32778336 5720227032 0 0
Auton/SSE 564492684 102883224482 5 390
6509E#
Selective Packet Discard (SPD) is a mechanism to manage the process-level input queues on the Route Processor (RP). The goal of SPD is to give priority to routing protocol packets and other important traffic, such as Layer 2 keepalives, during periods of process-level queue congestion.
http://www.cisco.com/en/US/products/hw/routers/ps167/products_tech_note09186a008012fb87.shtml
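In rough terms, SPD keeps headroom in the process-level queue for high-priority packets by discarding ordinary packets early once the queue passes a threshold. A simplified Python sketch of that admission decision (the real SPD state machine, with its min/max thresholds and aggressive mode, is more involved; the numbers here are made up):

```python
def spd_admit(queue_len, queue_max, spd_threshold, is_priority):
    """Decide whether a packet is admitted to the process-level queue.
    Priority packets (routing protocols, L2 keepalives) are admitted
    until the queue is completely full; normal packets are discarded
    early once the queue length crosses the SPD threshold."""
    if is_priority:
        return queue_len < queue_max
    return queue_len < spd_threshold

# With the queue 80% full (threshold 75 of max 100):
print(spd_admit(80, 100, 75, is_priority=True))   # True: an OSPF hello gets in
print(spd_admit(80, 100, 75, is_priority=False))  # False: a data packet is flushed
```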
In order to stop the interface from being oversubscribed by multicast traffic, I needed a way to filter the traffic coming into the interface.
3: Implement a multicast boundary list. You need to set this up on both sides of the link; if you only set it up on the receiving end, the transmitting end will still push unnecessary data onto the receiving device. To set up a multicast boundary, issue the commands below.
Creating a multicast boundary and applying it:
ip access-list standard name_acl
permit x.x.x.x
or
permit x.x.x.x 0.0.0.255
exit
!
interface gi 4/7
ip multicast boundary name_acl
end
!
WR
!
Now you are filtering multicast coming in and out of this port. This resolved my issue. I also recommend reading http://www.frameip.com/dos-cisco/queue_drops.pdf
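The "both sides" point is worth spelling out: the boundary is just an ACL evaluated against the group address, and a filter on the receiver alone does not stop the transmitter from burning link bandwidth. A hypothetical Python sketch of what each placement buys you:

```python
# Model one link: the TX router decides whether to forward a group onto
# the wire; the RX router decides whether to accept it. PERMITTED stands
# in for the groups the boundary ACL permits (hypothetical values).
PERMITTED = {"233.54.1.1"}

def crosses_link(group, tx_boundary, rx_boundary):
    """Return (on_wire, accepted): whether the packet consumes link
    bandwidth, and whether the receiving router processes it."""
    on_wire = (not tx_boundary) or group in PERMITTED
    accepted = on_wire and ((not rx_boundary) or group in PERMITTED)
    return on_wire, accepted

# Boundary on the receiver only: unwanted groups still burn bandwidth.
print(crosses_link("239.9.9.9", tx_boundary=False, rx_boundary=True))
# (True, False)

# Boundary on both sides: unwanted groups never hit the wire.
print(crosses_link("239.9.9.9", tx_boundary=True, rx_boundary=True))
# (False, False)
```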
Labels:
6509E,
dense-mode,
drops,
drops/flushes,
input que,
multicast,
pim
Location:
Perth Amboy, NJ 08861, USA
Friday, February 17, 2012
Layer 2 Multicast Arista
Layer 2 multicast troubleshooting tips; some things I learned today from the SR guys at work.
show ip igmp snooping (confirms which hosts are sending IGMP reports, so you can trace back what's connected to a specific port)
show ip igmp snooping groups (confirms which groups hosts are trying to subscribe to)
show spanning-tree vlan X (confirms which way the multicast data is flowing)
So how do you confirm that multicast data is actually being disseminated from a host on a layer 2 switch? You get the big guns out and put a sniffer on it. That's the fastest and easiest way to determine why hosts that are sending out reports are not getting multicast data. You can also run into issues where a specific ASIC for a group of ports is not working properly (not common, but it does happen).
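Under the hood, IGMP snooping boils down to a table mapping (VLAN, group) to the set of ports that have sent membership reports; the two show commands above read that table from different angles. A minimal Python sketch of that state (hypothetical, not Arista's implementation; port names are made up):

```python
from collections import defaultdict

# (vlan, group) -> set of ports that sent IGMP membership reports
snooping = defaultdict(set)

def igmp_report(vlan, group, port):
    """A host on 'port' reported membership in 'group'."""
    snooping[(vlan, group)].add(port)

def forward_ports(vlan, group):
    """Ports that should receive traffic for this group; traffic for a
    group nobody joined is constrained instead of flooded."""
    return sorted(snooping.get((vlan, group), set()))

igmp_report(100, "233.54.1.1", "Ethernet7")
igmp_report(100, "233.54.1.1", "Ethernet12")

print(forward_ports(100, "233.54.1.1"))  # ['Ethernet12', 'Ethernet7']
print(forward_ports(100, "239.1.1.1"))   # []
```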
Eventually I would like to do reviews on all the tools I have at my disposal at work; to name a few: Gigamon, NetScout, Apcon matrix switch, Network General Sniffer, and Niksun, plus Arista switches and a ton of Nexus gear.
Labels:
Arista,
igmp,
igmp snooping,
multicast,
ultra low latency
Location:
Perth Amboy, NJ 08861, USA