Here is a short description of how to generate a tech-support file on Cisco ISE and retrieve it through the CLI:
- Log in to ISE through SSH
- Generate the tech-support file with the following command:
- show tech-support file <filename>
- Show the full filename
- Copy the file to an FTP server:
- copy disk:/<filename> ftp://<ip-address>/folder
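Put together, a session could look like the following; the hostname, filename and FTP address are made-up examples, not from a real deployment:

```
ise01/admin# show tech-support file techsup-2014
! output is written to disk:/techsup-2014
ise01/admin# dir disk:/
ise01/admin# copy disk:/techsup-2014 ftp://192.0.2.50/ise
```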
Thanks to MystaJoneS's article
I don't need to open a TAC case for my low-disk-space problem on our Prime Infrastructure, as the disk cleanup feature is of no help. Growing the disk outside the VM and then adding it as a new physical volume in the ADE-OS works pretty fine.
If you have the same issue, just follow his guide.
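For reference, on a plain LVM-based Linux system the equivalent steps would look roughly like this; the device, volume-group and logical-volume names here are assumptions and will differ in ADE-OS, so follow the linked guide for the exact procedure:

```
# new virtual disk shows up as /dev/sdb (assumed device name)
pvcreate /dev/sdb                          # initialize it as a physical volume
vgextend smosvg /dev/sdb                   # add it to the existing volume group
lvextend -l +100%FREE /dev/smosvg/optvol   # grow the affected logical volume
resize2fs /dev/smosvg/optvol               # grow the ext filesystem online
```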
If you plan to use SPAN to mirror network ports, take care how you use it.
If you just use "monitor session # source interface xY" and "monitor session # destination interface xY", you can get unwanted results. Without adding "monitor session # destination interface xY ingress vlan #" you can get frames from other uplink ports.
To preserve VLAN tags you need to add "encapsulation dot1q" to the "destination interface" command. You also need to make sure that the monitoring device connected to the destination port is able to understand dot1q tags; otherwise the monitoring device removes the tags. There are some registry hacks for monitoring devices with Windows and Intel network cards, but I can't promise that those will work.
Also mind the duplicate-packet issue with SPAN. Please see this link for details; Mike Schiffman explains it really well.
Summed up: SPAN works best for me with the following commands:
- "monitor session # source interface xY rx/tx/both"
- "monitor session # destination interface xX encapsulation dot1q ingress vlan #"
- # stands for any session or VLAN number
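A concrete example with the placeholders filled in; the interface and VLAN numbers are made up, and the exact "encapsulation"/"ingress" options vary slightly between Catalyst platforms:

```
! mirror Gi1/0/1 to the analyzer on Gi1/0/24, keeping dot1q tags,
! and allow ingress traffic from the analyzer on VLAN 10
monitor session 1 source interface GigabitEthernet1/0/1 both
monitor session 1 destination interface GigabitEthernet1/0/24 encapsulation dot1q ingress vlan 10
```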
Newer devices like the Cisco 3850 running IOS XE already include Wireshark, but this is bound to the ipbase or ipservices license. Please see this link for details. Hopefully they'll also add it to lanbase later on.
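On such a device, captures are driven with the "monitor capture" commands. A minimal sketch; the capture name and interface are examples, and the exact syntax may differ between IOS XE releases:

```
monitor capture CAP interface GigabitEthernet1/0/1 both match any
monitor capture CAP start
! ... reproduce the problem ...
monitor capture CAP stop
show monitor capture CAP buffer brief
```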
One general hint: for debugging, start from the first interface you know and work your way forward through every interface you can, until you find the problem.
It turns out that updating a Cisco ISE VM from 1.1 up to 1.2.1 can lead to huge performance impacts. The original 1.1 version ran without problems, but after updating the VM to 1.2 the whole system got really slow. The web interface was nearly unusable, and a reboot of the VM solved the problem only for a short time. Problem indicators are:
- Performance statistics from VMware and Cisco ISE that do not match
- Wrong alert messages from Cisco ISE concerning I/O write performance
- High authentication latency
- Authenticators reporting a dead RADIUS server
The problem was solved by a fresh installation of the Cisco ISE VM with the 1.2 image and then updating to 1.2.1. The restore of the configuration backups works really well and even includes voucher codes if the ISE guest portal is used.
Please note that a restore requires rejoining the ISE VM to the domain and rehosting the installed license from the defective to the restored machine. Also, after restoring the backup, the VM gets the original IP address from the backup. So it has to be ensured that the old machine is offline, or that the restored one has no network connectivity while the old one is running.
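The restore itself is started from the ISE CLI. A sketch, assuming a configured repository named "ftprepo"; the backup filename and encryption key are examples, so check the exact syntax for your release with "restore ?":

```
ise01/admin# restore ise-config-backup.tar.gpg repository ftprepo encryption-key plain MyKey123
```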
The Kron feature in Cisco IOS and Cisco IOS XE has multiple known bugs. Recently, a Cisco 3850 running IOS XE 03.03.03SE with a Kron job configured for automatic backups lost parts of its running configuration.
After the Kron job was executed, parts of the Kron configuration itself and also parts of interface configurations were missing. Mainly the execution-time configuration of the Kron job got lost, but special port configurations of uplink ports were also affected, which made the bug critical.
We now use EEM scripts as an alternative to the Kron feature. See http://www.cisco.com/c/en/us/products/ios-nx-os-software/ios-embedded-event-manager-eem/index.html for more information.
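As an illustration, a simple EEM applet that copies the running configuration to a TFTP server every night at 02:00; the server address and filename are placeholders, and "file prompt quiet" suppresses the interactive prompts of the copy command:

```
file prompt quiet
!
event manager applet CONFIG-BACKUP
 event timer cron cron-entry "0 2 * * *"
 action 1.0 cli command "enable"
 action 2.0 cli command "copy running-config tftp://192.0.2.10/switch01-backup.cfg"
```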
The access point can't join the controller, and the debug output of the access point shows messages like:
"%CAPWAP-3-ERRORLOG: Invalid event 10 & state 5 combination."
"%Error opening flash:/ap3g2-rcvk9w8-mx/info (No such file or directory)"
"cisco AIR-CAP2602I-E-K9 (PowerPC) processor (revision A0) with 180214K/81920K bytes of memory."
This is caused by a faulty AP image and can be resolved through a console session on the AP with the following commands:
- debug capwap console cli
- conf t
- test mesh mode local
This forces the AP to download a fresh image from the wireless controller, and it will join the controller after getting the image.
Make sure to agree on the LACP mode with your AIX administrator. Configuring the ports with "channel-group # mode active" worked fine for us. Mode on won't work if the AIX server uses mode active. Also see the Cisco LACP configuration guidelines under:
Configuring EtherChannels and Link-State Tracking
Here are the different modes:
•auto—Enables PAgP only if a PAgP device is detected. It places the port into a passive negotiating state, in which the port responds to PAgP packets it receives but does not start PAgP packet negotiation. This keyword is not supported when EtherChannel members are from different switches in the switch stack.
•desirable—Unconditionally enables PAgP. It places the port into an active negotiating state, in which the port starts negotiations with other ports by sending PAgP packets. This keyword is not supported when EtherChannel members are from different switches in the switch stack.
•on—Forces the port to channel without PAgP or LACP. In the on mode, an EtherChannel exists only when a port group in the on mode is connected to another port group in the on mode.
•non-silent—(Optional) If your switch is connected to a partner that is PAgP capable, configure the switch port for nonsilent operation when the port is in the auto or desirable mode. If you do not specify non-silent, silent is assumed. The silent setting is for connections to file servers or packet analyzers. This setting allows PAgP to operate, to attach the port to a channel group, and to use the port for transmission.
•active—Enables LACP only if a LACP device is detected. It places the port into an active negotiating state in which the port starts negotiations with other ports by sending LACP packets.
•passive—Enables LACP on the port and places it into a passive negotiating state in which the port responds to LACP packets that it receives, but does not start LACP packet negotiation.
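The setup that worked for us, sketched as switch configuration; the interface range and channel-group number are examples:

```
interface range GigabitEthernet1/0/1 - 2
 description LACP to AIX server
 channel-group 1 mode active
!
! verify the bundle afterwards:
! show etherchannel 1 summary
```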
We had the problem that BOOTP packets from an IBM p720 client, which was supposed to get an image from an IBM NIM server, were dropped on the switch the client was connected to. I could prove this with a SPAN port on the local switch. The switch in question was a Cisco 3750G stack running a 15.0(2) IOS release.
The IBM p720 client had a fixed IP address, as did the IBM NIM server, and the client was configured with the NIM server's IP address, so the packets were unicast packets.
The reason those packets were dropped is the DHCP snooping feature on the Cisco switches. This feature is used to protect the network from so-called spurious DHCP servers, which are DHCP servers that exist in your network without your knowledge. Here is an excerpt from the Cisco configuration guideline:
•If a Layer 2 LAN port is connected to a DHCP server, configure the port as trusted by entering the ip dhcp snooping trust interface configuration command.
•If a Layer 2 LAN port is connected to a DHCP client, configure the port as untrusted by entering the no ip dhcp snooping trust interface configuration command.
For more details see: Configuring DHCP Features and IP Source Guard
To prevent the switch from dropping the packets from the BOOTP client, we had to configure the NIM server port as well as the client port with "ip dhcp snooping trust".
BOOTP was designed prior to DHCP and uses the same ports (UDP 67 and 68) as DHCP. Due to lack of time, I could not find the exact reason for the packet drop; the packet-validation chapter from the Cisco link above didn't quickly explain why we also had to configure the client ports with "ip dhcp snooping trust".
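In configuration terms, the workaround looked like this; the interface numbers are made up:

```
! trust the NIM server port and, in this case, also the BOOTP client port
interface GigabitEthernet1/0/10
 description IBM NIM server
 ip dhcp snooping trust
!
interface GigabitEthernet1/0/11
 description IBM p720 client
 ip dhcp snooping trust
```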
I had the following scenario:
- a client in the client VLAN
- a server multihomed in the client VLAN and the server VLAN, running SLES 11 SP2 (I know that this is bad!)
The server didn't respond to the ping from the client when the client pinged the server-VLAN IP of this server (the ping came in on the interface in the server VLAN, but the server never sent a reply).
I could prove this with tcpdump. After opening a service request at Novell, the rp_filter turned out to be the solution: if this filter is set to 1, the server won't respond, depending on which interface the packet arrives on.
Set to 0 (deactivated), the server starts responding to the ping. For details, see the corresponding entry in /etc/sysctl.conf:
# enable route verification on all interfaces
net.ipv4.conf.all.rp_filter = 1
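To test and apply the change, rp_filter can also be toggled at runtime with sysctl; the interface name eth1 is an assumption for the server-VLAN interface:

```
# show the current settings
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth1.rp_filter

# disable strict reverse-path filtering (0 = off)
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.eth1.rp_filter=0

# make it permanent by setting in /etc/sysctl.conf:
#   net.ipv4.conf.all.rp_filter = 0
```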