Thursday, August 25, 2016

Monitor IO in linux machines

server# iostat -x -t 1

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          31.33    0.00    8.52   27.32    0.00   32.83

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     7.00   52.00   22.00  2848.00   232.00    41.62     0.42    5.65   3.41  25.20
sdb               0.00    16.00  213.00   48.00  9832.00   512.00    39.63     1.80    6.93   3.55  92.60
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00  265.00   93.00 12672.00   744.00    37.47     2.34    6.52   2.77  99.10
dm-3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

^C
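
In the output above, dm-2 is the device in trouble: %util near 100 means the device is saturated, and await is the average time (in ms) requests spend waiting. To watch only the busy devices, a one-liner like this helps (a minimal sketch, assuming sysstat's iostat; header lines are filtered out because their last field is not numeric):

iostat -dx 1 | awk '$NF+0 > 90'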

Wednesday, May 18, 2016

Add latency to a linux server

In a lab or simulation environment, it can be useful to add artificial latency to a link.

! add 15ms latency
tc qdisc add dev eth0 root netem delay 15ms
! remove latency
tc qdisc del dev eth0 root

! check latency

tc -s qdisc
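
netem can also add jitter and loss on top of the base delay (a sketch; the values are just examples):

tc qdisc change dev eth0 root netem delay 15ms 5ms loss 0.1%

The second time value is random jitter around the 15ms base; a ping -c 4 to a known host before and after makes the effect easy to confirm.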

Monday, May 9, 2016

Problems after cloning Centos

ifconfig -a
vi /etc/udev/rules.d/70-persistent-net.rules -> edit this file and remove the cloned MAC entries
rename the interfaces back to eth0, eth1, ... ethN (see the sketch below)
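
On CentOS 6 the usual cleanup looks like this (a minimal sketch; file paths can differ between versions):

# remove the cached MAC-to-name mappings; the file is regenerated on reboot
rm /etc/udev/rules.d/70-persistent-net.rules
# remove or fix the HWADDR (and UUID) lines for each interface
vi /etc/sysconfig/network-scripts/ifcfg-eth0
reboot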

Thursday, April 28, 2016

Ansible 2.0 for Network Engineers - Getting Started

Well, enough of ad-hoc scripting the tcl/expect way; it is time to start using something more powerful. After more than 10 years of tcl/expect, I believe it is time to convert myself to python, ansible, jinja, paramiko and so on.
The problem is: how can I get to the same fast pace I have with the older technology? The answer is ... I need to invest time and never look back ... :)

Getting started:
- Grab a centos machine and:
     yum install ansible - you should get version 2.0 (see the sketch after this list)

- Install the modules you need ...
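
On a stock CentOS box the ansible package comes from EPEL, so the install usually looks like this (a minimal sketch, assuming CentOS 7 with internet access):

yum install epel-release
yum install ansible
ansible --version

If yum gives you an older release, pip install ansible is an alternative way to get a newer one.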

Thursday, April 7, 2016

NetEnforcer Allot CLI Basic Commands

Here is a simple list of commands that will help when drilling down into Allot Sigma machines:

NetEnforcer:
- acstat-sum -> total connections
- netstat -antp | grep LISTEN
- actype -> shows the version
- netstat -an | grep ":80"
- go config view -> asymmetric config
- go list host
- go list vlan
- go list vc [<-option>]
- acstat -l vc | grep searchtag
- acstat -l pipe -> pipe list, connections per pipe
- acmon -p 5.51 -r -> traffic and connections per second for a specific pipe

- acmon -y -> asymmetric statistics
- acmon -x 0 -> processing unit (0 or 1) traffic in a CC (core controller)
- acstat -E 2.2.2.2 -> everything for a specific external IP (worked!)
- acstat -P 10.1.1.1 -> everything for a specific internal IP (worked!)
- acstat -P 172.16.1.0 -M 255.255.255.0
- acstat -ifx | grep 172.16.1.1
- cat /proc/allot/infra/hw/status/*
- cat /proc/allot/infra/hw/network/*

Sigma-E:
The SG-Sigma E offers extreme performance values and comes in two models: the SG-Sigma E6, using a standard 6-slot ATCA chassis, offers up to 64 Gbps, up to 20 million connections and a policy with up to 512 lines, 400,000 pipes and 800,000 VCs when fully populated with 4 CC-300 blades. The SG-Sigma E14, using a 14-slot ATCA chassis, offers up to 160 Gbps, up to 50 million connections and a policy with up to 512 lines, 1,000,000 pipes and 2,000,000 VCs when fully populated with 10 CC-300 blades. The maximum values supported depend on the number of Core Controller blades deployed in each platform. The CC-200, used in the SG-Sigma, supports 15 Gbps. The CC-300, used in the SG-Sigma E, supports 16 Gbps. Both types of blade support 5 million connections and a policy with 512 lines, 125,000 pipes and 250,000 VCs.

Tuesday, February 9, 2016

How to determine the number of slots in a Catalyst 6500 or 7600

For capacity management KPIs, it can be useful to know via SNMP how many slots are in use in a 6500.
This can be achieved with the ENTITY-MIB entPhysicalName object:
server# snmpwalk -v 2c -c public 10.1.1.1 .1.3.6.1.2.1.47.1.1.1.1.7
SNMPv2-SMI::mib-2.47.1.1.1.1.7.1 = STRING: "WS-C6509-E"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.2 = STRING: "Physical Slot 1"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.3 = STRING: "Physical Slot 2"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.4 = STRING: "Physical Slot 3"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.5 = STRING: "Physical Slot 4"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.6 = STRING: "Physical Slot 5"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.7 = STRING: "Physical Slot 6"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.8 = STRING: "Physical Slot 7"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.9 = STRING: "Physical Slot 8"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.10 = STRING: "Physical Slot 9"
SNMPv2-SMI::mib-2.47.1.1.1.1.7.11 = STRING: "Backplane"
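
To turn this into a KPI, the same ENTITY-MIB walk can be counted (a minimal sketch; host and community are the same placeholders as above, and entPhysicalClass may print as module(9) or plain 9 depending on the MIBs loaded):

# total physical slots in the chassis
snmpwalk -v 2c -c public 10.1.1.1 .1.3.6.1.2.1.47.1.1.1.1.7 | grep -c "Physical Slot"
# rough count of installed modules, via entPhysicalClass = module(9)
snmpwalk -v 2c -c public 10.1.1.1 .1.3.6.1.2.1.47.1.1.1.1.5 | grep -cE "module\(9\)|INTEGER: 9"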

Thursday, February 4, 2016

Problem with bgp status snmp queries

I have been seeing a problem in the Cisco Catalyst line, on both the 3560 and the 6500, with SNMP queries returning a wrong BGP peer status.
For example, in the BGP4-MIB:
1.3.6.1.2.1.15.3.1.1 -> returns the peer identifier (router ID)
1.3.6.1.2.1.15.3.1.2 -> returns the peer state
  1 : idle
  2 : connect
  3 : active
  4 : opensent
  5 : openconfirm
  6 : established

Whenever the peer identifier comes back as
SNMPv2-SMI::mib-2.15.3.1.1.10.1.1.1 = IpAddress: 0.0.0.0

then the reported state will be
SNMPv2-SMI::mib-2.15.3.1.2.10.1.1.1 = INTEGER: 1

when the returned value should be
SNMPv2-SMI::mib-2.15.3.1.2.10.1.1.1 = INTEGER: 6 -> established.

So far, I have no workaround.
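
In the meantime, a poller can at least flag the peers whose state cannot be trusted (a minimal sketch; host and community are placeholders):

# any peer whose identifier walks back as 0.0.0.0 reports a bogus state
snmpwalk -v 2c -c public 10.1.1.1 .1.3.6.1.2.1.15.3.1.1 | grep "0.0.0.0"

Each line returned names a peer index whose 15.3.1.2 state should be treated as unknown rather than idle.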

Monday, January 25, 2016

hsrp v4 and v6 mac

HSRP for IPv6:
 0005.73A0.0000 through 0005.73A0.0FFF (4,096 addresses)
 UDP/2029
 HSRP version 2 must be enabled on the interface
HSRP for IPv4:
 0000.0c07.ac00 through 0000.0c07.acFF
 The MAC changes with the group ID; for example, group 10 uses 0000.0c07.ac0a.
 interface x
   standby 1 ip <virtual-ip>

Monday, January 18, 2016

Monitor ASR nv Cluster Status with SNMP

Here are some MIBs that can be used for that. I have a script that builds the logic (a sketch of it is below); I will share the full version when I have time.

Inside the MIB subtree 1.3.6.1.4.1.9.9.498 you will find a lot of objects for monitoring the cluster.

Two useful tables are the rack names (enterprises.9.9.498.1.1.6.1.2.1.x) and the rack status (enterprises.9.9.498.1.1.6.1.10.1.x). If SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.10.1.1 returns 1, then the node is OK. If the query returns nothing at all, then the node is down.

Example both are ok:
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.2.1.1 = STRING: "Rack0"
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.2.1.2 = STRING: "Rack1"
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.10.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.10.1.2 = INTEGER: 1

Example Rack0 is down:
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.2.1.1 = STRING: "Rack1"
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.2.1.2 = failed result
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.10.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.10.1.2 = failed result

Example Rack1 is down:
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.2.1.1 = STRING: "Rack0"
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.2.1.2 = failed result
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.10.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.9.9.498.1.1.6.1.10.1.2 = failed result


Example rack0 and rack1 are down:
guess what ... no reply at all :)
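
Until I share the full script, here is a minimal sketch of the logic above (host and community are placeholders):

#!/bin/bash
# a rack that is up shows up in the name table; a rack that is down returns nothing
HOST=10.1.1.1; COMM=public
NAMES=$(snmpwalk -v 2c -c $COMM -Ovq $HOST .1.3.6.1.4.1.9.9.498.1.1.6.1.2 2>/dev/null)
for rack in Rack0 Rack1; do
  echo "$NAMES" | grep -q "$rack" && echo "$rack: up" || echo "$rack: DOWN"
done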

How can I tell the type of traffic dropped on an interface

In normal day-to-day NOC operation, a typical problem is a port dropping packets.
How can you tell which type of traffic is being dropped without sniffing it?

Well, there is a way to know if it is an STP packet, a broadcast packet, etc.

Use the command:
sh platform port-asic stats drop asic 2
 

 Supervisor TxQueue Drop Statistics
   Queue  0: 0 -> rpc
   Queue  1: 0 -> STP
   Queue  2: 0 -> ipc
   Queue  3: 931 -> Routing protocol
   Queue  4: 0
   Queue  5: 0
   Queue  6: 0
   Queue  7: 0
   Queue  8: 0 -> Broadcast
   Queue  9: 0
   Queue  10: 0  -> igmp snooping

Sunday, January 17, 2016

downtime measurement with linux ping script in bash

During an equipment upgrade in the datacenter, we sometimes need to measure downtime: specifically, when it went down and for how long.
One way is to do it with a loop of pings:

#!/bin/bash
# ping every IP in the list once per pass; log a timestamped line on each failure
while :; do
  for ip in $(cat /my/script/dir/listofipstoping.txt)
  do
     ping -c 1 -W 1 $ip >/dev/null || echo "PING TO $ip FAILED @ `date`"
  done
done

Saturday, January 16, 2016

Microsoft NLB debugging and Cisco ACE.

An NLB multicast MAC address looks something like this:
 03-bf-c0-a8-03-0e

The first octet encodes the NLB mode:
      01 = IGMP
      02 = Unicast
      03 = Multicast
Then follows "bf".
Then comes the IP address in hex:
    c0=192, a8=168, 03=3, 0e=14, which gives the IP 192.168.3.14.
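
You can reproduce the mapping with a printf one-liner (a minimal sketch; the four octets are the example IP above):

printf '03-bf-%02x-%02x-%02x-%02x\n' 192 168 3 14
# -> 03-bf-c0-a8-03-0e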

Some equipment, like the Cisco ACE for example, will not put this MAC in the CAM table.

Nexus 7k/5k behavior:
 
vlan configuration 10
  layer-2 multicast lookup mac

How to block bpdu packets in an ASR 9k Link

There are some situations where we really need to block BPDUs from crossing a backbone router.
Some examples are:
- When extending an L2 segment to another datacenter
- When interconnecting 2 core infrastructures with different vlan IDs.

Obviously, this is done following best practices, such as having a single connection between these points using aggregated (LACP) port-channels between the sites, or MPLS.


So, the solution is to build an L2 ACL like this one and apply it to the specific port (see the sketch after the ACL):
ethernet-services access-list block-invalid-frames
  10 deny any 0180.c200.0000 0000.0000.000f
  20 deny any host 0180.c200.0010
  30 deny any host 0100.0c00.0000
  40 deny any host 0100.0ccc.cccc
  50 deny any host 0100.0ccc.cccd
  60 deny any host 0100.0ccd.cdce
  70 permit any any
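
Then attach it to the port (a minimal sketch in IOS XR syntax; the interface name is just an example):

interface GigabitEthernet0/0/0/0
 ethernet-services access-group block-invalid-frames ingress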

Virtual ports on Catalyst and Nexus

Concepts:
* Virtual Interfaces
* Virtual Ports
* STP Logical Ports

Cisco Catalyst 6500
 ROUTER6k#sh vlan virtual-port

 Slot 2
 -------
 Total slot virtual ports 2832

 Slot 3
 -------
 Total slot virtual ports 672

 Slot 6
 -------
 Total slot virtual ports 54

 Slot 7
 -------
 Total slot virtual ports 5

 Slot 9
 -------
 Total slot virtual ports 5033

 Slot 10
 -------
 Total slot virtual ports 74

 Slot 11
 -------
 Total slot virtual ports 1045

 Slot 12
 -------
 Total slot virtual ports 615

 Slot 13
 -------
 Total slot virtual ports 2504

 Total chassis virtual ports 12834
Cisco Nexus 7000

 http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572834-00_STDG_NX-OS_vPC_DG.pdf

 Total stp_ports*instances:     4616
 Total ports*vlans        :     4616  <== This is what you check when you run RPVST (limit 16,000)
 Total phy_ports*vlans    :    11193  <== This is what you check when you run MST (limit 90,000?)

 - 16,000 virtual ports with RPVST in version 6.2.2 on the Nexus 7k
 - 90,000 virtual ports with MST in version 6.2.2 on the Nexus 7k


Cisco Nexus 5500

 show spanning-tree internal info global | inc ports
 Total stp_ports*instances:     3325
 Total ports*vlans        :     3323  <== limit is 32,000
 Total phy_ports*vlans    :     3689

 - 48,000 virtual ports with RPVST or MST, in an L2-only solution (no L3).


Tuesday, January 12, 2016

Monitor Failover HA feature in Cisco Firewalls ASA, FWSM and PIX

This post will answer the question: "How can I monitor failover in my Cisco Firewalls?"

Quick answer:
I do it with a script that keeps a history of failovers and sends an email every time there is a failure.
You can find the script here:
https://github.com/pmachete/ciscoFirewallFailoverMonitor

From an operations perspective you want to know about changes, not status. Status is nice, but if there is a failure that lasts for some hours, you do not need a system that keeps telling you something is wrong: you want to know only at the moment it happens. Email can be a solution. A trigger to an operations ticket might be even better.
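
The core of the idea fits in a few lines (a minimal sketch, not the script above; host, community and mail address are placeholders, and the OID is CISCO-FIREWALL-MIB cfwHardwareStatusValue for the primary unit - verify it on your platform):

#!/bin/bash
HOST=10.1.1.1
STATE_FILE=/var/tmp/fw_failover_state.$HOST
NEW=$(snmpget -v 2c -c public -Ovq $HOST .1.3.6.1.4.1.9.9.147.1.2.1.1.1.3.6)
OLD=$(cat $STATE_FILE 2>/dev/null)
if [ "$NEW" != "$OLD" ]; then
    echo "$NEW" > $STATE_FILE
    echo "failover state on $HOST changed: $OLD -> $NEW" | mail -s "FW failover change" noc@example.com
fi

Run it from cron every minute and you get exactly the change-driven behaviour described above.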

How to set up an IP SLA latency graph in Cisco Switches and Routers

Whenever there are performance problems in a flow between 2 points, with customers reporting slowness and communication failures, the first thing network engineers do is run pings or reliability protocols to verify it.
The problem is when this only happens sometimes during the week. Then you need a graph measuring that latency over time.
Latency can be measured with a linux machine, using self-made scripts or tools like smokeping, but the issue normally comes down to one specific link that you want to rule out. Also, the flow can be in a specific vrf that your linux machine cannot reach.

The best solution is usually the IP SLA feature in routers and switches. It measures the latency at whatever frequency you configure, and you then poll that IP SLA via SNMP to store it.

Here is an example of an IP SLA graph, generated by capturing the IP SLA data via SNMP and dumping it to an mrtg file.
It shows the latency of a 300 km link between 2 datacenters. The latency varies a lot during the day: stable at 4 ms, but sometimes rising to 12 ms. The measurement is done with a ping between a router and something that replies to ICMP. It will not show us jitter and will not measure UDP transport; for that you would need IP SLA running on both sides (for example udp-jitter with a responder).
There is another graph that tells us the service uptime. It is very useful to see exactly that if the ping failed, the service failed. It is an SNMP capture as well, dumped to an mrtg file, and you can use it to drive an operations alarm.
You can set up a condition in your monitoring tool: if the measured value is 1 for 3 consecutive reads, then wake somebody up.

This is a good start to analyse latency, but it only polls every 5 minutes while measurements happen more often, so you will lose information. Still, in the end you will have important data to report in an RCA. From my experience with the IP SLA feature in Cisco routers and switches, 90% of the time it works great. Some versions will measure incorrectly, but you can live with it. After all, this graph is only a hint on where you should search for the problem.

So, here's how to do it:

For example, an IP SLA probe on a Cisco 6500 (frequency 10 means it pings every 10 seconds):
ip sla 30
 icmp-echo 192.168.10.10 source-ip 192.168.10.1
 vrf CUSTOMER01
 tag MYPROBE30CUSTOMER01
 frequency 10
ip sla schedule 30 life forever start-time now
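
Before wiring it into mrtg, you can spot-check the two OIDs used below from the NMS (a minimal sketch; host and community are the same placeholders as in the mrtg config):

snmpget -v 2c -c public 10.1.1.1 .1.3.6.1.4.1.9.9.42.1.2.9.1.6.30     # probe 30 status (1-fail 2-success)
snmpget -v 2c -c public 10.1.1.1 .1.3.6.1.4.1.9.9.42.1.2.10.1.1.30    # probe 30 latest rtt in ms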

If you are using mrtg:
###  PROBE_A Probe 30 status
Target[10.1.1.1_PROBE_P30_STATUS]:1.3.6.1.4.1.9.9.42.1.2.9.1.6.30&1.3.6.1.4.1.9.9.42.1.2.9.1.6.30:public@10.1.1.1
MaxBytes[10.1.1.1_PROBE_P30_STATUS]: 500000
PNGTitle[10.1.1.1_PROBE_P30_STATUS]: PROBE_A Probe 30 status (1-fail 2-success)
YLegend[10.1.1.1_PROBE_P30_STATUS]: Status (1-fail 2-success)
Options[10.1.1.1_PROBE_P30_STATUS]: growright, nopercent, gauge
ShortLegend[10.1.1.1_PROBE_P30_STATUS]: _status
Title[10.1.1.1_PROBE_P30_STATUS]:  PROBE_A Probe 30 status
PageTop[10.1.1.1_PROBE_P30_STATUS]: PROBE_A Probe 30 status

###  PROBE_A Probe 30 rtt
Target[10.1.1.1_PROBE_A_P30_RTT]:1.3.6.1.4.1.9.9.42.1.2.10.1.1.30&1.3.6.1.4.1.9.9.42.1.2.10.1.1.30:public@10.1.1.1
MaxBytes[10.1.1.1_PROBE_A_P30_RTT]: 500000
PNGTitle[10.1.1.1_PROBE_A_P30_RTT]: PROBE_A Probe 30 rtt
YLegend[10.1.1.1_PROBE_A_P30_RTT]: RTT (ms)
Options[10.1.1.1_PROBE_A_P30_RTT]: growright, nopercent, gauge
ShortLegend[10.1.1.1_PROBE_A_P30_RTT]: _ms
Title[10.1.1.1_PROBE_A_P30_RTT]:  PROBE_A Probe 30 rtt
PageTop[10.1.1.1_PROBE_A_P30_RTT]: PROBE_A Probe 30 rtt

Monitoring CSM Services and finding MIB

CSM Quick CMDS

show module contentSwitchingModule 3 stats
show module contentSwitchingModule 3 variable

FIND THIS MIB

SR_C6500#sh module contentSwitchingModule all vservers  | inc 10.111.111.143
MYPLTCAS_VIP110 SLB   TCP  10.111.111.143/32:110    ALL  OPERATIONAL  95
MYPLTCAS_VIP143 SLB   TCP  10.111.111.143/32:143    ALL  OPERATIONAL  157
MYPLTCAS_VIP443 SLB   TCP  10.111.111.143/32:443    ALL  OPERATIONAL  9806
MYPLTCAS_VIP80  SLB   TCP  10.111.111.143/32:80     ALL  OPERATIONAL  0
MYPLTCAS_VIP993 SLB   TCP  10.111.111.143/32:993    ALL  OPERATIONAL  179
MYPLTCAS_VIP995 SLB   TCP  10.111.111.143/32:995    ALL  OPERATIONAL  52
SR_C6500#


popnnm06#  snmpwalk -v 1 -c public 10.111.2.10 1.3.6.1.4.1.9.9.254 | grep 10.111.111.143
popnnm06#  snmpwalk -v 1 -c public 10.111.2.10 1.3.6.1.4.1.9.9.161.1.4 | grep 10.111.111.143
popnnm06#  snmpwalk -v 1 -c public 10.111.2.10 SNMPv2-SMI::enterprises.9.9.161.1.3.1.1.16.3 | grep "Counter32: 8[0-9][0-9][0-9]"

# REALS
snmpwalk -v 1 -c public 10.111.2.10 enterprises.9.9.161.1.3.1.1.5.3
snmpwalk -v 1 -c public 10.111.2.10 enterprises.9.9.161.1.3.1.1.7.3 | grep 10.111.111
snmpwalk -v 1 -c public 10.111.2.10 enterprises.9.9.161.1.3.1 | grep "Counter32: [76][0-9][0-9][0-9]"
snmpwalk -v 1 -c public 10.111.2.10 .1.3.6.1.4.1.9.9.161.1.3.1.1.5 | grep 10.111.111
snmpwalk -v 1 -c public 10.111.2.10 SNMPv2-SMI::enterprises.9.9.161.1.3.1.1.5.3.12.80.84.65.83.80.67.65.83.45.49.49.48

# VSERVERS
snmpwalk -v 1 -c public 10.111.2.10 1.3.6.1.4.1.9.9.161.1.4.1.1.17.3

FIND VSERVER MIB
SR_C6500#sh module contentSwitchingModule all vservers  | inc 10.111.111.172
SR_C6500#sh module contentSwitchingModule all vservers name MYSERVERASHUB-110 config
After finding the reals, look for the real's IP in the config. For example, in this case I selected:
  PTAEXCASHUB01-10.111.111.166
# find the real that starts with 10.111.111.166 on port 110
snmpwalk -v 1 -c public 10.111.2.10 .1.3.6.1.4.1.9.9.161.1.3.1.1.5 | grep 10.111.111.166 | grep "110 = G"
SNMPv2-SMI::enterprises.9.9.161.1.3.1.1.5.3.13.80.84.65.67.65.83.72.85.66.45.49.49.48.10.111.111.166.110 = Gauge32: 18
                                               # (the OID index encodes the name in ASCII - 49.49.48 = "110" - followed by the real IP and port)
# find vserver:
snmpwalk -v 1 -c public 10.111.2.10 1.3.6.1.4.1.9.9.161.1.4.1.1.17.3 | grep 80.84.65.67.65.83.72 | grep 49.48
SNMPv2-SMI::enterprises.9.9.161.1.4.1.1.17.3.13.80.84.65.67.65.83.72.85.66.45.49.49.48 = Gauge32: 105


# find the real that starts with 10.111.111.160 on port 143
snmpwalk -v 1 -c public 10.111.2.10 .1.3.6.1.4.1.9.9.161.1.3.1.1.5 | grep 10.111.111.160 | grep "143 = G"
SNMPv2-SMI::enterprises.9.9.161.1.3.1.1.5.3.12.80.84.65.83.80.67.65.83.45.49.52.51.10.111.111.160.143 = Gauge32: 14
                                               # (again, the index is the name in ASCII; 49.52.51 = "143")
# find vserver:
snmpwalk -v 1 -c public 10.111.2.10 1.3.6.1.4.1.9.9.161.1.4.1.1.17.3 | grep 80.84.65.83.80.67.65.83 | grep 49.52.51
result:
SNMPv2-SMI::enterprises.9.9.161.1.4.1.1.17.3.15.80.84.65.83.80.67.65.83.95.86.73.80.49.52.51 = Gauge32: 101
1.3.6.1.4.1.9.9.161.1.4.1.1.17.3.15.80.84.65.83.80.67.65.83.95.86.73.80.49.52.51 -> the vserver MIB OID to poll
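
Since the OID index is just the name length followed by the ASCII code of each character, a tiny helper makes the grepping easier (a minimal sketch; name_to_oid is a hypothetical function, not a CSM tool):

# print the dotted-decimal ASCII form of a name, e.g. VIP143 -> 86.73.80.49.52.51
name_to_oid() { printf '%s' "$1" | od -An -tu1 | xargs | tr ' ' '.'; }
name_to_oid "VIP143"

You can then grep the snmpwalk output for that string instead of translating characters by hand.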

Monday, January 11, 2016

Datacenter Network Capacity Management

Here is a complex topic: Capacity Management inside the Datacenter.

There is daily operation and monitoring, and there is capacity management. They touch at some points, but they have completely different perspectives.
I have been doing capacity KPIs for the past 8 years and, to be honest, I believe this is the first time I am doing it right. Here are some differences:

Operation:
- Realtime monitoring
- Alarm driven
- Focused on detail
- We are worried about uptime

Capacity Management:
- It's a recurring process
- It is weekly based
- We are worried about provisioned capacity
- We are worried about long-term stability
- We want the big picture of the whole infrastructure.

So, the first questions that normally arise are:
- How can I extract my data?
- What are my limits?
- Which tools can I use?

Well, the correct question should be: "How can I set up a process that takes action on the important findings raised by capacity management?". If you can answer this question and execute it as a process, then you already have capacity management in place. Now all you need is data and analytics to feed that process. Believe me, setting up the process is the hard part, because it is normally treated as a low-priority task that can always be done later.

So, after the process is set up and running - which means a weekly review, meeting minutes and a weekly report showing the work with all updated findings - you will have something to show an auditor in an ISO 20000 certification.

But what you are probably trying to get at is: what should I measure, and how? Getting the data and finding the limits is also not easy.
If you work in a service provider, you will have dozens of networking device vendors, different models, different versions of the same models (templates), and all will have different limits and measurements. Also, similar appliances will need different measurements because their configurations differ. For example, a Cisco ASA configured with multiple contexts must be measured differently than a non-virtualized chassis.

(will continue ...)

ssh login delays 12 seconds

Hi,

I have been banging my head for the past hour or so trying to resolve an ssh login lag of about 12 seconds to a centos server.
The fix can be done on the client if you are using putty, by disabling the GSSAPI option:
   Connection / SSH / Auth / GSSAPI
   Attempt GSSAPI authentication (SSH-2 only)

To fix it permanently, I believe you have to do it on the server. Let's try ...
I am using centos:

I did the following in /etc/ssh/sshd_config:
# GSSAPI options
GSSAPIAuthentication no
#GSSAPIAuthentication yes
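
For the change to take effect, sshd has to be restarted (a sketch, assuming a sysvinit-style CentOS):

service sshd restart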

It worked! Now I have no 12-second delay!