@@ -454,6 +454,27 @@ ipt_NETFLOW linux 2.6.x-3.x kernel module by <abc@telekom.ru> -- 2008-2014.
454454 actual binary loaded;
455455 aggr mac vlan: tags to identify compile time options that are enabled.
456456
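 The fields described below all come from the module's statistics file; a
 quick way to view the whole report (assuming the usual proc path used by
 ipt_NETFLOW) is:

   # cat /proc/net/stat/ipt_netflow
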
457+ > Protocol version 10 (ipfix), refresh-rate 20, timeout-rate 30, (templates 2, active 2). Timeouts: active 5, inactive 15. Maxflows 2000000
458+
459+ Protocol version currently in use. Refresh-rate and timeout-rate
460+ for v9 and IPFIX. Total templates generated and currently active.
461+ Timeout: active X: how many seconds to wait before exporting an active flow.
462+ - same as the sysctl net.netflow.active_timeout variable.
463+ inactive X: how many seconds to wait before exporting an inactive flow.
464+ - same as the sysctl net.netflow.inactive_timeout variable.
465+ Maxflows 2000000: the maxflows limit.
466+ - all flows above the maxflows limit are dropped.
467+ - you can control the maxflows limit with the sysctl net.netflow.maxflows variable.
468+
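 As a sketch, the timeout and maxflows settings mentioned above can be
 inspected and changed at run time through their sysctl variables (the values
 below are only examples):

   # sysctl net.netflow.active_timeout net.netflow.inactive_timeout
   # sysctl net.netflow.maxflows=2000000
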
469+ > Promisc hack is disabled (observed 0 packets, discarded 0).
470+
471+ observed n: number of packets observed by the promisc hack, showing that it is really working.
472+
473+ > Natevents disabled, count start 0, stop 0.
474+
475+ - Whether natevents mode is disabled or enabled, and how many start and stop
476+ events have been reported.
477+
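 If NAT events are wanted, natevents can be switched on at run time; this
 assumes your build exposes it as the net.netflow.natevents sysctl:

   # sysctl net.netflow.natevents=1
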
457478> Flows: active 5187 (peak 83905 reached 0d0h1m ago), mem 283K, worker delay 100/1000 (37 ms, 0 us, 4:0 0 [3]).
458479
459480 active X: currently active flows in memory cache.
@@ -466,7 +487,7 @@ ipt_NETFLOW linux 2.6.x-3.x kernel module by <abc@telekom.ru> -- 2008-2014.
466487 worker delay X/HZ: how frequently the exporter scans the flow table per second.
467488 Rest is boring debug info.
468489
469- > Hash: size 8192 (mem 32K), metric 1.00, [1.00, 1.00, 1.00]. MemTraf : 1420 pkt, 364 K (pdu 0, 0) .
490+ > Hash: size 8192 (mem 32K), metric 1.00, [1.00, 1.00, 1.00]. InHash : 1420 pkt, 364 K, InPDU 28, 6716 .
470491
471492 Hash: size X: current hash size/limit.
472493 - you can control this by sysctl net.netflow.hashsize variable.
@@ -482,87 +503,68 @@ ipt_NETFLOW linux 2.6.x-3.x kernel module by <abc@telekom.ru> -- 2008-2014.
482503 15 minutes. Sort of hash table load average. First value is instantaneous.
483504 You can try to increase hashsize if the averages are more than 1 (increase
484505 certainly if >= 2).
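 For example, if the averages stay around 2 you could roughly double the hash
 size via the sysctl mentioned above (the value is only an illustration):

   # sysctl net.netflow.hashsize=16384
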
485- MemTraf: X pkt, X K: how much traffic accounted for flows that are in memory.
486- - these flows that are residing in internal hash table.
487- pdu X, X: how much traffic in flows preparing to be exported.
488- - it is included already in aforementioned MemTraf total.
489-
490- > Protocol version 10 (ipfix), refresh-rate 20, timeout-rate 30, (templates 2, active 2). Timeouts: active 5, inactive 15. Maxflows 2000000
491-
492- Protocol version currently in use. Refresh-rate and timeout-rate
493- for v9 and IPFIX. Total templates generated and currently active.
494- Timeout: active X: how much seconds to wait before exporting active flow.
495- - same as sysctl net.netflow.active_timeout variable.
496- inactive X: how much seconds to wait before exporting inactive flow.
497- - same as sysctl net.netflow.inactive_timeout variable.
498- Maxflows 2000000: maxflows limit.
499- - all flows above maxflows limit must be dropped.
500- - you can control maxflows limit by sysctl net.netflow.maxflows variable.
506+ InHash: X pkt, X K: how much traffic is accounted in flows residing in the hash table.
507+ InPDU X, X: how much traffic is in flows currently being prepared for export.
501508
502509> Rate: 202448 bits/sec, 83 packets/sec; 1 min: 668463 bps, 930 pps; 5 min: 329039 bps, 483 pps
503510
504511 - Module throughput values for 1 second, 1 minute, and 5 minutes.
505512
506- > cpu# stat: <search found new [metric], trunc frag alloc maxflows>, sock: <ok fail cberr, bytes >, traffic: <pkt, bytes>, drop: <pkt, bytes>
507- > cpu0 stat: 980540 10473 180600 [1.03], 0 0 0 0, sock: 4983 928 0, 7124 K , traffic: 188765, 14 MB, drop: 27863, 1142 K
513+ > cpu# pps; <search found new [metric], trunc frag alloc maxflows>, traffic: <pkt, bytes>, drop: <pkt, bytes>
514+ > cpu0 123; 980540 10473 180600 [1.03], 0 0 0 0, traffic: 188765, 14 MB, drop: 27863, 1142 K
508515
509516 cpu#: Total and per-CPU statistics for:
510- stat: <search found new, trunc frag alloc maxflows>: internal stat for:
517+ pps: packets per second on this CPU. It's useful to debug load imbalance.
518+ <search found new, trunc frag alloc maxflows>: internal stat for:
511519 search found new: hash table searched, found, and not found counters.
512520 [metric]: one minute (ewma) average hash metric per cpu.
513521 trunc: how many truncated packets are ignored
514- - these are that possible don't have valid IP header.
515- - accounted in drop packets counter but not in drop bytes.
522+ - for example, packets that don't have a valid IP header.
523+ - they are also accounted in the drop packets counter, but not in drop bytes.
516524 frag: how many fragmented packets have been seen.
517- - kernel always defragments INPUT/OUTPUT chains for us.
525+ - the kernel defragments the INPUT/OUTPUT chains for us if the nf_defrag_ipv[46]
526+ module is loaded.
518527 - these packets are not ignored but not reassembled either, so:
519528 - if there is not enough data in a fragment (e.g. tcp ports) it is considered
520- zero.
529+ to be zero.
521530 alloc: how many cache memory allocations have failed.
522- - packets ignored and accounted in drop stat.
531+ - such packets are ignored and accounted in the traffic drop stat.
523532 - probably increase system memory if this ever happens.
524533 maxflows: how many packets were ignored because maxflows (maximum active flows) was reached.
525- - packets ignored and accounted in drop stat.
534+ - such packets are ignored and accounted in the traffic drop stat.
526535 - you can control maxflows limit by sysctl net.netflow.maxflows variable.
527536
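 If fragment handling matters for your setup, the defrag modules mentioned
 above can be loaded explicitly, and the per-CPU lines can then be checked for
 load imbalance (commands are illustrative, assuming the usual proc path):

   # modprobe nf_defrag_ipv4
   # modprobe nf_defrag_ipv6
   # grep ^cpu /proc/net/stat/ipt_netflow
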
528- sock: <ok fail cberr, bytes>: table of exporting stats for:
529- ok: how much Netflow PDUs are exported (i.e. UDP packets sent by module).
530- fail: how much socket errors (i.e. packets failed to be sent).
531- - packets dropped and their internal statistics cumulatively accounted in
532- drop stat.
533- cberr: how much connection refused ICMP errors we got from export target.
534- - probably you not launched collector software on destination,
535- - or specified wrong destination address.
536- - flows lost in this fashion is not possible to account in drop stat.
537- - these are ICMP errors, and would look like this in tcpdump:
538- 05:04:09.281247 IP alice.19440 > bob.2055: UDP, length 120
539- 05:04:09.281405 IP bob > alice: ICMP bob udp port 2055 unreachable, length 156
540- bytes: how much kilobytes of exporting data successfully sent by the module.
541-
542537 traffic: <pkt, bytes>: how much traffic is accounted.
543538 pkt, bytes: sum of packets/megabytes accounted by module.
544539 - flows that failed to be exported (on socket error) are accounted here too.
545540
546541 drop: <pkt, bytes>: how much of traffic is not accounted.
547- pkt, bytes: sum of packets/kilobytes we are lost/ dropped.
548- - reasons they are dropped and accounted here:
542+ pkt, bytes: sum of packets/kilobytes dropped by the metering process.
543+ - reasons for drops accounted here:
549544 truncated/fragmented packets,
550545 packet is for new flow but failed to allocate memory for it,
551- packet is for new flow but maxflows is already reached,
552- all flows in export packets that got socket error.
546+ packet is for new flow but maxflows is already reached.
547+ Traffic lost due to socket errors is not accounted here. See below for
548+ details on export and socket errors.
553549
554- > Natevents disabled, count start 0, stop 0 .
550+ > Export: Rate 0 bytes/s; Total 2 pkts, 0 MB, 18 flows; Errors 0 pkts; Traffic lost 0 pkts, 0 Kbytes, 0 flows .
555551
556- - Natevents mode disabled or enabled, and how much start or stop events
557- are reported.
552+ Rate X bytes/s: traffic rate generated by the exporter itself.
553+ Total X pkts, X MB: total amount of traffic generated by the exporter.
554+ X flows: how many data flows have been exported.
555+ Errors X pkts: how many packets were not sent due to socket errors.
556+ Traffic lost 0 pkts, 0 Kbytes, 0 flows: how much metered traffic is lost
557+ due to socket errors.
558+ Note that `cberr' errors are not accounted here due to their asynchronous
559+ nature. Read below about `cberr' errors.
558560
559561> sock0: 10.0.0.2:2055 unconnected (1 attempts).
560562
561563 If the socket is unconnected (for example if the module was loaded before interfaces were
562564 up) it shows how many connection attempts have failed. It will keep trying to connect
563565 until it succeeds.
564566
565- > sock0: 10.0.0.2:2055, sndbuf 106496, filled 0, peak 106848; err: sndbuf reached 928, connect 0, other 0
567+ > sock0: 10.0.0.2:2055, sndbuf 106496, filled 0, peak 106848; err: sndbuf reached 928, connect 0, cberr 0, other 0
566568
567569 sockX: per destination stats for:
568570 X.X.X.X:Y: destination ip address and port.
@@ -579,6 +581,13 @@ ipt_NETFLOW linux 2.6.x-3.x kernel module by <abc@telekom.ru> -- 2008-2014.
579581 sndbuf reached X: how many packets were dropped due to sndbuf being too small
580582 (error -11).
581583 connect X: how many connection attempts have failed.
584+ cberr X: how many connection refused ICMP errors we got from the export target.
585+ - probably you have not launched the collector software on the destination,
586+ - or you specified the wrong destination address.
587+ - flows lost in this fashion cannot be accounted in the drop stat.
588+ - these are ICMP errors, and would look like this in tcpdump:
589+ 05:04:09.281247 IP alice.19440 > bob.2055: UDP, length 120
590+ 05:04:09.281405 IP bob > alice: ICMP bob udp port 2055 unreachable, length 156
582591 other X: dropped due to other possible errors.
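 If the `sndbuf reached' counter keeps growing, enlarging the socket send
 buffer usually helps; assuming your build exposes it as the net.netflow.sndbuf
 sysctl, something like this could be tried (the value is only an example):

   # sysctl net.netflow.sndbuf=4194304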
583592
584593> aggr0: ...