Flexible NetFlow
Network administrators and engineers need to obtain statistics for their network infrastructure. SNMP does a great job of providing broad information such as packet counts, interface changes, and device health, but it cannot give us granular information about the traffic crossing our networks. This is why Cisco introduced NetFlow, a tool that works alongside SNMP by collecting statistics on the unique types and characteristics of data flowing through interfaces. In this post I will discuss how to implement NetFlow at a fundamental level and the concepts revolving around it.
Original NetFlow
Once NetFlow is enabled on an interface, it does the following:
- Sort all packets into existing or new “flows” based on:
- Source IP
- Destination IP
- Source & Destination Port Numbers
- Layer 3 protocol type (the IP Protocol field, e.g. TCP or UDP)
- ToS Byte
- Input Interface
- Collect flow data and place it into a NetFlow Cache where it will be temporarily stored unless configured to have persistent storage.
- This information can be viewed with IOS CLI commands, or by exporting it to a NetFlow Collector.
- NetFlow cache entries expire (and an export is triggered) when:
- The flow has been idle for a specified period of time (15 seconds by default).
- The flow is classified as “long-lived” (30 minutes by default); the flow is exported, and if traffic from it is ongoing a new flow cache entry is built.
- A TCP FIN or RST flag is seen, signaling the end of a TCP connection.
- The cache becomes full, in which case heuristics are applied to aggressively age out groups of flows.
- Expired flows are grouped together into NetFlow Export datagrams and sent to a NetFlow Collector, typically (but not always) on UDP port 2055.
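To make the flow-classification step concrete, here is a minimal Python sketch (illustrative only, not router code) that groups packets into flows on the same seven key fields; the packet dictionaries and field names are my own invention for the example:

```python
from collections import namedtuple

# The seven NetFlow key fields listed above.
FlowKey = namedtuple(
    "FlowKey",
    ["src_ip", "dst_ip", "src_port", "dst_port", "protocol", "tos", "input_if"],
)

def classify(packets):
    """Sort packets into flows keyed on the seven-tuple, tallying per-flow stats."""
    cache = {}
    for pkt in packets:
        key = FlowKey(pkt["src_ip"], pkt["dst_ip"], pkt["src_port"],
                      pkt["dst_port"], pkt["protocol"], pkt["tos"], pkt["input_if"])
        # New key -> new flow; existing key -> update the existing entry.
        entry = cache.setdefault(key, {"packets": 0, "bytes": 0})
        entry["packets"] += 1
        entry["bytes"] += pkt["length"]
    return cache
```

Packets that match on all seven fields land in the same cache entry; any difference (a new source port, a different ToS value) starts a new flow, which is exactly why NetFlow can report per-conversation detail that SNMP counters cannot.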
Flexible NetFlow
Before we're able to begin configuring Flexible NetFlow, Cisco Express Forwarding (CEF) must be enabled; with that in place we can create a flow monitor:
R1(config)# ip cef
R1(config)# flow monitor MON_NAME
Custom Records
R1(config)# flow record REC_NAME
R1(config-flow-record)# description WEB_SERVER
R1(config-flow-record)# match ipv4
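Putting the pieces together, a sketch of an end-to-end Flexible NetFlow configuration might look like the following. REC_NAME, EXP_NAME, and MON_NAME follow the placeholder names used above; the collector address 192.0.2.10 and interface GigabitEthernet0/0 are assumptions for illustration only:

```text
R1(config)# flow record REC_NAME
R1(config-flow-record)# match ipv4 source address
R1(config-flow-record)# match ipv4 destination address
R1(config-flow-record)# match transport source-port
R1(config-flow-record)# match transport destination-port
R1(config-flow-record)# collect counter bytes
R1(config-flow-record)# collect counter packets
R1(config)# flow exporter EXP_NAME
R1(config-flow-exporter)# destination 192.0.2.10
R1(config-flow-exporter)# transport udp 2055
R1(config)# flow monitor MON_NAME
R1(config-flow-monitor)# record REC_NAME
R1(config-flow-monitor)# exporter EXP_NAME
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip flow monitor MON_NAME input
```

Once the monitor is applied to an interface, the local cache can be inspected with show flow monitor MON_NAME cache, and expired entries are exported to the collector defined in EXP_NAME.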
On the collector side, a small script can periodically compress and archive aging flow files. The directory paths, file pattern, and age threshold below are example values; adjust them to your environment:
#!/bin/bash
# Collector-side housekeeping: compress and archive old NetFlow files.
FLOW_DIR="/var/netflow"              # example: where the collector writes flow files
ARCHIVE_DIR="/var/netflow/archive"   # example: long-term storage location
FILE_PATTERN="nfcapd.*"              # example: nfdump-style file names
DAYS_OLD=7                           # example: compress files older than 7 days
mkdir -p "$ARCHIVE_DIR"
echo "[$(/usr/bin/date '+%Y-%m-%d %H:%M:%S')] Starting NetFlow archive job"
# Find old, uncompressed NetFlow files and archive them
/usr/bin/find "$FLOW_DIR" -maxdepth 1 -type f \
-name "$FILE_PATTERN" ! -name '*.gz' \
-mtime +"$DAYS_OLD" -print0 |
while IFS= read -r -d '' file; do
echo "Archiving: $file"
# Compress in place
/usr/bin/gzip -9 "$file"
# Move the compressed file to the archive directory
/usr/bin/mv "${file}.gz" "$ARCHIVE_DIR"/
done
echo "[$(/usr/bin/date '+%Y-%m-%d %H:%M:%S')] NetFlow archive job finished"
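If the script is saved somewhere like /usr/local/bin/netflow-archive.sh (a hypothetical path) and made executable, it could be scheduled with a cron entry such as:

```text
# Run the archive job every night at 02:30 (example schedule)
30 2 * * * /usr/local/bin/netflow-archive.sh >> /var/log/netflow-archive.log 2>&1
```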