Iperf is a network performance measurement and optimization tool.
The iperf application is a cross-platform program that provides standard network performance metrics. Iperf comprises a client and a server that generate a data stream to measure the throughput between two endpoints in one or both directions.
A typical iperf output includes a time-stamped report of the amount of data transferred and the measured throughput.
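As a minimal sketch of a run (the hostname server.example.com is a placeholder), one endpoint listens as the server while the other connects as the client:

```
# On the server endpoint: listen for test connections
iperf -s

# On the client endpoint: run a 10-second TCP throughput test
iperf -c server.example.com -t 10
```

The client then prints the time-stamped transfer and throughput report described above; with -r or -d, iperf2 can also test the reverse direction or both directions.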
Iperf2
Iperf2 is a network throughput and responsiveness measurement tool that supports TCP and UDP. One of its goals is to keep the iperf codebase working across a wide range of platforms and operating systems.
It has a multi-threaded architecture that scales with the number of CPUs or cores in a system, and it can measure and report network performance using both high- and low-impact strategies.
Features of Iperf2
- Supports report intervals as small as 100 µs (configure with --enable-fastsampling for high-precision interval time output)
- Supports SO_RCVTIMEO so the server still produces interval reports when no packets arrive
- Supports SO_SNDTIMEO when sending so that socket writes won't block beyond -t or -i
- Supports SO_TIMESTAMP for kernel-level packet timestamps
- Supports end-to-end latency as mean/min/max/stdev (UDP, -e required), assuming client and server clocks are synchronized (e.g. with Precision Time Protocol and a disciplined oscillator such as a Spectracom OCXO)
- Supports TCP rate-limited flows (via -b) using a simplified token bucket
- Supports packets per second (UDP) with pps as the units (e.g. -b 1000pps)
- Shows PPS in client and server UDP reports (-e required)
- Supports real-time schedulers as a command-line option (--realtime or -z, assuming sufficient user privileges)
- Displays the target loop time in the initial client header (UDP)
- Supports IPv6 link-local addresses (e.g. iperf -c fe80::d03a:d127:75d2:4112%eno1)
- UDP IPv6 payload defaults to 1450 bytes so that each payload fits in a single Ethernet frame
- Supports isochronous traffic (via --isochronous) with frame bursts, variable bit rate (VBR) traffic, and frame IDs
- SSM multicast support for IPv4 and IPv6 via -H or --ssm-host (e.g. iperf -s -B ff1e::1 -u -V -H fc00::4)
- Latency histograms for packets and frames (e.g. --udp-histogram=10u,200000,0.03,99.97; see the combined example after this list)
- Supports timed transmit starts via --txstart-time <unix epoch time>
- Supports clients that increment the destination IP with -P via --incr-dstip
- Supports varying the offered load using a log-normal distribution around a mean and standard deviation (per -b <mean>,<stdev>)
- Honors -T (TTL) for unicast and multicast
- UDP uses a 64-bit sequence number (while still interoperating with 2.0.5, which uses a 32-bit sequence number)
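As a sketch of how several of these options combine (the server address 192.168.1.10 is a placeholder, and exact histogram syntax can vary between iperf2 releases), a UDP test with enhanced reports, a pps-based offered load, and a latency histogram might look like:

```
# Server: UDP mode with enhanced reports (-e) and a latency histogram
# (10 us bin width, 200000 bins, 0.03/99.97 percentiles)
iperf -s -u -e --udp-histogram=10u,200000,0.03,99.97

# Client: offer 1000 packets per second for 30 seconds
iperf -c 192.168.1.10 -u -e -b 1000pps -t 30
```

The latency figures are only meaningful if the two hosts' clocks are synchronized, as noted above.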
Operating systems supported by Iperf2
- Linux, Windows 10, Windows 7, Windows XP, macOS, Android, and some set-top box operating systems.
Iperf3
The Iperf3 application is a rewrite of iperf from scratch to create a smaller and simpler codebase.
iPerf3 is a tool for measuring the maximum achievable bandwidth on IP networks in real time. It allows you to tune various parameters related to timing, buffers, and protocols (TCP, UDP, and SCTP over IPv4 and IPv6), and for each test it reports the bandwidth, packet loss, and other metrics.
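As a minimal sketch (testhost is a placeholder), a basic iPerf3 run looks much like iperf2, but the server listens on TCP port 5201 by default:

```
# Server: listen for test connections (default port 5201)
iperf3 -s

# Client: 10-second TCP test with one-second interval reports
iperf3 -c testhost -t 10 -i 1
```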
Features of Iperf3
- TCP and SCTP: measures bandwidth, reports the MSS/MTU size and observed read sizes, and supports setting the TCP window size via socket buffers.
- UDP: the client can create UDP streams at a specified bandwidth, measures packet loss and delay jitter, and is multicast capable.
- Both the client and the server can have several simultaneous connections (-P option).
- The server handles multiple connections, rather than stopping after a single test.
- Can run for a specified time (-t option), rather than for a set amount of data to transfer (-n or -k options).
- Prints periodic, intermediate bandwidth, jitter, and loss reports at specified intervals (-i option).
- Run the server as a daemon (-D option)
- Use representative flows to test how link layer compression affects achievable bandwidth (-F option).
- The iPerf3 server accepts a single client at a time, whereas an iPerf2 server accepts multiple clients simultaneously.
- Skip results from the TCP slow-start period by omitting the first seconds of the test (-O option).
- Set the target bandwidth for UDP and (new) TCP (-b option).
- Set the IPv6 flow label (-L option)
- Set the congestion control algorithm (-C option)
- Use SCTP instead of TCP (--sctp option)
- Output in JSON format (-J option; see the example after this list).
- Disk read test (server: iperf3 -s / client: iperf3 -c testhost -i1 -F filename)
- Disk write test (server: iperf3 -s -F filename / client: iperf3 -c testhost -i1)
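A sketch combining several of these options (testhost is a placeholder): four parallel streams, a 30-second run, the first three seconds omitted, and JSON output captured to a file:

```
# Client: 4 parallel streams (-P), 30 s (-t), omit first 3 s (-O),
# JSON output (-J) redirected to a file
iperf3 -c testhost -P 4 -t 30 -O 3 -J > results.json

# UDP variant: target 50 Mbit/s and report loss and jitter
iperf3 -c testhost -u -b 50M -t 30
```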
Operating systems supported by Iperf3
- Windows, Linux, Android, macOS, FreeBSD, OpenBSD, NetBSD, VxWorks, Solaris
Iperf2 vs Iperf3
| Feature | Iperf 2 | Iperf 3 |
| --- | --- | --- |
| **Traffic types** | | |
| TCP traffic | Y | Y |
| UDP traffic | Y | Y |
| SCTP traffic | N | Y |
| IPv4 | Y | Y |
| IPv6 | Y | Y |
| Multicast traffic (including SSM) | Y | N |
| TCP connect only | Y | N |
| Layer 2 checks | Y | N |
| **Output options** | | |
| Human-readable format | Y | Y |
| JSON output | N | Y |
| CSV (basic only) | Y | N |
| Hide IP addresses in output (IPv4 only) | Y | N |
| Client-side server reports | N | Y |
| **Traffic profiles** | | |
| Fair-queue rate limiting | Y | Y |
| Write rate limiting | Y | Y |
| Read rate limiting (TCP) | Y | N |
| Bursts | Y | Y |
| Isochronous (video) TCP/UDP | Y | N |
| Reverse roles | Y | Y |
| Bidirectional traffic | Y | Y |
| Full duplex over the same socket | Y | N |
| TCP bounceback with optional working load(s) | Y | N |
| Low-duty-cycle traffic with server-side stats | Y | N |
| TCP_NOTSENT_LOWAT with select() (via --tcp-write-prefetch) | Y | N |
| TCP near congestion (experimental) | Y | N |
| **Metrics** | | |
| Throughput | Y | Y |
| Responses per second (RPS) | Y | N |
| UDP packets (total/lost) | Y | Y |
| UDP jitter | Y | Y |
| UDP packet latencies | Y | N |
| TCP/UDP frame/burst latencies | Y | N |
| TCP write-to-read latencies | Y | N |
| Network power (latency/throughput) | Y | N |
| InP, bytes in queues (Little's law) | Y | N |
| TCP CWND | Y | N |
| TCP retries | Y | Y |
| TCP RTT | Y | Y |
| Send-side write delay histograms | Y | N |
| UDP packets per second | Y | N |
| Latency histograms | Y | N |
| TCP connect times | Y | N |
| TCP responses per interval | Y | N |
| Sum-only output | Y | N |
| **Other** | | |
| Multi-threaded design | Y | N |
| Parallel -P technique | Threads | Processes |
| Real-time scheduling | Y | N |
| -t support for the server | Y | N |
| TAP virtual interface support (receive only) via --tap-dev | Y | N |
| CPU affinity | N | Y |
| Zero copy | N | Y |
| IPv6 flow labels | N | Y |
| --omit option (skip the first seconds of samples) | N | Y |
| Incr dst IP option with -P | Y | N |
| Incr dst port option with -P | Y | N |
| Incr src port option with -P | Y | N |
| Device or interface binding | Y | Y |
| Source port binding | Y | N |
| Scheduled tx start time | Y | N |
| Delayed tx start time | Y | N |
| User password | N | Y |
| Permit keys | Y (TCP only) | N |
| Stateless UDP | Y | N |
| Python framework (asyncio) | Y (flows) | N |
| Testing WiFi through 100G | Y | N/A |
| Scaling to 1000+ threads | Y | N/A |
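To illustrate one row of the table: both tools can generate bidirectional traffic, but the invocations differ (the address is a placeholder; --full-duplex requires iperf 2.1 or later and --bidir requires iperf3 3.7 or later):

```
# iPerf2: full-duplex traffic over the same socket
iperf -c 192.168.1.10 --full-duplex

# iPerf2: classic bidirectional dual test on separate sockets
iperf -c 192.168.1.10 -d

# iPerf3: bidirectional test
iperf3 -c 192.168.1.10 --bidir
```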