Iperf2 vs Iperf3 – Network Performance Measurement

Iperf is a network performance measurement and optimization tool.

The iperf application is a cross-platform program that provides standard network performance metrics. Iperf comprises a client and a server that generate a stream of data to assess the throughput between two endpoints, in one or both directions.
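The client/server measurement model described above can be sketched in a few lines of Python: the client streams bytes to the server for a fixed duration, and throughput is computed from bytes transferred over elapsed time. This is an illustrative sketch of the idea, not iperf's actual code; all names and parameters here are made up.

```python
import socket
import threading
import time

DURATION = 1.0     # seconds to transmit (iperf's -t option plays this role)
CHUNK = 64 * 1024  # bytes per send() call

def server(listener, result):
    # Accept one connection and count the bytes received until the client closes.
    conn, _ = listener.accept()
    total = 0
    start = time.monotonic()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    elapsed = time.monotonic() - start
    conn.close()
    result["bytes"] = total
    result["bps"] = total * 8 / elapsed  # throughput in bits per second

def client(port):
    # Stream a fixed payload as fast as possible until the deadline passes.
    payload = b"\x00" * CHUNK
    with socket.create_connection(("127.0.0.1", port)) as s:
        deadline = time.monotonic() + DURATION
        while time.monotonic() < deadline:
            s.sendall(payload)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

result = {}
t = threading.Thread(target=server, args=(listener, result))
t.start()
client(port)
t.join()
listener.close()
print(f"transferred {result['bytes']} bytes, {result['bps'] / 1e6:.1f} Mbit/s")
```

Real iperf adds much more on top of this skeleton (UDP mode, interval reports, parallel streams, rate limiting), but the core loop is the same.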

A typical iperf output includes a time-stamped report of the amount of data transferred and the measured throughput.

Iperf2

Iperf2 is a network throughput and responsiveness measurement tool that supports TCP and UDP. One of its goals is to keep the iperf codebase functioning across a wide range of platforms and operating systems.

Its multi-threaded architecture scales with the number of CPUs or cores in a system, allowing it to measure and report network performance using both high- and low-impact strategies.

Features of Iperf2

  • Supports small report intervals (100 µs or greater; configure with --enable-fastsampling for high-precision interval timestamps)
  • Supports SO_RCVTIMEO so the server still produces interval reports when no packets arrive
  • Supports SO_SNDTIMEO on sends so that socket writes won’t block beyond -t or -i
  • Supports SO_TIMESTAMP for kernel-level packet timestamps
  • Supports end-to-end latency as mean/min/max/stdev (UDP, -e required; assumes client and server clocks are synchronized, e.g. via Precision Time Protocol to an OCXO oscillator such as a Spectracom unit)
  • Supports rate-limited TCP flows (via -b) using a simplified token bucket
  • Supports packets per second (UDP) via pps as units (e.g. -b 1000pps)
  • Shows PPS in both client and server (UDP) reports (-e required)
  • Supports real-time scheduling as a command-line option (--realtime or -z, assuming proper user privileges)
  • Displays the target loop time in the initial client header (UDP)
  • Adds support for IPv6 link-local addresses (e.g. iperf -c fe80::d03a:d127:75d2:4112%eno1)
  • UDP IPv6 payloads default to 1450 bytes, one payload per Ethernet frame
  • Supports isochronous traffic (via --isochronous) with frame bursts, variable bit rate (VBR) traffic, and frame IDs
  • SSM multicast support for v4 and v6 via -H or --ssm-host, e.g. iperf -s -B ff1e::1 -u -V -H fc00::4
  • Latency histograms for packets and frames (e.g. --udp-histogram=10u,200000,0.03,99.97)
  • Supports timed transmit starts via --txstart-time <unix.epoch time>
  • Supports clients incrementing the destination IP with -P via --incr-dstip
  • Supports varying the offered load using a log-normal distribution around a mean and standard deviation (per -b <mean>,<stdev>)
  • Honors -T (TTL) for both unicast and multicast
  • UDP uses 64-bit sequence numbers (while remaining interoperable with 2.0.5, which uses 32-bit sequence numbers)
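The "simplified token bucket" behind -b rate limiting works by accruing tokens at the target bit rate and allowing a write only when enough tokens have accumulated to cover it. The following standalone sketch (not iperf2's actual code; all names are illustrative) simulates the mechanism with discrete time steps:

```python
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # fill rate in bits per second
        self.capacity = burst_bits  # maximum burst size in bits
        self.tokens = burst_bits
        self.last = 0.0

    def consume(self, bits, now):
        # Accrue tokens for the time elapsed since the last call, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= bits:
            self.tokens -= bits
            return True   # send allowed
        return False      # caller should wait (iperf2 delays the next write)

# Simulate a 1 Mbit/s target: attempt a 10,000-bit write every millisecond.
bucket = TokenBucket(rate_bps=1_000_000, burst_bits=10_000)
sent_bits = 0
for step in range(1000):  # one simulated second in 1 ms steps
    if bucket.consume(10_000, now=step / 1000):
        sent_bits += 10_000
print(sent_bits)  # close to the 1,000,000-bit target
```

Even though the sender attempts ten times the target rate, the bucket admits only enough writes to stay near 1 Mbit/s, which is exactly the shaping effect -b provides.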

Iperf2 supported operating systems

  • Linux, Windows 10, Windows 7, Windows XP, macOS, Android, and some set-top-box operating systems.

Download Iperf2

Iperf2 Windows v2.1.8

Iperf2 Linux v2.1.8

Iperf3

The Iperf3 application is a from-scratch rewrite of iperf, intended to produce a smaller and simpler codebase.

iPerf3 is a tool for measuring the maximum achievable bandwidth on an IP network. It lets you fine-tune various timing, buffer, and protocol parameters (TCP, UDP, and SCTP, over IPv4 and IPv6), and it reports bandwidth, loss, and other metrics for each test.

Features of Iperf3

  • TCP and SCTP: measures bandwidth, reports MSS/MTU size and observed read sizes, supports setting the TCP window size via socket buffers
  • UDP: the client can create UDP streams at a specified bandwidth; measures packet loss and delay jitter; multicast capable
  • Both the client and the server can run several simultaneous streams (-P option)
  • The server handles multiple connections, rather than stopping after a single test
  • Can run for a specified time (-t option) rather than for a fixed amount of data (-n or -k options)
  • Periodically prints intermediate bandwidth, jitter, and loss reports at specified intervals (-i option)
  • Can run the server as a daemon (-D option)
  • Can use representative files as flows to test how link-layer compression affects achievable bandwidth (-F option)
  • The server accepts one client at a time (iPerf3), whereas iPerf2 accepts multiple clients simultaneously
  • Can skip the TCP slow-start period (-O option)
  • Can set a target bandwidth for UDP and (new) TCP (-b option)
  • Can set the IPv6 flow label (-L option)
  • Can set the congestion control algorithm (-C option)
  • Can use SCTP instead of TCP (--sctp option)
  • Can output results in JSON format (-J option)
  • Disk read test (server: iperf3 -s / client: iperf3 -c testhost -i1 -F filename)
  • Disk write test (server: iperf3 -s -F filename / client: iperf3 -c testhost -i1)
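The JSON output (-J) is what makes iperf3 convenient for scripted post-processing. The snippet below parses a trimmed-down sample shaped like iperf3's report; the end.sum_sent / end.sum_received fields follow iperf3's JSON schema, though real output carries many more fields, and the numeric values here are made up for illustration.

```python
import json

# A heavily trimmed sample in the shape of `iperf3 -c <host> -J` output.
sample = """
{
  "end": {
    "sum_sent":     { "bytes": 131072000, "seconds": 10.0, "bits_per_second": 104857600.0 },
    "sum_received": { "bytes": 130809856, "seconds": 10.0, "bits_per_second": 104647884.8 }
  }
}
"""

report = json.loads(sample)
sent = report["end"]["sum_sent"]["bits_per_second"]
received = report["end"]["sum_received"]["bits_per_second"]
print(f"sent: {sent / 1e6:.1f} Mbit/s, received: {received / 1e6:.1f} Mbit/s")
```

In practice you would capture the output of the iperf3 process (e.g. via a pipe) and feed it to json.loads the same way; the sent/received gap is one quick indicator of in-flight loss or buffering.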

Operating systems supported by Iperf3

  • Windows, Linux, Android, macOS, FreeBSD, OpenBSD, NetBSD, VxWorks, Solaris

Download Iperf3

Iperf3 Windows 64bit v3.1.3

Iperf3 Windows 32bit v3.1.3

Iperf3 Linux v3.1.3

Iperf2 vs Iperf3

| Feature | Iperf 2 | Iperf 3 |
| --- | --- | --- |
| Traffic types | | |
| TCP traffic | Y | Y |
| UDP traffic | Y | Y |
| SCTP traffic | N | Y |
| IPv4 | Y | Y |
| IPv6 | Y | Y |
| Multicast traffic (including SSM) | Y | N |
| TCP connect only | Y | N |
| Layer 2 checks | Y | N |
| Output options | | |
| Human-readable format | Y | Y |
| JSON output | N | Y |
| CSV (basic only) | Y | N |
| Hide IP addresses in output (v4 only) | Y | N |
| Client-side server reports | N | Y |
| Traffic profiles | | |
| Fair-queue rate limiting | Y | Y |
| Write rate limiting | Y | Y |
| Read rate limiting (TCP) | Y | N |
| Bursts | Y | Y |
| Isochronous (video) TCP/UDP | Y | N |
| Reverse roles | Y | Y |
| Bidirectional traffic | Y | Y |
| Full duplex, same socket | Y | N |
| TCP bounceback w/ optional working load(s) | Y | N |
| Low-duty-cycle traffic with server-side stats | Y | N |
| TCP_NOTSENT_LOWAT with select() (via --tcp-write-prefetch) | Y | N |
| TCP near congestion (experimental) | Y | N |
| Metrics | | |
| Throughput | Y | Y |
| Responsiveness per second (RPS) | Y | N |
| UDP packets (total/lost) | Y | Y |
| UDP jitter | Y | Y |
| UDP packet latencies | Y | N |
| TCP/UDP frame/burst latencies | Y | N |
| TCP write-to-read latencies | Y | N |
| Network power (latency/throughput) | Y | N |
| InP – bytes in queues (Little’s law) | Y | N |
| TCP CWND | Y | N |
| TCP retries | Y | Y |
| TCP RTT | Y | Y |
| Send-side write delay histograms | Y | N |
| UDP packets per second | Y | N |
| Latency histograms | Y | N |
| TCP connect times | Y | N |
| TCP responses per interval | Y | N |
| Sum-only output | Y | N |
| Other | | |
| Multi-threaded design | Y | N |
| Parallel -P technique | Threads | Processes |
| Real-time scheduling | Y | N |
| -t support for server | Y | N |
| TAP virtual interface support (receive only) via --tap-dev | Y | N |
| CPU affinity | N | Y |
| Zero copy | N | Y |
| IPv6 flow labels | N | Y |
| --omit option (skip first samples per time in seconds) | N | Y |
| Incr dst IP option with -P | Y | N |
| Incr dst port option with -P | Y | N |
| Incr src port option with -P | Y | N |
| Device or interface binding | Y | Y |
| Source port binding | Y | N |
| Scheduled tx start time | Y | N |
| Delayed tx start time | Y | N |
| User password | N | Y |
| Permit keys | Y (TCP only) | N |
| Stateless UDP | Y | N |
| Python framework (asyncio) | Y (flows) | N |
| Testing WiFi through 100G | Y | N/A |
| Scaling to 1000+ threads | Y | N/A |
