# bbperf - An end-to-end performance and bufferbloat measurement tool
`bbperf` measures what matters most. Traditional network performance measurement tools collect metrics such as latency and throughput regardless of the conditions that exist during the collection period. While valuable for many uses, that approach can miss the actual performance that real user payloads experience on production networks.

This tool only reports performance metrics when the flow is operating at "max buffer usage", which is when the active flow has filled any and all buffers that exist along the packet path between the endpoints.

User payload is used to measure latency and throughput. This accounts for the performance impact of transparent proxies, transparent tunnels, transparent firewalls, and everything else that is not visible to the endpoints. It also simplifies the interpretation of retransmissions, whose effect on user performance is non-intuitive at best: some retransmissions are due to real loss of user payload, while many are not. In this tool, the loss of user payload shows up directly in the metrics as higher latency and lower throughput.

Features:

* Latency, both unloaded and loaded, is measured by the same flow that is under test.

  Other tools commonly measure latency using a different flow or a different protocol. One reason that is undesirable is that fair queuing will cause the latency of those other flows to be much lower (better) than that of the flow that matters.

* Throughput

  Both sender and receiver rates are collected, but the receiver rate (a.k.a. goodput) is the important one.

* Bufferbloat is calculated

  It is often assumed that TCP receive buffers are the only source of bufferbloat. While they are a common source, that assumption misses many other places where bufferbloat can occur. This tool reports the effects of all sources of bufferbloat, not just TCP receive buffers. `bbperf` calculates both the BDP (bandwidth delay product) and the total amount of buffer actually in use. The difference between the two is reported as "excess buffer usage". A small value for this metric is normal and expected, but a large value, relative to the BDP, is bufferbloat. Bufferbloat also appears as a large difference between unloaded and loaded latency. (A small worked example of this calculation appears after this list.)

* Both TCP and UDP are supported

  In both cases, the test waits until the flow has reached "max buffer usage" before collecting metrics. For TCP, bbperf waits for the sending and receiving rates to match. For UDP, the sending rate is automatically adjusted to just above the maximum packet rate that can be sustained without dropping packets, before metrics collection begins. (A simplified sketch of this convergence logic appears at the end of this README.)

* `bbperf` measures the performance of data flow in one direction only.

  Network routing can be asymmetric, bottleneck links can be asymmetric, and bufferbloat can be asymmetric, all of which means that performance can be asymmetric. `bbperf` lets us see that asymmetry. Data flow in `bbperf` is one way: from the client host to the server host, unless the `-R` option is specified. That is the direction being measured and the direction reported in the metrics. Latency is measured round trip, but the return traffic (from the data receiver back to the data sender) is low-volume and should not contribute any bufferbloat-related latency to the measurement. This cannot be guaranteed, in the same way that it cannot be guaranteed that the unloaded latency measurement contains no bufferbloat-induced latency. But it does ensure that no bufferbloat-induced latency is caused by `bbperf`'s own flow.

* Automatic generation of graphs
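As a rough illustration of the excess-buffer-usage idea, here is a minimal sketch. It is not bbperf's code: it assumes the buffer actually in use can be estimated from throughput and loaded RTT (Little's law), and the function name and sample numbers are made up for the example.

```python
def excess_buffer_usage(goodput_bps, unloaded_rtt_s, loaded_rtt_s):
    """Illustrative "excess buffer usage" calculation.

    The BDP is the amount of data the path itself can hold at the
    unloaded RTT; anything in flight beyond that must be sitting in
    buffers somewhere along the path.
    """
    bdp_bytes = goodput_bps / 8 * unloaded_rtt_s        # bandwidth-delay product
    in_flight_bytes = goodput_bps / 8 * loaded_rtt_s    # Little's law: rate x loaded RTT
    return in_flight_bytes - bdp_bytes


# Hypothetical sample: 100 Mbit/s goodput, 20 ms unloaded RTT, 120 ms loaded RTT.
excess = excess_buffer_usage(100e6, 0.020, 0.120)
print(f"excess buffer usage: {excess / 1e6:.2f} MB")    # 1.25 MB held in buffers beyond the BDP
```

In this made-up example the BDP is only 0.25 MB, so the extra 1.25 MB queued along the path (and the 100 ms of added latency it represents) is bufferbloat.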
### Usage

To run a test:

1. Start the server on one host

   ```
   $ bbperf.py -s
   ```

2. Run the client on another host

   ```
   $ bbperf.py -c
   ```
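The "max buffer usage" gating described in the feature list can be pictured roughly as follows. This is a simplified sketch under assumed names, thresholds, and step sizes, not bbperf's actual logic: for TCP it checks whether the sender and receiver rates have converged, and for UDP it steps the sending rate up until drops first appear.

```python
def tcp_at_max_buffer_usage(send_rate_bps, recv_rate_bps, tolerance=0.02):
    """TCP case: treat the path buffers as full once the receive rate
    has caught up with the send rate, within a small tolerance."""
    if send_rate_bps == 0:
        return False
    return abs(send_rate_bps - recv_rate_bps) / send_rate_bps <= tolerance


def udp_find_start_rate(measure_loss, start_pps=1000, step=1.25, max_pps=10_000_000):
    """UDP case: ramp the sending rate until packet loss first appears,
    i.e. just above the maximum rate that does not drop packets.

    `measure_loss(rate_pps)` is a hypothetical stand-in for sending at
    `rate_pps` for one interval and returning the observed loss fraction.
    """
    rate = start_pps
    while rate * step < max_pps:
        candidate = rate * step
        if measure_loss(candidate) > 0.0:
            return candidate        # first probed rate at which drops were seen
        rate = candidate
    return rate


print(tcp_at_max_buffer_usage(95e6, 94.5e6))   # True: rates match within 2%

# Hypothetical bottleneck that starts dropping above 50k packets/s;
# prints the first probed rate above that no-drop maximum.
print(udp_find_start_rate(lambda pps: 0.0 if pps <= 50_000 else 0.05))
```

In the real tool the rates and loss come from the measured flow itself; `measure_loss` here is only a placeholder for one probing interval.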