What did you do?
Kubernetes API servers use the standard compress/gzip library to reduce the size of payloads sent over the network. These payloads are large JSON and Protobuf responses whose size can reach multiple gigabytes.
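For illustration, here is a minimal sketch of the pattern in use (not the actual Kubernetes code): an HTTP middleware that wraps the response writer in a gzip.Writer so large responses are compressed on the fly.

```go
package server

import (
	"compress/gzip"
	"io"
	"net/http"
	"strings"
)

// gzipResponseWriter forwards body writes through a gzip.Writer while
// keeping access to the underlying ResponseWriter for headers and status.
type gzipResponseWriter struct {
	http.ResponseWriter
	gz io.Writer
}

func (g *gzipResponseWriter) Write(p []byte) (int, error) {
	return g.gz.Write(p)
}

// withGzip compresses responses for clients that accept gzip encoding.
func withGzip(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r) // client did not ask for gzip
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		gz := gzip.NewWriter(w)
		defer gz.Close()
		next.ServeHTTP(&gzipResponseWriter{ResponseWriter: w, gz: gz}, r)
	})
}
```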
What did you see happen?
We've identified the standard compress/gzip library as a significant performance bottleneck. This is blocking our efforts to support larger resource sizes.
We are observing that compression throughput with the standard compress/gzip library maxes out at around 100-200 MB/s.
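A benchmark along these lines reproduces the ballpark (synthetic payload and compression level are illustrative assumptions, not our production harness); `go test -bench` reports MB/s directly via `b.SetBytes`:

```go
package gzipbench

import (
	"bytes"
	"compress/gzip"
	"io"
	"testing"
)

// BenchmarkStdlibGzip measures standard-library compression throughput
// on a large, repetitive JSON-like payload (~40 MB here).
func BenchmarkStdlibGzip(b *testing.B) {
	payload := bytes.Repeat([]byte(`{"kind":"Pod","metadata":{"name":"x"}}`), 1<<20)
	b.SetBytes(int64(len(payload)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		gz, _ := gzip.NewWriterLevel(io.Discard, gzip.BestSpeed)
		gz.Write(payload)
		gz.Close()
	}
}
```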
When we benchmark an alternative implementation, github.com/klauspost/compress/gzip (written by @klauspost, cc'd), we see a ~10x throughput improvement with what appears to be a drop-in replacement for our use case.
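The "drop-in" part is literal: the alternative package exposes the same API and package name, so in a sketch like the following (hypothetical helper), the only change is the import path.

```go
package server

import (
	"io"

	// Swapping implementations only requires changing this import path;
	// the package name and Writer API match the standard compress/gzip.
	// gzip "compress/gzip"
	gzip "github.com/klauspost/compress/gzip"
)

// compressTo is a hypothetical helper showing that call sites are unchanged.
func compressTo(dst io.Writer, src []byte) error {
	gz := gzip.NewWriter(dst)
	if _, err := gz.Write(src); err != nil {
		gz.Close()
		return err
	}
	return gz.Close()
}
```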
What did you expect to see?
We expected the standard library's performance to be more competitive with other Go implementations, as this performance gap is now a primary bottleneck for Kubernetes scalability.
In kubernetes/kubernetes#104071, the key concern raised by Kubernetes maintainers (specifically @liggitt) was the high cost, security review, and maintenance burden of switching from the standard library to a large, third-party dependency for such a core function. The recommendation was to first report this performance gap upstream to the Go team and seek improvements in the standard library implementation.
We are looking for guidance on whether the standard compress/gzip library can be improved, or whether adopting the alternative implementation is the only way forward. The Kubernetes project's preference remains unchanged: we would strongly prefer to rely on the standard library rather than vendoring a specialized third-party implementation.
(This bug is filed based on a recent discussion with @mknyszek, who suggested this was the correct forum. He also noted that @dsnet (Joe Tsai) might have the most context on this package's history.)