Gate News reports that on March 25, Google Research released TurboQuant, a quantization-based compression algorithm that compresses the KV caches of large language models down to 3 bits, cutting memory usage by at least 6x with no training or fine-tuning and no loss of model accuracy. In its 4-bit mode, attention computation on NVIDIA H100 GPUs runs up to 8 times faster than a 32-bit unquantized baseline.

The research team validated TurboQuant with models such as Gemma and Mistral on long-context benchmarks including LongBench, Needle In A Haystack, and ZeroSCROLLS, reporting the best results across all tests. TurboQuant combines two sub-algorithms: PolarQuant, which uses a polar-coordinate transformation to eliminate the memory overhead of conventional quantization, and QJL, which corrects residual errors using only 1 bit.

The study, led by Amir Zandieh and Vahab Mirrokni (Vice President and Google Fellow) at Google Research in collaboration with KAIST in Korea and New York University, will be published at ICLR 2026. Google says one of the technology's main applications is relieving KV-cache bottlenecks in models such as Gemini.
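To give a flavor of the polar-coordinate idea, the toy sketch below pairs consecutive dimensions of a KV tensor into 2-D points and quantizes each point's angle to a 3-bit index while keeping its magnitude. This is only a generic illustration of angle quantization under our own assumptions (the pairing scheme, the 3-bit codebook, and all function names are ours), not Google's released TurboQuant implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy KV-cache tensor: (num_tokens, head_dim); head_dim is even so that
# consecutive dimensions can be paired into 2-D points.
kv = rng.standard_normal((128, 64)).astype(np.float32)

BITS = 3  # angle codebook: 2**3 = 8 uniform bins over [-pi, pi)

def polar_quantize(x, bits=BITS):
    """Quantize each (even, odd) dimension pair via its polar angle.

    Stores one magnitude per pair plus a `bits`-bit angle index,
    instead of two full-precision floats.
    """
    pairs = x.reshape(x.shape[0], -1, 2)
    r = np.linalg.norm(pairs, axis=-1)                # magnitudes (kept)
    theta = np.arctan2(pairs[..., 1], pairs[..., 0])  # angles in [-pi, pi]
    levels = 2 ** bits
    step = 2 * np.pi / levels
    idx = np.round((theta + np.pi) / step) % levels   # quantized angle bin
    return r, idx.astype(np.uint8)

def polar_dequantize(r, idx, bits=BITS):
    """Reconstruct the paired dimensions from magnitudes and angle bins."""
    levels = 2 ** bits
    step = 2 * np.pi / levels
    theta = idx * step - np.pi
    pairs = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)
    return pairs.reshape(r.shape[0], -1)

r, idx = polar_quantize(kv)
kv_hat = polar_dequantize(r, idx)
# Relative reconstruction error from 3-bit angle quantization alone.
rel_err = float(np.linalg.norm(kv - kv_hat) / np.linalg.norm(kv))
```

In this sketch the residual error left after angle quantization is exactly what a 1-bit correction stage (as the article attributes to QJL) would target; that stage is omitted here.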