CDQuant: Accurate Post-training Weight Quantization of Large Pre-trained Models using Greedy Coordinate Descent (2024)

Pranav Ajit Nair
Google DeepMind
pranavajitnair@google.com

Arun Sai Suggala
Google DeepMind
arunss@google.com

Abstract

Large language models (LLMs) have recently demonstrated remarkable performance across diverse language tasks. But their deployment is often constrained by their substantial computational and storage requirements. Quantization has emerged as a key technique for addressing this challenge, enabling the compression of large models with minimal impact on performance. The recent GPTQ algorithm, a post-training quantization (PTQ) method, has proven highly effective for compressing LLMs, sparking a wave of research that leverages GPTQ as a core component. Recognizing the pivotal role of GPTQ in the PTQ landscape, we introduce CDQuant, a simple and scalable alternative to GPTQ with improved performance. CDQuant uses coordinate descent to minimize the layer-wise reconstruction loss to achieve high-quality quantized weights. Our algorithm is easy to implement and scales efficiently to models with hundreds of billions of parameters. Through extensive evaluation on the PaLM2 model family, we demonstrate that CDQuant consistently outperforms GPTQ across diverse model sizes and quantization levels. In particular, for INT2 quantization of PaLM2-Otter, CDQuant achieves a 10% reduction in perplexity compared to GPTQ.

(Work done when the authors were at Google Research.)

1 Introduction

Large language models (LLMs) have shown remarkable ability to handle various language tasks (Touvron et al., 2023; OpenAI, 2023; Google, 2023), but their widespread adoption is hampered by their substantial computational and memory demands. To tackle this challenge, researchers have explored techniques like quantization, pruning, and distillation, with quantization being a particularly promising avenue for reducing model size and inference time without significantly sacrificing performance (Miao et al., 2023). Quantization techniques broadly fall into two categories: post-training quantization (PTQ) and quantization-aware training (QAT). QAT, while potentially yielding better results, poses a significant challenge for LLMs due to the immense resources required to train these large models. As a result, a growing body of research has focused on PTQ, which is generally less computationally intensive and can be applied to large, pre-trained models.

A seminal work in PTQ for LLMs is GPTQ (Frantar et al., 2022), which introduced a one-shot weight quantization approach that minimizes the following layer-wise output reconstruction loss: $\|X(W-\widehat{W})\|_F^2$, where $X=[\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_n]$ represents the layer inputs and $W,\widehat{W}\in\mathbb{R}^{d_{\text{in}}\times d_{\text{out}}}$ are the original and quantized weight matrices, respectively. This method has sparked a wave of research in PTQ. Subsequent works have built upon GPTQ, addressing its limitations and improving its performance. For instance, SpQR (Dettmers et al., 2023) and OWQ (Lee et al., 2023) improve GPTQ by explicitly addressing outlier weights, maintaining them in full precision while quantizing the remaining weights using GPTQ. This hybrid approach improves quantization accuracy, particularly at lower bit-widths. Another line of research focuses on transforming the weight space before applying GPTQ. For instance, QuIP (Chee et al., 2024) and FrameQuant (Adepu et al., 2024) transform the weights into a more quantization-friendly space and apply GPTQ in the transformed space. AWQ (Lin et al., 2023) and SmoothQuant (Xiao et al., 2023) reduce the effect of activation outliers by performing feature scaling before quantizing the weights using standard techniques.

Given the central role GPTQ plays in the landscape of PTQ methods for LLMs, improving its quality directly translates to better quantization across the numerous techniques that build upon it. Consequently, in this work, we revisit the core optimization problem addressed by GPTQ and introduce a novel coordinate descent based algorithm, CDQuant, to minimize the objective. CDQuant is an iterative optimization technique that greedily selects coordinates to descend along in each iteration. This approach contrasts with GPTQ, which cycles through the coordinates only once, in a predetermined order, often leading to suboptimal solutions for the layer-wise objective (see Section 2 for more details). CDQuant is simple to implement and scales effectively to models with billions of parameters. We further extend CDQuant to group/sub-channel quantization, a technique gaining popularity of late (Dettmers et al., 2021; Dettmers and Zettlemoyer, 2023). Extensive experiments on PaLM 2 (Anil et al., 2023) demonstrate that CDQuant consistently outperforms GPTQ across various model sizes and quantization precision levels. For example, for INT2 quantization of PaLM2-Otter, CDQuant achieves a 10% reduction in perplexity compared to GPTQ.

2 Related Work

The literature on quantization is vast. In this section, we only review works related to the quantization of large pre-trained models and those most closely related to our work.

Weight-only Quantization.

Optimal Brain Quantization (OBQ) (Frantar and Alistarh, 2022), a post-training quantization (PTQ) technique, draws inspiration from the Optimal Brain Surgeon (OBS) framework proposed by LeCun et al. (1989). Frantar and Alistarh (2022) introduced exact yet efficient algorithms for implementing the OBS greedy strategy. However, OBQ remains computationally expensive and struggles to scale effectively to large language models (LLMs). Frantar et al. (2022) subsequently introduced GPTQ, which employs heuristics to accelerate the OBQ algorithm. Specifically, GPTQ updates coordinates once in a cyclic manner rather than using the greedy approach of OBQ. This modification significantly speeds up the algorithm but comes at the cost of reduced performance. In our work, we address this performance drop by proposing greedy coordinate descent algorithms that are both straightforward and easy to implement.

Recent works have sought to improve upon GPTQ. One popular strategy here is to identify and isolate problematic weights before applying GPTQ. For example, SpQR (Dettmers et al., 2023) isolates outlier weights and keeps them at full precision, quantizing only the remaining values using GPTQ. Similarly, OWQ (Lee et al., 2023) identifies weights that are highly sensitive to small perturbations and excludes them from the GPTQ quantization process. Another promising direction involves transforming the weights into a different basis before quantization. Both QuIP (Chee et al., 2024) and FrameQuant (Adepu et al., 2024) take this route and perform quantization in the transformed space using GPTQ. In all these approaches, we could potentially substitute GPTQ with CDQuant and expect to see further improvements in performance. Other promising methods, such as AWQ (Lin et al., 2023) and AffineQuant (Ma et al., 2024), suppress the effect of outliers in input activations by adjusting their scale and transferring it to the weights. They then perform standard MinMax quantization on the rescaled weights. One could replace MinMax with GPTQ or CDQuant to improve the performance of these techniques (see Table 3 for a comparison of CDQuant with AWQ).

While working on our paper, we came across QuantEase (Behdin et al., 2023), a parallel research effort sharing similar goals as ours for improving GPTQ. While both methods leverage the concept of coordinate descent, QuantEase adopts a cyclic approach for weight updates, whereas our work employs a greedy strategy. Although our experiments (Appendix F) indicate comparable performance between the two methods, our work extends beyond QuantEase by introducing specialized algorithms for group/sub-channel quantization, not just full-channel quantization. Additionally, we develop novel block coordinate descent algorithms that further improve performance. That being said, both these works (QuantEase and ours) collectively highlight the potential of coordinate descent algorithms to outperform GPTQ.

Weight+Activation Quantization.

SmoothQuant (Xiao et al., 2023) and OS+ (Wei et al., 2023, 2022) have a similar flavour to AWQ, but quantize both weights and activations after scaling them appropriately. OmniQuant (Shao et al., 2023) performs quantization of an entire transformer block in a single shot, encompassing both activation and weight quantization. Furthermore, it subsumes SmoothQuant and OS+ by using both feature scaling and outlier suppression. QLLM (Liu et al., 2023) tackles the issue of outliers in the activations by splitting the outlier features into multiple sub-channels and then recombining them, effectively reducing their influence. QLLM also incorporates a fine-tuning step at the end, introducing low-rank weights into each layer of the LLM. LLM.int8() (Dettmers et al., 2022) quantizes both activations and weights to 8 bits and also identifies outliers and stores them in full precision.

Efficient End-to-End Quantization.

To recover the drop in performance from quantization, Chai et al. (2023); Dettmers et al. (2024); Hu et al. (2021) perform low-rank parameter-efficient fine-tuning.

3 CDQuant

Notation.

Throughout the paper, we denote vectors by bold-faced letters ($\mathbf{a}$) and matrices by capital letters ($A$). $\|\mathbf{a}\|_2=\sqrt{\sum_i \mathbf{a}_i^2}$ is the Euclidean norm and $\|A\|_F=\sqrt{\sum_{i,j}A_{ij}^2}$ is the Frobenius norm of a matrix. $\mbox{diag}(\mathbf{a})$ represents a diagonal matrix with $\mathbf{a}$ as its diagonal entries. $d_{\text{in}}, d_{\text{out}}$ denote the input and output dimensions of a layer. $W\in\mathbb{R}^{d_{\text{in}}\times d_{\text{out}}}$ is the weight matrix of the layer, and $X=[\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_n]\in\mathbb{R}^{n\times d_{\text{in}}}$ is the matrix containing the $n$ datapoints that are sent as input to the layer. $H=X^TX$ is the Hessian matrix for our objective in Equation (1). $c$ denotes the number of bits of precision used in quantization. In quantization, we aim to represent $W$ as $Q\times\mbox{diag}(\mathbf{a})+\mathbf{1}\mathbf{b}^T$, where $\mathbf{a},\mathbf{b}\in\mathbb{R}^{d_{\text{out}}}$ represent the scale and bias parameters and $Q\in\{0,1,\dots,2^c-1\}^{d_{\text{in}}\times d_{\text{out}}}$ is the quantized matrix.
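To make this notation concrete, the following is a minimal NumPy sketch (the helper name and the toy sizes are ours, purely for illustration) of how a code matrix $Q$ with per-output-channel scale $\mathbf{a}$ and bias $\mathbf{b}$ maps back to an approximation of $W$:

```python
import numpy as np

def dequantize(Q: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Reconstruct W_hat = Q x diag(a) + 1 b^T.

    Q: (d_in, d_out) integer codes in {0, ..., 2^c - 1}
    a, b: (d_out,) per-output-channel scale and bias
    """
    return Q * a[None, :] + b[None, :]

# Toy example with c = 4 bits, d_in = 8, d_out = 3.
rng = np.random.default_rng(0)
Q = rng.integers(0, 16, size=(8, 3))
a = rng.uniform(0.01, 0.1, size=3)
b = rng.uniform(-0.5, 0.5, size=3)
W_hat = dequantize(Q, a, b)  # (8, 3) de-quantized weight matrix
```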

Many existing PTQ techniques, including GPTQ, aim to solve the following layer-wise optimization objective: $\min_{\mathbf{a},\mathbf{b},Q}\|X(W-Q\times\mbox{diag}(\mathbf{a})-\mathbf{1}\mathbf{b}^T)\|_F^2$. Observe that this problem breaks down into $d_{\text{out}}$ independent problems across the output dimension. So, in the sequel, we focus on the following problem of quantizing a $d_{\text{in}}$-dimensional vector:

$$\min_{a,b,\mathbf{q}}\|X(\mathbf{w}-a\mathbf{q}-b)\|_2^2. \quad (1)$$

For a fixed $(a,b)$, this problem is known as the integer linear regression problem, and finding optimal solutions to it is NP-hard (Chrétien and Corset, 2009; Park and Boyd, 2018). So, several works have designed heuristics to solve this problem (Nagel et al., 2020; Li et al., 2021; Frantar and Alistarh, 2022; Frantar et al., 2022; Hubara et al., 2021). Within the context of LLMs, GPTQ is perhaps the most popular among these techniques, as it scales efficiently to models with billions of parameters (Frantar et al., 2022). In this work, we aim to improve upon GPTQ by designing better heuristics while maintaining its scalability. To this end, we rely on performing coordinate descent on objective (1), which we describe below.

3.1 Greedy Coordinate Descent

Algorithm 1 Greedy Coordinate Descent (CD)

1: Input: $T$ - coordinate descent steps, $X$ - input data matrix, $\mathbf{w}$ - vector to be quantized, $a$ - scale, $b$ - bias, $\mathbf{q}_0$ - initial estimate
2: Compute Hessian $H$ as: $H \leftarrow X^T X$
3: Compute gradient $\mathbf{g}$ as: $\mathbf{g} \leftarrow 2H(\mathbf{q}_0 - a^{-1}(\mathbf{w} - b))$
4: for $t \in [1:T]$ do
5:   Find the coordinate that leads to the largest reduction in loss:
     $$i^*, r^* = \operatorname*{arg\,min}_{i \in \{0,1,\dots,d_{\text{in}}-1\},\, r \in \{0,1,\dots,2^c-1\}} (r - \mathbf{q}_{t-1,i})^2 H_{i,i} + (r - \mathbf{q}_{t-1,i})\,\mathbf{g}_i$$
6:   Update gradient $\mathbf{g}$ as
     $$\mathbf{g} \leftarrow \mathbf{g} + 2(r^* - \mathbf{q}_{t-1,i^*}) H_{i^*,\cdot},$$
     where $H_{i^*,\cdot}$ is the $i^*$-th column of $H$
7:   Update $\mathbf{q}_{t-1}$ as
     $$\mathbf{q}_t \leftarrow \mathbf{q}_{t-1} + (r^* - \mathbf{q}_{t-1,i^*})\,\mathbf{e}_{i^*},$$
     where $\mathbf{e}_{i^*}$ is the standard basis vector with 1 in position $i^*$ and 0 everywhere else
8: end for

In this section, we assume we have suitable values for the scale ($a$) and bias ($b$) parameters already at hand, and focus on optimizing $\mathbf{q}$. For a more in-depth discussion of how we determine these values, please refer to the final part of this section. As the name suggests, in greedy coordinate descent, at each round we find the coordinate that leads to the biggest reduction in the objective and descend along that coordinate. Letting ${\mathcal{L}}(\mathbf{q}) \coloneqq \|X(\mathbf{w}-a\mathbf{q}-b)\|_2^2$ be the objective in Equation (1), we try to find a coordinate $i$ and value $r$ such that updating the $i^{th}$ coordinate to $r$ gives the biggest reduction in loss:

$$\min_{i,r} {\mathcal{L}}(\mathbf{q}+(r-\mathbf{q}_i)\mathbf{e}_i) - {\mathcal{L}}(\mathbf{q}),$$

where $\mathbf{e}_i$ is the standard basis vector with 1 in position $i$ and 0 everywhere else. Luckily for us, this can be implemented extremely efficiently, as we have analytical expressions for the objective. In particular, one can easily show that

$${\mathcal{L}}(\mathbf{q}+(r-\mathbf{q}_i)\mathbf{e}_i) - {\mathcal{L}}(\mathbf{q}) = (r-\mathbf{q}_i)^2 H_{i,i} + (r-\mathbf{q}_i)\mathbf{g}_i,$$

where $H, \mathbf{g}$ are the Hessian and gradient of ${\mathcal{L}}$ evaluated at $\mathbf{q}$. Algorithm 1 describes this procedure. Observe that line 5 is the most expensive step of the algorithm: it requires finding the minimum value among $d_{\text{in}} \times 2^c$ possibilities, a task that is well-suited for parallelization on GPUs.
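The following is a minimal NumPy sketch of Algorithm 1 (a direct reading of the pseudocode above, not the authors' implementation; variable names are ours). It maintains $H$, the gradient $\mathbf{g}$, and the current codes $\mathbf{q}$, and in each step scores the closed-form loss change for every (coordinate, level) pair:

```python
import numpy as np

def greedy_cd(X, w, a, b, q0, c=4, T=None):
    """Greedy coordinate descent for min_q ||X(w - a*q - b)||_2^2 (sketch of Algorithm 1)."""
    d_in = w.shape[0]
    T = d_in if T is None else T
    levels = np.arange(2 ** c, dtype=np.float64)      # {0, ..., 2^c - 1}
    # H and g below drop the common positive factor a^2; the argmin is unaffected.
    H = X.T @ X
    q = q0.astype(np.float64)
    g = 2.0 * H @ (q - (w - b) / a)

    for _ in range(T):
        # Loss change (r - q_i)^2 H_ii + (r - q_i) g_i for every coordinate i and level r.
        delta = levels[None, :] - q[:, None]                          # (d_in, 2^c)
        change = delta ** 2 * np.diag(H)[:, None] + delta * g[:, None]
        i, j = np.unravel_index(np.argmin(change), change.shape)
        step = levels[j] - q[i]
        g = g + 2.0 * step * H[:, i]                                  # rank-1 gradient update
        q[i] = levels[j]
    return q.astype(np.int64)
```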

Extension to Block Coordinate Descent.

A natural extension to Algorithm 1 is block coordinate descent (BCD), where multiple coordinates are updated simultaneously in each iteration. While a greedy approach to BCD could, in principle, optimize the objective much better, the computational cost becomes prohibitive. Specifically, updating $k$ coordinates at a time necessitates evaluating $(d_{\text{in}} \times 2^c)^k$ possible combinations of coordinates and their corresponding values. To overcome this, we propose a randomized BCD strategy (Algorithm 2), where we randomly partition the coordinates into $d_{\text{in}}/k$ blocks and only search over these blocks and their corresponding values. This significantly reduces the search space to a more manageable $d_{\text{in}}/k \times 2^{kc}$ possibilities, making the algorithm practical for larger models. In our experiments, we primarily set $k=2$ and work with $c \in \{2,3,4\}$.

Initializing $a, b, \mathbf{q}$.

To initialize $a, b, \mathbf{q}$ in Algorithms 1 and 2, we introduce a technique called Optimal Weight Clipping (OWC), which draws inspiration from the Learnable Weight Clipping (LWC) mechanism used in OmniQuant (Shao et al., 2023). In OWC, we quantize weight $\mathbf{w}$ as follows:

$$\mathbf{q} = \text{clamp}\left(\left\lfloor \frac{\mathbf{w}-b}{a} \right\rceil, 0, 2^c-1\right), \quad a = \frac{\gamma(\max(\mathbf{w})-\min(\mathbf{w}))}{2^c-1}, \quad b = \min(\mathbf{w}). \quad (2)$$

Here, $\gamma \in [0,1]$ represents the clipping strength. We determine the optimal $\gamma$ by minimizing the following layer-wise loss:

$$\min_{\gamma \in [0,1]} \|X(\mathbf{w}-a\mathbf{q}-b)\|_2^2.$$

Note that, while not explicitly stated, both $a$ and $\mathbf{q}$ in the above objective are implicitly dependent on $\gamma$. This optimization can be efficiently solved using a simple grid search. In contrast, LWC (Shao et al., 2023) optimizes a different objective function, focusing on end-to-end quantization of an entire transformer block using gradient-based techniques. It is worth noting that setting $\gamma = 1$ in OWC recovers the widely used MinMax quantization scheme, which is used in many existing quantization methods, including GPTQ (Frantar et al., 2022) and SmoothQuant (Xiao et al., 2023). In our experiments, we observed that OWC provides a much better initialization than MinMax quantization, for both GPTQ and CDQuant.
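A minimal sketch of the OWC initialization, implementing the grid search over the clipping strength $\gamma$ in Equation (2) (the grid of 50 values mirrors the setting reported in Section 4; the function name is ours):

```python
import numpy as np

def owc_init(X, w, c=4, grid_size=50):
    """Optimal Weight Clipping: grid search over gamma for Equation (2)."""
    b = w.min()
    best = (np.inf, None, None)
    for gamma in np.linspace(1.0 / grid_size, 1.0, grid_size):   # gamma in (0, 1]
        a = gamma * (w.max() - w.min()) / (2 ** c - 1)
        q = np.clip(np.rint((w - b) / a), 0, 2 ** c - 1)
        loss = np.sum((X @ (w - a * q - b)) ** 2)                # layer-wise loss
        if loss < best[0]:
            best = (loss, a, q)
    _, a, q = best
    return a, b, q.astype(np.int64)
```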

Algorithm 2 Randomized Block Coordinate Descent (BCD)

1: Input: $T$ - coordinate descent steps, $k$ - block size, $X$ - input data matrix, $\mathbf{w}$ - vector to be quantized, $a$ - scale, $b$ - bias, $\mathbf{q}_0$ - initial estimate
2: Compute Hessian $H$ as: $H \leftarrow X^T X$
3: Compute gradient $\mathbf{g}$ as: $\mathbf{g} \leftarrow 2H(\mathbf{q}_0 - a^{-1}(\mathbf{w} - b))$
4: for $t \in [1:T]$ do
5:   Randomly partition the set $\{0, 1, \dots, d_{\text{in}}-1\}$ into $d_{\text{in}}/k$ blocks, each of size $k$
6:   Find the block that leads to the largest reduction in loss:
     $$i^*, r^* = \operatorname*{arg\,min}_{i \in \{0,1,\dots,d_{\text{in}}/k-1\},\, r \in \{0,1,\dots,2^c-1\}^k} (r - \mathbf{q}_{t-1,i})^T H_{i,i} (r - \mathbf{q}_{t-1,i}) + (r - \mathbf{q}_{t-1,i})^T \mathbf{g}_i,$$
     where $\mathbf{q}_{t-1,i}, H_{i,i}$ are the sub-vector and sub-matrix of $\mathbf{q}_{t-1}, H$ corresponding to block $i$
7:   Update gradient $\mathbf{g}$ as
     $$\mathbf{g} \leftarrow \mathbf{g} + 2 H_{i^*,\cdot}(r^* - \mathbf{q}_{t-1,i^*})$$
8:   Update $\mathbf{q}_{t-1}$ as
     $$\mathbf{q}_{t-1,i^*} \leftarrow r^*, \quad \mathbf{q}_t \leftarrow \mathbf{q}_{t-1}$$
9: end for
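Below is a minimal NumPy sketch of Algorithm 2 (again a reading of the pseudocode, with our own variable names; for simplicity it drops any leftover coordinates when $d_{\text{in}}$ is not divisible by $k$). Each iteration draws a fresh random partition and scores all $2^{kc}$ level combinations for every block:

```python
import numpy as np
from itertools import product

def randomized_bcd(X, w, a, b, q0, c=2, k=2, T=None, seed=0):
    """Randomized block coordinate descent (sketch of Algorithm 2), block size k."""
    d_in = w.shape[0]
    T = d_in if T is None else T
    rng = np.random.default_rng(seed)
    H = X.T @ X
    q = q0.astype(np.float64)
    g = 2.0 * H @ (q - (w - b) / a)
    # All 2^{kc} candidate level assignments for a block of size k.
    combos = np.array(list(product(range(2 ** c), repeat=k)), dtype=np.float64)

    for _ in range(T):
        blocks = rng.permutation(d_in)[: (d_in // k) * k].reshape(-1, k)
        best = (np.inf, None, None)
        for blk in blocks:
            Hbb = H[np.ix_(blk, blk)]                             # k x k sub-Hessian
            d = combos - q[blk][None, :]                          # (2^{kc}, k) deltas
            change = np.einsum('ri,ij,rj->r', d, Hbb, d) + d @ g[blk]
            j = np.argmin(change)
            if change[j] < best[0]:
                best = (change[j], blk, combos[j])
        _, blk, r = best
        g = g + 2.0 * H[:, blk] @ (r - q[blk])                    # rank-k gradient update
        q[blk] = r
    return q.astype(np.int64)
```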

3.2 Extension to Sub-channel Quantization

In this section, we consider sub-channel (or group) quantization, a more fine-grained quantization that divides the weight vector $\mathbf{w}$ into multiple groups and assigns a quantization scale to each group (Dettmers et al., 2021; Dettmers and Zettlemoyer, 2023). Letting $g$ be the group size, we divide weight $\mathbf{w}$ into $d_{\text{in}}/g$ groups $\{\mathbf{w}^{(0)}, \mathbf{w}^{(1)}, \dots, \mathbf{w}^{(d_{\text{in}}/g-1)}\}$, each of size $g$, and quantize $\mathbf{w}^{(i)}$ as $a^{(i)}\mathbf{q}^{(i)} + b^{(i)}$. To learn the optimal parameters for this sub-channel quantization, we solve the following optimization problem:

$$\min_{\{a^{(i)},b^{(i)},\mathbf{q}^{(i)}\}_{i=0}^{d_{\text{in}}/g-1}} \Big\|\sum_{i=0}^{d_{\text{in}}/g-1} X^{(i)}(\mathbf{w}^{(i)} - a^{(i)}\mathbf{q}^{(i)} - b^{(i)})\Big\|_2^2. \quad (3)$$

Here, $X^{(i)}$ represents the columns of $X$ corresponding to the indices within group $i$. To solve this optimization problem, we employ a coordinate descent approach similar to Algorithms 1 and 2 described earlier. That is, given initial values for the scaling and bias parameters, we iteratively optimize the quantized representation $\mathbf{q}$ using coordinate descent. Due to space constraints, we present the resulting algorithms in Appendix C (see Algorithms 4 and 5).
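To make the group structure in Equation (3) concrete, here is a small sketch (helper names are ours, and it assumes $d_{\text{in}}$ is divisible by $g$) of the group-wise de-quantization and the corresponding objective:

```python
import numpy as np

def group_dequantize(q, a, b, g):
    """Reconstruct w_hat, where group i uses scale a[i] and bias b[i] (group size g)."""
    n_groups = q.shape[0] // g
    w_hat = q.astype(np.float64).reshape(n_groups, g) * a[:, None] + b[:, None]
    return w_hat.reshape(-1)

def group_loss(X, w, q, a, b, g):
    """Objective (3): || X (w - w_hat) ||_2^2 with group-wise scales and biases."""
    return np.sum((X @ (w - group_dequantize(q, a, b, g))) ** 2)
```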

Initialization.

Next, we tackle the initialization of the parameters $\{a^{(i)},b^{(i)},\mathbf{q}^{(i)}\}_{i=0}^{d_{\text{in}}/g-1}$ for our coordinate descent procedure. Our approach draws inspiration from the OWC algorithm described above, adapting its core idea to this problem. In essence, we reframe the initialization problem as one of selecting optimal clipping strengths $\{\gamma^{(i)}\}_{i=0}^{d_{\text{in}}/g-1}$ for groups $\{0, \dots, d_{\text{in}}/g-1\}$. This leads us to the following problem:

$$\min_{\gamma^{(0)},\dots,\gamma^{(d_{\text{in}}/g-1)}} \Big\|\sum_{i=0}^{d_{\text{in}}/g-1} X^{(i)}(\mathbf{w}^{(i)} - a^{(i)}\mathbf{q}^{(i)} - b^{(i)})\Big\|_2^2, \quad (4)$$

where

$$\mathbf{q}^{(i)} = \text{clamp}\left(\left\lfloor \frac{\mathbf{w}^{(i)}-b^{(i)}}{a^{(i)}} \right\rceil, 0, 2^c-1\right), \ a^{(i)} = \frac{\gamma^{(i)}(\max(\mathbf{w}^{(i)})-\min(\mathbf{w}^{(i)}))}{2^c-1}, \ b^{(i)} = \min(\mathbf{w}^{(i)}).$$

We use greedy coordinate descent to optimize Equation (4). In each iteration, we update the $\gamma^{(i)}$ that leads to the biggest drop in loss. This procedure, which we call OWC-CD, is described in Algorithm 3.

Algorithm 3 OWC-CD

1: Input: $T$ - coordinate descent steps, $g$ - group size, $X$ - input data matrix, $\mathbf{w}$ - weight vector, $\Gamma$ - grid of possible values for the clipping strength
2: for $\beta \in \Gamma$ do
3:   for $i \in [0 : d_{\text{in}}/g - 1]$ do
4:     Compute the quantization residual $\Delta(i,\beta)$ for group $i$ with clipping strength $\beta$ as
       $$\Delta(i,\beta) \leftarrow \mathbf{w}^{(i)} - a^{(i)}(\beta)\,\mathbf{q}^{(i)}(\beta) - b^{(i)}(\beta),$$
       where $a^{(i)}(\beta), b^{(i)}(\beta), \mathbf{q}^{(i)}(\beta)$ are as defined in Equation (2)
5:   end for
6: end for
7: Initialize clipping strengths $\gamma^{(0)}, \dots, \gamma^{(d_{\text{in}}/g-1)}$ for each group
8: Compute Hessian $H$ as: $H \leftarrow X^T X$
9: $\mathbf{v}_i \leftarrow -2 H_i (a\mathbf{q} + b - \mathbf{w})$, where $H_i$ is the sub-matrix of $H$ corresponding to the columns of group $i$
10: for $t \in [1:T]$ do
11:   Find the group that leads to the largest reduction in loss:
      $$i^*, \beta^* = \operatorname*{arg\,min}_{i \in \{0,1,\dots,d_{\text{in}}/g-1\},\, \beta \in \Gamma} (\Delta(i,\gamma^{(i)}) - \Delta(i,\beta))^T H_{i,i} (\Delta(i,\gamma^{(i)}) - \Delta(i,\beta)) + \mathbf{v}_i^T (\Delta(i,\gamma^{(i)}) - \Delta(i,\beta)),$$
      where $H_{i,i}$ is the block-diagonal element of $H$ corresponding to group $i$
12:   Update $\mathbf{v} \leftarrow \mathbf{v} + 2(\Delta(i^*,\gamma^{(i^*)}) - \Delta(i^*,\beta^*))^T H_{i^*}$
13:   Update $\gamma^{(i^*)} \leftarrow \beta^*$
14: end for

4 Experiments

Quantization.

In our experiments, we focus on weight-only quantization. We present results for two scenarios: (1) quantizing the FFN layers and (2) quantizing both the FFN and attention weights. In the former setting, all attention layers are quantized to INT8 using MinMax quantization, and low-bit quantization schemes are only applied to the feed-forward layers. It is worth pointing out that this setting is very relevant in practice because FFN layers account for the majority of the latency, and quantizing attention weights yields negligible latency reduction at the expense of model quality (Samaga et al., 2024). We perform both per-channel and sub-channel quantization (with a group size of 128). We mainly focus on INT2, INT3 and INT4 quantization.

Models.

We use PaLM2 models (Anil et al., 2023) in all our experiments. In particular, we use the PaLM2-Gecko, PaLM2-Otter, and PaLM2-Bison models. These models vary in size, with PaLM2-Gecko being the smallest, followed by PaLM2-Otter, and finally PaLM2-Bison.

Baselines.

Since our primary goal is to demonstrate that CDQuant achieves improved performance over GPTQ, we have chosen GPTQ as the primary baseline in most of our experiments. To further demonstrate that our technique can be used as a plug-and-play replacement for GPTQ, we also include comparisons with AWQ (Lin et al., 2023). For all our experiments, we initialize GPTQ with OWC and run GPTQ for $T = d_{\text{in}}$ steps. To ensure stability and generalization, GPTQ regularizes its Hessian matrix by adding a scaled identity matrix ($\lambda I$). Tuning this $\lambda$ for every (model, layer) pair is infeasible. So, we determine a single optimal value using the PaLM2-Gecko model and apply it universally.

CDQuant.

We evaluate both coordinate descent variants described in Section 3.1: CD (Algorithm 1) and BCD (Algorithm 2). For per-channel quantization, we initialize CD with Optimal Weight Clipping (OWC), and for sub-channel quantization, we additionally include initialization with OWC-CD. BCD is always initialized with CD in our experiments. Unless otherwise stated, both CD and BCD are run for $T = d_{\text{in}}$ iterations, and OWC-CD is run for $d_{\text{in}}/g$ iterations, where $g$ is the group size. Similar to GPTQ, we regularize the Hessian matrix used in CD and BCD by adding $\lambda I$. We determine a reasonable value for $\lambda$ using the PaLM2-Gecko model and use it in all our experiments.

Evaluation.

Following recent works (Frantar et al., 2022; Ma et al., 2024), we evaluate all algorithms using two key metrics: perplexity and downstream task performance. To ensure a robust evaluation, we calculate perplexity using a 100 million token subset derived from the PaLM2 training mixture. We chose to do this since the evaluation sets for the PTB (Marcus et al., 1994) and WikiText2 (Merity et al., 2017) datasets that are commonly used in the field are orders of magnitude smaller (83K and 246K tokens in their respective test sets, and 512K tokens subsampled from the C4 validation set (Frantar et al., 2022)), and thus can have high variance and may not accurately reflect the model's true performance. We use TriviaQA (Joshi et al., 2017), SQuAD (Rajpurkar et al., 2018), NaturalQuestions (Kwiatkowski et al., 2019) and WebQuestions (Berant et al., 2013) to evaluate the generation capabilities of the quantized models, and to evaluate their reasoning capabilities, we test on ARC-c, ARC-e (Clark et al., 2018), HellaSwag (Zellers et al., 2019), BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020) and WinoGrande (Sakaguchi et al., 2020). For downstream evaluations, we consider both zero-shot and one-shot settings. Finally, to determine which optimization technique is most effective at solving the layer-wise objective in Equation (1), we evaluate the final solution's objective value using the same 100 million tokens that we used to compute perplexity.

Training.

All techniques are calibrated using 1280 data points, where each data point has 2048 tokens. For OWC, we use a grid of size 50 to find the optimal $\gamma$. We used 8 Nvidia H100 GPUs for quantizing the models.

Table 1: Perplexity of quantized PaLM2 models with FFN-only quantization (lower is better).

Config     | Method             | PaLM2-Gecko | PaLM2-Otter | PaLM2-Bison
w16a16     | -                  | 7.948       | 5.984       | 5.298
w3a16      | OWC                | 12.570      | 17.928      | 6.169
w3a16      | GPTQ               | 11.347      | 7.176       | 5.774
w3a16      | CD                 | 10.920      | 7.002       | 5.739
w3a16      | BCD(k=2)           | 10.898      | 6.979       | 5.733
w3a16g128  | OWC                | 11.597      | 8.342       | 5.847
w3a16g128  | GPTQ               | 10.414      | 6.635       | 5.677
w3a16g128  | CD                 | 10.273      | 6.655       | 5.656
w3a16g128  | BCD(k=2)           | 10.259      | 6.545       | 5.654
w3a16g128  | OWC-CD             | 10.706      | 6.635       | 5.686
w3a16g128  | OWC-CD + CD        | 10.143      | 6.528       | 5.650
w3a16g128  | OWC-CD + BCD(k=2)  | 10.138      | 6.527       | 5.647
w4a16      | OWC                | 8.946       | 6.693       | 5.475
w4a16      | GPTQ               | 8.764       | 6.249       | 5.417
w4a16      | CD                 | 8.694       | 6.195       | 5.407
w4a16      | BCD(k=2)           | 8.691       | 6.192       | 5.405
w4a16g128  | OWC                | 8.613       | 6.264       | 5.401
w4a16g128  | GPTQ               | 8.498       | 6.112       | 5.377
w4a16g128  | CD                 | 8.456       | 6.097       | 5.373
w4a16g128  | BCD(k=2)           | 8.454       | 6.097       | 5.372
w4a16g128  | OWC-CD             | 8.519       | 6.106       | 5.377
w4a16g128  | OWC-CD + CD        | 8.436       | 6.092       | 5.371
w4a16g128  | OWC-CD + BCD(k=2)  | 8.434       | 6.091       | 5.371

Table 2: One-shot downstream evaluation results (Gen., Rank, and Avg. scores) for FFN-only quantization.

Config     | Method             | PaLM2-Gecko (Gen. / Rank / Avg.) | PaLM2-Otter (Gen. / Rank / Avg.) | PaLM2-Bison (Gen. / Rank / Avg.)
w16a16     | -                  | 25.85 / 61.44 / 47.2  | 44.31 / 75.43 / 62.98 | 51.08 / 78.28 / 68.00
w3a16      | OWC                | 18.95 / 55.74 / 41.02 | 21.02 / 65.78 / 47.87 | 47.52 / 77.76 / 65.66
w3a16      | GPTQ               | 19.42 / 57.71 / 42.39 | 37.26 / 73.95 / 59.27 | 48.55 / 78.58 / 66.57
w3a16      | CD                 | 20.69 / 58.05 / 43.11 | 38.12 / 73.37 / 59.27 | 48.47 / 78.56 / 66.53
w3a16      | BCD(k=2)           | 20.52 / 58.44 / 43.27 | 38.53 / 73.46 / 59.49 | 48.08 / 78.66 / 66.43
w3a16g128  | OWC                | 22.83 / 57.02 / 43.35 | 33.18 / 70.81 / 55.76 | 48.08 / 78.34 / 66.24
w3a16g128  | GPTQ               | 22.24 / 58.66 / 44.09 | 40.30 / 74.12 / 60.59 | 49.15 / 78.58 / 66.81
w3a16g128  | CD                 | 22.79 / 58.81 / 44.40 | 41.33 / 74.27 / 61.10 | 49.17 / 78.49 / 66.76
w3a16g128  | BCD(k=2)           | 22.64 / 58.85 / 44.37 | 41.19 / 73.94 / 60.84 | 49.02 / 78.51 / 66.71
w3a16g128  | OWC-CD             | 22.21 / 58.69 / 44.10 | 40.84 / 74.33 / 60.93 | 48.54 / 78.29 / 66.39
w3a16g128  | OWC-CD + CD        | 22.44 / 58.84 / 44.28 | 40.87 / 74.41 / 61.00 | 49.22 / 78.86 / 67.00
w3a16g128  | OWC-CD + BCD(k=2)  | 22.15 / 58.77 / 44.13 | 41.19 / 74.65 / 61.27 | 49.08 / 78.3 / 66.61
w4a16      | OWC                | 23.27 / 60.22 / 45.44 | 41.70 / 74.21 / 61.21 | 50.05 / 78.72 / 67.25
w4a16      | GPTQ               | 24.51 / 60.72 / 46.24 | 43.00 / 74.86 / 62.12 | 50.28 / 79.15 / 67.6
w4a16      | CD                 | 24.60 / 60.62 / 46.22 | 43.21 / 75.17 / 62.39 | 50.09 / 79.21 / 67.56
w4a16      | BCD(k=2)           | 24.52 / 60.62 / 46.18 | 43.16 / 75.50 / 62.56 | 50.25 / 79.14 / 67.59
w4a16g128  | OWC                | 24.63 / 61.27 / 46.61 | 42.31 / 74.76 / 61.78 | 50.49 / 79.38 / 67.82
w4a16g128  | GPTQ               | 25.40 / 61.13 / 46.84 | 43.54 / 75.02 / 62.43 | 50.63 / 79.21 / 67.78
w4a16g128  | CD                 | 25.18 / 61.42 / 46.92 | 43.78 / 74.93 / 62.47 | 50.73 / 79.30 / 67.87
w4a16g128  | BCD(k=2)           | 24.95 / 61.49 / 46.87 | 43.77 / 75.04 / 62.53 | 50.84 / 79.56 / 68.07
w4a16g128  | OWC-CD             | 24.59 / 61.40 / 46.67 | 43.67 / 75.31 / 62.65 | 50.65 / 79.04 / 67.69
w4a16g128  | OWC-CD + CD        | 25.12 / 61.44 / 46.91 | 44.00 / 75.27 / 62.76 | 50.62 / 79.21 / 67.78
w4a16g128  | OWC-CD + BCD(k=2)  | 25.00 / 61.50 / 46.90 | 43.75 / 75.20 / 62.62 | 50.64 / 79.17 / 67.76

Results.

Table 1 presents the perplexity numbers for different quantization techniques applied to FFN layers. It can be seen that both our CD and BCD methods have a clear advantage over GPTQ, leading to lower perplexity scores for all models and quantization levels. The difference is more pronounced for lower-bit quantization. This is further highlighted in Table 5, which focuses specifically on INT2 quantization. Here, we see almost 10% improvement in perplexity over GPTQ. We also present (1-shot) downstream evals in Tables 2 and 5 (for 0-shot results, and a detailed breakdown of results for each dataset, please refer to Appendix D). In Table 5 we see a 2-2.5% relative improvement in downstream evals over GPTQ. Furthermore, Table 2 demonstrates that our methods consistently match or exceed GPTQ's performance across all the settings, with the most substantial improvements observed in low-bit quantization and smaller models. Finally, we also observe that coordinate descent techniques are better at optimizing the layer-wise objective in Equation (1) than GPTQ. For instance, the average objective value (relative to the all-zeros solution) for the GPTQ solution for the 1st feed-forward layer is 0.164, whereas for CD it is 0.158, and for BCD(k=2) it is 0.157.

Next, observe that Table 5 also investigates the effect of multiple epochs of CD and BCD, where each epoch corresponds to $d_{\text{in}}$ iterations. For BCD with $k=2$, additional epochs enhance performance. However, for BCD with $k=4$, multiple epochs appear to lead to overfitting, indicating a need for stronger regularization.

FFN+Attention Quantization. In addition to FFN quantization, we also perform FFN + attention weight quantization. However, the inputs to the attention layer are often aligned along a handful of directions. Consequently, performing quantization using such data leads to a huge drop in performance, as the algorithms would primarily focus on a few directions and ignore the rest. To mitigate this, we clip the largest eigenvalues of the Hessian to ensure a more balanced Hessian. This technique, reminiscent of the weight clipping in OmniQuant (Shao et al., 2023), improves the performance of both GPTQ and CDQuant. (One could also rely on existing techniques such as AWQ and SmoothQuant to reduce the effect of outliers, but in our experiments both performed poorly compared to the simple eigenvalue clipping technique.) For instance, for PaLM2-Gecko, the perplexity for w3a16-GPTQ improves from 34.872 to 24.054 with clipping, and for w4a16-GPTQ, it improves from 10.966 to 10.005. Table 4 presents the results from this experiment, where we run all the algorithms on the clipped Hessian. We once again notice that in almost all the settings, our coordinate descent algorithms outperform GPTQ.
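The paper does not spell out the exact clipping rule, so the following is only a plausible sketch of the idea (the threshold, tied to the median eigenvalue, is our assumption): eigendecompose the Hessian, cap its largest eigenvalues, and rebuild it before running GPTQ or CDQuant on the clipped matrix.

```python
import numpy as np

def clip_hessian_eigenvalues(H, max_ratio=1e4):
    """Clip the largest eigenvalues of a symmetric PSD Hessian (illustrative threshold).

    Eigenvalues above max_ratio * median(eigenvalues) are capped, which balances the
    Hessian when calibration inputs are concentrated along a few directions.
    """
    vals, vecs = np.linalg.eigh(H)           # eigendecomposition of symmetric H
    cap = max_ratio * np.median(vals)
    vals = np.minimum(vals, cap)
    return (vecs * vals) @ vecs.T            # reconstruct the clipped Hessian
```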

Table 3: Perplexity of PaLM2-Otter when each method is layered on top of AWQ, compared with no AWQ.

Method    | w3a16 (with AWQ) | w3a16 (w/o AWQ) | w3a16g128 (with AWQ) | w3a16g128 (w/o AWQ)
OWC       | 22.661           | 17.928          | 6.813                | 8.342
GPTQ      | 7.381            | 7.176           | 6.519                | 6.635
CD        | 7.244            | 7.002           | 6.475                | 6.655
BCD(k=2)  | 7.211            | 6.979           | 6.474                | 6.545

AWQ. A major strength of our algorithm is its versatility: it seamlessly replaces GPTQ in any quantization technique that relies on it. To illustrate this, we focus on the AWQ algorithm. We demonstrate that our algorithm, when layered on top of AWQ, surpasses the performance of GPTQ layered on top of AWQ. Table 3 presents the results from this experiment. It can be seen that BCD always outperforms GPTQ, for both per-channel and group quantization. (The performance of AWQ on PaLM2-Bison is much worse than not performing AWQ; hence, in Table 3, we only present results for PaLM2-Otter.)

Runtime.

The runtimes of various techniques for quantizing PaLM2-Otter and PaLM2-Bison are presented in Table 6. To be precise, the table presents the time taken to quantize all the FFN and attention layers in the model, for per-channel quantization. For group quantization runtimes, please refer to Appendix E. It can be seen that CD and GPTQ have comparable runtimes. While BCD takes longer than CD, the majority of its time is spent quantizing the 2nd FFN layer, where the quantization dimension is equivalent to the model's hidden dimension. So, in practice, to speed up the entire process, one could rely on CD for quantizing FFN2 and use BCD for quantizing the rest of the weight matrices.

Table 4: Perplexity for FFN + attention quantization, with all algorithms run on the clipped Hessian (best values in bold).

| Config | Method | PaLM2-Gecko | PaLM2-Otter | PaLM2-Bison |
| w16a16 | – | 7.948 | 5.984 | 5.298 |
| w3a16 | OWC | 41.751 | 54.499 | 6.446 |
| | GPTQ | 24.054 | **11.384** | 5.801 |
| | CD | 19.692 | 11.542 | 5.770 |
| | BCD(k=2) | **19.312** | 11.392 | **5.762** |
| w3a16g128 | OWC | 20.301 | 9.785 | 5.973 |
| | GPTQ | 15.042 | 7.501 | 5.698 |
| | CD | 14.757 | 7.454 | 5.679 |
| | BCD(k=2) | 14.691 | **7.438** | 5.673 |
| | OWC-CD | 16.268 | 7.617 | 5.746 |
| | OWC-CD + CD | 14.285 | 7.466 | 5.675 |
| | OWC-CD + BCD(k=2) | **14.242** | 7.458 | **5.670** |
| w4a16 | OWC | 10.683 | 7.194 | 5.508 |
| | GPTQ | 10.005 | 6.645 | 5.425 |
| | CD | 9.991 | 6.627 | 5.415 |
| | BCD(k=2) | **9.965** | **6.621** | **5.413** |
| w4a16g128 | OWC | 9.402 | 6.415 | 5.420 |
| | GPTQ | 9.160 | 6.241 | 5.381 |
| | CD | 9.124 | 6.226 | 5.377 |
| | BCD(k=2) | 9.120 | **6.225** | 5.376 |
| | OWC-CD | 9.241 | 6.261 | 5.387 |
| | OWC-CD + CD | 9.094 | 6.244 | 5.375 |
| | OWC-CD + BCD(k=2) | **9.093** | 6.242 | **5.374** |

Table 5: w2a16g128 (INT2) quantization of PaLM2-Otter and PaLM2-Bison: perplexity and 1-shot downstream evals for multiple epochs of CD and BCD (best values in bold).

| Method | Epochs | Otter Perplexity | Otter Generation | Otter Rank | Otter Avg. | Bison Perplexity | Bison Generation | Bison Rank | Bison Avg. |
| GPTQ | – | 10.816 | 26.98 | 67.14 | 51.07 | 7.230 | 39.32 | 74.19 | 60.24 |
| CD | 1 | 9.917 | 29.87 | 66.76 | 52.00 | 7.123 | 38.73 | 73.56 | 59.63 |
| BCD(k=2) | 1 | 9.822 | 30.03 | 66.82 | 52.11 | 7.094 | 38.85 | 73.99 | 59.93 |
| | 2 | 9.760 | 30.09 | **67.45** | **52.51** | **7.088** | **39.35** | 74.10 | 60.20 |
| | 3 | 9.719 | **30.19** | 67.09 | 52.33 | 7.094 | 39.30 | **74.24** | **60.26** |
| BCD(k=4) | 1 | **9.708** | 30.11 | 67.09 | 52.30 | 7.099 | 38.98 | 74.20 | 60.11 |
| | 2 | 9.752 | 30.00 | 66.70 | 52.02 | 7.099 | 39.09 | 74.08 | 60.08 |
| | 3 | 9.749 | 30.07 | 66.93 | 52.19 | 7.099 | 39.05 | 74.10 | 60.08 |

Table 6: Time (in minutes) to quantize all FFN and attention layers, per-channel quantization.

| Config | Method | PaLM2-Otter FFN | PaLM2-Otter Attn. | PaLM2-Bison FFN | PaLM2-Bison Attn. |
| w3a16 | GPTQ | 0.90 | 0.72 | 6.42 | 0.63 |
| | CD | 2.94 | 0.49 | 16.13 | 0.43 |
| | BCD(k=2) | 14.03 | 2.08 | 82.13 | 1.81 |
| w4a16 | GPTQ | 0.90 | 0.64 | 6.60 | 0.56 |
| | CD | 3.59 | 0.59 | 21.19 | 0.52 |
| | BCD(k=2) | 34.85 | 4.72 | 235.14 | 4.13 |

5 Conclusion and Future Work

In this work, we developed a coordinate descent framework (CDQuant) for quantization of LLMs. CDQuant is a simple and effective alternative to GPTQ that consistently outperformed it on PaLM2 models. The simplicity of our algorithm makes it a seamless substitute for GPTQ in various algorithmic contexts where GPTQ currently functions as a sub-routine. Our future work aims to further improve the performance of CDQuant. In particular, we aim to speed up our BCD algorithm and make it as fast as CD. Furthermore, we will focus on developing layer-wise loss functions that are more closely aligned with the end-to-end loss, thereby reducing the performance gap between full-precision and quantized models. Finally, we will also explore integrating CDQuant with QuIP and FrameQuant to evaluate potential performance gains for extreme low-bit quantization.

Acknowledgements

We are grateful to Kyuyeun Kim, Wonpyo Park, and Jaehong Kim for assisting in setting up the calibration pipelines, and Prateek Jain for helpful discussions, support, and feedback.

References

  • Touvron etal. [2023]Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample.Llama: Open and efficient foundation language models.arXiv preprint arXiv:2302.13971, 2023.
  • OpenAI [2023]OpenAI.Gpt-4 technical report.Technical report, 2023.URL https://cdn.openai.com/papers/gpt-4.pdf.
  • Google [2023]GeminiTeam Google.Gemini: A family of highly capable multimodal models.arXiv preprint arXiv:2312.11805, 2023.
  • Miao etal. [2023]Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Hongyi Jin, Tianqi Chen, and Zhihao Jia.Towards efficient generative large language model serving: A survey from algorithms to systems.arXiv preprint arXiv:2312.15234, 2023.
  • Frantar etal. [2022]Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh.Gptq: Accurate post-training quantization for generative pre-trained transformers.arXiv preprint arXiv:2210.17323, 2022.
  • Dettmers etal. [2023]Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, and Dan Alistarh.Spqr: A sparse-quantized representation for near-lossless llm weight compression.arXiv preprint arXiv:2306.03078, 2023.
  • Lee etal. [2023]Changhun Lee, Jungyu Jin, Taesu Kim, Hyungjun Kim, and Eunhyeok Park.Owq: Lessons learned from activation outliers for weight quantization in large language models.arXiv preprint arXiv:2306.02272, 2023.
  • Chee etal. [2024]Jerry Chee, Yaohui Cai, Volodymyr Kuleshov, and ChristopherM DeSa.Quip: 2-bit quantization of large language models with guarantees.Advances in Neural Information Processing Systems, 36, 2024.
  • Adepu etal. [2024]Harshavardhan Adepu, Zhanpeng Zeng, LiZhang, and Vikas Singh.Framequant: Flexible low-bit quantization for transformers.arXiv preprint arXiv:2403.06082, 2024.
  • Lin etal. [2023]JiLin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han.Awq: Activation-aware weight quantization for llm compression and acceleration.arXiv preprint arXiv:2306.00978, 2023.
  • Xiao etal. [2023]Guangxuan Xiao, JiLin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han.Smoothquant: Accurate and efficient post-training quantization for large language models.In International Conference on Machine Learning, pages 38087–38099. PMLR, 2023.
  • Dettmers etal. [2021]Tim Dettmers, Mike Lewis, Sam Shleifer, and Luke Zettlemoyer.8-bit optimizers via block-wise quantization.arXiv preprint arXiv:2110.02861, 2021.
  • Dettmers and Zettlemoyer [2023]Tim Dettmers and Luke Zettlemoyer.The case for 4-bit precision: k-bit inference scaling laws.In International Conference on Machine Learning, pages 7750–7774. PMLR, 2023.
  • Anil etal. [2023]Rohan Anil, AndrewM Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, etal.Palm 2 technical report.arXiv preprint arXiv:2305.10403, 2023.
  • Frantar and Alistarh [2022]Elias Frantar and Dan Alistarh.Optimal brain compression: A framework for accurate post-training quantization and pruning.Advances in Neural Information Processing Systems, 35:4475–4488, 2022.
  • LeCun etal. [1989]Yann LeCun, John Denker, and Sara Solla.Optimal brain damage.Advances in neural information processing systems, 2, 1989.
  • Ma etal. [2024]Yuexiao Ma, Huixia Li, Xiawu Zheng, Feng Ling, Xuefeng Xiao, Rui Wang, Shilei Wen, Fei Chao, and Rongrong Ji.Affinequant: Affine transformation quantization for large language models.arXiv preprint arXiv:2403.12544, 2024.
  • Behdin etal. [2023]Kayhan Behdin, Ayan Acharya, Aman Gupta, Qingquan Song, Siyu Zhu, Sathiya Keerthi, and Rahul Mazumder.Quantease: Optimization-based quantization for language models.arXiv preprint arXiv:2309.01885, 2023.
  • Wei etal. [2023]Xiuying Wei, Yunchen Zhang, Yuhang Li, Xiangguo Zhang, Ruihao Gong, Jinyang Guo, and Xianglong Liu.Outlier suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling.arXiv preprint arXiv:2304.09145, 2023.
  • Wei etal. [2022]Xiuying Wei, Yunchen Zhang, Xiangguo Zhang, Ruihao Gong, Shanghang Zhang, QiZhang, Fengwei Yu, and Xianglong Liu.Outlier suppression: Pushing the limit of low-bit transformer language models.Advances in Neural Information Processing Systems, 35:17402–17414, 2022.
  • Shao etal. [2023]Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, YuQiao, and Ping Luo.Omniquant: Omnidirectionally calibrated quantization for large language models.arXiv preprint arXiv:2308.13137, 2023.
  • Liu etal. [2023]Jing Liu, Ruihao Gong, Xiuying Wei, Zhiwei Dong, Jianfei Cai, and Bohan Zhuang.Qllm: Accurate and efficient low-bitwidth quantization for large language models.arXiv preprint arXiv:2310.08041, 2023.
  • Dettmers etal. [2022]Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer.Gpt3. int8 (): 8-bit matrix multiplication for transformers at scale.Advances in Neural Information Processing Systems, 35:30318–30332, 2022.
  • Chai etal. [2023]Yuji Chai, John Gkountouras, GlennG Ko, David Brooks, and Gu-Yeon Wei.Int2. 1: Towards fine-tunable quantized large language models with error correction through low-rank adaptation.arXiv preprint arXiv:2306.08162, 2023.
  • Dettmers etal. [2024]Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer.Qlora: Efficient finetuning of quantized llms.Advances in Neural Information Processing Systems, 36, 2024.
  • Hu etal. [2021]EdwardJ Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, LuWang, and Weizhu Chen.Lora: Low-rank adaptation of large language models.arXiv preprint arXiv:2106.09685, 2021.
  • Chrétien and Corset [2009]Stéphane Chrétien and Franck Corset.Using the eigenvalue relaxation for binary least-squares estimation problems.Signal processing, 89(11):2079–2091, 2009.
  • Park and Boyd [2018]Jaehyun Park and Stephen Boyd.A semidefinite programming method for integer convex quadratic minimization.Optimization Letters, 12:499–518, 2018.
  • Nagel etal. [2020]Markus Nagel, RanaAli Amjad, Mart VanBaalen, Christos Louizos, and Tijmen Blankevoort.Up or down? adaptive rounding for post-training quantization.In International Conference on Machine Learning, pages 7197–7206. PMLR, 2020.
  • Li etal. [2021]Yuhang Li, Ruihao Gong, XuTan, Yang Yang, Peng Hu, QiZhang, Fengwei Yu, Wei Wang, and Shi Gu.Brecq: Pushing the limit of post-training quantization by block reconstruction.arXiv preprint arXiv:2102.05426, 2021.
  • Hubara etal. [2021]Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, and Daniel Soudry.Accurate post training quantization with small calibration sets.In International Conference on Machine Learning, pages 4466–4475. PMLR, 2021.
  • Samaga etal. [2024]Yashas Samaga, Varun Yerram, Chong You, Srinadh Bhojanapalli, Sanjiv Kumar, Prateek Jain, and Praneeth Netrapalli.Hire: High recall approximate top-k estimation for efficient llm inference.ArXiv, abs/2402.09360, 2024.URL https://api.semanticscholar.org/CorpusID:267657774.
  • Marcus etal. [1994]MitchellP. Marcus, Grace Kim, MaryAnn Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger.The penn treebank: Annotating predicate argument structure.In Human Language Technology, Proceedings of a Workshop held at Plainsboro, New Jerey, USA, March 8-11, 1994. Morgan Kaufmann, 1994.URL https://aclanthology.org/H94-1020/.
  • Merity etal. [2017]Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher.Pointer sentinel mixture models.In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.URL https://openreview.net/forum?id=Byj72udxe.
  • Joshi etal. [2017]Mandar Joshi, Eunsol Choi, DanielS. Weld, and Luke Zettlemoyer.Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension.In Regina Barzilay and Min-Yen Kan, editors, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1601–1611. Association for Computational Linguistics, 2017.doi: 10.18653/V1/P17-1147.URL https://doi.org/10.18653/v1/P17-1147.
  • Rajpurkar etal. [2018]Pranav Rajpurkar, Robin Jia, and Percy Liang.Know what you don’t know: Unanswerable questions for squad.In Iryna Gurevych and Yusuke Miyao, editors, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 784–789. Association for Computational Linguistics, 2018.doi: 10.18653/V1/P18-2124.URL https://aclanthology.org/P18-2124/.
  • Kwiatkowski etal. [2019]Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, AnkurP. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, AndrewM. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov.Natural questions: a benchmark for question answering research.Trans. Assoc. Comput. Linguistics, 7:452–466, 2019.doi: 10.1162/TACL\_A\_00276.URL https://doi.org/10.1162/tacl_a_00276.
  • Berant etal. [2013]Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang.Semantic parsing on freebase from question-answer pairs.In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1533–1544. ACL, 2013.URL https://aclanthology.org/D13-1160/.
  • Clark etal. [2018]Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord.Think you have solved question answering? try arc, the AI2 reasoning challenge.CoRR, abs/1803.05457, 2018.URL http://arxiv.org/abs/1803.05457.
  • Zellers etal. [2019]Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi.Hellaswag: Can a machine really finish your sentence?In Anna Korhonen, DavidR. Traum, and Lluís Màrquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics, 2019.doi: 10.18653/V1/P19-1472.URL https://doi.org/10.18653/v1/p19-1472.
  • Clark etal. [2019]Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova.Boolq: Exploring the surprising difficulty of natural yes/no questions.In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2924–2936. Association for Computational Linguistics, 2019.doi: 10.18653/V1/N19-1300.URL https://doi.org/10.18653/v1/n19-1300.
  • Bisk etal. [2020]Yonatan Bisk, Rowan Zellers, RonanLe Bras, Jianfeng Gao, and Yejin Choi.PIQA: reasoning about physical commonsense in natural language.In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432–7439. AAAI Press, 2020.doi: 10.1609/AAAI.V34I05.6239.URL https://doi.org/10.1609/aaai.v34i05.6239.
  • Sakaguchi etal. [2020]Keisuke Sakaguchi, RonanLe Bras, Chandra Bhagavatula, and Yejin Choi.Winogrande: An adversarial winograd schema challenge at scale.In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8732–8740. AAAI Press, 2020.doi: 10.1609/AAAI.V34I05.6399.URL https://doi.org/10.1609/aaai.v34i05.6399.

Appendix A Broader Impact

We introduce a Coordinate Descent based approach, CDQuant, for compressing Large Language Models. Our method uses only a small amount of data for calibration. We do not foresee any ethical implications arising from the technical aspects of our approach. However, compressing LLMs may give rise to bias effects, a study of which seems essential given the extensive use of LLMs. Our work may be of assistance to such studies. Also, since quantization allows for easier deployment of LLMs, it could have potential societal implications which seem difficult to predict.

Appendix B Limitations

  • While both CD and BCD outperformed GPTQ in our experiments, BCD achieves slightly better performance than CD. However, BCD is not as fast as CD and can be expensive for large models. In the future, we aim to develop techniques to speed up BCD.

  • Our algorithms still don't bridge the gap between QAT and PTQ, especially on smaller models. To bridge this gap, we believe one should move away from the $\ell_2^2$ surrogate loss considered by most existing work, and instead design surrogate losses that are more closely aligned with the end-to-end loss.

  • Due to computational constraints, we could not run experiments where we replace the GPTQ component with CDQuant in algorithms such as QuIP and FrameQuant.

Appendix C CDQuant for sub-channel quantization

To simplify the explanation in this section, we introduce a slightly modified notation. Given a weight vector $\mathbf{w}$, we represent its sub-channel quantization as $\mathbf{a}\odot\mathbf{q}+\mathbf{b}$, where $\odot$ denotes elementwise multiplication, and $\mathbf{a},\mathbf{b}\in\mathbb{R}^{d_{\text{in}}}$ are the scale and bias parameters satisfying $\mathbf{a}_k=\mathbf{a}_l,\ \mathbf{b}_k=\mathbf{b}_l$ for any two indices $k,l$ that fall in the same group. With this notation, for any given $\mathbf{a},\mathbf{b}$, the optimization problem in Equation (3) can be rewritten as

$$\min_{\mathbf{q}}\ \|X(\mathbf{w}-\mathbf{a}\odot\mathbf{q}-\mathbf{b})\|_2^2. \qquad (5)$$

Letting $D_{\mathbf{a}}=\mathrm{diag}(\mathbf{a})$, $\tilde{X}=XD_{\mathbf{a}}$, $\tilde{\mathbf{w}}=D_{\mathbf{a}}^{-1}\mathbf{w}$, $\tilde{\mathbf{b}}=D_{\mathbf{a}}^{-1}\mathbf{b}$, the above problem can be further rewritten as

$$\min_{\mathbf{q}}\ \|\tilde{X}(\tilde{\mathbf{w}}-\mathbf{q}-\tilde{\mathbf{b}})\|_2^2. \qquad (6)$$

Observe that this problem is the same as the per-channel quantization problem described in Equation (1), but with modified parameters $\tilde{X},\tilde{\mathbf{w}},\tilde{\mathbf{b}}$. So extending CDQuant to sub-channel quantization simply involves running Algorithms 1 and 2 with these modified parameters. Algorithms 4 and 5 present the resulting procedures.
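In code, this reduction is a simple rescaling of the inputs, weights, and bias; a minimal NumPy sketch of the transformation defined above (variable names are ours):

```python
import numpy as np

def to_per_channel_problem(X, w, a, b):
    """Rewrite the sub-channel objective (Eq. 5) in the per-channel form (Eq. 6).

    With D_a = diag(a):  X~ = X D_a,  w~ = D_a^{-1} w,  b~ = D_a^{-1} b,
    so that  X (w - a*q - b) = X~ (w~ - q - b~)  and the per-channel
    CDQuant routine can be run unchanged on the transformed quantities.
    """
    X_tilde = X * a      # column-wise scaling, same as X @ np.diag(a)
    w_tilde = w / a
    b_tilde = b / a
    return X_tilde, w_tilde, b_tilde
```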

Algorithm 4 (CD for sub-channel quantization)

1: Input: $T$ – coordinate descent steps, $X$ – input data matrix, $\mathbf{w}$ – vector to be quantized, $\mathbf{a}$ – scale, $\mathbf{b}$ – bias, $\mathbf{q}_0$ – initial estimate

2: $\tilde{X}\leftarrow X\,\mathrm{diag}(\mathbf{a})$, $\tilde{\mathbf{w}}\leftarrow\mathrm{diag}(\mathbf{a})^{-1}\mathbf{w}$, $\tilde{\mathbf{b}}\leftarrow\mathrm{diag}(\mathbf{a})^{-1}\mathbf{b}$

3: Compute the Hessian $H\leftarrow\tilde{X}^{T}\tilde{X}$

4: Compute the gradient $\mathbf{g}\leftarrow 2H(\mathbf{q}_0-(\tilde{\mathbf{w}}-\tilde{\mathbf{b}}))$

5: for $t\in[1:T]$ do

6: Find the coordinate that leads to the largest reduction in loss:
$$i^*,r^* = \operatorname*{arg\,min}_{i\in\{0,1,\dots,d_{\text{in}}-1\},\ r\in\{0,1,\dots,2^{c}-1\}} (r-\mathbf{q}_{t-1,i})^{2}H_{i,i}+(r-\mathbf{q}_{t-1,i})\,\mathbf{g}_i$$

7: Update the gradient: $\mathbf{g}\leftarrow\mathbf{g}+2(r^*-\mathbf{q}_{t-1,i^*})H_{i^*,\cdot}$, where $H_{i^*,\cdot}$ is the $i^*$-th column of $H$

8: Update the iterate: $\mathbf{q}_t\leftarrow\mathbf{q}_{t-1}+(r^*-\mathbf{q}_{t-1,i^*})\mathbf{e}_{i^*}$, where $\mathbf{e}_{i^*}$ is the standard basis vector with $1$ in position $i^*$ and $0$ everywhere else

9: end for
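For reference, here is a minimal NumPy sketch of the greedy coordinate-descent loop in Algorithm 4, specialized to the per-channel case (scale $\mathbf{a}=\mathbf{1}$, bias $\mathbf{b}=\mathbf{0}$) and using our own variable names; one epoch corresponds to $d_{\text{in}}$ steps. It is a sketch under those assumptions, not the exact implementation used in our experiments.

```python
import numpy as np

def greedy_cd_quantize(X, w, num_bits=3, num_steps=None):
    """Greedy CD for min_q ||X (w - q)||_2^2 with q in {0, ..., 2^c - 1}^d.

    `w` is assumed to already be shifted/scaled into the quantization grid
    (i.e., the per-channel problem of Eq. 6).
    """
    d = w.shape[0]
    levels = np.arange(2 ** num_bits, dtype=np.float64)    # {0, 1, ..., 2^c - 1}
    if num_steps is None:
        num_steps = d                                      # one epoch
    H = X.T @ X                                            # Hessian of the quadratic
    q = np.clip(np.round(w), 0, levels[-1])                # nearest-rounding initialization
    g = 2.0 * H @ (q - w)                                  # gradient at q
    diag_H = np.diag(H)
    for _ in range(num_steps):
        # Loss change if coordinate i is moved to level r:
        #   (r - q_i)^2 * H_ii + (r - q_i) * g_i
        delta = levels[None, :] - q[:, None]               # shape (d, 2^c)
        change = delta ** 2 * diag_H[:, None] + delta * g[:, None]
        i, r = np.unravel_index(np.argmin(change), change.shape)
        if change[i, r] >= 0:                              # no improving move left
            break
        step = levels[r] - q[i]
        g += 2.0 * step * H[:, i]                          # rank-1 gradient update
        q[i] += step
    return q
```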

Algorithm 5 (BCD for sub-channel quantization)

1: Input: $T$ – coordinate descent steps, $k$ – block size, $X$ – input data matrix, $\mathbf{w}$ – vector to be quantized, $\mathbf{a}$ – scale, $\mathbf{b}$ – bias, $\mathbf{q}_0$ – initial estimate

2: $\tilde{X}\leftarrow X\,\mathrm{diag}(\mathbf{a})$, $\tilde{\mathbf{w}}\leftarrow\mathrm{diag}(\mathbf{a})^{-1}\mathbf{w}$, $\tilde{\mathbf{b}}\leftarrow\mathrm{diag}(\mathbf{a})^{-1}\mathbf{b}$

3: Compute the Hessian $H\leftarrow\tilde{X}^{T}\tilde{X}$

4: Compute the gradient $\mathbf{g}\leftarrow 2H(\mathbf{q}_0-(\tilde{\mathbf{w}}-\tilde{\mathbf{b}}))$

5: for $t\in[1:T]$ do

6: Randomly partition the set $\{0,1,\dots,d_{\text{in}}-1\}$ into $d_{\text{in}}/k$ blocks, each of size $k$

7: Find the block that leads to the largest reduction in loss:
$$i^*,r^* = \operatorname*{arg\,min}_{i\in\{0,1,\dots,d_{\text{in}}/k-1\},\ r\in\{0,1,\dots,2^{c}-1\}^{k}} (r-\mathbf{q}_{t-1,i})^{T}H_{i,i}(r-\mathbf{q}_{t-1,i})+(r-\mathbf{q}_{t-1,i})^{T}\mathbf{g}_i,$$
where $\mathbf{q}_{t-1,i}, H_{i,i}$ are the sub-vector and sub-matrix of $\mathbf{q}_{t-1}, H$ corresponding to block $i$

8: Update the gradient: $\mathbf{g}\leftarrow\mathbf{g}+2H_{i^*,\cdot}(r^*-\mathbf{q}_{t-1,i^*})$

9: Update the iterate: $\mathbf{q}_{t-1,i^*}\leftarrow r^*$, $\mathbf{q}_t\leftarrow\mathbf{q}_{t-1}$

10: end for
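A corresponding sketch of one BCD step (Algorithm 5, again in the per-channel case): the coordinates are randomly partitioned into blocks of size $k$, every joint assignment within each block is scored, and the single best improving block update is applied. Names and structure are illustrative; exhaustive enumeration over the $(2^c)^k$ assignments is only practical for small $k$.

```python
import numpy as np
from itertools import product

def greedy_bcd_step(H, g, q, levels, k=2, rng=None):
    """One BCD step for min_q ||X (w - q)||_2^2 (per-channel case).

    Scores every assignment r in levels^k for every block B via
        (r - q_B)^T H_BB (r - q_B) + (r - q_B)^T g_B
    and applies the best improving update.  Assumes k divides len(q).
    """
    rng = rng or np.random.default_rng()
    d = q.shape[0]
    blocks = rng.permutation(d).reshape(d // k, k)             # random partition into blocks
    candidates = np.array(list(product(levels, repeat=k)))     # all (2^c)^k joint assignments
    best_change, best_block, best_assign = 0.0, None, None
    for B in blocks:
        delta = candidates - q[B]                              # (num_candidates, k)
        H_BB, g_B = H[np.ix_(B, B)], g[B]
        change = np.einsum('ck,kl,cl->c', delta, H_BB, delta) + delta @ g_B
        j = int(np.argmin(change))
        if change[j] < best_change:
            best_change, best_block, best_assign = change[j], B, candidates[j]
    if best_block is not None:                                 # apply the best improving move
        step = best_assign - q[best_block]
        g = g + 2.0 * H[:, best_block] @ step                  # block gradient update
        q = q.copy()
        q[best_block] = best_assign
    return g, q
```

Running $d_{\text{in}}/k$ such steps corresponds to one pass over the blocks; repeating the loop gives multiple epochs as in Table 5.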

Appendix D Detailed Downstream Evaluation Results

PaLM2-Bison:

| Config | Method | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande |
| w16a16 | – | 27.84 | 75.19 | 77.42 | 23.87 | 59.04 | 84.64 | 88.87 | 82.50 | 82.43 | 78.22 |
| w3a16 | OWC | 22.16 | 75.84 | 70.31 | 21.75 | 57.34 | 83.50 | 86.94 | 79.60 | 81.12 | 78.06 |
| | GPTQ | 24.29 | 74.65 | 72.29 | 22.98 | 58.70 | 85.02 | 87.58 | 80.81 | 81.72 | 77.66 |
| | CD | 23.60 | 74.92 | 72.25 | 23.13 | 58.53 | 84.68 | 88.20 | 80.60 | 81.61 | 77.74 |
| | BCD(k=2) | 23.24 | 74.78 | 72.06 | 22.24 | 58.62 | 84.55 | 88.04 | 80.69 | 81.83 | 78.22 |
| w3a16g128 | OWC | 23.88 | 73.59 | 73.31 | 21.56 | 57.85 | 83.84 | 88.23 | 80.67 | 80.79 | 78.69 |
| | GPTQ | 25.32 | 73.96 | 74.10 | 23.23 | 58.02 | 84.39 | 88.50 | 80.67 | 81.83 | 78.06 |
| | CD | 25.24 | 74.16 | 74.11 | 23.18 | 57.51 | 83.75 | 88.38 | 81.15 | 82.15 | 77.98 |
| | BCD(k=2) | 24.93 | 74.05 | 73.90 | 23.18 | 56.74 | 84.05 | 88.26 | 81.06 | 82.43 | 78.53 |
| | OWC-CD | 24.60 | 74.03 | 73.80 | 21.75 | 57.17 | 84.18 | 88.2 | 80.97 | 81.94 | 77.27 |
| | OWC-CD + CD | 25.12 | 74.37 | 73.99 | 23.38 | 58.36 | 85.44 | 88.65 | 80.93 | 81.77 | 77.98 |
| | OWC-CD + BCD(k=2) | 25.32 | 73.54 | 74.30 | 23.18 | 57.34 | 83.96 | 88.59 | 81.01 | 81.39 | 77.51 |
| w4a16 | OWC | 26.57 | 74.66 | 75.84 | 23.13 | 58.28 | 83.42 | 88.75 | 81.39 | 81.66 | 78.85 |
| | GPTQ | 27.06 | 74.87 | 76.45 | 22.74 | 58.87 | 84.60 | 88.53 | 81.72 | 82.48 | 78.69 |
| | CD | 26.79 | 74.40 | 76.35 | 22.83 | 59.22 | 84.68 | 88.99 | 81.60 | 82.10 | 78.69 |
| | BCD(k=2) | 27.06 | 74.54 | 76.28 | 23.13 | 59.81 | 84.76 | 88.56 | 81.70 | 82.26 | 77.74 |
| w4a16g128 | OWC | 27.04 | 75.37 | 76.42 | 23.13 | 59.73 | 84.97 | 88.87 | 81.94 | 82.32 | 78.45 |
| | GPTQ | 27.12 | 75.63 | 76.42 | 23.38 | 58.45 | 84.76 | 89.05 | 82.01 | 82.15 | 78.85 |
| | CD | 27.31 | 75.52 | 76.45 | 23.62 | 59.04 | 85.02 | 88.50 | 82.04 | 82.54 | 78.69 |
| | BCD(k=2) | 27.37 | 75.44 | 76.92 | 23.62 | 58.87 | 85.35 | 88.72 | 81.98 | 82.97 | 79.48 |
| | OWC-CD | 27.31 | 75.70 | 76.64 | 22.93 | 58.36 | 84.64 | 88.56 | 81.98 | 82.59 | 78.14 |
| | OWC-CD + CD | 27.01 | 75.52 | 76.69 | 23.28 | 58.79 | 85.10 | 88.56 | 81.98 | 82.15 | 78.69 |
| | OWC-CD + BCD(k=2) | 26.98 | 75.42 | 76.84 | 23.33 | 58.79 | 85.35 | 88.47 | 81.93 | 82.32 | 78.14 |

PaLM2-Otter:

| Config | Method | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande |
| w16a16 | – | 19.78 | 70.3 | 69.1 | 18.06 | 54.1 | 80.13 | 85.47 | 79.51 | 79.43 | 73.95 |
| w3a16 | OWC | 5.96 | 50.17 | 24.8 | 3.15 | 45.99 | 66.84 | 74.83 | 66.98 | 72.47 | 67.56 |
| | GPTQ | 13.77 | 66.12 | 56.15 | 12.99 | 52.56 | 78.58 | 85.17 | 76.75 | 78.35 | 72.3 |
| | CD | 14.49 | 65.71 | 58.83 | 13.44 | 52.47 | 79.92 | 83.03 | 75.81 | 77.09 | 71.9 |
| | BCD(k=2) | 14.63 | 65.96 | 59.46 | 14.07 | 52.73 | 79.84 | 83.09 | 75.93 | 77.42 | 71.74 |
| w3a16g128 | OWC | 10.89 | 67.98 | 45.03 | 8.81 | 48.29 | 74.83 | 79.14 | 74.64 | 75.52 | 72.45 |
| | GPTQ | 16.12 | 69.8 | 60.82 | 14.47 | 53.16 | 79.34 | 84.07 | 77.12 | 77.86 | 73.16 |
| | CD | 15.46 | 71.62 | 62.07 | 16.19 | 52.3 | 78.79 | 84.34 | 77.02 | 78.35 | 74.82 |
| | BCD(k=2) | 16.04 | 71.04 | 62.02 | 15.65 | 51.71 | 78.32 | 84.83 | 77.04 | 77.69 | 74.03 |
| | OWC-CD | 15.62 | 71.78 | 61.19 | 14.76 | 52.99 | 79.34 | 84.1 | 76.92 | 77.42 | 75.22 |
| | OWC-CD + CD | 16.2 | 70.77 | 61.8 | 14.71 | 53.41 | 79.42 | 84.5 | 77.22 | 78.29 | 73.64 |
| | OWC-CD + BCD(k=2) | 16.7 | 71.3 | 61.87 | 14.91 | 53.84 | 79.8 | 84.37 | 77.25 | 78.45 | 74.19 |
| w4a16 | OWC | 16.81 | 71.7 | 61.75 | 16.54 | 53.33 | 78.24 | 85.17 | 77.32 | 79.33 | 71.9 |
| | GPTQ | 19 | 70.66 | 66.01 | 16.34 | 53.67 | 79.46 | 85.57 | 78.5 | 79.05 | 72.93 |
| | CD | 18.7 | 70.97 | 66.11 | 17.08 | 54.27 | 80.89 | 85.87 | 78.55 | 79.05 | 72.38 |
| | BCD(k=2) | 19.14 | 70.9 | 65.72 | 16.88 | 54.78 | 80.6 | 85.6 | 78.64 | 79.65 | 73.72 |
| w4a16g128 | OWC | 18.03 | 69.97 | 65.28 | 15.94 | 53.41 | 79.42 | 84.89 | 78.1 | 79.16 | 73.56 |
| | GPTQ | 19.17 | 70.34 | 67.46 | 17.18 | 53.58 | 79.88 | 85.02 | 79 | 79.11 | 73.56 |
| | CD | 19.25 | 71.12 | 67.61 | 17.13 | 53.33 | 80.18 | 84.92 | 79.09 | 79.11 | 72.93 |
| | BCD(k=2) | 19.45 | 70.81 | 67.75 | 17.08 | 54.27 | 80.3 | 84.74 | 79.05 | 78.94 | 72.93 |
| | OWC-CD | 19.2 | 70.58 | 67.66 | 17.22 | 53.24 | 80.51 | 85.05 | 78.84 | 79.49 | 74.74 |
| | OWC-CD + CD | 19.64 | 70.8 | 68.02 | 17.52 | 53.75 | 80.77 | 84.98 | 79.09 | 78.78 | 74.27 |
| | OWC-CD + BCD(k=2) | 19.31 | 70.41 | 67.91 | 17.37 | 53.67 | 80.3 | 84.95 | 79.07 | 78.78 | 74.43 |

PaLM2-Gecko:

| Config | Method | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande |
| w16a16 | – | 7.67 | 49.04 | 37.72 | 8.96 | 37.12 | 68.1 | 66.15 | 61.55 | 73.99 | 61.72 |
| w3a16 | OWC | 3.71 | 48.17 | 18.84 | 5.07 | 30.03 | 60.1 | 59.69 | 53.45 | 70.24 | 60.93 |
| | GPTQ | 4.38 | 46.91 | 20.43 | 5.95 | 31.23 | 62.84 | 63.55 | 56.51 | 71.65 | 60.46 |
| | CD | 4.63 | 48.1 | 22.79 | 7.23 | 32.59 | 63.51 | 62.14 | 57.6 | 72.09 | 60.38 |
| | BCD(k=2) | 4.52 | 48.84 | 22.09 | 6.64 | 32.51 | 64.14 | 63.24 | 57.53 | 72.2 | 61.01 |
| w3a16g128 | OWC | 4.43 | 57.48 | 20.99 | 8.42 | 32.59 | 63.09 | 62.05 | 53.45 | 71.38 | 59.59 |
| | GPTQ | 4.85 | 53.14 | 23.25 | 7.73 | 33.11 | 64.73 | 64.43 | 56.19 | 72.42 | 61.09 |
| | CD | 4.49 | 54.43 | 24.12 | 8.12 | 33.28 | 65.15 | 64.22 | 56.56 | 71.98 | 61.64 |
| | BCD(k=2) | 4.29 | 53.58 | 24.41 | 8.27 | 33.7 | 65.32 | 64.25 | 56.25 | 71.71 | 61.88 |
| | OWC-CD | 4.6 | 53.86 | 23.23 | 7.14 | 32.85 | 64.81 | 64.65 | 56.31 | 71.82 | 61.72 |
| | OWC-CD + CD | 4.52 | 52.4 | 25.07 | 7.78 | 32.76 | 64.44 | 65.11 | 56.85 | 71.93 | 61.96 |
| | OWC-CD + BCD(k=2) | 4.65 | 51.15 | 25.13 | 7.68 | 33.19 | 64.9 | 64.68 | 56.88 | 71.82 | 61.17 |
| w4a16 | OWC | 6.09 | 48.4 | 31.06 | 7.53 | 35.24 | 65.07 | 64.59 | 59.59 | 72.96 | 63.85 |
| | GPTQ | 6.81 | 50.67 | 32.5 | 8.07 | 36.43 | 65.53 | 65.05 | 60.16 | 73.29 | 63.85 |
| | CD | 6.81 | 51.07 | 31.87 | 8.66 | 35.58 | 66.92 | 64.65 | 60.22 | 73.23 | 63.14 |
| | BCD(k=2) | 6.65 | 50.86 | 31.67 | 8.91 | 35.15 | 67.21 | 64.4 | 60.31 | 73.45 | 63.22 |
| w4a16g128 | OWC | 7.12 | 47.85 | 33.58 | 9.99 | 35.58 | 67.09 | 68.38 | 60.15 | 73.5 | 62.9 |
| | GPTQ | 7.2 | 51.6 | 33.94 | 8.86 | 36.18 | 67.21 | 66.51 | 59.99 | 73.5 | 63.38 |
| | CD | 7.59 | 48.87 | 35.22 | 9.06 | 35.92 | 68.01 | 67.09 | 60.49 | 73.45 | 63.54 |
| | BCD(k=2) | 7.29 | 48.72 | 34.67 | 9.1 | 36.69 | 68.01 | 67.28 | 60.7 | 73.34 | 62.9 |
| | OWC-CD | 7.31 | 48.52 | 33.2 | 9.3 | 35.75 | 67.59 | 66.97 | 60.8 | 74.21 | 63.06 |
| | OWC-CD + CD | 7.15 | 49.15 | 34.77 | 9.4 | 36.52 | 67.8 | 66.33 | 60.74 | 73.61 | 63.61 |
| | OWC-CD + BCD(k=2) | 7.29 | 48.88 | 34.68 | 9.15 | 36.43 | 68.01 | 65.75 | 60.77 | 74.05 | 64.01 |

PaLM2-Otter, w2a16g128:

| Method | Epochs | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande |
| GPTQ | – | 6.93 | 64.89 | 28.36 | 7.73 | 44.28 | 71.04 | 78.81 | 68.29 | 72.69 | 67.72 |
| CD | 1 | 8.23 | 68.12 | 32.98 | 10.14 | 43.69 | 72.43 | 76.91 | 68.11 | 72.96 | 66.46 |
| BCD(k=2) | 1 | 8.09 | 69.2 | 32.85 | 9.99 | 43.43 | 73.02 | 77 | 68.03 | 73.07 | 66.38 |
| | 2 | 8.2 | 69.43 | 32.98 | 9.74 | 43.52 | 73.7 | 77.03 | 68.56 | 74.1 | 67.8 |
| | 3 | 8.31 | 69.79 | 33.03 | 9.65 | 43.52 | 73.11 | 77.09 | 68.76 | 73.07 | 67.01 |
| BCD(k=4) | 1 | 7.87 | 69.52 | 33.24 | 9.79 | 43.17 | 72.6 | 77.83 | 68.61 | 73.88 | 66.46 |
| | 2 | 8.31 | 69.06 | 33.03 | 9.6 | 43 | 72.6 | 76.64 | 68.51 | 73.18 | 66.3 |
| | 3 | 8.31 | 69.8 | 32.69 | 9.5 | 42.15 | 72.43 | 77.06 | 68.34 | 73.72 | 67.88 |

PaLM2-Bison, w2a16g128:

| Method | Epochs | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande |
| GPTQ | – | 15.26 | 69.34 | 55.6 | 17.08 | 51.02 | 80.13 | 86.02 | 74.44 | 78.78 | 74.74 |
| CD | 1 | 15.6 | 67.43 | 55.06 | 16.83 | 50.43 | 79.97 | 83.76 | 74.35 | 78.35 | 74.51 |
| BCD(k=2) | 1 | 15.62 | 67.32 | 55.82 | 16.63 | 51.45 | 79.97 | 83.61 | 74.46 | 79 | 75.45 |
| | 2 | 15.62 | 68.45 | 56.36 | 16.98 | 51.02 | 80.09 | 84.56 | 74.99 | 78.89 | 75.06 |
| | 3 | 16.01 | 67.99 | 56.31 | 16.88 | 51.19 | 79.88 | 84.65 | 74.95 | 79.33 | 75.45 |
| BCD(k=4) | 1 | 15.9 | 67.46 | 56.12 | 16.44 | 51.28 | 80.35 | 84.01 | 74.89 | 78.84 | 75.85 |
| | 2 | 16.04 | 67.61 | 56.34 | 16.39 | 51.54 | 80.09 | 83.91 | 74.83 | 78.73 | 75.37 |
| | 3 | 16.12 | 67.54 | 56.3 | 16.24 | 51.02 | 80.22 | 83.98 | 74.93 | 78.51 | 75.93 |

Generation (Gen.), Rank, and Average (Avg.) downstream scores for all three models:

| Config | Method | Gecko Gen. | Gecko Rank | Gecko Avg. | Otter Gen. | Otter Rank | Otter Avg. | Bison Gen. | Bison Rank | Bison Avg. |
| w16a16 | – | 25.85 | 61.44 | 47.2 | 44.31 | 75.43 | 62.98 | 51.08 | 78.28 | 68.00 |
| w3a16 | OWC | 15.45 | 55.73 | 39.62 | 15.06 | 59.29 | 41.60 | 39.58 | 75.45 | 61.10 |
| | GPTQ | 13.08 | 56.92 | 39.39 | 31.80 | 71.94 | 55.88 | 41.42 | 76.70 | 62.58 |
| | CD | 15.82 | 58.19 | 41.24 | 32.34 | 72.03 | 56.16 | 41.66 | 76.71 | 62.69 |
| | BCD(k=2) | 16.16 | 57.85 | 41.17 | 32.04 | 72.22 | 56.15 | 41.37 | 76.78 | 62.62 |
| w3a16g128 | OWC | 18.54 | 56.95 | 41.59 | 26.62 | 68.09 | 51.50 | 41.54 | 76.51 | 62.52 |
| | GPTQ | 17.96 | 58.30 | 42.16 | 33.43 | 72.98 | 57.16 | 43.47 | 76.65 | 63.38 |
| | CD | 16.81 | 58.10 | 41.58 | 32.73 | 72.62 | 56.66 | 42.95 | 77.03 | 63.4 |
| | BCD(k=2) | 16.33 | 57.81 | 41.22 | 32.13 | 72.67 | 56.46 | 42.67 | 76.85 | 63.18 |
| | OWC-CD | 16.73 | 57.74 | 41.34 | 32.90 | 72.40 | 56.60 | 42.23 | 77.02 | 63.11 |
| | OWC-CD + CD | 16.37 | 58.22 | 41.48 | 33.53 | 72.64 | 56.99 | 43.09 | 76.91 | 63.38 |
| | OWC-CD + BCD(k=2) | 16.47 | 58.45 | 41.66 | 33.71 | 72.73 | 57.12 | 43.07 | 76.70 | 63.25 |
| w4a16 | OWC | 16.07 | 59.42 | 42.08 | 33.76 | 72.22 | 56.83 | 43.59 | 77.13 | 63.71 |
| | GPTQ | 18.77 | 59.5 | 43.21 | 34.03 | 73.53 | 57.73 | 44.16 | 77.54 | 64.19 |
| | CD | 18.20 | 59.95 | 43.25 | 35.01 | 73.42 | 58.06 | 43.97 | 77.55 | 64.12 |
| | BCD(k=2) | 17.89 | 59.64 | 42.94 | 35.10 | 73.31 | 58.02 | 43.95 | 77.77 | 64.24 |
| w4a16g128 | OWC | 19.00 | 60.42 | 43.85 | 34.52 | 72.83 | 57.5 | 44.45 | 77.71 | 64.41 |
| | GPTQ | 19.07 | 60.40 | 43.87 | 35.34 | 73.60 | 58.29 | 44.25 | 77.65 | 64.29 |
| | CD | 20.35 | 60.94 | 44.70 | 36.20 | 73.67 | 58.68 | 44.21 | 77.72 | 64.31 |
| | BCD(k=2) | 20.44 | 60.69 | 44.59 | 36.11 | 73.54 | 58.57 | 44.17 | 77.56 | 64.2 |
| | OWC-CD | 19.80 | 60.39 | 44.15 | 36.33 | 73.62 | 58.71 | 44.75 | 77.70 | 64.52 |
| | OWC-CD + CD | 20.47 | 60.85 | 44.70 | 35.98 | 73.59 | 58.54 | 44.77 | 77.52 | 64.42 |
| | OWC-CD + BCD(k=2) | 20.15 | 60.49 | 44.36 | 36.00 | 73.62 | 58.57 | 44.70 | 77.70 | 64.50 |

PaLM2-Bison:

| Config | Method | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande |
| w16a16 | – | aa | aa | aa | aa | aa | aa | aa | aa | aa | aa |
| w3a16 | OWC | 15.87 | 73.01 | 58.86 | 10.58 | 52.47 | 76.43 | 86.91 | 79.76 | 81.01 | 76.09 |
| | GPTQ | 17.31 | 71.25 | 65.53 | 11.56 | 54.44 | 79.25 | 87.22 | 80.78 | 81.61 | 76.87 |
| | CD | 17.84 | 71.24 | 64.76 | 12.8 | 54.44 | 79.17 | 87.58 | 80.79 | 82.1 | 76.16 |
| | BCD(k=2) | 17.81 | 71.3 | 64.29 | 12.06 | 53.5 | 79.08 | 87.65 | 80.91 | 82.21 | 77.35 |
| w3a16g128 | OWC | 15.98 | 71.28 | 68.02 | 10.88 | 53.58 | 79.55 | 86.73 | 80.9 | 81.61 | 76.72 |
| | GPTQ | 18.95 | 71.49 | 69.67 | 13.78 | 53.24 | 79.84 | 86.79 | 81.05 | 82.05 | 76.95 |
| | CD | 18.31 | 71.81 | 68.56 | 13.14 | 53.41 | 79.8 | 87.55 | 81.31 | 82.43 | 77.66 |
| | BCD(k=2) | 18.53 | 71.55 | 67.62 | 12.99 | 53.58 | 79.59 | 87.49 | 81.14 | 82.81 | 76.48 |
| | OWC-CD | 17.26 | 71.79 | 67.53 | 12.35 | 54.69 | 80.85 | 85.72 | 81.02 | 81.88 | 77.98 |
| | OWC-CD + CD | 18.75 | 72.05 | 68.35 | 13.19 | 54.1 | 80.18 | 86.36 | 80.86 | 82.48 | 77.51 |
| | OWC-CD + BCD(k=2) | 19.36 | 71.47 | 68.45 | 12.99 | 53.58 | 80.47 | 86.02 | 81.15 | 82.26 | 76.72 |
| w4a16 | OWC | 20 | 72.51 | 69.34 | 12.5 | 54.61 | 80.39 | 87.06 | 81.5 | 82.05 | 77.19 |
| | GPTQ | 21.25 | 72.39 | 70.6 | 12.4 | 55.2 | 80.64 | 87.58 | 81.98 | 82.26 | 77.58 |
| | CD | 21.5 | 71.57 | 70.04 | 12.8 | 55.29 | 80.56 | 88.07 | 81.78 | 82.59 | 77.03 |
| | BCD(k=2) | 21.16 | 71.92 | 70.01 | 12.7 | 55.63 | 81.19 | 87.77 | 81.82 | 82.81 | 77.43 |
| w4a16g128 | OWC | 21.22 | 73.06 | 70.07 | 13.44 | 55.8 | 80.47 | 87.77 | 82.08 | 82.64 | 77.51 |
| | GPTQ | 20.72 | 73.17 | 70.12 | 12.99 | 55.38 | 80.89 | 87.55 | 82.14 | 82.59 | 77.35 |
| | CD | 21.8 | 73.37 | 69.06 | 12.6 | 54.78 | 80.72 | 87.25 | 82.16 | 83.19 | 78.22 |
| | BCD(k=2) | 21.5 | 73.06 | 69.34 | 12.8 | 55.29 | 80.98 | 87.31 | 82.14 | 82.54 | 77.11 |
| | OWC-CD | 21.36 | 73.85 | 70.07 | 13.73 | 55.12 | 81.23 | 87.71 | 82.27 | 82.59 | 77.27 |
| | OWC-CD + CD | 21.41 | 73.32 | 71.46 | 12.89 | 54.18 | 81.4 | 87.28 | 82.17 | 82.37 | 77.74 |
| | OWC-CD + BCD(k=2) | 21.19 | 73.24 | 71.64 | 12.75 | 55.03 | 81.52 | 87.28 | 82.06 | 82.64 | 77.66 |

PaLM2-Otter
Method | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande

w16a16:
- | aa | aa | aa | aa | aa | aa | aa | aa | aa | aa

w3a16:
OWC | 2.27 | 49.45 | 8.02 | 0.49 | 41.98 | 60.14 | 48.78 | 66.91 | 71.87 | 66.06
GPTQ | 9.09 | 64.53 | 46.38 | 7.19 | 49.83 | 73.99 | 81.96 | 76.87 | 78.89 | 70.09
CD | 10.72 | 64.01 | 48.03 | 6.59 | 50.77 | 76.6 | 81.01 | 75.77 | 77.97 | 70.09
BCD(k=2) | 10.55 | 64.31 | 47.25 | 6.05 | 50.85 | 76.43 | 81.41 | 75.91 | 78.45 | 70.24

w3a16g128:
OWC | 6.18 | 66.37 | 30.48 | 3.44 | 47.44 | 70.16 | 69.63 | 74.61 | 76.88 | 69.85
GPTQ | 10.17 | 66.53 | 49.29 | 7.73 | 51.37 | 75.93 | 82.35 | 77 | 79.11 | 72.14
CD | 7.17 | 68.73 | 49.42 | 5.61 | 50.26 | 76.18 | 82.39 | 76.79 | 78.13 | 71.98
BCD(k=2) | 6.68 | 68.18 | 48.39 | 5.27 | 50.26 | 75.93 | 81.41 | 76.84 | 79 | 72.61
OWC-CD | 8.98 | 68.37 | 48.33 | 5.91 | 50.51 | 75.46 | 81.35 | 76.83 | 78.18 | 72.06
OWC-CD + CD | 9.72 | 67.87 | 49.87 | 6.64 | 51.37 | 76.68 | 81.59 | 76.78 | 77.91 | 71.51
OWC-CD + BCD(k=2) | 9.94 | 68.26 | 49.91 | 6.74 | 50.85 | 76.22 | 81.68 | 76.99 | 78.89 | 71.74

w4a16:
OWC | 11.19 | 66.67 | 49.56 | 7.63 | 50 | 73.91 | 80 | 77.36 | 79.11 | 72.93
GPTQ | 12.27 | 66.65 | 48.72 | 8.46 | 51.71 | 75.84 | 82.02 | 78.2 | 80.09 | 73.32
CD | 12.52 | 66.85 | 53.16 | 7.53 | 52.3 | 76.43 | 81.28 | 78.3 | 79.6 | 72.61
BCD(k=2) | 12.33 | 67 | 53.45 | 7.63 | 51.96 | 75.97 | 80.92 | 78.47 | 80.47 | 72.06

w4a16g128:
OWC | 11.33 | 65.98 | 53.57 | 7.19 | 49.66 | 74.92 | 82.17 | 78.22 | 79.54 | 72.45
GPTQ | 11.5 | 65.81 | 56.36 | 7.68 | 51.71 | 76.52 | 82.08 | 78.86 | 79.27 | 73.16
CD | 11.72 | 66.85 | 57.63 | 8.61 | 51.19 | 76.6 | 82.51 | 78.89 | 79.49 | 73.32
BCD(k=2) | 11.66 | 66.73 | 57.5 | 8.56 | 51.19 | 76.3 | 82.29 | 78.88 | 79.27 | 73.32
OWC-CD | 12.27 | 66.28 | 58.21 | 8.56 | 51.62 | 76.85 | 82.32 | 78.85 | 79.49 | 72.61
OWC-CD + CD | 11.63 | 66.28 | 58.06 | 7.92 | 51.62 | 76.94 | 82.45 | 78.72 | 79.11 | 72.69
OWC-CD + BCD(k=2) | 11.72 | 65.96 | 58.29 | 8.02 | 51.79 | 76.77 | 82.17 | 78.8 | 79.49 | 72.69

PaLM2-Gecko
Method | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande

w16a16:
- | aa | aa | aa | aa | aa | aa | aa | aa | aa | aa

w3a16:
OWC | 2.55 | 47.57 | 9.81 | 1.87 | 29.69 | 56.73 | 64.8 | 53.28 | 69.59 | 60.3
GPTQ | 3.35 | 44.13 | 3.97 | 0.89 | 30.2 | 62.12 | 63.79 | 56.33 | 71.22 | 57.85
CD | 3.55 | 45.71 | 11.32 | 2.71 | 30.63 | 60.56 | 66.45 | 57.2 | 71.87 | 62.43
BCD(k=2) | 3.63 | 47.04 | 11.22 | 2.76 | 30.29 | 60.52 | 65.32 | 57.33 | 71.82 | 61.8

w3a16g128:
OWC | 3.68 | 55.38 | 12.5 | 2.61 | 31.66 | 60.14 | 66.12 | 53.19 | 71.71 | 58.88
GPTQ | 4.02 | 50.04 | 14.64 | 3.15 | 32.42 | 61.91 | 66.64 | 56.05 | 73.01 | 59.75
CD | 3.38 | 52.01 | 9.62 | 2.21 | 31.23 | 63.13 | 65.9 | 55.98 | 72.47 | 59.91
BCD(k=2) | 3.1 | 50.95 | 9.36 | 1.92 | 31.14 | 62.25 | 65.41 | 56.16 | 72.91 | 58.96
OWC-CD | 3.6 | 51.69 | 9.33 | 2.31 | 31.83 | 61.74 | 65.02 | 56.23 | 72.2 | 59.43
OWC-CD + CD | 3.82 | 50.39 | 9.5 | 1.77 | 32.51 | 62.25 | 66.33 | 56.91 | 71.27 | 60.06
OWC-CD + BCD(k=2) | 3.63 | 49.87 | 10.26 | 2.12 | 32.76 | 62.88 | 66.24 | 56.79 | 71.82 | 60.22

w4a16:
OWC | 4.02 | 46.96 | 10.63 | 2.66 | 33.19 | 61.99 | 66.7 | 59.26 | 73.67 | 61.72
GPTQ | 4.46 | 48.69 | 18.84 | 3.1 | 34.47 | 62.96 | 64.77 | 60.09 | 73.61 | 61.09
CD | 4.13 | 48.88 | 16.64 | 3.15 | 33.53 | 63.89 | 66.79 | 60.31 | 73.78 | 61.4
BCD(k=2) | 4.24 | 48.31 | 15.73 | 3.3 | 33.28 | 63.34 | 66.39 | 60.26 | 73.56 | 61.01

w4a16g128:
OWC | 4.99 | 46.1 | 20.62 | 4.28 | 34.73 | 64.48 | 67.83 | 59.91 | 74.32 | 61.25
GPTQ | 5.18 | 49.79 | 18.17 | 3.15 | 35.15 | 64.6 | 66.24 | 60.21 | 74.32 | 61.88
CD | 5.73 | 46.96 | 24.17 | 4.53 | 36.18 | 66.04 | 67.8 | 60.7 | 73.67 | 61.25
BCD(k=2) | 5.79 | 47.28 | 24.11 | 4.58 | 36.01 | 65.7 | 67.06 | 60.73 | 73.61 | 61.01
OWC-CD | 5.54 | 47.24 | 21.71 | 4.72 | 34.3 | 64.73 | 68.01 | 60.72 | 73.56 | 61.01
OWC-CD + CD | 5.62 | 47.6 | 24 | 4.68 | 35.49 | 66.16 | 67.06 | 60.78 | 74.1 | 61.48
OWC-CD + BCD(k=2) | 5.57 | 47.17 | 23.05 | 4.82 | 35.24 | 65.82 | 66.02 | 60.9 | 73.23 | 61.72

w2a16g128
Method | Epochs | PaLM2-Otter Generation | PaLM2-Otter Rank | PaLM2-Otter Avg. | PaLM2-Bison Generation | PaLM2-Bison Rank | PaLM2-Bison Avg.
GPTQ | - | 23.06 | 66.62 | 49.2 | 33.62 | 72.74 | 57.09
CD | 1 | 25.12 | 65.73 | 49.48 | 33.63 | 71.80 | 56.53
BCD(k=2) | 1 | 25.42 | 65.57 | 49.51 | 33.97 | 72.16 | 56.88
BCD(k=2) | 2 | 24.73 | 65.78 | 49.36 | 34.01 | 72.39 | 57.04
BCD(k=2) | 3 | 25.28 | 66.01 | 49.72 | 34.27 | 72.67 | 57.31
BCD(k=4) | 1 | 25.15 | 66.04 | 49.68 | 33.94 | 72.10 | 56.84
BCD(k=4) | 2 | 24.86 | 65.71 | 49.37 | 33.87 | 71.97 | 56.73
BCD(k=4) | 3 | 25.70 | 65.30 | 49.46 | 33.85 | 72.06 | 56.78

PaLM2-Otter, w2a16g128
Method | Epochs | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande
GPTQ | - | 4.32 | 63.43 | 22.08 | 2.41 | 42.32 | 66.96 | 80.09 | 68.54 | 73.78 | 68.03
CD | 1 | 4.99 | 66.51 | 25.53 | 3.44 | 40.87 | 70.33 | 74.89 | 68.13 | 72.96 | 67.17
BCD(k=2) | 1 | 4.93 | 67.51 | 25.56 | 3.69 | 41.47 | 69.7 | 74.98 | 67.97 | 73.72 | 65.59
BCD(k=2) | 2 | 4.6 | 67.74 | 23.6 | 3 | 40.78 | 69.49 | 75.47 | 68.27 | 73.39 | 67.25
BCD(k=2) | 3 | 5.1 | 67.91 | 24.53 | 3.59 | 42.41 | 69.87 | 75.02 | 68.52 | 73.88 | 66.38
BCD(k=4) | 1 | 4.9 | 67.5 | 25.01 | 3.2 | 42.15 | 68.69 | 76.18 | 68.51 | 73.61 | 67.09
BCD(k=4) | 2 | 4.79 | 67.9 | 23.55 | 3.2 | 41.81 | 69.82 | 73.39 | 68.5 | 73.5 | 67.25
BCD(k=4) | 3 | 5.46 | 68.29 | 26 | 3.05 | 41.21 | 69.02 | 73.98 | 68.06 | 72.52 | 67.01

PaLM2-Bison, w2a16g128
Method | Epochs | NaturalQ. | SQuAD | TriviaQA | WebQ | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | WinoGrande
GPTQ | - | 11.55 | 66.72 | 48.79 | 7.43 | 47.27 | 76.89 | 84.56 | 74.36 | 79.54 | 73.8
CD | 1 | 11.25 | 64.49 | 50.84 | 7.92 | 48.12 | 75.97 | 79.39 | 74.28 | 79.11 | 73.95
BCD(k=2) | 1 | 11.77 | 64.17 | 51.36 | 8.56 | 48.38 | 75.88 | 81.01 | 74.6 | 78.84 | 74.27
BCD(k=2) | 2 | 11.61 | 64.69 | 51.23 | 8.51 | 47.87 | 76.09 | 81.74 | 74.72 | 79 | 74.9
BCD(k=2) | 3 | 11.72 | 64.5 | 52.15 | 8.71 | 48.46 | 76.18 | 82.54 | 74.82 | 79.6 | 74.43
BCD(k=4) | 1 | 11.69 | 64.48 | 51.07 | 8.51 | 47.7 | 75.63 | 80.89 | 74.55 | 79.6 | 74.27
BCD(k=4) | 2 | 11.75 | 64.39 | 51.36 | 7.97 | 48.46 | 75.04 | 80.58 | 74.48 | 79.38 | 73.88
BCD(k=4) | 3 | 11.8 | 64.41 | 51.16 | 8.02 | 48.12 | 75.55 | 80.86 | 74.55 | 78.73 | 74.59

Appendix E Quantization Runtime

Method | PaLM2-Otter FFN runtime (min) | PaLM2-Otter Attn. runtime (min) | PaLM2-Bison FFN runtime (min) | PaLM2-Bison Attn. runtime (min)

w3a16:
GPTQ | 0.90 | 0.72 | 6.42 | 0.63
CD | 2.94 | 0.49 | 16.13 | 0.43
BCD(k=2) | 14.03 | 2.08 | 82.13 | 1.81

w3a16g128:
GPTQ | 1.02 | 0.75 | 6.70 | 0.66
CD | 3.32 | 0.59 | 18.44 | 0.52
BCD(k=2) | 14.73 | 2.12 | 88.10 | 1.86

w4a16:
GPTQ | 0.90 | 0.64 | 6.60 | 0.56
CD | 3.59 | 0.59 | 21.19 | 0.52
BCD(k=2) | 34.85 | 4.72 | 235.14 | 4.13

w4a16g128:
GPTQ | 0.98 | 0.69 | 6.962 | 0.60
CD | 3.99 | 0.69 | 23.48 | 0.60
BCD(k=2) | 42.37 | 5.41 | 192.59 | 4.74

Appendix F Comparison to QuantEase

We also compare our per-channel quantization results with those of QuantEase, a study carried out in parallel to ours. Since QuantEase does not have a publicly available implementation, we implemented its cyclic coordinate descent strategy ourselves, using the same initialization and regularization strength as our algorithms (the QuantEase paper does not provide these details). We then ran QuantEase for the number of iterations recommended in that paper (20 epochs, i.e., $T=20d_{\text{in}}$ iterations). The findings are presented in Table 20: on PaLM2-Gecko, PaLM2-Otter, and PaLM2-Bison, the two approaches perform similarly. These results collectively highlight the potential of coordinate descent algorithms to outperform GPTQ.

Method | PaLM2-Gecko | PaLM2-Otter | PaLM2-Bison

w3a16:
GPTQ | 11.347 | 7.176 | 5.774
QuantEase (epochs=20) | 10.731 | 6.996 | 5.741
CD | 10.920 | 7.002 | 5.739
BCD(k=2) | 10.898 | 6.979 | 5.733

w4a16:
GPTQ | 8.764 | 6.249 | 5.417
QuantEase (epochs=20) | 8.670 | 6.197 | 5.408
CD | 8.694 | 6.195 | 5.407
BCD(k=2) | 8.691 | 6.192 | 5.405
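
For concreteness, below is a minimal NumPy sketch of a cyclic coordinate descent update on the layer-wise objective $\|X(W-\widehat{W})\|_{F}^{2}$, applied to a single output channel. The function name, the round-to-nearest initialization, and the regularization strength (reg in the sketch) are illustrative assumptions; they are not guaranteed to match the exact settings used in our experiments or in QuantEase.

```python
import numpy as np

def quantize_channel_cyclic_cd(X, w, scale, zero, n_bits=3, epochs=20, reg=1e-2):
    """Cyclic coordinate descent for one output channel (illustrative sketch).

    Approximately minimizes ||X (w - w_hat)||^2 over w_hat restricted to a
    uniform n_bits grid defined by (scale, zero). Initialization and the
    regularization strength are assumptions, not the paper's exact settings.
    """
    d_in = w.shape[0]
    H = X.T @ X + reg * np.eye(d_in)   # regularized Hessian of the quadratic objective
    q_max = 2 ** n_bits - 1

    def quant(v):
        # Project onto the uniform grid: round, clamp, then de-quantize.
        q = np.clip(np.round(v / scale + zero), 0, q_max)
        return scale * (q - zero)

    w_hat = quant(w)                    # round-to-nearest initialization
    err = w_hat - w                     # current quantization error
    for _ in range(epochs):             # T = epochs * d_in coordinate updates in total
        for i in range(d_in):           # one cyclic pass over the coordinates
            # Unconstrained minimizer of the 1-D quadratic along coordinate i,
            # projected back onto the quantization grid.
            target = w_hat[i] - (H[i] @ err) / H[i, i]
            new_val = quant(target)
            err[i] += new_val - w_hat[i]
            w_hat[i] = new_val
    return w_hat
```

Because the Frobenius-norm objective decomposes over output channels, the same routine can be applied independently to each column of the weight matrix.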
