(Almost) Differentiable Indexing of Tensors
Indexing tensors is at the core of numerous ML applications. In this post, I describe a strategy to index a tensor when we need to compute the gradient with respect to the index.
When indexing a tensor, we extract the portion of the tensor specified by the index. Using a programming-oriented notation, $T[i]$ will be the $i$-th element of the tensor $T$.
During differentiation, we compute the rate of change of a function with respect to its input (i.e., the derivative). In the case of indexing, this amounts to asking: how would my output change if I took a tiny step from my current index? The long story is that functions need to satisfy specific properties to be differentiable, and indexing does not satisfy them. The short one is that indices are integer values, so an infinitesimally small step from the current index results in an ill-defined operation (which memory cell corresponds to $T[i + \epsilon]$?).
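To make the obstacle concrete, here is the derivative we would need, written with the standard limit definition and the programming-oriented notation from above:

$$
\frac{\partial\, T[i]}{\partial i} \;=\; \lim_{h \to 0} \frac{T[i + h] - T[i]}{h},
$$

and the numerator is already ill-defined, since $T[i + h]$ does not correspond to any memory cell when $h$ is not an integer.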
With these ideas in mind, let’s explore a trick to index a tensor in a differentiable way. To do this, we consider the mathematical description of the indexing operation:

$$
T[i] = w^\top T,
$$
where $T$ is the input tensor and $w$ is the one-hot encoding of the index. That is, $w$ is constrained (or quantized) to the rows of the identity matrix ($I$). Therefore, for any real-valued $w$ of the same length as $T$, we can index $T$ as

$$
q(w)^\top T,
$$
where $q$ is the quantization operation that snaps $w$ to the closest row of $I$. There are many alternatives for the function $q$, and they depend on the definition of “closest.” Let’s define

$$
q(w) = e_{\arg\max_j w_j},
$$
That is, $q$ maps each vector $w$ to the row of the identity matrix with the $1$ located in the position of the highest entry of $w$. The rate of change of $q(w)$ with respect to the entries of $w$ is zero almost everywhere, because the output stays constant under small changes of the input (imagine a step function, whose derivative is an impulse).
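As a quick worked example (the numbers are made up for illustration): take $T = (3, 7, 1)$ and $w = (0.2, 1.5, -0.4)$. The highest entry of $w$ is the second one, so

$$
q(w) = (0, 1, 0), \qquad q(w)^\top T = 7,
$$

i.e., we recover the second element of $T$. Nudging $w$ slightly, say to $(0.25, 1.5, -0.4)$, leaves $q(w)$, and hence the output, unchanged, which is exactly why the gradient is zero.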
Let’s say we want to optimize an objective, $\mathcal{L}$, using gradient descent. To do this, we compute the gradient of $\mathcal{L}$ with respect to the weights of our model (in our case, the indexing vector, $w$) and update the weights after each iteration as:

$$
w \leftarrow w - \eta \, \nabla_w \mathcal{L},
$$

where $\eta$ is the learning rate.
Since $\nabla_w \mathcal{L} = 0$ (the zero derivative of $q$ propagates through the chain rule), we never update the weights. We can, however, use the Straight-Through Estimation (STE) of the gradients. That is, we compute the gradient as if we did not quantize the vector,

$$
\nabla_w \mathcal{L} \;\approx\; \nabla_{q(w)} \mathcal{L},
$$
and use this gradient in the gradient descent update above.
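A common way to implement this trick (and the one used in the snippet below), writing $\mathrm{sg}(\cdot)$ for the stop-gradient operator (`detach()` in PyTorch), is to build the surrogate vector

$$
\hat{w} = w + \mathrm{sg}\big(q(w) - w\big),
$$

whose forward value is exactly $q(w)$ but whose Jacobian with respect to $w$ is the identity, so the gradient flows straight through to $w$.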
In code, the indexing operation with STE would look like this for one-dimensional vectors:
```python
import torch
from torch import Tensor


def index_ste(T: Tensor, idx: Tensor) -> Tensor:
    # Snap idx to the closest one-hot vector; detach() hides the correction
    # term from autograd, so gradients flow straight through to idx.
    w = idx + (torch.eye(T.shape[0])[idx.argmax()] - idx).detach()
    # A one-hot w selects a single element of T via a dot product.
    return w @ T
```
The role of the bare `idx` in the expression for `w` is to instruct autograd to propagate the gradients through `idx` without the need to specify a custom backward pass.
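For completeness, here is a minimal usage sketch (the tensor values and the toy objective are my own, assuming the `index_ste` defined above) showing that the gradient indeed reaches the index vector:

```python
import torch

T = torch.tensor([3.0, 7.0, 1.0])
idx = torch.tensor([0.2, 1.5, -0.4], requires_grad=True)  # soft index vector

y = index_ste(T, idx)    # forward value: 7.0, the element picked by argmax(idx)
loss = (y - 2.0) ** 2    # toy objective
loss.backward()

print(y.item())   # 7.0
print(idx.grad)   # non-zero: the straight-through gradient w.r.t. the index vector
```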