This post explains how the attention mechanism in modern language models works, and how it is accomplished through matrix multiplication.
Token interaction as matrix multiplication
In language modeling, a chunk of text can be tokenized and embedded into a data matrix. In the following (2, 3) data matrix, each row represents the embedding of one token: \[
\mathbf{X} = \begin{pmatrix}
x_{11} & x_{12} & x_{13} \\
x_{21} & x_{22} & x_{23}
\end{pmatrix}.
\]
However, to model human natural language it’s not enough to have each token independently embedded; the tokens are also semantically related to each other. For example, in the sentence “the art of writing is the art of discovering what you believe”, the meaning lies not only in the individual words, but also in how they are arranged together. With the words and sentences tokenized into the above data matrix, interactions can easily be realized by left and right multiplying the data matrix with interaction matrices.
For example, left multiplying it by a (2, 2) interaction matrix \[
\mathbf{L} = \begin{pmatrix}
l_{11} & l_{12} \\
l_{21} & l_{22}
\end{pmatrix}
\] gives \[
\mathbf{L} \mathbf{X} = \begin{pmatrix}
l_{11} x_{11} + l_{12} x_{21} & l_{11} x_{12} + l_{12} x_{22} & l_{11} x_{13} + l_{12} x_{23} \\
l_{21} x_{11} + l_{22} x_{21} & l_{21} x_{12} + l_{22} x_{22} & l_{21} x_{13} + l_{22} x_{23}
\end{pmatrix},
\]
which is again a (2, 3) matrix, the same shape as the original data matrix. Notice that each entry of the new data matrix is a weighted sum of entries of the original matrix taken from different rows but from the same column. In this way we achieve the goal of making the different tokens interact with each other. However, we should also note that there is no interaction between the different columns of the original matrix.
Similarly, right multiplying it by a (3, 2) interaction matrix \[
\mathbf{R} = \begin{pmatrix}
r_{11} & r_{12} \\
r_{21} & r_{22} \\
r_{31} & r_{32}
\end{pmatrix}
\] gives \[
\mathbf{X} \mathbf{R} = \begin{pmatrix}
x_{11} r_{11} + x_{12} r_{21} + x_{13} r_{31} & x_{11} r_{12} + x_{12} r_{22} + x_{13} r_{32} \\
x_{21} r_{11} + x_{22} r_{21} + x_{23} r_{31} & x_{21} r_{12} + x_{22} r_{22} + x_{23} r_{32}
\end{pmatrix},
\]
which is a (2, 2) matrix. It has the same number of rows as the original data matrix, but the number of columns is determined by the interaction matrix. We notice that each entry in the new data matrix is the weighted sum of the corresponding entries in the original matrix, from different columns, but from the same row. And there is no interaction between the different rows of the original matrix. In this way we have updated the embedding of each token, but without interacting the tokens with each other.
Combining left and right multiplication, we can thus both make the tokens interact with each other, and update the embedding of each token. In transformer language models, it’s the repeated application of this operation that allows the model to “understand” the context of the sentence, and represent it in the most meaningful way. This is the backbone of all the large language models.
The final interaction mechanism will look like this: \[
\mathbf{Y} = \mathbf{L} \mathbf{X} \mathbf{R}.
\]
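As a quick sanity check of the shapes involved, here is a minimal NumPy sketch with randomly filled interaction matrices (the next section sets up a proper worked example):

import numpy as np

L = np.random.randn(2, 2)  # left interaction: mixes tokens (rows)
X = np.random.randn(2, 3)  # data matrix: 2 tokens, 3-dimensional embeddings
R = np.random.randn(3, 2)  # right interaction: mixes embedding dimensions (columns)
Y = L @ X @ R
print(Y.shape)  # (2, 2)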
Interaction matrices in practice
To see this in action let’s first define a data matrix.
import numpy as np

np.random.seed(0)
T, E = 4, 5  # sequence length, embedding dimension
x = np.random.randn(T, E)  # input tensor
print(x)
Here each row is the embedding of one token as a 5-dimensional vector (E=5), and 4 tokens (T=4) are stacked together from top to bottom. This way we end up with a (4, 5) data matrix.
Next, let’s start with left multiplication interactions. The simplest of all interaction matrices is the identity matrix, which means that each token only interacts with itself (i.e. no interaction at all).
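For instance, building it with np.eye:

L = np.eye(T)  # identity matrix: each token only attends to itself
print(L, L @ x, sep="\n")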
As expected, the resulting embedding matrix is the same as the original x. Next, we can make every token affect all the others equally by setting the interaction matrix to all ones.
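For instance, with np.ones:

L = np.ones((T, T))  # all-ones matrix: every token affects every other token equally
print(L, L @ x, sep="\n")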
Now all the rows of the embedding matrix are the same, since we effectively summed up all the rows in the original matrix to create the new ones. A slightly more interesting interaction matrix is the lower triangular matrix:
L = np.tril(np.ones((T, T)))
print(L, L @ x, sep="\n")
After left multiplying by L, the first row (the embedding of the first word) is left as is, the new second row is the sum of the first two rows, the new third row is the sum of the first three, etc. This is a one-way interaction mechanism where the tokens that come before affect the tokens that come after, but not the other way around.
Of course the interaction matrix can be any matrix, not just the ones we’ve seen above.
L = np.random.randn(T, T)
print(L, L @ x, sep="\n")
We can no longer identify the relationship between the tokens since, as the name implies, we are now randomly mixing the tokens up.
One extra thing to note is that it’s generally not a good idea to simply sum vectors up: when the vectors get very long, as is often the case in modern deep learning (to accurately capture the meaning of a word we need context, and the longer the better), the sum of many vectors can easily explode in magnitude, which hurts the stability and efficiency of model training. So we’ll use weights instead, and make sure each row of L sums to one. Things are easy if all the values are positive: say 1, 2, 2, 3, then the sum is 8, and the weights are simply 1/8, 2/8, 2/8, and 3/8. But it’s not as easy to get meaningful weights when the entries of L are not all positive. What if the values are 1, -2, 2, 3? They may even sum to zero, like -1, 2, 2, -3, leaving us with a division-by-zero error. The solution is simple: first exponentiate all the entries to make them positive, then compute the weights from these positive numbers. This exponentiate-then-normalize recipe is exactly the softmax function.
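For example, taking the problematic row -1, 2, 2, -3 from above:

row = np.array([-1., 2., 2., -3.])   # raw entries sum to zero, so naive normalization would divide by zero
w = np.exp(row) / np.exp(row).sum()  # exponentiate first, then normalize (softmax)
print(w, w.sum())                    # all weights positive, and they sum to 1

Applying the same treatment row by row to a random interaction matrix: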
L = np.random.randn(T, T)
L = np.exp(L)                         # make all entries positive
L = L / L.sum(axis=1, keepdims=True)  # normalize so that each row sums to one
print(L, L @ x, sep="\n")
Similarly we can create a right interaction matrix R to make the columns interact, and thus update the embedding of each token. We can of course apply the same softmax treatment to each column of R so that each column sums to one, but for some reason (which is still beyond my knowledge) this has never been done in practice. So here we’ll stick to the common wisdom. When applying the right interaction matrix, we need to decide on the new embedding dimension H1. (It’s named H1 because we are going to need another H2, as we’ll see shortly.)
H1 = 6  # new embedding dimension
R = np.random.randn(E, H1)
print(R, x @ R, sep="\n")
We end up with a new embedding matrix of dimension (T, H1).
Parameterizing query, key, and value
We have seen how the left and right interaction matrices can be used to update the token embeddings; now it’s time to determine how to populate them. Since we are talking about neural networks, we can simply treat L and R as parameters of the network and learn them from the data. However, as things stand, once learned, the same parameter matrices would be applied to every sentence, which effectively forces all sentences to interact in the same way, which is clearly not what we want. On the contrary, the interaction matrices should be bespoke for each sentence; each sentence should have its own pattern of interaction between its tokens. The natural way to achieve this is to make the interaction matrices not only contain learnable parameters, but also be functions of the input tokens.
The left interaction matrix L is built from the product of two matrices, the query matrix \(\mathbf{Q} = \mathbf{X} \mathbf{W}_Q\) and the key matrix \(\mathbf{K} = \mathbf{X} \mathbf{W}_K\), each of which is in turn the product of the data matrix and a parameter matrix (ignoring the softmax, which only turns the raw entries into weights). Since Q and K are calculated in exactly the same way, the difference in naming is purely conventional. Note that since Q and K (and V) are all obtained by right multiplying by a parameter matrix, they are all updated embeddings of the individual tokens, with no interaction between the tokens, as we’ve discussed earlier. However, by multiplying them together, \[
\mathbf{L} = \mathbf{Q} \mathbf{K}^\top = (\mathbf{X} \mathbf{W}_Q)(\mathbf{X} \mathbf{W}_K)^\top = \mathbf{X} \mathbf{W}_Q \mathbf{W}_K^\top \mathbf{X}^\top,
\]
we see that the data matrix X now appears on both sides, with the parameter matrices sandwiched in between (on the right of the first X and on the left of the second); the resulting (T, T) matrix thus captures all possible information flows among the tokens. To make sure the query and key have compatible dimensions, their parameter matrices must have the same shape.
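As a quick numerical check of this factorization (a minimal sketch reusing the shapes from the code below; the parameter matrices here are just random placeholders):

T, E, H2 = 4, 5, 7
X = np.random.randn(T, E)
W_Q = np.random.randn(E, H2)
W_K = np.random.randn(E, H2)
QK = (X @ W_Q) @ (X @ W_K).T        # Q K^T, computed from the query and key
sandwich = X @ (W_Q @ W_K.T) @ X.T  # the same product, regrouped as X (W_Q W_K^T) X^T
print(np.allclose(QK, sandwich))    # True: matrix multiplication is associative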
The value matrix \(\mathbf{V} = \mathbf{X} \mathbf{W}_V\) is just a rewrite of the right interaction \(\mathbf{X} \mathbf{R}\), with the parameter matrix \(\mathbf{W}_V\) playing the role of \(\mathbf{R}\).
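Putting the pieces together, the interaction \(\mathbf{Y} = \mathbf{L} \mathbf{X} \mathbf{R}\) from before becomes (writing the exponentiate-and-normalize step as softmax): \[
\mathbf{Y} = \operatorname{softmax}\!\left(\mathbf{Q} \mathbf{K}^\top\right) \mathbf{V} = \operatorname{softmax}\!\left(\mathbf{X} \mathbf{W}_Q (\mathbf{X} \mathbf{W}_K)^\top\right) \mathbf{X} \mathbf{W}_V.
\]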
We can now implement the whole process.
T, E, H1, H2 = 4, 5, 6, 7
X = np.random.randn(T, E)
W_Q = np.random.randn(E, H2)  # query parameters
W_K = np.random.randn(E, H2)  # key parameters
W_V = np.random.randn(E, H1)  # value parameters
Q = X @ W_Q
K = X @ W_K
V = X @ W_V
L = np.exp(Q @ K.T)                   # exponentiate to get positive entries
L = L / L.sum(axis=1, keepdims=True)  # normalize each row to sum to one (softmax)
print("Q shape:", Q.shape)
print("K shape:", K.shape)
print("V shape:", V.shape)
print("L shape:", L.shape)
print("L @ V shape:", (L @ V).shape)
print(L, V, L @ V, sep='\n')
Note that we have used different embedding dimensions for the data matrix (E=5), the right interaction matrix (H1=6), and the query/key embedding for the left interaction matrix (H2=7). In practice, since transformer models usually stack multiple attention layers together, and each attention layer also has a residual connection, the embedding dimensions of the data matrix and the right interaction matrix are usually the same (E=H1). Likewise, the query/key embedding dimension for the left interaction matrix is usually kept the same as the embedding dimension of the data matrix (E=H2).
T, E = 4, 5
X = np.random.randn(T, E)
W_Q = np.random.randn(E, E)
W_K = np.random.randn(E, E)
W_V = np.random.randn(E, E)
Q = X @ W_Q
K = X @ W_K
V = X @ W_V
L = np.exp(Q @ K.T)
L = L / L.sum(axis=1, keepdims=True)
print("Q shape:", Q.shape)
print("K shape:", K.shape)
print("V shape:", V.shape)
print("L shape:", L.shape)
print("L @ V shape:", (L @ V).shape)
print(L, V, L @ V, sep='\n')
And this is usually what attention in language models looks like.
Masked attention
Modern large language models (basically all LLMs from GPT-2 on, the so-called “decoder-only transformers”) are mostly next-word-prediction machines. That is, given the words seen so far, we want the model to predict what the next word should be, so a token may only draw on the tokens that came before it, never the ones that come after. The information only flows one way, like rivers to the sea.
We have already seen how such one-way forward interaction can be achieved: by making L a lower triangular matrix. We’ll create a mask matrix of the same shape as L; after applying the mask, the lower left part of L is left intact, while the upper right part above the diagonal is set to negative infinity. This way, after exponentiation, the upper right part of the left interaction matrix becomes zero, which effectively forbids later tokens from affecting earlier ones.
T, E = 4, 5
X = np.random.randn(T, E)
W_Q = np.random.randn(E, E)
W_K = np.random.randn(E, E)
W_V = np.random.randn(E, E)
Q = X @ W_Q
K = X @ W_K
V = X @ W_V
L = Q @ K.T
mask = np.tril(np.ones(L.shape), k=0)  # create a lower triangular mask
L = np.where(mask == 1, L, -np.inf)    # apply the mask to L, setting upper right entries to -inf
L = np.exp(L)                          # exp(-inf) = 0, so masked entries get zero weight
L = L / L.sum(axis=1, keepdims=True)
print(L, V, L @ V, sep='\n')