

Visualizing Transformers and Attention

This is a summary note from Grant Sanderson’s talk at TNG Big Tech 2024. My earlier article on transformers can be found here.

Transformers and Their Flexibility

  • 📜 Origin: Introduced in 2017 in the “Attention is All You Need” paper, originally for machine translation.
  • 🌍 Applications Beyond Translation: Used in transcription (e.g., Whisper), text-to-speech, and even image classification.
  • 🤖 Chatbot Models: The talk focuses on models trained to predict the next token in a sequence, generating text iteratively, one token at a time.

Next Token Prediction and Creativity

  • 🔮 Prediction Process: Predicts probabilities for possible next tokens, selects one, and repeats the process.
  • 🌡️ Temperature Control: Adjusting randomness in token selection affects creativity vs. predictability in outputs.
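
As a rough sketch of how temperature enters the picture (NumPy, with made-up logits; the name sample_next_token is mine, not from the talk): the raw scores are divided by the temperature before the softmax, so low temperatures concentrate probability on the top token and high temperatures flatten the distribution.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    """Pick the next token id from raw model scores (logits).

    Lower temperature -> more predictable; higher -> more "creative".
    A minimal sketch; real systems add top-k / top-p filtering on top.
    """
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())      # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Example: three candidate tokens with raw scores
print(sample_next_token([2.0, 1.0, 0.1], temperature=0.7))
```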

Tokens and Tokenization

  • 🧩 What are Tokens? Subdivisions of input data (words, subwords, punctuation, or image patches).
  • 🔡 Why Not Characters? Using characters increases context size and computational complexity; tokens balance meaning and computational efficiency.
  • 📖 Byte Pair Encoding (BPE): A common method for tokenization.

Embedding Tokens into Vectors

  • 📏 Embedding: Tokens are mapped to high-dimensional vectors representing their meaning.
  • 🗺️ Contextual Meaning: Vectors evolve through the network to capture context, disambiguate meaning, and encode relationships.
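
A minimal sketch of the embedding lookup, with deliberately small, made-up sizes (real models use vocabularies of tens of thousands of tokens and hundreds to thousands of dimensions, and the matrix is learned rather than random):

```python
import numpy as np

vocab_size, d_model = 10_000, 64                 # illustrative, far smaller than real models
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(vocab_size, d_model))  # learned during training in practice

token_ids = [314, 42, 7]                         # output of the tokenizer
vectors = embedding_matrix[token_ids]            # one row (vector) per token
print(vectors.shape)                             # (3, 64)
```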

The Attention Mechanism

  • 🔍 Purpose: Enables tokens to “attend” to others, updating their vectors based on relevance.
  • 🔑 Key Components:
    • Query Matrix: Encodes what a token is “looking for.”
    • Key Matrix: Encodes how a token responds to queries.
    • Value Matrix: Encodes information passed between tokens.
  • 🧮 Calculations:
    • Dot Product: Measures alignment between keys and queries.
    • Softmax: Converts dot products into normalized weights for updates.
  • ⛓️ Masked Attention: Ensures causality by blocking future tokens from influencing past ones.
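
The whole mechanism fits in a few lines. Below is a minimal single-head sketch in NumPy; the weight shapes and names are illustrative, and real implementations work on batches and learned parameters. Queries are compared with keys via dot products, the causal mask blocks attention to future positions, and the softmax weights are used to mix the value vectors.

```python
import numpy as np

def causal_self_attention(X, Wq, Wk, Wv):
    """Single-head masked self-attention, as a sketch.

    X: (seq_len, d_model) token vectors; Wq/Wk/Wv project them to
    queries, keys, and values. Each position may only attend to itself
    and earlier positions (the causal mask).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key dot products
    mask = np.triu(np.ones_like(scores), k=1)        # 1s above the diagonal
    scores = np.where(mask == 1, -np.inf, scores)    # block future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                         # 5 tokens, 16-dim vectors
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(causal_self_attention(X, Wq, Wk, Wv).shape)    # (5, 16)
```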

Multi-Headed Attention

  • 💡 Parallel Heads: Multiple attention heads allow different types of relationships (e.g., grammar, semantic context) to be processed simultaneously.
  • 🚀 Efficiency on GPUs: Designed to maximize parallelization for faster computation.
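
A sketch of the multi-head idea with the same toy shapes as above (the causal mask is omitted here for brevity): each head projects the token vectors into a smaller subspace, attends independently, and the per-head outputs are concatenated back to the full width.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
seq_len, d_model, n_heads = 5, 16, 4
d_head = d_model // n_heads                       # each head works in a smaller space
X = rng.normal(size=(seq_len, d_model))

head_outputs = []
for _ in range(n_heads):                          # heads are independent, so a GPU
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))  # runs them in parallel
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    head_outputs.append(softmax(Q @ K.T / np.sqrt(d_head)) @ V)

multi_head = np.concatenate(head_outputs, axis=-1)  # back to (seq_len, d_model)
print(multi_head.shape)                             # (5, 16)
```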

Multi-Layer Perceptrons (MLPs)

  • 🤔 Role in Transformers:
    • Add capacity for general knowledge and non-contextual reasoning.
    • Store facts learned during training, e.g., associations like “Michael Jordan plays basketball.”
  • 🔢 Parameters: MLPs hold the majority of the model’s parameters.
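
A minimal sketch of one MLP block: expand each token vector to a wider hidden layer (the 4x expansion is a common convention), apply a nonlinearity, and project back. The random weights here are stand-ins for learned parameters, and the block is applied to every token vector independently.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, as used in GPT-2-style models
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def mlp_block(x, W1, b1, W2, b2):
    """Position-wise MLP: expand, apply nonlinearity, project back."""
    return gelu(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
d_model, d_hidden = 16, 64                      # the wide hidden layer holds most parameters
W1, b1 = rng.normal(size=(d_model, d_hidden)), np.zeros(d_hidden)
W2, b2 = rng.normal(size=(d_hidden, d_model)), np.zeros(d_model)
x = rng.normal(size=(5, d_model))               # 5 token vectors
print(mlp_block(x, W1, b1, W2, b2).shape)       # (5, 16)
```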

Training Transformers

  • 📚 Learning Framework:
    • Models are trained on vast datasets using next-token prediction, requiring no manual labels.
    • Cost Function: Measures prediction accuracy using negative log probabilities, guiding parameter updates.
  • 🏔️ Optimization: Gradient descent navigates a high-dimensional cost surface to minimize error.
  • 🌐 Pretraining: Allows large-scale unsupervised learning before fine-tuning with human feedback.
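
A sketch of the per-token cost, assuming the model outputs raw logits over the vocabulary (next_token_loss is an illustrative name, not from the talk): the loss is the negative log of the probability the model assigned to the true next token, and averaging it over the corpus gives the quantity that gradient descent minimizes.

```python
import numpy as np

def next_token_loss(logits, target_id):
    """Negative log probability the model assigned to the true next token."""
    probs = np.exp(logits - logits.max())       # softmax over the vocabulary
    probs /= probs.sum()
    return -np.log(probs[target_id])

# Example: scores over a 5-token vocabulary; the true next token is id 2,
# which the model favored, so the loss is small (about 0.26).
print(next_token_loss(np.array([1.0, 0.5, 3.0, -1.0, 0.2]), target_id=2))
```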

Embedding Space and High Dimensions

  • 🔄 Semantic Clusters: Similar words cluster together; directions in the space encode relationships (e.g., the gender direction: King − Man + Woman ≈ Queen; see the sketch after this list).
  • 🌌 High Dimensionality: Embedding spaces have thousands of dimensions, enabling distinct representations of complex concepts.
  • 📈 Scaling Efficiency: High-dimensional spaces allow exponentially more “almost orthogonal” directions for encoding meanings.
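
A toy illustration of the direction idea, with hand-picked 3-dimensional vectors (real embeddings are learned and far higher-dimensional): subtracting "man" from "king" and adding "woman" lands closest to "queen".

```python
import numpy as np

# Hand-crafted toy "embeddings" just to show the arithmetic.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

analogy = vecs["king"] - vecs["man"] + vecs["woman"]
best = max(vecs, key=lambda w: cosine(vecs[w], analogy))
print(best)   # "queen" — the gender direction carries over
```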

Practical Applications

  • ✍️ Language Models: Effective for chatbots, summarization, and more due to their generality and parallel processing.
  • 🖼️ Multimodal Models: Transformers can integrate text, images, and sound by treating all as tokens in a unified framework.

Challenges and Limitations

  • 📏 Context Size Limitations: Attention grows quadratically with context size, requiring optimization for large contexts.
  • ♻️ Inference Redundancy: Token-by-token generation can involve redundant computation; caching the keys and values of earlier tokens (the KV cache) avoids recomputing them at inference time (see the sketch below).
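
A sketch of the key/value caching idea, assuming single-head attention and random stand-in weights: when a new token arrives, only its own key and value are computed and appended to the cache, and its query attends over everything cached so far.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
k_cache, v_cache = [], []              # keys/values of already-processed tokens

def attend_new_token(x_new):
    """Process one new token without recomputing past keys and values."""
    q = x_new @ Wq
    k_cache.append(x_new @ Wk)
    v_cache.append(x_new @ Wv)
    K, V = np.stack(k_cache), np.stack(v_cache)
    scores = K @ q / np.sqrt(d_model)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

for _ in range(3):                     # generate three tokens, one at a time
    out = attend_new_token(rng.normal(size=d_model))
print(out.shape, len(k_cache))         # (16,) 3
```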

Engineering and Design

  • 🛠️ Hardware Optimization: Transformers are designed to exploit GPUs’ parallelism for efficient matrix multiplication.
  • 🔗 Residual Connections: Baked into the architecture to enhance stability and ease of training.

The Power of Scale

  • 📈 Scaling Laws: Larger models and more data improve performance, often qualitatively.
  • 🔄 Self-Supervised Pretraining: Enables training on vast unlabeled datasets before fine-tuning.

BPE (Byte Pair Encoding)

BPE is a widely used tokenization method in natural language processing (NLP) and machine learning. It strikes a balance between character-level and word-level tokenization by representing text as a sequence of subword units. This helps models handle rare and unseen words effectively while keeping the vocabulary size manageable.


How BPE Works:

  1. Start with Characters:
    • Initially, every character in the text is treated as a separate token.
  2. Merge Frequent Pairs:
    • BPE repeatedly identifies the most frequent pair of adjacent tokens in the training corpus and merges them into a single token. This process is iteratively applied.
    • For example:
      • Input: low, lower, lowest
      • Output Vocabulary (after merging l+o and then lo+w): {low, e, r, s, t}
  3. Build Vocabulary:
    • The merging process stops after a predefined number of merges, resulting in a vocabulary of subwords, characters, and some common full words.
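
A minimal sketch of the merge loop in Python (the function name and the two-merge budget are illustrative; production tokenizers operate on bytes, weight pairs by word frequency, and handle whitespace markers):

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Tiny BPE trainer: start from characters, repeatedly merge the most
    frequent adjacent pair of tokens across the corpus."""
    tokenized = [list(w) for w in words]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for toks in tokenized:
            pairs.update(zip(toks, toks[1:]))     # count adjacent token pairs
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]  # most frequent pair
        merges.append(a + b)
        for toks in tokenized:                    # apply the merge everywhere
            i = 0
            while i < len(toks) - 1:
                if toks[i] == a and toks[i + 1] == b:
                    toks[i:i + 2] = [a + b]
                else:
                    i += 1
    return merges, tokenized

merges, tokens = bpe_merges(["low", "lower", "lowest"], num_merges=2)
print(merges)   # ['lo', 'low']
print(tokens)   # [['low'], ['low', 'e', 'r'], ['low', 'e', 's', 't']]
```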
