NLP Rise with Transformer Models | A Comprehensive Analysis of T5, BERT, and GPT


Natural Language Processing (NLP) has experienced some of the most impactful breakthroughs in recent years, primarily due to the transformer architecture. These breakthroughs have not only enhanced the capabilities of machines to understand and generate human language but have also redefined the landscape of numerous applications, from search engines to conversational AI.

To fully appreciate the significance of transformers, we must first look back at the predecessors and building blocks that laid the foundation for this revolutionary architecture.

Early NLP Techniques: The Foundations Before Transformers

Word Embeddings: From One-Hot to Word2Vec

In traditional NLP approaches, the representation of words was often literal and lacked any form of semantic or syntactic understanding. One-hot encoding is a prime example of this limitation.

One-hot encoding is a process by which categorical variables are converted into a binary vector representation where only one bit is “hot” (set to 1) while all others are “cold” (set to 0). In the context of NLP, each word in a vocabulary is represented by a one-hot vector whose length equals the size of the vocabulary: every entry is 0 except for a single 1 at the index corresponding to that word in the vocabulary list.

Example of One-Hot Encoding

Suppose we have a tiny vocabulary with only five words: [“king”, “queen”, “man”, “woman”, “child”]. The one-hot encoding vectors for each word would look like this:

  • “king” -> [1, 0, 0, 0, 0]
  • “queen” -> [0, 1, 0, 0, 0]
  • “man” -> [0, 0, 1, 0, 0]
  • “woman” -> [0, 0, 0, 1, 0]
  • “child” -> [0, 0, 0, 0, 1]

Mathematical Representation

If we denote the size of our vocabulary as $V$ and the one-hot vector representation of the i-th word in the vocabulary as $\mathbf{x}_i$, the mathematical representation of $\mathbf{x}_i$ would be:

$\mathbf{x}_i = [0, 0, \ldots, 1, \ldots, 0] \in \{0, 1\}^V$

where the i-th position is 1 and all other positions are 0.
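
As a minimal sketch (assuming NumPy is available), the one-hot vectors above can be built directly from the vocabulary list:

```python
import numpy as np

vocab = ["king", "queen", "man", "woman", "child"]
V = len(vocab)  # vocabulary size

# One one-hot vector per word: a length-V vector of zeros with a single 1
# at the word's index in the vocabulary.
one_hot = {word: np.eye(V, dtype=int)[i] for i, word in enumerate(vocab)}

print(one_hot["queen"])  # [0 1 0 0 0]
```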

The major downside of one-hot encoding is that it treats each word as an isolated entity, with no relation to other words. It results in sparse and high-dimensional vectors that do not capture any semantic or syntactic information about the words.

The introduction of word embeddings, most notably Word2Vec, was a pivotal moment in NLP. Developed by a team at Google led by Tomas Mikolov in 2013, Word2Vec represented words in a dense vector space, capturing syntactic and semantic word relationships based on their context within large corpora of text.

Unlike one-hot encoding, Word2Vec produces dense vectors, typically with hundreds of dimensions. Words that appear in similar contexts, such as “king” and “queen”, will have vector representations that are closer to each other in the vector space.

For illustration, let's assume we have trained a Word2Vec model and now represent words in a hypothetical 3-dimensional space. The embeddings (which usually have hundreds of dimensions but are reduced to three here for simplicity) might look something like this:

  • “king” -> [0.2, 0.1, 0.9]
  • “queen” -> [0.21, 0.13, 0.85]
  • “man” -> [0.4, 0.3, 0.2]
  • “woman” -> [0.41, 0.33, 0.27]
  • “child” -> [0.5, 0.5, 0.1]

While these numbers are fictitious, they illustrate how similar words have similar vectors.
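
To make “closer to each other” concrete, the sketch below compares the illustrative vectors above using cosine similarity, a standard way to measure how aligned two embeddings are (these are the fictitious vectors from the list, not output from a real Word2Vec model):

```python
import numpy as np

embeddings = {
    "king":  np.array([0.2, 0.1, 0.9]),
    "queen": np.array([0.21, 0.13, 0.85]),
    "child": np.array([0.5, 0.5, 0.1]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: values near 1 mean very similar directions.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.999, very similar
print(cosine_similarity(embeddings["king"], embeddings["child"]))  # ~0.36, much less similar
```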

Mathematical Representation

If we represent the Word2Vec embedding of a word $w$ as $\mathbf{v}_w$, and our embedding space has $d$ dimensions, then $\mathbf{v}_w$ can be represented as:

$\mathbf{v}_w = [v_1, v_2, \ldots, v_d] \in \mathbb{R}^d$

Semantic Relationships

Word2Vec can even capture complex relationships, such as analogies. For example, the famous relationship captured by Word2Vec embeddings is:

vector(“king”) – vector(“man”) + vector(“woman”) ≈ vector(“queen”)

This is possible because Word2Vec adjusts the word vectors during training so that words that share common contexts in the corpus are positioned closely in the vector space.

Word2Vec uses two main architectures to produce a distributed representation of words: Continuous Bag-of-Words (CBOW) and Skip-Gram. CBOW predicts a target word from its surrounding context words, whereas Skip-Gram does the reverse, predicting context words from a target word. This allowed machines to begin understanding word usage and meaning in a more nuanced way.
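
The sketch below shows how such a model can be trained with the gensim library (assuming it is installed); the corpus is a toy one, so the analogy query only illustrates the API rather than producing reliable results:

```python
from gensim.models import Word2Vec

# A toy corpus of pre-tokenized sentences; a real model needs a large corpus.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "man", "walks", "with", "the", "child"],
    ["the", "woman", "walks", "with", "the", "child"],
]

# sg=1 selects the Skip-Gram architecture; sg=0 (the default) selects CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=200)

# The classic analogy query: king - man + woman should land near queen
# (only meaningful when the model is trained on a large corpus).
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```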

Sequence Modeling: RNNs and LSTMs

As the field progressed, the focus shifted toward understanding sequences of text, which was crucial for tasks like machine translation, text summarization, and sentiment analysis. Recurrent Neural Networks (RNNs) became the cornerstone for these applications due to their ability to handle sequential data by maintaining a form of memory.

However, RNNs were not without limitations. They struggled with long-term dependencies due to the vanishing gradient problem, where information gets lost over long sequences, making it challenging to learn correlations between distant events.

Long Short-Term Memory networks (LSTMs), introduced by Sepp Hochreiter and Jürgen Schmidhuber in 1997, addressed this issue with a more sophisticated architecture. LSTMs have gates that control the flow of information: the input gate, the forget gate, and the output gate. These gates determine what information is stored, updated, or discarded, allowing the network to preserve long-term dependencies and significantly improving the performance on a wide array of NLP tasks.
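
As a rough sketch of how an LSTM is used in practice (assuming PyTorch), the gating machinery described above is handled internally by nn.LSTM; the caller only sees embedded sequences going in and hidden states coming out:

```python
import torch
import torch.nn as nn

# A single-layer LSTM: 100-dimensional word embeddings in, 128-dimensional hidden states out.
lstm = nn.LSTM(input_size=100, hidden_size=128, batch_first=True)

# A batch of 4 sequences, each 20 tokens long, already mapped to embeddings.
x = torch.randn(4, 20, 100)

output, (h_n, c_n) = lstm(x)
print(output.shape)  # torch.Size([4, 20, 128]) - hidden state at every time step
print(h_n.shape)     # torch.Size([1, 4, 128]) - final hidden state of the layer
```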

The Transformer Architecture

The landscape of NLP underwent a dramatic transformation with the introduction of the transformer model in the landmark paper “Attention is All You Need” by Vaswani et al. in 2017. The transformer architecture departs from the sequential processing of RNNs and LSTMs and instead utilizes a mechanism called “self-attention” to weigh the influence of different parts of the input data.

The core idea of the transformer is that it can process the entire input data at once, rather than sequentially. This allows for much more parallelization and, as a result, significant increases in training speed. The self-attention mechanism enables the model to focus on different parts of the text as it processes it, which is crucial for understanding the context and the relationships between words, no matter their position in the text.
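
The following sketch (NumPy, with randomly initialized weights purely for illustration) shows the scaled dot-product self-attention computation at the heart of the architecture: queries, keys, and values are all projections of the same input, and every position attends to every other position in a single matrix product:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the same input X into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # All pairwise interactions are computed at once, which is what makes
    # the transformer parallelizable across the whole sequence.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

seq_len, d_model = 5, 16
rng = np.random.default_rng(0)
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 16)
```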

Encoder and Decoder in Transformers:

In the original Transformer model, as described in the paper “Attention is All You Need” by Vaswani et al., the architecture is divided into two main parts: the encoder and the decoder. Both parts are composed of layers that have the same general structure but serve different purposes.

Encoder:

  • Role: The encoder's role is to process the input data and create a representation that captures the relationships between the elements (like words in a sentence). This part of the transformer does not generate any new content; it simply transforms the input into a state that the decoder can use.
  • Functionality: Each encoder layer has self-attention mechanisms and feed-forward neural networks. The self-attention mechanism allows each position in the encoder to attend to all positions in the previous layer of the encoder—thus, it can learn the context around each word.
  • Contextual Embeddings: The output of the encoder is a series of vectors which represent the input sequence in a high-dimensional space. These vectors are often referred to as contextual embeddings because they encode not just the individual words but also their context within the sentence.

Decoder:

  • Role: The decoder's role is to generate output data sequentially, one part at a time, based on the input it receives from the encoder and what it has generated so far. It is designed for tasks like text generation, where the order of generation is crucial.
  • Functionality: Decoder layers also contain self-attention mechanisms, but they are masked to prevent positions from attending to subsequent positions. This ensures that the prediction for a particular position can only depend on known outputs at positions before it. Additionally, the decoder layers include a second attention mechanism that attends to the output of the encoder, integrating the context from the input into the generation process.
  • Sequential Generation Capabilities: This refers to the ability of the decoder to generate a sequence one element at a time, building on what it has already produced. For example, when generating text, the decoder predicts the next word based on the context provided by the encoder and the sequence of words it has already generated.

Each of these sub-layers within the encoder and decoder is crucial for the model's ability to handle complex NLP tasks. The multi-head attention mechanism, in particular, allows the model to selectively focus on different parts of the sequence, providing a rich understanding of context.
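
A minimal sketch using PyTorch's built-in transformer module (layer sizes are arbitrary and chosen only to keep the example small): the encoder consumes the source sequence, the decoder consumes the target produced so far, and a causal mask enforces the rule that no position may attend to later positions:

```python
import torch
import torch.nn as nn

# A small encoder-decoder transformer; embedding layers and output projections are omitted.
model = nn.Transformer(d_model=64, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)

src = torch.randn(1, 10, 64)  # source sequence fed to the encoder
tgt = torch.randn(1, 7, 64)   # target sequence generated so far, fed to the decoder

# Causal (look-ahead) mask: each decoder position may only attend to earlier positions.
tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))

out = model(src, tgt, tgt_mask=tgt_mask)
print(out.shape)  # torch.Size([1, 7, 64])
```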

Popular Models Leveraging Transformers

Following the initial success of the transformer model, there was an explosion of new models built on its architecture, each with its own innovations and optimizations for different tasks:

BERT (Bidirectional Encoder Representations from Transformers): Introduced by Google in 2018, BERT revolutionized the way contextual information is integrated into language representations. By pre-training on a large corpus of text with a masked language model and next-sentence prediction, BERT captures rich bidirectional contexts and has achieved state-of-the-art results on a wide array of NLP tasks.


T5 (Text-to-Text Transfer Transformer): Introduced by Google in 2019, T5 reframes all NLP tasks as a text-to-text problem, using a unified text-based format. This approach simplifies the process of applying the model to a variety of tasks, including translation, summarization, and question answering.


GPT (Generative Pre-trained Transformer): Developed by OpenAI, the GPT line of models started with GPT-1 and reached GPT-4 by 2023. These models are pre-trained using unsupervised learning on vast amounts of text data and fine-tuned for various tasks. Their ability to generate coherent and contextually relevant text has made them highly influential in both academic and commercial AI applications.


Here's a more in-depth comparison of the T5, BERT, and GPT models across various dimensions:

1. Tokenization and Vocabulary

  • BERT: Uses WordPiece tokenization with a vocabulary size of around 30,000 tokens.
  • GPT: Employs Byte Pair Encoding (BPE); GPT-2 and GPT-3 use a vocabulary of roughly 50,000 tokens (50,257). The 175 billion figure associated with GPT-3 is its parameter count, not its vocabulary size.
  • T5: Utilizes SentencePiece tokenization which treats the text as raw and does not require pre-segmented words.
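
These differences can be inspected directly with the Hugging Face transformers library, assuming it is installed and can download checkpoints; bert-base-uncased, gpt2, and t5-small are used here as commonly available stand-ins for each family (GPT-3's own tokenizer is not distributed this way):

```python
from transformers import AutoTokenizer

checkpoints = {"BERT": "bert-base-uncased", "GPT-2": "gpt2", "T5": "t5-small"}
sentence = "Transformers changed natural language processing."

for name, ckpt in checkpoints.items():
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    # Each tokenizer splits the same sentence into different subword units.
    print(f"{name} ({len(tokenizer)} tokens in vocab): {tokenizer.tokenize(sentence)}")
```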

2. Pre-training Objectives

  • BERT: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP).
  • GPT: Causal Language Modeling (CLM), where each token predicts the next token in the sequence.
  • T5: Uses a denoising objective where random spans of text are replaced with a sentinel token and the model learns to reconstruct the original text.
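
A framework-free sketch of what the three objectives ask the model to do with the same toy sentence (masking rates, sentinel tokens, and span choices are simplified for illustration):

```python
sentence = "the quick brown fox jumps over the lazy dog"
words = sentence.split()

# BERT-style masked language modeling: hide some tokens, predict them in place.
mlm_input = "the quick [MASK] fox jumps over the [MASK] dog"
mlm_targets = {2: "brown", 7: "lazy"}  # position -> original token

# GPT-style causal language modeling: every prefix predicts the next token.
clm_pairs = [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]
print(clm_pairs[3])  # ('the quick brown fox', 'jumps')

# T5-style span corruption: corrupted spans become sentinel tokens in the input,
# and the target spells the spans back out after their sentinels.
t5_input = "the quick <extra_id_0> jumps over <extra_id_1> dog"
t5_target = "<extra_id_0> brown fox <extra_id_1> the lazy <extra_id_2>"
```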

3. Input Representation

  • BERT: Token, Segment, and Positional Embeddings are combined to represent the input.
  • GPT: Token and Positional Embeddings are combined (no segment embeddings as it is not designed for sentence-pair tasks).
  • T5: Only Token Embeddings with added Relative Positional Encodings during the attention operations.
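
BERT's segment embeddings correspond to the token_type_ids a tokenizer produces when encoding a sentence pair; a short sketch with the Hugging Face tokenizer (assuming the bert-base-uncased checkpoint) looks like this, with positional embeddings added later inside the model rather than by the tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Encoding a sentence pair: segment ids distinguish the first sentence from the second.
encoded = tokenizer("How old are you?", "I am six years old.")
print(encoded["input_ids"])       # token ids, including [CLS] and [SEP]
print(encoded["token_type_ids"])  # 0s for the first sentence, 1s for the second
```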

4. Attention Mechanism

  • BERT: Uses absolute positional encodings and allows each token to attend to all tokens to the left and right (bidirectional attention).
  • GPT: Also uses absolute positional encodings but restricts attention to previous tokens only (unidirectional attention).
  • T5: Implements a variant of the transformer that uses relative position biases instead of positional embeddings.

5. Model Architecture

  • BERT: Encoder-only architecture with multiple layers of transformer blocks.
  • GPT: Decoder-only architecture, also with multiple layers but designed for generative tasks.
  • T5: Encoder-decoder architecture, where both the encoder and decoder are composed of transformer layers.
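
These three choices map onto three different model classes in the Hugging Face transformers library; the sketch below just loads a small, commonly used checkpoint for each (weights are downloaded on first use, with gpt2 again standing in for the GPT family):

```python
from transformers import BertModel, GPT2LMHeadModel, T5ForConditionalGeneration

bert = BertModel.from_pretrained("bert-base-uncased")        # encoder-only
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")               # decoder-only
t5 = T5ForConditionalGeneration.from_pretrained("t5-small")  # encoder-decoder

print(type(bert).__name__, type(gpt2).__name__, type(t5).__name__)
```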

6. Fine-tuning Approach

  • BERT: Adapts the final hidden states of the pre-trained model for downstream tasks with additional output layers as needed.
  • GPT: Adds a linear layer on top of the transformer and fine-tunes on the downstream task using the same causal language modeling objective.
  • T5: Converts all tasks into a text-to-text format, where the model is fine-tuned to generate the target sequence from the input sequence.
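
A brief sketch of the two contrasting styles, assuming the transformers library: for BERT a fresh classification head is attached on top of the pre-trained encoder, while for T5 the task is expressed entirely as input and target text (the task prefix shown is illustrative; exact prefixes vary by setup):

```python
from transformers import AutoModelForSequenceClassification

# BERT-style fine-tuning: a new, randomly initialized classification head
# is placed on top of the pre-trained encoder and trained on labeled data.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# T5-style fine-tuning: every task is text in, text out; a prefix names the task.
t5_example = {
    "input": "sst2 sentence: this movie was surprisingly good",
    "target": "positive",
}
print(type(model).__name__, t5_example)
```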

7. Training Data and Scale

  • BERT: Trained on BooksCorpus and English Wikipedia.
  • GPT: GPT-2 and GPT-3 were trained on diverse datasets extracted from the internet, with GPT-3 trained on an even larger mixture that includes a filtered version of Common Crawl.
  • T5: Trained on the “Colossal Clean Crawled Corpus” (C4), a large, cleaned subset of Common Crawl.

8. Handling of Context and Bidirectionality

  • BERT: Designed to understand context in both directions simultaneously.
  • GPT: Trained to understand context in a forward direction (left-to-right).
  • T5: Can model bidirectional context in the encoder and unidirectional in the decoder, appropriate for sequence-to-sequence tasks.

9. Adaptability to Downstream Tasks

  • BERT: Requires task-specific head layers and fine-tuning for each downstream task.
  • GPT: Is generative in nature and can be prompted to perform tasks with minimal changes to its structure.
  • T5: Treats every task as a “text-to-text” problem, making it inherently flexible and adaptable to new tasks.
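
The prompting style associated with GPT models can be sketched with the text-generation pipeline; gpt2 is used here because it is openly available, so the completion will be far weaker than what GPT-3 or GPT-4 would produce, but the interface illustrates the idea:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The task is specified entirely in the prompt; the model structure is unchanged.
prompt = "Translate English to French: cheese =>"
print(generator(prompt, max_new_tokens=5)[0]["generated_text"])
```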

10. Interpretability and Explainability

  • BERT: The bidirectional nature provides rich contextual embeddings but can be harder to interpret.
  • GPT: The unidirectional context may be more straightforward to follow but lacks the depth of bidirectional context.
  • T5: The encoder-decoder framework provides a clear separation of processing steps but can be complex to analyze due to its generative nature.

The Impact of Transformers on NLP

Transformers have revolutionized the field of NLP by enabling models to process sequences of data in parallel, which dramatically increased the speed and efficiency of training large neural networks. They introduced the self-attention mechanism, allowing models to weigh the significance of each part of the input data, regardless of distance within the sequence. This led to unprecedented improvements in a wide array of NLP tasks, including but not limited to translation, question answering, and text summarization.

Research continues to push the boundaries of what transformer-based models can achieve. GPT-4 and its contemporaries are not just larger in scale but also more efficient and capable due to advances in architecture and training methods. Techniques like few-shot learning, where models perform tasks with minimal examples, and methods for more effective transfer learning are at the forefront of current research.

Language models, including those based on transformers, learn from data that can contain biases. Researchers and practitioners are actively working to identify, understand, and mitigate these biases. Techniques range from curated training datasets to post-training adjustments aimed at fairness and neutrality.

I have spent the past five years immersing myself in the fascinating world of Machine Learning and Deep Learning. My passion and expertise have led me to contribute to over 50 diverse software engineering projects, with a particular focus on AI/ML. My ongoing curiosity has also drawn me toward Natural Language Processing, a field I am eager to explore further.