
What is Few-Shot Learning?


Few-shot learning refers to a variety of algorithms and techniques used to develop an AI model using a very small amount of training data. Few-shot learning endeavors to let an AI model recognize and classify new data after being exposed to comparatively few training instances. Few-shot training stands in contrast to traditional methods of training machine learning models, where a large amount of training data is typically used. Few-shot learning is used primarily in computer vision.

To develop a better intuition for few-shot learning, let’s examine the concept in more detail. We’ll look at the motivations and ideas behind few-shot learning, explore the different types of few-shot learning, and cover some of the models used in few-shot learning at a high level. Finally, we’ll examine some applications of few-shot learning.

What Is Few-Shot Learning?

“Few-shot learning” describes the practice of training a machine learning model with a minimal amount of data. Typically, machine learning models are trained on large volumes of data, the larger the better. However, few-shot learning is an important machine learning concept for a few different reasons.

One reason for using few-shot learning is that it can dramatically cut the amount of data needed to train a machine learning model, which in turn cuts the time needed to label large datasets. Likewise, few-shot learning reduces the need to engineer task-specific features when a common dataset is used to create different models. Ideally, few-shot learning makes models more robust and able to recognize objects from less data, creating more general models as opposed to the highly specialized models that are the norm.

Few-shot learning is most commonly used in the computer vision field, as the nature of computer vision problems necessitates either large volumes of data or a flexible model.

Sub-Categories

“Few-shot” learning is just one variety of learning from very few training examples, and there are related subcategories that also involve training with a minimal amount of data. “One-shot” learning, for example, involves teaching a model to recognize an object after seeing just one image of that object. The general tactics used in one-shot learning and few-shot learning are the same. Be aware that the term “few-shot” learning may also be used as an umbrella term to describe any situation where a model is trained with very little data.

Approaches to Few-Shot Learning

Most few-shot learning approaches fit into one of three categories: data-level approaches, parameter-level approaches, and metric-based approaches.

Data-level Approaches

Data-level approaches to few-shot learning are very simple in concept. In order to train a model when you don’t have enough training data, you can just get more training data. There are various techniques a data scientist can use to increase the amount of training data they have.

One option is to supplement the exact target data with similar training data. For example, if you are training a classifier to recognize specific dog breeds but lack many images of the particular breeds you want to classify, you could include many images of dogs in general, which would help the classifier learn the general features that make up a dog.

Data augmentation can create more training data for a classifier. This typically involves applying transformations to existing training data, such as rotating existing images so that the classifier examines the images from different angles. GANs can also be used to generate new training examples based on what they learn from the few authentic examples of training data you have.
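
As a concrete illustration, here is a minimal augmentation pipeline; it is a sketch assuming torchvision (PyTorch) is available, and the transformation choices and the "data/dog_breeds" folder are illustrative examples rather than anything prescribed by a particular few-shot method.

```python
# Minimal data-augmentation sketch using torchvision.
# The folder "data/dog_breeds" is a hypothetical example dataset.
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),        # show the image from different angles
    transforms.RandomHorizontalFlip(p=0.5),       # mirror the image
    transforms.ColorJitter(brightness=0.2,        # vary lighting conditions
                           contrast=0.2),
    transforms.RandomResizedCrop(224,             # crop and rescale to simulate zoom
                                 scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

# Each epoch re-draws random transformations, so the classifier effectively
# sees many variants of the same few authentic images.
train_set = datasets.ImageFolder("data/dog_breeds", transform=augment)
```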

Parameter-level Approaches

Meta-learning

One parameter-level approach to few-shot learning involves the use of a technique called “meta-learning”. Meta-learning involves teaching a model how to learn which features are important in a machine learning task. This can be accomplished by creating a method to regulate how the parameter space of a model is explored.

Meta-learning makes use of two different models: a “teacher” model and a “student” model. The teacher model learns how to encapsulate the parameter space, while the student model learns how to recognize and classify the actual items in the dataset. To put it another way, the teacher model learns how to optimize a model, while the student model learns how to classify. The teacher model’s outputs are used to guide the student model, showing it how to navigate the large parameter space that results from having too little training data. Hence the “meta” in meta-learning.

One of the main problems with few-shot learning models is that they can easily overfit the training data, as they typically operate in high-dimensional parameter spaces. Limiting the parameter space of the model mitigates this problem. While that can be accomplished by applying regularization techniques and selecting appropriate loss functions, using a teacher model can dramatically improve the performance of a few-shot model.

A few-shot learning classifier model (the student model) will endeavor to generalize based on the small amount of training data it’s provided with, and its accuracy can improve when a teacher model directs it through the high-dimensional parameter space. This general architecture is referred to as a “gradient-based” meta-learner.

The full process of training a gradient-based meta-learner is as follows (a code sketch of a single episode appears after the list):

  1. Create the base-learner (student) model
  2. Train the base-learner model on the support set
  3. Have the base-learner return predictions for the query set
  4. Train the meta-learner (teacher) on the loss derived from the classification error
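
To make those steps concrete, here is a minimal NumPy sketch of a single episode, assuming the student is a simple linear softmax classifier and the meta-update is a first-order one; the function names, learning rates, and one-hot label format are illustrative choices rather than part of any specific published method.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def grad_cross_entropy(W, X, Y):
    """Gradient of the mean cross-entropy loss of a linear softmax classifier.
    X: (n, d) features, Y: (n, c) one-hot labels, W: (d, c) weights."""
    return X.T @ (softmax(X @ W) - Y) / len(X)

def meta_episode(meta_W, support, query, inner_lr=0.1, meta_lr=0.01, steps=5):
    (Xs, Ys), (Xq, Yq) = support, query
    # Steps 1-2: the base-learner (student) starts from the meta-learner's
    # (teacher's) parameters and adapts on the small support set.
    W = meta_W.copy()
    for _ in range(steps):
        W -= inner_lr * grad_cross_entropy(W, Xs, Ys)
    # Step 3: the adapted base-learner is evaluated on the query set.
    query_grad = grad_cross_entropy(W, Xq, Yq)
    # Step 4: a first-order meta-update nudges the shared initialization
    # using the classification error of the adapted student on the query set.
    return meta_W - meta_lr * query_grad
```

Running this episode over many randomly sampled support/query splits gradually moves the shared initialization toward parameters that adapt well from only a few examples.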

Variations on Meta-learning

Model-Agnostic Meta-learning is a method used to augment the basic gradient-based meta-learning technique we covered above.

As we covered above, a gradient-based meta-learner uses the prior experience gained by a teacher model to fine-tune itself and deliver more accurate predictions from a small amount of training data. However, starting from randomly initialized parameters means the model can still potentially overfit the data. To avoid this, a “model-agnostic” meta-learner limits the influence of the teacher/base model: instead of training the student model directly on the loss for the predictions made by the teacher model, the student model is trained on the loss for its own predictions.

For every episode of training a model-agnostic meta-learner (see the code sketch after this list):

  1. A copy of the current meta-learner model is created.
  2. The copy is trained with the assistance of the base model/teacher model.
  3. The copy returns predictions for the training data.
  4. The computed loss is used to update the meta-learner.
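
Here is a hedged PyTorch sketch of one such episode using the common first-order approximation; meta_model, make_episode, and the hyperparameters are illustrative placeholders rather than names from the article.

```python
import copy
import torch
import torch.nn.functional as F

def maml_episode(meta_model, meta_optimizer, make_episode,
                 inner_lr=0.01, inner_steps=5):
    support_x, support_y, query_x, query_y = make_episode()

    # 1. Copy the current meta-learner for this episode.
    learner = copy.deepcopy(meta_model)
    inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)

    # 2. Adapt the copy on the support set.
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        F.cross_entropy(learner(support_x), support_y).backward()
        inner_opt.step()

    # 3. The adapted copy makes predictions on the episode's held-out query
    #    data; the loss is computed on its own predictions, not a teacher's.
    query_loss = F.cross_entropy(learner(query_x), query_y)
    grads = torch.autograd.grad(query_loss, list(learner.parameters()))

    # 4. Use that loss's gradients to update the shared meta-learner.
    meta_optimizer.zero_grad()
    for p, g in zip(meta_model.parameters(), grads):
        p.grad = g.clone()
    meta_optimizer.step()
    return query_loss.item()
```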

Metric-Learning

Metric-learning approaches to designing a few-shot learning model typically use basic distance metrics to compare samples in a dataset. Distance measures such as cosine distance are used to classify query samples based on their similarity to the support samples. For an image classifier, this means classifying images based on the similarity of their surface-level features. After the support set of images is transformed into embedding vectors, the same is done with the query set; each query embedding is then compared against the support embeddings, and the classifier assigns the query the class of the support samples it is closest to.
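
As an illustration, here is a minimal NumPy sketch of that comparison, assuming some encoder has already turned the support and query images into embedding vectors; the array names and toy numbers are made up for the example.

```python
import numpy as np

def cosine_classify(support_emb, support_labels, query_emb):
    """Give each query the label of its most cosine-similar support embedding."""
    s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    similarity = q @ s.T                 # (n_query, n_support) cosine similarities
    return support_labels[similarity.argmax(axis=1)]

# Toy example: 3 support embeddings (2 classes) and 2 queries, 4-dim each.
support = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.9, 0.1, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
labels = np.array([0, 0, 1])
queries = np.array([[0.8, 0.2, 0.0, 0.0],
                    [0.0, 0.1, 0.9, 0.0]])
print(cosine_classify(support, labels, queries))   # -> [0 1]
```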

A more advanced metric-based solution is the “prototypical network”. Prototypical networks combine clustering with the metric-based classification described above. As in K-means clustering, a centroid (or prototype) is computed for each class from the embeddings of its support examples. A Euclidean distance metric is then used to measure how far each query embedding is from each class prototype, and the query sample is assigned to the class of the nearest prototype.
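
A minimal sketch of that classification rule, again assuming embeddings have already been computed by some encoder (the function and variable names are illustrative):

```python
import numpy as np

def prototypical_classify(support_emb, support_labels, query_emb):
    """Assign each query to the class whose prototype (support centroid) is nearest."""
    classes = np.unique(support_labels)
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = np.stack([support_emb[support_labels == c].mean(axis=0)
                           for c in classes])
    # Euclidean distance from every query embedding to every prototype.
    dists = np.linalg.norm(query_emb[:, None, :] - prototypes[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]     # nearest prototype wins
```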

Most other few-shot learning approaches are just variations on the core techniques covered above.

Applications for Few-Shot Learning

Few-shot learning has applications in many different subfields of data science, such as computer vision, natural language processing, robotics, healthcare, and signal processing.

Applications for few-shot learning in the computer vision space include efficient character recognition, image classification, object recognition, object tracking, motion prediction, and action localization. Natural language processing applications for few-shot learning include translation, sentence completion, user intent classification, sentiment analysis, and multi-label text classification. In robotics, few-shot learning can help robots learn tasks from just a few demonstrations, letting them learn how to carry out actions, move, and navigate the world around them. Few-shot drug discovery is an emerging area of AI in healthcare. Finally, few-shot learning has applications in acoustic signal processing, the analysis of sound data, letting AI systems clone a voice from just a few samples of a user’s speech or convert a voice from one speaker to another.
