Zero-shot learning (ZSL)

Zero-shot learning (ZSL) definition

Zero-shot learning is a technique in which a machine learning model makes predictions or performs tasks without having been explicitly trained on examples of those tasks. Instead, it transfers what it has already learned from related data or tasks to handle unseen situations.

See also: training data, machine learning, end-to-end (E2E) learning, data augmentation, LLM temperature

How does zero-shot learning (ZSL) work?

  • The model is trained on a wide range of tasks or data, which helps it learn general patterns, relationships, and features that can be applied across different contexts.
  • The model is provided with a description or instructions about the new task, even if it hasn’t seen examples of that specific task during training.
  • The model uses its existing knowledge to link the new task or data to something similar it’s already seen. It leverages patterns learned from other tasks to make predictions or decisions.
  • Since the model can apply general knowledge, it can make reasonable predictions or solve problems without needing explicit examples or fine-tuning for the new task (a minimal example follows this list).
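
The steps above can be seen in zero-shot text classification. The sketch below is a minimal example, not a definitive recipe: it assumes the Hugging Face transformers library and the publicly available facebook/bart-large-mnli model, which was pre-trained on natural language inference rather than on the labels supplied at inference time.

    # A minimal sketch of zero-shot text classification, assuming the
    # Hugging Face "transformers" library and the facebook/bart-large-mnli
    # model are available; the model never saw these labels during training.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    text = "The new update drains my phone battery within a few hours."
    candidate_labels = ["battery life", "screen quality", "customer service"]

    # The new task is described only through the candidate labels; the model
    # links them to patterns it learned during pre-training.
    result = classifier(text, candidate_labels)
    print(result["labels"][0], result["scores"][0])  # highest-scoring label first

Behind the scenes, the pipeline frames each candidate label as an entailment hypothesis (for example, "This example is battery life.") and scores how strongly the input text supports it, which is how general pre-training is reused for a task the model was never fine-tuned on.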

Benefits of zero-shot learning (ZSL)

  • The model can handle new tasks without needing additional data or training specific to that task.
  • Since it doesn’t need task-specific data, it reduces the time and effort needed for model training.
  • The model can work with a wide range of tasks, even ones it hasn’t seen before.
  • It helps the model apply what it has learned to new situations, making it more adaptable in real-world scenarios.
  • You can use the same model for multiple tasks without retraining, which makes it faster to deploy in different areas (a short usage sketch follows this list).
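
As an illustration of the last point, the same zero-shot pipeline can be pointed at unrelated tasks simply by changing the candidate labels; no weights are updated between calls. The snippet below is again a sketch under the same assumptions as above (the transformers library and the facebook/bart-large-mnli model).

    # Reusing one zero-shot classifier for two unrelated tasks; only the
    # candidate labels change between calls, never the model weights.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    # Task 1: route a customer support ticket.
    ticket = "I was charged twice for my subscription this month."
    print(classifier(ticket, ["billing", "technical issue", "account access"])["labels"][0])

    # Task 2: tag a news headline, with no retraining in between.
    headline = "Central bank raises interest rates for the third time this year."
    print(classifier(headline, ["economy", "sports", "entertainment"])["labels"][0])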