Masked Model

Masked models are machine learning models, often used in natural language processing, that are trained to predict missing (masked) parts of their input. For example, in models like BERT, a random subset of tokens in a sentence is hidden during training, and the model learns to predict the original tokens from the surrounding context. This objective teaches the model contextual relationships in the data, which improves performance on downstream tasks such as text classification, named-entity recognition, and question answering.
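The masking step itself can be sketched as follows. This is a simplified illustration, not BERT's exact recipe (BERT masks roughly 15% of tokens and sometimes replaces them with random tokens or leaves them unchanged); the `mask_tokens` helper and the `[MASK]` placeholder string are illustrative choices.

```python
import random

MASK = "[MASK]"  # placeholder token standing in for hidden words

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT-style masking (simplified): hide a fraction of tokens;
    labels record the original token wherever a mask was applied."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)    # the model is trained to predict this
        else:
            masked.append(tok)
            labels.append(None)   # no prediction loss at this position
    return masked, labels

tokens = "the cat sat on the mat".split()
masked, labels = mask_tokens(tokens, mask_prob=0.3, seed=1)
print(masked)   # some tokens replaced by [MASK]
print(labels)   # originals kept only at masked positions
```

During training, the model receives the masked sequence and is penalized only at the positions where `labels` is set, so it must infer the hidden words from context alone.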