A word embedding is a type of embedding specifically used in Natural Language Processing (NLP). It maps words (or subwords) to real-valued vectors in a continuous vector space, where semantically similar words are close together.

Example word embeddings:

- Word2Vec
- GloVe
- fastText

Properties:

- Semantically similar words map to nearby vectors; proximity is usually measured with cosine similarity (see the sketch after this list).
- Directions in the space can encode semantic relationships, which is what makes vector arithmetic like the analogy example below meaningful.
- The vectors are dense and relatively low-dimensional (commonly 50-300 dimensions), unlike sparse one-hot encodings.
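
A toy sketch of the proximity property, using hand-made 3-dimensional vectors and a hypothetical `cosine_similarity` helper (real embeddings are learned, not hand-written, and have far more dimensions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors, invented for illustration only.
word_vectors = {
    "cat": np.array([0.9, 0.8, 0.1]),
    "dog": np.array([0.85, 0.75, 0.2]),
    "car": np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(word_vectors["cat"], word_vectors["dog"]))  # high, ~0.99
print(cosine_similarity(word_vectors["cat"], word_vectors["car"]))  # lower, ~0.30
```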

Example:

word_vectors["king"] - word_vectors["man"] + word_vectors["woman"] ≈ word_vectors["queen"]
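
A minimal sketch of checking this analogy against pretrained vectors, assuming the gensim library is installed; the model name "glove-wiki-gigaword-50" is one of gensim's downloadable datasets and is an assumption here, so substitute any pretrained KeyedVectors you have:

```python
import gensim.downloader as api

# Downloads (once) and loads 50-dimensional GloVe vectors as KeyedVectors.
word_vectors = api.load("glove-wiki-gigaword-50")  # assumed model name

# most_similar computes king - man + woman and returns the nearest
# neighbors of the resulting vector, ranked by cosine similarity.
result = word_vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # e.g. [('queen', 0.85...)]
```

Note that `most_similar` excludes the query words themselves from the results, which is why "king" does not come back as its own nearest neighbor.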

See: Cosine Similarity