Abstract
Representation learning has become a central research direction in modern artificial intelligence. Rather than relying heavily on large amounts of labeled data, self-supervised and unsupervised learning techniques aim to learn meaningful representations from unlabeled data. These approaches reduce dependence on human annotation and allow models to discover the hidden structure and patterns present in raw data. Self-supervised learning constructs surrogate (pretext) tasks from the data itself, whereas unsupervised learning extracts intrinsic patterns without any supervisory signal. This paper reviews major techniques, architectures, and recent developments in self-supervised and unsupervised representation learning across the vision, natural language processing, and speech domains. Methods such as autoencoders, contrastive learning, generative models, masked modeling, and clustering-based learning are discussed in detail, and applications in healthcare, robotics, recommender systems, and autonomous systems are highlighted. Open challenges and future directions are also examined.
Keywords: Self-supervised learning, Unsupervised learning, Representation learning, Contrastive learning, Autoencoders, Masked modeling, Generative models, Deep learning