Unsupervised Representation Learning from Sparse Transformation Analysis
Publication date: 7 Oct 2024
Topic: Representation Learning
Paper: https://arxiv.org/pdf/2410.05564v1.pdf
GitHub: https://github.com/kingjamessong/latent-flow
Description:
In this paper we propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components. Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model, before being decoded to predict a future input state. The flow model is decomposed into a number of rotational (divergence-free) vector fields and a number of potential flow (curl-free) fields. Our sparsity prior encourages only a small number of these fields to be active at any instant and infers the speed with which the probability flows along these fields.
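A minimal sketch of the core idea, in PyTorch: a latent state is advanced by a sparse combination of curl-free (potential) and divergence-free (rotational) vector fields. The linear field parameterizations, dimensions, and the L1 penalty standing in for the sparsity prior are illustrative assumptions for brevity, not the authors' implementation (see the GitHub repository for that).

```python
# Illustrative sketch: a latent state z is moved by a sparse combination of
# curl-free (potential) and divergence-free (rotational) vector fields.
import torch
import torch.nn as nn


class PotentialField(nn.Module):
    """Curl-free field: gradient of a quadratic potential phi(z) = 0.5 z^T S z."""

    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(0.1 * torch.randn(dim, dim))

    def forward(self, z):
        S = 0.5 * (self.w + self.w.T)  # symmetric matrix => v(z) = S z is curl-free
        return z @ S


class RotationalField(nn.Module):
    """Divergence-free field: v(z) = A z with A antisymmetric."""

    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(0.1 * torch.randn(dim, dim))

    def forward(self, z):
        A = self.w - self.w.T  # antisymmetric matrix => div(A z) = trace(A) = 0
        return z @ A.T


class SparseLatentFlow(nn.Module):
    """Advances z by a sparse mixture of the two kinds of fields."""

    def __init__(self, dim, n_potential=4, n_rotational=4):
        super().__init__()
        self.fields = nn.ModuleList(
            [PotentialField(dim) for _ in range(n_potential)]
            + [RotationalField(dim) for _ in range(n_rotational)]
        )
        # Per-field "speeds"; penalizing their L1 norm stands in for the paper's
        # sparsity prior, which keeps most fields inactive at any instant.
        self.speed = nn.Parameter(torch.zeros(len(self.fields)))

    def forward(self, z, dt=0.1):
        velocity = sum(s * f(z) for s, f in zip(self.speed, self.fields))
        return z + dt * velocity

    def sparsity_penalty(self):
        return self.speed.abs().sum()


if __name__ == "__main__":
    flow = SparseLatentFlow(dim=8)
    z_t = torch.randn(2, 8)       # latent code of the current input (from an encoder)
    z_next = flow(z_t)            # transformed latent, used to decode the future state
    loss = z_next.pow(2).mean() + 1e-2 * flow.sparsity_penalty()  # toy objective
    loss.backward()
    print(z_next.shape, float(flow.sparsity_penalty()))
```

In the full model the latent activations come from an encoder as distributions, the transformed sample is decoded to predict the next observation, and sparsity is imposed through a probabilistic prior over the field speeds rather than an L1 penalty.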