All Posts
-
[VQ-VAE] Neural Discrete Representation Learning · Research/Generative Model · 2024. 4. 7. 15:13
※ https://arxiv.org/pdf/1711.00937.pdf
Abstract: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather..
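The truncated sentence points at the paper's central idea: the encoder output is snapped to the nearest entry of a learned codebook. Below is a minimal PyTorch sketch of that quantisation step; the function name, shapes, and the β default are my own illustration (assuming flattened (batch, dim) latents), not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook, beta=0.25):
    """Nearest-neighbour lookup into a codebook, VQ-VAE style.

    z_e:      (batch, dim) continuous encoder outputs
    codebook: (K, dim) learnable embedding vectors
    """
    # Distance from each encoder output to every codebook entry.
    dist = torch.cdist(z_e, codebook)   # (batch, K)
    idx = dist.argmin(dim=1)            # discrete code indices
    z_q = codebook[idx]                 # quantised latents

    # Codebook loss pulls embeddings toward encoder outputs;
    # the commitment loss (weighted by beta) does the reverse.
    loss = F.mse_loss(z_q, z_e.detach()) + beta * F.mse_loss(z_e, z_q.detach())

    # Straight-through estimator: copy decoder gradients from z_q
    # to z_e past the non-differentiable argmin.
    z_q = z_e + (z_q - z_e).detach()
    return z_q, idx, loss
```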
-
Lagging Inference Networks and Posterior Collapse in Variational Autoencoders · Research/Generative Model · 2024. 4. 6. 10:26
※ https://arxiv.org/pdf/1901.05534.pdf
Abstract: The variational autoencoder (VAE) is a popular combination of a deep latent-variable model and an accompanying variational learning technique. By using a neural inference network to approximate the model's posterior on latent variables, VAEs efficiently parameterize a lower bound on marginal data likelihood that can be optimized directly via gradient me..
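For context on the bound the excerpt cuts off, this is the standard VAE evidence lower bound in the usual notation (not copied from the paper):

$$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)$$

Posterior collapse is the degenerate optimum where the KL term is driven to zero, so that $q_\phi(z \mid x) \approx p(z)$ and the latents carry no information about $x$; the paper attributes this in part to the inference network lagging behind the generator early in training.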
-
[HFVAE] Structured Disentangled Representations · Research/Generative Model · 2024. 4. 5. 22:45
※ https://arxiv.org/pdf/1804.02086.pdf
Abstract: Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard objective function. These approaches generally assume a simple diagonal Ga..
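The "simple diagonal Ga.." the excerpt truncates is the diagonal Gaussian assumption standard VAEs make; written out in standard notation (my transcription, not the paper's):

$$q_\phi(z \mid x) = \mathcal{N}\!\big(z;\ \mu_\phi(x),\ \mathrm{diag}(\sigma_\phi^2(x))\big), \qquad p(z) = \prod_d p(z_d)$$

Both the posterior and the prior factorise over dimensions, which is the structure the disentanglement modifications discussed in the post build on.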
-
[Factor VAE] Disentangling by Factorising · Research/Generative Model · 2024. 4. 5. 22:43
KL Decomposition · Mutual Information
※ https://arxiv.org/pdf/1802.05983.pdf
Abstract: We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We sh..
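The "KL Decomposition" and "Mutual Information" keywords point at the total-correlation penalty FactorVAE adds to the ELBO; as a sketch in standard notation (not the paper's exact typesetting):

$$\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N}\Big[\mathbb{E}_{q(z \mid x^{(i)})}\big[\log p(x^{(i)} \mid z)\big] - \mathrm{KL}\big(q(z \mid x^{(i)})\,\|\,p(z)\big)\Big] \;-\; \gamma\,\mathrm{KL}\Big(q(z)\,\Big\|\,\prod_j q(z_j)\Big)$$

The last term is the total correlation of the aggregate posterior $q(z)$; since it has no closed form, the paper estimates it with a discriminator via the density-ratio trick.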
-
[Beta-VAE] Learning Basic Visual Concepts with a Constrained Variational Framework · Research/Generative Model · 2024. 4. 4. 21:03
※ https://lilianweng.github.io/posts/2018-08-12-vae/
If each variable in the inferred latent representation z is only sensitive to one single generative factor and relatively invariant to other factors, we say this representation is disentangled or factorized. One benefit that often comes with a disentangled representation is good interpretability and easy generalization to a variety of tasks..
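The factorised representation described here is what the β-VAE objective encourages; in the usual notation:

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \beta\,\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big), \qquad \beta > 1$$

Setting β > 1 strengthens the pressure toward the factorised isotropic Gaussian prior, trading reconstruction quality for latent dimensions that each track a single generative factor.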
-
Semi-supervised Learning with Variational Autoencoders · Research/Generative Model · 2024. 4. 4. 12:03
※ https://bjlkeng.io/posts/semi-supervised-learning-with-variational-autoencoders/
Semi-supervised Learning: Semi-supervised learning is a set of techniques used to make use of unlabelled data in supervised learning problems (e.g. classification and regression). Semi-supervised learning falls between unsupervised and supervised learning because you make use of both labelled and unlabelled data..
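The definition above boils down to a combined objective: a supervised loss where labels exist plus an unsupervised loss where they don't. A minimal sketch, in which `model.classify` and `model.elbo` are hypothetical stand-ins for whatever classifier head and VAE bound the post builds:

```python
import torch.nn.functional as F

def semi_supervised_loss(model, x_labelled, y, x_unlabelled, alpha=1.0):
    """Generic semi-supervised objective: supervised cross-entropy on
    the labelled batch plus a negative ELBO on the unlabelled batch.
    `classify` and `elbo` are hypothetical, illustrative methods."""
    sup = F.cross_entropy(model.classify(x_labelled), y)  # uses the labels
    unsup = -model.elbo(x_unlabelled).mean()              # labels not needed
    return sup + alpha * unsup
```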
-
[CVAE 2] Semi-supervised Learning with Deep Generative Models · Research/Generative Model · 2024. 4. 3. 22:19
※ https://arxiv.org/pdf/1406.5298.pdf
Abstract: The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generali..
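From memory of the paper (Kingma et al., 2014), its M2 model optimises one bound per case, treating the label as a latent variable when it is missing; this is a sketch in standard notation, not a quote:

$$\log p_\theta(x, y) \ge \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(x \mid y, z) + \log p(y) + \log p(z) - \log q_\phi(z \mid x, y)\big] = -\mathcal{L}(x, y)$$

$$\log p_\theta(x) \ge \sum_y q_\phi(y \mid x)\big(-\mathcal{L}(x, y)\big) + \mathcal{H}\big(q_\phi(y \mid x)\big) = -\mathcal{U}(x)$$

A weighted classification loss on $q_\phi(y \mid x)$ over the labelled data is added so the classifier also learns from the labels directly.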
-
[CVAE 1] Learning Structured Output Representation using Deep Conditional Generative Models · Research/Generative Model · 2024. 4. 3. 16:46
※ https://proceedings.neurips.cc/paper_files/paper/2015/file/8d55a249e6baa5c06772297520da2051-Paper.pdf
※ Structured prediction: In the standard machine learning setting the output is a scalar quantity, which is fine for most real-world problems, but in some problems the expected output of a machine learning model is a complex structure such as a tree, a graph, or an image. To apply Machine Lea..
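Structured prediction is where the CVAE bound comes in: instead of modelling $p(x)$, it models the conditional $p(y \mid x)$ with a latent $z$ to capture multi-modal structured outputs. In standard notation (my transcription, not the paper's exact typesetting):

$$\log p_\theta(y \mid x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(y \mid x, z)\big] - \mathrm{KL}\big(q_\phi(z \mid x, y)\,\|\,p_\theta(z \mid x)\big)$$

Conditioning the prior, recognition network, and decoder on the input $x$ is what distinguishes this bound from the plain VAE ELBO.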