Research
-
Recurrent Neural Network, RNN | Research/NLP_reference | 2024. 4. 9. 16:52
※ https://wikidocs.net/60690 An RNN (Recurrent Neural Network) is a sequence model: a model that processes its inputs and outputs in sequence units. Consider a translator: the input is the sentence to be translated, i.e., a sequence of words, and the translated output sentence is likewise a word sequence. Models designed to handle such sequences are called sequence models, and among them the RNN is the most basic sequence model in deep learning. 1. Recurrent Neural Network (RNN) In all the neural networks covered so far, values passing through the hidden layer's activation function flowed only toward the output layer. Such networks are called feed-forward neural networks (Feed Forward Neu..
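The recurrence the excerpt describes can be written as h_t = tanh(W_x x_t + W_h h_{t-1} + b). A minimal sketch of that loop in PyTorch, with illustrative shapes and names (rnn_forward is not from the source):

```python
import torch

def rnn_forward(x_seq, W_x, W_h, b):
    """x_seq: (T, input_dim); returns the hidden state at every time step."""
    h = torch.zeros(W_h.shape[0])
    states = []
    for x_t in x_seq:                                # unlike a feed-forward net,
        h = torch.tanh(W_x @ x_t + W_h @ h + b)      # h feeds back into the next step
        states.append(h)
    return torch.stack(states)                       # (T, hidden_dim)

T, input_dim, hidden_dim = 5, 4, 3
out = rnn_forward(torch.randn(T, input_dim),
                  torch.randn(hidden_dim, input_dim),
                  torch.randn(hidden_dim, hidden_dim),
                  torch.zeros(hidden_dim))
print(out.shape)  # torch.Size([5, 3])
```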
-
[SA-VAE] Semi-Amortized Variational Autoencoders | Research/Generative Model | 2024. 4. 8. 09:02
※ https://arxiv.org/pdf/1802.02550.pdf Abstract Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAE), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approac..
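A minimal sketch of the semi-amortized idea under simplifying assumptions: the encoder's amortized (mu, logvar) output seeds a few per-instance SVI gradient steps on the ELBO before decoding. decoder is a hypothetical torch module; the full method also backpropagates through the refinement itself, which this sketch omits:

```python
import torch

def refine_variational_params(x, mu, logvar, decoder, steps=5, lr=1e-2):
    """Per-instance SVI refinement of the encoder's initial (mu, logvar)."""
    mu = mu.clone().detach().requires_grad_(True)
    logvar = logvar.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([mu, logvar], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()      # reparameterize
        nll = torch.nn.functional.mse_loss(decoder(z), x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        (nll + kl).backward()                                     # negative ELBO
        opt.step()
    return mu.detach(), logvar.detach()
```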
-
[IWAE] Importance Weighted Autoencoders | Research/Generative Model | 2024. 4. 7. 21:11
※ https://arxiv.org/pdf/1509.00519.pdf Abstract The variational autoencoder (VAE; Kingma & Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial..
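A minimal sketch of the k-sample importance-weighted bound L_k = E[log (1/k) Σ_i p(x, z_i)/q(z_i|x)], assuming a diagonal-Gaussian posterior and a standard normal prior; decoder_log_prob is a hypothetical callable returning log p(x | z_i) for each sample:

```python
import math
import torch

def iwae_bound(x, mu, logvar, decoder_log_prob, k=5):
    """mu, logvar: (D,) posterior parameters for one instance x."""
    std = (0.5 * logvar).exp()
    z = mu + torch.randn(k, mu.shape[0]) * std                 # k samples from q(z|x)
    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
    log_p_z = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
    log_w = decoder_log_prob(x, z) + log_p_z - log_q           # log importance weights
    return torch.logsumexp(log_w, dim=0) - math.log(k)         # log (1/k) sum_i w_i
```

With k = 1 this reduces to the standard ELBO; larger k gives a strictly tighter bound.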
-
[VQ-VAE] Neural Discrete Representation Learning | Research/Generative Model | 2024. 4. 7. 15:13
※ https://arxiv.org/pdf/1711.00937.pdf Abstract Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather..
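A minimal sketch of the quantization step the abstract alludes to: each encoder output is snapped to its nearest codebook embedding, and a straight-through estimator lets gradients bypass the non-differentiable argmin. Shapes and names are illustrative, not the paper's implementation:

```python
import torch

def vector_quantize(z_e, codebook):
    """z_e: (N, D) encoder outputs; codebook: (K, D) learned embeddings."""
    idx = torch.cdist(z_e, codebook).argmin(dim=1)     # nearest code per vector
    z_q = codebook[idx]                                # discrete representation
    # straight-through: the forward pass uses z_q, gradients flow back to z_e
    return z_e + (z_q - z_e).detach(), idx

z_e = torch.randn(8, 16, requires_grad=True)
codebook = torch.randn(64, 16)
z_q, idx = vector_quantize(z_e, codebook)
```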
-
Lagging Inference Networks and Posterior Collapse in Variational Autoencoders | Research/Generative Model | 2024. 4. 6. 10:26
※ https://arxiv.org/pdf/1901.05534.pdf Abstract The variational autoencoder (VAE) is a popular combination of deep latent variable model and accompanying variational learning technique. By using a neural inference network to approximate the model’s posterior on latent variables, VAEs efficiently parameterize a lower bound on marginal data likelihood that can be optimized directly via gradient me..
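A minimal sketch of the paper's remedy as commonly summarized: early in training, the inference network takes several gradient steps per decoder step so the approximate posterior stops lagging behind the model. elbo_loss, enc_opt, and dec_opt are hypothetical stand-ins:

```python
def train_step(batch, elbo_loss, enc_opt, dec_opt, aggressive=True, inner_steps=10):
    if aggressive:
        for _ in range(inner_steps):       # update only the inference network so the
            enc_opt.zero_grad()            # approximate posterior catches up
            elbo_loss(batch).backward()
            enc_opt.step()
    dec_opt.zero_grad()                    # then a single ordinary decoder update
    elbo_loss(batch).backward()
    dec_opt.step()
```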
-
[HFVAE] Structured Disentangled Representations | Research/Generative Model | 2024. 4. 5. 22:45
※ https://arxiv.org/pdf/1804.02086.pdf Abstract Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard objective function. These approaches generally assume a simple diagonal Ga..
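As an illustrative aside (not the paper's exact objective): with the diagonal Gaussian q(z|x) the abstract mentions and a standard normal prior, the KL term separates per dimension, so it can be regrouped over chosen groups of latent variables:

```python
import torch

def grouped_gaussian_kl(mu, logvar, groups):
    """mu, logvar: (D,); groups: lists of indices partitioning 0..D-1."""
    kl_per_dim = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp())
    return [kl_per_dim[g].sum() for g in groups]

mu, logvar = torch.randn(6), torch.randn(6)
print(grouped_gaussian_kl(mu, logvar, [[0, 1, 2], [3, 4, 5]]))
```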
-
[Factor VAE] Disentangling by Factorising | Research/Generative Model | 2024. 4. 5. 22:43
KL Decomposition, Mutual Information ※ https://arxiv.org/pdf/1802.05983.pdf Abstract We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We sh..
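A minimal sketch of FactorVAE's permute-dims trick: shuffling each latent dimension independently across the batch yields samples from the product of marginals, and a discriminator's logits then give a density-ratio estimate of total correlation. The discriminator is assumed to be defined and trained elsewhere:

```python
import torch

def permute_dims(z):
    """z: (B, D) samples from q(z|x); shuffle each dimension across the batch."""
    B, D = z.shape
    return torch.stack([z[torch.randperm(B), d] for d in range(D)], dim=1)

def tc_penalty(z, discriminator):
    logits = discriminator(z)                       # (B, 2) logits: [joint, product]
    return (logits[:, 0] - logits[:, 1]).mean()     # density-ratio estimate of TC
```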
-
[Beta-VAE] Learning Basic Visual Concepts with a Constrained Variational Framework | Research/Generative Model | 2024. 4. 4. 21:03
※ https://lilianweng.github.io/posts/2018-08-12-vae/ If each variable in the inferred latent representation z is sensitive to only a single generative factor and relatively invariant to the other factors, we say the representation is disentangled or factorized. One benefit that often comes with a disentangled representation is good interpretability and easy generalization to a variety of tasks..
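A minimal sketch of the beta-VAE objective under the usual assumptions (Gaussian posterior, reconstruction term up to constants): the standard VAE loss with the KL scaled by beta > 1 to pressure the posterior toward the factorized prior:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, recon, mu, logvar, beta=4.0):
    recon_nll = F.mse_loss(recon, x, reduction="sum")              # -log p(x|z), up to constants
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # KL(q(z|x) || N(0, I))
    return recon_nll + beta * kl                                   # beta > 1 tightens the bottleneck
```

Setting beta = 1 recovers the vanilla VAE; larger beta trades reconstruction quality for more factorized latents.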