All Posts
-
[RevNets] The Reversible Residual Network: Backpropagation Without Storing Activations | Research/Generative Model | 2024. 5. 15. 22:30
https://arxiv.org/pdf/1707.04585
Abstract: Deep residual networks (ResNets) have significantly pushed forward the state-of-the-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual ..
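The reversibility in the title means each residual block's inputs can be recomputed exactly from its outputs, so activations need not be stored for backpropagation. A minimal NumPy sketch of the additive coupling RevNets use, with toy functions F and G standing in for the learned residual subnetworks (the specific F and G here are arbitrary illustrations):

```python
import numpy as np

def F(x):
    return np.tanh(x)   # placeholder for a learned residual subnetwork

def G(x):
    return 0.5 * x      # placeholder for a second residual subnetwork

def rev_forward(x1, x2):
    # forward pass of one reversible block on a channel-split input
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2):
    # inputs are recovered exactly from outputs -- no stored activations
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = rev_forward(x1, x2)
r1, r2 = rev_inverse(y1, y2)
```

Note that inversion only evaluates F and G themselves, never their inverses, so any residual functions work; this is what lets ordinary ResNet-style blocks be made reversible.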
-
Variational Inference with Normalizing Flows | Research/Generative Model | 2024. 5. 15. 18:18
https://arxiv.org/pdf/1505.05770
Abstract: The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the qua..
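The richer posteriors the abstract alludes to are built with the change-of-variables formula: push a simple base density through an invertible map and correct by the log-determinant of the Jacobian. A one-dimensional sketch with an affine flow (the scale and shift values are arbitrary illustrations, not from the paper):

```python
import numpy as np

def base_logpdf(z):
    # log-density of the standard normal base distribution
    return -0.5 * (z ** 2 + np.log(2.0 * np.pi))

# invertible affine flow x = a * z + b (illustrative choice of a, b)
a, b = 2.0, 1.0
f_inv = lambda x: (x - b) / a

def flow_logpdf(x):
    # change of variables: log p_x(x) = log p_z(f^{-1}(x)) - log |df/dz|
    return base_logpdf(f_inv(x)) - np.log(abs(a))

# sanity check: the transformed density still integrates to 1
xs = np.linspace(-15.0, 17.0, 20001)
p = np.exp(flow_logpdf(xs))
integral = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(xs))
```

The paper's planar and radial flows follow the same recipe in higher dimensions, composing many such invertible maps and summing their log-det-Jacobian terms.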
-
[Gated PixelCNN] PixelCNN's Blind Spot | Research/Generative Model | 2024. 5. 14. 17:50
Introduction: PixelCNNs are a type of generative model that learns the probability distribution of pixels, meaning that the intensity of future pixels is determined by previous pixels. In this blog post series we implemented two PixelCNNs and noticed that the performance was not stellar. In the previous posts, we mentioned that one of the ways to improve the model's performance was to fix t..
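The blind spot in the title comes from how PixelCNN masks its convolution kernels so each pixel only sees pixels above and to its left; stacking these masks leaves an upper-right region that never influences the current pixel, which Gated PixelCNN fixes with separate vertical and horizontal stacks. A sketch of the two standard mask types (the 'A'/'B' naming follows the original PixelCNN paper):

```python
import numpy as np

def pixelcnn_mask(kernel_size, mask_type):
    # 'A' masks out the center pixel too (first layer, so a pixel
    # cannot see itself); 'B' allows the center (later layers).
    k = kernel_size
    mask = np.zeros((k, k))
    mask[:k // 2, :] = 1         # all rows strictly above the center
    mask[k // 2, :k // 2] = 1    # left of the center on the center row
    if mask_type == 'B':
        mask[k // 2, k // 2] = 1
    return mask

mask_a = pixelcnn_mask(3, 'A')
mask_b = pixelcnn_mask(3, 'B')
```

For a 3x3 kernel, mask 'A' keeps 4 positions and mask 'B' keeps 5; multiplying the kernel weights by such a mask before each convolution is what enforces the autoregressive ordering.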
-
Pixel Recurrent Neural Networks | Research/Generative Model | 2024. 5. 14. 11:53
https://www.youtube.com/watch?v=-FFveGrG46w
1. Variational autoencoders use a latent vector z to encode and decode images, so VAEs are good at efficient inference, which means they generate images quickly and efficiently. Unfortunately, the generated samples end up blurrier than those of other models. 2. Generative adversarial networks use an adversarial loss to train their models. GANs gen..
-
Pixel Recurrent Neural Networks | Research/Generative Model | 2024. 5. 14. 08:19
https://medium.com/a-paper-a-day-will-have-you-screaming-hurray/day-4-pixel-recurrent-neural-networks-1b3201d8932d
Pixel-RNN presents a novel architecture with recurrent layers and residual connections that predicts pixels across the vertical and horizontal axes. The architecture models the joint distribution of pixels as a product of conditional distributions of horizontal and diagonal pixels. T..
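The product-of-conditionals modeling described here is the standard autoregressive decomposition over pixels taken in raster-scan order; for an n×n image it reads:

```latex
p(\mathbf{x}) = \prod_{i=1}^{n^2} p(x_i \mid x_1, \ldots, x_{i-1})
```

Each factor is what the recurrent layers estimate, conditioning every pixel on all pixels above it and to its left.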
-
What is a variational autoencoder? | Research/Generative Model | 2024. 5. 11. 06:53
Reviewing VAEs...
※ https://jaan.io/what-is-variational-autoencoder-vae-tutorial/
Understanding Variational Autoencoders (VAEs) from two perspectives: deep learning and graphical models. Why do deep learning researchers and probabilistic machine learning folks get confused when discussing variational autoencoders? What is a variational autoencoder? Why is there unreasonable confusion surrounding this te..
-
Variational autoencoders | Research/Generative Model | 2024. 5. 10. 22:55
Reviewing again...
※ https://www.jeremyjordan.me/variational-autoencoders/
A variational autoencoder (VAE) provides a probabilistic manner for describing an observation in latent space. Thus, rather than building an encoder which outputs a single value to describe each latent state attribute, we'll formulate our encoder to describe a probability distribution for each latent attribute.
Intuition: To provide a..
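The distribution-valued encoder described here is usually realized by predicting a mean and log-variance per latent dimension and sampling with the reparameterization trick. A toy sketch, where the `encode` function is a hypothetical stand-in for a learned network:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # hypothetical encoder: fixed linear maps standing in for learned
    # networks that output a mean and log-variance per latent dimension
    mu = 0.1 * x
    logvar = -np.abs(0.05 * x)
    return mu, logvar

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I), so the sample stays
    # differentiable with respect to the encoder outputs
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

x = rng.standard_normal(8)
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
```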
-
Understanding Generative Adversarial Networks (GANs) | Research/Generative Model | 2024. 5. 5. 13:54
https://towardsdatascience.com/understanding-generative-adversarial-networks-gans-cd6e4651a29
Introduction: Yann LeCun described it as “the most interesting idea in the last 10 years in Machine Learning”. Of course, such a compliment coming from such a prominent researcher in the deep learning area is always a great advertisement for the subject we are talking about! And, indeed, Generative Adversa..