Research/Generative Model
-
Normalizing Flows Tutorial, Part 2: Modern Normalizing Flows | Research/Generative Model | 2024. 5. 16. 07:17
https://blog.evjang.com/2018/01/nf2.html
In my previous blog post, I described how simple distributions like Gaussians can be "deformed" to fit complex data distributions using normalizing flows. We implemented a simple flow by chaining 2D Affine Bijectors with PReLU nonlinearities to build a small invertible neural net. However, this MLP flow is pretty weak: there are only 2 units per hidden lay..
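The excerpt mentions chaining 2D affine bijectors with PReLU nonlinearities; below is a minimal NumPy sketch of one such invertible step (the post builds its flow with a framework bijector API, so the class name and parameters here are illustrative assumptions, not the post's code):

    import numpy as np

    class AffinePReLUFlow:
        """One step of a toy 2D flow: an invertible affine map followed by a
        PReLU nonlinearity (invertible as long as alpha > 0)."""
        def __init__(self, W, b, alpha=0.1):
            self.W, self.b, self.alpha = W, b, alpha   # W must be invertible (2x2)
            self.W_inv = np.linalg.inv(W)

        def forward(self, x):
            y = x @ self.W.T + self.b                  # affine bijector
            return np.where(y >= 0, y, self.alpha * y) # PReLU

        def inverse(self, y):
            x = np.where(y >= 0, y, y / self.alpha)    # undo PReLU
            return (x - self.b) @ self.W_inv.T         # undo affine map

        def log_det_jacobian(self, x):
            y = x @ self.W.T + self.b
            # |det J| = |det W| * product over dims of (1 if y >= 0 else alpha)
            prelu_term = np.where(y >= 0, 0.0, np.log(self.alpha)).sum(axis=-1)
            return np.log(abs(np.linalg.det(self.W))) + prelu_term

    rng = np.random.default_rng(0)
    flow = AffinePReLUFlow(W=rng.normal(size=(2, 2)), b=np.zeros(2))
    x = rng.normal(size=(5, 2))
    assert np.allclose(flow.inverse(flow.forward(x)), x)  # invertibility check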
-
Normalizing Flows Tutorial, Part 1: Distributions and Determinants | Research/Generative Model | 2024. 5. 16. 07:00
https://blog.evjang.com/2018/01/nf1.html
If you are a machine learning practitioner working on generative modeling, Bayesian deep learning, or deep reinforcement learning, normalizing flows are a handy technique to have in your algorithmic toolkit. Normalizing flows transform simple densities (like Gaussians) into rich complex distributions that can be used for generative models, RL, and variatio..
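The determinant in the title refers to the change-of-variables formula that underpins every flow; stated in its usual form (a standard identity, not a quote from the post), for a bijector f mapping base samples z ~ p_Z to data x = f(z):

    \log p_X(x) = \log p_Z\!\left(f^{-1}(x)\right) + \log \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right|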
-
[RevNets] The Reversible Residual Network: Backpropagation Without Storing Activations | Research/Generative Model | 2024. 5. 15. 22:30
https://arxiv.org/pdf/1707.04585
Abstract
Deep residual networks (ResNets) have significantly pushed forward the state-of-the-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual ..
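The paper's key construction is a residual block whose inputs can be recomputed exactly from its outputs, so activations need not be stored for backpropagation. A small NumPy sketch of that coupling, y1 = x1 + F(x2), y2 = x2 + G(y1), where F and G stand in for the block's learned sub-networks and are placeholders here:

    import numpy as np

    def rev_block_forward(x1, x2, F, G):
        """Reversible residual block (RevNet-style coupling)."""
        y1 = x1 + F(x2)
        y2 = x2 + G(y1)
        return y1, y2

    def rev_block_inverse(y1, y2, F, G):
        """Recompute the block's inputs exactly from its outputs."""
        x2 = y2 - G(y1)
        x1 = y1 - F(x2)
        return x1, x2

    # toy residual functions standing in for small neural nets (assumptions)
    F = lambda h: np.tanh(h)
    G = lambda h: 0.5 * h

    x1, x2 = np.random.randn(4, 8), np.random.randn(4, 8)
    y1, y2 = rev_block_forward(x1, x2, F, G)
    r1, r2 = rev_block_inverse(y1, y2, F, G)
    assert np.allclose(r1, x1) and np.allclose(r2, x2)  # exact reconstruction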
-
Variational Inference with Normalizing Flows | Research/Generative Model | 2024. 5. 15. 18:18
https://arxiv.org/pdf/1505.05770
Abstract
The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the qua..
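One concrete flow family from the paper is the planar flow f(z) = z + u * h(w^T z + b), used to enrich the variational posterior beyond mean-field. A minimal NumPy sketch of a single step and its log-determinant term (u, w, b would be learned variational parameters; here they are random placeholders):

    import numpy as np

    def planar_flow(z, u, w, b):
        """One planar flow step f(z) = z + u * tanh(w.z + b) and its
        log |det Jacobian|. The paper constrains u so that w.u >= -1
        to keep the map invertible."""
        a = z @ w + b                            # (N,)
        f_z = z + np.outer(np.tanh(a), u)        # (N, D)
        psi = np.outer(1 - np.tanh(a) ** 2, w)   # h'(a) * w, shape (N, D)
        log_det = np.log(np.abs(1 + psi @ u))    # (N,)
        return f_z, log_det

    rng = np.random.default_rng(0)
    D = 2
    z = rng.normal(size=(100, D))                # samples from a simple base posterior
    f_z, log_det = planar_flow(z, u=rng.normal(size=D), w=rng.normal(size=D), b=0.1)
    # log q_K(f(z)) = log q_0(z) - log_det, accumulated over K flow steps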
-
[Gated PixelCNN] PixelCNN's Blind Spot | Research/Generative Model | 2024. 5. 14. 17:50
Introduction
PixelCNNs are a type of generative model that learns the probability distribution of pixels, which means that the intensity of future pixels is determined by previous pixels. In this blog post series we implemented two PixelCNNs and noticed that the performance was not stellar. In the previous posts, we mentioned that one of the ways to improve the model's performance was to fix t..
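The blind spot discussed in the post comes from the causal mask applied to each convolution kernel; a small NumPy sketch of that mask (the helper name is my own, not from the post):

    import numpy as np

    def causal_mask(k, mask_type="A"):
        """Mask for a k x k PixelCNN convolution: the current pixel may only
        see pixels above it and to its left (type 'A' also hides the centre)."""
        mask = np.ones((k, k))
        mask[k // 2, k // 2 + (mask_type == "B"):] = 0  # right of centre (and centre for 'A')
        mask[k // 2 + 1:, :] = 0                        # every row below the centre
        return mask

    print(causal_mask(5, "A"))
    # Stacking convolutions with this mask grows the receptive field, but a
    # region above and to the right of the current pixel is never reached:
    # the blind spot that Gated PixelCNN removes by splitting the network
    # into a vertical and a horizontal stack.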
-
Pixel Recurrent Neural Networks | Research/Generative Model | 2024. 5. 14. 11:53
https://www.youtube.com/watch?v=-FFveGrG46w
1. Variational autoencoders use a latent vector z to encode and decode images, so VAEs are good at efficient inference, which just means they generate images quickly and efficiently. Unfortunately, the generated samples end up being blurrier than those of other models.
2. Generative adversarial networks use an adversarial loss to train their models. GANs gen..
-
Pixel Recurrent Neural Networks | Research/Generative Model | 2024. 5. 14. 08:19
https://medium.com/a-paper-a-day-will-have-you-screaming-hurray/day-4-pixel-recurrent-neural-networks-1b3201d8932d
Pixel-RNN presents a novel architecture with recurrent layers and residual connections that predicts pixels across the vertical and horizontal axes. The architecture models the joint distribution of pixels as a product of conditional distributions of horizontal and diagonal pixels. T..
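The factorization referred to above is the standard autoregressive decomposition over the n x n pixels taken in raster-scan order:

    p(\mathbf{x}) = \prod_{i=1}^{n^2} p\!\left(x_i \mid x_1, \ldots, x_{i-1}\right)

with each conditional split further over the R, G, B channels in the paper, so every colour value is predicted from all previously generated context.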
-
What is a variational autoencoder? | Research/Generative Model | 2024. 5. 11. 06:53
Reviewing VAEs...
※ https://jaan.io/what-is-variational-autoencoder-vae-tutorial/
Understanding Variational Autoencoders (VAEs) from two perspectives: deep learning and graphical models. Why do deep learning researchers and probabilistic machine learning folks get confused when discussing variational autoencoders? What is a variational autoencoder? Why is there unreasonable confusion surrounding this te..
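Both perspectives meet in the objective the tutorial builds up to, the evidence lower bound (written here in standard notation rather than the post's wording):

    \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right)

The first term is the reconstruction loss familiar from the deep-learning view; the second is the divergence from the approximate posterior to the prior, which is where the graphical-model view enters.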