Research/Generative Model
-
Denoising Diffusion Probabilistic Models Summary | Research/Generative Model | 2024. 4. 21. 10:01
Basic Idea of Diffusion Models The basic idea of diffusion models is to progressively destroy all the information in an image over a sequence of timesteps t, where at each step we add a little Gaussian noise; by the end of a large number of steps, what we have is pure random noise mimicking a sample from a normal distribution. This is called the forward process, and we apply this transiti..
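The forward process described in this excerpt can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code: the constant beta schedule and the function name `forward_diffusion` are assumptions for the sketch.

```python
import numpy as np

def forward_diffusion(x0, betas, rng=None):
    """Run the forward (noising) process: at each step t,
    x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps, eps ~ N(0, I)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    for beta in betas:
        eps = rng.standard_normal(x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * eps
    return x
```

After enough steps the signal coefficient (the product of the sqrt(1 - beta) terms) shrinks toward zero, so the output is statistically indistinguishable from a standard normal sample, exactly the "complete random noise" the excerpt describes.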
-
DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection | Research/Generative Model | 2024. 4. 16. 12:48
https://arxiv.org/pdf/2303.08730.pdf Abstract Anomaly detection has garnered extensive applications in real industrial manufacturing due to its remarkable effectiveness and efficiency. However, previous generative-based models have been limited by suboptimal reconstruction quality, hampering their overall performance. We introduce DiffusionAD, a novel anomaly detection pipeline comprising a reco..
-
Introduction to Diffusion Models | Research/Generative Model | 2024. 4. 15. 21:16
※ Kemal Erdem, (Nov 2023). "Step by Step visual introduction to Diffusion Models.". https://erdem.pl/2023/11/step-by-step-visual-introduction-to-diffusion-models What is a diffusion model? The idea of the diffusion model is not that old. In the 2015 paper called "Deep Unsupervised Learning using Nonequilibrium Thermodynamics", the authors described it like this: "The essential idea, inspired by no..
-
[SA-VAE] Semi-Amortized Variational Autoencoders | Research/Generative Model | 2024. 4. 8. 09:02
※ https://arxiv.org/pdf/1802.02550.pdf Abstract Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAE), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approac..
-
[IWAE] Importance Weighted Autoencoders | Research/Generative Model | 2024. 4. 7. 21:11
※ https://arxiv.org/pdf/1509.00519.pdf Abstract The variational autoencoder (VAE; Kingma & Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial..
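The bound IWAE optimizes replaces the single-sample ELBO with the log of an average over k importance weights. A minimal sketch of that estimator, assuming the log-weights log p(x, z_i) - log q(z_i|x) are already computed (the function name `iwae_bound` is illustrative), using a stabilized log-sum-exp:

```python
import numpy as np

def iwae_bound(log_w):
    """IWAE objective: mean over data of log((1/k) * sum_i w_i),
    where log_w has shape (k, n): k importance samples per data point.
    Uses the max-subtraction (logsumexp) trick for numerical stability."""
    m = log_w.max(axis=0)
    return (m + np.log(np.exp(log_w - m).mean(axis=0))).mean()
```

With k = 1 this reduces exactly to the standard ELBO; as k grows, the bound is provably at least as tight, which is the paper's central point.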
-
[VQ-VAE] Neural Discrete Representation Learning | Research/Generative Model | 2024. 4. 7. 15:13
※ https://arxiv.org/pdf/1711.00937.pdf Abstract Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather..
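The discrete encoder outputs mentioned in the abstract come from quantizing each continuous encoder vector to its nearest codebook embedding. A minimal sketch of that lookup (the names `vector_quantize` and `codebook` are illustrative; the real model also needs the straight-through gradient and codebook losses, which are omitted here):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each row of z (N, D) to its nearest codebook vector (K, D)
    under Euclidean distance; return the quantized vectors and indices."""
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    idx = dists.argmin(axis=1)
    return codebook[idx], idx
```

The returned indices are the discrete latent codes; the decoder only ever sees codebook entries, which is what makes the representation discrete rather than continuous.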
-
Lagging Inference Networks and Posterior Collapse in Variational Autoencoders | Research/Generative Model | 2024. 4. 6. 10:26
※ https://arxiv.org/pdf/1901.05534.pdf Abstract The variational autoencoder (VAE) is a popular combination of deep latent variable model and accompanying variational learning technique. By using a neural inference network to approximate the model’s posterior on latent variables, VAEs efficiently parameterize a lower bound on marginal data likelihood that can be optimized directly via gradient me..
-
[HFVAE] Structured Disentangled Representations | Research/Generative Model | 2024. 4. 5. 22:45
※ https://arxiv.org/pdf/1804.02086.pdf Abstract Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard objective function. These approaches generally assume a simple diagonal Ga..