Generative Model
-
Denoising Diffusion Implicit Models (Generative Model/Diffusion, 2025. 1. 23. 10:19)
https://arxiv.org/pdf/2010.02502 (Oct 2022, ICLR 2021)
https://github.com/ermongroup/ddim
Abstract: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps in order to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more effici..
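The accelerated sampler the abstract alludes to replaces the Markov chain with a non-Markovian update that is deterministic when η = 0. A minimal NumPy sketch of one DDIM step under that reading — the function name and signature are my own, not from the linked repo, and `eps` stands in for the model's noise prediction at step t:

```python
import numpy as np

def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev, eta=0.0):
    """One DDIM sampling step. eta=0 gives the deterministic DDIM
    update; eta=1 recovers DDPM-like stochasticity."""
    # Predict x_0 from the current sample and the noise estimate.
    x0_pred = (x_t - np.sqrt(1 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    # Std of the injected noise (zero when eta == 0).
    sigma = eta * np.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar_t)) \
                * np.sqrt(1 - alpha_bar_t / alpha_bar_prev)
    # "Direction pointing to x_t" term.
    dir_xt = np.sqrt(1 - alpha_bar_prev - sigma**2) * eps
    noise = sigma * np.random.randn(*x_t.shape)
    return np.sqrt(alpha_bar_prev) * x0_pred + dir_xt + noise
```

With η = 0 the same trained DDPM network can be sampled in far fewer steps, since consecutive steps can be skipped along the deterministic trajectory.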
-
Variational Diffusion Models (Generative Model/Generative Model_2, 2025. 1. 23. 00:21)
https://arxiv.org/pdf/2107.00630
https://github.com/google-research/vdm
(NeurIPS 2021) Jul 2021
★ Read "Demystifying Variational Diffusion Models" (https://arxiv.org/pdf/2401.06281) first! ★
The formulation in "Variational Diffusion Models" carries over into classifier-free guidance, distillation, Imagen, and so on (the same equations and concepts also recur in score-based model papers..
-
High-Resolution Image Synthesis with Latent Diffusion Models (Generative Model/Diffusion, 2024. 8. 23. 23:08)
https://arxiv.org/pdf/2112.10752
Abstract: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically opera..
-
Diffusion Models Beat GANs on Image Synthesis (Generative Model/Diffusion, 2024. 8. 21. 02:34)
https://arxiv.org/pdf/2105.05233
Abstract: We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient m..
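The classifier guidance the abstract mentions perturbs the model's predicted noise with the gradient of a classifier's log-probability of the target class. A hedged NumPy sketch of that adjustment — `grad_log_p_y` is assumed precomputed by backprop through a noise-aware classifier, and the names are illustrative rather than taken from the paper's code:

```python
import numpy as np

def guided_eps(eps_uncond, grad_log_p_y, alpha_bar_t, scale=1.0):
    """Classifier-guided noise estimate: shift the unconditional noise
    prediction against grad_x log p(y | x_t), scaled by the guidance
    weight. Larger `scale` trades diversity for fidelity."""
    return eps_uncond - np.sqrt(1 - alpha_bar_t) * scale * grad_log_p_y
```

The guided estimate is then plugged into the usual sampling update in place of the raw prediction.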
-
Improved Denoising Diffusion Probabilistic Models (Generative Model/Diffusion, 2024. 8. 20. 12:27)
https://arxiv.org/pdf/2102.09672
Abstract: Denoising diffusion probabilistic models (DDPMs) are a class of generative models which have recently been shown to produce excellent samples. We show that with a few simple modifications, DDPMs can also achieve competitive log-likelihoods while maintaining high sample quality. Additionally, we find that learning variances of the reverse diffusion process al..
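The "learning variances" modification parameterizes the reverse-process variance as a log-space interpolation between the two fixed choices β_t (upper bound) and β̃_t (lower bound), with the network outputting the interpolation weight v. A minimal sketch under that reading (the function name is mine):

```python
import numpy as np

def interp_log_variance(v, beta_t, beta_tilde_t):
    """Reverse-process variance as a log-space interpolation:
    Sigma = exp(v * log(beta_t) + (1 - v) * log(beta_tilde_t)),
    where v in [0, 1] is predicted per dimension by the network."""
    return np.exp(v * np.log(beta_t) + (1 - v) * np.log(beta_tilde_t))
```

Learning v instead of fixing the variance is what lets the model trade a little sample quality for much better log-likelihoods and fewer sampling steps.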
-
[CSDI] Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation (Generative Model/Generative Model, 2024. 5. 21. 11:12)
https://arxiv.org/pdf/2107.03502
Abstract: The imputation of missing values in time series has many applications in healthcare and finance. While autoregressive models are natural candidates for time series imputation, score-based diffusion models have recently outperformed existing counterparts including autoregressive models in many tasks such as image generation and audio synthesis, and would be..
-
[ImDiffusion] Imputed Diffusion Models for Multivariate Time Series Anomaly Detection (Generative Model/Generative Model, 2024. 5. 20. 06:51)
https://arxiv.org/pdf/2307.00754
https://github.com/17000cyh/IMDiffusion.git
Abstract: Anomaly detection in multivariate time series data is of paramount importance for ensuring the efficient operation of large-scale systems across diverse domains. However, accurately detecting anomalies in such data poses significant challenges due to the need for precise modeling of complex multivariate time seri..
-
[InfoGAN] Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (Generative Model/Generative Model, 2024. 5. 18. 16:58)
https://arxiv.org/pdf/1606.03657
Abstract: This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a ..
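The mutual information InfoGAN maximizes is intractable directly, so it is handled through a variational lower bound L_I = E[log Q(c|x)] + H(c), where Q is an auxiliary network approximating the posterior over the latent code c. A small NumPy sketch of that bound for a categorical code — `q_c_given_x` is assumed to hold Q's probability of each sample's true code, and all names are illustrative:

```python
import numpy as np

def infogan_mi_lower_bound(q_c_given_x, p_c):
    """Variational lower bound on I(c; G(z, c)):
    L_I = E_{c~p(c), x~G(z,c)}[log Q(c|x)] + H(c).
    q_c_given_x[i] is Q's probability assigned to sample i's true code;
    p_c is the categorical prior over codes."""
    entropy = -np.sum(p_c * np.log(p_c))          # H(c)
    return np.mean(np.log(q_c_given_x)) + entropy  # E[log Q(c|x)] + H(c)
```

When Q recovers the code perfectly the bound reaches H(c), which is also the maximum of the true mutual information for a code independent of z.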