All Posts
-
6. Nonparametric Identification | Causality/1 | 2025. 2. 20. 14:27
https://www.bradyneal.com/causal-inference-course#course-textbook
In Section 4.4, we saw that satisfying the backdoor criterion is sufficient to give us identifiability, but is the backdoor criterion also necessary? In other words, is it possible to get identifiability without being able to block all backdoor paths? As an example, consider that we have data generated according to the graph in Fig..
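The backdoor criterion the preview mentions can be made concrete with a small numeric sketch (all distributions below are invented for illustration). In a toy graph W → T, W → Y, T → Y, the set {W} blocks the only backdoor path, so P(Y=1 | do(T=t)) = Σ_w P(Y=1 | t, w) P(w), which in general differs from the naive conditional P(Y=1 | T=t):

```python
P_W1 = 0.5                                   # P(W = 1), chosen arbitrarily

def p_t1_given_w(w):
    return 0.8 if w == 1 else 0.2            # confounded treatment assignment

def p_y1_given_tw(t, w):
    return 0.3 + 0.4 * t + 0.2 * w           # a valid probability for t, w in {0, 1}

def p_w(w):
    return P_W1 if w == 1 else 1 - P_W1

def backdoor_adjustment(t):
    # Causal quantity: averages P(Y=1 | t, w) over the *marginal* of W.
    return sum(p_y1_given_tw(t, w) * p_w(w) for w in (0, 1))

def naive_conditional(t):
    # Associational quantity: averages over P(w | t) instead of P(w).
    def p_t_given_w(w):
        return p_t1_given_w(w) if t == 1 else 1 - p_t1_given_w(w)
    p_t = sum(p_t_given_w(w) * p_w(w) for w in (0, 1))
    return sum(p_y1_given_tw(t, w) * p_t_given_w(w) * p_w(w) / p_t for w in (0, 1))
```

Here `backdoor_adjustment(1)` gives 0.8 while `naive_conditional(1)` gives 0.86: the conditional over-weights the high-risk W = 1 group, which is exactly the bias the adjustment removes.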
-
5. Randomized Experiments | Causality/1 | 2025. 2. 20. 13:16
https://www.bradyneal.com/causal-inference-course#course-textbook
Randomized experiments are noticeably different from observational studies. In randomized experiments, the experimenter has complete control over the treatment assignment mechanism (how treatment is assigned). For example, in the simplest kind of randomized experiment, the experimenter randomly assigns (e.g. via coin toss) each ..
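The simplest assignment mechanism described above can be sketched in a few lines (function and variable names are illustrative, not from the course): each unit's treatment is set by an independent fair coin toss, so by construction the assignment cannot depend on any covariate or potential outcome.

```python
import random

def assign_treatments(n_units, seed=42):
    # One independent fair coin toss per unit: 1 = treatment, 0 = control.
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n_units)]

treatments = assign_treatments(1000)
```

With 1000 units the two arms come out roughly balanced, and nothing about a unit influences which arm it lands in.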
-
4. Causal Models | Causality/1 | 2025. 2. 19. 21:43
https://www.bradyneal.com/causal-inference-course#course-textbook
Causal models are essential for identification of causal quantities. We described identification as the process of moving from a causal estimand to a statistical estimand. However, to do that, we must have a causal model.
4.1. The do-operator and Interventional Distributions
The first thing that we will introduce is a mathematical o..
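A hedged sketch of what the do-operator does on a toy structural causal model (W, T, Y are illustrative names): conditioning keeps the model's own confounded assignment T := W, while intervening with do(T = 1) replaces T's structural equation with the constant 1 and leaves W and Y's equations untouched.

```python
import random

def conditional_mean_y(n=20000, seed=0):
    # E[Y | T = 1] under observational sampling, where T copies W.
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        w = rng.randint(0, 1)   # confounder
        t = w                   # confounded assignment: T := W
        y = t + w               # Y depends on both T and W
        if t == 1:
            ys.append(y)
    return sum(ys) / len(ys)

def interventional_mean_y(n=20000, seed=0):
    # E[Y | do(T = 1)]: T's equation is overwritten; W is untouched.
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        w = rng.randint(0, 1)
        t = 1                   # the intervention
        y = t + w
        ys.append(y)
    return sum(ys) / len(ys)
```

Observationally, T = 1 forces W = 1, so the conditional mean is exactly 2; under do(T = 1) the intervention breaks the W → T link and the mean is about 1.5, showing the two distributions are genuinely different objects.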
-
3. The Flow of Association and Causation in Graphs | Causality/1 | 2025. 2. 19. 13:38
https://www.bradyneal.com/causal-inference-course#course-textbook
3.1. Graph Terminology
A graph is a collection of nodes ("vertices") and edges that connect the nodes. Undirected graph: the edges do not have any direction. A directed graph's edges go out of a parent node and into a child node, with the arrows signifying which direction the edges are going. Two nodes are said to be adjacent if the..
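The terminology in the preview can be sketched with a tiny edge list (the particular graph W → T → Y with W → Y is just an example): each directed edge goes out of a parent and into a child, and two nodes are adjacent if some edge connects them in either direction.

```python
# Directed edges as (parent, child) pairs.
edges = [("W", "T"), ("T", "Y"), ("W", "Y")]

def parents(node):
    # Nodes with an edge going *into* `node`.
    return {p for (p, c) in edges if c == node}

def children(node):
    # Nodes with an edge coming *out of* `node`.
    return {c for (p, c) in edges if p == node}

def adjacent(a, b):
    # Adjacency ignores edge direction.
    return (a, b) in edges or (b, a) in edges
```

So here Y has parents {T, W}, W has children {T, Y} and no parents, and every pair of these three nodes is adjacent.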
-
2. Potential Outcomes | Causality/1 | 2025. 2. 19. 09:46
https://www.bradyneal.com/causal-inference-course#course-textbook
2.1. Potential Outcomes and Individual Treatment Effects
The potential outcome Y(t) denotes what your outcome would be, if you were to take treatment t. A potential outcome Y(t) is distinct from the observed outcome Y in that not all potential outcomes are observed. Rather, all potential outcomes can potentially be observed. The one ..
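A toy potential-outcomes table makes the distinction concrete (the numbers are made up). On paper each unit has both Y(0) and Y(1), and the individual treatment effect is ITE_i = Y_i(1) − Y_i(0); in data we only ever observe Y = Y(T) for the treatment actually received, so one column per unit is always missing.

```python
# Each row: (T received, Y(0), Y(1)) -- both potential outcomes written
# down, which is exactly what real data never gives us.
units = [
    (1, 0, 1),
    (0, 1, 1),
    (1, 1, 0),
    (0, 0, 0),
]

ites = [y1 - y0 for (_, y0, y1) in units]           # needs BOTH columns
ate = sum(ites) / len(ites)                         # average treatment effect
observed = [y1 if t == 1 else y0 for (t, y0, y1) in units]  # all we actually see
```

Note that `observed` alone cannot recover `ites`: the second argument of each comparison is the unobserved counterfactual.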
-
1. Motivation: Why You Might Care | Causality/1 | 2025. 2. 19. 09:05
https://www.bradyneal.com/causal-inference-course#course-textbook
1.1. Simpson's Paradox
A key ingredient necessary to find Simpson's paradox is the non-uniformity of allocation of people to the groups.
Scenario 1
If the condition C is a cause of the treatment T, treatment B is more effective at reducing mortality Y. Because having a severe condition causes one to be more likely to die (C → Y) and ca..
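The non-uniform allocation the preview highlights can be shown with a hypothetical mortality table (counts are invented for illustration): treatment B has lower mortality than A within both the mild and severe subgroups, yet higher mortality overall, because severe patients were allocated to B far more heavily.

```python
data = {
    # (treatment, condition): (deaths, patients)
    ("A", "mild"):   (210, 1400),   # 15% mortality
    ("A", "severe"): (30,  100),    # 30%
    ("B", "mild"):   (5,   50),     # 10%
    ("B", "severe"): (100, 500),    # 20%
}

def subgroup_rate(t, c):
    d, n = data[(t, c)]
    return d / n

def overall_rate(t):
    # Pool both conditions; the condition mix differs between treatments,
    # which is what flips the comparison.
    d = sum(v[0] for k, v in data.items() if k[0] == t)
    n = sum(v[1] for k, v in data.items() if k[0] == t)
    return d / n
```

B beats A in each subgroup (10% vs 15% mild, 20% vs 30% severe) but loses overall (about 19.1% vs 16%), because most of B's patients are severe.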
-
.. | Campus Life | 2025. 2. 18. 21:33
The place I just moved into has a view of the ocean. I can see the Incheon Bridge too. At dusk the sunset glows beautifully. I can even see planes taking off from Incheon Airport.. Today I'm craving soju. I want soju.. Not a lot, I think even a little would make me happy.. (though after a few sips I'd probably lose all self-control.) I have to hold out. Hold out until graduation. Hold out hard, and after graduating go back to Jeju and drink there. Back in Jeju there are plenty of people who'd drink with me. If I said I wanted soju, there are people who'd come running out in their stocking feet going, "Soyeong Yeonkwon is asking to drink first? What's the occasion?" So I'll hold out until then.. I suddenly realized why the craving hit. The example mentioned drinking, and that's what brought it to mind.
-
Maximum Likelihood Training of Score-Based Diffusion Models | Generative Model/Generative Model_2 | 2025. 2. 15. 16:59
https://arxiv.org/pdf/2101.09258
https://github.com/yang-song/score_flow
Oct 2021 (NeurIPS 2021)
Abstract
Score-based diffusion models synthesize samples by reversing a stochastic process that diffuses data to noise, and are trained by minimizing a weighted combination of score matching losses. The log-likelihood of score-based diffusion models can be tractably computed through a connection to cont..