Mode collapse is real for generative models

I am very curious to see whether generative models like GANs and VAEs can fit multi-modal data. [1] gives an overview of different generative models, mentioning that VAEs have a clear probabilistic objective function and are more efficient. [2] showed that diffusion models (score-based generative models) can fit multi-modal distributions better than VAEs and …

Causal Inference in Recommendation Systems

We briefly touched on some concepts of causal inference in [1, 2]. This post introduces more specific works that apply causal inference in recommendation systems. Some of these works require background on backdoor and frontdoor adjustment, so we will introduce those first. Backdoor and frontdoor adjustment Suppose we have a causal graph like …
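For reference, these are the two standard adjustment formulas. The backdoor adjustment applies when a set of variables Z blocks every backdoor path from X to Y; the frontdoor adjustment applies when a mediator M carries all of X's effect on Y:

```latex
% Backdoor adjustment: Z blocks all backdoor paths from X to Y
P(Y \mid do(X=x)) = \sum_{z} P(Y \mid X=x, Z=z)\, P(Z=z)

% Frontdoor adjustment: M mediates all of X's effect on Y
P(Y \mid do(X=x)) = \sum_{m} P(M=m \mid X=x) \sum_{x'} P(Y \mid M=m, X=x')\, P(X=x')
```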

Dollar-cost averaging on TQQQ vs. QQQ [Simulation]

This post runs a simple simulation of using a dollar-cost-averaging strategy to invest in QQQ vs. TQQQ, its 3x-leveraged ETF. In the simulation, QQQ plunges 34% after 20 rounds. One round is a small up-and-down cycle: the index first moves up 1%, then down 3%, repeating until it is 34% below its peak. After reaching …
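To make the round mechanics concrete, here is a minimal Python sketch; the function and variable names are mine, and the post's actual simulation may differ (e.g. in buy timing):

```python
# Minimal sketch of the up-and-down rounds described above.
# Assumptions (mine, not necessarily the post's): we invest a fixed
# amount each round (dollar-cost averaging), and a 3x ETF moves 3x the
# index's per-step return.

def simulate_dca(leverage=1.0, contribution=100.0, drawdown_stop=0.34):
    index = etf = 1.0          # index level and (leveraged) ETF price
    peak = 1.0
    shares, invested = 0.0, 0.0
    while index > peak * (1 - drawdown_stop):
        for step_return in (0.01, -0.03):   # one round: +1% then -3%
            index *= 1 + step_return
            etf *= 1 + leverage * step_return
        shares += contribution / etf        # buy at the end of each round
        invested += contribution
        peak = max(peak, index)
    return shares * etf, invested

qqq_value, qqq_cost = simulate_dca(leverage=1.0)
tqqq_value, tqqq_cost = simulate_dca(leverage=3.0)
print(f"QQQ : paid {qqq_cost:.0f}, worth {qqq_value:.2f}")
print(f"TQQQ: paid {tqqq_cost:.0f}, worth {tqqq_value:.2f}")
```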

Gato and related AGI research

Policy Generalist DeepMind has recently published a work named Gato. I find it interesting because Gato learns a single multi-modal, multi-task policy for many tasks such as robot-arm manipulation, playing Atari, and image captioning. I don't think the original paper [2] covers every implementation detail, but I'll try my best to summarize what I understand. …

Some recent recsys papers

7 years ago I posted a tutorial about recommendation systems. Now it is 2022 and there have been many more advancements. This post will overview several of the latest ideas. CTR models Google's RecSys 2022 paper [1] introduces many practical details of their CTR models. First, to reduce training cost, there are 3 effective ways: applying bottleneck layers …
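On bottleneck layers: my understanding is that they replace one wide dense layer with a low-rank factorization, cutting parameters and FLOPs. A PyTorch-style sketch, with illustrative dimensions that are not from the paper:

```python
import torch.nn as nn

# Illustrative sketch of a bottleneck layer: a d_in x d_out dense layer
# is replaced by d_in x r and r x d_out layers with r << min(d_in, d_out),
# reducing weights from d_in*d_out to roughly r*(d_in + d_out).
d_in, d_out, r = 1024, 1024, 64   # dimensions are made up for illustration

dense = nn.Linear(d_in, d_out)                 # ~1.05M weights
bottleneck = nn.Sequential(
    nn.Linear(d_in, r, bias=False),            # ~65K weights
    nn.Linear(r, d_out),                       # ~66K weights
)
```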

New Model Architectures

There have been many advancements in new model architectures in the AI domain. Let me overview these advancements in this post. Linear Compression Embedding LCE [1] simply uses a learned matrix to project one embedding matrix to another: $E' = EW$, where $W \in \mathbb{R}^{d_1 \times d_2}$ is the projection matrix. Pyramid networks, Inception networks, DHEN, LCE. Perceiver and Perceiver IO Perceiver-based architectures [5,6] solve …
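Reading LCE as a learned linear projection between embedding tables, a tiny NumPy sketch; all shapes and the random initialization below are my own assumptions:

```python
import numpy as np

# Sketch of Linear Compression Embedding as I understand it: a learned
# matrix W projects an n x d1 embedding table E to an n x d2 table.
n, d1, d2 = 10000, 256, 64           # table size and dims are illustrative
E = np.random.randn(n, d1)           # original embedding matrix
W = np.random.randn(d1, d2) * 0.01   # learned projection (random stand-in)
E_compressed = E @ W                 # shape (n, d2)
```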

Simulation on the ads supply problem

I have started to feel the importance of simulating any practical problem before deploying an RL policy. If you cannot implement a reasonable simulator on your own, you are not clear about your environment or your model. To me, it is then pure gambling to train an RL policy offline without testing it in …
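As a rough idea of what such a simulator's interface might look like, here is a toy Gym-style skeleton; the dynamics are placeholders I made up, not the environment from this post:

```python
import random

class AdsSupplyEnv:
    """Toy skeleton of an ads-supply simulator. The state/reward logic
    below is a placeholder, not the actual environment from this post."""

    def reset(self):
        self.user_satisfaction = 1.0
        return self.user_satisfaction

    def step(self, ad_load):
        # Placeholder dynamics: more ads -> more revenue now,
        # but user satisfaction (and future traffic) decays.
        revenue = ad_load * self.user_satisfaction
        self.user_satisfaction *= 1.0 - 0.1 * ad_load
        done = self.user_satisfaction < 0.1
        return self.user_satisfaction, revenue, done

env = AdsSupplyEnv()
state, total, done = env.reset(), 0.0, False
while not done:
    action = random.uniform(0.0, 1.0)       # random policy as a baseline
    state, reward, done = env.step(action)
    total += reward
```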

GFlowNet

GFlowNet is a recently developed technique for solving combinatorial optimization problems [1]. I've prepared a series of deep-dive slides for it (see GFlowNet deep dive). In this post, I just list a few more references. References [1] https://yoshuabengio.org/2022/03/05/generative-flow-networks/ [2] https://towardsdatascience.com/the-what-why-and-how-of-generative-flow-networks-4fb3cd309af0 [3] https://neurips.cc/media/neurips-2021/Slides/26729.pdf [4] https://www.youtube.com/watch?v=7W69-ffTs48 [5] https://milayb.notion.site/GFlowNet-Tutorial-919dcf0a0f0c4e978916a2f509938b00#afe03e54d6db43468f8dee3a3350f98a [6] http://folinoid.com/w/gflownet/

How does the Metropolis-Hastings algorithm work?

I learned a bit about Markov chain Monte Carlo (MCMC) during my PhD, but I did not record my thoughts back then. In this post, I revisit the core concepts of MCMC, particularly focusing on illustrating the Metropolis-Hastings (MH) algorithm. What is the motivation for MCMC? Suppose you have observed some data. …
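As a preview, here is a minimal MH sampler for a 1-D target known only up to a normalizing constant; the Gaussian random-walk proposal and the bimodal example target are my illustrative choices:

```python
import math
import random

def metropolis_hastings(log_p, x0=0.0, n_samples=10000, step=1.0):
    """Sample from an unnormalized density exp(log_p) using a symmetric
    Gaussian random-walk proposal, so the Hastings ratio reduces to
    p(x_new) / p(x)."""
    x, samples = x0, []
    for _ in range(n_samples):
        x_new = x + random.gauss(0.0, step)        # propose a move
        log_alpha = log_p(x_new) - log_p(x)        # log acceptance ratio
        if random.random() < math.exp(min(0.0, log_alpha)):
            x = x_new                              # accept w.p. min(1, alpha)
        samples.append(x)
    return samples

# Example: a bimodal unnormalized target, a mixture of N(-2,1) and N(2,1)
log_p = lambda x: math.log(math.exp(-0.5 * (x + 2) ** 2)
                           + math.exp(-0.5 * (x - 2) ** 2))
samples = metropolis_hastings(log_p)
```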