Deep generative models have achieved enormous success in learning the underlying high-dimensional data distribution from samples. In this talk, we will introduce two methods for learning deep generative models. First, we will present variational gradient flow (VGrow), which minimizes the f-divergence between the evolving distribution and the target distribution. In particular, we will show that the commonly used logD trick in fact corresponds to a particular f-divergence. Second, we will present a Schrödinger Bridge approach to learning deep generative models. Our theoretical results guarantee that the distribution learned by this approach converges to the target distribution. Experimental results on multimodal synthetic data and benchmark data support our theoretical findings and indicate that the generative model obtained via the Schrödinger Bridge is comparable with state-of-the-art GANs, suggesting a new formulation of generative learning. We also demonstrate its usefulness in image interpolation and image inpainting.
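The full VGrow method estimates the relevant density ratio with a deep discriminator and covers a whole family of f-divergences; as a toy illustration of the gradient-flow idea only, the sketch below evolves particles along the Wasserstein gradient flow of the KL divergence toward a known 1-D Gaussian target, estimating the particles' own score with a kernel density estimate. The function names (target_score, kde_score), the bandwidth h, and all numerical settings are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: 1-D standard normal; its score is known in closed form.
def target_score(x):
    return -x  # d/dx log N(x; 0, 1)

# Kernel density estimate of the particles' own score d/dx log q(x).
def kde_score(x, particles, h=0.3):
    d = x[:, None] - particles[None, :]   # pairwise differences
    k = np.exp(-0.5 * (d / h) ** 2)       # Gaussian kernel values
    dk = -(d / h**2) * k                  # kernel derivatives w.r.t. x
    return dk.sum(axis=1) / (k.sum(axis=1) + 1e-12)

# Particles initialized far from the target.
x = rng.normal(loc=6.0, scale=0.5, size=500)

step = 0.05
for _ in range(400):
    # Velocity field of the Wasserstein gradient flow of KL(q || p):
    # v(x) = grad log p(x) - grad log q(x).
    x = x + step * (target_score(x) - kde_score(x, x))

print(f"mean {x.mean():+.3f}  std {x.std():.3f}")  # should approach 0 and 1
```

Swapping KL for another f-divergence changes the weighting of the density ratio in the velocity field; the particle-update structure stays the same.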
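The generative model in the talk learns the Schrödinger Bridge with deep networks; as a minimal discrete illustration of the underlying problem, the sketch below solves a static Schrödinger Bridge between two discrete marginals. With a Gaussian reference kernel this reduces to entropic optimal transport, solvable by Sinkhorn / iterative proportional fitting. The grids, the diffusivity eps, and the iteration count are assumptions made for the example.

```python
import numpy as np

# Discrete marginals: source mu on points xs, target nu on points xt.
xs = np.linspace(-1.0, 1.0, 50)
xt = np.linspace(2.0, 4.0, 50)
mu = np.full(50, 1 / 50)
nu = np.full(50, 1 / 50)

eps = 0.1  # diffusivity of the reference dynamics (entropic regularization)
# Reference transition kernel: Gaussian in the gap between endpoints.
K = np.exp(-0.5 * (xs[:, None] - xt[None, :]) ** 2 / eps)

# Iterative proportional fitting (Sinkhorn): find diag(a) K diag(b)
# whose row and column sums match the prescribed marginals.
a = np.ones_like(mu)
b = np.ones_like(nu)
for _ in range(500):
    a = mu / (K @ b)
    b = nu / (K.T @ a)

pi = a[:, None] * K * b[None, :]  # static Schrödinger Bridge coupling
print("row-marginal error:", np.abs(pi.sum(axis=1) - mu).max())
print("col-marginal error:", np.abs(pi.sum(axis=0) - nu).max())
```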