Text to Image Generation using Stacked Generative Adversarial Networks

Authors

  • N. Himachalapathy Reddy
  • Uma Priyadarsini P. S

Abstract

Although Generative Adversarial Networks (GANs) have shown remarkable success in various tasks, they still face challenges in generating high-quality images. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) aimed at generating high-resolution photo-realistic images. First, we propose a two-stage generative adversarial network architecture, StackGAN-v1, for text-to-image synthesis. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding low-resolution images. The Stage-II GAN takes the Stage-I results and the text description as inputs and generates high-resolution images with photo-realistic details. Second, an advanced multi-stage generative adversarial network architecture, StackGAN-v2, is proposed for both conditional and unconditional generative tasks. Our StackGAN-v2 consists of multiple generators and discriminators arranged in a tree-like structure; images at multiple scales corresponding to the same scene are generated from different branches of the tree. StackGAN-v2 shows more stable training behavior than StackGAN-v1 by jointly approximating multiple distributions. Extensive experiments demonstrate that the proposed stacked generative adversarial networks significantly outperform other state-of-the-art methods in generating photo-realistic images.
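The sketch below illustrates the two-stage idea described in the abstract: a Stage-I generator produces a coarse low-resolution image from a text embedding and noise, and a Stage-II generator refines it into a higher-resolution image conditioned on the same text. This is a minimal illustrative sketch, not the authors' implementation; the layer sizes, the text-embedding and noise dimensions, and the class names (Stage1Generator, Stage2Generator) are assumptions made for the example.

```python
# Minimal two-stage text-to-image generator sketch (illustrative only).
# All dimensions and module names are assumptions, not the paper's code.
import torch
import torch.nn as nn

class Stage1Generator(nn.Module):
    """Sketches a coarse 64x64 image from a text embedding and noise."""
    def __init__(self, text_dim=128, noise_dim=100):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(text_dim + noise_dim, 128 * 8 * 8),
            nn.BatchNorm1d(128 * 8 * 8),
            nn.ReLU(inplace=True),
        )
        self.upsample = nn.Sequential(          # 8x8 -> 64x64
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, text_emb, noise):
        h = self.fc(torch.cat([text_emb, noise], dim=1))
        return self.upsample(h.view(-1, 128, 8, 8))

class Stage2Generator(nn.Module):
    """Refines the Stage-I image into a 128x128 image, again conditioned
    on the text embedding."""
    def __init__(self, text_dim=128):
        super().__init__()
        self.encode = nn.Sequential(            # 64x64 image -> 16x16 features
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
        )
        self.fuse = nn.Sequential(              # fuse text along channels
            nn.Conv2d(128 + text_dim, 128, 3, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
        )
        self.upsample = nn.Sequential(          # 16x16 -> 128x128
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, stage1_img, text_emb):
        feat = self.encode(stage1_img)                          # (B, 128, 16, 16)
        text = text_emb[:, :, None, None].expand(-1, -1, 16, 16)
        return self.upsample(self.fuse(torch.cat([feat, text], dim=1)))

if __name__ == "__main__":
    g1, g2 = Stage1Generator(), Stage2Generator()
    text_emb, noise = torch.randn(4, 128), torch.randn(4, 100)
    low_res = g1(text_emb, noise)        # (4, 3, 64, 64)  coarse shape/colors
    high_res = g2(low_res, text_emb)     # (4, 3, 128, 128) refined details
    print(low_res.shape, high_res.shape)
```

In the full StackGAN-v2 design described above, several such generators at increasing scales would share a tree-like structure, each paired with its own discriminator; the sketch shows only the two-stage chain to keep the example short.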

Published

2020-02-01

Issue

Section

Articles