
Survey paper on Accelerators for Generative Adversarial Networks (GANs)

sparsh

Recent years have witnessed significant interest in "generative adversarial networks" (GANs) due to their ability to generate high-fidelity data. GANs have high compute and memory requirements. Also, since they involve both convolution and deconvolution operations, they do not map well to conventional accelerators designed for convolution alone. Evidently, there is a need for customized accelerators to achieve high efficiency with GANs.
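To illustrate why deconvolution maps poorly onto a conventional CONV accelerator: a strided deconvolution (transposed convolution) is equivalent to inserting zeros between input samples and then running an ordinary convolution, so a fixed-function CONV engine ends up spending a large fraction of its multiply-accumulates on the inserted zeros. A minimal 1D sketch (the function names `conv1d` and `transposed_conv1d` are illustrative, not from the paper):

```python
import numpy as np

def conv1d(x, w):
    # plain "valid" sliding dot product of x with kernel w
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def transposed_conv1d(x, w, stride=2):
    # deconvolution via zero-insertion: upsample x, then run a normal convolution
    up = np.zeros(stride * (len(x) - 1) + 1)
    up[::stride] = x                      # roughly half the samples are zeros
    up = np.pad(up, len(w) - 1)           # full padding so output covers the signal
    return conv1d(up, w)

x = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 1.0, 1.0])
y = transposed_conv1d(x, w, stride=2)
# every dot product above consumes the inserted zeros as operands --
# wasted work unless the accelerator can detect and skip them
```

This zero-skipping opportunity is one reason GAN accelerators add dataflow support beyond what CNN-inference engines provide.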

We present a survey of techniques and architectures for accelerating GANs. We review accelerators based on in-memory computing using ReRAM and SOT-RAM, as well as accelerators implemented on FPGAs and ASICs. We also discuss various optimization techniques for GANs, such as Winograd-based CONV and sparsity-related optimizations. The paper has been accepted in the Journal of Systems Architecture 2021. Available here.
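For readers unfamiliar with Winograd-based CONV: the F(2,3) variant produces two outputs of a 3-tap filter using 4 multiplications instead of the 6 a direct convolution needs, by transforming the input tile and filter before an element-wise product. A minimal sketch using the standard F(2,3) transform matrices (not code from the surveyed accelerators):

```python
import numpy as np

# Standard Winograd F(2,3) transforms: 2 outputs of a 3-tap filter, 4 multiplies
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])                # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)   # output transform

def winograd_f23(d, g):
    # d: 4 input samples, g: 3 filter taps -> 2 output samples
    return AT @ ((G @ g) * (BT @ d))           # only 4 element-wise multiplies

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 1.0, 1.0])
y = winograd_f23(d, g)
# matches the direct sliding dot product: [1+2+3, 2+3+4] = [6, 9]
```

The multiplication savings grow for the 2D case (F(2x2,3x3) uses 16 multiplies instead of 36), which is why Winograd is a common optimization target in CONV accelerators.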