
Recent content by sparsh

  1. S

    Survey paper on SRAM-based In-Memory Computing Techniques

    As von Neumann computing architectures become increasingly constrained by data movement, researchers have started exploring in-memory computing (IMC) techniques to offset these overheads. We present a survey of 90+ papers on in-memory computing using SRAM. We review...
  2. S

    Difference between processing-in-memory and computing-in-memory

    What is the difference between processing-in-memory, computing-in-memory, and logic-in-memory? Or are they the same? Can someone please give a definitive answer, considering various memories such as SRAM, memristor, and spintronic memories? Sometimes research papers confuse the terms, so I am not able to come to...
  3. S

    Survey paper on Accelerators for Generative Adversarial Networks (GANs)

    Recent years have witnessed significant interest in “generative adversarial networks” (GANs) due to their ability to generate high-fidelity data. GANs have high compute and memory requirements. Also, since they involve both convolution and deconvolution operations, they do not map well...
  4. S

    Survey paper on hardware security of DNN models and accelerators

    As “deep neural networks” (DNNs) achieve increasing accuracy, they are being employed in increasingly diverse applications, including security-critical ones such as medicine and defense. The worldwide revenue produced from the deployment of AI is expected to reach $190.6 billion by...
  5. S

    Survey paper on accelerators for 3D CNNs

    3D convolutional neural networks (CNNs) have shown excellent predictive performance on tasks such as action recognition from videos, weather forecasting, detecting action similarity between two video clips, video captioning, labeling, and surveillance. Also, they are used for performing object...
  6. S

    A Survey of Techniques for Intermittent Computing (from Harvested Energy)

    Intermittent computing (ImC) refers to the scenario where periods of program execution are separated by reboots. This computing paradigm is common in some IoT devices. ImC systems are generally powered by energy-harvesting devices: they start executing a program when the accumulated energy...
  7. S

    Survey paper on hardware accelerators and optimizations for RNNs

    RNNs have shown remarkable effectiveness in several tasks such as music generation, speech recognition and machine translation. RNN computations involve both intra-timestep and inter-timestep dependencies. Due to these features, hardware acceleration of RNNs is more challenging than that of...
  8. S

    Survey paper on Intel's Xeon Phi

    Intel's Xeon Phi (having "many-integrated core" or MIC micro-architecture) combines the parallel processing power of a many-core accelerator with the programming ease of CPUs. In this paper, we survey 100+ works that study the architecture of Phi and use it as an accelerator for a broad range...
  9. S

    Survey paper on Deep Learning on CPUs

    The CPU is a powerful, pervasive, and indispensable platform for running deep learning (DL) workloads in systems ranging from mobile devices to extreme-end servers. We review 140+ papers focused on optimizing DL applications on CPUs. We include the methods proposed for both inference and training and...
  10. S

    A Survey on Reliability of DNN Algorithms and Accelerators

    As DNNs become common in mission-critical applications, ensuring their reliable operation has become crucial. Conventional resilience techniques fail to account for the unique characteristics of DNN algorithms/accelerators, and hence, they are infeasible or ineffective. Our paper...
  11. S

    Survey paper on Deep Learning on GPUs

    The rise of deep learning (DL) has been fuelled by improvements in accelerators. The GPU remains the most widely used accelerator for DL applications. We present a survey of architecture- and system-level techniques for optimizing DL applications on GPUs. We review 75+ techniques...
  12. S

    Survey paper on Micron's Automata Processor

    Sorry, Arthur. I have no idea about the business aspect. Technically, as an academic, I can say that the effectiveness of Automata execution depends a lot on memory technology. If 3D XPoint can provide larger fan-in/fan-out, then it will be helpful for modeling complex automata which have a...
  13. S

    Survey paper on Micron's Automata Processor

    Micron has stopped developing the AP. http://naturalsemi.com and https://engineering.virginia.edu/center-automata-processing-cap are now leading the development of the AP.
  14. S

    Survey paper on Micron's Automata Processor

    Problems from a wide variety of application domains can be modeled as “nondeterministic finite automata” (NFAs), and hence, efficient execution of NFAs can improve the performance of several key applications. Since traditional architectures such as CPUs and GPUs are not inherently suited for...
  15. S

    Survey paper on Intel's Xeon Phi

    Intel’s Xeon Phi combines the parallel processing power of a many-core accelerator with the programming ease of CPUs. We survey ~100 works that study the architecture of Phi and use it as an accelerator for a broad range of applications. We discuss the strengths and limitations of Phi. We...
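Several of the posts above (items 12–14) concern executing NFAs on the Automata Processor. As an illustration only, not taken from the posts, here is a minimal software NFA simulation in Python; the state names and example pattern are hypothetical. It tracks the set of active states per input symbol, which mirrors in software what automata hardware does in parallel: every active state examines the current symbol and activates its successors.

```python
# Minimal NFA simulation: advance a set of active states symbol by symbol.

def run_nfa(transitions, start, accepting, inputs):
    """transitions: dict mapping (state, symbol) -> set of next states."""
    active = {start}
    for sym in inputs:
        # Union the successor sets of all currently active states.
        active = set().union(*(transitions.get((s, sym), set()) for s in active))
        if not active:          # no state is active: reject early
            return False
    return bool(active & accepting)

# Hypothetical example: an NFA over {a, b} accepting strings ending in "ab".
nfa = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}

print(run_nfa(nfa, "q0", {"q2"}, "aab"))   # True
print(run_nfa(nfa, "q0", {"q2"}, "aba"))   # False
```

On a CPU this inner union is sequential over active states, which is one reason CPUs and GPUs are not inherently suited to large NFAs; the Automata Processor evaluates all active states concurrently in the memory array.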