Crypto modernization timeline starting to take shape
by Don Dingee on 06-15-2023 at 10:00 am

CNSA Suite 2.0 crypto modernization timeline

Post-quantum cryptography (PQC) might be a lower priority for many organizations, with the specter of quantum-based cracking seemingly far off. Government agencies, however, are fully sensitized to the cracking risks and the investments needed to mitigate them, and are laying out 10-year plans for migration to quantum-safe encryption.… Read More


Reconfigurable DSP and AI IP arrives in next-gen InferX
by Don Dingee on 05-08-2023 at 10:00 am

InferX 2.5 reconfigurable DSP and AI IP from Flex Logix

DSP and AI are generally considered separate disciplines with different application solutions. In their early stages (before programmable processors), DSP implementations were discrete, built around a digital multiplier-accumulator (MAC). AI inference implementations also build on a MAC as their primitive. If the interconnect… Read More
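The shared-primitive point can be sketched in a few lines of plain Python (illustrative only, not Flex Logix code): a FIR filter tap sum from DSP and a neuron's pre-activation from AI inference are the same multiply-accumulate (MAC) loop applied to different data.

```python
# Illustrative sketch: DSP and AI inference both reduce to the MAC primitive.

def mac_reduce(coeffs, samples):
    """y = sum(c_i * x_i) -- the shared multiply-accumulate kernel."""
    acc = 0.0
    for c, x in zip(coeffs, samples):
        acc += c * x  # one multiply-accumulate per term
    return acc

# DSP view: FIR filter output for one window of samples (smoothing taps)
fir_out = mac_reduce([0.25, 0.5, 0.25], [1.0, 2.0, 1.0])

# AI view: neuron pre-activation = weights . inputs + bias (bias assumed 0.5)
neuron = mac_reduce([0.1, -0.2, 0.4], [3.0, 1.0, 2.0]) + 0.5
```

Only the interconnect and control around the MAC array differ between the two workloads, which is what makes a reconfigurable block covering both plausible.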


Formal-based RISC-V processor verification gets deeper than simulation
by Don Dingee on 05-01-2023 at 10:00 am

End-to-end formal-based RISC-V processor verification flow for the Codasip L31

The flexibility of RISC-V processor IP allows much freedom to meet specific requirements – but it also opens the potential for many bugs created during the design process. Advanced processor features are especially prone to errors, increasing the difficulty and time needed for thorough verification. Born out of necessity, … Read More


Configurable RISC-V core sidesteps cache misses with 128 fetches
by Don Dingee on 04-25-2023 at 6:00 am

Gazzillion misses 2

Modern CPU performance hinges on keeping a processor’s pipeline fed so it executes operations on every tick of the clock, typically using abundant multi-level caching. However, a crop of cache-busting applications is looming, like AI and high-performance computing (HPC) applications running on big data sets. Semidynamics… Read More
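Why many outstanding fetches matter can be seen with a Little's-law back-of-envelope calculation: sustained memory bandwidth is capped by how many cache-line fetches a core keeps in flight while waiting out DRAM latency. The line size and latency below are assumed round figures, not Semidynamics specifications.

```python
# Little's law sketch: sustained bandwidth = bytes in flight / memory latency.
# Assumed round numbers: 64-byte cache lines, 100 ns DRAM latency.

def sustained_bw_gbps(in_flight, line_bytes=64, latency_ns=100.0):
    # bytes per nanosecond is numerically equal to GB/s
    return in_flight * line_bytes / latency_ns

for n in (1, 8, 128):
    print(f"{n:3d} outstanding fetches -> {sustained_bw_gbps(n):6.2f} GB/s")
```

With these assumptions, a single outstanding miss sustains well under 1 GB/s, while 128 in-flight fetches push the ceiling up by two orders of magnitude, which is the argument for tolerating misses rather than trying to cache everything.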


Advanced electro-thermal simulation sees deeper inside chips
by Don Dingee on 03-29-2023 at 6:00 am

Advanced electro-thermal simulation in Keysight PathWave ADS

Heat and semiconductor reliability are inversely related. Before the breaking point at the thermal junction temperature rating, every 10°C rise in steady-state temperature cuts predicted MOSFET life in half. Yet, heat densities rise as devices plunge into harsher environments like smartphones,… Read More
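The halving rule in the teaser is easy to put into numbers. The reference temperature and baseline lifetime below are illustrative assumptions, not values from the article; only the halve-per-10°C relationship comes from the text.

```python
# Rule-of-thumb from the text: each 10 degC rise in steady-state temperature
# halves predicted MOSFET life. Reference point values are illustrative.

def predicted_life_hours(temp_c, ref_temp_c=55.0, ref_life_hours=100_000.0):
    """Halve predicted life for every 10 degC above the reference temperature."""
    return ref_life_hours * 2.0 ** ((ref_temp_c - temp_c) / 10.0)

print(predicted_life_hours(55.0))  # at the reference point: 100000.0 hours
print(predicted_life_hours(65.0))  # +10 degC -> 50000.0 hours
print(predicted_life_hours(85.0))  # +30 degC -> 12500.0 hours
```

The exponential shape is why electro-thermal simulation pays off: shaving even a few degrees off a hot spot buys back a disproportionate amount of predicted life.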


eFPGA goes back to basics for low-power programmable logic
by Don Dingee on 03-21-2023 at 10:00 am

Renesas ForgeFPGA Evaluation Board features Flex Logix EFLX 1K low-power programmable logic tile

When you think “FPGA,” what comes to mind? Massive, expensive parts capable of holding a lot of logic but also consuming a lot of power. Reconfigurable platforms that can swallow RTL for an SoC design in pre-silicon testing. Big splashy corporate acquisitions where investors made tons of money. Exotic 3D packaging and advanced… Read More


MIPI D-PHY IP brings images on-chip for AI inference
by Don Dingee on 03-13-2023 at 10:00 am

Perceive Ergo 2 brings images on-chip for AI inference with Mixel MIPI D-PHY IP

Edge AI inference is getting increasing attention as demand grows for AI processing across diverse applications, including those requiring low-power chips in a wide range of consumer and enterprise-class devices. Much of the focus has been on optimizing the neural network processing engine for these… Read More


Deep thinking on compute-in-memory in AI inference
by Don Dingee on 03-09-2023 at 6:00 am

Compute-in-memory for AI inference uses an analog matrix to instantaneously multiply an incoming data word

Neural network models are advancing rapidly and becoming more complex. Application developers using these new models need faster AI inference but typically can’t afford more power, space, or cooling. Researchers have put forth various strategies in efforts to wring out more performance from AI inference architectures,… Read More
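The operation the caption describes, an analog array multiplying an incoming data word against stored weights in one shot, is an ordinary matrix-vector product. This plain-Python digital reference (illustrative, not vendor code) shows the math, minus the instantaneous analog parallelism.

```python
# Digital reference for what a compute-in-memory array does in analog:
# multiply one incoming data word (vector x) against the weight matrix
# stored in the memory cells, producing all row outputs at once.

def matvec(weights, x):
    """One CIM 'read': every row's dot product with x happens in parallel
    in the analog array; here we simply loop over rows digitally."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

W = [[1, 0, -1],
     [2, 1,  0]]     # weights held resident in the memory array
x = [3, 4, 5]        # incoming activation word
print(matvec(W, x))  # -> [-2, 10]
```

Because the weights never move, the energy cost of fetching them from DRAM for every multiply disappears, which is the core efficiency argument for compute-in-memory.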


Area-optimized AI inference for cost-sensitive applications
by Don Dingee on 02-15-2023 at 6:00 am

Expedera uses packet-centric scalability to move up and down in AI inference performance while maintaining efficiency

AI inference often brings to mind complex applications hungry for ever more processing power. At the other end of the spectrum, applications like home appliances and doorbell cameras can offer limited AI-enabled features but must be narrowly scoped to keep costs to a minimum. New area-optimized AI inference technology from… Read More