
Webinar: IP Design Considerations for Real-Time Edge AI Systems

Edge AI systems increasingly require on-chip integration of large-capacity memory, compute engines, and inference-optimized accelerators—all within strict power, latency, and footprint constraints. This webinar provides an overview of IP architecture and integration methodologies that support real-time AI workloads at the edge. We’ll cover:
• Memory and compute efficiency: Techniques for optimizing on-chip memory architectures and compute pipelines to deliver high throughput and low energy consumption.
• Low-latency inference: Design considerations to meet stringent latency targets through local processing and effective dataflow management.
• Power and area optimization: Strategies for balancing compute capability with ultra-low power operation and minimal silicon area.
• Multi-die considerations: Approaches to multi-die design for advanced Edge AI applications.
• IP risk mitigation and integration: Best practices for modular, verifiable IP reuse that accelerates first-silicon success and manages integration risk.
• Evolving Edge AI requirements: Insights into how emerging architectures are shaping IP trends that enable smarter, more adaptive edge devices.
Attendees will gain a detailed understanding of how IP selection, memory/compute co-design, and hardware optimization directly impact the scalability, reliability, and deployment speed of next-generation Edge AI solutions.
Speaker: Hezi Saar is a senior director of product line management for mobile, automotive, and consumer IP in the Synopsys Solutions Group. He brings more than 20 years of experience in the semiconductor and embedded systems industries.
*This webinar is in partnership with Synopsys and SemiWiki*