
Agentic AI emerges in this Synopsys Converge keynote not as a futuristic add-on, but as a practical response to the growing complexity of engineering. In the speaker's view, the traditional way of designing chips, systems, and intelligent products is no longer sufficient for the era of physical AI. Engineers now contend with software-defined systems, advanced silicon, multi-physics constraints, verification challenges, and ever-shorter development cycles. In that environment, agentic AI becomes essential because it helps "re-engineer engineering" itself. Rather than replacing engineers, it is presented as a new layer of intelligence that works alongside them, extending human capability and allowing organizations to take on more ambitious projects with the same, or even fewer, engineering resources.
A key tension in the speech is the mixture of excitement and fear surrounding agentic AI. At the user and engineering level, many people worry about how this technology may change their jobs. That concern is understandable, because engineering has long depended on deep human expertise, judgment, and careful iteration. At the same time, the speaker stresses that management and organizational leaders are enthusiastic because they see agentic AI as a productivity multiplier. Companies today are often limited not by ideas, but by the number of engineers available and the amount of time required to turn complex ideas into working products. In that sense, agentic AI is framed less as a threat than as a force multiplier: it can help teams do more, move faster, and explore more design possibilities than would be possible through manual effort alone.
One of the most important parts of the keynote is the framework of autonomy levels from L1 to L5. This structure shows that Synopsys is not treating agentic AI as a vague concept, but as a staged engineering roadmap. At L1, there are co-pilots, which assist users across different parts of the design flow. These tools help engineers interact with software more efficiently and automate limited actions. At L2, the system moves to task agents, where each agent is responsible for a specific task. Here the human engineer still acts as the orchestrator, assigning work to multiple specialized agents. At L3, multi-agent workflows appear, meaning that groups of agents can coordinate with one another to complete broader processes. By L4, the vision becomes much more ambitious: a cognitive layer dynamically orchestrates multiple agents with contextual awareness, allowing the system to reason across tasks instead of simply executing isolated commands. L5 remains an early pathfinding stage, but it represents the long-term vision of far greater autonomy in engineering workflows.
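The ladder above can be made concrete with a small sketch. The enum and the orchestrator rule below are purely illustrative encodings of the keynote's framing (the names are my own, not a Synopsys API): at L1 and L2 the human engineer assigns and supervises work, while from L3 upward coordination increasingly shifts to the agents and, at L4, to a cognitive orchestration layer.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative encoding of the L1-L5 autonomy ladder from the keynote."""
    L1_COPILOT = 1       # co-pilots assisting users across the design flow
    L2_TASK_AGENT = 2    # one specialized agent per task; human orchestrates
    L3_MULTI_AGENT = 3   # groups of agents coordinate broader workflows
    L4_COGNITIVE = 4     # a cognitive layer dynamically orchestrates agents
    L5_PATHFINDING = 5   # early-stage vision of far greater autonomy

def orchestrator(level: AutonomyLevel) -> str:
    """Who assigns and coordinates work at each level, per the keynote's framing."""
    if level <= AutonomyLevel.L2_TASK_AGENT:
        return "human engineer"
    return "agents / cognitive layer"
```

On this reading, the qualitative jump is between L2 and L3: below it, autonomy means better tools in a human-driven loop; above it, the loop itself is increasingly machine-coordinated.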
What makes this model especially compelling is that it preserves a central role for the human engineer. The keynote explicitly argues that human engineers become more, not less, important in this environment. Their role shifts upward: instead of performing every task manually, they guide, supervise, and accelerate innovation through collaboration with agent systems. This reflects a broader change in knowledge work. The engineer of the future may spend less time on repetitive implementation details and more time defining intent, setting constraints, evaluating outcomes, and steering exploration. Agentic AI, then, is not just about automation; it is about elevating engineering work to a more strategic and creative level.
The concrete example given in the keynote is the path from specification to RTL. This workflow involves many steps: architectural specification, RTL design, test creation, test planning, formal verification, static verification, coverage analysis, and debugging. Traditionally, these are labor-intensive tasks that often require repeated iterations across different teams and tools. In the agentic model, each of these can be assigned to a dedicated task agent. A higher-level reasoning agent then orchestrates those specialists, ensuring that the outputs align and that the resulting RTL is of sufficient quality for the next design phase. This example shows why agentic AI matters: engineering is not usually blocked by one giant problem, but by the coordination of many interdependent smaller problems. Agentic AI addresses that coordination challenge directly.
Another significant idea in the speech is optionality. Synopsys does not present its agentic platform as a closed black box. Customers may bring their own agents, their own data, and their own infrastructure, whether on-premises or in the cloud. This matters because engineering organizations have different security requirements, workflows, and intellectual property concerns. By allowing customers to plug their own systems into the Synopsys stack, the company acknowledges that the future of agentic AI will likely be open, modular, and multi-model rather than standardized around a single monolithic system.
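The optionality idea implies something like a plugin architecture: a registry where a customer-supplied agent can replace a default one for a given task. The sketch below is a generic illustration of that pattern under my own naming, not a description of the actual Synopsys platform.

```python
class AgentRegistry:
    """Illustrative open agent platform: customers can register their own
    agents (and, by extension, their own data and infrastructure) per task."""
    def __init__(self):
        self._agents = {}

    def register(self, task: str, agent):
        # A customer-supplied agent simply overrides any default for the task.
        self._agents[task] = agent

    def resolve(self, task: str):
        return self._agents[task]

registry = AgentRegistry()
# Bring-your-own agent, e.g. an in-house formal verification flow.
registry.register("formal_verification", lambda ctx: {"proved": True})
outcome = registry.resolve("formal_verification")({})
```

The design choice this models is the one the keynote emphasizes: the platform defines the task interfaces, while who implements each agent, and where it runs, stays open.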
Bottom line: The keynote connects agentic AI to a broader transformation in science and engineering. In a recorded conversation with Microsoft CEO Satya Nadella, Sassine suggests that the next frontier is not merely natural-language assistance, but systems that can plan, execute, and verify complex engineering tasks using deep domain knowledge. The future will depend on combining general-purpose language models with specialized physics and design models. That vision is especially powerful in fields like EDA, where the tools, feedback loops, and verification frameworks already exist to support highly structured automation. In this sense, agentic AI is not just a productivity tool. It is the beginning of a new engineering paradigm, where human expertise and intelligent agents work together to build the increasingly complex systems of the future.
Also Read:
Efficient Bump and TSV Planning for Multi-Die Chip Designs
Reducing Risk Early: Multi-Die Design Feasibility Exploration
Building the Interconnect Foundation: Bump and TSV Planning for Multi-Die Systems