
Accelerating NPI with Deep Data: From First Silicon to Volume

by Kalar Rajendiran on 12-03-2025 at 10:00 am

proteanTecs Multi Pillar Technology

For decades, semiconductor teams have relied on traditional methods such as corner-based analysis, surrogate monitors, and population-level statistical screening for post-silicon validation. These methods served well when variability was modest and timing paths behaved predictably. However, today’s advanced nodes and complex architectures expose the limitations of these approaches. Local process variation, workload-driven activation, dynamic voltage droop, aging, and subtle defects create path-specific outcomes that traditional monitors cannot capture. Proxy monitors cannot reflect real functional paths under real operating conditions, leaving engineers blind to critical performance, quality, and reliability issues.

As competition and time-to-market pressures increase, teams cannot afford the iterative cycles required to reconcile design assumptions with actual silicon behavior.

proteanTecs recently hosted a webinar addressing this very topic and presented its solution for accelerating New Product Introduction (NPI). proteanTecs’ Alex Burlak, Executive Vice President of Test and Analytics, and Noam Brousard, Vice President of Solutions Engineering, led the webinar session. The webinar, titled “Accelerating NPI with Deep Data: From First Silicon to Volume,” presented a new approach that replaces assumptions with real-time, on-chip insight, enabling teams to detect issues early, characterize power/performance confidently, accelerate debug, and optimize qualification.

The Need for Deep Visibility Across the NPI Lifecycle

Modern NPI requires visibility into every chip, in every scenario. Engineers need to understand where individual devices might fail, how variability affects functional paths, and how workload, voltage, and temperature interact to create real operational limits. Traditional methods cannot provide this insight, leaving teams reactive and slow to identify critical issues. This webinar demonstrated that high-resolution, chip-specific data allows teams to characterize actual performance, detect early parametric drift, and unify insights across design, test, and validation phases.

On-Chip Monitoring with Advanced Design-Aware Analytics

proteanTecs provides a hardware IP monitoring system that includes monitoring agents and an infrastructure that provides the control framework. The on-chip agents are embedded, ultra-lightweight monitors engineered to extract “deep data” – including design profiling, material classification, performance degradation, workload impact, and operational effects. Rather than monitoring only high-level counters or traditional test structures, these agents sit close to the actual circuitry, collecting granular telemetry throughout the chip’s entire operational life.

By capturing this deep data from within the device and applying advanced machine learning, these agents enable early detection of reliability risks, performance drift, power inefficiencies, and system degradations, long before they become visible at the system level.

Timing Margin Monitoring: Real-Time Insight from Real Functional Paths

proteanTecs Margin Agents deliver this visibility by embedding lightweight monitors directly into real timing paths. These agents measure instantaneous slack and are sensitive to operational conditions, process variations, aging, and latent defects. Unlike proxy circuits, they capture the real limits of a chip, providing precise insight into performance and reliability boundaries.
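As a toy illustration of what can be done with this kind of per-path slack telemetry (not the proteanTecs implementation, and with entirely hypothetical path names, readings, and guard-band value), consider flagging paths whose observed slack dips below a guard band:

```python
# Toy illustration: flag paths whose observed timing slack falls below a
# guard band, given per-path slack telemetry of the kind margin agents
# collect. All data here are hypothetical.

def min_margin(readings):
    """readings: {path: [slack samples in ps]} -> (worst path, worst slack)."""
    worst = min(readings, key=lambda p: min(readings[p]))
    return worst, min(readings[worst])

def flag_at_risk(readings, guard_band_ps):
    """Paths whose worst-case observed slack falls below the guard band."""
    return sorted(p for p, samples in readings.items()
                  if min(samples) < guard_band_ps)

readings = {"alu_path": [42.0, 38.5, 40.1],   # comfortable margin
            "fpu_path": [12.0, 9.5, 11.2]}    # close to the edge
print(min_margin(readings), flag_at_risk(readings, 10.0))
```

The key point the sketch captures is that the decision is made per path and per chip, from measured slack rather than from a population-level proxy.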

Alex Burlak opened the webinar with a use case demonstrating how proteanTecs enables customers to correlate simulation expectations with real silicon behavior.

By aggregating agent data from multiple test stages including wafer sort, final test, and system-level evaluation into a centralized analysis environment, engineers can directly align design intent with silicon results.

By examining process signatures captured by Profiling Agents across standard cells, teams can quantify process variation relative to design corners and link it to metrics such as Fmax, VDDmin, and the impact on yield. This insight supports detailed root-cause analysis, helping engineers identify why certain chips run faster or slower and isolate variation sources, such as clock-path versus data-path effects or on-chip variation (OCV).

To accelerate characterization, proteanTecs offers a Smart Material Selection algorithm. After initial test data collection, this algorithm identifies the most representative subset of chips (e.g., 50 out of 1,000) that best captures process variability. By focusing on these representative devices, characterization efforts, such as voltage, temperature, or workload sweeps, become far more efficient and comprehensive.

Advanced HTOL Methodologies for Device Qualification

Next, Alex presented a use case on High-Temperature Operating Life (HTOL) testing. Using proteanTecs’ Profiling and Margin Agents, customers can track degradation over time, collecting data at intervals such as 0, 48, 500, and 1,000 hours. This enables quantification of parametric drift and more accurate decisions about guard-banding and reliability.
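The drift quantification step can be sketched with a simple least-squares fit over the readout intervals mentioned above. The margin readings below are hypothetical placeholders, not proteanTecs data:

```python
# Illustrative sketch: quantify parametric drift from HTOL readouts with a
# least-squares fit. The margin values are hypothetical; real data would
# come from on-chip agents at each readout interval.

hours  = [0, 48, 500, 1000]            # readout intervals from the use case
margin = [100.0, 99.2, 96.5, 93.0]     # hypothetical margin readings (ps)

# Least-squares slope gives drift per hour of stress.
n = len(hours)
mean_h = sum(hours) / n
mean_m = sum(margin) / n
slope = (sum((h - mean_h) * (m - mean_m) for h, m in zip(hours, margin))
         / sum((h - mean_h) ** 2 for h in hours))

print(f"estimated drift: {slope * 1000:.2f} ps per 1000 hours")
```

A negative slope of this kind is the quantity that would feed guard-band and reliability decisions.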

Unifying Data from Design, Test, Validation, and Characterization

proteanTecs’ agents produce consistent, high-resolution data throughout the NPI lifecycle. Engineers can trace performance trends from wafer sort through ATPG, functional testing, HTOL, qualification, and high-volume production. They can even continue monitoring in the field. This unified dataset allows teams to detect deviations early, correlate results across test stages, and communicate insights efficiently between design and product engineering teams. By grounding decisions in actionable data rather than assumptions, organizations reduce risk and accelerate time-to-market.

Smart Models: Eliminating Yield–Quality Trade-Offs

The webinar highlighted smart models that leverage agent data to resolve the traditional trade-off between yield and quality. Instead of relying on global statistical thresholds, smart models analyze each chip against its expected electrical behavior. They identify true outliers based on high-resolution, chip-specific measurements, avoiding the need to discard potentially good devices or compromise quality. Noam emphasized that this approach allows teams to maintain high yield without sacrificing reliability, effectively providing both efficiency and assurance across production.

Continuous Monitoring during HTOL and In-Field Monitoring

The solution also supports continuous monitoring during HTOL and in-field operation. Engineers can observe degradation trends in real time, rather than waiting for post-stress readouts. Noam demonstrated that this enables early detection of unexpected behavior, identification of hotspots, and rapid response to process or setup issues. In-field operation benefits similarly: Margin Agents operate without interrupting workloads, providing continuous visibility into aging, performance drift, and reliability over the product’s lifetime. By extending NPI insight into actual deployment, teams can react proactively, reducing risk and improving long-term product performance.

Summary

Alex and Noam demonstrated through live demos of case studies that deep on-chip data transforms NPI by providing real-time, high-resolution insight into each chip’s power, performance, and reliability. On-chip agents reveal true performance limits, smart models identify outliers without compromising yield, and continuous monitoring provides actionable information from wafer sort through in-field operation.

By embedding deep data and analytics into the NPI workflow, semiconductor teams gain confidence, clarity, and control. Every chip becomes its own source of truth, and every stage of the NPI pipeline benefits from actionable insight. The result is faster ramp, higher quality, fewer surprises, and a fundamentally more predictable transition from first silicon to volume production.

To watch the on-demand webinar, click here: https://hubs.la/Q03W0k2V0

To learn more, visit:

proteanTecs/technology

proteanTecs/solutions

Also Read:

Failure Prevention with Real-Time Health Monitoring: A proteanTecs Innovation

Podcast EP313: How proteanTecs Optimizes Production Test

Thermal Sensing Headache Finally Over for 2nm and Beyond


We Need to Turn Specs into Oracles for Agentic Verification

by Bernard Murphy on 12-03-2025 at 6:00 am

Spec as an oracle

The natural language understanding now possible in LLMs has raised interest in using specs as a direct reference for test generation, to eliminate the need for intermediate and fallible human translation. Sadly, specs today are not an infallible source of truth for multiple reasons. I am grateful to Shelly Henry (CEO of MooresLab) for his insights into the realities of spec evolution in production settings. Shelly and his team have many years of design experience across several enterprises, most recently as alumni of the Microsoft Silicon Group.

Today’s spec as an oracle – you wish

Architecture specs go through a development cycle, as do all aspects of design and verification, and those specs are not perfect on first pass, just as is the case for other deliverables in the design flow. An architect is responsible for building the specification, starting from customer requirements, considering what can be leveraged from other projects and what must be redesigned or upgraded to meet those new specs. The architect may be able to rely on a modeling group to do some virtual prototyping, testing for throughput, latencies, and other metrics. Once they feel their rough model looks good, they will start writing their spec. In what follows I’ll focus on the spec as guidance for hardware design verification, though it should equally guide hardware and firmware design.

Producing the spec you need to start test planning will be the architect’s primary focus for a while, but not their only focus, as they continue to manage other tasks already in their pipeline. Their first release may be a 0.5 version, covering perhaps 70% of what they have considered up to this point. Again, a decent representation but not guaranteed to be perfect. Good enough to start committing design and verification schedules and resources.

Over time they will add to and refine the spec based on their own ideas, feedback from the customer and from you. Eventually the spec is frozen (Shelly suggests around halfway into the design schedule, though your mileage may differ). Within that window, between the 0.5 release and freeze, the spec is changing. There may be contradictions or missing information. There may also be ambiguities: the spec defines a feature but leaves too much open for you to be certain about expected behavior in all cases.

You email the architect for clarification. That turns into a thread, and you eventually agree on a resolution. But this outcome doesn’t always get back into the spec, or maybe it does but not fully reflecting the agreement you thought you had. Worse yet, you call the architect, agree on a resolution for which you make a note – somewhere. It’s easy to see how mistakes can happen despite good intentions all round. Unfortunately, there is no verification methodology to definitively prove that a spec fully reflects the expectations of all stakeholders. Perhaps disconnects will surface pre-silicon, perhaps not. Is this really the best that we can do?

How we could turn a spec into a robust oracle

Start with what we already can do. Input the 0.5 spec into an LLM-based agent and have that agent generate questions to the LLM to elaborate verification requirements, based on know-how already captured in the model. For example: what are the standard types of tests that should be performed around a DDR interface in this class of designs?

There’s no need to digest a full spec in one gulp, which is likely impossible anyway given the bounded context windows that LLMs support. Specs are naturally organized into chapters and sections to respect the limited abilities of us fallible humans, which makes them much more amenable to LLM processing.
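The section-wise ingestion step can be sketched in a few lines. The numbered-heading format ("1.2 Title" at the start of a line) is an assumption about the spec's style; a real flow would need a parser matched to the spec's actual conventions:

```python
import re

# Sketch of section-wise spec ingestion: split on numbered headings, then
# pack sections into chunks bounded by a size budget, so each chunk fits
# comfortably in an LLM context window. Heading format is an assumption.

def split_spec(text, max_chars=4000):
    """Split on numbered headings, then pack sections into bounded chunks."""
    sections = [s for s in re.split(r"(?m)^(?=\d+(?:\.\d+)*\s)", text) if s.strip()]
    chunks, current = [], ""
    for section in sections:
        if current and len(current) + len(section) > max_chars:
            chunks.append(current)
            current = ""
        current += section
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be handed to the agent separately, with the agent's questions tagged back to the section they came from.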

Agent questions shouldn’t ask how such tests should be performed – that is the concern of later test synthesis flows. Here we want to refine the test specification to add more descriptive detail around what behavior is expected. The detail the architect or you would have added to the spec if time and fallible memories allowed. Very likely this may involve timing diagrams, maybe FSM diagrams, block diagrams to elaborate clock and reset control, or how domain crossings are handled.

As the spec evolves, the agent should be able to digest mail threads, DM threads, and notes, and use that information for further refinement, ensuring a central source of truth while also clarifying where changes originated and what they impacted, by revision. This makes it much easier for stakeholders to review and mutually agree that the refined version fully reflects what they wanted.

Turning a spec into an oracle is an essential first step in an agentic verification flow. Filling in holes, correcting inconsistencies, resolving ambiguities and testing that the spec itself provides enough detail to drive comprehensive test generation. This seems to me to be a no-brainer. If you’re curious, you might want to talk to the folks at MooresLab.ai.


Accelerating SRAM Design Cycles: MediaTek’s Adoption of Siemens EDA’s Additive AI Technology at TSMC OIP 2025

by Daniel Nenni on 12-02-2025 at 10:00 am

Siemens MediaTek TSMC OIP 2025

In the competitive vertical of mobile System-on-Chip development, SRAM plays a pivotal role, occupying nearly 40% of chip area and directly impacting yield and performance. The presentation “Accelerating SRAM Design Cycles With Additive AI Technology,” co-delivered by Mohamed Atoua of Siemens EDA and Deepesh Gujjar of MediaTek at TSMC’s Open Innovation Platform, addresses the verification challenges in advanced nodes like TSMC’s N2P process. As mobile SoCs push for lower minimum operating voltages (Vmin) to enhance power efficiency, device variations intensify, necessitating rigorous statistical yield qualification: 6-sigma for bitcells and 4-4.5 sigma for periphery logic. Traditional brute-force Monte Carlo simulations, while accurate, are computationally intensive and time-consuming, often leading to iterative design cycles that delay production.
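The scale of the brute-force problem is worth a back-of-envelope check. A one-sided 6-sigma Gaussian tail probability is roughly 1e-9, so observing even ~100 failures (the rough minimum for a usable yield estimate) would take on the order of 1e11 SPICE simulations; the sample count of ~100 failures is an illustrative assumption:

```python
import math

# Back-of-envelope: why brute-force Monte Carlo struggles at 6 sigma.
# P(fail) beyond 6 sigma is ~1e-9, so collecting ~100 failure samples
# requires on the order of 1e11 simulations.

def tail_prob(sigma):
    """One-sided standard-normal tail probability beyond `sigma`."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p6 = tail_prob(6.0)
samples_needed = 100 / p6
print(f"P(fail beyond 6 sigma) = {p6:.3e}; samples for ~100 fails = {samples_needed:.1e}")
```

This arithmetic is what motivates importance-sampling and AI-assisted high-sigma methods in place of raw Monte Carlo.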

The core motivation stems from these iterative workflows. Failures in verification prompt design fixes, PDK revisions, simulator updates, or additional PVT corners, each requiring full re-runs. MediaTek, leveraging Siemens EDA’s Solido tools, sought a more efficient approach. Enter Additive Learning technology, an AI-driven methodology integrated into the Solido Design Environment. This innovation retains and reuses AI models and simulation data from prior jobs, drastically reducing simulations in subsequent iterations without compromising SPICE-level accuracy.

Solido’s suite includes the High-Sigma Verifier (HSV) and the PVTMC Verifier, both enhanced by Additive Learning. HSV enables verifiable high-sigma analysis, achieving 6-sigma yield verification in thousands of simulations, up to 1,000x to 1,000,000,000x faster than brute force. PVTMC provides full-coverage verification across PVT corners plus Monte Carlo, runs 2-10x faster than traditional methods, and excels at outlier detection. In traditional flows, five iterations might consume 50 hours; with Solido’s iterative workflow, this drops to 5 hours, saving days or weeks in chip schedules.

The Additive Learning Engine automatically detects reuse opportunities, drawing from a lightweight, optimized Reusable AI Datastore. This datastore supports multi-user access, parallel read/write, and small disk footprints, allowing deletion of full DE results while preserving speedup potential. It stores AI models and past data for fast lookups, ensuring seamless integration into workflows like design sizing changes or PDK updates.

MediaTek’s results demonstrate tangible benefits. In Case 1, verifying 5-sigma bitcell write margin on N2P (clock-to-bitcell flip), the base run required 2,500 simulations, yielding a mean of 120.1 ps and 5-sigma of 131.2 ps. Post-design fix (Vt changes in write driver and column mux), Additive Learning completed verification in just 29 simulations, a 67x speedup, with mean 121.8 ps and 5-sigma 132.5 ps. Case 2 involved 4-sigma instance-level verification (clock-to-data out), where the original 300 simulations gave mean 167.4 ps and 4-sigma 173.1 ps. After Vt updates in control/IO blocks, Additive Learning used 15 simulations (20x faster) matching full re-run results (mean 198 ps, 4-sigma 204.6-204.8 ps).

This technology’s broader adoption underscores its production-grade maturity. NVIDIA, for instance, employs Additive Learning in AI-powered standard cell verification, achieving speedups on incremental runs amid rising design complexity beyond 5nm. Siemens EDA highlights up to 100x boosts to existing AI techniques for verification efficiency. As nodes shrink, such tools are essential for maintaining accuracy while compressing cycles, enabling faster time-to-market for high-yield SoCs.

Bottom line: Additive Learning transforms SRAM design from a bottleneck into an agile process: fast, accurate, and automatic. By reusing models across iterations (PDK revisions, sizing tweaks, or tool updates), it exemplifies AI’s role in EDA, as evidenced by MediaTek’s 20-67x gains. This collaboration between Siemens EDA and MediaTek not only accelerates mobile innovation but also sets a benchmark for AI integration in semiconductor workflows, promising even greater efficiencies in future nodes.

Also Read:

Transforming Functional Verification through Intelligence

Why chip design needs industrial-grade EDA AI

Hierarchically defining bump and pin regions overcomes 3D IC complexity

CDC Verification for Safety-Critical Designs – What You Need to Know


United Micro Technology and Ceva Collaborate for 5G RedCap SoC and Why it Matters

by Daniel Nenni on 12-02-2025 at 6:00 am

CEVA United Micro 5G

In the ultra-competitive automotive technology race, the integration of advanced connectivity is no longer a luxury but a necessity. As vehicles transition from isolated machines to intelligent nodes in a vast ecosystem, seamless, reliable communication becomes paramount. On November 11, 2025, Ceva, Inc., a leading licensor of silicon and software IP for the Smart Edge, and United Micro Technology (UMT), a high-tech innovator in smart cellular IoT solutions, unveiled a groundbreaking collaboration: the HyperMotion 5G RedCap Automotive IoT Platform. This partnership harnesses UMT’s 5G RedCap SoC with Ceva’s PentaG Lite scalable 5G modem platform IP and DSP technology, creating a robust connectivity solution tailored for automotive telematics control units (T-Box) and Cellular Vehicle-to-Everything (C-V2X) applications.

At the heart of this innovation is 5G RedCap, a streamlined variant of 5G designed for mid-tier devices that demand efficiency over ultra-high speeds. Unlike full-fledged 5G, which caters to bandwidth-hungry applications like streaming or AR, RedCap reduces complexity and power consumption by capping peak data rates at around 220 Mbps while maintaining low latency and enhanced reliability. This makes it ideal for cost-sensitive sectors like automotive, where overkill capabilities inflate expenses without proportional benefits. According to industry forecasts from Omdia, RedCap connections are projected to surpass 700 million globally by 2030, with the automotive segment leading the charge as it supplants legacy LTE Cat-1 to Cat-4 modules. By embedding RedCap into vehicles, manufacturers can enable real-time data exchange for traffic management, predictive maintenance, and enhanced safety features without ballooning production costs.

The HyperMotion platform exemplifies this synergy. Powered by UMT’s RedCap SoC, it integrates Ceva’s PentaG Lite (a member of the advanced Ceva-PentaG2 family) which optimizes modem performance through sophisticated DSPs and hardware accelerators. This combination not only slashes terminal costs but also embeds essential automotive functionalities: support for eCall and Next-Generation eCall for emergency response, Time-Sensitive Networking for deterministic communication, and hardware-accelerated network offloading to ensure ultra-low latency. Certified to AEC-Q100 Grade 2 standards, the platform guarantees resilience in harsh vehicular environments, from extreme temperatures to vibrations. Moreover, it prioritizes security with always-on connectivity safeguards, addressing vulnerabilities in over-the-air updates and V2X interactions.

This collaboration accelerates connected vehicle adoption by democratizing 5G. Traditional 5G deployments have been prohibitive for mass-market cars due to high power draw and chip complexity, limiting advanced ADAS and infotainment to premium models. HyperMotion changes that, enabling automakers to roll out fleet-wide connectivity swiftly. For instance, C-V2X enables vehicles to “talk” to infrastructure, pedestrians, and each other, potentially reducing accidents by 80% through collision warnings and adaptive traffic flow. In industrial IoT extensions, it supports telematics for fleet tracking, optimizing logistics in real time.

Hui Fu, CEO of UMT, emphasized the partnership’s impact: “Ceva’s cellular IoT platform IP has been instrumental in developing a best-in-class 5G RedCap solution.” Echoing this, Guy Keshet, Vice President and General Manager of Ceva’s Mobile Broadband Business Unit, highlighted how PentaG Lite shortens development cycles, allowing faster market entry for future-ready platforms. The result? A decisive edge for manufacturers in a competitive arena where connectivity defines differentiation.

Bottom line: This initiative signals a broader shift toward edge intelligence in mobility. As 5G ecosystems mature, collaborations like UMT and Ceva’s will bridge the gap between hype and deployment, fostering safer, smarter roads. With HyperMotion, the promise of ubiquitous connected vehicles edges closer to reality, propelling the automotive industry into an era of efficient, scalable innovation.

Contact CEVA or United Micro

Also Read:

Ceva Unleashes Wi-Fi 7 Pulse: Awakening Instant AI Brains in IoT and Physical Robots

A Remote Touchscreen-like Control Experience for TVs and More

Podcast EP291: The Journey From One Micron to Edge AI at One Nanometer with Ceva’s Moshe Sheier


Transforming Functional Verification through Intelligence

by Daniel Payne on 12-01-2025 at 10:00 am

Wilson Research Group, project schedule

SoC projects are running behind schedule as design and verification complexity has increased dramatically, so just adding more engineers, more tests, and more compute isn’t the answer. The time is ripe to consider smarter ways to improve verification efficiency. The added complexity of multiple embedded processors, multiple power domains, plus security and functional requirements creates millions of corner cases. Brute-force verification methods no longer scale, so the team at Siemens has an approach with Questa One to unify coverage, changing verification from outdated methods into a targeted, intelligent, and collaborative discipline.

Coverage Plateau

About 75% of complex ASIC projects are now missing schedule, up from 66% just a few years ago. Old verification methodologies typically stall at around 85% coverage, no matter how many regressions you throw at them. Engineers are spending nearly half their time on verification activities, but with diminishing returns from huge regression suites and endless coverage reports. This coverage plateau has become a bottleneck, exposing the limits of traditional verification methodologies.

Source: 2024 Wilson Research Group

Intelligent Verification

Questa One has a new approach with a unified, end-to-end architecture that combines systematic verification planning, automated regression management, and real-time analytics. Instead of just launching more random tests, Questa One’s intelligence analyzes existing runs, identifies gaps, and generates targeted new tests. Coverage is pursued and directed strategically, breaking through old plateaus using fewer tests, and delivering project savings: coverage closure with 500x fewer tests, debug time cut by 30%, and regression times reduced by more than a third.

Unified Coverage Database

At the heart of this approach is a UCIS-compliant, unified coverage database (UCDB). This database bridges the entire verification journey, starting from the initial block-level analysis to simulation traces in a compute farm on the other side of the world. UCDB merges static, dynamic, code, functional, assertion, power, and safety coverage in one compressed format. Beyond just storage, this design enables collaborative features that reduce closure times by 20–25% and free up 100 engineering hours per week, all while maintaining continuity even as designs evolve.

Unified Coverage Database (UCDB)
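The merge idea behind such a database can be illustrated with a toy sketch. A real UCDB stores far richer metadata and many coverage kinds; this only shows the accumulate-and-union principle, with hypothetical bin names:

```python
# Toy illustration of unified coverage merging: hit counts for coverage
# bins from separate runs are combined into one view, and overall
# coverage is the fraction of bins with at least one hit. Bin names and
# counts here are hypothetical.

def merge_coverage(runs):
    """runs: list of {bin_name: hit_count} dicts -> merged hit counts."""
    merged = {}
    for run in runs:
        for bin_name, hits in run.items():
            merged[bin_name] = merged.get(bin_name, 0) + hits
    return merged

def coverage_pct(merged, all_bins):
    """Percentage of bins with at least one hit."""
    covered = sum(1 for b in all_bins if merged.get(b, 0) > 0)
    return 100.0 * covered / len(all_bins)
```

The value of the unified database is exactly this kind of cross-run view: a bin hit in any stage counts as covered for the project, so teams stop re-verifying what another engine already proved.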

Questa One’s analytics use a browser-based dashboard with heatmaps to highlight where the next 10% of coverage can be won. Machine learning algorithms rank tests by effectiveness, suggest regression optimizations, and provide historical trend visualizations. Pattern matching groups related coverage issues, saving verification engineers from manual deep-dives into the tiniest test disruptions.

Coverage heat maps

Coverage Acceleration and Debug

Improving coverage from 80% towards 100% requires something new. Traditional approaches require teams to just run more tests, but Questa One Sim Coverage Acceleration (QCX) takes a smarter approach. QCX analyzes the coverage landscape, maps the most efficient routes, and generates only the tests required to reach closure. Where a large SoC regression might have taken a week, it now closes in under an hour with QCX. Peripheral IPs that once took thousands of test cases can now be fully verified with a couple hundred. QCX’s guided approach achieves 100% coverage where brute force fell short. The result is up to 100x faster time to closure, a big help for teams under pressure to meet their schedule.

QCX


VIQ Regression Navigator

Summary

Questa One has many unified pieces: planning, verification engines, closure, dynamic debug, analysis, regression and process management. Used together these pieces create an environment that’s always adapting, always optimizing. Each tool amplifies the effectiveness of the others, turning verification into a coherent, nimble, and ultimately more efficient process.

Read the entire 28-page white paper online.

Related Blogs


Podcast EP320: The Emerging Field of Quantum Technology and the Upcoming Q2B Event with Peter Olcott

by Daniel Nenni on 12-01-2025 at 9:00 am

Daniel is joined by Peter Olcott, Deeptech Principal at First Spark Ventures specializing in early-stage investments. His background encompasses over 20 years of experience in electrical engineering, software engineering, algorithm design, combined hardware-software robotic devices, and novel innovations in biomedical engineering. Peter’s academic and professional portfolio consists of 150+ articles and 34 issued patents spanning various fields such as semiconductor devices, compressed sensing, analog front-end readouts, novel uses of optics, and applications in positron emission tomography and radiotherapy.

Dan explores the emerging field of quantum technology with Peter, who describes current and potential future applications of the technology. Security is discussed, as well as other applications including applications in the medical field and methods to reduce carbon emissions.

Peter also describes an important upcoming event on quantum technology called Q2B which will be held from December 9-11, 2025 at the Santa Clara Convention Center in Santa Clara, CA.
 
Peter will be moderating a panel at this event on December 9 called “Quantum Technologies: Innovation and Investment.” The panel has been specially arranged by Silicon Catalyst, as they are continuing to expand their quantum ecosystem. You can learn more about this conference and register to attend here. 
You can use Discount Code: SC-20-SV for a 20% discount compliments of Silicon Catalyst. They have a booth in the exhibit hall (E2) so stop by and network.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Website Developers May Have Most to Fear From AI

by Bernard Murphy on 12-01-2025 at 6:00 am

Engineer struggling with a website

Further on the theme of what jobs will AI displace or radically change, I have been thinking about Walmart’s recent announcement with OpenAI, to enable customers to buy products directly within ChatGPT. Seems far removed from any care-abouts in electronic design but bear with me. We’ve been hearing about sizeable layoffs at Amazon, Bosch, Dell and other tech companies. Little mention of what roles are being cut, though notably Walmart cuts included tech and e-commerce, perhaps an indicator for where others might be cutting.

We know in our neck of the woods we already have too few hardware and software engineers and are instead emphasizing productivity enhancement. (Programmers shouldn’t get too comfortable. They too may be replaced if they aren’t yet onboard when peers begin to show higher productivity using AI.) What other roles in tech are looking shaky? I argue that website development may be a leading candidate.

Websites already look like yesterday’s marketing tech

We have been trained to believe that our website is the primary customer/ investor/ analyst-facing view of our company. We agonize over how it should look and how it should be organized. We agonize over how we can populate the website with content: customer endorsements, blogs, thought-leader articles and much more. All understandable, but have the goal posts been moving while we have been agonizing?

I have two pet peeves important in the discovery phase of my web searches. I am guessing these are widely shared. The first is search itself. I have yet to find search capabilities in any website (outside native browser searches) that come anywhere close to what I can easily do in a browser, much less a chatbot. Even for the big social platforms, search is hardly worth using. My default now is to search in a browser, then drill down into the most promising website recommended.

My second peeve is ease of use, and relevance. Every website owner wants to believe that their organization of information is carefully crafted for ease of use and user friendliness. But when I’m in discovery I want to retrieve information from multiple companies which might have a match to what I need. Each has their own view of ideal website organization and what kinds of exploration might be most friendly to my immediate needs (or their promotion objectives). All mutually incompatible. This approach to promotion is set up to fail for anyone in discovery.

There is still high-value information contained on the website – the leaf-level content. Product images, specs, white papers, blogs, even prices if appropriate. It’s much of the superstructure that is becoming less useful, apart from storefront/persona promotion: big-picture vision/mission, values, investor information, etc.

GenAI search and SEO versus content

SEO (search engine optimization) has dominated the world of website/content design for many years. The idea behind SEO is that you can “game” your content to have it appear at the top of the first page of a search or at least somewhere on the first page. You do this by carefully structuring all kinds of meta-data: SEO title, meta-description, and high-scoring keywords. A lot of work goes into getting this right and the field abounds with books and training courses. You can get the impression (unfortunately many do) that optimizing your SEO is much more important than optimizing your content.

I have views on this topic because for a few years I had to spend time adding and refining SEO information for my blogs. I hated it. It seemed like a superficial way to gain high visibility in web searches, a feeling reinforced by the fact that much of what comes up in my searches is often not very useful to my needs. I have been happy to see suggestions that I can now avoid much of this gaming, confirmed by my own experience that clients no longer require me to add SEO dressing to my blogs.

Of course the SEO industry protests that it isn’t obsolete, only that the nature of SEO has changed. Equally, search giants don’t want to lose their advertising revenue, so the SEO people have a point. But I hold out hope that SEO evolution will have to become much more content-centric and that market competition will reinforce content relevance over gaming tricks.

A common theme for content now is to be authoritative and original, supported by credible citations and references, which builds trust. This kind of content is likely to draw more views, in turn helping it rank among the links retrieved by retrieval-augmented generation (RAG) in GenAI searches.

AI-generated content is unlikely to clear this bar. Blatantly commercial sales pitches won’t either. The link in the previous paragraph and my own experience support the appeal of helpful articles and white papers, guides, FAQs and explainers (I’m especially fond of explainers). Readers want to find information that will help them understand key points to consider in planning, without needing to wade through self-serving commercials. The kind of information that will build confidence that you put your reader’s needs ahead of your own needs.

The emphasis shifts from website design to content, which you must continue to develop either through your own product/solution experts, if they can write well, or through collaboration with respected content developers in your industry. I don’t see any other options, but of course I do have a vested interest 😀

What happens to websites?

Websites aren’t dead. They continue to fill an important role as the storefront for your corporate persona – who you are, what you do, investor support, that sort of thing. But they are hopelessly clunky in supporting early discovery for products, services, and technical insight.

In the late stages of discovery, a reader may want to understand more about your company and what other capabilities you provide. A traditional website in some form could continue to be useful in supporting this need, though I believe the product/service-centric kind of support could be better served through a chat interface with RAG retrieving appropriate content.

In other words, put all your agonizing into the storefront website and replace the rest of the structure with a chatbot retrieving all that great content you have built. And build more content!
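As an illustrative sketch of that chatbot-plus-retrieval idea (all document names here are hypothetical): retrieval boils down to scoring your content against a visitor’s question and handing the best matches to a chat model. Real deployments would use embeddings and a vector store rather than simple keyword overlap, but the shape of the pipeline is the same.

```python
# Toy retrieval step for a chat-with-your-content interface.
# Scores each document by word overlap with the query; a real system
# would use embeddings and a vector database instead.

def tokenize(text):
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().split())

def retrieve(query, docs, k=2):
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    scored = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

# Hypothetical leaf-level content a company might expose to a chatbot
docs = [
    "White paper on power integrity analysis for 3D-IC designs",
    "Blog on why readers love explainers and FAQs",
    "Product spec for a low-power RISC-V core",
]

hits = retrieve("which white paper covers power analysis", docs)
# hits[0] is the power-integrity white paper; the chatbot would then
# answer the question grounded in these retrieved documents.
```

The point of the sketch is that the retrieval layer cares only about the content itself, not about how the website wrapping it is organized.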


CEO Interview with Brandon Lucia of Efficient Computer

by Daniel Nenni on 11-28-2025 at 2:00 pm

headshot

Brandon Lucia is the CEO and co-founder of Efficient Computer, the company building the world’s most energy-efficient general-purpose processors, and a Full Professor in the department of electrical and computer engineering at Carnegie Mellon University. He is an extensively published author, and his research has appeared in top publications such as IEEE Micro, Computer, Proceedings of the ACM on Programming Languages, and IEEE Computer Architecture Letters.

Brandon earned his Ph.D. in Computer Science and Engineering from the University of Washington in 2013. He received the NSF CAREER Award in 2017, the IEEE TCCA Young Computer Architect Award in 2019, and the Sloan Foundation Fellowship in 2021.

Tell us about your company.

After a decade of research at Carnegie Mellon University, a team of world-leading computer architects, frustrated with the pervasiveness of deeply inefficient computer systems, founded Efficient Computer in 2022 to solve computing’s energy problem. Led by CEO Brandon Lucia, Chief Architect Nathan Beckmann, CTO Graham Gobieski, and founded with SmartThings founder and now BrightAI CEO Alex Hawkinson, the group sought to commercialize key breakthroughs in efficient computation and reshape computing from the ground up, with a focus on extreme energy efficiency.

Today, Efficient Computer has built and launched the world’s most energy-efficient general-purpose processor, Electron E1, along with an intuitive, developer-friendly software stack that unlocks orders-of-magnitude efficiency gains for entire, complex applications. It replaces inefficient legacy architectures and over-specialized accelerator chips with a new efficiency-first design that enables far-reaching innovation through its general-purpose programmability.

With Efficient’s cutting-edge effcc Compiler and software stack, Efficient’s fundamentally new spatial dataflow architecture delivers extreme efficiency to solve computing’s energy challenges across a wide range of scales: the tiniest far-edge systems, high-performance edge devices, and even the datacenter; and for a wide range of application use cases: physical AI, infrastructure observability and industrial automation, robotics and automotive use cases, wearable AR vision systems, satellites, defense, and many more.

Efficient’s patented architecture is an unprecedented alternative to catastrophically inefficient, legacy “von Neumann” computer architectures. Von Neumann architectures are inherently sequential and mired in decades of entrenched design choices and intellectual inertia that have neglected efficiency while adhering to the status quo. Efficient’s novel spatial dataflow architecture is a clean-slate redesign that offers efficiency, generality, and performance through hardware/software co-design, achieving an extremely high degree of parallelism without compromising on the familiar software interface that generations of programmers and infrastructure expect.

What problems are you solving?

Today’s processors waste most of their energy — often more than 99% of the energy for each CPU instruction — moving data around, fetching and decoding instructions, and configuring circuits to ready them for the next operation, which then consumes a vanishingly small fraction of the total energy to perform. For decades, Moore’s Law masked these inefficiencies by delivering steady gains in energy efficiency. But as transistor scaling has slowed, those architectural inefficiencies have become the hard limit on progress. Attempts to reduce power have either specialized away general-purpose programmability or shifted the inefficiency elsewhere in the system.

Efficient was founded to solve this root problem of architectural inefficiency, and to spend a majority of energy on a computation’s actual operations. The careful co-design of Efficient’s Fabric spatial dataflow architecture eliminates the energy overheads inherent in legacy designs. Instructions in Efficient’s Fabric architecture are spatially distributed, avoiding frequent fetch and decode costs, and eliminating costly cycle-by-cycle circuit reconfiguration. A data value produced by one operation flows through a highly efficient on-chip network directly from the output of one instruction to the input of the instructions that consume that data value. The benefits of these key differences in the architecture are unlocked by Efficient’s compiler and software stack, which ingests standard, general-purpose code and readies it for efficient execution on the Fabric. By dramatically reducing the energy required for general-purpose computation, including AI, our approach enables applications that were previously not possible, due to thermal dissipation, power delivery, battery lifetime, or limited performance under a power cap.
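To make the spatial dataflow idea concrete, here is a purely illustrative toy model (not Efficient’s actual design, and all names are hypothetical): each instruction is a node in a graph, and a value produced by one node flows directly to the nodes that consume it, with no shared fetch/decode loop in between.

```python
# Toy dataflow-graph evaluator: instructions are nodes, and each
# produced value is forwarded directly from producer to consumers.
# Illustrative only -- a sketch of the dataflow model, not real hardware.

def run_dataflow(graph, inputs):
    """graph: name -> (op, [operand names]); inputs: name -> value.
    Evaluates every node once, caching results so each value is
    produced a single time and reused by all of its consumers."""
    values = dict(inputs)

    def eval_node(name):
        if name not in values:
            op, args = graph[name]
            values[name] = op(*(eval_node(a) for a in args))
        return values[name]

    return {n: eval_node(n) for n in graph}

# (a + b) * (a - b), expressed as a graph of producer/consumer nodes
graph = {
    "sum":  (lambda x, y: x + y, ["a", "b"]),
    "diff": (lambda x, y: x - y, ["a", "b"]),
    "prod": (lambda x, y: x * y, ["sum", "diff"]),
}
result = run_dataflow(graph, {"a": 5, "b": 3})  # prod = 8 * 2 = 16
```

In a spatial implementation, "sum", "diff", and "prod" would be physically distinct units wired together, so no cycle-by-cycle instruction fetch or circuit reconfiguration is needed.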

With Efficient’s architecture, an application designer has the freedom to spend their energy dividends any way they like. One customer may translate efficiency into longer battery life, another may incorporate more capability at a given power level, and another may opt for a smaller battery form factor at lower cost. The result is a new category of computational use cases and applications — defined by the possibilities created through unprecedented efficiency, not by the limitations of ever-tightening power budgets.

What application areas are your strongest?

Efficient’s Fabric architecture scales from the smallest edge devices to large-scale performance-intensive applications, providing extreme efficiency across the spectrum. Electron E1 is an implementation of Efficient’s Fabric architecture designed for the edge — where efficiency and performance are crucial, and where a wide range of different computational tasks intersect. Electron E1 is especially strong in applications such as physical AI for infrastructure observability and industrial automation, robotics and near-actuator control, automotive sensors, next-gen AR/VR wearables, and space & defense use cases. Electron E1 enables systems to process complex, multimodal data like vision, audio, and vibration directly on-device. These are the kinds of challenges our Electron product line is built to address.

And while the Electron product line targets the edge, our underlying Fabric architecture scales seamlessly — from ultra-low-power devices all the way to the datacenter — bringing the same efficiency and flexibility to every level of computing. Across these domains and scales, Efficient enables developers to spend their energy savings where it matters most: more capability for less power, lower cost, longer lifetime, and all without sacrificing the key advantage of general-purpose programmability.

What keeps your customers up at night?

Up until now, customers have had to make painful trade-offs when designing products — sacrificing features, performance, or intelligence just to meet battery, size, or cost constraints. Take the infrastructure observability space: they’re building devices that must process complex data and make decisions on the spot, often in harsh, disconnected, and power-limited environments. These devices need to deliver real-time intelligence where it’s needed most, without being limited by energy, size, or cost.

Today, many are forced to rely on the cloud for processing — sending massive amounts of sensor data over unreliable networks, with constant communication that drains batteries in just days or weeks. This “data backhaul” strategy is a non-starter for long-lived infrastructure observability applications, making true autonomy in these use cases impossible. There is an urgent need, instead, to efficiently support on-device computation, avoiding the need for data backhauling while still extracting benefits from valuable sensor data.

Considering legacy compute solutions for these use cases leads to unsatisfying and constant compromise: every milliwatt, millisecond, and square millimeter matters, and designers must forfeit performance for lifetime, degrade intelligence due to inefficiency, and often remain cloud dependent. Efficient eliminates these unsatisfying trade-offs. By making energy efficiency fundamental to general-purpose computing, we give customers the freedom to design for what matters most — capability, autonomy, longevity, form factor, or scale — without compromise.

What new features/technology are you working on?

This year, we launched our first silicon product, the Electron E1 general-purpose processor, which has already been delivered to our early-access customers. We are now ramping toward large-scale volume and broad distribution in 2026, when Electron E1 will be generally available.

Electron E1 leverages the Efficient Fabric architecture, eliminating the energy overhead caused by moving data between memory and compute cores in traditional von Neumann systems. We are seeing strong customer traction in areas such as physical AI for infrastructure and automation, space and defense, automotive, and consumer products. Recently, we celebrated the first real-world deployment of the Electron E1 with physical AI leader, BrightAI. With the E1, BrightAI’s platform can perform the majority of its processing directly at the edge, avoiding the high energy cost of offloading data to the cloud for compute-intensive tasks such as signal processing and AI.

How do customers normally engage with your company?

One of the ways customers typically engage with us is through our Early Access Silicon Program, which provides a pathway to evaluate and integrate our technology. Through this program, customers receive access to our development platforms, documentation, and tools, along with hands-on support from our engineering team. We help them benchmark their applications and provide guidance on how to optimize performance, ensuring they get the most out of our hardware. Early engagements often focus on prototyping and testing, while ongoing collaboration supports larger deployments and integration into production environments.

Also Read:

CEO Interview with Dr. Peng Zou of PowerLattice

CEO Interview with Roy Barnes of TPC

CEO Interview with Mr. Shoichi Teshiba of Macnica ATD


Podcast EP319: What Makes Agile Analog a Unique Company with Chris Morrison

by Daniel Nenni on 11-28-2025 at 10:00 am

Daniel is joined by Chris Morrison, vice president of product marketing at Agile Analog, the customizable analog IP company. Chris has over 18 years’ experience developing strong relationships with key partners across the semiconductor industry and delivering innovative analog, digital, power management and audio products. Previously, he has worked for international companies including Dialog Semiconductor.

Dan explores Agile Analog’s unique custom analog focus with Chris, who describes the company’s agileSecure product line. Tamper prevention and tamper detection are both discussed, along with the benefits of the company’s unique temperature sensor. Chris also discusses significant collaboration in the security space with high-profile companies. Future collaboration as well as new product introduction across a broad array of applications including security and data conversion are also covered.

Contact Agile Analog

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


TSMC Formally Sues Ex-SVP Over Alleged Transfer of Trade Secrets to Intel

by Daniel Nenni on 11-28-2025 at 6:00 am

TSMC vs Wei Jen Lo

The big semiconductor news this week is the legal action TSMC is taking against former Senior Vice President Wei-Jen Lo. This looks to be a serious game of 3D chess between CC Wei and Lip-Bu Tan so it is worth a look. I got this notice in my inbox early Tuesday morning:

Immediately following, there was an interesting discussion on SemiWiki. Since this story is still unfolding, I thought it would be worthwhile to take a closer look and share perspectives.

Wei-Jen Lo immigrated to the US from Taiwan and earned his PhD in Solid State Physics & Surface Chemistry from U.C. Berkeley in 1979. He joined TSMC in 2004 after 18 years at Intel, in positions including Director of Technology Development and Factory Manager running a development facility in Santa Clara, CA.

Joining TSMC after Intel was not uncommon back then. Intel and TSMC did not compete directly, and that is how Silicon Valley worked: we changed jobs in pursuit of equity. Wei-Jen Lo working for only two companies in 40 years is quite remarkable, and it speaks to the depth of knowledge he gained at both companies.

Here is the TSMC hiring announcement from 2004:

Approved the appointment of Dr. Wei-Jen Lo as Vice President of TSMC Operations II. Both Dr. Wei-Jen Lo and Dr. Mark Liu, who is also a Vice President of TSMC Operations II, report directly to Dr. Rick Tsai, President and Chief Operating Officer for TSMC. Dr. Wei-Jen Lo recently joined TSMC from Intel Corporation where he held various positions in technology development and management. Prior to joining Intel, Dr. Lo served in the IT and semiconductor industries, and he also held teaching and research position in the university in the US.

TSMC and many other semiconductor companies have hired former Intel employees. In fact, you will be hard-pressed to find a company without Intel experience inside, so this is nothing new. Dr. Mark Liu also worked for Intel prior to joining TSMC, and later became CEO and Chairman of the Board.

The problem as I see it is two-fold:

Wei-Jen Lo told TSMC he was retiring, which provided a completely different exit than if he had said he would go to work for a competitor. For example, it was reported that Wei-Jen Lo was allowed to take 20 boxes of handwritten notes he had compiled over his 20 years at TSMC. That would not have happened had it been known that he was going to work for Intel.

The second problem is that Intel did not publicly announce Wei-Jen Lo’s arrival, which is normally done for executive staff. This supports the argument of deception. Wei-Jen is said to be a Senior VP at Intel reporting directly to Lip-Bu Tan. He will work in Intel’s manufacturing group and its packaging business, which directly competes with TSMC.

On Wednesday morning Lip-Bu Tan sent a message to Intel employees defending the hiring:

“Based on everything we know today, we see no merit to the allegations involving Wei-Jen, and he continues to have our full support. Intel has welcomed back Wei-Jen Lo, who previously spent 18 years at Intel working on the development of Intel’s wafer processing technology before joining TSMC, where he continued his work in their wafer processing technology development.”

On Thursday it was reported that Taiwan prosecutors had raided the homes of the former senior TSMC executive and seized computers after the company accused him of leaking trade secrets. This is both a criminal and civil investigation.

The big question is: why?

Wei-Jen Lo is 75 years old and has had a remarkable career working for two semiconductor giants that changed the world. His legacy is one semiconductor professionals like myself honor greatly. Wei-Jen has also worked for some of the most decorated people in the semiconductor industry including Andy Grove, Craig Barrett, Morris Chang, and CC Wei.

Additionally, Wei-Jen Lo owns significant TSMC stock. I found a source that said that, as of February 28, 2025, Lo held 1,282,328 shares, valued at about US$63 million. Could that be true? And now he risks losing it as a result of a lawsuit? Not to mention the shame of betraying Taiwan’s most valued company? Seriously, why would he do this?

Bottom Line: Hopefully this story has a happy ending. TSMC is still an important supplier to Intel, and this will be yet another test of leadership for Lip-Bu Tan. Lip-Bu needs to get in front of this situation before it does irreparable harm to Intel, absolutely.

Also Read:

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

Exploring TSMC’s OIP Ecosystem Benefits

TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging