CEO Interview with Bob Fung of Owens Design
by Daniel Nenni on 08-01-2025 at 10:00 am

Bob Fung is the CEO of Owens Design, a Silicon Valley company specializing in the design and build of complex equipment that powers high-tech manufacturing. Over his 22-year tenure, Bob has led the development of more than 200 custom systems for world-class companies across the semiconductor, biomedical, energy, and emerging tech sectors, solving their most demanding equipment challenges. Under his leadership, Owens achieved a 10x revenue increase while maintaining a 100% delivery record, reflecting its engineering excellence and unwavering customer commitment.

Tell us about your company

At Owens Design, our story began in 1983 with a bold vision: to build a company where world-class engineers and technicians collaborate seamlessly to design and manufacture the next generation of high-tech equipment. Today, that vision is realized through our legacy of delivering over 3,000 custom tools across various industries, including semiconductors, renewable energy, medical devices, hard disk drives, and emerging technologies. We are proud to maintain a 100% on-time delivery record, a testament to our culture of precision, partnership, and performance.

What sets Owens Design apart is our deep understanding of complex equipment engineering and our ability to design production-ready prototypes and rapidly scale manufacturing to meet our customers’ growth. Our customers don’t just come to us to design equipment; they come to us to co-develop future-proof solutions. We offer turnkey services that span custom design, precision engineering, prototyping, pilot builds, and scalable manufacturing. This integrated approach minimizes risk, compresses development cycles, and enables rapid ramp-up for production in fast-evolving markets.

What problems are you solving?

At Owens Design, we help high-tech innovators turn intellectual property (IP) into fab-ready systems and enable new processes with production-capable equipment that scales. We focus on sectors that require precision, speed, and scalability, where standard solutions often don’t suffice.

Across the semiconductor and electronics manufacturing sectors, the increasing complexity of products is reshaping equipment requirements. Advanced technologies such as chiplet integration, 3D packaging, and heterogeneous system design demand highly customized tools that can meet exacting standards for precision, reliability, and integration capability.

At the same time, companies face compressed development timelines and pressure to bring new solutions to market faster. Owens Design addresses these needs by engineering application-specific equipment that enables breakthrough innovations, which are tightly aligned with each customer’s performance and production goals.

As the industry shifts toward more regionalized manufacturing and supply chain resilience, companies are reevaluating their approach to equipment strategy. There is a growing need for agile, scalable platforms that can adapt to rapidly changing product roadmaps and evolving production environments. We support that transition by working closely with our clients’ R&D and operations teams, acting as an extension of their organization to deliver complex tools on accelerated timelines while maintaining the engineering rigor that’s core to their success.

What are your strongest application areas?

Being rooted in Silicon Valley, we’ve grown alongside the semiconductor industry and built deep expertise in the design of complex equipment. Many of our customers are semiconductor OEMs, ranging from early-stage startups to Tier-1 equipment manufacturers, who are looking to bring advanced, high-precision systems to market quickly. They turn to us not just because we understand the technical demands of semiconductor tools but because we consistently deliver on tight timelines with the engineering depth, domain knowledge and execution reliability they need.

As Silicon Valley has evolved into a hub for a broader range of advanced technologies, Owens Design has evolved with it. Today, we apply the same level of rigor and creativity to automation challenges in renewable energy, data storage, medical devices, and emerging tech. These projects often involve highly specialized needs, such as advanced laser processing, precision robotics, or the automated handling of fragile materials. In many cases, standard solutions don’t meet the requirements, and that’s where our broad technical experience becomes essential.

What connects all of our work is the ability to take on complex programs while moving quickly and maintaining high quality. Our development process is designed to compress timelines and give customers confidence from concept through production. For over 40 years, we’ve maintained a 100% delivery record by focusing on areas where we can deliver exceptional results and by committing to meet our customers’ needs. That combination of discipline and engineering versatility is what continues to set Owens Design apart.

What keeps your customers up at night?

For semiconductor OEMs, whether they’re early-stage startups or established Tier-1 OEMs, the pressure to move fast while getting it right the first time is intense. They’re trying to bring highly complex systems to market on aggressive timelines, and every delay or design misstep can have real commercial consequences. What we hear most often is concern about bridging the gap between a promising concept and a reliable, production-ready tool, especially when resources are limited, and there is no room for second chances.

These teams aren’t just looking for a contract manufacturer; they’re seeking a partner who thoroughly understands semiconductor equipment. Someone who can engage early, ask the right questions and design a system that meets both performance specs and production realities. Reputation matters in this space. If you’re a startup, getting into the fab with a tool that doesn’t perform can be a deal breaker. And if you’re a Tier-1 company, quality and consistency are non-negotiable across your entire roadmap.

That’s why we place such a strong emphasis on early alignment. We work closely with customers to de-risk development from day one, bringing decades of domain expertise, a proven process, and a sense of urgency that matches theirs. Ultimately, it’s about giving them the confidence that they’ll reach the market quickly with a system that works, scales, and earns the trust of their end users.

What does the competitive landscape look like, and how do you differentiate?

The equipment development space is becoming increasingly specialized as technologies grow more complex and timelines get tighter. While there are many players offering contract manufacturing or niche engineering services, very few are structured to provide proper end-to-end support from early design definition through to production-ready delivery. That’s where Owens Design stands apart.

What differentiates us is our ability to engage early in the product lifecycle, even when requirements are still evolving, and carry that design through to a production-ready, scalable tool. Many other service providers are either focused on early-stage prototyping or late-stage manufacturing. However, few can bridge both sides with the same level of technical depth and delivery reliability. Owens Design has the ability to close the gap.

What new features/technology are you working on?

We’re focused on adding value for our customers and finding new ways to address what they need most. In the semiconductor capital equipment space, the message is clear: customers need to get to market even faster and need help navigating their own customers’ expectations and fab requirements, including SEMI spec compliance, particle control, vibration control, and fab interface software. With artificial intelligence accelerating the advanced packaging market, we’re seeing many new technologies being developed to improve yield. There is strong interest in our experience developing inspection and metrology equipment to help new technologies get into production quickly, which has led to our most recent initiative, PR:IME™. This new platform accelerates the commercialization of these technologies.

What is the PR:IME platform, and how does it accelerate the development of semiconductor inspection and metrology tools?

The idea behind PR:IME came from a recurring challenge we’ve observed over the years: the time it takes to bring inspection and metrology tools from concept to something ready for deployment in a fab. For most tool developers, every new system starts as a clean sheet: custom mechanics, software, controls, and wafer handling. That means long lead times and a lot of engineering effort spent on non-differentiating components.

We asked ourselves: What if we could take some of that burden off their plate? PR:IME is our answer to that. It’s a modular platform with standardized mechanical, electrical, and software interfaces. A flexible foundation that lets customers plug in their core IP while we handle the rest. That way, teams can focus on what makes their technology unique, not on reinventing basic infrastructure.

One of the things we’re most excited about is its scalability. For R&D environments, a manual wafer loading option is available to get up and running quickly. Then, as the tool matures and heads toward volume production, there is a clear path to fully automated wafer handling without changing the process hardware. That kind of flexibility makes it easier to iterate early and scale later without having to start over from scratch. It’s really about helping innovators move faster and with more confidence.

How do customers usually engage with your company?

Our engagements typically begin with a collaborative discovery process. We work closely with customers to understand their technical challenges, commercial objectives, and long-term vision for the project. This includes discussing key performance objectives, test methods, cost and schedule constraints, system complexity, and barriers to success. By gaining a clear understanding of both the engineering and business context, we’re able to align early on around what success looks like.

If the opportunity is a strong technical and commercial fit, we partner with customers through a phased development approach. This model offers a structured, low-risk pathway for transitioning from concept to implementation, starting with system architecture and feasibility, then progressing through detailed design, prototyping, and ultimately, scalable production. Each phase is designed to validate assumptions and refine the scope, giving customers confidence in both the technical viability and the business case.

This process enables us to build trust and deliver value at every step, whether we’re designing a new tool from scratch or helping an existing system evolve for the next stage of production.

Contact Owens Design

Also Read:

CEO Interview with Dr. Avi Madisetti of Mixed-Signal Devices

CEO Interview with Andrew Skafel of Edgewater Wireless

CEO Interview with Jutta Meier of IQE


Podcast EP301: Celebrating 20 Years on Innovation with yieldHUB’s John O’Donnell
by Daniel Nenni on 08-01-2025 at 10:00 am

Dan is joined by John O’Donnell, Founder and CEO of yieldHUB, a pioneering leader in advanced data analytics for the semiconductor industry. Since establishing the company in 2005, he has transformed it from a two-person startup into a trusted multinational partner that empowers some of the world’s leading semiconductor companies to improve yield, reduce test costs, boost engineering efficiency, and enhance quality.

yieldHUB recently celebrated its 20th anniversary and has also received national recognition for its accomplishments in Ireland. Dan explores yieldHUB’s history and future plans with John, including the company’s worldwide expansion and its new R&D focus areas. John describes the company’s new yieldHUB Live system, a test-agnostic, real-time capability with an AI recommendation system and digital twin models. He explains that AI is a new development focus for the company and that this system is already having a significant impact on improving yield, reducing test costs, and increasing product quality. John also describes a new API-native platform that is in development.

Dan also explores the four pillars of yieldHUB with John: improving yield, reducing test costs, boosting engineering efficiency, and enhancing quality. John describes the importance of each pillar and explains the approach yieldHUB takes to achieve these goals with its customers.

Contact yieldHUB here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification
by Admin on 08-01-2025 at 9:00 am

On July 9, 2025, a DACtv session with Zackary Glazewski of ChipAgents AI introduced Waveform Agents, an AI-driven solution designed to tackle the complex challenge of waveform debugging in semiconductor design. The speaker highlighted the difficulties of traditional waveform debugging and demonstrated how AI agents, leveraging large language models (LLMs), offer a scalable, autonomous approach to streamline verification, addressing the “needle in a haystack” problem inherent in analyzing massive waveform datasets.

Waveform debugging is notoriously challenging due to the sheer volume of data involved, often ranging from tens of gigabytes to terabytes in large systems. Bugs typically manifest within a narrow temporal range, making their identification akin to finding a needle in a haystack. Beyond localization, fixing these bugs requires deep knowledge of signal interactions, their roles in design and testbench files, and their interconnections. This dual challenge—locating and resolving issues—demands significant time and expertise, as engineers must navigate intricate relationships and interpret signal behaviors.

Traditional methods fall short in addressing these issues. Waveform viewing software, while useful for visualizing signal toggles and protocols, struggles with the scale of modern datasets, requiring engineers to manually comb through vast amounts of data. Signal tracing features, though helpful, rely heavily on engineer guidance, adding to the time-intensive process. Anomaly detection using traditional machine learning can identify patterns but lacks flexibility for novel issues, as it depends on pre-trained datasets that may not cover new anomalies. Formal tools, while powerful for finding counterexamples, do not scale well for large systems and still require manual effort to pinpoint and fix errors.

Waveform Agents, powered by LLMs, offer a transformative solution by autonomously handling end-to-end waveform debugging. Unlike traditional LLMs that process entire datasets and risk exceeding context limits, these agents employ intelligent context selection. They traverse design and testbench files, log files, and waveforms, selectively analyzing relevant data to identify bugs without overwhelming computational resources. This approach ensures scalability and efficiency, even for terabyte-scale datasets. The agents can detect a wide range of issues, from functional failures to assertion violations, whether in the design or testbench, without requiring specific error syntax or predefined signatures.

The process begins with the agent analyzing regression logs and waveforms to localize bugs, leveraging its pre-trained understanding of the codebase and specifications. It then proposes fixes by interpreting signal relationships and referencing design intent, reducing the need for manual intervention. For instance, in a regression with thousands of tests, the agent autonomously parses logs, identifies failure signatures, and maps them to specific design or testbench issues, eliminating the need for repeated training per project. This pre-trained model adapts dynamically, pulling relevant context as needed, ensuring flexibility across diverse scenarios.
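To make the context-selection idea more concrete, here is a minimal Python sketch of the general approach: parse a regression log for failure signatures, then extract only a narrow window of signal activity around each failure time rather than loading the entire dump. The log format, signal-index structure, and window size are illustrative assumptions, not ChipAgents’ actual interfaces.

```python
# Illustrative sketch of intelligent context selection for waveform debugging.
# Log format, data structures, and window size are assumptions for illustration only.
import re
from dataclasses import dataclass

@dataclass
class Failure:
    test: str
    time_ns: int
    message: str

def parse_regression_log(log_text: str):
    """Collect failure signatures and the simulation time at which each fired."""
    pattern = re.compile(r"ERROR\s+\[(?P<test>\w+)\]\s+@(?P<time>\d+)ns:\s+(?P<msg>.+)")
    return [Failure(m["test"], int(m["time"]), m["msg"]) for m in pattern.finditer(log_text)]

def select_context(failure: Failure, signal_index, window_ns: int = 500):
    """Keep only signal transitions inside a small window around the failure time,
    so the agent never has to ingest the full (potentially terabyte-scale) dump."""
    lo, hi = failure.time_ns - window_ns, failure.time_ns + window_ns
    return {sig: [(t, v) for t, v in changes if lo <= t <= hi]
            for sig, changes in signal_index.items()}

# Toy usage: each failure plus its narrow waveform slice is what gets handed to the LLM.
log = "ERROR [axi_wr_test] @125000ns: response mismatch on BRESP"
waves = {"bresp": [(100, "2'b00"), (124800, "2'b10")], "bvalid": [(124790, "1'b1")]}
for f in parse_regression_log(log):
    print(f.test, f.time_ns, select_context(f, waves))
```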

The benefits are significant: Waveform Agents drastically reduce debugging time, enhance scalability, and improve accuracy by understanding complex signal interactions. By automating both bug localization and resolution, they free engineers from tedious manual tasks, allowing focus on higher-value design work. The session addressed concerns about context limits, with the speaker noting that intelligent context selection avoids processing entire waveforms, making the solution practical for real-world applications.

ChipAgents’ approach marks a paradigm shift in verification, aligning with the industry’s need for smarter, data-driven tools to handle growing design complexity. By integrating AI into waveform debugging, Waveform Agents promise to accelerate verification cycles, improve first-time silicon success rates, and empower engineers to tackle the challenges of modern semiconductor design with greater confidence and efficiency.

Contact ChipAgents

Also Read:

AI-Driven Verification: Transforming Semiconductor Design

Building Trust in AI-Generated Code for Semiconductor Design

Microsoft Discovery Platform: Revolutionizing Chip Design and Scientific Research


AI-Driven Verification: Transforming Semiconductor Design
by Admin on 08-01-2025 at 8:00 am

In a DACtv session on July 9, 2025, Abhi Kolpekwar, Vice-President & General Manager at Siemens EDA, illuminated the transformative role of artificial intelligence (AI) in addressing the escalating challenges of semiconductor design verification. The presentation underscored the limitations of traditional methods and introduced a smarter, AI-driven approach to enhance efficiency, scalability, and insight in the verification process.

The semiconductor industry is grappling with unprecedented complexity. In 2024, only 14% of chips achieved first-time silicon success, highlighting the difficulty in managing intricate designs. Additionally, 75% of chips are behind schedule due to tight time-to-market pressures, compounded by a critical shortage of skilled professionals, with 80% of the industry’s demand for talent unmet. These challenges—complexity, time constraints, and resource scarcity—render traditional rule-based and database-driven verification methods inadequate. The speaker emphasized that siloed, manual processes, such as testbench creation and debugging, consume 40% of the verification workforce’s time, yet fail to deliver the necessary confidence in tape-out readiness.

The industry is undergoing a profound shift, moving from the electrification era, which automated physical labor, to the computation era, and now to the “cognification” era, where cognitive tasks are delegated to AI. By 2030, the semiconductor market is projected to exceed a trillion dollars, with AI as a dominant driver. Generative AI, expected to grow at an 85% compound annual growth rate by 2027, is reshaping infrastructure, services, and software. Meanwhile, the rise of 3D integrated circuits (3D ICs) is set to propel the market from $200 billion to $1 trillion by 2030, introducing complexities in heterogeneous integration, interconnects, power, and thermal management. Data security is another concern, with 8.2 trillion breaches reported in 2023, costing an average of $4.5 million per incident.

Traditional verification, reliant on disconnected simulation, formal, and static methods, struggles to scale. Coverage reports often lack actionable insights, and manual debugging is inefficient. The speaker proposed an AI-driven solution, exemplified by their “Questa One” approach, which integrates connected workflows, data-driven insights, and scalable tools. This smarter verification leverages AI in three key areas: specification generation, debugging, and coverage analysis.

In specification generation, AI tools like Property Assist convert plain English or PDF-based design specifications into standardized SystemVerilog assertions, eliminating the need for engineers to master complex languages. These assertions integrate seamlessly with formal verifiers and netlists, while test plans can be generated from the same specifications, ensuring alignment with design intent. This streamlines the process, reducing human effort and maintaining a single source of truth.

For debugging, AI transforms regression testing into “smart regression.” By prioritizing test cases to trigger failures quickly, AI minimizes resource waste. It clusters failures, maps them to specific design changes, and identifies the responsible commits, significantly reducing debugging time. This targeted approach contrasts with traditional methods, where engineers manually sift through extensive logs.
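As a rough illustration of that flow, under assumed inputs rather than Siemens’ actual tooling or data formats, the sketch below clusters failing tests by a shared failure signature and maps each cluster to the recent commits that touched the implicated file.

```python
# Hedged sketch of "smart regression" triage: cluster failures by signature and
# point at the commits that touched the implicated files. Field names are assumptions.
from collections import defaultdict

def cluster_failures(failures):
    """Group failing tests that share the same assertion and source file."""
    clusters = defaultdict(list)
    for f in failures:
        clusters[(f["assertion"], f["file"])].append(f["test"])
    return clusters

def suspect_commits(clusters, commits):
    """For each cluster, list commits from the current window that modified the file."""
    report = {}
    for (assertion, path), tests in clusters.items():
        report[assertion] = {
            "tests": tests,
            "suspect_commits": [c["id"] for c in commits if path in c["files"]],
        }
    return report

failures = [
    {"test": "t_burst_wr", "assertion": "a_no_overflow", "file": "fifo.sv"},
    {"test": "t_rand_wr",  "assertion": "a_no_overflow", "file": "fifo.sv"},
]
commits = [{"id": "9f3c2ab", "files": ["fifo.sv", "fifo_tb.sv"]}]
print(suspect_commits(cluster_failures(failures), commits))
```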

In coverage analysis, analytical AI processes large datasets to identify common expression patterns, creating configurable graphs that highlight critical coverage gaps. This enables engineers to write targeted testbenches, improving coverage quality without relying on arbitrary test additions. By providing actionable insights, AI moves beyond mere numbers to deliver meaningful verification outcomes.
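The snippet below is a toy version of that kind of coverage roll-up, assuming a flat list of uncovered bin names rather than any particular tool’s database format: it groups gaps by their top-level functional area so the biggest holes stand out.

```python
# Toy coverage-gap roll-up: count uncovered bins per top-level functional group.
# The bin-name format is an assumption, not a specific coverage database schema.
from collections import Counter

uncovered_bins = [
    "axi.write.burst_len_16", "axi.write.burst_len_8",
    "axi.read.qos_high", "pcie.tlp.malformed",
]

def gap_summary(bins):
    """Group uncovered bins by the prefix before the first dot."""
    return Counter(b.split(".")[0] for b in bins)

print(gap_summary(uncovered_bins))  # Counter({'axi': 3, 'pcie': 1})
```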

The Questa One approach embodies this shift, offering connected workflows that unify tools across domains, data-driven wisdom to uncover insights, and scalable tools to handle growing verification loads. The speaker urged the verification community to adapt, questioning siloed and static processes to embrace AI’s potential. By fostering adaptability and innovation, AI-driven verification promises to enhance productivity, reduce time-to-market, and ensure robust chip designs in an increasingly complex semiconductor landscape.

Also Read:

Building Trust in AI-Generated Code for Semiconductor Design

Microsoft Discovery Platform: Revolutionizing Chip Design and Scientific Research

Google Cloud: Optimizing EDA for the Semiconductor Future


Building Trust in AI-Generated Code for Semiconductor Design
by Admin on 08-01-2025 at 7:00 am

On July 9, 2025, a compelling session at DACtv by Vishal Moondhra of Perforce addressed a critical challenge in the semiconductor industry: building trust in AI-generated code. The speaker highlighted the unique hurdles of integrating generative AI into semiconductor design, emphasizing issues like data provenance, quality, and intellectual property (IP) management. As AI becomes integral to designing analog circuits, RTL, and other components, these concerns must be tackled to unlock its full potential.

The semiconductor industry faces distinct challenges compared to other sectors. First, there is heightened sensitivity to liability and data provenance. Unlike software development, where errors can often be patched quickly, mistakes in chip design are costly and difficult to rectify, especially as designs progress. A single error in generated code can lead to expensive rework, making reliability paramount. Additionally, data quality is a concern, as every company has unique standards and workflows developed over years. Ensuring that AI-generated code adheres to these standards is critical to maintaining design integrity.

Another significant issue is the legal and ethical use of data for training AI models. Semiconductor designs often incorporate external IP, which may carry royalty obligations or export control restrictions. Using such IP to train AI models without proper authorization risks legal violations and data leakage. For instance, training a model on proprietary IP could inadvertently produce outputs that violate licensing agreements, exposing companies to liability. Furthermore, highly sensitive IPs, such as those with geographical restrictions, must be excluded from training datasets to comply with regulations.

To address these challenges, the speaker proposed a robust IP lifecycle management system. By treating every design block as an IP, whether internally developed, purchased, or reused from prior projects, companies can attach traceability and metadata to each block. This approach ensures clear data provenance, allowing teams to track the origin, ownership, and usage rights of each IP. For example, metadata might include technical specifications, workflow outputs, or verification results, providing context that enhances the AI model’s understanding of the design.

The proposed workflow involves breaking down designs into hierarchical IPs, each with a defined lifecycle. This modular approach enables companies to set rules for which IPs can be used for training. For instance, IPs with export controls or proprietary restrictions can be flagged to prevent their inclusion in datasets. Tools like those from Siemens, in collaboration with Perforce, generate metadata through IP validation and quality assurance processes, which can be linked to specific IP versions. This ensures that AI models are trained on compliant, high-quality data.

The speaker also emphasized the importance of traceable data pipelines. A typical training workflow starts with raw design files and metadata, which are pre-processed, cleaned, and split into training, validation, and testing sets. By maintaining traceability throughout this pipeline, companies can audit the data’s journey, ensuring no unauthorized IPs are included. This is particularly crucial as design data evolves, with incremental updates like bug fixes or new features requiring continuous tracking to maintain accuracy.
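A minimal sketch of such a pipeline might look like the following, assuming a simple metadata record per IP block rather than any particular IP lifecycle management product’s API: restricted blocks are filtered out, provenance is recorded for what remains, and the result is split into training, validation, and test sets.

```python
# Minimal sketch of a traceable training-data pipeline for design IP.
# The metadata fields and split ratios are illustrative assumptions.
import random

ip_blocks = [
    {"name": "ddr_phy",    "version": "2.1", "export_controlled": True,  "path": "ip/ddr_phy"},
    {"name": "spi_ctrl",   "version": "1.4", "export_controlled": False, "path": "ip/spi_ctrl"},
    {"name": "uart_lite",  "version": "3.0", "export_controlled": False, "path": "ip/uart_lite"},
    {"name": "dma_engine", "version": "0.9", "export_controlled": False, "path": "ip/dma_engine"},
]

def build_dataset(blocks, seed=42):
    """Exclude restricted IPs, keep a provenance record, and split 80/10/10."""
    allowed = [b for b in blocks if not b["export_controlled"]]
    provenance = [{"ip": b["name"], "version": b["version"], "source": b["path"]}
                  for b in allowed]
    random.Random(seed).shuffle(allowed)
    n = len(allowed)
    return {
        "train": allowed[: int(0.8 * n)],
        "val":   allowed[int(0.8 * n): int(0.9 * n)],
        "test":  allowed[int(0.9 * n):],
        "provenance": provenance,
    }

print(build_dataset(ip_blocks))
```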

Implementing such a system offers multiple benefits. It mitigates data leakage risks, simplifies data management by modularizing IPs, and supports compliance with legal and regulatory requirements. By using data management tools like SQL databases, Perforce, or Git, companies can avoid ad-hoc data handling, ensuring that training datasets are sourced from managed repositories. Additionally, modeling AI models themselves as IPs allows tracking of which LLM version was used, enhancing transparency.

This approach fosters innovation by providing a trusted framework for AI-driven design, enabling semiconductor companies to leverage generative AI confidently while safeguarding IP and ensuring compliance.

Also Read:

Microsoft Discovery Platform: Revolutionizing Chip Design and Scientific Research

Google Cloud: Optimizing EDA for the Semiconductor Future

Synopsys FlexEDA: Revolutionizing Chip Design with Cloud and Pay-Per-Use


CEO Interview with Andrew Skafel of Edgewater Wireless
by Daniel Nenni on 08-01-2025 at 6:00 am

As the demand for high-capacity, low-latency wireless networks explodes across residential, enterprise, and industrial environments, a Canadian innovator is quietly reshaping the way Wi-Fi works—from the silicon up. Edgewater Wireless (TSXV:YFI/OTC:KPIFF), headquartered in Ottawa, is pioneering a transformative approach to wireless connectivity with its patented Wi-Fi Spectrum Slicing™ technology and an AI-enhanced roadmap that’s drawing attention from industry heavyweights.

We sat down with Andrew Skafel, CEO of Edgewater Wireless, to discuss the company’s impressive recent milestones—including selection to Canada’s prestigious FABrIC semiconductor program, backing from Silicon Catalyst, and entry into Arm’s Flexible Access (AFA) Program. We also dove into Edgewater’s vision for AI-powered spectrum optimization and how their “standards-leading” strategy is carving out a unique path in a crowded wireless landscape.

Andrew, thanks for joining us. Edgewater Wireless has had a banner year. Let’s start with the basics—what makes Edgewater different from the rest of the Wi-Fi ecosystem?

Thanks—it’s great to be here. At Edgewater, we’ve taken a fundamental departure from traditional Wi-Fi design. Instead of incrementally optimizing the existing single-channel-per-radio architecture, we pioneered Wi-Fi Spectrum Slicing, which enables multiple concurrent channels in the same frequency band—essentially “slicing” the spectrum to maximize capacity and performance.

This isn’t just theoretical. Our patented silicon architecture can deliver 10x performance gains and up to 50% lower latency—even on legacy client devices. We’re not asking the world to adopt a new standard or wait for new device rollouts. We’re enhancing performance from the infrastructure side, and we’re fully aligned with current and evolving Wi-Fi standards.

That’s impressive. Let’s talk about the AI element. You recently announced the initiation of prototyping an AI subsystem using Arm® technology. What does that mean for Wi-Fi?

It’s a big step for us—and for Wi-Fi, more broadly. We’ve entered the Arm Flexible Access Program, which gives us cost-effective access to industry-proven Arm IP. With this, we’re prototyping our next-generation Wi-Fi baseband chip using Arm Cortex® CPUs, and we’re exploring Arm Ethos™ NPUs for on-chip AI acceleration.

Why does this matter? Because we’re bringing real-time, AI-driven spectrum engineering directly to the network edge. Our patent-pending machine learning algorithms are designed to do things Wi-Fi’s never done before—mitigate congestion, manage spectrum allocation in real-time, and autonomously reconfigure channel/link density for optimal throughput—all within the chipset. This is a paradigm shift that will allow networks to self-optimize based on real-time user and device behavior.

Wi-Fi is notoriously messy in dense environments. Are you saying this AI-driven approach can solve that?

Exactly. Traditional Wi-Fi is reactive: when collisions and interference occur, it has limited ability to mitigate them, and often does so ineffectively. Our AI subsystem is proactive, acting on data received from the coverage-area environment — delivering a significant boost to overall QoS and transforming the user experience. Imagine your Wi-Fi recognizing a spike in video traffic and re-allocating spectrum capacity to handle the load—or rerouting traffic away from interference without any human intervention.

This is particularly crucial in environments like MDUs (multi-dwelling units), enterprise campuses, industrial IoT deployments, and even in smart homes. As device counts continue to explode, intelligent spectrum management becomes the key differentiator.

You will get unparalleled Wi-Fi QoS for all devices with Wi-Fi Spectrum Slicing.
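To illustrate the kind of proactive reallocation described above, here is a toy Python sketch of a controller that redistributes capacity across sliced channels in proportion to observed load. It is purely conceptual and assumes nothing about Edgewater’s patented algorithms, chipset interfaces, or the actual Spectrum Slicing implementation.

```python
# Toy illustration only: shift capacity between sliced channels based on observed load.
# Not Edgewater's algorithm; channel names, units, and budget are made-up assumptions.
def rebalance(channel_load, total_units=8):
    """Assign capacity units to each sliced channel in proportion to observed load."""
    total = sum(channel_load.values()) or 1
    alloc = {ch: max(1, round(total_units * load / total)) for ch, load in channel_load.items()}
    # Trim if rounding over-allocated the budget.
    while sum(alloc.values()) > total_units:
        busiest = max(alloc, key=alloc.get)
        alloc[busiest] -= 1
    return alloc

# A spike in video traffic on channel 2 pulls capacity toward it on the next cycle.
print(rebalance({"ch0": 10, "ch1": 15, "ch2": 70}))
```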

You mentioned prototyping in Arm’s ecosystem. What advantages does that bring?

Arm has created an incredibly robust platform for innovation. Their CPUs and NPUs are energy-efficient, scalable, and deeply supported, which accelerates our time to market and de-risks development. More importantly, being part of the Arm ecosystem validates our approach and opens doors for deeper industry collaboration.

We’re not building a niche chipset—we’re creating a standards-aligned, AI-enabled platform that can be licensed by leading silicon players or deployed by OEMs, service providers, and enterprises. Working with Arm helps us do that at scale.

That leads us to another major milestone—your inclusion in the Canadian government’s FABrIC program. What does that support mean for Edgewater?

The Government of Canada’s FABrIC initiative is a game-changer. For Edgewater, it represents a national vote of confidence in our vision to lead the next wave of intelligent, AI-enabled wireless innovation. Managed by CMC Microsystems, FABrIC is a five-year $223M program to accelerate the commercialization of semiconductor-based processes and products. We’re proud to be one of the first recipients. The program provides strategic funding that allows us to de-risk R&D and accelerate commercialization of our next-gen AI-enabled Wi-Fi chipsets.

This isn’t just about money—it’s about national competitiveness. Canada has world-class talent and ideas, but turning those into commercially viable silicon solutions for the worldwide market requires deep support. FABrIC recognizes that and is helping position Canada as a global player in intelligent connectivity.

You’ve also secured support from Silicon Catalyst, the world’s only incubator focused exclusively on semiconductors. What has that partnership unlocked?

Joining Silicon Catalyst has given us access to a global network of mentors, corporate partners, and critical infrastructure for silicon development. They’ve helped hundreds of semiconductor startups navigate the journey from concept to production—and their backing speaks volumes about our technology and market potential.

The Silicon Catalyst ecosystem is already helping us refine our product-market fit, prepare for mass manufacturing, and navigate relationships with fabs and IP providers. It’s a massive accelerant for our roadmap.

Let’s step back a bit. Where do you see Edgewater Wireless headed over the next 12–24 months?

Over the next 12 to 24 months, Edgewater Wireless is laser-focused on commercializing our next-generation Wi-Fi Spectrum Slicing silicon under the PrismIQ™ product family. This family will span three high-value, global market segments—residential, enterprise, and Industrial Internet of Things (IIoT)—each with tailored variants designed to optimize performance, capacity, and reliability for dense device environments.

We’re already well through the ‘beta’ prototyping phase. Key upcoming milestones in our silicon realization roadmap include:

  • Prototyping and silicon realization of our next-gen AI-enabled baseband is underway, with sampling beginning in early 2026. For parties interested in early access to the PrismIQ product family, development platforms will be made available to select partners in the first quarter of 2026.
  • Partnering with Tier-1 industry players to accelerate market deployment through licensing and joint development initiatives, ensuring broad industry alignment and faster paths to integration.
  • Scaling our software stack and AI-driven algorithms to ensure seamless compatibility with evolving standards like Wi-Fi 7—and laying the groundwork for next-gen protocols including Wi-Fi 8.
  • Leveraging support from strategic partners and government-backed programs, including Silicon Catalyst, Arm, CableLabs, and FABrIC, to fast-track innovation and market readiness.

Interested and qualified service providers and equipment vendors are encouraged to contact us to gain early access to the PrismIQ product family. Let’s shape the future of wireless, together.

There’s a lot of buzz around “standards-leading” versus “standards-following.” How does Edgewater view its role in the Wi-Fi standards ecosystem?

We’re deeply involved in the standards community, and we build our technology with alignment in mind. But we’re also pushing the boundaries of what those standards enable.

Spectrum Slicing is a great example—it fits within existing standards yet unlocks next-level performance by reimagining how the physical layer handles channelization and interference. That’s what we mean by “standards-leading.” We respect the standards, we align with them—but we’re also creating new categories of capability within them.

It’s not about breaking compatibility—it’s about enhancing what’s possible, without requiring changes on the client side. That’s key to enabling broad adoption.

Final question—what’s the big vision here? What does success look like for Edgewater Wireless?

Success for us means redefining Wi-Fi—not just making it faster, but smarter. Our goal is to embed AI-enabled Spectrum Slicing into the heart of Wi-Fi infrastructure and make intelligent wireless connectivity the norm, not the exception.

We believe Wi-Fi should be as dynamic as the environments it serves, and with our technology, it can be. Whether it’s in a smart home, an enterprise network, or a factory floor, we want to be the intelligence layer that ensures every device gets the performance it needs, when it needs it.

We’re not just solving for today—we’re building the wireless fabric of the future.

About Edgewater Wireless (TSXV: YFI / OTC: KPIFF)

We make Wi-Fi. Better.

Edgewater Wireless delivers unmatched Wi-Fi QoS—bar none—by intelligently mitigating congestion, managing spectrum allocation in real-time, and autonomously reconfiguring channel and link density—driving economic gains for service providers and their customers through reduced churn, improved efficiency, and high-performance connectivity in dense environments. Redefining Wi-Fi from the silicon up, Edgewater’s patented, AI-powered Spectrum Slicing platform—delivered through the PrismIQ™ product family—breaks the limits of legacy Wi-Fi by enabling multiple concurrent channels in a single band. This delivers 10x performance and up to 50% lower latency, even for legacy devices. With 26 patents and a fabless model, Edgewater is transforming the economics of Wi-Fi for service providers, OEMs, and enterprises—powering scalable, standards-aligned/leading connectivity across residential, enterprise, and Industrial IoT markets. Edgewater is building the intelligent wireless foundation for the next era of global connectivity.

Visit https://edgewaterwireless.com

Andrew Skafel is a recognized leader in wireless and next-generation Wi-Fi, driving innovation as President and CEO of Edgewater Wireless. Under his leadership, the company has pioneered Wi-Fi Spectrum Slicing, revolutionizing high-density wireless performance. With over two decades in telecom and technology, Andrew has led product development, strategy, and key industry partnerships. His customer-focused vision ensures Edgewater’s patented solutions address real-world connectivity challenges. A sought-after expert on Wi-Fi innovation, Andrew continues to shape the future of wireless communications, positioning Edgewater Wireless as a global leader in scalable, high-performance networking solutions.

Also Read:

CEO Interview with Jutta Meier of IQE

Executive Interview with Ryan W. Parker of Photonic Inc.

CEO Interview with Jon Kemp of Qnity


Microsoft Discovery Platform: Revolutionizing Chip Design and Scientific Research
by Admin on 08-01-2025 at 6:00 am

At a recent conference session on July 9, 2025, Prashant Varshney, head of the Silicon and Physics Industry Vertical for Microsoft’s Discovery and Quantum Division, unveiled the transformative potential of the Microsoft Discovery Platform. This innovative platform, announced at Microsoft’s Build event, aims to redefine chip design and scientific research by integrating generative AI, quantum computing, and high-performance computing (HPC) into a cohesive ecosystem. With its focus on scalability, interoperability, and IP security, the platform addresses the evolving needs of R&D teams across industries, particularly in semiconductors.

The rapid pace of technological innovation, driven by generative AI, has shifted the landscape from experimental to functional applications. Varshney highlighted three key trends reshaping engineering: the pervasive influence of generative AI, the rise of domain-specific models, and the imminent impact of quantum computing. These advancements are pushing R&D teams to meet heightened expectations, balancing short-term cost efficiencies with long-term goals like market disruption and new revenue streams. However, current systems often fall short, relying on bespoke, loosely integrated tools that are difficult to scale or secure.

The Microsoft Discovery Platform tackles these challenges by offering a unified, AI-centric solution. Built on Azure’s scalable infrastructure, it integrates AI services, HPC, and data management to support industries like chemistry, material science, life sciences, and semiconductors. At its core is the Science Co-Pilot, a natural language interface that transcends traditional chatbot functionality. This orchestration layer understands domain-specific requirements and design goals, enabling context-aware decision-making and execution plans. By leveraging a knowledge graph termed the Science Bookshelf and an HPC-optimized Science Supercomputer, the platform empowers researchers to manage complex workflows efficiently.

A key strength of the platform is its ecosystem approach. Microsoft collaborates with EDA vendors like Synopsys, research labs, and system integrators to ensure seamless integration of existing tools and workflows. This interoperability is critical for semiconductor design, where commercial EDA tools are indispensable. The platform also supports specialized agents, such as those for planning, orchestration, or data processing, which can be customized by partners or users. These agents, grounded in domain-specific data like product documentation and prior designs, enable precise, context-driven insights.

Varshney showcased practical applications, including a project with Pacific Northwest National Labs to develop a battery electrolyte with 70% less lithium, achieved by narrowing 32 million combinations through AI-driven simulations. Another example involved discovering a novel data center coolant, free of harmful PFAS chemicals, in just ten days. For chip design, the platform demonstrated its prowess by generating a microcontroller for an aircraft instrument landing system. Agents handled tasks like writing specifications, creating RTL, verifying designs, and conducting power, performance, and area (PPA) analysis, all while incorporating insights from the platform’s knowledge graph.

The platform’s ability to abstract hardware complexities ensures future-proofing, accommodating classical compute today and quantum workloads tomorrow. Specialized virtual machines, like EV6 and FXV2, powered by Intel’s fifth-generation Xeon processors, cater to EDA workloads’ unique demands, such as large memory and optimized core counts. Partnerships with Intel and NetApp further enhance performance and storage capabilities.

Despite its autonomous capabilities, human oversight remains integral. Each agent-driven step includes human-in-the-loop validation to ensure accuracy and alignment with project goals. Addressing concerns about training data for advanced nodes, Varshney emphasized that newer models require minimal data, with encouraging results at 16-18 nm nodes. Future challenges, particularly for 2-5 nm designs, will require ecosystem collaboration with foundries.

The Microsoft Discovery Platform is poised to accelerate innovation, reduce time-to-market, and enable breakthroughs in chip design and beyond. By fostering collaboration and integrating cutting-edge technologies, it offers a scalable, secure, and extensible solution for the next generation of scientific discovery.

Also Read:

Google Cloud: Optimizing EDA for the Semiconductor Future

Synopsys FlexEDA: Revolutionizing Chip Design with Cloud and Pay-Per-Use

Perforce and Siemens: A Strategic Partnership for Digital Threads in EDA


CAST Webinar About Supercharging Your Systems with Lossless Data Compression IPs
by Mike Gianfagna on 07-31-2025 at 10:00 am

Much of advanced technology is data-driven. From the cloud and AI accelerators to automotive processing and edge computing, data storage and transmission efficiency are of critical importance. It turns out that lossless data compression is a key ingredient to deliver these requirements.

While there are both software and hardware solutions, hardware-based approaches offer the best balance of throughput, latency, and power. CAST recently presented an informative webinar on this topic with SemiWiki. In case you missed it, a replay is now available; the link is below, but first let’s examine what is discussed in the CAST webinar about supercharging your systems with lossless data compression IPs.

The Presenter

The webinar is presented by Dr. Calliope-Louisa Sotiropoulou, an Electronics Engineer who holds the position of Sales Engineer & Product Manager at CAST. Dr. Sotiropoulou specializes in image, video and data compression, and IP stacks. Before joining CAST, she worked as a research and development manager and an FPGA systems developer for the Aerospace and Defense sector. Dr. Sotiropoulou is very knowledgeable on the topic of data compression and has an easy-to-understand style.

She has a long academic record as a researcher, working on various projects, including the trigger and data acquisition system of the ATLAS experiment at CERN. She received her PhD from the Aristotle University of Thessaloniki.

What is Covered

The full title of the webinar is Unpacking System Performance: Supercharge Your Systems with Lossless Compression IPs. The topics discussed during the 35-minute webinar include:

  • What is lossless data compression, and why do we use it?
  • Differences between software and hardware implementations
  • How to choose the right algorithm for your application
  • Real-life examples: Integration and implementation
  • The CAST approach and IP cores portfolio

The webinar is followed by about 10 minutes of Q&A from the webinar audience that covers some very relevant and interesting topics.

Some Highlights

There is a lot of great information shared by Dr. Sotiropoulou in this webinar. She touches on the pros and cons of various approaches and backs them up with details from real examples. To whet your appetite, here is her summary of why a hardware approach is preferred:

Software is flexible, but:

  • It creates a high CPU load
  • It scales poorly, especially at the high throughputs today’s applications demand
  • It has unpredictable latency

Hardware:

  • Offers deterministic performance: latency, throughput, power
  • Scales to multi-Gbps throughput
  • Lower power consumption
  • Real-time ready
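To put a rough number on the software-side limitation, here is a small, self-contained Python benchmark using zlib on synthetic data. The data mix and compression level are arbitrary assumptions, and results vary by machine, but single-threaded software compression typically lands far below the multi-Gbps rates hardware IP cores sustain.

```python
# Rough single-core software compression benchmark (illustrative assumptions:
# zlib level 6, a 32 MB payload that is half random and half highly compressible).
import os
import time
import zlib

payload = os.urandom(16 * 1024 * 1024) + b"A" * (16 * 1024 * 1024)

start = time.perf_counter()
compressed = zlib.compress(payload, level=6)
elapsed = time.perf_counter() - start

mbps = (len(payload) * 8 / 1e6) / elapsed
print(f"compressed {len(payload)} -> {len(compressed)} bytes "
      f"in {elapsed:.2f}s ({mbps:.0f} Mbit/s on one core)")
```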

She goes into detail on multiple approaches to data compression, illustrating the pros and cons of each. This discussion gets into topics such as optimizing memory size and file size for various problem sets.  

She then discusses the lossless data compression delivered by CAST. Target applications and system integration details are presented, along with specific results for various ASIC and FPGA technologies. She ends with a summary of the CAST approach and key takeaways.

To Learn More

I have just presented a high-level summary of the webinar content. If you are dealing with data-intensive applications, approaches to lossless data compression will definitely be important, and I highly recommend experiencing the complete webinar. You will be glad you did. You can access the webinar replay here.

You can also learn more about CAST’s lossless data compression IP here. And that’s the CAST webinar about supercharging your systems with lossless data compression IPs.

Also Read:

WEBINAR Unpacking System Performance: Supercharge Your Systems with Lossless Compression IPs

Podcast EP273: An Overview of the RISC-V Market and CAST’s unique Abilities to Grow the Market with Evan Price

CAST Advances Lossless Data Compression Speed with a New IP Core


Architecting Your Next SoC: Join the Live Discussion on Tradeoffs, IP, and Ecosystem Realities
by Daniel Nenni on 07-31-2025 at 8:00 am

Designing a system-on-chip (SoC) has never been more complex—or more critical. With accelerating demands across AI, automotive, and high-performance compute applications, today’s SoC architects face a series of high-stakes tradeoffs from the very beginning. Decisions made during the earliest phases of design—regarding architecture, IP selection, modeling, and system integration—can make or break a project’s success.

That’s why SemiWiki is proud to host a live webinar:

“What to Consider When Architecting Your Next SoC: Architectural Tradeoffs, IP Selection, and Ecosystem Realities”

 Thursday, August 14, 2025 | 9:00 AM PDT

This session will feature a practical, fast-paced conversation between two seasoned experts in SoC architecture and IP design:

  • Paul Martin, Global Director of SoC Architecture, Aion Silicon

  • Darren Jones, Distinguished Engineer & Solutions Architect, Andes Technology

Together, they’ll walk through real-world scenarios, decision frameworks, and lessons learned from working with some of the most demanding silicon customers in the world.

Rather than a static presentation, the format is designed as a fireside chat—highlighting the nuance and complexity of early-stage architecture decisions through dialog. Expect candid insights, live Q&A, and audience engagement—not a canned marketing pitch.

Watch the replay

What You’ll Learn:

  • How to weigh architectural tradeoffs when performance, flexibility, and schedule are in tension

  • What questions to ask when selecting IP across multiple vendors

  • The role of modeling, simulation, and emulation in derisking “works-first-time” silicon

  • How system-level decisions (like interconnect width or coherency models) impact overall architecture

  • Where ecosystem support—toolchains, deliverables, and foundry alignment—can determine downstream success

You’ll also gain a deeper understanding of performance, power, and area (PPA) metrics—how to interpret them, and how to avoid common traps when comparing IP blocks. This session goes beyond datasheets to explore how real design teams validate assumptions and make decisions that hold up under pressure.

Whether you’re leading architecture for your next chip, evaluating IP options, or supporting teams through SoC integration, this webinar will sharpen your perspective and provide actionable strategies.

Why Attend Live:

This is a live-only event, and attendees will have the chance to ask questions directly. If your team is facing architectural decisions this quarter—or simply wants to learn how top-tier firms approach system tradeoffs—this is a valuable opportunity to hear from peers in the trenches.

Watch the replay

Speaker Bios:

Darren Jones, Distinguished Engineer and Solutions Architect, Andes Technology

Darren Jones is a seasoned engineering leader with more than three decades of experience in processor architecture, SoC design, and IP integration. Currently a Distinguished Engineer and Solutions Architect at Andes Technology, he helps customers develop high-performance RISC-V–based solutions tailored to their systems, drawing on his deep expertise in system-on-chip design and verification.

Prior to Andes, Darren held senior leadership roles at Esperanto Technologies, Wave Computing, Xilinx, MIPS Technologies, and LSI Logic, where he led teams through multiple successful chip tapeouts—from 7nm inferencing accelerators to complex multi-core and multithreaded architectures. His experience spans architecture definition, RTL design, IP delivery, and full-chip integration.

Darren holds more than 25 patents in processor design and multithreading. He earned his M.S. in electrical engineering from Stanford University and his B.S. with highest honors from the University of Illinois Urbana-Champaign.

Paul Martin, Global Director of SoC Architecture, Aion Silicon

Paul Martin is the Global Director of SoC Architecture at Aion Silicon, where he leads international engineering teams and drives customer engagement across complex semiconductor design projects. With decades of experience in commercial, technical, and strategic roles at companies including ARM and NXP, he has helped bring cutting-edge SoC technologies to market. Martin is known for his ability to bridge technical innovation with business value across Europe, North America, and Asia.

Watch the replay

Also Read:

The Sondrel transformation to Aion Silicon!

2025 Outlook with Oliver Jones of Sondrel

CEO Interview: Ollie Jones of Sondrel


cHBM for AI: Capabilities, Challenges, and Opportunities
by Kalar Rajendiran on 07-31-2025 at 6:00 am

cHBM Panelists at Synopsys Executive Forum

AI’s exponential growth is transforming semiconductor design—and memory is now as critical as compute. Multi-die architecture has emerged as the new frontier, and custom High Bandwidth Memory (cHBM) is fast becoming a cornerstone in this evolution. In a panel session at the Synopsys Executive Forum, leaders from AWS, Marvell, Samsung, SK Hynix, and Synopsys discussed the future of cHBM, its challenges, and the collective responsibility to shape its path forward.

Pictured above: Moderator: Will Townsend, Moor Insights & Strategy; Panelists: Nafea Bshara, VP/Distinguished Engineer, AWS; Will Chu, SVP & GM Custom Cloud Solutions Business Unit, Marvell; Harry Yoon, Corporate EVP, Products and Solutions Planning, Samsung; Hoshik Kim, SVP/Fellow, Memory Systems Research, SK Hynix; John Koeter, SVP, IP Group, Synopsys.

The Rise of cHBM in Multi-Die Design

HBM has already propelled AI performance to new heights. Without it, the AI market wouldn’t be where it is today. The evolution from 2.5D to 3D packaging brings a nearly 8x improvement in bandwidth and up to 5x power efficiency improvement—transformative gains for data-intensive workloads. Custom HBM further optimizes performance, reducing I/O area by at least 25% compared to standard HBM implementations. But with this comes a familiar tension: performance versus interoperability.

The Customization Dilemma

While hyperscalers may benefit from custom configurations—given they control the entire stack—the broader industry risks fragmentation. As panelists noted, every time memory innovation went too custom (e.g., HBM1, RDRAM), interoperability and industry adoption suffered. Without shared standards, memory vendors face viability issues, and smaller players risk exclusion. Custom HBM must not become a barrier to collaboration.

The Need for Speed in Standardization

A major takeaway from the panel was the urgency for faster-moving standards bodies. JEDEC has traditionally led HBM standardization, but with the pace of AI, panelists discussed how a new and more agile standards body could accelerate interoperable HBM frameworks. The industry also needs validated, standardized die-to-die (D2D) IP to preserve ecosystem harmony while scaling performance. The UCIe standard is fast establishing itself as that D2D IP standard.

Implications for Memory Vendors

Memory vendors are in a tough spot. On one hand, custom HBM demands more features and integration flexibility; on the other, it erodes volume leverage and introduces supply chain risks. To stay competitive, vendors must support both standard and semi-custom memory—while collaborating more deeply with SoC architects, EDA tool providers, and packaging experts.

The Power of the Ecosystem

To unlock cHBM’s full potential, ecosystem-wide collaboration is non-negotiable. Synopsys is playing a central role in system-level enablement—offering integrated IP, power modeling, thermal simulation, and system-level co-design tools. Only through coordinated efforts can companies navigate packaging complexity, ensure test coverage, and deliver performant, scalable AI systems.

The 3D Packaging Imperative

3D packaging is the physical foundation of multi-die design. Compared to 2.5D solutions, 3D integration supports significantly higher bandwidth and tighter physical proximity. However, the benefits come with challenges: thermal hotspots, TSV congestion, and signal integrity must be carefully managed. Architects must co-design silicon and packaging to meet AI’s escalating demands.

From Custom to Computational HBM

The panel reached consensus on one transformative idea: the future isn’t just custom HBM. Computational HBM may be a better way to describe cHBM. This paradigm emphasizes workload-aware partitioning across logic and memory, where memory becomes an active participant in AI processing, not just a passive storage layer. Right now, hyperscalers may be the main drivers for cHBM. But, unlike proprietary custom approaches, computational HBM can scale across markets—cloud, edge, automotive—and thrive through standardization and reuse.

Summary

cHBM holds tremendous promise—but how the industry moves forward will determine whether it accelerates innovation or impedes it. With standardization, agile packaging integration, and coordinated ecosystem efforts, computational HBM can power the next generation of intelligent systems. With an ecosystem-aligned vision and execution, the industry will be well on its way to realizing the full potential of multi-die design.

The “cHBM for AI” panel session recording can be accessed on-demand from here.

Also Read:

Podcast EP299: The Current and Future Capabilities of Static Verification at Synopsys with Rimpy Chugh

Design-Technology Co-Optimization (DTCO) Accelerates Market Readiness of Angstrom-Scale Process Technologies

SNUG 2025: A Watershed Moment for EDA – Part 2