
CEO Interview with Dr. Noah Sturcken of Ferric
by Daniel Nenni on 06-30-2025 at 10:00 am


Noah Sturcken is a Founder and CEO of Ferric, with over 40 patents issued and 15 publications on Integrated Voltage Regulators. Noah leads Ferric with a focus on business development, marketing, and new technology development. He previously worked at the AMD R&D Lab, where he developed Integrated Voltage Regulator (IVR) technology.

Tell us about your company

Ferric is a growth-stage technology company that designs, manufactures, and sells chip-scale DC-DC power converters called Integrated Voltage Regulators (IVRs), which are critical for powering high-performance computers. Ferric’s line of IVR products is a family of complete power conversion and regulation systems that can be placed exceptionally close to a processor, or even within the processor package, to significantly reduce system power consumption and area while enabling improved performance. Systems that employ Ferric’s IVRs realize a 20%-70% reduction in solution footprint and bill-of-materials cost with a 10%-50% improvement in energy efficiency. Ferric’s IVR products are being adopted to power next-generation Artificial Intelligence (AI) processors, where Ferric’s market-leading power density and efficiency provide a direct advantage in processor performance.

What problems are you solving?

The intense demand for high-performance computing spurred by recent breakthroughs in AI has driven steep increases in datacenter power consumption. The latest generation of processors developed for AI training uses liquid cooling and requires more than 1,000 Watts per processor, which conventional power supplies struggle to provide, resulting in inefficiency and loss of performance. Soon, processors will consume more than 2,000 Watts each, further straining conventional power delivery solutions. In addition to performance issues, traditional power solutions take up vast amounts of PCB real estate, and doubling the size of the power solution every time power demand doubles is untenable. Next-generation systems must integrate power with significantly better power density, system bandwidth, and energy-saving capability, which is only possible with IVR solutions such as Ferric’s.

What application areas are your strongest?

High-performance processor applications are among our strongest because of the urgent requirements for powering AI workloads. Our products can achieve a solution current density exceeding 4 A/mm2 with conversion efficiency better than 94% and regulation bandwidth approaching 10 MHz. The unique combination of density, conversion efficiency, and regulation bandwidth available from Ferric’s IVRs allows high-performance processors to receive more power with less waste. Other applications for Ferric’s IVRs include FPGAs and ASICs, which tend to have high supply counts and therefore realize dramatic reductions in board area by integrating voltage regulators into the package.

What keeps your customers up at night?

What keeps our customers up at night is the prospect that their processors will underperform because their power solution does not provide enough power when it’s needed. AI workloads are pushing processor power demands like never before and a company’s competitiveness may boil down to whether they can reliably and efficiently deliver enough power to their processors.

What does the competitive landscape look like and how do you differentiate?

Ferric’s technology team consists of experts who have been leading the integration of switched-inductor power converters with CMOS for the past 15 years. Ferric’s technology is 10x denser than the next closest option on the market and is readily available to customers in convenient, chip-scale IVR modules or through Ferric’s partnership with TSMC. Ferric’s patented magnetic inductor and power system technology enables a remarkable improvement in IVR efficiency and integration density, delivering a substantial advantage to Ferric’s customers.

What new features/technology are you working on?

Greater power density, higher conversion efficiency, faster regulation bandwidth and better power integration options. High-performance computing systems are continuously pushing integration and power levels, so we must do the same with our IVRs. We are accomplishing this by driving our designs and magnetic composites even further while working closely with our customers to integrate our products with their systems.

How do customers normally engage with your company?

Similar to other power module vendors, we provide datasheets, models, evaluation boards, and samples to facilitate our customers’ evaluation and adoption of Ferric’s IVRs. Our applications team provides direct support to customers as they progress through evaluation, adoption, and production. We are experienced in supporting our customers in a wide variety of ways: in addition to supporting our devices, we perform power integrity analysis for customer systems, assist with layout and integration schemes, and offer thermal solution options and support for numerous integration methods, ranging from PCB attach to co-packaging with the processor.

Contact Ferric

Also Read:

CEO Interview with Vamshi Kothur of Tuple Technologies

CEO Interview with Yannick Bedin of Eumetrys

The Sondrel transformation to Aion Silicon!


Jitter: The Overlooked PDN Quality Metric
by Admin on 06-30-2025 at 6:00 am


Bruce Caryl is a Product Specialist with Siemens EDA

The most common way to evaluate a power distribution network is to look at its impedance over the effective frequency range. A lower impedance will produce less noise when transient current is demanded by the IC output buffers. However, this transient current needs to arrive at the same time for every transition, or jitter will be produced that limits the maximum operating speed of the interface.

While not typically evaluated in PDN design, jitter can have a significant effect on the timing margins on single-ended nets found in interfaces such as double data rate (DDR) memory, limiting the maximum operating speed.

Jitter provides a metric for evaluating the quality of a PDN, since reducing it can improve the performance of a data-driven interface. In this article we describe a simulation methodology to automatically measure the jitter caused by a PDN and use the results to evaluate the quality and effectiveness of the PDN decoupling.

The first step in measuring the jitter induced by a PDN is to create a good electrical model of the PDN that captures all its electrical characteristics, such as the frequency response of the decoupling capacitors, the mounting inductance of the ICs, and the spreading inductance of the planes. This can be done with a 3D electromagnetic field solver, a hybrid full-wave solver with simplifications that allow it to handle a large power network structure. This type of model is often an S-parameter model with ports at the IC of interest and at the VRM connection.
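As a quick illustration, here is a minimal Python sketch of how such an extracted model might be inspected; scikit-rf is an open-source stand-in, not the tool used in the article, and the file name pdn_model.s2p (port 1 at the IC, port 2 at the VRM) is hypothetical:

```python
import numpy as np
import skrf as rf

pdn = rf.Network('pdn_model.s2p')   # S-parameter model from the field solver
z11 = pdn.z[:, 0, 0]                # self-impedance seen at the IC port
f = pdn.f                           # frequency points in Hz

# Peak impedance magnitude over the band where the PDN should be effective.
mask = f <= 125e6
print(f"Peak |Z11| up to 125 MHz: {abs(z11[mask]).max():.4f} ohm")
```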

The PDN model is then placed in a schematic and connected to a VRM model at the input and a driver current model as the load. The VRM model should be a simplified representation of the output impedance of the VRM, covering the range of frequencies below where the main decoupling capacitors are effective. The driver current model produces current transitions that ramp linearly with time, and the current waveform follows a pseudo-random bit sequence so that a variety of frequencies are covered.

Finally, we need to measure the jitter produced by the varying driver current. We use the VHDL-AMS behavioral language to create a model that measures the jitter between the driver current waveform and the resulting waveform produced at the output of the PDN. The model keeps track of the largest jitter, as well as the generated voltage noise, and reports both in the output waveforms.
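To make the measurement concrete, here is a hedged Python equivalent of that bookkeeping; the article’s actual model is written in VHDL-AMS, and the function names below are invented:

```python
import numpy as np

def crossings(t, x, level):
    """Interpolated times where waveform x crosses 'level' on rising edges."""
    i = np.where((x[:-1] < level) & (x[1:] >= level))[0]
    frac = (level - x[i]) / (x[i + 1] - x[i])
    return t[i] + frac * (t[i + 1] - t[i])

def max_jitter(t, stimulus, response):
    """Worst-case spread of edge delays between two waveforms."""
    ref = crossings(t, stimulus, 0.5 * (stimulus.min() + stimulus.max()))
    out = crossings(t, response, 0.5 * (response.min() + response.max()))
    n = min(len(ref), len(out))
    skew = out[:n] - ref[:n]          # per-edge delay through the PDN
    return skew.max() - skew.min()    # peak-to-peak jitter across all edges
```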

Interpreting PDN Performance

Once the testbench has been created, it is easy to substitute various PDN models and then quickly determine how much jitter each PDN implementation introduces. You can add or remove decoupling capacitors and see what the impact would be. You can also experiment with different capacitor values to see which combination is best.

One of the challenges with setting up the simulation is determining what the data rate and edge rate should be when the stimulus is directly connected to the PDN. In the real design, the IC has additional decoupling due to the package and die capacitance. We could add that to our model, but that information is often hard to come by. As a compromise, we will assume that for our DDR4 power net example (1.2 V), the edge rate is slowed to 200 ps by the package and die. For the data rate we will use 4 ns per UI, which corresponds to a 125 MHz Nyquist frequency. This is near the upper frequency limit in which we expect the PDN to be effective. The PRBS stimulus will then produce 4 ns transitions and many sub-harmonics, stimulating the PDN at a variety of frequencies.
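A sketch of generating that stimulus in Python, using the numbers above (4 ns UI, 200 ps edges, PRBS9); the time step and the moving-average edge shaping are my assumptions:

```python
import numpy as np

def prbs9(nbits):
    """PRBS9 bits from the x^9 + x^5 + 1 linear-feedback shift register."""
    state, bits = 0x1FF, []
    for _ in range(nbits):
        fb = ((state >> 8) ^ (state >> 4)) & 1
        state = ((state << 1) | fb) & 0x1FF
        bits.append(fb)
    return np.array(bits, dtype=float)

UI = 4e-9        # 4 ns per unit interval -> 125 MHz Nyquist
EDGE = 200e-12   # edge rate slowed by package and die capacitance
DT = 10e-12      # simulation time step (assumed)

wave = np.repeat(prbs9(511), int(UI / DT))
# Smooth the ideal transitions into ~200 ps edges with a moving average.
taps = int(EDGE / DT)
current = np.convolve(wave, np.ones(taps) / taps, mode='same')
t = np.arange(current.size) * DT
```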

Figure 1 shows the maximum jitter (maxJitter), the skew per edge transition, and the PRBS9 data pattern. After about 1.0 µs, the jitter does not increase significantly for the applied data pattern. The maximum jitter is shown to be about 6.6 ps for both the rising and falling edges.

Figure 1 – Accumulated jitter from PRBS9 data pattern.

We can also display the noise generated at the BGA pins (blue) caused by the stimulus (red), shown in Figure 2.

Figure 2 – Noise generated at BGA pins due to the 1 V stimulus pattern.

We can now use this technique to compare multiple PDNs to see how they perform. First, we extract the frequency domain models for three decoupling configurations and look at the Z-parameter (impedance versus frequency) plots, as shown in Figure 3. The green plot is the actual decoupling used on the 0.85 V power in a working design. The blue plot is the impedance with all the 100 µF and 4.7 µF caps removed, and the red plot is an optimized profile, which has a higher impedance magnitude but a smoother, flatter shape.

Figure 3 – Three example PDNs plotted as impedance versus frequency.

In Figure 4, we compare the jitter produced by the three different PDNs, and we see that the lowest jitter (5.6 ps) comes from the optimized PDN (red), which has the flattest impedance curve. The next lowest jitter (7.5 ps) is from the actual PDN as designed (green). When we remove some capacitors but keep a similar profile, the impedance and the jitter (8.5 ps) both go up (blue).

Looking at the noise amplitude as a percentage of the signal (maxv_percent), we see a direct correlation between the impedance and the noise induced by the PDN, as expected. If we look at impedance as the only quality metric for the PDN, we might conclude that the lowest impedance PDN has the best performance. However, we see that while the noise amplitude is lower, the jitter is higher.

The PDN was optimized by selecting capacitors that just met a flat impedance profile. This flatter profile also has a more consistent phase shift over the frequency range, so the edges for all data transitions tend to be more aligned and thus produce less jitter.

You may have wondered whether a flat impedance profile is better than a so-called “deep V” profile, where most of the caps have the same value. In this case, it appears that the flatter profile produces better jitter performance, which may be an important consideration for output data switching signals.

Figure 4 – Maximum jitter and noise voltage percentage for three example PDNs.

So, the next time you are thinking about how robust your PDN is, consider how well it can supply current at the right time across all frequencies. PDN-induced jitter is another factor that can limit a high-speed design’s performance.

Please download the full white paper, Evaluating a PDN based on jitter, to learn more about this methodology and to see how adding capacitors can sometimes create even more jitter.

Bruce Caryl is a Product Specialist focused on board analysis products, including signal and power integrity, and mixed signal analysis. Prior to this role, he worked in product and technical marketing, consulting, and applications engineering. Bruce has authored several white papers on analysis techniques based on the needs of his customers. He began his career as a design engineer where he did ASIC and board design.

Also Read:

DAC News – A New Era of Electronic Design Begins with Siemens EDA AI

Podcast EP293: 3DIC Progress and What’s Coming at DAC with Dr. John Ferguson and Kevin Rinebold of Siemens EDA

Siemens EDA Outlines Strategic Direction for an AI-Powered, Software-Defined, Silicon-Enabled Future


Facing the Quantum Nature of EUV Lithography
by Fred Chen on 06-29-2025 at 8:00 am


I have examined the topics of stochastics and blur in EUV lithography for quite some time now [1,2], and I am happy to see that others are pursuing this direction seriously as well [3]. As advanced-node half-pitch dimensions approach 10 nm and below, the size of the molecules in the resist becomes impossible to ignore for adequate modeling [3,4]. In other words, EUV lithography must face its quantum nature.

Table 1 compares edge dose fluctuations for key cases in DUV, Low-NA EUV, and High-NA EUV lithography [5]. While DUV at a standard dose of 30 mJ/cm2 shows no significant dose fluctuations down to the smallest practical half-pitch, EUV shows well over 50% fluctuation (3 sigma) even at 60 mJ/cm2. Such a large dose fluctuation is expected to result in severe edge placement error, leading to roughness, linewidth errors, and/or feature placement errors. The key aggravating factors are: (1) an EUV photon has ~14 times the energy of a DUV photon, so the photon density is much lower even at double the dose; (2) resist thickness scales with pitch, to avoid large aspect ratios in patterning, leading to reduced absorption; (3) EUV resists targeted for higher resolution have smaller molecular sizes, leading to a smaller photon collection area.
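As a back-of-envelope view of where those fluctuations come from (my notation, not taken from Table 1 itself):

```latex
% N = mean absorbed photons in a molecular pixel of area A, for incident dose D,
% photon energy E_ph, absorption coefficient alpha, and resist thickness t:
N = \frac{D\,A\,\left(1 - e^{-\alpha t}\right)}{E_{\mathrm{ph}}},
\qquad
\text{relative fluctuation } (3\sigma) = \frac{3}{\sqrt{N}}
```

Since an EUV photon carries ~14x the energy of a DUV photon, N per pixel is far smaller for EUV even at double the incident dose, and the 3/√N fluctuation balloons.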


Table 1. Half-pitch edge dose fluctuations within a molecular pixel for DUV, Low-NA EUV, and High-NA EUV. An incident dose of 30 mJ/cm2 is assumed for DUV, 60 mJ/cm2 for EUV.

The photon absorption is not the final step in the resist exposure. An EUV photon will produce a photoelectron which then proceeds to migrate and produce more electrons, known as secondary electrons [2]. These electrons can in fact migrate distances greater than the molecular size [3]. As a result, the reaction of a molecule at a given location can be affected by the electrons resulting from absorption of photons at different locations, perhaps even several nanometers away [1].

Thus, the modeling needs to be addressed in stages. First, the absorption of EUV photons within the size of a molecule (~2 nm [3,4]) needs to be addressed. Then, the effect of EUV absorption at different locations producing (random numbers of) secondary electrons, all of which affect a given exposure location, must also be taken into account. For chemically amplified resists, the acid blur should also be included for comprehensive modeling.

As a reference case, we will examine the 40 nm pitch (20 nm L/S binary grating) with a 0.33 NA EUV system. I’ll assume a chemically amplified resist with 40 nm thickness and absorption coefficient 5/um. Figure 1 shows the photon absorption density plot (top view), using 2 nm as the molecular pixel size, also representing the molecular size for the EUV resist. A 60 mJ/cm2 incident dose is assumed, which would result in 11 mJ/cm2 absorbed, averaged over the pitch. A threshold of 31 absorbed photons/nm2 would nominally correspond to the half-pitch linewidth.

Figure 1. Absorbed photon density in 40 nm thick EUV resist with absorption coefficient 5/um and incident dose 60 mJ/cm2 (averaged over pitch) for imaging a 20 nm L/S binary grating. The pixel size is 2 nm.
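The bookkeeping for this reference case can be checked with a few lines of Python; this is a hedged sketch that assumes a uniform dose averaged over the pitch, ignoring the aerial image modulation:

```python
import numpy as np

H, C = 6.626e-34, 3.0e8                   # Planck constant, speed of light
E_PH = H * C / 13.5e-9                    # EUV photon energy ~1.47e-17 J (~92 eV)

dose = 60e-3 * 1e4                        # 60 mJ/cm^2 -> J/m^2
absorbed_frac = 1 - np.exp(-5.0 * 0.040)  # alpha = 5/um, thickness = 40 nm
print(f"absorbed dose ~ {60 * absorbed_frac:.1f} mJ/cm^2")   # ~10.9, i.e. ~11

pixel_area = (2e-9) ** 2                  # 2 nm molecular pixel
n_mean = dose * absorbed_frac * pixel_area / E_PH
print(f"mean photons per pixel ~ {n_mean:.1f}")              # ~30

# Poisson shot noise gives a stochastic absorbed-photon map like Figure 1.
rng = np.random.default_rng(0)
photons = rng.poisson(n_mean, size=(64, 64))
print(f"relative 3-sigma fluctuation ~ {3 / np.sqrt(n_mean):.0%}")  # ~55%
```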

Each photon is expected to produce up to 9 electrons [1], though the yield can also be lower. These electrons are not produced all at once but involve some electron migration, so an electron blur parameter is often used to characterize this phenomenon. Contrary to the common assumption, we should not expect this blur to be uniform throughout the resist [1,6], due to the density inhomogeneity of the resist itself. Thus, a random number generator can be used to simulate the local electron blur parameter (Figure 2).

Figure 2. Electron blur is generated as a random number to represent local variation, due to natural material inhomogeneity.

The blur parameter actually describes the probability of finding an electron that has migrated a given distance. For the current example, we use the probability function shape shown in Figure 3, formed from the difference of two exponential functions [7]. By convolving the local electron blur probability function with the absorbed photon density, then multiplying by the electron yield (taken to be a random number between 5 and 9), we obtain the expected migrated electron density (Figure 4). Owing to the extra randomness of the electron yield on top of the Poisson noise from photon absorption, the electron density plot shows enhanced non-uniformity.

Figure 3. The shapes for the electron blur distance probability functions used in the modeling for this article. The shapes are generated from the difference of two exponential functions, one with the labeled long-range decay length (1-5 nm), and one with 0.2 nm decay length, so that the probability is zero at zero distance.

Figure 4. The migrated electron density is obtained by convolving the absorbed photon density of Figure 1 with the local electron blur probability functions (up to 8 nearest neighbor pixels) from Figures 2 and 3, then multiplying by the local random electron yield.

Since the resist is a chemically amplified resist, acids are produced by the electrons. These acids finally render resist molecules dissolvable in developer; this is known as deprotection [8]. The acids also diffuse, with the corresponding acid blur probability function taken to be a Gaussian. Figure 5 shows the final acid density plot after convolving a sigma=2.5 nm Gaussian acid blur probability function with the electron density. There is a smoothing effect from the acid blur, so some of the noise from the electron density plot seems to have diminished. However, there is still density variation that remains, since the seed of the acid generation is still random, i.e., the local electron density. Thus, the edge placement error is easily 10% of the linewidth.

Figure 5. The acid density is obtained by convolving a 2.5 nm sigma Gaussian acid blur probability function with the migrated electron density of Figure 4 (in a 5 x 5 pixel area).
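Here is a hedged Python sketch of this staged pipeline (Figures 1 through 5). It simplifies the article’s per-pixel local blur to one randomly drawn kernel per field, and the array sizes and seed are arbitrary:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
PIX = 2.0  # molecular pixel size in nm

def blur_kernel(decay_nm, size=9):
    """Difference-of-exponentials electron blur, zero probability at r = 0."""
    ax = (np.arange(size) - size // 2) * PIX
    r = np.hypot(*np.meshgrid(ax, ax))
    p = np.maximum(np.exp(-r / decay_nm) - np.exp(-r / 0.2), 0.0)
    return p / p.sum()

# Stand-in for the absorbed photon map of Figure 1 (~30 photons/pixel mean).
photons = rng.poisson(30.0, size=(64, 64)).astype(float)

# The article draws the blur locally; this sketch simplifies to one randomly
# drawn long-range decay length (1-5 nm) for the whole field.
migrated = fftconvolve(photons, blur_kernel(rng.uniform(1.0, 5.0)), mode='same')

# Local random electron yield, 5 to 9 electrons per absorbed photon (Figure 4).
electrons = migrated * rng.uniform(5.0, 9.0, size=photons.shape)

# Chemically amplified resist: Gaussian acid blur, sigma = 2.5 nm (Figure 5).
acid = gaussian_filter(electrons, sigma=2.5 / PIX)
print(acid.mean(), acid.std())
```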

Smoothing with a larger sigma Gaussian would lead to further diminishing of the acid deprotection gradient; this would actually increase sensitivity to the level of deprotection, i.e., a smaller change in dose could wipe out the whole feature (Figure 6).

Figure 6. Smoothing by Gaussian blur with a larger sigma results in a less steep deprotection gradient (orange), which is more susceptible to wiping out features with dose changes. A smaller sigma would be less susceptible (blue).

The stochastic edge fluctuations begin to consume the whole feature as the exposed or unexposed linewidth shrinks (Figure 7). Essentially, the full-fledged stochastic defectivity shows up.

Figure 7. (Left) 10 nm exposed line on 40 nm pitch; (right) 10 nm unexposed line on 40 nm pitch, under same resist exposure conditions as Figure 1.

This level of edge fluctuation is specific to EUV. The fundamental way to counteract edge dose fluctuation is to increase the dose, but to reduce the fluctuation 10x, the dose needs to be increased 100x. Referring back to Table 1, to get the 3 sigma fluctuation down to 7%, the dose would easily approach 1000 mJ/cm2. That is clearly not feasible.
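The underlying scaling is simple Poisson statistics: the relative fluctuation falls only as the square root of dose (sketching in my notation):

```latex
\frac{3\sigma}{N} = \frac{3}{\sqrt{N}} \propto \frac{1}{\sqrt{D}}
\quad\Longrightarrow\quad
D_2 = D_1 \left( \frac{(3\sigma/N)_1}{(3\sigma/N)_2} \right)^{2}
```

A 10x reduction in fluctuation therefore costs a 100x increase in dose.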

An alternative is to use double patterning, since doubling the pitch would enable 4-beam imaging, which can have higher normalized image log slope (NILS) than 2-beam imaging [9,10]. The dose would not have to be elevated as much. On the other hand, the 80 nm pitch exposure for double patterning is done more cost-effectively and quickly by DUV instead of EUV.

Moreover, 20 nm linewidths on 80 nm pitch with DUV 2-beam imaging at 30 mJ/cm2 look slightly smoother than optimized EUV 4-beam imaging at 60 mJ/cm2 (Figure 8). Besides the higher absorbed photon density per molecular pixel, there is no random electron yield component for DUV. Thus, DUV avoided the “perfect storm” that befell EUV [11], as it met its classical optical resolution limit before even approaching the molecular quantum limit.

Figure 8. (Top left) 20 nm DUV exposed line on 80 nm pitch; (top right) 20 nm EUV exposed line on 80 nm pitch; (bottom left) 20 nm DUV unexposed line on 80 nm pitch; (bottom right) 20 nm EUV unexposed line on 80 nm pitch. The DUV (3/um) and EUV (5/um) chemically amplified resist thicknesses are both 40 nm.


References:

[1] F. Chen, Impact of Varying Electron Blur and Yield on Stochastic Fluctuations in EUV Resist.

[2] F. Chen, Stochastic Effects Blur the Resolution Limit of EUV Lithography.

[3] H. Fukuda, “Statistics of EUV exposed nanopatterns: Photons to molecular dissolutions,” J. Appl. Phys. 137, 204902 (2025), and references therein; https://doi.org/10.1063/5.0254984.

[4] M. M. Sung et al., “Vertically tailored hybrid multilayer EUV photoresist with vertical molecular wire structure,” Proc. SPIE PC12953, PC129530K (2024).

[5] F. Chen, Stochastic EUV Resist Exposure at Molecular Scale.

[6] G. Denbeaux et al., “Understanding EUV resist stochastic effects through surface roughness measurements,” IEUVI Resist TWG meeting, February 23, 2020.

[7] F. Chen, A Realistic Electron Blur Function Shape for EUV Resist Modeling.

[8] S. H. Kang et al., “Effect of copolymer composition on acid-catalyzed deprotection reaction kinetics in model photoresists,” Polymer 47, 6293-6302 (2006); doi:10.1016/j.polymer.2006.07.003.

[9] C. Zahlten et al., “EUV optics portfolio extended: first high-NA systems delivered and showing excellent imaging results,” Proc. SPIE 13424, 134240Z (2025).

[10] F. Chen, High-NA Hard Sell: EUV Multipatterning Practices Revealed, Depth of Focus Not Mentioned.

[11] F. Chen, A Perfect Storm for EUV Lithography.

This article first appeared on Substack: Facing the Quantum Nature of EUV Lithography


Podcast EP294: An Overview of the Momentum and Breadth of the RISC-V Movement with Andrea Gallo
by Daniel Nenni on 06-27-2025 at 10:00 am

Dan is joined by Andrea Gallo, CEO of RISC-V International, the non-profit home of the RISC-V instruction set architecture standard, related specifications, and stakeholder community. Prior to joining RISC-V International, Gallo worked in leadership roles at Linaro for over a decade. He built Linaro’s server engineering team from the ground up. He later managed the Linaro Datacenter and Cloud, Home, Mobile, Networking, IoT and Embedded Segment Groups and the underlying open source collaborative projects, in addition to driving the company’s membership acquisition strategy as vice president of business development.

Andrea describes the current focus of RISC-V International and where he sees the standard having impact going forward. He details some recent events that have showcased the impact of RISC-V across many markets and applications around the world. Andrea describes the substantial impact RISC-V is having in mainstream and emerging markets. AI is clearly a key part of this, as are automotive and high-performance computing. He explains that some mainstream AI applications have shipped over 1 billion RISC-V cores. The growth of the movement and the breadth of its application are quite impressive.

Contact RISC-V

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Silvaco’s Diffusion of Innovation: Ecosystem Investments Driving Semiconductor Advancements
by Admin on 06-27-2025 at 8:00 am


In Silvaco’s June 2025 Tech Talk, “The Diffusion of Innovation: Investing in the Ecosystem Expansion,” Chief Revenue Officer Ian Chen outlined how strategic partnerships accelerate R&D in semiconductor design and digital twin modeling. As a leading provider of TCAD, EDA software, and SIP solutions, Silvaco positions itself at the forefront of enabling technologies for power devices, memory, photonics, and AI. The talk emphasized innovation as getting customers to adopt new capabilities, using Silvaco’s Fab Technology Co-Optimization (FTCO) platform as a prime example. This approach resonates with the semiconductor industry’s shift toward AI integration, mirroring broader trends in mastering chip complexity.

Chen began with an overview of Silvaco’s growth trajectory. Following its 2024 IPO, the first successful EDA IPO in 20 years, the company has sustained double-digit revenue growth, with 95% derived from customer R&D efforts and 68% from new or expanded commitments. Employing over 300 staff, Silvaco collaborates with top semiconductor firms, establishing new workflows quarterly. Innovation, Chen argued, involves overcoming adoption friction through ecosystem investments.

Focusing on FTCO, a fab assistant deploying AI and ML, Chen described three EDA AI adoption methods: smarter tools for fewer simulations, digital assistants leveraging past experiences, and generative AI for spec-to-design automation. Historically, EDA focused on designers, but complexity now spans fabrication, packaging, and testing. FTCO innovates by unifying physics-based simulation, quantitative analysis, and AI/ML; starting with virtual experiments for data generation; and tailoring interfaces for diverse engineers. This builds digital twins of wafer processes, enhancing yield and efficiency.

Drawing from communication theory, Chen explained diffusion of innovation via five factors: relative advantage, trialability, compatibility, observability, and complexity reduction. FTCO embeds the first two in the product itself, and field application engineers (FAEs) demonstrate workflows. Ecosystems address the latter three: partnering with AI and digital twin initiatives expands utility (compatibility); research institutes and consortia build evidence through papers and conferences (observability); and customer feedback refines tools (reducing complexity).

Examples abound. Silvaco’s decade-long collaborations include universities like Purdue and industry groups, yielding press releases on FTCO applications in memory manufacturing with Micron, advanced displays for AR/VR efficiency, bendable screens, and power semiconductors for faster yields. Ecosystems benefit all products, from EDA to TCAD, fostering innovations like beyond-Six-Sigma reliability for AI chips using ML to cut simulations.

The Q&A, moderated by Gregory McNiff, delved deeper. Partnerships vary in negotiation time (weeks to months) and structure, often without IP transfers, providing tools to consortia for indirect revenue. Silvaco has dozens of active partners, including in China and AI research, selected for alignment and value. They drive customer momentum by offering observability and anticipating R&D needs, such as new materials or manufacturing efficiencies. Chen highlighted how diverse perspectives on shared problems spark innovations, with FAEs as consultants bringing insights back to R&D.

This ecosystem strategy positions Silvaco to tackle industry challenges like talent gaps and skyrocketing costs, akin to Synopsys’ emphasis on shift-left methodologies for power optimization and multi-die designs. By investing in partnerships, Silvaco reduces adoption barriers, accelerating diffusion. As AI workloads demand exponential hardware innovation, such collaborative models ensure sustainable progress, from digital twins to next-gen processes. Ultimately, Silvaco’s approach exemplifies how ecosystem enablement transforms good ideas into widespread breakthroughs, fueling the $383 billion AI chip market by 2032.

Also Read:

Analysis and Exploration of Parasitic Effects

Silvaco at the 2025 Design Automation Conference #62DAC

TCAD for 3D Silicon Simulation


CEO Interview with Vamshi Kothur of Tuple Technologies
by Daniel Nenni on 06-27-2025 at 6:00 am



It was my pleasure to meet with Vamshi Kothur and the Tuple team at #62DAC for a briefing on their Tropos platform and Omni, a new multi-cloud optimizer. The conferences this year have been AI-infused with exciting new technologies, but one of the lingering questions is: how will the existing semiconductor design IT infrastructure support AI-infused EDA tools and complex AI chip designs? And has there ever been a time when time-to-market was more critical?

Vamshi Kothur is a NJ-based cloud and DevSecOps veteran with over 20 years of experience leading large-scale IT transformations at Fortune 100 financial firms and high-growth technology startups. In 2017, he founded Tuple Technologies to close the infrastructure and security gaps that chip-design startups face when racing to tape out next-generation ICs, FPGAs, and AI accelerators on tight budgets.

A frequent speaker at DAC and presenter at Cadence Live, Vamshi mentors early-stage semiconductor founders on building secure, license-aware, elastic cloud infrastructures that protect IP and accelerate innovation.

Tell us about your company

Tuple Technologies is a managed services and cloud automation company focused exclusively on the semiconductor industry. Our flagship platform, Tropos, automates IT infrastructure provisioning and DevSecOps for IC, FPGA, and system design workloads—across cloud, hybrid, or on-prem environments.

Tropos simplifies the most complex and error-prone aspects of semiconductor IT—like infrastructure-as-code, CAD license orchestration, job scheduling, and security hardening—so design teams can focus on tapeout, not IT & toolchain configurations. Whether you’re running RTL simulation on a few hundred CPUs or AI/ML workloads on massive GPU clusters, Tropos ensures performance, security, and cost-efficiency at scale.

At DAC 2025 we launched Omni, a multi-cloud optimizer for deep learning and HPC-based workloads, which brings 70–90% cost savings on GPU-heavy jobs through dynamic orchestration and predictive scaling.

What problems are you solving?

We’re solving the invisible but critical infrastructure challenges that slow down chip innovation. At a high level:

– Data Security & Compliance: We protect design IP at every layer, whether in cloud or on-prem environments.

– Cloud Assets & Sprawl: Our clients save up to 90% on compute-heavy jobs through intelligent resource scheduling, especially GPU-heavy AI/EDA workloads.

– Vendor Lock-In: The Tropos platform gives teams freedom from rigid, EDA-vendor-constrained compute environments by supporting heterogeneous toolchains and license models.

– Operational Inefficiencies: We reduce setup time from weeks to hours through automation, and improve turnaround time with real-time telemetry and burst scheduling across clouds.

Put simply, we let design teams focus on building chips—not scripts, servers, or cloud invoices.

What application areas are you strongest?

We specialize in end-to-end infrastructure automation for semiconductor design pipelines. This includes:

– Provisioning, monitoring, and optimization of compute resources for Frontend, Backend, and Post-Tapeout Flows (PTOF).

– ECAD license management, including real-time analytics to help reduce over-provisioning and maintain compliance.

– DevSecOps services for CI/CD, with embedded security protocols tailored to IP-sensitive workflows.

– Multi-cloud orchestration that automatically routes workloads to the most cost-efficient and performant resources—AWS, GCP, or Azure.

– GPU workload optimization, especially for AI/ML-driven verification and simulation flows.

What keeps your customers up at night?

Our customers are semiconductor engineers, CTOs, and startup founders—people who want to innovate but are being bogged down by:

– The complexity of managing hybrid cloud infrastructure with a limited IT, CAD & DevOps team.

– Security and compliance risks, especially around proprietary RTL or AI models.

– CAD license sprawl, where costs balloon and usage data is opaque.

– Unscalable DevOps practices, often relying on hand-crafted scripts that break under load.

– And perhaps most urgently, rising IT costs that outpace budgets.

Tropos and Omni are our answers to these concerns—automation, optimization, and visibility in a single platform.

What does the competitive landscape look like and how do you differentiate?

We see two primary types of competitors:

– Generic IT service providers – skilled in infrastructure, but not tuned for semiconductor workflows, licensing, or toolchains.

– EDA vendors (Synopsys, Cadence, Siemens, etc.) – strong on tools, but limited when it comes to customizable infrastructure and multicloud strategies.

Tuple is different because:

– We bring deep cloud-native and DevSecOps expertise.

– We are hyper-focused on semiconductor design workflows.

– Tropos is not a generic platform—it’s purpose-built for IC/FPGA/system development flows.

– Our “Pay-as-you-go” model fits lean startups just as well as growing mid-sized design teams.

– We also maintain deep technical partnerships and integrations across the cloud and EDA ecosystem.

What new features/technology are you working on?

We’re constantly pushing the envelope on intelligent infrastructure for silicon R&D. Some of our current focus areas include:

– AI/ML-driven automation for workload profiling and infrastructure scaling—especially for deep learning-based PPA optimization.

– Predictive multi-cloud scheduling to optimize across Spot and Reserved capacity in AWS, GCP, Azure and Neoclouds.

– Advanced security automation—integrating zero-trust networking and runtime compliance checks.

– Sustainability analytics to help customers reduce energy use and carbon footprint from large compute jobs.

We want to make cloud infrastructure programmable, secure, and efficient.

How do customers normally engage with your company?

Our customers typically come from the semiconductor ecosystem—startups, fabless companies, and design services firms. Engagement usually starts in one of three ways:

– Direct use of the Tropos platform to automate and optimize their infrastructure for IC design.

– Adopting Omni to control and reduce cloud spend on Compute & GPU AI/EDA jobs.

– Managed service partnerships—where we take ownership of infrastructure, DevSecOps, and cost governance, letting design teams focus on innovation.

We offer tiered service models, so whether you’re a lean startup or scaling up to multiple tapeouts per year, there’s a Tuple Technologies solution that fits.

Closing Thoughts?

Tuple Technologies is quietly powering a wave of semiconductor innovation by making infrastructure invisible. With deep roots in IT automation and a sharp focus on chip design, Tuple is enabling design teams to move faster, spend less, and stay secure—without the pain of managing infrastructure manually.

Contact Tuple Technologies

Tuple Case Study:
Startup best practices for Semiconductors

A Practical Guide to Building Scalable, Secure, and Efficient IC Design Workflows for Start-Ups.

Also Read:

CEO Interview with Yannick Bedin of Eumetrys

The Sondrel transformation to Aion Silicon!

CEO Interview with Krishna Anne of Agile Analog


Webinar – Power is the New Performance: Scaling Power & Performance for Next Generation SoCs
by Mike Gianfagna on 06-26-2025 at 10:00 am

Webinar Scaling Power Performance for Next Generation SoCs

What if you could reduce power and extend chip lifetime without compromising performance? We all know the importance of power optimization for advanced SoCs. Thanks to the massive build-out of AI workloads, power consumption has gone from a cost or cooling headache to an existential threat to the planet if current power consumption can’t be managed. Against this backdrop, proteanTecs recently held an informative webinar on the topic of power and performance optimization.

The discussion goes beyond adaptive techniques, better design strategies and enhanced technology. A method of performing per-chip, real-time optimization is described and the impact on power consumption and device reliability is dramatic. A replay link is coming but first let’s explore an overview of the proteanTecs webinar – “Power is the New Performance: Scaling Power & Performance for Next Generation SoCs”.

What Was Discussed

Noam Brousard, vice president of solutions engineering at proteanTecs, begins the webinar with a summary of industry best practices and the compromises they represent. He explains that applying a single Vmin to all chips has significant drawbacks. Because of effects such as differences in process variation, usage intensity, system quality, and operating conditions, a single Vmin will often result in wasted power due to an excessive value. It can also lead to performance problems if the operating voltage is set too low.

Current best practices cannot accurately accommodate precise individual chip requirements. He explains that designs must take worst-case conditions into account, and finding the minimal required voltage per chip is very costly from a test-time perspective. Beyond that, the optimal Vmin over the chip’s lifetime will change due to effects such as changing workloads, aging, and stress.

Noam then introduces proteanTecs power reduction applications. These unique technologies provide a strategy to reduce power based on personalized device assessment and real time visibility of actual voltage guard-bands. The figure below provides some of the strategies used and benefits achieved.

proteanTecs Power Reduction Applications

proteanTecs offers power reduction applications to personalize each chip’s minimum voltage requirement efficiently, including real-time visibility of the actual margins from design through production and during lifetime operation. So, over time, adjustments to the voltage supply can be done instantaneously.

A key element of the approach is embedded on-chip monitors that deliver real time information of chip operation and performance. AVS Pro provides reliability and workload-aware adaptive voltage scaling; Prediction-based VDDmin optimization per chip (during production testing) and VDDmin margin-based optimization per system are also part of the overall offering.

VDDmin is the minimum voltage required to achieve correct digital functionality. Voltage applied to a product running in the field, however, needs to account for effects such as aging, temperature, workloads, and environmental conditions. Typically, these require “worst case” guard bands to be built in. In reality, not all of these conditions occur, so these guard bands waste critical power and energy. By employing proteanTecs applications, these guard bands can be minimized and dynamically adjusted in real time. proteanTecs’ unique in-chip Agents provide high-coverage, in-situ monitoring of the actual performance of limiting paths, in mission mode. The workload-aware power reduction application has a built-in safety net for dynamic adjustment should the timing margin fall critically low. Several examples of how the system works are shown, along with a live demonstration.
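For intuition only, here is a minimal sketch of a margin-based adaptive voltage scaling loop with a safety net. This is not proteanTecs’ implementation; every name and threshold below is invented for illustration:

```python
# Hypothetical thresholds and regulator limits, for illustration only.
MARGIN_TARGET_PS = 50.0    # desired timing slack on monitored paths (assumed)
MARGIN_CRITICAL_PS = 10.0  # safety-net threshold (assumed)
VDD_STEP_V = 0.005         # regulator step size (assumed)
VDD_MIN_V, VDD_MAX_V = 0.65, 0.90

def avs_step(vdd: float, margin_ps: float) -> float:
    """One control-loop iteration: trim guard band slowly, react fast."""
    if margin_ps < MARGIN_CRITICAL_PS:
        return min(vdd + 4 * VDD_STEP_V, VDD_MAX_V)  # safety net: jump up
    if margin_ps > MARGIN_TARGET_PS:
        return max(vdd - VDD_STEP_V, VDD_MIN_V)      # excess margin: trim
    return vdd                                       # within band: hold
```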

There is a lot of important detail shared in the webinar. I highly recommend you watch the event replay. To whet your appetite, Noam shows power savings of 8-14% for designs at advanced nodes. The live demonstration shows how the system adapts Vmin based on current conditions and how the system reacts to an under-voltage situation.

Dr. Joe McPherson, CEO of McPherson Reliability Consulting, spoke next, discussing the reliability impact of power reduction. He explains the details of how chip temperature is reduced with power reduction, then explores how reduced temperature impacts chip lifetime. He presents some eye-popping statistics about chip lifetime increase through power-induced temperature reduction. The improvement depends on the failure mechanism (e.g., hot carrier, bias temperature instability, interconnects). He explains how a 15% chip power reduction can translate to a 20-90 percent improvement in chip lifetime. This should get your attention.

Going beyond temperature, McPherson describes the impact of lower voltage and the associated thinner oxides. Here, the impact on device lifetime can be measured in hundreds of percent improvement. Quite impressive.

Alex Burlak, vice president of Test & Analytics at proteanTecs, concludes the webinar with details of how proteanTecs implements prediction-based VDDmin optimization for chip production and VDDmin margin-based optimization for system production. On ATE, typical testing requires multiple passes to identify the minimum voltage at which the chip still passes. Getting the most accurate voltage makes test time long and expensive, so customers must compromise: either run long tests or apply excessive VDD without ideally optimized voltage. proteanTecs offers a technique in which per-chip VDDmin assignment leverages an automatically trained ML prediction model, which eliminates the need to run a full test yet still assigns the correct minimum voltage to achieve the best power optimization.
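As a generic illustration of that idea (not proteanTecs’ model), a stand-in regressor trained on synthetic data shows the flow: learn VDDmin from cheap per-chip readouts, then assign it with a small guard band:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Hypothetical training set: per-chip monitor readouts (margin agents,
# process indicators) versus VDDmin measured the slow way on a learning set.
X_train = rng.normal(size=(500, 8))
y_train = (0.72 + 0.02 * X_train[:, 0] - 0.01 * X_train[:, 2]
           + rng.normal(scale=0.002, size=500))

model = GradientBoostingRegressor().fit(X_train, y_train)

# In production test: predict VDDmin from a quick readout instead of a long
# voltage search, then add a small safety margin (value assumed).
x_new = rng.normal(size=(1, 8))
vddmin = model.predict(x_new)[0] + 0.01
print(f"assigned VDDmin ~ {vddmin:.3f} V")
```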

Additionally, at system-level testing, functional workloads or operations often differ from the assumptions behind ATE testing, which usually relies on structural tests; the guard bands are therefore what make in-field operation safe. Leveraging the Agents to measure the timing margin during functional workloads, proteanTecs can recommend a voltage optimization per chip or per system to further optimize VDDmin. Alex’s presentation is supplemented with live demonstrations of how proteanTecs delivers these capabilities.

Noam concludes with the following compelling points about proteanTecs power reduction solutions:

  • This approach goes beyond traditional AVS by leveraging embedded agents and real-time margin monitoring
  • Optimized power and performance during production and lifetime operation
  • Ensured reliability without risk
  • Lifetime extension of devices – less power, less heat, longer lifespan
  • Already deployed in custom systems, demonstrated up to 14% power savings

To Learn More 

There is a lot of relevant and useful information presented in this webinar. If power, heat and device lifetime are important for your next design, this webinar will help provide many new strategies and approaches. You can access the webinar replay here.  Or you can learn more about AVS Pro™ in this link.

Webinar Presenters

The topics covered in this webinar go deep into the testing, characterization and performance of advanced semiconductor devices. The graphic at the top of this post illustrates some of the significant challenges that are discussed. The team of presenters is highly qualified to discuss these details and did a great job explaining what is possible with the right approach.

Noam Brousard, vice president of solutions engineering at proteanTecs. With over 20 years of experience in system/hardware/software product development and management, consumer electronics, telecom, mobile, IoT systems, and silicon, Noam joined proteanTecs in August 2017, soon after it was founded. Before joining proteanTecs, Noam held a VP R&D position at Vi and senior technical positions at Intel Wireless Innovation Solutions, Orckit, and ECI Telecom. Noam holds an M.Sc. in Electrical Engineering from Tel Aviv University and a B.Sc. in Electrical Engineering from Ben Gurion University.

Dr. Joe McPherson, CEO, McPherson Reliability Consulting. Dr. McPherson is an internationally renowned expert in the field of reliability physics and engineering. He has published over 200 scientific papers, authored the reliability chapters for 4 books, and has been awarded 20 patents. Dr. McPherson was formerly a Texas Instruments Senior Fellow and past General Chairman of the IEEE International Reliability Physics Symposium (IRPS), and still serves on its Board of Directors. He is an IEEE Fellow and Founder/CEO of McPherson Reliability Consulting, LLC. Dr. McPherson holds a PhD in Physics.

Alex Burlak, Vice President of Test & Analytics at proteanTecs. With combined expertise in production testing and data analytics of ICs and system products, Alex joined proteanTecs in October 2018. Before joining the company, Alex was Senior Director of Interconnect and Silicon Photonics Product Engineering at Mellanox. Alex holds a B.Sc. in Electrical Engineering from the Technion, Israel Institute of Technology.

Jennifer Scher, Content and Communications Manager at proteanTecs. Jennifer moderated the event, including an informative live Q&A session from the audience. Jennifer spent over 20 years in product and solution marketing at Synopsys.



Reachability in Analog and AMS. Innovation in Verification
by Bernard Murphy on 06-26-2025 at 6:00 am


Can a combination of learning-based surrogate models plus reachability analysis provide first pass insight into extrema in circuit behavior more quickly than would be practical through Monte-Carlo analysis? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick, HFMV: Hybridizing Formal Methods and Machine Learning for Verification of Analog and Mixed-Signal Circuits, was published at DAC 2018 and has 2 citations. The authors are from Texas A&M University.

While in this series we usually look at innovations around digital circuitry, the growing importance of mixed-signal content in modern designs cannot be overlooked. This month’s pick considers machine learning together with formal methods in AMS to explore reachability of out-of-spec behaviors. The goal is to find out-of-spec performance behavior, normally addressed through Monte Carlo analysis but very expensive across a high-dimensional parameter space. Instead, an ML-based surrogate model plus reachability analysis could be an effective aid for quick-turnaround design exploration, while still turning back to standard methods for full/signoff validation.

My interest in this paper was triggered by a question in a DVCon keynote a couple of years ago, looking for a better way to check CDC (clock domain crossings) in mixed signal designs. Might this approach be relevant? Reachability in that case would be finding a case where a register data input doesn’t settle until inside the setup window.

Paul’s view

Great paper on combining machine learning with formal methods for high-sigma verification of small analog circuits. The authors test their method, hybrid formal/machine-learning verification (HFMV), on a differential amplifier, LDO, and DC-DC converter across a range of performance specifications (e.g. gain, GBW, CMRR, …). Compared to scaled-sigma sampling (SSS), a relatively state-of-the-art importance sampling method, HFMV finds multiple failures with fewer than 1k samples across all three circuits, whereas SSS is unable to find any failure after 4-9k samples. Impressive.

HFMV works by first building a predictor, using Bayesian machine learning methods, for a sample being a failure with some given probability. An SMT problem (a Boolean expression which can include numerical expressions and inequalities within it, e.g. a>b AND x+a<y) is constructed using this predictor. This expression is satisfied by any sample point that is predicted to have a probability of being a failure greater than some threshold, P. An existing state-of-the-art SMT solver, Z3, is then used to try and satisfy the expression with a value of P close to 1.0, i.e. to find a sample point that has a high probability of being a failure. Neat trick! After a batch of 350 samples has been generated from the SMT solver, real simulations are run to determine the ground-truth results for these samples. The Bayesian model is updated with the new results, and the process repeated. If the SMT solver fails to converge on a solution, then P is decreased in small steps until it converges.
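To make the SMT step concrete, here is a hedged sketch using Z3’s Python API. The linear surrogate below is invented for illustration; the paper’s actual predictor is a Bayesian model that the authors linearize for the solver:

```python
from z3 import Real, Solver, sat

x1, x2 = Real('x1'), Real('x2')        # circuit parameters (e.g., Vth shifts)

# Hypothetical already-linearized surrogate for failure probability.
p_fail = 0.5 + 0.3 * x1 - 0.2 * x2

s = Solver()
s.add(x1 >= -3, x1 <= 3, x2 >= -3, x2 <= 3)   # bounded parameter space

P = 0.95
while P > 0.5:
    s.push()
    s.add(p_fail >= P)                 # ask for a likely-failure point
    if s.check() == sat:
        m = s.model()
        print(f"P >= {P}: candidate x1={m[x1]}, x2={m[x2]} -> simulate it")
        break
    s.pop()
    P -= 0.05                          # relax the threshold until solvable
```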

The other key innovation in this paper is a pair of clever math tricks that modify the Bayesian model to make it more amenable to SMT solvers. The first applies a mapping function to the input parameters to make the model behave more linearly. The second removes a quadratic term in the model, again to make it more linear. This paper is a wonderful example of how disruptive innovation often happens at the intersection of different perspectives. Blending Bayesian learning with SMT, as the authors do here, is a brilliant idea, and the results speak for themselves.

Raúl’s view

Verifying that an analog circuit meets its specifications as design parameters vary (transistor channel length and width, temperature, supply voltage, …) requires simulating the circuit at numerous points within a defined parameter space. This is essentially the same problem addressed by commercial tools such as Siemens’ Solido, Cadence’s Spectre FMC analysis, Synopsys’ PrimeSim, and Silvaco’s Varman, all of which are crucial for applications such as library characterization and analog design. A common metric is the probability of identifying rare failures, presuming a Gaussian distribution with a specified width of σ; for instance, 6σ refers to approximately 2 parts per billion (2×10⁻⁹) going undetected. Employing “brute force” Monte Carlo simulation could require around 1.5 billion samples to identify at least one 6σ failure with 95% confidence, which is infeasible. Commercial tools address this in different ways, e.g., statistical learning, functional margin computation, worst-case distance estimation (WCD), and scaled-sigma sampling (SSS). The paper in this blog introduces a novel technique, not yet commercialized, called “Hybrid Formal/Machine learning Verification” (HFMV). This verification framework integrates the scalability of machine learning with the precision of formal methods to detect extremely rare failures in analog/mixed-signal (AMS) circuits.
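The 1.5 billion figure follows directly from requiring at least one observed failure with 95% confidence:

```python
import math

p = 2e-9      # ~6-sigma failure probability (about 2 parts per billion)
conf = 0.95   # desired confidence of seeing at least one failure

# Solve 1 - (1 - p)**n >= conf for n; for tiny p this is ~ -ln(1 - conf)/p.
n = math.log(1 - conf) / math.log(1 - p)
print(f"~{n:.2e} Monte Carlo samples")   # ~1.5e9, matching the text
```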

The paper focuses heavily on mathematics and may be challenging to read, but the main ideas are as follows. HFMV exploits commonly used probabilistic ML models such as Bayesian additive regression trees, the relevance vector machine (RVM), and the sparse relevance kernel machine (SRKM), trained on limited simulation or measurement data to probabilistically predict failure. Points in the parameter space can be characterized as strong failure, weak failure, weak good, or strong good, depending on prediction confidence. Formal verification using Satisfiability Modulo Theories (SMT) is applied to identify high-probability failure points (confirmed as true failures by a single simulation), and to prove that all points within a region have a very low probability of being failure points. Since rare failures might not appear in the initial training set, active learning refines the model iteratively. SMT solving is accelerated by several orders of magnitude through input space remapping and linear approximation.

HFMV is tested on a differential amplifier, a low-dropout regulator (LDO), and a DC-DC converter, across specs like GBW, gain, CMRR, overshoot, and quiescent current, and compared to Monte Carlo and SSS. HFMV hits the first true failure point using 600-1,500 samples, about 10x and up to 1,000x fewer than used by SSS and MC respectively; neither MC nor SSS finds any true failure in the bounded parameter space.

Despite being published in 2018, the paper shows that HFMV outperforms techniques used in current commercial tools for high-σ rare-failure detection, effectively bridging the gap between accuracy and scalability (ML models) and rigor and completeness (formal methods). In addition, given the significant advancements in machine learning since its publication, implementing such capabilities today could be very interesting.

Also Read:

A Novel Approach to Future Proofing AI Hardware

Cadence at the 2025 Design Automation Conference #62DAC

Anirudh Fireside Chats with Jensen and Lip-Bu at CadenceLIVE 2025


CEO Interview with Yannick Bedin of Eumetrys
by Daniel Nenni on 06-25-2025 at 10:00 am


Yannick founded EUMETRYS in 2012. He began his engineering career with Schlumberger in 1998 in West Africa and then served as an applications engineer for the company in the semiconductor sector from 2000 to 2004. In 2004, he joined Soluris as a field service engineer until 2006, then Nanometrics as a technical product support specialist.

Tell us about your company.

Founded in 2012, EUMETRYS is a global integrator of turnkey metrology, inspection, and robotics solutions for semiconductor manufacturers. Our headquarters are located in Gaillac in the south of France with our operations center in Meylan near Grenoble in France’s Silicon Valley. We also have subsidiaries in Germany and in the United States. EUMETRYS sells measurement and inspection equipment with associated installation, maintenance, and technical support services, as well as spare parts and robots to help its customers worldwide increase the lifespan of all their production equipment.

What problems are you solving?

Defective silicon wafers could cost the global semiconductor industry between $10 and $20 billion annually, depending on yield rates, technology nodes, and fab efficiency. This is the reason why inspection plays such a vital role in semiconductor manufacturing, ensuring chip reliability and performance in such critical sectors as aerospace, healthcare, and automotive where even the slightest failure can have serious consequences. A rigorous quality control process helps detect defects at the earliest stages of production, optimize yields, and reduce costs related to returns or repairs. By ensuring consistent quality, chip makers strengthen customer confidence and position themselves in a highly competitive market.

What application areas are your strongest?

We participate in the value chain of inline production control, sample qualification, and processed substrates in compound semiconductor manufacturing. Since 2012 we have built a very high level of expertise in the opto-photonic, MEMS, and compound semiconductor sectors, supporting our customers, process engineers, and cluster leaders in maintaining their line yield.

What keeps your customers up at night?

In an industry as competitive as the global semiconductor sector, it is all about making sure that chip makers meet their wafer fab financial and productivity objectives. Yield is the one metric where we can help them move the needle. Metrology and inspection are the last in line when it comes to customer CapEx, though the philosophy has changed: key users are now focusing on improving manufacturing yield and right-sizing their fabs, with the aim of producing more efficiently than before. We understand that very well and bring each customer a dedicated solution to help them meet their challenges.

What does the competitive landscape look like and how do you differentiate?

There are many companies on the market that offer all kinds of different solutions, but to implement them, customers often have to interface with multiple suppliers, making the process lengthy, cumbersome, and complex. The way we stand out from the pack is that we provide a one-stop-shop turnkey solution: equipment/hardware, integration services, spare parts, and after-sales service, a comprehensive approach from A to Z with only one partner to interface with. But that’s not all; an important part of our solution is customization. Each customer has their own concerns, and our support team of highly experienced technicians and engineers will tailor training and support to meet each individual customer requirement.

What new features/technology are you working on?

We just made an exciting announcement at CS ManTech in New Orleans, LA: we have been awarded exclusive global distribution of the YPI – Clear Scanner manufactured by the Japanese company YGK. This laser-scanning particle inspection tool for unpatterned compound semiconductor substrates greatly enhances semiconductor quality control by inspecting the surface of a variety of opaque and transparent wafers from 2 to 12 inches, including silicon carbide (SiC), gallium nitride (GaN), indium phosphide (InP), sapphire, gallium arsenide (GaAs), silicon, and glass. The scanner offers unparalleled functionality at a very affordable price for the compound semiconductor market: unmatched substrate flexibility, advanced surface inspection capabilities, robust and proven engineering for longevity, and optimized cost efficiency and uptime. With this reliable and fully customizable scanner developed specifically for the compound semiconductor market, chip makers can quickly respond to the stringent quality standards dictated by increasingly complex manufacturing processes. In addition, by July of this year we will be launching a new complementary offering for all of our customers, so watch this space for more information!

What would be your best advice to semiconductor fab owners on how to reach the best efficiencies for their manufacturing facilities?

I would recommend that fab owners always adjust their metrology and inspection investment to their fab’s design size requirements. It is really not necessary to own the most expensive metrology tools if they are overspecified compared to actual product design needs. For me, device manufacturing efficiency starts with adapting control to fab capability. If not, CapEx and resource spending could negatively impact the cost of chip manufacturing. Semiconductor fabs need to be lean to achieve the best yield and to remain fit for their capabilities over the long term.

How can customers engage with your company?
Also Read:

The Sondrel transformation to Aion Silicon!

CEO Interview with Krishna Anne of Agile Analog

CEO Interview with Kit Merker of Plainsight


Visualizing System Design with Samtec’s Picture Search
by Mike Gianfagna on 06-25-2025 at 6:00 am


If you’ve spent a lot of time in the chip or EDA business, “design” typically means chip design. These days it means heterogeneous multi-chip design. If you’ve spent time developing end products, “design” has a much broader meaning. Chips, subsystems, chassis and product packaging are in focus. This is just a short list if you consider all the aspects of system design and its many disciplines, both hardware and software. Samtec lives in the system design world. The company helps connect the chiplets, chips, subsystems and racks that comprise a complete system.

There is a huge array of products from Samtec that need to be considered in any system design project. The screenshot above gives you a sense of the diversity involved. Choosing a particular connector will impact other choices, and not all combinations of Samtec products are viable as an integrated solution. The company has developed tools to help system designers navigate all this. Last year, I described something called Solution Blocks that helped identify compatible choices. Samtec has now taken that concept to the next level by adding visualization, along with more control and a broader perspective. If the entire semiconductor ecosystem worked this way, design would be a lot easier. Let’s take a short tour of visualizing system design with Samtec’s Picture Search.

Seeing is Believing

A link is coming so you can try this out for yourself. Here are a couple of examples of what’s possible. We’ll start with one of the many edge connectors available from Samtec. I chose the vertical Edge Rate® High-Speed Edge Card Connector. The initial configuration is shown below, including a high-resolution image that can be rotated for different views.

Vertical Edge Rate® High Speed Edge Card Connector

I decided to change the number of positions per row from 10 to 50. I also specified a polyimide film pad. In seconds, I got an updated image with detailed dimensions, see below.

Updated Edge Card Connector

The system summarized the features as shown below, along with detail specs, volume pricing/availability and extensive compliance data.

  • Optional weld tab for mechanical strength
  • 0.80 mm pitch, up to 140 positions
  • Accepts .062″ (1.60 mm) thick cards
  • Current rating: 2.2 A max
  • Voltage rating: 215 VAC/304 VDC max

This all took seconds to do. The ability to perform what-if experiments to converge on the best solution is certainly enabled by a system like this.

For one more experiment, I decided to try the Solutionator instead of browsing categories. Here, I chose active optics.  I then invoked the Solutionator interface for Active Optics, with its promise to “design in a minute.”

With the Active Optics Cable Builder interface, I could quickly browse the various options available. I chose a 12-channel, 16.1 Gbps, unidirectional AOC and instantly received the 3D diagram and all the specs as before. See below.

Active Optics Cable Builder

I could go on with more examples of how this new technology from Samtec makes it easier to pick the right components to implement your next system. The communication, latency, and power requirements for advanced semiconductor systems continue to get more demanding. The technology delivered by Samtec is a key ingredient in developing channels that meet all those requirements. And the process just got easier with Samtec’s Picture Search.

To Learn More

If high performance channels matter to you, you must try this new technology from Samtec. If power, performance and form factor are key care abouts, you must try it as well. You can access the starting point for your journey here. Have fun visualizing system design with Samtec’s Picture Search.