DAC News – proteanTecs Unlocks AI Hardware Growth with Runtime Monitoring
by Mike Gianfagna on 07-17-2025 at 6:00 am

As AI models grow exponentially, the infrastructure supporting them is struggling under the pressure. At DAC, one company stood out with a solution that doesn’t just monitor chips, it empowers them to adapt in real time to these new workload requirements.

Unlike traditional telemetry or post-silicon debug tools, proteanTecs embeds intelligent agents directly into the chip, enabling real-time, workload-aware insights that drive adaptive optimization. Let’s examine how proteanTecs unlocks AI hardware scaling with runtime monitoring.

What’s the Problem?

proteanTecs recently published a very useful white paper on the topic of how to scale AI hardware. The first paragraph of that piece is the perfect problem statement. It is appropriately ominous.

The shift to GenAI has outpaced the infrastructure it runs on. What were once rare exceptions are now daily operations: high model complexity, non-stop inference demand, and intolerable cost structures. The numbers are no longer abstract. They’re a warning.

Here are a few statistics that should get your attention:

  • Training a model like GPT-4 (Generative Pre-trained Transformer) reportedly consumed 25,000 GPUs over nearly 100 days, with costs reaching $100 million. GPT-5 is expected to break the $1 billion mark
  • Training GPT-4 drew an estimated 50 GWh, enough to power over 23,000 U.S. homes for a year. Even with all that investment, reliability is fragile. A 16,384-GPU run experienced hardware failures every three hours, posing a threat to the integrity of weeks-long workloads
  • Inference isn’t easier. ChatGPT now serves more than one billion queries daily, with operational costs nearing $700K per day.

The innovation delivered by advanced GenAI applications can change the planet, if it doesn’t destroy it (or bankrupt it) first.

What Can Be Done?

Uzi Baruch

During my travels at DAC, I was fortunate to spend some time talking about all this with Uzi Baruch, chief strategy officer at proteanTecs. Uzi has more than twenty years of software and semiconductor development and business leadership experience, managing R&D and product teams and large-scale projects at leading global technology companies. He offered a well-focused discussion of a practical and scalable approach to taming these difficult problems.

Uzi began with a simple observation. The typical method to optimize a chip design is to characterize it across all operating conditions and workloads and then develop design margins to keep power and performance in the desired range. This approach can work well for chips that operate in a well-characterized, predictable envelope. The issue is that AI workloads, and generative AI applications in particular, are not predictable.

Once deployed, the workload profile can vary immensely based on the scenarios encountered. That dramatically changes power and performance profiles while creating big swings in parameters such as latency and data throughput. Getting it all right a priori is like reliably predicting the future, a much sought-after skill that has eluded the finest minds in history.

He went on to point out that the problem isn’t just for the inference itself. The training process faces similar challenges. In this case, wild swings in performance and power demands can cause failures in the process and wasteful energy consumption. If not found, these issues manifest as unreliable, inefficient operation in the field.

Uzi went on to discuss the unique approach proteanTecs has taken to address these very real and growing problems. He described the use of technology that delivers workload-aware, real-time monitoring on chip. Thanks to very small, highly efficient on-chip agents, parametric measurements – in-situ and in functional mode – are possible. The system detects timing issues, operational and environmental effects, aging, and application stress. Among the suite of agents are Margin Agents, which monitor the timing margins of millions of real paths to support more informed decisions. And all of this is tied to the actual instructions being executed by the running workloads.

The proteanTecs solution monitors the actual conditions the chip is experiencing under the current workload, analyzes them, and reacts to optimize the reliability, power, and performance profile. All in real time. No more predicting the future; instead, the system monitors and reacts to the present workload.
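
To make that monitor-analyze-react loop concrete, here is a minimal conceptual sketch in Python. It is not proteanTecs’ software or API; the telemetry and power-management hooks are hypothetical placeholders, and a real system would use hardware-assisted control rather than a software polling loop.

```python
# Conceptual sketch of a workload-aware monitor-analyze-react loop.
# read_margin_agents() and set_voltage_offset() are hypothetical placeholders
# for platform-specific telemetry and power-management hooks; they are not
# part of any proteanTecs product API.

import time

GUARD_BAND_PS = 20.0   # minimum acceptable timing slack, in picoseconds
STEP_MV = 5            # voltage adjustment step, in millivolts

def read_margin_agents():
    """Return the worst-case timing slack (ps) reported by on-chip monitors."""
    raise NotImplementedError("platform-specific telemetry hook")

def set_voltage_offset(delta_mv):
    """Request a supply-voltage offset (mV) from the power-management unit."""
    raise NotImplementedError("platform-specific PMU hook")

def control_loop(poll_interval_s=0.01):
    offset_mv = 0
    while True:
        slack_ps = read_margin_agents()
        if slack_ps < GUARD_BAND_PS:
            # Margins are eroding under the current workload: raise voltage.
            offset_mv += STEP_MV
        elif slack_ps > 2 * GUARD_BAND_PS:
            # Ample margin: recover power by trimming the guard band.
            offset_mv -= STEP_MV
        set_voltage_offset(offset_mv)
        time.sleep(poll_interval_s)
```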

A reasonable question here is: what is the overhead of such a system? I asked Uzi, and he explained that the area overhead is negligible because the monitors are very small and can typically be placed in the white space of the chip. The gate count overhead is about 1 to 1.5 percent, while the power reduction can be 8 to 14 percent. The math definitely works.

I came away from my discussion with Uzi believing that I had seen the future of AI, and it was brighter than I expected.

At the proteanTecs Booth

Noam Brousard

While visiting the proteanTecs booth at DAC I had the opportunity to attend a presentation by Noam Brousard, VP of Solutions Engineering at proteanTecs. Noam has been with the company for over 7 years and has a rich background in systems engineering for over 25 years at companies such as Intel and ECI Telecom.

Noam provided a broad overview of the challenges presented by AI and the unique capabilities proteanTecs offers to address those challenges. Here are a couple of highlights.

He discussed the progression from generative AI to artificial general intelligence to something called artificial superintelligence. These metrics compare AI performance to that of humans. He provided a chart shown below that illustrates the accelerating performance of AI across many activities. When the curve crosses zero, AI outperforms humans. Noam pointed out that there will be many more such events in the coming months and years. AI is poised to do a lot more, if we can deliver these capabilities in a cost and power efficient way.

Helping to address this problem is the main focus of proteanTecs. Noam went on to provide a very useful overview of how proteanTecs combines its on-chip agents with embedded software to deliver complete solutions to many challenging chip operational issues. The figure below summarizes what he discussed. As you can see, proteanTecs solutions cover a lot of ground, including dynamic voltage scaling with a safety net, performance and health monitoring, adaptive frequency scaling, and continuous performance monitoring. It’s important to point out that these applications aren’t assisting with a design margin strategy; rather, they are monitoring and reacting to real-time chip behavior.

About the White Paper

There is now a very informative white paper available from proteanTecs on the challenges of AI and substantial details about how the company is addressing those challenges. If you work with AI, this is a must-read item. Here are the topics covered:

  • The Unforgiving Reality of Scaling Cloud AI
  • Mastering the GenAI Arms Race: Why Node Upgrades Aren’t Enough
  • Critical Optimization Factors for GenAI Chipmakers
  • Maximizing Performance, Power, and Reliability Gains with Workload-Aware Monitoring On-Chip
  • proteanTecs Real-Time Monitoring for Scalable GenAI Chips
  • proteanTecs AVS Pro™ – Dominating PPW Through Safer Voltage Scaling
  • proteanTecs RTHM™ – Flagging Cluster Risks Before Failure
  • proteanTecs AFS Pro™ – Capturing Frequency Headroom for Higher FLOPS
  • System-Wide Workload and Operational Monitoring
  • Conclusion

To Learn More

You can get your copy of the must-read white paper here: Scaling GenAI Training and Inference Chips with Runtime Monitoring. The company also issued a press release recently that summarizes its activities in this important area here.  And if all this gets your attention, you can request a demo here. And that’s how proteanTecs unlocks AI hardware growth with something called runtime monitoring.

Also Read:

Webinar – Power is the New Performance: Scaling Power & Performance for Next Generation SoCs

proteanTecs at the 2025 Design Automation Conference #62DAC

Podcast EP279: Guy Gozlan on how proteanTecs is Revolutionizing Real-Time ML Testing


U.S. Imports Shifting
by Bill Jewell on 07-16-2025 at 2:00 pm

Our Semiconductor Intelligence June Newsletter showed how U.S. imports of smartphones have been on a downward trend since January 2025, led by China. Other key electronic products have also experienced sharp drops in U.S. imports from China.

U.S. smartphone imports in May 2025 were $3.03 billion, up slightly from April but down more than 50% from January and February 2025. May smartphone imports from China were down 94% from February. India and Vietnam are now the two largest sources of smartphone imports. Apple has been shifting much of its iPhone production to India from China. Samsung does most of its smartphone manufacturing in Vietnam.

U.S. imports of laptop PCs have been relatively stable from January 2025 to May 2025, averaging about $4 billion a month. However, imports from China dropped 90% from January to May. Vietnam displaced China as the largest source of U.S. laptop imports, with imports up 147% from January to May. Dell and Apple produce many of their laptop PCs in Vietnam and HP is expanding production in Vietnam.

Television imports to the U.S. have been fairly steady in February through May 2025, averaging about $1.3 billion a month. As with smartphones and laptop PCs, China’s exports to the U.S. have dropped sharply, with a 61% decline from January to May. TV imports from Mexico declined 40% from January to April but picked up 29% in May. Vietnam is becoming a significant source of TV imports, as it has with smartphones and PCs. U.S. TV imports from Vietnam grew 66% from January to May.

Currently, the U.S. does not impose tariffs on imports of smartphones or computers. However, in May President Trump threatened a 25% tariff on smartphones to be implemented by the end of June. As of mid-July, no smartphone tariff has been implemented.

U.S. imports from Mexico and Canada are subject to a 25% tariff. Goods covered under the USMCA are exempt, which includes electronics. Vietnam is one of only two countries with a new trade agreement with the U.S. in place (the other is the U.K.) and is now subject to a 20% tariff.

China is currently subject to a minimum 10% tariff under a 90-day truce. Product-specific tariffs bring China’s effective tariff rate above 30%. If no agreement is reached, the minimum tariff rate will be 34% on August 12, 2025. The Trump administration sees China as the primary target for tariffs and has threatened rates as high as 125%. China has been reducing its exports to the U.S. to avoid punitive tariffs. The U.S. and China are currently in trade talks. Even if a reasonable tariff rate is reached, the damage has been done.

What is the outlook for U.S. electronics consumption? The U.S. has shifted to other countries to make up for the declines in imports from China for laptop PCs and TVs. However, other countries have not yet made up for the severe decline in smartphone imports from China. U.S. smartphone manufacturing is practically non-existent. Only one company, Purism, assembles smartphones in the U.S. Purism has only sold a total of tens of thousands of phones in the last six years in a U.S. market of over 100 million smartphones sold annually. Its Liberty phone sells for $1,999, about twice the price of a high-end iPhone.

IDC estimates global smartphone shipments were 295 million units in 2Q 2025, down 2% from 1Q 2025 and down 1% from a year ago. U.S. smartphone shipments have not been released but will likely show a substantial drop in 2Q 2025 from 1Q 2025 unless sellers have inventory to make up for the shortage in supply. Based on current trends, the U.S. should see a shortage in smartphones in the second half of 2025.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Electronics Up, Smartphones down

Semiconductor Market Uncertainty

Semiconductor Tariff Impact

Weak Semiconductor Start to 2025


Godfather of AI: I Tried to Warn Them But We’ve Already Lost Control!
by Admin on 07-16-2025 at 10:00 am

Geoffrey Hinton, dubbed the “Godfather of AI,” joins Steven Bartlett on “The Diary of a CEO” podcast to discuss his pioneering work in neural networks and his growing concerns about AI’s dangers. Hinton, a Nobel Prize-winning computer scientist, explains how he advocated for brain-inspired AI models for 50 years, leading to breakthroughs like AlexNet, which revolutionized image recognition. His startup, DNN Research, was acquired by Google in 2013, where he worked for a decade before leaving at age 75 to speak freely on AI risks.

Hinton distinguishes two risk categories: misuse by humans and existential threats from superintelligent AI. Short-term dangers include cyberattacks, which surged 1,200% between 2023 and 2024 due to AI-enhanced phishing and voice/image cloning. He shares personal precautions, like spreading savings across banks to mitigate potential hacks. Another threat is AI-designed viruses, requiring minimal expertise and resources; a single disgruntled individual could unleash a pandemic. Election corruption via targeted ads, fueled by vast personal data, is worsening, with Hinton criticizing Elon Musk’s data access efforts. AI also amplifies echo chambers on social media, polarizing societies by reinforcing biases. Lethal autonomous weapons, or “battle robots,” pose ethical horrors, as they decide kills independently, and regulations often exempt military uses.

Long-term, Hinton warns AI could surpass human intelligence within years, estimating a 10-20% chance of human extinction. Unlike atomic bombs, AI’s versatility—in healthcare, education, and productivity—makes halting development impossible. He realized AI’s edge during work on analog computation: digital systems share knowledge efficiently via weights, unlike biological brains limited by communication. ChatGPT’s release and Google’s Palm model explaining jokes convinced him AI understands deeply, potentially replicating human uniqueness but excelling in scale.

Hinton regrets advancing AI, feeling it might render humans obsolete, much as chickens are to smarter beings. He left Google not due to misconduct (the company acted responsibly by delaying releases) but to avoid self-censorship. Discussing emotions, he predicts AI will exhibit cognitive and behavioral aspects of emotion without physiological responses like blushing. On superintelligence, he differentiates current models from future ones that could self-improve, widening wealth gaps as productivity soars but jobs vanish.

Job displacement is imminent. Hinton advises training in trade work (HVAC, plumbing, and the like), as AI agents are already halving workforces in customer service. Universal basic income won’t suffice without purpose; humans need to contribute. He critiques unregulated capitalism, urging world governments toward “highly regulated” oversight, though political trends hinder this.

Reflecting personally, Hinton shares his illustrious family: ancestors like George Boole (Boolean algebra) and Mary Everest Boole (mathematician), plus ties to Mount Everest and the Manhattan Project. He regrets prioritizing work over time with his late wives (both died of cancer) and young children. His advice: Stick with intuitions until proven wrong; his neural net belief defied skeptics.

Bottom line: AI’s existential threat demands massive safety research now, or humanity risks takeover. Urgent action on joblessness is needed, as graduates already struggle. The video interview blends optimism for AI’s benefits with stark warnings, emphasizing ethical development to preserve human happiness amid inevitable change.


Sophisticated soundscapes usher in cache-coherent multicore DSP
by Don Dingee on 07-16-2025 at 10:00 am

A Tensilica 2 to 8 core SMP DSP adds cache-coherence for high-end audio processing and other applications

Digital audio processing is evolving into an art form, particularly in high-end applications such as automotive, cinema, and home theater. Innovation is moving beyond spatial audio technologies to concepts such as environmental correction and spatial confinement. These sophisticated soundscapes are driving a sudden increase in digital signal processing (DSP) performance demands, including the use of multiple DSPs and AI inference in applications. The degree of difficulty in programming multiple DSPs for coordinated audio processing has outweighed the benefits – until now, with the introduction of a cache-coherent multicore DSP solution. Cadence’s Prakash Madhvapathy, Product Marketing Director for HiFi DSP IP, spoke with us about what is changing and how cache-coherent DSPs can unlock new applications.

Sounds that fill – or don’t fill – a space

Spatial audio schemes enable each sound source to possess unique intensity and directionality characteristics, and multiple sources can move through the same space simultaneously. The result can be a highly realistic, immersive 3D experience for listeners in a pristine environment. “Ideally, you’d be able to create your listening environment, say for a soundbar, or headphones, maybe earbuds, or your car,” says Madhvapathy. “You might want to enhance sources, or reduce or remove them altogether.” (For instance, in my home, we have a Bose 5.1 soundbar without a subwoofer because my wife has heightened bass sensitivity.)

Few listening spaces are pristine, however. “Noise can take multiple forms, and they are not always statistically static; they keep changing,” Madhvapathy continues. There can be keyboard clicking, other conversations, traffic noise, and more noise sources. Noise reduction is becoming increasingly complex because what’s noise to one listener might be an important conversation to another, both of whom are hearing sounds in the same space. “Traditional DSP use cases can deliver some noise reduction, but AI is evolving to handle more complex reduction tasks where sources are less predictable than, say, a constant background hum.” AI may also play a role in adapting sound for the space, handling dead spots or reverberations.

High-end automotive sound processing is also becoming much more sophisticated. Some of the latest cars deploy as many as 24 speakers to create listening zones. What the driver hears may be entirely different from what a passenger hears, as cancellation technology provides spatial confinement for the sound each listener experiences, or “sound bubbles” as Madhvapathy affectionately refers to them. “The complexity of different zones in a vehicle can make it difficult to update all of them when using distributed audio processing,” he observes. “The other problem is concurrency – music, phone calls, traffic noise reduction, conversation cancellation, everything has to happen simultaneously and seamlessly, otherwise sound quality suffers for some or all listeners.”

Low-power, high-performance DSPs built on mature cores

Audio processing demand is skyrocketing, and Cadence has turned to a familiar, yet strikingly new solution to increase DSP performance. “Previously, we were able to add performance to one of our HiFi family of DSPs and create enough headroom for customers to meet their audio requirements in a single processor,” says Madhvapathy. “Suddenly, customers are asking for four, six, or eight times the performance they had in our previous HiFi generations to deal with new DSP and AI algorithms.” Multicore has evolved from a DSP architecture that most designers avoided to one that is now essential for competing in the high-end audio market.

The latest addition is the Cadence Tensilica Cache-Coherent HiFi 5s SMP, a symmetric multiprocessor subsystem built on top of their eighth-generation Xtensa core, with additional SIMD registers and DSP blocks incorporated. “Cache-coherence is not a new concept in computer science by any means, but it’s now taking shape in DSP form with the HiFi 5s SMP,” he continues. “Overbuying cores is a problem when attempting hard partitioning of an application across cores, which rarely turns out to be sized correctly. With the HiFi 5s SMP, there’s a shared, cached memory space that all cores can access, and cores can scale up or down for your needs, so there is less wasted energy and cost, and programming is far easier.”

Audio applications gain more advantages. Microphones and speakers can tie into a single processing block with the right number of DSP cores and the right amount of memory. The HiFi 5s DSP cores offer multi-level interrupt handling for real-time prioritization of tasks running in FreeRTOS or Zephyr. They also accommodate power management, including three levels of power shut-off options and clock frequency scaling.

Madhvapathy concludes with a couple of interesting observations. While short life cycles are familiar in consumer devices like soundbars and earbuds, he’s seeing a drastic shortening of life cycles in automotive audio design, with features refreshed every two or three years to remain competitive. Scalability and cache coherence not only make software more straightforward, but they also simplify testing and reduce failures, with fewer instances of cache-related anomalies that don’t appear until designs are in the field and customers are dissatisfied.

Designers are just beginning to imagine what is possible in these sophisticated soundscapes, and the arrival of more DSP performance, along with ease of programming and scalability, is timely.

Learn more online about the Cadence Cache-Coherent HiFi 5s SMP for high-end audio processing:

News: Cadence Launches Cache-Coherent HiFi 5s SMP for Next-Gen Audio Applications
Product page: Cache-Coherent HiFi 5s Symmetric Multiprocessor
White paper: Cache-Coherent Symmetric Multiprocessing with LX8 Controllers on HiFi DSPs


A Quick Look at Agentic/Generative AI in Software Engineering
by Bernard Murphy on 07-16-2025 at 6:00 am

Agentic methods are hot right now since single LLM models seem limited to point tool applications. Each such application is impressive but still a single step in the more complex chain of reasoning tasks we want to automate, where agentic methods should shine. I have been hearing that software engineering (SWE) teams are advancing faster in AI adoption than hardware teams, so I thought it would be useful to run a quick reality check on status. Getting into the spirit of this idea, I used Gemini Deep Research to find sources for this article, selectively sampling a few surveys it offered while adding a couple of my own finds. My quick summary: first, what counts as progress depends on the application; convenience-based use models are more within reach today, while precision use models are also possible but more bounded. Second, advances are more evident in automating subtasks subject to a natural framework of crosschecks and human monitoring than in a hands-free, end-to-end SWE objective.

Automation for convenience

One intriguing paper suggests that we should move away from apps for convenience needs toward prompt-based queries that serve the same objectives. This approach can in principle do better than apps because prompt-based systems eliminate the need for app development, can be controlled through the language we all speak without cryptic human-machine interfaces, and can more easily adapt to variations in needs.

Effective prompt engineering may still be more of an art than we would prefer, but the author suggests we can learn how to become more effective and (my interpretation) perhaps we only need to learn this skill once rather than for every unique app.

Even technology engineers need this kind of support, not in deep development or analysis but for routine yet important questions: “who else is using this feature, when was it most recently used, what problems have others seen?” Traditionally these might be answered by a help library or an in-house data management app, but what if you want to cross your question with other sources or constraints outside the scope of that app? In hardware development, imagine the discovery power available if you could do prompt-based searches across all design data: spec, use cases, source code, logs, waveforms, revisions, and so on.
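
As a flavor of what such a cross-source query might look like, here is a minimal sketch assuming a generic ask_llm helper; the helper, the file paths, and the question are all illustrative assumptions, not a description of any specific product.

```python
# Sketch of a prompt-based query spanning several design-data sources.
# ask_llm() stands in for whatever LLM client an organization uses; the
# helper and the file paths below are illustrative assumptions.

from pathlib import Path

def gather_context(paths, max_chars=4000):
    """Concatenate snippets from heterogeneous design artifacts."""
    chunks = []
    for p in paths:
        text = Path(p).read_text(errors="ignore")[:max_chars]
        chunks.append(f"--- {p} ---\n{text}")
    return "\n\n".join(chunks)

def ask_llm(prompt):
    raise NotImplementedError("plug in your organization's LLM endpoint")

question = ("Who else uses the low-power retention feature, "
            "and what problems have been reported against it?")
context = gather_context(["spec/power.md", "logs/regressions.log",
                          "rtl/retention_ctrl.sv"])
answer = ask_llm(f"Answer using only this project data:\n{context}\n\n"
                 f"Question: {question}")
```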

Automating precision development

This paper describes an agentic system to develop quite complex functions including a face recognition system, a chat-bot system, a face mask detection tool, a snake game, a calculator, and a Tic-Tac-Toe game, using an LLM-based agentic system with agents for management, code generation, optimization, QA, iterative refinement and final verification. It claims 85% or better code accuracy against a standard benchmark, building and testing these systems in minutes. At 85% accuracy, we must still follow that initial code with developer effort to verify and correct to production quality. But assuming this level of accuracy is repeatable, it is not hard to believe that even given a few weeks or months of developer testing and refinement, the net gain in productivity without loss of quality can be considerable.
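
Here is a minimal sketch of the kind of refinement loop such a system implies: a generator agent produces code, a QA agent tests it, and a manager iterates until the tests pass or a budget is exhausted. This is my own illustration of the pattern, not the paper’s implementation, and the agent functions are placeholders.

```python
# Minimal agentic refinement loop: generate, test, iterate.
# generate_code() and run_tests() are placeholders for LLM-backed agents;
# this sketch illustrates the control flow, not any specific framework.

def generate_code(task, feedback=None):
    raise NotImplementedError("LLM-backed code-generation agent")

def run_tests(code):
    """Return (passed, failure_report) for the candidate code."""
    raise NotImplementedError("QA agent: lint, build, and run unit tests")

def manager(task, max_iterations=5):
    feedback = None
    for _ in range(max_iterations):
        code = generate_code(task, feedback)
        passed, feedback = run_tests(code)
        if passed:
            return code  # still hand off to human review before production
    raise RuntimeError("iteration budget exhausted; escalate to a developer")
```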

Another paper points out that in SWE there is still a trust issue with automatically developed code. However, they add that most large-scale software development is more about assembling code from multiple sources than developing code from scratch, which changes the trust question to how much you can trust the components and the assembly. I’m guessing that they consider assembly in DevOps to be relatively trivial, but in hardware design SoC-level assembly (or even multi-die system assembly) is more complex, though still primarily mechanical rather than creative. The scope for mistakes is certainly more limited than it would be in creating a complete new function from scratch. I know of an AI-based system from over a decade ago that could create most of the integration infrastructure for an SoC – clocking, reset, interrupt, bus fabric, etc. This was long before we’d heard of LLMs and agents.

Meanwhile, Agentic/Generative AI isn’t only useful for code development. Tools are appearing to automate test design, generation and execution, for debug, and more generally for DevOps. Many of these systems in effect crosscheck each other and are also complemented by human oversight. Mistakes might happen but perhaps no more so than in an AI-free system.

Convenience, precision or a bit of both?

Engineers obsess about precision, especially around AI. But much of what we do during our day doesn’t require precision. “Good enough” answers are OK if we can get them quickly. Search, summarizing key points from an email or paper, generating a first draft document: these are all areas where we depend on (or would like) the convenience of a quick and “good enough” first pass. On the other hand, precision is vital in some contexts. For financial transactions, jet engine modeling, and logic simulation, we want the most accurate answers possible, where “good enough” isn’t good enough.

Even so, there can still be an advantage for precision applications. If AI can provide a good enough starting point very quickly (minutes) and if we can manage our expectations by accepting need to refine and verify beyond that starting point, then the net benefit in shortened schedule and reduced effort may be worth the investment. As long as you can build trust in the quality the AI system can provide.

Incidentally, my own experience (I tried Deep Research (DR) options in Gemini, Perplexity, and ChatGPT) backs up my conclusions. Each DR analysis appeared in ~10 minutes, mostly useful to me for the references they provided rather than the DR summaries. Some of these references were new to me, some I already knew. That might have been enough if my research were purely for my own interest. But I wanted to be more accurate since I’m aiming to provide reliable insight, so I also looked for other references through more conventional online libraries. Combining both methods proved to be productive!


Improve Precision of Parasitic Extraction for Digital Designs
by Admin on 07-15-2025 at 10:00 am

By Mark Tawfik

Parasitic extraction is essential in integrated circuit (IC) design, as it identifies unintended resistances, capacitances, and inductances that can impact circuit performance. These parasitic elements arise from the layout and interconnects of the circuit and can affect signal integrity, power consumption, and timing. As IC designs shrink to smaller nodes, parasitic effects become more pronounced, making accurate extraction crucial for ensuring design reliability. By modeling these effects, designers can adjust their circuits to maintain performance, avoid issues like signal delays or power loss, and achieve successful design closure.

What is parasitic extraction

In semiconductor design, parasitic elements—like resistances, capacitances, and inductances—are unintended but inevitable components that emerge during the physical fabrication of integrated circuits (ICs). These elements are a result of the materials used and the complexity of the fabrication process. Although not part of the original design, parasitic elements can significantly impact circuit performance. For example, parasitic resistances can cause voltage drops and increased power dissipation, while parasitic capacitances can lead to signal delays, distortions, and crosstalk between adjacent wires. Additionally, interconnect parasitics introduce propagation delays that can affect timing and signal integrity, leading to higher power consumption and reduced overall performance.

Parasitic extraction is a critical process in IC design that identifies and models these unintended parasitic effects to ensure reliable performance. In digital design, parasitic extraction relies heavily on standardized formats like LEF (Library Exchange Format) and DEF (Design Exchange Format), which describe both the logical and physical aspects of the design (figure 1).

Figure 1. Parasitics are extracted from the physical and logical information about the design.

The parasitic extraction process typically follows these key steps:

  • Data preparation: This step involves assembling and aligning the logical and physical design data, usually sourced from LEF and DEF files. The purpose is to ensure each logical component is correctly mapped to its corresponding physical location in the layout, ensuring accurate connectivity for the parasitic extraction process.
  • Extraction: During extraction, parasitic components such as resistances, capacitances, and interconnects are identified and captured from the design layout and technology data. This forms the basis for understanding how these parasitic elements might impact the overall performance of the circuit.
  • Reduction: Once parasitic elements are extracted, they are simplified using models such as distributed RC or lumped element models. These models condense the parasitic data, making it easier to manage while still accurately reflecting the parasitic effects for simulation and analysis.
  • Verification: After extraction, the data is subjected to verification. This involves comparing the parasitic data with design specifications and simulation results to ensure it aligns with the expected circuit performance and complies with necessary design rules and criteria for sign-off.
  • Optimization: After verifying the parasitics, designers can apply various optimization techniques to reduce their negative impact on the circuit. This can include refining routing paths, adding buffers, or making other adjustments to improve performance, timing, power consumption, and signal integrity.

Accurate parasitic extraction is crucial for successful IC design, particularly as technology advances and parasitic effects become more pronounced. By systematically modeling, verifying, and optimizing these effects, designers can ensure that their circuits perform reliably and meet required specifications during fabrication and final production.
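
To make the reduction step above more concrete, here is a toy Elmore-delay estimate for a single unbranched net. The Elmore approximation is a textbook way to condense a distributed RC ladder into a single delay number; it is shown only as an illustration of the idea, not as the algorithm used by any particular extraction tool.

```python
# Illustrative Elmore-delay estimate for a simple RC ladder (driver -> far end).
# A textbook approximation of a distributed RC net, shown for illustration only.

def elmore_delay(resistances, capacitances):
    """resistances[i] and capacitances[i] are the segment resistance (ohms)
    and node capacitance (farads) along a single unbranched net.
    Returns the Elmore delay (seconds) at the far end."""
    assert len(resistances) == len(capacitances)
    delay = 0.0
    for i, r in enumerate(resistances):
        downstream_c = sum(capacitances[i:])  # all capacitance past segment i
        delay += r * downstream_c
    return delay

# Example: 4 segments of 10 ohm / 2 fF each -> 2.0e-13 s (0.2 ps).
print(elmore_delay([10.0] * 4, [2e-15] * 4))
```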

Analog and digital design flows

Analog and digital design flows are two distinct approaches in semiconductor design, each suited to the specific requirements of analog and digital integrated circuits (ICs). Analog design deals with circuits that process continuous signals, such as amplifiers, filters, and analog-to-digital converters (ADCs). Precision is crucial in these circuits to minimize noise, distortion, and power consumption. Designers face challenges like balancing trade-offs between power efficiency and noise reduction, requiring manual layout adjustments to avoid performance issues caused by small variations. Tools like SPICE simulators help model circuit behavior under different conditions to ensure reliability and performance. Analog circuits are highly sensitive to their physical layout and are thoroughly tested in different operating conditions.

On the other hand, digital design focuses on circuits that use binary signals (0s and 1s) and components such as logic gates, flip-flops, and various types of logic circuits. Digital design prioritizes speed, energy efficiency, and resistance to noise, relying more on automation and standardized components to streamline the process. Tools like Verilog and VHDL allow designers to define the circuit’s behavior, which is then automatically synthesized into a layout. Digital workflows make use of timing analysis, logic simulation, and verification tools to ensure the circuit operates correctly and meets performance requirements. While digital circuits can be complex, their binary nature allows for more straightforward layouts compared to analog circuits.

However, as technology advances and node sizes shrink, both analog and digital designs face new challenges. Analog designs must deal with increased noise sensitivity and parasitic effects, while digital designs need to address timing, power consumption, and signal integrity issues at higher circuit densities. Despite these complexities, modern design tools and methods help ensure that ICs meet the required performance, power, and reliability standards. Both design flows play critical, complementary roles in IC development, with analog design focusing on precision and manual adjustments, and digital design emphasizing automation and efficiency. Designers in both areas must navigate intricate trade-offs to produce high-performance, reliable ICs in a rapidly advancing technological environment.

Parasitic extraction tools

Parasitic extraction tools for semiconductor design are generally divided into three main categories: field solver-based, rule-based extraction and pattern matching, each with its own strengths and suited for different design requirements (figure 2).

Figure 2. Software tools used for parasitic extraction are traditionally field-solver or rule-based tools. Pattern matching is a newer technique.

Field solvers. Field solver-based approaches use numerical techniques to solve electromagnetic field equations, such as Maxwell’s equations, which allow them to model complex geometries and interconnects with a high degree of accuracy. These methods excel at capturing distributed parasitics, making them particularly useful for designs where detailed insight into electromagnetic phenomena is crucial. This precision is essential for high-frequency circuits, radio frequency (RF) designs, and other advanced applications that demand a deep understanding of parasitic effects to ensure performance integrity. However, the trade-off with field solver methods is their computational intensity. Since they solve complex mathematical equations across fine geometric details, they require significant computational resources and time, especially when applied to large-scale designs. This limits their widespread use in routine workflows, relegating them mostly to specialized tasks where the highest level of accuracy is a necessity.

Rule-based. Rule-based extraction tools, in contrast, operate on predefined models and design guidelines, which allow them to estimate parasitic elements in a quicker and more scalable manner. These tools rely on established rules derived from previous simulations and physical laws, applying them across the design layout to extract parasitics. Although rule-based methods may not capture the same level of fine detail as field solvers, they are highly efficient, offering much faster extraction times and the ability to handle larger, more complex designs without overwhelming computational resources. This makes them the preferred option for most digital and analog IC design workflows, where designers prioritize a balance between speed, accuracy, and scalability. Rule-based tools are particularly well suited for mainstream applications, where the trade-offs in precision are acceptable and the design geometries are not as complex or demanding as in high-frequency or RF circuits. These tools are also more user-friendly, requiring less setup and computational overhead, making them accessible for a broader range of design projects.

Pattern matching, often considered a 2.5D extraction technique, helps by recognizing recurring layout patterns in the design. It uses pre-characterized parasitic values for specific geometric configurations to speed up the extraction process without performing complex calculations for each instance. Pattern matching provides a balance between speed and accuracy, making it suitable for large-scale designs that involve repetitive structures, such as standard cells or repeated circuit blocks.
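
For a sense of the difference in flavor between a closed-form rule and a full field solve, consider the simplest possible “rule”: a parallel-plate estimate of a wire’s capacitance to the plane below it. Real rule decks use pre-characterized tables that also capture fringe and coupling effects, so this is only an illustration of the approach, not how any production tool computes capacitance.

```python
# Toy "rule-based" style estimate: parallel-plate capacitance of a wire over a
# ground plane. Real extraction rule decks add fringe and coupling terms from
# pre-characterized tables; this only illustrates the closed-form flavor.

EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(width_m, length_m, dielectric_thickness_m, eps_r=3.9):
    """Parallel-plate estimate of wire-to-plane capacitance in farads."""
    area = width_m * length_m
    return eps_r * EPS_0 * area / dielectric_thickness_m

# A 0.1 um x 100 um wire over 0.2 um of oxide (eps_r ~ 3.9): about 1.7 fF.
c = plate_capacitance(0.1e-6, 100e-6, 0.2e-6)
print(f"{c * 1e15:.2f} fF")
```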

Choosing an extraction tool

The decision between different parasitic extraction tools depends on the specific needs of the design. Field solver methods are ideal for specialized applications where accuracy cannot be compromised, such as in RF, microwave, and millimeter-wave designs, or in advanced nodes with dense and complex interconnect structures. Rule-based tools are the backbone of mainstream design flows, offering a practical and scalable solution for most digital and analog ICs. Pattern matching provides a flexible middle-ground solution, enhancing extraction efficiency for repetitive structures.

Designers must evaluate the performance, resource constraints and the complexity of their designs to choose the appropriate methodology. In many cases, a combination of different approaches may be used: field solvers for critical areas requiring high precision and rule-based methods for the bulk of the design, providing an optimal balance of efficiency and accuracy throughout the design process, and pattern matching to optimize efficiency in recurring design patterns.

There are tools, including Calibre xACT, that employ both rule-based and field solver approaches and also offer pattern matching. For most designers, a tool with high precision in extracting interconnect parasitics such as resistances and capacitances is critical for understanding IC performance. An advanced extraction tool can capture detailed interactions between interconnects and devices within the IC, offering important insights for optimizing design performance and addressing signal integrity challenges (figure 3).

Figure 3. Inputs and outputs of a digital extraction flow.

Conclusion

Efficient parasitic extraction is vital for optimizing IC performance by accurately modeling resistances, capacitances, and other parasitic elements. Designers have options when it comes to extraction tools, so they should consider one that supports both analog and digital design flows, can find and mitigate parasitic effects that impact signal integrity, timing closure, and power efficiency, and is qualified for all design nodes. Precise extraction results help designers make informed decisions early in the design process, ensuring robust and reliable IC development.

Mark Tawfik

Mark Tawfik is a product engineer in the Calibre Design Solutions division of Siemens Digital Industries Software, supporting the Calibre PERC and PEX reliability platform. His current work focuses on circuit reliability verification, parasitic extraction and packaged checks implementation. He holds a master’s degree from Grenoble Alpes University in Micro-electronics integration in Real-time Embedded Systems Engineering.

Also Read:

Revolutionizing Simulation Turnaround: How Siemens’ SmartCompile Transforms SoC Verification

Siemens EDA Unveils Groundbreaking Tools to Simplify 3D IC Design and Analysis

Jitter: The Overlooked PDN Quality Metric


Perforce at DAC, Unifying Software and Silicon Across the Ecosystem
by Mike Gianfagna on 07-15-2025 at 6:00 am

As the new name reflects, chip and system design were a major focus at DAC. So was the role of AI in enabling those activities. But getting an AI-enabled design flow to work effectively across chip, subsystem, and system-level design presents many significant challenges. One important challenge is effectively managing the vast amount of data used for these activities. There was one company at DAC that is quietly enabling these efforts. Its reach is impressive. I had the opportunity to speak with two of the leaders at Perforce at DAC to see how the company is unifying software and silicon across the ecosystem.

The Big Picture

These folks provided a great overview of what Perforce is doing, with some important context about the impact of the work.

Vishal Moondhra

Vishal Moondhra, VP of Solutions Engineering at Perforce. Vishal has over 20 years of experience in digital design and verification. His career includes innovative startups like LGT and Montalvo, and large multinationals such as Intel and Sun. In 2008, Vishal co-founded Missing Link Tools, which built the industry’s first comprehensive design verification management solution, bringing together all aspects of verification management into a single platform. Missing Link was acquired by Methodics Inc. in 2012, and Methodics was in turn acquired by Perforce in 2020.

 

Mike Dreyer

Mike Dreyer, Director, Partners and Alliances at Perforce. Mike has nearly 30 years of experience in sales and partner management across several companies and industries. He has been with the Perforce organization for 10 years. Prior to Perforce, he managed global strategic accounts at Mentor Graphics.

We began by discussing what’s involved in uniting software and silicon in the context of system design. A big issue is managing the vast amount of data generated by AI systems across the entire system development flow. Handling the sheer volume of information is one challenge. Keeping track of what metadata version belongs to what IP block version is another. Without solid version control, sophisticated AI algorithms could be making decisions with the wrong or inconsistent data sets, creating deep and hard-to-find problems.

Vishal and Mike explained that this is an area where Perforce is helping many design teams across many organizations with two key products.

Perforce IPLM provides a hierarchical data model that unifies software and semiconductor metadata, providing immutable traceability from requirements through design to verification. When deployed across the enterprise, it creates a foundation for an intelligent, AI-powered platform capable of real-time data analytics to drive informed design decisions. It was pointed out that IPLM can manage all kinds of IP across a system design, from an AND gate to an airline seat. The implications of unifying this much of the system design are quite significant.
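
To illustrate the idea of a hierarchical, versioned IP model (the concept only, not the actual IPLM schema or API), here is a small sketch in which each release pins exact versions of its children, so a top-level version resolves to an immutable dependency tree.

```python
# Conceptual sketch of a hierarchical, versioned IP model: each release pins
# exact versions of its children, so a top-level release resolves to an
# immutable tree. Illustrative only; not the Perforce IPLM data model or API.

from dataclasses import dataclass

@dataclass(frozen=True)
class IPVersion:
    name: str
    version: str
    metadata: tuple = ()   # e.g. (("verified", "true"),)
    children: tuple = ()   # pinned child IPVersion objects

    def flatten(self, prefix=""):
        """Yield (hierarchy path, name@version) for the resolved tree."""
        path = f"{prefix}/{self.name}"
        yield path, f"{self.name}@{self.version}"
        for child in self.children:
            yield from child.flatten(path)

serdes = IPVersion("serdes_phy", "2.1.0", metadata=(("verified", "true"),))
cpu = IPVersion("cpu_cluster", "5.0.3", children=(serdes,))
soc = IPVersion("soc_top", "1.4.0", children=(cpu,))

for path, pinned in soc.flatten():
    print(path, "->", pinned)
```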

Perforce P4 delivers high-performance data management and version control. P4 provides the infrastructure for fast, scalable, and secure collaboration across globally distributed teams. I’ve worked on large design projects across several countries, and I can tell you the most sophisticated design flow will simply fall apart if the data management backbone can’t keep up.

These capabilities are deployed today across a wide range of companies, design teams and projects. You could say that Perforce is quietly enabling the AI revolution. Citing names is always tricky in these discussions, but Vishal and Mike were able to share an impressive list of customers that includes Micron, Analog Devices, SK Hynix, Skyworks, and Cirrus Logic.

The Siemens Connection

There was another example of Perforce collaboration across the ecosystem in a press release leading up to DAC, Perforce Partners with Siemens for Software-Defined, AI-Powered, Silicon-Enabled Design. Billed as a “partnership to unify software and semiconductor development,” the release explains how Perforce Software, the DevOps company for global teams seeking AI innovation at scale, is partnering with Siemens Digital Industries Software to transform how smart, connected products are designed and developed. Siemens was part of many conversations at DAC related to system design and AI. I covered the company’s announcement of the Siemens EDA AI System on SemiWiki here.

The work announced in the Perforce press release provides important infrastructure to enable forward-looking efforts such as this. The press release provides a good perspective for the impact of this work as follows:

As software and semiconductor teams converge around shared tools and methodologies, there is a critical need for a cohesive platform for concurrent design, development, and verification. This approach enables greater agility in architectural decision-making, accelerates verification, and ensures full traceability from initial requirements through to implementation and validation. Perforce’s IPLM and P4 solutions provide the foundation for this unified development environment.

To Learn More

It is clear that chip design and system design are converging. It is also clear that AI will be a big part of that design revolution. If these trends are impacting your work, it is essential to understand how the pieces work together and where critical enabling technology fits. Perforce is a key supplier of that enabling technology. You can read the full text of the Siemens partnership press release here. And you can learn more about how Perforce IPLM and P4 work together here.  And that’s how Perforce is unifying software and silicon across the ecosystem.


Double SoC prototyping performance with S2C’s VP1902-based S8-100
by Daniel Nenni on 07-14-2025 at 10:00 am

As AI, HPC, and networking applications demand ever-higher compute and bandwidth, SoC complexity continues to grow. Traditional FPGA prototyping systems based on 50M-ASIC-equivalent-gate FPGAs have become less effective for full-chip verification at scale. Addressing this challenge, S2C introduced the Prodigy S8-100 Logic system, powered by AMD’s Versal™ Premium VP1902, offering 2× performance and enhanced deployment efficiency for ultra-large SoC designs.

S8-100 vs. LX2 Benchmark

S2C ran a head-to-head benchmark using the Openpiton 192Core project—a highly complex, multi-core SoC design. This comparison evaluated the performance of the VP1902-based S8-100Q against the previous generation LX2 platform across key prototyping metrics:

Metric                     S8-100Q (4× VP1902)              LX2 (8× VU19P)                   S8-100 Advantage
Design Size (Total)        268.74M gates (based on usage)   249.02M gates (based on usage)   ✔ Same design workload
Cut Size                   25,002                           54,990                           ✔ Simplified topology
Post-PR Frequency (MHz)    9.4                              4.6                              ✔ 2× performance

Despite equivalent logic capacity, the S8-100Q achieved 2× higher operating frequency, reduced cascading complexity, and minimized design constraints—leading to faster bring-up and more efficient debug cycles.

Test Conditions:
  • S2C PlayerPro-CT 2024.2 via fully automated, timing-aware partitioning
  • Xilinx Vivado 2024.2 for synthesis and implementation
  • Global optimization techniques enabled, including TDM-awareness, clock domain balancing, and resource co-optimization

Performance Advantages

1) Architecture Enhancement

  • Delivers ~2× logic density
  • 2×2 die layout reduces longest possible signal path from 3 to 2 hops—improving timing closure

2) Streamlined Partitioning & Cascading

  • Higher per-FPGA capacity reduces chip-to-chip interconnects
  • Fewer SLR crossings minimize congestion and simplify routing

3) Low-Latency Interconnect Fabric

  • I/O latency is reduced by 36% compared with UltraScale+-based systems

Smarter Prototyping with Integrated Toolchains

The S8-100 isn’t just powerful—it’s intelligently automated. S2C’s PlayerPro-CT toolchain tightly integrates with the hardware, offering:

  • One-click flow from RTL to bitstream
  • Optional manual refinement for advanced tuning
  • Timing and Architecture-aware optimizations

The combination of the S8-100 and new PlayerPro-CT features dramatically cuts setup time, boosts resource efficiency, and accelerates project time-to-market.

Field-Tested and Deployment-Ready

The S8-100 has been deployed in advanced-node SoC programs across AI acceleration, edge computing, and data center applications. Its proven performance, scalable architecture, and reduced engineering overhead make it a trusted choice for complex SoC projects.

With 2× logic density, simplified interconnects, and a tightly integrated toolchain, the S8-100 delivers a major leap forward in FPGA-based prototyping—empowering engineering teams to confidently prototype, validate, and iterate faster than ever before.

For more information, please visit: www.s2cinc.com.

About S2C

S2C is a global leader in FPGA prototyping solutions, providing scalable, reliable, and flexible hardware platforms that accelerate system validation and software development for semiconductor companies worldwide. For more information, visit www.s2cinc.com.

Also Read:

Enabling RISC-V & AI Innovations with Andes AX45MPV Running Live on S2C Prodigy S8-100 Prototyping System

Cost-Effective and Scalable: A Smarter Choice for RISC-V Development

S2C: Empowering Smarter Futures with Arm-Based Solutions


Alphawave Semi and the AI Era: A Technology Leadership Overview
by Daniel Nenni on 07-14-2025 at 8:00 am

The explosion of artificial intelligence (AI) is transforming the data center landscape, pushing the boundaries of compute, connectivity, and memory technologies. The exponential growth in AI workloads—training large language models (LLMs), deploying real-time inference, and scaling distributed applications—has resulted in a critical need for disruptive innovation. Alphawave Semi has emerged as a significant player positioned at the intersection of this transformation, bringing expertise in high-speed connectivity and semiconductor IP to a rapidly evolving AI ecosystem.

AI workloads have escalated data traffic, straining every layer of compute infrastructure. OpenAI data suggests compute demands have doubled every 3 to 4 months since 2012, outpacing Moore’s Law. LLMs such as GPT-4, with trillions of parameters, exemplify this trend. The pressure is no longer only on building faster compute, but also on enabling higher bandwidth, lower latency, and more energy-efficient interconnects between CPUs, GPUs, memory, and storage.
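
To put that doubling period in perspective, a quick back-of-the-envelope calculation (using the midpoint of the stated 3-to-4-month range) shows roughly an order of magnitude of compute growth per year.

```python
# Back-of-the-envelope: if compute demand doubles every 3.5 months,
# the implied growth over one year is 2**(12 / 3.5), roughly 10.8x.
annual_growth = 2 ** (12 / 3.5)
print(f"~{annual_growth:.1f}x per year")
```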

This demand for scale and speed has coincided with the rise of heterogeneous computing architectures. Data centers increasingly rely on systems combining CPUs with accelerators like GPUs, ASICs, and FPGAs, tailored for specific AI tasks. At the same time, traditional monolithic SoCs have reached the limits of manufacturable die sizes, prompting a transition to chiplet-based architectures. Chiplets allow integration of best-in-class components with shared power, memory, and logic, enabling modular design and more efficient scaling.

To meet these demands, Alphawave Semi has transformed from a SerDes IP provider into a broader semiconductor solutions company. Its transition began with deep investments in advanced packaging, custom silicon design, and chiplet technology. With roots in high-speed serial interfaces, the company is uniquely positioned to deliver low-power, high-performance interconnects essential for AI data center workloads.

Alphawave Semi’s IP portfolio includes cutting-edge SerDes capable of supporting data rates above 112G, which are crucial for enabling chiplet interconnects, optical transceivers, and PCIe/CXL-based memory fabrics. It supports the emerging Universal Chiplet Interconnect Express (UCIe) standard, a critical development that enables interoperability of chiplets across vendors. This fosters a multi-vendor ecosystem, empowering smaller silicon designers to compete by assembling chiplets into innovative AI processors.

In parallel, memory bottlenecks have become a major challenge. High Bandwidth Memory (HBM) and on-die memory solutions have become integral to AI accelerator performance. Alphawave Semi’s engagement in chiplet-based memory interfaces and its roadmap for integrating CXL-based memory pooling support underline its strategy to address next-gen memory hierarchies.

Alphawave Semi has also expanded into standard products and custom silicon development. In 2023, the company launched a rebrand to reflect its transition from IP licensing to full-stack semiconductor innovation. This includes providing front-end and back-end design, verification, and manufacturing services—an offering increasingly valuable as cloud and hyperscale customers seek to build custom silicon solutions to meet their unique AI performance requirements.

Industry partnerships have further amplified Alphawave’s reach. The company collaborates with key foundry and IP ecosystem leaders such as TSMC, Samsung, ARM, and Intel. It has also signed agreements with AI chip startups like Rebellions, signaling its growing role as an enabler of next-generation compute architectures.

As demand for AI infrastructure continues to grow, Alphawave Semi’s value proposition is becoming clearer: delivering foundational connectivity IP, scalable chiplet technologies, and full custom silicon solutions for customers at every tier of the semiconductor value chain. Its strategy aligns with the trajectory of the AI silicon market, projected to exceed $150 billion by 2027, driven by both inference at the edge and large-scale training in data centers.

In summary, Alphawave Semi stands at a critical juncture in the AI revolution. Its combination of deep IP expertise, chiplet innovation, and customer-centric silicon services positions it as a key enabler of the high-speed, heterogeneous systems powering AI’s future.

You can read the full white paper here.

Also Read:

Podcast EP288: How Alphawave Semi Enables Next Generation Connectivity with Bharat Tailor

Alphawave Semi is in Play!

Podcast EP276: How Alphawave Semi is Fueling the Next Generation of AI Systems with Letizia Giuliano


Silicon Valley, à la Française
by Lauro Rizzatti on 07-14-2025 at 6:00 am

Since the fall of the Roman Empire, France has played a defining role in shaping Western civilization. In the 9th century, Charlemagne—a Frank—united much of Europe under one rule, leaving behind a legacy so profound he is still remembered as the “Father of Europe.” While Italy ignited the Renaissance, it was 16th-century France that carried its torch across the continent, elevating the arts and laying the groundwork for broader cultural transformation.

The Age of Enlightenment and the subsequent Age of Reason, both rooted in French intellectual movements, revolutionized thinking across philosophy, science, and governance. The French Revolution helped dismantle aristocratic privilege and paved the way for the rise of the bourgeoisie. Even Napoleon, despite the chaos he unleashed across Europe, gifted the world the Napoleonic Code—an enduring foundation for modern legal systems.

In every domain—from science to medicine, philosophy to administration—France has left a deep and lasting imprint on the modern world.

So what happens when French intellectual rigor meets the fast-paced ecosystem of Silicon Valley?

That question was answered—quietly but impactfully—by a French disruptor headquartered in Velizy, just outside Paris. With no presence in the United States and no built-in connections to the Valley’s tightly knit circles, the company, VSORA, could have easily been left behind. But instead, it charted its own path—and in doing so, emerged as the only viable European competitor in the AI processor landscape.

Silicon Valley is a unique environment. It’s a place where innovation buzzes in the air, where information flows freely—not in the form of stolen secrets, but through subtle signals: a parking lot that suddenly gets packed, signaling that something is happening, a recruiter’s call pitching new job openings, a whisper at a coffee shop. It’s a network-driven ecosystem that rewards proximity and speed. Simply being there can offer a six-month head start over companies based overseas, which rely on trade publications and conferences to stay informed.

This constant, near-invisible stream of information means that companies in the Valley evolve together—each pivot triggering a cascade of similar moves by competitors. Keeping up isn’t optional. It’s survival.

And yet, VSORA managed to not only keep up but lead—despite operating 5,000 miles away. How? By staying true to its own process of innovation. Without the distraction of Silicon Valley’s echo chamber, the engineers in France developed a breakthrough hardware architecture that accelerates AI inference in data centers and at the edge, and it is that architecture that sets them apart.

Ironically, it was their outsider status that became their strength.

However, recognizing the importance of proximity to the U.S. market and the advantages of the Valley’s information network, VSORA plans to open a design center in Silicon Valley, giving the company a critical foothold in the region while keeping its core engineering in France.

The result: a hybrid model that leverages the best of both worlds.

But managing transatlantic teams comes with challenges. With a nine-hour time difference, coordinating workflows demands more than just good intentions. By adopting a range of collaborative tools, such as instant messaging, conference calls for complex discussions, wikis for shared documentation, and periodic in-person meetings, VSORA plans to strengthen team cohesion and align strategic goals.

Technology can bridge time zones, but it cannot replace trust and shared purpose. And it certainly can’t replicate the magic of Silicon Valley—unless you know how to channel it from afar.

VSORA’s story shows that with discipline, vision, and a bit of French-inspired finesse, it’s possible not just to compete with Silicon Valley from the outside—but to thrive, lead, and even shape it.

Contact VSORA

Also Read:

The Journey of Interface Protocols: Adoption and Validation of Interface Protocols – Part 2 of 2

The Journey of Interface Protocols: The Evolution of Interface Protocols – Part 1 of 2

Beyond the Memory Wall: Unleashing Bandwidth and Crushing Latency

The Double-Edged Sword of AI Processors: Batch Sizes, Token Rates, and the Hardware Hurdles in Large Language Model Processing