Synopsys is Paving the Way for Success with 112G SerDes and Beyond
by Mike Gianfagna on 05-08-2024 at 10:00 am

Data communication speeds continue to grow, and new encoding schemes such as PAM-4 are helping achieve faster throughput. Compared to the traditional NRZ scheme, PAM-4 carries twice the data per symbol by using four signal levels versus the two used in NRZ. The diagram at the top of this post shows how data density is increased. With progress comes challenges: PAM-4 has a worse signal-to-noise ratio, and reflections are also much worse. More expensive equipment is required, and even then there are challenges in establishing a link. A couple of high-profile events recently showcased what Synopsys is doing to address these challenges. The capabilities of its IP were demonstrated, as well as how a reference design from Synopsys is helping with interoperability across the ecosystem. Let’s take a closer look to see how Synopsys is paving the way for success with 112G SerDes and beyond.
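To put rough numbers on that tradeoff (standard signaling arithmetic, not figures specific to any Synopsys product):

```latex
% PAM-4 vs. NRZ, illustrative arithmetic only
\text{bits per symbol: } \log_2 4 = 2 \ \text{(PAM-4)} \quad \text{vs.} \quad \log_2 2 = 1 \ \text{(NRZ)}
\qquad
\text{eye height: } \tfrac{V_{pp}}{3} \ \text{per eye} \;\Rightarrow\; \text{SNR penalty} \approx 20\log_{10}3 \approx 9.5\ \text{dB}
```

At the same symbol rate, PAM-4 doubles the throughput but splits the vertical eye into thirds, giving up roughly 9.5 dB of signal-to-noise margin before equalization and FEC. That penalty is exactly why the auto-negotiation, link training, and equalization discussed below matter so much.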

Webinar Presentation

On February 20 a webinar was held to explore how to get the best performance out of a 112G SerDes solution. The challenges of PAM-4 were discussed, along with the importance of auto-negotiation (AN) and link training (LT) in addressing those challenges. Three knowledgeable people participated in this webinar, as shown below.

Madhumita Sanyal from Synopsys began the webinar with a presentation. She discussed the growing use of high-speed Ethernet in many applications and how 112G SerDes is enabling 400G and 800G Ethernet. On this topic, she discussed the role of PAM-4 modulation as an enabler and some of the design challenges PAM-4 presents.

Madhumita went on to discuss the importance of auto-negotiation and link training in addressing the challenges presented by PAM-4, citing several design examples. The approach is applicable to copper cables and backplanes. She explained that PAM-4 impairments are partly compensated through Tx equalization after auto-negotiation is completed. She also pointed out that these techniques apply to all rates defined in IEEE 802.3 Clause 73 (ranging from 1G to 200G) and to the Ethernet Consortium 400GBASE-CR8/KR8 specification. The figure below summarizes the discussion.

Auto Negotiation & Link Training are Essential

The webinar also included a very useful interoperability demo. I’ll get to that in a moment, but first some other news from the trade show floor is quite relevant.

News From the Trade Show Floor

At the 49th European Conference on Optical Communications (ECOC) last October, Madhumita Sanyal presented impressive details of the impact the Synopsys 800G Ethernet subsystem’s link-level interoperability is having across the ecosystem.

The 800G demos at ECOC’23 were done in the Ethernet Alliance booth. They used eight lanes of Synopsys long-reach (LR) 112G Ethernet PHY IP and 800G MAC/PCS IP interoperating with exercisers, analyzers, and third-party 800G evaluation boards over DAC channels, showing link-up, packet transmit/receive, FEC histograms, and other performance metrics.

800G Ethernet throughput was shown with zero errors across several demonstrations integrating Synopsys products with ecosystem partners. This interoperability success highlights the possibilities for future design collaboration across the ecosystem. The figure below illustrates one HPC data center rack-like demo configuration.

HPC Data Center Rack Like Demo

You can view a complete summary of this work from the show floor at ECOC with this short video.

The Webinar Demo

A detailed demonstration of interoperability based on the Synopsys 800G SS evaluation board was featured during the recent webinar as well.

Martin Qvist Olsen of Teledyne LeCroy set up the first demonstration. The demo configuration is summarized below: a Teledyne LeCroy Xena Z800 Freya tester was connected to a Synopsys 800G SS evaluation board, with a Teledyne LeCroy SierraNet M1288 in between as a probe.

The first demo showed how the Xena and Synopsys devices performed auto-negotiation. Details of the operation of each device were shown by examining UI outputs to explain how the results were achieved. The protocols and standards used were also discussed, along with the details of tuning and the associated challenges.

Martin then moved to the next demonstration, which focused on how the Xena and Synopsys devices initiate and perform automatic link training. The ways link performance is improved were also covered, and details of FEC and BER statistics were shown. The impact of the relevant standards, and what those standards cover, was also discussed.

The third demonstration was also presented by Martin. Here, the detailed steps of how link training is achieved were examined, including the steps involved and the associated presets. How the process works and how optimal results can be achieved was reviewed in depth with many examples.

The next demonstration was presented by Craig Foster of Teledyne LeCroy. Craig focused on the various ways to implement link training: first how the Xena device performs link training on its own, then how the Synopsys device implements link training on its own, and finally how the two devices work together to implement link training.

Craig covered a lot of detail regarding how each device implements link training and how they work together. Attendees were able to view, step by step, how each device works.

Martin presented the final demonstration, in which the Xena and Synopsys devices ran a 1x800G channel. The parameters used to implement the link were shown in detail, along with performance statistics.

To Learn More

This masterclass webinar is rich in technical information, backed up with practical demonstrations to show the details. If high-speed communications is important to you, I highly recommend you take a look. You can access the webinar replay here.

Also, as mentioned, you can view a summary of the Synopsys interoperability work from the show floor at ECOC here. Synopsys provides a complete Ethernet IP solution for up to 1.6Tbps, including MAC, PCS, PHY, VIP and security.

And that’s how Synopsys is paving the way for success with 112G SerDes and beyond.

 


Oops, we did it again! Memory Companies Investment Strategy
by Claus Aasholm on 05-08-2024 at 8:00 am

We are in the semiconductor market phase where everybody disagrees on what is going on. The market is up; the market is down. Mobile phones are up… oh no, now they are down. The PC market is up—oh no, we need to wait until we can get an AI PC. Inflation is high—the consumer is not buying.

For us in the industry, the 13-week financial analyst cycle is the entire universe – time did not exist before this quarter, and it will cease to exist after it.

Sell, sell, sell, pull, push, cheat, steal, fake, blame! Anything to make the guidance number. If you are in the hamster wheel, there is no oxygen, and you lose the overview.

The bottom line is that it does not matter, and the quarterly cycles are a (sometimes expensive) distraction from achieving long-term business success. The semiconductor business does not rotate around a quarter or a financial year. It is orientated around a four-year cycle, as can be seen below.

The growth rates are high or negative – rarely moderate. The semiconductor industry is briefly in supply/demand alignment every second year. It is about as briefly aligned as two high-speed trains passing each other.

One of the primary reasons for this cyclical behaviour is the long capital investment cycle for semiconductor manufacturing. A capacity expansion might take 2-3 quarters, while a new fab takes 2-3 years to construct and fill. This leads to the first law regarding semiconductor manufacturing investments:

The first law of Semiconductor Manufacturing Investments:

“You need to invest the most when you have the least.” The quarterly hamster wheel and the pressure to deliver to analysts and share squatters (there are few real owners) make this law incredibly hard to follow. Failing to do so leads to the second law of semiconductor manufacturing investments:

The second law of Semiconductor manufacturing investments:

“If you fail to abide by the first law of semiconductor manufacturing investments, new capacity will come online when you need it the least.”

So, a high-level and long-term strategy to create sustainable growth and profitability should be possible. That is, until you learn about the third law of semiconductor manufacturing investments:

The third law of Semiconductor Manufacturing Investments:

All semiconductor cycles are different.
All semiconductor cycles are the same.

Only quantum engineers understand the third law of semiconductor manufacturing investments. I will try to explain it anyway.

The upcycle is easy to explain. Everybody, repeat after me: We are doing a great job and taking market share. Everybody is taking market share.

The down cycle is more complex. Every time I have faced a downcycle (I don’t want to reveal my age here, but it is more than a couple), I have heard new arguments about why this cycle is different: the Dot-Com Crash, the Financial Crisis, the Asia Crisis, Covid, and so forth. This makes companies treat this well-established cycle as something new that we might never recover from – so we have to be careful.

Once the cycle is behind us, we see it was the same as the others, plus or minus a brick.

Collectively, we never learn.

But THIS cycle is different!

The memory markets and the three leading companies, Samsung, SK Hynix, and Micron, make the semiconductor industry even more cyclical. The market is more commodity-orientated than the rest of the industry; prices vary greatly depending on the cycle’s timing.

When the market is near a peak, memory prices are so elevated that smartphone and PC prices become prohibitive, and consumers stop buying. This propels the industry into a down-cycle.

At the bottom, where memory is sold at a loss, PCs and smartphones are so cheap that a replacement cycle initiates the next semiconductor upcycle.

Other factors impact the Semiconductor market over time (AI and GPUs are currently pushing the envelope), but the memory cycle has the most potent effect.

The combined memory revenue of the three leading companies can be seen below.

This shows the massive difference in memory companies’ profitability depending on the timing of the semiconductor cycle. In particular, the last downcycle was nasty.

This significantly impacted the memory giants’ combined capital expenditure, as seen below.

During the last down cycle, CapEx fell below the level needed to service and sustain existing capacity (maintenance CapEx), and there have been indications that production capacity has declined since Q3-22.

“You need to invest the most when you have the least.” The memory companies have failed to comply with the first law of Semiconductor Manufacturing Investments.

The lack of investment also comes at a bad time, as the memory supply chain is changing.

Large data centre processing companies now need High Bandwidth Memory (HBM) for their AI systems. HBM needs 2x the wafer capacity per bit compared with regular DRAM. As HBM is expected to be 10% of the total DRAM market in 2025, capacity needs to increase by roughly that amount before there is room for average upcycle growth, as the arithmetic below illustrates.
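A rough reading of that claim (treating the 10% as a share of DRAM bit demand, which is an assumption on my part, and ignoring node migration effects):

```latex
% Back-of-the-envelope: HBM at 10% of bits, consuming 2x wafer capacity per bit
\text{relative wafer demand} \approx 0.9 \times 1 + 0.1 \times 2 = 1.1
```

In other words, roughly 10% more wafer capacity is needed just to hold bit output flat, before any of the normal upcycle demand growth is served.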

As the processing companies negotiate long-term contracts directly with the memory companies, they will get the capacity first. This can already be seen in general DRAM pricing, which is rising.

Two potential AI upgrade cycles, for smartphones and PCs, might also require extra capacity.

The Memory Capacity Outlook

As there are concerns that the large memory companies have underinvested, it is worth taking a deeper dive into the capacity situation.

When analysing memory capacity, we investigate the historic Expansion CapEx (investment beyond maintenance) and add our projection of future expansion capex, as seen below.

Adding the semiconductor cycles as the backdrop reveals an investment void in the ’22 upcycle compared to the ’18 upcycle. Companies were likely holding back capital investments to take advantage of the Chips Act, signed in August 2022. By then, all memory companies were deep in the red, and only Samsung kept investing above the maintenance level to support its Taylor fab expansion.

The memory market has a lot of inventory, but it looks like it is drying up, which will reveal the severity of the capacity shortage. We will know very soon.

Samsung was the only company investing above the maintenance level after the Chips Act, which supported its Taylor expansion, expected to come online in late 2024. Kudos to Samsung for timing Taylor ideally to open in the middle of the upcycle. This is the only significant capacity increase the memory market will benefit from during this upcycle.

SK Hynix’s Cheongju expansion will go online precisely at the projected ’26 peak, creating the next downcycle if it is not already underway.

Micron’s Boise expansion will likely go online when the market is deep in the downturn, making it difficult to run profitably.

Semiconductor tool sales confirm that there has been no immediate response to this potential capacity shortage. An uptick in Q4-23 was not followed up in Q1-24, and the overall market level remains low.

I am certainly not here to criticise the leaders of memory companies. It is a crazy business, not for the faint of heart. When Elon Musk talked about eating glass and staring into the abyss, he could just as well have been describing the memory downcycle.

However, if the logic in this post were applied, there might be less panic.

We have presented the facts and analysis and expect a challenging period for the memory market. However, while we trust our facts, our analysis might be misguided, and other experts can add colour to the discussion.

The Semiconductor industry is too large and complex for anybody to know it all.

Also Read:

Nvidia Sells while Intel Tells

Real men have fabs!


An Enduring Growth Challenge for Formal Verification
by Bernard Murphy on 05-08-2024 at 6:00 am

A high-quality verification campaign that includes methods able to absolutely prove the correctness of critical design behaviors, as a complement to mainstream dynamic verification? At first glance this should be a no-brainer. Formal verification offers that option, and formal adoption has been growing steadily; it is now used in around 30-35% of designs per the Siemens/Wilson Research Group survey. However, anecdotal evidence from formal verification service companies such as Axiomise suggests that the real benefit extracted from formal methods still falls significantly short of the potential these methods can offer. A discussion with Dr Ashish Darbari, CEO of Axiomise, prompted this analysis of that gap and how it can be closed.

What’s going on?

About half of reported usage is attributable to support apps which make canned checks more accessible to non-expert users. The balance comes in straight property checking, a direct application of formal engines for which you must define all required assertions, constraints, and other factors in constructing proofs. Apps offer a bounded set of checks; property checking offers unbounded options to test whatever behavior you want to prove.

The complexity of formulating an efficient property check is another matter. Like any other problem in CS, the complexity can span from relatively simple, to difficult, to practically insoluble. By way of example, consider a check for deadlocks. In a single finite state machine (FSM), such a check is sufficiently easy to characterize that it is included in standard apps. Checking for possible deadlocks in multiple interacting FSMs is more challenging to package because problem characterization is more complex and domain specific. Checking for deadlocks in a network on chip (NoC) is more challenging still given the span, topology, and size of a typical NoC. Cross-sub-system proofs, or proofs of behavior under software constraints, are, I suspect, beyond the bounds of methods known today (without massive manual abstraction – I’d be happy to hear I’m wrong).
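To make the interacting-FSM case concrete, here is a deliberately tiny sketch of what such a check must establish: that no reachable joint state exists from which no transition can fire. The FSMs, events, and synchronization model below are invented for illustration; this is not Axiomise’s methodology or any real design, and a real property checker works exhaustively on the RTL state space rather than on hand-written dictionaries.

```python
from collections import deque

# Toy model: each FSM is (alphabet, transitions), where transitions maps
# (state, event) -> next state. FSMs synchronize on shared events: an event
# fires only if every FSM whose alphabet contains it can take it right now.
# A deadlock is a reachable joint state with no enabled event.
FSMS = [
    # FSM 0: a requester that raises "req" and then waits for "gnt"
    ({"req", "gnt"}, {("IDLE", "req"): "WAIT", ("WAIT", "gnt"): "IDLE"}),
    # FSM 1: a buggy arbiter that accepts "req" and releases, but never grants
    ({"req", "gnt", "rel"}, {("FREE", "req"): "BUSY", ("BUSY", "rel"): "FREE"}),
]
START = ("IDLE", "FREE")

def enabled_moves(joint):
    """Return every (event, next_joint_state) pair enabled in a joint state."""
    events = set().union(*(alphabet for alphabet, _ in FSMS))
    moves = []
    for ev in events:
        nxt, ok = list(joint), True
        for i, (alphabet, trans) in enumerate(FSMS):
            if ev in alphabet:                      # this FSM must participate
                if (joint[i], ev) in trans:
                    nxt[i] = trans[(joint[i], ev)]
                else:
                    ok = False                      # event blocked by FSM i
                    break
        if ok:
            moves.append((ev, tuple(nxt)))
    return moves

def find_deadlocks(start):
    """Breadth-first reachability over the product state space."""
    seen, frontier, deadlocks = {start}, deque([start]), []
    while frontier:
        joint = frontier.popleft()
        moves = enabled_moves(joint)
        if not moves:
            deadlocks.append(joint)
        for _, nxt in moves:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return deadlocks

print(find_deadlocks(START))
# [('WAIT', 'FREE')]: the requester waits forever for a grant that can never fire
```

Even this two-FSM toy shows why packaging the check is hard: the interesting state is only reachable through a particular interleaving, and the product state space grows multiplicatively as FSMs are added.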

Another complication is that while many argue you don’t need to be a mathematician to use these methods, effective formal attacks on complex problems still very much depend on finesse rather than brute force. You may not need a math degree, but you do need something of a mathematical, or at least a puzzle, mindset, constantly reinforced. I think this is why successful formal verification deployments run as separate teams. Dynamic verification teams also face difficult challenges, but of a different kind. It is difficult to see how one person could routinely switch between both domains and still excel in each.

In this light, outsourcing for complex formal verification objectives becomes inevitable, to specialists with concentrated and growing experience in that domain. Axiomise certainly seems to be benefiting from that demand, not just from small ventures but from major names among semiconductor and systems enterprises.

Why Axiomise?

Axiomise provide consulting and services, training, and an app they call formalISA for RISC-V formal verification. Apps of this type may ultimately add a high-margin revenue component to growth, though it appears today that clients prefer a full turnkey solution, for which formalISA underlies a value-added service.

Ashish and his CTO Neil Dunlop have extensive experience in formal methods. The rest of the technical team are mathematicians, physicists, and VLSI experts trained from scratch in Ashish’s approach to formal problem solving. This they have applied across a wide variety of subsystems and test objectives. One very topical application is for RISC-V cores.

Extensibility and multiple sources for cores are key strengths for RISC-V but also come with a weakness I have mentioned before. Arm spends between $100M and $150M per year in verification; Intel and AMD probably spend much more. They have all built up decades of legacy verification assets, spanning many possible CPU variants and optimizations. To rise to comparable verification quality on an unmodified RISC-V core is a major task, given a staggering range of test scenarios which must be covered against a wide range of possible architecture optimizations. Add in a custom instruction or two and the verification task is amplified even further.

Formal methods are the ideal way to prove correctness in such cases, assuming an appropriate level of finesse in the proofs. Axiomise use their formalISA app to run push-button proofs of correctness on 32-bit and 64-bit implementations, and they have production-ready implementations for the RV32IMC and RV64IMC instruction sets. Examples of problems found include a bug in RISC-V specification v2.2 and over 70 deadlocks in the previously verified zeroriscy core. The app found new bugs in the ibex core and architectural issues with the LOAD instruction in zeroriscy. It found 30 bugs in WARP-V (2-stage, 4-stage, and 6-stage in-order cores) and in the cv32e40p core from OpenHW. Axiomise has also verified the out-of-order CVA6 core using formalISA. Details of these bugs are available on GitHub.

As usual, the development involved in these tests is a one-time effort. Once in place, regressions can be run hands-free. Ashish tells me that with formalISA, diagnosis of any detected problem is also simplified.

Takeaway

I’d like to believe that in time, more of these kinds of tests can be “app-ified”, extending the range of testing that can be performed in-house without service support. Today, building such tests requires a special level of formal expertise often only available in long-established centers of excellence and in organizations such as Axiomise. Since other big semiconductor and systems companies are happy to let Axiomise consult and train their teams to better corral these complex problems, you might want to check them out when you face hard proof problems.

You can learn more about Axiomise HERE, the formalISA studio HERE and the RISC-V studio HERE.

Also Read:

2024 Outlook with Laura Long of Axiomise

RISC-V Summit Buzz – Axiomise Accelerates RISC-V Designs with Next Generation formalISA®

WEBINAR: The Power of Formal Verification: From flops to billion-gate designs


Rigid-flex PCB Design Challenges
by Daniel Payne on 05-07-2024 at 10:00 am

From Zion Research I learned that the flexible electronics market was about $13.2B in 2021 and growing at a CAGR of 21%, which is impressive. Several factors make rigid-flex circuits so attractive: space efficiency, reduced weight, enhanced reliability, improved signal integrity, streamlined assembly, design flexibility, cost savings, miniaturization, and better durability. I learned more by reading a new eBook online.

Wearable products like fitness trackers, smart watches, and AR glasses have very limited space, so they benefit from space efficiency. Traditional PCBs use connectors and cables that add to product weight, so rigid-flex provides weight savings in markets like automotive and aerospace. Connectors and cables also contribute to reliability issues, while rigid-flex designs are engineered to withstand bending and movement. With fewer electrical discontinuities, rigid-flex circuits exhibit improved signal integrity and impedance control, a benefit for high-speed and high-frequency products. With fewer components to connect, rigid-flex offers a simpler assembly process with lower labor and material costs. Engineers can also design products with new shapes and configurations using rigid-flex that are not possible with rigid PCBs, enabling new categories of products.

PCB design tools that support rigid-flex need to manage layer stackups in both the rigid and flexible regions. Bend areas should be easily defined and visualized, and the tools must support bend radius and fold lines. Components should be placed quickly in both the rigid and flexible areas, with 3D visualization. Routing tools are required to support trace routing through the flex areas while maintaining signal integrity and allowing sketch or manual routing. Issues like excessive bending or trace spacing violations need to be found during Design Rule Checking (DRC). Both thermal and signal integrity simulations must be performed to ensure reliable operation. An accurate materials library should include flexible materials and support both rigid and flexible substrates.

Rigid-flex stackups

PCB designers use 3D visualization to best understand the entire rigid-flex PCB, along with bending simulations. An ideal EDA tool automatically generates fabrication and assembly drawings, detailing the flexible regions. Mechanical and electrical designers collaborate through import and export features. The best manufacturing yield is ensured when Design For Manufacturing (DFM) checks specific to rigid-flex are run. Using library components designed for flex circuits makes for quicker work.

Designs require trace, plane, cover layers, bend area, bend radius, vias and stiffener support

Rigid-flex technology is used in a wide range of markets as demands are growing for miniaturization, like for wearable smart products, automotive electronics, aerospace and defense, medical devices, consumer electronics, and IoT.

Markets for flexible electronics

PADS Professional

Siemens offers its PADS Professional software as a solution to the rigid-flex PCB design challenges discussed so far. The approach with PADS Professional is to use correct-by-construction technology, which enables your design team to create an optimal form factor with high quality in the shortest time frame.

PADS Professional Design

With this EDA tool, PCB designers can define unique stack-up types, specify bend areas, and use flex-aware placement and routing to get high-quality results. Both 3D bending and 3D DRC are supported, eliminating surprises in fabrication. Signal integrity and power integrity are validated quickly through simulations. DFM validation understands rigid-flex, and designs are readied for NPI hand-off.

Summary

So much of our consumer electronics, automobiles and aircraft are already using rigid-flex technology and the market projections show a healthy growth for years to come. There are many challenges to adopting a rigid-flex PCB design flow, so you really want to adopt technology that is well proven over many years, designs and industries. With PADS Professional there is solid technology to address each of the challenges of rigid-flex PCB design.

Read the complete 7-page eBook online at Siemens.

Related Blogs


Accelerate SoC Design: DIY, FPGA Boards & Commercial Prototyping Solutions (I)
by Daniel Nenni on 05-07-2024 at 6:00 am

In the early days, chip designers had to rely on time-consuming simulation results or wait for engineering samples to validate whether a design meets its intended objectives. With the increasing complexity of SoC designs, the need to accelerate software development has also risen to ensure timely market entry.

In the pursuit of accelerating the verification process, FPGA prototyping has become an integral SoC development tool. FPGA prototyping capitalizes on the reconfigurability of FPGA logic and its flexible IOs, enabling a cost-effective and efficient means of validating designs when compared to simulation and emulation.

FPGA prototyping tools are generally available from three sources: FPGA boards designed in-house by IC design companies (Build Your Own, BYO), evaluation boards sold by FPGA vendors, and commercial prototyping systems developed by EDA companies. With the rise of commercial solutions, FPGA prototyping now plays an important role in chip design and software development. The table below compares the three types of FPGA prototyping.

| | BYO | FPGA Development Board | Commercial Prototyping System |
|---|---|---|---|
| Design size | Suited for small and medium-sized designs (limitations on large designs) | Suited for small-scale designs, softcore development, specific protocol tasks, etc. | Suited for designs of various sizes |
| Reliability | Low | High | High |
| Performance | Medium | Low | High |
| Scalability | Low | Low | High |
| Software support / ease of use | Low | Low | High |
| I/O interface | Needs additional development work | Restricted | Wide range to choose from |
| Portability | Low | Low | High |
| Remote system management | Not provided | Not provided | Built-in |
| Technical support | Not supported | Limited support | Supported |
| Maintenance | Not supported | Limited support | Supported |
| Time-to-deployment | Low | Medium | High |
| Overall cost-effectiveness | Low | Medium | High |

While Do It Yourself (DIY) can provide a highly customizable, made-to-order solution, that path requires design expertise spanning FPGA system development, PCB design, and manufacturing, making the process manageable only if the design is small to medium-sized or a large team is dedicated to the project; the time and resources needed for iterative testing to ensure stability and reliability are also a challenge. This challenge is not an issue for FPGA development boards from FPGA vendors, as the boards are made in bulk, but the capacity offered by development boards is generally more limited, the boards cannot be combined to scale, and the I/O count and I/O interface selections are often restricted.

In contrast, commercial prototyping systems, such as HAPS from Synopsys and Prodigy from S2C, excel in capacity, scalability, and ease of use. With high stability and quality support, commercial solutions provide an effective means to accelerate hardware verification and software development, while also enabling early customer engagements through functional SoC prototypes and demos.

Taking S2C’s solution as an example, the Prodigy FPGA prototyping family offers a wide choice of models in single, dual, quad, and octa FPGA configurations and a large selection of daughter cards for fast time-to-deployment. With prototyping capacities ranging from small and medium sizes for IP verification and software development, to hyperscale designs targeting AI and 5G that need multiple systems cascaded together with partitioning support, S2C has it covered.

S2C’s Prodigy Prototyping Solution

In the next post, we’ll dive deeper and explore the common problems encountered during functional verification and how S2C’s tools can be used to overcome these challenges.

Also Read:

Enhancing the RISC-V Ecosystem with S2C Prototyping Solution

2024 Outlook with Toshio Nakama of S2C

Prototyping Chiplets from the Desktop!


The latest ideas on time-sensitive networking for aerospace
by Don Dingee on 05-06-2024 at 10:00 am

Aircraft domain requirements for time sensitive networking in aerospace

Time-sensitive networking for aerospace and defense applications is receiving new attention as a new crop of standards and profiles approaches formal release, anticipated before the end of 2024. CAST, partnering with Fraunhofer IPMS, has developed a suite of configurable IP for time-sensitive networking (TSN) applications, with an endpoint IP core now running at 10Gbps and switched endpoint and multiport switch IP cores available at 1Gbps and extending to 10Gbps soon. They have just released a short white paper on TSN in aerospace applications, providing an overview of TSN standards and how they map to aerospace network architectures.

Standards come, standards go, but Ethernet keeps getting better

A couple of decades ago, at a naval installation far, far away – NSWC Dahlgren, Virginia, to be exact – I had the privilege of accompanying Motorola board-level computer architects in our customer research conversation with Dahlgren’s system architects. The topic was high-bandwidth backplane communication, with the premise of moving from the parallel VMEbus to a faster serial interface.

The Dahlgren folks politely listened to a presentation overviewing throughput and latency differences between Ethernet, InfiniBand, and RapidIO. Ethernet was just transitioning to 1Gbit speeds. Our architects leaned toward RapidIO for its faster throughput with low latency but were open to considering input from real-world customers. When the senior Dahlgren scientist started speaking after the last slide, he gave an answer that stuck with me all these years.

His well-entrenched position was that niche use cases notwithstanding, Ethernet would always win by satisfying more system interoperability requirements and positioning deployed systems to survive in long life-cycle applications via upgrades. Just as Token Ring and FDDI appeared and were subsequently displaced by Ethernet, he projected that InfiniBand and RapidIO would eventually fall to Ethernet as standards work improved performance.

The discussion is still relevant today. Real-time applications demand determinism and reliability, especially in mission-critical contexts like aerospace and defense. Enterprise Ethernet technology provides robust interoperability and throughput, but with occasional non-deterministic behavior. It only takes one late-arriving packet to throw off an application depending on that data within a fixed time window. Bandwidth covers up many sins, but as applications demand more data from more sources in large systems, the margin for error shrinks.

Addressing four TSN concepts and common aerospace profiles

Quoting the white paper: “TSN consists of a set of standards that extend Ethernet network communication with determinism and real-time transmission.” IEEE TSN standards address four key concepts not available in enterprise-class Ethernet:

  • Time synchronization: establishes a common perception of time across all devices in a network, building on concepts from IEEE 1588 and adding resilience through multiple time domains.
  • Latency: establishes the idea of high-priority traffic versus best-effort traffic, applying time-aware and credit-based shaping (sketched after this list).
  • Resource management: provides protocols to set up switches, determine topology, and request and reserve network bandwidth.
  • Reliability: focuses on recovering from defective paths while minimizing redundant transmissions and protecting networks from propagating incorrect data.
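To make the latency item concrete, here is a minimal sketch of the time-aware shaping idea: the transmission cycle is divided into windows, and a traffic class may transmit only while its gate is open. The cycle, windows, and class numbers below are invented for illustration; they are not taken from any TSN profile or from the CAST/Fraunhofer IPMS cores.

```python
CYCLE_NS = 250_000                    # hypothetical 250 us scheduling cycle
GATE_CONTROL_LIST = [                 # (window duration in ns, classes whose gate is open)
    (50_000, {7}),                    # protected window for class 7 (time-critical traffic)
    (200_000, {0, 1, 2}),             # remainder shared by best-effort classes
]
assert sum(d for d, _ in GATE_CONTROL_LIST) == CYCLE_NS

def frame_tx_time_ns(frame_bytes, link_gbps=1.0):
    """Wire time of one frame, including 20 B of preamble and inter-packet gap."""
    return (frame_bytes + 20) * 8 / link_gbps    # at 1 Gb/s, one bit lasts 1 ns

def worst_case_gate_wait_ns(traffic_class):
    """Worst case: a frame arrives just as its window closes and must wait
    through every window in which its gate is shut before it can transmit."""
    return sum(d for d, open_classes in GATE_CONTROL_LIST
               if traffic_class not in open_classes)

print("256 B class-7 frame wire time:", frame_tx_time_ns(256), "ns")      # 2208.0 ns
print("class-7 worst-case gate wait:", worst_case_gate_wait_ns(7), "ns")  # 200000 ns
```

The point of the standards is that this worst case is known and bounded by configuration, rather than depending on whatever best-effort traffic happens to be in flight.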

Profiles are being developed for using TSN in specific industries, such as aerospace and defense, industrial automation, automotive, audio-video bridging systems, and others. Existing communication standards for aerospace, like ARINC-664, MIL-STD-1553, and SpaceWire, fail to handle all of the above concepts that drove the design of TSN.

CAST/Fraunhofer IPMS offer a concise table of domains and key requirements for networking, noting that a big problem is isolated network islands using various technologies inside a large system. They propose TSN as a unifying network architecture that can meet all the requirements with less expensive components and simpler cabling.

The paper concludes with a brief overview of the CAST/Fraunhofer IPMS TSN IP cores available in RTL source or FPGA-optimized netlists, describing features designed for real-time networking with determinism and low latency.

To get the whole story on time-sensitive networking in aerospace and defense with more detail on the emerging IEEE TSN standards and profiles and aerospace network requirements, download the CAST/Fraunhofer IPMS white paper:

White Paper — Time Sensitive Networking for Aerospace


Analog Bits Continues to Dominate Mixed Signal IP at the TSMC Technology Symposium
by Mike Gianfagna on 05-06-2024 at 6:00 am

The recent TSMC Technology Symposium in the Bay Area showcased the company’s leadership in areas such as solution platforms, advanced and specialty technologies, 3D enablement and manufacturing excellence. As always, the TSMC ecosystem was an important part of the story as well and that topic is the subject of this post. Analog Bits came to the event with three very strong demonstrations of enabling IP on multiple fronts. Let’s examine how Analog Bits continues to dominate mixed signal IP at the TSMC Technology Symposium.

Demo One – New LDO, High Accuracy PVT Sensors, High Performance Clocks, Droop Detectors, and more in TSMC N3P

As more designs move to multicore architectures, managing power for all those cores becomes important. The new LDO macro can be scaled, arrayed, and shared adjacent to CPU cores while simultaneously monitoring power supply health. With Analog Bits’ detector macros, power can be balanced in real time. Mahesh Tirupattur, Executive Vice President at Analog Bits, said, “Just as our PLLs maintain clocking stability, we are now able to offer IPs that maintain power integrity in real time.”

Features of the new LDO macro include:

  • Integrated voltage reference for precision stand-alone operation
  • Easy to integrate, use, and configure with no additional components or special power requirements
  • Scalable for multiple output currents
  • Programmable output level
  • Trimmable
  • Implemented with Analog Bits’ proprietary architecture
  • Requires no additional on-chip macros, minimizing power consumption

Taking a look at one more IP block for power management, Analog Bits’ Droop Detector addresses SoC power supply and other voltage droop monitoring needs. The Droop Detector macro includes an internal bandgap style voltage reference circuit which is used as a trimmed reference to compare the sampled input voltage against.

The part is synchronous, with a latched output. Only when the monitored voltage input has exceeded a user-selected voltage level will the Droop Detector output signal indicate a violation.
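As a behavioral illustration only (this is not Analog Bits’ implementation, and treating the user-selected level as a droop magnitude below the trimmed reference is an assumption made for this sketch), the described operation amounts to a clocked comparison with a sticky output flag:

```python
class DroopDetectorModel:
    """Toy behavioral model of a synchronous, latched droop detector."""
    def __init__(self, v_nominal=0.75, droop_threshold=0.05):
        self.v_nominal = v_nominal        # trimmed reference, in volts (assumed value)
        self.threshold = droop_threshold  # user-selected droop level, in volts (assumed value)
        self.violation_latched = False    # latched output flag

    def clock(self, v_sampled):
        """Evaluate once per clock edge; the output stays latched once asserted."""
        if (self.v_nominal - v_sampled) > self.threshold:
            self.violation_latched = True
        return self.violation_latched

det = DroopDetectorModel()
trace = [0.75, 0.74, 0.69, 0.73]          # a 60 mV droop at the third sample
print([det.clock(v) for v in trace])      # [False, False, True, True]
```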

Below is a block diagram of an implementation. The composite Droop Detector comprises a primary Droop Detector, which includes a bandgap, plus additional Droop Detectors if needed by the application and which connect by abutment.

Droop Detector Block Diagram

Demo Two – Patented Pinless PLL’s and Sensors in TSMC N3, N4 and N5

As discussed in this post, for gate-all-around architectures there will be only one gate oxide thickness available, supporting the core voltage of the chip. Other oxide thicknesses to support higher voltages are simply no longer available. In this scenario, the Pinless Technology invented by Analog Bits becomes even more critical for migration below 3nm, as all of the pinless IP works directly from the core voltage.

Examining the Pinless PVT Sensor at TSMC N5 and N3, this device provides full analog process, voltage, and temperature measurements with no external pin access required, running off the standard core power supply. This approach delivers many benefits, including:

  • No on-chip routing of the analog power supply
  • No chip bumps
  • No package traces or pins
  • No PCB power filters

For voltage measurements, the device delivers excellent linearity, as shown in the “Pinless PVT Voltage Linearity” diagram.

Demo Three – Automotive Grade SERDES, PLL’s, Sensors, and IOs in TSMC N5A

As the electronic content in automobiles continues to increase, the need for a complete library of IP that meets the stringent requirements of this operating environment becomes more important. In this demo, Analog Bits showcased a wide range of IP that meets automotive requirements on the TSMC N5A process.

I’ll take a look at Analog Bits’ Wide Range PLL.  This IP addresses a large portfolio of applications, ranging from simple clock de-skew and non-integer clock multiplication to programmable clock synthesis for multi-clock generation.  This IP is designed for AEC-Q100 Automotive Grade 2 operation.

The PLL macro is implemented in Analog Bits’ proprietary architecture that uses core and IO devices. To minimize noise coupling and maximize ease of use, the PLL incorporates a proprietary ESD structure, proven over several generations of processes. Eliminating bandgaps and integrating all on-chip components, such as capacitors and the ESD structure, significantly helps jitter performance and reduces standby power.
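As a generic reminder of what clock synthesis and non-integer multiplication involve (textbook PLL arithmetic with made-up values, not the divider ranges or architecture of the Analog Bits macro), the output frequency of a feedback-divider PLL is set by its reference, feedback, and post dividers:

```latex
% Generic integer-N synthesis relation; example values are assumed for illustration
f_{out} = f_{ref}\cdot\frac{M}{N\cdot P},
\qquad\text{e.g. } f_{ref}=100\ \text{MHz},\ N=4,\ M=96,\ P=2
\;\Rightarrow\; f_{out}=1200\ \text{MHz}
```

Fractional-N variants make the effective feedback ratio non-integer, which is how non-integer clock multiplication is achieved.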

The diagram below shows the block diagram for this IP.

PLL Block Diagram

Stepping back a bit, the figure below shows the various Analog Bits IP showcased in the N3P test chip demo at the TSMC Technology Symposium.

N3P Test Chip Demo

Executive Perspective

Mahesh Tirupattur

I had the opportunity to chat with Mahesh Tirupattur to get his comments on the recent announcements at the TSMC Technology Symposium. He said:

“The Analog Bits team is always innovating leading edge novelty IP solutions to solve customer design challenges on the latest processes.  Our mission is to enable integration of many off-chip components that reside on the board to on-die and soon on-chiplets. The benefits are significant by enabling embedded clocks, PMIC, LDO on-die – reduced form factor, costs, and improved performance with lower power. What is not to like about this? Our approach sets us apart in the marketplace as we truly add value by amalgamating system knowledge with leading edge mixed signal designs in advanced processes to enable new AI architectures.”

To Learn More

All of the Analog Bits demos from the TSMC Technology Symposium are now available to see online. If you missed the event, you can catch up on the Analog Bits demos here. You can also see all the TSMC processes Analog Bits supports here. It’s quite a long list. And that’s how Analog Bits continues to dominate mixed signal IP at the TSMC Technology Symposium.

 


Why NA is Not Relevant to Resolution in EUV Lithography
by Fred Chen on 05-05-2024 at 8:00 am

The latest significant development in EUV lithography technology is the arrival of High-NA systems. Theoretically, by increasing the numerical aperture, or NA, from 0.33 to 0.55, the absolute minimum half-pitch is reduced by 40%, from 10 nm to 6 nm. However, for EUV systems, we need to recognize that the EUV light (consisting of photons) is ionizing, i.e., it releases photoelectrons in absorbing materials. A 92 eV EUV photon, once absorbed, kicks off a ~70-80 eV photoelectron which gradually deposits most of the energy of the original photon [1]. The image information originally in the EUV photon density is replaced by the final migrated photoelectron density. The difference between minimum and maximum resist exposure is defined by where all the photoelectrons finally reach a particular energy cutoff. We also need to add the photoelectron number randomness, which stems from the photon absorption randomness. The resulting stochastic effects lead to edge roughness, unpredictable edge position shifts, and even defects. All these considerations taken together require us to revisit the actual practical resolution for EUV lithography.
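For reference, the 40% figure follows from the standard Rayleigh scaling with the theoretical minimum k1 of 0.25 (a textbook relation, not a claim about any particular scanner):

```latex
HP_{min} = k_1\frac{\lambda}{NA},\qquad \lambda = 13.5\ \text{nm},\ k_1 = 0.25
\;\Rightarrow\; HP_{min}(0.33\,NA)\approx 10.2\ \text{nm},\quad HP_{min}(0.55\,NA)\approx 6.1\ \text{nm}
```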

Photoelectron Model details

EUV light arriving at the wafer is a mixture of two components, one polarized parallel to the plane of incidence (TM), and one polarized perpendicular to the plane of incidence (TE). The photoelectron is emitted predominantly along the direction of polarization [2]. Purely unpolarized light should be a 50-50% mixture, but we may expect some departure because the mirrors in the EUV system may reflect near the Brewster angle [3]. With regard to lines being imaged, the photoelectrons along the TE polarization move along the lines, while the photoelectrons along the TM polarization move perpendicular to the lines. It is only the latter which degrades the image. Photoelectrons moving laterally effectively shift the image. Figure 1 shows the relative probability for a photoelectron to migrate a given distance to a 3 eV cutoff. This cutoff corresponds to the resist thickness loss following exposure and development [1].

Figure 1. Probability density for EUV photoelectron travel distance for an open-source resist, to a 3 eV energy cutoff [1].

The resist exposure at a given point will be affected by photoelectrons from a given distance away, weighted by the probability density for that distance, only for the TM case. The TE portion is taken to be unaffected by photoelectron migration as the photoelectron travels along the lines [3].
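A compact way to write that weighting (my simplified restatement of the treatment in [3], with P the photoelectron spread probability density of Figure 1 acting only across the lines) is:

```latex
I_{exposed}(x) \;\propto\; I_{TE}(x) \;+\; \int I_{TM}(x')\,P(x-x')\,dx'
```

Only the TM contribution is blurred by the spread function; the TE contribution exposes the resist essentially where the photon was absorbed.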

Picturing Photoelectron Spread

In a previous study [3], it was found that as pitch decreased below 40 nm, the image contrast, indicated by the normalized image log-slope (NILS), would degrade due to the photoelectron spread. As a reference, the photoelectron spread at 40 nm pitch can be pictured in Figure 2. The absorbed photon dose is affected by shot noise, which directly affects the number of photoelectrons generated. At 5 mJ/cm2 absorbed, roughly what a 40 nm thick organic chemically amplified resist (CAR) absorbs at an incident dose of 30 mJ/cm2, the nominally unexposed region is penetrated by photoelectrons which can potentially print defects, while the nominally exposed region is riddled with gaps in photoelectron coverage which can print as breaks. Raising the dose reduces this stochastic severity.

Figure 2. Photoelectron spread for 20 nm half-pitch vs absorbed dose. The orange portion indicates where the photoelectron density exceeds the half-pitch printing threshold. The pixel size is 1/40 of the pitch.
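For a sense of scale (a back-of-the-envelope estimate on my part, not a number taken from the referenced simulations), the absorbed photon density behind these pictures follows directly from the photon energy:

```latex
E_{ph}=\frac{hc}{\lambda}=\frac{1239.8\ \text{eV}\cdot\text{nm}}{13.5\ \text{nm}}\approx 92\ \text{eV}\approx 1.47\times 10^{-17}\ \text{J},\qquad
1\ \text{mJ/cm}^2 = 10^{-17}\ \text{J/nm}^2 \approx 0.68\ \text{photons/nm}^2
```
```latex
5\ \text{mJ/cm}^2\ \text{absorbed} \;\Rightarrow\; \bar N \approx 3.4\ \text{photons/nm}^2,\qquad
\frac{\sigma_N}{\bar N}=\frac{1}{\sqrt{\bar N}}\approx 54\%\ \text{per nm}^2
```

With only a few absorbed photons per square nanometer, pixel-level photon counts fluctuate by tens of percent, which is why the higher-dose cases in Figures 2 and 3 look so much cleaner.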

When the pitch is increased to 50 nm (Figure 3), the photoelectrons do not appear to spread as far stochastically, especially at the higher dose. This is due to the increased contrast, i.e., separation between maximum and minimum photoelectron densities in the image.

Figure 3. Photoelectron spread for 25 nm half-pitch vs absorbed dose. The orange portion indicates where the photoelectron density exceeds the half-pitch printing threshold. The pixel size is 1/40 of the pitch.

Redirecting the Determination of EUV Lithography Resolution

The EUV photoelectron spread probability density function shown in Figure 1 leads to a practical resolution limit of ~50 nm pitch at ~30 mJ/cm2 and ~40 nm pitch at ~90 mJ/cm2 for a typical CAR. This is far above the expected resolution limit from 0.33 or 0.55 NA. The resolution limit should therefore not be primarily associated with the optics of the EUV system but should in fact be tied to photoelectron, as well as secondary electron, migration in the EUV resist. Moreover, the resolution is closely tied to the dose absorbed by the resist; a higher dose enables better resolution. This leads to a throughput tradeoff [4], requiring higher source power to compensate. The resolution of a given EUV resist must be characterized by calibrating low-energy electron scattering simulations [1,5] with resist thickness loss vs. electron dose measurements [1]. It must be kept in mind that while metal-containing resists are known for their enhanced EUV absorption [6], SnOx-based resists do not necessarily have an advantage in photoelectron spread distance over organic CARs [4]. Restrictions in elemental composition will prevent much deviation in the photoelectron spread function. As the resist will be the main determinant of EUV lithography resolution, less attention should be paid to the marketing of High-NA.

References

[1] A. Narasimhan et al., “What We Don’t Know About EUV Exposure Mechanisms,” J. Photopolym. Sci. and Tech. 30, 113 (2017).

[2] M. Kotera et al., “Extreme Ultraviolet Lithography Simulation by Tracing Photoelectron Trajectories in Resist,” Jap. J. Appl. Phys. 47, 4944 (2008).

[3] F. Chen, Resolution Limit From EUV Photoelectron Spread, 2024 https://www.youtube.com/watch?v=3BIGo9UsIEA

[4] H. J. Levinson, Jpn. J. Appl. Phys. 61 SD0803 (2022).

[5] P. L. Theofanis et al., “Modeling photon, electron, and chemical interactions in a model hafnium oxide nanocluster EUV photoresist,” Proc. SPIE 11323, 113230I (2020).

[6] http://euvlsymposium.lbl.gov/pdf/2015/Posters/P-RE-06_Fallica.pdf

This article first appeared in LinkedIn Pulse: Why NA is Not Relevant to Resolution in EUV Lithography

Also Read:

Intel High NA Adoption

Huawei’s and SMIC’s Requirement for 5nm Production: Improving Multipatterning Productivity

ASML- Soft revenues & Orders – But…China 49% – Memory Improving


Podcast EP221: The Importance of Design Robustness with Mayukh Bhattacharya
by Daniel Nenni on 05-03-2024 at 10:00 am

Dan is joined by Mayukh Bhattacharya, Executive Director of Engineering at Synopsys. Mayukh has been with Synopsys since 2003. For the first 14 years, he made many technical contributions to PrimeSim XA. Currently, he leads R&D teams for the PrimeSim Design Robustness and PrimeSim Custom Fault products. He was one of the early adopters of AI/ML in EDA. He led the development of a FastSPICE option tuner – Customizer – as a weekend hobby, which later became the inspiration behind the popular DSO.ai product. He has 11 granted (and 4 pending) patents, 7 journal papers, and 20 conference publications.

Dan explores the concept of design robustness with Mayukh. Design robustness is a measure of how sensitive a design is to variation – less sensitivity means a more robust design. For advanced nodes where there is significant potential for variation, design robustness becomes very important.

Mayukh explains the many dimensions of robustness, with a particular focus on memory design. He describes the methods required and how the Synopsys PrimeSim portfolio supports those methods. How AI fits into the process is also discussed, along with the benefits of finding problems early, the importance of adaptive flows, and the overall impact on reliability.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Harish Mandadi of AiFA Labs
by Daniel Nenni on 05-03-2024 at 6:00 am

Harish Mandadi is the CEO and Founder of AiFA Labs, a service-based IT company that provides best-in-class solutions for clients across various industries. With over 20 years of experience in IT sales and delivery, he brings a unique blend of entrepreneurial vision and hands-on expertise to the dynamic landscape of technology.

Tell us about your company?
Sure. So, AiFA Labs is an IT solutions company that guides clients through digital transformations as they update their business processes to incorporate generative AI, machine learning, optical character recognition, and other technologies.

In the past, we were a service-based company. However, we just launched our first product, Cerebro AI, on March 30th, 2024. It’s an all-in-one generative AI platform with more tools, features, and integrations than any other AI platform on the market. It will change the way companies do business and we are very proud of it.

What problems are you solving?
We are solving problems related to scalability, high overhead, and time to market. Cerebro has the ability to expand reach globally, reduce labor costs by 30-40%, and speed up content creation by 10x. One of Cerebro’s main features is SAP AI Code Assist, which automates the SAP ABAP development SDLC process and brings down the effort by 30 to 50% with the click of a button.

We also have an AI Prompt Marketplace, where users can buy and sell AI prompts to make their interactions with generative AI more efficient and effective. Knowledge AI collects all company data and trains AI based on it. It allows business users to interact with their business data in natural language. In total, Cerebro is 17 tech products in one and we expect that number to grow. It is truly a one-stop shop for everything AI.

What application areas are your strongest?
Our strongest application areas are IT, life sciences, consumer, marketing, customer service, HR, and education. We are hoping to break into law, entertainment, and a few other use cases.

What keeps your customers up at night?
I don’t know for sure, but I think missed opportunities keep our customers up at night. Experiencing increased demand without the means to meet client expectations is every business owner’s nightmare. With Cerebro, no customer inquiry goes unanswered, every hot topic is covered in print within a few hours, and software solutions are delivered in half the amount of time it usually takes.

What does the competitive landscape look like and how do you differentiate?

Right now, the competitive landscape is flooded with new generative AI products, and we expect to see many more of them come onto the market in the next few years. Most of them have a singular mode of operation, similar to ChatGPT or Gemini.

Our product is special because it incorporates all of the most popular large language models and allows users to choose which ones they use. It also integrates with Amazon AWS, Microsoft Azure, Google, SAP, and more. Cerebro possesses almost any AI functionality you can think of and then some. 

What new features/technology are you working on?
Our latest features are AI Test Automation and an AI Data Synthesizer. The first feature runs tests on SAP ABAP code to gauge performance and identify potential issues. The second feature processes data with missing information and fills in the gaps based on context.

How do customers normally engage with your company?
Customers engage with us on LinkedIn, Twitter/X, or our company website.

Also Read:

CEO Interview with Clay Johnson of CacheQ Systems

CEO Interview: Khaled Maalej, VSORA Founder and CEO

CEO Interview with Ninad Huilgol of Innergy Systems