Experts Talk: RISC-V CEO Calista Redmond and Maven Silicon CEO Sivakumar P R on RISC-V Open Era of Computing
by Daniel Nenni on 04-26-2022 at 10:00 am


India’s top VLSI training services company Maven Silicon, a RISC-V Global Training Partner, conducted an insightful discussion with industry experts Ms. Calista Redmond, CEO of RISC-V International, and Mr. Sivakumar P R, CEO of Maven Silicon, on the topic “RISC-V Open Era of Computing”.

By way of introduction, RISC-V is a free and open ISA that enables processor, hardware, and software innovation through open collaboration.

Maven Silicon’s vision is to produce highly skilled VLSI engineers and give the global semiconductor industry access to skilled chip design experts. The global semiconductor industry is transforming faster than ever, enabling us to create powerful integrated chips that power the next generation of technologies like IoT, AI, cloud, and 5G. So, we need to produce skilled chip designers who can design more powerful and optimized processors of different kinds. This is how we aim to disrupt the semiconductor industry. Our vision aligns with RISC-V’s in disrupting the semiconductor industry.

We are delighted to introduce the industry mavens who graced the discussion.

Ms. Calista Redmond
CEO, RISC-V International

Calista Redmond, CEO of RISC-V International, has more than 20 years of senior-level management and alliance experience, along with significant open source community experience. Throughout her career, she has developed strategic relationships with chip, hardware, and software providers, system integrators, business partners, clients, and developers.

Mr. Sivakumar P R
Founder and CEO, Maven Silicon

Sivakumar is the Founder and CEO of Maven Silicon. He is also the Founder and CEO of Aceic Design Technologies. A seasoned engineering professional, he has worked across electrical engineering, academia, and the semiconductor industry for more than two decades, and specializes in Verification IP, consulting services, and EDA flow development.

This ‘Expert Talk’ was hosted by Ms. Sweety Dharamdasani, Head of the Learning & Development Division at Maven Silicon, who is extremely passionate about upskilling young aspirants.

The discussion covered some incredible topics on RISC-V and how it can be leveraged to redefine the VLSI curriculum.

Click here to watch the video

Sweety: I would like to understand from Calista the what and why of RISC-V. An introduction to RISC-V for our audience, please.

Calista: Along the hardware industry’s journey, RISC-V recognized the kind of collaboration that works in software, one that gives the entire industry a common foundation upon which companies can still compete. Now there is a boom in customized processors, and RISC-V has seized that opportunity.

Below are the two reasons why RISC-V catapulted to being the most prolific open ISA that the microprocessor industry has ever seen:

[A] Disruptive technology

[B] Design Flexibility with unconstrained opportunities

Sweety: Why RISC-V? What were our reasons for collaborating with RISC-V?

Siva: RISC-V is an open ISA. It’s free, with no license constraints, but nothing comes for free when it comes to designing a chip. Still, RISC-V is special because of the freedom we engineers enjoy to design the processor as we like. Also, we need specialized processors to build chips as monolithic semiconductor scaling falters. As RISC-V is an open and free ISA, it empowers us to create many kinds of specialized processors.

Why RISC-V for Maven: VLSI engineers need to understand how we build electronic systems like laptops and smartphones using chips and SoCs. Obviously, we need processors to build any chip or SoC. Without understanding the processor, VLSI engineers can’t deal with any subsystem or chip. VLSI training has always been about training engineers on different languages, methodologies, and EDA point tools. So, we introduced processors as part of our VLSI course curriculum to redefine VLSI training. As RISC-V is an open, simple, and modular ISA, it was our choice.

Click here to watch the video

Sweety: So what is happening in the RISC-V space right now? If you would like to share some success stories or developments, that would be great.

Calista: The predictions say that the semiconductor IP market will go from 5.2 billion dollars in 2020 to 8.6 billion dollars in 2025. We are on course with the prediction that RISC-V will consume 28% of that market in IoT, 12% in industrial, and 10% in automotive. Many venture capitalists are investing billions of dollars in RISC-V companies. There are many opportunities here for those who are starting their own companies and many more success stories are coming out every day.

Sweety: What plans are currently underway with RISC-V?

Siva: We are doing many things creatively with RISC-V. We have included RISC-V in our VLSI course curriculum, and it is open to all new college graduates, engineers, and even corporate partners.

Since January 2022, we have trained more than 200 engineers across various domains, RTL design, verification, and DFT, for a global chipmaker. All of them used RISC-V and RISC-V SoCs as their projects and case studies to learn these different technologies. It works beautifully.

Since we became a RISC-V Global Training Partner, we have trained 1000+ engineers on RISC-V processor design and verification and introduced them to our semiconductor ecosystem.

Click here to watch the video

Sweety: What are we looking at in terms of the future for RISC-V?

Calista: Being successful in microprocessors, or in any business, with small incremental growth will not be enough. What we are doing with RISC-V is incredible. We have 12,000 developers engaged with RISC-V and more than 60 technical work groups in progress, which is an incredible complement for education- and knowledge-based organizations like Maven Silicon.

Sweety: What are our plans at Maven Silicon with regard to RISC-V? Any upgrades to the curriculum?

Siva: New applications will demand RV128. There will be new security challenges, but RISC-V will still emerge as an industry-standard open ISA for all kinds of specialized processors, replacing most of today’s proprietary ISAs.

At Maven, we will be adding new topics like complex pipelining, floating-point core design, cache controllers, low-power modes, compilers, and debuggers to our existing RISC-V course curriculum. We are also looking forward to creating long-term master learning programs like designing SoCs using RISC-V.

Click here to watch the video

Sweety: It would be great if you can share with us a few tips for all our young VLSI aspirants, who plan to build a career in the semiconductor industry.

Calista: Understand where your abilities fit in the VLSI space. Connect with your mentors, colleagues, and peers, work shoulder to shoulder with them, and strengthen your network in your domain. When you work together, you learn faster and understand better. You can select any of the 60 courses we have, join RISC-V, and learn about what is going on in the various areas of the RISC-V domain.

Sweety: What are the few tips that you would like to share with the young engineers?

Siva: One major piece of advice I would like to give the next generation is: “Do not choose a domain based on popularity; choose whatever you are interested in. Do not lose motivation when things do not fall into place, just keep at it sincerely.” Seek guidance from people who will help you grow. Learning is a continuous process, so ask yourself questions and keep learning. Be part of a non-profit organization like RISC-V that is contributing to the engineering community.

Click here to watch the video

Sweety: What are your takes on organizational culture, sensitivity, gender awareness, women in business, etc?

Calista: It is important for us as women, and for all of us, to create an environment and opportunities that cultivate the women around us. It is difficult to be in the spotlight, as it is more transparent, but it is important to take those steps. Find the passion that drives you. It is important to work at a company you believe in and to grow with it. Lift up the people around you. Shine the light on others to help cultivate their success while cultivating your own.

Sweety: We know that Maven Silicon speaks up for its people. Around 60% of our employees are women. How do you take care of your employees?

Siva: I would like to mention here our Co-founder and Managing Director Ms. Praveena G who is all about people and processes. She is extremely composed, honest, and detail-oriented whereas I look at the big picture and do business creatively. Along with our co-founder there are many super talented women who do amazing things at Maven and help us stay at the top of our game.

Organizational culture reflects the style of leadership. Our culture is based on our core values. We respect our customers, partners, and employees. We believe in ‘Lead without Title’.

Click here to watch the video

We truly appreciate Ms. Calista Redmond and Mr. Sivakumar P R for sharing their experiences and so beautifully explaining the various topics, including RISC-V, tips for young aspirants, women’s empowerment, and organizational culture. We would also like to thank RISC-V International for this great opportunity to work with their open-source community and contribute to RISC-V learning as a RISC-V Global Training Partner.

Also read:

Verification IP vs Testbench

CEO Interview: Sivakumar P R of Maven Silicon

RISC-V is Building Momentum


Adding Random Secondary Electron Generation to Photon Shot Noise: Compounding EUV Stochastic Edge Roughness
by Fred Chen on 04-26-2022 at 6:00 am


The list of possible stochastic patterning issues for EUV lithography keeps growing longer: CD variation, edge roughness, placement error, defects [1]. The origins of stochastic behavior are now well-known. For a given EUV photon flux into the resist, only a limited fraction is absorbed. Since absorption changes by less than 5% with dose [2], the absorbed photon number per unit area practically follows a Poisson distribution [3]. The Poisson distribution is much like a normal distribution whose standard deviation is the square root of the mean, yet truncated at zero (no negative values allowed). Prior work has already shown that the stochastic edge appearance is smoothed by resist blur [4]. The resist blur is taken to be a continuous function (e.g., Gaussian with sigma=2nm), but this does not take into account the actual random secondary electron generation yield [5] following EUV photon absorption. Ionized electrons do not need to ionize other electrons to release energy; they can also lose energy through plasmons and vibrational excitations [6]. In this article, we will explore the electron number randomness as an extra stochastic factor in EUV lithography.
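
To make the shot-noise starting point concrete, here is a minimal sketch (my own illustration, not the article's simulator) of sampling absorbed photons per 1 nm x 1 nm pixel from a Poisson distribution, using the 60 mJ/cm2 dose and 18% absorption quoted below for Figure 1a and a ~92 eV EUV photon energy:

```python
# Sketch of the photon shot-noise model: absorbed photons per pixel follow a
# Poisson distribution whose mean is set by dose, photon energy, and absorption.
import numpy as np

EV = 1.602e-19                       # joules per eV
PHOTON_ENERGY = 92.0 * EV            # ~13.5 nm EUV photon, ~92 eV

def absorbed_photon_field(dose_mj_cm2=60.0, absorption=0.18,
                          shape=(40, 40), pixel_nm=1.0, rng=None):
    """Sample a Poisson field of absorbed photons per pixel."""
    rng = np.random.default_rng() if rng is None else rng
    dose_j_per_nm2 = dose_mj_cm2 * 1e-3 / 1e14        # 1 cm^2 = 1e14 nm^2
    incident_per_nm2 = dose_j_per_nm2 / PHOTON_ENERGY  # incident photons / nm^2
    mean_absorbed = incident_per_nm2 * absorption * pixel_nm**2
    return rng.poisson(mean_absorbed, size=shape), mean_absorbed

field, mean = absorbed_photon_field()
print(f"mean absorbed photons/pixel ~ {mean:.1f}")      # roughly 7 for these values
print(f"relative std (shot noise)   ~ {1/np.sqrt(mean):.2f}")
```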

The electron yield per absorbed photon is estimated to be ~3 for organic chemically amplified resists [6], and ~8 for metal oxide resists [7]. Instead of being fixed numbers, these should be taken to be typical or average values; the actual number comes from a second Poisson distribution, distinct from that for photon absorption. Then the blur amplitude should naturally be scaled according to the actual electron number. Thus, secondary electrons effectively compound the stochastic behavior.

Edge deformation is a natural generalization of edge roughness, one of the known manifestations of stochastic behavior. The most obvious manifestation of this is a deviation of a contact or via shape from circularity. For a ~20 nm feature size, Figure 1a shows the edge deformation when electron yield per photon is fixed, whereas Figure 1b shows the same when electron yield per photon follows a Poisson distribution for an average value of 3 electrons per photon. The resist blur is modeled with 4x Gaussian convolution (sigma=1 nm), giving an effective sigma of 2nm.

Figure 1a. 16 simulation runs with 2nm blur and fixed secondary electron yield. The assumed resist layer absorption is 18% and the dose 60 mJ/cm2. Grid pixel size is 1 nm x 1 nm.

Figure 1b. 16 simulation runs with 2nm blur and secondary electron yield following a Poisson distribution with mean=3 electrons per photon. The same conditions as in Figure 1a were assumed.
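
For reference, below is a minimal sketch (an illustration under the stated assumptions, not the author's actual code) of the Figure 1b recipe: Poisson-distributed photons per pixel, a second Poisson draw for electrons per photon, and four passes of a sigma = 1 nm Gaussian giving the effective 2 nm blur. It assumes NumPy/SciPy, 1 nm pixels, and the mean absorbed-photon count from the earlier sketch:

```python
# Compounding the two Poisson processes and the resist blur:
# photons/pixel ~ Poisson(mean_photons), electrons/photon ~ Poisson(mean_yield),
# then 4 passes of a sigma = 1 nm Gaussian (effective sigma = 2 nm).
import numpy as np
from scipy.ndimage import gaussian_filter

def exposed_image(mean_photons=7.3, mean_yield=3.0, shape=(40, 40),
                  fixed_yield=False, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    photons = rng.poisson(mean_photons, size=shape)
    if fixed_yield:                      # Figure 1a case: yield fixed at the mean
        electrons = photons * mean_yield
    else:                                # Figure 1b case: yield itself is Poisson
        electrons = rng.poisson(mean_yield * photons).astype(float)
    blurred = electrons
    for _ in range(4):                   # 4x sigma=1 nm -> effective sigma ~2 nm
        blurred = gaussian_filter(blurred, sigma=1.0)
    return blurred

# The printed edge would then come from thresholding `blurred` against the
# resist's effective exposure threshold, run per trial as in Figures 1a/1b.
```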

Even without calculating the individual via areas, the difference in appearance is already striking. Besides increasing the photon dose, increasing the electron yield per photon is also suggested to keep stochastic effects in check, by reducing the standard deviation/mean ratio. Even so, electron number is constrained by the energy needed for ionization (~10 eV); EUV has only enough energy for no more than 9 ionized electrons per photon. A higher photon energy, i.e., shorter wavelength, can raise the upper limit. However, increasing the electron number also increases the range of electron paths [8]. This increases blur, which is fundamentally detrimental to resolution [9].
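
The standard-deviation/mean argument can be checked with textbook compound-Poisson algebra (my addition, not from the article): if photons per pixel are Poisson with mean lambda and electrons per photon are Poisson with mean mu, the total electron count has mean lambda*mu and variance lambda*mu*(1+mu).

```python
# std/mean for the compounded process = sqrt((1 + mu) / (lambda * mu)),
# which falls toward the photon-limited floor 1/sqrt(lambda) as mu grows.
import numpy as np

def rel_noise(lam, mu):
    return np.sqrt((1.0 + mu) / (lam * mu))

lam = 7.3                                 # absorbed photons/pixel (earlier sketch)
for mu in (3, 8):                         # ~CAR vs ~metal-oxide resist yields
    print(f"yield {mu}: std/mean = {rel_noise(lam, mu):.2f}")
print(f"photon-limited floor: {1/np.sqrt(lam):.2f}")
```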

References

[1] https://www.prnewswire.com/news-releases/new-stochastics-solution-from-fractilia-enables-semiconductor-euv-fabs-to-control-multi-billion-dollar-industry-yield-problem-301506120.html

[2] R. Fallica et al., “Dynamic absorption coefficients of chemically amplified resists and nonchemically amplified resists at extreme ultraviolet,” J. Micro/Nanolith. MEMS MOEMS 15, 033506 (2016).

[3] https://en.wikipedia.org/wiki/Poisson_distribution

[4] https://www.linkedin.com/pulse/euv-resist-absorption-impact-stochastic-defects-frederick-chen

[5] C. E. Huerta et al., “Secondary electron emission from textured surfaces,” J. Phys. D: Appl. Phys. 51, 145202 (2018).

[6] J. Torok et al., “Secondary Electrons in EUV Lithography,” J. Photopolym. Sci. & Tech. 26, 625 (2013).

[7] Z. Belete et al., “Stochastic simulation and calibration of organometallic photoresists for extreme ultraviolet lithography,” J. Micro/Nanopattern. Mater. Metrol. 20, 014801 (2021).

[8] https://stats.stackexchange.com/questions/230302/is-there-a-relation-between-sample-size-and-variable-range; http://euvlsymposium.lbl.gov/pdf/2007/RE-08-Gallatin.pdf.

[9] https://www.linkedin.com/pulse/blur-wavelength-determines-resolution-advanced-nodes-frederick-chen

This article first appeared in LinkedIn Pulse:  Adding Random Secondary Electron Generation to Photon Shot Noise: Compounding EUV Stochastic Edge Roughness 

Also read:

EUV Resist Absorption Impact on Stochastic Defects

Etch Pitch Doubling Requirement for Cut-Friendly Track Metal Layouts: Escaping Lithography Wavelength Dependence

Horizontal, Vertical, and Slanted Line Shadowing Across Slit in Low-NA and High-NA EUV Lithography Systems


A MasterClass in Signal Path Design with Samtec’s Scott McMorrow
by Mike Gianfagna on 04-25-2022 at 10:00 am


We all know signal integrity and power integrity are becoming more important for advanced design. Like package engineering, the obscure and highly technical art of SI/PI optimization has taken center stage in the design process. And the folks who command expertise in these areas have become the rock stars of the design team. I had an opportunity to speak with one of those rock stars recently. Scott McMorrow is Samtec’s Strategic Technologist for the company’s 224 Gbps R&D work. Scott has had a storied career in all manner of signal path design and optimization. What follows is essentially a MasterClass in signal path design. If you want your next system to work, this is important stuff. Enjoy.

Signal integrity and power integrity are disciplines that have been around for a while. For a long time, they were “fringe” activities – highly complex, hard-to-understand work done by rare experts. While the work is still quite complex, SI and PI are now mainstream, critical activities in almost all designs. What do you think drove this change? 

Simply, systems break when SI and PI are not considered.  In my consulting career prior to joining Samtec, a considerable number of customers requested my services in SI and PI because they had current or previous designs that had failed either in testing or at customer sites.  These sorts of things tend to sensitize managers and directors to the importance of deep SI and PI work.  What has now conspired against complacent design is the physics. 

At today’s data rates switches and AI processors are using extraordinary amounts of power, sometimes multiple kilowatts. There are systems that require over 1000 A of current at less than 1 V, and ICs that require 600 A at sub-µs rise times. This requires a power system capable of delivering mΩ and sub-mΩ impedance targets, which are difficult to engineer and measure. At these high switching currents, low frequency magnetic fields require careful management of component selection and via placement to minimize system noise and guarantee reliable operation.

As the speed and power requirements for silicon increase, the probability that previous “Known Good Methods” will work decreases. Approximations and assumptions developed for 10 Gbps or 28 Gbps interconnect may not be valid as we begin to reach the statistical limits of signal recovery. At 112 Gbps PAM4, with a risetime of approximately 10 ps (20%/80%), a signal bandwidth (BW) > 40 GHz (1.5 times Nyquist), and a bit time < 20 ps (< 10 ps for 224 Gbps PAM4), there is very little margin for noise. Crosstalk and power systems are the primary contributors that must be contained. These require system interconnect bandwidth of 50-90 GHz. For each performance step (56 Gbps PAM4 to 112 Gbps PAM4, for example), the bandwidth and noise in the system essentially double. This requires an SI engineer to accurately model and measure across a wider bandwidth. For example, Samtec SI engineers routinely model to 110 GHz and measure using 67 GHz and 110 GHz Vector Network Analyzers (VNAs).
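
As a quick sanity check of those figures (my arithmetic, assuming PAM4 carries 2 bits per symbol and the "1.5x Nyquist" rule of thumb quoted above):

```python
# Symbol rate, Nyquist frequency, rule-of-thumb bandwidth, and unit interval
# for the PAM4 data rates discussed in the interview.
for bit_rate_gbps in (56, 112, 224):
    baud_g = bit_rate_gbps / 2            # PAM4: 2 bits per symbol
    nyquist_ghz = baud_g / 2              # Nyquist = half the symbol rate
    bw_ghz = 1.5 * nyquist_ghz            # "1.5x Nyquist" rule of thumb
    ui_ps = 1e3 / baud_g                  # unit interval (symbol time) in ps
    print(f"{bit_rate_gbps:>3} Gbps PAM4: {baud_g:.0f} GBaud, "
          f"Nyquist {nyquist_ghz:.0f} GHz, BW ~{bw_ghz:.0f} GHz, UI {ui_ps:.1f} ps")
# 112 Gbps PAM4 -> 56 GBaud, 28 GHz Nyquist, ~42 GHz BW, ~17.9 ps UI,
# consistent with the ">40 GHz" and "<20 ps" figures quoted above.
```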

The term “signal path” has taken on new meaning in the face of the convergence of multiple technologies found in contemporary designs. Can you comment on that evolution? What does a signal path entail in advanced designs today? What role does convergence play, and what new pieces will be added going forward? 

Signal interconnect in the last 20 years has always been a combination of copper, optics, and even radio transmission. From a cost tradeoff perspective, copper is the least expensive for the short distances seen in system electronics enclosures and racks. Up until recent years, a full copper interconnect was possible up to 3 m, spanning a full rack, with the transition to optics occurring at the Top of Rack (TOR) switch to extend down a data center rack. Although fiber optic cable is significantly less expensive than copper cable, the electrical-to-optical conversion in the optical module is much more expensive than direct attach copper cables. But, as data rates increase, the “reach” of electrical cables is shrinking. At 112 Gbps PAM4 and 224 Gbps PAM4 the architecture of switch locations in a rack must change to keep interconnect losses within design targets of about -31 and -39 dB from silicon-to-silicon in the link. At 112 Gbps, data center architects may need to place the TOR switch in the middle (a Middle of Rack switch?) to keep direct attach copper cable lengths to 2 m. At 224 Gbps PAM4, multiple switch boxes per rack may be needed to keep total cable length to 1 m to remain within the end-to-end loss budgets.
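
To illustrate the reach math, here is a back-of-the-envelope sketch; the -31 dB budget comes from the text above, but the package, PCB, and per-meter cable losses below are placeholder assumptions for illustration only, not Samtec or standards data:

```python
# Rough reach estimate: subtract fixed losses from the end-to-end budget and
# divide what remains by the cable's per-meter loss at Nyquist.
def max_cable_reach_m(total_budget_db, package_db, pcb_db, cable_db_per_m):
    remaining = total_budget_db - package_db - pcb_db
    return max(remaining, 0.0) / cable_db_per_m

# Hypothetical split at 112 Gbps PAM4: 31 dB budget, 2 x 4 dB of package loss,
# 5 dB of host PCB, and ~7 dB/m of direct-attach cable loss at 28 GHz.
print(f"reach ~ {max_cable_reach_m(31, 8, 5, 7):.1f} m")   # on the order of 2 m
```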

At lower data rates, signals could be transmitted entirely on copper PCB interconnects until they reach the front panel module (QSFP, OSFP, etc.). However, to improve the loss budget, newer systems utilize Samtec Flyover® technology to reduce total loss. This is accomplished by using from 34 AWG to 30 AWG cable that has been engineered to work in the high temperature environment of modern electronics chassis.  Flyover technology extends copper’s usefulness to 112 Gbps PAM4 and 224 Gbps PAM4 operation. However, even this is a temporary measure.  Today we use Flyover technology from a PCB mounting location near to the silicon, but on the PCB.  However, at 224 Gbps PAM4 the losses in the silicon package copper traces accumulate to the point that one third of the system loss budget is accounted for simply in the package substrates of the transmitter and receiver, which conspires to reduce the total available external reach.

Samtec Flyover Technology

To fight “loss erosion” at 224 Gbps PAM4 several potential changes are posited by designers and architects:

  • Exit the silicon through optical fiber interconnect
    • This will be the “future”, but that future is a long way off, due to the complexity of designing silicon with mixed electrical and optical technology.
    • This future also requires full optical interconnect throughout the system, rack and data center, which is extremely expensive.
  • Move the electrical-to-optical conversion to a device mounted on the package, the so called Co-Packaged Optics (CPO).
    • This removes electrical transmission issues entirely for the external system, but greatly increases total cost, because of the need to fly optical for all external interconnects.
    • Placing an optical component on an IC package removes mixed silicon technology from being a problem. The optical device can be designed with the optimal process. However, the on-package environment is rugged; the chip itself can approach a 600 W beast. This is daunting for many optical technologies.
  • Route signals off package via Flyover technology.
    • Flyover solutions are proven to reduce in-box interconnect losses and can be applied to packages.
    • This will work to achieve reliable 224 Gbps PAM4 channel operation, but it is proving hard to scale the connectors for attachment to the size needed for current packages.
    • As a result, package architectures are changing to provide more area for interconnect attachment.

Given the demands presented by form factor, density and performance, what are the considerations for materials involved in high-performance channels? Are there new materials and/or configurations on the horizon? Where does optical fit?

See above.  Materials will move to the lowest loss possible, but there is a bound set by the size of the copper conductors used.  Cable is lower loss than PCB trace simply because the conductor circumference is 2 – 3x larger than PCB traces. Inside the package designers will need to use materials that can withstand IR reflow during assembly along with operating temperatures from 85 – 120 °C near the die.  Many materials that were adequate for external or in-box usage are untenable for on-package use.

In terms of data rates, what will happen over the next five years? What will be a state-of-the-art data rate in five years, and how will we get there? 

This is a good question. Realistically, 56 Gbps PAM4 designs will be around for years to come, as 112 Gbps PAM4 designs are just prototyping. 224 Gbps PAM4 will be the next step in the data rate progression with a signal rise time of 5 ps and a BW > 80 GHz. Although test silicon is being built now, I suspect it will take three years for the early prototype systems to be revealed and five years for production to begin. By that time, we will be looking at how to either utilize higher order transmission encoding (PAM8, PAM16) or abandon copper totally and make the full transition to optical in about 10 years. This might be a good time for us copper interconnect specialists to retire.

There it is, a MasterClass in signal path design. I hope you found some useful nuggets. You can read more about Samtec here. 

Also read:

Passion for Innovation – an Interview with Samtec’s Keith Guetig

Webinar: The Backstory of PCIe 6.0 for HPC, From IP to Interconnect

Samtec, Otava and Avnet Team Up to Tame 5G Deployment Hurdles


Assembly Automation. Repair or Replace?
by Bernard Murphy on 04-25-2022 at 6:00 am


It is difficult to imagine an SoC development team not using some form of automation to assemble their SoCs; the sheer complexity of the assembly task for modern designs is already far beyond hand-crafted top-level RTL. An increasing number of groups have already opted for solutions based on the IP-XACT integration standard. Still, a significant percentage use their own in-house crafted solutions. The solution of choice for many has been spreadsheets and scripts: spreadsheets to capture aspect-wise information on instances and connections, scripts to convert this bank of spreadsheets into full SoC RTL. Great solutions, but eventually we must ask a perennial question. When reviewing in-house assembly automation – repair or replace?
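
For readers unfamiliar with such flows, here is a toy sketch of the spreadsheet-plus-script approach described above; the CSV format, column names, and module names are hypothetical illustrations, and this is in no way the Arteris tooling:

```python
# Toy "spreadsheet + script" assembly flow: a CSV lists point-to-point
# connections, and the script emits wires and instance hookups for a
# top-level Verilog wrapper.
import csv
from collections import defaultdict

def build_top(csv_path, top_name="soc_top"):
    """Expected CSV columns (illustrative): module,instance,port,wire,width."""
    conns = defaultdict(list)            # (module, instance) -> [(port, wire)]
    wires = {}                           # wire name -> width
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            wires[row["wire"]] = int(row["width"])
            conns[(row["module"], row["instance"])].append((row["port"], row["wire"]))

    out = [f"module {top_name};"]
    for name, width in sorted(wires.items()):
        vec = f"[{width - 1}:0] " if width > 1 else ""
        out.append(f"  wire {vec}{name};")
    for (module, inst), plist in conns.items():
        hookups = ", ".join(f".{p}({w})" for p, w in plist)
        out.append(f"  {module} {inst} ({hookups});")
    out.append("endmodule")
    return "\n".join(out)

# print(build_top("soc_connections.csv"))
```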

Teams rightly take great pride in their creations, which serve their purposes well. But like all in-house inventions, with time, these solutions age. Original developers move on to other projects or companies. Designs become larger and must be distributed to geographically diverse teams. Local know-how must be replaced by professional training and support. Capability expectations (internal and external) continue to rise – more automation, directly integrating the network-on-chip, supporting traceability. Inevitably the organization must ask, “Should we continue to repair and enhance our in-house software, with all the added overhead that implies, or should we replace it with a professionally supported product?”

Scalability

Other groups copy successful in-house implementations, which they then modify to their own needs. Maybe there’s a merger with a company which has its own automation. Organizationally, your automation quickly becomes fragmented, with little opportunity to share code, design data or know-how. No one is eager to switch to another in-house solution in preference to the automation they already know. The only way to break this deadlock is to consider a neutral, standards-based platform.

A common platform immediately solves problems of sharing data between teams; common platforms encourage shareable models. For training and support, let a professional supplier manage that headache. For continuous improvement against diverse requirements across many design teams, let the software product supplier manage and prioritize demand and produce regular releases featuring fixes, performance improvements and enhancements.

Enhanced capabilities

There’s a widely held view in technology businesses that no one is going to switch to a new product purely for incremental improvements. Prospects will buy in only to must-have advantages that would be out of reach if they didn’t switch. One opportunity here is closer automation linkages between the endpoint IPs and the network-on-chip, to better manage coupling between changes in network interfaces, performance expectations, address offsets, and power management. Fully exploiting the potential benefits is a journey, but as a provider of both the integration and network-on-chip technologies, Arteris IP is already on this journey.

Another high-demand capability is in re-partitioning designs: for emulation and prototyping, for floorplanning, power management and reuse. I’ve talked elsewhere about the pain of manual re-partitioning, which limits the options you can explore. You can automate this process with truly interactive what-if analysis, experimenting with new configurations in real time.

A more recent demand is for traceability support. In safety-critical systems and in embedded systems with close coupling between the system and the SoC, compliance between system requirements and implementation in silicon is mandatory. As requirements traceability automation in software development has become common, there is a growing expectation from OEMs and Tier 1s that similar support should be provided for hardware implementation. Accurate linking between requirements tools and SoC design semantics is a complex task, beyond the scope of most in-house development scripts. Arteris IP now offers this capability in its tool suite.

Legacy compatibility

All of this sounds interesting, but what about the sunk costs you have in all those spreadsheets and scripts? Will this solution only have value to you on completely new designs? Not at all – you can start with what you already have. The Arteris IP SoC & HSI Development platform can import CSV files with a little scripting support. It can also directly read IP and design RTL files, supported by intelligent name matching for connectivity, again perhaps with a little interactive help. Once you have set up scripts and mappings, you should be able to continue to use those legacy sources, which is critical for long-term maintenance.

Many of your legacy scripts will probably no longer be needed, especially those relating to netlist generation and consistency checking. Those facilities are provided natively in the SoC/HSI platform. You can use some scripts, for IO pin-muxing or power sequence control, for example, as-is initially if the generator is sufficiently decoupled from the rest of the design. These scripts can also, if you wish, be redesigned to work under the SoC/HSI platform. You can build your scripts in Python using an API operating at an easy-to-understand semantic level (clocks, bus interfaces, etc.).

In summary, it’s never been easier to switch and now you have compelling reasons to switch. If you want to learn more, click HERE.

Also read:

Experimenting for Better Floorplans

An Ah-Ha Moment for Testbench Assembly

Business Considerations in Traceability


OnStar: Getting Connectivity Wrong
by Roger C. Lanctot on 04-24-2022 at 6:00 am


One of my pet beefs with the car industry is that car makers, on the whole, have failed to agree among themselves as to what basic vehicle connectivity ought to consist of. From car maker to car maker prices vary, bundles vary, free periods of service access vary and the variations get worse between model years as offers change and systems are modified.

The 26-year veteran of vehicle connectivity – OnStar – is one of the worst offenders. OnStar offers four basic service tiers ranging in price from $24.99 to $49.99 per month. In other words, GM appears to believe that consumers will pay more for vehicle connectivity than they would for Hulu, Amazon Prime, Netflix, Apple+, or HBO.

At that high end, GM OnStar is priced in the neighborhood of a mid-range gym membership. What do they think they’re selling?

If that weren’t bad enough, OnStar has a host of “a la carte” offerings which only add to the cost and confusion. This is connected vehicle “monetization” malpractice.

But GM is not alone. Toyota and BMW, to name just two competing auto makers, have spreadsheets to explain which services are available on which trim levels for which years at which price points.

But the confusion doesn’t stop there. GM’s uncoordinated and awkward approach to connectivity extends to its Cruise Automation subsidiary.

The latest misadventure at Cruise – a Cruise robotaxi wandering away from police responding to a Cruise vehicle with its headlights off – highlights the reality that Cruise remains siloed off from GM. Cruise vehicles are clearly not equipped with OnStar, and this disconnect might well prove fatal.

San Francisco Police Stop Self-Driving Car: – https://www.nbcbayarea.com/news/local/san-francisco/driverless-car-traffic-stop-san-francisco/2860690/ – NBC

The death of Elaine Herzberg in a collision with a self-driving Uber in Tempe, AZ, led to the termination of Uber self-driving car testing and the shedding of the Advanced Technology Group that was working on the technology. It only takes one slip to erase billions of dollars of corporate value.

More than a decade ago GM added a “remote vehicle slowdown” function to OnStar as an enhancement to its stolen vehicle tracking and recovery solution. Since GM’s introduction of the feature, Hyundai has added it to its available connected services as well.

The function allows police – who have been alerted to a stolen GM vehicle and who have the vehicle in sight – to remotely slow the vehicle down to a stop. That function would have been awfully nice to have built into the wayward Cruise vehicle that appeared to be evading the police – but was really reacting to the flashing lights on the police cruiser.

The lack of uniformity in connected vehicle services from all auto makers – with the exception of Tesla – reflects the dysfunctional pursuit of aftermarket subscription-based service revenue. Tesla generally charges $10 a month and includes software updates and a range of connected services in that single price.

Ten dollars a month falls into the category of a no brainer for the average Tesla buyer. GM’s $24.99/month basic service is a no-go for many new car buyers.

The strangest thing of all is that nearly every auto maker, with the exception of Tesla, has put together an automatic crash notification capability – but all charge for it rather than viewing customer care as a core brand building value proposition. The automotive industry ought not to be charging for the automatic crash notification – it’s like paying for a fire extinguisher in your hotel room.

The crowning stroke of all of the misguided connected car decision making at OEMs, though, is the decision not to include basic diagnostic data communications and software updates (along with automatic crash notification) in an inexpensive basic connectivity package. One of the most amazing brand building value propositions for Tesla has been the company’s ability to provide post-crash analytics for regulators and the press. Time and time again Tesla has exonerated itself – blaming misbehaving drivers – for crashes.

This contrasts sharply with GM’s denials, eight years ago, that it had any idea there was a problem with its ignition switches – prior to the massive government fine and mandated vehicle recalls to correct the problem blamed for multiple fatal crashes. Similarly, Toyota pleaded ignorance of the existence of, or an explanation for, unintended acceleration events.

It’s time for auto makers to include a basic level of connected services with their vehicles including crash notification, software updates, and basic vehicle diagnostics. There is no room for plausible deniability, and confusing subscription schemes are the enemy of successful connected car programs and safer driving.

Also read:

Tesla: Canary in the Coal Mine

Chip Shortage Killed the Radio in the Car

A Blanche DuBois Approach Won’t Resolve Traffic Trouble


LRCX weak miss results and guide Supply chain worse than expected and longer to fix
by Robert Maire on 04-23-2022 at 6:00 am


-Lam missed on both top & bottom line due to supply chain
-Previous guide was “overly optimistic” about fixing issues
-Demand is great but doesn’t matter if you can’t serve it
-We remain concerned about ability to fix issues in near term.

A miss on numbers- supply issues will persist

Lam reported revenues of $4.06B versus already reduced street expectations of $4.25B. EPS also missed, coming in at $7.40 versus the street’s $7.51. Results would have been even worse if we back out one-time gains from Lam’s venture investments, which were $0.11 per share. Guidance was even weaker, with June expected to be $4.2B ±$300M and EPS of $7.25 ±$0.75, falling well short of street expectations of $4.46B and EPS of $8.24.

Obviously things are not getting better. We hope that Lam has taken enough of a haircut to their numbers that they don’t miss the June guidance.

Management says it was “overly optimistic” about fixing supply chain

Lam management previously thought that the supply chain issues would be solved more quickly, which turned out not to be the case; in fact, it’s quite clear that issues will persist into the June quarter and likely beyond. We don’t think management will quote an expected end date for the issues again after being “overly optimistic” about resolving them. It sounds like things may be getting worse/broader. Deferred revenue grew and will likely grow again due to missing parts.

With roughly $2B in deferred revenue and the probability that June will see a further increase in deferred revenue, there is a lot of product that has been shipped to the field that is missing parts that may or may not get delivered.
We also wonder about the higher costs/lower gross margins implied by doing field installs of missing parts on incomplete tools.

This is obviously a messy situation, using customer premises to store unfinished tools which would otherwise clog Lam’s production facilities.
Customers are likely willing to accept this sub-optimal solution because Lam forces it on them lest they lose their place in line, but it’s a poor substitute.
We would hope that these unfinished tools get completed in the second half of the year and the deferred revenue can come home.

Move to Malaysia doesn’t help

Unfortunately Lam’s move of significant production was at an inopportune time. Trying to bring up new sub suppliers in Asia during stressful times in the supply chain is not very good timing. Many of those potential suppliers already have issues and Lam as a new customer will take more time and a lower priority.

This likely stretches out the time to move to Asia resulting in more costs/double costs while Malaysia coming on line takes longer.

We remain concerned about possible share loss and inability to gain share

If Lam’s competitors can fix their supply chain issues while Lam can’t, we could see share shift as desperate customers will easily move to suppliers who can supply. Right now everyone seems to be in the same boat, but that could change. It’s also harder for Lam to gain share with new products that just may not be available to customers.

Generally, in times of short supply there is less ability to gain share.

More vertically integrated manufacturers may fare better as they have more control over their supply chain. Some competitors, in Japan, such as TEL, typically have stronger relationships with suppliers that almost look like a vertical supply chain.

Should the industry go back to being more vertical than outsourced?

We had asked Lam management several years ago at an analyst meeting about moving away from a fully outsourced model to a more vertical model, as the industry has become less cyclical and thus more suited to vertical supply. The company said they were thinking about it but never did it.

We think that many in the semiconductor tool business have to take a harder look at getting more vertical so that they have better control over their parts and manufacturing. We are at a point in the industry where it is much more important to ensure supply than to outsource the risk of cyclicality.

The stock

Obviously the results are disappointing as is the guidance. It’s also disappointing that management underestimated the length and severity of the situation.

We don’t think there has been a lot of damage but that could change. Lam should be less impacted than ASML as their components are much less complex than ASML’s systems and sub components but it appears that Lam has been even more impacted which we find strange. Given the lens issue ASML has a better excuse for supply chain issues.

Lam’s stock will obviously take a hit here and we think upside may be limited until Lam can factually state that the issues have finally been fixed and deferred revenues start to come home.

After missing the projected recovery we are likely in wait and see mode.
We see no reason to get into the stock until we have better clarity and no real reason to continue to own it as we don’t see near or medium term upside.

Also read:

Chip Enabler and Bottleneck ASML

DUV, EUV now PUV Next gen Litho and Materials Shortages worsen supply chain

AMAT – Supply Constraints continue & Backlog Builds- Almost sold out for 2022


Podcast EP73: Adventures in Supercomputing with Luminous Computing and Andes Technology
by Daniel Nenni on 04-22-2022 at 10:00 am

Dan is joined by Dr. Dave Baker, VP digital design at Luminous Computing and John Min, director of field applications at Andes Technology. Dave and John explore with Dan their collaboration to build high-performance supercomputers.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Chip Enabler and Bottleneck ASML
by Robert Maire on 04-22-2022 at 6:00 am


-ASML reported an “in line” Q1- Orders remain super strong
-Ongoing supply chain issues will limit growth and upside
-ASML targets 2025 for supply fixes- We are not so sure
-Intel, TSMC, Samsung won’t be able to build all fabs they plan

ASML has an “in-line-ish” Q1, orders still off the charts

ASML reported Euro3.5B in revenues and EPS of Euro1.73. Revenues were slightly light while earnings were a slight beat. Margins were 49%. More importantly orders remain very strong at Euro7B including Euro2.5B of EUV and multiple high NA systems.

Orders continue to outstrip ability to supply so more of the focus of both management and investors will be on ASML’s ability to ramp their supply chain to meet demand.

Talking about 2025 as target to fix supply issues

“Let’s keep our fingers crossed and see what 2025 brings us” -Peter Wennink on the earnings call, which is not very confidence building. 2025 was mentioned many times on the call as the target to fix the supply chain issues that are limiting ASML’s ability to ship tools. We think many investors misunderstand the factors limiting ASML’s supply chain and therefore growth.

ASML is not limited by chips or current issues in Europe due to Ukraine, or even Covid-related issues. The supply chain issues are unique and specific to ASML and ASML’s suppliers. Suggesting that 2025 will be the answer is more of a current hope than a definitive plan in place to ensure that the issues will be fixed.

Zeiss is the key bottleneck and immovable object in the road to growth

While ASML is the key enabler to the chip industry, Zeiss is the key enabler to ASML. Zeiss makes the key optics that are the differentiator that makes ASML tools work. There is no second source, ASML is totally dependent upon Zeiss.

Most investors do not understand that Zeiss is not a normal company with shareholders. Zeiss is a foundation. The stated purpose of the foundation is furthering science and ensuring the employees’ well-being and continued employment. Profit and growth are an afterthought. It is essentially run for the betterment of employees, not profit. It is German labor unions and labor relations taken to an extreme.

Being the oldest such foundation in Germany also makes it slower to change.
One of the current issues is that Zeiss does not have enough space to increase production and doesn’t want to ruffle the feathers of neighbors with construction.

There is also the fact that not a lot of young Germans want to apprentice for years to polish glass for the rest of their lives. ASML just may be stuck with a supplier that can’t respond as quickly as needed, doesn’t care to respond as quickly as needed, and doesn’t have to. It’s like trying to get a 175-year-old Galápagos tortoise to run a sprint. Not gonna happen.

This all trickles down to Intel, TSMC & Samsung not building fabs

The demand for EUV tools, let alone High NA tools, far outstrips ASML’s ability to deliver them. Somethings gotta give. Intel, TSMC and Samsung can place all the orders they want and they will just pile up on ASML’s desk.

TSMC is far ahead of Samsung in EUV tool count and Intel is a distant third. Other companies in the memory space are also entering the EUV club and placing orders as well. This means order growth likely well in excess of the roughly 20% limit of production growth…in other words a significant shortfall.

It is likely that TSMC could take up 100% of ASML’s EUV production by itself. Intel can never hope to catch TSMC until it can get more EUV tools.

Basically there is no way the chip industry will be able to get enough tools for all the plans of fabs today and either fabs will not be built or remain empty shells until EUV tools become available. Intel recently uprooted an EUV tool from Portland to send to Europe which is something you never want to do unless you are very desperate.

In a way this is likely a good thing for the industry in that it will put off the oversupply cyclicality that has historically plagued the industry. It will allow prices to remain higher for chips due to shortages and will allow ASML to charge whatever it wants.

Maybe ASML could charge more for a “Fastpass” to cut the EUV line much like Disney does. Could financial buyers take a place in line and “scalp” EUV tools?

Chip industry needs alternatives to current lithography process

The chip industry is clearly limited by ASML and Zeiss. The industry desperately needs either an alternative to existing lithography process and tools such as E beam direct write and DSA (directed self assembly) or process enhancements to existing lithographic process that can speed or make more efficient use of existing lithographic tools to get more output.

We don’t need more dep and etch tools. Maybe some more yield management to help optimize EUV and litho tools. Necessity is the mother of invention.

The Stock

ASML is in the enviable position of being a monopoly in an industry desperate for their tools. This will not change any time soon, in fact the gap between demand and supply of litho tools will likely only get worse over the medium term as orders pile on without a corresponding increase in supply.

While the quarter reported was only OK, the order news says that the shortage of ASML tools will continue. ASML will be able to increase capacity but we think it will take far longer than most investors expect or understand. We would be careful not to extrapolate and expect too much growth out of ASML due to the limitations inherent in their system.

We expect that ASML’s stock, while down for the year, will respond positively as there were some concerns about demand slowing. The issue is clearly not demand but ability to supply… it’s a better problem to have as it is much more fixable, but it will take time.

Also read:

DUV, EUV now PUV Next gen Litho and Materials Shortages worsen supply chain

AMAT – Supply Constraints continue & Backlog Builds- Almost sold out for 2022

Intel buys Tower – Best way to become foundry is to buy one


Truechip’s DisplayPort 2.0 Verification IP (VIP) Solution
by Kalar Rajendiran on 04-21-2022 at 10:00 am


Integrating IP to build SoCs has been consistently on the rise. Growing complexity and time-to-market pressures are some of the primary drivers behind this phenomenon. Consequently, the IP market segment has also been enjoying tremendous growth. While this is great news for chip design schedules, it does highlight the increased demand for quick, easy and accurate verification. Without a time- and cost-efficient way to verify an IP solution, the cost of verifying can end up being higher than the cost of developing the IP itself, and an SoC’s development schedule would be adversely impacted. Naturally, the Verification IP (VIP) segment of the IP market has seen high growth rates.

There are IP verification solutions offered by a number of companies. One company that was introduced in late 2020 to the SemiWiki audience is Truechip. Founded in 2008, Truechip characterizes itself as the Verification IP Specialist. It offers an extensive portfolio of VIP solutions to verify IP components interfacing with industry-standard protocols integrated into ASICs, FPGAs and SoCs.

Salient Aspects of Truechip’s VIP Solutions

Truechip’s Verification IPs are fully compliant with the standard specifications and come with an easy plug-and-play interface to enable a rapid development cycle. The VIPs are highly configurable by the user to suit the verification environment. They also support a variety of error injection scenarios to help stress test the device under test (DUT). Their comprehensive documentation includes user guides for various scenarios of VIP/DUT integration. Truechip’s VIP solutions work with all industry-leading dynamic and formal verification simulators. The solutions also include assertions that can be used in formal and dynamic verification as well as with emulation.

And their solutions come with the TruEYE GUI-based tool that makes debugging very easy. This patented debugging tool reduces debugging time by up to 50%.

Truechip’s DisplayPort 2.0 VIP Solution

One interface IP that is gaining a lot of attention these days is the DisplayPort IP. Truechip has been supporting the display market segment with VIP solutions for HDMI, HDCP and DisplayPort. They recently expanded their portfolio with the addition of a DisplayPort 2.0 VIP solution. Their DisplayPort 1.4 VIP has a long track record within the customer base. Their DisplayPort 2.0 VIP brings a lot of upgrades to keep up with the enhancements from DisplayPort 1.4 to 2.0. The following figure depicts a block diagram of the corresponding VIP environment.

The DisplayPort 2.0 VIP is fully compliant with the standard DisplayPort Version 2.0 specification from VESA. Nonetheless, it is a lightweight VIP with an easy plug-and-play interface for a rapid design cycle and reduced simulation time. The solution is offered in native SystemVerilog (UVM/OVM/VMM) and Verilog, with availability of compliance and regression test suites.

Some Salient Features of Truechip’s DisplayPort 2.0 VIP Solution

  • Supports High Bandwidth Digital Content Protection System Version 1.4, 2.2 and 2.3.
  • Supports Multi-Stream Transport (MST)
  • Supports Link Training(LT) Tunable PHY Repeaters (LTTPR)
  • Supports Reed-Solomon Forward Error Correction RS(254,250)
  • Supports multi lane configuration (up to 4 lanes)
  • Supports DSC v1.2a (Display Stream Compression)
  • Supports DisplayPort Configuration Data (DPCD) version 1.4
  • Supports legacy EDID
  • Supports I2C over AUX Channel and Native AUX
  • Supports dynamically configurable modes.
  • Supports Dynamic as well as Static Error Injection scenarios.
  • On the fly protocol checking using protocol check functions, static and dynamic assertions
  • Built in Coverage analysis.
  • TruEYE GUI analyzer tool to show transactions for easy debugging

Deliverables

  • DisplayPort 2.0 BFMs for:
    • Source – Link Layer
    • Source – MAC Layer
    • Source – PHY Layer
    • Sink – Link Layer
    • Sink – MAC Layer
    • Sink – PHY Layer
    • Branching Devices
  • DisplayPort layered monitor & scoreboard
  • Test Environment & Test Suite :
    • Basic and Directed Protocol Tests
    • Random Tests
    • Error Scenario Tests
    • Assertions & Cover Point Tests
    • Compliance Test Suite
    • User Test Suite
  • Integration guide, user manual, and release notes
  • TruEYE GUI analyzer to view simulation packet flow

About Truechip

Truechip, the Verification IP specialist, is a leading provider of design and verification solutions. It has been serving customers for more than a decade. Its solutions help accelerate the design cycle, lower the cost of development, and reduce the risks associated with the development of ASICs, FPGAs and SoCs. The company has a global footprint with sales coverage across North America, Europe and Asia. Truechip provides the industry’s first 24×5 support model with specialization in VIP integration, customization and SoC verification.

For more information, refer to Truechip website.

Also Read:

Bringing PCIe Gen 6 Devices to Market

PCIe Gen 6 Verification IP Speeds Up Chip Development

USB4 Makes Interfacing Easy, But is Hard to Implement


The ASIC Business is Surging!
by Daniel Nenni on 04-21-2022 at 6:00 am


Application Specific Integrated Circuits were the foundation of the semiconductor industry up until the IDMs came to power in the 1980s and 90s. Computer companies all had their own fabs (I worked in one) until start-up companies like SUN Microsystems started using off-the-shelf chips from Motorola. SUN moved to the fabless model and designed its own SPARC chips, but Intel was too powerful; with Windows and Linux it took over the CPU space forthwith.

During this transition quite a few semiconductor companies adopted the ASIC model and designed and manufactured chips for other systems companies. IBM, NEC, and Toshiba come to mind, and there were also dedicated ASIC companies like VLSI Technology and LSI Logic who had their own fabs.

The fabless transformation changed all of this of course and now the ASIC market is dominated by fabless companies. The ASIC business today is split into two categories: There are pure-play fabless ASIC companies (GUC, Faraday, Alchip, Sondrel, Verisilicon, Alphawave, SemiFive, eInfochips, etc…) and Chip companies that also do ASICs (Broadcom, Marvell, MediaTek). Broadcom has the former Avago ASIC business and Marvell acquired eSilicon and the Globalfoundries/IBM ASIC business. MediaTek grew their ASIC ambitions organically.

Why is the ASIC business surging you ask? The same reason EDA, IP and TSMC are surging. Systems companies from all walks of life are now doing their own ASICs. It really has come full circle.

Alchip recently released numbers that are quite telling with four consecutive years of record setting performance. Alchip, founded in 2003, is headquartered in Taipei but has roots in the US and Japan.

One of the more telling pieces of data from Alchip is that the majority of their revenue (88%) came from FinFET designs from 16nm down to 5nm, some with complex packaging (CoWoS and MCM).

Alchip President and CEO Johnny Shen expects strong demand in 2022 to come from AI, HPC, and IoT. He also pointed out that a select number of large production-quantity leading edge AI devices entered mass production in 2021, accounting for the company’s record performance.

Also in 2021, Alchip, a TSMC-certified Value Chain Aggregator, taped-out a significant number of 16nm, 12nm and 7nm designs. Several of the 7nm designs involved advanced packaging technology. A number of the 5nm designs will tape out in 2022 and a 3nm test chip is under development, with expected tape out in late 2022.

Bottom line: The ASIC business is the semiconductor world’s oldest profession, much of which is done undercover. The chip supply constraints of the roaring 20s have done more for ASIC’s than anyone could have imagined and will continue to do so, absolutely.

About Alchip
Alchip Technologies Ltd, headquartered in Taipei, Taiwan, is a leading global provider of silicon design and production services for system companies developing complex and high-volume ASICs and SoCs. The company was founded by semiconductor veterans from Silicon Valley and Japan in 2003 and provides faster time-to-market and cost-effective solutions for SoC design at mainstream and advanced nodes, including 7nm processes. Customers include global leaders in AI, HPC/supercomputing, mobile phones, entertainment devices, networking equipment and other electronic product categories. Alchip is listed on the Taiwan Stock Exchange (TWSE: 3661) and the Luxembourg Stock Exchange and is a TSMC-certified Value Chain Aggregator.

Also read:

Alchip Reveals How to Extend Moore’s Law at TSMC OIP Ecosystem Forum

Alchip is Painting a Bright Future for the ASIC Market

Maximizing ASIC Performance through Post-GDSII Backend Services