
Tesla: Two Heads are Better Than One
by Roger C. Lanctot on 02-06-2020 at 10:00 am


Tesla Motors’ stock skyrockets and all observers are shocked and amazed. The shorts that took a multi-billion-dollar hit are doubling down on their concerns regarding German gigafactory construction permits, the coronavirus, and the company’s ability to create demand or fulfill it.

All of these investors are ignoring something that Tesla owners and wannabe owners know: Tesla’s vehicles are fundamentally built differently. From their Ethernet networks to the dual-redundant full self-driving system, Tesla’s vehicles are unlike anything manufactured anywhere else in the world for sale to the general public.

Tesla is also building a powerful network-effect narrative around its stated plans to enable a car sharing/ride hailing service on its existing connectivity platform. But fleet operators aren’t waiting for Tesla’s own networked car solution. Fleet operators from Daimler (that’s right – buying 60 Teslas) to Kapten (Las Vegas taxi operator – bought 50) and many others are stuffing their fleets with Teslas due to their low cost of operation and reliability.

Underlying all of this is the most remarkable value multiplier of all: transparency. While other autonomous vehicle operators and car companies tout their long-term and short-term plans for electrification, connectivity, and autonomy – Tesla publicly discloses its plans, its architecture, its philosophy, and its results.

Is Tesla perfect? Far from it. Tesla vehicles continue to periodically collide with vehicles parked in travel lanes and even on shoulders of roads. We likely have not seen the end of Tesla-related injuries and fatalities.

But Tesla is doing more than any other car maker or operator to explain how its systems work, how and why they fail (usually attributable to the human in the loop), and what the company is doing to correct the shortcomings in the system. Tesla’s approach actually raises questions regarding the legacy auto industry’s approach to safety based on standards such as ISO 26262 and ASIL-D.

ISO 26262 and ASIL-D require organization-wide commitments and behavioral adjustments to anticipate, account for, and test for all potential system failures. With its dual-redundant computing platform, Tesla is suggesting an entirely different path, closer to the aerospace industry, where triple redundancy is not unusual.

Tesla Autonomy Investor Day: https://theteslashow.com/tesla-autonomy-investor-day/fzh9g5j2wcze3euvra49vdab0s69hw

Tesla discloses the architecture in detail along with the nature of the decision-making process that supports the existing semi-autonomous system operation. Tesla nestles its custom-made SoC between the two computers on the board, which receive identical data feeds independently.
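To make the two-channel idea concrete, here is a minimal sketch of the generic dual-redundancy comparison pattern. It is purely illustrative, assuming a simple agree-or-fail policy; every name in it is invented and it is in no way Tesla’s actual software:

```python
# Generic two-channel redundancy pattern: two independent channels compute
# a plan from the same sensor frame; a comparator accepts the result only
# when the channels agree within tolerance. Illustrative sketch only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Plan:
    steering_deg: float   # commanded steering angle
    accel_mps2: float     # commanded acceleration

def agree(a: Plan, b: Plan, steer_tol: float = 0.5, accel_tol: float = 0.2) -> bool:
    """True when the two independently computed plans match within tolerance."""
    return (abs(a.steering_deg - b.steering_deg) <= steer_tol
            and abs(a.accel_mps2 - b.accel_mps2) <= accel_tol)

def arbitrate(plan_a: Plan, plan_b: Plan) -> Optional[Plan]:
    """Accept a plan only on agreement; a None result signals a fault so the
    system can degrade to a safe state or hand control back to the driver."""
    return plan_a if agree(plan_a, plan_b) else None

if __name__ == "__main__":
    a = Plan(steering_deg=1.2, accel_mps2=0.3)
    b = Plan(steering_deg=1.1, accel_mps2=0.3)
    print(arbitrate(a, b))  # channels agree -> Plan(steering_deg=1.2, ...)
```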

It’s difficult to say whether this is Tesla’s answer to ISO 26262 and ASIL-D protocols. It’s also difficult to say it’s the wrong way forward. Tesla has made more progress than any other operator on a path toward a global, scalable autonomous driving solution.

If all of these realities are not enough to convince investors of the upside prospects for Tesla, they need only reconsider the fact that competing with Tesla is like trying to stop a bullet with a badminton shuttlecock. The bullet’s moving too fast and the shuttlecock is moving too slow.

Tesla continues to update its vehicle systems with software upgrades that universally dazzle and delight Tesla owners. The company has gone further, though, in updating hardware on the assembly line – very nearly unheard-of among legacy auto makers – as well as routinely updating hardware within vehicles already on the road.

Tesla is a model of value creation, extension, and maintenance. It’s an aggressive and successful business model that is helping the company gobble up luxury vehicle market share and for which there is no remedy in sight.

A decade into the Tesla era, no car maker has yet risen to the competitive challenge posed by the company. Car makers are scrambling to deploy Tesla-like large screens, Tesla-like over-the-air software updates, Tesla-like 300-mile range, Tesla-like performance, Tesla-like semi-autonomous driving, and Tesla-like reliability. But, so far, only Tesla is delivering.

It remains to be seen whether 2020 will be the year that some car maker catches up to Tesla. Until that happens it will be difficult for any analyst to doubt the stock’s potential to soar even higher.

Don’t miss the Tesla hackathon: https://twitter.com/elonmusk/status/1224087317364854785


Executive Interview: Howie Bernstein of HCL
by Daniel Nenni on 02-06-2020 at 6:00 am


Howie began his career at Digital Equipment Corporation working on real-time device drivers, but within a few years started working at the other end of the stack on one of the pioneering electronic mail systems. Since then, Howie has worked on developing systems involving electronic mail, workflow processing, configuration management, and activity management. He joined Atria in 1994, shortly before it went public, to work on ClearGuide, and subsequently worked on several projects with Atria, Pure Atria, Rational and IBM spanning systems and user interface architecture, design, and development for both ClearCase and ClearQuest. He is currently the product manager for HCL’s configuration and change management products, VersionVault and Compass, as well as ClearCase and ClearQuest.

Hi Howie. Welcome to SemiWiki. Could you tell me a little about HCL?
Of course. HCL may be known by many of your readers as an Indian company that traditionally has been involved with software and hardware services. We’re a $10B USD company, with over 110,000 employees working around the world, with most revenue derived in the United States and Europe. What’s different now is that we have created a software company, HCL Software. In 2016 we entered into an agreement with IBM under which, for many IBM software products, HCL is responsible for the development and support of the products, and all of the IBM development and support engineers were rebadged to HCL, so none of the decades of experience were lost in the transition.

What does this agreement with IBM allow HCL to do?
This agreement is a 15-year intellectual property agreement that automatically renews after 15 years. In fact, during the previous year IBM sold several products covered under this agreement, which are now fully owned, developed, supported and sold by HCL. For the remaining products, in addition to supporting IBM customers, HCL has ownership of the IP as well. HCL may create, market and sell products that are derived from the IP of the products we are developing and supporting for IBM. The products that I am responsible for, HCL VersionVault and HCL Compass, are derived from IBM ClearCase and IBM ClearQuest, enterprise-scale configuration and change management products. IBM has never created an agreement like this before, and that speaks to the trust it has in HCL’s ability to manage the development and support of the products.

Tell me more about your products, and why they might be relevant to our readers?
I think the term used by your readers to describe these products is “Design Management”. In particular, HCL VersionVault, traditionally a “software” configuration management product, has always been capable of managing huge designs and very complex systems. About 10 years ago, at the request of some customers, we created an integration with Cadence Virtuoso. Over the years we have continued to improve this integration and our customers have migrated from other design management tools to ours. It is a deep integration, fully embedded in Cadence Virtuoso and written in SKILL, which allows the designer to fully interact with the design management system without leaving Cadence Virtuoso. There are several attractive features that provide significant benefit for our customers.

Could you tell me about some of those features?
Yes, the crown jewels. Design management and configuration management systems rely on what we call a workspace, where an engineer accesses the assets they need to use or change. Pretty much every system out there relies on copying files to the workspace or creating links in the workspace to server-based files, which must later be converted to local files for performance reasons. HCL VersionVault uses a virtual filesystem to provide fast, transparent access to any file on the server. Any tool that works against files on the filesystem can work against files in what we call a “Dynamic View”. The workspace is created instantaneously, and the user can start browsing and opening files immediately, or running builds or simulations. This is incredibly useful when workspaces can contain tens or hundreds of gigabytes of data. While this technology has traditionally required both clients and servers to be co-resident in the same LAN, a couple of years ago we introduced a workspace that uses this technology in a WAN environment.

Another important feature of HCL VersionVault is what we call configuration specifications, or “config specs” for short. A config spec is a set of rules that identifies which files should be visible in the workspace. A rule can select by pathname (where files or directory trees are located), by branch (which might define what release the files are on), by label (previously applied to an important build or release), or by time (specifying, say, that only versions created before a certain date/time should be included). Config specs are extremely powerful and allow very specific, repeatable configurations. You can resurrect a config spec a decade after its last use and instantaneously recreate that configuration, along with all of the tools used for builds, simulations or verification, to diagnose or fix a defect or implement an enhancement to that release.
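For readers who have not seen one: a config spec is read top-down, first match wins. Below is a small illustrative example in the documented ClearCase/VersionVault rule syntax; the branch, label and date names are invented for this sketch:

```
element * CHECKEDOUT                       # my own checked-out versions first
element * .../bugfix_2.1/LATEST            # then the latest on my task branch
element * REL2.1 -mkbranch bugfix_2.1      # branch from the REL2.1 label on first change
element * /main/LATEST -time 31-Dec-2019   # otherwise, main as it stood at a point in time
```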

In addition, we have something that is very useful for projects with auditing and compliance requirements. When using Dynamic Views, if builds are run under “clearmake” or “clearaudit”, any tools used, any versions touched, and any assets created are included in an automatically generated bill of materials associated with the assets created. The BOM shows exactly which tool versions and which file versions were used in the creation of that asset. This BOM can not only be used to demonstrate the validity of the resulting asset but can also be extremely helpful in tracing a problem detected during testing. We have pharmaceutical and medical device companies that have used this technology to streamline their approval process with the FDA.

That sounds like very interesting, powerful technology. That’s a very compelling story for HCL VersionVault. What can you tell me about HCL Compass?
HCL Compass is an interesting product. Many who don’t know much about it think of it as a defect tracking system, but it is so much more than that. I like to describe it as a fully customizable workflow and process management database application tool. I know, that’s a mouthful. We do support out-of-the-box applications, the simplest of which is the defect tracking application. We also have a more robust ALM (application lifecycle management) application. A major chip manufacturer uses a customized ALM schema to manage its global chip development. We’re working to expand those out-of-the-box applications to fully support the Scaled Agile Framework (SAFe) as well as more traditional requirements management and quality management applications. The amazing thing about HCL Compass is that it is fully customizable in every aspect, from the data stored, to the user interface for entering and changing data, to the processes that can be executed on almost any user interaction. We have some customers who have created their own applications, including a North American government that manages its social security claims with our product.

Can you talk about what HCL is doing now that it is responsible for the development and support of the IBM products as well as the HCL derivative products, VersionVault and Compass?

HCL is investing significantly in both products. Since entering into the agreement with IBM, we have more than doubled the size of the development teams, and are beginning to market the HCL products. Our investments have been focused on several areas, primarily modernization of the products with current technologies. We are implementing REST APIs, webhooks, new GUIs, cloud and container support, and functional enhancements our customers have been asking for. IBM has traditionally not done product marketing; HCL, on the other hand, has started doing its own product marketing. Some of your readers may have seen us at the Cadence CDNLive conferences over the past year. We plan to continue our attendance as exhibitors there and expand to other conferences as well. We’re also in the process of expanding our EDA footprint with an integration with Synopsys Custom Compiler, and are considering other tools as well.

Also Read:

CEO Interview: Adnan Hamid of Breker Systems

CEO Interview: Cristian Amitroaie of AMIQ EDA

CEO Interview: Jason Oberg of Tortuga Logic


TSMC Unveils Details of 5nm CMOS Production Technology Platform Featuring EUV and High Mobility Channel FinFETs at IEDM2019
by Don Draper on 02-05-2020 at 10:00 am


Back in April 2019, TSMC announced that its 5nm technology had entered risk production, and now at IEDM 2019 the company brought forth a detailed description of the process, which has passed 1000-hour HTOL qualification and will be in high-volume production in 1H 2020. This 5nm technology is a full node scaling from 7nm, using smart scaling of major design rules (gate, fin and Mx/Vx pitches) for improved yield, featuring an SRAM cell of 0.021µm² and a declining defect density D0 that is ahead of plan.

A primary reason for the success of the 5nm technology platform is the implementation of Extreme Ultra-Violet (EUV) photolithography. Full-fledged EUV replaces at least four immersion masking layers apiece at cut, contact, via and metal-line steps, for faster cycle time and better reliability and yield. Total mask count at 5nm is several masks lower than at the previous 7nm node. Fig. 1 shows how one EUV mask replaced five immersion masks yet produces better patterning fidelity, shorter cycle time and fewer defects.

Fig. 1. Diagram of BEOL metallization comparing EUV vs. immersion photolithography showing how one EUV mask replaced five immersion patterning layers with better patterning fidelity, shorter cycle time and fewer defects.

FinFETs have been used in four generations from the 16nm node to 7nm, but performance as a function of channel mobility has been stagnant.  To address this, the High Mobility Channel (HMC) was implemented to increase performance.  The TEM in Fig. 2 shows the fully-strained HMC lattice constant interfaced with the Si lattice constant. The diffraction pattern confirmed HMC strain.

Fig. 2. FinFET cross-section TEM showing the fully-strained HMC lattice constant interfaced with the Si lattice constant. The second plot shows the higher leakage vs drive current of the silicon vs HMC transistors. The third plot shows the channel stress in GPa vs channel depth from the fin top to the fin bottom. The diffraction pattern shown confirms the HMC strain.

The HMC finFET has excellent Id-Vg characteristics as shown in Fig. 3 and produces ~18% more drive current than the Si finFET.  Figure-of-Merit (FOM) ring oscillator standby power also correlates well to transistor leakages.

Fig. 3. Chart showing drain current vs gate voltage (Id vs Vg) characteristics of the High Mobility Channel (HMC) transistors for different drain voltages.  The second plot shows the off-current ranges, Ioff-N and Ioff-P and the relative impact on standby current of the seven different Vt’s available in the technology. The currents in both diagrams are in logarithmic scale with one decade per division.  The Drain-Induced Barrier Lowering (DIBL) is 45mV and 35 mV and the swing is 69mV and 68mV for p-channel and n-channel transistors respectively.

This 5nm CMOS platform technology is a full node scaling from the 7nm process described in IEDM 2016. The availability of up to seven Vt’s for each transistor type, shown in Fig. 4, enables product design to meet the needs of power efficiency in mobile SoC as well as peak speed requirements of HPC.

Fig. 4. Chart of up to seven Vt’s available in N5 showing standby power in uW vs speed in GHz for N5 and N5 HPC compared to N7, to meet maximum power efficiency for mobile and peak speed in HPC. eLVT offers 25% faster peak speed over 7nm. Silicon data closely match the FOM ring speed vs standby power curves.

New HPC features are the extremely low Vt (eLVT) transistor, with 25% faster peak speed over 7nm, and three-fin standard cells for an additional 10% performance increase. The technology is available for 3D chip stacking using hybrid bonding. In addition to impressive density and performance gains relative to 7nm, the technology has achieved 1000-hour HTOL qualification with improved stress-aging characteristics relative to the 7nm technology. The high-yielding SRAM and logic defect density D0 is ahead of plan. Technological achievements enabling this progress feature full-fledged implementation of EUV and high-mobility channel (HMC) finFETs.

This 5nm platform technology was designed and developed to meet objectives of PPACT (power, performance, area, cost and time-to-market). Design-Technology Co-Optimization (DTCO) is emphasized for smart scaling, avoiding brute-force scaling, which would lead to drastically increased process cost and yield impact. Design features such as gate-contact-over-diffusion and unique diffusion termination, along with EUV-based gate patterning, enable SRAM size reduction and increased logic density. The 5nm technology offers 15% faster speed at the same power or 30% power reduction at the same speed, with 1.84x the logic density of the 7nm node, as shown in Fig. 5.

Fig. 5. Plot comparing the speed in GHz vs. the core area in um2 of the N5 technology vs the previous N7. The 5nm technology offers 15% faster speed at the same power or 30% power reduction at the same speed with 1.84x logic density of the 7nm node.
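As a back-of-envelope check (our arithmetic, not a figure from the paper), a 1.84x density gain corresponds to an average linear shrink of

$$\sqrt{1/1.84} \approx 0.74,$$

i.e., critical pitches at roughly 74% of their 7nm values, consistent with a full-node step.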

Interconnect delay has a critical impact on product performance, and with each generation the interconnect propagation delay has been getting significantly worse. Backend metal RC and via resistance are shown in Fig. 6 for generations from N28 to N5. The tightest-pitch Mx RC and the Vx Rc are kept similar to the 7nm node by EUV patterning, innovative scaled barrier/liner ESL/ELK dielectrics and Cu reflow.

Fig. 6.  Charts of  normalized BEOL metallization RC product and via resistance vs nodes from N28 to N5 are shown. For the tightest metal pitch, MX RC and via resistance Vx Rc are kept similar to that of the previous 7nm node by EUV patterning, innovative scaled barrier/liner ESL/ELK dielectrics and Cu reflow.

SRAM density and performance/leakage are critical for mobile SoC and for HPC AI. Scaling of SRAM cells at more advanced nodes is becoming more difficult in feature-size terms of F². The High Current (HC) and High Density (HD) SRAM cells offered, with cell areas of 0.025µm² and 0.021µm² respectively, are the densest in the industry, as shown in Fig. 7. Consistent high yield of the 256Mb SRAM and logic test chips, of >90% peak yield and ~80% average yield (without repair), has been achieved.

Fig. 7. Chart of published SRAM cell size in µm² vs year of publication. The 5nm HD SRAM cell at 0.021µm² is the densest offered in the industry.
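For scale (our arithmetic, not from the paper), the 0.021µm² HD cell corresponds to a raw bit density of

$$\frac{1\,\text{bit}}{0.021\,\mu\text{m}^2} \times 10^{6}\,\frac{\mu\text{m}^2}{\text{mm}^2} \approx 47.6\,\text{Mb/mm}^2$$

before array overhead and periphery are accounted for.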

The ultra-low-leakage ULHD cell can be used to reduce retention leakage for better power efficiency, while the higher-speed HSHD SRAM may be used as an alternative to HC SRAM cells to allow ~22% reduction in memory area, as shown in Fig. 8.

Fig. 8.  Chart of standby leakage in pA at 0.4V  vs cell current in uA for ULHD, HSHD and standard HD SRAM cells. The Vout vs Vin butterfly curve plots of the 5nm HD SRAM cell  are shown at voltages  from 0.75V down to 0.3V.

The shmoo plot of the 256Mb 0.021µm² HD SRAM cell with full read/write function is shown down to 0.4V in Fig. 9.

Fig. 9. Shmoo plot showing Vout vs Vin from 1.0V down to 0.4V of the 256Mb SRAM based on the 5nm 0.021µm² HD SRAM cell.

The frequency response shmoo plots of the GPU and CPU blocks in the high-yielding logic test chip are shown in Fig. 10.

Fig. 10. Shmoo plots of frequency in GHz vs. voltage for the GPU and CPU blocks respectively in the high yielding logic test chip in the 5nm qualification vehicle.

The 256Mb HD/HC SRAM and logic test chip passed 1000 hour HTOL qualification. The SRAM Vmin showed a negligible shift at 168 hours and passed the 1000 hour HTOL with ~51mV margin as shown in Fig. 11.

Fig. 11. Plots of log-normal distribution vs Vmin in mV at 168 hours HTOL showing negligible Vmin shift, and at 1000 hours HTOL, passing 1000 hours with ~51mV margin.

Stress-aging data at 0.96V and 125°C on the 5nm FOM ring oscillator made with the high-mobility-channel finFETs are shown in Fig. 12, with improved aging relative to the 7nm node.

Fig. 12. Plot showing T50% lifetime(years) vs. stress voltage Vstr of aging study at 125C of N5 HMC finFET ring oscillators and N7 silicon finFET ring oscillators showing improved aging at the 5nm node relative to that at 7nm.

Another important feature for HPC is the metal-insulator-metal (MiM) capacitor formed in the upper layers of the BEOL metallization. The 5nm node MiM has 4x higher capacitance density than the typical HD-MiM; it produces ~4.2% faster Fmax by minimizing transient voltage droop and achieved ~20mV Vmin reduction in a CPU test chip.

HPC critically depends on high-speed IOs, especially SERDES. By optimizing finFET drive strength and capacitance/resistance with special high-speed devices, a PAM-4 SERDES transmitter achieved 112 Gb/s at 0.78 pJ/bit and 130 Gb/s at 0.96 pJ/bit power dissipation, as shown in Fig. 13.

Fig. 13. Plots showing signal characteristics of voltage out in mV vs time in ps of 112 Gb/s and 130Gb/s data transmission in SERDES PAM-4 with 0.78pJ/b and 0.96pJ/b respectively.

In conclusion, TSMC has presented a very competitive technology platform, establishing its leadership in best-in-class, highest-density logic technologies. Volume production in 1H 2020 will enable leading-edge products in advanced SoCs for mobile, especially 5G, as well as HPC applications for AI, datacenter and blockchain products, which increasingly need high performance with the best power efficiency.


Verification, RISC-V and Extensibility
by Bernard Murphy on 02-05-2020 at 6:00 am


RISC-V is obviously making progress. Independent of licensee signups and new technical offerings, the simple fact that Arm is responding – in fundamental changes to their licensing model and in allowing custom user extensions to the instruction set – is proof enough that they see a real competitive threat from RISC-V.

Which all sounds great, but there’s a problem – verification. Dave Kelf of Breker gave me some interesting perspectives on this. In verification of the CPU itself, Arm has decades of experience and rich ecosystem support. This all works very well when the CPU IP can’t be changed. Development teams build up giant regression suites which they can use to verify complete compliance with compilers, backward compatibility and all those other requirements.

But when you have a core allowing for instruction extensions, the core vendor can confidently verify the core as they ship it, but how does the ultimate product user verify the core with their extensions? You can’t just assume that the extensions have zero interaction with the behavior of the unmodified core. You really need to re-verify everything, including all the stuff you didn’t mess with.

This is apparently a problem all the extensible core vendors run into. They can’t ship their massive regression suites to their customer. Instead they typically reverify customer cores with extensions, in-house, a service which apparently they are expected to perform at no cost.

As a part-solution to this problem, the RISC-V group has put a lot of effort into testing compliance between independent implementations of the CPU to encourage cross-compatibility and a healthy ecosystem. Test suites are available from Imperas, Codasip, Google and many others.
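Mechanically, much of this compliance testing boils down to running the same program on an implementation and on a golden reference model, then comparing architectural state instruction by instruction. The sketch below is a minimal illustration of that idea; the trace fields are hypothetical, not any particular vendor’s format:

```python
# Compare a DUT's retired-instruction trace against a golden ISS trace.
# Any architectural divergence (PC, destination register, written value)
# flags the first instruction where the implementations disagree.

from typing import Iterable, NamedTuple, Optional

class Retired(NamedTuple):
    pc: int      # program counter of the retired instruction
    rd: int      # destination register index (0 if none)
    value: int   # value written to rd

def first_divergence(dut: Iterable[Retired],
                     ref: Iterable[Retired]) -> Optional[int]:
    """Return the index of the first mismatching retirement, or None."""
    for i, (d, r) in enumerate(zip(dut, ref)):
        if d != r:
            return i
    return None

dut_trace = [Retired(0x80000000, 1, 5), Retired(0x80000004, 2, 7)]
ref_trace = [Retired(0x80000000, 1, 5), Retired(0x80000004, 2, 8)]
print(first_divergence(dut_trace, ref_trace))  # -> 1
```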

As an aside, I wonder how well behavior can be bounded around custom extensions to the instruction set? Timing certainly, with stalls to the pipeline if an instruction doesn’t complete within a cycle. That seems necessary but insufficient, and something of a compromise. How will a stall affect other operations? How will the instruction interoperate with caching and other complications? Proving that a custom extension cannot disturb the correct operation of the rest of the system, or vice-versa, sounds like a hard verification problem. Maybe that’s just me.

Back to Dave, he sounded pretty confident that between the compliance standard groups and companies building compliance solutions, they’d figure out ways to ensure strong compliance in CPU implementations. But what they aren’t working on (as far as he knows) is system compliance – the interoperation of a CPU (or CPUs) with all the surrounding infrastructure: bus fabrics, caches and coherent fabrics, interrupt management, memory management, etc, etc.

Arm have put a lot of work, through their ecosystem, into verifying this kind of infrastructure. If an SoC product team switches from Arm to RISC-V and loses this support, they are really going to struggle in verifying their SoCs. Breker had already developed an app on their Trek platform in support of verifying integrations around the ARMv8 platform, so it was natural to spin a comparable solution for RISC-V.

The Breker team started with the ARMv8 tests, including cache coherency and interrupt testing, and added tests they thought might be necessary for RISC-V. They found some early customers who were sufficiently interested to run evals. Then SiFive approached them, maybe referred by one of those eval clients. SiFive were also running into the problem I mentioned earlier, needing to re-regress customer modifications against the internal SiFive regression suite.

SiFive also wanted a method to test their own internal processes. They saw the Trek RISC-V app as a way to do that, an independent audit of their quality. They helped Breker add more standardized tests, including a bunch of load/store-type operations according to Dave. SiFive were sufficiently impressed with the ultimate app that they have become one of the biggest customers for this product. That’s an impressive endorsement given SiFive’s leading role in RISC-V cores.

Breker released the RISC-V app a couple of months ago and Dave tells me they’re getting a ton of interest from customers. He says for him it’s really clear a lot of design teams are having this integration problem. They build their SiFive core, integrate it into their SoC and the system falls over. Without the Arm debug ecosystem, they need an alternative. They are evidently seeing a lot of promise in the Breker Trek RISC-V app.

You can learn more about the Trek RISC-V app HERE.

Also Read

Build More and Better Tests Faster

Taking the Pain out of UVM

WEBINAR: Eliminating Hybrid Verification Barriers Through Test Suite Synthesis


Signal Channel Design and Simulation for Silicon Interposer Packaging on High-Speed SerDes
by Mike Gianfagna on 02-04-2020 at 10:00 am


This year is the 25th anniversary for DesignCon.  The show has changed a lot over the years. Today, it’s a vibrant showcase of all aspects of advanced product design – from ICs to boards to systems. The show floor reflects the diverse ecosystem. If you missed it this year, definitely plan to go next year.

The DesignCon technical program has many tracks. Some discuss theoretical research while others focus on real design issues being faced today. I attended a very interesting presentation that falls in this latter category. Danny Ho, SI/PI department manager at MediaTek, discussed 2.5D design. There are many presentations at DesignCon on this topic. This one was different. Danny began with an overview of the motivation for 2.5D packaging vs. more traditional approaches such as flip-chip and discussed the need for a silicon interposer to support designs containing HBM memory stacks.

This was not the focus of his talk, however. Rather, Danny focused on the signal channel created by the silicon interposer. The associated microbumps, C4 bumps, TSVs and dense routing create structures that are significantly more complex than what’s seen in a flip-chip package. It turns out there are many technical challenges associated with these structures, and Danny’s presentation explored several of them. The work presented was a collaboration with Cadence Design Systems. Cadence sponsored the session.

From an electrical perspective, there will be signal integrity challenges such as dense coupling and reflection effects. The TSVs also present different characteristics than those seen with more traditional packaging. The tight die-to-die tolerances will also present EMI challenges.

The large size of the silicon interposer and the associated high-power consumption of the on-board components will also present warpage and heat dissipation issues.

In the study presented, the signal integrity issues associated with coupling and reflection were investigated. The performance levels of interest extend to those delivered by 50G – 112G SerDes technology. This presents a complex modeling problem. Traditional tools cannot deliver the required accuracy in reasonable time. The Cadence Clarity 3D Solver was chosen to perform the analysis. Danny explained that Cadence Clarity can accommodate the complex models associated with the silicon interposer channel and employ massively parallel compute power to perform the analysis. According to Danny, this capability was previously unavailable.

Danny then discussed some real case studies and what was learned. A key issue is alignment of the larger C4 bumps on the interposer with the microbumps on the chip. Due to the potential incompatibility of chip design constraints and foundry interposer design rules, the two structures may or may not share the same pitch. Misalignment of these structures can cause reflection and coupling. Specifically, crosstalk issues are seen with misaligned C4/microbump structures.

Next, the effects of copper dummy metal were discussed. All foundries have rules regarding metal uniformity, and dummy metal must be added to adhere to these rules. Using Cadence Clarity, it was found that insertion loss degradation due to dummy metal was not a major issue below 5GHz. Above 5GHz, however, insertion loss and return loss become much worse with dummy metal, since the dummy metal increases trace impedance and capacitance.

Another experiment consisted of re-arranging the microbumps to improve alignment. This improves crosstalk. Return loss and insertion loss still showed some degradation. A final experiment looked at the effects of ground plane shielding. It was found that insertion loss and return loss improved when ground plane shielding was removed.
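For readers who want to poke at results like these themselves, channel models exported from a 3D solver as Touchstone S-parameter files can be inspected with the open-source scikit-rf package. A small sketch, assuming a hypothetical two-port channel file named interposer_channel.s2p:

```python
# Inspect insertion loss (S21) and return loss (S11) of an extracted
# channel model using scikit-rf. 'interposer_channel.s2p' is a
# placeholder name for a Touchstone file exported by the 3D solver.

import numpy as np
import skrf as rf

ch = rf.Network("interposer_channel.s2p")

s21_db = 20 * np.log10(np.abs(ch.s[:, 1, 0]))   # insertion loss, dB
s11_db = 20 * np.log10(np.abs(ch.s[:, 0, 0]))   # return loss, dB

for f_ghz in (5, 14, 28):                        # example frequency points
    idx = int(np.argmin(np.abs(ch.f - f_ghz * 1e9)))  # ch.f is in Hz
    print(f"{f_ghz} GHz: IL = {s21_db[idx]:.2f} dB, RL = {s11_db[idx]:.2f} dB")
```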

This work provides a lot of guidance for effective interposer design. Re-arrangement of microbumps for better alignment provides improved performance. This requires careful design modifications, however. Now that the data is available, Danny reported that there are discussions underway with foundries regarding dummy fill and ground planes and their effect on design performance.



Intel vs AMD Q4 2019 Conference Calls
by Daniel Nenni on 02-04-2020 at 6:00 am


Now that the dust has settled and I’m out of coronavirus quarantine, let’s talk about the Intel and AMD conference calls. Unfortunately, the Intel and AMD marketing teams are still outpacing engineering, so it is difficult to write something serious, but I will do my best.

Spoiler Alert: Both CEOs disappoint.

First an Intel 10nm update:

“In client computing, we are seeing excellent momentum for our first 10 nanometer mobile CPU, Ice Lake, with 44 system designs already shipping. In Q4, we ramped our 10 nanometer production and continue to see yields improve. We are planning nine new product releases on 10 nanometer this year, including our next-gen mobile CPU, a 5G base station SOC, an AI inference accelerator, our first discrete GPU and Xeon for server, storage and networking.”

Not bad for a node that was “rumored” to be cancelled. Lesson learned, I hope. Remember, Intel attempted a 2.7x density increase for 10nm, which led to serious manufacturing challenges. Intel 7nm with EUV will be back to a 2.0x density target.
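A quick geometric translation of those targets (our arithmetic, not Intel’s): a 2.7x density increase implies linear dimensions scaling by

$$\sqrt{1/2.7} \approx 0.61,$$

versus

$$\sqrt{1/2.0} \approx 0.71$$

for a 2.0x target, the traditional ~0.7x per-node shrink. Intel’s 10nm demanded a markedly more aggressive shrink than the industry norm, which helps explain the pain.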

“Across our 14 and 10 nanometer nodes, we are adding 25% wafer capacity this year to deliver a high single digit increase in PC unit volume. This will enable us to meet market demand, deliver our 2020 financial plan and increase inventory to more normalized levels.”

In the second half of 2019 I had heard rumblings amongst the supply chain that AMD was getting design wins in client computing due to the Intel 14nm shortages. Whether these design wins get into production or not I don’t know but the window for AMD certainly is open in 2020. Intel is again upping capacity but that will not calm the supply chain until later this year.

“In 2019, we generated $3.8 billion in AI-based revenue. The AI market opportunity is expected to be $25 billion by 2024 and we are investing to lead with a strong portfolio of products.”

Intel has CPUs, GPUs, FPGAs, and AI-specific silicon. How is AMD, or anybody else, going to compete with Intel in the cloud? Except maybe the cloud companies themselves once they ramp up the latest in-house silicon. Even so, there will still be an Intel CPU inside for the software stack, housekeeping, etc.

“We are also on track to deliver 10 nanometer-plus this year, our first performance upgrade on 10 nanometer. Our 7 nanometer process remains on track to deliver our lead 7 nanometer product, Ponte Vecchio, at the end of 2021 with CPU products following shortly after in 2022.”

Translation: Intel 7nm HVM in 2022. AMD will be shipping TSMC 5nm in 2022 so again we will have process parity amongst the giants, similar to Intel 10nm vs AMD 7nm. This gets Intel back to the two year tick-tock cadence. Tick is a new process, tock is a new architecture, if you count 10nm HVM for 2020 and 7nm HVM for 2022.

In the meantime, TSMC and Apple will deliver industry leading 5nm SoCs this year and 5nm AMD CPUs will probably start to appear in 2021. It will be interesting to see how Intel 10nm++ stacks up against TSMC 5nm. I will check on that during SPIE later this month.

In the Q&A Intel once again said they are not outsourcing CPUs so the TSMC and now GF outsourcing rumors are click-bait. Seriously, AMD 14nm is GF so why would Intel even go there?

I wish there was something good to say about the AMD call but there isn’t from my point of view. Just the same rose colored glasses and promises for a better tomorrow. If you disagree hit me up in the comments.

One thing I did hear in China is that their homegrown x86 CPUs are doing better. In 2013 VIA Technologies created a joint venture (Zhaoxin) with the Shanghai Municipal Government. It started with a single core at TSMC 40nm, moved to four cores at 28nm, and is now on 16nm with eight cores. The 7nm version has already taped out, and I’m sure they will continue down the TSMC process node roadmap.

From what I am told this will hit AMD-based laptops in China before Intel, but the Made in China semiconductor initiative is coming, absolutely.


DVCon Is a Must Attend Event for Design and Verification Engineers
by Daniel Payne on 02-03-2020 at 10:00 am


Learning is a never-ending process for design and verification engineers, so outside of reading SemiWiki you likely want to attend at least a few events per year to keep updated, learn something new, attend a workshop, or even present something that has made your IC project work much better than before. Sure, DAC is always a great event in July, but did you know that over 1,000 engineers are expected to attend DVCon from March 2-5 in San Jose?

I’ve attended DVCon in past years and can tell you that it’s well organized in its 32nd year, and has quite the wide range of activities:

  • Tutorials
  • Luncheons
  • Workshops
  • Receptions
  • Sessions
  • Keynotes
  • Panel Discussions
  • Exhibitors

The General Chair this year is Aparna Dey, who wrote a concise welcome blog on the DVCon site; her day job is at Cadence, working on standards. Speaking of standards, the DVCon event is sponsored by Accellera, the group that promotes so many EDA and semiconductor IP standards activities.

AI is all the buzz in our tech world, so the Tuesday keynote is titled AI for EDA, presented by Dr. Anirudh Devgan, president of Cadence, with past stints at Magma and IBM. Deep learning applied to EDA tools will be discussed.

Panel sessions should get lively on Wednesday, because the first one includes RISC-V and the second one wants to fix what’s broken today, plus you can ask questions to either stump the panelists or get clarification.

You’ll be exhausted trying to attend everything, because there are 42 papers, four tutorials, some 23 poster sessions, 10 workshops and the exhibitors. I’d love to hear your feedback about DVCon this year, so send us your trip reports for sharing with other engineers, or better yet, post your trip report in our Forum.

About DVCon
DVCon is the premier conference for discussion of the functional design and verification of electronic systems. DVCon is sponsored by Accellera Systems Initiative, an independent, not-for-profit organization dedicated to creating design and verification standards required by systems, semiconductor, intellectual property (IP) and electronic design automation (EDA) companies. In response to global interest, in addition to DVCon U.S., Accellera also sponsors events in China, Europe and India. For more information about Accellera, please visit www.accellera.org. For more information about DVCon U.S., please visit www.dvcon.org.

Follow DVCon on Facebook https://www.facebook.com/DVCon or @dvcon_us on Twitter or to comment, please use #dvcon_us.


Logic and Memory Make for a Recovery
by Robert Maire on 02-03-2020 at 6:00 am

  • LAM – “Logic And Memory” – make for a recovery: NAND (Samsung) & logic (TSMC) + China
  • Great Q4 results & Q1 guide as memory restarts
  • Logic strength continues – China is crucial to growth
  • 2019 better than expected – 2020 WFE up about 5-8%

Lam reports nice finish to 2019 and start of 2020
The company reported revenues of $2.58B and EPS of $4.01, with guidance of $2.8B ±$200M and EPS of $4.55 ±$0.40.

This was at the higher end of guidance and above expectations for Q1.
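For context (our arithmetic, not the company’s), the guidance midpoint implies roughly

$$\frac{\$2.80\,\text{B}}{\$2.58\,\text{B}} - 1 \approx 8.5\%$$

sequential revenue growth for the March quarter.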

We would note that Lam has a consistent history of beating guidance so the beat was in line with prior beats but the future guidance was better by a wider margin than expected.

Memory uptick was well expected
We have been saying for some time now that NAND memory spending had already picked up, even though we remain in an oversupplied condition.

We are a bit surprised that the street reacted with such surprise to the upside in memory, as it has been known and expected for a while now.

Samsung spending to get ahead in technology, not capacity
In our view the prior memory spending cycle was heavily weighted to pure capacity adds, whereas the current increase in spend appears to be more technology-directed and more specific to Samsung.

In prior up cycles we have seen Samsung spend money to try to get a technology/cost advantage over competitors, which in this case would be pushing to get to 128-layer NAND and perhaps beyond while competitors are stuck at fewer layers and higher per-bit costs.

We think there still exists excess capacity and some idled machines but that having a technology advantage is worth spending on.  In prior cycles, Samsung was still able to turn a profit in memory while others were losing money due to the large cost gap from technology differences.  In this most recent cycle, Samsung was more negatively impacted as they did not have the same wide cost differential with their competitors.

In short, we think Samsung is stepping on the accelerator to put distance between itself and the competition’s technology.

Logic remains large
TSMC remains a big spender in foundry and Intel is spending money as well (although not as much…). As the “poster child” of memory spending, Lam was doing OK with foundry/logic, but real growth can come back if memory comes back.

The upside should have been well expected
We pointed out that when Ichor (a Lam sub-supplier) pre-announced a strong quarter a while ago, it was a very clear, unmistakable signal that both Lam and AMAT (their two biggest customers) were obviously going to have a good quarter. We suggested buying into Lam and AMAT on the Ichor news… if you didn’t, you were asleep and not reading…

China likely discounted out
China has become a big part of Lam’s business and one of the biggest geographic regions, with a good mix. The company sounds like it has built a coronavirus discount into its guidance and planning to offset any expected problems. This is obviously both prudent and conservative planning. Given the company’s history of under-promising and over-delivering, they have likely discounted a worst case.

The stocks
The stock had a huge run-up in the aftermarket, which seems to indicate that many investors have not been paying attention to the improvement in memory of late or had been discounting it. Memory spend is very volatile on the upside as well as the downside, and the spend can be turned on very quickly without much time needed for a ramp. Memory makers certainly know the tool sets they need and have been working on the process flow non-stop throughout the cycle, so they can quickly turn on when they want.

We would expect other semiconductor equipment companies and their suppliers to have equally positive reports and similar commentary on China. DRAM spend remains questionable in our view, as does the length of foundry spend, but right now the NAND memory recovery will likely support the stocks for the near term.


The Tech Week that was January 27-31 2020
by Mark Dyson on 02-02-2020 at 6:00 am


This week the coronavirus was escalated to Global Health Emergency status by the WHO, China extended the Chinese New Year shutdown to 9th February, and many companies implemented bans on business travel to/from China or Asia as well as other business continuity procedures. The last such crisis, SARS in 2003, was estimated to have cost the global economy US$40 billion and shut down the Chinese semiconductor industry for months. Already the number of people infected with the coronavirus exceeds that of SARS, though luckily so far the fatality rate seems to be much lower. Many companies are seeing the first impact of this crisis and are desperately trying to get a full understanding of the total supply chain impact, which is difficult to gauge accurately at present as many people in China have not returned to work. This is not good for an industry that was just starting to recover from a challenging year.

As we start the new year, Semiconductor Packaging News has been running a series of articles from various leaders in the industry with their forecasts for 2020. Lena Nicolaides from KLA expects semiconductor packaging growth to be very strong in 2020, with adoption of advanced packaging solutions. Jim Faine from Marvin Test Solutions expects 5G and autonomous cars to be the main growth areas. Ram Trichur from Henkel Corporation expects 2020 to be a growth year, with solid gains across semiconductor packaging applications driven by 5G telecom and mobile electronics, and by some specific growth areas within the automotive/industrial and datacenter/memory sectors. David Butler from SPTS Technologies Inc is very optimistic about the prospects for advanced package technology in 2020, driven by the roll out of 5G. David Wang from ACM Research sees opportunity from the trade war in being able to do business in both China and the US, through its US headquarters and Shanghai-based wholly owned subsidiary, at a time when many US equipment suppliers are concerned about doing business in China.

Latest economic data shows that Taiwan was one of the main economic winners last year, reporting GDP growth of 2.73% for 2019 as a whole and fourth-quarter GDP growth of 3.38%. Taiwan is particularly benefitting from the trade war as it sees increased orders from both China and the US.

SEMI has released its North American semiconductor equipment sales report, showing that billings were 17.5% higher in December than in November, with total billings of US$2.49 billion.

Apple reported a solid fiscal Q1 2020 on increased iPhone sales, with all-time record revenue of US$91.8 billion. It is forecasting revenue for Q2 of US$62.4 billion, as it believes its phones and other devices such as AirPods wireless headphones will continue to sell well during what is often a slow time of year.

AMD reported revenues of US$2.13 billion in Q4, up 50% yoy and 18% sequentially. AMD is projecting revenue of US$1.8 billion in Q1, up 42% yoy but down 15% sequentially. For 2019 as a whole AMD reported record annual revenue of US$6.73 billion, up 4% yoy. For 2020 as a whole it is expecting revenue growth of about 28% to 30% year-over-year.

Xilinx reported a decrease in revenue in its Q3, which ended in December; total revenue was US$723.5 million, down 10% yoy and down 13% sequentially, as it suffered from the US-China trade war, particularly the trade restrictions on dealing with Huawei, as well as slower-than-expected deployment of 5G technology. As a result, it has announced plans to cut about 7% of its worldwide workforce. For the coming fiscal Q4, it is forecasting revenues with a midpoint of US$765 million.

Cree announced its fiscal Q2 earnings, reporting revenue of US$240 million, a 14% yoy decrease and a 1% sequential decline, due to lower LED segment revenue and weakness in power and RF device sales. It also announced that its recent application for a licence to ship to Huawei was turned down. Despite the short-term headwinds, Cree sees growing momentum for its silicon carbide technology and is forecasting revenue for the current fiscal Q3 with a midpoint of US$225 million.

Finally, a report by the US Department of Energy predicts that adoption of LED lamps in the US general lighting market will produce energy savings of more than 569 terawatt-hours annually by 2035, equal to the annual output of more than 92 power plants of 1,000 megawatts each.
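That plant equivalence checks out under a typical capacity-factor assumption (our arithmetic, not from the report):

$$\frac{569\ \text{TWh/yr}}{8760\ \text{h/yr}} \approx 65\ \text{GW}_{\text{avg}}, \qquad \frac{65\ \text{GW}}{92 \times 1\ \text{GW}} \approx 0.71,$$

i.e., 92 one-gigawatt plants running at roughly a 70% capacity factor.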


Privacy is Different in Cars
by Roger C. Lanctot on 01-31-2020 at 6:00 am


The New York Times’ “The Privacy Project” highlights all that is terrifying about our surveillance economy. We blithely throw away our privacy for the privilege of freely accessing mountains of information about the things we want to buy, the celebrities and teams we follow or support, or to get directions home.

Thousands of applications are tracking our movements via our smartphones – a reality we are more or less comfortable with since we can control that access using our privacy settings. Still, we hear about apps that continue to track and gather data long after we would have expected them to stop – and we have no control or visibility into how the information is being used.

The most telling trope in the latest installment of “The Privacy Project” – which appeared on-line a few weeks ago but just arrived in the physical paper last Sunday – demonstrates how the President of the United States could be tracked using data from the smartphones carried by his secret service detail. It’s a chilling illustration magnified by examples of massive data extractions regarding the movement of CIA personnel into and out of their offices in Langley, Va., along with satellite imagery illustrating similar movements for White House and Pentagon employees.

There are limits to what can be illustrated in a newspaper article, but the point is to demonstrate the ability to look at this data in the aggregate – more or less heat maps of masses of people – and individually – tracking a senior diplomat, military general or security figure all the way back to his or her home, for example. It’s enough to make you want to put your smartphone in the freezer with your car keys – or maybe wrap it in lead.

Car companies have been struggling to come to grips with the unique demands of privacy in the context of the operation of a motor vehicle. Every year, one or more car company CEOs step forward and assert their commitment to protecting the privacy of their customers. GM executives are fond of saying: “The customer owns the data.” The only problem is that the typical GM OnStar customer can’t get access to the data that GM is collecting – which renders “ownership” meaningless. The privacy game is played differently in the automotive industry and the stakes are higher.

Tesla Motors has set the terms of engagement for owning a Tesla. Owners are virtually obliged to share their vehicle information and, with that comes some level of privacy violation. Like the surveillance economy built by Google and Facebook on the foundation of freely shared information exchanged for economic value, Tesla offers a vehicle enhancement value proposition founded on software updates – which requires an always available wireless connection.

I moderated the keynote panel discussion at the Consumer Telematics Show preceding CES2020 in Las Vegas, where a senior executive from Karma, maker of a connected EV that competes with Tesla, noted that customers must agree to share vehicle data to take delivery of their Karma. No sharing, no vehicle.

The requirement sounds onerous for two reasons. Firstly, the average car buyer sees their vehicle as a refuge and a source of freedom. A vehicle connection and a data sharing proposition suggests intrusion and loss of control.

The requirement is also worrisome because cars have yet to implement smartphone-like consumer controls for privacy and data sharing. A consumer driving a connected car cannot easily take him or herself “off the grid” – without driving beyond cellular coverage.

More importantly, car companies are increasingly being told that they must take steps to ensure drivers are paying attention. New requirements emanating from the European New Car Assessment Program (NCAP) call for a driver monitoring system capable of measuring percent closure (“perclos”) of eyes. In other words, within a few years drivers will begin to see cameras introduced in vehicle cockpits to ensure they are paying attention to the driving task.

Once driver monitoring systems are in place, though, driver identification and credentialling will follow rapidly – especially given the rapid integration of on-board e-commerce systems and personalized digital assistants. For me, it’s all okay and it all makes sense as long as the guiding principle is safety and collision avoidance.

Collision avoidance is a clear value proposition. I also want the peace of mind that my car maker can find me if it needs to notify me of a flawed or failing system in my car. Year after year car makers in the U.S. and elsewhere around the world have struggled to locate all of the cars equipped with potentially deadly Takata airbags that need to be replaced. Please, please violate my privacy to get me this urgent notification.

If on-board systems in my car violate my privacy, but do so in the interest of preserving my life, I am good with that. Of course, this is a step above and beyond the software update value proposition promised by Tesla and Karma.

Driving a car is a life and death proposition. To the extent that privacy violations are tied to safety, the automotive industry should represent something of an exception or require unique regulatory accommodations.

The implications are that companies working in the automotive space are entitled to some sort of special status – and I’d include in this equation mapping companies HERE, TomTom, and the likes of Mobileye, Google, Continental, Bosch, Harman, and others. At the same time, companies such as Apple, Mapbox, Waze, Facebook and others building their businesses off of crowdsourced smartphone data ought to merit extra scrutiny and, perhaps, more stringent regulation.

The New York Times’ “The Privacy Project” reveals the ways in which crowdsourced smartphone information can be used to manipulate and oppress entire populations or even individuals. Smartphone privacy violations are not occurring in the context of a life-saving value proposition. The value proposition is purely commercial and the individual user is the economic unit.

Auto makers will be increasingly violating the privacy of their consumers. It probably is time to give car owners the ability to manage and control their data sharing in a smartphone-like manner directly from the dashboard. And, soon, it will probably make sense to compensate drivers for sharing their information. But the focus for auto makers, first and foremost, ought to be safety – with an emphasis on the safe operation of the vehicle.