
Arm Reveals Custom Instructions, Mbed Partner Governance

by Bernard Murphy on 10-30-2019 at 6:00 am

Tipping the scale

At TechCon Arm announced two more advances against competitive threats, one arguably tactical and the other strategic, at least in this writer’s view. The tactical move was to add support for custom instructions: the ability to collapse multiple instructions into a single instruction through customer-added logic which hooks into the CPU pipeline. This supports customer differentiation in performance and low power consumption for IoT devices, for example for trig functions used in GPS location. Software developers will apparently access these new functions as compiler intrinsics.

Custom instruction capability has been around for a while, especially in DSP IPs (where I believe it is extensively used for vectorized operations) and, more recently and visibly, in the RISC-V architecture. Of course, in RISC-V you can do anything because it’s an open ISA, but I imagine you may need to curb your enthusiasm for over-exotic capabilities if you want to remain compliant with the ecosystem. Perhaps then the customization advantages over the Arm offering will not be so great.

Custom instruction support will initially be provided in Cortex-M33, available in the first half of 2020, and will be extended further in the Cortex-M family at a later date. The capability comes at no additional cost for new and existing licensees. Further ability to differentiate at no added cost would I imagine give pause to anyone thinking about a switch to a different architecture.

The more strategic move is in opening up the governance of the Mbed OS to silicon partners. The ecosystem has always been a powerful advantage for Arm and will (in my view) remain a major hurdle for any competing solutions. Arm has equally built a big ecosystem over the last 10 years around its open-source IoT operating system, Mbed OS (about half a million third-party software developers and 150 Mbed-enabled boards and modules so far).

So far that’s been under Arm’s direction. Apparently silicon partners have been asking for more insight and input into Mbed OS future directions. Arm proposed this new governance approach which was well received and is now implemented in a technical working group and a product working group. The product working group meets monthly to prioritize and vote on new capabilities. As one example they’re already working on new low-power battery optimizations based on contributions from partners. Analog Devices, Cypress, Maxim Integrated, Nuvoton, NXP, Renesas, Realtek, Samsung, Silicon Labs and u-blox, among others, are already active in the WG and any Mbed silicon program partner is welcome to join at no cost.

This is strategic because it will help further establish the ecosystem and the technical investment partners make in a solution. In turn they’ll become more and more unwilling to switch to another solution which may not provide all those nice features ready-made. That’s not to say that Arm (or the competition) couldn’t cross a line at some point where a switch would become more compelling. But so far at least, Arm seems to be making all the right moves to reinforce their position, conceding just enough in tactical areas while continuing to reinforce their strategic advantages. They continue to impress me as a thoughtful and well-managed company. They just keep adding more reasons to tip the scale in their direction.


“Connecting the Divide” at SEMICON Europa

by admin on 10-29-2019 at 2:00 pm

Connecting the Divide between Design and Manufacturing is an overarching theme within the ESD Alliance as these two essential semiconductor disciplines become more reliant on each other. It’s also the reason we’re hosting SMART Design, the first system-centric series showcasing advances in electronic system design, at SEMICON Europa, November 12 through November 15 in Munich, Germany.

SMART Design’s program, “Designing Electronic Systems for Future Applications,” includes presentations and a panel discussion underscoring how the increasing applications of advanced electronic system designs including automotive and medical pose new challenges that demand closer collaboration between design and manufacturing. Our goal is to create an opportunity for attendees to deepen their understanding of the links across Design and Manufacturing and throughout the supply chain. This will foster the collaborations essential to addressing technical challenges and ushering exciting new electronic products from concept to consumer.

The 2.5-hour program begins with Babak Taheri, Silvaco’s CEO and CTO, who will assess “Next Generation SoC Design: From Atoms to Systems.” “Near-Threshold Logic Benefits the Full Application Stack,” will be addressed by Lauri Koskinen, CTO of Minima Processor. Next up will be “Deep Learning for Electronics Manufacturing” by Javier Cabello, software and vision engineer at Mycronic AB.

“Cloud-Accelerated Innovation for Semiconductor Design and Verification,” a topic of interest to a wide audience, will be given by David Pellerin, head of worldwide business development, Hitech/Semiconductor for Amazon Web Services. Ian Campbell, OnScale’s CEO, follows with another talk on cloud-based Design titled, “Cloud Engineering Simulation: A Game Changer for Engineers.” The last presentation before a panel session is titled, “Addressing the ‘New-Space’ Paradigm Shift in Development and Production of High Reliability, Space Grade Semiconductor Components.” The presenter will be Christian Sayer, field applications engineer from Cobham Advanced Electronics Solutions.

Noted industry executive Jim Hogan, managing partner of Vista Ventures, will moderate “The Risk of Obsolete Design and Verification Environments in the RISC-V Era,” an ideal topic in the open-source era. Panelists include Gabriele Pulini, senior business development manager at Mentor, a Siemens Business; Silvaco’s Babak Taheri; Adnan Hamid, Breker Verification Systems’ CEO; Raik Brinkmann, president and CEO at OneSpin; and Paul Cunningham, corporate vice president and general manager of the System Verification Group from Cadence Design Systems, Inc.

SMART Design is scheduled for Thursday, November 14, from 2:30 p.m. until 5 p.m. in TechARENA 1, Hall B1. A networking hour hosted by the ESD Alliance and SEMI immediately follows.

Also debuting this year at SEMICON Europa is the SMART Transportation Forum led by SEMI’s Global Automotive Advisory Council (GAAC) with presentations from the Design, semiconductor equipment and materials suppliers and automotive OEM communities. The SMART Transportation Forum, “Connected-to-Everything Automated Mobility,” will be held Wednesday, November 13, from 9:30 a.m. until 3:30 p.m. in Room 14C at International Congress Center Munich.

As the SMART Design program shows, “Connecting Design and Manufacturing” is not only a catchphrase. While Design may be where electronics begins, it’s not the whole picture. Given the complexity of the systems being designed and manufactured, connecting Design and Manufacturing must be more than just talk. Connecting them will enable smarter, faster, more powerful, smaller, more reliable and more affordable electronic products produced by the $2-trillion global electronic product manufacturing and supply chain. This is a huge responsibility. Meeting it demands cooperation and collaboration across multiple disciplines including semiconductor Design, packaging, software development, materials and manufacturing, system integration and testing.

We look forward to seeing SemiWiki readers at SMART Design at SEMICON Europa as we extend design expertise in the worldwide electronics industry by “Connecting the Divide between Design and Manufacturing.”


Synopsys’ New Die-to-Die PHY IP – What It Means

by Randy Smith on 10-29-2019 at 10:00 am

This morning, Synopsys announced its new Die-to-Die PHY IP. This announcement is critically important as it addresses two major market drivers – the growing need for faster connectivity in the datacenter and similar markets, and a path to better exploit the latest processes by dealing with yield issues for larger dies in a different manner. Also, this seems to be just the first step in this area, and we will anxiously await further advances in die-to-die connectivity. I believe Synopsys is trying to take the lead here and potentially help drive for industry standards that do not yet exist. Please read the press release for the details. Below, I will focus on what solutions this announcement can enable for use by chip architects and designers.

I have written a few times in the past few months on SerDes and other high-speed connectivity paths in the datacenter. Given the seemingly ever-growing demands for cloud computing, whether in e-commerce, machine learning, AI, gaming – the list is growing daily – datacenter administrators are hungry to find ways to deliver high performance. This pursuit has seen many gains in PCIe (inside the chassis), computer-to-computer, rack-to-rack, and datacenter-wide areas. High-speed optical solutions now target links up to 10 km in length. But these solutions still come with latency and area penalties that make them prohibitive for a die-to-die solution.

By connecting multiple dies on a substrate in a point-to-point manner using the new Synopsys die-to-die PHY, which is available now, you can create a larger piece of functionality with less latency between the blocks. Admittedly, there has not been enough standardization for this type of solution. BGA-style connections within multi-chip modules (MCMs) are not new, but there has not been much standardization of the PHYs connecting them. Initially, this solution will only be available on a single 7nm FinFET process, so it doesn’t yet support a heterogeneous MCM solution. However, I expect that will certainly be coming soon. For now, this advancement alone is impressive.

As you make larger and larger semiconductor dies on the latest manufacturing processes, yield usually drops dramatically, which significantly increases cost. If you take that same design and split it into multiple smaller dies in the same process, you can see a huge saving in cost just from the improved yield. To achieve lower overall cost, the savings from improved yield need only exceed the additional cost of the substrate used to connect the dies. If the original die was already going to sit on a substrate, then this is an easy decision. If not, it is still an option worth exploring, as it may very well be less expensive to produce.

Another interesting consideration is what this technology can enable in conjunction with other technologies. For example, I can envision a design where multiple chiplets are placed in a row on a substrate forming a datapath (e.g., data flowing left to right, from die to die). If you need chunks of nearby memory, you have a choice: place it north or south of the datapath elements on the substrate, or perhaps place the memory on top of the datapath element using another substrate. In other words, can you have BGA connections above and below the die? It is an interesting thought. Of course, that may also bring up thermal and other EM considerations. The use of stacked die is not a new thought. So how far can we take this new development from Synopsys? My imagination is starting to run wild.

“1.8 terabit-per-second per millimeter unidirectional bandwidth for high throughput die-to-die connectivity.”

“One picojoule per bit (pJ/bit) for ultra-low-power.”

What will designers do with that? We should all be excited to find out.



New ARC VPX DSP IP provides parallel processing punch

by Tom Simon on 10-29-2019 at 6:00 am

The transition to the digital age from a mostly analog world really began with the invention of A-to-D and D-to-A converters. However, scalar processors can easily be overwhelmed by the copious data produced by something as simple as an audio stream. To solve this problem, and to really jumpstart the digital age, the development of the digital signal processor (DSP) catalyzed the sweeping changes we are still witnessing today. If you are old enough, you probably remember when early DSPs were added to audio systems to enhance sound. Because of their immense usefulness, the applications of DSPs have expanded to include an ever-growing list of domains. However, it is a safe generalization to say that they’re most useful in helping computing systems deal with the external world.

With applications as diverse as RF signal processing, automotive sensor RADAR, LIDAR, sensor fusion, vision, and in some cases machine learning, DSPs have needed to support a wider range of operations and increasing parallelism. To help SoC designers meet these challenges Synopsys has just introduced the impressive ARC VPX family of DSP Processor IP. The two new entrants in this family are the VPX5 and a Functional Safety version called the VPX5FS. So, what makes these DSPs different?

The answer, in a word, is parallelism at every level. Each VPX core offers 4 VLIW execution slots. VLIW introduces parallelism through the ability to encode multiple instructions for parallel execution in the same processor cycle. VLIW can be tricky to program because intermediate results in long expressions need to be cascaded through multiple VLIW operations. However, Synopsys has announced ARC MetaWare Tools that hide the mechanics of VLIW operation from developers, so they can write C++ code as usual and reap the benefits of VLIW acceleration.

The next level of parallelism for the VPX DSP IP cores is support for SIMD. SIMD lets one instruction operate on many (up to 512 for VPX5) data items. Three of the VLIW slots support SIMD, providing massive acceleration. Once again, Synopsys has made sure that the ARC MetaWare Tools help software developers easily take advantage of SIMD with minimal effort.

On top of this, VPX can scale up to 4 cores, adding a third layer of parallelism in VPX-based SoCs. This ‘three-dimensional’ parallelism gives VPX-based SoCs the ability to tackle a range of problems far beyond the now seemingly quaint uses for the first generations of DSPs.

A trend in machine learning, where parallel processing pays off, is the move toward 8- or 16-bit integer operations to speed ML recognition algorithms. However, the real world is a messy place filled with data that can only be characterized by floating-point values. Before an ML algorithm can be applied, high-quality sensor data is needed, often in the form of sensor fusion output, to properly characterize what is happening in the real world. Floating-point data offers high dynamic range and higher accuracy than integer values. The VPX cores support floating-point operations in their VLIW slots. To further help with processing, VPX offers a linear algebra math unit that can perform sine, cosine, arctan, sqrt, log, exponent and other operations. Of course, there is also a VLIW slot that can use SIMD on 8-, 16- and 32-bit integer data.

In their announcement Synopsys highlights several key areas where they see the VPX DSPs playing an important role. They are LIDAR, RADAR, Sensor Fusion and 5G communications. The low power and configurability of the VPX cores mean that they can be applied where needed to help process enormous amounts of data. A result of having higher performance in the system is a potential reduction of sensors and sensor complexity. For 5G, where signal processing becomes more important because of channel complexity, additional processing power can help improve data rates and reduce power. The full announcement with more details on the VPX5 and VPX5FS can be found on the Synopsys website.


Cadence Shows off 5LPE Hercules Implementation

by Randy Smith on 10-28-2019 at 10:00 am

In a joint presentation given by Samsung, Arm, and Cadence at the Arm TechCon event on October 9, 2019, Cadence showed some results and explained its collaboration project used to implement the new Arm Hercules CPU on Samsung’s advanced 5LPE process. I do not want to minimize the significance of Samsung’s and Arm’s participation in this process. Samsung’s work in properly characterizing this cutting-edge process and Arm’s multiple contributions to the project, including its POP IP, which greatly aids in the implementation of Arm processors, were critical to the success of the project. But, if you have already selected the process and core, then it is tools that are the variable. Therefore, this blog will focus on Cadence’s efforts at implementing the Hercules core in this process.

The presenters at the event were Kevin K. Yee (Samsung), Fakhruddin Ali Bohra (Arm), and Edson Gomersall (Cadence). At the beginning of Edson’s portion of the talk, Edson readily acknowledged the critical importance of the contributions of his other partners in the projects. In particular, he stressed the importance of the tuning between the Arm IP and the Cadence tools, as seen in the diagram below.

The support Cadence received from Samsung was also important, as there is typically a lot of tool tuning to be done for a new manufacturing process. This pioneering project enables better “out of the box” results for mutual customers looking to implement this core on this process for their designs. ‘Global route tuning’ and ‘layer promotion adherence’ were two areas Cadence focused on in the discussion.

As I mentioned in the initial paragraph, the contributions of all three members of the project were critical to the success of the project. And I believe the project has paved the way for customers of these companies to implement this core on the process successfully. Nevertheless, designers will want to know which EDA toolset is best to work with to implement this core on this process. I think it is too soon, and not enough information is being published yet, to give a definitive answer – but we can see Cadence’s strengths.

Design optimization has been part of Cadence’s DNA since its very first acquisition, Tangent Systems, in 1989 (I was a co-founder of Tangent). Up until the time of Tangent’s first product release, TANCELL, all placement and routing implementations focused only on minimizing area while completing all the connections without design rule violations. In other words, before TANCELL, the only thing being optimized was area. TANCELL was the first commercial timing-driven layout tool. It would analyze the timing of a design (using static timing analysis) to prioritize the wire length of timing-critical nets. This optimization was considered in every design step – global placement, global routing, detailed placement, and detailed routing. It was a crucial initial step, and it put Cadence miles ahead of the competition and developed an engineering culture of co-optimization.

Physical design tools have become much more sophisticated now. From an optimization perspective, they minimally need to optimize for Power, Performance, and Area (PPA). In reality, it is now far more complicated than that. ‘Power’ does not only mean power consumption. IR drop is an important consideration. Wire length is far from the only goal in routing as now crosstalk considerations must be addressed. There are many more concerns beyond this, and Cadence is addressing them all.

There are two principal techniques to optimize a layout result – concurrent analysis and design optimization. Design optimization is the older technique, though it still has its place. In design optimization, you analyze a completed design, target areas to improve, try to improve them, then reanalyze the design. This loop continues until you have an acceptable result. This technique usually works, but not always. A more effective approach is concurrently analyzing the design as you are implementing it. To make this work, you should analyze it with the actual signoff tool, not some simplified model. Cadence can do this as it has the signoff tools.

Cadence, Samsung, and Arm seem to have worked together quite well on this project. It will be interesting to hear more when benchmark results start to become available. Learn more about Innovus here.


DAC 2020 – Call for Contributions

by Daniel Payne on 10-28-2019 at 6:00 am

57DAC in SFO

My first DAC was in 1987, so I’ve seen our industry expand greatly over the years, and I expect #57DAC on July 19-23, 2020 in SFO to be another exciting event for semiconductor professionals from around the globe. What makes DAC so compelling for me are the people, exhibitors, panel discussions, technical presentations and industry buzz that you just cannot glean from a blog or glossy brochure.

Let me just summarize all of the contribution deadlines to get you thinking about what you could share with the rest of us:

The topic domains for 2020 at DAC include the following seven areas:

  • Design
  • EDA
  • Embedded Systems & Software
  • Machine Learning, AI
  • Security
  • Autonomous Systems
  • Semiconductor IP

You can even submit a panel proposal in one of four categories:

  • Research
  • Designer Track
  • IP Track
  • Embedded Systems

I hosted a panel discussion on SPICE circuit simulators a few years back, and it was a learning experience to invite four panelists from CAD and design backgrounds to answer my questions and even audience questions about the state of the art.

To teach other engineers something new in about 1.5 to 3 hours, consider a tutorial proposal. These tutorials are delivered on Monday, July 20th. For a topic longer than a tutorial, consider putting on a workshop of up to 9 hours.

Finally, if you have a topic for a large audience that has both technical and business impact, consider submitting a DAC Pavilion proposal. These are either Panels or SKYtalk formats.

I look forward to seeing my friends, co-workers, EDA vendors, IP companies, foundries, plus industry movers and shakers in San Francisco for the 57th DAC. Submitting a proposal for DAC will certainly raise your personal and corporate profile and lead to advancement in our high-tech industry, so go for it and beat the deadline.

About DAC
The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, and for electronic design automation (EDA) and silicon solutions. A diverse worldwide community representing more than 1,000 organizations attends each year, represented by system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives to researchers and academicians from leading universities. Close to 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, methodologies and technologies. A highlight of DAC is its exhibition and suite area with approximately 200 of the leading and emerging EDA, silicon, intellectual property (IP), embedded systems and design services providers. The conference is sponsored by the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), and is supported by ACM’s Special Interest Group on Design Automation (ACM SIGDA).


GM is Burning

by Roger C. Lanctot on 10-27-2019 at 11:00 am

What is it about General Motors? The once largest maker of cars in the world (now sixth or seventh) has been in an all-fronts retreat for years – while Wall Street analysts and top GM brass whistle past the graveyard touting gains in the company’s stock price and profitability.

GM has exited key markets including Europe, India and South Korea, is winding down its passenger car business, and is even paring back startup operations such as Maven car sharing (eight cities shut down). Meanwhile GM’s Cruise Automation operation posts quarterly losses in excess of $250M.

With the United Auto Workers’ strike against GM entering its fifth week, Wall Street analysts express their “comfort” with an extended strike as long as “the ends justify the means,” in the words of one. We are all familiar with the technological, regional, and economic whipsaw facing GM – and other car companies – as a) production shifts to lower-cost countries, b) electrification threatens internal combustion vehicle demand, and c) software developers are prioritized over factory workers.

But GM is an extreme case. At each turn, GM is rewarded for shrinking and pulling back, with the apparent goal of rewarding stockholders. What has been lost is a sense of mission. What is GM’s purpose, at the end of the day?

The UAW strike now darkening GM’s doorstep and rippling across the entire industry and supply chain is a clearly calculated measure representing merely the latest step in GM’s extended shrinkage program. It’s notable that at a time when GM is announcing ongoing plant closings, a sticking point in the negotiations with the UAW is the size of plant investments – in addition to compensation and health care costs.

Since the arrival of Mary Barra as GM’s CEO “down” has become “up” for GM. The worse the news is, the better the stock performs. One can imagine senior GM executives glancing out of their office windows in the Renaissance Center in downtown Detroit – gazing off in the direction of idled factories and musing: “Do your worst, UAW!”

GM may be overlooking what it is truly up against in the global auto market of 2019. The company is up against a determined foe in the form of one Tesla Motors.

Tesla may currently be putting its biggest visible market share hurt on luxury auto makers from Germany, but the bigger hurt is coming from a talent bleed. For new college grads looking for inspiring automotive opportunities, Tesla offers a green, carbon-free vision of automobile ownership and self-driving technology. And for its factory workers (who have had their share of disputes and opposition to management) the company offers growth and potential prosperity.

GM is doing its best to cull its share of new engineering grads with Cruise Automation. But GM can’t offer its line workers much of a vision of the future. The UAW strike is clearly a defensive rearguard action.

So the incredible shrinking GM is being sliced both ways. The company is trying to pivot to electrified vehicles – which will require system-wide factory shifts and further closures for sure – while preserving its purportedly profitable (in spite of massive incentives) SUV/truck/crossover business and exiting passenger vehicles. It must add expensive software talent, while tamping down compensation expectations along the production line.

The company must fund the future – electrified, self-driving vehicles – from the still-profitable husk of the internal combustion past with a restive, tortured workforce well aware of its future marginalization. In essence, the UAW is holding GM’s future hostage as a last resort as the company’s remaining production seeps away to Mexico, China, and other distant shores.

At five weeks, it appears that the stakes couldn’t be higher and GM’s resolve more determined. For all its alleged corruption and political vulnerability (UAW workers are estimated to be making $13 more per hour than workers at non-union U.S. plants – i.e. quit your complaining!), the UAW is likely responding to the awkward and contradictory picture GM brass has painted – pleading poverty after posting an $8B profit to the delight of Wall Street.

It’s hardly a surprise that the unions want a piece of that action and some commitment to their long-term needs – as they watch the incredible shrinking GM’s market share evaporate and plants close. In the end, it only seems fair. And in reality the strike appears to be a calculated risk that GM has chosen to take on – not the unions. There are no surprises in GM factories. The only surprise is how little anyone – not GM management, not the investors, nor the workers – seems to care. That’s the worst news of all out of the UAW strike. Does anyone – other than stressed out suppliers – care what happens to GM? Is GM, as the commentary on its Website suggests, just building memories?


Let’s Pass the Hot Cars Act of 2019

by Roger C. Lanctot on 10-27-2019 at 10:00 am

It’s happened again. The 42nd fatality of 2019 in the U.S. from a child being left behind in a hot car has occurred – this time, in New Mexico. While horrific and staggering, the total number of fatalities due to children being left in overheated cars for 2019 is still less than the 54 fatalities suffered in 2018.

According to Kidsandcars.org, a child dies after being left behind in a car every nine days in the U.S. The situation is reminiscent of 2007 when the U.S. Congress passed the Cameron Gulbransen Kids Transportation Safety Act, requiring the National Highway Traffic Safety Administration to set rear visibility standards by 2011. At the time, 200 people were killed and 14,000 injured annually in backover incidents.

As of 2018, and after extensive testing and research, all cars sold in the U.S. were required to come with a backup camera system. In the same spirit, Kidsandcars.org has been promoting legislation – widely supported by dozens of safety advocates and organizations – to require a rearseat passenger detection system.

The core of the proposed legislation states: “Not later than 2 years after the date of the enactment of the Hot Cars Act of 2019, the Secretary (of Transportation) shall issue a final rule requiring all new passenger motor vehicles with a gross vehicle weight of 10,000 pounds or less to be equipped with a system to detect the presence of an occupant in a rear designated seating position after the vehicle engine or motor is deactivated and engage a warning. In developing the rule required under this subsection, the Secretary shall consider requiring systems that also detect the presence of any occupant unable to independently exit the vehicle as well as detect the presence of a child who has entered an unoccupied vehicle independently.”

The act would have the added benefit of protecting children and disabled adults as well as pets, which also suffer when forgotten in overheated vehicles.

Safety advocates can be forgiven for being disappointed in the news that arrived last week of a voluntary agreement between safety regulators and the auto industry for the introduction of visual and audible rearseat reminder alerts – after the vehicle is turned off. The agreement provided for fitment of such an alert in all new cars sold in the U.S. beginning in the 2025 model year.

This voluntary agreement is largely a reflection of the need to short-circuit the normally broken and bureaucratic NHTSA regulatory process, which can extend for years – 11, to be exact, in the case of the backup camera mandate. It’s a nice good-faith effort, but it shows the U.S. auto industry once again out of step with the rest of the global industry.

The Federal government in the U.S. is currently seeking to undermine emissions and fuel efficiency standards, while governments elsewhere in the world are setting deadlines for the end of the sale and use of internal combustion engine driven vehicles within decades. European regulators are requiring driver monitoring systems, while U.S. regulators are nudging the industry toward the introduction of rearseat alerts.

The voluntary approach to industry-wide safety system adoption in the U.S. was first instituted three years ago for automatic emergency braking (AEB). In 2016, 20 auto makers agreed with NHTSA and the Insurance Institute for Highway Safety to install AEB on “nearly all US vehicles” by 2022.  Now, 20 auto makers have agreed to do the same for Child Presence Detection by 2025.

Had NHTSA pursued a mandate for AEB, the process would still be underway, mired in heated debates over competing technical solutions. Voluntary adoption of AEB has saved time and money and, presumably and eventually, lives.

Meanwhile, Europe’s NCAP (New Car Assessment Program) is requiring the introduction of driver monitoring systems (to mitigate fatalities and injuries resulting from drowsy or inattentive drivers) by 2022, mandating the technology for all vehicles by 2024. Starting next year, Euro NCAP’s coveted and coercive five-star safety rating will only be available to cars equipped with driver monitoring systems. Of course, the precise nature of these systems is yet to be determined.

Critics of the voluntary implementation of rear-seat passenger detection technology in the U.S. suggest that the proposed solution is insufficient to correct the problem. Because the program is voluntary, different auto makers are likely to take different approaches with different levels of efficacy.

The bigger issue, of course, is the struggle that NHTSA faces in pushing more active safety systems, such as automatic emergency braking, to mitigate the escalation in highway fatalities. The challenge of ending the scourge of heat-stroke deaths among children left behind in cars may provide sufficient motivation to call for camera-based in-vehicle monitoring systems capable not only of detecting children left behind in rear seats, but also of minding an inattentive driver.

Approximately 4,000 fatalities and 400,000 injuries are attributed to driver distraction every year in the U.S. Camera-based driver monitoring systems will be capable of simultaneously detecting children, pets, and disabled adults left behind in rear seats while also monitoring driver behavior.

A little camera and a little code could go a long way toward saving lives and making driving safer for all. General Motors is in the vanguard of introducing camera-based driver monitoring technology with its Super Cruise enhanced cruise control system on select Cadillacs. Super Cruise allows Cadillac drivers to take their hands off the steering wheel while driving as long as the car is on a Super Cruise-compatible highway and the driver is paying attention to the road.

In essence, it may be time to mainstream driver monitoring systems. The only remaining question is whether it will take a mandate, or just good sense.


TSMC Update Q3 2019 Absolutely!

by Daniel Nenni on 10-25-2019 at 6:00 am

This will be a combination of the recent TSMC quarterly report, a look back at Cliff Hou’s keynote at the most recent TSMC conference, and conversations on SemiWiki.com. There has been a lot of press on this but of course the most important points are being missed. Semiconductors are complicated and getting more so, absolutely.

The big news out of the conference call was the increase in TSMC CapEx from $12.5B to an even $15B, a level which will be repeated in 2020 and grow in 2021 due to increasing demand. Remember, TSMC closely partners with customers and builds capacity based on demand (wafer agreements), not imagined demand like the IDMs (Intel and Samsung). On the technology side, let’s look at the opening statement from the transcript:

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: Now I will talk about our N5 and N3 status. Our N5 technology has already entered risk production with good yield. The N5 will adopt the EUV extensively and is well on track for volume production in the first half of next year. With 80%, 8-0, logic density gain and about a 20% speed gain compared with the 7-nanometer, our N5 technology is true full node stride from our N7. We believe it will be the foundry industry’s most advanced solution with the best density, performance and power until our 3-nanometer arrives. With N5, we are further expanding our customer product portfolio and increasing our addressable market. The initial ramp will be driven by both mobile and HPC applications. We are confident that 5-nanometer will have a strong ramp and be a large and long-lasting node for TSMC.

Daniel Nenni – Founder – SemiWiki.com LLC: 5N may be considered a full node from 7N but not from 6N (which has an 18% density advantage over N7). In my opinion the 6nm node will be a VERY long-lasting node, and while 6N revenues will be lumped in with 7N and 7N+, 6N revenue will rule them all.

5N and 3N will also share the same fabs, as did 10N and 7N, which will again speed the HVM ramp and reduce development costs. It is the TSMC recipe for foundry success, absolutely.

According to a conversation on SemiWiki, N5 is said to be 30nm M2P and 50nm CPP with 6 tracks, for roughly 173 MTx/mm2. That works out to ~1.8x denser than N7, which matches what TSMC has said; Scott Jones is pretty confident in the ~30nm M2P and 50nm CPP figures, which are what is needed to get the 1.8x density improvement TSMC has discussed.
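
The ~1.8x claim can be sanity-checked with simple arithmetic. A minimal sketch, assuming the widely circulated ~96.5 MTx/mm2 external estimate for N7 high-density cells (an analyst estimate, not an official TSMC figure):

```python
# Back-of-the-envelope check of the ~1.8x N5-vs-N7 density claim.
# The N7 figure is a commonly cited external estimate, not a TSMC
# number -- treat it as an assumption.
n5_density = 173.0   # MTx/mm^2, from the SemiWiki discussion above
n7_density = 96.5    # MTx/mm^2, estimated for N7 high-density cells

ratio = n5_density / n7_density
print(f"N5 vs N7 density: {ratio:.2f}x")  # ~1.79x, consistent with ~1.8x
```

The two figures line up with TSMC's stated full-node improvement, which is why the 30nm M2P / 50nm CPP guesses look credible.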

N5P was not mentioned, but from what was discussed on SemiWiki, N5P uses the same design rules with more strain – a performance enhancement. Apple requires a new process every year, so this is it. N5P will be out in 2021 for the Apple iProduct refresh. I would expect more optimizations to be announced next year, so you may see a density improvement based on better EUV or something like that.

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: Our N7 process is the industry’s first commercially available EUV lithography technology. N7+ provide 15% to 20% higher density with improved power consumption when compared to N7. That is already in high volume production with yield similar to N7. We expect the strong demand for N7+ continue into next year and are increasing Capex to meet this demand for multiple customers.

Now N6. Our N6 provide a clear migration path for the second-wave N7 product as its design rules are 100% compatible with N7 while providing 18% logic density gain with performance-to-cost advantage. The N6 uses one more EUV layer than N7+. N6 risk production is scheduled to begin in first quarter next year with volume production starting before the end of 2020. We reaffirm 7-nanometer will contribute more than 25% of our wafer revenue in 2019 and we expect even higher percentage in 2020 due to worldwide development of 5G, accelerated demand from HPC, mobile and other application continue to grow.

Daniel Nenni – Founder – SemiWiki.com LLC: Remember, 7N/6N is 28N déjà vu all over again so there will be plenty of 6N capacity moving forward once Apple and the other mobile giants move to 5N. The big difference between 6N and 28N is that there will be no cheap knock-off processes from UMC and SMIC, not even close.

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: Now I will talk about the N3. We are working with customers on N3 and the technology development progress is going well. Our N3 will be another full node from our N5 with PPA gain similar to the gain from N7 to N5. We expect our 3-nanometer technology will be the most advanced foundry technology in both PPA and transistor technology when it is introduced.

Daniel Nenni – Founder – SemiWiki.com LLC: TSMC N3 will again use FinFETs, unlike Samsung, which will use GAA – and that highlights the real difference between the two: TSMC is focused on manufacturability versus bleeding-edge technology. TSMC does not really have a choice here since the mobile giants (Apple, Huawei, etc.) are pushing for a new process node every year that can yield at a very high rate right out of the box. GAA will be 2nm for TSMC.

Packaging was also a focus of the call. We covered packaging and design enablement here:

A Future Vision for 3D Heterogeneous Packaging

A Review of TSMC’s OIP Ecosystem

Now for the relevant Q&A:

Gokul Hariharan – JP Morgan Chase & Co, Research Division – Head of Taiwan Equity Research and Senior Tech Analyst: So first of all, if we look at the history, whenever TSMC has had a step-up in CapEx, that is typically accompanied by a step-up in growth as well. So just wanted to kind of narrow down a little bit on the 5% to 10% growth, which is still kind of — the kind of growth that we were expecting when we were spending TWD 10 billion to TWD 11 billion. So could you give a little bit more details or maybe narrow down the forecast a little bit more for us? Because if we say a TWD 14 billion to TWD 15 billion range of CapEx, that’s closer to the high 30s or 40% capital intensity, higher than our previous range.

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: Gokul, let me answer the question carefully. Let’s say that TSMC always build capacity, working closely with customer and to meet their demand. That’s our number one, okay? We discuss with the customer on their demand, we make our judgment also. Now we are increasing the CapEx quite a lot, no doubt about it. But then, that’s due to some of the reasons I can foresee for the future. First, the 5G’s ramp-up is much faster than 4G as we expected. Second, TSMC actually is expanding our customer portfolio, and in the same times, we’re also expanding our product portfolio. And so put all the factors together, we have a good reason that we increase our CapEx this year and probably next year.

Daniel Nenni – Founder – SemiWiki.com LLC: I was hoping packaging would come up in the Q&A. From the very beginning TSMC’s packaging efforts were seen as a low-margin business, but it is also a VERY sticky business, much stickier than the wafer business. The mobile giants depend on packaging and they now “depend” on TSMC for packaging, absolutely.

Charlie Chan – Morgan Stanley, Research Division – Technology Analyst: Okay. And my next question is about the advanced packaging. I remember in the previous quarters, you commented advanced packaging should outgrow the front-end business. So first of all, is this remains the same trend? And also how about the potential margin dilution from the packaging business?

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: The forecast on the advanced packaging business, the growth is — the growth rate is still faster than silicon growth rate. The wafer’s revenues growth rate stays the same, okay? Still that statement is still valid. The gross margin, that’s another consideration. The gross margin of the back-end business actually is lower today, still lower than the wafer margin. But we look at it whether it’s a good business to go or not on 2 factors. One, we really want to support our customer to improve their system performance. So we have to do it because of TSMC is the only one company right now who can support customers’ advanced packaging. Second actually is the CapEx intensity on the back end, and that’s advanced packaging business, is smaller. And so the asset turnover is better. So put all in all together, we still think it’s a very good business to pursue.

Daniel Nenni – Founder – SemiWiki.com LLC: As expected, the China question. TSMC’s China strategy started with Morris Chang many years ago and is nothing short of brilliant. Morris may or may not have seen the US-China trade rift coming but he positioned TSMC perfectly. TSMC has more than 400 active customers and more than 100 of those are now in China. In Q3 2019 China accounted for 20% of TSMC revenue, up from 15% last year. North America is down 2 points to 60%, EMEA down 1 point to 6%, Asia Pacific down 1 point to 9%, and Japan down 1 point to 5%. I expect this trend to accelerate as the US and China continue to play economic politics.
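
As a quick cross-check, the regional shares quoted above should account for all of TSMC's revenue:

```python
# Sanity check that the Q3 2019 regional revenue shares quoted
# above sum to 100% (point changes shown in the comments are
# quarter-over-quarter, except China, which is vs. a year ago).
shares = {
    "North America": 60,  # down 2 points
    "China": 20,          # up from 15% a year earlier
    "Asia Pacific": 9,    # down 1 point
    "EMEA": 6,            # down 1 point
    "Japan": 5,           # down 1 point
}
print(sum(shares.values()))  # 100
```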

Brett Simpson – Arete Research Services LLP – Senior Analyst: I had a question really on China. I guess in the last couple of years, we’ve seen the business double with Chinese customers. I guess at the moment, it’s pretty clear you’re going through a very healthy inflection point in the Chinese customers at the moment. So can you talk about how you see this part of the business evolving over the next 1 or 2 years? And then I guess from a planning perspective, are you concerned that the rise of your China business comes at the sacrifice of other customers, particularly U.S. companies?

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: Well, we did see the strong course from China because that’s a very big market, especially in the semiconductor area. And we are happy to see that growth, and TSMC is offering the most leading-edge technology to support our customer in China. And so to be exact, we are going to grow with the China market. At the expense of other customer, the answer is no because we support all the customer with all our strength and our capacity.

Daniel Nenni – Founder – SemiWiki.com LLC: Interesting EUV question. I don’t remember TSMC publicly saying they made their own pellicle but of course we all knew. Yet another TSMC differentiation.

Roland Shu – Citigroup Inc, Research Division – Director and Head of Regional Semiconductor Research: Okay. And the second question is you announced that your EUV tools have been reached potentially maturity, but how about for the infrastructure? It means that for other component like photoresist, pellicle, photomask or even for this inspection tools, chemical and materials. So yes, we have — going to have a very fast ramp on 5-nanometer because of very strong demand from a customer, but are there any gating items for this EUV infrastructure will be potentially a risk?

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: So far we do not see any gating item. All the infrastructure, actually TSMC, we are prepared. We have a — we produce our own pellicle. We have a large number of masking capacity and everything. So even photoresist, those kind of thing, we have been taking into account. So we are ready for the — actually, we are in a high-volume production for the EUV lithography technology. For next year, you have big — even higher volume, and I can assure you that we are all prepared.

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: Okay. In TSMC, EUV lithography technology is now in the production stage. But are we happy with that? Not yet. We are still improving availability. We have output power of 250 watts, as we expected. Now we can operate the tool with 250 watts consistently. However, there’s still something that we need to improve so that we can improve the throughput, we can improve the availability so you can reduce the cost, continue to improve.

Daniel Nenni – Founder – SemiWiki.com LLC: And I’ll finish this blog with the lighter side of the Q&A. Remember this is live in front of an audience in Taipei and C.C. says this stuff with a straight face. It really is fun to watch:

Roland Shu – Citigroup Inc, Research Division – Director and Head of Regional Semiconductor Research: Okay. I think just a follow-up for — I know you don’t comment on the ASP, but for the same amount of the wafer shipment on N7+, is this going to contribute more revenue upside to TSMC?

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: You just mentioned we don’t.

Roland Shu – Citigroup Inc, Research Division – Director and Head of Regional Semiconductor Research: No, I talk about revenue. I don’t talk about ASP.

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: That’s the same thing.

Bruce Lu – Goldman Sachs Group Inc., Research Division – Research Analyst: That’s why I wanted you to give us some hint, right? We cannot just tell my investors that we have to trust TSMC. Even though I say that all the time but…

C.C. Wei – Taiwan Semiconductor Manufacturing Company Limited – Vice Chairman & CEO: You can trust TSMC. No doubt about it.

Daniel Nenni – Founder – SemiWiki.com LLC: Absolutely!

Reference: https://www.tsmc.com/uploadfile/ir/quarterly/2019/3d36C/E/TSMC%203Q19%20transcript.pdf


IP-XACT helps you produce exactly what you need in SoC deliverables

by Tom Simon on 10-24-2019 at 10:00 am

If you have ever watched an experienced glass blower, your first thought is that they make it look so easy. I have had the opportunity to blow glass, and I can tell you that it is a constant struggle against temperature, time and muscle to get the glass to do anything like what you want. This is akin to what is required to take the elements of an IC design and provide each of the deliverables to various stakeholders. When it is done right, it can look effortless and straightforward; when done wrong, it leads to chaos and confusion. In glass blowing there is no substitute for practice and experience – I know this firsthand. Fortunately for IC design teams, there are tools, like Magillem’s IP-XACT solutions, that can make the process relatively straightforward.

Magillem offers a suite of IP-XACT tools, based on the IEEE 1685 standard, that help create deliverables for the various consumers of the design data for an SoC. Who are these consumers? They might be internal teams that work on the design at various stages, such as simulation and verification teams. Alternatively, they might be other groups within a company that are using the design for their own projects. Lastly, they can be external consumers of the design in the form of hard or soft IP. Hard IP can even be targeted for delivery to specific technologies.

Design teams need to hand off simulation models, RTL (clear and encrypted), netlists (hierarchical or flat) and even hard macros. The data provided might cover only a subunit of the hierarchy, or the entire design. IP-XACT makes it easy to associate specific file sets with different views of the design, and those files can then be turned into deliverables. IP-XACT leaves all design data in its native format, which means it does not interfere with design tools in any way.
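
To make the view-to-file-set association concrete, here is a minimal sketch of how a script might walk an IP-XACT component to emit a per-view file list. The embedded XML is a simplified, hypothetical IEEE 1685-2009 (spirit) fragment; real component descriptions carry many more required elements, and this is not Magillem's tooling, just an illustration of the standard's structure:

```python
# Sketch: resolve an IP-XACT view's fileSetRef to its file list.
# The XML is a simplified, hand-written 1685-2009 fragment for
# illustration only -- real descriptions are far more complete.
import xml.etree.ElementTree as ET

NS = {"spirit": "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1685-2009"}

COMPONENT = """
<spirit:component xmlns:spirit="http://www.spiritconsortium.org/XMLSchema/SPIRIT/1685-2009">
  <spirit:model>
    <spirit:views>
      <spirit:view>
        <spirit:name>rtl_sim</spirit:name>
        <spirit:fileSetRef><spirit:localName>rtl_files</spirit:localName></spirit:fileSetRef>
      </spirit:view>
    </spirit:views>
  </spirit:model>
  <spirit:fileSets>
    <spirit:fileSet>
      <spirit:name>rtl_files</spirit:name>
      <spirit:file>
        <spirit:name>hdl/top.v</spirit:name>
        <spirit:fileType>verilogSource</spirit:fileType>
      </spirit:file>
    </spirit:fileSet>
  </spirit:fileSets>
</spirit:component>
"""

root = ET.fromstring(COMPONENT)

# Index file sets by name, then resolve each view's fileSetRef.
file_sets = {
    fs.find("spirit:name", NS).text: [
        f.find("spirit:name", NS).text for f in fs.findall("spirit:file", NS)
    ]
    for fs in root.findall("spirit:fileSets/spirit:fileSet", NS)
}
for view in root.findall("spirit:model/spirit:views/spirit:view", NS):
    ref = view.find("spirit:fileSetRef/spirit:localName", NS).text
    print(view.find("spirit:name", NS).text, "->", file_sets[ref])
```

Because the mapping lives in metadata rather than in scripts, the same component description can drive a simulation hand-off, a synthesis hand-off, or an IP delivery without touching the design files themselves.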

Magillem’s IP-XACT tools allow for centralization of all the design files and related information. Production of each deliverable can be automated to provide precisely what is required for each of the deliverables. They also provide checking mechanisms that help assure the deliverables meet quality standards.

IP-XACT can also help organize and release collateral information such as test benches, documentation and verification-related files. Magillem will be presenting shortly at DVCon Europe on how IP-XACT can help with highly configurable IPs. The presentation will also delve into how IP-XACT can help with restructuring IP to improve implementation, and how IP-XACT can help address the needs of ISO 26262 in the design and delivery process.

When I was blowing glass, I remember looking into the blazing hot furnace holding the molten glass and envisioning the next piece I would make. My ability to control the glass was the make-or-break factor determining my success. It is not unlike having SoC design data and needing to pull out the relevant parts for each deliverable. The skill and precision applied to the task determines the end result. Magillem IP-XACT tools can play a crucial role in effectively developing and then utilizing the design data for an SoC. Further information about Magillem’s comprehensive IP-XACT based solutions can be found on their website.