
Semiconductors, the E-waste Problem, and a Pathway for a Solution

by rahulrazdan on 04-14-2021 at 10:00 am


Semiconductors have been central to the information revolution that is reshaping society. The modern world would not exist without this critical resource. Further, semiconductors are central to many sustainability solutions, such as the enablement of smart infrastructure, electrification, and virtualization. One consequence of this tremendous growth is the ever-increasing generation of e-waste. How big is this problem? Is there a way to solve it? Let's examine the situation.

What is the nature of the e-waste problem?

E-waste dump in Agbogbloshie, Ghana

E-waste is the result of electronic products that have outlived their usefulness. Waste electronics contain several toxic additives or hazardous substances such as mercury, brominated flame retardants (BFR), chlorofluorocarbons (CFCs), or hydrochlorofluorocarbons (HCFCs). Without safe disposal, this waste stream poses significant risks to the environment and to human health. In fact, the toxic nature of e-waste has caused countries such as China to stop accepting e-waste shipments from the West, and prompted this Atlantic article to ponder “Is This the End of Recycling?” Other destinations for e-waste disposal have included third-world countries such as Ghana (Video Story) and Pakistan (Video Story). The e-waste problem is increasingly visible in the popular consciousness. Recent articles include:

  1. The World Has an E-Waste Problem (Time)
  2. We dissected nearly 100 devices to study e-waste. What we found is alarming (Fast Company)
  3. News by Numbers: India’s mountain of e-waste (Forbes)

How did we get here?

Of course, the starting point is the explosion of consumer devices that are reshaping society. Consumer devices are of particular concern because of both their high volume and their short lifetime of use. After the end of their useful life, electronic products turn into e-waste and follow a backwards deconstruction flow, as shown in the figure below. The fundamentals of the flow are to use a managed process to salvage value when economically appropriate; if value cannot be found, the product ends up in a landfill. The major steps are:

  1. Product Collection: A percentage of e-waste (only 35% in the US) is collected through “official” e-waste channels, where it goes through a reclamation process. The rest ends up in landfills.
  2. Disassembly: Products are disassembled into major components if the cost of disassembly is viable; otherwise the product is shredded. As an example, Apple has built robots such as Daisy, which employ mass-volume techniques to reduce costs dramatically.
  3. Chip Reuse: Valuable chips can be extracted and reused based on the economics of doing so (extraction, secondary markets). Unfortunately, the secondary markets for these chips are dark and prone to counterfeiting. In some cases, this can even lead to national security issues.
  4. Materials Extraction: Finally, valuable elements such as gold and silver can be extracted in a chemical deconstruction process.

In the best of circumstances, these processes are performed in safe working conditions, but as the video stories above show, they often occur in third-world countries under less than ideal conditions.

What is the pathway to a solution?

The electronics industry has spent countless billions on research to build fantastic products at incredibly low costs. Today, for the most part, when a product moves into the e-waste bucket, its value drops from that of an incredibly engineered machine down to the base value of its elemental materials. This is akin to valuing the human body for its mineral content.

A very critical issue is the ability to capture value at higher levels of reuse, because that changes the economic dynamics of collection and disposal to a landfill. Today, research investment in the backwards flow is remarkably lacking. Research, plus a modest amount of industry-wide standards cooperation along a number of vectors, could change the economics of e-waste significantly. These vectors include:

 

  1. Mechanical Disassembly: Mechanical disassembly is a key cost that must be managed. Robotics-based, high-throughput, low-cost mechanical disassembly is critical to addressing these costs; with a sufficiently efficient process, recycling becomes economically viable. Research: Novel design-for-disassembly techniques that enable high-volume, low-cost disassembly without compromising product performance are required. Standards/Industry Cooperation: Ideally, any robotics-based disassembly techniques are published as open standards that enable the e-waste ecosystem.
  2. Board and Subsystem Reuse: If the whole product cannot be reused, research into productive ways to reuse natural subsystems is desirable. Research: Most dominant consumer devices contain more computing capability than the Apollo space computers; this capability should have value in other domains. Computer architecture research that finds valuable ways to repurpose the major consumer platforms (laptops, mobile phones, tablets, etc.) could significantly shift the economics of reuse. Standards/Industry Cooperation: To fruitfully use a board, the first step is to validate its correctness, so standards for testing boards, with associated test suites, are invaluable. OEMs may even consider a “Certified Board” construct to enable this process.
  3. Chip Reuse: Safe and scaled manufacturing methods for board disassembly can lower the cost of obtaining chips. Once a chip is available, the reuse flow is well understood, and this process occurs today. However, the chips themselves trade in dark secondary markets. Standards/Industry Cooperation: To fruitfully use these chips, semiconductor companies should consider a “Certified Chip” construct in which they are actively engaged in the validation process.
  4. Materials Extraction and Disposal: Today, various chemical techniques are used to harvest key elements from e-waste. Research: Continued research on lowering the cost of extraction, and on extracting more value safely, can lead to positive consequences. A recent example is the work of ex-Tesla employees at Redwood Materials to extract lithium from batteries. Standards/Industry Cooperation: Standards for the major extraction and disposal processes are likely to lower costs and enable higher levels of reuse.

Why now?

Good citizenship and the potential for building profitable businesses are good reasons for the electronics industry to tackle the e-waste problem. This work can certainly be part of any ESG reporting for the investment community. A more urgent reason, however, may be the coming regulatory response. Significant activity to date includes:

  1. Europe: In the European Union, the fundamental concepts of circular material flows and the “right to repair” are gaining traction.
  2. US States: In the US, over 25 states have enacted electronics waste laws.

Recently, some industry-wide movement has occurred with the World Economic Forum's announcement of the Circular Electronics Partnership (CEP), and there has been some investment from the US government in e-waste research through entities such as the REMADE Institute ($70M). However, to make progress, it is very likely that traditional standards bodies such as CTA and SIA will have to get involved. In addition, the traditional funders of forward-flow research, such as the Semiconductor Research Corporation (SRC), NSF, and perhaps even the DoD, will need to participate. Overall, industry focus on standardization, market formation, and reuse research can significantly impact e-waste.

Note 1: There has been some churn in the popular media on the direction of the e-waste issue. The article “Is E-Waste Declining? The Rest of the Story” addresses this topic.

Note 2: The Global E-waste Monitor 2020 report by Vanessa Forti, Cornelis Peter Baldé, Ruediger Kuehr, and Garam Bel of the United Nations University provides a solid background on this topic.


Siemens EDA Updates, Completes Its Hardware-Assisted Verification Portfolio

by Bernard Murphy on 04-14-2021 at 6:00 am


Siemens EDA's Veloce emulation products are long-established and worthy contenders in any emulation smack-down. But there was always a hole in the complete acceleration story: where was the FPGA prototyper? Current practice requires emulation for fast simulation with hardware debug, plus prototyping for even faster simulation for software testing and debug. For Siemens, that prototyping hole is now filled. They now offer the complete range: hardware acceleration platforms for enterprise-level emulation, enterprise-level prototyping, and desktop prototyping. Good news all round, because the ESD Alliance is now showing that spend on hardware acceleration solutions is overtaking spend on conventional simulation.

Upgraded emulation

This announcement is part of a broader one. The emulator, Veloce Strato, now has a higher-capacity, higher-performance sibling: Veloce Strato+. They've made multiple improvements for this system. First, they've added the HYCON platform to co-model between the emulator and a virtual platform. You can run large software stacks on a virtual platform at near real-time speeds, jumping to the emulator (at lower speeds) only as needed.

Second, they have upgraded the compile flow to take advantage of hierarchy and distributed processing wherever possible, using multi-CPU processors to accelerate throughput. Total verification turn-time is as much a function of compile time as of run-time, a fact that comes home very quickly to designers dealing with frequent RTL drops.

New chip goes 2.5D

And third, Siemens EDA has spun a new accelerator chip, Crystal3+. Here they took the Strato board architecture and pushed the memory and processor into a 2.5D package, increasing performance per processor (interposer delays to memory rather than board delays) and increasing the number of chips per board from 16 to 24. Interestingly, the chassis remains the same: you can simply pull out the Strato boards and replace them with Strato+ boards. AMD (Alex Starr) has been working with Siemens on this platform and has provided an endorsement, and models of the 2nd- and 3rd-gen EPYC processors are also endorsed to work with this solution.

The Primo and ProFPGA prototypers

This is the big news for me. Primo is a full-function prototyper with a datacenter footprint and all the required features: ICE and virtual prototyping support, streaming to memory or host, multi-user support, down to single-FPGA granularity. It shares a common compile front-end with Strato+ (as does the ProFPGA solution), minimizing the turn-time to jump between platforms. And it also provides a common interface to the virtual modeling platform. Very neat. You can set up a complete verification environment, from virtualized CPUs to RTL modeling to an ICE/virtualized testbench, and swap between emulation, enterprise prototyping, and desktop prototyping.

Primo scales to 320 FPGAs and is based on the latest UltraScale+ devices from Xilinx. Arm has endorsed Veloce Primo.

Desktop prototyping

Veloce ProFPGA is for desktop prototyping, developed in partnership with ProFPGA, because that's what you need when you want to work with a real (not virtualized) testbench, plugging into real traffic generators, consumers, and monitors. You can scale from one uno board to five quad boards, with a choice of Intel Stratix 10 GX 10M or Xilinx XCVU19P FPGAs.

Jean-Marie Brunet (Sr. Director of Marketing and more at Siemens EDA) mentioned that they also provide full visibility into signal states in both prototyping platforms, leveraging signal reconstruction technology from the Veloce software.

You can learn more about the release HERE.

Also Read:

Formal for Post-Silicon Bug Hunting? Makes perfect sense

Library Characterization: A Siemens Cloud Solution using AWS

Smarter Product Lifecycle Management for Semiconductors


Certitude: Tool that can help to catch DV Environment Gaps

by admin on 04-13-2021 at 2:00 pm


Design verification (DV) is still one of the biggest challenges in the ASIC-based product world. In the last two decades, we have seen many changes in the HVLs and methodologies used for design verification. SystemVerilog is the most popular HVL these days, and UVM the most popular verification methodology.

Even after such advancements in HVLs and DV methodologies, the quality of actual design verification still depends largely on the DV engineer's verification skills. Because the process is human-driven, there is a chance that design bugs escape into the final design. When such bugs fall into the critical category, they lead to a chip respin, which can be very expensive and, above all, hurts the product's time to market, which is critical for a product company.

To avoid such design bug escapes and to minimize risk, chip design companies adopt additional tools beyond the EDA simulation tools used for verification, including formal verification tools, emulation tools, and more. The key goal is the same: to make sure the final design that is taped out is free of design bugs.

Certitude, a tool from the EDA company Synopsys, is sometimes used to measure the strength of a verification environment. Its fundamental purpose is to catch DV environment holes that can lead to design bug escapes. This paper describes how the Certitude tool works, how it can help catch DV environment gaps, and its pros and cons based on our own experience using it on a complex mixed-signal ASIC design.

How Certitude Works

Certitude introduces bugs, known as “faults,” into specified RTL design modules. It then runs the tests specified in certitude_testcases.cer using the existing verification environment (VE) and checks whether the VE detects the injected fault. With one fault active at a time, at least one test is expected to fail due to one or more of the failing symptoms below:

  • Data checker failure, including data integrity or status checks
  • SystemVerilog assertion failures
  • Simulation timeout reached
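Certitude's flow is, in essence, mutation testing applied to hardware. The Python sketch below is purely illustrative (the toy "design" and function names are our own, not the tool's mechanics): inject a stuck-at or negated fault into a model of the design, rerun the test suite, and see whether any test fails.

```python
# Illustrative mutation-testing analogy for Certitude's flow. A "fault"
# forces the design output to a fixed or inverted value; a good test suite
# should fail for at least one test while the fault is active.

def adder(a, b, fault=None):
    """Toy 'design': returns a + b, optionally corrupted by a fault."""
    out = a + b
    if fault == "stuck_at_0":
        out = 0        # analogous to OutputPortStuckAt0: output forced to 0
    elif fault == "negated":
        out = ~out     # analogous to OutputPortNegated: output inverted
    return out

def test_suite(fault=None):
    """Returns True if every test passes, i.e. the fault went undetected."""
    checks = [
        adder(1, 2, fault) == 3,
        adder(0, 0, fault) == 0,  # weak check: still passes under stuck_at_0
    ]
    return all(checks)

# Fault-free design: all tests pass.
assert test_suite()

# A fault counts as "detected" when at least one test fails.
assert not test_suite("stuck_at_0")  # caught by the adder(1, 2) check
assert not test_suite("negated")
```

Note how the `adder(0, 0, ...)` check alone would let the stuck-at-0 fault escape; the stronger `adder(1, 2, ...)` check is what detects it. That is exactly the kind of test-suite weakness Certitude is designed to expose.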

Certitude divides faults into a number of classes. The most important fault classes to exercise and analyze are described below.

Top Outputs Connectivity:

This category inserts faults on the output signals of the top design module (specified in certitude_config.cer).

For an output port of the top module, Certitude updates the RTL design by injecting StuckAt1, StuckAt0, and Negated faults. This means the actual output port is driven to “1”, “0”, or an inverted value, depending on the fault.

In the top module of the design shown below, Certitude injects three different faults for the output port at line 16:

Top module of the design

OutputPortStuckAt0 fault (Fault ID 691)

As shown in the snippet below, the actual output port assignment is commented out and the port is internally driven to “0” by the tool.

OutputPortStuckAt1 fault (Fault ID 692)

As shown in the snippet below, the actual output port assignment is commented out and the port is internally driven to “1” by the tool.

OutputPortNegated fault (Fault ID 693)

As shown in the snippet below, the actual output port assignment is commented out and the port is internally driven to the inverted value by the tool.

ResetConditionTrue:

This fault class targets reset-related signals, which are used to set internal signals, status signals, and error signals to their init (power-on-reset) values.

For example, in the snippet below, this fault category injects three different faults at line 67.

ConditionFalse:

It replaces line 67 with: if (1'b0) begin

ConditionTrue:

It replaces line 67 with: if (1'b1) begin

NegatedCondition:

It replaces line 67 with: if (!(i_async_rst_x)) begin

InternalConnectivity:

This fault class injects faults on the input ports of design submodules.

The snippet below shows a submodule instantiated in a module; the tool has identified three faults for the port connection at line 503.

The faults are:

InputPortConnectionStuckAt0 (Fault ID 1100)

It replaces line 503 with:

.i_ch1**_x ('h0)

InputPortConnectionStuckAt1 (Fault ID 1101)

It replaces line 503 with:

.i_ch1**_x ('h1)

InputPortConnectionNegated (Fault ID 1102)

It replaces line 503 with:

.i_ch1**_x (~(ch1_**_x[27:0]))

A Brief on Certitude's Phases:

Certitude runs in the phases listed below. There are options to selectively run one or all of these phases.

Model Phase:

Certitude analyzes the RTL design and collects all the required details about the design modules under consideration: their outputs, inputs, and wires eligible for the different types of fault insertion. This phase finds all possible faults to exercise the design in the next phases. Design modules to include in the Certitude run can be added in certitude_hdl_files.cer.

Activate Phase:

Before starting this phase, one needs to create the test suite (tests that DV engineers believe can effectively exercise the design modules) that the tool will use to exercise the selected RTL design modules. Certitude runs a regression using this test suite and checks whether the signals on which it is going to insert faults toggle or not; in other words, it qualifies the test suite against the faults to be inserted. If a fault's target signal toggles in a test, that test is considered qualified for the fault and is used in the subsequent detect phase.

Based on this phase's report, one needs to update the test suite and rerun the phase, possibly multiple times, to identify qualifying tests for all “Non-Activated” faults. The final result of this phase is a test suite that qualifies all the faults the tool has identified. There are options to run this phase incrementally so that each iteration only considers newly added tests against the remaining “Non-Activated” faults.
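The qualification idea behind the activate phase can be sketched in a few lines of Python (the test names and toggle data here are hypothetical, and this is the concept only, not Certitude's implementation): a test qualifies for a fault only if the fault's target signal actually changes value during that test.

```python
# Sketch of activate-phase qualification: record the values a fault's
# target signal takes during each test; a test qualifies only if the
# signal toggles (takes more than one value). Hypothetical data below.

signal_activity = {
    # test name -> values observed on the fault's target signal
    "smoke_test": [0, 0, 0, 0],  # never toggles -> not qualified
    "dma_burst":  [0, 1, 1, 0],  # toggles -> qualified for this fault
    "irq_storm":  [1, 1, 1, 1],  # never toggles -> not qualified
}

def qualified_tests(activity):
    """Tests whose target signal takes more than one value."""
    return [t for t, vals in activity.items() if len(set(vals)) > 1]

print(qualified_tests(signal_activity))  # prints ['dma_burst']
```

A test that never toggles the target signal cannot possibly observe the fault, so running it in the detect phase would waste simulation time; this is why the activate phase filters the suite first.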

Detect Phase:

Certitude injects faults one by one on each qualified signal and runs the regression. At the end of the regression, it provides a summary report of how many injected faults were detected by the verification environment. The expectation is that at least one test fails for each fault. The report gives details as follows:

Detected faults:

Details on all the faults that the existing verification environment detected when Certitude inserted them. This number represents faults for which some tests failed; the VE can detect design issues corresponding to these faults.

Non-detected faults:

Details about faults that the existing verification environment did not detect when Certitude inserted them. For these faults, none of the tests failed, meaning the VE missed the design bug inserted for each such non-detected fault.

Non-Propagated faults:

These are faults that, though activated, do not change the ultimate simulation result; the final output remains the same with and without the fault.

This can happen when a fault propagates its effect up to a certain level of the hierarchy, but not all the way to where it would change the final behavior.

Looking at the detect phase report, one needs to analyze the non-detected and non-propagated faults. The DV engineer has to look at each fault and decide, based on the reference specs, why it remained non-detected or non-propagated. Corrective actions based on this analysis include:

  1. A new test needs to be added to the test suite provided to Certitude. This can be an existing test or a brand-new one.
  2. Some faults may result in scenarios that are illegal with respect to the design reference specs. The DV engineer can check with the designer on such a fault and decide to exclude it; there are options to disable specific faults once the required approval is received from the concerned person.
  3. We may end up updating existing checkers to catch such faults.

If we update any assertion or checker, which counts as a DV update, we have to run the Model + Activate + Detect phases again from scratch. If it is just an addition to the test suite, then we can run the incremental Activate + Detect phases, which will not re-exercise the existing database for faults already detected.

Below are some basic examples of scenarios that Certitude can help catch. We can call them “DV holes” or “DV gaps.”

Address flip for memory:

If the memory controller's output address bits are swapped by fault insertion on both write and read accesses, the fault will not be detected using normal write and read operations, because the writes and reads are corrupted consistently.

To detect this, a test with a front-door write and backdoor read scenario has to be present in the DV test suite. If this scenario is absent, Certitude's “OutputPortNegated” fault remains undetected. Certitude can help identify such gaps, if any exist.
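This escape can be demonstrated with a small Python model (the fault and memory model are hypothetical, for illustration only): because the same address-bit swap corrupts both paths, a front-door write/read test round-trips cleanly, and only a backdoor read exposes the fault.

```python
# Toy model of the "address flip" escape: the same address-bit swap on
# both the write and the read path makes a normal write/read test pass,
# while a backdoor read (bypassing the faulty address logic) catches it.

def swap_bits_0_1(addr):
    """Injected fault: swap address bits 0 and 1."""
    b0, b1 = addr & 1, (addr >> 1) & 1
    return (addr & ~0b11) | (b0 << 1) | b1

class FaultyMemory:
    def __init__(self):
        self.mem = {}

    def frontdoor_write(self, addr, data):
        self.mem[swap_bits_0_1(addr)] = data      # fault on write path

    def frontdoor_read(self, addr):
        return self.mem.get(swap_bits_0_1(addr))  # same fault on read path

    def backdoor_read(self, addr):
        return self.mem.get(addr)                 # bypasses address logic

m = FaultyMemory()
m.frontdoor_write(0b01, 0xAB)

# The front-door write/read test PASSES despite the fault:
assert m.frontdoor_read(0b01) == 0xAB

# The backdoor read exposes it: data actually landed at address 0b10.
assert m.backdoor_read(0b01) is None
assert m.backdoor_read(0b10) == 0xAB
```

This is why the detect-phase report would flag the fault as non-detected unless the suite contains a test that crosses the faulty boundary only once, such as a front-door write paired with a backdoor read.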

Assertion at sub-module hierarchy:

If an SV assertion is written using a submodule's I/O ports where Certitude inserts faults (StuckAt1, StuckAt0, Negated), then such faults remain undetected, because the assertion uses the same faulty value to predict and check the design behavior.

To catch such faults, the assertion has to be written using a higher-level module's ports, or using glue logic that derives the final values expected at the submodule's port boundary. For example, such logic can decode data bus writes or read requests to derive the final control signals to use in the assertion, rather than using the submodule's I/O ports directly.

Assertion that checks 1'b1 OR 1'b0 on an event:

If an SV assertion is coded to check that a port is driven to 1'b1 or 1'b0 on some condition, and Certitude inserts a fault (StuckAt0 or StuckAt1) that matches the assertion's expectation, no failure results. Certitude helps DV engineers find such “always PASS” coding issues, which may lead to some interesting scenarios.
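The "always PASS" pitfall can be illustrated with a small sketch (a toy design and checks of our own, not Certitude syntax): an assertion that only ever expects a constant value is satisfied by the very stuck-at fault it should catch, while an assertion that tracks the signal in both directions detects it.

```python
# Sketch of the "always PASS" pitfall: ack should follow request, but a
# StuckAt1 fault on ack satisfies an assertion that only expects 1.

def ack_signal(request, fault=None):
    """Toy design output: ack mirrors request, unless a fault is injected."""
    ack = 1 if request else 0
    if fault == "stuck_at_1":
        ack = 1          # injected StuckAt1 fault on the ack port
    return ack

def weak_check(request, ack):
    """Assertion: whenever request is 1, ack must be 1."""
    return (not request) or ack == 1

def strong_check(request, ack):
    """Assertion: ack must track request in both directions."""
    return ack == (1 if request else 0)

# Fault-free design passes both assertions for all stimuli.
assert all(weak_check(r, ack_signal(r)) and strong_check(r, ack_signal(r))
           for r in (0, 1))

# Under StuckAt1 the weak assertion still passes for every stimulus,
# so the fault would be reported as non-detected ("always PASS").
assert all(weak_check(r, ack_signal(r, "stuck_at_1")) for r in (0, 1))

# The stronger assertion fails for request == 0, detecting the fault.
assert not strong_check(0, ack_signal(0, "stuck_at_1"))
```

Rewriting the check so it constrains the signal in both directions is the same fix the article suggests for SV assertions: predict the expected value rather than merely accepting the faulty one.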

Reports and the details each provides:
The snippets below depict Certitude's results reports after the detect phase. The tool allows access to these reports while the detect phase is still running, without affecting the simulations in progress.

Report Summary:

Below snippet gives overall status of the detect phase.


Where:

Faults Disabled by Certitude: faults disabled by Certitude based on configuration option settings.

Faults Disabled by User: faults disabled by the user based on analysis or discussion with the designer.

Faults Dropped: faults that Certitude drops when they depend on some other non-detected or non-propagated fault in the same area. For example, if a case statement's fault is not detected, the tool drops the faults on all statements under it.

Per Fault Class Summary for a design module:

It shows details of the faults exercised during the detect phase, per fault class, for each module. So far, this paper has discussed the top three fault class types; these are the ones DV engineers should analyze with the highest priority. The report shows how many faults are detected, non-detected, non-propagated, etc., for a design module.

Per Design Modules overall status:

The snippet below shows the module-based fault classification: each module participating in the Certitude run and the status of its faults.

The runtime of the detect phase depends mainly on:

  1. The number of tests added to the Certitude test suite to activate each fault. It is preferable that the user identify key tests that hit the maximum amount of logic in the design; this keeps the number of tests small, saving simulation licenses as well as Certitude runtime.
  2. The runtime of each test added. The longer each simulation runs, the longer the detect phase takes.
  3. The number of faults to be exercised. More design modules mean more faults to exercise, and hence more tests required to hit the related functional scenarios.

Summary of using Certitude in a project:

  • It helps identify corner cases that are hard to think of.
  • Design module selection and other options allow Certitude to run effectively on key design modules and help build confidence in the design verification done.
  • It is advisable to run Certitude late in a project, when most of the DV work is done and thorough design verification is complete, but this can affect the project deadline.
  • Some faults are not legal with respect to the design spec, but if they fall into the non-detected or non-activated category, one still has to spend time closing them, which is time-consuming, and the return on effort may not add much value.
  • Some faults may not correspond to use-case scenarios of the final product, but one may still end up spending time closing them.
  • Depending on design complexity, the tool identifies a large number of faults, and running them all through Certitude is time-consuming and needs a large number of simulation licenses. This adds cost to the project.

Conclusion:

As described, based on design complexity and on the mix of proven third-party design IPs versus key custom design blocks in an SoC, one can run Certitude on the entire design or on selected design modules to gain more confidence in the quality of the design verification performed on them. Depending on how thorough the DV already is, the tool may or may not find bugs or DV gaps, but it can surely help build DV confidence. Sometimes it uncovers corner-case scenarios that are hard to think of and were missed in the verification plan. Based on the design specs and use-case scenarios, DV engineers can decide whether to add those scenarios or get a waiver from the design leads to drop them.

To know more please see the Synopsys webinar or contact our experts.

Authors-
Manzil Shah – Technical Manager, eInfochips-An Arrow Company
Bipin Patel – Member Technical Staff – (Level 2), eInfochips-An Arrow Company
Vishal Mistry – Member Technical Staff – (Level 2), eInfochips-An Arrow Company
Shashank Mistry – Senior Engineer – (Level 2), eInfochips-An Arrow Company
Ronak Bhatt – Engineer, eInfochips-An Arrow Company

Also read:

Understanding BLE Beacons and their Applications

Digital Filters for Audio Equalizer Design

Sign Off Design Challenges at Cutting Edge Technologies

Techniques to Reduce Timing Violations using Clock Tree Optimizations in Synopsys IC Compiler II

 


The Juggernaut Continues as ESD Alliance Reports Record Revenue Growth for Q4 2020

by Mike Gianfagna on 04-13-2021 at 10:00 am

Marvel’s Juggernaut

Apologies for the slightly hyperbolic title of this post. Webster defines juggernaut as “a massive inexorable force, campaign, movement, or object that crushes whatever is in its path.” Marvel Comics fans will recall the term also refers to a superhero nemesis. But I digress. The ESD Alliance recently announced its Q4 2020 Electronic Design Market Data Report. I covered the Q3 2020 report here. That was very upbeat news, something that seemed hard to top. But that's exactly what happened in Q4: stronger and record-setting results. Read on to find out how the juggernaut continues as the ESD Alliance reports record revenue growth for Q4 2020.

Let's start with some basic statistics. Revenue for EDA, IP, and services grew 15.4 percent in Q4 2020 compared to Q4 2019; the same comparison for Q3 2020 showed 15 percent growth. Overall revenue increased by over $1 billion in 2020, a new milestone for the industry. This is only the fourth time since 2011 that year-over-year growth exceeded 15%. The four-quarter moving average, which compares the most recent four quarters to the prior four, rose by 11.6%, the highest annual growth since 2011 and the second highest in the last 14 years. The companies tracked in the report employed 48,478 people in Q4 2020, a 6.7% increase over the Q4 2019 headcount of 45,416 and up 3% compared to Q3 2020. Wondering what happened to all that pandemic gloom and doom? Me, too. These results are in rarefied air as achievements go.
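For readers new to the metric, the four-quarter moving-average growth compares the sum of the latest four quarters with the sum of the prior four. A quick sketch with made-up quarterly figures (not ESD Alliance data):

```python
# Illustrative computation of four-quarter moving-average growth:
# sum of the latest four quarters vs. the sum of the prior four.
# The revenue figures below are invented for demonstration only.

def moving_avg_growth(quarterly_revenue):
    """quarterly_revenue: list of at least 8 quarters, oldest first."""
    prior = sum(quarterly_revenue[-8:-4])
    recent = sum(quarterly_revenue[-4:])
    return (recent - prior) / prior * 100

# Eight hypothetical quarters ($M), oldest first:
revenue = [900, 920, 950, 980, 1000, 1030, 1070, 1100]
print(f"Four-quarter moving-average growth: {moving_avg_growth(revenue):.1f}%")
# prints "Four-quarter moving-average growth: 12.0%"
```

Because it aggregates a full year on each side, this measure smooths out the quarter-to-quarter lumpiness of license bookings, which is why the report favors it for annual comparisons.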

I was able to catch up with Wally Rhines to get some of the back story on all this good news. Wally is the executive sponsor of the SEMI Electronic Design Market Data report.  We started by remembering our prior discussion on the Q3 results. Those were really good, and Wally did confirm they’re now even better. We talked about what’s driving all this good news. The many new system companies who have realized the significant benefits that custom silicon delivers are certainly helping to raise the water level for all. Wally referred to the customization phase of Makimoto’s Wave, a characterization of innovation cycles by Tsugio Makimoto, former CEO of Hitachi Semiconductor. You can learn more about this phenomenon here.

The bottom line is that more custom design is being deployed to achieve the required PPA for next generation products, and that’s good news for EDA and IP. Here is a summary of revenue by application category for Q4 2020:

  • CAE revenue increased 9.4% to $956.9 million compared to Q4 2019. The four-quarter CAE moving average also increased 9.4%
  • IC Physical Design and Verification revenue increased 36.6% to $637.1 million compared to Q4 2019. The four-quarter moving average for the category rose 12.3%
  • Printed Circuit Board and Multi-Chip Module (PCB and MCM) revenue decreased 0.8% to $292.9 million compared to Q4 2019. The four-quarter moving average for PCB and MCM increased 4.3%
  • SIP revenue increased 16.9% to $1,052.9 million compared to Q4 2019. The four-quarter SIP moving average grew 17.1%
  • Services revenue increased 2.2% to $91.6 million compared to Q4 2019. The four-quarter Services moving average decreased 2.4%

Architectures are getting more complex and chip/system-in-package designs are getting larger, which explains the dramatic growth of physical design & verification as well as semiconductor IP. Overall, Wally saw no reason for the current wave of innovation to slow down. I completely agree. As we’ve all endured a year with a lot of bad news, you can take comfort in knowing there is good news for EDA and IP and the world-changing innovation it drives. I would say Tsugio Makimoto got it right, so take comfort – the juggernaut continues as ESD Alliance reports record revenue growth for Q4 2020.

Also read:

ESD Alliance Report for Q3 2020 Presents an Upbeat Snapshot That is Up and to the Right

EDA Appears to Have COVID Immunity – ESD Alliance Reports Strong Q2 2020

UPDATE: Everybody Loves a Winner


Design IP Sales Grew 16.7% in 2020, Best Growth Rate Ever!

by Eric Esteve on 04-13-2021 at 6:00 am


Design IP sales grew 16.7% in 2020 to reach $4.6B, the best growth since the year 2000!

The main trends shaking design IP in 2020 are very positive for the top 3 IP vendors, each of them growing more than the market, and confirm the importance of the wired interface IP market, aligned with data-centric applications: hyperscaler, datacenter, networking, and AI.

ARM is again a solid #1, with more than 40% market share and, important to notice, 17.4% growth, slightly more than the IP market. Does this growth rate indicate that ARM's trouble with its Chinese management has been cleared up, or simply that ARM's sales are in line with the IP market? It's difficult to answer, but ARM's IP royalty sales grew by 16% and IP license sales by 19.9%.

This high growth in IP license sales is incredibly positive for the future. It indicates that, even as RISC-V continues to generate strong interest and communication, the industry is still willing to pay license fees to benefit from ARM products, and that ARM has released enough new products to attract customers in high-end markets: CPU and GPU for smartphone, Armv9 CPUs for AI, security, and computing, and more. With ARM the undisputed CPU and GPU IP leader in smartphone application processors, royalty sales have reached a plateau in these applications. The next big markets to target are automotive and datacenter/AI, if we consider that IoT and MCU are real but too fragmented to represent equally large opportunities. It took ARM’s management some time to reach this conclusion, which they did when transferring the ARM IoT business to SoftBank. There is certainly room for a significant increase in license and royalty sales if ARM can succeed in targeting these automotive and datacenter applications – or for whoever acquires the company, Nvidia being at the top of the list.

Now, let’s have a look at the various IP vendors who have been successful, as well as IP categories growing share of the IP market.

Synopsys and Cadence, #2 and #3 respectively, grew by 23.4% and 19%. This trend confirms the validity of the one-stop-shop model, at least for large companies benefiting from a wide sales organization. If you want to understand why Synopsys had better sales growth than Cadence, the answer is first linked to the wired interface IP category. Synopsys had 55% market share (Cadence 12.2%) in this category, which grew by 22.4%; Synopsys’s interface IP sales grew by an impressive 28%, versus 20% for Cadence. But Synopsys was also successful in many other categories, namely analog & mixed-signal, library & I/O, and memory compiler, among others. That’s why in 2020 Synopsys confirmed its leadership in IP license sales with 30% market share, ahead of ARM with 25.5%, and was a strong #2 in overall IP sales with 19.2%.

We will see that the other winners in the IP market are, by contrast, companies that are extremely focused and able to be technical leaders in their segment or sub-segment. Let’s mention a few examples.

  • Alphawave, created in 2017 by serial entrepreneur Tony Pialis, enjoyed $25 million in sales in 2019, based on advanced SerDes. In 2020, Alphawave’s sales tripled to reach $75 million! We think this incredible success is linked to their support of the most advanced interface IP protocols – PCI Express, 112G SerDes for Ethernet, and die-to-die (D2D) – which are extensively used in hyperscaler, datacenter, networking, and AI accelerator applications.
  • Silicon Creations, leader of the analog mixed-signal (AMS) category in 2018 and 2019, led again in 2020. The company, now about ten years old, is #1 just ahead of Synopsys, with almost 35% growth.
  • Arteris IP, with its network-on-chip (NoC) IP, joined the Top 15 in 2019 and is now #12 after the acquisition of Magillem, with revenues above $40 million in 2020.
  • Moortec was a good example, focused on on-chip monitoring IP for ICs on advanced technology nodes. So good, in fact, that Moortec was acquired by Synopsys in 2020!

The next picture shows the weight of the various IP categories in 2020. The main trend shown last year – wired interface IP growing faster than all the other categories – is confirmed. What is new is that the CPU category stopped declining in 2020, after decreasing in 2017, 2018, and 2019.

Another interesting point to highlight is the size of the IP business relative to the semiconductor business (excluding the memory business, DRAM and flash). While the semiconductor market (less memory) grew from $302B to $322B, or about 6.6%, IP grew by 16.7%, roughly ten points faster than the semiconductor market.
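As a quick sanity check on those figures (using the rounded numbers quoted above; the exact IPnest and semiconductor figures may differ slightly), the growth rates work out as follows:

```python
# Growth-rate check using the rounded figures quoted in the article
# (actual IPnest/WSTS figures may differ slightly from these round numbers).
semi_2019, semi_2020 = 302.0, 322.0   # $B, semiconductor market excluding memory
ip_2020 = 4.6                          # $B, 2020 Design IP sales
ip_2019 = ip_2020 / 1.167              # back-computed from the 16.7% growth figure

semi_growth = (semi_2020 / semi_2019 - 1) * 100
ip_growth = (ip_2020 / ip_2019 - 1) * 100

print(f"Semiconductor (less memory) growth: {semi_growth:.1f}%")   # ~6.6%
print(f"Design IP growth: {ip_growth:.1f}%")                       # ~16.7%
print(f"IP outgrew the market by ~{ip_growth - semi_growth:.0f} points")
```

The rounded $302B to $322B figures give about 6.6%, so IP outgrew the overall (less-memory) semiconductor market by roughly ten percentage points.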

We will see that this trend has held over the last 10 years (except in 2017 and 2018):

Eric Esteve from IPnest

To buy this report, or just to discuss IP, contact Eric Esteve (eric.esteve@ip-nest.com)

Also Read:

How SerDes Became Key IP for Semiconductor Systems

Interface IP Category to Overtake CPU IP by 2025?

Design IP Revenue Grew 5.2% in 2019, Good News in Declining Semi Market


How PCI Express 6.0 Can Enhance Bandwidth-Hungry High-Performance Computing SoCs

How PCI Express 6.0 Can Enhance Bandwidth-Hungry High-Performance Computing SoCs
by gruggles on 04-12-2021 at 2:00 pm

How PCI Express 6.0 Can Enhance Bandwidth Hungry High Performance Computing SoCs

What do genome sequencing, engineering modeling and simulation, and big data analytics have in common? They’re all bandwidth-hungry applications with complex data workloads. High-performance computing (HPC) systems deliver the parallel processing capabilities to generate detailed and valuable insights from these applications. To break through any bandwidth limitations, HPC SoCs need the fast data transfer and low latency that high-speed interfaces like PCI Express® (PCIe®) provide. With each new generation of PCIe delivering double the bandwidth of its predecessor, the latest iteration, PCIe 6.0, promises to be a boon for compute-intensive applications.

The HPC solutions that transform high volumes of data into valuable knowledge can be deployed in the cloud or in on-site data centers. Either way, they demand compute, networking, and storage technologies with high performance and low latency, as well as artificial intelligence (AI) prowess. PCIe 6.0, expected to be released sometime this year, should help solve the bandwidth limitations that HPC SoCs constantly face. The I/O bus specification will provide:

  • An increased data transfer rate of 64 GT/s per pin, compared to 32 GT/s per pin for PCIe 5.0
  • Power efficiency via a new low-power state
  • Cost-effective performance
  • Backward compatibility with previous generations
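The doubling-per-generation cadence can be sketched with a little arithmetic. This is a simplified model using raw line rates only; real effective throughput is reduced by encoding and protocol overhead, which differs across generations:

```python
# Raw per-lane transfer rates for PCIe generations (GT/s).
# Simplification: treat 1 transfer as 1 bit. Effective throughput is lower due
# to 8b/10b encoding (Gen 1-2), 128b/130b (Gen 3-5), or FLIT/FEC overhead (Gen 6);
# the Gen 6 rate of 64 GT/s already accounts for PAM-4's two bits per symbol.
rates_gt_s = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0, 6: 64.0}
lanes = 16  # a full-width x16 link

for gen, rate in rates_gt_s.items():
    gb_per_s = rate * lanes / 8  # raw GB/s per direction for an x16 link
    print(f"PCIe {gen}.0 x16: {rate:>5} GT/s/lane -> ~{gb_per_s:.0f} GB/s per direction")
```

For PCIe 6.0 this works out to roughly 128 GB/s of raw bandwidth per direction on an x16 link, double the PCIe 5.0 figure.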

Faster data transfer via PCIe 6.0 will result in faster computations for HPC, as well as cloud computing and AI applications. For example, as an AI algorithm is being trained, data needs to move back and forth quickly across multiple processors. PCIe 6.0 will remove the bottlenecks to allow a fast data flow for a more rapid training process. At the moment, the HPC landscape is dominated by hyperscale data centers. Given their disaggregated computing structure, hyperscale data centers currently provide the most powerful HPC capabilities for applications like AI engines. PCIe 6.0 will be beneficial by supporting more efficient disaggregated computing.

Another emerging application for PCIe 6.0 is storage, namely solid-state drives (SSD) used in data centers. Technology advances in SSD manufacturing—including stacked die—have increased storage capacity. At the same time, this has pushed the limits of 4-lane PCIe form factors. PCIe 6.0 will open the doors to the bandwidth and fast data transfer needed to take full advantage of the increased storage capacity.

New Architecture Brings New Challenges

PCIe 6.0 does come with a new architecture, moving from the non-return-to-zero (NRZ) structure with two logic levels of previous generations to Pulse Amplitude Modulation with four levels (PAM-4). PAM-4 encoding brings the increased data transfer rate and bandwidth. The latest generation also introduces forward error correction (FEC) to address raw bit error rate (BER) challenges that result from the new architecture. FEC traditionally introduces latency; however, PCI-SIG has defined a “lightweight FEC” with retry buffers and cyclic redundancy check (CRC) to maintain low latency. Another change in this iteration is the move to FLIT (flow control unit) mode, which also supports low latency and high efficiency.
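The core difference between NRZ and PAM-4 signaling can be illustrated with a toy encoder. This is illustrative only: real PAM-4 PHYs add precoding, equalization, and the FEC described above, but the Gray-coded bit-pair-to-level mapping shown here is the standard idea:

```python
# Toy illustration: NRZ carries 1 bit per symbol (2 voltage levels), while
# PAM-4 carries 2 bits per symbol (4 levels), doubling the data rate at the
# same symbol rate. Gray coding makes adjacent levels differ by a single bit,
# so a one-level misdetection corrupts only one bit.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # bit pair -> level

def nrz_encode(bits):
    return [+1 if b else -1 for b in bits]      # one symbol per bit

def pam4_encode(bits):
    pairs = zip(bits[0::2], bits[1::2])         # two bits per symbol
    return [GRAY_PAM4[p] for p in pairs]

bits = [0, 1, 1, 1, 1, 0, 0, 0]
print(nrz_encode(bits))   # 8 symbols: [-1, 1, 1, 1, 1, -1, -1, -1]
print(pam4_encode(bits))  # 4 symbols: [-1, 1, 3, -3] -- same data, half the symbols
```

Halving the number of symbols per bit is what lets PCIe 6.0 double the data rate without doubling the channel's Nyquist frequency, at the cost of a tighter eye and hence the need for FEC.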

PCIe 6.0 moves from the NRZ structure to PAM-4 for faster data transfer and higher bandwidth

Transitioning to this new architecture from earlier PCIe generations will involve some design considerations. For example, the receiver architecture for the PAM-4 PHY is based on an analog-to-digital converter, which calls for optimization of analog and digital equalization to achieve the best power efficiency regardless of the channel. Given the massive data pipe involved – potentially up to 1 Tb/s of raw data moving in each direction on an x16 link – proper management of this data is critical. Another consideration relates to testbench development for verification, which ideally should be as efficient a process as possible while also accounting for factors like functional coverage.

Complete IP Solution for PCIe 6.0

Synopsys, which has long been a key contributor to PCI-SIG workgroups, has unveiled a complete IP solution that will allow for early development of PCIe 6.0 SoC designs. Synopsys DesignWare® IP for PCIe 6.0 is built on the silicon-proven DesignWare IP for PCIe 5.0 and supports the latest features of the upcoming new specification. As such, the solution is designed to address the bandwidth, latency, and power-efficiency demands of HPC, AI, and storage SoCs. The solution consists of:

  • The DesignWare Controller for PCIe 6.0, which utilizes a MultiStream architecture consisting of multiple interfaces to provide the lowest latency with maximum throughput for all data transfer sizes. Available in a 1024-bit architecture, the controller allows designers to achieve 64 GT/s x16 bandwidth while closing timing at 1 GHz.
  • The DesignWare PHY for PCIe 6.0, which provides unique, adaptive digital signal processing (DSP) algorithms that optimize analog and digital equalization for maximum power efficiency across backplane, network interface cards, and chip-to-chip channels. With its diagnostic features, the PHY enables near-zero link downtime. Its placement-aware architecture minimizes package crosstalk and allows dense SoC integration for x16 links.
  • Verification IP, which uses a native SystemVerilog/UVM architecture that can be integrated, configured, and customized with minimal effort to help accelerate testbench development while providing a built-in verification plan, sequences, and functional coverage.
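As a quick check of the controller numbers above, a 1024-bit datapath closing timing at 1 GHz matches the raw line rate of a 64 GT/s x16 link (raw bits only; FLIT and FEC overhead are ignored in this back-of-the-envelope check):

```python
# Back-of-the-envelope check: internal datapath throughput vs. raw link rate.
datapath_bits = 1024        # controller internal datapath width
clock_ghz = 1.0             # timing-closure target
internal_gbps = datapath_bits * clock_ghz      # 1024 Gb/s through the controller

link_rate_gt_s = 64         # PCIe 6.0 per-lane rate
lanes = 16
line_gbps = link_rate_gt_s * lanes             # 1024 Gb/s raw on the wire

assert internal_gbps == line_gbps
print(f"Internal: {internal_gbps:.0f} Gb/s matches raw x16 line rate: {line_gbps:.0f} Gb/s")
```

In other words, the 1024-bit/1 GHz combination is exactly sized to keep a saturated 64 GT/s x16 link fed in each direction.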

Data Makes the World Go ’Round

It’s a data-driven world, and this will only intensify in the coming years. By 2025, according to IDC estimates, worldwide data will grow to 175 zettabytes, with as much of this data residing in the cloud as in data centers. That’s a compound annual growth rate of roughly 27% from 33 zettabytes in 2018. While PCIe 6.0 early adopters are anticipated to be hyperscalers and other HPC SoC designers, the newest standard promises to eventually gain traction among designers working on edge, mobile, and automotive applications. Having led the shift to PCIe 5.0 with hundreds of design wins, Synopsys is helping designers gain a head start on PCIe 6.0 designs with a complete PCIe 6.0 IP solution and expertise in the popular high-speed SerDes IP. As bandwidth demands increase, designers of PCIe 6.0-based applications can be well-positioned to keep the data moving.
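For reference, the annual growth implied by IDC's projection of 33 ZB in 2018 to 175 ZB in 2025 works out as follows:

```python
# Compound annual growth rate (CAGR) implied by IDC's datasphere projection:
# 33 ZB in 2018 growing to 175 ZB by 2025.
start_zb, end_zb = 33.0, 175.0
years = 2025 - 2018                               # 7 years
cagr = (end_zb / start_zb) ** (1 / years) - 1     # geometric annual growth
print(f"Implied CAGR: {cagr * 100:.0f}% per year over {years} years")  # ~27%
```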

By Priyank Shukla, Staff Product Marketing Manager, High-Speed SerDes IP, and Gary Ruggles, Sr. Staff Product Marketing Manager, Solutions Group
Also Read:

Why In-Memory Computing Will Disrupt Your AI SoC Development

Using IP Interfaces to Reduce HPC Latency and Accelerate the Cloud

USB 3.2 Helps Deliver on Type-C Connector Performance Potential


eFPGA IP – in Videos

eFPGA IP – in Videos
by Daniel Nenni on 04-12-2021 at 10:00 am

Menta eFPGA IP

eFPGA has been a hot topic on SemiWiki for the past five years and it is still going strong. eFPGA is more descriptively categorized as embedded programmable logic and reconfigurable computing. Whatever you want to call it, eFPGA is an important piece of the computing puzzle, absolutely.

We recently did a series of videos with Menta that is worth watching as a retrospective of where eFPGA is used today, with some hints about tomorrow.

The first one is titled Design Adaptive eFPGA IP:

“In this webinar, we will explain what makes an eFPGA different from an FPGA and from embedded CPUs/GPUs – and in which cases an eFPGA IP is the way to go. We will then explain what a design-adaptive eFPGA IP is and why this adaptiveness is so important when it comes to integrating an eFPGA IP.”

The second one is titled: eFPGA Using Adaptive DSP:

“In this webinar, we will explain why the use of the Menta adaptive digital signal processor (DSP) makes Menta eFPGA different from any other FPGA, embedded CPU/GPU, or eFPGA. We will then explain why its ease of use, its adaptiveness, and its performance are so important when looking for the best trade-off between flexibility and PPA to better suit your hardware requirements.”

The third one is titled: How to Build a Secure System Over Time Using eFPGA IP

“In this webinar we will present the Secure-IC & Menta corporate overview and vision, why eFPGA can be powerful in security, the first results of the collaboration, a summary of the joint value proposal, and what is coming next for eFPGA IP in secure elements.”

What is the value proposition of eFPGA you ask?

Secret protection: With Menta eFPGA you can wait to deliver your most proprietary technology to end-customers as a field-upgrade, minimizing any opportunity for competitors to reverse engineer your product.

Cost reduction: At higher production volumes, on-board FPGAs quickly become cost prohibitive. With Menta eFPGA you integrate that FPGA functionality on-chip.

Performance: With Menta eFPGA, sacrifices in board space, I/O latency, and bandwidth disappear as you bring those accelerators on-chip, without the limitations and overhead of I/O pad count or chip-to-chip communication interfaces.

Lower power: In an off-the-shelf FPGA, everything beyond the programmable logic, such as high-speed interfaces, PLLs, and controllers, consumes around half of the power. Menta’s power-saving advances allow an algorithm on a Menta eFPGA IP to consume between 10% and 50% of the power of the same algorithm on an FPGA.

Design insurance: Maximizing flexibility requires maximizing process-portability. Menta eFPGA is the only 100% standard-cell based solution and this approach enables rapidly porting your eFPGA to whatever new process geometry/variant you desire, using the same automated, standard EDA flow as for the rest of your SoC. Menta, using our industry gold-standard Synopsys-based implementation flow, enables portability within just weeks.

Security: In today’s global, multi-player design chain, preserving IP and trade secrets is more critical and challenging than ever.

ABOUT MENTA
Menta is a privately held company based in Sophia-Antipolis, France. For ASIC and SoC designers who need fast, right-the-first-time design and fast time to volume, Menta is the proven eFPGA pioneer whose design-adaptive, standard-cell-based architecture and state-of-the-art tool set provide the highest degree of design customization, best-in-class testability, and fastest time to volume for SoC designs targeting any production node at any foundry. For more information, visit the company website: www.menta-efpga.com


Meeting the Need for Hardware-Assisted Verification

Meeting the Need for Hardware-Assisted Verification
by Lauro Rizzatti on 04-12-2021 at 6:00 am

image001 5

Editor’s Note: Siemens EDA recently introduced a comprehensive hardware-assisted verification system comprising hardware, software, and system verification that streamlines and optimizes verification cycles while helping reduce verification cost. What follows is an edited version of an interview verification expert Lauro Rizzatti conducted with Jean-Marie Brunet, senior director of product management and engineering for emulation and prototyping at Siemens EDA.

LR: Siemens EDA introduced a suite of comprehensive and integrated hardware-assisted verification tools. Before we discuss the details of this “Big Launch,” let me ask you a general question to set the stage. What trends do you see in chip design and what is driving those trends?

JM: The market trends we see in IC verification and validation are interesting and promising for us. We see that verification costs continue to grow faster than design costs. As software validation costs grow, hardware-assisted verification spending overtook register transfer level (RTL) simulation spending in 2018, and this trend will continue for the foreseeable future. All market indicators point to the need for more hardware-assisted verification tools, and more spending in this category, which is good news for us.

We see four major markets driving our hardware-assisted verification technology investment.

Number one is networking. Number two is communication with 5G. Number three is computing and storage. Number four is transportation, not only automotive, but any type of transportation.

Consistently, all four vertical markets share the same challenges when designing, verifying, and realizing an IC. The challenges come from the ability to meet power consumption and performance targets.

LR: System-on-Chip (SoC) designs consist of a combination of hardware and software, where software is becoming the dominant contributor. How do you measure success designing such a complex design?

JM: This is an interesting question. The market dynamics driving IC verification are clear. We are moving into an era where software performance defines semiconductor success. It used to be that meeting hardware functional specs defined semiconductor success. While today you still have to meet hardware functional specs, on top of that, you have performance and power targets. The challenge can be met by handling a test environment consisting of lots of workloads, frameworks, and industry benchmarks. Clearly, today’s SoC designs are driven by software performance. That’s the key trend we see.

For any vertical market segment, there is a long list of different workloads or benchmarks that must be executed to certify a design. Looking at artificial intelligence (AI), machine learning (ML), and deep learning (DL), you have many different types of frameworks or workloads that must be run. The same is true for advanced driver-assistance systems (ADAS), where lots of raw sensor and sensor-fusion data must be processed. For the mobile market, we used to see AnTuTu as the main reference. With graphics, we see a lot of Kishonti-type benchmarks. The bottom line is that a wide variety of frameworks and benchmarks have to be run.

For those who paid attention to AMD’s announcement of its third-generation EPYC server, the references to SPECint and SPECfp appeared many times in the slides. Those benchmarks must be run pre-silicon while monitoring the behavior of the design. They are really the references that semiconductor companies use to position their products. They define how products must behave within the context of software workloads.

LR: Let’s move on and discuss your “Big Launch” and the specifics encompassing the launch of the Veloce Suite targeting the verification space. How is software changing the verification process?

JM: The story behind the “Big Launch” is relatively simple –– we identified three reasons driving the need for a complete and integrated suite of hardware-assisted verification tools designed around our Veloce emulation platform.

First, the software environment that we just described in terms of workload requires a massive amount of cycles to run. Billions of cycles are needed for booting operating systems, running benchmarks, and even applications.

Second, it is critical to estimate and measure power and performance while processing these massive workloads. For that you need visibility, accurate analysis and comprehensive debug tools.

And the third aspect is the size and the complexity of the SoC system that now can exceed many billions of gates. You cannot handle such complexity without a hardware-assisted platform.

We launched a suite of products built around our Veloce hardware-assisted verification platform. The first one is Veloce HYCON, our virtual platform to validate software. The second is Veloce Strato+, our new generation of emulator. The third is Veloce Primo, our brand-new enterprise FPGA prototyping engine, and the fourth is Veloce proFPGA, offered through an OEM relationship with ProDesign, to address the desktop FPGA prototyping market.

Based on this offering, we assert not only that the IC verification market needs a complete and integrated solution, but also it needs to approach verification with the context of the right tools for the right tasks. Throughout the verification cycle of an SoC design, different milestones must be met, and these different steps have different needs. Very early on, you need a virtual platform that requires virtual models to process software workloads with a tight integration to hardware emulation to take charge of RTL design blocks. At this stage, it’s all about speed. Eventually, your RTL becomes stable. Now you need to debug your RTL with full design visibility. You need different verification use models. You need scalable design capacity. All of the above needs are offered by an emulator like Veloce Strato+ that allows you to perform power analysis and gate-level emulation. As you approach tape-out, you have a gate-level netlist. Now you have to perform accurate power analysis. Only emulation like Veloce Strato+ can do the job.

We are also introducing a new concept called Emulation Offload, from our Veloce Strato+ emulator to our enterprise field programmable gate array (FPGA) prototyping tool, Veloce Primo. The concept here is that, early on, you need an emulator that provides accurate and efficient RTL debug. At some point, the design becomes more stable and less buggy, and then you need to verify at higher speed. At this stage, you want to trade off debug for speed. You can do that by offloading the design from the emulator to the enterprise prototyping tool to accelerate the verification cycle and reduce the cost of the verification task.

The last piece of the launch is our desktop FPGA prototyping with Veloce proFPGA, a single-user, smaller footprint type of prototyping tool that sits on the desk or in the lab, with easy bring up and very fast execution.

LR: Could you explain the positioning of the Siemens EDA suite of verification tools against the competition? How are they addressing the challenges you enumerated and why do you think your solution is superior to their offerings?

JM: In the highly competitive market we operate in, there are two critical factors. First is the timing. You have to be first, because when you are first, you define a direction that moves the market forward. That’s what we have accomplished with this launch. The second factor is that you have to offer a complete solution. In the past, we were known only for providing a point tool –– that is, an emulator.

With this launch, we now have all the necessary pieces, well integrated with each other where each tool is the right solution for the right task. This is how we are different from our competitors.

Let’s start with Veloce HYCON, an evolution of the traditional virtual platform and hybrid emulation offerings. It stands for HYbrid-CONfiguration, for the included set of configurable reference platforms. Via HYCON, users enjoy an end-to-end software-enabled verification and validation solution that allows them to implement a shift-left strategy by providing a hardware-assisted verification environment where software development and validation occur in parallel with hardware design and verification.

Next is our Strato+ emulation platform. Not all emulators are created equal. The foundation of what we do in Veloce Strato+ sits on three pillars.

The first pillar is the chip. We designed our Crystal chip, essentially a custom-emulator-on-chip. The second is the system hardware architecture. We also completely architect our hardware from the ground up. And the third is the software. We design the chip and the system hardware at the same time we design the software so all three are tightly coupled. When you design a new generation of a tool, you’re looking at what enhancements should be implemented.

With Veloce Strato+, we realized two types of improvement. The first improvement is in capacity. We implemented an exact 1.5X increase in capacity by starting at the source: the Crystal chip. Here we created a new 2.5D IC package. We integrated a good portion of the memory components that were previously on the board onto the substrate of the package. With this silicon innovation we were able to free up space on the board. Now, instead of installing 16 chips on a board as with Veloce Strato, with Veloce Strato+ we have 24 chips – and 24 divided by 16 gives an exact capacity increase of 1.5X.

The second improvement is in performance. Performance comes from a combination of multiple things, from speed of compilation to speed of execution, all contribute to increase the throughput. To accelerate compilation, we implemented a hierarchical flow. Today, every big design is hierarchical, which extends the benefit to virtually all SoC designs. Regarding run-time execution, some emulator providers talk about clock speed and how quickly the chips are running. That’s one aspect and an important aspect, but it’s not the only aspect. What matters is total throughput or what is known as wall-to-wall execution time. That execution time includes processing on the co-model host, plus the interaction with the system hardware. Clearly, the channel-communication architecture is a critical element to achieve the best results.

Within the system hardware, data travels from point-to-point often via a backplane, another critical component for fast execution. At some point, the design data is mapped to the boards in the emulator. Now, the clock speed of the chip becomes important. When data emerges from the chip, it propagates through the backplane to the communication channel, then to the host. The addition of all the above accounts for the total execution time. It is throughput that matters. The Veloce architecture is tailored to optimize throughput and reduce total execution time. On every step along the way, Veloce offers superior throughput to other emulation alternatives on the market.

And the third improvement is improved design debug efficiency. The Crystal 3+ chip provides 100 percent visibility. This is a fundamental advantage of the Veloce Strato platform and the roadmap for future generations of Veloce, continuing on this path of offering superior debug capability versus FPGA-based emulators.

Regarding prototyping, Veloce Primo enterprise prototyping provides five fundamental advantages.

First is performance. Performance ranges from about 7 MHz for very large designs up to 70–100 MHz for smaller, single-FPGA designs.

Second is scalable capacity, from single-FPGA user granularity all the way to 320 FPGAs, providing over 10 billion gates of design capacity, certainly enough to handle extremely large SoC designs.

Third, it has the best probe-based debug capabilities in an enterprise prototyping platform for both in-circuit emulation (ICE) and virtual environments. Full visibility is supported by reconstructing combinational values from register data. Design states can be captured at speeds up to 300 MHz. Root cause detection of hard-to-isolate bugs can be accelerated by exporting the design under test (DUT) and test environment from Veloce Primo to Veloce Strato to enjoy faster recompile-debug turnaround time and a higher level of visibility and control.

The fourth advantage is productivity through the consistency of the emulation-prototype flow making it easy for designers to migrate from emulation into prototype and to return to debug prototypes within that Veloce Strato environment. This consistency delivers the ability to have the same DUT RTL and the same virtual environment functioning in both emulation and prototyping.

And, finally, number five is the lowest total cost of ownership, a key value for Veloce Primo. We are the industry leader in both low power consumption and density as we can fit 80 FPGAs in a single 42U rack. We also deliver lights-out remote management to make the day-to-day operational support of your FPGA prototyping farm very cost effective with job scheduling and monitoring as well.

LR: It sounds like a breakthrough introduction. To conclude, could you summarize the story for our audience?

JM: Sure. A key goal of the launch was to be first. As I said before, when you are first, you’re establishing a direction in the industry. We now have defined a direction, namely, you need a complete and fully integrated solution with the right tool for the right task.

To summarize, we introduced an entire new suite of verification tools, well integrated into a consistent flow. For the first time, we provide a versatile offering in FPGA prototyping from enterprise to desktop levels.

We have delivered on time, with confidence based on our emulation roadmap. Our competitors cannot match our confidence level in delivering against a roadmap. Fully funded by Siemens, we are executing on our roadmap. The notion of having all the pieces fully integrated is clearly demonstrated in this launch. The future looks very bright for this type of complete and integrated platform.


Apple’s Cook Paints Himself into an Autonomous Corner

Apple’s Cook Paints Himself into an Autonomous Corner
by Roger C. Lanctot on 04-11-2021 at 10:00 am

Apples Cook Paints Himself into an Autonomous Corner

These days tech journalists and analysts appear confident of one thing. Apple is working on an autonomous car.

There’s a “Garbo Talks” quality to the tea leaf reading around Apple’s autonomous vehicle development efforts. The latest chapter was written with the publication of Kara Swisher’s latest “Sway” podcast episode in which she interviews Apple CEO Tim Cook.

“Is Apple’s Privacy Push Facebook’s Existential Threat?” – https://tinyurl.com/ynauapr9 – “Sway” podcast

Near the end of the episode, which largely focuses on Apple’s App Tracking Transparency initiative, Swisher takes a few shots at unearthing some details regarding Apple’s automotive efforts. She notes the comment from Elon Musk, CEO of Tesla Motors, that he had offered his company to Apple for one tenth of its value but couldn’t get a response from Apple. She also astutely notes that Apple famously eschews high-priced, high-profile acquisitions.

For his part, Cook coyly comments that he has never met Musk, though he admires him and his achievement of establishing and preserving electric vehicle leadership. He then adds:

“The autonomy itself is a core technology, in my view. If you sort of step back, in a lot of ways the car is a robot.  An autonomous car is a robot.  There’s lots of things you can do with autonomy.  We’ll see what Apple does.  We investigate so many things internally.  Many of them never see the light of day.  I’m not saying that one will not.”

Swisher: “Would it be in the form of a car, or the technology within a car?”

Cook: “I’m not going to answer that question.”

Swisher: “I think it has to be a car. It can’t be just… You’re not Google.” Swisher is referring to Google’s automotive strategy which has included injecting Android operating system software into embedded infotainment systems, enabling Android smartphone mirroring, and Alphabet’s Waymo initiative which is reliant on mass produced vehicles equipped with Waymo-developed hardware and software.

Cook: “We look to integrate hardware, software, and services and find the intersection points of those because we think that’s where the magic occurs.  And so that’s what we love to do.  We love to own the primary technology that’s around that.”

Swisher: “I’m going with car with that if you don’t mind…”

So we know Apple’s autonomous vehicle plans for what they are not rather than for what they are.  Cook won’t say.

From what he has said, though, one might presume that Apple has considered the options and has already eliminated a few. For example:

Might Apple acquire an EV startup or legacy auto maker?  Apple has no history of the massive acquisitions that would be necessary to bring the company more directly and immediately into the automotive industry, whether to manufacture human-controlled or autonomous vehicles.  Besides, any existing auto maker would likely have to be reorganized from the inside out to suit Apple’s requirements and vision.

Might Apple license its software for integration into existing mass-produced vehicles?  Cook appears intent on owning the entire hardware-software-services nexus, which would seem to rule out a licensing scheme: Apple is unlikely to be granted enough control over that nexus for either Cook or a licensing partner to be comfortable.

Might Apple create an aftermarket/add-on module for vehicles, either built in at the factory or sold retail (a la Amazon Echo Auto)?  This seems highly unlikely, a half-hearted approach to market entry.  Once again, Apple’s control over the customer experience would be limited and compromised, and such strategies and devices have seen little success.

Like Google, Apple has a foothold in the automotive industry thanks to its CarPlay smartphone mirroring and the adoption of its Siri voice recognition by multiple auto makers.  But reports from multiple media outlets point to the hiring of hundreds of engineers – including many former Tesla executives – all tasked with creating an Apple car of some kind – autonomous or not.

Apple has approximately $200B in cash on hand – precisely the kind of cash pile necessary to sustain an automotive market entry through the expensive process of tooling, hiring, plant building, and vehicle production.  Interest in an Apple car of some kind appears to be robust.

Technically, there is not much new that Apple can bring to the automotive party.  Apple has its own vision of electrical architecture, but nothing particularly revolutionary.  There is general agreement that Apple does not possess groundbreaking battery technology, in spite of speculation to the contrary.

Apple has some patents around sensing technology and experience in AI and augmented and virtual reality – but nothing particularly automotive oriented.  And reports suggest that the performance of Apple’s autonomous test vehicles in California has been unremarkable.

Apple has four assets that might help define a path to market.  Apple has emotional appeal.  Apple has its focus on privacy.  Apple has its “Think different” ethos.  And Apple has a global distribution network and a unique hands-on approach to customer service.

It’s entirely possible that Apple could bring some unconventional one or two-seat vehicles to market, in the manner of Renault’s Twizy two-seater or Daimler’s Smart cars.  These vehicles have found enthusiastic followings, though sales volumes are still well shy of the millions that would be more attractive to mass market auto makers.

There is room for innovation in electric vehicle charging.  Perhaps Apple could find success in swappable batteries where Better Place failed.

The existing excess manufacturing capacity in the automotive industry is equivalent to one-third of current production.  Much of that excess capacity is in the hands of current market leaders in the U.S.  Apple would be an ideal contract manufacturing partner to target the passenger vehicle market now being collectively abandoned by Ford Motor Company, General Motors, and Stellantis in the U.S.  So Apple has the engineering talent, it has an enthusiastic clientele, it has a distribution network, and it has a coy CEO who may be experiencing automotive hesitancy – strange to see in a time of SPAC-happy investors leaping blindly into billion-dollar EV opportunities.

If autonomous operation is the focal point for Apple’s automotive ambitions, one can expect continued vapor lock.  The path to market via robotaxis is muddled and via semi-autonomous mass market vehicles (a la General Motors Super Cruise or Tesla Autopilot) is fraught with regulatory, user experience, and technical challenges.

The final notable Apple car rumor is the 2024 timing for launch.  In the end, the market is ready and waiting for Apple, and Cook.  It seems customers are more interested in an Apple car than is Apple.


Technology Under Your Skin: 3 Challenges of Microchip Implants

by Ahmed Banafa on 04-11-2021 at 6:00 am


Technology keeps edging closer to our bodies, from the smartphones in our hands to the smartwatches on our wrists to the earbuds in our ears. Now it is getting under our skin, literally, with a tiny microchip. A human microchip implant is typically an identifying integrated-circuit device or RFID (Radio-Frequency IDentification) transponder encased in silicate glass and implanted in the body of a human being. This type of subdermal implant usually contains a unique ID number that can be linked to information in an external database, such as personal identification, law-enforcement records, medical history, medications, allergies, and contact information. [6]
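The ID-plus-external-database design described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual system: the tag value, record fields, and lookup function are all hypothetical, and a real deployment would sit behind authentication and access control.

```python
# Minimal sketch of the implant model described above: the chip stores only
# a unique ID; everything else lives in an external database keyed by that ID.
# Tag values and record fields here are hypothetical illustrations.

RECORDS = {
    "E2-00-10-4F": {
        "name": "Jane Doe",
        "allergies": ["penicillin"],
        "emergency_contact": "+1-555-0100",
    }
}

def read_tag(tag_id):
    """Resolve a scanned tag ID against the external database."""
    return RECORDS.get(tag_id)

record = read_tag("E2-00-10-4F")
print(record["name"] if record else "unknown tag")  # prints "Jane Doe"
```

Note that the chip itself carries no personal data in this model; whoever controls the external database controls what the ID reveals, which is exactly why the privacy questions below center on data ownership.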

In Sweden, thousands have had microchips inserted into their hands. The chips are designed to speed up users’ daily routines and make their lives more convenient — accessing their homes, offices and gyms is as easy as swiping their hands against digital readers. Chips also can be used to store emergency contact details, social media profiles or e-tickets for events and rail journeys. [2]

Advocates of the tiny chips say they’re safe and largely protected from hacking, but scientists are raising privacy concerns around the kind of personal health data that might be stored on the devices. Around the size of a grain of rice, the chips typically are inserted into the skin just above each user’s thumb, using a syringe similar to that used for giving vaccinations. Implanting chips in humans has privacy and security implications that go well beyond cameras in public places, facial recognition, tracking of our locations, our driving habits, our spending histories, and even beyond ownership of your data, which poses great challenges for the acceptance of this technology. [1][2]

To understand the big picture, it helps to know that these chips are an extension of the Internet of Things (IoT), a universe of connected things that keeps growing by the minute: over 30 billion connected devices at the end of 2020, with 75 billion projected by 2025. Just as the world begins to understand the many benefits of the Internet of Things, it is also learning about the ‘dark side’ of ‘smart everything,’ including our connected cities; now these small chips are creating major new privacy challenges. [1][5][7]

Like any new trend, in order to be accepted and become mainstream, this one must overcome three challenges: Technology, Business, and Society (regulations and laws).

The first challenge is Technology, which advances every day as the chips get smaller and smarter. In the world of IoT, the chip is the first element of a typical IoT system, which consists of sensors, networks, the cloud, and applications. As a sensor, the chip touches your hand, your heart, your brain, and the rest of your body, literally. This development gives a very different meaning to ‘hacking the body,’ or biohacking. While cyber experts continue to worry about protecting critical infrastructure and mitigating security risks that could harm the economy or cause loss of life, implanted chips also affect health directly, adding new dimensions to the risks and threats of hacked sensors, which are considered the weakest link in IoT systems. [1]
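The four-element chain named above can be sketched as a simple pipeline. The stage names mirror the article’s list (sensors, networks, cloud, applications); the data values and the door-unlock application are invented for illustration, and the comments flag where the sensor stage becomes the weakest link.

```python
# Sketch of the four-element IoT pipeline described above:
# Sensor -> Network -> Cloud -> Application.
# An implanted chip sits at the sensor stage; every later stage
# trusts whatever the sensor emits, which is why it is the weakest link.

def sensor_read():
    # The implant emits only its raw ID (hypothetical value).
    return {"tag_id": "E2-00-10-4F"}

def network_transmit(packet):
    # In a real system this hop is where an unencrypted tag ID
    # could be sniffed or replayed by an attacker.
    return dict(packet)

def cloud_enrich(packet):
    # The cloud side joins the ID with stored personal data
    # (hypothetical lookup result).
    packet["owner"] = "Jane Doe"
    return packet

def application(packet):
    # The application acts on the enriched data, e.g. access control.
    return f"Door unlocked for {packet['owner']}"

result = application(cloud_enrich(network_transmit(sensor_read())))
print(result)  # Door unlocked for Jane Doe
```

The point of the sketch is structural: a cloned or replayed tag ID at the sensor stage would pass through every downstream stage unchallenged unless the system adds authentication beyond the bare ID.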

The second challenge is Business. Many companies already operate in this field, and the opportunities are huge: chips could replace ID cards in stores, offices, airports, and hospitals, to mention just a few uses. Chips will also provide key physical data, and processing that data in the cloud to deliver business insights, new treatments, and better services presents a huge opportunity for players in all types of businesses and industries, in both the private and public sectors. [5]

The third challenge is Society. Just as individuals grapple with the privacy and security implications of technologies like IoT, big data, public- and private-sector data breaches, social media sharing, GDPR, California’s new CCPA privacy law, data ownership, and “right to be forgotten” provisions, along comes a set of technologies far more personal than your smartphone or cloud storage history. The tiny chip under your skin sits at the top of that list, posing new risks and threats. [1]

This challenge can be divided into two tracks: government regulation, such as GDPR in the EU and recent US regulations banning forced use of the chip, and consumer trust, which is built on three pillars, SSP (Security, Safety, and Privacy):

Safety is a major concern with tiny chips inside the body, including infection risks, interference with MRI scans, and corrosion of the chip’s components.

Security and privacy concerns include stolen identity and risks to human freedom and autonomy, to mention just a few. [6]

This technology is promising, another step toward convenience and the simplification of daily tasks for billions of people around the world. But without solid security, safety, and privacy measures, this tiny chip will create a cybersecurity nightmare with far-reaching consequences, in addition to an ethical dilemma: people who refuse to use it may be marginalized, for instance in the job market. According to a recent survey of employees in the United States and Europe, two-thirds believe that by 2035 humans with chips implanted in their bodies will have an unfair advantage in the labor market. One big concern raised by many privacy advocates is the creation of a surveillance state that tracks individuals using this technology. [3]

This technology has too many moving parts. Until we answer all the questions it raises, many people will see it as yet another attempt by governments and businesses to gain access to one more piece of data about us, adding to the many channels already used to gather information from our electronic devices, especially knowing that by 2030 there will be an average of 15 IoT devices for each person in the US. [7]

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Read more articles at: Prof. Banafa website