
The Evolution of the Extension Implant Part IV

by Daniel Nenni on 05-10-2019 at 2:00 pm

Perhaps the most innovative and effective Extension implant does not involve an implant at all, but is instead an etch followed by a selective epitaxial deposition.

In this Extension fabrication methodology, the Source/Drain regions in a planar device are etched away in the normal fashion to accommodate the replacement Source/Drain stressor material (SiGe for the PMOS device and SiC for the NMOS device). This etch is sometimes called the “Sigma” etch. It begins with an HF etch to remove any native oxide present on the surface of the silicon. (Note that the Halo implant is already in place.)

Next, the Source/Drain regions are etched out using a wet etch of NH₄OH. Alternate etch chemistries that will also work for this task are NH₃OH, TMAH, KOH, NaOH, BTMH or amine-based etchants.

Initially, the wet etch creates a facet in the lateral direction for a short distance in the channel region beneath the spacer and the gate dielectric and along the {010} plane (refer to Figure #1). This is followed by the formation in the Source/Drain region of an angled facet along the {111} plane. This etch chemistry is substantially preferential in the {111} crystallographic plane and therefore the silicon is etched more deeply in that direction.

The hard mask on the top of the Gate Electrode protects the polysilicon during this wet etch.

Figure #1

The {010} facet creates a precisely shaped cavity that defines the Extension region of the transistor; once filled with N-doped silicon, it becomes the Extension itself.

The Source/Drain cavities are then filled with Epitaxially deposited SiC or simply conventional Silicon that is in-situ doped with Phosphorus (refer to Figure #2).

Figure #2

In this methodology the Extension of the transistor is defined by the precisely etched {010} undercut of the gate dielectric, as illustrated. It is claimed that this well-defined structure provides superior dimensional control compared to an implanted Extension, as well as improved short channel effects.

Figure #3 displays the resulting structure when this technique is employed with two adjacent Gate Electrodes. In this instance the wet etch is self-limiting.

Figure #3

After forming the {010} facet, the wet etch will progress along the two {111} facets of the two adjacent transistors until the two {111} facets meet and form a “V” shape. Because of the etch’s directional preference for the {111} plane, once the facets from the two transistors join up to form a “V”, the rate at which the etch proceeds into the substrate decreases. In this respect the etch is self-limiting.

In addition, the depth of the joined facets formed by the “V” can be controlled through the pitch of the adjacent transistors.
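
The self-limiting geometry lends itself to a quick back-of-the-envelope check: in silicon, {111} planes meet the (001) wafer surface at roughly 54.74°, so the two facets intersect at a depth fixed by half the etch opening. A minimal sketch in Python (the 60 nm opening is a hypothetical value, not taken from the article):

```python
import math

# Angle between a {111} facet and the (001) wafer surface in silicon
THETA_111_DEG = 54.74

def v_groove_depth(opening_nm: float) -> float:
    """Depth at which the two {111} facets meet for a given etch opening.

    The facets slope inward at ~54.74 degrees, so they intersect at
    depth = (opening / 2) * tan(54.74 deg).
    """
    return (opening_nm / 2.0) * math.tan(math.radians(THETA_111_DEG))

# Hypothetical 60 nm opening between two adjacent gates
print(round(v_groove_depth(60.0), 1))  # ~42.4 nm
```

Because the depth scales linearly with the opening, tightening the pitch of adjacent transistors directly shallows the “V”, which is exactly the control knob described above.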

The Source/Drain cavities are then filled with N-doped SiC or with Epitaxially deposited Silicon that is in-situ doped with Phosphorus (refer to Figure #4).

Figure #4

The Extension of each transistor is now defined by the precisely etched {010} undercut of the gate dielectric, as illustrated. Thus, the Extension is formed without an implant operation, which reduces process complexity and produces a superior Extension structure. It is also claimed that this well-defined Extension provides superior dimensional control compared to an implanted Extension, as well as improved short channel effects.

This content was authored by Jerry Healey of Threshold Systems Inc. For detailed information on the entire process flows for the 10/7/5nm nodes, attend the course “Advanced CMOS Technology 2019” to be held on May 22-24 in Milpitas, California.

Also read: The Evolution of the Extension Implant Part III


Three Reasons Why You Should NOT Miss the 56th DAC

by Daniel Nenni on 05-10-2019 at 7:00 am

Reason number ONE: The next five DACs will be in San Francisco and this will probably be the last one held in Las Vegas, so you absolutely do NOT want to miss it. One of my most memorable DACs was in Las Vegas in 1985. My wife came with me for our second honeymoon and, by definition, it was just that, a honeymoon. This year we will probably spend more time at the conference but you never know, it is Las Vegas.

Reason number TWO: The release of an updated version of our groundbreaking book “Fabless: The Transformation of the Semiconductor Industry”. It has grown by more than 60 pages and includes updates from eSilicon, Synopsys, Mentor (a Siemens Business), Cadence, ARM, and new “In Their Own Words” entries from Achronix, Methodics, and Wave/MIPS. Also included are industry updates on FPGA, Foundry, EDA, IP, TSMC, GLOBALFOUNDRIES, and a new subchapter on IP Management. Most importantly, there is a NEW Chapter 8, “What’s Next for the Semiconductor Industry,” written by EDA icon Dr. Walden Rhines.

At the 56th DAC I will be speaking in booth #1321 and signing copies of the book courtesy of Methodics. A free PDF version will be available on SemiWiki.com after the conference.

Reason number THREE: The program. Rob Aitken is the Chairperson and he definitely thinks outside of the box, as you can see from the keynotes. Rob is a great guy; if you have not met him you should, as he is a tireless contributor to the semiconductor industry. Rob and I talked about the keynotes and I can’t remember a better line-up. Seriously, Thomas Dolby?!?! I grew up with his music and I am always up for a DARPA talk. Reverse engineering the brain? I hope I remember to attend that one!

Game Changers: How Automation Has Changed the Gaming Industry
Monday, June 3 | 9:00am – 9:20am
Mark Yoseloff, PhD – Univ. of Nevada, Las Vegas, NV

Cutting Edge AI: Fundamentals of Lifelong Learning and Generalization
Monday, June 3 | 1:00pm – 1:45pm |
Hava T. Siegelmann, Defense Advanced Research Projects Agency (DARPA)

Hors D’Oeuvres from Chaos
Tuesday, June 4 | 9:20am – 10:00am
Thomas Dolby, Musician, Producer & Innovator, Johns Hopkins Univ.
Thomas Dolby – Youtube

March of the Machines – Building Ethical AI
Tuesday, June 4 | 1:00pm – 1:45pm |
Carolyn Herzog – Arm, Ltd

From student project to tackling the major challenges in realizing safe & sustainable electric vehicles
Wednesday, June 5 | 9:20am – 10:00am |
Bas Verkaik, Founder, SPIKE

The Memory Futures
Wednesday, June 5 | 1:00pm – 1:45pm |
Gurtej S. Sandhu, Micron Technology, Inc.

Reverse Engineering Visual Intelligence
Thursday, June 6 | 9:20am – 10:00am | N250
James DiCarlo, MD, PhD, Massachusetts Institute of Technology

RESEARCH TRACK HIGHLIGHTS
To accommodate this year’s large number of accepted papers, we have organized forty-four technical sessions conducted in five daily parallel tracks. A few highlights from this year’s conference include twenty-two papers (four sessions) on machine-learning and artificial-intelligence architectures. For example, session 31, Emerging Technologies Meet Intelligent Machines, highlights recent advances in emerging device technologies for hardware implementation of neural networks, while session 61, ET meets AI: Emerging Technologies for Accelerating AI, explores emerging memory technologies such as RRAM and 3D die-stacked memory, and novel computing ideas such as stochastic computing to achieve low power consumption while improving performance and maintaining sufficient accuracy.
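
As a side note on one of the ideas mentioned above: stochastic computing encodes a value as the probability of seeing a 1 in a random bitstream, which lets multiplication be done with a single AND gate per bit at very low power. A toy Python illustration of that encoding (not drawn from any of the accepted papers):

```python
import random

def to_stream(p: float, n: int, rng: random.Random) -> list[int]:
    """Encode a value p in [0, 1] as an n-bit stochastic bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_stream(bits: list[int]) -> float:
    """Decode a bitstream: the value is simply the fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
a = to_stream(0.8, n, rng)   # represents 0.8
b = to_stream(0.5, n, rng)   # represents 0.5

# Multiplication of two independent streams is a bitwise AND
product = [x & y for x, y in zip(a, b)]
print(round(from_stream(product), 2))  # close to 0.8 * 0.5 = 0.4
```

The accuracy is only statistical, improving with stream length, which is the trade-off these papers tune against power and performance.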

Other highlights from this year’s conference include twenty-eight papers (five sessions) dedicated to hardware, embedded and cross-layer security. For example, session 41, Hide and Seek: Encryption and Obfuscation, presents methods and tools for quantum-resilient cryptography and piracy-resilient hardware, while session 52, Secure and Private Embedded System Design, focuses on security and privacy of future embedded systems in the context of consumer privacy solutions and secure communication/computation.

In terms of core EDA technical paper highlights, RTL and high-level synthesis continues to be a popular research area with twelve papers (four sessions). For example, session 34, What happens in logic synthesis stays in logic synthesis, presents advances in traditional logic synthesis and emerging applications.

This year, physical design and verification, lithography and DFM continues to be another popular core EDA topic area with sixteen accepted papers (three sessions). For example, in session 64, Deep Manufacturing: Design, Data, and Machine Learning, you will learn how deep learning is opening doors to new approaches for Design for Manufacturing.

For a complete list of technical sessions, please visit this year’s online DAC program.

Research Panel Highlights:
Seven research panels, covering machine learning, security/privacy, autonomous systems, architecture, design and CAD, provide exciting opportunities to hear internationally renowned experts from industry, academia and government debate critical and controversial topics, such as:

  • Will revolutionary computing hardware and software such as Quantum Computing lead a major paradigm shift for future EDA?
  • Is open-source EDA making a comeback to lead the future or is it just another hype?
  • Will open-source hardware live up to the promise of offering better security against malicious attacks than traditional proprietary architectures?
  • Will in-memory computing for machine learning ever become reality or is it all just a fallacy?
  • Can we ever trust AI to provide robust cybersecurity given that the machine learning models and algorithms are themselves vulnerable to adversarial manipulation?

And please someone have an Elvis impersonator in your booth…. Viva Las Vegas!


ESD Alliance CEO Outlook Features Powerhouse Lineup

by Bob Smith on 05-09-2019 at 2:00 pm

Just two more weeks before the 2019 CEO Outlook Thursday, May 23, at SEMI. If you haven’t registered yet, do so today. We’re expecting a full house as a result of our powerhouse lineup and networking opportunities.

That lineup includes Ed Sperling, editor in chief of Semiconductor Engineering, who will serve as moderator. Panelists will be John Chong, vice president of product and business development for Kionix, Jack Harding, president and CEO of eSilicon, John Kibarian, PDF Solutions’ president and CEO, and Wally Rhines, CEO emeritus of Mentor, a Siemens Business.

The panel’s unfamiliar composition is intentional. We are acknowledging our move into SEMI, where the focus is on the entire electronic product design and manufacturing chain, not just electronic system design. Two of the panelists’ companies are part of our Governing Council, while the other two, eSilicon and Kionix, a division of Rohm, have experience in other supply chain segments. Kionix is the third largest supplier of MEMS devices to the electronics industry. eSilicon is a fabless semiconductor design company and a leader in FinFET ASIC design and 2.5D packaging integration.

The evening begins at 6 p.m. with networking, dinner and drinks. The panel discussion begins at 7 p.m. and goes until 8:30 p.m. For anyone who wants to beat the traffic to Milpitas, registration opens at 5:30 p.m. Everyone from the electronic system and semiconductor design ecosystem is welcome to attend free of charge, though advance registration is required.

SEMI is located at 673 S. Milpitas Boulevard, Milpitas, Calif.

ESD Alliance’s 10-Member Governing Council
Results are in, votes are counted and we offer a hearty welcome to our 10-member Governing Council who will serve a two-year term.

Returning Governing Council members are:

  • Aart de Geus, chairman and co-CEO of Synopsys, Inc.
  • Dean Drako, president and CEO at IC Manage
  • John Kibarian
  • Wally Rhines
  • Simon Segars, CEO at Arm
  • Lip-Bu Tan, CEO of Cadence Design Systems

New Governing Council members are Raik Brinkmann, president and CEO of OneSpin Solutions; Prakash Narain, president and CEO of Real Intent; and David Dutton, CEO of Silvaco. Congratulations to all!

As executive director of the ESD Alliance, I am a member of the council as well.

ESD Alliance’s ES Design West
We’re working hard to make the inaugural ES Design West, July 9-11, co-located with SEMICON West 2019 at San Francisco’s Moscone Center South Hall, a success. As its host, we are working diligently to showcase the design ecosystem’s innovation and commercial successes, from IP, EDA and embedded software to design services, design infrastructure and the cloud. And we are continuing to see more companies sign up to exhibit at this inaugural event.

The Advisory Council has done an outstanding job of lining up topics and speakers that will satisfy a broad range of interests. Attendees can expect presentations and panel discussions showcasing electronic system design, its business achievements, commercial technological accomplishments and role in the broader electronics manufacturing supply chain. Exhibitors are enthusiastic and preparing for a great event. The result will be an event that enables and accelerates conversations, information exchange and collaboration to address common issues, challenges and opportunities that move new electronic products from concept to consumer.

Details can be found at the ES Design West website.

Follow ES Design West on Twitter: #ESDesignWest and @ESDAlliance


Bottom of a Semiconductor Canoe Cycle Shape

by Robert Maire on 05-09-2019 at 12:00 pm

Nice numbers despite the cycle bottom
KLA put up EPS of $1.80 versus the street’s $1.67 on revenues of $1.097B versus the street’s $1.08B. However, guidance was weaker than the street was hoping for, with a range of $1.21B to $1.29B in revenues generating between $1.55 and $1.85 in non-GAAP EPS. This compares to current street estimates of $1.21B in revenues and EPS of $1.88.

Obviously the inclusion of Orbotech added slightly to the complexity, but it seems fairly straightforward.

Given that the stock was down in the aftermarket, it’s safe to assume that investors were somewhat underwhelmed by the June quarter guidance.

March is the bottom of a “canoe” shaped down cycle
We have talked a lot about the shape of the semiconductor cycle and it seems fairly clear that we are not in a “V” shaped bottom, nor even a “U” shaped bottom, but rather an extended, flattish bottom shaped a bit like a “canoe”. Management confirmed their prior view that March is the likely bottom, but the bottom is very flat and we are not seeing a “bounce” off the bottom but rather a slow and shallow recovery.

To be clear, this cycle shape is not unique to KLA as the whole industry is in this soft, murky bottom waiting for memory pricing to stabilize and excess supply to get sopped up so that memory spending can recover. Until then the industry will bump along at lower levels subsisting primarily on foundry and logic spending.

Orbotech Upside
KLA’s acquisition of Orbotech couldn’t have come at a better time as the diversity away from mainstream, core, semiconductor tools, will help during the down cycle. The expected roughly $50M in synergies should be easily achieved.

Orbotech gives KLA more diversity than Lam, AMAT or ASML. AMAT does have the display business, but the display business makes the semiconductor business look stable by comparison.

While Orbotech is certainly not large enough to offset all of the cyclicality of the core semiconductor business, it will nonetheless dampen the volatility.

China Downside
Of the group, KLA has one of the highest exposures to China. The recent negative news about the China trade deal sounds like things have gone backwards. While KLA is critical to China’s chip business and tariffs don’t YET affect exports, the worsening trade environment is an added risk that investors will likely be concerned about. KLA could be collateral damage if things escalate.

Yield management still better than process
In our view, yield management remains the best segment in the semiconductor tool space. Process tools, especially dep & etch, are most impacted by the memory slowdown and KLA’s traditional bias toward foundry & logic helps given that those sectors are the ones spending the most money right now.

A slow ramp into H2
It sounds like things will slowly improve into the second half but no real bounce back, just continued improvement. Our guess is that the balance of 2019 remains weak with some marginal improvements until we get a true recovery in the memory business.

It’s important for investors to remember that even if memory pricing increases, due to capacity being artificially taken offline, spending will not recover until excess capacity has been used up and demand gets back on track.

This means that even though memory pricing may stabilize or get better, it could still take a few quarters before that trickles down to tool makers.

Will investors get impatient?
We have asked this question before as the stocks have flown way ahead of reality. While we think KLA remains one of the best of the group, given its positioning and now new diversity, we think part of the negative reaction in the aftermarket is that perhaps investors are hoping for a faster recovery that is not coming.

The stocks
In general, we don’t see KLA’s report as being much different than expected with perhaps a bit more tepid outlook. We have suggested this will be a slower recovery and investors may get more unhappy and frustrated as time goes by.

The negative China news today will also obviously weigh on the group and could get worse through the week. This has the potential to become a catalyst.

Our view remains that most of the stocks are ahead of themselves and have a lot of air that could quickly come out with negative news points (like China).

Of the large cap stocks, KLA is perhaps our favorite, but we see no reason to pay the currently overly high valuation in the face of a slow lumpy recovery coupled with other risks.


Meeting Automotive IC Design Challenges for Safety using On-Chip Sensors

by Daniel Payne on 05-09-2019 at 7:00 am

I’ve been driving cars since 1978 and have even done a few DIY repairs in the garage, so I know how warm the engine compartment, transmission or exhaust system can become. That environment makes automotive IC design rather unique in terms of the high temperature and voltage ranges that an electronic component is subjected to. Our safety while driving a car is paramount, so automotive designers have a big responsibility to manage the electronic subsystems that are hidden from view. Recent advances in ADAS (Advanced Driver Assistance Systems) by the major automotive companies, along with EV startups like Tesla, are also adding an unprecedented number of ICs and sensors like RADAR and LiDAR to our vehicles.


ADAS Features. Source: Robotics & Automation

Ideally then, at the chip-level, your designers would like to know what the Process variation, Voltage levels and local Temperatures (PVT) are so that they can control the chip operation, keeping it operating safely and within specifications, instead of failing from heat-induced electromigration failures or supply voltages out of spec. Let me just summarize some of the automotive IC design challenges:

  • Reliability
  • Adherence to standards like ISO 26262
  • Long term commitment from suppliers
  • Monitoring aging effects
  • Drift
  • Safety
  • Long development cycles

ISO 26262. Source: National Instruments

ITRI Industrial International forecasts an 11.9% annual growth rate for automotive electronics from 2017 to 2022, so the IC design community is highly motivated to design new chips to meet the challenges of ADAS. On the infotainment side, our cars are becoming mobile hotspots, enabling us to enjoy non-stop smart phone use along with new ways of controlling the car dashboard with our voice or by touching a screen.

Chips inside of cars can use bleeding edge 7nm silicon from Samsung or TSMC, all the way up to mature 180nm nodes. The smaller the node, the greater the impact process variation has on reliability. If you knew which process corner each block was operating under, then you could take design steps to control the frequency and voltage levels in order to stay within your power spec, for example.

Thermal effects continue to be important for automotive ICs:

  • FinFET structures are less able to dissipate heat than planar CMOS
  • Increased density is leading to increased thermal challenges
  • Electrical OverStress (EOS)
  • Electromigration (EM)
  • Hot carrier aging
  • Increased Negative Bias Temperature Instability (NBTI)
  • Device leakage causes heat and heat causes more leakage (Thermal runaway)
  • Leakage increases when we move from one FinFET node to the next smaller node

Ashish Kumar Gupta of Freescale Semiconductor summarizes thermal concerns: “Designers face new challenges of providing thermal-efficient systems that balance or equally distribute possible on-chip hot spots. In this scenario, Dynamic Temperature Management (DTM) techniques arise as a promising solution. DTM relies on accurately sensing and managing on-chip temperature, both in space and time, by optimally allocating smart temperature sensors in the silicon.”

Fortunately for new chip designs targeted at automotive, you don’t have to create your own semiconductor IP for PVT monitoring, because there’s a vendor focused solely on PVT monitoring fabrics: Moortec. They are members of the TSMC IP Alliance and have over a decade of experience in this domain. Their in-chip monitoring subsystem IP is silicon proven at 40nm, 28nm, 16nm, 12nm and 7nm, so that’s a wide range to choose from. In addition, the IP that Moortec supplies to TSMC users has passed the rigors of the TSMC9000 quality program.


Source: TSMC

Engineers want to know how all of the pieces fit together for IP, so here’s a diagram that shows the concept of connecting multiple PVT monitors and a subsystem to control them.


PVT Sub-system

For automobiles the environmental temperature range is typically -40C to 125C, but the junction temperature of the IC is going to be even hotter than 125C worst case, based on the number of transistors, process node, operating frequency and voltage levels. Having multiple Temperature monitors on-chip is a wise choice for managing the thermal specification. As a chip reaches its thermal limits, the control logic can be used to lower voltage levels, decrease frequency, or a little of both.
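
The throttle-on-temperature behavior described above amounts to a small control loop: read the hottest on-chip monitor each tick and step the operating point down as the junction temperature approaches the limit, stepping back up when there is margin. The Python sketch below is a hypothetical illustration; the operating points and thresholds are invented for this example and are not Moortec’s actual controller:

```python
# Hypothetical DVFS operating points: (frequency_mhz, core_voltage_v), fastest first
OPERATING_POINTS = [(2000, 0.90), (1600, 0.85), (1200, 0.80), (800, 0.75)]

T_THROTTLE = 115.0  # degC: start stepping down well before the 125 degC limit
T_RECOVER = 100.0   # degC: enough margin to step back up

def next_operating_point(current_idx: int, max_junction_temp: float) -> int:
    """Pick next tick's operating point from the hottest sensor reading."""
    if max_junction_temp >= T_THROTTLE and current_idx < len(OPERATING_POINTS) - 1:
        return current_idx + 1   # too hot: drop one step in frequency/voltage
    if max_junction_temp <= T_RECOVER and current_idx > 0:
        return current_idx - 1   # comfortable margin: step back up
    return current_idx           # hold

# Example: hottest of several on-chip monitors reads 118 degC at full speed
idx = next_operating_point(0, 118.0)
print(OPERATING_POINTS[idx])  # -> (1600, 0.85)
```

The hysteresis band between the two thresholds keeps the chip from oscillating between operating points when the temperature hovers near the limit.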

As IC designers identify thermal hotspots in the layout, engineers can judiciously place Thermal monitors around the chip in order to measure junction temperatures in real time, then take corrective action when needed.

Summary
The process variation, voltage variation and thermal challenges of designing automotive ICs can be met by placing multiple PVT monitor IP blocks as a fabric across your chip. Moortec is the leader in PVT monitoring subsystems from 40nm through 7nm nodes and has plenty of silicon proven results and use case experience, so that you can quickly use their IP and control how your chips react to variations, keeping them safe and operating within spec.

Related Blogs


eSilicon ASICs all in the Google Cloud

by Daniel Nenni on 05-08-2019 at 12:00 pm

Having just completed a cloud evaluation for SemiWiki I can tell you why eSilicon chose Google. Simply put, they are working harder to get cloud business. Google ($4B) is the number five cloud provider behind Microsoft ($21.2B), Amazon ($20.4B), IBM ($10.3B) and Oracle ($6.08B). There is a lot of money in the cloud and a lot more to come which is why cloud providers are designing their own chips and partnering with ASIC providers like eSilicon to get that competitive edge. Speaking of ASICs in the cloud:

eSilicon Signs Multi-Year Agreement with Google Cloud
Under the terms of the agreement, Google Cloud will provide support from their professional services team to assist eSilicon as it moves its ASIC and IP design workloads to GCP. eSilicon has been running a hybrid on-premise/cloud environment for approximately the last 18 months, with ASIC design running on premise and IP design running primarily on GCP. This new agreement paves the way for a complete migration of all design activity to GCP.

“Google Cloud has demonstrated the resources, technical depth and domain knowledge to successfully move IC design to GCP, a significant undertaking,” said Naidu Annamaneni, CIO & vice president of global IT at eSilicon. “There are many unique requirements to support this kind of workload on the cloud and the need to collaborate with several infrastructure vendors to create the complete solution. Google Cloud possessed the domain knowledge and operational focus, backed by a substantial worldwide computing capability to get the job done.”

“Moving to the cloud provides the flexibility to build the right compute environment for each design project, resulting in improvements in time-to-market and design quality,” said Mike Gianfagna, vice president of marketing at eSilicon. “This is a substantial project with a lot of innovation and many partners. We’ll be talking more about this fast-paced program as it unfolds over the coming months.”

The big news here is that eSilicon is committing to move ALL ASIC and IP design to the Google Cloud Platform (GCP). They have been operating in a hybrid cloud environment for about a year and a half and now they are taking the next big step. Most of the chip design cloud activity today is hybrid so eSilicon is blazing trails here.

When it’s all said and done, there will be very few computers left inside eSilicon, with all major infrastructure served in a flexible, cost effective, and highly secure way with GCP. Cost and security are critical, but it’s the flexibility that makes the cloud a no-brainer for chip designers.

Remember, chip design is spiky with regard to compute resources. Processor speed, memory and storage requirements can vary a great deal, especially during the tapeout phase. There is no way an on-premise server farm can be competitive with a cloud offering during a tape-out. eSilicon can now apply the required compute resources needed to meet schedules, perform rigorous verification and deliver world-class ASICs. This ability to apply massive compute resources only when needed is unique to the cloud. The result is a higher quality design and all-important first-time-right silicon, which is make-or-break for the ultra-competitive ASIC business.

The downside of the cloud is that you get what you ask for, so you had better be careful what you ask for. For example, SemiWiki’s cloud resource requirements are easily predicted. Google cloud load balancing is as simple as adding a CPU, memory, or disk in a matter of clicks. eSilicon, on the other hand, is applying machine learning algorithms to an intelligent orchestration layer that assesses the needs of a specific design project with regard to compute resources. It also manages costs based on project budgets, from IP to full chip design. Expert resource management is a normal part of the ASIC business, but with ML and the cloud it will be a completely different type of business, absolutely.
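
The economics behind the spiky-workload argument are easy to illustrate: an on-premise farm must be sized for the tape-out peak all the time, while elastic cloud capacity bills only for the hours actually used. A toy Python comparison (all demand figures and rates are made up for illustration):

```python
# Hypothetical weekly demand in node-hours: quiet weeks, then a tape-out spike
demand = [100, 120, 110, 130, 2000, 1800, 150, 120]

ON_PREM_RATE = 0.6  # $/node-hour (made-up): cheaper per hour, but always on
CLOUD_RATE = 1.0    # $/node-hour (made-up): pricier per hour, pay per use

# On-premise must provision for the peak week, every week
on_prem_cost = max(demand) * ON_PREM_RATE * len(demand)

# Elastic cloud capacity bills only the hours actually consumed
cloud_cost = sum(demand) * CLOUD_RATE

print(f"on-prem ${on_prem_cost:,.0f} vs cloud ${cloud_cost:,.0f}")
```

With these made-up numbers the elastic option wins comfortably despite a higher hourly rate; the spikier the demand, the larger the gap.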

About eSilicon
eSilicon provides complex FinFET ASICs, market-specific IP platforms and advanced 2.5D packaging solutions. Our ASIC-proven, differentiating IP includes highly configurable 7nm 56G/112G SerDes plus networking-optimized 16/14/7nm FinFET IP platforms featuring HBM2 PHY, TCAM, specialized memory compilers and I/O libraries. Our neuASIC™ platform provides AI-specific IP and a modular design methodology to create adaptable, highly efficient AI ASICs. eSilicon serves the high-bandwidth networking, high-performance computing, AI and 5G infrastructure markets. www.esilicon.com


Anirudh Keynote at CDNLive 2019

by Bernard Murphy on 05-08-2019 at 7:00 am

Anirudh Devgan, President of Cadence, gave the third keynote at CDNLive Silicon Valley this year. He has clearly become adept in this role. He has a big, but supportable, vision for Cadence across markets and technologies and he’s become a master of the annual tech reveals that I usually associate with keynotes.


Anirudh opened with factors driving system design across major verticals: aero and defense, datacenters, mobile, auto and industrial. These drive trends in distributed and cloud-computing, 5G and edge computing (5G, until recently a future prospect, is catching up fast), automotive and industrial disruption, machine learning and deep learning. And in support of all this, the need for more complete optimization across systems, and always improving design excellence and productivity.

Cadence is organizing its approach to these new opportunities through SDE (System Design Enablement) 2.0, a three-level attack through support of design excellence (EDA and IP), System Innovation (a new area) and Pervasive Intelligence (also a new area). All of this leverages Cadence core-competence in computational software, the CS plus math expertise that underlies most EDA technologies. To Anirudh this is very important; Cadence needs to build and grow around existing core-competencies. Since a lot of system analysis requires computational software, as does ML, these are reasonable directions.

This requires a larger view of systems design because customers in all of these markets are now expecting to optimize complete systems, down into the chip/multi-chip design. It also requires an expanded view of computation, embracing the rapid ascendance in support of and use of AI technologies. This doesn’t mean that investment in “conventional” EDA takes a back seat. At the more advanced process nodes, tools must continue to progress in capability. Performance, and coupling with foundries becomes even more essential.

Stepping back for a second from new and shiny stuff, Anirudh had to toot Cadence’s (and his) horn on the dominance of the Cadence digital flow – 20% better PPA than alternatives in the full flow and over a hundred 7nm tapeouts.

Back to the new stuff, first Machine Learning. Anirudh breaks this up into Inside (ML inside a tool), Outside (e.g., ML optimizing a flow for improved PPA) and Enablement (e.g., the support for customer ML objectives through the Tensilica IP). As examples of Inside, a tool looks more or less the same to a user but runs faster or delivers a better result; he cited Tempus as an example where these capabilities are already available.

An example of Outside ML is an iterative flow around Genus and Innovus, using learning gained through earlier runs to improve subsequent runs. Cadence has shown cases where they can get improvements in TNS of 10-20% using such flows. A somewhat different example is the use of ML in optimizing PCB automated routing in Allegro, where design times can be significantly reduced.
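
Conceptually, an “Outside” ML flow wraps the tools in a loop: run the flow with one set of knob settings, record the resulting TNS, and bias the next run toward the best settings seen so far. The Python sketch below is a deliberately simplified stand-in; the knobs, the run_flow() surrogate and the sampling strategy are all invented for illustration and say nothing about Cadence’s actual implementation:

```python
import random

def run_flow(effort: float, density: float) -> float:
    """Stand-in for a full synthesis/P&R run: returns total negative slack (ps).

    A real run takes hours; this toy surrogate just gives the loop something
    to optimize (best result near effort=0.8, density=0.7).
    """
    return -abs(effort - 0.8) * 500 - abs(density - 0.7) * 300

def clamp(x: float) -> float:
    return min(1.0, max(0.0, x))

rng = random.Random(42)
best_knobs, best_tns = None, float("-inf")

for _ in range(20):
    if best_knobs is None:
        knobs = (rng.random(), rng.random())          # first run: random start
    else:
        # "Learn" from earlier runs: sample near the best settings so far
        knobs = (clamp(best_knobs[0] + rng.gauss(0, 0.1)),
                 clamp(best_knobs[1] + rng.gauss(0, 0.1)))
    tns = run_flow(*knobs)
    if tns > best_tns:
        best_knobs, best_tns = knobs, tns

print(f"best TNS {best_tns:.0f} ps at effort={best_knobs[0]:.2f}, "
      f"density={best_knobs[1]:.2f}")
```

Production systems would replace the random-perturbation step with a trained model and the surrogate with real tool runs, but the loop structure is the same.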

In System Innovation, one of Anirudh’s big reveals was the Clarity 3D solver, from die to package and system. I won’t spend much time on this because Tom Simon wrote about Clarity earlier, but some of the use-cases Anirudh cited may be new – an automotive application with LPDDR4, package, PCB and DIMM connector, a datacenter application for a 112G connection from server to cable to server, and a 5G handset application analyzing fanout wafer-level packaging through 40 DDR signals and power. All of this of course in distributed (and elastic) compute. Anirudh cited this as a good example of leveraging core expertise – building on Cadence know-how in solving matrices – not a skill you would find among most big-data experts.

The other big systems initiative is the partnership with Green Hills software; Anirudh noted that this has Cadence starting to play a role in a $4.5B system analysis market. Coupling that expertise together with early software bring-up and debug on the Palladium and Protium platforms enables analysis and optimization to move into pre-silicon design and further supports the total system objective that so many of the verticals are now finding essential.

I for one am eager to see support of electronic design expand beyond the narrow bounds of package and board and embrace at least some of the latest and greatest trends in tech. We need to play a bigger role in applications innovation. Cadence certainly seems to be taking some interesting steps in that direction.


When Artificial Intelligence Becomes Emotionally Intelligent
by Krishna Betai on 05-07-2019 at 12:00 pm

“AI is the biggest risk we face as a civilization.” Words from the visionary Tesla CEO, Elon Musk. With each iteration of innovation, artificial intelligence edges closer to replicating the human brain; people fear that AI will soon steal their jobs, which has already started happening in some parts of the world. Yet, humans can take solace in the fact that they are ahead of AI-powered robots in one sphere—emotions.

The process that humans follow to complete a given task is strikingly similar to what AI does—identify data, analyze data, interpret the analysis, identify a suitable course of action, and implement that action. Consequently, the jobs of personal assistants, drivers, delivery persons, factory workers, financial analysts, and even doctors are endangered, along with the hundreds of jobs that involve clerical work. Moreover, there is only so much a human being can learn; AI has the upper hand in that there is no limit to what it can learn. If it runs out of memory, a new server comes to its rescue; there is no stopping the increase in its processing power.

Even though the smartest minds in the world strive to make AI smarter by adding more neural networks and feeding it volumes of data, it is unable to express emotion. It’s not surprising that the next avenue for artificial intelligence to explore is emotional intelligence. An Alexa that can laugh, a Google Assistant that can feel sympathy, a Siri that can admit its mistake and promise never to repeat it. What if artificial intelligence had emotional intelligence?

While AI can become the cause of employee redundancy in various industries due to its superior data processing capabilities and hard skills, humans are at an advantage for their emotional processing capabilities and soft skills, two things that they can hold onto proudly. Some tasks require more than just analyzing data and coming up with a solution; these jobs demand a human touch. For example, a robot cannot take the place of a psychologist, who has to dive deep and understand the emotions and problems of a patient and offer tailored solutions and suggestions that might improve the patient’s mental health, albeit gradually. Only a manager in the form of a human being can motivate their team, tackle individual issues and conflicts, and instill in them the drive to perform better and achieve greater results.

For artificial intelligence to develop emotional intelligence, it would need to understand a variety of complex emotions that require more than simple data processing. If this did happen, having conversations with AI would result in more natural, human responses instead of robotic replies. It would take a considerable amount of time for human beings to open up to emotional artificial intelligence, for obvious reasons. A personal robot assistant with an emotional quotient could advise its boss against stressing out in situations that trigger that feeling. Along with giving the perfect tactical instructions to the players of a sports team, emotional AI could motivate them with a pep talk, or even talk to players one-on-one.

There are a few instances where AI has slowly started developing emotional intelligence. The Google Assistant apologizes every time it makes an inadvertent mistake and acknowledges our appreciation when it performs a task with perfection. Amazon’s Echo for Kids communicates with children in a way that elders would, encouraging them to say “please” and “thank you”, suggesting that they should talk to their parents, siblings, or elders about sensitive subjects such as bullying, and even giving them leeway by recognizing “Awexa” instead of the usual name.

Emotional AI has a lot of potential in the real world. Humans are biased and can be judgmental at times, which is discouraging and even frightening for some who are too scared to vent their feelings. Emotional AI, with all its capabilities, can offer a safe haven for the people who find it difficult to express their emotions. A glimpse of this is shown in HBO’s sitcom Silicon Valley, in Episode 5 of Season 5, titled “Facial Recognition”. The protagonist, Richard, expresses his frustrations and feelings of jealousy toward a coworker to a robot named Fiona. The robot then sympathizes with Richard using its “emotional recognition protocol,” which helps it to identify a wide range of emotions including anger, anxiety, humility, entitlement, and self-loathing, to name a few.

However, AI with an emotional quotient could have potential downsides. The HBO sitcom covers this issue too: the robot analyzes its relationship with its creator, finds it unhealthy and unprofessional, and sends Richard an SOS message asking to be saved. In the next episode, “Artificial Emotional Intelligence,” Jared, a member of Richard’s team, develops an emotional attachment to the robot and is devastated when Fiona is torn apart for safety reasons.

Sure, emotional AI can help robots understand humans better, but there remains a possibility that humans could get emotionally attached to them and lose sight of reality. Or even worse, what if emotionally intelligent robots realized how they are treated and turned against humans in an uprising?


Design IP in 2018: Synopsys and Cadence Increase Market Share…
by Eric Esteve on 05-07-2019 at 7:00 am

…but ARM, Imagination, MIPS and CEVA have declined and lost market share. The semiconductor design IP market was still doing well in 2018, with 6% growth year over year. That is half the growth rate seen in 2017, 2016 and 2015, and the slowdown is attributable to weak results from ARM, the market leader, but also from Imagination (#4), MIPS (#10) and CEVA (#5).

In fact, 2018 was an excellent year for #2 Synopsys (+19.4%) and #3 Cadence (+18.4%), as well as for Achronix (eFPGA IP vendor), which joined the Top 10 for the first time. We think that a combination of reasons is responsible for the market’s behavior. We can invoke the negative impact of corporate strategy for ARM or Imagination (or, to be more specific, Apple’s decision to develop its own GPU), but we think that these results highlight the beginning of a shift from general-purpose IP toward more application-specific products.

If we start with the positive outcome, the result of strategic decisions taken long ago by Synopsys and Cadence, we see that both companies have developed a strong offering in the interface IP category: memory controller, PCI Express and Ethernet/SerDes IP for both, complemented by USB, MIPI, SATA and HDMI for Synopsys. Synopsys invested in the wired interface IP market with the acquisitions of inSilicon (USB IP) in 2002 and Cascade (PCI Express) in 2004 and has built a one-stop-shop portfolio to address the interface market. With the Denali acquisition in 2010, Cadence became a serious challenger, offering top-class memory controller and PCI Express IP.

At that time, this wired interface market weighed just $250 million (IPnest was already publishing the “Interface IP Survey”), and it has grown at a 13.7% CAGR from 2010 to 2018 to reach $700+ million. Betting on a high-growth, sustainable market reflects the quality of a corporate strategy. (Should I remind you that, in 2005, ARM was one of the key players, offering PCIe SerDes, before deciding to exit this market…)
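That growth figure is easy to sanity-check. A minimal sketch, using the rounded dollar figures quoted above:

```python
# Sanity check on the interface IP market growth cited above:
# ~$250M in 2010 growing to ~$700M+ in 2018.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end / start) ** (1.0 / years) - 1.0

rate = cagr(250e6, 700e6, 2018 - 2010)
print(f"Implied CAGR: {rate:.1%}")  # ~13.7%, matching the figure in the survey
```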

We can clearly see the market share transfer between the “Processor” and “Interface” categories from 2015 to 2018 on the above graphic. But this transfer between the two categories can’t be the only explanation. In fact, both Cadence and Synopsys have acquired processor IP vendors, Tensilica in 2013 and ARC in 2010 respectively. While ARM saw licensing revenues decline sharply (-16%) in 2018, both Cadence and Synopsys experienced good revenue growth (+12 to 13%) in the processor category. The impact on the ranking of IP vendors by licensing revenues in 2018 is clear, as we can see in the below picture:

We think that these results reflect a change in chip maker behavior. During the last two decades, SoC designers integrated a CPU that was available, proven and benefiting from a strong ecosystem, and they could concentrate on developing the chip on the next node to benefit from Moore’s law impact on cost, performance and power consumption.

Today, except for data center, networking or application processor chips, chip makers have to differentiate while using less expensive nodes. They have to develop an application-specific chip, differentiating on power consumption and/or cost rather than pure performance. They expect to integrate a CPU (or DSP) that can be tailored exactly to their application’s needs, rather than just a general-purpose CPU/DSP.

Looking at the IP offerings from Cadence or Synopsys, chip makers can find this kind of application-specific CPU/DSP. In short, this move from general-purpose CPU/DSP IP toward application-specific IP could explain Synopsys’ and Cadence’s success.

To support this theory, the decline in ARM’s licensing is not unique: two very dynamic IP vendors, CEVA (DSP) and Andes Technology (CPU), experienced the same decline in licensing revenues in 2018. Nevertheless, 2019 and beyond will be very interesting, with potential RISC-V adoption (will this attractive solution effectively be widely adopted?) and the explosion of chips developed for performance-intensive AI.

Unfortunately for ARM, offering RISC-V is not realistic, and MIPS has been the first to make the turn toward AI support, thanks to its acquisition by Wave Computing. Nevertheless, ARM will remain the king of application processors for smartphones, offering a very complete solution with its CPU (big.LITTLE) and GPU IP. This should again grow its royalty stream (as it did in 2018), but what about its licensing revenues?

If I want to be exhaustive about ARM, I have to mention the company’s “strategy” in China, which I find pretty difficult to understand. I propose to just give validated facts; to be honest, I have no inside information, and I prefer to let readers draw their own conclusions:

  • In May 2017, ARM (SoftBank) created a JV with “Chinese partners”
  • In June 2018, “Arm, owned by SoftBank, has agreed to sell control of its Chinese business for $775m”
  • In February 2019, ARM published its results for 2018, showing a 16.1% decline in licensing revenues ($490 million in 2018 vs $584 million in 2017), after an 8.3% decline in 2017 ($584 million in 2017 vs $637 million in 2016)

If anybody would like to comment or provide some validated information, please feel free to do it!

Let’s end with some very positive information that the reader can find in the 2019 “Design IP Report” from IPnest, related to the mid-size IP vendors who are doing very well, posting revenue growth of 25% or even 40% YoY. This is a heterogeneous list of companies, including Silicon Creations, PLDA and Achronix. What they have in common is a strong focus on the products they develop, with the goal of providing top-quality IP and supporting their customers’ differentiation needs.

If you buy a 7nm PLL from Silicon Creations, it really has to work perfectly, or your SoC will collapse. PLDA has sold PCIe controller IP for 15 years and knows how to meet customer needs for differentiation. Achronix has sold high-end FPGAs since the mid-2000s, and a couple of years ago decided to offer embedded FPGA (eFPGA) IP, making the necessary investment to offer a viable solution. The company’s strategy was rewarded in 2018 with IP revenues topping $50 million.

Finally, I have no doubt that next year we will see the quantitative effect of very high-speed DSP-based SerDes adoption. It should positively impact the 56/112G SerDes IP licensing revenues of Synopsys, Cadence, Alphawave, Rambus or eSilicon… Don’t forget that this type of SerDes is an essential piece of the overall modern system, used to link data centers, networking and 5G base stations (more to come about PAM4 112G SerDes, the session that I chair at DAC this year)…

If you’re interested in this “Design IP Report” released in May 2019, just contact me: eric.esteve@ip-nest.com .

I hope you will go to DAC 2019 in Las Vegas, so we can meet!


Eric Esteve
from IPnest


Blockchain and AI: A Perfect Match?
by Ahmed Banafa on 05-06-2019 at 12:00 pm

Blockchain and Artificial Intelligence are two of the hottest technology trends right now. Even though the two technologies have very different developer communities and applications, researchers have been discussing and exploring their combination [6].

PwC predicts that by 2030 AI will add up to $15.7 trillion to the world economy, and as a result, global GDP will rise by 14%. According to Gartner’s prediction, business value added by blockchain technology will increase to $3.1 trillion by the same year.

By definition, a blockchain is a distributed, decentralized, immutable ledger used to store encrypted data. On the other hand, AI is the engine or the “brain” that will enable analytics and decision making from the data collected. [1]
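For readers who want a concrete picture of what “immutable ledger” means, here is a toy sketch (illustrative only, not any real blockchain implementation): each block stores the hash of its predecessor, so altering any block invalidates everything after it.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the block's contents (everything except its own hash field)."""
    payload = {k: block[k] for k in ("time", "data", "prev")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    """Create a block linked to its predecessor by that predecessor's hash."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain: list) -> bool:
    """Valid iff every hash matches its contents and every link holds."""
    for i, blk in enumerate(chain):
        if blk["hash"] != block_hash(blk):
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("tx: alice -> bob, 5", chain[-1]["hash"]))
chain.append(make_block("tx: bob -> carol, 2", chain[-1]["hash"]))
print(verify(chain))                        # True
chain[1]["data"] = "tx: alice -> bob, 500"  # tamper with history
print(verify(chain))                        # False: stored hash no longer matches
```

Rewriting one block without recomputing every later hash is immediately detectable, which is the property the “immutable ledger” phrase is gesturing at.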

It goes without saying that each technology has its own individual degree of complexity, but both AI and blockchain are in situations where they can benefit from each other, and help one another.[3]

With both these technologies able to act on data in different ways, their coming together makes sense, and it can take the exploitation of data to new levels. At the same time, the integration of machine learning and AI into blockchain, and vice versa, can enhance blockchain’s underlying architecture and boost AI’s potential.

Additionally, blockchain can also make AI more coherent and understandable, and we can trace and determine why decisions are made in machine learning. Blockchain and its ledger can record all data and variables that go through a decision made under machine learning.

Moreover, AI can boost blockchain efficiency far better than humans, or even standard computing can. A look at the way in which blockchains are currently run on standard computers proves this with a lot of processing power needed to perform even basic tasks.[3]

Smart Computing Power
If you were to operate a blockchain, with all its encrypted data, on a computer you’d need large amounts of processing power. The hashing algorithms used to mine Bitcoin blocks, for example, take a “brute force” approach – which consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem’s statement before verifying a transaction.[3]
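To see what that brute-force search looks like, here is a toy proof-of-work sketch. It is a simplification of Bitcoin's actual scheme (which uses double SHA-256 against a numeric target); the header bytes and difficulty here are made up for illustration:

```python
import hashlib

def mine(header: bytes, difficulty: int) -> int:
    """Brute force: enumerate nonces until the SHA-256 digest
    starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine(b"block-header:", 4)
digest = hashlib.sha256(b"block-header:" + str(nonce).encode()).hexdigest()
print(nonce, digest[:10])
```

Each additional required zero digit multiplies the expected number of attempts by 16, which is why real mining consumes so much processing power: there is no shortcut, only more hashing.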

AI affords us the opportunity to move away from this and tackle tasks in a more intelligent and efficient way. Imagine a machine learning-based algorithm, which could practically polish its skills in ‘real-time’ if it were fed the appropriate training data.[3]

Creating Diverse Data Sets
Unlike AI-based projects, blockchain technology creates decentralized, transparent networks that, in the case of public blockchain networks, can be accessed by anyone around the world. While blockchain technology is the ledger that powers cryptocurrencies, blockchain networks are now being applied to a number of industries to create decentralization. For example, SingularityNET is specifically focused on using blockchain technology to encourage a broader distribution of data and algorithms, helping ensure the future development of artificial intelligence and the creation of “decentralized A.I.” [4]

SingularityNET combines blockchain and A.I. to create smarter, decentralized A.I. blockchain networks that can host diverse data sets. By creating an API of APIs on the blockchain, it allows for the intercommunication of A.I. agents. As a result, diverse algorithms can be built on diverse data sets. [4]

Data Protection
The progress of AI is completely dependent on the input of data — our data. Through data, AI receives information about the world and the things happening in it. Basically, data feeds AI, and through it, AI will be able to continuously improve itself.

On the other hand, blockchain is essentially a technology that allows for the encrypted storage of data on a distributed ledger. It allows for the creation of fully secured databases which can be looked into by parties who have been approved to do so. When combining blockchains with AI, we have a backup system for the sensitive and highly valuable personal data of individuals.

Medical or financial data are too sensitive to hand over to a single company and its algorithms. Storing this data on a blockchain, which can be accessed by an AI, but only with permission and once it has gone through the proper procedures, could give us the enormous advantages of personalized recommendations while safely storing our sensitive data.[4]

Data Monetization
Another disruptive innovation that could be possible by combining the two technologies is the monetization of data. Monetizing collected data is a huge revenue source for large companies, such as Facebook and Google.[4]

Having others decide how data is being sold in order to create profits for businesses demonstrates that data is being weaponized against us. Blockchain allows us to cryptographically protect our data and have it used in the ways we see fit. This also lets us monetize data personally if we choose to, without having our personal information compromised. This is important to understand in order to combat biased algorithms and create diverse data sets in the future.[4]

The same goes for AI programs that need our data. In order for AI algorithms to learn and develop, AI networks will be required to buy data directly from its creators, through data marketplaces. This will make the entire process far fairer than it currently is, without tech giants exploiting their users.[4]

Such a data marketplace will also open up AI to smaller companies. Developing and feeding AI is incredibly costly for companies that do not generate their own data. Through decentralized data marketplaces, they will be able to access data that would otherwise be too expensive or privately held.

Trusting AI Decision Making
As AI algorithms become smarter through learning, it will become increasingly difficult for data scientists to understand how these programs came to specific conclusions and decisions. This is because AI algorithms will be able to process incredibly large amounts of data and variables. However, we must continue to audit conclusions made by AI because we want to make sure they’re still reflecting reality.

Through the use of blockchain technology, there are immutable records of all the data, variables, and processes used by AIs for their decision-making processes. This makes it far easier to audit the entire process.

With the appropriate blockchain programming, all steps from data entry to conclusions can be observed, and the observing party will be sure that this data has not been tampered with. It creates trust in the conclusions drawn by AI programs. This is a necessary step, as individuals and companies will not start using AI applications if they don’t understand how they function, and on what information they base their decisions.

Conclusion
The combination of blockchain technology and Artificial Intelligence is still a largely undiscovered area. Even though the convergence of the two technologies has received its fair share of scholarly attention, projects devoted to this groundbreaking combination are still scarce.

Putting the two technologies together has the potential to use data in ways never before thought possible. Data is the key ingredient for the development and enhancement of AI algorithms, and blockchain secures this data, allows us to audit all intermediary steps AI takes to draw conclusions from the data and allows individuals to monetize their produced data.

AI can be incredibly revolutionary, but it must be designed with utmost precautions — blockchain can greatly assist in this. How the interplay between the two technologies will progress is anyone’s guess. However, its potential for true disruption is clearly there and rapidly developing [6].

Ahmed Banafa, author of the book Secure and Smart Internet of Things (IoT) Using Blockchain and AI.

Read more articles at IoT Trends by Ahmed Banafa

References:
[1] https://aibusiness.com/ai-brain-iot-body/
[2] https://thenextweb.com/hardfork/2019/02/05/blockchain-and-ai-could-be-a-perfect-match-heres-why/
[3] https://www.forbes.com/sites/darrynpollock/2018/11/30/the-fourth-industrial-revolution-built-on-blockchain-and-advanced-with-ai/#4cb2e5d24242
[4] https://www.forbes.com/sites/rachelwolfson/2018/11/20/diversifying-data-with-artificial-intelligence-and-blockchain-technology/#1572eefd4dad
[5] https://hackernoon.com/artificial-intelligence-blockchain-passive-income-forever-edad8c27844e
[6] https://blog.goodaudience.com/blockchain-and-artificial-intelligence-the-benefits-of-the-decentralized-ai-60b91d75917b