
Accelerating 5G Innovation and Reliability Through Simulation and Advanced FinFET Design

by Camille Kokozaki on 02-14-2019 at 7:00 am

In an ANSYS seminar held at DesignCon 2019, Dr. Larry Williams, ANSYS Director of Technology, outlined how 5G design innovation can be accelerated through simulation. He posited that 5G will become a general-purpose technology that affects an entire economy, drastically alter societies and unleash a cascade of complementary innovations (*).

For starters, advanced simulation can help 5G through analysis of advanced antennas, data processing, and complex mixed-signal systems. The types of analysis and the tools used to accomplish them are summarized in the table below.

5G phased arrays produce multiple beams and can null out interference from unrelated users. The 5G New Radio gives each user their own beam and, with massive MIMO, can support multiple simultaneous users in the same bandwidth.
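The beam steering behind this can be illustrated with a tiny textbook model of a uniform linear array; this is a generic sketch, not any particular ANSYS flow, and the element count, spacing, and angles below are arbitrary:

```python
import cmath
import math

def array_factor(n_elems, d_lambda, steer_deg, theta_deg):
    """Normalized array-factor magnitude of a uniform linear array.

    n_elems   -- number of antenna elements
    d_lambda  -- element spacing in wavelengths
    steer_deg -- steering angle the progressive phase shift targets
    theta_deg -- observation angle
    """
    k_d = 2 * math.pi * d_lambda
    steer = math.sin(math.radians(steer_deg))
    theta = math.sin(math.radians(theta_deg))
    # Sum the per-element phasors; the progressive phase cancels at theta == steer
    s = sum(cmath.exp(1j * n * k_d * (theta - steer)) for n in range(n_elems))
    return abs(s) / n_elems

# Full response at the steered angle, strongly reduced response off-beam
print(array_factor(8, 0.5, 30, 30))   # 1.0 at the main beam
print(array_factor(8, 0.5, 30, -10))  # much smaller off-beam
```

Null steering in a real array works the same way in reverse: element phases (and amplitudes) are chosen so the phasor sum cancels toward an interferer.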

The array design methodology starts with antenna element design, including quick performance prediction and analysis of the element standalone or integrated into a finite array or a unit cell (i.e., an infinite array). Finite array analysis captures all effects, including edge effects, mutual coupling, and active S-parameters. Finally, the methodology covers array platform integration and real-world phased-array performance, including platform effects.

Encrypted 3D components can incorporate the original simulation model, which may include sensitive and proprietary IP. This allows sharing with third parties while preserving the fidelity of the fully encapsulated, encrypted original model, keeping geometry visible and fields defined, and making full 3D EM simulation possible. Installed antenna performance analysis can thus occur without exposing sensitive IP.

Simulation of 5G (28 GHz) base station performance requires physics-based simulation of large-scale environments. SBR (shooting and bouncing rays) provides efficient simulation of the electrically large environment, while FEM simulation provides an accurate representation of the antenna array.

System performance evaluation includes the antenna array, site evaluation, beamforming and null-steering algorithms, received power at the user equipment, and base-station-to-base-station interference or unintentional jamming. Received power includes line-of-sight and multi-path propagation. A device traveling in a dense urban area between the coverage zones of two base stations can have its received power observed, enabling site evaluation and analysis of base-station-to-base-station interference.
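The line-of-sight component of that received power follows the Friis free-space relation, which shows why 28 GHz links need beamforming gain that lower LTE bands do not; a quick sketch (distance and frequencies chosen arbitrarily for illustration):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis), the line-of-sight loss term."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Over the same 100 m, 28 GHz loses ~23 dB more than a 2 GHz carrier
print(round(fspl_db(100, 28e9), 1))  # 101.4 dB
print(round(fspl_db(100, 2e9), 1))   # 78.5 dB
```

That extra ~23 dB is exactly what the phased-array gain and dense small-cell siting discussed above have to buy back.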

Complex mixed-signal systems

According to Skyworks’ CTO Peter Gammel (**), mobile operators can achieve 5G speeds on a 4G network by enabling carrier aggregation. Tedious, error-prone schematic-based methods can be replaced with assembly modeling, which automates the process through scripted steps that reduce design time and eliminate system wiring errors.

Full mesh assembly is now possible across ECAD and MCAD, allowing accurate and efficient mesh creation and capturing full network parameters for all nets, including small-pitch and/or meandering traces, with accurate coupling and isolation.

Data Center Electronics

By 2022, autonomous vehicles are estimated to generate 4 TB/day per car for an average hour of operation, and mobile users are estimated to pull 25 GB/day per person. By 2023, mobile data traffic is expected to reach 18 exabytes per month in North America. Issues facing new data centers include the need for extremely fast channel signal integrity analysis of SerDes (25-100 Gbps) and PAM4 (56-112 Gbps), power integrity (0.65 V with 5% tolerance), and thermal integrity analysis, including thermal stress in boards and components due to the increased power required to operate data centers.

Virtual prototype analysis includes virtual compliance and printed circuit board reliability analysis, covering electrical and thermal assessment, temperature distribution, power maps, mechanical and thermal stress, potential die cracking, flip-chip attachment, package deformation, and solder joint reliability under thermal cycling.

Ansys has new electronics reliability tools that include an EMI scanner for EMC/EMI design rule checks (in HFSS and SIwave) and electromigration analysis allowing calculation of mean time to failure (MTTF), in addition to the existing thermal tools (Icepak in AEDT, Ansys Mechanical Thermal).
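Electromigration lifetime is commonly estimated with Black's equation; the sketch below is a generic illustration of that model, not Ansys's implementation, and the constants A, n, and Ea are made-up placeholders (units are arbitrary):

```python
import math

def em_mttf(j_A_per_cm2, temp_K, A=1e3, n=2.0, Ea_eV=0.9):
    """Black's equation: MTTF = A * J^-n * exp(Ea / (k*T)).

    j_A_per_cm2 -- current density in the interconnect
    temp_K      -- metal temperature
    A, n, Ea_eV -- empirical, process-dependent constants (illustrative here)
    """
    k_eV = 8.617e-5  # Boltzmann constant, eV/K
    return A * j_A_per_cm2 ** (-n) * math.exp(Ea_eV / (k_eV * temp_K))

# Hotter metal or higher current density shortens the predicted lifetime,
# which is why electromigration and thermal analysis are coupled
print(em_mttf(1e6, 398) < em_mttf(1e6, 358))
```

This coupling of temperature and current density is why the thermal tools listed above feed directly into reliability sign-off.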

Semiconductors now require improved reliability for HPC electronics and rigorous safety for automotive electronics, enabling long system lifetimes and mandating zero defects in harsh thermal environments. Autonomous vehicles require radar, camera, and LiDAR sensor models that must be tested with vehicle control systems and algorithms to validate safe operation. Radar is susceptible to environment-induced degradation from snow, rain, dirt, and road/wind/weather conditions, so full system simulation is needed. Ansys acquired France-based Optis, whose tools enable just that.

5G is shaping up to be ‘the biggest thing in wireless since wireless,’ as Dr. Williams states, with insight, breakthrough thinking, and innovation made possible by simulation.

Advanced FinFET Designs Enabling 5G Electronics System Reliability

Following that, Dr. Norman Chang, ANSYS Chief Technologist, addressed enabling 5G electronics system reliability for advanced FinFET designs. Integration trends include increasing IP content and 2.5D/3D IC packaging, while power/performance trends include 112G SerDes, WideIO, 5G mmWave high frequencies, and fan-less cooling. These trends make system reliability an increasing focus, with needed lifetimes greater than 15 years, aging, thermal/EM/ESD/EMC reliability, and substrate and RFI noise challenges. This mandates accurate electrical models for on-chip components such as spiral inductors and clock-tree/transmission lines.

Aging in FinFETs is accelerated by major device degradation mechanisms:

  • Negative Bias Temperature Instability (NBTI)
  • Hot Carrier Injection (HCI) and Time-Dependent Dielectric Breakdown (TDDB)

This requires aging-aware SoC timing closure to ensure long-term reliability.

Substrate noise coupling can affect analog circuitry; it causes spikes in the FM spectrum and impacts audio quality and performance. Other needs exist for ESD/EMC simulation of the chip-package-system, including ESD/EMC sign-off at the IO/IP and SoC levels. ESD rules are checked board-to-chip, chip-to-chip, and board-to-board. Pin-to-pin ESD connectivity, resistance checks, current density checks, driver/receiver checks, and dynamic ESD checks result in an ESD IP model that feeds into an IC ESD model run through CECM, enabling EMC/system-level ESD simulations.

System-level ESD simulations are critical for 5G/ADAS systems. The target is system-level ESD signoff with CECM for IEC 61000-4-2 testing, with what-if analysis allowing system-level ESD optimization. The solutions needed are IO/IO-ring modeling, full-chip layout modeling and power grid extraction, ESD device modeling, chip ESD modeling, and system-level ESD analysis with CECM.

Dr. Chang closed by stating that, moving forward, machine learning and deep learning will be instrumental in enabling EM/timing assistants for reliability and timing checks through an integrated ML stack, also allowing for user-driven ML apps.

Call to Action URL: https://www.ansys.com/resource-library/article/speeding-5g-network-infrastructure-design-aa-v13-i1

________________
(*) According to Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy
(**) Skyworks Whitepaper: “5G in Perspective – A Pragmatic Guide to What’s Next”


Semiconductor Equipment Companies Facing Significant Headwinds in 2019

by Robert Castellano on 02-13-2019 at 12:00 pm

In January 2019, the memory market was hit with a significant amount of negative news.

  • On Jan. 15, DRAM manufacturer Nanya Technology reported its Q4 2018 revenue was $551 million, down 30.4% QoQ.
  • On Jan. 24, 2019, SK Hynix reported Q4 2018 earnings. Revenues fell 13.7% QoQ to $8.7 billion, while operating profit amounted to $3.9 billion, down 32.4%. SK’s DRAM and NAND bit shipments dropped 2.5% and rose 10% QoQ, respectively, while its DRAM and NAND ASPs fell 11.1% and 21.2% QoQ, respectively.
  • Also on Jan. 24, Western Digital reported revenue of $4.2 billion for its second fiscal quarter ended Dec. 28, 2018, down 20.7% QoQ. Operating income was $176 million with a net loss of $487 million.
  • On Jan. 31, 2019, Samsung Electronics reported Q4 2018 DRAM and NAND revenues of $18.8 billion and $6.6 billion in operating profit, down 27.7% and 43.0% QoQ, respectively. Samsung’s DRAM and NAND bit shipments were down 18.3% and 9.7% QoQ, respectively, while ASPs were down 9.7% and 21.5% QoQ, respectively.

The memory market is an important gauge of the health of the current semiconductor market because equipment spending by NAND and DRAM chip manufacturers was largely responsible for the 37.2% increase in the equipment market in 2017. By comparison, the equipment market grew just 12.9% in 2016 and an estimated 9.7% in 2018.

It’s also important because the memory market represents a significant percentage of sales of leading semiconductor equipment manufacturers. For example, memory represented 60% of Applied Materials’ revenues and 79% of Lam Research’s revenues in their most recent quarterly announcements.

In addition, the poor earnings of the memory companies reported in Q4 had a significant impact on ASML’s revenue in the past quarter of 2018; its memory revenue represented just 40% in Q4 compared to 58% in Q3 and 54% in Q2.

Chart 1 illustrates the percentage of NAND and DRAM compared to the overall WFE market from 2013 and forecast for 2019 based on guidance from memory companies.

Capex spend in 2019 is expected to decrease, with a 23% decrease in NAND spend while DRAM spend will decrease 46%, as shown in Table 1.

The equipment companies must also contend with cuts in capex spend in logic and foundry as well. TSMC’s capex spend will increase modestly because the company will continue as the sole supplier of Apple’s A13 processor for iPhones.

Conversely, TSMC continues the move to 7nm, which it says made up 23% of its revenue in 4Q 2018, and 7nm+ (with EUV) is on track for a volume ramp in the second quarter. That’s bad news for equipment companies like Applied Materials and Lam Research, because EUV lithography will reduce the amount of deposition and etch equipment required for DRAMs.

Until Q4 2018, cloud computing was the lone bright spot for semiconductor companies following weakness in demand in autos, PCs, and smartphones, and the crash in cryptocurrency. Late in the year even cloud spending succumbed to the weakness.

Industry fundamentals have deteriorated from both an oversupply of NAND and DRAM chips and macroeconomic factors tied to the China trade war, which are affecting logic chip manufacturers and foundries. Capex spending reductions across the board will impact equipment manufacturers.

How ironic that the trade war with China is behaving like “death by a thousand cuts,” which is a form of torture and execution originating from Imperial China! At this point, I’m predicting 2019 WFE will be down 15%.


Data Center Optimization Through Game Theory

by Bernard Murphy on 02-13-2019 at 7:00 am

I always enjoy surprising synergies so I was immediately attracted to a Research Highlight in the Communications of the ACM this month, on a game-theoretic method to balance discretionary speed-ups (known as computational sprints) in data centers. If you don’t have an ACM membership and want to dig deeper, I include an open link at the end of this blog.

This research starts with the widely used trick to reduce job run-time on a processor by speeding up the clock. The downside is that running with a faster clock dissipates more heat so you can’t do it for too long otherwise you’ll fry your processor. Heat dissipates relatively slowly, so there’s a recovery time for cooling during which clock speed has to return to nominal or perhaps even slower. Like sprinting in a long-distance race; you can sprint periodically, but you can’t sprint through the whole race.

Also you’re not running this race alone. In a data center, and particularly in cloud environments, you’re in a load mix with many other jobs which also want to optimize their performance. Your job may swap onto a processor on which another job was just sprinting, or vice-versa; either way the second job loses a sprinting opportunity, at least until recovery. Or maybe multiple jobs in a rack want to sprint at the same time but that risks tripping the rack power supply, perhaps switching over to UPS with its own recovery cycles. Who loses out in this race for extra power?

If you know anything about game theory, this should sound very familiar. You have multiple players with no intrinsic reason to cooperate, all grabbing for more than their fair share of finite resources. The default (greedy) approach is simply to let it work itself out. Jobs grab whatever they can within the bounds of hardware limits. If a chip is going to overheat, hardware enforced slow-down kicks in and can’t be overridden. Similarly, if multiple chips in a rack want to sprint, early requestors make it, up to the power limit of the rack and later requestors are out of luck. Or perhaps they can exceed the power limit, the power supply trips and switches to UPS, as mentioned above, with its own discharge and recovery considerations.

Equally, if you know anything about game theory, you’ll know that the greedy method is less efficient overall than methods that take a more comprehensive approach to optimization. For those who don’t know much about game theory, this isn’t about altruism. You might believe that by being better at being greedy you can win out (the Wolf of Wall Street approach, where you don’t care what happens to the others), but the probabilities are against you when others can behave similarly. You are likely to do worse following that strategy than if you cooperate more effectively; check out the prisoner’s dilemma.

A team at Duke has researched several strategies for finding a more effective equilibrium between jobs. Each is based on an architecture in which a user job is supported by a selfish (to that job) mechanism that optimizes job performance by judicious sprinting in the job phases where it will have the most advantage. Sprinting profiles are shared in advance with a central coordinator, which is responsible for returning recommended profiles each job should use to ensure an equilibrium strategy across the system. Per job, the mechanism then uses its assigned profile to drive sprinting.

One quick note first. This approach avoids centralized command and control for each sprinting decision, which could be very burdensome to overall performance. Instead, jobs and the coordinator need to communicate only infrequently, as long as the exchanged profiles remain representative of actual activity.

More important, consider how the game plays out. If each job follows its assigned profile and the coordinator calculated an effective equilibrium strategy, jobs should remain in that equilibrium, achieving optimal throughput. What if a job cheats and offers a non-representative profile to the coordinator or chooses not to follow the equilibrium profile the coordinator returns? In either case, the cheating job is likely to suffer because, by not following the computed equilibrium strategy it is most likely to fall into less optimal performance. The only way it could (possibly) avoid this and cheat its way to better performance would require knowing what profiles other jobs have, and it doesn’t have access to that information.
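These dynamics are easy to reproduce in a toy model. The sketch below is my own simplification, not the Duke mechanism: "greedy" jobs sprint whenever they are cool, while "coordinated" jobs sprint with a fixed probability chosen so the expected number of simultaneous sprinters stays under the rack's power cap (all parameters are invented):

```python
import random

def simulate(policy, num_jobs=50, rounds=1000, capacity=10,
             cooldown=5, sprint_gain=4.0, sprint_prob=0.2, seed=0):
    """Toy model of computational sprinting under a shared power cap.

    Each round a job earns 1 unit of work at nominal speed, or
    sprint_gain units if it sprints. A sprint is followed by `cooldown`
    rounds of enforced nominal speed. If more than `capacity` jobs
    sprint in the same round, the breaker trips: every sprinter earns
    nothing that round and still has to cool.
    """
    rng = random.Random(seed)
    cooling = [0] * num_jobs
    total = 0.0
    for _ in range(rounds):
        sprinters = []
        for j in range(num_jobs):
            if cooling[j] > 0:          # recovering: nominal speed only
                cooling[j] -= 1
                total += 1.0
            elif policy == "greedy" or rng.random() < sprint_prob:
                sprinters.append(j)     # greedy jobs always grab a sprint
            else:
                total += 1.0
        tripped = len(sprinters) > capacity
        for j in sprinters:
            total += 0.0 if tripped else sprint_gain
            cooling[j] = cooldown
    return total

greedy = simulate("greedy")
coordinated = simulate("coordinated")   # randomized, capacity-aware rate
print(coordinated > greedy)             # coordination beats the free-for-all
```

In this toy version the greedy policy keeps tripping the breaker in lockstep and ends up below even the never-sprint baseline, while the capacity-aware policy captures most of the sprinting upside.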

In their experiments, the Duke researchers found the gaming approach provided a 4-6x total task throughput improvement over greedy approaches for data analytics workloads and close to optimal throughput based on a globally optimized policy. Not bad, and an impressive use of a math/economics theory you might never have considered relevant to this domain. You can access an open copy of the Duke paper here.


Renaming and Refactoring in HDL Code

by Daniel Nenni on 02-12-2019 at 12:00 pm

I’ve enjoyed my past discussions with Cristian Amitroaie, the CEO of AMIQ EDA, in which we covered their Design and Verification Tools (DVT) Eclipse Integrated Development Environment (IDE) and their Verissimo SystemVerilog Testbench Linter. Cristian’s descriptions of AMIQ’s products and customers have intrigued me. They seem to be doing very well selling “utility” tools that are largely ignored by the big EDA vendors. The more I learn about AMIQ, the more I see that their tools are successful because they solve real-world user problems when writing, debugging, and maintaining hardware description language (HDL) design and verification code. I have also come to appreciate that there is a lot of technology “under the hood” to enable the tasks they automate.


Cristian likes to use the term “refactoring” when describing some of these tasks. I have not heard this word very often in the context of hardware design, and so I did a little research. I found an interesting site called “Refactoring.Guru” that offers this crisp definition:

“Refactoring is a controllable process of improving code without creating new functionality. Refactoring transforms a mess into clean code and simple design.”

Thus, refactoring means making changes in code to improve readability and comprehensibility. My search for this term turned up many references for software programs, but very few for hardware. Modern HDLs, especially SystemVerilog, are very complex and encompass many ideas borrowed from software, including coverage, assertions, and object-oriented programming (OOP). So, it seems natural that refactoring could apply to hardware design code as well.

I learned from Cristian that there are many types of refactoring possible, ranging from consistent formatting of white space and code alignment to significant transformations of the code. Rich languages such as SystemVerilog offer multiple ways to achieve the same functionality. Choosing one way and refactoring the code to follow the chosen style clearly makes it easier to maintain, especially when it is re-used or passed on to someone who was not the original coder. Some types of refactoring may also make HDL code more suitable for the EDA tools that consume it. For example, one style may simulate more efficiently than another, even when the functionality is identical.

I asked Cristian for an example of refactoring that is very common, and he replied with the task of renaming an element (variable, port, function, class, etc.) in the HDL code. This surprised me; renaming at first seems a simple matter of find-and-replace in a text editor. But it’s not that simple. For one thing, searches often find many similar names. Engineers tend to use common terms such as “counter” or “size” that may apply in many places in the code, and short names often appear as parts of longer names. Not all text editors have find-and-replace functions with enough wildcard or regular-expression features to define unambiguously the name to be changed.
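The pitfall is easy to demonstrate. In the hypothetical snippet below (my own toy example, not AMIQ's), plain find-and-replace corrupts `counter` while a word-boundary regex does not, though even the regex is still blind to scope, strings, and comments:

```python
import re

def naive_rename(text, old, new):
    # Plain find-and-replace: also rewrites longer names containing `old`
    return text.replace(old, new)

def word_rename(text, old, new):
    # Word-boundary regex: touches only the standalone identifier, but
    # still cannot distinguish scopes the way a compiled IDE model can
    return re.sub(rf"\b{re.escape(old)}\b", new, text)

src = "logic [7:0] count, counter;\nassign counter = count + 1;"
print(naive_rename(src, "count", "cnt"))  # mangles 'counter' into 'cnter'
print(word_rename(src, "count", "cnt"))   # renames only 'count'
```

A compiled model of the whole design, as described below, is what lets a rename distinguish two different `count` signals in two different modules, which no text-level approach can do.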

Renaming is much harder if there are ripple effects beyond the file being edited. Renaming an input or output, or an element used outside of the file, means that other files must be edited. Historically, engineers have used the Unix family of “grep” utilities to search through the design and verification files to identify where the name occurs, and then edit each such file to make the changes. This is obviously an inefficient process. Depending upon the language and the text editor, it may be possible to use a “Ctags” index to identify more easily which names appear in which file. However, generating and maintaining such an index takes additional effort, especially when the list of HDL design and verification source files, header files, and libraries changes constantly through a project.

As Cristian points out, an integrated development environment such as AMIQ’s DVT Eclipse IDE makes renaming a much simpler task. There’s no need for an index file; the tool takes the same file list used by the simulator, reads in every file, and compiles them together into a single model of the complete design and verification environment. In the IDE’s graphical user interface (GUI), the user selects the element to be renamed and types in the new name. The GUI shows all relevant files, and the user can preview the proposed changes in each file. Renaming happens instantly and accurately, updating both the internal model and the HDL files themselves. This level of speed and accuracy is possible only because the IDE has a complete compiled model and “knows” how every signal is connected and how every code element is used.


Figure 1: The IDE automates renaming of variables and other code elements.

I think that renaming is a great example of a seemingly simple HDL coding task that benefits from an IDE and requires strong technology underneath the spiffy GUI. I look forward to learning more about the more advanced forms of refactoring and how DVT Eclipse IDE automates them. Thanks to AMIQ for the education and for providing the screen shot.
To learn more, visit https://dvteclipse.com/products/dvt-eclipse-ide.

Also Read

I Thought that Lint Was a Solved Problem

Easing Your Way into Portable Stimulus

CEO Interview: Cristian Amitroaie of AMIQ EDA


GLOBALFOUNDRIES Talks About Enabling Development of 5G ICs

by Tom Simon on 02-12-2019 at 7:00 am

5G is in the news again. Sprint has mounted a legal challenge against AT&T, claiming that AT&T is misleading people into believing it is already offering 5G. While AT&T is about to start testing 5G, it has also sent out updates that cause customer phones to display “5GE” when they are still on 4G LTE systems. The truth is that 5G will be rolling out soon; however, it is not just going to be your phone made faster. 5G is a whole new set of ways for devices to communicate. There are three ranges of frequencies that 5G will use in an integrated fashion.

The low-frequency band will be centered around 700 MHz, which propagates well outdoors and will help serve rural areas. The next range of frequencies will be around 2 to 4 GHz and will offer much higher data rates than existing LTE service. At the high end will be frequencies centered around 26 GHz and 66 to 71 GHz. As you are probably aware, millimeter-wave (mmWave) frequencies like these do not travel far in the air, which makes them ideal for line-of-sight transmission in dense urban areas. mmWave frequencies can offer extremely low latency and very high bandwidth.

A big challenge for the semiconductor industry is designing the RF chips needed to build base stations and handsets. It is worth pointing out, though, that 5G will probably also appear in the home as “fixed wireless” internet service. Verizon is already working on this. But, back to mmWave design. I recently had a chance to talk with Peter Rabbeni, GLOBALFOUNDRIES RF Business Unit VP, around the time when he presented on RF EDA for 5G at a DesignCon event sponsored by ANSYS.

One of the important technologies that will make mmWave feasible for handsets is the phased array antenna. Phased arrays allow focusing the beam so it sends maximum energy toward the desired receiver; due to propagation losses, omnidirectional signals would require too much energy, and battery life would suffer. Peter talked about how EDA solutions are critical to enabling time to market for 5G mmWave applications. As part of this he cited electromagnetic simulation, passive and t-line design tools, thermal modeling, fill pattern generation, and chip-package co-simulation. Without all of these elements available in the flow and enabled by foundry deliverables, it would not be possible to build the high-frequency RF chips and assemblies needed for 5G.

Electromagnetic simulation is used for signal integrity, parasitic extraction, and antenna radiation performance. At frequencies around 60 GHz, EM simulation is a must-have. It is extremely useful for identifying coupling and crosstalk, and it also plays a major role in passive design. Peter told me that GLOBALFOUNDRIES has included tools for designing things like inductors inside its PDKs. There are three tools here: RF Optimum Inductor Finder, RF/mmWave Optimum Transformer Finder, and RF/mmWave Optimum Transmission Line Finder. With these, designers can create such critical structures knowing they are the best fit for their design.

Thermal modeling is very important for the PA arrays found in 5G devices. Peter talked about how important PA efficiency is for managing thermal issues. Inefficient PAs dissipate more thermal energy, and in a PA array this can affect neighboring elements. Thermal coupling between PAs must be considered in simulations or important device behavior might be overlooked.

Peter talked about how GLOBALFOUNDRIES has solved a vexing problem for RF designers. Traditionally, foundries were very strict and made it difficult to go to fab with the optimal fill for certain devices. GLOBALFOUNDRIES now offers customer-controlled fill for sensitive RF devices, so designers can specify the amount and type of fill they need. This will lead to better circuit performance without yield impacts.

The last piece is their chip-package co-design flow. This allows improvement of RF performance and power efficiency. By designing concurrently, silicon, package and system can be optimized and validated with fewer iterations before tape out. This flow has been validated on their 22FDX process which targets mmWave designs.

Consumers will see revolutionary changes when products compliant with the 5G specification are rolled out with the proper supporting infrastructure. There is a lot of work ahead for all members of the food chain. But it all starts with the silicon that will support the entire ecosystem. GLOBALFOUNDRIES is putting a lot of effort into development that supports RF designs for these applications.


Semiconductor Security and Sleep Loss

by Daniel Nenni on 02-11-2019 at 12:00 pm

One of the semiconductor topics that keeps me up at night is security. We track security related topics on SemiWiki and while the results are encouraging, we still have a very long way to go. Over the last three years we have published 148 security related blogs that have garnered a little more than 400,000 views. Security touches every market we track: IoT, mobile, automotive, AI, and 5G so there should be more, absolutely.

Security breaches are happening at an alarming pace and we are working very hard to keep the cloud and edge devices safe, believe me, but we are just not writing about it. Unfortunately, now that security breaches are commonplace it really is not clickable news anymore.

Frankly, if the masses knew how unsecure our devices really are, everyone would be losing sleep. Just wait until autonomous automobiles are clogging our transportation arteries. Hackers will have a field day. If losing control of your laptop or phone does not scare you, just wait until hackers take control of your car!

It is interesting to note that my grandchildren will not need to learn how to actually “drive” a car. They will just get in and tell the car where they want to go. That is a big change of life. I remember the anticipation of getting my license on my 16th birthday. So much work and responsibility. We even repaired our own cars back then and knew exactly how they worked. Today, not so much, but I digress…

The point is semiconductor security is a big deal and will touch every piece of silicon we manufacture. Thankfully security is now playing a much bigger role in our conferences including the upcoming DVCon:

System-Level Security Verification Starts with the Hardware Root of Trust
Speaker: Jason Oberg – Tortuga Logic
Organizer: Jonathan Valamehr – Tortuga Logic

With the seemingly continuous discovery of security vulnerabilities at the hardware/software boundary, a new awareness has been built around hardware as the basis for system security. An emerging trend to reduce the likelihood of vulnerabilities is the utilization of a Hardware Root of Trust (HRoT) as the foundation for a secure system. HRoTs are responsible for many of the security features on a chip including secure boot, secure debug, key provisioning and management, and memory isolation. While employing an HRoT has now become a necessity, HRoTs have a vast number of components, and verifying that a secure system has been built around them is a daunting task.

Unfortunately, the current manual techniques for HRoT security analysis tend to miss many unobvious system-level security vulnerabilities. A major reason for the unsuccessful identification of security vulnerabilities is the lack of sophisticated tools that specifically target security verification. Without these, engineers are left to manually review state diagrams, manually review design files, and postulate on design and architecture specifications. This ends up being extremely time-consuming, is not automated and thus susceptible to human error, and consequently leaves systems susceptible to costly vulnerabilities that often can compromise a vendor’s customer data.

In order to properly verify the security of a system built around a HRoT, several challenges need to be addressed. In this workshop, we discuss the state of hardware security in general, then discuss how HRoTs are employed in systems today ranging from the datacenter to the IoT edge. We will also discuss common attacks on HRoT implementations, and the damage that can occur without adequate security verification. We then discuss common hardware security verification techniques, as well as their benefits and drawbacks. Next, we will present the best-in-class techniques and methodologies for verifying the security of a HRoT, and how these techniques can be employed across the entire design and verification lifecycle. Lastly, we will present an example security analysis on a real world HRoT using the discussed techniques. The security analysis will showcase the entire process from threat model specification to tangible results.

Jason and Johny are very approachable guys, as am I, so I hope to see you there…

DVCon is the premier conference for discussion of the functional design and verification of electronic systems. DVCon is sponsored by Accellera Systems Initiative, an independent, not-for-profit organization dedicated to creating design and verification standards required by systems, semiconductor, intellectual property (IP) and electronic design automation (EDA) companies. In response to global interest, in addition to DVCon U.S., Accellera also sponsors events in China, Europe and India. For more information about Accellera, please visit www.accellera.org. For more information about DVCon U.S., please visit www.dvcon.org. Follow DVCon on Facebook https://www.facebook.com/DvCon or @dvcon_us on Twitter or to comment, please use #dvcon_us.


A Detailed History of Samsung Semiconductor

by Daniel Nenni on 02-11-2019 at 7:00 am

From our book “Mobile Unleashed”, this is a detailed history of Samsung semiconductor:

Conglomerates are the antithesis of focus, and Samsung is the quintessential chaebol. From humble beginnings in 1938 as a food exporter, Samsung endured the turmoil and aftermath of two major wars while diversifying and expanding. Its early businesses included sugar refining, construction, textiles, insurance, retail, and other lines mostly under the Cheil and Samsung names.
Continue reading “A Detailed History of Samsung Semiconductor”


McKinsey Mangles Micromobility

McKinsey Mangles Micromobility
by Roger C. Lanctot on 02-08-2019 at 12:00 pm

Micromobility – characterized by shared bicycles, scooters and other wheeled devices – was all the rage in 2018 and that enthusiasm has carried over into 2019 – if McKinsey is to be believed. McKinsey published a white paper last week touting the glories of micromobility and a potential market of $300B-$500B by 2030. The only problem? McKinsey neglected to recognize obvious sources of woe in the sector.

Micromobility’s 15,000-mile Checkup – McKinsey

Micromobility leaders around the world are struggling to come to grips with unanticipated levels of theft, vandalism and vehicle abuse. These issues have combined to undermine the confidence of investors and operators.

Simultaneously, the increased awareness of scooter-related injuries is alarming transportation policy makers. A Consumer Reports study found 1,545 electric-scooter-related injuries in the U.S. since late 2017.

Investigations Find Scooters a Cause of 1,500+ Accidents – Consumer Reports

Combined with the disinclination of scooter and bike users to wear helmets and the inclination to leave scooters and bikes in inconvenient public spaces, the micromobility explosion is in danger of imploding. These downside insights are coming to the fore as cities weigh the relative merits of shared cars, bikes and scooters as part of an increasingly diverse portfolio of transportation options.

The most obvious failure in the McKinsey report was the estimate that the average scooter lasts four months. Strategy Analytics has found that, on average, scooters last anywhere from one to three months, definitely not four. McKinsey further estimates repairs at $0.51/ride; with an average of five rides/day, that translates to $2.55/day allocated for repairs, or $306 over a four-month life cycle.
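The repair-cost arithmetic quoted from the report is easy to sanity-check in a few lines (the figures below are McKinsey's, as cited above; the four-month lifespan is their assumption, not Strategy Analytics' estimate):

```python
# Back-of-the-envelope check of the repair-cost figures quoted above.
REPAIR_COST_PER_RIDE = 0.51   # McKinsey estimate, $/ride
RIDES_PER_DAY = 5             # average rides per scooter per day
LIFESPAN_DAYS = 4 * 30        # McKinsey's assumed four-month life

daily_repair_cost = REPAIR_COST_PER_RIDE * RIDES_PER_DAY      # $2.55/day
lifetime_repair_cost = daily_repair_cost * LIFESPAN_DAYS      # $306 over the assumed life
print(f"${daily_repair_cost:.2f}/day, ${lifetime_repair_cost:.2f} over lifetime")
```

Swap in a one-to-three-month lifespan and the allocated repair budget drops accordingly, which is part of why the report's per-unit economics look optimistic.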

Strategy Analytics interviews with operators reveal that most don’t repair their scooters – they cycle the fleet, replacing the old and worn with new scooters. Occasional repairs, yes. Not $306 over a four-month period.

There are other flawed assumptions regarding transactions, fees and taxes. Attendees at the recent Micromobility conference in Richmond, Cal., were treated to a healthy dose of enthusiasm leavened by the more sobering concerns of coping with the high cost of keeping scooters on the street in the face of higher than expected levels of theft and abuse.

The most full-throated endorsement came from Horace Dediu, co-founder of Micromobility Industries. Dediu looks at the market in terms of the number and length of trips, noting that the total addressable market for micromobility consists of all 0-5 mile journeys, comprising a total of 4 trillion kilometers/year. (Don’t ask me why Horace combined miles and kilometers.)

The Reason for Micromobility – Luke Wroblewski blog

Dediu may be correct. That would explain the rapid valuation ramp for companies like Bird and Lime. But attendees at the conference came away recognizing that locks, docking stations and drop off points need to see wider adoption, a solution to the helmet problem must be found, and the scooters themselves need to be made more durable.

Users of scooters need to learn better etiquette as well – a steep challenge for all operators. If the rising tide of reported injuries is not stemmed, cities can’t be expected to be unalloyed advocates of micromobility.

Bottom line – there are a lot of bumps along the path to a multi-hundred billion dollar market opportunity. It’s best to approach that opportunity with a helmet, open eyes and ears and better market estimates.


Data Management Challenges in Physical Design

Data Management Challenges in Physical Design
by Alex Tan on 02-08-2019 at 7:00 am

IC physical design (PD) teams face several challenges while dealing with tapeout schedules. With shrinking process nodes and stringent PPA targets, the complexity of physical design flows and EDA design tools has increased manifold. In addition, the amount of design data that needs to be managed has also increased exponentially. Managing the design data becomes even more important when design teams are collaborating across multiple design sites.

Some of the challenges faced by physical design teams are:
• Network disk space explosion
• Multi-site collaboration
• IP reuse and tracking
• Direct exposure to foundry technology changes
• Clean front-end and back-end handoffs

The SOS7™ design-management platform from ClioSoft® empowers single- or multi-site design teams to collaborate efficiently on complex analog, digital, RF and mixed-signal designs from concept to GDSII within a secure design environment. Tight integration with EDA tools, and an emphasis on performance for data transfer, security and disk space optimization, provides a cohesive environment that enables design teams to streamline the development of SoCs.

In terms of data management from a physical design perspective, let us take a look at the following three primary issues and how they are addressed using SOS7:
• Handling network disk space explosion
• Treating design data as a composite design object rather than just files
• Reusing and tracking IPs

Handling Disk space explosion
As technology nodes shrink, the design databases produced by physical design tools have grown exponentially in size. These databases are often tagged for consumption by functional and geographically dispersed design teams.

The SOS7 design management platform provides a unique feature – the cache server – which helps manage network disk space. SOS7 creates shared smart cache areas that host all design files not currently being modified. User access to these design files is provided by tool-managed Linux symbolic links in the user’s working directory. This key feature can reduce a design team’s storage requirements by up to 90%, since most designers working on a project tend to download all the project data into their workspaces.
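As an illustrative sketch of the symlink idea in generic Python (this is not ClioSoft's implementation; the function name and paths are hypothetical), a workspace can expose a cached file without copying it:

```python
import os

def link_from_cache(cache_dir, workspace_dir, filename):
    """Expose a read-only cached design file in a user's workspace via a
    symbolic link instead of a private copy (illustrative sketch only)."""
    src = os.path.join(cache_dir, filename)
    dst = os.path.join(workspace_dir, filename)
    os.makedirs(workspace_dir, exist_ok=True)
    if not os.path.exists(dst):
        os.symlink(src, dst)  # the workspace consumes no extra disk space
    return dst
```

A file opened through the link reads straight from the shared cache, which is why unmodified data costs each workspace essentially nothing.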

From a design collaboration perspective, the cache server also functions as an agent, prefetching the files for remote users in case of geographically dispersed multi-site scenarios. This ensures that the remote teams are not waiting on the design data to be available at their site, thereby not wasting precious time when a tapeout deadline looms large.


Managing design data as composite design objects
A typical physical design database is essentially a collection of files auto-generated by the physical design tools. When a physical design engineer kicks off a run, depending on the operation (for example, floorplanning, placement, routing) and the optimization options and/or goals set for the run (search and repair, SI, post-route optimization), the PD tool may change a large number of files in the database or only a few. From a design data management perspective, any change, whether to a few files or many, is treated as a new revision of the design data.

SOS7 incorporates UDMA technology, which allows CAD teams to define a composite object as a collection of files (the physical design database). Using this technology, SOS7 automatically tracks changes to individual files in a composite object and translates them into changes to the design object. The physical design team therefore tracks changes to the design object rather than to the individual files changed during the design cycle. This often proves to be a very useful feature for teams dealing with large quantities of data, as it enables the PD engineer to easily track down the changes made in any specific run.
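The composite-object idea can be illustrated generically: treat the whole database directory as one object whose revision id changes whenever any member file changes. This is a sketch only, since SOS7's UDMA internals are not public:

```python
import hashlib
import os

def composite_revision(db_dir):
    """Compute a single revision id for a design database treated as one
    composite object: any change to any member file changes the id.
    (Illustrative sketch; not how SOS7 stores revisions internally.)"""
    h = hashlib.sha256()
    for root, _, files in sorted(os.walk(db_dir)):
        for name in sorted(files):  # stable order so the id is deterministic
            h.update(name.encode())
            with open(os.path.join(root, name), "rb") as f:
                h.update(f.read())
    return h.hexdigest()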

Tracking IPs used in your design
With the objective of shrinking product development cycles, many design companies leverage IP reuse, whether of internal or third-party IPs. In the case of IP reuse, physical design teams face two main challenges:

1. The top level integrator has to be absolutely certain that the product tapeout includes the appropriate releases of IP blocks from downstream sources.
2. IP developers need to keep track of the different releases of the IPs being used in upstream SoC products.

The SOS7 design platform provides an IP referencing feature that allows product teams to choose the specific release of an IP they are going to incorporate at their top level. Whenever a newer version of the IP becomes available, SOS7 notifies the users and engineering leads that a newer release is available for use. The design lead can then review the issues fixed in the newer release, or review the release notes, before deciding to upgrade. If the designer does decide to upgrade, he has the option to try out the new release before moving the entire design team to the newer version of the IP.
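The referencing idea boils down to comparing each pinned release against the latest available one. A minimal sketch, assuming versions are tuples of integers (the function and IP names here are hypothetical, not the SOS7 API):

```python
def check_ip_releases(pinned, latest):
    """Report IPs for which a newer release than the pinned one exists.
    Versions are (major, minor) tuples so they compare numerically.
    (Illustrative sketch of the referencing idea; not the SOS7 API.)"""
    notices = []
    for ip, used in pinned.items():
        newest = latest.get(ip, used)
        if newest > used:
            notices.append(f"{ip}: release {newest} available (using {used})")
    return notices

# Hypothetical example: the top level pins serdes_phy 1.2, but 1.3 exists.
notices = check_ip_releases(
    pinned={"serdes_phy": (1, 2), "pll": (2, 0)},
    latest={"serdes_phy": (1, 3), "pll": (2, 0)},
)
```

The design lead sees one notice for serdes_phy and none for pll, mirroring the notify-then-review flow described above.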

SOS7 also provides a live report for the producers and maintainers of IPs showing which of their releases are used in upstream products. This helps IP producers plan their next release and alert the consumers of their IPs if any serious issue arises.

To conclude, the SOS7 design management platform provides a solution to the data management challenges faced by physical design teams and flows.

For more information on SOS7, click HERE.

Also Read

Webinar: Tanner and ClioSoft Integration

The Changing Face of IP Management

Data Management for SoCs – Not Optional Anymore


Building Better ADAS SOCs

Building Better ADAS SOCs
by Tom Simon on 02-07-2019 at 12:00 pm

Ever since we replaced horses in our personal transportation system, folks have been pining for cars that offer some relief from the constant need for supervision, control and management. Indeed, despite their obvious downsides, horses could be counted on to help with steering and obstacle avoidance. There are even cases of a horse guiding its owner home without any supervision. We’ve had to wait a long time, but it appears that partially autonomous vehicles are arriving and fully autonomous ones are on the horizon. Getting to this point took far longer than anyone hoped. Remember the Jetsons?

During this interval, humans have proven that there is a lot of room for improvement when it comes to driver safety. The same is true for congestion management and fuel efficiency. Autonomous vehicles will also open up effective transportation to many classes of people, such as the elderly or disabled, who face serious restrictions today. The push for autonomous vehicles really got moving when DARPA announced its first Grand Challenge for driverless vehicles in 2002. When the first challenge was held in 2004, not one vehicle reached the finish line; the following year, however, five vehicles completed the 212 km race.

The SAE J3016 taxonomy describes six levels of vehicle automation, ranging from level 0, with no automation, to level 5, which is fully autonomous and requires no driver at all. Moving up to each higher level requires exponentially more processing power. The single biggest technology breakthrough enabling the progress we see today is AI in the form of machine learning (ML). However, within this field there are a large number of evolving technologies and architectures. Should processing be centralized or distributed? Should it use traditional CPUs or dedicated SOCs? In building autonomous vehicles, major challenges have popped up concerning in-car networking bandwidth, memory bandwidth, reliability, and adaptability to evolving standards and required upgrades, among others.

Sensor data is exploding, expanding from pure optical sensors to proximity sensors, LIDAR, radar and more. This data needs to be combined using sensor fusion to create a real-time 3D model of the car’s environment that the vehicle can use to make navigation decisions. The natural outcome of this pressure is to place processing power closer to the sensors, to speed up processing and reduce the amount of data that needs to be transferred to a central processing unit. This comes with a side benefit of power reduction. Other data sources will include 5G V2V and V2X communication, which will let cars exchange information and allow the road environment to communicate with the car as well. Examples might include information about traffic, road repairs, hazards or detours.

A new white paper by Achronix talks about meeting the power, performance and cost targets for autonomous vehicle systems. Often designers need to choose between CPU-style processors or dedicated ASICs to process the information flowing through the system. However, there is an interesting third choice that can help build accelerators and offers adaptability in the face of changing requirements or updates. Achronix offers an embedded FPGA (eFPGA) fabric that can be built into an SOC. Because it is on-chip, it can have direct high-speed access to memory and system caches. Their latest GEN4 architecture is targeted at machine learning applications, with special features for handling ML processing.

In addition to the obvious advantages that eFPGA can offer, such as lower power, higher-speed processing, the ability to update algorithms in the field, or retargeting SOCs for multiple applications, it offers unique features for safety and reliability. Automotive safety standards such as ISO 26262 require frequent self-testing through BIST and other types of proactive test error insertion and monitoring. eFPGA can be reprogrammed during system bring-up, and even during operation, to create test features as needed. Another unique application for eFPGA will be in V2X communication implementation. When eFPGA is used in communication SOCs, it can be field-updated to handle next-generation communication protocols, like those in the upcoming 5G rollout.

The Achronix white paper outlines a number of other interesting ways that eFPGA can help enable the development of autonomous vehicle processing systems. It discusses how each instance can be specifically configured with the resources needed to perform the intended function. This eliminates some of the other problems with off-chip programmable logic: wasted real estate and mismatched resources. The white paper, which can be downloaded from their website, also offers interesting insight into how the market for autonomous vehicles is shaping up. I, for one, look forward to the day when I can simply ask my car to take me home, instead of having to devote my full focus to the task of vehicle operation.