Car Sharing Prophets and Losses
by Roger C. Lanctot on 02-15-2019 at 12:00 pm

Industry analysts are fond of painting terrifying pictures of catastrophic market changes resulting from disruptive technologies or business models. Nowhere is this more evident, of late, than in the automotive industry where everything from artificial intelligence to automated driving is expected to upend current thinking about transportation.

A recent Bloomberg headline says it all:

– “Here is the Future of Car Sharing, and Car Makers Should be Terrified” – Bloomberg

Not so fast, Bloomberg.

The arrival of Tesla Motors on the automotive scene in the U.S. more than six years ago and the successful introduction of the Model S brought a tsunami of forecasts including the demise of dealers, the decline of the internal combustion engine, the devastation of the industry’s mechanically-oriented supply chain, and the fall of independent repair shops and gas stations.

Hasn’t happened.

The arrival of Uber brought prodigious predictions of the end of vehicle ownership and parking garages.
Not exactly.

Zipcar stimulated visions of universal ad hoc vehicle usage undermining rental car companies, car makers and taxis alike.

Nope.

There are two challenges at work – one is to avoid extrapolating from a singular use case scenario, the other is not to overlook consumer resistance to change. As the saying goes, the more things change the more they stay the same.

The Bloomberg report focuses on the performance of Russian tech leader and search engine Yandex and its popular though unprofitable ride hailing and car sharing services. From a distance Yandex would appear to be a massive success story on the mobility landscape. The reality is that the company has bought its way to prominence with deeply discounted alternatives to taxis and owned vehicles.

Car sharing is nothing new, having been introduced decades ago before gaining popularity and market presence with the launch of Zipcar in the U.S. in 2000 and Autolib in Paris in 2011. The widespread support, awareness and adoption of car sharing is good. The lack of parabolic growth is bad.

As in the case of usage-based insurance, once the early adopters of car sharing are skimmed off, wider adoption tails off. Yandex has sought to overcome this anemic growth profile by flooding Russian cities such as Moscow with thousands of vehicles available for use at the equivalent of 8 cents a minute – one-fifth the rate of operators such as Car2Go. Not surprisingly, the service has caught on.

Yandex’s dearly bought success in Russia is offset by apparent consumer indifference in China – where interest in car sharing is overwhelmed by the availability of ride hailing alternatives – and, in Paris, Autolib’s demise after failing to turn a profit.

Car companies can see that car sharing has yet to demonstrate the staying power that would be implied by operational profitability. One exception can be found in Japan where Park24 has combined a dominant presence in the parking industry with its own massive fleet of sharable cars.

Park24 claims to be the only profitable car sharing company on the planet and the business model warrants closer scrutiny. Founded in 1971, Park24 is the world's largest parking company, with more than one million parking spaces under its control (590K in Japan and 540K overseas), more than 54% domestic market share, 18K parking facilities, and 6.5M Times Club members who can access vehicles strategically positioned near transportation hubs.

Park24’s profitable platform and Yandex’s fire sale pricing perfectly capture the quandary of assessing prospects in the car sharing business. Growth is uneven. Profitability is rare. Long-term prospects are unclear.

Car sharing clearly poses only a minor existential threat to car makers. So, the Bloomberg headline misses the mark. Some car makers see car sharing as an opportunity.

Strategy Analytics research has shown that the most frequent users of both ride hailing and car sharing services identify themselves as strong candidates for car ownership. This is one reason car makers support both alternatives (sharing and hailing) to vehicle ownership in the interest of influencing future customers.

Car sharing is less an imminent threat to car makers than it is a lead generating opportunity for them. In the end, ride hailing and car sharing both represent scenarios where a transportation challenge is being solved with a car – and that is good news for car makers.

As for car sharing’s growth prospects, the Park24 model suggests there may be as yet unexplored paths to profitability and wider adoption. Renewed growth can be expected to come from new startups bringing new thinking to the car sharing model. As one might say in these circumstances: “There’s a pony in here somewhere.”


Goldilocks Solution for SOC Processors
by Tom Simon on 02-15-2019 at 7:00 am

SOC designers face a fundamental choice when deciding how to implement algorithms in their designs. Moving them to hardware usually offers the advantages of smaller area, lower power and faster processing. Witness the migration of blockchain hashing from CPUs to ASICs. However, these advantages come with trade-offs. For one, hardware is fixed during the chip design and cannot be adapted or modified, possibly leaving an SOC with a less than optimal or potentially out-of-date algorithm.

The appeal of using processors to implement algorithms is undeniable. Code can be adapted for evolving specifications or standards. As technology advances, SOCs can receive code updates that accommodate newer or improved methods of solving problems. However, this flexibility comes at the cost of the area, power and speed advantages described above.

The development of special purpose processors has made this dilemma a little easier for SOC designers by providing many of the advantages of each approach in a single solution. There are many examples of these: GPUs, vision processors, network processors, neural network processors, FEC processors, etc. Indeed, most SOCs are built with a host of processors that contain optimizations for specific tasks. However, in the search for product differentiation, designers sometimes are looking for special purpose processors or accelerators that are not available. The features of these might include unique parallelization, special branching, pipelining, new data types, special instructions, etc.
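
To make the special-instruction idea concrete, here is a minimal Python sketch of the semantics of one such instruction, a saturating multiply-accumulate; the operation and the 16-bit width are illustrative assumptions rather than anything specific to a particular product.

```python
def sat_mac(acc, a, b, width=16):
    """Multiply-accumulate with saturation to a signed fixed-point range.

    Models the behavior of a hypothetical single-cycle custom instruction.
    On a general-purpose core the same operation takes several instructions
    plus branches, which is exactly the kind of hot inner-loop operation
    worth moving into a special-purpose processor.
    """
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    result = acc + a * b
    return max(lo, min(hi, result))

# 0x7000 + 0x100 * 0x100 exceeds the 16-bit signed range, so it clamps.
print(hex(sat_mac(0x7000, 0x100, 0x100)))  # 0x7fff
```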

Even if there is an advantage to be gained by adding a special purpose processor, the barriers to creating one and adding it to a design have been too high for most SOC teams. Synopsys has written a white paper that talks about how these barriers can be overcome to reduce design time and improve product performance.

Handcrafting a specialized processor and optimizing it for an SOC comes with a lot of overhead. In addition to customized software compilers and debuggers, RTL and simulation views are required. Importantly, it is necessary to iterate on processor design and its application code to ensure the system will work as intended and as efficiently as possible. This means looping in the code compiler and the RTL synthesis tool, having them work from an easily modified processor specification.

Synopsys solves this problem with ASIP Designer. The process begins with nML, a high-level language for specifying the architecture of the processor. nML is hierarchical and highly structured. It can be thought of as similar to Verilog or VHDL, but it is used to specify a processor and to create downstream deliverables.

The nML drives a retargetable SDK that consists of an optimized C/C++ compiler, assembler/disassembler, linker, cycle-accurate and bit-accurate instruction set simulator, and graphical debugger. Next, the nML can be used to generate a hardware description of the processor in fully synthesizable RTL. The nML can be fully verified for correctness by ASIP Designer. Subsequently, the RTL model produced can also be verified and checked to see that it implements the processor model correctly. Included in ASIP Designer, Synopsys provides extensive verification tools, along with debugging and diagnostic capabilities to run consistency checks and produce reports on connectivity analysis, hardware conflicts, unused instructions, pipeline hazards and much more.

The Synopsys ASIP Designer opens up many opportunities to take advantage of the power and flexibility of special purpose processors. In a “best of both worlds” solution, highly optimized processors can be used as a middle ground between bulky general-purpose processors and hardwired RTL code. Specializations ideally suited to the task at hand can be easily implemented in any aspect of the custom processor. Best of all, a full featured SDK is ready from the moment of specification to validate performance of the architecture and accelerate firmware development.

The benefits of specialized processors cannot be overstated. They can be tuned and optimized to perform a wide range of operations with very minimal overhead. General purpose processors are just that – general purpose. They are frequently not a good fit when the problem they are being applied to is narrow and well understood. Historically the problem has always been the high effort required to create new special purpose processors. With ASIP Designer, Synopsys has taken many of its core strengths and applied them to solve a difficult problem. Their white paper on "Softening Hardware" is very informative and goes into greater detail on both the benefits of their solution and the details of its implementation. It is a good read.


AI, Deep Learning, SystemC, UVM, PSS – DVCon Has it All
by Daniel Payne on 02-14-2019 at 12:00 pm

Today I had the pleasure to speak with Tom Fitzpatrick, TPC Chair for the DVCon conference and exhibition slated for February 25-28 in the heart of Silicon Valley – San Jose. Tom lives in Massachusetts, a place where I used to live and work at Wang Labs, back in the day before the PC and WordPerfect software ended Wang's fortunes. We swapped stories about the first Personal Computers, and computer languages starting from BASIC all the way up to Python.

DVCon spans four days, so let’s take a look at the highlights so that you can focus on what interests you most about IC design and verification.

Monday
AI and Deep Learning permeate our technical literature daily and there's good reason for that; just consider that VC companies are backing hundreds of start-ups in this emerging area. John Aynsley of Doulos leads a short workshop from 1:45PM to 3:15PM titled "Deep Learning for Engineers". Vision and speech recognition are the two biggest application areas that I hear about for deep learning. John's career started out in 1980 at Plessey, where he developed their VHDL simulator, and he's the CTO and co-founder of Doulos.

Verification engineers know about Cliff Cummings because he's an expert at SystemVerilog and a developer of UVM. At Sunburst Design, they teach engineers best practices. This year his tutorial has perhaps the longest title ever: "Gain Valuable Insight into the Changes and Features that are part of the new IEEE 1800.2 Standard for UVM and how to make the most of them". It's best to learn good habits for a new language from a master, instead of grasping at how to be efficient on your own.

Lunchtime is sponsored by Accellera and they have a Technical Excellence Award, but you must be present to hear about the winner; no spoilers allowed. A panel discussion covers the SystemC language with distinguished panelists from Mentor, Synopsys, Cadence and NVIDIA.

Tuesday
I was surprised to learn that there are 33 technical committee members, and they culled through over 100 submissions, selecting 39 papers and 25 poster topics. So you'll want to check out the Opening Session to hear from the Steering Committee as they announce the Best Paper Award.

Three notable verification papers in the morning include:

  • Formal Verification Methodologies, Sean Sarfarpour, Synopsys
  • Verification Strategies I, Greg Tumbush, EM Microelectronics
  • Analog/Mixed-Signal Verification, Logie Ramachandran, Accelver Systems Inc.

With 25 Poster Sessions this year you can choose from timely topics and chat with the authors to increase your skillset:

  • Low power design and verification
  • Python
  • SystemVerilog
  • SystemC
  • UVM
  • PSS

The Keynote for Tuesday comes from a familiar systems company, Siemens PLM Software, as Fram Akiki shares his vision, "Thriving in the Age of Digitalization". He covers all of the hot topics: AI, ML, 5G, IoT, autonomous vehicles.

Harry Foster has conducted studies about functional verification over the years and shares that in a special session, “Trends in Functional Verification: A 2016 Industry Study“. Verification has become more of a software problem, and not so much a hardware problem. The verification process and effort can actually be more complex than the design.

In between sessions, keynotes and posters you should set aside some time to check out the vendors on the Expo floor.

Agnisys, Aldec, Altair Engineering, AMIQ EDA, Avery Design Systems, Blue Pearl Software, Breker Verification, Cadence, CircuitSutra, DINI Group, Doulos, EDACafe.com, ESD Alliance, FormalSim, MathWorks, Methodics, Metrics, OneSpin, Oski, Pro Design, Semifore, Sigasi, Sintegra, SmartDV, Symbiotic, Synopsys, Truechip, Verific, Werifyter, and Vtool.

Wednesday

RISC-V has grown rapidly in building market awareness for its open-source ISA, but how do you verify all of these new cores coming out? There's a panel for that: "Verification and Compliance in the era of open ISA – Is the Industry ready to Address the Coming Tsunami of Innovation?".

PSS, CDC and emulation are covered in the morning, and should be quite popular this year.

Is deep learning applied to verification just hype or reality? The panel with folks from AMD, Arm, Mythic, Achronix and NVIDIA will keep you informed.

Drum roll please, and the winners of this year's Best Paper and Best Poster Session awards go to… Well, show up and hear Tom Fitzpatrick's announcement at 4:45PM in the Bayshore Ballroom. It's an honor and a testimony to all of the hard work that goes into the design and verification of billion-transistor chips, something not financially or technically viable even five years ago.

Thursday
The final day of DVCon is where you get to learn something new and useful for your engineering career by attending the six tutorials, with presenters from leading companies:

  • Cadence
  • UltraSoC
  • Axiomise
  • Synopsys
  • Mentor
  • Verilab
  • Breker
  • Willamette HDL 

Summary
DVCon is a healthy and growing conference, so expect to see some 700 attendees during your time of networking, sessions, keynotes and exhibits. You will receive the proceedings as PDF documents once you register, and if you tweet, use the #DVCon_US hashtag to let us know what you find most interesting this year.


Accelerating 5G Innovation and Reliability Through Simulation and Advanced FinFET Design
by Camille Kokozaki on 02-14-2019 at 7:00 am

In an ANSYS seminar held at DesignCon 2019, Dr. Larry Williams, ANSYS Director of Technology, outlined how 5G design innovation can be accelerated through simulation. He posited that 5G will become a general-purpose technology that affects an entire economy, drastically alters societies and unleashes a cascade of complementary innovations (*).

For starters, advanced simulation can help 5G in the analysis of advanced antennas, data processing, and complex mixed signals. The types of analysis and the tools used to accomplish them are summarized in this table below.

5G Phased Arrays produce multiple beams and null out interfering unrelated users. The 5G new radio provides each user their own beam, and with massive MIMO can support multiple simultaneous users in the same bandwidth.

The Array Design Methodology starts with antenna element design, which covers quick performance prediction and analysis of the element standalone or integrated into a finite array or a unit cell (i.e. an infinite array). The finite array analysis captures all effects including edge effects, mutual coupling, and active S-parameters. Finally, the methodology includes array platform integration and real-world phased array performance, including platform effects.

Encrypted 3D components can incorporate the original simulation model, which may include sensitive and proprietary IP, so the model can be shared with 3rd parties. The fully encapsulated and encrypted model preserves the fidelity of the original, keeps geometry visible along with defined fields, and makes a full 3D EM simulation possible. Installed antenna performance analysis can thus occur without exposing sensitive IP.

Simulating 5G (28GHz) base station performance requires physics-based simulation of large-scale environments. SBR (Shooting and Bouncing Rays) efficiently handles the electrically large environment, while FEM simulation provides an accurate representation of the antenna array.

System performance evaluation will include the antenna array, site evaluation, beamforming and null steering algorithms, received power at the user equipment, and base-station-to-base-station interference or unintentional jamming. Received power includes line-of-sight and multi-path propagation. A device traveling in dense urban areas between the coverage zones of two base stations can have its received power observed, allowing site evaluation and assessment of base-station-to-base-station interference.
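
To give a feel for why received-power analysis matters at 28 GHz, here is a minimal Python sketch of the standard free-space (Friis) link calculation; the transmit power, antenna gains and distance are illustrative assumptions, not values from the ANSYS seminar.

```python
import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, dist_m):
    """Free-space received power from the Friis transmission equation."""
    wavelength = 3e8 / freq_hz
    # Free-space path loss in dB: 20*log10(4*pi*d / lambda)
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return pt_dbm + gt_dbi + gr_dbi - fspl_db

# Illustrative numbers: 30 dBm transmit power, 20 dBi array gain at the
# base station, 5 dBi at the handset, 28 GHz carrier, 200 m line of sight.
print(round(friis_received_power_dbm(30, 20, 5, 28e9, 200), 1))  # -52.4 dBm
```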

Complex mixed-signal systems

According to Skyworks' CTO Peter Gammel (**), mobile operators can achieve 5G speeds on a 4G network by enabling Carrier Aggregation. Tedious, error-prone schematic-based methods can be replaced with assembly modeling that automates the process through scripted steps, reducing design time and eliminating system wiring errors.

Full mesh assembly is now possible across ECAD and MCAD, allowing accurate and efficient mesh creation and capturing full network parameters for all nets, including small-pitch and/or meandering traces, with accurate coupling and isolation.

Data Center Electronics

By 2022, autonomous vehicles are estimated to use 4TB/day/car for an average hour of operation, and mobile users are estimated to pull 25GB/day/person. By 2023, mobile data traffic is expected to reach 18 exabytes per month in North America. Issues facing new data centers include the need for extremely fast channel signal integrity analysis of SerDes (25-100Gbps) and PAM4 (56-112Gbps), power integrity (0.65V with 5% tolerance), and thermal integrity analysis including thermal stress in boards and components due to the increased power required to operate data centers.

Virtual prototype analysis includes virtual compliance and printed circuit board reliability analysis, which covers electrical and thermal assessment, temperature distribution, power map, mechanical and thermal stress, potential die cracking, flip chip attachment, package deformation, and solder joint reliability due to thermal cycling.

Ansys has new electronics reliability tools that include an EMI scanner for EMC/EMI design rule checks (in HFSS and SIwave) and electromigration analysis allowing calculation of Mean Time to Failure (MTTF), in addition to the existing thermal tools (Icepak in AEDT, Ansys Mechanical Thermal).
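
For context, electromigration MTTF estimates of this kind are commonly based on Black's equation; the sketch below is a generic illustration with made-up fitting parameters, not a description of the ANSYS implementation.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(current_density, temperature_k, a_const=1e3, n=2.0, ea_ev=0.9):
    """Median time to failure from Black's equation: MTTF = A * J^-n * exp(Ea / kT).

    current_density in A/cm^2, temperature in kelvin; A, n and Ea are
    empirically fitted constants (the values here are purely illustrative).
    """
    return a_const * current_density ** (-n) * math.exp(ea_ev / (BOLTZMANN_EV * temperature_k))

# With n = 2, doubling current density at 105 C roughly quarters the lifetime.
print(round(black_mttf(1e5, 378.15) / black_mttf(2e5, 378.15), 3))  # 4.0
```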

Semiconductors now require improved reliability for HPC electronics and rigorous safety for automotive electronics, enabling long system lifetimes and mandating zero defects in harsh thermal environments. Autonomous vehicles require Radar, Camera, and LiDAR sensor models that must be tested with vehicle control systems and algorithms to validate safe operation. Radar is susceptible to environment-induced degradation from snow, rain, dirt, and road/wind/weather conditions. Full system simulation is needed. Ansys acquired France-based Optis, which has a tool that allows just that.

5G is shaping up to be 'the biggest thing in wireless since wireless,' as Dr. Williams states, with insight, breakthrough thinking, and innovation made possible by simulation.

Advanced FinFET Designs Enabling 5G Electronics System Reliability

Following that, Dr. Norman Chang, ANSYS Chief Technologist, addressed enabling 5G electronics system reliability for advanced FinFET designs. The trends for integration include increasing IP content and 2.5D/3D IC packaging. In the power/performance arena, trends include 112G SerDes, WideIO, 5G mmWave high frequencies and fan-less cooling. These make system reliability an increasing focus, with required lifetimes greater than 15 years and challenges in aging, thermal/EM/ESD/EMC reliability, and substrate and RFI noise. This mandates accurate electrical models for on-chip components such as spiral inductors and clock-tree/transmission lines.

Aging in FinFETs is accelerated by major device degradation mechanisms:

  • Negative Bias Temperature Instability (NBTI)
  • Hot Carrier Injection (HCI) and Time Dependent Dielectric Breakdown (TDDB)

This requires aging-aware SoC timing closure to ensure long-term reliability.

Substrate noise coupling can affect analog circuitry. Noise coupling causes spikes in the FM spectrum and impacts audio quality and performance. Other needs exist for ESD/EMC simulation for Chip-Package-System which include ESD/EMC sign-off at IOs/IP and SoC levels. ESD rules are checked Board-to-Chip, Chip-to-Chip, Board-to-Board. Pin-to-Pin ESD connectivity, resistance checks, current density checks, driver receiver checks, and dynamic ESD checks result in an ESD IP model that feeds into an IC ESD model run through CECM allowing EMC/System-level ESD simulations.

System-level ESD simulations are critical for 5G/ADAS systems. The target is system level ESD signoff with CECM for IEC61000-4-2 testing. What-if analysis allows system level ESD optimization. The solutions needed are IO/IO Ring Modeling, Full chip layout modeling, and power grid extraction, ESD device modeling, chip ESD modeling, and system level ESD analysis with CECM.

Dr. Chang closed by stating that, moving forward, Machine Learning and Deep Learning will be instrumental in enabling EM/Timing Assistants for reliability and timing checks through an integrated ML stack, which also allows for user-driven ML-Apps.

Call to Action URL: https://www.ansys.com/resource-library/article/speeding-5g-network-infrastructure-design-aa-v13-i1

________________
(*) According to Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy
(**) Skyworks Whitepaper: “5G in Perspective – A Pragmatic Guide to What’s Next”


Semiconductor Equipment Companies Facing Significant Headwinds in 2019
by Robert Castellano on 02-13-2019 at 12:00 pm

In January 2019, the memory market was hit with a significant amount of negative news.

  • On Jan. 15, DRAM manufacturer Nanya Technology reported its Q4 2018 revenue was $551 million, down 30.4% QoQ.
  • On Jan. 24, 2019, SK Hynix reported Q4 2018 earnings. Revenues fell 13.7% QoQ to $8.7 billion, while operating profit amounted to $3.9 billion, down 32.4%. SK’s DRAM and NAND bit shipments dropped 2.5% and rose 10% QoQ, respectively, while its DRAM and NAND ASPs fell 11.1% and 21.2% QoQ, respectively.
  • Also on Jan. 24, Western Digital reported revenue of $4.2 billion for its second fiscal quarter ended Dec. 28, 2018, down 20.7% QoQ. Operating income was $176 million with a net loss of $487 million.
  • On Jan. 31, 2019, Samsung Electronics reported Q4 2018 DRAM and NAND revenues of $18.8 billion and operating profit of $6.6 billion, down 27.7% and 43.0% QoQ, respectively. Samsung’s DRAM and NAND bit shipments were down 18.3% and 9.7% QoQ, respectively, while ASPs were down 9.7% and 21.5% QoQ, respectively.

The memory market is an important gauge of the health of the current semiconductor market because equipment spend by NAND and DRAM chip manufacturers was largely responsible for the 37.2% increase in the equipment market in 2017. By comparison, the equipment market grew just 12.9% in 2016 and an estimated 9.7% in 2018.

It’ also important because the memory market represents a significant percentage of sales of leading semiconductor equipment manufacturers. For example, memory represented 60% of Applied Materials revenues and 79% of Lam Research’s revenues in their most recent quarterly announcements.

In addition, the poor earnings the memory companies reported in Q4 had a significant impact on ASML’s revenue in the last quarter of 2018; its memory revenue represented just 40% in Q4 compared to 58% in Q3 and 54% in Q2.

Chart 1 illustrates the percentage of NAND and DRAM spending relative to the overall WFE market from 2013 through a 2019 forecast based on guidance from memory companies.

NAND capex spend in 2019 is expected to decrease 23% while DRAM spend will decrease 46%, as shown in Table 1.

The equipment companies must also contend with cuts in capex spend in logic and foundry. TSMC’s capex spend will increase modestly because the company will continue as the sole supplier of Apple’s A13 processor for iPhones.

Conversely, 7nm, which TSMC says made up 23% of its revenue in 4Q 2018, is giving way to 7nm+ (with EUV), which is on track for a volume ramp in the second quarter. That’s bad news for equipment companies like Applied Materials and Lam Research, because EUV lithography will reduce the amount of deposition and etch equipment required for DRAMs.

Until Q4 2018, cloud computing was the lone bright spot for semiconductor companies following weakness in demand in autos, PCs, and smartphones, and the crash in cryptocurrency. Late in the year even cloud spending succumbed to the weakness.

Industry fundamentals have deteriorated from both an oversupply of NAND and DRAM chips and macroeconomic factors tied to the China trade war, which are affecting logic chip manufacturers and foundries. Capex spending reductions across the board will impact equipment manufacturers.

How ironic that the trade war with China is behaving like “death by a thousand cuts,” which is a form of torture and execution originating from Imperial China! At this point, I’m predicting 2019 WFE will be down 15%.


Data Center Optimization Through Game Theory
by Bernard Murphy on 02-13-2019 at 7:00 am

I always enjoy surprising synergies so I was immediately attracted to a Research Highlight in the Communications of the ACM this month, on a game-theoretic method to balance discretionary speed-ups (known as computational sprints) in data centers. If you don’t have an ACM membership and want to dig deeper, I include an open link at the end of this blog.

This research starts with the widely used trick to reduce job run-time on a processor by speeding up the clock. The downside is that running with a faster clock dissipates more heat so you can’t do it for too long otherwise you’ll fry your processor. Heat dissipates relatively slowly, so there’s a recovery time for cooling during which clock speed has to return to nominal or perhaps even slower. Like sprinting in a long-distance race; you can sprint periodically, but you can’t sprint through the whole race.

Also you’re not running this race alone. In a data center, and particularly in cloud environments, you’re in a load mix with many other jobs which also want to optimize their performance. Your job may swap onto a processor on which another job was just sprinting, or vice-versa; either way the second job loses a sprinting opportunity, at least until recovery. Or maybe multiple jobs in a rack want to sprint at the same time but that risks tripping the rack power supply, perhaps switching over to UPS with its own recovery cycles. Who loses out in this race for extra power?

If you know anything about game theory, this should sound very familiar. You have multiple players with no intrinsic reason to cooperate, all grabbing for more than their fair share of finite resources. The default (greedy) approach is simply to let it work itself out. Jobs grab whatever they can within the bounds of hardware limits. If a chip is going to overheat, hardware enforced slow-down kicks in and can’t be overridden. Similarly, if multiple chips in a rack want to sprint, early requestors make it, up to the power limit of the rack and later requestors are out of luck. Or perhaps they can exceed the power limit, the power supply trips and switches to UPS, as mentioned above, with its own discharge and recovery considerations.

Equally, if you know anything about game theory, you’ll know that the greedy method is less efficient overall than methods which take a more comprehensive approach to optimization. For those who don’t know much about game theory, this isn’t about altruism. You might believe that by being better at being greedy you can win out (the Wolf of Wall Street approach where you don’t care what happens to the others), but the probabilities are against you when others can behave similarly. You are likely to do worse following that strategy than if you cooperate more effectively; check out the prisoner’s dilemma.
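
To make the intuition concrete, here is a toy Python payoff matrix in the spirit of the prisoner's dilemma, with invented throughput numbers for two jobs that can either sprint greedily or wait their turn; it only illustrates why mutual greed can leave everyone worse off and is not the Duke model itself.

```python
# Toy two-job sprinting game with invented throughput payoffs.
# Sprinting while the other job waits is best for you; both sprinting at
# once trips the rack power budget and hurts everyone; waiting while the
# other sprints leaves you starved of sprint opportunities.
PAYOFFS = {
    # (job_a_action, job_b_action): (job_a_throughput, job_b_throughput)
    ("sprint", "sprint"): (2, 2),   # power trip, long recovery for both
    ("sprint", "wait"):   (6, 1),
    ("wait", "sprint"):   (1, 6),
    ("wait", "wait"):     (4, 4),   # both hold back; modest but safe throughput
}

def best_response(my_options, other_action, i_am_a):
    """Pick the action that maximizes my payoff given the other's action."""
    def my_payoff(action):
        key = (action, other_action) if i_am_a else (other_action, action)
        return PAYOFFS[key][0 if i_am_a else 1]
    return max(my_options, key=my_payoff)

# Greedy reasoning: whatever B does, A's best response is to sprint (and
# symmetrically for B), so both sprint and land on (2, 2) instead of the
# better cooperative outcome (4, 4).
print(best_response(["sprint", "wait"], "wait", i_am_a=True))    # sprint
print(best_response(["sprint", "wait"], "sprint", i_am_a=True))  # sprint
print(PAYOFFS[("sprint", "sprint")])                             # (2, 2)
```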

A team at Duke has researched several strategies for finding a more effective equilibrium between jobs. Each of these is based on an architecture in which a user job is supported by a selfish (to that job) mechanism that optimizes job performance through judicious sprinting in the job phases where it will have the most advantage. Sprinting profiles are shared in advance with a central coordinator, which is responsible for returning recommended profiles that each job should use to ensure an equilibrium strategy across the system. Per job, the mechanism then uses its assigned profile in driving sprinting.

One quick note first. This approach avoids centralized command and control for each sprinting decision, which could be very burdensome to overall performance. Instead, jobs and the coordinator need to communicate only infrequently, as long as the exchanged profiles remain representative of actual activity.

More important, consider how the game plays out. If each job follows its assigned profile and the coordinator calculated an effective equilibrium strategy, jobs should remain in that equilibrium, achieving optimal throughput. What if a job cheats and offers a non-representative profile to the coordinator or chooses not to follow the equilibrium profile the coordinator returns? In either case, the cheating job is likely to suffer because, by not following the computed equilibrium strategy it is most likely to fall into less optimal performance. The only way it could (possibly) avoid this and cheat its way to better performance would require knowing what profiles other jobs have, and it doesn’t have access to that information.

In their experiments, the Duke researchers found the gaming approach provided a 4-6x total task throughput improvement over greedy approaches for data analytics workloads and close to optimal throughput based on a globally optimized policy. Not bad, and an impressive use of a math/economics theory you might never have considered relevant to this domain. You can access an open copy of the Duke paper here.


Renaming and Refactoring in HDL Code
by Daniel Nenni on 02-12-2019 at 12:00 pm

I’ve enjoyed my past discussions with Cristian Amitroaie, the CEO of AMIQ EDA, in which we covered their Design and Verification Tools (DVT) Eclipse Integrated Development Environment (IDE) and their Verissimo SystemVerilog Testbench Linter. Cristian’s descriptions of AMIQ’s products and customers have intrigued me. They seem to be doing very well selling “utility” tools that are largely ignored by the big EDA vendors. The more I learn about AMIQ, the more I see that their tools are successful because they solve real-world user problems when writing, debugging, and maintaining hardware description language (HDL) design and verification code. I have also come to appreciate that there is a lot of technology “under the hood” to enable the tasks they automate.

Cristian likes to use the term “refactoring” when describing some of these tasks. I have not heard this word very often in the context of hardware design, and so I did a little research. I found an interesting site called “Refactoring.Guru” that offers this crisp definition:

“Refactoring is a controllable process of improving code without creating new functionality. Refactoring transforms a mess into clean code and simple design.”

Thus, refactoring means making changes in code to improve readability and comprehensibility. My search for this term turned up many references for software programs, but very few for hardware. Modern HDLs, especially SystemVerilog, are very complex and encompass many ideas borrowed from software, including coverage, assertions, and object-oriented programming (OOP). So, it seems natural that refactoring could apply to hardware design code as well.

I learned from Cristian that there are many types of refactoring possible, ranging from consistent formatting of white space and code alignment to significant transformations of the code. Rich languages such as SystemVerilog offer multiple ways to achieve the same functionality. Choosing one way and refactoring the code to follow the chosen style clearly makes it easier to maintain, especially when it is re-used or passed on to someone who was not the original coder. Some types of refactoring may also make HDL code more suitable for the EDA tools that consume it. For example, one style may simulate more efficiently than another, even when the functionality is identical.

I asked Cristian for an example of refactoring that is very common, and he replied with the task of renaming an element (variable, port, function, class, etc.) in the HDL code. This surprised me; renaming at first seems like a simple matter of find-and-replace in a text editor. But it’s not that simple. For one thing, searches often find a lot of similar names. Engineers tend to use common terms such as “counter” or “size” that may apply to many places in the code, and short names often appear as part of longer names. Not all text editors have find-and-replace functions with enough wildcard or regular-expression features to define unambiguously the name to be changed.
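
To illustrate the pitfall, here is a small Python sketch of why a naive find-and-replace on a name like count is risky, and how a word-boundary regular expression narrows the damage while still lacking the design-level awareness an IDE has; the module and signal names are invented for the example.

```python
import re

rtl = """
module counter(input clk, output reg [7:0] count, output reg [7:0] count_max);
  always @(posedge clk) begin
    count <= count + 1;
    if (count > count_max) count_max <= count;
  end
endmodule
"""

# Naive textual replace also mangles 'counter' and 'count_max'.
naive = rtl.replace("count", "cycle_cnt")

# A word-boundary regex only touches the standalone identifier 'count',
# but it still cannot tell one module's 'count' from another module's.
scoped = re.sub(r"\bcount\b", "cycle_cnt", rtl)

print("counter" in naive)                          # False: module name corrupted
print("module counter" in scoped)                  # True: other identifiers intact
print("cycle_cnt <= cycle_cnt + 1" in scoped)      # True: intended rename applied
```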

Renaming is much harder if there are ripple effects beyond the file being edited. Renaming an input or output, or an element used outside of the file, means that other files must be edited. Historically, engineers have used the Unix family of “grep” utilities to search through the design and verification files to identify where the name occurs, and then edit each such file to make the changes. This is obviously an inefficient process. Depending upon the language and the text editor, it may be possible to use a “Ctags” index to identify more easily which names appear in which file. However, generating and maintaining such an index takes additional effort, especially when the list of HDL design and verification source files, header files, and libraries changes constantly through a project.

As Cristian points out, an integrated development environment such as AMIQ’s DVT Eclipse IDE makes renaming a much simpler task. There’s no need for an index file; the tool takes the same file list used by the simulator, reads in every file, and compiles them together into a single model of the complete design and verification environment. In the IDE’s graphical user interface (GUI), the user selects the element to be renamed and types in the new name. The GUI shows all relevant files, and the user can preview the proposed changes in each file. Renaming happens instantly and accurately, updating both the internal model and the HDL files themselves. This level of speed and accuracy is possible only because the IDE has a complete compiled model and “knows” how every signal is connected and how every code element is used.


Figure 1: The IDE automates renaming of variables and other code elements.

I think that renaming is a great example of a seemingly simple HDL coding task that benefits from an IDE and requires strong technology underneath the spiffy GUI. I look forward to learning more about the more advanced forms of refactoring and how DVT Eclipse IDE automates them. Thanks to AMIQ for the education and for providing the screen shot.
To learn more, visit https://dvteclipse.com/products/dvt-eclipse-ide.

Also Read

I Thought that Lint Was a Solved Problem

Easing Your Way into Portable Stimulus

CEO Interview: Cristian Amitroaie of AMIQ EDA


GLOBALFOUNDRIES Talks About Enabling Development of 5G ICs
by Tom Simon on 02-12-2019 at 7:00 am

5G is in the news again. Sprint has mounted a legal challenge against AT&T, claiming that AT&T is misleading people into believing that they are already offering 5G. While AT&T is about to start testing 5G, it has also sent out updates that cause customer phones to display 5GE when they are still on 4G LTE systems. The truth is that 5G will be rolling out soon; however, it is not just going to make your phone faster. 5G is a whole new set of ways that devices can communicate. There are three ranges of frequencies that 5G will be using in an integrated fashion.

The low frequency band will be centered around 700MHz, which will propagate well outdoors and will help serve rural areas. The next range of frequencies will be around 2 to 4GHz, and will offer much higher data rates than existing LTE service. At the high end will be frequencies centered around 26 GHz, and 66 to 71 GHz. As you are probably aware, millimeter wavelength (mmWave) frequencies like these do not travel far in the air. This makes them ideal for line of sight transmission in dense urban areas. mmWave frequencies can offer extremely low latency and very high bandwidth.

A big challenge for the semiconductor industry is designing the RF chips needed to build base stations and handsets. It is worth pointing out, though, that 5G will probably also appear in the home as “fixed wireless” internet service. Verizon is already working on this. But, back to mmWave design. I recently had a chance to talk with Peter Rabbeni, GLOBALFOUNDRIES RF Business Unit VP, around the time when he presented on RF EDA for 5G at a DesignCon event sponsored by ANSYS.

One of the important technologies that will make mmWave feasible for handsets is phased array antennas. They allow focusing the beam so it sends maximum energy toward the desired receiver. Due to propagation losses, omnidirectional signals would require too much energy and battery life would suffer. Peter talked about how EDA solutions are critical to enable TTM (time to market) for 5G mmWave applications. As part of this he cites electromagnetic simulation, passive and t-line design tools, thermal modeling, fill pattern generation and chip-package co-simulation. Without all of these elements available in the flow and enabled by foundry deliverables, it would not be possible to build the high frequency RF chips and assemblies needed for 5G.
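
As a rough illustration of what beam focusing buys, here is a minimal Python sketch of the array factor of an ideal uniform linear array steered toward a chosen angle; the element count and spacing are arbitrary assumptions, not GLOBALFOUNDRIES figures.

```python
import cmath, math

def array_factor_db(n_elements, spacing_wl, steer_deg, look_deg):
    """Normalized array factor (dB) of a uniform linear array.

    spacing_wl is element spacing in wavelengths; steer_deg is where the
    phase shifters point the beam; look_deg is the observation angle.
    """
    k = 2 * math.pi  # radians per wavelength
    total = sum(
        cmath.exp(1j * k * spacing_wl * i *
                  (math.sin(math.radians(look_deg)) - math.sin(math.radians(steer_deg))))
        for i in range(n_elements)
    )
    return 20 * math.log10(abs(total) / n_elements)

# An 8-element, half-wavelength-spaced array steered to 30 degrees:
# full gain on the steered beam, far less energy a few degrees off of it.
print(round(array_factor_db(8, 0.5, 30, 30), 1))   # 0.0 dB at the steered peak
print(round(array_factor_db(8, 0.5, 30, 10), 1))   # about -13.6 dB off the beam
```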

Electromagnetic simulation is used for signal integrity, parasitic extraction and antenna radiation performance. At frequencies around 60 GHz EM simulation is a must have. It is extremely useful for identifying coupling and crosstalk. It also plays a major role in passive design. Peter told me that GLOBALFOUNDRIES has included tools for designing things like inductors inside of their PDKs. They have three tools here: RF Optimum Inductor Finder, RF/mmWave Optimum Transformer Finder and RF/mmWave Optimum Transmission Line Finder. With these, designers can create these critical structures knowing they are the best fit for their design.

Thermal modeling is very important for the PA arrays found in 5G devices. Peter talked about how important PA efficiency is for managing thermal issues. Inefficient PAs dissipate more thermal energy, and in a PA array this can affect neighboring elements. Thermal coupling between PAs must be considered in simulations or important device behavior might be overlooked.

Peter talked about how GLOBALFOUNDRIES has solved a vexing problem for RF designers. Traditionally foundries were very strict and made it difficult to go to fab with the optimal fill for certain devices. GLOBALFOUNDRIES offers customer controlled fill for sensitive RF devices. Now designers can specify the amount and type of fill they need. This will lead to better circuit performance without yield impacts.

The last piece is their chip-package co-design flow. This allows improvement of RF performance and power efficiency. By designing concurrently, silicon, package and system can be optimized and validated with fewer iterations before tape out. This flow has been validated on their 22FDX process which targets mmWave designs.

Consumers will see revolutionary changes when products compliant with the 5G specification are rolled out with the proper supporting infrastructure. There is a lot of work ahead for all members of the food chain. But it all starts with the silicon that will support the entire ecosystem. GLOBALFOUNDRIES is putting a lot of effort into development that supports RF designs for these applications.


Semiconductor Security and Sleep Loss
by Daniel Nenni on 02-11-2019 at 12:00 pm

One of the semiconductor topics that keeps me up at night is security. We track security related topics on SemiWiki and while the results are encouraging, we still have a very long way to go. Over the last three years we have published 148 security related blogs that have garnered a little more than 400,000 views. Security touches every market we track: IoT, mobile, automotive, AI, and 5G so there should be more, absolutely.

Security breaches are happening at an alarming pace and we are working very hard to keep the cloud and edge devices safe, believe me, but we are just not writing about it. Unfortunately, now that security breaches are commonplace it really is not clickable news anymore.

Frankly, if the masses knew how unsecure our devices really are, everyone would be losing sleep. Just wait until autonomous automobiles are clogging our transportation arteries. Hackers will have a field day. If losing control of your laptop or phone does not scare you, just wait until hackers take control of your car!

It is interesting to note that my grandchildren will not need to learn how to actually “drive” a car. They will just get in and tell the car where they want to go. That is a big change of life. I remember the anticipation of getting my license on my 16th birthday. So much work and responsibility. We even repaired our own cars back then and knew exactly how they worked. Today, not so much, but I digress…

The point is semiconductor security is a big deal and will touch every piece of silicon we manufacture. Thankfully security is now playing a much bigger role in our conferences including the upcoming DVCon:

System-Level Security Verification Starts with the Hardware Root of Trust
Speaker: Jason Oberg – Tortuga Logic
Organizer: Jonathan Valamehr – Tortuga Logic

With the seemingly continuous discovery of security vulnerabilities at the hardware/software boundary, a new awareness has been built around hardware as the basis for system security. An emerging trend to reduce the likelihood of vulnerabilities is the utilization of a Hardware Root of Trust (HRoT) as the foundation for a secure system. HRoTs are responsible for many of the security features on a chip including secure boot, secure debug, key provisioning and management, and memory isolation. While employing an HRoT has now become a necessity, HRoTs have a vast amount of components and verifying that a secure system has been built around them is a daunting task.

Unfortunately, the current manual techniques for HRoT security analysis tend to miss many unobvious system-level security vulnerabilities. A major reason for the unsuccessful identification of security vulnerabilities is the lack of sophisticated tools that specifically target security verification. Without these, engineers are left to manually review state diagrams, manually review design files, and postulate on design and architecture specifications. This ends up being extremely time-consuming, is not automated and thus susceptible to human error, and consequently leaves systems susceptible to costly vulnerabilities that often can compromise a vendor’s customer data.

In order to properly verify the security of a system built around a HRoT, several challenges need to be addressed. In this workshop, we discuss the state of hardware security in general, then discuss how HRoTs are employed in systems today ranging from the datacenter to the IoT edge. We will also discuss common attacks on HRoT implementations, and the damage that can occur without adequate security verification. We then discuss common hardware security verification techniques, as well as their benefits and drawbacks. Next, we will present the best-in-class techniques and methodologies for verifying the security of a HRoT, and how these techniques can be employed across the entire design and verification lifecycle. Lastly, we will present an example security analysis on a real world HRoT using the discussed techniques. The security analysis will showcase the entire process from threat model specification to tangible results.
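
As a loose illustration of the kind of reasoning that information-flow security verification automates, here is a toy Python sketch that propagates a 'secret' taint through a few register assignments and flags a leak to an unprivileged sink; it is a conceptual example only, not a representation of how Tortuga Logic's tools work.

```python
# Toy information-flow (taint) tracker: does data derived from the secret
# key ever reach the debug output?

assignments = [
    ("key_reg", ["key_otp"]),          # root of trust key loaded from OTP
    ("aes_out", ["key_reg", "plaintext"]),
    ("debug_bus", ["aes_out"]),        # suspicious: key-derived data on debug
    ("led_status", ["boot_done"]),
]

def tainted_signals(sources):
    """Forward-propagate taint from the given source signals."""
    tainted = set(sources)
    changed = True
    while changed:
        changed = False
        for dest, srcs in assignments:
            if dest not in tainted and any(s in tainted for s in srcs):
                tainted.add(dest)
                changed = True
    return tainted

leaks = tainted_signals({"key_otp"}) & {"debug_bus", "led_status"}
print(leaks)  # {'debug_bus'} -> key-derived data reaches an unprivileged sink
```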

Jason and Johny are very approachable guys, as am I, so I hope to see you there…

DVCon is the premier conference for discussion of the functional design and verification of electronic systems. DVCon is sponsored by Accellera Systems Initiative, an independent, not-for-profit organization dedicated to creating design and verification standards required by systems, semiconductor, intellectual property (IP) and electronic design automation (EDA) companies. In response to global interest, in addition to DVCon U.S., Accellera also sponsors events in China, Europe and India. For more information about Accellera, please visit www.accellera.org. For more information about DVCon U.S., please visit www.dvcon.org. Follow DVCon on Facebook https://www.facebook.com/DvCon or @dvcon_us on Twitter or to comment, please use #dvcon_us.


A Detailed History of Samsung Semiconductor
by Daniel Nenni on 02-11-2019 at 7:00 am

From our book “Mobile Unleashed”, this is a detailed history of Samsung semiconductor:

Conglomerates are the antithesis of focus, and Samsung is the quintessential chaebol. From humble beginnings in 1938 as a food exporter, Samsung endured the turmoil and aftermath of two major wars while diversifying and expanding. Its early businesses included sugar refining, construction, textiles, insurance, retail, and other lines mostly under the Cheil and Samsung names.
Continue reading “A Detailed History of Samsung Semiconductor”