Integrated Power Management IP to Decrease Power and Cost
by Eric Esteve on 09-19-2018 at 7:00 am

This blog is a synthesis of the white paper “New Power Management IP Solution from Dolphin Integration can dramatically increase SoC Energy Efficiency”, which can be found on the Dolphin Integration web site.

The power consumption generated by complex chips was not a real issue when the system could simply be plugged into the wall to receive electricity. The most important feature used to be raw performance, expressed in GHz or MIPS, and was used to market PCs to end users, for example.

Nevertheless, with the massive adoption of wireless mobile devices in the 2000s and later, the metric has changed. For battery-powered devices, the time between two battery charges became almost as important as the MIPS performance delivered by the phone/smartphone. That’s why efficient but costly external power management solutions ($7 for the iPhone 6, up to $14 for the iPhone X) have been implemented in smartphones.

With the massive adoption of battery-powered systems expected in IoT, industrial and medical applications, power efficiency is becoming a mandatory requirement, together with low system cost. To reach these goals, SoCs will have to be architected for low power and integrate power management IP on-chip.

In fact, power consumption is also becoming a key concern in automotive applications (ADAS) and server/network infrastructure. These applications will have to be much more energy efficient, for different reasons such as reliability (automotive) or cost (raw electricity and cooling costs in data centers). In short, the entire semiconductor industry will have to offer more energy-efficient devices, at the right cost. The way to do it is to define power-aware architectures and design chips that integrate power management IP.

In the white paper “New Power Management IP Solution from Dolphin Integration can dramatically increase SoC Energy Efficiency”, Dolphin Integration reviews the solutions available to decrease SoC power consumption. The first technique is to define and manage power domains inside the SoC, well known in the wireless mobile industry, but other techniques can be considered as well.

Dynamic voltage and frequency scaling (DVFS), near-threshold voltage operation, body biasing and globally asynchronous locally synchronous (GALS) design are also reviewed and discussed in this paper. But a low-power SoC has a much smaller noise margin, and special care must be taken to preserve signal integrity in such an SoC. In other words, you need experts to support you when implementing these new power management solutions.
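
To see why a technique like DVFS pays off, recall the first-order dynamic power relation P_dyn ≈ α·C·V²·f: because power scales with the square of the supply voltage, lowering voltage and frequency together cuts power much faster than it cuts performance. The short sketch below, with illustrative values only (not taken from the white paper), makes that arithmetic concrete.

```python
# Minimal sketch (illustrative values, not from the white paper): first-order
# CMOS dynamic power model showing why DVFS helps. Scaling V and f together
# gives a roughly cubic power reduction for a linear performance reduction.

def dynamic_power(alpha, c_eff, vdd, freq):
    """First-order dynamic power estimate in watts.

    alpha : average switching activity (0..1)
    c_eff : effective switched capacitance in farads
    vdd   : supply voltage in volts
    freq  : clock frequency in hertz
    """
    return alpha * c_eff * vdd ** 2 * freq

if __name__ == "__main__":
    # Hypothetical operating points for illustration only.
    nominal = dynamic_power(alpha=0.15, c_eff=2e-9, vdd=1.0, freq=1.0e9)
    scaled  = dynamic_power(alpha=0.15, c_eff=2e-9, vdd=0.8, freq=0.7e9)
    print(f"nominal: {nominal:.3f} W, DVFS point: {scaled:.3f} W "
          f"({100 * (1 - scaled / nominal):.0f}% lower)")
```

Here a 20 percent voltage reduction combined with a 30 percent frequency reduction cuts dynamic power by more than half, which is the basic trade DVFS exploits.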

In the second part of the paper, the author explains how to implement power management IP in a customer’s SoC. The design team has to start at the architecture level to define a “power-aware” SoC architecture. The first task for the designer is to identify the various functions belonging to the same power domain. A power domain is not simply defined by voltage, but with respect to the functionality of the various blocks expected to take part in the same task in a given power mode.

Once you have split the chip into power islands (memory, logic, analog, always-on, etc.) it’s time to control, gate (power gating, clock gating) and distribute power and clock in the SoC. This is done by implementing the various power management IP to power the SoC core; the designer will select the controls for power switches, voltage regulators (VREG) or body-biasing generators, and clock gating and distribution, all part of the power network IP portfolio from Dolphin Integration.

The implementation of the SoC power mode control is made straightforward thanks to a modular IP solution (named Maestro). This power mode control IP is equivalent to an external Power Management IC (PMIC), similar to those integrated in a smartphone. Maestro manages the start-up power sequence and power mode transitions, and optimizes the power regulation network.
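
As an illustration of what such a power mode controller does, here is a minimal software sketch of start-up sequencing and mode transitions. The mode names, domains and ordering are hypothetical; this is not Dolphin Integration’s actual Maestro IP, just the general idea of walking each domain through an ordered sequence per mode.

```python
# Illustrative sketch only: a software model of a power mode controller that
# sequences power domains through mode transitions. Modes, domains and the
# ordering below are made up for illustration.

SEQUENCES = {
    # mode: ordered list of (domain, state) steps applied during entry
    "ACTIVE": [("always_on", "on"), ("memory", "on"), ("logic", "on"), ("analog", "on")],
    "SLEEP":  [("analog", "off"), ("logic", "retention"), ("memory", "retention"),
               ("always_on", "on")],
    "OFF":    [("analog", "off"), ("logic", "off"), ("memory", "off"), ("always_on", "on")],
}

class PowerModeController:
    def __init__(self):
        self.domains = {domain: "off" for domain, _ in SEQUENCES["ACTIVE"]}
        self.mode = "OFF"

    def enter(self, mode):
        """Apply the entry sequence for `mode`, one domain at a time."""
        for domain, state in SEQUENCES[mode]:
            self._set_domain(domain, state)
        self.mode = mode

    def _set_domain(self, domain, state):
        # In silicon this step would drive power switches, regulators and
        # clock gates; here we simply record the resulting state.
        self.domains[domain] = state

ctrl = PowerModeController()
ctrl.enter("ACTIVE")   # start-up sequence
ctrl.enter("SLEEP")    # mode transition with retention
print(ctrl.mode, ctrl.domains)
```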

Dolphin Integration claims to offer faster TTM and lower cost for customers who select its power management IP solution instead of trying to design a low-power SoC on their own. Because the design team implements the power management controller (Maestro) as an IP in the SoC, the system BOM is lower than with an external PMIC (remember the impact of the PMIC on the iPhone 6 and X BOM).

Faster TTM is made possible thanks to technical support from expert engineers, from the top level (power-aware architecture definition) down to the design phase (implementation of the various power management IP). This expertise in low-power design helps avoid traps like new noise issues due to mode transitions in an SoC partitioned into power domains, or signal-integrity susceptibility to crosstalk when operating at extremely low voltage levels.

In the golden age of Moore’s law, a chip maker had only to redesign for the next technology node to automatically benefit from higher performance and lower power consumption at the same time. Today, the semiconductor industry is addressing new applications like edge IoT, medical wearable devices and many more, requiring low-cost ICs probably designed on mature technology nodes.

Frequently battery powered, these SoCs require ultra-low power consumption, which can only be reached by using power management techniques. In this white paper, Dolphin Integration describes these new techniques and how to implement power management IP from the architecture level down to the back-end. This is not an easy task, especially when doing it for the first time. That’s why the SoC design team needs technical support from low-power design experts and a silicon-proven IP portfolio to release an energy-efficient device, in line with the time-to-market requirement.


To see the White Paper, go to this link

By Eric Esteve from IPnest


Beyond DRC and LVS, why Reliability Verification is used by Foundries
by Daniel Payne on 09-18-2018 at 12:00 pm

Reliability of ICs isn’t a new thing; back in 1980 I was investigating why a DRAM chip using 6um technology was having yield loss due to electromigration effects. I recall looking through a microscope at a DRAM layout and slowly ramping up the Vdd level; suddenly the shiny aluminum interconnect started to change colors and actually bubble from the high currents, and then the metal evaporated. We never could identify the cause of that reliability failure, so it’s kind of haunted me all of these years.

IC designers today will start out a new project by requesting the foundry files for DRC (Design Rule Check), LVS (Layout Versus Schematic) and PEX (Parasitic EXtraction) so that they can perform physical verification tasks to ensure high-yielding silicon and meet timing specifications. In addition to these checks there are an increasing number of reliability checks that need to be done, like:

  • ESD (Electro Static Discharge)
  • LUP (Latch-up)
  • Interconnect reliability
  • Electrical overstress

The number one foundry is TSMC, so no surprise that they have also been at the forefront of providing reliability checks for:

  • ESD
  • LUP
  • Point-to-point resistance
  • Current Density
  • Layout-based rules

Another foundry, TowerJazz, has been offering reliability checks for automotive IC designers as they follow the ISO 26262 standard for functional safety. These checks include:

  • ESD
  • Charge Device Model (CDM)
  • Analog design constraints

    • Device alignment
    • Symmetry
    • Orientation/parameter matching

Both TSMC and TowerJazz support Mentor’s EDA tool Calibre PERC for the reliability checks mentioned above.

When adopting a set of foundry reliability design rules, you need to understand what these rules are doing and how your company’s reliability goals align with them. As a starting point, the foundry rules for ESD and LUP can be used; then your team or company can decide to extend the rules to check for certain conditions:

  • Each IP is implemented correctly
  • LUP that is context aware
  • Interconnect analysis
  • Stacked device in context of full chip
  • Power ties correct per well

Best practice is to run the reliability checks on each IP block of your chip, so that with final integration there are no surprises that need to be fixed. Reusing IP on a new project but with multiple power domains is another good reason to run reliability checks, as shown below:


Multiple power domains require validation

Verifying IP individually is a good start, but not sufficient for full-chip integration because context matters. Here are three cases where context matters:

  • ESD and EOS protection
  • Voltage-aware DRC
  • CDM checking for point-to-point resistance values


Reliability checking with full-chip context

If the bulk node is connected to a higher voltage level than the gate, then it causes a reliability issue for EOS. We do voltage-aware DRC checks to verify that time-dependent dielectric breakdown (TDDB) isn’t happening. Avoiding CDM issues is accomplished through detailed point-to-point resistance checking.
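
As a conceptual illustration of the kind of rule involved (this is not Calibre PERC syntax, just a sketch with made-up device and voltage data), a voltage-aware check walks the annotated netlist, compares the propagated voltage on each device’s bulk node against its gate, and flags any delta beyond what the gate oxide can tolerate.

```python
# Conceptual sketch only (not Calibre PERC syntax): flag devices whose bulk
# node carries a higher propagated voltage than the gate can tolerate, the
# EOS/TDDB condition described above. All names and values are hypothetical.

# Hypothetical netlist annotation: device name -> (gate net, bulk net)
DEVICES = {"M1": ("net_gate_a", "vdd_io"), "M2": ("net_gate_b", "vdd_core")}

# Hypothetical propagated voltages per net, in volts
NET_VOLTAGE = {"net_gate_a": 0.9, "net_gate_b": 0.9, "vdd_io": 1.8, "vdd_core": 0.9}

GATE_OXIDE_LIMIT = 0.4  # max allowed bulk-to-gate delta for this (made-up) device type

def voltage_aware_check(devices, net_voltage, limit):
    """Return (device, delta) pairs where bulk exceeds gate by more than `limit`."""
    violations = []
    for name, (gate, bulk) in devices.items():
        delta = net_voltage[bulk] - net_voltage[gate]
        if delta > limit:
            violations.append((name, delta))
    return violations

print(voltage_aware_check(DEVICES, NET_VOLTAGE, GATE_OXIDE_LIMIT))
# -> [('M1', 0.900...)] : M1's bulk sits ~0.9 V above its gate, exceeding the limit
```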

Summary
I’ll never forget the transition from manual DRC and LVS to automated checks; what a relief for IC designers. The same thing is happening with reliability checks, so start using the foundry-supplied reliability rule checks, then add to them as needed for your design and reliability goals. The Calibre PERC tool has been around for a while, it’s ready for your designs, and it is supported by the major foundries.

White Paper
Mentor has an 8 page White Paper on this topic, so start the download process here.


Webinar: Multicycle Vectorless Dynamic IR Signoff for Near 100 Percent Coverage
by Bernard Murphy on 09-18-2018 at 7:00 am

Check this webinar out – Mediatek will share a novel approach to early IR drop estimation. Competition in system design has become even more intense because potential markets are huge and there are more players with deep pockets chasing those markets. Wherever you are in those value chains, you want to shift everything left to accelerate schedules. But there’s a challenge in that goal; what do you do if the step you want to accelerate depends on simulation vectors? Dynamic IR signoff is one such application.

REGISTER HERE for this webinar on October 2nd, 2018 at 9am PDT.

You could wait until late in design but then you take the risk that you’ll find a significant weakness when you’re almost out of time. If you start early, you can use a vectorless approach, but the general view has been that these are too inaccurate to provide effective guidance. Necessity (or at least competition) apparently is the mother of invention. MediaTek will share in this webinar how they use a vectorless approach to get to very high IR coverage, isolate hotspots and increase confidence in their final signoff.

REGISTER HERE for this webinar on October 2nd, 2018 at 9am PDT.

Summary
Next-generation mobile, automotive and networking applications require advanced SoCs that deliver greater functionality and higher performance at much lower power. For these SoCs the margins are smaller and the schedules tighter, while costs are higher, so faster convergence with exhaustive coverage is imperative for first-time silicon success.

For next-generation applications, the number of vectors for which you need to run simulations has increased many-fold. It is nearly impossible to uncover potential design weaknesses when you are simulating a handful of vectors covering just a fraction of a second, so how do you ensure you have enough design coverage? The common approach to dynamic IR signoff uses vector patterns with true delay information; it is performed late in the design cycle, requires long simulation times and yields limited IR coverage due to the limited vector patterns. Published traditional vectorless approaches produce unsatisfactory IR coverage and poor correlation to vector patterns.

Attend this webinar to learn how MediaTek’s novel multicycle vectorless flow uses a design power target, selectively sets a high toggle rate and enables state-propagation features in ANSYS RedHawk. The new flow achieves 97 percent IR coverage of non-memory cells and 100 percent IR coverage of memory cells, captures IR hotspots and reduces run time by 3X, accelerating signoff and improving power grid quality.
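
To make the vectorless idea concrete, here is a minimal sketch, an illustration only and not the actual ANSYS RedHawk flow: instead of deriving switching activity from simulation vectors, assumed toggle rates are assigned per cell class and the per-cell dynamic power is summed toward a design power target. The cell list and toggle rates below are made up.

```python
# Conceptual illustration only, not the actual RedHawk flow: a vectorless
# estimate assigns toggle rates to cells instead of deriving them from
# simulation vectors, then sums per-cell dynamic power.

CELLS = [
    # (name, effective switched capacitance in farads, is_memory)
    ("cpu_alu_u1", 4.0e-13, False),
    ("cpu_reg_u2", 2.5e-13, False),
    ("l2_sram_m1", 9.0e-13, True),
]

VDD = 0.8       # volts
FREQ = 2.0e9    # hertz

def vectorless_power(cells, toggle_logic, toggle_mem):
    """Sum 0.5 * alpha * C * V^2 * f over all cells with assigned toggle rates."""
    total = 0.0
    for _, c_eff, is_mem in cells:
        alpha = toggle_mem if is_mem else toggle_logic
        total += 0.5 * alpha * c_eff * VDD ** 2 * FREQ
    return total

# Selectively pessimistic toggle rates (made-up values) push the estimate toward
# a design power budget so that IR hotspots are stressed without vectors.
print(f"{vectorless_power(CELLS, toggle_logic=0.4, toggle_mem=0.2):.4e} W")
```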

Speakers
Annapoorna Krishnaswamy, Product Marketing Manager at ANSYS

Huajun Wen is a distinguished engineer at MediaTek. She is currently responsible for developing new IR signoff flows to ensure robust and efficient power grid designs for MediaTek products. She has comprehensive knowledge and expertise in modeling and reducing IR from the PMIC to the transistors. Her experience spans chip power modeling and characterization, physical design and timing methodology, and on-chip power conversion techniques. Prior to joining MediaTek in 2013, Huajun was an IBM Senior Technical Staff Member, holding various physical design leadership positions on IBM Power and mainframe processor designs as well as the Cell processor used in the Sony PlayStation 3. She has published 20+ technical papers and holds 10+ granted US patents. Huajun received her Ph.D. in solid state physics from the Free University of Berlin, Germany.

About ANSYS
If you’ve ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge, or put on wearable technology, chances are you’ve used a product where ANSYS software played a critical role in its creation. ANSYS is the global leader in engineering simulation. We help the world’s most innovative companies deliver radically better products to their customers. By offering the best and broadest portfolio of engineering simulation software, we help them solve the most complex design challenges and engineer products limited only by imagination.


Facebook and WhatsApp are not just flawed they are downright dangerous
by Vivek Wadhwa on 09-17-2018 at 7:00 am

Facebook’s woes are spreading globally, first from the U.S., then to Europe and now in Asia. A study by researchers at the University of Warwick in the U.K. has conclusively established that Facebook has been fanning the flames of hatred in Germany. The study found that the rich and the poor, the educated and the uneducated, and those living in large cities and those in small towns were alike susceptible to online hate speech on refugees and its incitement to violence, with incidence of hate crimes relating directly to per-capita Facebook use.

And during Germany-wide Facebook outages, which resulted from programming or server problems at Facebook, anti-refugee hate crimes practically vanished — within weeks.

As The New York Times explains, Facebook’s algorithms reshape a user’s reality: “These are built around a core mission: promote content that will maximize user engagement. Posts that tap into negative, primal emotions like anger or fear, studies have found, perform best and so proliferate.”

Facebook started out as a benign open social-media platform to bring friends and family together. Increasingly obsessed with making money, and unhindered by regulation or control, it began selling advertising access to its users to anybody who would pay. It focused on gathering all of the data it could about them and keeping them hooked to its platform. More sensational Facebook posts attracted more views, a win-win for Facebook and its hatemongers.

In countries such as India, WhatsApp is the dominant form of communication. And sadly, it is causing even greater carnage than Facebook is in Germany; there have already been dozens of deaths.

WhatsApp was created to send text messages between mobile phones. Voice calling, group chat, and end-to-end encryption were features that were bolted on to its platform much later. Facebook acquired WhatsApp in 2014 and started making it as addictive as its web platform — and capturing data from it.

The problem is that WhatsApp was never designed to be a social-media platform. It doesn’t allow even the most basic independent monitoring. For this reason, it has become an uncontrolled platform for spreading fake news and hate speech. It also poses serious privacy concerns due to its roots as a text-messaging tool: users’ primary identification being a mobile number, people are susceptible everywhere and at all times to anonymous harassment by other chat-group members.

On Facebook, when you see a posting, you can, with a click, learn about the person who posted it and judge whether the source is credible. On WhatsApp, with no more than a phone number and possibly a name to go by, there is no way to know the source or intent of a message. Moreover, anyone can contact users and use special tools to track them. Imagine the dangers to children who happen to post messages in WhatsApp groups, where it isn’t apparent who the other members are; or the risks to people being targeted by hate groups.

Facebook faced a severe backlash when it was revealed that it was seeking banking information to boost user engagement in the U.S. In India, it is taking a different tack, adding mobile-payment features to WhatsApp. This will dramatically increase the dangers. All those with whom a user has ever transacted can harass them, because they have their mobile number. People will be tracked in new ways.

Facebook is a flawed product, but its flaws pale in comparison to WhatsApp’s. If these were cars, Facebook would be the one without safety belts — and WhatsApp the one without brakes.

That is why India’s technology minister, Ravi Shankar Prasad, was right to demand, this week, that WhatsApp “find solutions to these challenges which are downright criminal and violation of Indian laws.” The demands he made, however, don’t go far enough.

Prasad asked WhatsApp to operate in India under an Indian corporate entity; to store Indian data in India; to appoint a grievance officer; and to trace the origins of fake messages. The problems with WhatsApp, though, are more fundamental. You can’t have public meeting spaces without any safety and security measures for unsuspecting citizens. WhatsApp’s group-chat feature needs to be disabled until it is completely redesigned with safety and security in mind. This on its own could halt the carnage that is happening across the country.

India — and the rest of the world — also need to take a page from Germany, which last year approved a law against online hate speech, with fines of up to 50 million euros for platforms such as Facebook that fail to delete “criminal” content. The E.U. is considering taking this one step further and requiring content flagged by law enforcement to be removed within an hour.

The issue of where data are being stored may be a red herring. The problem with Facebook isn’t the location of its data storage; it is, rather, the uses the company makes of the data. Facebook requires its users to grant it “a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content” they post to the site. It assumes the right to use family photos and videos — and financial transactions — for marketing purposes and to resell them to anybody.

Every country needs to have laws that explicitly grant their citizens ownership of their own data. Then, if a company wants to use their data, it must tell them what is being collected and how it is being used, and seek permission to use it in exchange for a licensing fee.

The problems arising through faceless corporate pillage are soluble only through enforcement of respect for individual rights and legal answerability.

For more, read my book “Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain — and How to Fight Back” and follow me on Twitter:@wadhwa.


UMC and GF or Samsung and GF?
by Daniel Nenni on 09-17-2018 at 7:00 am

One of the interesting rumors in Taiwan last week was the possibility that UMC and GF will do a deal to merge or UMC will buy some GF fabs. I have talked to quite a few industry experts about it and will talk to more this week at the GSA US Executive Forum (more at the end). The US Executive Forum is what they call a C Level event which means it is invitation only and expensive.

This year’s program looks very good. Notice the heavy AI emphasis, as I have said many times before AI will touch most every chip and will keep pushing the leading edge processes, absolutely. EDA CEOs Wally Rhines and Aart de Geus will be there. Wally does a great “Industry Vision” loaded with facts and figures and Aart is not afraid to ask the difficult questions on his panel so both of these talks should be interesting.

Keynote: Looking To The Future While Learning From The Past, Daniel Niles / Founding Partner / AlphaOne Capital Partners

Keynote: Convergence of AI Driven Disruption: How multiple digital disruptions are changing the face of business decisions, Anthony Scriffignano / Senior Vice President & Chief Data Scientist / Dun & Bradstreet

Significance of AI in the Digitally Transformed Future
This session will discuss how developments in machine learning, deep learning and AI are impacting technology segments and market verticals and the significance of Artificial Intelligence in the Digitally Transformed Future.

AI and the Domain Specific Architecture Revolution
Wally Rhines / President and CEO / Mentor, a Siemens Business

AI Driven Security
Steven L. Grobman / Senior Vice President and CTO / McAfee

Innovating for AI in Semis and Systems

AI Accelerators in the Datacenter Ecosystem
Kushagra Vaid / General Manager & Distinguished Engineer – Azure Infrastructure / Microsoft

Delivering on the promise of AI for all – from the data center to the edge of cloud
Derek Meyer / CEO / Wave Computing

Driving the Evolution of AI at the Network Edge
Remi El-Ouazzane / Vice President and COO, Artificial Intelligence Products Group / Intel

The Physics of AI: Architecting AI systems into the Future
Sumit Gupta / Vice President AI, ML and HPC / IBM

AI Panel Discussion
The panel will discuss the innovations in the semiconductor and systems space that are empowering Artificial Intelligence and the collaboration opportunities between semiconductor and systems players to enable emerging markets.

Moderator:
Aart de Geus / Chairman and Co-CEO / Synopsys

Panelists:
Derek Meyer
Sumit Gupta
Kushagra Vaid
Remi El-Ouazzane

Keynote: Long Term Implications of AI & ML
Byron Reese / CEO, Gigaom / Technology Futurist / Author

VIP Reception
Book signing by Byron Reese
The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity

Back to the UMC GF Samsung rumor. Remember, GF has built out fabs in Singapore, the US, and Europe. GF also has the IBM patents and technology, plus the ASIC group which has been spun out. Think about UMC’s pros and cons and see if they match up to GF’s assets, and keep in mind that whatever UMC does not need, Samsung may want. The ASIC business, for example (UMC has Faraday). It would also give GF’s owner a somewhat graceful exit from the semiconductor industry. If you combine UMC and GF it gets you a $5B pure-play foundry which is much closer to TSMC’s $15B.

Of course Samsung could just buy GF outright so there is always that. Just a rumor of course but not unlike the “GF buys IBM semiconductor” rumor we started a while back: GLOBALFOUNDRIES Acquires IBM Semiconductor Unit!


Mentor Rise and Fall
by Daniel Nenni on 09-14-2018 at 7:00 am

This is the fifteenth in the series of “20 Questions with Wally Rhines”

During 1980 and 1981, three companies, Daisy, Mentor and Valid were founded. Daisy and Valid attacked the computer automated design business with custom hardware workstations plus software to provide the unique capabilities required by engineers. Mentor made a critical decision. Charlie Sorgy at Mentor evaluated the Motorola 68K-based workstations being introduced and concluded that the Apollo workstation could provide everything that Mentor needed. Meanwhile Jerry Langler and others worked on the product definition, interacting with design engineers to zero in on a set of capabilities that would solve the design problems of those designing electronic chips and systems.

Until this time, semiconductor companies developed their own design tools, typically running on large mainframes. In the 1970’s, that meant IBM 3090-600 mainframes, the largest in the IBM family. The mainframes at most corporations were shared with the rest of the company. The result: not much design work was done during the last week of the quarter when the corporation was closing its books because the computers were loaded to capacity.

During this period, I worked for Texas Instruments, which considered its EDA software a competitive differentiator. Minicomputer-based workstations from companies like Calma and Applicon began taking over much of the physical layout task. TI developed its own system based upon the TI 990 minicomputer, a system that was not really well suited to the task. The 1982 Design Automation Conference, where Mentor introduced its first product, called IdeaStation, changed things at TI. The Apollo workstation, based upon the Motorola 68000, provided clearly superior capabilities. TI signed up for a complete conversion to Mentor based upon the Apollo and, for a while, was the largest company user site for Apollo computer systems, with more than 900 workstations in use at the peak.

But TI was not a good customer for Mentor. The Mentor software had been adopted by much of the military/aerospace and automotive industries who needed standardization of design capture and simulation processes across their companies. Mentor was winning that battle against Daisy and Valid. TI Semiconductor Group, however, had a different set of needs; they wanted to modify the software, customize capabilities and do all sorts of things that distracted Mentor from its strategic direction. So TI and Mentor parted ways. TI went back to its proprietary software and deployed it on the Apollo and Mentor focused on systems companies.

In the next ten years, semiconductor companies like TI became more important to the EDA industry. Most semiconductor companies didn’t have TI’s history of EDA software development, so a commercial EDA industry became increasingly viable. As the growth of the EDA industry accelerated, it became apparent that a new generation of product was needed. There were no standards for user interfaces for engineering workstations from Apollo, Sun or other new competitors. Interoperability with third-party applications was not well supported. Mentor proposed a totally new architecture, called Version 8.0 (later euphemistically referred to as Version Late dot Slow), or Falcon. The idea was a unified environment that could utilize the same user interface and database. Not a good approach, but the customers loved the idea. One more example of Clayton Christensen’s “Innovator’s Dilemma”, where doing what customers say they want is frequently not the right solution.

In retrospect, a single database is not appropriate for the wide variety of formats required for integrated circuit design. And the workstation companies solved the problem of standard user interfaces themselves so ultimately there was no need for Mentor to provide one. The really critical mistake, however, was terminating the legacy family of design software without completion of a new generation of products. As the schedule for Version 8.0 slipped, there was less and less product available for customers to buy. Mentor’s revenue peaked at about $450 million and declined to $325 million with lots of employee frustration and resignations as the entire company was mobilized to save Version 8.0.

The Falcon approach never really worked. Fortunately Mentor was saved with a variety of world class point tools like Calibre and Tessent that became the basis for specialty design platforms. Throughout this transition, the leading competitors adopted, and argued for, a new paradigm. That paradigm was a single vendor flow which never evolved. Why? Because no one company can be the best at everything. So integration of tools and methodologies from different companies became critical to all those who wanted best-in-class design environments.

One great aspect of the EDA industry was the ability of new startups to successfully introduce a new point tool and grow to be valuable enterprises. Most of these companies were acquired by one of the “Big Three”, Mentor, Cadence or Synopsys. Interestingly, the combined market share of Mentor, Cadence and Synopsys, remained almost constant over a twenty year period at 75% plus or minus 5% despite all the acquisitions. Cadence grew almost exclusively by acquisition while Mentor did very few. Synopsys was somewhere in between. I once joked to a group of Cadence employees that, other than Spectre, I couldn’t think of a single successful Cadence product that was conceived and developed within Cadence. The group looked shocked and told me that I was not correct. “Every line of Spectre”, they said, “was developed at U.C. Berkeley”.

Mentor went through a very difficult period. Rarely does a software company go into decline and then recover. The reason is that software companies have a large fixed cost base of employees and, when their revenue declines, they have no choice but to reduce personnel, which makes recovery difficult. Mentor was an exception. But it missed many turns of the industry and had to focus on areas of specialization where it could be number one. That strategy worked but it took a long time.

And today, it still works. EDA is a business like the recording industry. There are rock stars and they develop hits. Once a hit becomes entrenched, it’s very hard to displace. Mentor focused on a few key areas where its position is hard to attack. Physical verification through the Calibre family is an example. Calibre is the Golden signoff tool, even though there are foundries that will grudgingly accept alternatives. When the debate about a variation in design rules occurs, the discussion between design and manufacturing people always returns to Calibre. PCB technology has similarities. You can use a variety of less expensive tools but why make life difficult for yourself?

Tessent Design for Test became a hallmark of this specialization strategy by putting together a group of the world’s best test people and letting them do their thing. Under Janusz Rajski and a large group of test gurus, unique technologies like test compression, cell-aware test, hierarchical test, etc. were developed and used to build a commanding market share. Other areas where this point tool strategy was used to grow a complete design platform included high level synthesis, optical proximity correction, automotive wiring and others for a total of 13 out of the forty largest segments of EDA, according to Gary Smith EDA.

Since the mid 1970’s, three companies have had the largest share of the EDA market. Computervision, Calma and Applicon gave way to Daisy, Mentor and Valid and then Mentor, Cadence and Synopsys. Are three large EDA companies a stable configuration as long as technology is evolving rapidly? Probably. Unless a major discontinuity occurs. At which time, new companies will appear and we’ll probably have another shakeout.

The 20 Questions with Wally Rhines Series


OnStar Missing the Florence Boat
by Roger C. Lanctot on 09-13-2018 at 12:00 pm

Here we go again. A hurricane is closing in on the U.S. East Coast and General Motors’ OnStar connected car team – now part of something called Global Connected Consumer Experience – is AWOL.

While mandatory evacuations have been ordered and two-way highway connections to the coast have been switched to single direction exit routes to the mainland, OnStar is once again missing an opportunity to offer a public service to its own customers as well as to non-GM-owning drivers. While AAA is posting bulletins and helpful tips for drivers, OnStar remains mute – a glaring lost branding opportunity if there ever was one.

For those unfamiliar with the OnStar connected car platform, OnStar is the more-or-less familiar “blue button” located on the lower rim of rear view mirrors in General Motors vehicles including Chevrolets, Cadillacs, GMCs and Buicks. In fact, GM equips all the vehicles it sells to consumers with this blue button which allows GM vehicle owners to summon emergency assistance in the event of a crash or simply access roadside assistance or other less urgent requests.

Of course, OnStar is most famous for automatically calling for assistance in the event of a crash that results in an airbag deployment – which is especially important in the event of an unconscious driver. All of this means that the OnStar command center in downtown Detroit is receiving requests for assistance along with critical vehicle diagnostic data from cars in and around the hurricane impact zone.

Perhaps more importantly, OnStar is in a position to communicate vital information to its customers in those same zones via its in-vehicle systems or its smartphone apps. In addition, OnStar is in a position to share its view of the entire geographic area correlated to the kinds of requests for assistance that it is receiving – a valuable resource for regional government authorities, news reporting organizations and the general public. But, as a resident of the soon-to-be-impacted area and with a GM owner in my family, I can honestly say GM and its OnStar team are asleep at the switch.

In past hurricanes, such as Katrina among others, GM has turned OnStar on for customers with lapsed accounts and attempted to share evacuation route information. Some day GM will be able to share probe-based traffic information that might help customers and the general public find the clearest routes to safe havens.

But it’s not happening this time. The silence from Detroit is deafening and will soon be drowned out by howling Florence-related winds and the pronouncements from AAA. Where have you gone, OnStar? A nation turns its lonely ears and eyes to you.


Easing Your Way into Portable Stimulus
by Bernard Murphy on 09-13-2018 at 7:00 am

The Portable Stimulus Standard (PSS) was officially released at DAC this year. While it will no doubt continue to evolve, for those who were waiting on the sidelines, it is officially safe to start testing the water. In fact it’s probably been pretty safe for a while; vendors have had solutions out for some time and each is happy to share stories on successful customer pilot projects. And I’m told they’re all lined up with the released standard.

That said, PSS obviously isn’t mainstream yet. According to Tom Anderson, who provides marketing consulting to a number of companies and has significant background in this domain, you might think of PSS adoption today as not so different from where formal verification was ~10 years ago – early enthusiasts, pilot projects, likely more innovation and evolution required before it becomes mainstream.

Different people probably have different takes on PSS but for me the motivation isn’t hard to understand. SoC verification engineers have been screaming for years for a way to better automate and accelerate system-level testing while driving better coverage and better reuse of verification components. PSS is what emerged in response to that demand.

Some were upset that PSS wasn’t just an add-on to UVM, but this shouldn’t be too surprising. We’ve already accepted that we need multiple different types of modeling tool – from virtual prototyping to formal verification, software simulation, emulation and FPGA prototyping – because the verification problem is too big to be managed in just one tool. The same applies to verification methodologies – from ad-hoc testbenches to UVM to PSS – because, again, the verification problem is too big to be managed in one verification methodology. UVM is great for the block/IP-level task, but if you’re an architect or a full-system verifier, you’ll probably welcome PSS as a platform much better suited to your needs.

Still, new languages undeniably create new learning curves. For a variety of reasons, PSS comes in two flavors – C++ and Domain-Specific Language (DSL). Demand for C++ unsurprisingly evolved from the software folks. DSL as I understand it is designed to look more familiar to SV users. This split may seem unnecessarily fragmented to some but remember the range of users this has to span. I’m told the standard has been tightly defined so that features in each use-model are constrained to behave in the same way.

PSS adds another wrinkle – it’s declarative, unlike probably most languages you’ve worked with, which are procedural (C, Perl, Python, SV, UVM, …). In a procedural (test) language, you describe how to perform a test; in a declarative language you describe what you want to test and let the tool figure out how. Constrained random does some of this, but still heavily intermixed with details of “how”. Declarative languages such as PSS go further, making test development easier to reuse but also making thinking about test structures a bit more challenging.
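
A rough analogy, not actual PSS syntax: the sketch below contrasts a procedural test, where the author fixes the order of operations, with a declarative description, where each action only states what it needs and produces and a small solver derives a legal schedule. Action names and the tiny solver are hypothetical.

```python
# Analogy only, not PSS syntax: the same three-step test written procedurally
# (the author spells out the order) and declaratively (actions declare what
# they need and produce, and a tiny solver derives a legal order).

# Procedural style: the "how" is fixed by the author.
def procedural_test(run):
    run("config_dma")
    run("fill_buffer")
    run("start_transfer")

# Declarative style: actions declare inputs/outputs; the tool picks the schedule.
ACTIONS = {
    "config_dma":     {"needs": set(),                 "produces": {"dma_ready"}},
    "fill_buffer":    {"needs": set(),                 "produces": {"data"}},
    "start_transfer": {"needs": {"dma_ready", "data"}, "produces": {"done"}},
}

def solve_schedule(actions):
    """Return any action order that satisfies the declared dependencies."""
    available, order, pending = set(), [], dict(actions)
    while pending:
        ready = [a for a, spec in pending.items() if spec["needs"] <= available]
        if not ready:
            raise ValueError("no legal schedule")
        chosen = ready[0]                      # a real tool could also randomize here
        order.append(chosen)
        available |= pending.pop(chosen)["produces"]
    return order

procedural_test(print)
print(solve_schedule(ACTIONS))   # e.g. ['config_dma', 'fill_buffer', 'start_transfer']
```

A real PSS tool handles constraints, resources and scheduling with far richer semantics; the point here is only the shift from prescribing an order to declaring intent.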

So given this, if you want to experiment with PSS, who you gonna call? The simulation vendors of course for compilers / constraint solvers / simulators. They also provide nice graphical tools to view task graphs and generated paths through those graphs. But if history is any indicator, many verification engineers aren’t big fans of graphical creation tools. Most of us prefer our favorite text or “studio”-like editors which may provide additional support for class browsing, autocompletion and similar functions.

AMIQ recently added support for PSS to the very rich range of design languages they support through their Eclipse-based integrated development environment (IDE). This provides on-the-fly standard compliance checking, autocomplete, intelligent code-formatting, code and project navigation through links, search, class-browsing and integration with all the popular simulators and revision control systems. The figure above shows the DVT Eclipse IDE offering an auto-completion suggestion to fix a detected syntax error.

AMIQ supports both C++ and DSL users. Tom tells me that each finds different value in the tool. For the C++ users, the real benefit is in class library support. For DSL users, the fact that the language is close to SV is both a plus and a minus: it’s familiar, but it’s also easy to make mistakes. On-the-fly checking helps get around those problems quickly.

Tom wrapped up by acknowledging that in PSS, everyone is learning; there are no experts yet. You can choose to learn quickly in an IDE or you can choose to learn slowly, alternating between your editor window and the LRM. I know, I’m just as bad as everyone else in wanting to stick to my favorite editor, but maybe there’s a better way. You can learn more HERE.

Also Read

CEO Interview: Cristian Amitroaie of AMIQ EDA

Automated Documentation of Space-Borne FPGA Designs

Continuous Integration of RISC-V Testbenches


Fuzzing on Automotive Security
by Alex Tan on 09-12-2018 at 12:00 pm

The ECU. That was the service department’s prognosis on the root cause of the always-on air-bag safety light on my immaculate car. Ten years ago the cost of replacing it with an aftermarket part was on par with getting a new iPhone 8. Today, we could get four units for the same price, and according to data from several research companies, the ECU market size in 2018 hovers between USD 40 billion and 63 billion.

The ECU (Electronic Control Unit) is central to a vehicle’s electronic system. The average number of ECUs in today’s medium-size car has grown to around 40, and can be well over 100 in a highly engineered car, as integration of advanced driver assistance systems (ADAS), infotainment, powertrain and chassis electronics becomes more prevalent. All of these units are interconnected with system buses to perform thousands of vehicle-related functions. The move towards more code-driven vehicles requires complex embedded software and hardware interactions, such as sensor-driven collision avoidance through brake activation or skid mitigation after loss of traction.

Failure, Faults and Fuzzing
As a common means to measure and document the safety level of their systems, ISO 26262 defines requirements for systematic failure and hardware failure verification. The former is related to common design bugs identified with functional verification, while the latter involves the use of fault injection to validate certain assumed safety-mechanism functionality.

Random hardware failure in the automotive domain is a probable event, and its impact can be catastrophic. The processes relating to its risk analysis, remedies and metrics are captured in the ISO 26262 functional safety standard, among others.

As implied by its name, the triggering origin of such a hardware fault can be random and can be external to the affected system, such as an extreme ambient temperature increase due to a mechanical malfunction or electrical interference. Faults can, however, be modeled based on their inherent characteristics and classified using a formal cone of influence (COI). The COI helps visualize the potential correlation between the probability of a fault and a safety-critical failure by analyzing six different categories: safe, single-point, residual, detected dual-point, perceived dual-point and latent dual-point.

A common fault injection method is called fuzzing (fuzz testing), which involves applying anomalous input stimulus to a system to see how it handles it. It is a form of vulnerability analysis and testing derived from the early days of software stress testing; some refer to it as the ultimate black-box approach to security testing. The main benefit of fuzzing is the minimal up-front effort needed to capture test patterns, with only optional understanding of the DUT (Device Under Test) specification. Fuzzing is classified by how much prior knowledge of the DUT is assumed. Minimal understanding leads to more reliance on randomness and mutation-based anomalies (a.k.a. dumb fuzzing), while some understanding of the DUT (such as the protocol used) enables fuzzing with generation-based anomalies (smart fuzzing). Figure 2 illustrates what the fuzzing process on a DUT entails.
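
As a concrete illustration of the mutation-based (“dumb”) end of that spectrum, here is a minimal fuzzer sketch. The target binary path and seed request are hypothetical, and this is not any particular commercial fuzzer, just the general mechanism of mutating a known-good input and watching the target for crashes or hangs.

```python
# Minimal sketch of mutation-based ("dumb") fuzzing with a hypothetical target
# and seed input: mutate a valid request at random and watch for crashes/hangs.

import random
import subprocess

SEED = b"GET /status HTTP/1.1\r\nHost: ecu.local\r\n\r\n"  # known-good input

def mutate(data, max_flips=8):
    """Return a copy of `data` with a few random bytes replaced."""
    buf = bytearray(data)
    for _ in range(random.randint(1, max_flips)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def run_target(payload, timeout=2.0):
    """Feed one payload to the (hypothetical) target binary on stdin."""
    try:
        proc = subprocess.run(["./target_app"], input=payload,
                              capture_output=True, timeout=timeout)
        return "crash" if proc.returncode < 0 else "ok"   # negative = killed by signal
    except subprocess.TimeoutExpired:
        return "hang"                                      # freeze: log it and restart

def fuzz(iterations=1000):
    findings = []
    for i in range(iterations):
        payload = mutate(SEED)
        verdict = run_target(payload)
        if verdict != "ok":
            findings.append((i, verdict, payload))
    return findings

if __name__ == "__main__":
    for i, verdict, payload in fuzz():
        print(i, verdict, payload[:32])
```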

Virtual Prototyping and Security
Combining fuzzing with virtual prototyping delivers many benefits over conventional hardware-centric approaches to security testing. Software-driven data acquisition and test profiling during debug provide automation and a less costly implementation.

Vista Virtual Prototyping is part of the Mentor Vista™ platform; it provides a functional model of the hardware before the hardware is available and can run software on embedded processor models at speeds on par with the actual hardware. It is tightly integrated with Sourcery™ CodeBench Virtual Edition to provide additional debug capabilities for complex software/hardware interactions and the ability to optimize the software to meet final product performance and power goals. Vista has non-intrusive tracing technology and accommodates the use of flexible interfaces to inject various faults and/or packets based on interface protocols. The physical connection between fuzzing tools and the DUT can also be bypassed in the virtual prototyping platform by directly embedding the interface models in software.

FFRI, a Tokyo-based computer security firm with a global presence, demonstrated the use of automated fuzzing coupled with Mentor Vista Virtual Prototyping and FFRI’s Raven testing suite (as the fuzzer) to eliminate security vulnerabilities in a vehicle ECU under development. As illustrated in Figure 4, the test environment involves a vulnerable software application susceptible to stack-based buffer overflow and a monitoring scheme for packet transactions at the guest network interface port, used to identify the HTTP packet that triggers the overflow in the target application.

To simplify the analysis step, a GNU debugger is linked to the system and used as the analysis tool. This enables performing virtual-prototyping-based testing remotely and automating fuzzing tests in the Vista VP environment through scripting on the host side, allowing a restart should a freeze be encountered.

As a case example, a segmentation fault occurs as a consequence of a stack-buffer overflow and halts the application, allowing analysis of all frames in the stack. FFRI was able to root-cause the segmentation fault through this debugging step, and believes that such an approach can be expanded to cover more complex system security testing and analysis.
Using this method, a profile can be generated (as shown in Figure 5) and used to identify weak points, thus improving the overall system’s robustness against potential side-channel attacks (such as timing- or power-analysis-based attacks) that may acquire confidential system information.

With the exploding amount of code in modern cars (embedded in the growing number of ECUs), it is imperative to devise a method to detect safety defects early in the automotive software development process, since deferring fault-injection-based testing to late in the ECU development cycle is costlier. Virtual prototyping coupled with fuzzing allows multi-scenario testing early and provides cold-reboot flexibility.

For more details on Vista check HERE, and for the FFRI white paper check HERE


2018 Semiconductor Winners and Losers
by Daniel Nenni on 09-12-2018 at 7:00 am

This is an ongoing conversation inside the semiconductor ecosystem, especially when I am traveling. Everyone wants to know what is going on here or there and since I just returned from Taiwan I will post my thoughts. Last week was also my birthday which was cut short due to the time change but I did get preferential treatment on the flight and at the hotel. Upgrades, champagne, treats, and a full-fledged cake from Hotel Royal in Hsinchu. Either I haven’t traveled on my birthday before or they didn’t roll out the red carpet last time or I would have remembered this, absolutely.

My choice for #1 winner is of course TSMC. They are having a great year and will continue to do so, in my opinion. GF ending 7nm put AMD firmly in place at TSMC, Intel is rumored to be moving more products to TSMC, and of course Apple and the rest of the industry have already taped out to TSMC 7nm, so get the marching bands and the dragon dance ready for the end-of-year celebration.

One note about the Intel move: it is being reported that processors (Coffee Lake) are being moved to TSMC. I find this highly unlikely. Intel 14nm is in no way compatible with any TSMC process, so this would be a redesign, and why would Intel do that? It is much more likely that mobile chips would be retargeted to TSMC (SoCs, modems, IoT, etc…). Remember, the Intel Silicon Engineering Group is now run by Jim Keller. Jim was at Apple, AMD, and Tesla before Intel, so he knows TSMC. Maybe even the next-generation FPGAs, since Intel is going to be short on 10nm and the ex-Altera folks are very TSMC experienced. Or maybe the GPUs, since TSMC is very good at GPUs (NVIDIA). There is a forum thread on this you may want to take a look at: Intel 14nm capacity issue.

Broadcom is another winner. Hock Tan keeps changing the rules of the semiconductor game and there is no telling what he will do next but you can bet it will be disruptive. We all scratched our heads when Broadcom acquired Computer Associates for $18.9B in cash. Hock clarified his strategy in the quarterly call:

Speaking of acquisitions, before I turn this call back to Tom to talk about the financials in greater detail, let me perhaps take a few more minutes and talk about CA Technologies. The number one question we get from when we get with CA is, why did we choose to buy? Cut to the chase. We’re buying CA because of the customers and their importance to these customers. CA sales mission critical software to virtually all of the world’s largest enterprises. These are global leaders in key verticals including financial services, telecoms, insurance, healthcare and retail. And CA does it a scale fairly unique to the infrastructure software space. This can only come from longstanding relationships with these customers that spend several decades. In other words, these guys are deeply embedded… https://www.legacy.semiwiki.com/forum/f302/interesting-notes-broadcom-q3-2018-call-10764.html

I also consider GF a winner with their new boutique foundry pivot. I covered this in a previous post GLOBALFOUNDRIES Pivoting away from Bleeding Edge Technologies.

For losers I would start with Intel. 10nm is still in question and even more loserish is the way they disposed of their CEO who spent his entire career at Intel. I cannot believe a Silicon Valley icon like Intel would do such a despicable thing to a 36 year veteran. Clearly it was sleight of hand, waving one hand so you do not see what the other is doing, or not doing in this case. Replacing a questionable CEO with a temp CEO who has publicly declared he does not want to be CEO while you spend months looking for a new CEO? The big question I have is: Why is the Intel Board of Directors NOT being held accountable for this blunder? Correct me if I’m wrong here but this does not pass the corporate smell test.

Let’s continue this discussion in the comments section. Who do you think the semiconductor winners and losers of 2018 will be?