Samsung Galaxy Unpacked

by Daniel Payne on 08-14-2015 at 7:00 am

Apple announces their new products with much media attention to an audience of enthusiastic attendees, along with a live stream to all of us on the Internet who couldn’t be there in person. Samsung is following the same marketing playbook and today hosted an event in New York dubbed “Everything Galaxy Unpacked 2015,” introduced by JK Shin, the President & CEO of Samsung Electronics himself. I’ve used the Samsung Galaxy Note, Note 2 and now the Note 4 devices, so this event had my full attention. Samsung essentially created the category known as phablet: a tablet-sized smart phone with a stylus. Bigger displays let us more easily read email, browse the web, watch videos, chat on social media, and do multiple things at once.

I was pretty impressed that JK Shin did the intro in English himself, while Justin Denison, the VP of Product Strategy & Marketing, presented the new Galaxy Note5:

Galaxy Note5

  • Larger display at 5.7″, Quad HD
  • Smaller size
  • Metal frame
  • Thinner
  • Improved S-pen, more sensitive and precise
  • Keyboard cover (looks like a BlackBerry)

Alaina Cotton showed off the new Galaxy S6 edge+ mobile device:

Galaxy S6 edge+

  • Two visible edges
  • Larger display at 5.7″ Quad HD (while being more slender than the iPhone 6+)
  • Metal bezel
  • Comes in Silver Titanium color
  • Live Broadcast – allows you to broadcast live to YouTube

Common Features

  • Side Sync – like Dropbox, easy to share files across Android, Mac and Windows devices
  • Fast Wireless Charging – a full charge in just 2 hours (IKEA and Starbucks are adding wireless charging stations)
  • 4GB of RAM for more and faster apps
  • Ready for LTE Cat 9 speeds
  • 4K video recording

These two new phones go on sale August 21st in the USA and Canada, although you can pre-order online today.

As a Note 4 user I’m attracted to the new Note5 and would consider upgrading to this phone at the end of my contract with AT&T. I noticed this past week that AT&T is offering upgrades to the Note 4 at just $49.99 while the price was $249.99 only 3 months ago.

Apple with the iPhone 6+ caught up to the Galaxy Note 2 (circa 2012), so with the introduction of these two new models today Samsung is still ahead by two years in the feature department for phablets. Our family uses both Android and Apple smart phones, and each system gets the job done, so it’s mostly a personal decision by consumers on which brand they adopt. My experience using a 5.7″ display has been very positive, so I won’t go back to anything smaller in the future, although my Note 4 no longer fits into the bag underneath my road bike saddle; instead, I have to slip the phone into my jersey pocket.

Samsung Pay – a new mobile payment system that works with existing magnetic-stripe retail terminals, unlike NFC and Apple Pay, which require new hardware and software systems. Lots of companies are backing Samsung Pay.

This system also supports store-branded credit cards, membership cards and gift cards. It goes live on August 20 in Korea, then in the USA on September 28th, followed by China, Spain and the UK. I’d love to know if Samsung Pay will work on my Note 4 device as well, because there’s a dearth of retail support for NFC where I live in Oregon today.

Mobile payment safety is designed in through a hardware-based system that does not store or transmit private information during a transaction; it uses a one-time security code instead.

The Samsung Galaxy Gear 2, a new smart watch, will be formally announced on September 3rd, so stay tuned for a few more weeks when that product roll-out occurs.

You may watch the complete 1 hour and 16 minute archived video online here.

I’m also very curious to see the tear-down of these devices to see which chips are being used inside, and how many of them are made by Samsung. We’ll have to wait a few weeks before the first tear-down reports are ready, but we already expect that Samsung continues to add more of their own chips with each new smart phone release.


Intel: Their Week in the Spotlight

by Paul McLellan on 08-13-2015 at 3:00 pm

Next week is the Intel Developer Forum (IDF) here in San Francisco at the Moscone Center. I’ll be there at least some of the time. But it is also the time when people who haven’t a clue about semiconductors and the markets they serve get to lap up press releases and try to sound intelligent.

For example:
Intel is about to drive a new wave of Moore’s Law, as personal computing converges with mobile technology due to the development of smaller processors, increased power efficiency, non-volatile memory, flexible/agnostic software, wireless peripherals and cloud access.

Intel is leapfrogging its competition with the launch of Mobile Personal Computing Convergence this fall, which has the potential to create huge sales increases and expanded profit margins for the company. Intel is about to revolutionize smartphones by bringing lower-power use to the high-end computing processors used in smartphones.

At the Intel Developer Forum 2015, which runs between August 18-20 through the upcoming Intel Investor Day, the company will introduce high powered processors, memory and communications capability packaged as a System on a Chip (SOC). Intel’s miniaturizing high power processors will power “supersmart-phones” within a year.

Yeah, Moore’s Law is all about personal computing converging with mobile technology. And while Intel has good technology, I don’t think Apple, Qualcomm and Samsung (and the Asian army) would accept that Intel is “leapfrogging” them. Intel can afford to buy any particular socket it wants; after all, it invented the concept of negative revenue. But I think it is highly unlikely that they will be powering the supersmartphones. Apple does their own thing silicon-wise, as do Samsung and Huawei. Xiaomi has been using Qualcomm, I believe. MediaTek is out there like a predatory shark, mopping up the manufacturers who don’t have the capability to do it in-house. That leaves who as the remaining market?

And even if I am wrong and Intel is wildly successful, “expanded profit margins”…compared to server processors…I don’t think so. In mobile, Intel can only lose (on Wall Street) by winning (on Main Street, or these days 大街).

So next week’s IDF. I will be at two events for sure. One is the ARM event the evening before IDF opens. If your major competitor invites everyone to town then the obvious thing to do is throw a party for them. Then on Tuesday morning Brian Krzanich, the CEO, takes the stage for the opening keynote. No real information about what, if anything, he might reveal.

Go on. See for yourself: “Brian Krzanich, Chief Executive Officer, will explore trends and developments in technology highlighting what we can develop today and what opportunities developers can look forward to in the future.”

So what are my guesses? Skylake, of course. Intel has been more reticent than normal about talking about their future processors, perhaps because ramping them in a new process has not gone as well as hoped, which has pushed out deliveries. Mobile, for sure. Rumors are everywhere that Intel has a modem in the next iPhone, at least in some markets. Hey, if Intel paid me to use their modems then I’m sure I’d be as keen as Tim Cook to take some.

IoT, the cloud, gaming, security. These are all topics for Mega Sessions. And who knows what will be on the screen for: “Join Genevieve Bell, Intel Fellow and Vice President, Corporate Strategy Office and renowned anthropologist, for a conversation.”

Who knew Intel had an anthropologist? I’ll be back here telling you what was said next week.

IDF starts on Tuesday. Full details are here.


A New Unified Power Solution at All Levels

by Pawan Fangaria on 08-13-2015 at 7:00 am

When a situation demands it, multiple solutions appear within a short lag of each other. Such is the story with estimating and optimizing power at the SoC level. In the SoC era, power became a critical criterion long ago, and there are tools available for power analysis and optimization. However, with more mobile and IoT (Internet of Things) devices gaining momentum, any sub-optimal solution for power is unwarranted. These devices operate at extremely low voltages and require minimum power consumption. Hence, it’s essential that power measurement, analysis and optimization be accurate, consistent at all levels, and well correlated with the power consumed by actual applications running on these devices.

Recently I talked about great innovations in power calculation, analysis and management at the system level, involving emulation technologies that allow capturing power numbers while live applications run on the system. I am happy to learn about yet another innovative and impressive power solution that is fast and accurate, and works seamlessly at all levels, including system, RTL and gate, with well-correlated power numbers.

Cadence has come up with an excellent large-capacity power analysis solution that runs at the system level and also integrates with implementation-level analysis. The Joules RTL Power Solution provides time-based RTL power analysis that can approach gate-level implementation accuracy.


The Joules power analysis is seamlessly integrated with the Cadence Genus Synthesis Solution. Genus utilizes a newly added rapid prototyping technology which provides order-of-magnitude faster design synthesis. The solution thus provides implementation-level accuracy in the power estimation.

The power analysis uses a multi-threaded architecture, utilizing multiple CPUs to accelerate power exploration and enable in-depth analysis. The solution allows simultaneous analysis of multiple stimulus files, and each stimulus file can be time-sliced into frames to enable time-based power reporting. Also, multiple stimulus files across different design hierarchies can be merged to model full SoC traffic and identify peak power and clock gating opportunities. The power solution has a rich suite of library analysis and profiling tools for advanced analysis and debugging. Cells can be profiled by drive strength versus area, delay or power. Power reporting can be done at the bit or register level and can be categorized by cell type, power type, design hierarchy, clock domain, power domain, or timing mode.
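The time-sliced reporting idea can be sketched outside of any tool: divide a long activity trace into fixed-width frames and compute the average power per frame, so that peaks stand out that a whole-run average would hide. Below is only a conceptual illustration in Python with made-up numbers, not the Joules API:

```python
# Illustrative sketch (not the Joules API): time-sliced power reporting.
# A long activity trace is divided into fixed-width frames; per-frame
# averages expose peaks that a single whole-run average would hide.

def frame_power(samples, frame_size):
    """Average power per frame from instantaneous power samples (in watts)."""
    frames = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        frames.append(sum(frame) / len(frame))
    return frames

# made-up trace: quiet, a burst of activity, quiet again
samples = [0.10, 0.12, 0.11, 0.45, 0.50, 0.48, 0.12, 0.11, 0.10]
frames = frame_power(samples, frame_size=3)
peak = max(range(len(frames)), key=frames.__getitem__)
print("per-frame averages:", frames)
print("peak frame index:", peak)
```

The whole-run average here is roughly 0.23 W, which says nothing about the roughly 0.48 W burst in the middle frame; the per-frame view is what lets a tool zoom into the worst-case window, for IR-drop analysis for example.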

Also, there is seamless integration between Cadence’s Joules RTL Power Solution, the Palladium Emulation Platform and the Stratus HLS (High-Level Synthesis) Platform. For early and accurate time-based system-level power analysis and optimization, the Joules power analysis can be invoked directly from the Palladium Dynamic Power Analysis Solution GUI. Native read and write to/from the Palladium database allows analysis of live software applications running on the hardware early in the development cycle. The peak and critical power frames in a long system-level simulation can be zoomed into for analysis with increased resolution and identification of the correct time-slice for IR-drop. The Stratus HLS platform leverages the Joules solution during high-level synthesis to provide SystemC-level power profiling. Stratus also enables IP teams to better evaluate system-level micro-architecture tradeoffs for power optimization.

This power analysis solution is quite fast at the system level, while its results correlate well with gate-level implementation and signoff.


The Joules RTL power analysis was performed on several customer designs. The results were within 15% of signoff in the Cadence Voltus IC Power Integrity Solution, thanks to unified power calculation and advanced RTL-to-gate name mapping.

The Joules power solution provides time-based power analysis about 20 times faster than other methods. By using this solution with its integrated prototype synthesis, a design with 20 million instances can be analyzed overnight with gate-level accuracy within 15% of signoff in the Voltus IC Power Integrity Solution.

Also read:
How Emulation Enables Complex Power Intent Modeling
Power Analysis Needs Shift in Methodology
How PowerArtist Interfaces with Emulators

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Aldec updates two EDA product lines

by Don Dingee on 08-12-2015 at 4:00 pm

Continuous, incremental improvement based on customer feedback and insight from researchers is a pillar of the Aldec EDA strategy. Within the last two weeks, two of the Aldec product lines – Riviera-PRO and ALINT-PRO-CDC – have seen new version releases. Here’s a quick look at some of the highlights of both.

Riviera-PRO 2015.06 was released to the public on July 30. The major enhancement in this release of Aldec’s mixed-language simulation and verification tool is improved coverage analysis capability. For SystemVerilog users, condition coverage has been introduced, gathering coverage information on expressions inside if statements and the conditional ?: operator. The option to collect condition coverage is easily enabled from the command line or the GUI. For VHDL users, path coverage has been introduced, with similar control via the command line.
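The idea behind condition coverage is that each boolean sub-condition of a decision gets its own true/false bins, and the tool reports which bins the test suite actually hit. A rough sketch of that bookkeeping, shown in Python purely for illustration (Riviera-PRO collects this for SystemVerilog/VHDL; the names and test vectors here are invented):

```python
# Conceptual sketch of condition coverage: track the truth values each
# sub-condition of a decision takes across a test suite. Full condition
# coverage means every sub-condition was observed both true and false.
from collections import defaultdict

bins = defaultdict(set)  # sub-condition name -> set of observed truth values

def observe(name, value):
    bins[name].add(bool(value))
    return value

def dut(a, b):
    # decision under test, equivalent to: if (a > 0 && b < 4) ...
    if observe("a>0", a > 0) and observe("b<4", b < 4):
        return "taken"
    return "not taken"

for a, b in [(1, 2), (-1, 2), (1, 9)]:   # a small, invented test suite
    dut(a, b)

for name, seen in sorted(bins.items()):
    status = "covered" if seen == {True, False} else "partial"
    print(name, status)
```

Note that the second vector short-circuits the `and`, so `b<4` is not observed on that run; the three vectors together still hit every bin of both sub-conditions.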

The results of coverage analysis are displayed in HTML reports, extracted from merged ACDB files. Reports show the number of hits on a conditional expression, and mousing over the hit count reveals a tool-tip that shows which test names covered the bin. Further information is shown in a test details table of the ACDB report.


Other features have been added, such as a UVM configuration window for debugging. This shows resources available in the UVM configuration database, tracking user-defined information and UVM scope. Keeping pace with open source releases, the OpenSSL library bundled with Riviera-PRO has been updated to version 1.0.2a, and the OSVVM library has been updated to version 2015.03. Also to be appreciated is a streamlined installation process, using an upgraded version of the setup program which reduces installation time by half.

ALINT-PRO-CDC 2015.08 was released to the public on August 10, and is the subject of a live webinar with Aldec product manager Pavel Leshtaiev coming up on August 13. This clock domain crossing analysis tool performs in-depth automated analysis using both static and dynamic methods. Ten new rules have been added to the rules plug-in, enhancing the ability to locate CDCs and reducing the chance of random logic being incorrectly identified as a clock.

Static checks have received new visual highlighting in schematics. For example, all nets in a particular clock domain can be highlighted with a color for easy observation. Convergence through combinational logic is also highlighted with color.

A big part of ALINT-PRO-CDC is not just detection of CDCs, but also verification of synchronizer constructs that mitigate them. Assertions and coverage extensions help with EN-based and handshake synchronizers, and metastability emulation is generated for reset synchronizers. Also, the concept of virtual clocks has been added; for example, a delay can be specified on an input or output port.

A major new feature is the format of the automatically generated testbench. Three formats are now supported: SystemVerilog, VHDL with PSL for assertions and coverage, and pure VHDL with assertions but no coverage. Users can control which of these formats is created.

Press releases:
Aldec delivers complete Coverage Analysis for FPGA and ASIC Designers with the latest release of Riviera-PRO

Aldec enhances ALINT-PRO-CDC with Advanced Violation Analysis Capabilities and an Extended Set of Dynamic Checks

As always from Aldec, following those links leads to a What’s New presentation and a complete set of detailed release notes.

Webinar registration:
Eliminating Clock Domain Crossing (CDC) Issues Early in the Design Cycle (US)
Eliminating Clock Domain Crossing (CDC) Issues Early in the Design Cycle (EU)

Again, many of these enhancements come from requests by actual users of these tools working on real-world designs. Often the addition of an individual feature might seem minor, but the constant sweeping by Aldec development teams with these enhancements adds up to significant productivity improvements and keeps these tools on the leading edge.


Why Qualcomm Lost Samsung and Will Get Them Back!

by Daniel Nenni on 08-12-2015 at 12:00 pm

2016 will be a banner year for the System on Chip (SoC) industry. For the first time we will have leading-edge SoCs (Apple, Qualcomm, Samsung) on the same manufacturing process, enabling a true Apples to Apples comparison. Unfortunately, how we got there is being misrepresented by the media and analysts, but that is Situation Normal for the semiconductor industry, absolutely.

It all started back in September of 2013 with the release of the Apple A7 SoC inside the Apple iPhone 5s which used the 64-bit ARMv8-A architecture versus the 32-bit ARMv7. A 64-bit CPU inside a smartphone? Surely you must be joking:

“I know there’s a lot of noise because Apple did [64-bit] on their A7. I think they are doing a marketing gimmick. There’s zero benefit a consumer gets from that” said Anand Chandrasekher, senior vice president and chief marketing officer at Qualcomm.

Prior to Qualcomm, Mr. Chandrasekher spent his career at Intel, including 5 years as Senior Vice President of the now defunct Intel Mobility group making SoCs. His comment was later retracted and Mr. Chandrasekher was demoted (he lost his CMO title):

“The comments made by Anand Chandrasekher, Qualcomm CMO, about 64-bit computing were inaccurate. The mobile hardware and software ecosystem is already moving in the direction of 64-bit; and, the evolution to 64-bit brings desktop class capabilities and user experiences to mobile, as well as enabling mobile processors and software to run new classes of computing devices.” – Qualcomm

At the time of this announcement Qualcomm was architecting their next 32-bit SoC, which was immediately scrapped in favor of a 64-bit version. In the rush to market, QCOM had to use the ARM Cortex-A57/A53 big.LITTLE cores in the Snapdragon 808 and 810 chips instead of their own custom architecture. As a result the famed Snapdragon SoC line, which had previously ruled the mobile industry, lost its most important customer in the number one mobile company, Samsung.

A recent report, which is now being repeated by the Parrots of Wall Street, credited Samsung’s need to fill their own fabs as the reason for the switch from Snapdragon to the Samsung Exynos SoC. If you know Samsung (as I do) you will know that they do not work that way. If you know the SoC business (as I do) you will know that this makes no sense whatsoever.

The competing Samsung Exynos SoC was launched in 2010 using the ARM Cortex-A8. Remember, QCOM and Apple both license the ARM instruction set and build their own microarchitectures (cores), while Samsung, Mediatek, and other SoC vendors use off-the-shelf ARM cores. The differences are noticeable in performance and power usage, which is why Samsung continued to use the Snapdragon in the majority of its mobile devices up until this year.

Samsung is a fierce competitor both inside and out, meaning that even the internal divisions of Samsung compete for business. Bottom line: if the Samsung semiconductor group can make a better SoC, Samsung mobile will use it, and that’s what makes Samsung a market leader. That is the real reason why the new Samsung mobile devices use the 14nm Exynos 7 versus the 20nm Snapdragon 810; it is simply a better chip. I bought a Samsung Galaxy S6 edge and have experienced firsthand the superior performance and power usage. Even my 20nm A8-based iPhone 6 significantly trails the edge. The next QCOM Snapdragon chip (820) uses a 64-bit custom core (Hydra) manufactured on the Samsung 14nm LPE process, and my guess is that it will again get the Samsung mobile business, absolutely.

The Snapdragon 820, Exynos 7, and the next Apple SoC (A9) will all use Samsung 14nm LPE, so we will get to do a head-to-head comparison of the QCOM and Apple custom architectures for the first time. We will also get to compare custom cores against the ARM cores used in the Exynos. I will give you my bet on this race in the comments section. I will also tell you the REAL reason why QCOM switched from longtime partner TSMC to Samsung for 14nm.

Why the comments section you ask? Because you have to register to see comments of course!

Also read: 3 Key Frontiers for Samsung’s Next Mobile SoC


The Alphabet Starts With G

by Paul McLellan on 08-12-2015 at 7:00 am

What is the second biggest tech company in the world? If you said Alphabet, you get bonus points. If you have never heard of Alphabet, then perhaps you have heard of Google.

On Monday, Google announced that it was going to reorganize its corporate structure. This would usually provoke a big yawn but this could turn out to be significant. Google is creating a new holding company called Alphabet that Page and Brin will run. One of the subsidiaries is what you might think of as the old Google. It consists of search, search advertising, the datacenters, YouTube and Android. Alphabet will also own Calico, Fiber, Nest, Google Ventures, Google Capital, and Google X. One aspect of the reorganization is that these companies will be run largely independently of Google and, in particular, will have to establish their own brand identities in the way that Nest has done since Google acquired it (and ran it independently). I would be willing to bet that Google Capital will soon have a new name, and probably not Alphabet Capital (nor anything beginning with the letter G).

The other reason for making this change is that Google needs to really focus. Advertising is in transition as millennials cut the cord (and don’t read newspapers). TV and newspaper advertising continue to shift more and more online, and online still has a deficit in terms of advertising spend versus time spent on the medium (whereas newspapers in particular are the other way around: lots of advertising spend but declining eyeballs). Two huge competitors are also out there in Facebook and WeChat. If you don’t live in China, then WeChat doesn’t even seem that significant, but it is simultaneously the Chinese WhatsApp (owned by Facebook, of course), the Chinese Facebook, an online portal for purchases, and more. Its revenue per user is reckoned to be 7-8X what WhatsApp makes. Anyway, investors have been saying for ages that Google needs to focus on its core and not get distracted by pie-in-the-sky things like autonomous vehicles or Google Glass, or the glucose-sensing contact lens that was the subject of one of the keynotes at DAC. This restructuring is a way to do both. Google (the search and advertising business) will now be run by Sundar Pichai, who has no responsibility for any of these other businesses, which will have to sink or swim depending on how successful they become.

In an interview with the Financial Times last year, Larry Page said: “Looking forward 100 years from now at the possibilities that are opening up, we could probably solve a lot of the issues we have as humans.”

As the FT says: Even Google’s famously far-reaching mission statement, to “organize the world’s information and make it universally accessible and useful”, is not big enough for what he now has in mind. The aim: to use the money that is spouting from its search advertising business to stake out positions in boom industries of the future, from biotech to robotics.

If you read between the lines, I think it is clear that Larry (and probably Sergey too) is pretty much bored with the search business and wants to spend his time focused on some of these other areas, using the money pump from traditional Google to do it. Google has a weird corporate structure with two tiers of shares so that they maintain control and, basically, nobody can stop them doing something like this whether they like it or not. They can only sell their Google stock if they choose to.

Before he died, Steve Jobs used to argue with Larry that Google was doing too much. Larry would push back that Apple was not doing enough: “It’s unsatisfying to have all these people, and we have all these billions we should be investing to make people’s lives better. If we just do the same things we did before and don’t do something new, it seems like a crime to me.”

Alphabet seems to be the way to do the “something new” parts.

The Financial Times interview with Larry Page from last year is here. Larry Page’s blog announcing Alphabet is here. Alphabet’s minimal website is here.


Meeting Demand as Fab Capacity is Stretched Again

by Tom Simon on 08-11-2015 at 8:00 pm

Global semiconductor production capacity and its utilization level are key elements of the technology economy. During a panel at DAC in June, Mentor Graphics posited that we are entering a period where leading-edge processes will be in high demand while older nodes also see increasing demand from Internet of Things designs that rely on low-power, low-cost silicon. All this could put a squeeze on wafer availability.

Without enough wafer fabrication capacity available, electronic product manufacturers who rely on semiconductor components will fall short on their own revenue targets. On the other hand, foundries need to carefully maximize their utilization to ensure adequate margins and profitability. Semiconductor fabs are notoriously expensive to build and their construction comes with extremely long lead times.

It’s interesting to look at hard data on utilization, but unfortunately the SIA stopped issuing its reports on wafer fab capacity and utilization in October 2012. Nevertheless, looking at their last report, which covers up to 2011, a number of interesting things can be observed.

Prior to the 2008 downturn, utilization percentages hovered in the high 80s, occasionally reaching a peak of around 90%. During the 2008-2009 recession, they fell as low as 56%, but then quickly recovered. From 2010 to the end of the report period, utilization exceeded 90%.

So what happens as capacity reaches its limits? The obvious answer is that it can lead to allocation, and all the unpleasantness that comes with that. Of course certain processes will be in greater relative demand than others, leading to bottlenecks on particular nodes. Also, process portability affects how elastic the market will be, since it determines whether customers are able to move to second sources.

It turns out that there are things that foundries and even their customers can do to help avoid these problems. Similarly, foundries and their customers, by working together, can also help avoid overcapacity which can be equally problematic. The key to this is accurate forecasting on the part of foundry customers.

At DAC 2015 Mentor hosted a panel discussion on this topic. The panelists were Prasad Subramaniam, Vice President of Design Technology and R&D at eSilicon; Kelvin Low, Senior Director of Foundry Marketing at Samsung; and Walter Ng, Vice President of Business Development at UMC USA. The discussion was moderated by Michael Buehler-Garcia, Senior Director of Calibre Design Solutions at Mentor.

Walter Ng made the point that foundries really see allocation causing lost opportunity; in his words “it’s not fun for the foundry.” He added that it is essential that customers work with the foundry to ensure forecasts are accurate. In reality several customers might be competing for the same socket and only one will win, causing the other competing prospective orders to not materialize. This means that customers need to make a strong business case for their product to get access to valuable wafers during a period of allocation.

Samsung’s Kelvin made the point that silicon is not the only issue. Design for manufacturing affects yield and consequently the actual number of chips that can be marketed. Higher yield means fewer wafers are needed for a given number of finished chips. Another real limitation is tester time. Improved test vectors will speed production. All of the panelists agreed that customers play a significant role in enabling foundry capacity.
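Kelvin Low's yield point is simple arithmetic worth making concrete: for a fixed demand of finished chips, the wafer count scales inversely with die yield. A back-of-envelope sketch, with all numbers hypothetical:

```python
# Back-of-envelope sketch (all numbers hypothetical): higher die yield
# means fewer wafer starts for the same number of shippable chips.
import math

def wafers_needed(chips, dies_per_wafer, yield_fraction):
    good_dies_per_wafer = dies_per_wafer * yield_fraction
    return math.ceil(chips / good_dies_per_wafer)

chips = 1_000_000          # demand for finished chips
dies_per_wafer = 500       # depends on die size and wafer diameter
print(wafers_needed(chips, dies_per_wafer, 0.70))  # lower yield
print(wafers_needed(chips, dies_per_wafer, 0.85))  # higher yield, fewer wafers
```

Walter Ng's forecasting point runs through the same arithmetic in reverse: if a customer overstates yield or demand, the foundry books capacity that never turns into wafer starts.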

Kelvin also wants to see more data to back up the forecast numbers that are now being used for IoT chips. My own thought is that a lot of these chips might be on older nodes. This of course is a mixed blessing. It’s nice to have demand for older nodes, but if the demand is growing as new products are designed for older nodes, how do foundries fill this demand? Nobody is going to make more 8” fabs. Walter Ng asked if it might make sense to actually build 12” fabs for older nodes.

Another consequence of significant new designs on older nodes is the question of retrofitting the older flows for these nodes to add critical features like updated power management strategies. Not just the flows: existing IP might also need to be updated on these popular older nodes.

It seems that all the foundries are moving aggressively to increase supply. Kelvin cited Samsung’s outlay of $15B to build their new fab.

Likewise, Walter said that UMC is banking on increased demand for 28nm, and are investing in increasing their capacity.

Even in the distracting environment at DAC this panel drew quite a crowd, overflowing from the seating area of their booth out into the aisle. This alone indicates the level of interest in the topic of foundry capacity. Mentor did a good job of pulling this panel together, even though it could be argued that this topic is out of their wheelhouse. Of course only time will tell if growing markets like IoT actually lead to capacity shortfalls for leading or trailing nodes. For further reading on why and how the IoT is putting higher demand on older process nodes I suggest this article.


Design For Safety in Automotive Electronics

by Daniel Payne on 08-11-2015 at 12:00 pm

Do you remember how auto maker Toyota had to pay a $1.2 billion settlement in 2014 because some of their automotive models experienced sudden, unintended acceleration? That scenario has to be an engineer’s worst nightmare, because something was missed during the design and testing of an automotive electronics system that has to meet rigid safety standards. Prevention is always cheaper than a cure, especially when it comes to IC design, so I learned something new this week while watching an archived webinar called “STMicroelectronics’ Experience: Synopsys Logic BIST for Automotive and Safety-Critical Designs.”


A Toyota Camry that crashed in 2010. Source: NY Post

Related – Virtual HIL and the 100M LOC car


Safety Critical Applications
I already mentioned automotive as a safety-critical application; other industries include medical devices, aviation, trains, bridges, power plants, etc. Just in the automotive space, stop and think about Advanced Driver Assistance Systems (ADAS) and how electronics control both features and safety:

  • Air bags
  • Anti-lock Brake System
  • Electronic stability control
  • Adaptive cruise control
  • Emergency braking assist
  • Blind-spot monitoring
  • Lane-departure warning
  • Rear cross-traffic detection
  • Pedestrian detection
  • Traffic sign recognition

Safety standards are defined for each industry: ISO 26262 for automotive, ISO 13485 for medical devices, DO-254 for aviation. Self-testing is a best practice for electronic systems to help meet each of these standard requirements. Synopsys recently added a synthesis-based in-system self-test product called Logic BIST (Built-In Self Test), and here’s where it fits into the overall design and test flow:


Logic BIST Flow

The required logic for BIST is automatically added to your gate-level design during logic synthesis, so you don’t have to modify the RTL source code in this approach. The Design Compiler tool meets the timing, area, power and test goals during the synthesis step shown in the first blue box above. The TetraMAX ATPG tool computes the seed and signature used by the logic BIST for self-testing purposes, which is different from the Synopsys manufacturing test flow where TetraMAX generates the test program for ATE and also provides silicon diagnostics capabilities.

Logic BIST adds controllability and observability to the scan flip-flops of your design, shown in grey below while the test logic is shown in blue.


Logic BIST Architecture

PRPG stands for Pseudo-Random Pattern Generator, and this is where stimulus is automatically created for self-testing of your logic. Test values are loaded into the grey flip-flops of your design, then the responses of your logic are compacted in the MISR (Multiple-Input Signature Register) and compared against a known-good value saved in the signature, shown in green.
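The PRPG-to-MISR loop can be sketched in a few lines of Python. This is only a toy illustration of the principle: the 16-bit Galois LFSR polynomial, the register widths, and the stand-in combinational logic are my own assumptions, not the actual Synopsys or STMicroelectronics implementation.

```python
# Minimal sketch of the PRPG -> logic -> MISR self-test loop.
# The LFSR tap mask (0xB400), 16-bit widths, and the toy
# "logic_under_test" are illustrative assumptions only.

def lfsr_step(state, taps=0xB400):
    """Advance a 16-bit Galois LFSR by one step."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= taps
    return state

def logic_under_test(pattern):
    """Toy stand-in for the combinational logic between scan flip-flops."""
    return (pattern ^ (pattern >> 3)) & 0xFFFF

def run_lbist(seed, num_patterns):
    """PRPG generates patterns from the seed; the MISR compacts responses."""
    state, misr = seed, 0
    for _ in range(num_patterns):
        state = lfsr_step(state)            # PRPG: next pseudo-random pattern
        response = logic_under_test(state)  # capture the logic's response
        misr = lfsr_step(misr) ^ response   # fold the response into the signature
    return misr

# The golden signature is computed once on a known-good model (the role
# TetraMAX plays in the flow); in-system, self-test is a simple compare.
golden = run_lbist(seed=0xACE1, num_patterns=2300)
assert run_lbist(0xACE1, 2300) == golden  # matching signature -> pass
```

A fault in the logic would perturb at least one response, and with overwhelming probability also the final signature, which is why a single pass/fail compare is sufficient for in-system testing.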

Logic BIST at STMicroelectronics
Cinzia Bartolemmei spoke about how her group is using Logic BIST for both power-on test and in-system live test of safety-critical cores. The requirements of this Logic BIST approach for their designs are:

  • Small silicon area overhead
  • LBIST must be modular
  • Doesn’t require data from chip input pins
  • Simple to interface
  • Pass or fail response
  • Support both stuck-at and transition testing
  • Trade-off between pattern count and test coverage
  • Divide LBIST run into several timing intervals

They’ve been able to meet these requirements on IC design blocks ranging from thousands to millions of gates, and fulfill the automotive safety standards even on designs with multiple synchronous or asynchronous clocks. For a case study Cinzia talked about a macro cell used in automotive with about 120K flip-flops, scan chain length of 100 and two asynchronous clocks:

With this approach the area overhead for all DFT was 3% while LBIST required just 1.6%. For single stuck-at faults a test coverage of 91.76% was achieved using 20K patterns, while LBIST used just 2,300 patterns to reach 90% coverage. On transition faults a test coverage of 86.11% was reached using 20K patterns, and LBIST took just 12,400 patterns to get 85% coverage.
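To get a back-of-the-envelope sense of that trade-off, the case-study numbers above can be compared directly. This is just quick arithmetic on the figures already quoted, not additional data from the webinar:

```python
# Pattern-count vs. coverage trade-off from the ST case study:
# (patterns, coverage %) for deterministic ATPG vs. LBIST.
atpg  = {"stuck-at": (20000, 91.76), "transition": (20000, 86.11)}
lbist = {"stuck-at": (2300, 90.00), "transition": (12400, 85.00)}

for fault, (a_pat, a_cov) in atpg.items():
    l_pat, l_cov = lbist[fault]
    print(f"{fault}: LBIST uses {a_pat / l_pat:.1f}x fewer patterns "
          f"for {a_cov - l_cov:.2f} coverage points less")
# stuck-at: roughly 8.7x fewer patterns for 1.76 points less coverage
# transition: roughly 1.6x fewer patterns for 1.11 points less coverage
```

In other words, the pattern savings are dramatic for stuck-at faults and more modest for transition faults, at a cost of only one or two coverage points in each case.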

Related – Two New Announcements at ITC from Synopsys

Summary
We live in a complex world where our very lives depend on electronic systems functioning perfectly to keep us safe. One method to address safety requirements is Logic BIST, and companies like STMicroelectronics have used Synopsys tools to make their automotive chips adhere to stringent safety standards. View the entire 21-minute archived webinar here.


What Does Legal Sea Foods Have to Do With EDA?

What Does Legal Sea Foods Have to Do With EDA?
by Paul McLellan on 08-11-2015 at 7:00 am

When I drive down to Silicon Valley I usually listen to podcasts rather than the radio. One that I especially like is Russ Roberts’ EconTalk, which airs an hour-long episode every Monday morning on a wide range of topics in economics. Normally he interviews an economist, though he has also interviewed the manager of a car dealership and other people only tangentially connected to economics. This week it was Roger Berkowitz, the CEO of Legal Sea Foods. If you have been to Boston, even just the airport, you may well have eaten their clam chowder, but they actually have 34 fish restaurants on the East Coast. They started as just a wholesale fish market, then went retail too, then added a restaurant, and gradually grew into the portfolio of seafood restaurants they have today.

During the podcast at one point Russ asked Roger what was the difference between running a handful of restaurants versus 34. And how different it would be to run 68. They didn’t spend a lot of time discussing it but obviously there is a huge difference in scale from operating a fish wholesaler with a restaurant next door, to operating a fish operation in Boston that handles all the fish for three dozen restaurants, the furthest afield of which is in Atlanta.

That got me thinking about support in the EDA industry. Customer support in a successful EDA startup company goes through three phases, each of which actually provides poorer support than the previous phase (as seen by the long-term customer who has been there since the beginning) but which is at least scalable to the number of new customers. I think it is obvious that every designer at a Synopsys customer who has a problem with Design Compiler can’t simply call a developer directly, even though that might provide the answer the fastest.

There is actually a zeroth phase, which is when a startup company doesn’t have any customers. As a result, it doesn’t need to provide any support. It is really important for engineering management to realize that this is actually happening. Any startup engineering organization that hasn’t been through it before is completely unaware of what is going to hit them once the immature product gets into the hands of the first real customers who attempt to do some real work with it. They don’t realize that new development is about to grind to a complete halt for an extended period. “God built the world in six days and could rest on the seventh because he had no installed base.”

The first phase of customer support is to do it out of engineering. The bugs being discovered will often be so fundamental that it is hard for the customer to continue to test the product until they are fixed, so they must be fixed fast, with new releases getting to the customer every day or two at most. By fundamental I mean that the customer library data cannot be read, or the coding style is different from anything seen during development and brings the database to its knees. Adding other people between the customer engineer and the development engineer just slows the cycle of finding a problem and fixing it, which means that it reduces the rate at which the product matures.

Eventually the product is mature enough for sales to start to ramp up the number of customers. Mature both in the sense that sales have a chance of selling it and the company has a chance of supporting it. It is no longer possible to support customers directly out of engineering. Best case, no engineering other than customer support would get done. Worst case, there wouldn’t even be enough bandwidth in engineering to do all the support. Engineering needs to focus on its part of the problem, fixing bugs in the code, and somebody else needs to handle creating test cases, seeing if bugs are fixed, getting releases to the customer and so on. That is the job of the application engineers.

During this second phase, a customer’s primary support contact is the application engineer who they work with anyway on a regular basis. But as the company scales further, each application engineer ends up covering too many customers to do anything other than support them. Since their primary function is pre-sales, to help sales close new business, this is a problem. So the third phase of customer support is to add a hotline.

The hotline staff are typically not tool users; they are more akin to 911 dispatchers. Customers hate them, since the hotline staff are not as knowledgeable as the customers themselves. Their job is to manage the support process: ensure that the problem is recorded, that it eventually gets fixed, that the fix gets back to the customer, and so on. It is not to fix anything except the most trivial of problems themselves.

At each phase of support, the quality (and knowledge) of the engineer directly interfacing to the customer goes down but the bandwidth of available support increases. Engineering can only directly support a handful of customers themselves. Each AE can only directly support a handful of customers but more AEs can easily be added as sales increase. A hotline can scale to support a huge number of customers 24 hours per day, and it is easy to add more hotline engineers.

EconTalk has an episode every week going back to 2006, including Milton Friedman, Ronald Coase when he was already over 100 (he died recently), and many other famous, and not-so-famous, names. The website is here (or you can download from iTunes too). The Legal Sea Foods episode is here.


Make American Semiconductor Great Again!

Make American Semiconductor Great Again!
by Daniel Nenni on 08-10-2015 at 4:00 pm

As I watched the GOP debate between the top 10 candidates last week I asked myself which one of those men would I pick to help the United States stay competitive in the semiconductor industry. I’m saddened to say that the only candidate even remotely qualified for that conversation in my opinion is Donald Trump. Of course I backed Ross Perot in 1992 so I’m not what you would call a “politically correct” person.

My first political candidate of choice was Ronald Reagan in 1981, mainly because I thought it would be fun to have an actor in charge of our country, and it certainly was. He was also a Captain in the Air Force, as was my father, which I respected greatly. I remember a sound check prior to a radio address when Reagan made the following Cold War joke that went viral:

“My fellow Americans, I’m pleased to tell you today that I’ve signed legislation that will outlaw Russia forever. We begin bombing in five minutes.”

For the same reason Arnold Schwarzenegger was my candidate for California Governor and Clint Eastwood for Mayor of Carmel, but I digress…

Right before the debate I read the IC Insights top 20 semiconductor company sales report, in which Samsung cut Intel’s lead to 16% in the first half of 2015. During one of my many trips to South Korea I was told quite clearly by Samsung that their goal is to be the number one semiconductor supplier in the world, so this did not surprise me at all. Based on my experience Samsung is a very determined company, much more so than Intel, and they have all of the tools necessary to lead the semiconductor industry, absolutely.

Another interesting development is that SK Hynix jumped over both Qualcomm and Micron. Other than that the top ten did not change. The next big change will be the Avago acquisition of Broadcom, making the combined company Singapore-based. Avago already acquired Silicon Valley semiconductor legend LSI Logic, so they are gone as well. NXP is acquiring U.S.-based Freescale (Motorola), making NXP the largest European semiconductor company, ahead of both ST and Infineon. The other big change to the semiconductor landscape that is not reflected in this chart is the GlobalFoundries acquisition of IBM’s semiconductor business. It will be interesting to see what impact that has on GF’s ranking in the second half of 2015.

Looking back 30 years, the advent of the personal computer brought semiconductors into our homes. The PC industry was controlled by three companies: IBM (system), Intel (semiconductor), and Microsoft (software). Samsung, the largest consumer electronics company, takes that a step further by providing both the systems and the semiconductors. On the other side of Intel is Apple, which I would argue is the most influential fabless semiconductor company in the world today. Apple, of course, controls the system, semiconductors, AND software.

Given the influence semiconductors have on modern day life one would think semiconductor design and manufacturing would be an integral part of the coming political platforms. As I said, Trump’s “Make America Great Again!” slogan resonates with me both personally and professionally. Unfortunately this seems to be Ross Perot déjà vu all over again… just my opinion of course.