Accelerated Verification with Synopsys
by Alex Tan on 07-17-2018 at 12:00 pm

At DAC 2018, Synopsys held a lunch panel discussing the verification challenges faced by industry leaders, the approaches they have adopted, and overall verification technology trends. The panel of industry experts from Intel, AMD, Samsung, STMicroelectronics and Qualcomm also shared their viewpoints on what drives SoC complexity and how their teams have tackled it successfully.

From the Synopsys Verification Group, Michael Sanie (VP of Marketing) and Chris Tice (VP of Verification Continuum Solutions) kicked off the session by highlighting the current state of the Synopsys verification landscape.

As an established verification leader with 40%+ TTM emulation growth, Synopsys showcased ZeBu® Server 4 on the DAC exhibit floor. It offers a 2x emulation performance gain, 5x lower power consumption, and a lower cost of ownership with half the datacenter footprint. Other features include scalable capacity to 19B+ gates; 12x faster waveform data transfer vs. ZeBu Server 3; an architecture designed for simulation acceleration, with 16x higher host bandwidth vs. a dual PCIe Gen3 interface; faster compile; hybrid emulation; and advanced debug. Chris Tice dubbed it his '15th-generation emulation system.'

Synopsys' prototyping systems, headlined by HAPS-80 (High-Performance ASIC Prototyping System), have recorded 1700+ successful deployments. The newly introduced HAPS-80 Desktop comes with built-in software and increased debug throughput.

Also highlighted was the maturity of FGP (Fine Grain Parallelism), originally announced as Cheetah Technology in 2016, then made part of the standard VCS release, and now available to every VCS user as VCS-FGP (2018). FGP eliminates manual partitioning work by dividing the design into groups of events and exploiting many-core processor architectures to run these clustered tasks in parallel.
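
Conceptually, the idea resembles partitioning independent work into groups and fanning those groups out across cores. A minimal Python sketch of that pattern (purely illustrative: VCS-FGP's actual partitioner and scheduler are proprietary, and the event groups below are hypothetical):

    # Conceptual sketch only -- illustrates event-group parallelism,
    # not Synopsys' implementation.
    from multiprocessing import Pool

    # Hypothetical "event groups": independent clusters of simulation events.
    EVENT_GROUPS = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]

    def evaluate_group(events):
        # Stand-in for evaluating one cluster of design events in a timestep.
        return sum(e % 7 for e in events)

    if __name__ == "__main__":
        with Pool() as pool:              # one worker per available core
            results = pool.map(evaluate_group, EVENT_GROUPS)
        print("per-group results:", results)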

Also highlighted were the investments made in Verdi debug support (interactive, text and waveform-based debug); VC Formal has experienced the fastest software growth in the Synopsys verification group. Other highlights: SpyGlass now has 300+ customers using static verification, with new RDC features and improved performance. VC Lint and VC CDC were introduced as the next-generation static technology, combining the advanced word-level data model from VC Formal with the rules and engines from SpyGlass, and offering integration compatibility with Design Compiler and PrimeTime.


Accelerating Digital IP Verification Methodology (STMicroelectronics)
According to Mirella Negro, MCD Verification Group Manager, being a leading supplier for Smart Driving and the Internet of Things through its 32-bit general-purpose microcontrollers demands digital IP robustness, very aggressive market-introduction schedules, and meeting complex MCU verification needs. STM deployed VC Formal last year, augmenting its coverage-driven dynamic verification to prevent simulation iterations caused by RTL refreshes, which occur between bug hunting and final coverage analysis.

Using comprehensive Synopsys formal apps such as Property Verification (FPV), Formal Coverage Analyzer (FCA) and Automatic Extraction of Properties (AEP), STM was able to achieve faster property convergence in many scenarios. VC Formal's broad portfolio of formal assertion IPs also uncovered a significant number of pre-silicon corner-case bugs, enabling STM to deliver more designs in less time without compromising quality.

The native integration of VC Formal with VCS and the Verdi debug engine allows design and verification teams to easily leverage formal technologies and automate root-cause analysis of formal results, such as the code-unreachability issues that affect final coverage. Although some complex STM IPs still need simulation, a growing number of IPs are validated using formal alone.

Acceleration of Pre-Silicon Emulation (Qualcomm)

Senthil Dayanithi, Sr. Engineering Director at Qualcomm, concurred on the continuing shift-left trend in SoC H/W development, which can be achieved by migrating to system emulation and high-level S/W development with real peripherals.

Pre-Silicon Emulation Efficiency Improvement (Intel)
Raju Kothandaraman, Graphics H/W Director at Intel, described increasingly rich visual experiences as the driver of complexity in both design sizes and workloads. He believes that while emulation spending is growing, hardware must be used efficiently to keep up with verification complexity. The prerequisite is understanding the key metrics, which include the following:

  • Compile time – Reduce bottlenecks during the compile step through emulation-friendly RTL, best-known methods and careful selection of model types, all of which together can deliver a 4-5x improvement.
  • Model frequency – Use emulation-friendly transactors, and evaluate and fix any unusually slow frequency caused by long timing paths (often attributable to timing loops). Work and collaborate with your EDA vendor. A 2-3x improvement and higher model frequency can be gained here.
  • Wall-clock efficiency, utilization and debug TAT – Improve regression turnaround time, identify inefficient DPI calls, and add offline debug capacity and a local memory solution on the emulator. Also key to pre-silicon validation are effective board packing, an enhanced debug methodology and model types, and identifying the right content to run. These could bring a 3-4x performance improvement (the sketch after this list shows the compound effect of all three metrics).
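
Multiplying the three quoted ranges gives a sense of the total opportunity. The arithmetic below is my own extrapolation, assuming the gains are independent; the talk quoted the ranges separately, not a combined figure:

    # Compound the per-metric improvement ranges quoted above
    # (my assumption: the gains are independent and multiply).
    compile_gain   = (4, 5)   # emulation-friendly RTL, model selection
    frequency_gain = (2, 3)   # transactors, timing-path fixes
    wallclock_gain = (3, 4)   # regression, debug, board packing

    low  = compile_gain[0] * frequency_gain[0] * wallclock_gain[0]
    high = compile_gain[1] * frequency_gain[1] * wallclock_gain[1]
    print(f"compound improvement: {low}x to {high}x")   # 24x to 60x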

His final take on gaining emulation efficiency: adopt a more efficiency-driven mindset, know when to empower design versus automation, and build internal cross-team partnerships and external EDA collaboration.

Driving Performance and Power Tuning Presilicon (AMD)
Andrew Ross, Principal MTS at AMD, stated that the basic premise behind the two metrics (performance, power) is completing more work in less time and with less energy. The ZeBu emulation system has been at the center of this solution, handling full-SoC RTL (1B+ gates) and GPU, with clock-domain ratios modeled for silicon accuracy and performance-tuned register initialization and fusing applied. As part of the virtualized execution environment, the VirtualBox hypervisor interfaced through PCIe transactors to the ZeBu box, with the app running visualization (GPU) reporting the number of cycles required for execution. AMD also leveraged ZeBu zDPI passive activity monitoring and fast waveform capture.

Performance analysis can be done using the full S/W and H/W stack, as in silicon bring-up, allowing high-fidelity observability of the combined system behaviors. In summary, a system-level emulation solution enables power and performance analysis with real-world workloads and more data-driven analysis, augmenting simulation-based approaches.

Addressing Unique Verification Challenges in an Era of Constant Change (Samsung)
According to Seonil Brian Choi, Master Principal Engineer at Samsung, an increase in design complexity has left less verification and development time. Moreover, multiple specification changes trigger redesigns and subsequent verification effort that may eventually push verification completion closer to the deadline, leaving no time to allocate for S/W verification.

Seonil shared the evolution of Samsung's methodology for verification and S/W development, from simulation-centric to more system-level virtual prototyping: a shift from little modeling to more modeling, while enabling early S/W development.

The takeaway from this panel session is that leading SoCs require advanced technologies, including simulation, formal verification, fast emulation, hybrid emulation, prototyping and debug, to complete the challenging task of verifying such designs.

To watch a video of the DAC Verification Panel, visit HERE.

More 55DAC blogs


Platform ASICs Target Datacenters, AI
by Bernard Murphy on 07-17-2018 at 7:00 am

There is a well-known progression in the efficiency of different platforms for certain targeted applications such as AI, as measured by performance and performance/Watt. The progression is determined by how much of the application can be run with specialized hardware-assist rather than software, since hardware can be faster and consume less power than software running on a less specialized platform. At the low end are general-purpose CPUs, where the application is entirely in software, then GPUs, FPGAs, DSPs and finally custom hardware – an ASIC such as the Google TPU.

So why not just build every such solution as an ASIC, at least as long as you can justify the initial build investment? Two reasons dominate: first, the underlying algorithms may be rapidly changing (as in AI), and second, the time required to design an ASIC can be significant, making it very difficult to keep pace with rapidly changing needs. You'd have to look hard to find more fiercely competitive markets than AI applications (q.v. Facebook, Apple, Amazon, Google, Baidu, Alibaba, TenCent, and ADAS/autonomous car suppliers) and datacenters (q.v. Amazon, Microsoft, Google and others). All are working in rapidly evolving winner-take-all markets. In these domains, time isn't just money, it's survival.

Which is why eSilicon is launching a platform approach to targeted applications. These ASIC platforms are augmented with libraries and infrastructure targeting AI and datacenter networking needs. Each is built on 7nm technology and is PPA-optimized as a whole for the specific needs of those domains.

Let’s start with the networking platform. This offers:

  • 56G and 112G SerDes with long-reach and short-reach architectures at 56G, to support many lanes at very high data rates, yet at the lowest power achievable
  • TCAM memory to speed route lookups, packet classification, packet forwarding and ACL commands
  • PHY to connect to high-bandwidth memory (HBM2) stacks in the package. Note incidentally that eSilicon has significant experience in building 3D and 2.5D systems, both at die and package levels. So a system-in-package solution becomes an easy choice
  • Specialized memories/memory compilers for pseudo-2-port, pseudo-4-port and other application-specific memories, providing high bandwidth with area and power saving, along with a range of I/O buffers

The AI platform (which they call neuASIC) is a little more involved. The goal here is first to provide all the IP components you would expect in a standard SoC (CPU, local SRAM, NoC interconnect, interface to external memory, I/O buffers), here called the ASIC Chassis. The neural-net (NN) part of the design is implemented on a stacked layer above the chassis, with 3D interconnect between the two. Again, this leverages eSilicon's experience in 3D packaging.

If you simply hardwire your AI architecture, it will have great PPA, but you may need to replace it (build a new ASIC) as soon as a competitor jumps past you. The neuASIC structure is optimized to limit the need for redesign as algorithms change. First, the Chassis hardware should be relatively insensitive to changes in NN algorithms. Next, the AI layer is divided into tiles. This mega-cell partitioning makes the underlying hardware more durable against changes in the NN algorithms, thanks I would assume to the naturally modular style of NN designs. Each tile is built around commonly-used macro AI functions such as convolution or pooling; some are pre-designed by eSilicon, some might be 3rd-party, and some may be designed by the ASIC customer.

As of May of this year, neuASIC provides a library of MAC blocks, convolution engines and memory-transpose functions as pre-built macro functions (they continue to work on more), speeding assembly of common NN structures. Since memory and operations must be very tightly coupled in NNs to reduce overall power, they also provide pseudo-4-port memories for neuron support (two neuron data inputs, one weight input, one neuron output) and a specialized memory called weight-all-zero power saving (WAZPS), which zeroes its outputs at lower power than a default read when the weights are zero, a common occurrence in NNs with sparse weight matrices.
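
The saving WAZPS exploits is easy to see in software terms: when a weight is zero, the multiply and the activation fetch feeding it can be skipped. A quick illustrative sketch (my software analogy, not eSilicon's design; the roughly 70% sparsity is an arbitrary assumption):

    import numpy as np

    # When a weight is zero, a zero-aware datapath can skip the MAC and the
    # memory access feeding it; operation counts stand in for energy here.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=1000) * (rng.random(1000) > 0.7)  # ~70% zeros
    activations = rng.normal(size=1000)

    full_macs = len(weights)                # naive datapath: always multiply
    skipped = int(np.sum(weights == 0))     # MACs a zero-aware path avoids
    print(f"MACs executed: {full_macs - skipped}/{full_macs} "
          f"({100 * skipped / full_macs:.0f}% skipped)")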

Design is supported through a modeling system they call the Chassis Builder, through which you can model the functional operation of an NN, while also extracting PPA estimates to guide optimizing the design to your targets.

For both platforms, the goal is to provide a fast path to a working solution while also meeting aggressive PPA goals. Doing so requires more than a standard ASIC platform: you need to be able to put together a chassis quickly with a predefined I/O ring, interconnect and high-bandwidth memory access; you must have the IP/macro primitives required in those applications; those IPs should be optimized together for the application; and you must be able to configure and characterize your planned design against your PPA objectives. These platforms look like a good start and a promising long-term path to accelerating high-performance, low-power ASIC design in these domains. You can learn more about the networking platform HERE and the AI platform HERE.

Related Blog


VLSIT Conference – imec on CFETs
by Scotten Jones on 07-16-2018 at 12:00 pm

The 2018 VLSI Technology Conference, held in Hawaii in June, is one of the premier conferences covering integrated circuit process technology and circuit design. The Complementary FET (CFET) is an emerging option to continue logic scaling into the next decade. At the conference, imec, GLOBALFOUNDRIES, Tokyo Electron and Coventor presented "The Complementary FET (CFET) for CMOS scaling beyond N3." I have copies of the paper and presentation and had the opportunity to interview one of the authors, Julien Ryckaert of imec.

The mainstream technology of choice for high-performance ICs is currently the FinFET. Leading foundries are ramping 7nm FinFET technologies, with risk starts of 5nm FinFETs planned for next year. Looking forward, a transition to horizontal nanowire/nanosheet (HNW/HNS) technologies is expected somewhere around 3nm; in fact, Samsung has already announced a 3nm "Gate All Around" technology based on nanosheets for 2021. Looking even further, beyond the introduction of HNS, a variety of scaling issues present themselves. The CFET is an emerging concept that provides scaling by stacking devices in the third dimension.

In CMOS technologies, nFETs and pFETs are used in pairs so that when the nFET is on, the pFET is off, or vice versa. This results in low power consumption because current only flows during switching. nFET and pFET pairs are therefore a natural primitive in CMOS logic. In current technologies the nFET and pFET devices are fabricated in the same plane; in a CFET technology they are stacked on top of each other, providing an area reduction at the same pitches. The combination of the CFET with buried power rails can reduce the track height of the cells as well, and for SRAMs a 40% structural gain is seen at the same pitches. Figure 1 illustrates some of the scaling advantages of CFETs.


Figure 1. CFET Structural Advantage

The CFET fabricated in this work is a fin-over-fin configuration, with the pFET fin on the bottom to benefit from substrate-induced stress and the nFET vertical sheet on top. Due to lower hole mobility in silicon, pFETs are typically weaker devices than nFETs and need extra stress to match nFET performance. Putting the nFET on top also makes fabrication easier because the nFET work function is a subset of the pFET work function.

This CFET process features separate electrodes for the nFET and pFET allowing connections to be made either up to the interconnect stack or down to the buried power rails. Figure 2 illustrates the split electrode and the buried power rail.

Figure 2. Stacked Electrodes and Buried Power Rails

The split-gate CFET process makes routing easier, and that benefit propagates up into place and route. The ability to shift P and N connections "north and south" means that only 1D connections are needed.

The stacked devices produced by this process are comparable to conventionally fabricated FinFETs in terms of performance. Parasitics can make one of the devices different from the other and need to be addressed and mitigated. There are some aspects of this process that can even yield better-than-FinFET performance: the CFET drain extensions can be minimized, reducing gate-to-drain parasitic capacitance and improving performance, see Figure 3.

Figure 3. Optimizing CFET Performance

In a standard FinFET the middle-of-line is routed parallel to the gate, whereas for a CFET it is orthogonal to the gate, also reducing gate-to-drain capacitance.

The process of building CFETs shares many steps with standard FinFET processes. A CFET process doesn't add very many steps, but the steps are more critical and require better control. Fill – planarize – etch-back steps in the CFET process require precise depth control of the etch-back in order to fabricate and connect the stacked devices.

Currently CFET devices support only a single threshold voltage, and it is already hard to build the separate nFET and pFET work functions. Supporting multiple threshold voltages is an unsolved problem and appears very complicated. This is, however, unlikely to be a show stopper for CFETs because there aren't any silver bullets anymore and CFETs appear to be the most general-purpose solution available.

I asked Julien about stacking beyond two layers; I am aware of groups exploring stacking up to seven layers and more as a long-term scaling path that could even relax pitch requirements. Julien's belief is that two layers makes sense because it creates a natural CMOS primitive, but he had difficulty seeing scaling beyond two layers.

Some simple cost comparisons show that CFETs provide scaling less expensively than shrinking pitches by lithography.

In summary, CFETs offer an intriguing option for scaling beyond HNW/HNS processes.


Semicon Wrap Up holding pattern in turbulent air
by Robert Maire on 07-16-2018 at 7:00 am

The stock market hates uncertainty most of all. In the absence of the known, the market will assume the worst or close to it. Right now there is a lot of uncertainty that continues to have more downside beta than upside beta. Everybody we spoke to at Semicon wakes up in the morning wondering what tweet was sent at 5AM that will impact their part of the hundreds of billions of dollars of trade in the semiconductor market. Projects and plans are up in the air as no one has a clue which way things will go.

The other large uncertainty is the length and depth of the current slowdown related to the memory market and specifically Samsung. How many quarters will it last? Will it spread to other chipmakers? We also spoke to a number of people in the industry who are already thinking about belt tightening and other standard knee jerk reactions to a slowing business model.

To be very clear, business is still quite good; everyone is making money and will still be making good money, just less of it in the future. Gone are the bad old days when the majority of the industry went underwater during a cyclical downturn. We also don't expect as much of a levered negative reaction as we used to see in the bad old days, when smaller companies and sub-suppliers were hit harder than the larger companies. Even the smaller companies have gotten bigger and stronger, and the inventory pipeline isn't as big as it used to be. Simply put, the downturn should not be as ugly nor as long-lived as we have seen in the past.

The problem remains that we don't have a hint of when the trade issue will resolve, or of the length and depth of the downturn, and that lack of knowledge is likely more damaging than the actual reality when it happens.

TEL confirms second-half slowdown but hopeful for 2019 recovery
At the Tokyo Electron investor meeting, as well as in private meetings, TEL, the second-largest equipment maker after AMAT, confirmed what we all already know about an H2 slowdown. Less clear is when it will recover. Right now the hope is for 2019, but there is no basis other than hope and assumption.

TEL is an obvious beneficiary of the current trade war between the US and China: even if the trade war is resolved, chip customers in China will be wary and will probably have a built-in bias away from US makers toward Japanese makers (which runs counter to their traditional distrust of the Japanese).

You can never get the toothpaste back in the tube…
Even if we manage to work things out in the trade war with China, we think permanent damage has already very clearly been done. US companies will be distrusted as their supply could be cut off in the time it takes to tweet.

China's "Made in China 2025" policy has been proven 100% correct, as China clearly needs to be independent of US control and leverage. Non-US semiconductor companies will benefit, and China will be looking for a workaround for everything it depends on. This probably also doubles the pressure on Chinese hackers to steal more IP, as they are more afraid of not being able to get it legally.

Waiting on Lam…the elephant in the room that isn’t talking
It's pretty clear that Lam will likely see most of the impact of Samsung's memory slowdown. Their absence at Semicon and lack of pre-announcement only fan investor concerns and industry speculation.

Our view is that downside risk to the stock remains, as analysts can't really do a good job of cutting estimates with nothing to go on; they will have to wait until Lam announces to adjust their estimates and targets, and it will take a while for the market to absorb the changes. Others in the industry will likely breathe a sigh of relief once Lam officially announces reduced numbers, as it reduces the speculation about their own performance.

Avoiding the “death by a thousand cuts”
Our main hope is that Lam and other companies in the industry cut their expectations enough in this first round that we don't get stuck in the downward death spiral of reducing numbers every quarter until we hit bottom. Expectations need to be reset to a level where the industry can meet and exceed them without worrying. Although it's hard to adjust numbers to account for trade issues, and we would not expect that, we think the industry can take a whack at re-adjusting spending in light of memory pricing and near-term demand. Better to under-promise and over-deliver in the stock market game.

Embrace the "Cycle"…it takes pressure off management
We think that managements who have pushed the idea that this is no longer a cyclical industry have done themselves and the industry a disservice. They can no longer shift the blame and point their finger at the cyclical nature of the industry as the culprit. If the business is up and to the right forever, then any blip or a few quarters of bad performance must be management's fault, because it's no longer a cyclical industry. Embrace the cycle…it's your friend and whipping boy.


AMAT talks long term AI but short term is ugly
by Robert Maire on 07-15-2018 at 7:00 am

We attended Semicon West Monday and Tuesday, the annual show for the semi equipment industry. It's very clear from discussions with all our sources in the industry that Samsung has put the brakes on memory spending, a message reinforced by declines in Samsung's expected profitability due to weaker memory pricing. We maintain that a near-term shipment drop of 25% for Lam and 10%-15% for AMAT is probable.

As we have said before, it is not possible for the rate of memory spending to continue at such high levels. Although demand has remained good, pricing of both NAND and DRAM has softened even though still quite profitable for memory makers. Samsung is taking prudent steps to slow spending.

Applied had no comment about short-term trends in the industry. LAM and KLA were not holding events, so there was no other near-term official commentary.

AMAT had a "feel good" series of presentations about AI being a major driver for chips and therefore chip equipment, but unfortunately the long-term positives of AI and big data are offset by the near-term negatives of slowing memory spend and China trade issues.

While we certainly agree about the great long-term prospects of AI for the entire chip and chip equipment industry (not just AMAT), it is the near-term issues that will drive the stocks and that have been pressuring them.

The end of the day was capped off with the news of another $200B of tariffs being announced on Chinese goods, yet another salvo in the escalating tit-for-tat trade war. The trade war continues to worsen with no resolution, or even hope of a resolution, as there do not appear to be any effective discussions going on other than the exchange of tariffs.

Our talk about China & Trade
We had a standing-room-only crowd at Semicon for our discussion and presentation about China, trade, technology and Taiwan. We had a number of discussions with different company managements after our presentation, and it's clear that nothing about China trade is clear. Everyone is confused and waiting for the next shoe to drop, much like the stock market.

One rumor we heard from multiple sources is that some large companies in the industry may have either slowed or put projects in China on hold. To be clear, business is going on as usual, but new commitments of significant resources may be questioned given the uncertainty.

Meeting with AMEC …the Chinese semi equipment company
We spent some time with Gerald Yin, the CEO of AMEC, the Chinese semi equipment company. They have had huge success against Veeco, the long-term leader in MOCVD. They are also competitive in the etch market against Lam, AMAT, TEL, etc. While today they are not a huge force in etch, they have made significant inroads in non-critical etch, especially at TSMC and domestic Chinese companies. They are also used in Intel's China fab.

They estimate that the 20 or so fab projects in China could amount to $100B in spend over the next few years.

We think they are the obvious beneficiaries of the near-term trade issues between the US and China. While today they are a rounding error compared to AMAT or LAM, that is not likely to remain the case, as they have the potential to do to the etch business what they did to MOCVD (maybe to a lesser extent).

We think much of this damage has already been done to the industry and will only get worse if export restrictions are actually put into place.

We would remind investors that export restrictions and licenses for export of semiconductor equipment used to be the norm and were only eased over recent years as trade with China increased. Putting those restrictions back into place would be very easy, just a reversion to what had been the case for a long period of time.

AI is great…so is big data and SSDs etc…
We certainly agree with AMAT about the huge upside potential of AI for the chip and chip equipment industries. We also think that Applied has some potential advantages loosely coupled to AI versus other companies. However, we view it more as a rising tide that will lift all boats. Right now the tide is going out due to the memory slowdown, and the seas are very choppy due to the China trade war, which could turn into a hurricane, so it's hard to go out and buy AMAT stock in the face of the near-term issues.

We think that AMAT might be well served to broaden out its product line or go “upstream” into the EDA business to get closer to the design source of AI and reap more benefit over the longer term. We think it makes sense for an acquisition or collaboration for AMAT to get closer to chip design.

As previously mentioned, we think investors will focus on a near-term 10-15% quarter-over-quarter drop in shipments, coupled with Applied's industry-leading China exposure, rather than the long-term AI upside. We will continue to report on Semicon after tomorrow.

More Event Blogs


Black Scholes and IC Design
by Daniel Nenni on 07-13-2018 at 7:00 am

This is the sixth in the series of “20 Questions with Wally Rhines”

From the earliest days of my childhood, I was always trying to find ways to make money – paper routes, lawn mowing, coke sales at football games – I did it all. And, except for a motorcycle I bought during junior high school when, at age 14, I could get a driver's license in Florida, I saved most of the money. During high school, I bought my first publicly traded stock, Eastman Kodak, and fortuitously profited from the introduction of the Kodak Instamatic camera six months later, instilling in me the dangerous idea that I had some sort of intuition for investments, despite the random nature of the luck.

So it should be no surprise that, as I worked on challenging research projects in the Central Research Laboratory of Texas Instruments, I also became deeply involved in trading standardized stock options when the Chicago Board Options Exchange opened during my first year at TI. Pretty soon I was doing “butterfly spreads”, “ratio writes” and even selling “naked calls”.


This activity stepped into high gear with the introduction of the SR-52 programmable calculator, as TI tried to catch up with the HP-65 programmable calculator that was already in the market. I went to work writing programs to improve returns and reduce risk in my stock option investing program. Not long before this, Fischer Black and Myron Scholes had published an article in the Journal of Political Economy providing a mathematical derivation to calculate the intrinsic value of a stock option. Myron Scholes later won the Nobel Prize and founded a company, Long Term Capital Management, which experienced a blowup so big that Alan Greenspan writes about the threat it posed to worldwide financial stability in his book, "The Age of Turbulence: Adventures in a New World". I went to work implementing the Black Scholes formula on the SR-52. The formula is a complex equation, so it required some effort to squeeze it into the limited memory of the SR-52 (the SR-52 cost $395 on release in 1975, roughly $1,847 in 2018).
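
For reference, the formula in question prices a European call as C = S·N(d1) − K·e^(−rT)·N(d2), where N is the standard normal CDF. A modern Python sketch of it (my transcription of the published Black Scholes formula, not Wally's SR-52 program; the example numbers are arbitrary):

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(S, K, T, r, sigma):
        # S: spot price, K: strike, T: years to expiry,
        # r: risk-free rate, sigma: annualized volatility.
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    # Compare two strikes at the same assumed volatility -- the kind of
    # relative comparison described in the next paragraph, used when
    # volatility data was scarce.
    for K in (45, 50):
        print(K, round(black_scholes_call(S=50, K=K, T=0.5, r=0.05, sigma=0.3), 2))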

Volatility data was not generally available for most stocks, so my use of the Black Scholes model focused on comparisons of options with different strike prices and expiration dates, where the volatility assumed in the equation would be constant. And then I began using it for trading. My broker at Merrill Lynch became fascinated, and soon many of the brokers in his office had SR-52s.

One day I became aware of a request from the management of the Professional Calculator Department at TI for sample programs written for the SR-52 that could be used as examples to attract customers, especially for applications other than engineering. I went to a meeting and met Robb Wilmot and Peter Bonfield (now Sir Peter Bonfield, with whom I've been associated ever since). They were excited by my options trading program and decided to run a full-page ad in the Wall Street Journal offering customers a free copy of the program. It was a big success, and I seriously began considering a career move into financial analysis software.

As Steve Jobs said in his commencement address at Stanford, connecting the dots that will be important to your career is difficult looking forward. In this case, the connection with Robb and Peter in the Calculator Products Division, or CPD, had an interesting consequence. Later that year, a decision was made to move CPD to Lubbock, Texas because the Division was growing so fast that space needs couldn’t be accommodated in Dallas. For people like Robb and Peter, who came from the UK, both Dallas and Lubbock were near the edge of civilization so they could easily adapt to the new environment in Lubbock. But for most of the employees in Dallas, a move to Lubbock didn’t sound attractive. Lots of management slots opened up, including the job of Engineering Manager for the Division, supervising 150 engineers who designed the chips and plastic cases for calculators. I am told that someone in the Calculator Division suggested, “Wasn’t that guy who wrote the Black Scholes program some type of chip design manager in the Central Research Lab? I wonder if he would be willing to move to Lubbock?” And that’s all it took. A few weeks later, I inherited responsibility for a group of people who had to be convinced that moving to Lubbock would be a good experience.

Most amazing was the group of managers who agreed to move. Those of us reporting to Ron Ritchie, the Division VP, included:

  • Robb Wilmot – Later became CEO of ICL (the largest computer company in Europe)
  • Peter Bonfield – Later became CEO of ICL, then CEO of British Telecom and subsequently served on boards including TSMC, Astra Zeneca, Ericsson, Sony and nine other public companies including Mentor Graphics. He has 11 honorary degrees and is currently in the news because he is Chairman of the Board of NXP. He is now Sir Peter.
  • Tommy George – Later became CEO of Motorola Semiconductor
  • Kirk Pond – Later became CEO of Fairchild Semiconductor
  • Jim Clardy – Later became CEO of Harris Semiconductor and then CEO of Crystal Semiconductor which became Cirrus Logic

The figure above is the agenda for the Consumer Products Group portion of the annual TI Strategic Planning Conference held in 1978. The "M. Chang" on the agenda is now well known to most everyone in the semiconductor industry. E. Pfeiffer is Eckhard Pfeiffer, who later became CEO of Compaq Computer.

The 20 Questions with Wally Rhines Series


Know what 5G is? You’re probably wrong
by Tom Simon on 07-12-2018 at 12:00 pm

If you think the transition to 5G will be anything like the earlier transitions to 3G or 4G, you are in for a big surprise. Not only will the transition take longer than either of the previous ones, its ramifications will spread far beyond cell phones into other areas such as automobiles, AI, healthcare, and commerce. This is because 5G is not just about higher data rates or carrier efficiency; it will involve multiple bandwidths, a wide range of devices and new modalities of communication. Even though we are approaching its initial implementation, the specification is still in flux and has yet to be finalized. As a result nobody, not even the experts, knows what 5G really is.

5G is certain to bring big changes in mobile data applications. Many companies are working to help create and enable 5G hardware or to implement solutions that rely on it. One such company is Achronix. Their embedded FPGA fabric – SpeedCore eFPGA – is ideal for implementing 5G systems in the face of changing requirements and specifications. I recently spoke to Mike Fitton, Senior Director of Strategic Planning at Achronix, who moved there from Intel for exactly this reason. He sees huge opportunities in 5G and understands that Achronix can play a pivotal role.

One of the most interesting areas in 5G is what is called V2X, the catch-all term for communication from a vehicle to anything. This includes vehicle-to-vehicle communication, which can help vehicles negotiate with each other and share alerts about hazards. V2X also includes things like road-construction beacons informing cars of reduced speeds or lane diversions. Mike talked about how important URLLC, or Ultra-Reliable Low-Latency Communication, will be for these applications. There will be new standards and tightly coupled protocols that will necessarily keep evolving. Having the ability to modify algorithms easily without respinning silicon will be extremely helpful. Low latency in communication will also be essential. Mike pointed out that on-chip FPGA fabrics afford latency on the order of nanoseconds, as opposed to microseconds for off-chip alternatives.

Mike pointed out that everyone was surprised when SMS turned out to be the killer app for communicating with smart phones. It will be just as hard to predict what will be big among the new modes of communication that 5G brings to the table. Regardless, flexibility will play a big role in developing systems based on 5G. Mike and I discussed some examples of how 5G can be utilized. Low-power, low-bandwidth connections supporting IoT applications can be used to request services automatically over inexpensive and widely available links. For instance, paper-shredding bins could automatically message the mother ship to send a mobile shredding truck when they are nearing full. Another example I heard of was remote fuel tanks for water pumps on farms signaling when they need to be refilled. The savings and increased efficiencies in either of these scenarios are very compelling. This is just a small taste of the new use models that could become practical with the advent of 5G.

The evolution of 5G will be very interesting. At first it will be offered as an assisted mode for LTE, then later we will see a wholesale switch to all the elements of 5G. Necessarily the high frequency, short range component will come later. There are significant technical challenges due to interference and blockages at short wavelengths. Trials for 5G will be starting in 2020. During the initial implementation stages operators and system developers will place a premium on adaptability and the ability to provide quick updates to address unforeseen issues. Achronix is sure to play a role in this market with SpeedCore eFPGA’s combination of low power, high performance and rapid reconfigurability. For more information on how embedded FPGA fabric can facilitate 5G implementation, take a look at the Achronix website.


Deep Learning: Diminishing Returns?
by Bernard Murphy on 07-12-2018 at 7:00 am

Deep learning (DL) has become the oracle of our age – the universal technology we turn to for answers to almost any hard problem. This is not surprising; its strength in image and speech recognition, language processing and multiple other domains amazes and shocks us, to the point that we're now debating AI singularities. But then, given a little historical and engineering perspective, we should remember that anything we have ever invented, however useful, has always been incremental and bounded. Only our imagination is unbounded. If it can do some things we can do, why should it not be able to do everything we can do and more?

So it should not come as a surprise that DL also has feet of clay. This is spelled out in a detailed paper from NYU, in 10 carefully elaborated limitations to the method. The author, Gary Marcus, is somewhat uniquely positioned to write this review since he is a professor of psychology and researches both natural and artificial intelligence. Lest you think that means he has only a theoretical understanding of AI, he founded his own machine learning company, later acquired by Uber. I quickly summarize his points below, noting his caveat that all points made are for what we know today. Advances can of course reduce or even eliminate some limits, but the overall impression suggests the big wins in DL may be behind us.

In his view, the CNN approach is simply too data-hungry, not just in recognition but also in generalizing, requiring exponentially increasing amounts of training data to generalize even at relatively basic levels. DL as currently understood is ultimately a statistical modeling method which can map new data within the bounds of the data it has seen, but is unable to map reliably outside that set and especially cannot generalize or abstract.
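
This interpolation-versus-extrapolation gap is easy to demonstrate. A small sketch (my illustration of the general point, not an experiment from the paper): train a tiny network on the identity function over [-1, 1], then query it outside that interval.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Fit f(x) = x on samples drawn from [-1, 1] only.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(500, 1))
    y = X.ravel()

    net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                       max_iter=5000, random_state=0).fit(X, y)

    # Inside the training range the fit is fine; outside, it falls apart.
    for x in (0.5, 2.0, 10.0):
        print(f"f({x}) predicted as {net.predict([[x]])[0]:.2f} (true {x})")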

Despite the name, DL is relatively shallow and brittle. While it seems from game-playing examples with Atari Breakout that DL can achieve amazing results such as learning to dig a tunnel through a wall, further research shows it learned no such thing. Success rates drop dramatically on minor variations to scenarios, such as moving the location of a wall or changing the height of a paddle. DL didn’t infer a general solution, and what it did infer breaks down quickly under minor variations. On a different and funny note, the British police wanted to use AI to detect porn on seized computers; unfortunately it keeps mistaking sand dunes for nudes, rather a problem since screensaver defaults often include pictures of deserts. More troubling, there are now numerous reports on how easily DL can be fooled through individual pixel changes.

DL has no obvious way to handle hierarchy, a significant limitation when it comes to natural language processing or any other function where understanding requires recognizing sub-components and sub-sub-components and how these fit together in a larger context. Understanding hierarchy is also important in abstraction and generalization. DL understanding, however, is flat. Systems like RNNs have made some improvements but were shown in research last year to fall apart rather quickly when tested against modest deviations from the training set.

DL can’t handle open-ended inference. In reading comprehension, drawing conclusions by combining inferences from multiple sentences is still a research problem. Inferencing with even basic commonsense understanding (not stated anywhere in the text) is obviously harder still. Yet these are fundamental to broader understanding, at least understanding that we would consider useful. He offers a few examples which would present no problem to us but are far outside the capabilities of DL today: Who is taller, Prince William or his baby son Prince George? Can you make a salad out of a polyester shirt? If you stick a pin into a carrot, does it make a hole in the carrot or in the pin?

DL is not sufficiently transparent – this is a well-known problem. When a DL system identifies something, the reason for a match is buried in thousands or perhaps millions of low-level parameters, from which there is no obvious way to extract a high-level chain of reasoning to explain the chosen match. Which is maybe not a problem when matching dog breeds, but when decision outcomes are critical, as in medical diagnoses, not knowing why isn't good enough, especially bearing in mind the earlier-mentioned brittleness of identification. Progress is being made, but this is still very much at an early stage, and it is still unclear how deterministic this can ultimately be.

Generalizing an earlier point, the DL training approach doesn't provide a method to incorporate (and in a lot of research apparently even actively avoids) prior knowledge such as the physics of a situation. All work of which the author is aware operates on a bounded set of data with no external inputs. If we as a species were still doing that, we would still be banging rocks together. Our intelligence very much depends on accumulated prior knowledge.

One problem in DL could equally be assigned to humans (and Big Data, while we're at it): an inability to distinguish between correlation and causation. DL is a statistical fitting mechanism; if variables are correlated, they are correlated, and nothing more. This doesn't mean one caused the other; correlation might arise through hidden variables or over-constraints. However, understanding what might be a true cause is fundamental to intelligence, at least if we want AI to do better than some of us.
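
A hidden common cause is the classic trap, and it takes only a few lines to manufacture one. A quick sketch (my illustration, with arbitrary coefficients):

    import numpy as np

    # A hidden variable z drives both x and y. They end up strongly
    # correlated even though neither causes the other -- exactly the
    # distinction a pure curve-fitter cannot make.
    rng = np.random.default_rng(1)
    z = rng.normal(size=10_000)                  # hidden common cause
    x = 2 * z + rng.normal(scale=0.5, size=z.size)
    y = -3 * z + rng.normal(scale=0.5, size=z.size)

    print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # strongly negative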

There is no evidence that DL would be able to handle evolving conditions, in fact there is a cautionary counter-example. Google Flu Trends aimed to predict flu trends based on Google searches on flu-related topics. Famously, while it did well for a couple of years, it missed the peak of the 2013 outbreak. Conditions evolve, and viruses evolve. Google Flu Trends didn’t use DL as far as I know, but there’s no obvious reason why a dataset for a problem in this general (evolving) class would not be used for DL training. Self-training (after bootstrapping from a labeled training set) may be able to help here, but it can’t model disruptive changes without also considering external factors of which even we may not be aware.

Prof. Marcus closes his ten points (which I compressed) with an assertion that DL still doesn’t rise to what we would normally consider a reliable engineering discipline, where we construct simple systems with reliable performance guarantees and out of those construct more complex systems for which we can also derive reliable performance guarantees. It’s still arguably more art than science, still not quite out of the lab in terms of robustness, debuggability and replicability and still not easily transferred to broader application and maintenance by anyone other than experts.

None of which is to say that Prof. Marcus disapproves of DL. In his view it is a solid statistical technique to capture information from large data sets to draw approximate inferences for new data within the domain of that data set. Where things go wrong is when we try to over-extend this “intelligence” to larger expectations of intelligence. His biggest fear is that AI investment could become trapped in over-reliance on DL, when other apparently riskier directions should be just as actively pursued. As a physicist I think of string theory, which became just such a trap for fundamental physics for many years. String theory is still vibrant, as DL will continue to be, but happily physics seems to have broken free of the idea that it must be the foundation for everything. Hopefully we’ll learn that lesson rather more quickly in AI.


Cadence’s Smarter and Faster Verification in the Era of Machine Learning, AI, and Big Data Analytics Panel
by Camille Kokozaki on 07-11-2018 at 12:00 pm

On Monday, June 25, DAC's opening day, I attended a Cadence-sponsored lunch panel. Ann Steffora Mutschler (Semiconductor Engineering) was the moderator, and the panelists were Jim Hogan (Vista Ventures), David Lacey (HP Enterprise), Shigeo Oshima (Toshiba Memory Corp) and Paul Cunningham (Cadence).


Glad to be part of SemiWiki and my DAC Industry Report
by Mark Gilbert on 07-11-2018 at 7:00 am

Welcome to my newly relocated column. I am so excited about my new relationship with Daniel Nenni and the other esteemed bloggers on SemiWiki. For those who do not know me, I have been a featured columnist on another EDA portal for the past 12-plus years, and in EDA for 20-plus years. As the leading recruiter in our industry (or so I am told), I write about our industry from a non-technical viewpoint, and talk about various hiring modalities and industry-specific career advice. For my first column, or should I say blog, on SemiWiki, I want to talk about this past DAC, what might be coming down the DAC road, and how hiring in general is going.

First, let me address the brains at DAC; it doesn't take a PhD to know that DAC should always be held in the Bay Area. While the last two DACs in Austin were OK (and understandably DAC needs to be hosted in Austin occasionally, as it is the 2nd-busiest U.S. hub for EDA/semi), it should almost always be hosted in the Bay Area, even if the dates vary. Vegas makes no sense! I know it is said that many of us love to go to Vegas, but that in and of itself is not enough of a reason to host there. Make no mistake, I personally can go anywhere, but most companies will only send a limited number of people to alternate destinations, if at all. Having DAC where the base lives simply makes the most sense, both economically and practically.

To complicate this, DAC has new owners, which made DAC's future the talk of the conference. In general, people wondered:

  • Will it return? That answer is unequivocally YES.
  • Will it be in VEGAS? That answer is 98% YES. The venue is reserved well in advance and paid for similarly, and I cannot imagine them walking away from all that money…also, companies all selected their locations while at DAC. And while there were rumors that perhaps DAC will combine with SEMICON, no one knows if that will actually happen for this, or any other, DAC. (Though my guess is it is likely.)

DAC 2018 was indeed quite busy; in fact, I thought the first Monday was the busiest I have seen in several years. In talking with so many C-level executives over the three days I was there (from open to close), most were pretty happy with the show. I have said it before and I will repeat it now: DAC is not a consumer show, and numbers do not tell the whole story. What matters is WHO is there and whether YOU had something they wanted.

Those that scheduled in advance, or had inviting booths with the right (enticing) information that compelled people to want to know more, and personnel who knew the right people to get them in for a demo, did well. If demos were not an option, then hopefully a future on-site visit was scheduled. That is what makes for a successful DAC. Those that sat in their seats waiting for someone to beg them to show what they have, or that put no effort into their booth, are still sitting in their seats. (Candy seemed to be the #1 option, lots of candy!)

DAC is different these days. How, you ask? Well, in case you have not heard, which means you do not have internet or are taking too much sleep medication, acquisitions have changed our landscape. Instead of a lot of new EDA start-ups, DAC was littered with IP companies… a LOT of IP companies. A cloud aisle was added this year for various collaboration/version-control type offerings, but ClioSoft clearly leads the way there, with a few others making significant headway.

As for hiring, normally I walk away with several new companies and requisitions, but not as much this year. Let me be clear: that is not a statement on hiring. For me, DAC is all about building and maintaining relationships, and learning more about what's hot, what's coming and what's happening, and as a barometer it was a great success for me. I made so many new connections that want to work together, and while those needs will be spread over time, expertise and locations, most will bear fruit. Hiring remains extremely difficult, as the needs are plenty and the pool shallow, with good people not wanting to leave. Companies have needs, and it is harder than ever to find good qualified people (which I will talk about more in my upcoming columns).

I went to two really great parties, the ClioSoft/AUSDIA party and the OneSpin party, and I must say they were two of my all-time favorite events. I was too tired to go to the Denali (Cadence) party this year, the first one I've missed since its inception, but I am sure not much has changed, since it stays pretty much the same every year. My trademark white sports coat was, as always, quite noticed, and as my friend Craig Shirley (the new CEO of Oski) said, "one day that jacket will be retired to the archives of our history" (or something like that; I hope that means I can sell them).

I look forward to a great relationship with SemiWiki and will have my blog published here monthly. If I can ever be of help to any of you, my advice is always free and gladly given…

EDA-Careers is a highly specialized recruiting agency that recruits specifically for everything EDA/Semiconductor and IoT… all levels, all specialties: R&D developers, application engineers, designers, sales and marketing. WE DO IT ALL!

EDA-Careers is a unique and very different company when it comes to how we handle our clients and candidates. With nearly 20 years of recruiting exclusively in the EDA/Semiconductor industry, we pride ourselves on our relationships and follow-up. We WORK HARD to find the best candidates for our companies and the best opportunities for our candidates, making sure both are satisfied every step of the way. In effect, we become PARTNERS so that the process goes smoothly for both the candidate and the company, from beginning to end. That is why we have worked successfully with hundreds of companies and placed such a broad array of candidates.

More 55DAC blogs