HOT Party for a Cause at DAC 55
by Randy Smith on 06-20-2018 at 4:00 pm

The Design Automation Conference (DAC), now in its 55th year, always offers a lively mix of activities. For EDA vendors and their customers, the focus is on the exhibit floor and in booth suites where the latest technology is on display. For R&D engineers and academics, the technical sessions dig deeply into an increasingly wide range of topics. For every attendee, DAC offers plenty of time for networking and catching up with colleagues from all around the world.

This year’s show in San Francisco offers a unique opportunity for everyone to meet, party, and support a great cause at the same time. The traditional Sunday evening DAC kick-off reception has been combined with the “HOT” party to benefit Heart of Technology and the Gary Smith Memorial Scholarship Endowment at San Jose State University. That one sentence contains several hints as to why this will be one of the hottest EDA events this year; here are some details.

For a start, Sunday evening receptions have been kicking off DAC for many years. Industry luminaries offer their thoughts on the past year and predictions for the next, all accompanied by generous food and beverage service. Do you know what the “DAC glance” is? That’s when you greet a friend or colleague at the reception and immediately look down at his or her badge to check current employment status. Changes from the previous year are not uncommon in EDA.

Over the last few years, a new DAC tradition began with the parties hosted by Heart of Technology (HOT), a philanthropic organization founded by Silicon Valley venture capitalist Jim Hogan. Based in San Jose, HOT rallies the high-tech industry to aid charities throughout the Bay Area in their fundraising efforts. To date, the organization has netted approximately $185,000 to benefit local charities in need.

Last year’s HOT party at DAC in Austin was a benefit for the Gary Smith Memorial Scholarship Endowment and this year we will again support this very worthy cause. Gary Smith was one of the luminaries of EDA, the industry’s most followed analyst, and one heck of a nice guy. He defined market segments, tracked market share, offered revenue projections, published detailed research reports, and served as a tireless advocate for EDA.

There is no better way to celebrate Gary’s extraordinary life than to support the endowment, which offers an award to one undergraduate student annually participating in the San Jose State University Educational Opportunity Program’s Guardian Scholars Program. This program serves youth emancipated from foster care, Wards of the Court, and certified homeless individuals who are highly self-motivated to complete their college education at SJSU.

So now you know the backstory and can appreciate why this combined event at DAC is so significant. The evening starts at 6:00 p.m. on Sunday, June 24th, with the DAC Welcome Reception, including the presentation “EDA Industry Observations and Outlook” from Richard F. Valera, Managing Director, Equity Research at Needham & Company at 7:00 p.m. This is followed immediately by the “HOT 55” party, which will run until 10:30 p.m.

All this happens at Moscone West in San Francisco on the Third Floor Mezzanine. Back by popular demand, Vista Roads Band will provide the entertainment with special guests Methodics Ensemble. There is no ticket or pre-registration required. All DAC attendees and sponsored guests are welcome to the party at no charge, though HOT and the 18 co-sponsoring companies and organizations ask that you donate what you can to the Gary Smith Memorial Scholarship Endowment.

We are certainly looking forward to attending and hope that you can be there as well!


The Wolper Method
by Bernard Murphy on 06-20-2018 at 11:00 am

If you read around topics in advanced formal verification you’re likely to run into something called Wolper coloring, or what Vigyan Singhal (Chief Oski at Oski) calls the Wolper method. Many domains have specialized techniques but what’s surprising in this instance is a seeming absence of helpful on-line explanations (though there are plenty of resources which cite and use the method without explanation, as if we should already know what it is.) The original source is a paper by Pierre Wolper which may be a little heavy going for some (me too), so I asked Vigyan for help, which he happily provided, adding also some interesting background. What follows is my attempt to provide an explanation for those of us who aren’t CS theoreticians.

Let’s start with the problem the method aims to address. When you want to verify the correctness of data transport logic (in network switches or on-chip interconnects or memory subsystems, for example), checking the protocol is one part of the job, with well-understood dynamic and formal approaches to verification. Checking integrity of the payloads flowing through the network – can they be corrupted in some way – is a different task. At first glance, this could be extremely difficult. In simulation you can’t practically check all possible data values in potentially long sequences and yet it may be far from obvious what corner cases would provide good coverage. And formal methods, even using clever techniques, normally have problems with very long sequences.

Wolper’s contribution was to discover and prove that, as long as control behavior in the logic is independent of payload data, it isn’t necessary to test all possible payload values or long sequences. In fact it is sufficient in formal proofs to use one or two bits in a (payload data) sequence, from which you can provably infer behavior for sequences of arbitrary length. I’m not going to attempt a proof of this; you can read the paper or take it on trust. Instead I’ll give a little background on how this technique found its way into formal verification for hardware, along with a couple of examples of application.

Vigyan told me that when he was working on his doctorate at UC Berkeley, Prof. Bob Brayton directed his formal group to study each week certain papers he would recommend, looking for possible applications in formal methods. Based on the Wolper paper they developed a formal proof approach to checking properties related to a sequence of items transported through a design, in which the design is making decisions about routing, merging and other transport-related activities. So that’s where it all started in our world.

Quite generally they found that using this Wolper technique they could formally detect any possibility that a design could drop, duplicate, corrupt or reorder data, for any possible data sequences. Again, the naive formal approach would check all possibilities out to some sequential depth and then run out of gas. But thanks to Wolper’s insight, they could prove correct behavior using a few specific sequences composed of just a few bits of data, and from that infer correct behavior over arbitrary length sequences.

Using this technique in the simplest case, you would look at only a stream of single bits coming into the router. You constrain the stream to have just two consecutive bits set to 1, with all other bits constrained to zero; importantly, the position of these consecutive bits in the stream should be unconstrained. It’s easy to get tripped up here by a simulation mindset (I did at first). Don’t think of how to set up specific sequences. Think instead of what will be constrained in the proof – two consecutive bits somewhere (unconstrained) set to 1 and all others set to zero. The formal engine will take care of looking for any possible counter-example to this being undisturbed at the output.
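To make that constrained input space concrete, here is a minimal sketch in Python (purely for intuition; in a real flow this would be written as SystemVerilog assumptions handed to the formal tool, and the function name here is just illustrative). It enumerates, for a fixed length, every stream the constraint admits:

```python
# Illustrative only: enumerate the Wolper-colored input streams of
# length n. Each legal stream is all zeros except for two consecutive
# bits set to 1; the position of that pair is left unconstrained, so a
# formal engine effectively considers all of these cases at once.

def wolper_colored_streams(n):
    for pos in range(n - 1):                  # pair position: unconstrained
        stream = [0] * n
        stream[pos] = stream[pos + 1] = 1     # the two "colored" bits
        yield stream

for s in wolper_colored_streams(6):
    print(s)
# [1, 1, 0, 0, 0, 0]
# [0, 1, 1, 0, 0, 0]
# ... and so on, one stream per possible pair position
```

The key point is that the pair position is a free variable for the formal engine, not something you pick; the engine explores all of them exhaustively.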

This kind of constraint is what is often referred to as Wolper coloring. You can add an assertion to check transmission at the output of the design using a small state machine. This state machine will accept any sequence of zeros, followed by two consecutive 1s, followed by any sequence of zeros. But it will error on a single 1 (a bit dropped) or a 101 sequence (maybe an erroneous data insertion or reordering). If the assertion triggers, you have a bug in the transport logic. And if you don’t get an assertion trigger you know, thanks to Wolper, that the transport logic is bug-free for any sequence.
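Here is a minimal Python model of that small state machine, again just to show the logic; in practice it would be an SVA checker bound to the design output, and the names below are my own:

```python
# Illustrative monitor for the output stream: accept any zeros, then
# two consecutive 1s, then any zeros; flag anything else as a
# transport bug (drop, duplication, insertion or reordering).

def wolper_check(stream):
    state = 0                     # 0: before pair, 1: saw one 1, 2: pair done
    for i, bit in enumerate(stream):
        if state == 0 and bit == 1:
            state = 1             # first colored bit arrived
        elif state == 1:
            if bit == 1:
                state = 2         # second colored bit: pair intact
            else:
                return f"single 1 ending at bit {i}: a bit was dropped"
        elif state == 2 and bit == 1:
            return f"extra 1 at bit {i}: duplication, insertion or reorder"
    if state != 2:
        return "the 11 pair never arrived intact: data was lost"
    return None                   # clean transport

assert wolper_check([0, 0, 1, 1, 0, 0]) is None      # good sequence
assert wolper_check([0, 1, 0, 1, 0, 0]) is not None  # 101 case: flagged
```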

You can continue to refine this to handle more complex transport – if bytes are merged into words on the output, you have to adjust for two streams flowing into one stream. If the design is required to retransmit when there is no response after some time, the check has to allow for that possibility. And so on.

All of which makes it rather challenging to produce a canned application (app) to do Wolper checking. Each variant to handle product differentiation, merging options, better error handling, more complex routing, etc., requires modifications to the proof. Perhaps it is best to think of this as a powerful technique for validating transport correctness, to be used by property-checking experts. Where, naturally, Oski would be happy to advise 😎

BTW I have also seen this method referred to as a data-abstraction technique. You are probably familiar with data abstractions when handling memories (reducing a large memory to a single word, byte or bit to simplify a proof). Think of the Wolper method as a way to do a similar thing with data streams – reducing an arbitrary-length stream to just a few bits in the stream.


TSMC OIP DAC Theater Schedule 2018
by Daniel Nenni on 06-20-2018 at 6:00 am

The TSMC OIP DAC Theater schedule is finalized and ready to go. It kicks off Monday at 10:15 am in booth #1629 and ends with a raffle at 5:45 pm each day (Mon-Tue-Wed). TSMC gives out some very nice prizes, so check in with the TSMC booth staff when you arrive. There are 66 coveted presentation spots representing the top ecosystem partners from around the world. The TSMC theater is one of the busiest at DAC, and if you look at the attached schedule you will see why.

TSMC OIP DAC:
Overview Schedule Raffle

Honorable mentions go to the presentations by companies that we work with:

  • Analog Bits: A Case Study of FinFET SERDES for AI
  • ANSYS: ADAS Reliability for Advanced FinFET Design
  • Cadence: Virtuoso Design Platform for Advanced Nodes
  • Cadence: Advanced Semiconductor Packaging
  • Cadence: IP Solutions for Advanced Nodes
  • Cadence: High Performance 7nm Digital Design
  • Flex Logix: Applications and Value Proposition of eFPGA by Market
  • Moortec: FinFET Optimization and Reliability Enhancement
  • Mentor: Verification Solutions for TSMC Advanced Packaging
  • Mentor: Verification and Advanced DRC
  • Mentor: Tessent DFT Yield Solutions for Advanced Nodes
  • SiFive: Enabling Access to Silicon
  • Silicon Creations: High Performance PLL Design on 5nm FinFET
  • Silvaco: Technology Behind the Chip
  • Synopsys: Silicon Proven DesignWare IP for TSMC Processes
  • Synopsys: Power ECOs with ANSYS RedHawk
  • Synopsys: Custom Platform for TSMC
  • TSMC OIP Update

Special mention goes to Open-Silicon, who sent abstracts for their TSMC theater presentations:

Topic: Turnkey 2.5D HBM2 ASIC SiP Solution for Deep Learning and Networking Applications
Presenter: Asim Salim / VP of Manufacturing Operations, Open-Silicon

The most common memory requirements for emerging deep learning and networking applications are high bandwidth and density, based on real-time random operations. High Bandwidth Memory (HBM2) meets these requirements and delivers unprecedented bandwidth, power efficiency and small form factor. Open-Silicon’s silicon proven HBM2 IP subsystem in TSMC’s FinFET and CoWoS® technologies is enabling next generation high bandwidth applications and the successful ramping of 2.5D HBM2 ASIC SiP designs into volume production.

Topic: IP Subsystem Solutions for Deep Learning and Networking Applications
Presenter: Kalpesh Sanghvi / Technical Manager of IP and Platforms, Open-Silicon

For deep learning and networking ASICs, the HBM IP subsystem and the networking IP subsystem are the main building blocks. Open-Silicon’s first HBM2 IP subsystem in 16FF+ is silicon-proven at a 2Gbps data rate, achieving bandwidths up to 256GBps. Open-Silicon’s next-generation HBM2 IP subsystem supports 2.4Gbps in 16FFC, achieving bandwidths above 300GBps, and supports data rates of 3.2Gbps and beyond in 7nm, achieving bandwidths above 400GBps. Open-Silicon’s networking IP subsystem includes high-speed chip-to-chip interface Interlaken IP, Ethernet Physical Coding Sublayer (PCS) IP, FlexE IP compliant with the OIF Flex Ethernet standard v1.0 and v2.0, and Multi-Channel Multi-Rate Forward Error Correction (MCMR FEC) IP.
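As a sanity check on those bandwidth figures (my arithmetic, assuming the standard 1024-bit HBM2 data interface, which the abstract does not state explicitly):

\[
2\ \text{Gb/s per pin} \times 1024\ \text{pins} = 2048\ \text{Gb/s} = 256\ \text{GB/s}
\]

The same calculation gives roughly 307GBps at 2.4Gbps and 410GBps at 3.2Gbps, consistent with the “above 300GBps” and “above 400GBps” claims.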

Topic: Package Design, Assembly and Test Strategies for Robust 2.5D HBM2 ASIC SiP Manufacturing
Presenter: Abu Eghan / Sr. Manager of Packaging & Assembly, Operations, Open-Silicon

2.5D HBM2 ASIC SiP manufacturing has unique challenges for package design, assembly and testing, both at the wafer level and the SiP level. Open-Silicon has proven solutions and strategies available to mitigate these issues in order to successfully ramp ASIC SiP designs into volume production.

About DAC
The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, and for electronic design automation (EDA) and silicon solutions. A diverse worldwide community representing more than 1,000 organizations attends each year, ranging from system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives, to researchers and academicians from leading universities. Close to 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, methodologies and technologies. A highlight of DAC is its exhibition and suite area with approximately 200 of the leading and emerging EDA, silicon, intellectual property (IP) and design services providers. The conference is sponsored by the Association for Computing Machinery’s Special Interest Group on Design Automation (ACM SIGDA), the Electronic Systems Design Alliance (ESDA), and the Institute of Electrical and Electronics Engineers’ Council on Electronic Design Automation (IEEE CEDA).


Billion Transistor Designs Need Faster Full Chip Tools
by Tom Simon on 06-19-2018 at 12:00 pm

During the design cycle, as tape-out approaches, time pressure usually goes up dramatically. To make matters worse, the design itself is much larger, because all the block-level work is done and there is a requirement to work with the entire database. It feels like it’s time to put aside the garden trowel and start using a steam shovel. This is when whole-chip DRC runs are done and each change needs to be double-checked to ensure that no inadvertent changes to the design have been introduced. At this stage, all the tools that were used to initially create the design are most likely sagging under the weight of the fully assembled and nearly finished database. Fortunately, there is an EDA vendor working specifically on solving these design challenges. The Taiwanese company AnaGlobe was founded in 2000 and has a solution for viewing, fixing and comparing the largest design files in existence.

I caught up recently with Ted Chou, AnaGlobe’s Corporate Applications Engineer, to go over their Thunder Integration Platform and Thunder LVL tool. One of their most significant advantages is high capacity and extremely fast database reading and writing. They have their own database format called Thunder DB, but can also read GDS, LEF/DEF and OASIS. The Thunder DB is about one tenth the size of GDS. Thunder can read in a 17 GB database in around 8.5 minutes; Ted pointed out that other tools can take around 112 minutes for a design this size. So, the time saved is significant. Similar gains are seen for database writing. He also pointed out that their performance scales well when additional processors are used.

Their tools operate with the full design in memory, so no viewing or editing operations require disk file access. This makes Thunder a great choice for viewing and fixing DRC errors found with Calibre. AnaGlobe offers an integration with Calibre and ICV. But what appears to be one of the most compelling motivations for using Thunder is its LVL capability. Unlike Calibre, no rule deck is needed for Thunder LVL. Another incentive is that you can use your Calibre license for something else when running Thunder LVL, and Thunder only needs one license to run on multiple CPUs, unlike Calibre.

Thunder LVL can run flat or hierarchical. Some of the runtimes that Ted shared with me were impressive. On a 170 GB design file with 393 layers, other tools took 15 hours with 12 CPUs. Thunder LVL ran in 1.7 hours. Thunder LVL also features synchronized viewports for viewing differences in the design that it reports.

There is a lot to be said for companies that pick a niche and focus relentlessly on delivering a well-supported product. It helps that the founders come from TSMC and SpringSoft, both companies with excellent bona fides. From the looks of it, AnaGlobe is enjoying good adoption at a number of the largest semiconductor companies. This makes sense because their market focus is on the largest designs. For more information on AnaGlobe and their Thunder products, be sure to look at their website.

In an interesting side note, I worked at Calma, the company that created the Stream format. In fact, GDS II was the name of their layout editor. Yes, there was a GDS I before GDS II, but Stream format was a GDS II utility and was never used with GDS I. I had the opportunity to meet Sheila Brady, the woman who actually wrote the very first Stream import and export utilities. Just to give a frame of reference, this was back in the early 1980s. It’s a testament to some good software design that GDS Stream is still in use today. Of course, it has been adapted to handle today’s more complicated technologies with the addition of more layers and datatypes, and larger record sizes. However, even back then I remember her saying that she really only intended it to be a tape archive format, not a database for design hand-off.

In one sense, it’s amazing it is still in common use. But this also makes the case for using newer, more efficient databases and formats for today’s design challenges. Imagine if you were still using any other technology from 1980 for your daily tasks: floppy disks, single-digit-MHz processors, magnetic tape drives, dial-up modems…


The Starting Point of Functional Safety Analysis
by Bernard Murphy on 06-19-2018 at 7:00 am

In the course of building my understanding of functional safety, particularly with respect to ISO 26262, I have developed a better understanding of the design methods used to mitigate safety problems and the various tools and techniques that are applied to measure the impact of those diagnostics against ASIL goals. One area in which I was struggling was the failure mode effects analysis (FMEA), which seems to be viewed as a “given” in many presentations. I didn’t see any explanation of how the FMEA was developed in the first place. That was a problem for me because everything else in functional safety is built on top of the FMEA; get that wrong and the rest of your effort is pointless, no matter how cleverly applied. So I asked Alexis Boutillier (Corporate Apps Manager and Safety Manager at Arteris IP) to explain.

Alexis started with a great question – how are you going to prove to your customer that your safety analysis is satisfactory? Obviously you can’t get away with “our experts checked it out and all possible failures are covered”. But you also can’t get away with “here’s a list of all the safety-critical features and related diagnostic logic”. The analysis has to be more objective and reviewable by someone not expert in your design. This is what the FMEA (within the standard) provides; a method to remove opinion, no matter how expert, from the loop.

Alexis demonstrated using a routing component in a NoC IP as an example (see the table figure). The table looks at one signal at a time and considers the potential impact of different types of error on that signal, both permanent and transient. Since this is a router, there can be impact from failures affecting the header and failures affecting the payload. Permanent (e.g. manufacturing) errors are modeled as stuck-at-zero or stuck-at-one, and transient errors (e.g. soft errors from neutron-induced ionization) might result in bit flips and/or might lead to multi-bit errors.

Then in each case, you determine the effect of that failure. An error in framing might lead to a transaction being lost or an unexpected transaction. An error in the address could lead to incorrect routing, and an error in the data naturally leads to bad data. You can also qualify these assessments with an explanation if they will only happen in certain circumstances. Next you describe a safety mechanism to either detect or correct for that error, along with your estimate of the effectiveness of that mechanism. So now you know, for each possible failure, the coverage you have in mitigating that failure. So far, so good. Very systematic; you could share this with a customer and they would agree you have done a comprehensive and objective job of covering the possible failure modes.

The really interesting question is – how likely is each of these failure modes? This is where it would be easy to slide back into opinion-based debate which would undermine the objectivity goal. One way to overcome this is to base failure distribution on the number of signals in that mode as a percentage of the total number of signals in the design. In the first entry on the table the mode is associated with a single signal out of 195 total signals, so it contributes just 0.5% to the distribution. This single-point fault analysis is in line with the standard and better meets an expectation of objectivity than would expert judgement. From a Tier1 or OEM viewpoint there’s nothing objective about being asked to trust a semiconductor supplier design expert for their opinion on probable distributions.
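To show how this composes mechanically, here is a minimal Python sketch of the calculation. The signal counts and coverage numbers below are hypothetical, except for the one-signal-out-of-195 mode taken from the table example in the article:

```python
# Hypothetical FMEA rows for a router-like component: each failure
# mode gets the number of associated signals and the diagnostic
# coverage of its safety mechanism. The failure distribution is
# simply each mode's signal count over the total signal count.

from dataclasses import dataclass

@dataclass
class FmeaRow:
    failure_mode: str
    num_signals: int    # signals associated with this failure mode
    coverage: float     # fraction of failures the mechanism detects/corrects

rows = [
    FmeaRow("framing error -> lost or spurious transaction", 1,   0.99),
    FmeaRow("address error -> incorrect routing",            34,  0.99),
    FmeaRow("payload error -> corrupt data",                 160, 0.90),
]

total = sum(r.num_signals for r in rows)   # 195 signals in this example

for r in rows:
    share = r.num_signals / total          # objective, opinion-free weight
    print(f"{r.failure_mode}: {share:.1%} of distribution, "
          f"residual risk {share * (1 - r.coverage):.2%}")
# First row: 1/195 ~ 0.5% of the distribution, as in the article.
```

The point of the structure is that both the distribution and the residual risk fall out of counts and coverage estimates recorded in the table, with no expert opinion left in the loop.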

This would make building an FMEA very challenging if you started with a decent-sized IP which hadn’t been designed from the outset around these objectives. A big chunk of flat RTL would be painful to analyze comprehensively (many more possible failure modes), as would logic where failure modes had interdependencies between sub-functions (which would have to be grouped together for FMEA, I would guess). Analyzing big, complex FMEAs would also make for challenging compliance reviews with customers (tell me again why this covers all possible failure modes? I got lost 20 pages back…). But if the IP is designed bottom-up for this kind of support, each component FMEA is easier to understand and these will drop neatly into FMEA analysis in the larger system (where FMEAs can also be composed together more automatically). The net of this is that it’s best to compose FMEAs in a modular and hierarchical fashion, which you can do if the IP has been micro-architected from the outset to support this analysis.

There’s a lot more to safety and how IP suppliers need to support their customers in their safety activities. I touched on some of this in my last blog for Arteris IP (ISO 26262: My IP Supplier Checks the Boxes, So That’s Covered, Right?). All of that support starts with the FMEA. You can learn more about how Arteris IP approaches safety in their designs HERE.


Apple and China to kill Intellectual Property?
by Eric Esteve on 06-18-2018 at 12:00 pm

The recent (since 2016) news about how Apple, China, the FTC and other organizations are positioning themselves with respect to IP is concerning, as it seems to indicate that Intellectual Property in general (Design IP and Technology IP) is at risk. Let’s consider several facts through different cases, involving ARM, Qualcomm and Imagination Technologies versus Apple, the Chinese government, or various organizations in Europe and the US.

ARM vs China
ARM is by far the #1 Design IP vendor, with $1,660 million in revenues in 2017. China is becoming an important semiconductor market, where chip design activity is growing fast with Chinese government support. SoC design is characterized by CPU (or DSP, GPU) integration, and ARM’s CPU market share is 86%. This makes ARM CPUs almost unavoidable, especially for wireless application processor design. In May 2017, ARM’s owner SoftBank created a joint venture with “Chinese investors” (piloted by the Chinese government), split 49% (ARM) / 51% (China). ARM probably had no other choice if the company wanted to continue to license IP in China, so they closed the deal.

We have learned from the press (EETimes) in June 2018 that “SoftBank Group, owner of microprocessor IP firm Arm, announced this week that the British firm will sell 51% stake of Arm’s China unit to Chinese investors and ecosystem partners for $775.2 million to form a joint venture for Arm’s business in China. Under the agreement, Arm will still receive a significant proportion of all license, royalty, software, and service revenues arising from Arm China.”

In other words, China has won control of the ARM IP business in China, thanks to a two-step maneuver… What would you call this: official theft, or weird business practices?

Apple vs Imagination Technologies
Apple was the #1 customer of IMG, licensing the company’s GPU IP since 2007. This GPU was integrated into Apple’s application processors, which went into the iPhone and iPad. For a licensing deal involving such critical IP, both engineering teams must work very closely, sharing information about the GPU architecture, integration and test strategies. That’s why, when Apple announced in 2017 that they would develop their own GPU, IMG was not only desperate at losing 50% of their GPU IP revenues, but also angry, because they thought it was almost impossible for Apple to develop their own GPU without using architecture, test or integration related know-how acquired while working with IMG.

In terms of strategy, it makes sense for Apple to develop their own IP, like CPU, GPU or even DSP. That was Qualcomm’s strategy: the CPU was ARM-compatible (architecture license), the DSP was 100% Qualcomm (thanks to an acquisition), as was the GPU. That’s the best way to differentiate from the competition! The problem with the Apple/IMG case is that Apple didn’t buy anything, and it is difficult to imagine that Apple’s engineers will never use the know-how acquired when working with the IMG GPU IP… The result is that IMG may disappear from the IP market, and Apple’s position is difficult to justify.


Qualcomm vs Apple
Another case is still unresolved: Apple has stopped paying royalties to Qualcomm for their wireless technology licensing. Qualcomm invented CDMA technology, which is not a surprise, as one of the Qualcomm founders is Andrew Viterbi (you probably know the algorithm better than the person: in 1967 he invented the Viterbi algorithm for decoding convolutionally encoded data). Without Qualcomm’s inventions, today’s smartphones wouldn’t be able to integrate a 100 Mbit/s modem, allowing smooth download of movies, pictures, etc. In other words, without Qualcomm, a smartphone like the iPhone would probably have remained an elegant but not very powerful object!

From the beginning, Qualcomm’s business model has been to charge royalties as a percentage of the system selling price, not the chip (application processor or modem) price. Apple, selling iPhones for $800 to $1,000, is certainly paying a higher royalty than it would pay on a $25-$50 chip. But Apple is making an incredibly high level of margin, estimated at 70% to 80% of the iPhone price, depending on the amount of integrated flash. Apparently, paying a few tens of dollars is too much for the company, so they have stopped paying royalties to Qualcomm. This case is currently being reviewed by the courts, but it shows how a large company like Apple, making huge profits on their iPhone sales, is simply showing contempt for intellectual property, invention and design IP, which is a bit weird for a high-tech company…

Qualcomm vs China
Last but not least is this two-year-old case, now settled, between a Chinese organization (the NDRC) and Qualcomm. The issue was again the high royalty level that Qualcomm was asking of Chinese companies for its phone-technology-related patents. After months of negotiation between Qualcomm and the NDRC, Qualcomm had no other choice than to cut in half the royalties the company was asking of Chinese companies. The official result is that:

“Chinese enterprises will enjoy a lower SEP royalty rate (5% for 3G devices and 3.5% for 4G devices) and royalty base (65% of the net selling price of the device).”

Who is the winner, and who is the victim? I let you decide, but in all these cases the loser is certainly intellectual property, whether it’s Design IP (ARM, Imagination Technologies) or technology patents (Qualcomm), and this is becoming a real concern!

Eric Esteve (IPnest) June 15th, 2018


Fractal Technologies Joins TSMC Open Innovation Platform EDA Alliance
by Daniel Nenni on 06-18-2018 at 7:00 am

In case you missed it, Fractal is now officially part of the TSMC EDA Alliance. Fractal Crossfire is the leading IP and library QA tool used by TSMC and many of TSMC’s customers, so this is for the greater IP good, absolutely. Fractal has also released a new white paper, “Setup Generation for Fractal Crossfire”, that we can talk about, but first let’s check out the meat of the press release:

Fractal Technologies is proud to announce its acceptance as a partner in the TSMC EDA Alliance, a key component of the TSMC Open Innovation Platform® (OIP). Within this partnership Fractal Technologies will be cooperating with TSMC to support mutual customers. Fractal Crossfire provides a validation solution for the qualification of IP blocks prior to the integration of these components into final designs for manufacturing.

“TSMC recognizes the need of our customers to have a formal IP qualification handshake in place. This enables TSMC to deliver IP products that are compatible with customer-specific requirements on IP configuration. As an independent IP qualification solution, Fractal Crossfire is enabling this IP qualification capability,” said Suk Lee, senior director of the Design Infrastructure Marketing Division at TSMC.

Bottom line: leading-edge processes are breaking internal QA flows. If your QA strategy is “If it ain’t broke, don’t fix it” then you are in for a rude awakening.

If you are not familiar with Fractal, we have been covering them on SemiWiki for more than five years, so their landing page is a great place to start. You could also check out the IP Library and QA with Crossfire webinar we did last month.

The new white paper “Setup Generation for Fractal Crossfire” is a quick 6 pages, and registration is not required, so it is open to all.

In this white-paper we will discuss this customization process for the Fractal Crossfire IP qualification tool. We will review the toolbox provided by Crossfire to automate the setup process and the ways in which a design organization can further leverage a well designed IP qualification setup by providing it as a standard to its suppliers.

You can also see Fractal at the Design Automation Conference next week in booth #2333. I will be there signing copies of “Fabless: The Transformation of the Semiconductor Industry” compliments of Fractal. I hope to see you there!

About Crossfire
Mismatches or modelling errors in libraries or IP can seriously delay an IC design project. Because of the increasing number of different views required to support a state-of-the-art deep submicron design flow, as well as the complexity of the views themselves, library and IP integrity checking has become a mandatory step before the actual design can start. Crossfire helps CAD teams and IC designers perform integrity validation for libraries and IP. Crossfire ensures that the information represented in the various views is consistent across these views. Crossfire improves the quality of your design formats.

About Fractal Technologies
Fractal Technologies is a privately held company with offices in San Jose, California and Eindhoven, the Netherlands. The company was founded by a small group of highly recognized EDA professionals. For more information: http://www.fract-tech.com/.



Our Autonomous Moonshot
by Roger C. Lanctot on 06-17-2018 at 7:00 am

Keynoting the TU-Automotive event in Novi, Mich., last week, on the 50th anniversary of the assassination of Senator Robert F. Kennedy, I took the occasion to note the lofty visions to which Robert and his brother, President John F. Kennedy, aspired. We face our own challenges in the automotive industry today, with an annual global highway death toll of 1.25M. It is in the interest of mitigating that fatality rate that we pursue our own lofty objective of automating driving.

In my talk I cited President Kennedy’s sentiments, spoken at Rice University in 1962: “We choose to go to the Moon! We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard.”

There is no question that automating driving is hard. It is also expensive. As car makers look out over the range of changing vehicle ownership and usage behavior a red ocean of expensive, loss-producing opportunity emerges. From ride hailing to car sharing to electrification and autonomy, billions of dollars are being invested in startups, acquisitions and hiring sprees producing rivers of red ink.

At the same time, industry intruders from the tech community – most prominently Amazon, Apple, Alphabet and Alibaba (the A-Team) – are circling ominously, licking their corporate chops at the opportunity to gobble up great chunks of the automotive and wider transportation industry. At stake are the hearts, minds and wallets of the driving and commuting public.

Of greatest concern to auto makers is the fact that this A-Team possesses the financial wherewithal to endure the flow of red ink and swim across the red ocean to achieve the objective: seizing control of the hard-won customer relationships built upon more than 100 years of automobile manufacturing. The victory in this struggle may well be determined by something as simple as speech recognition in the form of the many digital assistants cropping up on mobile devices, in smart speakers and, now, coming to cars.

The A-Team made its first foray into customer ownership with smartphone integration in the form of the increasingly familiar CarPlay and Android Auto. Now Alexa, Google Voice, Siri, Cortana and others are coming to car dashboards. These systems have the ability to turn every car into a mobile search engine – with predictable results.

Standing in the path of these digital interlopers are Nuance Communications – with its hybrid natural language understanding technology – and a tiny startup called German Autolabs. German Autolabs is offering an over-the-top digital assistant – “Chris” – purpose-built in cooperation with Nuance to serve the needs of drivers and passengers.

Why is Chris so important? Because the A-Team has made it clear to auto makers that they won’t be segregating or shielding vehicle-based digital assistant users from broader customer aggregation activities. The drivers of cars who may use Alexa, Google Voice and the rest will be subject to the broader customer acquisition objectives of these external solution providers.

The A-Team is seeking to sell and service cars, if not actually manufacture them, and they want to manage vehicle ownership and usage behavior – a monetization opportunity ultimately representing trillions of dollars. If successful, car makers will be left swimming helplessly in their red ocean as the A-Team sails off into the sunset.

Only time will tell whether Chris can provide the critical differentiation and digital assistance infrastructure necessary to preserve auto industry customer relationships and connectivity. But without Chris, the traditional auto industry may be unable to swim across the red ocean rising around our ankles. The key to Kennedy’s vision, after all, wasn’t just getting TO the moon, it was also getting back FROM the moon.


The Best of IP at DAC 2018 Conference
by Eric Esteve on 06-15-2018 at 12:00 pm

Design IP is doing well, with 12% YoY growth in 2017, even if the market is only about $3.5B. But Design IP is serving a $400B semiconductor market. Can you imagine the future of the semi market if chip makers couldn’t have access to Design IP? The same is true for EDA: it’s a niche market (CAE revenues were about $3B and IC Physical Design & Verification revenues were less than $2B in 2017) driving a $400B market!

We will concentrate on IP here, as I have proudly been part of the DAC IP Committee since 2016, and I would like to highlight some sessions at the next DAC in San Francisco, including the two I am chairing.

On Monday the 25th I will certainly attend the session “Minimizing IC Power Consumption with PPA Optimized IPs”, chaired by Farzad Zarrinfar and moderated by John Blyler. Not only because Frederic Renoux, VP of sales for Dolphin Integration, will be one of the panelists (Dolphin is one of my rare customers located in France), along with Lluis Paris from TSMC (another IPnest customer…), but because I strongly think that low power will be key in the near future. Let’s call it “energy efficiency” instead of just low power and look at the above picture: if SoC design stays as it is today, computing will consume more than the world’s total energy production by 2040!

We have been used to communication focused only on SoC performance (as with Intel CPUs, where the only metric was x.y GHz), but chip makers will have to invest in energy-efficient chip development, as their customers (running data centers or simply integrating IoT devices in their systems) will force them to provide better, more energy-efficient chips.

I will have no other choice than to attend the invited session “IP and Architectures for CMOS Image Sensors” on Tuesday the 26th at 10:30, as I am the chairman! Moreover, I suggested the topic to the Committee, as CIS is already a very healthy segment of the semi market, weighing in at $12B in 2017 according to Yole. The CIS market has exploded to bring ever more performant CMOS imagers to the mobile phone industry, where the camera is becoming the top selling argument (you don’t sell a smartphone because it integrates the best Viterbi algorithm).

And the CIS market is expected to get a further boost from the automotive segment, where mirrors are being replaced by cameras (today) and where many cameras, radars and LIDARs will be integrated to support autonomous vehicles (tomorrow). I am sure that most readers don’t know about CIS architecture, or about the type of IP integrated into a CIS (just like me in January 2017, when I started to work on this technology). I can tell you, it’s fascinating! Plenty of innovation is needed; the designers play at the limits of physical science. You will certainly learn a lot, and learn from the best worldwide experts like Jean-Luc Jaffard, a CIS market veteran working for Prophesee, who will give a state-of-the-art overview to introduce the topic.

Still on Tuesday the 26th, at 1:30 pm, I will not miss the session “Has The Time For Embedded FPGA Come At Last?” chaired by Ty Garibay, the DAC IP Committee chairman and CTO of Arteris, after working at Intel, Altera and TI! IPnest released a report in April this year, “eFPGA IP Market Survey & Forecast 2018-2028”, showing that, if the industry confirms the adoption trend for embedded FPGA, this IP market should explode and pass $1 billion in 10 years. The “usual suspects” are part of this session, with presentations from Steve Mensor (Achronix), Cheng Wang (Flex Logix), presenting with John Teifel (Sandia National Laboratories) as an eFPGA IP customer, and Yoan Dupret (Menta). I say usual suspects as all of them have been communicating actively for the last couple of years, including blogs on SemiWiki, with maybe a special mention to Flex Logix in terms of marcom activity! One precision: to be selected to present in this session, one important criterion was having an SoC customer in production. All of them have at least one identified customer (and they can share the name).

There are plenty of other sessions I recommend attending, including “New Challenges for IP and VIP to Support Emerging Application or Algorithm”, still on Tuesday the 26th, at 3:30 pm, which I am also chairing, with 6 submitted papers.

Or “Latest Developments in High Performance SoC Interface IP Standards”, an invited session chaired by Chirag Dhruv (AMD), dealing with IPnest’s domain of expertise, Interface IP (see the market report and forecast in the above picture). It’s difficult to name all the IP sessions, but I can guarantee that the quality of the papers, submitted or invited, is excellent (having spent hours reviewing and selecting them, I can vouch for it)!

You should go to DAC 2018 IP & Design and select the topics which best fit your interests.

Eric Esteve from IPnest


Stanford and Semiconductors: A Unique Combination in the 1960s
by Daniel Nenni on 06-15-2018 at 7:00 am

This is the second in the series of “20 Questions with Wally Rhines”

At 8am on my first day of graduate school at Stanford, I joined the “Structure of Materials” class taught by Craig Barrett, the youngest faculty member in the Materials Science and Engineering Department. Craig had just returned from a post-doc in England and was energetically publishing papers, writing a book (along with Bill Nix and Alan Tetelman) and teaching classes. He passed out mimeographed copies (for a price) of the rough drafts of the book as the class textbook. His distinguished undergraduate career in the same department had led to a faculty appointment, and his history at Stanford included a record in the high hurdles which still stood. Ultimately, his impatience with the academic world led to his departure to join Intel, where he eventually became CEO (but that’s another story). Craig, as the youngest professor, also offered the benefit that he willingly joined the grad students at the “O” (short for Oasis) and purchased pitchers of beer when the graduate student money ran out (which was early in the evening).

There were lots of interesting people in engineering at Stanford at that time, since Frederick Terman, former Dean of Engineering, had recruited a variety of rising stars in the semiconductor industry, including William Shockley and Gerald Pearson, both of Bell Labs transistor fame. Shockley was more famous because of the Nobel Prize and, ultimately, more INFAMOUS as he redirected his research from semiconductors to racial differences in intelligence. Since he had the office next to ours, we kept a sign in the window labeled “Shockley’s Office is Next Door” just in case someone with a firebomb lost direction or became confused. The McCullough Building was hardly a safe place anyway, with research in II-VI and III-V semiconductors down the hall involving elemental materials that were poisonous in the parts per billion range. And T.J. Rodgers, who would in the future found Cypress Semiconductor, was running experiments in the basement with the first of Stanford’s ion implanters, causing unexpected, and sometimes dangerous, results.

While I plodded my way through Craig’s course, my extracurricular life was stimulated by my residence in Crothers Memorial Dorm, fondly referred to as “Cro Mem”. It consisted of two buildings, side by side, one for graduate engineering and science majors and one for lawyers and MBA students. Although love was not great between the two buildings, there were frequent touch football games and mutual enjoyment of the promotional efforts of emerging wineries, like Wente Brothers and Inglenook, who provided free wine anytime we had a party, which was frequent. Judging from those I still know from Cro Mem, the wine promotion was very effective although maybe not for Wente and Inglenook. But parties required more than wine so we turned to the most innovative of the Cro Mem residents, Roger Melen. Roger arrived at Stanford with an undergraduate Electrical Engineering degree from, of all places, Chico State (Daniel Nenni: don’t take offense). He published a book titled “Understanding Operational Amplifiers” (which I didn’t) by his second year in graduate school and he was making money in a variety of entrepreneurial ways, like consulting for Bay Area electronics startups or writing articles for Popular Electronics. Whenever we needed money for a party, Roger generously wrote an article, received $400 and the party was on.


Image sensor made by Terry Walker, Roger Melen and Harry Garland

Meanwhile, Roger worked on his PhD thesis under Prof. Jim Meindl, who had dozens of graduate students (many of whom came to make up the Who’s Who of the electronics industry) designing chips and producing them in the two-inch wafer fab on campus. Roger was working on the Optacon, a reading aid for the blind, developing an 8×16 pixel charge coupled device (CCD) for a compact version of the product. But Roger was much too innovative and productive to work only on his research. On the side, his consulting business had grown. He designed the electronics for all sorts of equipment that recent graduates were developing. Since most of these companies had very limited cash flow, Roger had to be content to accept future royalties in payment for some of his work. Over time, he discovered that these entrepreneurial customers, although skilled in engineering and product development, had lost the ability to count. Roger was concerned about being cheated on the royalties, so he developed a system to overcome this deficiency. He performed the design work as he always had, but instead of leaving the integrated circuits (ICs) in each design labeled as supplied, he and his graduate student friend Harry Garland re-marked one of the key ICs in each design with proprietary letters. He then assumed the supply chain fulfillment role by relabeling the ICs and providing the parts for production.

Roger, with fellow graduate student Harry, created the name Cromemco, after the “Cro Mem” dorm; within a few years Cromemco became one of the very early, successful microprocessor-based computer companies. They needed funding and publicity to start the company, so Roger turned to his tried and true technique of writing articles for Popular Electronics, but this time including the Cromemco name.

During this period, I completed my degree and headed to Texas Instruments, where I was coincidentally assigned the task of developing CCD imagers. You can imagine our shock at TI when we saw a cover article of Popular Electronics entitled “Build Your Own Solid State Imager” by Roger Melen along with his graduate friends Terry Walker and Harry Garland. While Fairchild, Sony, RCA and TI competed fiercely to develop early CCD imagers, the graduate students at Stanford had beaten us to the punch. Or so we thought. The article provided circuit diagrams plus a block labeled “solid state imager”. To fill that block, the article instructed the reader to send a check or money order to Cromemco, which was actually just a student living in Crothers Memorial Dorm. For $25, Cromemco would provide the needed component. But instead of sending a CCD imager, Roger popped the tops off the ceramic packages of American Microsystems S4008-9 DRAMs (this early DRAM did not automatically refresh the bits during readout and came in a ceramic package with a metal lid that could be replaced by a quartz lid) and sent those instead. The image quality was good enough for the hobbyists. Those $25 checks added up to over $50,000 and became critical seed money for Cromemco, which Roger and Harry sold in 1987.

20 Questions with Wally Rhines Series