
Multi-level abstraction accelerates verification turnaround

by Pawan Fangaria on 05-02-2013 at 8:30 pm

Often the question is raised of how SystemC improves verification time when the design has to go through RTL anyway. The short answer is that with SystemC, designs can be described at a higher level of abstraction and then automatically synthesized to RTL. When the hands-on design and verification activity happens at a higher level, there are fewer opportunities to introduce bugs, and it is much easier to find them before RTL is generated. It is true that at the cycle-accurate level RTL and SystemC provide the same verification performance; however, considering the time that cycle-accurate verification requires, it is wise to reduce those iterations by doing quicker verification at higher levels of abstraction before RTL. The levels of abstraction in the ESL (Electronic System Level) space are neatly described in a book published by Cadence (http://www.cadence.com/products/fv/tlm/Pages/tlm.aspx). Cadence has further developed a verification methodology that spans the FVP (Functional Virtual Prototype), HLS (High Level Synthesis) and RTL (Register Transfer Level) levels of abstraction. In mid-April, at the ISCUG conference, Mr. Prashanth G Baddam presented this methodology in a talk jointly authored with Mr. Yoshinori Watanabe.


Prashanth G Baddam, Yoshinori Watanabe

Prashanth concisely described what can be verified in the models at each of these levels in order to optimize total verification time, since not every item can be verified on every abstract model. Hence, after creating the verification plan for the hardware under verification, it is important to categorize its verification items with respect to the abstract models, as shown below.

At the FVP level, the models are at the SystemC programmer's view, with bus interfaces described in TLM-2.0. A single testbench, written in SystemVerilog or e using the UVM standard, is used at all levels. At the HLS level, the model needs to be synthesizable: concurrent blocks for function and refined TLM for the communication interfaces. The testbench is extended to verify the interfaces and to add more tests for the refined functionality. At RTL, the design is closer to hardware, with microarchitecture details such as complete interface information and state transition models; interfaces are at the signal-accurate protocol level. The testbench is further extended to use protocol-specific interface agents and existing VIPs (Verification IP), and more RTL-specific checking is added.

At times it becomes necessary to probe the values of particular variables for white-box monitoring, whether they are declared as private members of a SystemC module or are local to functions. As this is tedious to do manually, Cadence provides a library called wb_lib to make the task easy. The library consists of APIs for monitoring local and private variables, which can then be accessed from the testbench.

Cadence provides a complete metric-driven methodology for verifying a system from the TLM to the RTL level, with powerful coverage capabilities. It includes a verification plan editor in which verification items at the different levels can be identified; these are then refined as the design moves towards RTL.

With verification starting at the design-decision stage, more of it is covered early in the design cycle, complemented by finer-grained verification at later stages and testbench reuse across all levels. Together this shortens verification time considerably: a 30-50% shorter debug cycle, 2X faster verification turnaround, and around 10X faster IP reuse.

The methodology also enables broad architectural exploration to achieve higher throughput, lower power consumption and smaller size, and it improves efficiency in exploring verification strategies. The detailed presentation is posted at http://www.iscug.in/sites/default/files/ISCUG-2013/ppt/day1/3_5_MultilevelVerification_ISCUG_2013.pdf

Cadence is closely collaborating with its key customers to tailor this methodology for their specific production environments. That's encouraging news!


Kathryn Kranen Joins CriticalBlue’s Board

by Paul McLellan on 05-02-2013 at 8:05 pm

Jasper just announced that Kathryn Kranen, their CEO, had joined the board of CriticalBlue. I used it as an excuse to hit up CriticalBlue’s CEO Dave Stewart, who happened to be in the valley, for a free lunch to catch up on what they are doing.

CriticalBlue started about 10 years ago in Edinburgh (yay!). It began in the business of co-processor design: its technology would take a C program, let you indicate that certain parts of the code should be accelerated, build a specialized hardware co-processor to do just that, and then tie everything back into the code. It turned out to be hard to sell. People confused it with high-level synthesis (with which it does share some features), and engineers would always argue that they could do it themselves with only a bit of work.

Plus, more and more, potential customers would argue that what they really wanted was to offload the code onto a separate processor they already had. And multicore processors were suddenly becoming ubiquitous, and people didn't really know how to do partitioning. It turns out that under the hood of CriticalBlue's product was much of the technology to do this. They expected to sell a large number of copies, but mostly they sold a small number of copies and got asked a lot of questions, since customers didn't know how to use the information (such as cache performance analysis or resource contention analysis).

So they switched to a more services-led model where they would help customers optimize their code. It turns out there are two big markets with a lot of demand: Linux-based telecom and Android-based handsets and tablets.

It is worth a lot of money for a smartphone to appear high in the performance benchmark rankings, but most software groups do not understand how to do low-level optimization close to the hardware-software boundary. CriticalBlue has also developed a sideline in helping make better smartphone hardware architectural decisions, since these are driven by how they impact software performance.


They still have a lot of technology in the form of tools, but they have a partnership model where they become the optimization partner for a limited number of customers with relationships not limited to a single product cycle (and not limited to a small number of dollars). In some cases, customers hear the pitch and reject it, only to come back a few months later when they discover they have neither the talent nor the tools to do optimization on their own. For example, CriticalBlue have a product that will tell you what data got loaded into your cache but was then never used. Reorganizing the structure of data in the program can reduce the amount of memory traffic wasted in this way, which has a side effect of increasing the cache hit rate and thus pushing the phone up the performance benchmark hit parade.

CriticalBlue is still quite small, with 16 employees, but they are hiring. Another famous EDA name, Lucio Lanza, has been on the board since 2004.

CriticalBlue’s website is here.


An AMS Seminar on May 16th

by Daniel Payne on 05-02-2013 at 8:05 pm

Analog and Mixed-Signal (AMS) designers are facing more challenges than ever, so where can they go to get some relief? One place is a half-day seminar scheduled for May 16th in Bridgewater, New Jersey. SemiWiki has teamed up with Tanner EDA, Abbott Labs and SoftMEMS to present the following topics:

  • True collaborative design enabled through database interoperability and simplified design data re-use,
  • Efficient design work-flow practices, and
  • Advanced capabilities: high-performance/capacity SPICE simulation, parasitic extraction & 3D design/analysis.

I will be presenting remotely, while three other professionals will be on hand to present and answer your questions. Barry Bass is an AMS designer at Abbott Point of Care, and he'll be sharing his design experiences. Mary Ann Maher used to work at Tanner EDA, then started her own MEMS company, SoftMEMS. Karen Lujan is an expert tool user from Tanner EDA.

Agenda

  • 8:30 - Seminar registration & light breakfast
  • 9:00 - Welcome and introductions
  • 9:15 - Leveraging the EDA Ecosystem for Productive Innovation: Daniel Payne, SemiWiki
  • 10:00 - Case Study: Barry Bass, Abbott Labs
  • 10:30 - Break
  • 10:45 - 3D Design and Analysis for Improved DFM: Mary Ann Maher, SoftMEMS
  • 11:15 - Live Product Demo with Q&A: Karen Lujan, Tanner EDA
  • 11:45 - Wrap Up
  • 12:00 - Lunch

The Tanner Mixed-Signal IC and MEMS experts will be on hand after lunch to learn about your current design challenges and discuss how Tanner EDA tools might help mitigate them.

Featured Speakers

When and Where
Thursday, May 16th
Hilton Garden Inn
500 Promenade Blvd.
Bridgewater, NJ 08807
Phone: 732-271-9030

Register Here

Tanner EDA provides a complete line of software solutions that catalyze innovation for the design, layout and verification of analog and mixed-signal (A/MS) integrated circuits (ICs). Customers are creating breakthrough applications in areas such as power management, displays and imaging, automotive, consumer electronics, life sciences, and RF devices.

A low learning curve, high interoperability, and a powerful user interface improve design team productivity and enable a low total cost of ownership (TCO). Capability and performance are matched by low support requirements and high support capability as well as an ecosystem of partners that bring advanced capabilities to A/MS designs.

Founded in 1988, Tanner EDA solutions deliver the right mixture of features, functionality and usability. The company has shipped over 33,000 licenses of its software to more than 5,000 customers in 67 countries.



CDNS V. BDA: Motion to Dismiss

by Paul McLellan on 05-02-2013 at 1:00 pm

The Cadence-BDA saga continues with Berkeley Design Automation today filing a motion to dismiss. You can read the full motion HERE. My previous blog “Cadence Sues Berkeley Design Automation” with 30+ comments is HERE.

The first problem BDA brings up is that Cadence's DMCA claim is so vague that it does not state a claim on which relief can be granted. The DMCA was put in place by Congress, as a result of lobbying primarily by the entertainment industries, to make defeating DRM illegal and to make posting DRM-free versions of songs and movies illegal. It is not clear what BDA are supposed to have done that falls under these rules.

There are three ways to integrate Cadence's ADE with BDA's tools: via Oasis, via SKILL, or via what BDA calls Unified Integration. Integrating via Oasis clearly requires a (Cadence) Oasis license. BDA claims that the Unified Integration does not, since it doesn't use Oasis at all.

BDA used to be a member of the Cadence Connections Program (which allows non-Cadence software to be integrated into flows with Cadence software, and gives the 3rd party EDA companies very cheap access to whatever Cadence software is needed to perform the integration).

As far as I know, BDA doesn’t actively bypass the Cadence license manager or anything similar, which seems to be the sort of thing that DMCA is meant to cover. They bypass needing an Oasis license by not using Oasis, not by (for example) patching ADE so it doesn’t request a license.

So to me (as a non-lawyer, so this is totally above my pay-grade) it seems that the DMCA claim is a bit of a stretch, which means the case comes down to what BDA were or were not allowed to do to perform integration under the terms of the Connections program legal agreement that both parties signed.

Cadence's claim is that integrating with ADE requires an Oasis license, and that by not integrating through Oasis BDA is violating the DMCA and/or the terms of its Connections Program agreement.

However, BDA's motion claims that "Nothing in that agreement specifies that BDA must use any 'OASIS integration product' or cause its customers to obtain an OASIS license."

BDA joined the Connections Program in 2005 (HERE is a press release about it). So for 6 or 7 years there was no issue. Of course the original Cadence complaint says that they only discovered an integration not requiring Oasis last year. Since Synopsys acquired Magma (and thus FineSim) BDA has been the “independent” simulation product. Not to take anything away from their technology, but big customers like to keep their suppliers honest and I’m sure BDA has benefited from that. You have to wonder to what extent Cadence’s actions are a response to BDA’s success. They can afford more lawyers than BDA, of course, so a lawsuit is a competitive advantage from Cadence’s point of view.

It looks like a court date is set for June 4th (in the middle of DAC in Austin), although that might be one of those boilerplate things and nothing will happen on that timescale. Avant! managed to delay going to court for literally years, as you may remember. "PLEASE TAKE NOTICE that on June 4, 2013, at 9:00 a.m., or as soon thereafter as the matter may be heard, before the Honorable Yvonne Gonzalez Rogers, Judge of the United States District Court, Northern District of California, Oakland Division, in Courtroom 5, Second Floor, there will be a hearing on Defendant Berkeley Design Automation, Inc.'s Motion to Dismiss Cadence Design Systems, Inc.'s DMCA Claim or for a More Definite Statement."


Costello on Story Telling

by Paul McLellan on 05-01-2013 at 9:03 pm

Last night at Cadence was the next installment of what I have been calling Hogan University. Jim interviewed Joe Costello about how to tell a story as part of the EDAC emerging companies series of events. The main focus was how to tell a story as a small EDA company communicating with investors, although there are obviously other forms of communication. I’m assuming that if you are reading SemiWiki that you know that Joe Costello was CEO of Cadence for many years, taking it from its birth as a merger of ECAD and SDA to a big EDA company (I think over $1B by the time he left).

Rather like in his keynote at DAC a few years ago, Joe tried to distill things down into some rules (some of them the same rules even).

So rule #1, if you want to tell a compelling story then you have to have a compelling story. You can mess up a good story by the way you tell it, but you can’t make a poor idea good by the way you tell it. The biggest failure in pitching to investors is not how you tell it, it is just that your story is not compelling. “The common cold is interesting but smallpox is compelling.”

Rule #2, which also was one of the keynote rules and clearly everyone who worked for Joe had already heard: write the press release first. It will never sound better when it is just a dream and you haven’t had to make any compromises in implementation. So if it doesn’t sound good as a press release then…maybe time to think again (“pivot” in current VC terminology).

Rule #3 is to look at things from the point of view of the investors you are pitching to. This is a bit like the “think like a fish” rule at Joe’s keynote that had him lying on the stage pretending to be a fish. You might want to create a company to change the world, or to experience a startup, or to follow your dreams. Investors mostly want to make money, but they also have other agendas (like looking good to their partners or expanding into whatever all the other VCs are investing in that week). Yes, they want to see passion, but mostly because passion means you are more likely to make the effort necessary to succeed.

Rule #4 is to be yourself; don't try to imitate Steve Jobs (or Joe Costello) in the way you present. You have to own your story and do it the way you want. Oh, and don't use slides except for a few pictures/graphs. Joe didn't mention it, but there is actually a lot of research showing that if your slides are basically bullet points that tell the whole story, they detract from, rather than add to, your presentation. Just use slides to add graphics and emotion to what you are saying (watch any Jobs keynote to see the master at work, with no bullet points to be seen). Or don't use them at all.

Rule #5 is that you need to get to the emotional needs of whoever you are pitching to. Break up logical thinking with humor, oddball facts and tangential stuff. And no slides with bullet points (see above).

Here is an interesting point I’d never heard before. Intel pays more to PG&E for simulation than it pays the entire EDA industry for all the simulators they use. And they don’t even bitch about electricity prices.

When Joe left Cadence, apparently he interviewed with Jobs to be CEO of Apple. Steve eventually said that he only had experience selling very geeky stuff to very geeky companies. He had no experience selling to consumers. Joe tried to point out that every developer is a human being and has to be sold to. I guess Jobs wasn’t convinced since Joe didn’t get the job.

With that, it was time for us all to go home.


DAC: Calypto Insight Presentation

by Paul McLellan on 05-01-2013 at 5:39 pm

DAC has several “Insight Presentations” on Wednesday June 5th. Bryan Bowyer from Calypto will be presenting from 2-4pm that day (don’t know where, the DAC website doesn’t have a room number specified yet). The topic is Reducing Design and Debug Time with Synthesizable TLM. TLM, of course, stands for Transactional Level Model.

For teams designing hardware accelerators (that is, hand-crafted RTL blocks implementing a function in hardware as opposed to software) on an SoC, debugging and integrating the new block is often the most difficult task. For new standards, such as H.265 and Ultra HD TV, companies have moved to synthesizable, transaction-level SystemC to reduce design and debug time.

This Insight Presentation describes an approach to reduce design and debug time of hardware accelerators by 50%.

The presentation starts with information about designing synthesizable TLMs in SystemC (not all SystemC is synthesizable). Of course, before synthesizing the SystemC it needs to be verified and assertions are one way to meet functional coverage goals. Debugging transactions versus RTL (which is much lower level) requires a different approach, which is the next topic covered.

Now you have your design in synthesizable TLMs in SystemC. The next step is to actually synthesize it using high-level synthesis (HLS). The output from this process is RTL, which can subsequently be fed into traditional RTL synthesis to get a netlist, and so on down the usual RTL-to-GDSII pipe.

But is the RTL correct? Sequential Logical Equivalence Checking (SLEC) is the tool to use to prove that the RTL matches the original TLM input, in just the same way as (non-sequential) equivalence checking can be used to verify that the RTL and a netlist match during regular RTL synthesis.


This is thus a complete methodology for creating a design using TLM in SystemC, verifying it, synthesizing it and formally checking that the synthesis is correct. In most ways it is like writing a design in synthesizable RTL, verifying it, synthesizing it and formally verifying it, except that it is one level up, with all the attendant gains in productivity, ease of making big (architectural) changes and so on. Along with bringing in pre-designed IP, it takes design up to the transactional level.

Details on the Insight Presentation are on the Calypto website here and on the DAC website here.


DAC Keynotes: 5 This Year

by Paul McLellan on 05-01-2013 at 2:38 pm

DAC is in Austin this year, as I'm sure you know, and has keynotes by the CEOs of two Austin-based companies, Freescale Semiconductor and National Instruments. Two more keynotes (one split into two) are focused on mobile, which has become the major driver of the semiconductor industry today. A fifth keynote, including presentation of the best paper award for DAC 2013, is by Alberto Sangiovanni-Vincentelli, who should need no introduction to readers of SemiWiki.

The five keynotes are:

  • Monday June 3rd at 10.15: Greg Lowe of Freescale Semiconductor in Austin on Embedded Processing—Driving the Internet of Things. Greg has been President and CEO of Freescale since June 2012, following a career at Texas Instruments culminating in being senior vice-president of analog.
  • Monday June 3rd at 4pm: James Truchard of National Instruments in Austin on Looking Ahead to 100 Years—Platform Engineering. James is the President and CEO of National Instruments, which he co-founded in 1978 and where he has led the vision to equip engineers and scientists with tools to accelerate productivity, innovation, and discovery.
  • Tuesday June 4th at 9.15am: Namsung (Stephen) Woo of Samsung, Korea on New Challenges for Smarter Mobile Devices. Dr. Woo is a president of Samsung and GM of the System LSI business. He joined Samsung in 2004 from Texas Instruments.
  • Wednesday June 5th at 11.15am: The Designer Keynote features two speakers (dual core?), J. Scott Runner of Qualcomm in San Diego and Sanjive Agarwala of Texas Instruments in Dallas. Scott will present on Design and Methodology of Wireless ICs for Mobile Applications: True SoCs Have Come of Age. Sanjive will present on Infrastructure Embedded Processing Systems – Trends and Opportunities. Scott is currently Vice President of Advanced Methodologies and Low Power Design at Qualcomm Technologies, Inc. He has worked in engineering in the semiconductor and EDA industries for 30 years, including being a “founding” member of the DesignWare team at Synopsys. Sanjive is a TI Fellow and Director of WW Silicon Development in the Processor Business at Texas Instruments, responsible for, among other things, the roadmap and development of the TI C6x DSP core.
  • Thursday June 6th at 11am: Alberto Sangiovanni-Vincentelli of UC Berkeley on Crystal Ball: From Transistors to the Smart Earth. Alberto was instrumental in the founding of both Cadence and Synopsys and is also a Kaufman Award recipient for “pioneering contributions to EDA”. The DAC Best Paper Award will also be presented during this session.

Details of all the keynotes, including outlines and full presenter biographies are all on the DAC website here.

The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, and for Electronic Design Automation (EDA) and silicon solutions. A diverse worldwide community representing more than 1,000 organizations attends each year, represented by system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives to researchers and academicians from leading universities. Close to 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, methodologies and technologies. A highlight of DAC is its Exhibition and Suite area with approximately 200 of the leading and emerging EDA, silicon, IP and design services providers. The conference is sponsored by the Association for Computing Machinery (ACM), the Electronic Design Automation Consortium (EDA Consortium), and the Institute of Electrical and Electronics Engineers (IEEE), and is supported by ACM’s Special Interest Group on Design Automation (SIGDA) and IEEE’s Council on Electronic Design Automation (CEDA). More details are available at: www.dac.com.


Accelerating Design Debug in an ASIC Prototype

by Daniel Nenni on 04-30-2013 at 8:15 pm

ASIC prototyping in FPGAs is starting to trend on SemiWiki. As FPGA technology becomes more advanced, customers tell me that the traditional debug tools are inadequate. Faced with very restrictive debugging capabilities and very long synthesis/place/route times, the debug cycle in these prototype platforms is quite long and painful.

SemiWiki has been writing about a tool from Tektronix called Certus. This tool has been adopted by several of the top semiconductor companies because it addresses long debug cycles through advanced capabilities that are orders of magnitude better than existing tools in several dimensions.

The Certus ASIC prototype debug tool has significant competitive advantages over other offerings. The differentiation is rooted in Certus's unique, patent-pending observation network. The figure below quantifies how this solution is multiple orders of magnitude better in three dimensions than existing ASIC prototype debug solutions.

In order to develop a truly compelling solution, the Certus team had to develop several industry firsts:

  • the only solution that enables full visibility, which eliminates most FPGA synthesis-place-route cycles;
  • the only solution that provides time correlation of data across multiple clock domains and devices;
  • the only solution with real-time compression of captured data, enabling capture of full startup sequences and system-wide events.

The ability to fully instrument a multi-FPGA prototype with a dozen large FPGAs enables users to debug their hardware rapidly under real-world stimulus. Because of the visibility advantage, development teams can then provide the prototype platform for their software and firmware groups with the instrumentation intact to enable rapid debug of hardware/software bugs and to optimize system performance by analysis of performance bottlenecks.

Certus fundamentally changes the FPGA prototyping flow and dramatically increases debug productivity. By leveraging Certus's full RTL-level visibility and making internal visibility a feature of the FPGA prototyping platform, an engineer can diagnose multiple defects in a day where it would take a week or more with existing tools.

A major challenge today is that traditional FPGA debug tools are unable to support the requirements of the ASIC prototyping market, particularly as designs have become larger and span multiple FPGA devices. Add the increased complexity of hardware/software interactions and the high-speed operation of most prototypes, and FPGA debug has become a major bottleneck in the ASIC prototyping process. Now, by using Certus 2.0 to pre-instrument up to one hundred thousand signals per FPGA device, designers gain comprehensive RTL-level signal visibility without time-consuming synthesis and place-and-route cycles, allowing complex problems to be pinpointed and resolved quickly.

Those who are interested in learning more can send an email request to eig-info@tektronix.com for white papers or demos at DAC.



(Must Read) Arteris Blog activity: IP, 20 nm node and CTO interview

by Eric Esteve on 04-30-2013 at 8:10 pm

I just read three very interesting blogs from Arteris. In the first, "The Semiconductor Industry Needs an IP Switzerland", Kurt Shuler, VP of Marketing at Arteris, welcomes the fact that four big IP players (ARM, Synopsys, Imagination and Cadence) are emerging after years of fragmentation within the semiconductor IP industry. You can see the top 10 IP vendors by license revenue in 2012 (from Gartner):

For chip makers, this new landscape is certainly better, or at least more comfortable. The pure one-stop shop (a single vendor selling all IP) is perceived as a high threat, as that unique company could be bought by anybody, including a competing chip maker (imagine what would happen if Samsung bought ARM!). On the other hand, having to negotiate price, license contracts, technical support and so on with as many vendors as there are integrated IP blocks is also a nightmare, or at least extremely time- and resource-consuming. Kurt positions Arteris as "an IP island, enabling semiconductor vendors to choose 'the best IP for the job,' helping SoC design teams assemble and verify their chips at an increasingly fast pace".

The nature of the product developed and marketed by Arteris, “Network-on-Chip” or NoC, makes this assertion 100% (if not 110%) TRUE. The NoC is by nature at the intersection of all the functions, IP and self-designed blocks, within a chip. Read more about “IP-Switzerland” blog.

The second article is a synthesis of the discussions that Kurt had with no less than 13 analysts. Kurt summarizes these discussions by extracting three main points:

#1: Chips manufactured on the latest process nodes will cost more

#2: Never assume what you call your product or technology is what other people call it!

#3: The IP industry is growing up

The first point is true, and it means that our industry will have to reinvent itself, at least if we expect to benefit from the continuous innovation and creativity that only strong competition can bring, rather than an ultimate consolidation where only a couple of chip makers survive…

The third point is not only true for 2012; in my opinion, we can expect the IP market to keep growing for another decade.

As Kurt wraps up the discussions he had with the 13 analysts (this is the second point), it seems that they disagree on what to call Arteris' flagship product. At SemiWiki we have always called it "Network-on-Chip", and I would like to explain why. In a blog about Arteris and Sonics, I posted a comment back in December last year:

Posted on 12 -03 2012
Just an addendum, I have made a Google search for “On Chip Communication Network” (or OCCN, acronym used by Sonics to name their SGN IP) and “Network On Chip” (NoC, used by Arteris to name FlexNoC IP). The result is impressive!

– “On Chip Communication Network” gives 220 000 entries

when

– “Network On Chip” gives 6 620 000 entries

Even Google is voting for Arteris…

I have checked again today:

Verified on 04 -30 2013

  • “SoC fabric” gives 13 800 entries
  • “IP fabric” gives 37 800 entries
  • “interconnect ip” gives 22 800 entries
  • New search for “network on chip”: 3 650 000 entries
  • And “On Chip Communication Network” gives 126 000 entries

My conclusion would be that Arteris should keep using "Network-on-Chip". Sometimes it's better to listen to your gut than to analysts… especially when it gives better results!

Finally, the last article Kurt posted recently is an interview with Laurent Moll, CTO at Arteris: System-Level Design sat down with Laurent Moll, chief technology officer at Arteris, to talk about interoperability, complexity and integration issues.

Such an interview is almost impossible to summarize in a few words, as it is very complete and ranges from the nature of Network-on-Chip to IP selection and IP commercialization… But this article is certainly a must-read! For some of us, tomorrow is a bank holiday, so you should have time to read it…

From Eric Esteve



Crossfire – Builds Quality with Design

by Pawan Fangaria on 04-30-2013 at 8:05 pm

Very often we talk about the increasing design complexity and verification challenges of SoCs. With ever-growing design sizes and multiple IPs on a single SoC, it is a fact that SoC design has become heterogeneous, developed by multiple teams, either in-house or outsourced. Considering the economic advantage amid pressure on profit margins, it makes sense for any fabless design company or IDM to outsource components that a third party can develop at lower cost with good quality. However, whether a component is built by a team in-house or outside, it must be checked against its qualification criteria. This is easier said than done, as the list of components can be long, including IOs, cell libraries, IPs, etc., and in the end the overall SoC integration has to be done by an expert team. Yet despite its importance, the QA task does not attract the fancy of designers, who are more focused on developing new designs.

It is interesting, then, to find a tool built exactly for this purpose, one that enables everyone in the design chain to contribute to design quality at different stages of its making. In other words, the tool realizes TQM (Total Quality Management) of the chip by sharing responsibility across all partners. Crossfire, developed by Fractal Technologies, takes an integral approach to quality by letting designers run QA checks while developing their own IPs or library cells, when shipping them to other SoC teams, and when receiving components from other teams, thus enabling quality by construction of the SoC. The tool also lets the SoC integration team provide IP providers with the test sets they must pass before dispatching their IPs.

The tool is quite versatile, accommodating most circuit-description formats in both the front-end and back-end domains, including user-defined formats in ASCII text, HTML or PDF (converted to text), and presenting results in an easy-to-understand format through easy-to-use browsers. It constructs its own unified data model to maintain consistency and accuracy.

[A form representing cell library QA aspects]

During a QA check, Crossfire flags any mismatch, from simple terminal names to complex functionality such as Boolean functions or timing arcs. The unified data model is flexible enough to accommodate proprietary data such as characterization data and data sheets.
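To make the idea of such consistency checks concrete, here is a minimal sketch of the kind of terminal-name comparison described above: the pin lists of several views of one cell are compared and mismatches are reported. The function, view names and cell data are all invented for illustration; this is not Crossfire's actual API.

```python
# Hypothetical library-cell consistency check: compare the terminal (pin)
# lists of multiple views of the same cell and flag any mismatch against
# the first view, which is taken as the reference.

def check_terminals(cell, views):
    """views: dict mapping view name -> set of terminal names."""
    reference_name, reference = next(iter(views.items()))
    mismatches = []
    for name, terminals in views.items():
        missing = reference - terminals   # pins the reference has but this view lacks
        extra = terminals - reference     # pins this view has but the reference lacks
        if missing or extra:
            mismatches.append(
                f"{cell}/{name}: missing {sorted(missing)}, "
                f"extra {sorted(extra)} vs {reference_name}"
            )
    return mismatches

views = {
    "schematic": {"A", "B", "Y", "VDD", "VSS"},
    "layout":    {"A", "B", "Y", "VDD"},        # VSS pin missing in layout
    "liberty":   {"A", "B", "Y", "VDD", "VSS"},
}
for issue in check_terminals("NAND2X1", views):
    print(issue)
```

A real tool would of course compare far richer data (Boolean functions, timing arcs, geometry) against a unified data model, but the pass/fail structure is the same.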

Above is an example of CCS (Synopsys Liberty) specific checks. For any format, all checks required to validate a library are provided, allowing users to quickly configure the test sets essential to qualify the database. Crossfire also provides APIs for users to add their own customized checks.
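A user-extensible check API could be as simple as registering functions that each return a list of violations. The decorator-based registry, the leakage limit and the cell data below are assumptions made purely for illustration, not Fractal's real interface.

```python
# Hypothetical sketch of a plug-in mechanism for user-defined QA checks.
# Each registered check takes a cell record and returns a list of violations.

CUSTOM_CHECKS = []

def qa_check(fn):
    """Register a check function in the global check list."""
    CUSTOM_CHECKS.append(fn)
    return fn

@qa_check
def max_leakage(cell):
    """Flag cells whose leakage exceeds an (invented) 50 nW budget."""
    limit_nw = 50.0
    if cell.get("leakage_nw", 0.0) > limit_nw:
        return [f"{cell['name']}: leakage {cell['leakage_nw']} nW "
                f"exceeds {limit_nw} nW"]
    return []

cell = {"name": "INVX8", "leakage_nw": 72.5}
violations = [v for check in CUSTOM_CHECKS for v in check(cell)]
print(violations)
```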

Crossfire helps CAD teams build partial test sets at various stages of the design from the beginning, eliminating backtracking, re-work and duplication of effort and improving productivity. For example, it first checks pin compatibility of layout, schematic and the underlying format/database before going into further verification.

In the case of IPs, Crossfire is optimized to deal with large volumes of data in the form of GDS or Verilog, and it is intelligent enough to check compatibility with language dialects such as Verilog-A. Since IP models can be delivered in various forms, such as a hard macro (GDS) or synthesizable IP (RTL), Crossfire makes sure that only the appropriate checks are run, eliminating unnecessary tests. For example, routing checks are relevant for GDS but make no sense for RTL; conversely, at RTL, tests can be done by running a few samples at relevant stages of the design flow. Crossfire can generate a final QA report for an IP, which can be delivered along with the IP to the IP customer or SoC team.
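The idea of running only the checks that fit a given deliverable can be sketched as a simple lookup: common checks always run, while format-specific ones are selected by deliverable type. The check names and categories here are invented for the example.

```python
# Hypothetical selection of applicable QA checks by IP deliverable type,
# mirroring the GDS-vs-RTL distinction described above.

CHECKS = {
    "gds":    ["layer_map", "routing_blockage", "pin_geometry"],
    "rtl":    ["lint", "pin_compatibility", "sample_simulation"],
    "common": ["terminal_names", "documentation_present"],
}

def checks_for(deliverable):
    """Return the check names applicable to a 'gds' or 'rtl' deliverable."""
    if deliverable not in ("gds", "rtl"):
        raise ValueError(f"unknown deliverable type: {deliverable}")
    return CHECKS["common"] + CHECKS[deliverable]

print(checks_for("gds"))
```

The point is that a hard macro never sees RTL-only checks (and vice versa), which is exactly the "no unnecessary tests" behavior the article attributes to the tool.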

The report summarizes what passed and what failed, with the explanation required to waive failures. The QA reports and test sets make it possible for SoC integrators to quickly decide on acceptance of an IP upfront, leaving no chance of late discovery in the cycle.

Crossfire plays a key role in quality checks from the very beginning of the design stages through final integration, making sure that quality is built into the design. This automates the process, eliminates re-work and assures predictable completion of the SoC. It ensures that suppliers, consumers and other stakeholders share responsibility for the quality of the final SoC. A white paper with a detailed description of the tool and its processes can be found on the Fractal website HERE.