
Jasper Goes to DVCon
by Paul McLellan on 02-02-2014 at 6:00 pm

As usual, since they are firmly in the verification space, Jasper will have a number of things going on at DVCon 2014, which is March 3-6th at the DoubleTree in San Jose. In the exhibition hall they are at booth #402.

Jasper will be happy to talk to you about anything, I’m sure, but the focus this year is on the JasperGold Security Path Verification (SPV) App. This uses formal techniques (no surprise there) to verify that there are no leaks in the hardware. The nice thing about this is that you really want to prove, not just feel fairly certain, that your security is solid and that, for example, it is not possible to take over the CPU and send all the encryption keys over the network. I wrote about it recently here. This is the summary paragraph: The Jasper Security Path Verification (SPV) App is used to prove properties about the paths to secured data. For example:

  • Data in a secure area must not be visible to the CPU if it is not in secure mode
  • A secure register must not be written by a non-secure agent

Usually, data propagation requirements can be translated into one of these questions:

  • Can data in secure location A propagate to non-secure location B?
  • Can data in non-secure location X propagate to secure location Y?

In both cases we want the answer to be no, but it can be very hard to perform good verification without proper tools. Structural analysis can be very ad hoc. Simulation depends on how good the verification engineer is at breaking the security. And standard formal verification is not a good fit since it is hard to describe the requirements as SVA/PSL assertions.
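
To make the flavor of the propagation question concrete, here is a minimal sketch in Python of the core idea as graph reachability: given a netlist-style connectivity graph, can a secure source reach a non-secure sink? Everything here (the node names, the `can_propagate` helper, the toy netlist) is a hypothetical illustration; the actual SPV App proves such properties exhaustively against the RTL semantics, including the conditions under which a path is sensitizable, which simple static reachability cannot do.

```python
from collections import deque

def can_propagate(connections, source, sink, blocked=()):
    """Breadth-first search: can data at `source` reach `sink`
    through the connectivity graph, avoiding `blocked` nodes?
    `connections` maps each node to the nodes it drives."""
    blocked = set(blocked)
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == sink:
            return True
        for nxt in connections.get(node, ()):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical design: a key register should only reach the bus
# through the crypto core, never through any other path.
netlist = {
    "key_reg":   ["aes_core", "debug_tap"],   # oops: a debug path exists
    "aes_core":  ["bus_if"],
    "debug_tap": ["bus_if"],
}
# The sanctioned path runs through the crypto core...
assert can_propagate(netlist, "key_reg", "bus_if", blocked=["debug_tap"])
# ...but an unsanctioned path through the debug tap also exists (a leak):
assert can_propagate(netlist, "key_reg", "bus_if", blocked=["aes_core"])
```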

Jasper is presenting the SPV App in a tutorial on Thursday, March 6th from 8:30am to 12pm in the Carmel room. The tutorial — complete with customer case studies — covers the analysis and verification of the design path leakage that opens a design to hardware hacking. More details on the tutorial, Formally Verifying Security Aspects of SoC Designs, are here.

Earlier in the week, two JasperGold App users, NVIDIA and Broadcom, will be giving presentations. Both presentations are in the Fir ballroom. At 10am NVIDIA will present on chip-wide clock-gating verification. This is followed by Broadcom presenting on detecting X-optimism related bugs. More details on both are here.

Full details on DVCon (including a link to register, sponsored by Jasper, yeah) are here.


More articles by Paul McLellan…


Who needs DDR4 PHY running at 2667 Mbps?
by Eric Esteve on 02-02-2014 at 11:15 am

As of today, DDR4 is targeting server, networking and consumer applications, and it will take another year before we use DDR4-equipped PCs at home. In fact, a majority of consumers would rather buy a smartphone or tablet than a PC; most of these devices come with LPDDR2, and only a few high-end tablets are equipped with LPDDR3 memory. The low-power LPDDR4 specification for tablets and smartphones is still under development, and it may take years before it reaches mobile devices, which is why Amjad Qureshi, senior group director, Cadence IP Group, thinks that DDR4 may also be used for the ultrathin and high-end tablet markets. We may wonder whether it was a good decision to develop and launch DDR4 Memory Controller IP, including a PHY running at up to 2667 Mbps…

“DDR4 for servers, laptops and mobile devices will be around for a long time as no successor is under development,” said Mike Howard, principal analyst at IHS iSuppli. Howard also makes a very important point, saying that “It will be the last DDR iteration.” I believe this assertion to be true, because the need for higher and higher memory bandwidth becomes more and more difficult to support with a parallel protocol like DDRn. Increasing the DDRn bus width causes implementation issues at the board level, and increasing the DDRn PHY frequency generates multiple design issues and has been proven to be far less power efficient (in Joules per transferred bit) than a SerDes-based protocol… But DDR4 is here and expected to stay for a while. The Memory Controller (including PHY) IP market was probably worth at least $100 million in 2013 (we are still waiting for results to be released by IP vendors). If we consider that DDR4 Memory Controller IP will start selling in 2014, it may apply to only 5 to 10% of design starts, with customers willing to pay a premium for the first IP available, so the Average IP Selling Price (ASP) will be in the high range. Then, in 2015 and 2016, when DDR4 reaches enterprise, micro-server and mainstream PC markets, DDR4 Memory Controller IP sales should generate large revenues.

We can predict these dollar sales to be higher than for DDR3, for two reasons: PHY IP ASP rises with PHY frequency or transfer rate, and chip makers tend to outsource more as the IP gets more complex, or in this case as the transfer rate gets higher. These are very good reasons a customer may decide to outsource, and Cadence has caught on to this very well: “As DDR4 high-speed memories now become readily available, designers are looking for IP that can support 2667 Mbps,” said Amjad Qureshi, senior group director, Cadence IP Group. “By providing both the DDR4 controller and the DDR4 PHY at such high speeds, Cadence gives designers the confidence and assurance that they can build next-generation systems which are faster, lower power and have more capacity.” Obviously some technology will eventually replace DDR, but we can predict that DDR4 will be used for a long time, especially if it’s the last protocol iteration, and the cumulative DDR4 Memory Controller IP sales, from 2014 to… maybe as late as 2020, could be in excess of several hundred million dollars.
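
To put the 2667 Mbps figure in perspective, here is a quick back-of-the-envelope calculation (my own, not from the article; the 64-bit channel width is a typical value): a per-pin rate of 2667 MT/s across a standard 64-bit DRAM channel yields roughly 21.3 GB/s of peak bandwidth, exactly two-thirds more than DDR3-1600.

```python
def peak_bandwidth_gbps(transfer_rate_mtps, bus_width_bits=64):
    """Peak channel bandwidth in GB/s: transfers per second
    times bytes moved per transfer."""
    return transfer_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gbps(2667))  # DDR4-2667: ~21.3 GB/s
print(peak_bandwidth_gbps(1600))  # DDR3-1600: ~12.8 GB/s
```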

DDR4 PHY IP is available now. For more information on DDR IP, please visit: http://ip.cadence.com/ipportfolio/memory-ip/ddr-lpddr#ddr-controllers

“Technologies like phase-change memory, RRAM (resistive RAM) and MRAM (magnetoresistive RAM) are under development, and one of these technologies will eventually replace DDR. Meanwhile, 3D memory chip stacking may bridge the gap until a successor emerges,” Howard from IHS iSuppli said. As far as I am concerned, I don’t see how the memory controller protocol (whichever memory technology wins) could avoid moving to a high-speed serial, SerDes-based protocol. The Hybrid Memory Cube product already relies on a 12.5 Gbps SerDes today (25 Gbps tomorrow), but this is an expensive technology, targeting high-end servers and networking. What will be the next protocol to support data transfer from memory, once again whichever technology is used, offering higher data transfer bandwidth than DDR4 while remaining affordable enough to be used for mainstream applications? I honestly don’t know, even if I would bet on a SerDes-based protocol. But I am sure that, in the meantime, the DDR4 Memory Controller IP business will have generated a huge ROI, as it is expected to be the last DDRn-type protocol (not counting LPDDR4) to be used.

By Eric Esteve from IPNEST

More Articles by Eric Esteve…..



SemiWiki Job Forum
by Rich Goldstein on 02-02-2014 at 11:05 am


As Dan has mentioned, SemiWiki has added a Job Forum in an effort to help fit qualified people to jobs around the fabless semiconductor ecosystem. A quick survey of companies working with SemiWiki revealed over 1,000 job openings planned for 2014, and finding the right people for those positions is something we can help with.

Dan and I have been friends for more than 20 years and I have the utmost respect for what SemiWiki has accomplished in such a short amount of time. I have been in EDA and Semiconductor IP recruiting since the mid ’80s, when the fabless semiconductor industry truly came to be. I’ve experienced the evolution of the companies, the technology, and the people, and am well connected in the semiconductor industry. DAC Search Inc. was a premier search firm that I founded in the mid ’80s, specializing in these markets, and I have remained dedicated to this field.

Following the in-house recruiting trend, I started with Magma Design Automation in 2008, where I was responsible for all North American sales recruiting. I even finished my tenure there with an assignment as an inside salesperson promoting SPICE tools to smaller accounts. From there I had contract recruiting positions at Kilopass Technology, Xilinx, and PMC-Sierra. Most recently I spent more than a year at AMD as the lead recruiter for GFX design and verification across North America.

My goal here is to provide the industry with an expert recruiting experience for in-house recruiters and hiring managers alike. We will transform the SemiWiki Job Forum into the place where Semiconductor, EDA, and IP companies can reach out to our members and viewers in order to attract the top talent to their respective career pages. As Dan says, “For the greater good of the fabless semiconductor ecosystem”.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online on January 1st, 2011, more than 800,000 unique visitors have been recorded at www.SemiWiki.com, viewing more than 6M pages of blogs, wikis, and forum posts.



PreEDAC Mixer
by Paul McLellan on 02-02-2014 at 11:00 am

Get together with your industry peers and insiders at the monthly EDAC Mixer, to the benefit of local charities. You don’t need to donate anything; you just show up and pay for your own drinks. A portion of the proceeds will go to local charities, this month to the Mountain View Educational Foundation (MVEF), a volunteer-driven non-profit that provides funding for enrichment programs and educational materials to enhance the solid academic curriculum and maintain the high quality of education in the Mountain View Whisman School District. To learn more about MVEF, visit their web site here.

When is it? Thursday, February 27th, from 6pm to 8pm at the Savvy Cellar Wine Bar, 750 Evelyn Avenue, Mountain View. That is right next to (actually in) the Mountain View train depot, across the rails from the light rail station.

Although I doubt you will get thrown out if you just attend, EDAC would like you to register (it’s free) so they have some idea of numbers. Register here.

While on the subject of EDAC, nominations for the Kaufman Award are open until February 28th. The 2013 recipient was Chenming Hu, inventor of the FinFET. Nomination forms are here. I’m assuming it will be presented, as in 2013, at DAC (which is June 1-5th in San Francisco, in case you have been hiding under a stone).

So just in case it isn’t clear, here is what you do:
  1. Decide to go
  2. Register here
  3. Show up at Savvy Cellar, 750 Evelyn, at 6pm on the 27th
  4. Mingle with your industry colleagues
  5. Pay for any food and drink you consume
  6. Savvy Cellar will donate a percentage to MVEF

More articles by Paul McLellan…


Grid Vision 2050 – Unified & Open Across The Globe
by Pawan Fangaria on 02-02-2014 at 10:30 am

Whenever there is good momentum in a particular technology, IEEE takes a major initiative to standardize the procedures, formats, methods, measurements etc. involved in that technology, to proliferate it for the advantage of the wider community. That succeeds through the active participation and collaboration of both producers and consumers; otherwise it remains in silos. At times, the monopoly of a strong organization prevents it from opening up to standards; however, that is restrictive leadership and doesn’t last long. Positive, healthy and true leadership is to be open, promote standards, involve the broader community and deliver products adhering to those standards; it’s a win-win which can pay higher dividends to all. I admire IEEE’s unrelenting service to the global community, fostering technological innovation in various ways (research initiatives, publishing research papers, holding technical conferences, evolving and promoting standards for universal adoption, and so on) through broad collaboration across different industries, for the last 51 years; to be precise, it started on 1st Jan 1963, and so I should say “Belated 51st Happy Birthday” to IEEE!

A few months ago I attended a live webinar presented by Bill Ash and Srikanth Chandrasekaran from the IEEE Standards Association (IEEE-SA) that talked about the evolution of the Smart Grid across the globe. Saving energy is a major focus, whether in small semiconductor chips (to which I am accustomed), households or industrial applications. Power generation and its efficient distribution is also a major need, especially for underdeveloped countries and rural areas of developing countries. While India (with 17% of the world’s population, 2.6% of global GDP and a 6-9% share of global energy demand) is struggling by all means to provide uninterrupted power supply to 100% of its population, China is looking towards ultra-high-voltage transmission, and then there are countries like Japan, Germany, North Africa and others seriously considering renewable sources of energy such as sunlight, wind, water, biomass etc.

IEEE is actively involved in Smart Grid technology initiatives such as electric vehicles, wireless power transfer, power magnetics and electronics in distributed resources, DC in the home, the utility forum and data analytics. The aim is to conserve energy and power through clean technology, without pollution and hazards.

Since the priorities of different countries differ, from both sourcing and distribution perspectives, in satisfying local needs by exploiting available resources and weighing political, social, environmental and economic situations, the challenge of arriving at a common standard is further compounded. However, a common standard is a must for companies to serve larger markets with ease of interoperability and collaboration, thereby exploiting the full potential of any technology. By doing so they will be able to produce products at lower cost and provide them to consumers at lower prices. The IEEE-SA approach, therefore, is to foster global economic growth while meeting local needs through OpenStand, a global community that stands together to support common open standards and to develop, deploy and embrace technologies for the larger benefit of global society. A properly balanced process is followed to maintain broad consensus and transparency, creating greater value for society through competitive products and services.

While the paradigm of global open standards is relevant for any industry, the focus of this conference was on the “Global Smart Grid”, which augments regional facilities for electricity generation, distribution, delivery and consumption with a two-way, end-to-end network for communications and control. IEEE’s vision for the Smart Grid by 2050, spanning communications, power, IT, control systems and vehicular technologies, is that there will be two-way movement between the make, move and use stages of power, as opposed to today’s unidirectional make->move->use process. This allows greater sharing of resources, local utilization, conservation and re-use.

IEEE-SA invites open membership, participation and governance from individuals and organizations who can contribute to advancing technology for the benefit of humanity, by volunteering in activities such as pre-standard roadmap development through use cases, application scenarios for the Smart Grid and enabling technologies, standards development and standards implementation.

The IEEE-SA Smart Grid Portal is available at http://smartgrid.ieee.org. Here, one can find all resources associated with the Smart Grid – conferences, publications, standards, activities being performed by various working groups in different countries etc. Other sites for more information include http://open-stand.org and http://standards.ieee.org.

There is an interesting video at http://www.youtube.com/watch?v=_4qQ4qA9xeE&feature=youtu.be

More Articles by Pawan Fangaria…..



Why Intel 14nm is NOT a Game Changer!
by Daniel Nenni on 02-02-2014 at 10:00 am

On one hand the Motley Fool is saying, “Intel 14nm could change the game,” and on the other hand the Wall Street Cheat Sheet is saying, “Intel should shut down mobile.” SemiWiki says Intel missed mobile and should look to the future and focus on wearables, and in this blog I will argue why.

Let’s look back to 2009, when Intel and TSMC signed an agreement to “collaborate on addressing technology platform, intellectual property (IP) infrastructure, and System-on-Chip (SoC) solutions.” Intel and TSMC ported the Atom core to 40nm and offered it to more than 1,000 of TSMC’s customers:

“We believe this effort will make it easier for customers with significant design expertise to take advantage of benefits of the Intel Architecture in a manner that allows them to customize the implementation precisely to their needs,” said Paul Otellini, Intel president and CEO. “The combination of the compelling benefits of our Atom processor combined with the experience and technology of TSMC is another step in our long-term strategic relationship.”

Unfortunately this venture was a complete failure, for business and technical reasons, and was put on hold a year later. I was a frequent visitor to Taiwan at the time so I had a front row seat for this one. The excuse was that you can’t just flip a switch and be successful in the mobile market, meaning that Intel’s Atom effort would require patience and perseverance. Fast forward to 2012:

“We are moving Intel® Atom™ processors to our leading-edge manufacturing technologies at twice our normal cadence. We shipped 32nm versions in 2012, and we expect to launch the 22nm generation in 2013, and 14nm versions in 2014. With each new generation of technology, we can boost performance while reducing costs and power consumption—great attributes for any market, but particularly for mobile computing.” – Our Mobile Edge, by Paul Otellini, Intel 2012 Annual Report.

Clearly that did not happen at 22nm, with Intel literally GIVING AWAY 40 million 22nm SoCs to get “traction” in the mobile market. And Intel 14nm SoCs are delayed until 2015, which will put them in lock step with the next generation of 14nm ARM-based processors from QCOM, Apple, Samsung, and a handful of other fabless SoC companies.

As a stopgap measure to fill their new 14nm fabs, Intel dipped its toe into the shark-infested foundry business waters. Unfortunately the only taker was Altera, and their 14nm wafer demand is 3+ years out, with volume a fraction of what is needed to keep a fab open. Intel is lucky to have only lost a toe here, as they also risked exposing the secret manufacturing sauce they are famous for. Intel then shuttered Fab 42, which could have been filled by foundry customers.

Let us not forget the other multi-billion dollar Intel forays away from their core competency: McAfee? Intel TV? Can someone help me complete this list in the comment section please? There are just too many for me to remember.

That brings us to where we are today: Intel still does not have a competitive SoC offering and time is running out. I strongly suggest that Intel take note of Google’s recent move out of the smartphone business, selling Motorola Mobility to Lenovo:

“The smartphone market is super competitive, and to thrive it helps to be all-in when it comes to making mobile devices.” – Larry Page, Google CEO.

If Intel is going to go all-in, I strongly suggest Intel focus on Quark and the wearable (embedded) market. Mobile has hit commodity status and is moving way too fast for a semiconductor giant to keep up (TI already gave up its mobile SoC business). Intel has had a historically strong position in the embedded market and it is time for them to get back to a business they truly believe in, absolutely.

More Articles by Daniel Nenni…..



RTL Sign-off – At an Edge to become a Standard
by Pawan Fangaria on 02-01-2014 at 10:00 am


Ever since I saw Atrenta’s SpyGlass platform providing a comprehensive set of tools across the semiconductor design paradigm, I have felt the need for a common set of standards to evolve for sign-off at the RTL level. Last December, when I read an EE Times article by Piyush Sancheti, VP, Product Marketing at Atrenta, where he talks about billion-gate SoC designs, shrinking market windows, and design cycles down to the level of 3-6 months, I was looking for an opportunity to talk to him in a broader sense about how the RTL-level design paradigm is proliferating and what we can expect in future. This week I had a nice opportunity to talk to him face-to-face in Atrenta’s Noida office. Here is the conversation –

Q: SpyGlass primarily provides a platform for designs at RTL and for sign-off at that stage. What has been your experience so far?

In today’s SoC design environment, size, scale and the complexity of advanced nodes are the prime factors. Most SoCs use several soft IPs, configurable at different levels, and some hard IPs as well. Iterative design closure does not serve the purpose for such large designs. Add to that very short market windows; there is another market segment coming up around the Internet-of-Things, with turn-around-times as short as 3 months. RTL sign-off has become a necessity today to achieve this faster design closure at lower cost.

So, to answer in short, our leading-edge customers are executing on RTL sign-off and are happy to see the value in it. Last year was the best year for us in terms of business and growth and we are looking at a bright future from here.

Q: Considering the amount of IP re-use and sourcing from third parties for SoC design, standard RTL sign-off criteria can help in reliable IP exchange, as most IPs are sourced at the RTL level. Your comments?

Yes, definitely. At the top level an SoC can be just connectivity between many IPs joined through glue logic. So the quality of the SoC will depend on the quality of the IPs, and therefore a standard criterion must exist for IPs, internal or external. We have been working with TSMC on a standard for soft IP qualification.

Q: That’s quite encouraging. Looking at your talk in EE Times about billion-gate SoCs becoming a reality, I can definitely see that RTL sign-off is a must. But do you see common standard RTL sign-off criteria, or rather RTL coverage factors, evolving across the industry for overall semiconductor design?

Yes, it’s required. Even if all IPs on an SoC are qualified, that doesn’t guarantee the quality of the SoC. What if there is a clocking scheme mismatch between IPs? Even at the connectivity level between IPs, we need to look at common plane issues, consistency, synchronous versus asynchronous interfaces and the like. So a standard for SoC-level sign-off is again a must for the industry. And we are working on it, along with some of our leading customers; it depends on a majority of the design houses adopting this path. It will take time to break that inertia; people will realize that this change in methodology is needed when they are no longer able to continue with the same old methodology.

We have talked about the problems so far; let’s talk about some solutions. We now offer a smart abstract model concept for blocks in SoC design. RTL sign-off can be done at a hierarchical level, which has very fast turnaround. This is now in use in some of the most complex SoC designs with multiple levels of hierarchy. We have seen amazing results in performance, capacity, memory utilization, number of violations etc. We are talking about gains of one to two orders of magnitude. So we definitely would be interested in evolving a common standard for SoC sign-off at RTL.

Q: What should be covered in RTL sign-off?

It spans various design domains: clocking, testability, physical, timing, area, and power. There are rules to avoid congestion and ensure routing completion, covering fan-in, fan-out, mux sizes and cell pin density. On the timing side there is logic depth, CDC, clock gating etc. Similarly there are rules for power and area. We have about 300 first-order rules, with broad applicability across a wide range of market segments.
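
As a flavor of what one such first-order rule checks, here is a minimal sketch of a fan-out rule over a netlist. The data structure, the 32-load limit and the rule name are hypothetical illustrations, not SpyGlass’s actual rule engine:

```python
def check_max_fanout(netlist, limit=32):
    """Flag any driving net whose fan-out exceeds `limit`.
    `netlist` maps each driving net to the sink pins it feeds."""
    return [(net, len(sinks)) for net, sinks in netlist.items()
            if len(sinks) > limit]

# Hypothetical design fragment: one net drives 40 loads.
design = {
    "clk_en":    [f"ff{i}/en" for i in range(40)],
    "scan_mode": ["mux0/sel", "mux1/sel"],
}
for net, fanout in check_max_fanout(design):
    print(f"max_fanout: net '{net}' drives {fanout} loads (limit 32)")
```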

Q: RTL sign-off is a must at the beginning of an SoC design, and post-layout sign-off at the end. Do you see the need for any intermediate level of sign-off, such as at the post-floorplan level?

Yes, SoC design needs continuous monitoring at each stage. Quality and sign-off are a culture which must be exercised at every stage as the SoC passes through the design phases such as floorplan, placement and so on. By doing sign-off at RTL, one can get to design closure much faster, more productively and at lower cost. As we pass through lower levels of design, the cost and iteration time increase. The other advantage of RTL sign-off is that it minimizes iterations at lower levels. Overall it can reduce design schedule risk by 30-50%.

Q: Do you see a possibility of leading organizations working at RTL joining together to define a common standard for RTL sign-off of IPs and SoCs for the semiconductor industry? Can Atrenta take a lead? Who should own the standard?

As I said earlier, we are already working with TSMC and some of our other leading customers on this. We would be very interested in the evolution of a common standard which can benefit the whole semiconductor design industry. However, it needs about 10-12 major players from the design community, foundries and EDA to get the ball rolling. Eventually it will become a success only when a majority of the semiconductor design community embraces it, as we have seen in other spaces. At this moment we are not limited by capability; we are limited by the number of users, which needs to be large enough to provide that kind of momentum.

So yes, we can give it a start and mature it, but going forward some standards body should own it. It may be a new standards body or one of the existing ones; we have to see.

Q: How far from now do you see that standard evolving?

I guess it will take a minimum of 18-24 months from now. It will not fly until we have a critical mass of the community starting to use it.

I felt extremely happy after talking to Piyush, especially on learning that what I was thinking about is already in progress. This was one of my best conversations with industry leaders. I really admire Piyush’s thought process when he said, “we are not doing it on our own. We continuously learn from our customers and partners, who provide us the right direction to do things better in this challenging environment and to change in ways that can lead to better productivity.” Let’s watch what’s in store for the future.

More Articles by Pawan Fangaria…..



Power and Thermal Modeling Approach for Embedded and Automotive using ESL Tools
by Daniel Payne on 01-31-2014 at 7:04 pm

Did you know that an S-class Mercedes-Benz can use 100 microprocessor-based electronic control units (ECUs), networked throughout the vehicle, that run 20-100 million lines of code (source: IEEE)?


2014 Mercedes-Benz CLA

Here’s a quick list of all the places where you will find software controlling hardware in an automobile:
Continue reading “Power and Thermal Modeling Approach for Embedded and Automotive using ESL Tools”


How Do You Verify a NoC?
by Paul McLellan on 01-31-2014 at 6:01 pm

Networks-on-chip (NoCs) are very configurable, arguably the most configurable piece of IP that you can put on a chip. The only things that come close are highly configurable extensible VLIW processors such as those from Tensilica (Cadence), ARC (Synopsys) and CEVA, but Sonics would argue their NoCs are even more flexible. But this leads to a major challenge: how do you verify one of these beasts?

NoCs are defined by a configuration file. A NoC used to link perhaps a dozen blocks on an SoC, but these days there may be a couple of hundred, and the configuration file can be tens of thousands of lines long. This leads to the problem: how do you verify that the NoC RTL does indeed correctly represent the functionality defined by the configuration file? A big part of the problem is that everything is configurable: the protocols used, the performance, the connectivity, whether interfaces are blocking or non-blocking, how wide signals are. Everything. They are the Burger Kings of the SoC: have it your way.
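
To give a feel for what such a configuration file captures, here is a tiny hypothetical fragment, written as a Python dictionary purely for illustration (Sonics’ actual format is proprietary, and real files run to tens of thousands of lines):

```python
# Hypothetical NoC configuration fragment. Every field shown here
# (protocol, width, blocking behavior, routes) is a configurable knob.
noc_config = {
    "initiators": {
        "cpu0": {"protocol": "AXI4", "data_width": 128, "blocking": False},
        "dma0": {"protocol": "AXI4", "data_width": 64,  "blocking": True},
    },
    "targets": {
        "ddr_ctl": {"protocol": "AXI4", "data_width": 128},
        "sram":    {"protocol": "AHB",  "data_width": 32},
    },
    "routes": [("cpu0", "ddr_ctl"), ("cpu0", "sram"), ("dma0", "ddr_ctl")],
}
```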

At the next level down, a NoC consists of functional blocks connected by the actual signal buses. So the best approach is hierarchical: verify that each of those functional blocks does what it is meant to do and that they are all hooked up correctly.

Like everyone else, Sonics uses constrained random verification, with lots of SystemVerilog Assertions (SVAs) along with the constraints to ensure that the random vectors generated are themselves correct. Since the NoC is so configurable, these are not fixed files but also need to be automatically generated from the configuration file that specifies the NoC. Then vectors can be run and, hopefully, none of the SVAs fail (a failure would point to a problem).
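
A minimal sketch of that generation step, using routes like those in the hypothetical configuration fragment above (this is my illustration of the idea, not Sonics’ generator, and the assertion template is invented): walk the configured routes and emit one SVA-style property per route.

```python
def generate_route_assertions(routes, max_latency=64):
    """Emit one SVA-style property string per configured route,
    checking that a request at the initiator is eventually granted
    at the target (template is illustrative only)."""
    return [f"assert property (@(posedge clk) "
            f"{src}_req |-> ##[1:{max_latency}] {dst}_gnt);"
            for src, dst in routes]

# Routes as they might come out of the configuration file.
routes = [("cpu0", "ddr_ctl"), ("cpu0", "sram"), ("dma0", "ddr_ctl")]
for prop in generate_route_assertions(routes):
    print(prop)
```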

But Sonics also does something other people do not: it generates a SystemC transaction-level model (TLM) corresponding to most blocks. These are still cycle accurate, so not necessarily quite what you first think of when you hear TLM. The SystemC model contains protocol checkers (green in the diagram below) along with signal-level adapters (light blue) to hook up the reference model. The light blue block on the left is the RTL block being verified.


The protocol checkers are a really important part of the verification environment. They monitor the signals going into and coming out of a block and verify that the protocols are implemented correctly.

Once the NoC is verified, that still doesn’t guarantee that it works correctly on the SoC. Interface protocols are a sort of contract, and the user’s IP blocks need to keep their side of the bargain. Once again a key part of the verification is the protocol checkers, which will call foul if an interface does not behave in line with the contract.
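
As an illustration of the kind of contract a protocol checker enforces, here is a minimal sketch of one common rule on a valid/ready-style interface: once the sender asserts valid, the data must hold steady until the receiver asserts ready. The trace format and the checker are my own illustration, not Sonics’ code:

```python
def check_valid_ready(trace):
    """Check a valid/ready handshake over a cycle-by-cycle trace.
    Each entry is (valid, ready, data). Rule: while `valid` is high
    and `ready` has not yet accepted, `data` must not change.
    Returns the cycle numbers where the contract is violated."""
    violations, pending = [], None
    for cycle, (valid, ready, data) in enumerate(trace):
        if valid:
            if pending is not None and data != pending:
                violations.append(cycle)  # data changed mid-handshake
            pending = None if ready else data
        else:
            pending = None
    return violations

# Cycle 2 breaks the contract: data changes while waiting for ready.
trace = [(1, 0, 0xA), (1, 0, 0xA), (1, 0, 0xB), (1, 1, 0xB)]
print(check_valid_ready(trace))  # -> [2]
```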

Sonics recommends that users keep the protocol checkers in place throughout the design process. They do not generate RTL, so they don’t get designed into the chip itself. However, during any RTL simulation they will catch many problems when they first occur, rather than leaving them to show up as an obscure bug, perhaps on the other side of the chip many clock cycles later. In fact, the first thing a Sonics AE will ask when a customer tries to report a bug in the NoC is whether the simulation has been run with all the protocol checkers turned on. Many bugs have gone away once this is done: what looked at first like an obscure bug in the NoC itself was actually caused by a completely different block violating the protocol.

The process just described is how to test a particular implementation of a NoC when designing an SoC. But that doesn’t help Sonics themselves check the whole tool chain. So every night they go one step further than constrained random: they generate random NoCs and then run the verification on them long enough to be confident that the implementation is correct. Then they generate another NoC and do it again. All night, every night.
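
In other words, the randomization moves up a level, from the stimulus to the design itself. A minimal sketch of that outer loop, with `build_noc` and `run_regression` as stand-ins for the real generator and testbench (everything here is hypothetical):

```python
import random

def random_noc_config(rng):
    """Draw a random, hypothetical NoC configuration: random counts
    of initiators and targets, random widths, random routes."""
    initiators = [f"init{i}" for i in range(rng.randint(2, 16))]
    targets = [f"tgt{i}" for i in range(rng.randint(2, 16))]
    return {
        "widths": {b: rng.choice([32, 64, 128]) for b in initiators + targets},
        "routes": [(src, rng.choice(targets)) for src in initiators],
    }

def nightly_run(noc_budget=48):
    rng = random.Random(2014)  # seeded so any failure can be reproduced
    for _ in range(noc_budget):
        config = random_noc_config(rng)
        # build_noc(config); run_regression(config)  # stand-ins for the
        # real RTL generator and constrained-random testbench
        print(f"verified a NoC with {len(config['routes'])} routes")

nightly_run()
```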


Update on a Space-Based Router for IC Design
by Daniel Payne on 01-31-2014 at 11:50 am

When I started my IC design career back in 1978, all IC routing was done manually; today, however, we have many automated approaches to IC routing that save time and do a more thorough job than manual routing. To get an update on space-based routers for IC design I connected with Yuval Shay at Cadence today. The basic idea behind a space-based router is to simultaneously address:
Continue reading “Update on a Space-Based Router for IC Design”