

High Calibre Development Keeps Mentor on Top of the Game
by Tom Simon on 12-07-2017 at 12:00 pm

One might be tempted to think that technology-driven gains in computer performance are enough to keep up with the needs of design and verification tools. We know that design complexity is increasing at a rate predicted by Moore’s Law. We also know that the computers used during IC development benefit from that same rate of improvement. When I first started in EDA I made the analogy of a dragon chasing its tail – always close, but never quite able to catch up. Yet the dragon always seemed close enough to make developing the next generation of hardware possible.

Stepping back and examining the issue more closely reveals well-known causes, such as exploding design rule complexity. I have written before about the rapid increase in the thickness of DRMs. The need for more compute power to complete a design hits across the board: we see it in simulation, synthesis, place and route, and so on. However, perhaps the most heavily impacted area is physical design verification, aka DRC.

Many of us can remember the tectonic shift in DRC when flat checking became impractical. That is when, after decades, the industry moved from Cadence’s Dracula to Mentor’s Calibre. Calibre effectively and reliably solved the issues surrounding hierarchical DRC and paved a new path forward that was absolutely necessary for design productivity. The holy grail of most verification tools is the overnight run: if a tool can run in 8 to 14 hours, users can start a run, go home and come back the next day to analyze and correct issues.
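To see why hierarchy matters so much, here is a minimal, purely illustrative Python sketch (my own toy numbers, not Calibre’s algorithm): it compares the work of checking every expanded instance against checking each unique cell once plus an assumed small fraction of shapes re-examined at instance boundaries.

```python
# Toy model of flat vs hierarchical DRC effort. Cell sizes, instance counts
# and the 5% boundary-interaction fraction are invented for illustration;
# real tools use far more sophisticated geometry and interaction analysis.
cells = {
    "sram_bitcell": {"shapes": 40,   "instances": 2_000_000},
    "std_cell":     {"shapes": 60,   "instances": 5_000_000},
    "analog_block": {"shapes": 9000, "instances": 4},
}

# Flat checking: every instance is expanded, so every shape is re-checked.
flat_work = sum(c["shapes"] * c["instances"] for c in cells.values())

# Hierarchical checking: each unique cell is checked once; only shapes that
# interact at cell boundaries (assumed 5% here) are revisited per instance.
boundary_fraction = 0.05
hier_work = sum(
    c["shapes"] + boundary_fraction * c["shapes"] * c["instances"]
    for c in cells.values()
)

print(f"flat checks:         {flat_work:,.0f}")
print(f"hierarchical checks: {hier_work:,.0f}")
print(f"speedup:             {flat_work / hier_work:.0f}x")
```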

As process nodes advanced, the requirements for physical verification exploded. The new requirements came from several distinct categories. Mentor recently published a white paper that discusses each of these areas and describes the development effort necessary to deliver a total solution that keeps physical verification practical.

Some of the solutions are good old-fashioned software development work, including the addition of parallel processing and support for larger designs. Mentor also does a lot of work on the algorithmic side to speed up operations. Some of these efforts focus on handling hierarchy better, especially as it relates to metal fill. Anyone involved with nodes below 28nm knows how complex metal fill requirements have become. Calibre is now also used to generate metal fill, and Mentor has pushed out significant enhancements in fill generation. The white paper discusses how these enhancements and methodology changes enable much higher design throughput.

Other significant runtime improvements can be achieved through less obvious methods, such as carefully writing rule decks. Two seemingly similar ways of performing a check can have vastly different runtimes, so rule decks written with specific knowledge of the tool can lower overall runtimes. We are long past the days when users wrote their own rule files; today all the major foundries provide Calibre rule decks. Mentor has developed deep relationships with the major foundries and has, in conjunction with them, devised a process to ensure that all the supported rule decks use the most efficient methods. This requires early access and intimate cooperation between the Calibre team and the foundries.
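To get a feel for how much the formulation of a check matters, here is a toy Python analogy (not SVRF, and not how Calibre is implemented): two functions that report exactly the same one-dimensional spacing violations, one written as a naive all-pairs comparison and one that sorts first and only examines nearby neighbours. The results are identical, but the runtimes differ enormously.

```python
import random
import time

# Toy analogy for rule-deck efficiency: two "equivalent" ways to count pairs
# of edge coordinates closer than MIN_SPACE. The numbers are invented.
MIN_SPACE = 2.0
edges = [random.uniform(0, 1_000_000) for _ in range(5_000)]

def naive_check(xs):
    # Compare every pair of edges: simple to express, quadratic to run.
    count = 0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if abs(xs[i] - xs[j]) < MIN_SPACE:
                count += 1
    return count

def sorted_check(xs):
    # Same answer: sort once, then only walk forward while still too close.
    xs = sorted(xs)
    count = 0
    for i, x in enumerate(xs):
        j = i + 1
        while j < len(xs) and xs[j] - x < MIN_SPACE:
            count += 1
            j += 1
    return count

for check in (naive_check, sorted_check):
    start = time.perf_counter()
    violations = check(edges)
    elapsed = time.perf_counter() - start
    print(f"{check.__name__}: {violations} violations in {elapsed:.3f}s")
```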

More advanced nodes also come with completely new verification requirements. One example is multi-patterning. Early on, the foundries took care of this for their customers; however, as processes advanced from double to multiple patterning, the responsibility migrated to the physical layout teams. This is another area where Calibre has added support to ensure that customer designs are completed quickly and correctly.

The white paper also discusses the effort Mentor expends adapting Calibre at each new process node while maintaining backwards compatibility, so that designs on nodes like 130nm or 90nm still obtain the same verification results. My observation is that Mentor is highly aware that the base tool needs to be reinvigorated at the same time as new performance features are added. Furthermore, none of it is useful without deep foundry relationships to anticipate the requirements for nodes that may be years away. Mentor is in an enviable position: it can work with foundries and their most advanced customers to ensure that support is ready before each new node reaches production.

The Mentor paper, entitled “Achieving Optimal Performance During Physical Verification”, is a fascinating view into the work required to meet and exceed next-generation physical verification needs. The results that Calibre attains are a clear indication of the effort Mentor dedicates to enhancing and supporting its physical verification solution. I highly recommend reading the paper for more detailed insight into how the whole solution works together.



Optimizing Return from your IP Portfolio
by Bernard Murphy on 12-07-2017 at 7:00 am

Given that SoC design today is predicated on IP reuse, you would assume that processes to deliver, maintain and communicate status on reusable IP should be highly optimized. But that’s not necessarily the case, especially when so many design companies have consolidated, each bringing its own IP libraries, design flows, license agreements and inevitably fragmented tribal knowledge about what works well, what doesn’t and under what circumstances.


This isn’t a hypothetical problem. Recently, a design group in the Asia arm of a design company purchased an interface IP for $750k; shortly after, the US branch of the same company purchased the same IP, also for $750k. Were they able to sort this out? I don’t know, but buyers have responsibilities as much as sellers. If too much time passed, I’m guessing the vendor would feel completely justified in pushing back against requests for a refund (try getting a refund from Amazon or Expedia after a few weeks, based on a mistake you made).

Cases like that are eye-catching but I suspect the bigger losses are in more mundane areas. An important part of the value in an M&A is anticipated efficiencies and new markets opened thanks to sharing a larger pool of high-value IP. And yet I don’t think I’m alone in noticing that years after some completed mergers, more than a few design organizations are still struggling to broadly realize these supposed benefits. They certainly gained market share; efficiencies around IP are sometimes less clear. When sharing between organizations proves ineffective or inefficient, teams reinvent the wheel constantly, or spend more time adapting IP they thought would be drop-in, or they simply fail to exploit opportunities that should have delivered significant market advantage.


Some of the problem starts with the way we view IP, as an essentially static end-product of IP development. Of course you hope the functionality changes less frequently than the circuitry unique to the current design, but there is a lot more information about an IP that changes far more often than the functionality – bugs filed and their status, schedules for bug fixes, silicon experience and measurements, and who else is using or has used this IP and in what configurations. All of this is very dynamic information, critically important to any current user, and it evolves as new releases appear and new information flows in. Another design went into a respin because of a problem on that same IP? I might want to know that. Or I’m depending on a critical fix, but that need wasn’t passed down to the IP developers or status wasn’t rolled back up to those who might need it. I might want to know that too.
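To make that concrete, here is a hypothetical sketch of the kind of living record that could travel with a reusable IP. The classes and field names are invented for illustration and are not DelphIP’s (or any vendor’s) actual data model; the point is simply how much of the value sits in dynamic metadata rather than in the static deliverable.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of dynamic IP metadata; names are illustrative only.

@dataclass
class BugRecord:
    bug_id: str
    status: str                      # e.g. "open", "fix-scheduled", "fixed-in-2.1"
    affects_configs: List[str] = field(default_factory=list)

@dataclass
class UsageRecord:
    project: str
    configuration: str
    silicon_result: str              # e.g. "first-silicon success", "respin: eye margin"

@dataclass
class IPRecord:
    name: str
    version: str
    license_terms: str               # e.g. "site license, expires 2019-06"
    export_restrictions: str         # e.g. "none", "ITAR"
    bugs: List[BugRecord] = field(default_factory=list)
    usage_history: List[UsageRecord] = field(default_factory=list)

    def known_respins(self):
        """Flag prior silicon problems a new user would want to know about."""
        return [u for u in self.usage_history
                if u.silicon_result.startswith("respin")]

# Example: a team evaluating the IP can immediately see prior respins.
ip = IPRecord(
    name="usb3_phy", version="2.0",
    license_terms="site license", export_restrictions="none",
    usage_history=[
        UsageRecord("proj_a", "x4 lanes", "first-silicon success"),
        UsageRecord("proj_b", "x8 lanes", "respin: eye margin"),
    ],
)
print(ip.known_respins())
```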

A Consensia-sponsored survey of design teams’ problems with IP highlights these issues: #1 they spend too long looking for the exact IP they need (incomplete databases, inadequate documentation, …), #2 they don’t know if they are allowed to use the IP on their design (license issues, ITAR issues, discontinued support, …), #3 they didn’t know the IP they wanted was already available (so they built their own) and #4 they didn’t have a good measure of the quality of the IP (yeah it exists, but nobody has used it, or it has a history of working well in certain applications but not in your application).

None of this will be a surprise to any experienced design organization. Each has typically built up home-brewed flows to address at least some of these needs, based on different DDM, bug/defect tracking, release/testing frameworks and possibly some kind of lifecycle tracking. Which is great, but there’s no easy way to bring different flows together in a merger (which may be why many are still struggling years after the M&A). Brute-force consolidation is frequently impractical. This problem becomes even more challenging when you are working collaboratively on a design with a customer.


Consensia offers the DelphIP platform to address all these problems, at the enterprise level where it can benefit all business lines in an organization. This provides a common and centralized way to track status, history, restrictions, maturity/quality along with other characteristics that aren’t commonly represented in EDA/DDM/defect tracking views. It also sits on top of and communicates with existing DDM platforms, bridging between existing local flows and enterprise-level IP communication and management.

DelphIP also supports collaborative workspaces so you can work effectively with a customer in support of their value-adds, without having to expose all the gory (and proprietary) details that go into producing a modern design. Which is something you may be required to consider if you are working in the automotive space, to name just one example. You can watch the Consensia webinar on DelphIP HERE.



Safety qualification for leading edge IP elements – presentation at REUSE 2017 in Santa Clara
by Tom Simon on 12-06-2017 at 12:00 pm

To ensure the reliability of automotive electronics, standards like AEC-Q100 and ISO 26262 have helped tremendously. They have created rational and explicit steps for developing and testing the electronic systems that go into our cars. These are not some abstract future requirement for fully autonomous cars; rather, they are needed for critical systems in almost all cars today. Modern cars use significant electronics for engine systems, as well as braking and numerous safety systems such as impact sensors and airbags.

It can be easy to understand how an airbag system might be documented and tested before use in a car design. However, it is important to realize that a sensor or actuator is really part of a larger network of devices, where the reliability of the entire system is critical. Within each of the elements we find components that enable communication, such as a SERDES. With designs moving to 7nm, designing a SERDES is becoming even more complicated, further compounded by the overlay of safety requirements. Yet, no matter how difficult, these added requirements are essential.

As you have read in my writing before, a standard like ISO 26262 does not allow the certification of the pieces of a system. Instead, the final completed system can be qualified once it is built and its application is fully defined. The smaller components are what are known as safety elements out of context: they can be built with the final system-level qualification in mind, but they themselves can never be certified.

The question remains: how should a component such as a SERDES be designed and evaluated to ensure that it is suitable for inclusion in a system that will be qualified? Silicon Creations, a leader in analog IP development, will be speaking on this topic at the upcoming REUSE 2017 conference on December 14th in Santa Clara. Andrew Cole, Vice President of Business Development at Silicon Creations, will present a talk called “Developing 7nm IP for Safety Critical Automotive Applications”. In addition to covering the steps necessary for ISO 26262 qualification, he will talk about AEC-Q100 pre-qualification through testing of completed parts.

His talk will be held at 11:15 AM on December 14th. If you would like to register for this event, please go to the event website for more information on the full agenda and registration. During the event there will be talks given by Samsung, Intel, Arm and many other key players in the IP arena. The event looks like it will provide an informative day for those interested in developing systems that employ IP.



Enhancing FPGA Prototype Debug
by Daniel Nenni on 12-06-2017 at 7:00 am

FPGA prototyping is very popular for modeling hardware for early system prove-out and early embedded software development, and as a cost-effective, high-performance platform for software-driven hardware debug and late-stage software debug, all before silicon is available. It has significant advantages in run-time performance over other hardware simulation methods and can be considerably less expensive than emulation. But as always, every solution comes with its own challenges; one particular challenge for FPGA prototyping is visibility into hardware internals for debug.

When using prototyping for hardware debug, good debug access is obviously critical; you have to be able to place probes / triggers wherever needed and you must be able to store long traces to debug back through sequences that may run for millions of cycles. In simulation-based verification this is easy. Changing observation points and triggers is simple and re-running after making a change to the design is equally simple.

But in FPGA prototyping neither of these steps is so easy. Take first the debug capability. Standard FPGAs, typically from Xilinx or Intel/Altera, provide debug support in the form of on-chip logic analyzers (LAs), but these offer limited and potentially expensive debug support. First, they use extra gates, routing and on-chip memory, more of each being required the more you want to debug. Problems are particularly acute with memory: storing traces over millions of clock cycles can burn up all your on-chip memory, competing with the needs of the design itself.
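Some rough arithmetic shows how quickly that memory runs out. The block-RAM figure below is an assumption for illustration (large prototyping FPGAs offer on the order of tens of megabits), not a number from any datasheet.

```python
# Back-of-the-envelope: trace depth of an on-chip logic analyzer.
probes = 1024                      # signals tapped by the on-chip LA
bram_bits = 64 * 1024 * 1024       # assume ~64 Mb of block RAM, all given to debug
samples = bram_bits // probes      # one stored bit per probe per sampled cycle

print(f"max trace depth: {samples:,} cycles")                      # ~65,000 cycles
print(f"at 5 MHz that is {samples / 5e6 * 1e3:.1f} ms of run time")
```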

This becomes even more complex when your design spreads across multiple FPGAs, as is common when prototyping designs of any reasonable size. Commercial prototyping systems support large designs by partitioning them across multiple FPGAs on a board, and potentially across multiple boards or even multiple enclosures. In cases like this, native device-based debuggers hit yet another problem: they are effective, as far as they go, for debug at the device level, but each device now represents just a part of your design. Moreover, multi-FPGA prototyping isn’t just about partitioning across FPGAs. Logic to pipeline signals through limited package pins and board-level logic to optimize communication between FPGAs, all artefacts of prototyping rather than features of your design, further complicate the debug picture from a whole-design point of view.

There’s another problem (in case you thought you could somehow work your way around the problems I have listed so far). Since probes are logic embedded in your design, if you need to change probes (which you will, given the very limited budget for debug logic when using LAs), you’ll need to re-implement the whole design on almost every debug iteration (because you’ll almost certainly need to add more probes). And that is going to take a lot of time per iteration – adding the probes to the RTL, resynthesizing, perhaps re-partitioning and re-implementing through FPGA place and route. Think days, at least, to add a few probes.

This is where dedicated debug solutions become particularly important: they move most of the debug functionality off the FPGA so that the bulk of device resources remain allocated to your design, support many more probes per device than would be possible with LAs, and let you interact with a view of your total design (which is what you really want to understand) by transparently handling the details of design-to-device mapping. I’ll illustrate this through the S2C MDM (Multi-Debug Module) solution.

On resources, consider first memory needs. MDM doesn’t use FPGA memory; all debug data is output through a DDR3 memory port to external memory on the MDM board. Freed of the highly constrained FPGA memory resources, the solution provides 16GB of memory, supporting up to 16K probes per FPGA and several million cycles of trace. The on-device support logic is also greatly reduced, to the very limited circuitry needed, even for 16K probes, to capture and stream out this data.
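Those figures are easy to sanity-check, assuming one stored bit per probe per sampled cycle and no compression (the actual capture format may well differ):

```python
# Implied trace depth of the external MDM memory, using the figures above.
external_bytes = 16 * 2**30        # 16 GB of DDR3 on the MDM board
probes = 16 * 1024                 # 16K probes per FPGA
bytes_per_sample = probes // 8     # 2 KB captured per cycle

depth = external_bytes // bytes_per_sample
print(f"trace depth: {depth:,} cycles")   # ~8.4 million cycles
```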

This also largely addresses the design-iteration (for debug) problem. With 16K probes per FPGA, you’ll need far fewer lengthy design rebuilds to complete debug, which means that getting to verification closure is going to be a lot faster.

Finally, the MDM solution works with up to 16 FPGAs simultaneously. And through the Prodigy Player Pro cockpit, you can interact with your design in selecting probes, viewing waveforms and tracing back to find root causes, all without having to worry about the mapping details between the design and the FPGA implementation. Together these provide a debug solution much closer to what you are used to in simulation.

You can learn more about the S2C Multi-Debug Module HERE.



Jump Start your Secure IoT Design with Intrinsix
by Mitch Heins on 12-05-2017 at 12:00 pm

Have you ever had a great idea for a new product but then stopped short of following through on it because of the complexities involved in implementing it? It’s frustrating, to say the least. It is especially frustrating when the crux of your idea is simple, but complexity arises from required components that don’t add to the functionality of the design. Enter the world of the internet-of-things (IoT) device.

By definition, IoT devices live in the “wilds” of the internet, and that means they need to be made secure, else we endanger not only our own application but the internet upon which it depends. In most cases, IoT devices are made up of a simple processing unit, a network interface, and one or more sensors/actuators. These devices are typically much simpler than the required security element. This raises the question of how to bridge the security gap without having to hire an entire security team to design a secure device.

Depending on your device there are multiple options. Some CPU vendors have a complete and comprehensive approach to security built into their offerings, typically requiring some type of licensed IP. Alternatively, some operating systems, like Linux, can also provide secure communications; the security algorithms are implemented in software and as such require sophisticated CPUs to run them. However, many IoT devices don’t need or want the more powerful CPUs. In fact, the opposite is true. They need just enough computing power to talk to their sensors/actuators and then relay the information on to an IoT gateway. These devices need to be very low power, running on the smallest of batteries, while also being extremely cost sensitive.

Intrinsix, a leading design services company, addresses the IoT security gap by providing a Secure IoT Jump-Start kit. The kit enables designers to quickly realize IoT designs with the required performance and security while using a minimized design architecture that is frugal on power. The Jump-Start kit makes use of a RISC-V processor which can easily be configured with a minimal feature set, cutting down on chip size, power and cost.

Security is provided through a custom IoT security hardware accelerator IP that dramatically reduces power consumption by as much as 1000X and increases battery life by as much as 10X. This frees up the CPU to be further customized for the specific application. The security IP is certified to be NSA Suite-B compliant (meeting NSA Secret level requirements), more than sufficient for most IoT applications.

The Jump-Start kit also makes use of the minimalist Zephyr OS, which does away with the sophisticated memory management required by operating systems like Linux. This means you don’t need a dedicated MMU, further reducing the chip footprint and cost.

To make system design easier, Intrinsix also includes within the Jump-Start kit a full software stack and API for accessing the dedicated security hardware accelerator. The software comes with example code that developers can use as a guide for linking security into all transactions into and out of the device. Of course, being a design services company, Intrinsix can be called upon to help implement a portion or all of the system for its customers.

In summary, the Intrinsix IoT Jump-Start kit is meant to give IoT system designers a way to jump over the IoT security gap and get their initial IoT product up and running, and into the market, with a secure system, while reducing risk and both non-recurring and recurring costs.

You can learn more about Intrinsix and their IoT offerings online at the link below or by downloading their IoT e-book.

See also:
Intrinsix eBook: IoT Security The 4th Element
Intrinsix website



Blurring Boundaries
by Bernard Murphy on 12-05-2017 at 7:00 am

I think most of us have come to terms with the need for multiple verification platforms, from virtual prototyping, through static and formal verification, to simulation, emulation and FPGA-based prototyping. The verification problem space is simply too big, in size certainly but also in dynamic range, to be effectively addressed by just one or even a couple of platforms. We need microscopes, magnifying glasses and telescopes to fully inspect this range.


Which is all very well, but real problems don’t neatly divide themselves into domains that can be fully managed within one platform. When debugging such a problem, sometimes you need the telescope and sometimes the microscope or magnifying glass.

This in turn means you need to be able to flip back and forth easily between these platforms. I wrote in an earlier blog (Aspirational Congruence) on a Cadence goal to make these transitions as easy as possible between emulation (on Palladium) and prototyping (on Protium). It looks like they have succeeded, so much so that the boundaries between emulation and prototyping are starting to blur. Frank Schirrmeister at Cadence pointed me to three customer presentations which illustrate collaborative verification between these platforms.

One, from Microsemi, illustrates the needs in bringing up hardware and firmware more or less simultaneously for multi-processor RAID SoCs. The need to model with software and realistic traffic makes hardware assist essential, but they must also span a range from detailed hardware debug and power and performance modeling to the faster performance required for regressions, where detailed debug access is not necessary. If a regression fails on the prototyper they may want to drop that case back onto the emulator, a step greatly simplified by the common compile front-end and memory backdoor access between the platforms.

Amlogic, who build over-the-top set-top boxes, media dongles and related products, have similar needs but helpfully highlighted different aspects of the benefits of using a mixed prototyping/emulation flow. Their systems obviously depend on a lot of software (they have to be able to boot Linux and Android in full-system debug) and naturally some post-boot problems straddle both hardware and software. Again, they saw the benefit of speed in prototyping with ease of switching back to emulation for detailed debug. An interesting point here is that Amlogic used hands-free setup for Protium, which gave them 5MHz performance versus 1MHz on Palladium. And probably a pretty speedy setup since they weren’t trying to tune the prototype; yet this apparently delivered enough performance for their needs. Amlogic’s measure of the effectiveness of this flow was pretty simple: they were able to get basic Android running on first silicon after 30 minutes and deliver a full Android demo to a customer in 3 days. That’s pretty compelling.

Finally, Sirius-XM Radio gave a presentation at DAC this year on their use of, and the benefits they saw in, this coupled configuration. (If you weren’t aware, Sirius-XM designs all the baseband devices that go into every satellite receiver.) There are some similar needs, but also some interesting differences. Between satellites, ground-based repeaters and temporary loss of signal (going under a bridge, for example), there is significant interleaving of signals spanning 4-20 seconds which must be modeled. Overall, for their purposes, they have to support two full real-time weeks of testing; both of these needs obviously require hardware assist. Historically they did this with their own custom-crafted FPGA prototype boards, but observed that this wouldn’t scale to newer designs. For example, for the DDR3 controller in their design they had to use the Xilinx hard macro, which didn’t correspond to what they would build in silicon and might (had they followed this path on their latest device) have led to a bug requiring a silicon re-spin.

Instead they switched to a Palladium/Protium combination where they could model exactly what they were going to build. They used the Palladium for end-to-end hardware verification, running 1.5 minutes of real-time data in 15 hours. Meanwhile, the software team did their work on a Protium platform, running 1.5 minutes of real-time data in 3 hours, which served them well enough to port libraries, the OS and other components and to validate all of these. Again, where issues cropped up (which they did), the hardware team was able to switch the problem test run over to the Palladium, where they could debug the problem in detail.
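Those numbers put the throughput trade-off in simple terms; the sketch below just computes the implied slowdown relative to real time from the figures quoted above.

```python
# Slowdown implied by the Sirius-XM figures: 1.5 minutes of real-time data
# in 15 hours (Palladium emulation) versus 3 hours (Protium prototyping).
real_time_s = 1.5 * 60

for platform, wall_hours in (("Palladium (emulation)", 15),
                             ("Protium (prototyping)", 3)):
    slowdown = wall_hours * 3600 / real_time_s
    print(f"{platform}: ~{slowdown:.0f}x slower than real time")

# The prototype runs ~5x faster than the emulator for this workload, the same
# ratio Amlogic reported (5 MHz on Protium versus 1 MHz on Palladium).
```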

The thread that runs through all these examples is that default (and therefore fast) setup on the prototyper was good enough for all these teams, along with ease of interoperability between the emulator and the prototyper. For hardware developers, this supports switching back from a software problem in prototyping to emulation when you suspect a hardware bug and need greater visibility. Equally, the hardware team could use the prototyper to get past lengthy boot and OS bring-up, to move on to where the interesting bugs start to appear. And for software developers, this setup enabled work on porting, development and regression testing using the same device model used in hardware development. For each of these customers, the bottom line was that they were able to bring up software on first silicon in no more than a few hours. Who wouldn’t want to be able to deliver that result?



35 Semiconductor IP Companies Hold 2nd Annual Conference
by Daniel Payne on 12-04-2017 at 12:00 pm

Our smartphone-driven semiconductor economy consumes a lot of IP blocks to enable quick product development cycles, often updating annually with new models to choose from. So where do you find all of the best semiconductor IP, verification IP and embedded software? Well, one place is at the 2nd annual REUSE conference, scheduled for December 14th in Santa Clara at the Convention Center, where you’ll be able to explore what 35 companies have to offer. Here’s the list to pique your curiosity a bit:

  • arm
  • Achronix
  • Amphion
  • Andes Technology
  • Archband
  • Avery Design Systems
  • CAST
  • Certus Semiconductor
  • City Semiconductor
  • Corigine
  • efabless
  • flexlogix Technologies, Inc.
  • Intel
  • Intrinsic ID
  • menta
  • mixel
  • Mobile Semiconductor
  • mobiveil
  • Moortec
  • NSCore
  • nvm engines
  • Open Silicon
  • QuickLogic
  • Samsung
  • SiFive
  • Silicon Creations
  • Silvaco
  • SiNTEGRA
  • Sofics
  • Sonics
  • surecore
  • SmartFlow
  • Semi IP Systems
  • True Circuit Inc
  • Uniquify

It’s refreshing to see such a wide span of vendors at this conference, from small start-ups to behemoths like arm, Intel and Samsung. Even at the first annual REUSE conference last year there were some 34 companies from 11 countries signed up, and 400 attendees registered, pretty impressive for an inaugural event.

I contacted Warren Savage of Silvaco to find out more about REUSE, because he was one of the founders of this new conference. Online, Warren has been making YouTube videos for several years under the tagline Take Five with Warren, where he interviews key people in the semiconductor IP industry. Each video runs about 5 to 7 minutes, so don’t get hung up on the literal 5-minute moniker.

Warren started out in 1979 as a design engineer at Fairchild Semiconductor, then moved into management with Tandem for 13 years. He started in the IP business with Synopsys from 1995 to 2003, then became President and CEO of IPextreme for the next 12 years. Last year in June his company IPextreme was acquired by Silvaco, where Warren is now the General Manager of the IP division.

Q&A

What type of engineer would be interested in attending?
Anyone that uses IP will probably find something interesting there. It’s a great venue to come and check out companies that you may not have had a chance to see at DAC or other shows, which are so much more expensive for nascent IP companies to attend.

What does an engineer get out of attending in terms of benefit, ideas, skills, awareness, etc?
As I mentioned, one of the principles is that we don’t have a committee that tells people what they can and cannot talk about. We let companies talk about anything they like, and usually they pick something that they think would be interesting to someone. Whether it’s applicable to a design engineer or a product manager, we don’t know. They can read the agenda and find something interesting, I’m sure.

Are there speakers at the event?

Yes, a really nice selection of IP companies, big and small. Plus some nice keynotes from SmartFlow on IP piracy and from Samsung on what IP means to foundries. Plus, Intel is giving a talk on what they expect from IP suppliers.

  • Ted Miracco, CEO, SmartFlow Compliance Solutions
  • Heather Monigan, Program Director & Technology Strategist, Intel
  • Hong Hao, Sr. VP Foundry Business, Samsung
  • Meredith Lucky, VP of Sales, CAST
  • Tony Kazaczuk, Director, Flex Logix
  • Warren Savage, GM, Silvaco
  • John Heinlein, Ph. D., VP, Arm
  • Timothy Saxe, Ph. D., VP and CTO, QuickLogic
  • Stephen Fairbanks, Co-Director, Certus Semiconductor
  • Brian Gardner, VP of Business Development, True Circuits, Inc.
  • Steve Mensor, VP of Marketing, Achronix Semiconductor
  • Mike Noonen, Silego Technology
  • Dr. Naveed Sherwani, President & CEO, SiFive
  • John Blyler, JB Systems
  • Michael Wishart, CEO, efabless
  • Andrew Cole, VP, Silicon Creations
  • Jim Bruister, Director, Silvaco

What is happening in the IP industry these days?
I think that attendees can get a very good sense of the dynamics of the industry by attending, listening to the talks and visiting with suppliers and other attendees. My sense is that the IP industry is as strong as ever and it continues to morph into new areas.

Is the event expensive to attend?
No, it’s free this year to attend.

How many people do you expect this year?
I expect between 500 and 600 people this year.

Registration
Even though this conference is free, you must register online to reserve your spot and receive a lanyard at the event.

REUSE 2017 will again bring the global IP community together, this time at the Santa Clara Convention Center in the center of Silicon Valley. Here an even greater diversity of suppliers will be promoting their products in a fair and balanced showcase. REUSE will also provide an open forum for communication and networking within our industry.



What are you ready to mobilize for FPGA debug?
by Frederic Leens on 12-04-2017 at 7:00 am

There are three common misconceptions about debugging FPGAs on real hardware:


  • Debugging happens because the engineers are incompetent.
  • FPGA debugging on hardware ‘wastes’ resources.
  • A single methodology should solve ALL the problems.

Continue reading “What are you ready to mobilize for FPGA debug?”


RISC-V Business
by Tom Simon on 12-04-2017 at 7:00 am

I was at the 7th RISC-V Workshop for two days this week. It was hosted by Western Digital at their headquarters in Milpitas. If you have not been following RISC-V, it is an open-source instruction set architecture (ISA) for processor design. The initiative started at Berkeley and has been catching on like wildfire. There are a number of RTL implementations that work in FPGAs or SoCs, and there is also production silicon from companies such as SiFive. The RISC-V Workshop was sold out with over 500 attendees – most of whom stayed for the full two days.

The agenda was filled with detailed technical presentations from a wide range of institutions and companies. They covered proposed additions to the specification, commercial products using RISC-V, and research projects leveraging the ISA. The presenters talked about everything from server-farm simulation and machine learning to debugging tools, novel applications and more.

The keynote was given by Western Digital CTO Martin Fink. He had several surprising things to tell us. First off, after talking in depth about Western Digital’s take on big data versus fast data, he mentioned that Western Digital actually ships about 1 billion processors a year. These processors enable USB drives, hard drives, solid state drives and more. They play a crucial and growing role in moving and processing data. We are all familiar with the cache schemes to improve performance and monitoring to maintain data integrity. In the future, filtering and processing might even occur on the storage device directly, aided by more advanced and powerful processors.

The second surprising announcement that Martin made was that Western Digital is committing to transition all of these processors to RISC-V. While unexpected, it probably should not have come as a complete surprise. The slide showing companies supporting RISC-V barely has any white space on it these days. Almost every large semiconductor company is represented.

The two days of talks made clear that the RISC-V ecosystem is being built out at a rapid pace and there is a lot of momentum. Low-end implementations of RISC-V were handed out to some of the guests in a smart name tag designed by Antmicro that uses the E310 from SiFive. SiFive has announced a 5-core chip that is suitable for running Linux. At the upper end of the performance spectrum, a new company called Esperanto came out of stealth mode at the workshop to announce its technology that uses massively parallel RISC-V processor chips to tackle machine learning.

I’ll be writing more about RISC-V, but because it is open source, you can go directly to the RISC-V website to view the specs and learn about the current implementations, development tools and future additions planned for the spec. It’s worth noting that the core parts of the ISA are already defined and frozen, so they can be relied upon for development.

RISC-V has the potential to be as transformative as Linux or HTML. It appears to have the ability to scale from MCU to server class, and people are already using it in a wide range of applications. As an analyst, I attend a lot of technology events, and I think the turnout and enthusiasm for this one were exceptional.


IP-SoC 2017: IP Innovation, Foundries, Low Power and Security
by Eric Esteve on 12-03-2017 at 12:00 pm

The 20th IP-SoC conference will be held in Grenoble, France, on December 6-7, 2017. IP-SoC is not just a marketing fest; it is the unique IP-centric conference, with presentations reflecting the complete IP ecosystem: IP suppliers, foundries, industry trends and applications, with a focus on automotive. It will also be the celebration of Design & Reuse’s 20th anniversary, and the conference program is very high level, with people like Aart de Geus, chairman and co-CEO of Synopsys, and Sir Robin Saxby, the founder of ARM, presenting keynotes to start the conference.

You probably know Charles Janac, CEO of ArterisIP and chairman of the session “The Past and the Next Decade Vision”. If you remember, he was CEO of Arteris when the company was acquired by Qualcomm in 2013 for several hundred million dollars… but, in fact, only the network-on-chip (NoC) IP portfolio was acquired, and Arteris became ArterisIP, still developing and selling NoC IP.

In this session, Mark Ma will give a review of China’s IP-to-IC industry in 2017, Eklovya Sharma (Sankalp Semiconductor) will discuss the “Changing dynamics in semiconductor industry”, and Bill Finch from CAST will share his experience of the “Reusable IP Revolution and How a Small Company Took Advantage”… Last year, Bill Finch gave a presentation at IP-SoC, “Back to the Future. The End of IoT”, and I admit that it was provocative, but I loved it! The presentation summary was: “The term Internet of Things is the most over-used, over-hyped, mis-used and mis-understood phrase of the last few years. It now has so many meanings that it has become useless to describe anything worthwhile. As designers of IP and electronic systems we need to refocus on what we want to accomplish going forward. As always, it’s about customer needs and long-term benefits.” I will certainly attend Bill’s presentation this year.

If any business needs an ecosystem to grow and develop, it is certainly the IP business, and foundries are, with EDA, a very strategic part of this ecosystem. That’s why the “Foundry Vision” session is dedicated to IP friends like Samsung, GlobalFoundries and Soitec. There is a clear focus on FDSOI technology; as a reminder, Soitec is the #1 SOI wafer provider, and GF will talk about the FDXcelerator program and the 22FDX ecosystem. Don’t expect me to complain about this FDSOI focus, as I have written numerous blogs, along with others at SemiWiki like Paul McLellan, to introduce FDSOI technology to our readers in 2012-2013, even before the technology was adopted by Samsung and GlobalFoundries as a mainstream solution, complementary to the more power-hungry FinFET technology. In FDSOI we trust, especially for battery-powered applications, whether pure digital or RF ICs!

There will be other sessions dealing with the IP ecosystem, like “From IP to SoC: What is the Trend” or “Automotive IP and Software”. You will hear about analog IP from Mahesh Tirupatur of Analog Bits, one of the most talented IP vendors dealing with highly complex IP from an engineering standpoint, and about interconnect IP from Charles Janac (ArterisIP). Embedded FPGA will be honored by no fewer than two vendors, Flex Logix Technologies and Menta, as Imen Baili (Menta) will explain why “eFPGA is the key solution for Automotive embedded systems”.

You should stay through Thursday the 7th, as the second day is very busy with interesting topics across these seven sessions:

– Power Management and IoT vision (Microchip, Synopsys and CSEM)

– Security (Inside Secure, Dolphin Integration and Secure IC)

– Design methodology, Innovative IP in FD-SOI Technology, IP SoC design and System design

IP-SoC 2017 is clearly the kind of high-level conference where complex engineering topics are addressed by industry experts, not just a marketing fest!

The IP-SoC conference will be held on December 6-7 at the Hôtel EUROPOLE, 29 rue Pierre-Sémard, Grenoble, France; you can register here.

See you on Wednesday, December 6th, in Grenoble.

From Eric Esteve, IPNEST