
Top Cybersecurity Concerns Are WRONG

by Matthew Rosenquist on 12-09-2017 at 7:00 am

A recent survey by Varonis of 500 security professionals from the U.S., UK, and France highlights the top three cybersecurity concerns for 2018: Data Loss, Data Theft, and Ransomware. Sadly, we are overlooking the bigger problems!


Missed the Target by a Mile
I think we are scrutinizing the small and known threats, when we should be looking forward at the significant risks coming our way. In some ways, it is like the child in the crosswalk who is looking down at their untied shoes, oblivious to the truck speeding towards the intersection. The top survey results are not surprising, just disappointing.

The Real Threats
Here is what the world should really be concerned about, when it comes to cybersecurity:


  • Data Integrity Compromises. These types of attacks can cause catastrophic impacts and losses, orders of magnitude greater than data breaches and common theft. By just modifying a few transactions or data records, thieves have been able to steal tens to hundreds of millions of dollars, researchers have taken control over the operation of cars and planes, and national infrastructure systems have been physically damaged.
  • Escape of Nation-State Attack Techniques and Code. Highly sophisticated and funded capabilities are normally reserved by nation states for precision attacks. But once the vulnerabilities, exploits, and tactics are used in the wild or leaked, others will have the opportunity to harvest, dissect, and duplicate functions for their purposes. Threats such as cyber criminals, anarchists, and other nation states will gladly wield these super weapons for their end-goals and to the severe detriment of others.
  • Exploits in IoT Devices Which Pose a Risk to Life-Safety. Society is sliding over the verge where we place our lives and safety in the hands of intelligent machines. This is most relevant in the automotive, critical infrastructure, and healthcare industries. Although astonishingly wonderful when used for good, this shift comes with risks. Autonomous vehicles, electrical grids, and medical devices all play an important role in keeping people alive and healthy. When attacks undermine functions and turn malicious, people will be put in harm’s way.
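
One classic mitigation for the first risk above is to make records tamper-evident, so that silently "modifying a few transactions" is detectable. As a minimal sketch (the record format and genesis value here are invented for illustration, not any real system), a hash chain ensures that editing one transaction breaks every subsequent link:

```python
import hashlib

def chain(records):
    """Build a hash chain: each link commits to the record and all prior links."""
    digest = b"\x00" * 32                      # genesis value (arbitrary for this demo)
    links = []
    for rec in records:
        digest = hashlib.sha256(digest + rec).digest()
        links.append(digest)
    return links

records = [b"txn:pay 100 to A", b"txn:pay 250 to B", b"txn:pay 75 to C"]
original = chain(records)

records[1] = b"txn:pay 250000 to B"            # attacker silently edits one transaction
assert chain(records)[1:] != original[1:]      # every later link changes, exposing the edit
```

An auditor who keeps only the final link can detect any rewrite of history, which is exactly the property a data-integrity attack depends on being absent.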

    Not a Flawed Survey
    Sadly, I believe the survey was accurate. This means the professionals who responded are only seeing the near-term problems: the very ones they fear most. These issues are annoying, but do not compare to what is just around the corner. The risks are as mismatched as the capabilities to prevent, detect, and respond to them. Consider that there are already mature tools and defenses for data loss, theft, and ransomware; they simply must be deployed, configured, and maintained to work against most attacks. For the real threats, we are far less capable in our defenses.

    Granted, the participants may not have had many options to choose from, but the answers given may speak volumes about those who voted for these categories: namely, that they are likely not as prepared for these basic risks as they would like, and therefore fear what they know will come. With their focus on these, they fail to see the long-term strategic picture. That is bad for everyone except the attackers. Without looking forward, like the child in the crosswalk, they are likely to be surprised when the truck hits.

    We Must Do Better
    We must think strategically if we want to be prepared and make a meaningful difference.

    “Plan for what is difficult while it is easy, do what is great while it is small” – Sun Tzu

    If we don’t perceive and understand the big problems ahead, we stand little chance in addressing them early.

    Where do you stand? Is your attention only on the immediate and well-understood risks?

    Interested in more? Follow me on your favorite social sites for insights into what is going on in cybersecurity: LinkedIn, Twitter (@Matt_Rosenquist), YouTube, Information Security Strategy blog, Medium, and Steemit


    Free IoT SOC Books at REUSE 2017

    by Daniel Nenni on 12-08-2017 at 7:00 am

    The second annual REUSE conference is next week, bringing the fabless semiconductor ecosystem together for a day of food, fun, and some very interesting presentations. It’s at the Santa Clara Convention Center this year, which is nice, and it is FREE! More importantly, there will be 30+ vendors in the exhibit hall, which opens at 9am for registration and breakfast. Exhibit hall conversations are the best for networking, absolutely.

    You can find me at the Open Silicon booth signing free copies of our latest eBook “Custom SoCs for IoT: Simplified – A Book Focusing on the Emergence of Custom Silicon for IoT Devices”. More than 500 people are expected to attend and we only have 100 books, so get there early if you can. It would be a pleasure to meet you!

    In case you have not downloaded the PDF version, here is the book foreword by Taher Madraswala, President and CEO of Open-Silicon. It has been a pleasure to work with Taher and the Open Silicon people over the last two years on SemiWiki.com. Not including this one, we have written 30 blogs about Open Silicon that have earned close to one million views.

    FOREWORD

    Enablers of the Internet of Things (IoT) are improving the growth rate of the semiconductor industry in a significant way. Technology advancements in algorithms and processing units have made human-to-machine communication a reality. We are now entering an era where incorporating this capability in smart devices has the potential to simplify, enhance and even save lives. The IoT ecosystem is a symbiotic collaboration of hardware and software developers, building block (aka IP) providers, architects and visionaries who want to translate complex human functions (such as voice, vision, and thought) into simpler, machine-decipherable functions. At the core of this effort are the custom system-on-chip (SoC) solutions that enable designers across vertical markets to meet the performance, power, price, and time-to-market constraints of the quickly evolving IoT universe.

    The semiconductor ecosystem has categorized the IoT space into three distinct segments: IoT cloud, IoT gateway, and IoT edge. This segmentation allows key players to devise strategies and offerings in areas of their expertise, which benefits customers with much-needed competition in each segment. Similar segmentation in the computing world helped create the “WinTel” (Microsoft + Intel) ecosystem, which dominated computing for decades. Segmentation also helps address new and evolving standards, markets, and customers in a rapid-response manner. Custom silicon solutions have been deployed on the cloud side of the IoT for many years, specifically in networking, telecommunications, storage, and computing. However, until very recently, custom solutions were out of reach for the IoT edge and IoT gateway segments due to cost or lengthy development schedules.

    The IoT SoC platform approach has opened up many new use-cases for edge applications. Among them are sensor hubs for industrial applications, including outdoor, factory floor and in-room environmental control. IoT gateway applications are also experiencing rapid growth from the custom IoT SoC platform approach. For example, a well-designed IoT gateway SoC platform can address multiple smart city applications, such as waste management, transport, traffic, parking, lighting, and metering.

    The custom IoT SoC platform approach can speed custom design, reduce risk and cost, and enable the critical differentiation that customers demand. Quality platform development requires extensive experience and knowledge. Platform creators must think like a system company as well as a startup. They need to consider end-use-cases in the vertical IoT markets while designing an easy-to-use platform. Such developers need to be responsible for the core block and its verification, which allows for the highly customized software drivers to be written and used as the core library.

    The use of platforms not only opens the door to faster validation of new designs with very little risk but also allows the visionaries and architects to focus on their end goal, which is to bring product differentiation, more use-cases, more functionality and more ingenuity to the world of IoT.

    “Custom SoCs for IoT: Simplified” is the first comprehensive book to explicitly define and detail the various IoT architectures. It covers the multitude of security factors, the power budgets associated with different IoT applications, and many more technical considerations that dictate the success of a custom IoT SoC platform, including but not limited to implementation methodologies, as well as hardware and software tradeoffs. This book also provides a detailed case study of a highly successful approach to custom SoC design for an IoT gateway SoC using Spec2Chip turnkey solutions.

    It is important to mention that the implications of the Spec2Chip offerings outlined here extend far beyond IoT cloud, IoT edge, and IoT gateway devices. OEMs in other emerging technologies, such as deep learning, artificial intelligence, virtual reality, gaming and autonomous driving cars are benefitting from this Spec2Chip platform approach. Customers in these markets are collaborating with turnkey ASIC providers so they can scale back on, or even eliminate, the risks and pitfalls of a lengthy chip design flow, and focus specifically on the core hardware differentiation IP and end application software that they bring to their innovation.

    This book deliberately includes a great deal of data and references to real products. We want you to fully understand and appreciate the scope of the IoT ecosystem and the Spec2Chip platform approach that is fueling its expansion. The goal is for you to take this experience and knowledge and apply it to your personal or organizational design flow. Our sincere hope is that your ideas, combined with the proven design methodologies outlined here, will result in a technological advancement that contributes to the IoT universe and those who live within it.

    You can register for the conference HERE. I hope to see you there!

    Also read: 35 Semiconductor IP Companies Hold 2nd Annual Conference


    High Calibre Development Keeps Mentor on Top of the Game

    by Tom Simon on 12-07-2017 at 12:00 pm

    One might be tempted to think that technology-driven gains in computer performance would be enough to keep up with the needs of design and verification tools. We know that design complexity is increasing at a rate predicted by Moore’s Law. We also know that the performance of the computers used during IC development benefits from this same level of improvement. When I first started in EDA I made the analogy of a dragon chasing its tail – always close, but never quite able to catch up. However, the dragon always seemed close enough to make developing the next generation of hardware possible.

    Stepping back and examining the issue more closely reveals well-known issues, such as exploding design rule complexity. I have written before about the rapid increase in the thickness of DRMs (design rule manuals). The need for increased compute power for design completion hits across the board. We see it in simulation, synthesis, place and route, etc. However, perhaps the most heavily impacted area is physical design verification, aka DRC.

    Many of us can remember the tectonic shift in DRC when flat checking became impractical. This is when, after decades, the industry moved from Cadence’s Dracula to Mentor’s Calibre. Calibre effectively and reliably solved the issues surrounding hierarchical DRC and paved a new path forward that was absolutely necessary for design productivity. The holy grail of most verification tools is the overnight run: if a tool can run in 8 to 14 hours, users can start a run, go home, and come back the next day to analyze and correct issues.

    As process nodes advanced, the requirements for physical verification exploded. The new requirements came from several distinct categories. Mentor recently published a white paper that discusses each of these areas and describes the work and development effort necessary to deliver a total solution that makes physical verification practical.

    Some of the solutions are good old-fashioned software development work. These include the addition of parallel processing and support for larger designs. Mentor also does a lot of work on the algorithmic side to support faster operations. Some of these efforts focus on handling hierarchy better, especially as it relates to metal fill. Anyone involved with nodes below 28nm knows how complex metal fill requirements have become. Calibre is now used to generate metal fill, and Mentor has pushed out significant enhancements in fill generation. The white paper discusses how these enhancements and methodology changes enable much higher design throughput.

    Other significant runtime improvements can be achieved through less obvious methods such as carefully writing rule decks. Two seemingly similar methods of performing a check can have vastly different runtimes. If specific knowledge is used in writing rule decks, overall runtimes can be lowered. We are long past the days where users write their own rule files. Today all the major foundries provide Calibre rule decks. Mentor has developed deep relationships with the major foundries and has, in conjunction with them, devised a process to ensure that all the supported rule decks use the most efficient methods. This requires early access and intimate cooperation between the Calibre team and the foundries.
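
Calibre rule decks are written in SVRF, which I won't attempt to reproduce here; but the underlying point, that two formulations of the same check can have vastly different runtimes, is algorithmic. As a language-neutral analogy (a toy 1-D spacing check, entirely my own construction, not a real rule deck), compare an all-pairs check with a sorted-neighbor check that returns the same answer:

```python
from itertools import combinations

def min_spacing_naive(edges):
    """Compare every pair of edge positions: O(n^2), like an inefficiently written check."""
    return min(abs(a - b) for a, b in combinations(edges, 2))

def min_spacing_sorted(edges):
    """Sort once, then compare adjacent neighbors only: O(n log n), same result."""
    s = sorted(edges)
    return min(b - a for a, b in zip(s, s[1:]))

edges = [40, 7, 19, 3, 88, 21]
assert min_spacing_naive(edges) == min_spacing_sorted(edges) == 2  # 19 and 21 are closest
```

On a full-chip layout with billions of edges, that kind of asymptotic difference is the gap between an overnight run and an impractical one, which is why foundry-qualified decks are worth the early-access collaboration.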

    More advanced nodes also come with completely new verification requirements. One such example is multi-patterning. Early on the foundries took care of this for their customers. However, as the process advanced from double to multiple patterning, the responsibility migrated to the physical layout teams. This is another area where Calibre has added support to ensure that customer designs are quickly and correctly completed.

    The Mentor white paper also discusses the effort Mentor expends adapting Calibre at each new process node while at the same time maintaining backwards compatibility, so that designs on nodes like 130nm or 90nm still obtain the same verification results. My observation is that Mentor is highly aware that the base tool needs to be reinvigorated at the same time as new performance features are added. Furthermore, none of it is useful without deep foundry relationships to anticipate the requirements for nodes that may be years away. Mentor is in an enviable position as it has the ability to work with foundries and their most advanced customers to ensure that support is ready prior to production release for each new node.

    The Mentor paper entitled “Achieving Optimal Performance During Physical Verification” is a fascinating view into the work that is required to stay on top of meeting and exceeding the next generation needs for physical verification. The results that Calibre attains are a clear indication of the level of effort that Mentor dedicates to the overall process of enhancing and supporting their physical verification solution. I highly recommend reading the paper to get more detailed insights and to better understand how the whole solution works together.


    Optimizing Return from your IP Portfolio

    by Bernard Murphy on 12-07-2017 at 7:00 am

    Given that SoC design today is predicated on IP reuse, you would assume that processes to deliver, maintain, and communicate status on reusable IP should be highly optimized. But that’s not necessarily the case, especially when so many design companies have consolidated, each bringing its own IP libraries, design flows, license agreements, and inevitably fragmented tribal knowledge about what works well, what doesn’t, and under what circumstances.


    This isn’t a hypothetical problem. Recently, a design group in the Asian arm of a design company purchased an interface IP for $750k; shortly after, the US branch of the same company purchased the same IP, also for $750k. Were they able to sort this out? I don’t know, but buyers have responsibilities as much as sellers. If too much time passed, I’m guessing the vendor would feel completely justified in pushing back against requests for a refund (try getting a refund from Amazon or Expedia after a few weeks, based on a mistake you made).

    Cases like that are eye-catching but I suspect the bigger losses are in more mundane areas. An important part of the value in an M&A is anticipated efficiencies and new markets opened thanks to sharing a larger pool of high-value IP. And yet I don’t think I’m alone in noticing that years after some completed mergers, more than a few design organizations are still struggling to broadly realize these supposed benefits. They certainly gained market share; efficiencies around IP are sometimes less clear. When sharing between organizations proves ineffective or inefficient, teams reinvent the wheel constantly, or spend more time adapting IP they thought would be drop-in, or they simply fail to exploit opportunities that should have delivered significant market advantage.


    Some of the problem starts with the way we view IP, as an essentially static end-product of IP development. Of course you hope the functionality changes less frequently than circuitry unique to the current design, but there can be a lot more information about an IP, changing more frequently than the functionality – bugs filed and status, schedule for bug fixes, silicon experiences and measurements and who else is using and has used this IP and in what configurations. All of this is very dynamic information, critically important to any current user. And it all evolves as new releases appear and new information flows in. Another design went into a respin because of a problem on that same IP? I might want to know that. Or I’m depending on a critical fix, but that need wasn’t passed down to the IP developers or status wasn’t rolled back up to those who might need it. I might want to know that too.
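
The "dynamic" view argued for above can be made concrete with a sketch of what an IP record needs to carry beyond the static deliverable. The field names below are hypothetical (not any particular product's schema), but they capture the kinds of evolving information a current user would want:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BugReport:
    bug_id: str
    status: str                       # e.g. "open" or "fixed"
    fix_release: Optional[str] = None # release the fix is scheduled for, if known

@dataclass
class IPRecord:
    name: str
    version: str
    license_terms: str                                   # usage restrictions, ITAR flags, etc.
    bugs: list = field(default_factory=list)
    silicon_history: list = field(default_factory=list)  # tapeouts, respins, measurements
    adopters: dict = field(default_factory=dict)         # team -> configuration used

    def open_bugs(self):
        """The bugs any current user of this IP would want rolled up to them."""
        return [b for b in self.bugs if b.status == "open"]
```

The point of the sketch is that almost every field here changes after "release": a static IP catalog captures `name` and `version` and loses the rest.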

    A Consensia-sponsored survey of design teams’ problems with IP highlights these issues: #1 they spend too long looking for the exact IP they need (incomplete databases, inadequate documentation, …), #2 they don’t know if they are allowed to use the IP on their design (license issues, ITAR issues, discontinued support, …), #3 they didn’t know the IP they wanted was already available (so they built their own) and #4 they didn’t have a good measure of the quality of the IP (yeah it exists, but nobody has used it, or it has a history of working well in certain applications but not in your application).

    None of this will be a surprise to any experienced design organization. Each has typically built up home-brewed flows to address at least some of these needs, based on different DDM, bug/defect tracking, release/testing frameworks and possibly some kind of lifecycle tracking. Which is great, but there’s no easy way to bring different flows together in a merger (which may be why many are still struggling years after the M&A). Brute-force consolidation is frequently impractical. This problem becomes even more challenging when you are working collaboratively on a design with a customer.


    Consensia offers the DelphIP platform to address all these problems, at the enterprise level where it can benefit all business lines in an organization. This provides a common and centralized way to track status, history, restrictions, maturity/quality along with other characteristics that aren’t commonly represented in EDA/DDM/defect tracking views. It also sits on top of and communicates with existing DDM platforms, bridging between existing local flows and enterprise-level IP communication and management.

    DelphIP also supports collaborative workspaces so you can work effectively with a customer in support of their value-adds, without having to expose all the gory (and proprietary) details that go into producing a modern design. Which is something you may be required to consider if you are working in the automotive space, to name just one example. You can watch the Consensia webinar on DelphIP HERE.


    Safety qualification for leading edge IP elements – presentation at REUSE 2017 in Santa Clara

    by Tom Simon on 12-06-2017 at 12:00 pm

    To ensure the reliability of automotive electronics, standards like AEC-Q100 and ISO 26262 have helped tremendously. They have created rational and explicit steps for developing and testing the electronic systems that go into our cars. These are not some abstract future requirement for fully autonomous cars, rather they are needed for critical systems in almost all cars today. Modern cars use significant electronics for engine systems, as well as braking and numerous safety systems such as impact sensors and airbags.

    It can be easy to understand how an airbag system might be documented and tested before use in a car design. However, it is important to realize that a sensor or actuator is really part of a larger network of devices, where the reliability of the entire system is critical. Within each of the elements we can find components that enable communication, such as a SERDES. With designs moving to 7nm, designing a SERDES is becoming even more complex. This is further compounded by the overlay of safety requirements. Yet, no matter how difficult, these added requirements are essential.

    As you have read in my writing before, a standard like ISO 26262 does not allow the certification of the pieces of a system. Instead, the final completed system can be qualified once it is built and its application is fully defined. The smaller components are what are known as safety elements out of context (SEooC). They can be built with the final system-level qualification in mind, but they themselves can never be certified.

    The question remains: how should a component such as a SERDES be designed and evaluated to ensure that it is suitable for inclusion in a system that will be qualified? Silicon Creations, a leader in analog IP development, will be speaking on this topic at the upcoming REUSE 2017 conference on December 14th in Santa Clara. Andrew Cole, Vice President of Business Development at Silicon Creations, will present a talk called “Developing 7nm IP for Safety Critical Automotive Applications”. In addition to talking about the steps necessary for ISO 26262 qualification, he will discuss AEC-Q100 pre-qualification through testing of completed parts.

    His talk will be held at 11:15 AM on December 14th. If you would like to register for this event, please go to the event website for more information on the full agenda and registration. During the event, there will be talks given by Samsung, Intel, Arm, and many other key players in the IP arena. The event looks like it will provide an informative day for those interested in developing systems that employ IP.


    Enhancing FPGA Prototype Debug

    by Daniel Nenni on 12-06-2017 at 7:00 am

    FPGA prototyping is very popular for modeling hardware: for early system prove-out, for early embedded software development, and as a cost-effective and performance-effective platform for software-driven hardware debug and late-stage software debug, all before silicon is available. It has significant advantages in run-time performance over other hardware simulation methods and can be considerably less expensive than emulation. But as always, every solution comes with its own challenges; one particular challenge for FPGA prototyping is visibility into hardware internals for debug.

    When using prototyping for hardware debug, good debug access is obviously critical; you have to be able to place probes / triggers wherever needed and you must be able to store long traces to debug back through sequences that may run for millions of cycles. In simulation-based verification this is easy. Changing observation points and triggers is simple and re-running after making a change to the design is equally simple.

    But in FPGA prototyping neither of these steps is so easy. Take first the debug capability. Standard FPGAs, typically from Xilinx or Intel/Altera, provide debug support in the form of on-chip logic analyzers (LAs), but these offer limited and potentially expensive debug support. First, they use extra gates, routing, and (on-board) memory, with more of each required the more you want to debug. Problems are particularly acute with memory: storing traces over millions of clock cycles may burn up all your on-board memory, undermining your design needs.

    This becomes even more complex when your design spreads across multiple FPGAs, as is common when prototyping designs of any reasonable size. Commercial prototyping systems support large designs by partitioning them across multiple FPGAs on a board, and potentially across multiple boards or even multiple enclosures. In cases like this, native device-based debuggers hit yet another problem. They are effective, as far as they go, in debug at a device level, but each device now represents just a part of your design. Moreover, multi-FPGA prototyping isn’t just about partitioning across FPGAs. Logic to pipeline signals through limited package pins and board-level logic to optimize communication between FPGAs, all artefacts of prototyping rather than features of your design, further complicate the debug picture from a whole design point of view.

    There’s another problem (in case you thought you could somehow work your way around the problems I have listed so far). Since probes are logic embedded in your design, if you need to change probes (which you will, given a very limited budget for debug logic when using LAs), you’ll need to re-implement the whole design on almost every debug iteration (because you’ll almost certainly need to add more probes). And that is going to take a lot of time per iteration – adding the probes to the RTL, resynthesizing, perhaps re-partitioning, and re-implementing through FPGA place and route. Think days, at least, to add a few probes.

    This is where dedicated debug solutions become particularly important, by moving most of the debug functionality off the FPGA so that the bulk of device resources remain allocated to your design, by supporting many more probes per device than would be possible in LAs and by letting you interact with a view of your total design (which is what you really want to understand) by transparently handling the details of design to device mapping. I’ll illustrate this through the S2C MDM (multi-debug module) solution.

    On resources, consider first memory needs. MDM doesn’t use FPGA memory. All debug data is output through a DDR3 memory port to external memory on the MDM board. Freed of the highly constrained FPGA memory resources, the solution provides 16GB of memory, supporting up to 16K probes per FPGA and several million cycles of traces. The on-device support logic is also greatly reduced to the very limited circuity, even for 16K probes, needed to capture and stream out this data.
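
The numbers above hang together on a back-of-envelope basis. Assuming one bit is captured per probe per cycle (my assumption for the sketch, not a stated spec), 16 GB of external trace memory holds the following:

```python
PROBES_PER_FPGA = 16 * 1024            # 16K probes
BITS_PER_SAMPLE = PROBES_PER_FPGA      # assumption: 1 bit captured per probe per cycle
MEMORY_BYTES = 16 * 2**30              # 16 GB of external DDR3 trace memory

bytes_per_cycle = BITS_PER_SAMPLE // 8           # 2048 bytes per captured cycle
cycles = MEMORY_BYTES // bytes_per_cycle
print(f"{cycles:,} cycles of trace")             # 8,388,608 -> "several million cycles"
```

That is roughly 8.4 million fully sampled cycles, consistent with the "several million cycles of traces" claim, and orders of magnitude beyond what on-chip block RAM could hold.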

    This also largely addresses the design iteration (for debug) problem. With 16K probes per FPGA, you’ll need far fewer lengthy design rebuilds to complete debug, which means that getting to verification closure is going to be a lot faster.

    Finally, the MDM solution works with up to 16 FPGAs simultaneously. And through the Prodigy Player Pro cockpit, you can interact with your design in selecting probes, viewing waveforms and tracing back to find root-causes, all without having to worry about the mapping details between the design and the FPGA implementation. Together these provide a debug solution much closer to what you are used to in simulation.

    You can learn more about the S2C Multi-Debug Module HERE.


    Jump Start your Secure IoT Design with Intrinsix

    by Mitch Heins on 12-05-2017 at 12:00 pm

    Have you ever had a great idea for a new product but then stopped short of following through on it because of the complexities involved to implement it? It’s frustrating to say the least. It is especially frustrating, when the crux of your idea is simple, but complexity arises from required components that don’t add to the functionality of the design. Enter the world of an internet-of-things (IoT) device.

    By definition, IoT devices live in the “wilds” of the internet, and that means they need to be made secure, else we endanger not only our own application but the internet upon which it depends. In most cases, IoT devices are made up of a simple processing unit, a network interface, and one or more sensors/actuators. These devices are typically much simpler than the required security element. This raises the question of how to bridge the security gap without having to hire an entire security team to design a secure device.

    Depending on your device there are multiple options. Some CPU vendors have a complete and comprehensive approach to security built into their offerings, though these offerings typically require some type of licensed IP. Alternatively, some operating systems, such as Linux, can also provide for secure communications. Security algorithms there are implemented in software and as such require sophisticated CPUs. However, many IoT devices don’t need or want the more powerful CPUs. In fact, the opposite is true: they need just enough computing power to talk to their sensors/actuators and then relay the information on to an IoT gateway. These devices need to be very low power, running on the smallest of batteries, while also being extremely cost sensitive.

    Intrinsix, a leading design services company, addresses the IoT security gap by providing a Secure IoT Jump-Start kit. The kit enables designers to quickly realize IoT designs with the required performance and security while using a minimized design architecture that is frugal on power. The Jump-Start kit makes use of a RISC-V processor which can be easily configured to use minimal features, and this cuts down on chip size, power, and cost.

    Security is provided through a custom IoT security hardware accelerator IP that dramatically reduces power consumption by as much as 1000X and increases battery life by as much as 10X. This frees up the CPU to be further customized for the specific application. The security IP is certified to be NSA Suite-B compliant (meeting NSA Secret level requirements), more than sufficient for most IoT applications.

    The Jump-Start kit also makes use of the minimalist Zephyr OS which does away with the need for more sophisticated memory management needed by operating systems like Linux. This means you don’t need to have a dedicated MMU, further reducing the chip footprint and cost.

    To make system design easier Intrinsix also includes within the Jump-Start kit a full software stack and API to access the dedicated security hardware accelerator. The software comes with example code that developers can use as a guide for linking security into all transactions into and out of the device. Of course, being a design services company, Intrinsix can be called upon to help implement a portion or all of the system for their customers.
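
Intrinsix's actual API is not public here, so the following is a purely hypothetical sketch of what "linking security into all transactions" could look like, with the hardware accelerator mocked by Python's stdlib HMAC (every name below is invented for illustration):

```python
import hashlib
import hmac

class SecurityAccelerator:
    """Hypothetical stand-in for the hardware security block; a real driver
    would hand these operations to silicon instead of computing them in software."""

    def __init__(self, device_key: bytes):
        self._key = device_key  # in practice, a key provisioned per device

    def seal(self, payload: bytes) -> bytes:
        """Attach an integrity tag before a transaction leaves the device."""
        tag = hmac.new(self._key, payload, hashlib.sha256).digest()
        return tag + payload

    def unseal(self, message: bytes) -> bytes:
        """Verify and strip the tag on an incoming transaction."""
        tag, payload = message[:32], message[32:]
        expected = hmac.new(self._key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("transaction failed integrity check")
        return payload

accel = SecurityAccelerator(b"per-device-provisioned-key")
wire = accel.seal(b"sensor_reading=21.5C")
assert accel.unseal(wire) == b"sensor_reading=21.5C"
```

The design point the article makes is visible even in this toy: application code calls `seal`/`unseal` and never touches the cryptography, so the CPU stays small and the security logic stays out of the firmware's way.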

    In summary, the Intrinsix IoT Jump-Start kit is meant to give IoT system designers a way to jump over the IoT security gap: to get their initial IoT product up and running, and into the market with a secure system, while reducing both non-recurring and recurring costs as well as overall risk.

    You can learn more about Intrinsix and its IoT offerings at the links below or by downloading the company's IoT e-book.

    See also:
    Intrinsix eBook: IoT Security, the 4th Element
    Intrinsix website


    Blurring Boundaries

    Blurring Boundaries
    by Bernard Murphy on 12-05-2017 at 7:00 am

    I think most of us have come to terms with the need for multiple verification platforms, from virtual prototyping, through static and formal verification, to simulation, emulation and FPGA-based prototyping. The verification problem space is simply too big, in size certainly but also in dynamic range, to be effectively addressed by just one or even a couple of platforms. We need microscopes, magnifying glasses and telescopes to fully inspect this range.


    Which is all very well, but real problems don’t neatly divide themselves into domains that can be fully managed within one platform. When debugging such a problem, sometimes you need the telescope and sometimes the microscope or magnifying glass.

    This in turn means you need to be able to flip back and forth easily between these platforms. I wrote in an earlier blog (Aspirational Congruence) on a Cadence goal to make these transitions as easy as possible between emulation (on Palladium) and prototyping (on Protium). It looks like they have succeeded, so much so that the boundaries between emulation and prototyping are starting to blur. Frank Schirrmeister at Cadence pointed me to three customer presentations which illustrate collaborative verification between these platforms.

    One, from Microsemi, illustrates the needs in bringing up hardware and firmware more or less simultaneously for multi-processor RAID SoCs. The need to model with software and realistic traffic makes hardware assist essential, but they must also span a range from detailed hardware debug and power and performance modeling to the faster performance required for regressions, where detailed debug access is not necessary. If a regression fails on the prototyper, they may want to drop that case back onto the emulator, a step greatly simplified by the common compile front-end and memory backdoor access shared between the platforms.

    Amlogic, who build over-the-top set-top boxes, media dongles and related products, have similar needs but helpfully highlighted different aspects of a mixed prototyping/emulation flow. Their systems obviously depend on a lot of software (they have to be able to boot Linux and Android in full-system debug), and naturally some post-boot problems straddle both hardware and software. Again, they saw the benefit of speed in prototyping combined with ease of switching back to emulation for detailed debug. An interesting point here is that Amlogic used hands-free setup for Protium, which gave them 5MHz performance versus 1MHz on Palladium; setup was probably quite fast since they weren't trying to tune the prototype, yet it apparently delivered enough performance for their needs. Amlogic's measure of the effectiveness of this flow was simple: they got basic Android running on first silicon within 30 minutes and delivered a full Android demo to a customer in 3 days. That's pretty compelling.

    Finally, Sirius-XM Radio gave a presentation at DAC this year on their use of, and the benefits they found in, this coupled configuration. (If you weren't aware, Sirius-XM designs all the baseband devices that go in every satellite receiver.) There are some similar needs, but also some interesting differences. Between satellites, ground-based repeaters and temporary loss of signal (going under a bridge, for example), there's significant interleaving of signals spanning 4-20 seconds which must be modeled. Overall, they have to support 2 full real-time weeks of testing; both needs obviously require hardware assist. Historically they did this with their own custom-crafted FPGA prototype boards, but observed that this approach wouldn't scale to newer designs. For example, for the DDR3 controller in their design they had to use the Xilinx hard macro, which didn't correspond to what they would build in silicon and might have (had they followed this path on their latest device) led to a bug forcing a silicon re-spin.

    Instead they switched to a Palladium/Protium combo where they could model exactly what they were going to build. They used the Palladium for end-to-end hardware verification, running 1.5 minutes of real-time data in 15 hours. Meanwhile, the software team did their work on a Protium platform, running 1.5 minutes of real-time data in 3 hours, which served them well enough to port libraries, the OS and other components and to validate all of these. Again, where issues cropped up (which they did), the hardware team was able to switch the problem test-run over to the Palladium, where they could debug it in detail.
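    A quick back-of-the-envelope check on the figures quoted above puts the two platforms in perspective: effective slowdown is wall-clock time divided by modeled real time, which works out to 600x on the emulator versus 120x on the prototyper, i.e. the prototyper was 5x faster for this workload.

```python
# Slowdown factors from the Sirius-XM numbers quoted above:
# 1.5 minutes of real-time data in 15 hours (Palladium) vs 3 hours (Protium).
real_time_s = 1.5 * 60            # 1.5 minutes of modeled real time

palladium_s = 15 * 3600           # emulator wall-clock time
protium_s = 3 * 3600              # prototyper wall-clock time

palladium_slowdown = palladium_s / real_time_s   # 600x
protium_slowdown = protium_s / real_time_s       # 120x

print(palladium_slowdown, protium_slowdown)      # 600.0 120.0
```

At those rates, the 2-week real-time test requirement makes it obvious why neither platform alone suffices and why the regression load is split across them.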

    The thread that runs through all these examples is default (and therefore fast) setup on the prototyper being good enough for all these teams, and ease of interoperability between the emulator and the prototyper. For hardware developers, this supports switching back from a software problem in prototyping to emulation when you suspect a hardware bug and need greater visibility. Equally, the hardware team could use the prototyper to get past lengthy boot and OS bring-up, to move onto where the interesting bugs start to appear. And for software developers, this setup enabled work on porting, development and regression testing using the same device model used in hardware development. For each of these customers, the bottom line was that they were able to bring up software on first silicon in no more than a few hours. Who wouldn’t want to be able to deliver that result?


    35 Semiconductor IP Companies Hold 2nd Annual Conference

    35 Semiconductor IP Companies Hold 2nd Annual Conference
    by Daniel Payne on 12-04-2017 at 12:00 pm

    Our smartphone-driven semiconductor economy consumes a lot of IP blocks to enable quick product development cycles, with new models often arriving annually. So where do you find the best semiconductor IP, verification IP and embedded software? Well, one place is the 2nd annual REUSE conference, scheduled for December 14th at the Santa Clara Convention Center, where you'll be able to explore what 35 companies have to offer. Here's the list to pique your curiosity a bit:

    • arm
    • Achronix
    • Amphion
    • Andes Technology
    • Archband
    • Avery Design Systems
    • CAST
    • Certus Semiconductor
    • City Semiconductor
    • Corigine
    • efabless
    • flexlogix Technologies, Inc.
    • Intel
    • Intrinsic ID
    • menta
    • mixel
    • Mobile Semiconductor
    • mobiveil
    • Moortec
    • NSCore
    • nvm engines
    • Open Silicon
    • QuickLogic
    • Samsung
    • SiFive
    • Silicon Creations
    • Silvaco
    • SiNTEGRA
    • Sofics
    • Sonics
    • surecore
    • SmartFlow
    • Semi IP Systems
    • True Circuits, Inc.
    • Uniquify

    It’s refreshing to see such a wide span of vendors at this conference, from small start-ups to behemoths like arm, Intel and Samsung. Even the first annual REUSE conference last year drew some 34 companies from 11 countries and 400 registered attendees, which is pretty impressive for an inaugural event.

    I contacted Warren Savage of Silvaco to find out more about REUSE because he was one of the founders of this new conference. Online, Warren has been making YouTube videos for several years under the tagline Take Five with Warren, in which he interviews key people in the semiconductor IP industry. Each video runs about 5 to 7 minutes, so don't get hung up on the literal 5-minute moniker.

    Warren started out in 1979 as a design engineer at Fairchild Semiconductor, then moved into management with Tandem for 13 years. He started in the IP business with Synopsys from 1995 to 2003, then became President and CEO of IPextreme for the next 12 years. Last year in June his company IPextreme was acquired by Silvaco, where Warren is now the General Manager of the IP division.

    Q&A

    What type of engineer would be interested in attending?
    Anyone that uses IP will probably find something interesting there. It’s a great venue to come and check out companies that you may not have had a chance to see at DAC or other shows, which are so much more expensive for nascent IP companies to attend.

    What does an engineer get out of attending in terms of benefit, ideas, skills, awareness, etc?
    As I mentioned, one of the principles is that we don't have a committee that tells people what they can and cannot talk about. We let companies talk about anything they like, and usually they pick something that they think would be interesting to someone. Whether it's applicable to a design engineer or a product manager, we don't know. They can read the agenda and find something interesting, I'm sure.

    Are there speakers at the event?

    Yes, a really nice selection of IP companies, big and small. Plus some nice keynotes from Smartflow on IP piracy and from Samsung on what IP means to foundries. Plus, Intel is giving a talk on what they expect from IP suppliers.

    • Ted Miracco, CEO, SmartFlow Compliance Solutions
    • Heather Monigan Program Director & Technology Strategist, Intel
    • Hong Hao, Sr. VP Foundry Business, Samsung
    • Meredith Lucky, VP of Sales, CAST
    • Tony Kazaczuk, Director, Flex Logix
    • Warren Savage, GM, Silvaco
    • John Heinlein, Ph. D., VP, Arm
    • Timothy Saxe, Ph. D., VP and CTO, QuickLogic
    • Stephen Fairbanks, Co-Director, Certus Semiconductor
    • Brian Gardner, VP of Business Development, True Circuits, Inc.
    • Steve Mensor, VP of Marketing, Achronix Semiconductor
    • Mike Noonen, Silego Technology
    • Dr. Naveed Sherwani, President & CEO, SiFive
    • John Blyler, JB Systems
    • Michael Wishart, CEO, efabless
    • Andrew Cole, VP, Silicon Creations
    • Jim Bruister, Director, Silvaco

    What is happening in the IP industry these days?
    I think that attendees can get a very good sense of the dynamics of the industry by attending, listening to the talks and visiting with suppliers and other attendees. My sense is that the IP industry is as strong as ever and continues to morph into new areas.

    Is the event expensive to attend?
    No, it’s free this year to attend.

    How many people do you expect this year?
    I expect between 500 and 600 people this year.

    Registration
    Even though this conference is free, you must register online to reserve your spot and receive a lanyard at the event.

    REUSE 2017 will again bring the global IP community together, this time at the Santa Clara Convention Center in the heart of Silicon Valley. An even greater diversity of suppliers will be promoting their products in a fair and balanced showcase, and REUSE will also provide an open forum for communication and networking within our industry.


    What are you ready to mobilize for FPGA debug?

    What are you ready to mobilize for FPGA debug?
    by Frederic Leens on 12-04-2017 at 7:00 am

    There are 3 common misconceptions about debugging FPGAs on real hardware:

  • Debugging happens because the engineers are incompetent.
  • FPGA debugging on hardware ‘wastes’ resources.
  • A single methodology should solve ALL the problems.

    Continue reading “What are you ready to mobilize for FPGA debug?”