
Securing Your IoT System using ARM

by Daniel Payne on 03-14-2017 at 12:00 pm

I’ll never forget reading about and experiencing the October 21, 2016 Distributed Denial of Service (DDoS) attacks which slowed and shut down much of the Internet. In that attack the target was the Domain Name System (DNS). Traffic for this massive DDoS attack came from unsecured IoT devices, like home routers and surveillance cameras, which hackers had infected with malicious code to form a botnet.

The good news is that semiconductor IP companies like ARM can provide you with a well-thought-out approach to making your IoT projects secure by design. Consider attending their next webinar, “How to implement a secure IoT system on ARMv8-M”, planned for Wednesday, March 29, 2017 at two different time slots.

Attacks on IoT devices are guaranteed – they will happen! Therefore, system security needs to be easy and fast to implement. With ARM’s newest embedded processors – the ARM Cortex-M23 and Cortex-M33 with TrustZone for ARMv8-M – developers can take advantage of hardware-enforced security. Now, system designers face the challenge of extending security throughout the whole system.
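The hardware details are ARM-specific, but the partitioning idea is easy to model. Here is a minimal Python sketch (purely illustrative; the class and method names are invented, not ARM’s API) of the TrustZone principle: secure-world state is reachable only through explicit gateway entry points, never by direct access from non-secure code.

```python
class SecureWorld:
    """Models the secure partition: private state plus explicit entry points.

    This is a conceptual stand-in for hardware-enforced isolation on
    ARMv8-M, where non-secure code may only enter the secure world through
    designated gateway ("veneer") functions.
    """
    def __init__(self, device_key: bytes):
        self._device_key = device_key                  # never exposed directly
        self._entry_points = {"sign": self._sign}      # the only legal doors

    def _sign(self, message: bytes) -> int:
        # Stand-in for a real crypto operation using the protected key.
        return sum(self._device_key) ^ sum(message)

    def call(self, entry: str, *args):
        """The only path in from the non-secure world."""
        if entry not in self._entry_points:
            raise PermissionError(f"no secure entry point named {entry!r}")
        return self._entry_points[entry](*args)

# Non-secure "application" code: it can request a signature but has no
# way to read the key itself.
secure = SecureWorld(device_key=b"\x01\x02\x03")
tag = secure.call("sign", b"hello")
```

The design point mirrored here is that the attack surface of the secure world is exactly the set of entry points it chooses to publish, nothing more.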

Join this technical webinar with Applications Engineer Ed Player and IoT Product Manager Mike Eftimakis. They will showcase the hardware and software you need to design a secure IoT system on ARMv8-M with TrustZone. The webinar will take a deep dive into an actual IoT system and share some of the products and tools available to create the most efficient, viable IoT system. During the webinar, ARM will share exclusive, exciting news which will help you accelerate your next IoT design.

Register for this webinar to learn:

• Hardware design considerations for building security into an SoC.
• Development techniques for generating secure software.
• How ARM TrustZone technology can be used to establish trust and security within an IoT system.

Registration is done online here, and I’ll be attending and blogging about it, so stay tuned.


Prototype-Based Debug for Cloud Design

by Bernard Murphy on 03-14-2017 at 7:00 am

Unless you’ve been in hibernation for a while, you probably know that a lot more chip design is happening in system companies these days. This isn’t just for science experiments; many of these designs are already being used in high-value applications. This development is captive – systems companies generally don’t want to get into selling chips – but there is enough value in their own needs for them to justify the design and manufacture of these parts.

An important motivation is differentiated enhancements in cloud hardware. Cloud services are a very hot and competitive area; one estimate puts Amazon Web Services, considered standalone, as the 6th largest business in the US. Which is good news for cloud services providers and for those who sell hardware and software solutions to those providers. Climbing fast on this list is a Chinese company called Inspur. I’m guessing you’ve never heard of them. That may change; they’re a vertically integrated company, with offerings all the way from cloud services down to building and selling servers. In server sales, they rank 5th worldwide and top in China. Which makes them very interested in anything that can improve QoS for cloud applications.

Networking between servers is an especially hot domain for differentiation. Microsoft recently announced their work with Intel/Altera to optimize networking for Azure through software-defined networking on reconfigurable platforms. Inspur is working on their own routing control chip (details not available) and unsurprisingly they want to prototype it, presumably in-system, before they commit to silicon.

Inspur chose S2C for their prototyping solution because they wanted to be able to inspect and validate correct behavior in operation while transmitting packets at high volume and throughput. In particular, S2C’s Prodigy MDM (Multi-Debug Module) is being used to set trigger conditions and capture related packets for chip debug. The deep sampling depth supported by Prodigy MDM allows Inspur to grab as many packets as possible to then be analyzed for correctness.

Inspur cited especially the strength and ease of debugging across a multi-FPGA prototype in the Prodigy solution. Large designs (and routing controllers tend to be large) are unlikely to fit on a single FPGA and may have to span multiple boards. But from the designer’s point of view, that’s an implementation detail; they still want to observe and debug across the whole design, unimpeded by FPGA partition boundaries. Using traditional FPGA tools, you would have to debug each FPGA in isolation; problems spanning more than one FPGA become painful to trace to a root cause. Fixes are even more challenging – a fix on one FPGA, made without a clear perspective on behavior in the rest of the design, may create a new problem on another FPGA.

The Prodigy MDM solution addresses this fundamental problem in multi-FPGA debug by presenting a unified design view across 4 Prodigy logic modules (boards) simultaneously. When you set up probes and trigger conditions, they are based on the design and indifferent to partitioning. When you view waveforms, the same applies. The designer sees the design as a whole and can monitor and debug it as a whole – the FPGA implementation is transparent.

Inspur also mentioned that deep tracing support was very important to speeding up their debug process. They needed to grab as many packets as possible when looking for potential problems and this can only be accomplished at reasonable speed if trace data can be buffered into sizeable on-board memory. To get at the details of those packets, Prodigy MDM supports many more probes on each FPGA than you are likely to need (and you can precompile up to 16k probes per FPGA, from which you can select and change candidates for tracing without needing to recompile). MDM can then store up to 16GB of probe traces on the MDM hardware, again a significant differentiator from traditional FPGA debug tools which offer limited internal memory to capture traces. Tracing supports speeds up to 40MHz and transfer to a host computer is accomplished through a high-speed Gigabit Ethernet interface.
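The trigger-and-capture flow described above can be sketched in a few lines of Python (a toy model only; the function and field names are hypothetical, not S2C’s interface): keep a rolling pre-trigger buffer, arm a trigger condition over the probe values, and record a window of samples around the hit.

```python
from collections import deque

def capture_on_trigger(samples, trigger, depth=8):
    """Toy model of hardware trace capture.

    `samples` is an iterable of dicts mapping probe name -> value;
    `trigger` is a predicate over one sample. Half of `depth` is kept as
    pre-trigger history, the rest is filled with post-trigger samples.
    """
    pre = deque(maxlen=depth // 2)   # rolling pre-trigger buffer
    captured = None
    for s in samples:
        if captured is None:
            pre.append(s)
            if trigger(s):
                captured = list(pre)         # freeze pre-trigger history
        else:
            captured.append(s)               # post-trigger samples
            if len(captured) >= depth:
                break
    return captured or []

# Trigger when a packet's CRC check fails; capture surrounding traffic.
trace = [{"seq": i, "crc_ok": i != 5} for i in range(12)]
hits = capture_on_trigger(trace, lambda s: not s["crc_ok"], depth=6)
```

The real hardware does this at 40 MHz into gigabytes of on-board memory, of course; the point of the sketch is only the pre/post-trigger windowing that makes a deep trace buffer useful.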

Compile, probe setup, run-time control and debug are all steered through the Prodigy Player Pro cockpit, so you have one unified interface for ease of use. You can learn more about Prodigy MDM HERE, the complete Prodigy line (including support for logic boards based on a variety of FPGA options) HERE, and you can read the joint S2C/Inspur press release HERE.

More articles by Bernard…


Unlocking Access to SoCs for IoT Edge Product Developers

by Tom Simon on 03-13-2017 at 4:00 pm

In the wake of the many mega mergers and consolidation in the semiconductor and electronics space, it is easy to say that opportunities for smaller companies are shrinking. Indeed, quite the opposite might be true. The larger companies, like Broadcom, ARM, Qualcomm, Analog Devices, Microchip, Maxim and Infineon (to name a few) are cranking out building blocks that actually make it easier to make innovative new consumer and industrial products. This in turn has fueled a large growth in small nimble companies that are building products for health, home security, home automation, automotive, convenience, recreation and other consumer oriented applications.

Most of these products are connected, contain a processor and have multiple sensors, user interface elements and actuators. Or in other words they fit into the category of Internet of Things edge devices. What is compelling about these products is that they are in many ways returning the power of product development to individuals and small teams. I always conjure up Edison, Bell or Tesla working in their shop in my mind when I think of today’s innovators.

What does it take to build an IoT edge device product? It’s one thing to talk about designing a new product and another to pull everything together to make it happen. If, as with many small teams, there are time and money constraints, it can be trickier than it sounds. A lot of teams opt for buying discrete chips and building a board to get to market. While it is a quicker path in some cases, it brings with it many limitations – larger footprint, higher unit costs, reliability issues, shorter battery life, etc. What is often needed for a long-term competitive win in the marketplace is a custom SOC for the product.

It used to be that for a small company or team, the dream of building a custom SOC was a bridge too far. Fortunately, it’s not just the hardware companies that are assembling powerful options for product development – Mentor Graphics has put together an array of technologies to facilitate the migration from a board based system level design to an SOC based design.

Mentor comes at the problem with a unique set of resources that create a complete solution for every aspect of the design problem: digital IP, logic design and prototype tools, analog design tools, embedded OS and software development tools, and simulation solutions to enable component and system verification.

Mentor is touting its “Rapid SoC Proof-of-Concept for Zero Cost” idea and it has some very interesting features. It starts with free access to IP models and integration tools for the ARM Cortex-M0. The M0 is an ideal choice because of its low power consumption and advanced 32-bit architecture. Along with this comes a low-cost, no-hassle commercial licensing model when it is time to have it fabricated. Also, for the glue logic, there is a $995 FPGA prototype board option that allows rapid prototyping on real hardware.

Mentor is also going out with a 30 day free evaluation license for Tanner EDA tools to design and simulate a proof of concept SoC. The Tanner EDA tools provide a mixed signal front end solution with schematic capture, analog simulation and digital design entry and behavioral simulation. There is a mixed signal simulation capability for verification of the entire design.

Software for the SOC can be developed with the ARM Keil MDK-Lite software development toolkit. This is part of the Cortex-M0 DesignStart package mentioned above. Once the proof of concept is fully developed and verified, it is easy to move to full physical implementation using the rest of the Mentor flow. Mentor has published a white paper providing much more detail on the entire process.

It’s gratifying to see that there are feasible avenues for teams with great ideas to get through the complex development process required for delivering new products. I always harken back to the Maker movement for the roots of the notion that development tools and building blocks can and should be readily available to those who want to build things. After all, who knows where the next Edison, Bell or Tesla is going to come from.


Driver Assistance and Autonomous? Need ASIL D Ready Certified CPU!

by Eric Esteve on 03-13-2017 at 12:00 pm

The automotive segment is moving from a kind of niche, filled with commodities and highly specialized, low-complexity ICs, to an innovative and very dynamic segment attracting most of the big players, from Qualcomm to Nvidia or Intel. These chip makers are targeting automotive as they need to find new growth areas, and they have quickly adapted their application processor offerings from mobile to automotive, more specifically to ADAS and autonomous vehicles.

But this automotive segment is completely different from the consumer segment, as most of the applications are safety critical. That’s why you need to understand standards like ISO 26262 or ASIL. To target applications like ADAS, radar, or safety-critical sensors, even if you market a CPU IP and not an IC, you need to have invested well in advance to propose an ASIL D compliant core…


Synopsys made this investment several years ago for the ARC EM CPU IP family, targeting the most challenging automotive specification, ISO 26262 ASIL D. Meeting this specification means that less than 1% of faults in the entire system may be single-point faults. For processors going into ASIL D certifiable chips, this translates into several stringent requirements, such as:

• Caches and tightly coupled memories need error correction and detection
• Implement a redundant (or shadow) core running the same code
• Insert logic to monitor and compare results from redundant cores
• Build extensive safety documentation for ISO 26262
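The dual-core lockstep idea in the second and third bullets can be illustrated with a toy Python model (all names are invented for illustration, not Synopsys APIs): run the same program on a main and a shadow core, compare outputs every step, and flag any divergence as a detected fault.

```python
def lockstep_run(program, inputs):
    """Toy dual-core lockstep: execute the same program on a main and a
    shadow core, with a comparator checking results at every step.
    A fault is injected into the shadow core at step 2 to show the
    comparator catching a single-bit upset."""
    def core(x, fault_at=None, step=None):
        y = program(x)
        if fault_at is not None and step == fault_at:
            y ^= 1                      # injected single-bit upset
        return y

    results, faults = [], []
    for step, x in enumerate(inputs):
        main = core(x, step=step)
        shadow = core(x, fault_at=2, step=step)
        if main != shadow:
            faults.append(step)         # comparator flags the divergence
        results.append(main)
    return results, faults

results, faults = lockstep_run(lambda x: 3 * x + 1, [5, 6, 7, 8])
```

In real silicon the shadow core typically runs a couple of cycles delayed so that a common transient cannot corrupt both cores identically, a refinement omitted here for brevity.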

The next picture describes the implementation of pre-built ARC EM Safety Islands: verified and ASIL D Ready certified dual-core lockstep processors with integrated safety monitors. It is interesting to note that ARC EM cores with the Safety Enhancement Package are the only microcontroller-class processors with ECC on memories and a lockstep interface…

In fact, if the application processor chip makers are targeting the master processing for ADAS or even autonomous vehicle applications, there will be plenty of chips around this master to support sub-systems like radar, lidar or sensor fusion. But these sub-systems will also be part of the safety-critical system and they will need to comply with the ISO 26262 ASIL D specification. In this case, the chip maker will select the lockstep implementation.

If a customer is targeting an automotive application that is not safety critical, they may use the ARC EM IP core complying with the less stringent ASIL B specification and implement the Independent Dual Core mode.


To reduce single points of failure, Synopsys has implemented a tightly coupled interrupt controller and options such as MPU and mDMA to provide full redundancy. A CPU IP core is not just made of dedicated hardware; the compiler is part of the IP vendor’s offering, as it’s an essential piece supporting project development. The fact that the MetaWare compiler is also ASIL D Ready certified will significantly accelerate ISO 26262 compliant code development.

The border between a microprocessor and a DSP is becoming blurry and the EM5D cores inside the EM5DSI support various DSP features like fixed point DSP, vector and single instruction/multiple data (SIMD) processing. They include a unified, single-cycle 32 x 32 MUL/MAC unit with 32-bit/64-bit accumulators. To deliver enhanced performance for filtering, FFT and other signal processing algorithms, the EM5D also features fractional support, rounding and non-rounding instructions, as well as divide, square root and fixed-point math functions. Vector and SIMD support provide greater processor efficiency by enabling multiple data values to be processed in a single operation.
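A behavioral sketch of the multiply-accumulate operation at the heart of such filtering may make this concrete (in Python, using Q15 fixed-point arithmetic; this is a conceptual model, not ARC’s instruction set, and the function name is invented).

```python
def q15_mac(coeffs, samples, acc=0):
    """Fixed-point (Q15) multiply-accumulate, the core of an FIR filter.

    Q15 values are integers in [-32768, 32767] representing [-1.0, 1.0).
    Each product is renormalized back to Q15 and summed into a 32-bit
    accumulator, with saturation at the accumulator bounds."""
    for c, x in zip(coeffs, samples):
        acc += (c * x) >> 15            # multiply, renormalize to Q15
    # saturate to the 32-bit accumulator range described in the text
    return max(-2**31, min(2**31 - 1, acc))

# Two taps of 0.5 (16384 in Q15) applied to two near-full-scale samples.
y = q15_mac([16384, 16384], [32767, 32767])
```

A DSP-extended core does this in a single cycle per tap with hardware saturation; in plain C on a microcontroller without such extensions, each tap costs a multiply, a shift and a conditional, which is exactly the gap these DSP features close.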

The best way to describe an IP may not be to list a set of features, but rather to look at the type of applications which can be supported by this IP. For example, the EM5DSI can be used to process Vision ADAS, Radar and Smart Sensors, and each of these applications can potentially be on the safety critical path.

Vision processing is needed to help driver assistance and autonomous driving and the requirements are expected to move from ASIL B to ASIL D for the hardware supporting such safety critical application. This transition should be simplified when using the ARC EM Safety Islands, positively impacting the TTM.

The ARC EM Safety Islands can provide front end processing for Radar application. If the Radar technology is used for driver assistance and autonomous driving, the ASIL D requirements will be met faster when implementing such ASIL D ready certified solution, including the compiler.

Because many sensors are in the safety critical path for applications like braking or steering, the CPU core handling processing for smart sensors and controllers has to comply with the ISO 26262 ASIL D firm requirement. Selecting a certified IP solution will guarantee the compliance and will shorten the TTM.

By Eric Esteve from IPnest

The ARC EM4SI and EM5DSI Safety Islands and the MetaWare Development Toolkit for Safety are available now. The ARC EM6SI and EM7DSI Safety Islands will be available in Q2, 2017.

More about DesignWare ARC Processor Solutions for Automotive Applications:

• ARC EM Safety Island IP
• Safety Option for ARC EM Processors


Anirudh on Verification

by Bernard Murphy on 03-13-2017 at 7:00 am

I was fortunate to have a 1-on-1 with Anirudh before he delivered the keynote at DVCon. In case you don’t know the name, Dr. Anirudh Devgan is Executive VP and GM of the Digital & Signoff Group and the System & Verification Group at Cadence. He’s on a meteoric rise in the company, not least for what he has done for Cadence’s position in verification in just a year.

Of course, Cadence was never a slouch in verification. Back in the early ’90s they acquired Gateway, making them the sole provider of Verilog simulation for a while, and they’ve got world-class engines in Jasper for formal and Palladium for emulation, but their simulation and prototyping products haven’t exactly towered over similar solutions in recent years. That changed at this year’s DVCon, where they announced order-of-magnitude improvements in both platforms, which Anirudh asserts puts them at the top of the pack across all the verification engines.

That’s the most obvious change in the new Cadence verification lineup, but what’s the bigger picture? He uses a transportation analogy to explain this. When you buy something online, delivering that package to your doorstep requires two things – fast engines (planes, ships, trains and trucks) and smart logistics (optimizing use of these resources for fast and cost-effective store-to-door shipping). He sees verification in the same way – you need fast engines and you need a smart environment (logistics) to get you to verification signoff quickly and with high quality.



This starts, you’ll be happy to hear, with the fastest engines he can deliver at any given time; he doesn’t believe that you can build great solutions on top of average engines. And he’s unimpressed by incremental improvements. Tell him you’ve found a way to increase performance by 20% and you lose his interest. He wants to see order-of-magnitude improvements (maybe there’s something to this idea of putting an implementation guy in charge of verification). These priorities are very apparent in Xcelium simulation runtimes and Protium S1 bring-up improvements announced at this year’s DVCon.


So, top-flight engines across the line, check. Anirudh then turned to the logistics part of the puzzle – a smart environment for verification. He sees this as being about:

  • Total throughput. It’s not just about fast engines; the complete verification flow needs to be fast and efficient – planning, building tests, debugging errors and jumping around between engines to isolate problems. Cadence already offers vManager, comprehensive support for UVM, and Indago to support many of these tasks. Cadence is also arguably the market leader in verification IP (including multi-engine support). A recent and important additional step along these lines is much tighter coupling between Palladium emulation and Protium S1 prototyping, enabling quick and easy model transfer between platforms.
  • Knowing when you’re done. This requires metric-driven verification and signoff supported by high-productivity test generation and rich options in measuring coverage. vManager coupled with UVM coverage delivers the metrics. A recently publicized and very strong addition to this area is Perspec System Verifier for automated high-volume system-level test generation, which will get you to a higher quality “done” much faster.
  • Application-optimized test, including support for features like mixed signal analysis for consumer and IoT, power analysis for mobile and IoT and safety and compliance analysis for automotive. Cadence offers multiple options in these areas.
  • Elastic compute resources. Cloud access is a big part of this, both in secure public clouds and in private clouds. Palladium Z1 enterprise emulation platform is another part, providing virtual access to emulation power to a much wider user-base.


All great improvements, but how do they stack up against customer demands? Anirudh pointed out that a lot of emerging system-development teams don’t necessarily have the range of expertise found in big semiconductor companies. And they operate in organizations expecting that if everything is built on reusable IP, design to signoff (including verification) should be much faster. This drives four customer priorities per Anirudh (actually in reverse order to the picture above):

  • When are we done? As mentioned earlier, vManager with planning and metrics has a big impact on this.
  • Why can’t I start software development and debug in parallel with design? Protium S1 and tight integration with Palladium pull software development and debug much earlier in the design cycle and make it more accessible to system designers unskilled in the arcana of FPGA prototyping.
  • Why does SoC design speed up with IP reuse, but not SoC verification? Perspec portable stimulus generation has an order of magnitude impact on test generation productivity and the portability of the approach promises to bring expectations for reuse in verification much closer to reality.
  • Why isn’t IP testing more complete? Greater use of formal and easier/non-expert access through formal applications, coupled with directed and random simulation, offers more complete proofs especially for hard problems like security and cache coherence.

Where does Anirudh see opportunity going forward? Vendors are always cagey about futures, for competitive reasons and GAAP/SEC requirements, but he was willing to open up a little. First, he sees plenty of opportunity to reach for more order of magnitude improvements in engines. Again, he’s not hampered by what we experts “know” can’t be done. He believes in big parallelism, more completeness and more out-of-the-box approaches. There’s also more opportunity for ease of use. System designers have neither the time nor the patience to learn complex verification flows. He thinks more big gains are possible here.

Then of course there is opportunity for more intelligence, through Big Data analytics and machine learning (ML). When I asked him to elaborate, he grinned and said he couldn’t but did note that Cadence already has over a hundred ML experts. So I’m going to assume there will be announcements around this area at some point.

Cadence has an impressive verification story, not just in terms of completeness and performance, but also in positioning for new waves of systems designers. These guys (and particularly Anirudh) are going to be very interesting to watch.



Lu Dai: Incoming Accellera Chair

by Bernard Murphy on 03-11-2017 at 7:00 am

One of the fun things about what I do is getting to meet some of the movers and shakers in the industry. You might not think of Accellera as a spot to find movers and shakers, but when you consider the impact they have had on what we do (OVL, SystemVerilog, UVM, UPF, SystemC, IP-XACT and others), design today would be unrecognizable without their standardization efforts. So I definitely wanted to meet Lu Dai, the incoming Chair of the organization. Lu takes over from Shishpal Rawat who served as chair for the last 6+ years.

Lu has been in the industry for over 20 years, starting at Intel, followed by a 10-year spell at Cisco and, most recently, nearly as long at Qualcomm, where he is currently a Sr. Director of Engineering. Most of this time has been spent deep in various aspects of design verification. He told me that he has been quite actively involved in methodology work, both at Cisco and Qualcomm, which led him to a board seat (representing Qualcomm) at Accellera in 2015.

Of course, Accellera hosts DVCon, so if you were there, you can thank the committee who pull the event together each year. They’ve started DVCons in Europe and India (both coming up on their 4th year) and this year will launch DVCon in China (Shanghai). They also play a big role in DAC, they host SystemC meetings in Europe and Japan, and a Verification and ESL forum will kick off in Taiwan this year (a heads up for us RTL-centric types – the design and verification world does not revolve solely around RTL).

Given all this activity and a wealth of standards in widespread use, a question I had for Lu was why he thought Accellera had been so successful. He thought that having a small number of voting members and high member participation was key. In an earlier panel the chair of one of the working groups was asked what would happen if he had 5 more members on his team. He said that getting to agreement would take 32 times longer 😎. Lu added that closing on agreement when voting members don’t attend meetings takes much longer, and closing when voting members are not expert in the topic (because they don’t attend the working group) is even harder – in those cases the safest bet is often to abstain which doesn’t help get to solid agreement.

Accellera working groups certainly seem to be nimble (as nimble as a standards group can be). And a solid track record of success can’t hurt. I’d add that they also seem to be very pragmatic. Focus is on domains where competing solutions already exist, a standard is clearly needed and the user-base is motivated to drive convergence among vendors.

I asked Lu where he wants to take Accellera under his leadership. He feels there are lots of good working groups already; he would like to see some smaller groups added where needed, potentially moving faster to converge on agreement on target topics. He would like to see membership and outreach continue to grow. As semiconductor consolidation has accelerated, corporate and associate memberships have shrunk. Yet there is still great opportunity to diversify membership, both regionally (for example in Asia-Pacific) and into new industries which will drive new demands (cloud, automotive, medical and many more).

Dynamic times for Lu to be driving a standards organization, but Accellera has shown itself to be a capable resource for developing practical standards and its widening international presence should give it a good start in cementing solid growth and global relevance. You can get a broader update on Accellera HERE.



Help for Automotive and Safety-critical Industries

by Daniel Payne on 03-10-2017 at 12:00 pm

I’ve been an Electrical Engineer and a car driver since 1978, so I’ve always been attracted to how the automotive industry designs cars to be safer for me and everyone else around the globe. From statistics compiled by the CDC, I learned that some 33,700 Americans died in motor vehicle crashes in 2014, a leading cause of death in our country. The ISO 26262 functional safety standard is widely used in the automotive industry as a way to create a set of standardized practices for designing and testing products, and it even covers the qualification of hardware and software.

Many EDA and IP vendors have decided to serve the safety-critical and automotive markets, so they need to get their tools and hardware qualified for use in design and verification flows at all criticality levels up to and including ASIL D. Mentor Graphics has just announced that their ReqTracer tool, used for requirements management and tracking, is qualified for use in ISO 26262 flows.

The folks at Mentor have something called the Mentor Safe functional safety assurance program, which includes all of the following software tools:

  • ReqTracer – requirements management and tracking
  • Tessent – silicon test and yield analysis
  • Nucleus SafetyCert – real time operating system
  • Volcano VSTAR AUTOSAR – operating system and VSW stack

Related blog – Mentor Safe Program Rounds Out Automotive Position

Mentor has a strong presence in automotive, supplying a wide range of software to OEM and Tier 1 companies. ReqTracer is the newest member of the Mentor Safe tool qualification program, and its qualification includes:

  • A description of the tool classification process
  • A description of all ReqTracer use cases
  • A tool classification report, fully justifying a TCL 1 classification for all workflows

Related blog – Coverage Driven Analog Verification

The inputs and outputs of ReqTracer are shown below to give you an idea of how it would fit into your electronic product design flow:

Features and benefits of using a tool like ReqTracer include:

  • Enables you to trace requirements throughout the entire design process
  • Requirement changes can be managed and weighed
  • All of your project documents can be linked with other tools for design, verification and implementation
  • You get to see graphical reports that are automated
  • Quality goes up from automated coverage analysis
  • Process improvements can be tracked
  • You know that design requirements are fully met
  • Regulatory requirements are satisfied
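The core traceability idea behind the list above – linking requirement IDs to the artifacts that implement them and reporting gaps – can be sketched in a few lines of Python (the tag format and names here are illustrative only, not how ReqTracer actually works):

```python
import re

def trace_requirements(requirements, source_files):
    """Minimal requirements-tracing sketch: scan source text for tags
    like 'REQ-12', then report which requirements are covered by which
    files and which have no trace at all."""
    tag = re.compile(r"REQ-(\d+)")
    covered = {}
    for path, text in source_files.items():
        for m in tag.finditer(text):
            covered.setdefault(f"REQ-{m.group(1)}", []).append(path)
    untraced = [r for r in requirements if r not in covered]
    return covered, untraced

# Hypothetical project: three requirements, two traced in source comments.
reqs = ["REQ-1", "REQ-2", "REQ-3"]
src = {"steer.c": "/* REQ-1: limit torque */", "brake.c": "/* REQ-2 */"}
coverage, missing = trace_requirements(reqs, src)
```

Even this toy version shows why automation matters: the untraced list is exactly what the STMicroelectronics team describes discovering below, and a manual review of a real codebase would miss some of it.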

Related blog – Coverage Driven Verification for Analog? Yes, it’s Possible

Engineers at STMicroelectronics in Agrate Brianza, Italy, have used the ReqTracer tool to trace requirements all the way to source code in the design of the ST SPC56E microcontroller, a chip used for automotive safety applications like power steering, active suspension and radar for adaptive cruise control.

Alessandro Sansonetti from STMicroelectronics became convinced of the benefits of using an automated requirements tracking system: “We discovered immediately that a lot of requirements were simply not correctly traced through to the source code. The potential of the tool was obvious and I pushed the team to make an investment.”

Summary

Our automobiles provide us the greatest amount of travel freedom on a daily basis, so it’s no surprise that vehicle safety is paramount for both drivers and designers. Since Mentor Graphics has some 30 years of experience as a vendor to the automotive market, their list of tools in the Mentor Safe program continues to grow, with ReqTracer being the latest addition. It’s worth taking a look at how these ISO 26262 qualified software tools can help your next safety-critical project get through the design and validation phases more quickly than other, more manual approaches.


Eclipsing IDEs

by Bernard Murphy on 03-10-2017 at 7:00 am

In a discussion with Hilde Goosens at Sigasi, she reminded me of an important topic relevant to the Sigasi platform. Some aspects of technology benefit from competition, others less obviously so, and some absolutely require standardization. Imagine how chaotic mobile communication would be if wireless protocols weren’t standardized. That’s an obvious case, shared with many other “invisible” standards whose details aren’t directly apparent to us consumers. Other standards are much more visible – some open standards, some de facto standards.

The Android interface is a good example. Perhaps user experience (UX) interfaces are the next battleground for adoption of standard/open solutions. Android took off because phone manufacturers saw it lowering the barrier to consumer acceptance and improving integration between applications. Why shouldn’t the same reasoning apply in professional and technical applications? That doesn’t mean that, following Highlander, there can be only one. We still have iOS and Android but, thank goodness, we don’t have 50 different phone UXes.

In system UXes we have Windows and MacOS and a variety of flavors for Linux – Gnome and KDE among others. This is a bit more scattered, but users tend to live in one environment or switch between a favorite Linux UX and one of the mainstream UXes, so not too bad. IDEs (integrated development environments) tend to be much more tool-specific. Think of Visual Studio from Microsoft and Dreamweaver from Adobe. These are custom-crafted interfaces, tuned to work with the vendor’s underlying technology – compilers, profilers, debuggers and so on. They allow for plugins from third parties, but if there isn’t a suitable plugin or you want to link to a tool from a competing vendor, you may be out of luck.


Which is probably why Eclipse, an open-source IDE, recently ranked in one index as one of the top two IDEs (Visual Studio was slightly ahead), with more than 2x the market share of the nearest rival. Software developers need to jump around between a lot of different development, build, test, debug and visualization contexts, so the fewer unnecessary differences there are between those contexts, the better. Individual application windows still have their own menus, buttons and visualizations, but the basic look and feel across the system is the same, and the platform encourages tight interoperability between views (copy/paste, cross-probing and so on). Eclipse already has support for C/C++, Java, PHP, JS, CSS and HTML, along with lots of tools for Python, UML, Git and Subversion, among other development options. Why try to recreate or integrate all of that in a custom UX?
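That plugin model is worth making concrete. An Eclipse plugin contributes to well-defined extension points rather than shipping its own shell, which is why so many tools can share one look and feel. As a hedged sketch (the `org.eclipse.ui.menus` extension point and `locationURI` syntax are standard Eclipse platform conventions, but the plugin and command names here are hypothetical), a tool vendor’s plugin.xml contribution might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plugin>
   <!-- Contribute a menu item into the shared Eclipse menu bar;
        the plugin reuses the platform UX instead of defining its own. -->
   <extension point="org.eclipse.ui.menus">
      <menuContribution locationURI="menu:org.eclipse.ui.main.menu?after=additions">
         <!-- commandId is a hypothetical example, not a real product -->
         <command commandId="com.example.hdltool.runLint"
                  label="Run HDL Lint"
                  style="push"/>
      </menuContribution>
   </extension>
</plugin>
```

Because every tool hooks into the same extension points, the menus, views and key bindings of a vendor’s tool sit alongside everyone else’s without any custom integration work.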

What does this have to do with hardware design? Tool vendors necessarily started and have evolved with the UX platforms that were available – originally perhaps Tcl/Tk, more recently Qt. Switching to a new UX platform is a very big and expensive transition (not so much a competitive issue), so movement in this area isn’t happening quickly. Nevertheless, Xilinx, Altera, Mentor and ARM all have investment in Eclipse-based tools for obvious reasons. If development in your environment also must support integration with a wide range of embedded software tooling, you have no hope of adequately supporting all possibilities in a proprietary UX; you must go with an open environment. The same pressures are likely to be seen in EDA tooling, where the line between hardware and software build and debug is increasingly blurred.

There’s another important consideration; safety standards like ISO 26262 are creating a lot of interest in building integration around common platforms like Eclipse, to minimize potential disconnects and information loss in transitioning between disconnected tools. Over time, expectations here are likely to switch from desired to required. I’d be unsurprised to learn that hardware tool vendors are fully aware of this need and already have plans in this direction.

Of course Sigasi already supports Eclipse. You can integrate Sigasi code creation, checking and other tooling directly alongside your other Eclipse tools. You can get more information HERE.

More articles by Bernard…


Improved Timing Closure for Network-on-Chip based SoCs

Improved Timing Closure for Network-on-Chip based SoCs
by Tom Simon on 03-09-2017 at 12:00 pm

Network on chip (NoC) technology already has a long list of compelling reasons driving its use in large SoC designs. However, this week Arteris introduced their PIANO 2.0 software, which provides an even more compelling reason to use the FlexNoC architecture. Let’s recap. Arteris FlexNoC gives SoC architects and designers a powerful tool for provisioning top-level interconnect. SoCs have long since passed the days when connections between blocks could be hardwired. Routing resources are too scarce, and flexibility for inter-block communication and data exchange has become paramount.

NoC is added to a design as RTL blocks that manage data exchange between blocks over a high performance and reliable on-chip network. Arteris’ FlexNoC is even capable of supporting cache coherent memory interfaces. Now, to understand why PIANO 2.0 is important it’s key to understand that a significant variability in timing closure efficiency is introduced when moving from the front end to the back end. PIANO 2.0 delivers a strong connection between RTL spec and the later physical timing closure steps. Until now, NoC implementation optimization was akin to being limited to wire load models instead of full parasitics.

PIANO 2.0 promotes intelligently moving interface elements away from their host or target blocks and into the routing channels. This works remarkably well for improving area and performance. The building blocks of an NoC are small and ideal for fitting in the ‘grout’ of the design. However, their placement, and the provisioning of supporting pipeline stages, can have a significant effect on area, power and timing.

Without any hints from the front end, placement tools will often cluster NoC logic blocks in ways that fail to meet timing, or that require the addition of pipeline stages. One contributing factor is that at 28nm and below, many interconnect paths between top-level blocks are simply too long for the signal to arrive in under one clock cycle. Attempting to fix this by adding more pipeline stages or relying on LVT cells can consume critical area and add to static and dynamic power consumption.
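To see why these long paths force pipelining, a simple back-of-the-envelope model helps. This sketch is my own illustration, not Arteris’ actual algorithm, and the delay figures are made-up assumptions for the example:

```python
import math

def pipeline_stages_needed(path_length_mm, wire_delay_ns_per_mm, clock_mhz):
    """First-order estimate of the register (pipeline) stages needed so
    each wire segment of a route fits within one clock cycle.
    Simplified model: ignores cell delays, setup/hold margin and
    routing detours, so real tools would insert more conservatively."""
    period_ns = 1000.0 / clock_mhz
    total_delay_ns = path_length_mm * wire_delay_ns_per_mm
    # Each segment must fit in one cycle; stages = segments - 1.
    segments = math.ceil(total_delay_ns / period_ns)
    return max(segments - 1, 0)

# e.g. an 8 mm route at an assumed ~0.6 ns/mm with a 400 MHz clock (2.5 ns period)
print(pipeline_stages_needed(8.0, 0.6, 400))  # -> 1
```

At higher clock rates or longer routes the stage count climbs quickly, which is exactly the area and power cost the article describes.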

Arteris has added feedback loops so that physical implementation tools from Cadence and Synopsys can create better placement for these interconnect IP blocks. It is axiomatic that better communication between front-end and back-end design teams will improve design results and reduce unnecessary iterations. PIANO 2.0 helps facilitate this front-to-back dataflow in a systematic fashion.

Arteris provides some benchmark results to support the effectiveness of PIANO 2.0. In the first example, they provide data on a design with no pipeline stages, starting with Design Compiler and using only wireload models, that is forecast to require 385K sq microns. Taking this same non-pipelined design to DC Topographical, it fails timing by 1.26ns and the interconnect IP area grows to 830K sq microns. Making it meet timing with manual pipeline additions grows the interconnect IP area to 1,008K sq microns. Instead, by using PIANO 2.0 the design meets timing with an interconnect IP area of 806K sq microns. This result also saves 46nW over the manually pipelined case.

In another example, Arteris compares manual pipeline insertion with the Auto Pipeline capability in PIANO 2.0. There was an 11% reduction in interconnect IP area, from 1.77M sq microns to 1.58M sq microns, and the time for pipeline insertion dropped from 45 days to 1.5 days. This 28nm design has 20 power domains, 10 clocks running between 100 and 400 MHz, and 160 NoC NIU sockets.
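The reported savings are easy to sanity-check. The short calculation below reproduces the percentages from the figures quoted above; it is a reader’s arithmetic check, not Arteris data:

```python
# Example 1: manual pipelining vs PIANO 2.0 (interconnect IP area, sq microns)
manual_area = 1_008_000
piano_area = 806_000
print(f"savings: {(manual_area - piano_area) / manual_area:.1%}")  # -> savings: 20.0%

# Example 2: manual insertion vs Auto Pipeline
manual_um2 = 1_770_000
auto_um2 = 1_580_000
print(f"reduction: {(manual_um2 - auto_um2) / manual_um2:.1%}")  # -> reduction: 10.7%
```

The 10.7% figure matches the rounded 11% reduction quoted in the announcement.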

Arteris includes endorsements from several major customers and EDA vendors in their product announcement, among them Horst Rieger, Manager of the Design Services Group at Renesas, and Dr. Antun Domic, CTO at Synopsys. Senior Analyst Mike Demler of the Linley Group also commented on the technology in the Arteris press release on PIANO 2.0.

Arteris PIANO 2.0 offers an effective solution for getting rapidly to timing closure with all the added benefits of an NoC architecture. This is not an incremental improvement either. It dramatically improves area, congestion, power and timing. Given that it works for coherent and non-coherent interconnect, it should be widely applicable to almost any design at 28nm or below.


ClioSoft Crushes it in 2016!

ClioSoft Crushes it in 2016!
by Daniel Nenni on 03-09-2017 at 7:00 am

If you are designing chips in a competitive market with multiple design teams, and IP reuse is a high priority, then you probably already know about the ClioSoft SOS Platform. What you probably did not know, however, is how well they are doing with the re-architected version of their integrated design and IP management software.


We have been covering ClioSoft since SemiWiki started in 2011 and have published 71 blogs that have been viewed almost 300,000 times by people all over the world so we know them quite well. You can see the ClioSoft SemiWiki landing page HERE. One thing you will notice is that ClioSoft has a very loyal customer base and they are not shy about sharing their experiences with the ClioSoft software and heaping praise on the company. The other thing you should know about ClioSoft is that for a relatively small company, they throw a very big customer and partner appreciation party at DAC!

In general we do not publish press releases, but I believe that ClioSoft’s accomplishments in 2016 deserve special recognition, so here it is:

ClioSoft Closes 2016 with Continued Growth
Best-in-class design collaboration platform drives new contracts, customers and renewals

 

FREMONT, Calif., February 28, 2017 — ClioSoft, Inc., a leader in system-on-chip (SoC) design data and intellectual property (IP) management solutions for the semiconductor design industry, today reported a 20% increase in new bookings for 2016, along with further adoption of ClioSoft’s SOS7 design management platform by existing customers. Thirty new accounts were added to ClioSoft’s existing base of over 200 customers in 2016. The rise in bookings was due to increased adoption of ClioSoft’s SOS solution by analog and RF designers and an upsurge in demand for its IP management solution.

SOS Virtuoso and SOS ADS, used by analog and RF designers, are built on top of the SOS7 design management platform. The SOS7 platform enables designers to work with other team members, located either locally or at remote design sites, to build and collaborate on the same design from concept to GDSII.

“It has been a good year for us, especially for the SOS7 Design Management Platform,” said Srinath Anantharaman, founder and CEO of ClioSoft. “SOS7, the re-architected update to the existing SOS design management platform, is being received very well amongst our customers. Since its release about a year and a half ago, a number of companies have started to standardize on the SOS7 platform, which has been built for performance, security and reliability. SOS7 takes design collaboration to a whole new level and has helped us win enterprise accounts from our competition.”

“ClioSoft’s SOS7 design management platform has helped us collaborate efficiently between designers located at multiple sites and improve the productivity of our design teams,” said Linh Hong, Vice President and General Manager of the Kilopass OTP Division. “It is important for us to manage the numerous design revisions and at the same time enable the design teams to work efficiently. The tight integration of SOS with EDA tools such as the Cadence® Virtuoso® Platform makes it easy for our engineers to develop next-generation memories and work together without stepping on each other’s toes. SOS provides the high performance and flexibility needed to manage the handoffs of complex design flows, including fine-grained access control to our project data. Moreover, the quality and responsiveness of ClioSoft’s support team is outstanding.”

ClioSoft provides the only design management platform for multi-site collaboration on all types of designs – analog, digital, RF and mixed-signal. By facilitating easy design handoffs along with secure and efficient sharing of design data from concept through tape-out, the SOS7 platform allows multi-site design collaboration for dispersed development teams. Tight integration of SOS7 with several EDA tools from Cadence, Keysight Technologies, Mentor Graphics and Synopsys® provides a cohesive design environment for all types of designs and enables designers across multiple design centers to increase productivity and efficiency in their complex design flows. In addition to enabling design engineers to manage design data and tool features from the same cockpit, SOS7 provides integrated revision control, release and derivative management, and an issue tracking interface to commonly used bug tracking systems. Using SOS7 helps reduce the possibility of design re-spins.

About ClioSoft
ClioSoft is the pioneer and leading developer of system-on-chip (SoC) design configuration and enterprise IP management solutions for the semiconductor industry. The company’s SOS7 Design Collaboration Platform, built exclusively to meet the demanding requirements of SoC designs, empowers multi-site design teams to collaborate efficiently on complex analog, digital, RF and mixed-signal designs.

The collaborative IP management system from ClioSoft is part of the overall SOS Design Collaboration Platform. The IP management system improves design reuse by providing an easy-to-use workflow for designers to manage the process of shopping, consuming and producing new IPs. ClioSoft customers include the top 20 semiconductor companies worldwide. ClioSoft is headquartered at 39500 Stevenson Place, Suite 210, Fremont, CA, 94539. For more information visit us at www.cliosoft.com.

Also Read

CEO Interview: Srinath Anantharaman of ClioSoft

Qorvo Uses ClioSoft to Bring Design Data Management to RF Design

Qorvo and KeySight to Present on Managing Collaboration for Multi-site, Multi-vendor RF Design