Anirudh on Verification
by Bernard Murphy on 03-13-2017 at 7:00 am

I was fortunate to have a 1-on-1 with Anirudh before he delivered the keynote at DVCon. In case you don’t know the name, Dr. Anirudh Devgan is Executive VP and GM of the Digital & Signoff Group and the System & Verification Group at Cadence. He’s on a meteoric rise in the company, not least for what he has done for Cadence’s position in verification in just a year.

Of course, Cadence was never a slouch in verification. Back in the early ’90s they acquired Gateway, making them the sole provider of Verilog simulation for a while, and they have world-class engines in Jasper for formal and Palladium for emulation. But their simulation and prototyping products haven’t exactly towered over similar solutions in recent years. That changed at this year’s DVCon, where they announced order-of-magnitude improvements in both platforms, which Anirudh asserts puts them at the top of the pack across all the verification engines.

That’s the most obvious change in the new Cadence verification lineup, but what’s the bigger picture? He uses a transportation analogy to explain this. When you buy something online, delivering that package to your doorstep requires two things – fast engines (planes, ships, trains and trucks) and smart logistics (optimizing use of these resources for fast and cost-effective store-to-door shipping). He sees verification in the same way – you need fast engines and you need a smart environment (logistics) to get you to verification signoff quickly and with high quality.



This starts, you’ll be happy to hear, with the fastest engines he can deliver at any given time; he doesn’t believe that you can build great solutions on top of average engines. And he’s unimpressed by incremental improvements. Tell him you’ve found a way to increase performance by 20% and you lose his interest. He wants to see order-of-magnitude improvements (maybe there’s something to this idea of putting an implementation guy in charge of verification). These priorities are very apparent in Xcelium simulation runtimes and Protium S1 bring-up improvements announced at this year’s DVCon.


So, top-flight engines across the line, check. Anirudh then turned to the logistics part of the puzzle – a smart environment for verification. He sees this as being about:

  • Total throughput. It’s not just about fast engines; the complete verification flow needs to be fast and efficient – planning, building tests, debugging errors and jumping around between engines to isolate problems. Cadence already offers vManager, comprehensive support for UVM and Indago to support many of these tasks. Cadence is also arguably the market leader in verification IP (including multi-engine support). A recent and important additional step along these lines is much tighter coupling between Palladium emulation and Protium S1 prototyping, enabling quick and easy model transfer between platforms.
  • Knowing when you’re done. This requires metric-driven verification and signoff supported by high-productivity test generation and rich options in measuring coverage. vManager coupled with UVM coverage delivers the metrics (a minimal sketch of this kind of coverage roll-up follows this list). A recently publicized and very strong addition to this area is Perspec System Verifier for automated high-volume system-level test generation, which will get you to a higher quality “done” much faster.
  • Application-optimized test, including support for features like mixed signal analysis for consumer and IoT, power analysis for mobile and IoT and safety and compliance analysis for automotive. Cadence offers multiple options in these areas.
  • Elastic compute resources. Cloud access is a big part of this, both in secure public clouds and in private clouds. Palladium Z1 enterprise emulation platform is another part, providing virtual access to emulation power to a much wider user-base.
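To make “knowing when you’re done” a little more concrete, here is a minimal sketch in Python of what a metric-driven signoff check boils down to: a verification plan of features with coverage goals, coverage rolled up from the engines, and a pass/fail decision. The feature names and numbers are invented for illustration; this is not a vManager API, just the underlying idea.

```python
# Illustrative sketch of metric-driven signoff (invented data, not a vManager API).
# A verification plan maps features to coverage goals; merged coverage from the
# engines is compared against those goals to decide whether you are "done".

plan = {                      # feature -> required coverage (%)
    "uart_rx":        95.0,
    "dma_transfers":  90.0,
    "power_modes":   100.0,
}

collected = {                 # feature -> merged coverage from all engines (%)
    "uart_rx":        97.2,
    "dma_transfers":  83.5,
    "power_modes":   100.0,
}

def signoff(plan, collected):
    """Return (done, shortfalls) for a simple coverage-goal check."""
    shortfalls = {f: (collected.get(f, 0.0), goal)
                  for f, goal in plan.items()
                  if collected.get(f, 0.0) < goal}
    return (not shortfalls), shortfalls

done, gaps = signoff(plan, collected)
print("Verification signoff met" if done else f"Not done, gaps (actual, goal): {gaps}")
```

The real value of a tool in this space lies in how the plan is captured and how coverage from simulation, formal, emulation and prototyping is merged; the final roll-up is as simple as the sketch suggests.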


All great improvements, but how do they stack up against customer demands? Anirudh pointed out that a lot of emerging system-development teams don’t necessarily have the range of expertise found in big semiconductor companies. And they operate in organizations expecting that if everything is built on reusable IP, design to signoff (including verification) should be much faster. This drives four customer priorities per Anirudh (actually in reverse order to the picture above):

  • When are we done? As mentioned earlier, vManager with planning and metrics has a big impact on this.
  • Why can’t I start software development and debug in parallel with design? Protium S1 and tight integration with Palladium pull software development and debug much earlier in the design cycle and make it more accessible to system designers unskilled in the arcana of FPGA prototyping.
  • Why does SoC design speed up with IP reuse, but not SoC verification? Perspec portable stimulus generation has an order of magnitude impact on test generation productivity and the portability of the approach promises to bring expectations for reuse in verification much closer to reality.
  • Why isn’t IP testing more complete? Greater use of formal and easier/non-expert access through formal applications, coupled with directed and random simulation, offers more complete proofs especially for hard problems like security and cache coherence.

Where does Anirudh see opportunity going forward? Vendors are always cagey about futures, for competitive reasons and GAAP/SEC requirements, but he was willing to open up a little. First, he sees plenty of opportunity to reach for more order of magnitude improvements in engines. Again, he’s not hampered by what we experts “know” can’t be done. He believes in big parallelism, more completeness and more out-of-the-box approaches. There’s also more opportunity for ease of use. System designers have neither the time nor the patience to learn complex verification flows. He thinks more big gains are possible here.

Then of course there is opportunity for more intelligence, through Big Data analytics and machine learning (ML). When I asked him to elaborate, he grinned and said he couldn’t but did note that Cadence already has over a hundred ML experts. So I’m going to assume there will be announcements around this area at some point.

Cadence has an impressive verification story, not just in terms of completeness and performance, but also in positioning for new waves of systems designers. These guys (and particularly Anirudh) are going to be very interesting to watch.



Lu Dai: Incoming Accellera Chair
by Bernard Murphy on 03-11-2017 at 7:00 am

One of the fun things about what I do is getting to meet some of the movers and shakers in the industry. You might not think of Accellera as a spot to find movers and shakers, but when you consider the impact they have had on what we do (OVL, SystemVerilog, UVM, UPF, SystemC, IP-XACT and others), design today would be unrecognizable without their standardization efforts. So I definitely wanted to meet Lu Dai, the incoming Chair of the organization. Lu takes over from Shishpal Rawat who served as chair for the last 6+ years.

Lu has been in the industry for over 20 years, starting at Intel, followed by a 10-year spell at Cisco and, most recently, nearly as long at Qualcomm, where he is currently a Sr. Director of Engineering. Most of this time has been spent deep in various aspects of design verification. He told me that he has been quite actively involved in methodology work, both at Cisco and Qualcomm, which led him to a board seat (representing Qualcomm) at Accellera in 2015.

Of course, Accellera hosts DVCon, so if you were there, you can thank the committee who pull the event together each year. They’ve started DVCons in Europe and India (both coming up on their 4th year) and this year will launch DVCon in China (Shanghai). They also play a big role in DAC, they host SystemC meetings in Europe and Japan, and a Verification and ESL forum will kick off in Taiwan this year (a heads up for us RTL-centric types – the design and verification world does not revolve solely around RTL).

Given all this activity and a wealth of standards in widespread use, a question I had for Lu was why he thought Accellera had been so successful. He thought that having a small number of voting members and high member participation was key. In an earlier panel the chair of one of the working groups was asked what would happen if he had 5 more members on his team. He said that getting to agreement would take 32 times longer 😎. Lu added that closing on agreement when voting members don’t attend meetings takes much longer, and closing when voting members are not expert in the topic (because they don’t attend the working group) is even harder – in those cases the safest bet is often to abstain which doesn’t help get to solid agreement.

Accellera working groups certainly seem to be nimble (as nimble as a standards group can be). And a solid track record of success can’t hurt. I’d add that they also seem to be very pragmatic. Focus is on domains where competing solutions already exist, a standard is clearly needed and the user-base is motivated to drive convergence among vendors.

I asked Lu where he wants to take Accellera under his leadership. He feels there are already lots of good working groups; he would like to see some smaller groups added where needed, potentially moving faster to converge on agreement on target topics. He would like to see membership and outreach continue to grow. As semiconductor consolidation has accelerated, corporate and associate memberships have shrunk. Yet there is still great opportunity to diversify membership, both regionally (for example in Asia-Pacific) and into new industries which will drive new demands (cloud, automotive, medical and many more).

Dynamic times for Lu to be driving a standards organization, but Accellera has shown itself to be a capable resource for developing practical standards and its widening international presence should give it a good start in cementing solid growth and global relevance. You can get a broader update on Accellera HERE.



Help for Automotive and Safety-critical Industries
by Daniel Payne on 03-10-2017 at 12:00 pm

I’ve been an Electrical Engineer and a car driver since 1978, so I’ve always been attracted to how the automotive industry designs cars to be safer for me and everyone else around the globe. According to statistics compiled by the CDC, some 33,700 Americans died in motor vehicle crashes in 2014, making crashes a leading cause of death in our country. The ISO 26262 functional safety standard is widely used in the automotive industry as a way to create a set of standardized practices for designing and testing products, and it even covers the qualification of hardware and software.

Many EDA and IP vendors have decided to serve the safety-critical and automotive markets, so they need to get their tools and hardware qualified for use in designs and verification flows at all criticality levels up to and including ASIL D. Mentor Graphics has just announced that their ReqTracer tool, used for requirements management and tracking, is qualified for use in ISO 26262 flows.

The folks at Mentor have something called the Mentor Safe functional safety assurance program, which includes all of the following software tools:

  • ReqTracer – requirements management and tracking
  • Tessent – silicon test and yield analysis
  • Nucleus SafetyCert – real time operating system
  • Volcano VSTAR AUTOSAR – operating system and VSW stack

Related blog – Mentor Safe Program Rounds Out Automotive Position

Mentor has a strong presence in automotive by supplying such a wide range of software to OEM and Tier 1 companies. As the newest member of the Mentor Safe program, the ReqTracer tool qualification includes:

  • A description of the tool classification process
  • A description of all ReqTracer use cases
  • A tool classification report, fully justifying a TCL 1 classification for all work flows

Related blog – Coverage Driven Analog Verification

The inputs and outputs of ReqTracer are shown below to give you an idea of how it would fit into your electronic product design flow:

Features and benefits of using a tool like ReqTracer include:

  • Enables you to trace requirements throughout the entire design process
  • Requirement changes can be managed and weighed
  • All of your project documents can be linked with other tools for design, verification and implementation
  • You get automated graphical reports
  • Quality improves through automated coverage analysis
  • Process improvements can be tracked
  • You know that design requirements are fully met
  • Regulatory requirements are satisfied

Related blog – Coverage Driven Verification for Analog? Yes, it’s Possible

Engineers at STMicroelectronics in Agrate Brianza, Italy have used the ReqTracer tool to trace requirements all the way to source code in the design of the ST SPC56E microcontroller, a chip used for automotive safety projects like power steering, active suspension and radar for adaptive cruise control.

Alessandro Sansonetti from STMicroelectronics became convinced of the benefits of using an automated requirements tracking system: “We discovered immediately that a lot of requirements were simply not correctly traced through to the source code. The potential of the tool was obvious and I pushed the team to make an investment.”
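The STMicroelectronics experience is easy to picture with a toy example. The Python sketch below captures the essence of automated requirements tracing: scan project files for requirement tags and flag any requirement that is never referenced. The tag format, requirement IDs and directory names are hypothetical, and this is not how ReqTracer is implemented; it simply illustrates the class of check being automated.

```python
import re
from pathlib import Path

# Hypothetical requirement IDs and tag format (e.g. "REQ-042" in a comment).
# A real flow would pull these from the requirements management database.
requirements = {"REQ-001", "REQ-002", "REQ-003"}
tag = re.compile(r"REQ-\d{3}")

def untraced(requirements, source_dirs):
    """Return the requirements never referenced in any file under source_dirs."""
    seen = set()
    for d in source_dirs:
        for f in Path(d).rglob("*.*"):
            try:
                seen |= set(tag.findall(f.read_text(errors="ignore")))
            except OSError:
                continue        # skip unreadable files
    return requirements - seen

missing = untraced(requirements, ["rtl", "tests", "docs"])
if missing:
    print("Requirements with no trace to source or tests:", sorted(missing))
```

Doing this by hand across thousands of requirements and a moving code base is exactly the kind of work that should be automated.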

Summary

Our automobiles provide us the greatest amount of travel freedom on a daily basis, so it’s no surprise that vehicle safety is paramount for both drivers and designers. Since Mentor Graphics has some 30 years of experience as a vendor to the automotive market, their list of tools in the Mentor Safe program continues to grow, with ReqTracer being the latest addition. It’s worth taking a look at how these ISO 26262 qualified software tools can help your next safety-critical project get through the design and validation phases more quickly than other, more manual approaches.


Eclipsing IDEs
by Bernard Murphy on 03-10-2017 at 7:00 am

In a discussion with Hilde Goosens at Sigasi, she reminded me of an important topic relevant to the Sigasi platform. Some aspects of technology benefit from competition, others less obviously so, and some absolutely require standardization. Imagine how chaotic mobile communication would be if wireless protocols weren’t standardized. That’s an obvious case, shared with many other “invisible” standards whose details aren’t directly apparent to us consumers. Other standards are much more visible, some open and some de facto.

The Android interface is a good example. Perhaps user experience (UX) interfaces are the next battleground for adoption of standard/open solutions. Android took off because phone manufacturers saw it lowering the barrier to consumer acceptance and improving integration between applications. Why shouldn’t the same reasoning apply in professional and technical applications? That doesn’t mean that, following Highlander, there shall only be one. We still have iOS and Android but, thank goodness, we don’t have 50 different phone UXes.

In system UXes we have Windows and MacOS and a variety of flavors for Linux – Gnome and KDE among others. This is a bit more scattered, but users tend to live in one environment or switch between a favorite Linux UX and one of the mainstream UXes, so not too bad. IDEs (integrated development environments) tend to be much more tool-specific. Think of Visual Studio from Microsoft and Dreamweaver from Adobe. These are custom-crafted interfaces, tuned to work with the vendor’s underlying technology – compilers, profilers, debuggers and so on. They allow for plugins from 3rd parties, but if there isn’t a suitable plugin or you want to link to a tool from a competing vendor, you may be out of luck.


Which is probably why Eclipse, an open-source IDE, recently ranked in one index as one of the top two IDEs (Visual Studio was slightly ahead) with more than 2x market share over the nearest rival. Software developers need to jump around between a lot of different development, build, test, debug and visualization contexts, so the fewer unnecessary differences there are between these contexts, the better. Individual application windows still have their own menus, buttons and visualizations, but the basic look and feel across the system is the same and the platform encourages tight interoperability between views (copy/paste, cross-probing and so on). Eclipse already has support for C/C++, Java, PHP, JS, CSS, HTML, along with lots of tools for Python, UML, Git, Subversion, among other development options. Why try to recreate or integrate all of that in a custom UX?

What does this have to do with hardware design? Tool vendors necessarily started and have evolved with the UX platforms that were available – originally perhaps Tcl/Tk, more recently Qt. Switching to a new UX platform is a very big and expensive transition (not so much a competitive issue), so movement in this area isn’t happening quickly. Nevertheless, Xilinx, Altera, Mentor and ARM all have investment in Eclipse-based tools for obvious reasons. If development in your environment also must support integration with a wide range of embedded software tooling, you have no hope of adequately supporting all possibilities in a proprietary UX; you must go with an open environment. The same pressures are likely to be seen in EDA tooling, where the line between hardware and software build and debug is increasingly blurred.

There’s another important consideration; safety standards like ISO 26262 are creating a lot of interest in building integration around common platforms like Eclipse, to minimize potential disconnects and information loss in transitioning between disconnected tools. Over time, expectations here are likely to switch from desired to required. I’d be unsurprised to learn that hardware tool vendors are fully aware of this need and already have plans in this direction.

Of course, Sigasi already supports Eclipse. You can integrate Sigasi code creation, checking and other tooling directly alongside your other Eclipse tools. You can get more information HERE.



Improved Timing Closure for Network-on-Chip based SoCs
by Tom Simon on 03-09-2017 at 12:00 pm

Network on chip (NoC) already has a long list of compelling reasons driving its use in large SoC designs. However, this week Arteris introduced their PIANO 2.0 software, which provides an even more compelling reason to use their FlexNoC architecture. Let’s recap. Arteris FlexNoC gives SoC architects and designers a powerful tool for provisioning top level interconnect. SoCs have long since passed the days when connections between the blocks could be hardwired. Routing resources are too scarce, and flexibility for inter-block communication and data exchange has become paramount.

NoC is added to a design as RTL blocks that manage data exchange between blocks over a high performance and reliable on-chip network. Arteris’ FlexNoC is even capable of supporting cache coherent memory interfaces. Now, to understand why PIANO 2.0 is important, it’s key to recognize that significant variability in timing closure efficiency is introduced when moving from the front end to the back end. PIANO 2.0 delivers a strong connection between the RTL spec and the later physical timing closure steps. Until now, NoC implementation optimization was akin to being limited to wire load models instead of full parasitics.

PIANO 2.0 promotes intelligently moving interface elements away from their host or target blocks and into the routing channels. This works remarkably well for improving area and performance. The building blocks for an NoC are small and ideal for fitting in the ‘grout’ of the design. However, their placement and the provisioning of supporting pipeline stages can have a significant effect on area, power and timing.

Without any hints from the front end, placement tools will often cluster NoC logic blocks in ways that fail to meet timing, or that require the addition of pipeline stages. One contributing factor is that at 28nm and below, many interconnect paths between top level blocks are simply too long for the signal to arrive in under one clock cycle. Attempting to fix this by adding more pipeline stages or relying on LVT cells can consume critical area and add to static and dynamic power consumption.
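A back-of-envelope calculation shows why those long top-level routes force pipelining. The numbers in the Python sketch below (wire delay per mm, clock period and timing margin) are illustrative assumptions for a 28nm-class process, not Arteris or foundry data.

```python
# Back-of-envelope: how many pipeline registers does a long top-level route need?
# All numbers are illustrative assumptions, not vendor or foundry data.

wire_delay_ps_per_mm = 150.0   # assumed delay of a buffered top-level route
clock_period_ps      = 1000.0  # 1 GHz clock
timing_margin_ps     = 200.0   # setup margin, clock skew, register delays

def pipeline_stages(route_mm):
    """Registers needed so that each hop of the route fits in one clock cycle."""
    budget = clock_period_ps - timing_margin_ps
    route_delay_ps = route_mm * wire_delay_ps_per_mm
    hops = -(-route_delay_ps // budget)       # ceiling division
    return max(int(hops) - 1, 0)              # stages = hops - 1

for mm in (2, 5, 10, 15):
    print(f"{mm:>2} mm route -> {pipeline_stages(mm)} pipeline stage(s)")
```

At these assumed numbers a 10 mm route already needs a pipeline register and a 15 mm route needs two, which is exactly the kind of decision best made with physical feedback rather than by hand.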

Arteris has added feedback loops so that physical implementation tools from Cadence and Synopsys can create better placement for these interconnect IP blocks. It is axiomatic that better communication between front end and back end design teams will improve design results and reduce unnecessary iterations. PIANO 2.0 helps facilitate front-to-back dataflow in a systematic fashion.

Arteris provides some benchmark results to support the effectiveness of PIANO 2.0. In their first example, they provide data on a design with no pipeline stages, starting with Design Compiler and only using wireload models, that is forecast to require 385K sq microns of interconnect IP area. Taking this same non-pipelined design to DC Topographical, it fails timing by 1.26ns and the interconnect IP area has grown to 830K sq microns. To make this meet timing with manual pipeline additions, the interconnect IP area grows to 1,008K sq microns. Instead, by using PIANO 2.0 the design meets timing with an interconnect IP area of 806K sq microns. This result also saves 46nW over the manually pipelined case.

In another example Arteris provides, they compare manual pipeline insertion with Auto Pipeline in PIANO 2.0. There was an 11% reduction in interconnect IP area, from 1.77M sq microns to 1.58M sq microns. The process for pipeline insertion went down from 45 days to 1.5 days as well. This 28nm design has 20 power domains, 10 clocks running between 100 and 400 MHz and 160 NoC NIU sockets.

Arteris is including endorsements from several major customers and EDA vendors in their product announcement. Among them are Horst Rieger, Manager of the Design Services Group at Renesas, and Dr. Antun Domic, CTO at Synopsys. Also, Senior Analyst Mike Demler with The Linley Group commented on the technology in the Arteris press release on PIANO 2.0.

Arteris PIANO 2.0 offers an effective solution for getting rapidly to timing closure with all the added benefits of an NoC architecture. This is not an incremental improvement either. It dramatically improves area, congestion, power and timing. Given that it works for coherent and non-coherent interconnect, it should be widely applicable to almost any design at 28nm or below.


ClioSoft Crushes it in 2016!
by Daniel Nenni on 03-09-2017 at 7:00 am

If you are designing chips in a competitive market with multiple design teams, and IP reuse is a high priority, then you probably already know about the ClioSoft SOS Platform. What you probably did not know, however, is how well they are doing with the re-architected version of their integrated design and IP management software.


We have been covering ClioSoft since SemiWiki started in 2011 and have published 71 blogs that have been viewed almost 300,000 times by people all over the world so we know them quite well. You can see the ClioSoft SemiWiki landing page HERE. One thing you will notice is that ClioSoft has a very loyal customer base and they are not shy about sharing their experiences with the ClioSoft software and heaping praise on the company. The other thing you should know about ClioSoft is that for a relatively small company, they throw a very big customer and partner appreciation party at DAC!

In general we do not publish press releases, but I believe that ClioSoft’s accomplishments in 2016 deserve special recognition, so here it is:

ClioSoft Closes 2016 with Continued Growth
Best-in-class design collaboration platform drives new contracts, customers and renewals

 

FREMONT, Calif., February 28, 2017 — ClioSoft, Inc., a leader in system-on-chip (SoC) design data and intellectual property (IP) management solutions for the semiconductor design industry, today reported a 20% increase in new bookings for 2016 along with further adoption of ClioSoft’s SOS7 design management platform by existing customers. Thirty new accounts were added to ClioSoft’s existing customer base of over 200 customers in 2016. The rise in bookings was due to increased growth in analog and RF designers adopting ClioSoft’s SOS solution and an upsurge for its IP management solution.

SOS Virtuoso and SOS ADS, used by analog and RF designers, are built on top of the SOS7 design management platform. The SOS7 design management platform enables designers to work with other team members, located either locally or at remote design sites, to build and collaborate on the same design, from concept to GDSII.

“It has been a good year for us, especially for the SOS7 Design Management Platform,” said Srinath Anantharaman, founder and CEO of ClioSoft. “SOS7, which is the re-architected update to the existing SOS design management platform, is being received very well amongst our customers. Since its release about a year and a half ago, a number of companies have started to standardize on the SOS7 design management platform, which has been built for performance, security and reliability. SOS7 takes design collaboration to a whole new level and has helped us win enterprise accounts from our competition.”

“ClioSoft’s SOS7 design management platform has helped us collaborate efficiently between designers located at multiple sites and improve the productivity of our design teams,” said Linh Hong, Vice President and General Manager of Kilopass OTP Division. “It is important for us to manage the numerous design revisions and at the same time enable the design teams to work efficiently. The tight integration of SOS with EDA tools such as the Cadence® Virtuoso® Platform makes it easy for our engineers to develop the next generation memories and work together without stepping on each other’s toes. SOS provides high performance and the flexibility needed to manage the handoffs of complex design flows including fine grained access control to our project data. Moreover, the quality and responsiveness of ClioSoft’s support team is outstanding.”

ClioSoft provides the only design management platform for multi-site design collaboration for all types of designs – analog, digital, RF and mixed-signal. By facilitating easy design handoffs along with secure and efficient sharing of design data from concept through tape-out, the SOS7 platform allows multi-site design collaboration for dispersed development teams. Tight integration of SOS7 with several EDA tools from Cadence, Keysight Technologies, Mentor Graphics and Synopsys® provides a cohesive design environment for all types of designs and enables designers across multiple design centers to increase productivity and efficiency in their complex design flows. In addition to enabling design engineers to manage design data and tool features from the same cockpit, SOS7 provides integrated revision control, release and derivative management and an issue tracking interface to commonly used bug tracking systems. Using SOS7 helps reduce the possibility of design re-spins.

About ClioSoft
ClioSoft is the pioneer and leading developer of system-on-chip (SoC) design configuration and enterprise IP management solutions for the semiconductor industry. The company’s SOS7 Design Collaboration Platform, built exclusively to meet the demanding requirements of SoC designs, empowers multi-site design teams to collaborate efficiently on complex analog, digital, RF and mixed-signal designs.

The collaborative IP management system from ClioSoft is part of the overall SOS Design Collaboration Platform. The IP management system improves design reuse by providing an easy-to-use workflow for designers to manage the process of shopping for, consuming and producing new IPs. ClioSoft customers include the top 20 semiconductor companies worldwide. ClioSoft is headquartered at 39500 Stevenson Place, Suite 210, Fremont, CA, 94539. For more information visit us at www.cliosoft.com.

Also Read

CEO Interview: Srinath Anantharaman of ClioSoft

Qorvo Uses ClioSoft to Bring Design Data Management to RF Design

Qorvo and KeySight to Present on Managing Collaboration for Multi-site, Multi-vendor RF Design


Synopsys and PhoeniX Demo Photonic IC Flow Using AIM PDK at OFC
by Mitch Heins on 03-08-2017 at 12:00 pm


Synopsys has long been known for its leading position in the digital logic synthesis world. More recently, however, the company started delving into the world of photonic integrated circuit (PIC) design. Synopsys started down this path from the system level with a 2010 acquisition of Optical Research Associates and their CODE V and LightTools products. These products deal with discrete optical components and free-space optics. They then moved down into the photonic IC level with a 2012 acquisition of RSoft Design Group that brought with them several very good photonic simulation engines.

The RSoft group, now part of Synopsys’ Optical Solutions Group, brought with it simulation tools at both the system and component levels. At the component level, these tools are very much like what Synopsys offers for electronic technology CAD (TCAD). Their offerings include tools for mode solving, beam propagation (BPM and FDTD) and modeling for diffraction elements, gratings and active devices like lasers, to name a few.

At the system level, RSoft brought into Synopsys a simulator called OptSim. OptSim is well known in the photonics industry and was historically focused on telecommunications systems and datacom applications. Synopsys continues to sell this software and has also augmented the platform and pushed it into the PIC world with release of OptSim Circuit. OptSim Circuit is, as its name implies, a circuit level simulation environment for PICs.

I say environment because, like its predecessor OptSim, OptSim Circuit brings with it all the features you need to easily capture and simulate a PIC design. OptSim Circuit uses an easy-to-learn drag and drop schematic capture environment that features hierarchy and a library of parameterized high-level photonic building blocks that can be used to create a circuit. The platform enables designers to model at different levels of abstraction and then verify the functional implementation against system models used during system level simulations.

OptSim Circuit includes the ability to easily set up complex test benches for multiple what-if scenarios and allows designers to run sweeps through parameter ranges and to check for impacts of process variations on critical design parameters. The software also enables visualization of simulation outputs in terms familiar to system designers such as eye diagrams, BER, FSR, etc.
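To give a flavor of the kind of sweep this enables, here is a hand-rolled toy example in Python that sweeps a ring resonator’s circumference across an assumed fabrication tolerance and reports the resulting free spectral range, using the textbook approximation FSR ≈ λ²/(n_g·L). The group index, nominal circumference and tolerance are illustrative assumptions, and this code has nothing to do with OptSim Circuit’s internals.

```python
# Toy parameter sweep: effect of ring-circumference variation on FSR.
# Uses the textbook approximation FSR = lambda^2 / (n_group * L).
# All numbers are illustrative assumptions, not foundry or tool data.

wavelength_nm = 1550.0
n_group       = 4.2          # assumed group index of a silicon waveguide
nominal_L_um  = 100.0        # nominal ring circumference

def fsr_nm(circumference_um):
    L_nm = circumference_um * 1000.0
    return wavelength_nm**2 / (n_group * L_nm)

for variation in (-0.02, -0.01, 0.0, 0.01, 0.02):   # +/-2% process variation
    L = nominal_L_um * (1.0 + variation)
    print(f"L = {L:7.2f} um  ->  FSR = {fsr_nm(L):.3f} nm")
```

A circuit simulator does far more than this, of course, but the workflow of sweeping a parameter range and watching a critical figure of merit move is the same.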

More recently, Synopsys completed a PDA Flow link with PhoeniX Software to enable a bridge between the system- and circuit-level functional PIC design and an implementation of the design targeted to a specific foundry and packaging technology. PhoeniX Software specializes in PIC design layout tools. Their product, OptoDesigner, also uses PDKs and a library of high-level parameterized photonic building blocks. Using the PDA Flow link, PhoeniX can take the circuit from OptSim Circuit and synthesize a foundry-specific DRC correct layout that matches the design intent as described by the designer in OptSim Circuit.

Synopsys and PhoeniX will be jointly demonstrating this new flow at the Optical Fiber Communication Conference (OFC) in Los Angeles in a couple of weeks. The demo will be given at the Synopsys booth #2519 on March 21st and 22nd from 10:00a to 12:00p and 2:00p to 4:00p, respectively. A feature of this demo is that they will be using the AIM Photonics PDK to show that this flow can in fact be used to create designs that can be fabricated now on MPW runs using the AIM Photonics silicon and silicon nitride processes. Synopsys and PhoeniX have also completed work with the LioniX foundry in Europe to support this same flow with the LioniX Triplex (silicon nitride) technology.

While this flow is similar to what is done in the electronic world between capture/simulation and place and route, it is also very different. The differences come from the fact that the waveguides that connect photonic components are not simple connectors. With the right conditions, waveguides can actually become components in their own right that dramatically affect the signal. They can even be used to modulate the signal depending on how they are built. Given this phenomenon, the photonics schematic takes on a much larger role than in electronics, in that the designer needs to comprehend some of these layout effects even while working in an abstract schematic.

The flow between Synopsys and PhoeniX enables this by allowing the designer to capture schematic connections as a combination of waveguide segments. These segments can be modeled and simulated with the rest of the circuit to allow the designer to ensure correct functionality. The segmented connections are then passed on to the PhoeniX layout tools through the PDA link, where they are synthesized per the parameters in the schematics. Placement of components and waveguide segments is accomplished by the PhoeniX tools based on the relative placements of components and waveguide segments in the schematic. PhoeniX uses something called elastic connectors along with the relative placement information of components and waveguide segments to enable the physical placement and routing of the waveguides to meet the designer’s intent.

If you are curious to see how this all works, Synopsys and PhoeniX encourage you to stop by the Synopsys booth at OFC at the appointed times.

See also:
Synopsys RSoft Products Web Page
Synopsys / PhoeniX PDA Link Flow


The Real Lesson from the AWS Outage
by Matthew Rosenquist on 03-08-2017 at 7:00 am

The embarrassing outage of Amazon Web Services this week should open our eyes to a growing problem. Complex systems are difficult to manage, but if they are connected in dependent ways, a fragile result emerges. Such structures are subject to unexpected malfunctions which can sprawl quickly. One of the most knowledgeable technology companies on the planet learned just such a lesson this week. Amazon’s star-child, their cloud services, had a major disruption. It was not a nation-state attack, sophisticated teams of cyber-hackers, or even malicious insiders bent on destruction. Nonetheless, the lessons are telling, and their ramifications will be important to all of us.

Summary of the Amazon S3 Service Disruption: We’d like to give you some additional information about the service disruption that occurred in the Northern Virginia (US-EAST-1) Region on the morning of February 28th. The Amazon Simple Storage Service (S3) team was debugging an issue causing the S3 billing system to progress more slowly than expected. At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended…

It was one employee, mistyping a command input, that caused a significant outage to major portions of the Internet. Amazon worked furiously to contain and recover from the incident. It will have to rebuild trust with customers who were sold on the resiliency of ‘cloud’ services to avoid such events. Amazon has already stated they will learn from the event and will apply some compartmentalization controls to lessen potential damage in the future. But there is a more significant realization to be made.
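The compartmentalization point is worth making concrete. The Python sketch below shows the kind of blast-radius guardrail that blunts a mistyped capacity-removal command: refuse to take more than a fixed fraction of a fleet offline in one operation. It is purely illustrative and has nothing to do with Amazon’s actual playbooks or tooling.

```python
# Hypothetical guardrail for a capacity-removal command: never take more than
# a fixed fraction of a subsystem offline in one operation. Illustrative only.

MAX_REMOVAL_FRACTION = 0.05    # cap any single removal at 5% of the fleet

class RemovalRefused(Exception):
    pass

def remove_capacity(fleet, requested):
    """Remove the requested servers unless the request exceeds the blast-radius cap."""
    limit = max(1, int(len(fleet) * MAX_REMOVAL_FRACTION))
    if len(requested) > limit:
        raise RemovalRefused(
            f"Asked to remove {len(requested)} of {len(fleet)} servers; cap is "
            f"{limit}. Re-run in smaller batches or obtain an explicit override.")
    return [s for s in fleet if s not in set(requested)]

fleet = [f"server-{i}" for i in range(1000)]
fleet = remove_capacity(fleet, ["server-1", "server-2"])   # fine
# remove_capacity(fleet, fleet[:300])                      # raises RemovalRefused
```

A guardrail like this does not prevent mistakes; it just keeps any single mistake small enough to recover from quickly.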

The greater lesson for us all is that when hugely sophisticated systems interconnect with each other, there is an exponential increase in complexity. Due to reliance, authority, and trust, these structures can fail in spectacular fashion. The AWS example shows how such a situation allows a series of cascading, unintended effects that could not easily have been predicted to occur and cause widespread impacts. As bad as it may have appeared, it was not too severe. If it were an intentional attack from a capable, motivated, and sophisticated attacker, I believe the results would have been catastrophic.

With the AWS outage we can see the impact of an unintentional accident and the difficulty to recover when everyone is working together to resolve the issue. Now imagine what a malicious and focused cyber-threat could do while being stealthy, striving for maximum damage, and actively undermining countermeasures and recovery actions of response teams.

If this were a malicious insider or professional hack, the damage would be a thousand times worse. We would still be picking up the shattered pieces. There would be tears falling from the AWS cloud.

This week it was cloud storage services making websites unavailable. What happens when it is a fleet of autonomous vehicles that puts lives at risk, or the complex national power grid infrastructure?

We must take a fresh look at understanding threats, risks, countermeasures, and protection practices as individual pieces of the computing world grow much more complex and become ever more connected. Traditional methods are not sufficient for understanding how chain reactions can occur in the next generation of new technologies and services.

Interested in more? Follow me on Twitter (@Matt_Rosenquist), Steemit, and LinkedIn to hear insights and what is going on in cybersecurity.


Automotive OEMs Get Boost as NetSpeed NoC is Certified ISO 26262 Ready
by Mitch Heins on 03-07-2017 at 12:00 pm


I read with great interest today news from NetSpeed Systems that both their Gemini and Orion NoC IPs have been certified ISO 26262 ASIL D ready. They were certified by SGS-TUV Saar GmbH, an independent accredited assessor. This is a big deal because, up till now, it was left to the OEMs to do most of the heavy lifting to qualify their IC’s interconnect for the ISO automotive functional safety standard. To be clear, they still do; however, if they use NetSpeed’s certified NoC IP, a significant burden has been lifted.

To compete in the automotive space, SoC platforms are created and derivatives are generated for market segment differentiation. Many of the big blocks remain the same for the derivatives while new blocks are added and configurations of blocks are changed. The interconnect portion, however, always changes when doing derivatives. Each time this happens, designers have to re-create a new NoC based on a new floorplan and different anticipated traffic patterns, QoS and safety/security requirements for the design. Doing this by hand is a big burden for designers, especially when you factor in that they must make sure the new NoC now meets all of the new QoS, power, performance and safety requirements and is once again ISO 26262 compliant to the ASIL level required.

NetSpeed’s synthesis capabilities make the task of creating a new NoC incredibly easy. Designers can quickly change constraints and then re-synthesize the NoC. The cool part is that NocStudio, the synthesis tool doing all of this work, now understands the ISO 26262 standard and can give designers an estimate of the new NoC’s ISO 26262 ASIL score and level before it is even synthesized.

At this point, it should be noted that the NetSpeed NoC IP has been certified ready for ASIL-B (90% SPFM) through ASIL-D (99% SPFM) levels depending on how the NoC is configured. It should also be noted that NetSpeed’s solution is the first coherent NoC IP to be certified ISO 26262 ready. This is especially important for state-of-the-art automotive SoCs targeted for autonomous vehicles. Those SoCs have complex interactions among heterogeneous CPU cores, clusters, vision processors and storage and the complexity has gotten to the point that it has become nearly impossible to build these types of interconnects by hand. NetSpeed takes on this challenge leveraging advanced machine learning algorithms to build correct-by-construction designs that can manage the complexity while also ensuring coherency and functional safety as part of the solution.
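For readers unfamiliar with the metric, SPFM is the single-point fault metric defined in ISO 26262-5: the fraction of the safety-related hardware failure rate that is not attributable to single-point or residual faults. The ASIL B and ASIL D figures quoted above correspond to the standard’s thresholds of at least 90% and 99% respectively. Roughly,

\[
\mathrm{SPFM} = 1 - \frac{\sum_{\text{safety-related HW}} \left(\lambda_{\mathrm{SPF}} + \lambda_{\mathrm{RF}}\right)}{\sum_{\text{safety-related HW}} \lambda}
\]

where λ_SPF and λ_RF are the failure rates attributable to single-point and residual faults and λ is the total failure rate of each safety-related hardware element. Hitting 99% therefore means nearly all credible faults are either detected by a safety mechanism or cannot violate the safety goal.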

From the ISO 26262 point of view, NetSpeed’s architecture has safety built in at multiple levels, including defect checks for both end-to-end and hop-to-hop failures. Additionally, NetSpeed lets the designer fully specify NoC master-slave relationships not only in terms of QoS and security, but also for specific ASIL targets. Unlike other NoCs, NetSpeed’s NoC IP enables the designer to customize the NoC to be as heterogeneous as the design it serves. Master-slave relationships can be set up for varying ASIL coverage and secure and/or non-secure data transmission. Specific masters can also be blocked from specific address ranges that may include multiple slaves. This can be done at synthesis time, creating a hardwired firewall, or dynamically at run time, without the need to split the interconnect.
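As an illustration of that last capability, the check that such an address firewall performs is conceptually simple. The Python below is a conceptual sketch only: the master names, address ranges and data structure are invented and bear no relation to NetSpeed’s configuration format or implementation.

```python
# Conceptual sketch of a NoC address firewall: specific masters are blocked
# from specific address ranges (which may span multiple slaves).
# Invented names and ranges; not NetSpeed's configuration format.

blocked_ranges = {
    "dsp0": [(0x4000_0000, 0x4FFF_FFFF)],              # safety island registers
    "gpu0": [(0x4000_0000, 0x4FFF_FFFF),
             (0x8000_0000, 0x800F_FFFF)],              # secure boot window
}

def access_allowed(master, addr):
    """Return False if addr falls in any range blocked for this master."""
    for lo, hi in blocked_ranges.get(master, []):
        if lo <= addr <= hi:
            return False
    return True

assert access_allowed("dsp0", 0x1000_0000)
assert not access_allowed("gpu0", 0x8000_1000)
```

The interesting engineering is in where the check lives: baked into the synthesized interconnect as a hardwired firewall, or evaluated at run time so the policy can change without splitting the interconnect.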

This brings me to my last point. As with most problems, the best solutions are those that take a problem into account holistically from the beginning, when early design trade-offs can be made with more degrees of freedom. Adding features like NoC coherency and functional safety onto an existing fixed architecture is extremely costly, both in terms of system performance and area. NetSpeed’s ability to synthesize in and optimize both of these functionalities at different levels of granularity makes a huge difference in the quality of the design generated.

A key point here is that NetSpeed is unique in their ability to optimize not only specific QoS, power, performance and area metrics, but also specific ISO 26262 ASIL levels in different parts of the system. You can’t do this if you don’t look at the problem holistically.

Interestingly, the ISO standard reviews not only your design, but also your design team and how they do their work. The reason the NetSpeed team is now certified ready for ISO 26262 is that they think holistically and methodically, and it shows in their products.

See also
Press release link
NetSpeed Web Page


Perspective in Verification
by Bernard Murphy on 03-07-2017 at 7:00 am

At DVCon I had a chance to discuss PSS and real-life applications with Tom Anderson (product management director at Cadence). Tom is very actively involved in the PSS working group and is now driving the Cadence offering in this area (Perspec System Verifier), so he has a pretty good perspective on the roots, the evolution and practical experiences with this style of verification.


PSS grew out of the need to address an incredibly complex system verification problem, which users vociferously complained was not being addressed by industry-standard test-bench approaches (DVCon 2014 hosted one entertaining example). High on the list of complaints were challenges in managing software and use-case-based testing in hardware-centric languages, reusability of tests across diverse verification engines and across IP, sub-system and SoC testing, and in managing test of complex constraints such as varying power configurations layered on top of all that other complexity. Something new was obviously needed.

Of course, the hope in cases like this is “#1: make it handle all that additional stuff, #2: make it incremental to what I already know, #3: minimize the new stuff I have to learn”. PSS does a pretty good job with #1 and #3 but some folks may feel that it missed on #2 because it isn’t an incremental extension to UVM. But reasonable productivity for software-based testing just doesn’t fit well with being an extension to UVM. Which is not to say that PSS will replace UVM. All that effort you put into learning UVM and constrained-random testing will continue to be valuable for a long time, for IP verification and certain classes of (primarily simulation-based) system verification. PSS is different because it standardizes the next level up in the verification stack, to serve architects, software and hardware experts and even bring-up experts.

That sounds great but some observers wonder if it is over-ambitious, a nice vision which will never translate to usable products. They’re generally surprised to hear that solutions of this type are already in production and have been in active use for a few years; Perspec System Verifier is a great example. These tools predate the standard so input isn’t exactly PSS but concepts are very similar. And as PSS moves towards ratification, vendors are busy syncing up, just as they have in the past for SVA and UVM. Tom told me that officially the standard should be released in the second half of 2017.

How does PSS work? For reasons that aren’t important here, the standard allows for two specification languages: DSL and a constrained form of C++. I’m guessing many of you will lean to DSL so I’ll base my 10-cent explanation on that language (and I’ll call it PSS to avoid confusion). The first important thing to understand is that PSS is a declarative language, unlike most languages you have seen, which are procedural. C and C++ are procedural, as are SV, Java and Python. Conversely, scripting for yacc, Make and HTML is declarative. Procedural languages are strong at specifying exactly “how” to do something. Declarative languages expect a definition of “what” you want to do; they’ll figure out how to make that happen by tracing through dependencies and constraints, eventually getting down to leaf-level nodes where they execute little localized scripts (“how”) to make stuff happen. If you’ve ever built a Makefile, this should be familiar.

PSS is declarative and starts with actions which describe behavior. At the simplest level these can be things like receiving data into a UART or DMA-ing from the UART into memory. You can build up compound actions from a graph of simple actions and these can describe multiple scenarios; maybe some steps can be (optionally) performed in parallel, some must be performed sequentially. Actions can depend on resources and there can be a finite pool of resources (determining some constraints).

Then you build up higher-level actions around lower-level actions, all the way up to run multiple scenarios of receiving a call, Bluetooth interaction, navigating, power-saving mode-switching and whatever else you have in the kitchen sink. You don’t have to figure out scenarios through the hierarchy of actions; just as in constrained random, a tool-specific solver will figure out legal scenarios. Hopefully you begin to get a glimmer of the immense value in specifying behavior declaratively in a hierarchy of modules. You specify the behavior for a block and that can be reused and embedded in successively higher-level models with no need for rewrites to lower-level models.
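Since most readers are more at home with procedural code, a toy analogue may help. The Python below is emphatically not PSS syntax; it only mimics the core idea that actions declare their dependencies and a solver (here a trivial randomized topological sort) finds legal scenario orderings for you, so one declarative description yields many legal scenarios. Resource pools and constraints are omitted for brevity, and the action names are invented.

```python
import random

# Toy analogue of declarative scenario description (not PSS syntax): actions
# declare what they depend on, and a tiny "solver" picks a legal ordering.

actions = {
    "uart_rx":    {"deps": []},
    "dma_to_mem": {"deps": ["uart_rx"]},
    "cpu_check":  {"deps": ["dma_to_mem"]},
    "bt_scan":    {"deps": []},
}

def solve(actions):
    """Return one randomized ordering that respects all declared dependencies."""
    done, order = set(), []
    while len(done) < len(actions):
        ready = [a for a, spec in actions.items()
                 if a not in done and all(d in done for d in spec["deps"])]
        pick = random.choice(ready)      # randomization over the legal choices
        order.append(pick)
        done.add(pick)
    return order

for _ in range(3):
    print(solve(actions))   # e.g. ['bt_scan', 'uart_rx', 'dma_to_mem', 'cpu_check']
```

A real PSS solver also juggles resources, data flow and constraints, but the principle is the same: you describe what must be true, and the tool enumerates scenarios you would never have written by hand.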


Of course, I skipped over a rather important point in the explanation so far; at some point this must drop down to real actions (like the little scripts on Makefile leaf-nodes). And it must be able to target different verification platforms – where does all that happen? I admit this had me puzzled at first, but Tom clarified for me. I’m going to use Perspec to explain the method, though the basics are standard in PSS. An action can contain an exec body. This could be a piece of SV, or UVM (maybe instantiating a VIP) or C; this is what ultimately will be generated as part of the test to be run. C might run on an embedded CPU, or in a virtual model connected to the DUT, or may drive a post-silicon host bus adapter. I’m guessing you might have multiple possible exec blocks depending on the target, but I confess I didn’t get deeper on this.
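Continuing the toy analogue (again, neither PSS nor Perspec syntax, and the snippets are invented): leaf actions carry target-specific exec snippets, so one solved scenario can be emitted as, say, SystemVerilog for simulation or C for an embedded core.

```python
# Toy analogue of exec bodies: each leaf action carries per-target code snippets,
# and one solved scenario is emitted differently per verification engine.
# Invented snippets; not PSS or Perspec syntax.

exec_bodies = {
    "uart_rx":    {"sv": "uart_vip.recv_bytes(16);",
                   "c":  "uart_poll_rx(buf, 16);"},
    "dma_to_mem": {"sv": "dma_seq.start(env.dma_sqr);",
                   "c":  "dma_copy(UART_FIFO, buf, 16);"},
}

def emit_test(scenario, target):
    """Concatenate the target-specific exec snippets for a solved scenario."""
    return "\n".join(exec_bodies[a][target] for a in scenario if a in exec_bodies)

scenario = ["uart_rx", "dma_to_mem"]
print("--- simulation test ---")
print(emit_test(scenario, "sv"))
print("--- embedded C test ---")
print(emit_test(scenario, "c"))
```

This is the sense in which the same abstract scenario can run on simulation, emulation, prototyping or silicon: only the leaf-level exec bodies change.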

So in the Perspec figure above, once you have built a system-level test model with all kinds of possible (hierarchically-composed) paths, the Perspec engine can “solve” for multiple scenario instances (each spit out as a separate test), with no further effort on your part. And tests can be targeted to any of the possible verification engines. Welcome to a method that can generate system-level scenarios faster than you could hope to, with better coverage than you could hope to achieve, runnable on whichever engine is best suited to your current objective (maybe you want to take this test from emulation back into simulation for closer debug? No problem, just use the equivalent simulation test).


We’re nearly there. One last question – where do all those leaf-level actions and exec blocks come from? Are you going to have to build hundreds of new models to use Perspec? Tom thinks that anyone who supplies IPs is going to be motivated to provide PSS models pretty quickly (especially if they also sell a PSS-based solution). Cadence already provides a library for the ARM architecture and an SML (system methodology library) to handle modeling for memories, processors and other components. They also provide a method to model other components starting from simple Excel tables. He anticipates that, as the leading VIP supplier, Cadence will be adding support for many of the standard interface and other standard components over time. So you may have to generate PSS models for in-house IP, but it’s not unreasonable to expect that IP and VIP vendors will quickly catch up with the rest.

This is well-proven stuff. Cadence already has public endorsements from TI, MediaTek, Samsung and Microsemi. These include customer claims for 10x improvement in test generation productivity (Tom told me the Cadence execs didn’t believe 10x at first – they had to double-check before they’d allow that number to be published.) You can get a much better understanding of Perspec and a whole bunch of info on customer experiences with the approach HERE.
