
Understanding Sources of Clock Jitter Critical for SoCs
by Tom Simon on 05-29-2017 at 12:00 pm

Jitter issues in SoCs reside at the crossroads of analog and digital design. Digital designers would prefer to live in a world of clocks free from jitter effects. At the same time, analog designers can build PLLs that are precise and finely tuned. However, when a perfectly working PLL is inserted into an SoC, things can get complicated.

On the digital side, deterministic jitter can wreak havoc with timing. Deterministic jitter comes from sources such as supply noise and duty cycle distortion. The other variety, random jitter, comes mostly from variations in the VCO or PLL over a number of cycles. Random jitter is often hard to measure because it can be buried in the much higher-amplitude deterministic jitter.

In a presentation titled “Supply Noise Induced Jitter – Don’t Let it Kill Your Chip”, Silicon Creations posits that deterministic jitter (the kind that can seriously affect SoC timing) is often largely a result of supply noise. They have seen deterministic jitter of up to 450ps. It’s worth noting that analog interface IP is also affected by jitter, potentially leading to higher BER and poorer SNR.

Looking in more detail at the digital side, we already know that gate delay depends on voltage and temperature. Supply voltage variations caused by noise alter gate delay, which affects clocks as well as control logic and data paths. Gradual shifts in supply voltage are not nearly as bad as abrupt changes while a clock edge is in flight; the latter can turn an extremely clean clock signal into one with detrimental levels of jitter. Additionally, longer paths are more susceptible to jitter effects.

Silicon Creations has a useful formula, shown above, for estimating period jitter. You can see it depends heavily on the peak-to-peak change in the difference between Vdd and Vss, as well as on the difference between Vdd and Vt. Silicon Creations is upfront that this is an approximation, not a rigorous equation. To make the formula useful, though, there needs to be a way to estimate changes in Vdd, and their presentation goes into detail on methods for doing so.
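
To make the relationship concrete, here is a minimal Python sketch of a first-order model in which period jitter scales with the peak-to-peak supply variation relative to the gate overdrive (Vdd minus Vt). The scaling and the numbers are illustrative assumptions only, not the formula from the Silicon Creations presentation.

    # Rough first-order estimate of supply-noise-induced period jitter.
    # Illustrative assumption only -- not the exact formula from the presentation.
    def estimated_period_jitter_ps(period_ps, vdd, vt, v_noise_pp):
        """Scale the clock period by the peak-to-peak change in (Vdd - Vss)
        relative to the gate overdrive (Vdd - Vt), which dominates gate-delay
        sensitivity to supply noise."""
        overdrive = vdd - vt
        return period_ps * (v_noise_pp / overdrive)

    # Hypothetical example: 1 GHz clock, Vdd = 0.8 V, Vt = 0.35 V, 50 mV peak-to-peak noise
    print(estimated_period_jitter_ps(1000.0, 0.8, 0.35, 0.05))  # roughly 111 ps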

Alternatively, in many cases it can prove useful to assemble a flow for modeling jitter rather than relying on a rule of thumb such as the one shown above. Silicon Creations’ customers have done this with a flow based on Voltus and Tempus from Cadence. It is necessary to look at activity information and use it to derive per-instance dynamic voltage drop information. Of course, thermal information is needed too.

Silicon Creations closes their presentation with suggestions on how to fix chips that are broken due to jitter-related issues. Their interest in this topic comes from wanting to ensure design success for customers of their PLL IP. They have extensive experience working with SoC design teams to find solutions to chip-level timing and jitter issues. Clearly this is a case of having to bring together expertise in the digital and analog domains to craft the best technical resolution.

If you want to look over the entire presentation, you can find it on their website.


FD-SOI in Japan?
by Adele Hars on 05-27-2017 at 7:00 pm

If you want to get your finger on the Japan FD-SOI pulse, registration is still open for a free, two-day workshop in Tokyo this week organized by the SOI Consortium. This is the 3rd Annual SOI Tokyo Workshop, and there’s a really interesting line-up of speakers.

In case you’re wondering, Japan is doing FD-SOI. In fact the history is long and deep, with researchers presenting papers on it — especially for ultra-low power — at top conferences for over a decade. Now it’s out of the lab and into the fab. Sony, for example, is using FD-SOI for IoT chips (the first of which, a GPS chip that cut power to 1/10 of the previous generation, was unveiled at last year’s workshop). And Renesas is using it for automotive and ultra-low-power IoT (which they just presented at ICICDT). In Japan, it was originally called SOTB, for Silicon on Thin Box. But that’s just another name for FD-SOI.

(Image courtesy: ©Tokyo Convention & Visitors Bureau)

This year the SOI/FD-SOI workshop will be held over the course of two days, May 31st and June 1st. Click here for registration information on the SOI Consortium website. (While there is no charge for the event, you should register in advance to guarantee your place.) You’ll find the full program here. Here’s a quick overview of what’s on tap.

May 31st – Ecosystem in the Afternoon
Wednesday, May 31st there’s an afternoon session on the FD-SOI Ecosystem hosted by Silvaco. There will be presentations from GlobalFoundries and IP/design companies Synopsys, Silvaco, Invecas and Attopsemi, as well as the SOI Consortium. It will be held on the 25th floor of the Yokohama Landmark Tower. A reception at the end of the day will give participants an extended opportunity to network with the speakers and other attendees.

June 1st – Full Day of Convergence, IoT & Automotive
On Thursday, June 1st the focus is on Convergence of IoT, Automotive Through Connectivity. This is a full-day workshop; it will be held at Tokyo University’s Takeda Hall.

First up will be talks on ultra-low power applications from Sony IoT and Samsung. Next there’ll be execs from Imagination/MIPS, IHS Markit and Leti talking about automotive technologies. In the afternoon, GlobalFoundries, Cadence, Nokia and ST take a deep dive into IoT, Connectivity and Infrastructure. And finally, the day ends with talks by key supply chain providers Applied Materials, Soitec and Screen.

There will be a couple of coffee breaks and lunch (yes, of course you’ll get lunch), which will give attendees and speakers time for networking.

These workshops are really popular events — people fly in from all over the world for them. If you want to get an idea of what they’re like, click here to check out the presentations from last year.

If you’re in the region (it is, after all, the week before VLSI in Kyoto), it really is a great opportunity.


Webinar – New Concepts in Semiconductor IP Lifecycle Management
by Daniel Payne on 05-26-2017 at 7:00 am

The semiconductor IP market continues growing at a healthy rate, and IP reuse is a staple of all modern SoC designs. Along with the acceptance of IP reuse comes a host of growing challenges, like:

  • Increase in design files
  • Increase in meta-data
  • More links between design members worldwide
  • More links between data in multiple engineering systems

Companies like Methodics have been serving the IP lifecycle management segment for many years now. However, there comes a point where the increases in design complexity call for a new approach. To find out what is coming next, you are invited to a webinar where their next-generation platform is being unveiled:

Today’s complex SoC design requires a new level of internal and external design traceability and reuse by tightly coupling IP creators with IP consumers. Join us for the introduction of an exciting new platform that allows companies to provide the transparency and control needed to streamline collaboration by providing centralized cataloging, automated notifications to design teams, flexible permissions across projects, and integrated analytics across diverse engineering systems. Come see how companies are realizing substantial cost and time-to-market savings by adopting IP lifecycle management methodologies.

When: June 8th, 10AM PDT

Where: Online Webinar

Moderator: Daniel Nenni

Presenters: Michael Munsey, Rien Gahlsdorf (live demo), Vishal Moondrha

Who Should Attend
Design engineers, project managers, program managers, verification engineers and SoC integrators.

About Methodics
Methodics delivers state-of-the-art IP Lifecycle Management, Design Data Management, and Storage and Workspace optimization and acceleration tools for analog, digital, SoC, and software development design teams. Methodics’ customers benefit from the products’ ability to enable high-performance collaboration across multi-site and multi-geographic design teams. The company is headquartered in San Francisco, California, and has additional offices and representatives in the U.S., Europe, Israel, China, Taiwan, and Korea. For more information, visit http://www.methodics.com


Three Major Challenges Facing IoT
by Ahmed Banafa on 05-25-2017 at 12:00 pm

The Internet of Things (IoT), a universe of connected things providing key physical data and further processing of that data in the cloud to deliver business insights, presents a huge opportunity for many players across all businesses and industries. Many companies are organizing themselves to focus on IoT and the connectivity of their future products and services. For the IoT industry to thrive, there are three categories of challenges to overcome (and this is true for any new trend in technology, not only IoT): technology, business and society [1, 2, 3].

Technology
This category covers all the technologies needed to make IoT systems function smoothly, whether as standalone solutions or as part of existing systems, and that’s not an easy mission. There are many technological challenges, including Security, Connectivity, Compatibility & Longevity, Standards, and Intelligent Analysis & Actions [4].


Figure 1: Technological Challenges

Security:
IoT has already turned into a serious security concern that has drawn the attention of prominent tech firms and government agencies across the world. The hacking of baby monitors, smart fridges, thermostats, drug infusion pumps, cameras and even car radios points to the security nightmare the future of IoT could bring. The many new nodes being added to networks and the internet will provide malicious actors with innumerable attack vectors and possibilities to carry out their evil deeds, especially since a considerable number of them suffer from security holes.

The more important shift in security will come from the fact that IoT will become more ingrained in our lives. Concerns will no longer be limited to the protection of sensitive information and assets. Our very lives and health can become the target of IoT hack attacks [1].

There are many reasons behind the state of insecurity in IoT. Some of it has to do with the industry being in its “gold rush” state, where every vendor is hastily seeking to dish out the next innovative connected gadget before competitors do. Under such circumstances, functionality becomes the main focus and security takes a back seat.

Connectivity: Connecting so many devices will be one of the biggest challenges of the future of IoT, and it will defy the very structure of current communication models and the underlying technologies [2]. At present we rely on the centralized server/client paradigm to authenticate, authorize and connect different nodes in a network.

This model is sufficient for current IoT ecosystems, where tens, hundreds or even thousands of devices are involved. But when networks grow to join billions and hundreds of billions of devices, centralized systems will turn into a bottleneck. Such systems will require huge investments and spending in maintaining cloud servers that can handle such large amounts of information exchange, and entire systems can go down if the server becomes unavailable.

The future of IoT will very much have to depend on decentralizing IoT networks. Part of it can become possible by moving some of the tasks to the edge, such as using fog computing models where smart devices such as IoT hubs take charge of mission-critical operations and cloud servers take on data gathering and analytical responsibilities [5].

Other solutions involve the use of peer-to-peer communications, where devices identify and authenticate each other directly and exchange information without the involvement of a broker. Networks will be created in meshes with no single point of failure. This model will have its own set of challenges, especially from a security perspective, but these challenges can be met with some of the emerging IoT technologies such as Blockchain [6].

Compatibility and Longevity: IoT is growing in many different directions, with many different technologies competing to become the standard. This will cause difficulties and require the deployment of extra hardware and software when connecting devices.

Other compatibility issues stem from non-unified cloud services, lack of standardized M2M protocols and diversities in firmware and operating systems among IoT devices.

Some of these technologies will eventually become obsolete in the next few years, effectively rendering the devices that implement them useless. This is especially important since, in contrast to generic computing devices which have a lifespan of a few years, IoT appliances (such as smart fridges or TVs) tend to remain in service for much longer, and should be able to function even if their manufacturer goes out of business.

Standards: Technology standards, which include network protocols, communication protocols, and data-aggregation standards, are the sum of all activities of handling, processing and storing the data collected from the sensors [3]. Aggregation increases the value of data by increasing the scale, scope, and frequency of data available for analysis.

Challenges facing the adoption of standards within IoT

  • Standards for handling unstructured data: Structured data are stored in relational databases and queried through SQL, for example, while unstructured data are stored in different types of NoSQL databases without a standard querying approach (see the short illustration after this list).
  • Technical skills to leverage newer aggregation tools: Companies that are keen on leveraging big-data tools often face a shortage of talent to plan, execute, and maintain systems.
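
As a hypothetical illustration of that gap, the sketch below queries a sensor reading from a relational store with standard SQL and then filters schemaless, JSON-style documents in application code. The table name, fields and values are invented for the example.

    import sqlite3

    # Structured data: a relational table with a fixed schema, queried via standard SQL.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE readings (device_id TEXT, temp_c REAL, ts INTEGER)")
    db.execute("INSERT INTO readings VALUES ('sensor-01', 22.5, 1495700000)")
    rows = db.execute("SELECT device_id, temp_c FROM readings WHERE temp_c > 20").fetchall()
    print(rows)  # [('sensor-01', 22.5)]

    # Unstructured data: schemaless documents; each record may carry different fields,
    # so the filtering logic lives in application code rather than a standard query language.
    documents = [
        {"device_id": "sensor-02", "payload": {"temp_c": 25.1}},
        {"device_id": "cam-07", "payload": {"frame": "<binary blob>"}},
    ]
    hot = [d for d in documents if d["payload"].get("temp_c", 0) > 20]
    print(hot)  # [{'device_id': 'sensor-02', 'payload': {'temp_c': 25.1}}]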

Intelligent Analysis & Actions: The last stage in IoT implementation is extracting insights from data for analysis, where analysis is driven by cognitive technologies and the models that accompany them.

Factors driving the adoption of intelligent analytics within IoT

  • Artificial intelligence models can be improved with large data sets that are more readily available than ever before, thanks to the lower cost of storage.
  • Growth in crowdsourcing and open-source analytics software: Cloud-based crowdsourcing services are leading to new algorithms and improvements in existing ones at an unprecedented rate.
  • Real-time data processing and analysis: Analytics tools such as complex event processing (CEP) enable processing and analysis of data on a real-time or near-real-time basis, driving timely decision making and action (a minimal sketch follows this list).
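
As a rough, hypothetical illustration of the kind of real-time rule a CEP tool evaluates, the sketch below raises an alert when a sliding window of recent sensor readings crosses a threshold. A real CEP engine does far more; the window size and threshold here are invented for the example.

    from collections import deque

    WINDOW = 5          # number of most recent readings to consider (illustrative)
    THRESHOLD = 30.0    # alert when the windowed average exceeds this value (illustrative)

    def detect_events(readings):
        """Yield an alert whenever the average of the last WINDOW readings
        exceeds THRESHOLD -- a toy stand-in for a CEP rule."""
        window = deque(maxlen=WINDOW)
        for index, value in enumerate(readings):
            window.append(value)
            if len(window) == WINDOW and sum(window) / WINDOW > THRESHOLD:
                yield index, sum(window) / WINDOW

    stream = [22.0, 24.5, 29.0, 31.0, 33.5, 35.0, 28.0]
    for index, avg in detect_events(stream):
        print(f"alert at reading {index}: windowed average {avg:.1f}")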

Challenges facing the adoption of intelligent analytics within IoT

  • Inaccurate analysis due to flaws in the data and/or model: A lack of data or the presence of outliers may lead to false positives or false negatives, thus exposing various algorithmic limitations.
  • Legacy systems’ ability to analyze unstructured data: Legacy systems are well suited to handling structured data; unfortunately, most IoT/business interactions generate unstructured data.
  • Legacy systems’ ability to manage real-time data: Traditional analytics software generally works on batch-oriented processing, wherein all the data are loaded in a batch and then analyzed.

The second phase of this stage is intelligent actions, which can be expressed, for example, as M2M and M2H interfaces, aided by all the advancements in UI and UX technologies.

Factors driving adoption of intelligent actions within the IoT

  • Lower machine prices
  • Improved machine functionality
  • Machines “influencing” human actions through behavioral-science rationale
  • Deep Learning tools

Challenges facing the adoption of intelligent actions within IoT

  • Machines’ actions in unpredictable situations
  • Information security and privacy
  • Machine interoperability
  • Mean-reverting human behaviors
  • Slow adoption of new technologies

Business
The bottom line is a big motivation for starting, investing in, and operating any business. Without a sound and solid business model for IoT we will have another bubble. This model must satisfy all the requirements for all kinds of e-commerce: vertical markets, horizontal markets, and consumer markets. And this category is always subject to regulatory and legal scrutiny.

End-to-end solution providers operating in vertical industries and delivering services using cloud analytics will be the most successful at monetizing a large portion of the value in IoT. While many IoT applications may attract only modest revenue, some can attract more. With little added burden on the existing communication infrastructure, operators have the potential to open up a significant source of new revenue using IoT technologies.

IoT can be divided into three categories, based on usage and client base:

  • Consumer IoT includes the connected devices such as smart cars, phones, watches, laptops, connected appliances, and entertainment systems.
  • Commercial IoT includes things like inventory controls, device trackers, and connected medical devices.
  • Industrial IoT covers such things as connected electric meters, waste water systems, flow gauges, pipeline monitors, manufacturing robots, and other types of connected industrial devices and systems.


Figure 2: Categories of IoT

Clearly, it is important to understand the value chain and business model for the IoT applications for each category of IoT.

Society

Understanding IoT from the customers’ and regulators’ perspective is not an easy task, for the following reasons:

  • Customer demands and requirements change constantly.
  • New uses for devices, as well as new devices themselves, sprout and grow at breakneck speed.
  • Inventing and reintegrating must-have features and capabilities is expensive and takes time and resources.
  • The uses for Internet of Things technology are expanding and changing—often in uncharted waters.
  • Consumer Confidence: Each of these problems could put a dent in consumers’ desire to purchase connected products, which would prevent the IoT from fulfilling its true potential.
  • Lack of consumer understanding of, or education about, best practices for IoT device security that would help improve privacy, for example changing the default passwords of IoT devices.

Privacy
The IoT creates unique privacy challenges, many of which go beyond the data privacy issues that currently exist. Much of this stems from integrating devices into our environments without our consciously using them.

This is becoming more prevalent in consumer devices, such as tracking devices for phones and cars as well as smart televisions. In terms of the latter, voice recognition or vision features are being integrated that can continuously listen to conversations or watch for activity and selectively transmit that data to a cloud service for processing, which sometimes includes a third party. The collection of this information exposes legal and regulatory challenges facing data protection and privacy law.

In addition, many IoT scenarios involve device deployments and data collection activities with multinational or global scope that cross social and cultural boundaries. What will that mean for the development of a broadly applicable privacy protection model for the IoT?

In order to realize the opportunities of the IoT, strategies will need to be developed to respect individual privacy choices across a broad spectrum of expectations, while still fostering innovation in new technologies and services.

Regulatory Standards
Regulatory standards for data markets are missing, especially for data brokers, the companies that sell data collected from various sources. Even though data appear to be the currency of the IoT, there is a lack of transparency about who gets access to data and how those data are used to develop products or services and sold to advertisers and third parties. There is a need for clear guidelines on the retention, use, and security of the data, including metadata (the data that describe other data).

Ahmed Banafa
Named the No. 1 Top Voice to Follow in Tech by LinkedIn in 2016. This article was published on IEEE-IoT: Three Major Challenges Facing IoT

References

[1] http://www.microwavejournal.com/articles/27690-addressing-the-challenges-facing-iot-adoption
[2] https://blog.apnic.net/2015/10/20/5-challenges-of-the-internet-of-things/
[3] https://www.sitepoint.com/4-major-technical-challenges-facing-iot-developers/
[4] https://www.linkedin.com/pulse/iot-implementation-challenges-ahmed-banafa?trk=mp-author-card
[5] https://www.linkedin.com/pulse/why-iot-needs-fog-computing-ahmed-banafa?trk=mp-author-card
[6] http://iot.ieee.org/newsletter/january-2017/iot-and-blockchain-convergence-benefits-and-challenges.html

    CPU, GPU, H/W Accelerator or DSP to Best Address CNN Algorithms?
    by Eric Esteve on 05-25-2017 at 7:00 am

    If you read an article dealing with Convolutional Neural Networks (CNNs), you will probably hear about the battle between the CPU and the GPU, both off-the-shelf standard products. Addressing CNN processing needs with a standard CPU or GPU is like having to drive a screw when you only have a hammer or a monkey wrench available. You can debate for a while and finally come to the conclusion that the hammer is better suited than the monkey wrench, or decide that the GPU is better than the CPU (which is probably true). At least if you don’t know about the screwdriver…

    Now, just imagine that you discover the imaging/vision DSP (such as the Tensilica Vision P6 DSP IP from Cadence) and add this new option to the comparison chart, as done in the above table. The CPU and GPU options are good in terms of the “Ease of Development” and “Future Proofing” criteria, but so is the DSP core. When it comes to “Power Efficiency”, the GPU-based solution appears to be better than the CPU, and this will probably be the conclusion of the above-mentioned article. With no screwdriver available, it’s probably more convenient to use a hammer to drive the screw! But if you compare the power efficiency of the DSP-based solution with the GPU, you realize that the DSP core can be up to 10X better.

    Cadence has also compared the pure DSP solution with the approach that mixes a DSP (or CPU or GPU) with dedicated hardware accelerators (HA), which may seem attractive as it allows very specific tasks to be offloaded. In the case of CNN algorithms, the tasks offloaded to the HA are only the convolution layers, as all other neural network (NN) layers run on the imaging DSP, control CPU or GPU. This architecture leads to excessive data movement between the two processing elements (the NN accelerator and the main DSP/CPU/GPU). This data movement degrades the overall power efficiency, even if the accelerated convolution layers show good power efficiency on their own.

    But the main drawback is linked to CNN algorithm development trends. Realistic semiconductor-based solutions for CNNs are fairly recent (2012); at that time AlexNet was the preferred algorithm, requiring roughly 0.7 billion MACs per image. In less than four years, Inception and ResNet have been introduced and the computational requirements have jumped to about 5.7 billion MACs per image for Inception V3, or more than 11 billion for ResNet-152. This means that you would have to drop any hardware-accelerator-based solution designed only to support AlexNet.
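
    For readers who want to see where MAC counts like these come from, here is a small Python sketch of the standard per-layer arithmetic for a convolution layer (output pixels × output channels × input channels × kernel area). The layer dimensions are illustrative and not taken from any particular network.

        def conv_layer_macs(out_h, out_w, out_ch, in_ch, k_h, k_w):
            """Multiply-accumulate count for one convolution layer: every output
            pixel of every output channel needs a k_h x k_w x in_ch dot product."""
            return out_h * out_w * out_ch * in_ch * k_h * k_w

        # Illustrative layer: 56x56 output, 256 output channels, 128 input channels, 3x3 kernel
        print(conv_layer_macs(56, 56, 256, 128, 3, 3))  # about 925 million MACs for this one layer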

    Moreover, network architectures change regularly; Inception V3 and ResNet are based on smaller convolutions than AlexNet. If you select an inference hardware platform in 2017, the product will be shipping in 2019 or 2020. It will have to achieve good power efficiency (and DSP + HA does), but it will also have to stay flexible… and only a pure DSP-based embedded solution can do both.

    The problem becomes even worse when you need to scale the selected solution, for example to support surveillance or semi-autonomous automotive applications requiring about 1 TMAC/s, with a couple of neural networks running all the time. If you want to address (fully) autonomous automotive, the processing requirement moves up by one order of magnitude (10 TMAC/s) and several NNs have to run all the time.

    Cadence proposes a multi-core solution based on the Vision C5 DSP, each core being a complete, standalone solution that runs all NN layers (convolution, fully connected, normalization, pooling…). This solution scales elegantly compared with a multi-core approach using an imaging DSP + HA; in fact, the NN accelerator approach requires a vision/imaging DSP to be implemented alongside each core, as shown in the above picture.

    The Tensilica Vision C5 DSP for Neural Networks is a dedicated, NN-optimized DSP core (general purpose and programmable), architected for multi-processor designs and scaling to multiple TMAC/s, not a “hardware accelerator” paired with a vision DSP. That’s why it’s not only a power-efficient solution, but also a flexible architecture, well suited to the needs of CNN algorithms: increasing computational requirements and fast-evolving network architectures.

    By Eric Esteve from IPnest


    Time is Money, Especially when Testing ICs
    by Daniel Payne on 05-24-2017 at 12:00 pm

    Semiconductor companies are looking for ways to keep their business profitable by managing expenses on both the design and test sides of electronic products, which is quite a challenge as the trends show increases in test pattern count and therefore in test cost. Scan compression is a well-known technique, first created over 15 years ago, that reduces the cost of testing ICs by compressing your test patterns so that you use less time on the tester and can implement with fewer test pins. Engineers at Mentor Graphics were early pioneers of compression technology used inside ICs, with a product called Tessent TestKompress.

    Just this month the graphics chip company Nvidia announced its Volta GPU with a staggering 21 billion transistors, so Moore’s Law continues to hold up, and special-purpose chips like this one drive the test challenge. The earliest IC fault model is stuck-at; however, with smaller geometries we need to add more fault models, like:

    • Transition faults
    • Path-delay faults
    • Multiple-detect faults
    • Bridging faults

    As more fault models are added, the size of the test pattern set grows. From a practical viewpoint, the number of pins on SoCs has been growing linearly rather than exponentially over time, which means that the bandwidth available for delivering test patterns is constrained. If we can reduce the total number of test pins on an SoC, then we can use less expensive test equipment and keep our costs in check.
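
    To see why pattern count and scan bandwidth translate directly into tester dollars, here is a hypothetical back-of-the-envelope calculation in Python. The pattern count, scan chain length, shift frequency and compression ratio are invented for illustration and are not Mentor’s figures.

        def scan_test_time_s(patterns, chain_length, shift_freq_hz):
            """Approximate scan test time: each pattern is shifted through the
            longest scan chain, one bit per shift clock (capture cycles ignored)."""
            return patterns * chain_length / shift_freq_hz

        # Hypothetical uncompressed design: 10,000 patterns, 50,000-flop chains, 50 MHz shift clock
        baseline = scan_test_time_s(10_000, 50_000, 50e6)

        # With, say, 100x scan compression the internal chains become ~100x shorter
        compressed = scan_test_time_s(10_000, 500, 50e6)

        print(f"baseline: {baseline:.1f} s, compressed: {compressed:.3f} s per device")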

    The economic benefit of using a technology like Tessent TestKompress is that it reduces overall test costs by using fewer test patterns, running in fewer test cycles, yet producing a high test coverage metric. Compression really does decrease the volume of tester data while using fewer test channels. Fortunately for our industry the advances in test compression have gotten ahead of the growth curve for Moore’s Law (purple) as shown below.

    The acronym LPCT stands for Low Pin Count Test, and it’s a driving factor in keeping test costs manageable in the DFT (Design For Test) world. Reasons for using LPCT can be summarized as:

    • Better top-level chip routing
    • Minimizing the total number of pins
    • Including hierarchical ATPG
    • Lowering ATE costs
    • Keeping yield high during wafer testing
    • Enabling multi-site testing

    LPCT Approach

    In the Tessent TestKompress architecture, compressed stimuli come into the IC from the left-hand side, where on-chip hardware decompresses the patterns for testing the core; the output results from the core are then compacted on-chip before being sent back to the tester on the right-hand side.

    Parallelism in your hardware design can be taken advantage of during test by re-using input stimulus to identical cores as shown below with cores 1 and 2, while cores 3 and 4 are different from each other and therefore still require separate input channels:

    During manufacturing test, even with compression in TestKompress, you still get diagnostics for failing devices without needing offline application of bypass patterns.

    Test pin count can be reduced by replacing the primary IO cells with a boundary scan version as shown below where the core is loaded with just four scan IN pins, and read out with just four scan OUT pins.

    LPCT Controllers

    There are three styles of LPCT controllers, each supported by TestKompress, and the benefit is minimizing the total pin count needed by your tester. Here’s the Type 1 LPCT controller architecture, which takes the fewest gates:


    For a Type 2 LPCT Controller we are using a 4 or 5 pin TAP interface which allows us to eliminate connections to the functional I/O pins:

    Finally for a Type 3 LPCT Controller we need only 3 digital pins, while the controller uses about 1,400 gates:

    Summary

    The challenge of growing gate counts on SoCs has pressured the DFT community to come up with compression approaches that keep test times in line. Mentor recommends using LPCT along with their TestKompress compression as a way to maintain high test coverage with minimum tester time. The automation in TestKompress allows DFT engineers to be quite productive while minimizing design changes.

    Read the complete 8-page White Paper on this topic online.


    Webinar: Getting to Accurate Power Estimates Earlier and Faster
    by Bernard Murphy on 05-24-2017 at 7:00 am

    Power has become a very important metric in modern designs – for mobile and IoT devices which must live on a battery charge for days or years, for datacenters where power costs can be as significant as capital costs, and for increasingly unavoidable regulatory reasons. But accurate power estimation on a design must start from an implementation with detailed gate-level representation and realistic interconnect values and requires gate-level simulation data which can take days or weeks to produce. That’s good for signoff but it can be too late to guide design changes if the power target is missed.


    REGISTER NOW for this Webinar on June 1st at 10am PDT

    One way to attack this problem is to do power estimation at RTL, which has advantages in flexibility but is quite a bit less accurate than gate-level estimation and is certainly not suitable for signoff. A different approach, better suited to accurate estimation, is to continue to work with the implemented gate-level netlist, but to use readily available RTL simulation data together with a mechanism to infer corresponding gate-level activity from that data. This is what you can have using Synopsys PowerReplay together with Synopsys PrimeTime PX. This webinar will show you how that can be done.

    REGISTER NOW for this Webinar on June 1st at 10am PDT


    CDC Verification for FPGA – Beyond the Basics
    by Bernard Murphy on 05-23-2017 at 12:00 pm

    FPGAs have become a lot more capable and a lot more powerful, more closely resembling SoCs than the glue-logic we once considered them to be. Look at any big FPGA – a Xilinx Zynq, an Intel/Altera Arria or a Microsemi SmartFusion; these devices are full-blown SoCs, functionally different from an ASIC SoC only in that some of the device is programmable.


    All that power greatly increases verification challenges, which is why adoption of the full range of ASIC verification techniques, including static and formal methods, is growing fast in FPGA design teams. The functional complexity of these devices overwhelms any possibility of design iteration through lab testing. One such verification problem, in which I have a little background, is analysis of clock domain crossings (CDC).

    CDCs and SoCs go hand in hand, since any reasonable SoC will contain multiple clock domains. There’s a domain for the main CPU (possibly multiple domains for multiple processors/accelerators), and likely a separate domain for the off-chip side of each peripheral that must support a protocol which rigorously restricts clock speed options. The bus fabric communication between these devices may itself support multiple protocols, each running under a different clock. Clock speeds proliferate in SoCs.

    That’s why CDCs are found all over an SoC. At any place where data can cross from one of these domains to another, from the off-chip side of a peripheral to the bus for example, or perhaps through a bus bridge, clock domain crossings exist. It is not uncommon to find at least 10 different clocks on one of these FPGA SoCs, which can imply 10,000 or more CDCs scattered across the FPGA. The “so what” here is that CDCs, if not correctly handled, can fail very unpredictably and can be very difficult to check, either in simulation or in lab testing. But if a CDC problem escapes testing, your customers are going to find it in their design, very often in the field, as an intermittent lock-up or a functional error. When the design is going into a space application or military avionics or any other critical application, this state of affairs is obviously less than desirable.

    Simulation can play a role in helping minimize the chance of such failures but requires special checking and is bounded in value by the limited use-cases you can test. This has prompted significant focus on static (and formal) methods, which are use-case independent, to offer more comprehensive coverage. And in this domain, I am pretty certain that no commercial tool has the combined pedigree, technology depth and user base offered by SpyGlass CDC, especially in ASIC design. It looks like Synopsys has been polishing the solution to extend the many years of ASIC expertise built into the tool to FPGA design teams, through collaboration with FPGA vendors and by adding methodology support for standards like DO-254.

    You might reasonably question why another tool is needed, given that the FPGA vendor tools already provide support for CDC analysis. Providing some level of CDC analysis for the basic approaches to managing CDC correctness is not too difficult and this is what you can expect to find in integrated tools. But as designs and design tricks become more complex, providing useful analysis rapidly becomes much more complicated.

    By way of example, integrated tools recognize basic 2-flop synchronizers as a legitimate approach to avoid metastability at crossings. But what is appropriate at any given crossing depends heavily on design context. A quasi-static signal like a configuration signal may not need to be synchronized at all; if you do synchronize you may be wasting a flop (or synchronizer cell) in each such case. Or you may choose to build your own synchronizer cell which you have established is a legitimate solution but which isn’t recognized as legitimate by the tool, so you get lots of false errors. Or perhaps you use a handshake to transfer data across the crossing. In this case, there’s no synchronizer; correct operation must be recognized through functional analysis.

    Failing to handle these cases correctly quickly leads to an overwhelming number of false violations. If you must scan through hundreds or thousands of violations, you inevitably start making cursory judgements on what is suspect and what is not a problem; and that’s how real problems sneak through to production.


    For many years, the SpyGlass team has worked with CDC users in the ASIC world to reduce this false violation noise through a variety of methods. One is through protocol-independent recognition, a very sophisticated analysis to handle a much wider range of synchronization methods that took many years to develop and refine (and is covered by patents).

    A second aspect is in analysis of reconvergence – cases where correctly-synchronized signals from a common domain, when brought back together, can lead to loss of data. A third is in very careful and detailed support for a methodology that emphasizes up-front constraints over back-end waivers. Following this methodology ensures you will have a much more manageable task in reviewing a much smaller number of potential violations; as a result you can get to a high-confidence CDC signoff, rather than an “I hope I didn’t overlook anything”.

    SpyGlass-CDC will also generate assertions which you can use to check correct synchronization in simulation; this becomes important when you want to validate functionally determined correctness at domain crossings, such as analyzing handshakes, bridges and other functionally-determined synchronization. And if you’re feeling especially brave, SpyGlass CDC also provides an extensive set of embedded formal analysis-based checks, which require very little formal expertise to use.

    Synopsys has worked with Xilinx, Intel/Altera and Microsemi on tuning support in SpyGlass-CDC for these platforms. You can check out more details and watch the webinar and a demo on the methodology HERE.


    $100M China Investment for FD-SOI Ecosystem!
    by Daniel Nenni on 05-23-2017 at 4:30 am

    When GlobalFoundries first briefed me on 22FDX during a trip to Dresden in 2015, China was one of the first things that came to mind. The China semiconductor market was still on 28nm and FinFETs seemed far away for the majority of the Chinese fabless companies. A low cost, low power, low complexity 22nm process with a path to 12nm (12FDX) seemed like a perfect fit and as it turns out it is, absolutely.

    Continue reading “$100M China Investment for FD-SOI Ecosystem!”