IROC Technologies CEO on Semiconductor Reliability
by Daniel Nenni on 05-26-2013 at 8:10 pm

One of the best things about being part of SemiWiki is the exposure to new technologies and the people behind them. SemiWiki now works with more than 35 companies and I get to spend time with each and every one of them. Like me, IROC Technologies works closely with the foundries and the top semiconductor companies, so it was a pleasure to do this CEO interview:

Q: What are the specific design challenges your customers are facing?

A: The design flow is an always evolving, ever-demanding beast. The continuing evolution of the technology allows building increasingly complex electronic devices integrating more and more functions. This evolution is not free of problems or, more appropriately, challenges to overcome. Reliability is a natural concern. In particular, perturbations induced by radiation, known as Single Event Effects (SEEs), may cause system downtime, data corruption and maintenance incidents. SEEs are thus a threat to overall system reliability, and engineers are increasingly concerned with the analysis and mitigation of radiation-induced failures, even for commercial systems operating in a natural working environment. Our observation is that, beyond the classical power-, area- and timing-driven design flows and the Design for Manufacturability, Yield and Test (DFM/DFY/DFT) frameworks, our customers are increasingly aware of and adept with the Design for Reliability (DFR) paradigm.

Q: What does your company do?

A: We provide services, tools and expertise to qualify and improve the reliability of electronic designs and systems, with a focus on radiation-induced effects, process variation and aging. We help engineers evaluate the susceptibility of their designs to different perturbations. We assist them in improving the primary reliability characteristics (uptime, event/fault/error/failure rate, etc.) that are relevant for their specific application. We support reliability engineers in their exchanges with suppliers to select the best-performing materials and processes, and we help them prove the fitness of the delivered solution for the reliability expectations of the application at hand. Ultimately, we assist all the actors in the design and manufacturing flow in selecting and improving reliability-aware processes, design approaches and frameworks, with the overall goal of providing highly reliable solutions to demanding end users. Our 10+ years of continuous service as an independent trusted advisor and test expert has been positively recognized by technology and solution providers and end users alike.

Q: Why did you start/join your company?

A: I joined the company in September 2000 as one of the earliest employees. From my initial R&D engineer position, through the role of VP Engineering, and moving up to the CEO position in February 2013, I have accompanied the company through a long history of challenges and opportunities. My current focus is on preparing the organization for growth, technical excellence and leadership, and on aligning our solutions, services and tools to the needs of our partners and customers.

On the academic side, I hold a Ph.D. in Microelectronics from INPG, the Grenoble Institute of Technology, France, and I'm fairly active in the R&D community, both in industry and academia. I'm currently helping, as program chair, with the organization of the International Online Test Symposium (IOLTS) in Crete, Greece, an event focusing on the reliability evaluation and improvement of very deep submicron and nanometer technologies.

Q: How does your company help with your customers’ design challenges?

A: Our solutions, tools and testing services have been designed to evaluate and improve the Soft Error Rate (SER) of sophisticated systems – from the technological process and standard cell library up to failure mechanisms in complex systems. Our tools can present an itemized contribution to the SER from every design feature (individual cells, memory blocks, IPs, hierarchical blocks), and our expertise can address the reliability concerns of cell, chip or system designers by improving the underlying technology reliability and by adding error management IPs and solutions to the chip and system.
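
To illustrate what such an itemized SER breakdown looks like, here is a minimal sketch; the FIT values, instance counts and derating factors are invented for the example and do not come from IROC's tools:

```python
# Hypothetical illustration: chip-level SER as the sum of per-feature
# contributions. FIT = failures in time, i.e. errors per 1e9 device-hours.
# All numbers are made up for the example.

fit_per_instance = {"flip_flop": 1.0e-4, "sram_bit": 5.0e-5, "latch": 8.0e-5}
instance_count = {"flip_flop": 2_000_000, "sram_bit": 64_000_000, "latch": 150_000}
# Derating: fraction of raw upsets that become observable errors
# (logical, temporal and architectural masking).
derating = {"flip_flop": 0.15, "sram_bit": 0.30, "latch": 0.10}

contributions = {
    f: fit_per_instance[f] * instance_count[f] * derating[f]
    for f in fit_per_instance
}
chip_fit = sum(contributions.values())

for feature, fit in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature:10s} {fit:8.1f} FIT  ({100 * fit / chip_fit:5.1f}% of total)")
print(f"chip total {chip_fit:8.1f} FIT")
```

An itemized view like this is what lets a designer see at a glance which features dominate the failure rate and where mitigation effort pays off.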

Q: What are the tool flows your customers are using?

A: Our TFIT (transistor/cell SER evaluation) and SoCFIT (platform for SER evaluation and improvement of complex systems) tools have been designed for seamless integration into existing design flows and methodologies. TFIT is typically used by designers of standard cells, in conjunction with a SPICE simulator (HSPICE, Spectre). SoCFIT integrates a collection of modules and tools that provide SER-aware evaluation and improvement capabilities in standard design flows (i.e. Architecture > RTL/HLS > GLN), optionally connecting to commercial simulation tools (NCSim, VCS, ModelSim, QuestaSim) or synthesis, verification and prototyping platforms (Synopsys DC, PrimeTime, First Encounter).

Q: What will you be focusing on at the Design Automation Conference this year?

A: We want to meet our customers, and our prospective customers. Austin is a large design center, and chances are we will be meeting many designers and engineers who don’t usually travel to Silicon Valley or other places for conferences.

Q: Where can SemiWiki readers get more information?

www.iroctech.com

http://www.semiwiki.com/forum/content/section/2094-iroc-technologies.html

Vision: We believe that every modern chip should have the highest level of reliability throughout the product lifetime. IROC Technologies helps the semiconductor industry significantly lower the risk of soft errors by providing software and expert services to prevent soft errors when designing ICs.

With the introduction of submicron technologies in the semiconductor industry, chips are becoming more vulnerable to radiation-induced upsets. IROC Technologies provides chip designers with soft error analysis software, services and expert advisors to improve a chip's reliability and quality.

Exposure of silicon to radiation happens throughout the lifetime of any IC or device, and this vulnerability grows as development moves to smaller and smaller geometries. IROC has shown that the soft errors that cause expensive recalls, time-to-volume slowdowns, and product problems in the field can be significantly reduced. The mission of the company's soft error prevention software and expert advisors is to let users increase reliability and quality while significantly lowering the risk of radiation-induced upsets throughout the lifetime of products under development.

Also Read:

CEO Interview: Jens Andersen of Invarian

CEO Interview: Jason Xing of ICScape Inc.

Atrenta CEO on RTL Signoff


The History of Arasan Chip Systems
by Sam Beal on 05-26-2013 at 8:05 pm

In 2002, few people outside of Steve Jobs could have predicted the iPhone. But a forward-looking technology CEO could expect Moore's Law to extend into portable devices as it did with PCs. While 2G and 2.5G cellular phones were shipping in the hundreds of millions, their features were rather primitive. 3G, 4G, WiFi, Bluetooth, etc. were not available. The megapixel cameras, displays, and HD video capability of a modern smartphone were the stuff of dreams.

That same year, Arasan and his core team began their journey to deliver silicon intellectual property (IP) for portable electronics. The focus was not just on delivering IP cores, but on customer relationships and awareness of trends and evolving standards for portable electronics.

Arasan acquired the specifications from the SD Card Association (an apex body comprising Panasonic, SanDisk, and Toshiba) and entered the SD card market. SDIO, based on the SD standard, enabled add-on cards. While early adoption of the SD card and SDIO interface was limited to PDAs and gaming systems, ACS established SDIO as a viable IP product for mobile devices. The first major break came when WiFi ICs using SDIO became a de facto standard in mobile applications. SDIO has since evolved to support additional wireless standards like GPS, Bluetooth, WiMAX, 3G, etc.

ACS actively participates in and contributes to a number of major standards groups. In 2005, ACS pioneered the universal card host controller combining SD, SDIO and MMC. After joining JEDEC in 2008, eMMC support was added to the controller. Today the "3MCR" is the leading host memory card controller in mobile computing.

ACS was one of the first companies to join the MIPI Alliance and to contribute to the initial introductions of standards like DSI, CSI, UniPro, and SLIMbus. The high-speed serial interfaces required by many MIPI standards led Arasan to invest in analog design resources and a mixed-signal methodology for the MIPI D-PHY and M-PHY. Today the largest microprocessor manufacturer and the leading suppliers of chipsets to cell phone companies are licensees of ACS's MIPI-based products.

First to design, first to market with a "Total IP Solution" has been Arasan's goal from the formative years, and a key reason ACS is the leader in mobile connectivity and mobile storage. The company prides itself on deep domain and system expertise. As a result, ACS provides IP cores with verification IP, software drivers and stacks, and validation platforms to enable customer success. Arasan Chip Systems has established a reputation for the highest quality, unprecedented support, and a unique personal touch.

Mobile computing technology is constantly evolving to meet insatiable market demands for more features like higher resolution displays, multiple cameras, new WiFi standards, etc. To support these features, high-speed serial (SERDES) interfaces will drive growth in silicon IP (for example SSIC and Mobile PCIe, which will use the MIPI M-PHY). ACS customer engagements help tremendously to gauge these emerging trends, and to select future product offerings.


Interview with Forte CTO John Sanguinetti on Cynthesizer 5
by Randy Smith on 05-26-2013 at 12:00 pm

Recently, Forte Design Systems announced the release of a new core engine for their popular high-level synthesis tool. It was a large undertaking, so I asked John Sanguinetti, Forte's CTO, to answer some questions about that development effort.

Q. How long has it been since the last major upgrade of the Cynthesizer engine (when was C4 released)? Why are you doing this now?
A: This is the first major architectural change to the Cynthesizer core since our initial release in 2001. Of course we’ve been updating and improving the core along the way but it is the first time we’ve changed the architecture. We’ve known for a few years that our current platform would only take us so far and that we would need an upgrade. We started on that process a little over 2 years ago and over the last 18 months it has been one of the top priorities.

Q. What are the major changes in the engine? How will users benefit?
A: The biggest change is that we've combined the scheduling and allocation phases. High-level synthesis research has focused for some time on combining scheduling and allocation. While our previous core organization worked pretty well in general, we had identified a number of optimizations that we really couldn't do within it. With C5, we can now implement these optimizations.

More importantly, it provides users with a more predictable platform. The tool can now quickly and thoroughly evaluate changes to the schedule, push them all the way through allocation (where parts are assigned and shared), and get a much more detailed view of the resulting RTL circuit.
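
For readers new to HLS terminology, the sketch below shows what "scheduling" (assigning operations to clock cycles) and "allocation" (binding operations to shared hardware units) each decide. It is a generic toy example of resource-constrained list scheduling written for illustration; the dataflow graph and names are invented, and it is in no way Forte's algorithm:

```python
# Toy HLS example: schedule a small dataflow graph into clock cycles under a
# resource constraint, then bind operations to shared functional units.

ops = {                        # operation -> (type, operations it depends on)
    "t1": ("mul", []),
    "t2": ("mul", []),
    "t3": ("add", ["t1", "t2"]),
    "t4": ("add", ["t3"]),
}
units = {"mul": 1, "add": 1}   # functional units available per type

# Scheduling: pick a cycle for each op, respecting dependencies and resources.
schedule, done, cycle = {}, set(), 0
while len(done) < len(ops):
    used = dict.fromkeys(units, 0)
    for op, (kind, deps) in ops.items():
        ready = op not in done and all(d in done for d in deps)
        if ready and used[kind] < units[kind]:
            schedule[op] = cycle
            used[kind] += 1
    done.update(schedule)      # results become available next cycle
    cycle += 1

# Allocation/binding: with one unit per type, ops of a type share that unit.
binding = {op: f"{ops[op][0]}_unit0" for op in ops}

print(schedule)                # {'t1': 0, 't2': 1, 't3': 2, 't4': 3}
print(binding)
```

Evaluating the two decisions together matters because a schedule that looks good in isolation can force poor sharing during allocation, and vice versa.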

We've also been able to deliver our first user-controlled power features on the new platform. Cynthesizer has had passive low-power features for some time, including low-power coding styles, known-good architectures for low power, and a choice of half- and quarter-speed memories, for example. Cynthesizer 5 adds the ability to trade off either area or performance for lower power. New features include integrated HLS-optimized clock gating, FSM (finite state machine) optimization to minimize datapath logic switching, and memory access optimizations. Using Cynthesizer's exploration capabilities, it will be easy to get multiple QoR data points quickly.

We’ve also developed a new SystemC IDE and analysis environment. The goal with this product is to allow new users to more easily move to SystemC-based design. At the same time we wanted to make it easier for Cynthesizer experts to quickly analyze designs to get to their desired results more quickly. It’s a careful balancing act but we are confident that it is going to be a significant new feature.

Q. How much effort was involved in development? Quality assurance?

A: It was a big effort taking the better part of 2 years with most of our staff. As you know, we’ve been delivering production-level high-level synthesis for more than 10 years and in that time we’ve built up an extensive regression suite. Our regression suite is made up of nearly 10,000 designs that range from small unit test cases to customer-deployed designs with millions of gates. This has proven to be a big advantage for us throughout the development of Cynthesizer 5 because we were able to quickly see the trends in terms of area, performance, and power.

Q. What was the greatest engineering challenge in making C5?

A: Whenever you deliver new technology into an existing customer base, the first challenge is to make sure that your existing customers see the new benefits without giving up anything on their existing designs. This is actually harder than it sounds. One of the toughest challenges is that Cynthesizer 4.3 can already achieve really good results. Getting the quality of results (QoR) to be better in every case was a tall order, but we have very nearly achieved that. The end result is that existing customers should have a great experience with Cynthesizer 5 with no disruption to their flow, and both new and existing customers will find some really exciting new features that further differentiate Cynthesizer.

Q. Were any customers involved in C5? If so how many and how (and who if you can tell us)?

A: We always work closely with customers on any new developments. We had partners in the US, Japan, and Korea looking at Cynthesizer 5 very early on as well as Cynthesizer Low Power.

Q. What effort should current users expect in transitioning to C5?

A: We don’t expect any significant effort for existing users. We’ve included compatibility modes to make sure that the transition is easy with existing designs.

Q. What are the next major improvements to the engine? Will those be a focus of C6 or will you be able to add those features into C5?
A: We’ve designed this platform to be able to carry the product line for a long time. We expect to be spending a lot of time continuing to raise the abstraction-level of the input, adding new low-power optimizations, and expanding our IP effort.

Q: What will Forte be showing at DAC in Austin?
A: Cynthesizer 5 will be the main focus but we will also have a number of detailed technical sessions. One of our customers, Adapt-IP, will be showing a USB 3.0 design, designed completely with Cynthesizer, running on a live board. We'll also have a joint session with Ansys Apache on low power and a detailed technical tutorial on Cynthesizer. You can find more on our website at www.ForteDS.com/dac2013.



Avoiding layout related variability issues
by Daniel Nenni on 05-26-2013 at 7:55 am

In advanced process technologies, electrical and timing problems due to variability can become a big issue. Due to various processing effects, circuit performance (both speed and power) depends on specific layout attributes and can vary a lot from instance to instance. The accumulated effects can be severe enough to cause the circuit to fail.

In this blog I will demonstrate how iDRM can be used very effectively to measure and analyze millions or even billions of layout instances and determine the possible impact on performance. We will focus on two layout-dependent effects that affect transistor performance:

  • Well Proximity Effect (WPE). Transistors that are close to the well edge perform differently (mostly due to modified Vt) than ideally placed transistors. The effect can vary transistor speed by ±10%.
  • Stress/strain effect. This effect changes the mobility of charge carriers in transistors, which changes the device on-current (Idon). The precise quantitative effect is very process dependent and can vary transistor speed by ±30%.

There are various situations where such analysis is needed. The context can be one where a legacy design or IP is being integrated or reused by another group, or when a design is sent to be fabricated by a different foundry than the one originally designed for.

The approach we take is to gather general statistics on the above variation effects for every device in the layout and then analyze the value distribution. We want to check whether there are any significant outliers from the regular expected data, and also look for general shifts in the distribution that can make the overall average faster or slower than expected.

The specific WPE and stress effects described here apply mostly to the 28nm and 40nm nodes. A similar approach can be used for variability checks in FinFET designs, for example due to the impact of certain layer density variations.
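
To make the outlier-and-shift analysis concrete, here is a hedged sketch of the idea in generic Python (the bin width and thresholds are arbitrary choices for illustration, not iDRM's implementation):

```python
from collections import Counter

def analyze_distribution(values, bin_width=0.5, peak_fraction=0.05):
    """Histogram per-device metric values (e.g. dSpeed) and report both an
    overall shift of the mean and suspicious secondary peaks."""
    bins = Counter(round(v / bin_width) * bin_width for v in values)
    mean = sum(values) / len(values)
    if abs(mean) > bin_width:
        print(f"warning: distribution mean shifted to {mean:+.2f}")
    tallest = max(bins.values())
    for center, count in sorted(bins.items()):
        # A tall peak far from zero suggests a group of devices laid out in
        # a special, possibly uncharacterized way.
        if abs(center) > 2 * bin_width and count > peak_fraction * tallest:
            print(f"outlier peak near {center:+.2f} ({count} devices)")

# Mostly nominal devices plus a suspicious cluster at -9.5 (compare the WPE
# statistics discussed later in this article).
analyze_distribution([0.0] * 9000 + [0.3] * 500 + [-9.5] * 800)
```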

Defining what to look for using iDRM

Using iDRM, we define two patterns that will be used to gather statistics from a physical design.

1. The first pattern will be used to measure WPE effects:

  • We draw a transistor (crossing poly and diffusion edges) whose W/L is measured, and we draw the well edges around the device (see the WPE figure). Note that the well edges display a "multiple edge" shading, since a single transistor may be enclosed by multiple well edges on each side.
  • We enter a formula to calculate the minimal distance from the well edges to the transistor gate center. This is the minimum of all four side distances, as in the expression shown in the picture. We call this variable WPE.
  • A dSpeed variable is created to calculate the speed impact as (1/0.24 – 1/WPE). The chosen reference distance 0.24 is an average value for devices in the nominal range. dSpeed represents the speed difference between a nominal transistor and each matched one. A non-zero dSpeed indicates a layout irregularity that deviates from the nominally modeled and characterized devices and might present a WPE risk warranting further analysis (see the code sketch after this list).


Obviously, you can use a more advanced WPE-to-speed model, but this simple model is already sufficient to reveal valuable information. The purpose is not to predict the speed impact exactly, but to identify potentially risky deviations from a nominal, well-modeled design.

2. The second pattern will detect one of the major stress/strain influencing effects: LOD (Length of Diffusion), the amount of source/drain diffusion area that extends away from the gate.

  • The pattern is similar to the WPE pattern. Again the transistor is drawn, the relevant distances are measured, and a simple formula is used to calculate a dSpeedLOD. The chosen reference value 0.075 is the average distance from the gate centerline to the diffusion edge in nominally drawn and modeled devices. Any dSpeedLOD that deviates from 0 indicates a non-nominal device that requires further inspection.
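
As a concrete illustration of the two speed-impact metrics just defined, here is a minimal sketch using the article's reference values of 0.24 and 0.075; the Python helpers themselves are mine, not part of iDRM:

```python
def dspeed_wpe(well_distances):
    """WPE metric from the text: 1/0.24 - 1/WPE, where WPE is the minimum
    of the four measured well-edge-to-gate-center distances (in microns)."""
    return 1.0 / 0.24 - 1.0 / min(well_distances)

def dspeed_lod(gate_to_diffusion_edge):
    """Analogous LOD metric, with 0.075 as the nominal reference distance."""
    return 1.0 / 0.075 - 1.0 / gate_to_diffusion_edge

# A device at exactly the reference distance scores 0 (nominal)...
print(dspeed_wpe([0.24, 0.51, 0.77, 1.20]))   # 0.0
# ...while a device much closer to the well edge deviates strongly and
# would be flagged for further analysis.
print(dspeed_wpe([0.08, 0.51, 0.77, 1.20]))   # about -8.3
```

Neither value predicts the exact speed change; the point is to rank how far each device sits from the nominal, characterized configuration.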


Gathering and processing physical data

Once the patterns have been defined, we can run them on the physical design. Run times vary from a few minutes for a block to a few hours for a full chip. During the run, iDRM automatically records the following:

  • all the locations where such patterns were found
  • for each such location (match instance), the actual values of the distances defined in the pattern definition
  • all the evaluated expressions

Interpreting statistical results

iDRM has powerful features to display the results of statistical data collection:

  • Frequency graphs, where the occurrence count of a variable (e.g. a distance) – how often each specific value occurs in the layout – is plotted against the value of the variable.
  • 2-D graphs, where two variables are used and occurrence counts are displayed color-coded.
  • Pareto charts, showing the cumulative occurrence of combinations of multiple variable values.
  • Plain table displays.
  • Exports to CSV files that can be used by other tools.

All statistics views are linked to the layout, and it takes one click to find and view all occurrences of any specific value combination in the design.

We ran these patterns on a typical test SoC layout and found that both distributions have some interesting characteristics.

WPE Statistics
(see occurrence graph below)

Looking at the WPE occurrence graph, we find the largest group of occurrences around rule value zero, which is expected. But there is also a peak at rule value = -9.5 (remember, the actual value is not that important; we are mostly checking whether this is a normal distribution), which indicates a large number of devices laid out in a special way. After inspecting the layout for this value, which is just one mouse click away from the statistics view, we could see that these are all long-L, small-W devices close to a well edge in a special standard cell (a power-down retention circuit). Just by looking at statistics, and without knowing the design, we found a special circuit that might be variation- and yield-sensitive. We can now focus on these circuits and further analyze their impact.

LOD Statistics
(see occurrence graph below)

Looking at the LOD distribution, we see a big peak in occurrence count at value -2.8. After inspecting these instances, they turned out to be special transistors in a memory array. Assuming that the memory cells are properly characterized, we can ignore these.



Conclusion

iDRM provides an easy-to-use yet very powerful mechanism for analyzing legacy or otherwise unfamiliar designs: it sorts through a huge amount of physical design data and quickly identifies specific design objects that may impact performance or yield and thus require further analysis.

Defining the patterns and measurements is done graphically, takes less than an hour, and requires no programming skills.

In addition to the examples above, iDRM can be used to search, measure and analyze many other physical design objects and phenomena that can have an impact on design integrity, performance and yield.

Further information can be found in Sage's white paper here.

Sign up for a demo at DAC booth #2233 here.



Heart of Technology Heads to DAC in Austin
by Randy Smith on 05-25-2013 at 2:45 pm

With the support of the Heart of Technology, one of the big events at DAC this year will be the Kickin' It Up in Austin celebration on June 3 at the amazing, world-famous Austin City Limits Live! The event starts at 8:00 PM, runs until 1:00 AM and features three bands: 9-time Grammy winner Asleep at the Wheel; Vista Roads, an industry band featuring Jim Hogan and friends; and Texas Terraplanes, described as "Rock'n Electric Blues…sounds like a hot steam'n plate of Texas ribs and brisket with a side of seafood gumbo." The first 400 guests in the door at the 50th anniversary celebration will receive a free concert T-shirt. A DAC badge is required for entry.

A special part of the celebration takes place in the HOT Zone, sponsored by Heart of Technology and several emerging companies in the EDA and IP industry. Held in the Jack and Jim Gallery at the top of ACL Live, the HOT Zone features artist and performance photos from the famed photographer Jim Marshall. The Jack stands for Jack Daniels, and premium drinks will be served there all night. This is a great opportunity to party with friends as well as raise money for a good cause.

The Heart of Technology has served as a fundraising accelerator for charities for nearly ten years now. Conceived and led by the efforts of Jim Hogan, and with the assistance of many other industry professionals, the Heart of Technology assists non-profits by helping them raise money for causes that strengthen communities and help people going through difficult life transitions. With DAC being held in Austin, Texas this year, the Heart of Technology has turned its attention to hosting an event to benefit CASA (Court Appointed Special Advocates) of Travis County. CASA speaks up for children who've been abused or neglected by empowering the community to volunteer as advocates for them in the court system. Prior charities have included Second Harvest Food Bank, FleaHab, and southern Bay Area schools.

To be a guest at the HOT Zone, just donate $50.00 or more to CASA of Travis County. Your donation is tax deductible. Find out more details on the HOT website.

What's Sizzling in the HOT Zone
· Private performance by "The Red Headed Stranger"
· Premium drinks including Jack Daniels, for which the gallery is named
· Unique food including Texas Treats on a stick and a breakfast buffet on a stick
· Photo booth, tattoos
· Entry into a drawing for a Stratocaster guitar signed by Asleep at the Wheel
· Private balcony seating for main stage performances
· Featured entertainment from Grammy Award-winning artists Asleep at the Wheel, the Vista Roads Band, and Texas Terraplanes

The first 100 people to make a donation of $100 or more will receive a rock and roll HOT Zone t-shirt! Find out more at www.heartoftechnology.org.

Don't miss what will be one of the most memorable DAC events yet!

Full Disclosure: I will be singing in the Vista Roads band at the event.



DAC50 App for iPhone Now Available
by Paul McLellan on 05-24-2013 at 8:24 pm

This year's version of Bill Deegan's DAC App for iPhone is now available for download from the iTunes App Store. The App includes the entire conference calendar and makes it easy to add any interesting-looking event to your own calendar. The whole exhibit hall can be searched, and there is a zoomable map of the hall.

I have found the App to be an easier way to get the events I am interested in onto my calendar than entering them by hand.

Bill is now working on the Android version, which should be available soon.

The App can be downloaded here.


Bringing Sanity to Analog IC Design Verification
by Daniel Payne on 05-24-2013 at 1:07 pm

Two weeks ago I blogged about analog verification and it started a discussion with 16 comments, so I've found that our readers have an interest in this topic. For decades now, the digital IC design community has used and benefited from regression testing as a way to measure both design quality and progress, ensuring that first silicon will work with a higher degree of confidence.

So, what can be done to make automated analog verification a reality for the average IC designer? In the analog design space, a subset of testing will always remain manual. Often there is no way (or desire) to replace pulling up a SPICE waveform or looking at an eye diagram. But there is a large class of testing that can be scripted and automated.

In the last blog in this series, I looked at how a tool like VersIC from Methodics can help you discover, run and track the history of scripted tests right from within your Cadence environment. If you are willing to absorb the initial cost of setting up automation scripts, the benefits are manifold. Scripted tests can be standardized and run multiple times in identical fashion. Tests can be tracked, ported and shared easily across cells, libraries and designs.

Once some scripted tests are available, an additional benefit can be obtained: automatic regressions. A regression is a collection of tests that are run as a group. This group can then be manipulated, tracked and managed as a single entity.


Regressions as collections of scripted tests in Virtuoso

Grouping tests into regressions makes it very easy to run a set of consistent checks on a design change. Tests that are related in some way – say LVS/DRC, port checks and RTL consistency checks – can all be grouped into a single regression for hand-off criteria. There is now no need to remember whether all the hand-off criteria were met: simply run this regression and it confirms that all checks were in place.
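
As a rough illustration of the concept (plain Python with invented check names, not VersIC's actual API), a regression can be as simple as a named group of scripted tests with one aggregated result:

```python
# Hypothetical hand-off regression: each "test" is a callable returning True
# on pass. In a real flow these would invoke LVS/DRC decks, port checkers
# and so on; the lambdas are stand-ins so the sketch runs as-is.
handoff_regression = {
    "lvs":             lambda: True,
    "drc":             lambda: True,
    "port_check":      lambda: True,
    "rtl_consistency": lambda: False,   # pretend one check fails
}

def run_regression(name, tests):
    """Run every test in the group; the regression passes only if all do."""
    results = {test: run() for test, run in tests.items()}
    for test, ok in results.items():
        print(f"  {test:16s} {'PASS' if ok else 'FAIL'}")
    print(f"{name}: {sum(results.values())}/{len(results)} tests passed")
    return all(results.values())

if not run_regression("handoff", handoff_regression):
    print("hand-off criteria not met; fix failures before integration")
```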

Other examples include ADE XL functional tests grouped as a regression. Each library or cell can have its own functional tests, run through a common ADE XL automation framework, so that essentially the same regression is run on all the cells of a library.

When handing a design off for integration, a good practice is to run some critical subset of your tests as a handoff – or 'sanity' – regression. This regression, commonly referred to as a 'smoke' regression in digital verification circles, ensures that a minimum quality bar is met before integration is attempted. This way, the integration team concentrates only on issues at the subsystem or system level, knowing that the individual cells and libraries are consistent.


Regression Progress Graph

Regressions can also be tracked for progress: plotting the number of passing/failing tests in a regression over a period of time is a good indicator of the health of the design.

Think of regressions as a powerful communication tool. Regression results automatically provide a view into the current state of the design. The tests included in a regression are an indicator of which tests are important. Passing or failing regressions automatically become a gating criterion for acceptance of a design.

Summary
Regression testing is a new discipline for AMS designs, and Methodics has EDA tools that will help you add automated regression testing to your IC design flow.

If you're traveling to DAC in Austin, visit the engineers at Methodics in booth #1731 and ask for Simon Butler.



The Internet of Things
by Paul McLellan on 05-24-2013 at 1:02 am

There is a lot of talk about the Internet of Things (IoT) and how everything is going to be connected to the internet. For some reason the archetypal example is a refrigerator that knows what you are nearly out of and puts it on your shopping list. Or orders it from the store. Or something. This seems pretty high on the list of things I don't need. If my self-driving car would go and gas itself up without me, that would be great, but I don't need a special delivery of milk and bacon. Well, OK, who wouldn't want a delivery of bacon.

Here's one of those little incremental steps on the way that seems neat. How about a fridge magnet? Boring. From a pizza company. A little less boring. With a button on it. When you press the button, the magnet (well, the electronics inside) links by Bluetooth to your cell phone and orders a pizza. It is all confirmed by text in case you want to change anything about the order. And the pizza shows up in 30 minutes.

Of course it isn't available everywhere yet. Silicon Valley? Forget it; it is only available in Dubai, which wouldn't have been the first place I'd have picked. It has been so popular that they ran out of magnets, and there is a six-week delay while they manufacture more.

See a video showing how it works (1½ minutes).


PinPoint in Practice
by Paul McLellan on 05-23-2013 at 10:40 pm

I talked with a mystery person earlier this week. I would love to tell you his (or her) name and the company he (or she) works for, but they are the sort of company that doesn't casually endorse any suppliers, so it all has to remain anonymous. But they have been a customer of PinPoint, originally from Tuscany Design Automation until late last year when Dassault Systèmes acquired the company. They have been using it in production designs for about 18 months.

Like most big semiconductor companies, they have multiple homegrown internal dashboards. But PinPoint is unique: it does not just pull status reports and metrics together in one place, it provides all the information needed to visualize a problem, debug it, plan the fix and make decisions from within a single browser window. Without the information PinPoint provides, it is often unclear what approach should be taken: if a path misses timing, is it an RTL problem, a routing congestion problem, a bad power grid, poor placement, or a bad floorplan? All of these are possible reasons, and with only a timing report and perhaps the layout, it is almost impossible to guess which is the root cause and thus whose job it is to fix it.


Unlike internal dashboards, PinPoint makes it possible to look at the actual paths of interest (those not making timing, say), see where they run in the layout and overlay them with other relevant information. The company now uses PinPoint to drive their weekly project meetings when they are in design closure. They also use it as a focus to get RTL designers more involved, since all the information is available through a browser window. The design team goes all the way from RTL to GDSII with very tough timing and power requirements, so RTL designers need to work closely with the physical team; they are not "just" taking pre-designed RTL IP from another group and integrating it into an SoC.

They are also starting to use PinPoint as a way to communicate among geographically dispersed teams. Most of the team is at one site, but there are people at a couple of other sites, and PinPoint makes it easy to have a common view for discussion.

They have customized PinPoint in a number of ways: first to add internal metrics that they sometimes use, and also to overlay information from different sources, such as power analysis maps from specialized power analysis tools, congestion maps from routers, or thermal maps. This is something that is hard to do in the individual tools, since they are usually designed to display only their own data graphically.


Another big saving is that PinPoint provides visualization of the physical design, timing paths and so on without pulling down an expensive license of PrimeTime or the place & route tools during analysis and debug. It is not just a financial saving; there is a big time saving in avoiding the need to reload the whole design or block.

During physical design regression (the nightly run), all the data is automatically updated into PinPoint. When a designer is playing around with multiple experiments, he or she can flag their best run and control the collection of the metrics. PinPoint can track the historical progression of the various metrics and the status of the design, and that data is always maintained and visualizable even if the original source data (DEF, LEF, PrimeTime reports, etc.) has to be deleted to make disk space available for future iterations.


Do my tests certify the quality of my products?
by Pawan Fangaria on 05-23-2013 at 9:00 pm

Honestly speaking, there is no firm answer to this question, and often when we are confronted by our customers, we talk about coverage reports. The truth is that a product with a high coverage rate can still easily fail in a customer environment. Of course coverage is important, and to be clear that a failure is not due to a particular construct going untested, we stress heavily on 100% coverage. When I was managing physical design EDA products, I often argued with my test team about flow tests, which go far beyond syntax and semantics and in a true sense have no limits. The toughest problem we had was that no customer was willing to give us its design for testing purposes. Sometimes, if we were lucky, we could get a portion of one under NDA; otherwise we relied on repeated reporting of failures (until pass) from the customer.

I am happy to see the tool called Crossfire from Fractal Technologies. It enables customers as well as suppliers to work in a collaborative mode and certify the checks required for a design to work. It works at the design level to validate the complete standard cell libraries, IOs and IPs used in the design, and has more than 100 types of checks applied consistently across different formats at the front-end as well as the back-end, including databases such as Cadence DFII, MilkyWay and OpenAccess. Apart from parsing the formats, it has specific checks for cells, terminals, pins, nets and so on, applied to all cells in the library.

What is interesting, and what adds to the quality of the tests, is a special set of checks that look into design quality at the time the design is constructed. Some nice examples of these are:

Layout vs. layout – Identity between polygons is checked by a Boolean mask XOR operation, together with checks that abstract polygons enclose the layout polygons.

LEF cell size – The LEF cell is checked to have the correct size per the LEF technology.

Routability – Checks whether signal pins can be routed to the cell boundary. Typical errors are "pins not on grid" or "wrongly coded offset".

Abutment – Cells are checked for self-symmetry and abutment with a reference cell.

Functional Equivalence – Functional representations in different formats are checked for equivalence. Schematic netlists such as Spice or CDL and schematic views must exhibit the same functionality. Similarly, Verilog, VHDL or any other description must mean the same functionality. Typical functional equivalence errors are "mismatch between asynchronous and synchronous descriptions", "short circuit", "missing functional description", "Preset and Set yielding different results", and so on.

Timing Characterization: cross format – Checks the consistency of timing arcs across all formats, such as Verilog, VHDL, Liberty, TLF and the like.

Timing Characterization: NLDM (Non-Linear Delay Model) – Consists of index, table value, trend, property and attribution checks. Typical characterization errors are delay decreasing with increasing output load, obsolete default conditions, non-paired setup and hold times, etc. A sketch of this kind of trend check appears after these examples.

Timing Characterization: CCS (Composite Current Source) – Consists of index, table value, reference time, peak position, threshold passage and accuracy checks.

Power Characterization: NLPM (Non-Linear Power Model) – Consists of index, table value and trend checks.
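
To show the flavor of the table "trend" checks mentioned above, here is a generic sketch of a monotonicity check on an NLDM delay table. It is my own illustration of the idea, not Crossfire's implementation:

```python
def check_delay_trend(load_index, delay_table):
    """Flag NLDM entries where delay fails to increase with output load.

    load_index: ascending capacitive loads (table columns).
    delay_table: one row of delay values per input-slew index.
    """
    violations = []
    for row, delays in enumerate(delay_table):
        for col in range(1, len(delays)):
            if delays[col] <= delays[col - 1]:
                violations.append(
                    f"slew row {row}: delay {delays[col]} at load "
                    f"{load_index[col]} not greater than {delays[col - 1]} "
                    f"at load {load_index[col - 1]}"
                )
    return violations

# The second slew row violates the expected trend at the largest load.
table = [
    [0.10, 0.15, 0.22],
    [0.12, 0.18, 0.17],
]
print(check_delay_trend([0.001, 0.004, 0.016], table))
```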

A detailed description of all these checks is given in a paper on Fractal's website here.
It also has example test reports of 45nm library and 45nm PDK library validations – Example1 and Example2.

An important characteristic of Crossfire is that it lets you create new tests on demand to match your design and design flow, leading to genuine completeness across all types of checks. It can also accommodate proprietary formats and databases. The Fractal team provides expert knowledge on validation flows and on integrating customized APIs with Crossfire, which then provides a true environment for complete quality control and assurance.