Who Needs a 3D Field Solver for IC Design?

Introduction
In the early days we made paper plots of an IC layout, measured the width and length of interconnect segments with a ruler to add up all of the squares, then multiplied by the resistance per square. It was tedious, error-prone, and took far too much time, but we were rewarded with accurate parasitic values for our SPICE circuit simulations.
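For readers who never did this by hand, the squares arithmetic is a one-liner; here is a toy version in Python (the sheet-resistance value is illustrative, not from the article):

```python
def wire_resistance(length_um: float, width_um: float, rs_ohm_per_sq: float) -> float:
    """Resistance of a uniform interconnect segment: sheet resistance
    times the number of 'squares' (length divided by width)."""
    return rs_ohm_per_sq * (length_um / width_um)

# A 100 um long, 0.5 um wide segment at 0.08 ohm/sq -> 200 squares -> 16 ohms
print(wire_resistance(100.0, 0.5, 0.08))
```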

Today we have many automated technologies to choose from when it comes to extracting parasitic values from an IC layout. These parasitic values ensure that our SPICE simulations report the right timing, detect glitches, and measure the effects of cross-talk.


Accuracy vs Speed


The first automated parasitic extraction tools used rules about each interconnect layer to compute resistance and capacitance as a function of width, length, and proximity to other layers. These tools are fast and reasonably accurate for nodes that have wide interconnect with little height. As interconnect height has grown, the accuracy of these rules has diminished because of the complex 3D nature of nearby layers.

3D field solvers have been around for over a decade and offer the ultimate in accuracy, with the major downside being slow run times. The chart above places 3D field solvers in the upper left-hand corner: high accuracy, low performance.

Here's a quick comparison of four different approaches to extracting IC parasitics:

| Approach | Plus | Minus |
|---|---|---|
| Rule-based / pattern matching | Status quo; familiar; full-chip | Unsuitable for complex structures; unable to reach within 5% of reference |
| Traditional field solver | Reference accuracy | Long run times; limited to devices |
| Random-walk field solver | Improved integration | 3 to 4X slower than deterministic |
| Deterministic field solver | Reference-like accuracy; as fast as rule-based | Multiple CPUs required (4 to 8) |


What if you could find a tool in the upper right-hand corner, offering both high accuracy and fast run times?

That corner is the goal of a new breed of 3D field solvers where highest accuracy and fast run times co-exist.

Mentor's 3D Field Solver
 I learned more about 3D field solvers from Claudia Relyea, TME at Mentor for the Calibre xACT 3D tool, when we met last month in Wilsonville, Oregon. The xACT 3D tool is a deterministic 3D field solver where multiple CPUs are used to achieve faster run times. A white paper is available for download here.

Q: Why shouldn't I try a 3D field solver with a random-walk approach?

A: Well, your results with a random-walk tool will have a higher error level. Say you have 1 million nets in your design; with a sigma of 1% you will see about 3,000 nets that are more than 3% off from a reference result. For sensitive analog circuits and data converters that level of inaccuracy will make your chip fail.
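As a sanity check on that arithmetic (my addition, not part of the interview): if the per-net errors of a random-walk extractor are assumed to be Gaussian with a 1% sigma, the expected count outside a 3% band follows directly from the normal tail probability:

```python
import math

nets = 1_000_000
sigma = 0.01    # assumed Gaussian 1-sigma error of the random-walk extractor (1%)
budget = 0.03   # accuracy band of interest (3%, i.e. 3 sigma here)

# Two-sided tail probability P(|error| > budget) for a zero-mean Gaussian
tail = math.erfc(budget / (sigma * math.sqrt(2.0)))
print(f"{tail:.4%} of nets -> ~{tail * nets:,.0f} of {nets:,}")
# ~2,700 nets, i.e. roughly the 3,000 quoted above
```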

Q: What is the speed difference between xACT 3D and random walk tools?

A: We see xACT 3D running about 4X faster.

Q: What kind of run times can I expect with your 3D field solver?

A: About 120K nets/hour when using 32 CPUs, and 65K nets/hour with 16 CPUs.

Q: How is the accuracy of your tool compared to something like Raphael?

A: On a 28nm NAND chip we saw xACT 3D numbers that were between 1.5% and -2.9% of Raphael results.

Q: Which customers are using xACT 3D?

A: Over a dozen; the ones we can mention are STARC, eSilicon, and UMC.

Q: For a device level example, how do you compare to a reference field solver?

A: xACT 3D ran in 9 seconds versus 4.5 hours, and the error versus reference was between 4.5% and -3.8%.

Q: What kind of accuracy would I expect on an SRAM cell?

A: We ran an SRAM design and found xACT 3D was within 2.07% of reference results.

Q: How does the run time scale with the transistor count?

A: Calibre xACT 3D run time scales linearly with transistor count. Traditional field solvers have run times that grow exponentially with transistor count, making them useful only for small cells.

Q: What is the performance on a large design?

A: A memory array with 2 million nets runs in just 28 hours when using 16 CPUs.

Q: Can your tool extract inductors?

A: Yes, it's just an option you can choose.

Q: How would xACT 3D work in a Cadence IC tool flow?


Q: Can I cross-probe parasitics in Cadence Virtuoso?

A: Yes, that uses Calibre RVE.


Q: Where would I use this tool in my flow?

A: Every place that you need the highest accuracy: device, cell and chip levels.

Summary
3D field solvers are not just for device-level IC parasitics; they can be used at the cell and chip levels as well when using multiple CPUs. The deterministic approach by Mentor gives me a safer feeling than the random-walk method because I don't need to worry about accuracy.

I've organized a panel discussion at DAC on the topic of 3D field solvers, so I hope to see you in San Diego this June.

Wanted: FPGA start-up! …Dead or Alive?

The recent announcement from Tabula about the $108 million raised in its Series D round of funding is putting the focus on FPGA technology, and on FPGA startups in particular. Who are these FPGA startups? What is their differentiation? Where is the innovation: in the product or the business model?

When you say FPGA, you first think customization: "field programmable" means any design engineer can do it (provided he has the right tool set). Almost immediately, the two brands Xilinx and Altera come to mind, illustrating the duopoly ruling the FPGA market. These two companies have been successful because they have been able to "standardize the customization" by creating numerous product lines, whether low cost, high density, DSP-centric, and so on. The "Makimoto's Wave" concept illustrates very well the expansion model of the customer-specific market (ASIC, SoC, FPGA, PLD…), oscillating between customization and standardization.



If we go further, we can say that innovation, brought by startups, is linked with customization, and maturity with standardization. When a new product finds a market because it offers valuable differentiation (customization), success passes through mass production, which requires a high level of standardization.

If we look at the history of PLD startups (published by EETimes in July 2009), we can see that almost all the startups since 1985 are dead, except QuickLogic and Atmel, both not doing so well. We can also see that lack of money is probably not the reason for failure, as the parent companies list includes AMD, Philips, TI, National Semiconductor, Samsung, SGS, Toshiba, IBM… That reminds me of the person in charge of the FPGA product line for Europe at TI in the mid-90s, a certain Warren East. Being in charge of an FPGA business may lead to success, as long as you have a chance to escape from the FPGA business, but I digress. The second statement of fact is that the companies still alive were all founded after 2004. Even if we can learn a lot from post-mortem analysis, we will take a look at the startups that are still alive, at their products, differentiation, and business models. And try to guess who has a chance of success…



An FPGA startup has to offer a differentiated product, usually based on technical innovation, though the differentiation can also come from a new business model, such as offering an FPGA "design block", or IP, to be integrated into a more traditional SoC (ASIC or ASSP). The list of "still alive" startups is short: Achronix, Menta, SiliconBlue, and Tabula.

Achronix is the first commercially launched FPGA company whose architecture differs from conventional ones. They have developed asynchronous FPGAs, allowing very high speed operation. Achronix claims to deliver the world's fastest FPGAs, with frequencies up to 1.5GHz; the Speedster family is fabricated on TSMC's 65nm process. They have hard blocks for memory, multipliers, SerDes, and PLLs, and also for memory and communication controllers. Their CAD tool suite, ACE (Achronix CAD Environment), provides the programmer a classical RTL tool flow by hiding all the effects of the asynchronous FPGA hardware. Target market segments are networking, telecommunications, DSP, high-performance computing, military and aerospace. The company got special attention from the industry in 2010 when it announced a partnership with Intel to make 22nm FPGAs on Intel's process, the first company to share such an advanced process technology.
Differentiation: The 3H: High Speed Logic, High density, High speed SerDes
Market: Networking, high performance computing
Challenge: directly compete with the Big-2 on the sweet spot (high ASP products)
Tech./Fab: TSMC 65nm





Menta licenses the world's first pure soft FPGA IP core. A soft IP is easy to integrate in an SoC, since it is synthesized with the standard HDL design flow. Being a soft core, Menta's eFPGA is technology independent, which simplifies SoC manufacturing since it can be integrated in any process technology the SoC targets. Implementing systems in programmable logic is slower, bigger, and more power-hungry than dedicated hardware; Menta is fully aware of this problem, and its ultra-compact architecture narrows that gap, giving the SoC designer the flexibility of an FPGA with ASIC-like performance.

Differentiation: Business Model (offering FPGA as an IP design block)
Market: ASSPs/ASIC, MCUs, Aerospace/Defense/Automotive
Challenge: Funding to strengthen technology and expand business
Tech./Fab: Technology independent! Current main focus for evaluation is ST 65nm and TSMC 45nm




SiliconBlue has a major focus on low-power FPGAs for battery-powered portable devices. Their iCE65 FPGA family is built on TSMC's low-power 65nm process. They have innovated in packaging and configuration mechanisms to make their devices compact, low-power, single-chip solutions. Their FPGAs have very low static power and, compared to other vendors' parts, are relatively small: logic cells (LUT4+FF) range from 1,200 to 16,000. They include embedded memory blocks and phase-locked loops (PLLs) as hard macros. They also offer their FPGAs in die form for SiP (System in Package) solutions. One of their most appreciated innovations is a single-chip solution using embedded non-volatile XPM memory from Kilopass, which loads the configuration into the FPGA's SRAM on power-up.
Differentiation: Low power
Market: Mobile market
Challenge: meet ASIC/ASSP price point
Tech./Fab: TSMC 65nm





Tabula's technology can be considered a masterpiece of dynamic reconfiguration. The device is not physically 3D in manufacturing; they treat time as the third axis. Thanks to this, their ABAX 3PLD devices, fabricated on TSMC's 40nm process, show gains of around 2.5X in logic density, 2.0X in memory, and 3.7X in DSP performance compared to an equivalent classical 2D FPGA. More importantly, as stated before, despite an architecture that is physically quite unnatural, the programming model is, according to the company, purely standard RTL-based; their 3D Spacetime Compiler makes this possible. Tabula claims to have Cisco as a customer.
Differentiation: Time based reconfiguration
Market: networking
Challenge: directly compete with the Big-2 on the sweet spot (high ASP products); product introduction has been very long (5+ years)
Tech./Fab: TSMC 40nm



These four startups are well segmented in terms of differentiation: one concentrates on low-complexity, very low power products targeting the mobile industry, while the second offers high-performance core logic (1.5 GHz peak performance) and a wide range of communication protocols with SerDes up to 12.5 Gbps, targeting the networking segment. The other two propose truly disruptive innovation: the third proposes to embed FPGA as a design IP block (business model innovation), targeting IDMs or fabless companies in multiple segments, while the fourth has created the "3D" FPGA concept in which the third dimension is time.

To be honest, none of these startups should "lose", as each offers a key differentiator. Except if they fail to attract customers because of an overly complex design flow or an overly expensive toolset, or fail to keep them because of poor execution (in production). Or simply if the duopoly closes the technology gap. We will probably dig deeper and come back in a future blog, as this is a fascinating part of the semiconductor industry.

I would like to thank Syed Zahid Ahmed (www.linkedin.com/in/SyedZahidAhmed), who helped me write this blog using the know-how of the emerging FPGA market acquired while doing research for his PhD. By the way, Zahid will defend his PhD in June, and is looking these days for his next adventure (job)!
Eric Esteve (eric.esteve@ip-nest.com)
Samsung is NOT a Foundry!
Samsung is the #1 electronics company, the #2 semiconductor company, and, for 20+ years, the world's largest memory chip maker. Analysts expect Samsung to catch Intel by 2014. In the foundry business, however, Samsung is a distant #9 after more than a five-year investment, and here's why:

Foundry 2010 Revenue:
(1) TSMC $13B
(2) UMC $4B
(3) GFI $3.5B
(4) SMIC $1.5B
(5) Dongbu $512M
(6) Tower/Jazz $509M
(7) Vanguard $508M
(8) IBM $430M
(9) Samsung $420M
(10) MagnaChip $405M

Dr. Kwang-Hyun Kim, Executive VP, Foundry Business, Samsung Electronics, keynoted the final day of the SNUG 2011 conference with "Consumer Driven Innovation in SoC Design". It was an excellent presentation on consumer electronics market trends and the associated challenges down to the semiconductor device level. It did not, however, contain anything that supports Samsung's position that it is serious about the foundry business.

Samsung Electronics' foundry business, Samsung Foundry, is dedicated to supporting fabless and IDM semiconductor companies, offering full-service solutions, from design kits and proven IP to fully turnkey manufacturing, across foundry, ASIC, and COT engagement models, to help them achieve market success with advanced IC designs.

The market trends Dr Kim covered were: high performance WITH low power, rapidly increasing bandwidth requirements (streaming mobile HD video), increasing design complexity (multicore everything), and short turn-around time (12 month product cycles versus 18 month SoC design cycles). Samsung Electronics has a nice Wikipedia page HERE. The Samsung Foundry site is HERE.



IDM 2010 Revenue:
(1) Intel $40B
(2) Samsung $32.6B
(3) Toshiba $13.4B
(4) TI $13B
(5) Renesas $11.8B
(6) Hynix $10B
(7) STMicro $10B
(8) Elpida $7B
(9) Infineon $6.2B
(10) Sony $5.6B

According to a 2010 interview with Ana Hunter, Samsung Semiconductor Vice President of Foundry Services, after years of trying, "Samsung's share of the foundry business is not as big as we want, but it takes time to put the pieces in place and ramp designs." Hunter stated that "The foundry business is part of our core strategy" and highlighted 6 reasons why Samsung believes it will succeed:

  1. Capacity – Samsung plans to double its production of chips for outside customers every year until it rivals market leader TSMC. I asked a Samsung executive what the Samsung foundry capacity is today, and he declined to answer. All other foundries are very open about this.
  2. Resources – Samsung is one of the few companies that has the resources to compete at the high end of the foundry market. Other than Intel, IBM, TSMC, UMC, SMIC, and GFI, of course.
  3. Leading Edge Technology – Samsung was ramping 45nm technology at a time when TSMC and others were struggling.
  4. Leading Edge Technology, part II – Samsung will be one of the first foundries to roll out a high-k/metal-gate solution. The technology will be offered at the 32nm and 28nm nodes. (TSMC and GFI will go straight to 28nm HKMG this year.)
  5. Leading Edge Technology, part III – Unlike rival TSMC, Samsung is using a gate-first HKMG technology; TSMC is going with gate-last. News flash: at 20nm and below, all foundries will use gate-last HKMG technology, which is a clear TSMC win.
  6. Ecosystem – Samsung has put the EDA pieces in place for the design-for-manufacturing puzzle. Actually, GlobalFoundries has, and Samsung is following their lead.

Let me reiterate the 6 reasons why I believe Samsung will continue to struggle as a foundry:
  1. Business Model – The foundry business is services-centric; the IDM business is not. This is a serious paradigm shift for Samsung. GlobalFoundries made the transition though, so it can be done.
  2. Customer Diversity – Supporting a handful of customers/products is a far cry from supporting the hundreds of customers and thousands of products TSMC does.
  3. Ecosystem – An open ecosystem is required, which includes supporting commercial EDA, semiconductor IP, and design services companies of all shapes and sizes.
  4. Conflict of Interest – Pure-play foundries will not compete with customers; not-pure-play foundries (Samsung/Intel) will. Would you share sensitive design, yield, and cost data with your competitor? Apple will move its iProduct ICs from Samsung to TSMC for this reason alone.
  5. China – The Chinese market represents the single largest growth opportunity for the foundry business. TSMC has a fab in Shanghai and 10% control of SMIC (#4), UMC (#2) has control of China's He Jian (#11), and Samsung does not even speak Mandarin.
  6. Competition – The foundry business is ultra-competitive and very sticky, and predatory pricing (product dumping) will not get you from #9 to #1.
DRC/DFM inside of Place and Route

Intro
Earlier this month I drove to Mentor Graphics in Wilsonville, Oregon and spoke with Michael Buehler-Garcia, Director of Marketing, and Nancy Nguyen, TME, both part of the Calibre Design-to-Silicon Division. I'm a big fan of correct-by-construction thinking in EDA tools, and what they had to say immediately caught my attention.

    The Old Way
    In a sequential thinking mode you could run a P&R tool, do physical signoff (DRC/LVS), optimize for DFM, find & fix timing or layout issues, then continue to iterate while hoping for manufacturing closure.


    The New Way
    Instead of a sequential process, what about a concurrent process? Well, in this case it really works to reduce the headaches of manufacturing closure.

    The concept is that while the P&R tool is running it has real time access to DRC, LVS and DFM engines. This approach is what I term correct-by-construction and it creates a tight, concurrent sub-flow of tools to get the job done smarter than with sequential tools.

The product is called Calibre InRoute, and it includes the following four engines:


    • P&R - Olympus SoC
    • DRC - Calibre nmDRC
    • LVS - Calibre nmLVS
    • DFM - Calibre DFM



    DRC Checking with the Sign-off Deck
Since Calibre is a golden sign-off tool at the foundries, it makes sense to use this engine and the sign-off rule deck while P&R is happening. Previous generations of P&R tools had some rudimentary DRC checks embedded, but now that we have hundreds of rules at 28nm, you cannot afford to rely on incomplete DRC checks during P&R.

What I saw in the demo were DRC checking results in both text and visual formats, including a detailed description of each DRC violation.


    OK, it's nice to have DRC checking but what about fixing the DRC violations for me?

InRoute does just that. Here's a DRC violation highlighted in yellow:

    And the same area after the DRC violation has been automatically fixed for me:


    DFM Issues - Find and Fix Automatically
Just like DRC issues, InRoute can find and fix your DFM issues, which saves time and speeds you through manufacturing closure:








    Litho Friendly Design
The perfectly rectangular polygons that we draw in our IC layouts are not what ends up on the masks or in silicon. Lithography process effects can be taken into account while InRoute is running, to identify issues and even fix some of them. Here's an example showing a minimum-width violation that was identified and auto-fixed:


    Who is Using This?
    STMicroelectronics is a tier-one IC design company that is one of the first to publicly talk about Calibre InRoute.

    “We have used Calibre InRoute on a production 55nm SOC. InRoute has successfully corrected the DRC violations caused by several complex IPs, whose ‘abstract’ views did not fully match the underlying layout, as well as several detailed routing DRC violations,” said Philippe Magarshack, STMicroelectronics Technology R&D Group Vice President and Central CAD and Design Solutions General Manager.

    Conclusion
It appears that the old way of buying point tools (P&R, DRC, LVS, DFM, litho) from multiple EDA vendors and creating your own sub-flows will get you through manufacturing closure only after multiple iterations, while this new concurrent approach, with EDA tools that are qualified by the foundries, reduces iterations significantly. Calibre InRoute looks well suited to the challenges of manufacturing closure.

Andrew Yang's presentation at Globalpress electronic summit

Yesterday at the Globalpress electronic summit, Andrew gave an overview of the Apache product line, carefully avoiding saying anything he could not due to the filing of Apache's S-1. From a financial point of view, the company has had 8 years of consecutive growth, has been profitable since 2008, and has no debt. During 2010, when the EDA industry grew 9%, Apache grew 27%. The detailed P&L can be found in Apache's S-1.

Apache is focused on low power, a problem that is going to be with us for the foreseeable future. Per Gary Smith, Apache has a 73% market share in physical power analysis and is the sign-off solution for all 20 of the iSuppli top-20 semiconductor companies.

     Apache's business in power and noise is driven by some underlying industry trends. The number of transistors per chip is rising (more power density), power supply voltages are coming down and getting closer and closer to threshold voltages (less noise margin), I/O performance is increasing (more noise) and packaging pin counts are exploding.

These trends have driven Apache's solutions in 3 areas: power budgeting, power delivery integrity, and power-induced noise.

The really big underlying trend, the one that keeps designers awake at night, is the growing disparity between power budgets, which don't really increase since we are not looking for shorter battery life on our devices, and power consumption. I've seen a similar worry from Mike Muller of ARM, looking at how many processors we'll be able to put on a chip and worrying that we won't be able to light them all up at once for power reasons.

Another growing problem is power noise from signals going out from the chip, through the package, onto the board, back into the package, and onto the chip again. The only way to handle this is to look at the whole chip-package-system, including decoupling capacitors, the power grid, well capacitance, and so on: all factors we've managed to avoid up to now.

2011 Semiconductor Design Forecast: Partly Cloudy!

This was my first SNUG (Synopsys User Group) meeting as media, so it was a groundbreaking event. Media was still barred from some of the sessions but hey, it's a start. The most blog-worthy announcement on day 1 was that Synopsys signed a deal with Amazon to bring the cloud to mainstream EDA!

Even more blog-worthy was a media roundtable with Aart de Geus. Aart more than hinted that cloud computing, or SaaS (software as a service), will also be used to change the EDA business model. The strategy is to offer cloud services for "peak" simulation requirements, when customers need hundreds of CPUs for short periods of time. When customers get comfortable with EDA in the cloud they will switch completely and cut millions of dollars from their IT budgets (my opinion).

    Yes, I know other EDA companies have dabbled in the cloud but dabbling in something that will DISRUPT an industry does not cut it. Cadence is a cloud poser and so is EDA up to this point. Synopsys, and Aart de Geus specifically, can make this transition happen, absolutely. When Aart discussed it, he had passion in his voice and determination in his eyes, do not bet against Aart on this one.


Can EDA software take advantage of a big fat cloud? Yes, of course. Synopsys will start with VCS and add other tape-out-gating, compute-intensive applications like HSPICE and, hopefully, DRC. Is a cloud secure enough for EDA? Aart mentioned that the Amazon cloud is "military secure!" Not the best analogy! I would say that Amazon is more than military secure, and much more secure than any private data center. Traditional endpoint security is no longer enough; you must have truly massive perimeter control, and only cloud companies like Amazon can facilitate that.

    It would also be nice to get all of those private EDA saturated CPUs off the Silicon Valley power grid and into a cloud powered by greener energy sources. Right?

    How can a cloud help EDA grow? Well, clearly EDA360 has fizzled so my bet is on the cloud. Not only will this free up millions of dollars in IT and facilities expense, it also presents EDA with the opportunity for a much needed business model change. I gave Aart business model grief at the EDA CEO panel and in a follow-up blog EDA / IP Business Model Debate: Daniel Nenni versus Aart de Geus. Hopefully this is his response. A cloud based business model, or Software as a Service (SaaS), is much more collaborative and presents significant incremental revenue opportunities for EDA.


    The other thing I questioned Aart on is his run for California Governor. The response to the Aart de Geus (Synopsys) for Governor! blog was overwhelming with big views from Sacramento and even Washington DC. Unfortunately Aart told me that he is not up for the challenge and will continue to shape the world through semiconductors. Probably the only thing Aart and I have in common is determination so this is not over by a long shot.

One final comment on SNUG: the vendor exhibition was one of the best I have seen and something I have asked for in the past. 60 or so vendors participated in uniform 10x10 booths, a level playing field for big companies and small. 2000+ people attended (my guess). We were wined, dined, and demo'd, business as it should be. Only one thing was missing: John Cooley! I know times are tough John, but how could you miss the 21st birthday of the child you co-parented? Did the invitation get lost in your SPAM filter? Seriously, you were missed.

Dawn at the OASIS, Dusk for GDSII

For an industry committed to constant innovation, changes in any part of the design flow are adopted only slowly, and only when absolutely necessary. Almost 10 years ago, it became clear that shrinking process technologies would bring massive growth in layout and mask data, roughly 50% per node. This avalanche of data seriously challenges the two de facto standard file formats for layout data, GDSII and MEBES.


    Results of experiments on real designs by Joseph Davis and team.

With surprising foresight, the industry came together and formed a working group to define a new format: the Open Artwork System Interchange Standard, or OASIS® (P39), and OASIS.MASK (P44). In 2005, when the OASIS format was officially approved, it was quickly supported by RET software from all of the major EDA vendors, and some leading-edge companies such as Intel, TI, Fujitsu, NEC, and IBM had programs in place. OASIS looked primed for quick adoption.


    5 years later…


My colleagues and I conducted an industry survey to find out how prevalent the OASIS format has become, and presented the results at the European Mask and Lithography Conference in 2010. Figure 1 shows the results as a function of technology node at two points in the flow: the handoff from RET to fracture, and the handoff of fractured data to the mask house.


    Figure 1: OASIS adoption by technology node, broken down by the data-prep hand-offs. The non-zero adoption rate in older technologies reflects the fact that some manufacturers came on-line with those technologies when OASIS was widely available and proven in the production flow.

    As of 2010, foundries have widely adopted OASIS for the post-tapeout flow and report at least a 10x file compression improvement. However, for 45 nm designs in 2009 there was still very little use of OASIS as a stream-out format from design house to foundry, or from foundry to mask house. So, if OASIS isn’t in production for mask making – the application that was the impetus for its creation – and it isn’t the standard for tape-out to the foundries, is OASIS dead? Was the data explosion a mirage on the sand? Not at all.

    The first thing that jumps out from this chart is that adoption of OASIS in the RET step led that of the fracture step by two whole technology nodes. Since the mask data is largest after fracture, many expected that the hand-off from fracture to mask making would have the fastest adoption. Why was the RET step, which deals with smaller files, the first place where OASIS was adopted?

    Diffusion of Innovation
    As in the adoption of any new technology, the new technology must present a solution to a known problem in order to gain acceptance. The rate of adoption is related to
    • Costs and risk associated with continuing the status quo
    • Gain associated with adopting the new technology
    • Costs and risks associated with changing to the new technology
    • Environmental factors that can either accelerate or inhibit adoption of the solution.

Cost of inaction – The direct, measurable cost of storing and processing very large files. This direct cost has been flat because the cost of hard disk storage and internet bandwidth has been decreasing at almost exactly the same rate that storage needs have been increasing. However, larger files take more time to process, load into viewing tools, review, transfer, etc. These effects are real, but difficult to measure directly. The RET and fracture steps have approximately the same cost of inaction.

    Risk of inaction – Eventually, one of the layouts will actually exceed the capabilities of the legacy file formats and the chip will literally not be manufacturable. At each node, the foundry and mask houses can estimate the probability of this happening.

    Benefits of migration – Lower file size, processing time, and engineering review time. For RET, the file size is reduced ~5-10x with OASIS. For the fracture step, the gain is less (2-4x), but using OASIS can also eliminate the need for multiple file formats for different mask writing and inspection machines.

Cost of migration – The upgrade cost plus the cost of qualifying the new format in the manufacturing flow. For RET, the upgrade cost is negligible, as RET and associated software are updated quarterly. Qualification can be achieved in parallel with the existing flow, so the overhead is small. However, the mask tools must be able to accept OASIS as input, which likely requires new hardware to be purchased at a cost of millions per machine.
Risk of migration – The probability of data loss or corruption cannot be predicted, and can only be mitigated by a lengthy prove-out period.

    Environmental factors – The technology development cycle. Early technology development requires test chips, which need masks. Therefore, mask hardware vendors must have their products ready very early in a technology. Mask houses won’t demand OASIS support until it has been production proven. The RET hand-off, on the other hand, is software-to-software, which is more easily updated than hardware. Therefore, the post RET hand-off is a natural place to test and prove the new format.

    Looking Forward…


    From the starting point of having full support in the EDA software, 18 months for proving in a technology node, and a two year model of technology development, it is natural that mask tools are just now starting to support OASIS, five years after it was fully supported by the EDA industry. This process of downstream migration will naturally continue, as the new format has proven to add value throughout the flow.

    We anticipate a gradual expansion of the full adoption of OASIS. But there are benefits even for hybrid flows, in which both OASIS and legacy formats are used. Figure 2 shows the relative runtime for several different mask manufacturing flows, from the current state to a full OASIS deployment.


    Figure 2: Data processing effort for mask manufacturing with increasing extent of machine support for OASIS.MASK. The basic assumption is that commonly three formatting steps are conducted (Fracture 1, Fracture 2, Fracture 3). OASIS.MASK introduction has the potential to reduce the overall effort by 3x.

In the design area, we expect OASIS to be used increasingly in the chip assembly/chip finishing stage, especially for large designs. This is the area where reducing file size can improve the overall infrastructure burden and turn-around time for activities such as physical verification, file merging, etc. In fact, the de facto standard custom design tool (Virtuoso) officially started OASIS support in February 2011. Other stages of the design process may benefit from other aspects of the OASIS format, such as encryption and the structure of the data storage format (indexes, etc.); the value of these features will depend on the specific design flow and design types.
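As a rough way to gauge the file-size effect on your own data, here is a minimal sketch using KLayout's standalone Python module (my choice of tool, not one discussed in the article; the file names are placeholders, and the output format is inferred from the .oas extension):

```python
import os
import klayout.db as db  # pip install klayout (standalone KLayout module)

layout = db.Layout()
layout.read("design.gds")    # placeholder GDSII input
layout.write("design.oas")   # write the same cells back out as OASIS

gds = os.path.getsize("design.gds")
oas = os.path.getsize("design.oas")
print(f"GDSII {gds/1e6:.1f} MB -> OASIS {oas/1e6:.1f} MB ({gds/oas:.1f}x smaller)")
```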

    Summary


The OASIS formats offer at least 10x data volume reduction for design and post-RET data, and over 4x for fractured data. The new formats were quickly supported by the EDA companies, and adoption in production flows is progressing, led by the post-RET data hand-off starting at the 65nm node, where more than half of those surveyed are using it.

    The deployment of OASIS and OASIS.MASK has been strongly affected by both economic and technical factors. Yet even partial deployment, along with format translation, can offer a significant benefit in data processing time and file size reduction that will meet the post-tape out and mask making demands of designs at 22nm and below. With the continued increase in design complexity, OASIS deployment will continue to grow in both the manufacturing and design flows.

    --Joseph Davis, Mentor Graphics

To learn more, download the full technical publication about this work: Deployment of OASIS.MASK (P44) as Direct Input for Mask Inspection of Advanced Photomasks.

ARM and GlobalFoundries
Although there has always been a strong relationship between ARM and GlobalFoundries, it is interesting to note that Intel helped to boost it and make it even stronger. Indeed, when AMD renegotiated its x86 licensing deal with Intel in 2009, one of the most significant long-term changes was a marked reduction in how much of GlobalFoundries AMD had to own in order to remain within the terms of its manufacturing license. As a result of this change, AMD announced in January 2010 that it intended to significantly accelerate the financial split between itself and GlobalFoundries; we have now seen the impact of that transition on the GlobalFoundries side of the business. During 2010, GFI developed a new strategic partnership with ARM, in which the two companies collaborate on leading-edge, 28nm system-on-chip (SoC) designs. This strategy should allow GlobalFoundries to attract more customers, especially those designing application processors for the wireless handset segment. Keep in mind that the smartphone market was 302 million units in 2010, with a 70% YoY growth rate (and is expected to grow to about 600 million in 2015), compared with a total PC market of 350 million units in which Intel processors hold an 80% share, leaving a TAM of 70 million units for its competitor and the foundries processing those processors. We now better understand how strategic such a move is for GlobalFoundries: enhancing the ARM partnership, and being the first to support the ARM Cortex-A9 in 28nm.

ARM processor IP strengths are well known: for a similar performance level, an ARM-based chip's power consumption will be 50% less than an Intel IC's, with standby power better by a factor of up to ten, though this depends strongly on the chip maker's know-how in power management. One weakness of the ARM architecture, the lack of Microsoft support, is expected to vanish quickly, as Microsoft announced at the 2011 CES its support of "SoC architecture, including ARM based systems". This evolution is more like a revolution, as it is a first in the 20 years the ARM architecture has been available!



GlobalFoundries decided in 2009 to be the first foundry to work with ARM to enable a 28nm Cortex-A9 SoC solution. The SoC enablement program, built around a full suite of ARM physical IP, fabric IP, and processor IP, will give customers advanced design flexibility. The collaborative efforts of the partnership will initially focus on enabling SoC products that use the low-power, high-performance ARM Cortex-A9 processor on GlobalFoundries' 28nm HKMG process.

Looking at the flow that speeds up time-to-volume for foundry customers, we see that the last milestone is to develop, process, and characterize a Product Qualification Vehicle (PQV). In this case, the jointly developed Test Qualification Vehicle (TQV) reached the tapeout stage in August 2010 at GLOBALFOUNDRIES Fab 1 in Dresden, Germany. If we look at the different building blocks of this TQV (standard cells, I/O, memory, the Cortex-A9 core, and market-specific IP like USB, PCIe, and a mobile DDR controller), we see that together they allow a product-grade SoC to be built. Once the TQV has taped out and been processed, running validation on the silicon samples allows data correlation, improving the accuracy of the models used by the designers of "real" products.



The TQV will be based on GLOBALFOUNDRIES' 28nm High Performance (HP) technology, targeted at high-performance wired applications. The collaboration will also include the 28nm High Performance Plus (HPP) technology for both wired and high-performance mobile applications, and the Super Low Power (SLP) technology for power-sensitive mobile and consumer applications. All technologies feature GLOBALFOUNDRIES' innovative gate-first approach to HKMG. The approach is superior to other 28nm HKMG solutions in both scalability and manufacturability, offering a substantially smaller die size and cost, as well as compatibility with proven design elements and process flows from previous technology nodes. Comparing the same Cortex-A9-based chips, those built using GlobalFoundries' 28nm HKMG will deliver a 40% performance increase within the same thermal envelope as 40nm or 45nm products. Coupling their know-how, ARM and GlobalFoundries also say they can achieve up to 30% lower power consumption and 100% longer standby battery life.


As you can see in this figure from ARM, a design team can always optimize the core instantiation for the application and the desired performance-to-power trade-off by selecting the right library type. They can also design, within the same chip, specific blocks targeting high speed using the 12-track-high cells, while the rest of the chip is optimized first for power consumption (if the target application is battery powered) or for density (when the target application requires the lowest possible unit price). Having these libraries and processor IP from ARM available on various process nodes and variants (like HP, HPP, and SLP in 28nm HKMG) is key for the semiconductor community: the fabless companies, and also the IDMs, who increasingly adopt a "fab-lite" profile.
Because the layout picture of a device tells more than a long talk (and also because it reminds me of my time as an ASIC designer), I cannot resist showing you the layout of the Cortex-A9-based TQV device:



    Eric Esteve (eric.esteve@ip-nest.com)

Process Design Kits: PDKs, iPDKs, openPDKs

One of the first things that needs to be created when bringing up a new process is the Process Design Kit, or PDK. Years ago, back when I was running the custom IC business line at Cadence, we had a dominant position with the Virtuoso layout editor, so creating a PDK really meant creating a Virtuoso PDK, and it was a fairly straightforward task for those process generations.

The PDK contains descriptions of the basic building blocks of the process: transistors, contacts, etc. These are expressed algorithmically as PCells so that they adjust automatically to their parameters. For example, as a contacted area gets larger, additional contact openings will be created (and perhaps even removed, depending on the design rules), as the sketch below illustrates.
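To make the PCell idea concrete, here is a toy sketch in plain Python (not SKILL or PyCell, and the design-rule numbers are invented placeholders, not any foundry's values) of how a contact array grows automatically with the contacted region:

```python
def contact_array(width, height, cut=0.05, space=0.06, enclosure=0.03):
    """Toy PCell body: return lower-left corners (in microns) of the legal
    number of square contact cuts that fit a width x height region."""
    pitch = cut + space
    nx = max(1, int((width  - 2 * enclosure - cut) // pitch) + 1)
    ny = max(1, int((height - 2 * enclosure - cut) // pitch) + 1)
    # Center the array in the region, as a real PCell generator would
    x0 = (width  - ((nx - 1) * pitch + cut)) / 2
    y0 = (height - ((ny - 1) * pitch + cut)) / 2
    return [(x0 + i * pitch, y0 + j * pitch) for i in range(nx) for j in range(ny)]

print(len(contact_array(0.3, 0.3)))  # small region -> few cuts
print(len(contact_array(0.6, 0.6)))  # enlarge the region -> more cuts appear
```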

Two things have changed. First, Virtuoso is no longer the only game in town: all the major EDA companies have their own serious offerings in the custom layout space, plus there are others. But none of these other editors can read a Virtuoso PDK, which is based on Cadence's SKILL language. The second thing that has changed is that design rules are so much more complex that creating the PDK is a significant investment. Creating multiple PDKs, one for each layout editor, is more work still, and work that doesn't really bring a lot of value to either the foundry or the user.

    Since Cadence isn't about to put its PDKs (and PCells) into the public domain as a standard everyone can use, a new standard was needed. The Interoperable PDK Libraries Alliance (IPL), working with TSMC, standardized on using Ciranova's PyCell approach (based on Python rather than SKILL) and created the iPDK which is supported by all the layout editors (even Virtuoso, at least unofficially).

But if one standard is good, two are even better, right? Well, no. But there is a second portable PDK standard anyway, called OpenPDK, developed under the umbrella of Si2, although that work only started last year and hasn't yet delivered actual PDKs.

    There is a lot of suspicion around the control of these standards. iPDK is seen as a TSMC standard and, as a result, Global Foundries won't support it. They only support the Virtuoso PDK, which seems a curious strategy for a #2 player wanting to steal business from TSMC and its customers. Their Virtuoso-only strategy makes it unnecessarily hard for layout vendors to support customers who have picked other layout systems.

    Si2 is perceived by other EDA vendors as being too close to Cadence (they also nurture OpenAccess and CPF, which both started off internally inside Cadence) and so there is a suspicion that it is in Cadence's interests to have an open standard but one that is less powerful than the Virtuoso PDK. Naturally, Cadence would like to continue to be the leader in the layout space for as long as possible.

It remains to be seen how this will all play out. It would seem to be in the foundries' interest to have a level playing field in layout systems, instead of a de facto Cadence monopoly. TSMC clearly thinks so. However, right now GlobalFoundries seems to be doing what it can to prop up the monopoly, at least until OpenPDK delivers.





In part I of this series, we looked at the history of lithography process models, starting in 1976. Some technologies born in that era, like the Concorde and the space shuttle, came to the end of their roads. Others did indeed grow and develop, such as the technologies for mobile computing and home entertainment. And lithography process models continue to enable sub-wavelength patterning beyond anyone's imagination a few years ago. As for the lasting impact of Barry Manilow, well, you can't argue with genius. But back to lithography process models. Here's a summary timeline of process model development:



    In this second part in the series, I want to talk even more about the models themselves. The next parts will address requirements like accuracy, calibration, and runtime, as well as the emerging issues. I particularly appreciate the reader comments on Part I, and will attempt to address them all. [Yes, I take requests!]

    Recall that TCAD tools are restricted to relatively small layout areas. Full-chip, model-based OPC can process several orders of magnitude more in layout area, partly because of a reduction in problem dimensionality. A single Z plane 2D contour is sufficient to represent the relevant proximity effect for full-chip OPC. Some of the predictive power of TCAD simulation is not relevant for OPC given that the patterning process must be static in manufacturing as successive designs are built.

There are domains of variability where a model needs to predict dynamically, but these are largely limited to errors in mask dimension, dose, focus, and overlay. Dose can serve as a proxy for a variety of different manufacturing process excursions, such as PEB time and temperature. Some mathematical facets of the photoresist chemistry, such as acid-base neutralization or diffusion, can be incorporated into process models, but useful simulation results do not depend on a detailed mechanistic chemical and kinetic understanding. The information that would come from such mechanistic models is very useful for process development, but not strictly necessary for OPC in manufacturing.

Optical models for a single plane did not require dramatic simplification from TCAD to OPC, but the photoresist and etch process models used in full-chip OPC are fundamentally different. Starting with the Cobb threshold approach, these photoresist and etch process models are variously referred to as semi-empirical, black box, compact, phenomenological, lumped, or behavioral. Whatever you call them, they are characterized by a mathematical formulation that provides a transfer function between known system inputs and measured outputs of interest. Notably, the user does not need access to sophisticated physiochemical characterization methods; rather, all the inputs required for the model are readily available in the fab.

    Photoresist Models
    There are two basic types of photoresist process models used in full-chip simulations: those that threshold the aerial image in some manner, and those that transform the aerial image shape. Alternatively, the model types could be parsed by variable versus constant threshold. Earlier full-chip models were based upon the aerial image cutline of intensity versus position, with the simplest form being a constant threshold. Accuracy was increased by defining the threshold as a polynomial in various simulated image properties associated with the aerial intensity profile, as shown in Figure 1. Initially Imin and Imax were utilized, then image slope was added, then image intensity at neighboring sites, and finally a variety of functions that capture the pattern density surrounding the site under consideration. Thus many different modelforms are possible.


    Figure 1. Schematic of variable-threshold resist models.
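
    To make the variable-threshold idea concrete, here is a minimal sketch in Python. The polynomial coefficients, the image profile, and the chosen image properties are all hypothetical; a production model would use a calibrated modelform with many more terms.

        import numpy as np

        def variable_threshold(i_max, i_min, slope, c=(0.30, 0.05, -0.04, 0.02)):
            # Threshold as a low-order polynomial in aerial-image properties
            c0, c1, c2, c3 = c
            return c0 + c1 * i_max + c2 * i_min + c3 * slope

        # Aerial-image intensity along a 1D cutline (illustrative profile)
        x = np.linspace(-200.0, 200.0, 4001)             # position in nm
        intensity = 0.5 + 0.45 * np.cos(2 * np.pi * x / 360.0)

        t = variable_threshold(intensity.max(), intensity.min(),
                               np.abs(np.gradient(intensity, x)).max())

        # Predicted CD = width of the region where the image clears the threshold
        above = x[intensity >= t]
        print(f"threshold = {t:.3f}, predicted CD = {above[-1] - above[0]:.1f} nm")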

    More recent full-chip simulation (“dense” simulation, which I’ll discuss in another part of this series) was accompanied by a new type of resist model (CM1) that applies a constant threshold to a two-dimensional resist surface. The resist surface is generated by applying a variety of fast mathematical operators to the aerial image surface, including neutralization, differentiation of order k, raising to power n, kernel convolution, and root of order p. This is expressed in the equation below:

    [Equation, reconstructed from the surrounding text: resist surface R(x,y) = Σi Ci Mi[I](x,y), a linear combination of the operators Mi applied to the aerial image I; a constant threshold is then applied to R]

    The user specifies a modelform that selects which operators and which k, n, and p values are used; thus, as with the variable-threshold model above, a huge number of different forms are possible. The linear coefficients Ci and the continuous parameters b and s are found by minimizing the objective function during calibration.
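
    As a rough illustration of how such a compact model is evaluated, here is a sketch in the same spirit. The operator subset, coefficients, diffusion length, and threshold value are invented for illustration and are not CM1's actual modelform.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def resist_surface(aerial, C=(1.0, 0.15, 0.08, 0.05), s=2.0, n=2):
            # Linear combination of fast operators applied to the aerial image
            blur = gaussian_filter(aerial, sigma=s)  # kernel convolution (diffusion-like)
            grad = np.hypot(*np.gradient(aerial))    # first-derivative magnitude term
            return C[0] * aerial + C[1] * blur + C[2] * aerial**n + C[3] * grad

        aerial = np.random.default_rng(0).random((64, 64))  # stand-in aerial image
        printed = resist_surface(aerial) >= 0.7             # constant threshold on R
        print(f"{printed.mean():.1%} of pixels print")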

    Nominal-exposure model fit results are shown in Figures 2 through 4: Figure 2 compares a constant-threshold aerial image result with a CM1 model fit for 1D features, Figure 3 does the same for 2D features, and Figure 4 shows the overall CM1 model fitness for 695 gauges.


    Figure 2. CM1 modelfit for 1D structures.


    Figure 3. CM1 modelfit for 2D structures.


    Figure 4. CM1 modelfit for all structures (695 gauges).

    It is interesting to note that the accuracy of OPC models has roughly scaled with critical dimensions: an early paper by Conrad et al. on a 250 nm process reported errors of 17 nm 3σ for nominal and 40 nm 3σ for defocus models. Model accuracy for today’s 22 nm processes is on the order of 10X better than those values. Typical through-process-window results are shown in Figure 5: a CDerrRMS of 1 nm is achieved, and errRMS is maintained below 2.5 nm throughout the defined focus and dose window.


    Figure 5. Example CM1 model fitness at different focus settings and three exposure doses. Accuracy is comparable to TCAD models.
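
    For reference, the fitness metric quoted above is just the root-mean-square of the per-gauge CD errors; a minimal sketch, with hypothetical gauge values:

        import numpy as np

        def cd_err_rms(cd_model_nm, cd_measured_nm):
            # Root-mean-square of per-gauge CD error, in nm
            err = np.asarray(cd_model_nm) - np.asarray(cd_measured_nm)
            return float(np.sqrt(np.mean(err ** 2)))

        print(cd_err_rms([45.2, 60.1, 90.4], [45.0, 59.8, 91.0]))  # ~0.40 nm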

    Etch proximity effects are known to operate locally (i.e., on a scale similar to that of “optical” proximity effects) as well as over longer distances approaching the mm scale. Long-distance loading effects can be accounted for, but typically at the cost of long simulation runtimes, whereas the shorter-range effects can be compensated effectively. The two primary phenomena are aspect-ratio-dependent etch rates (ARDE) and microloading. With ARDE, the etch rate, and therefore the bias, depends upon the space being etched, while with microloading the etch bias depends upon the density of resist pattern within a region of interest. Different kernel types can accurately represent these phenomena, capturing the local pattern density and the line-of-sight visible pattern density. When used in combination, these variable etch bias (VEB) models can yield a very accurate representation of the etch bias as a function of feature type, as shown in Figure 6.


    Figure 6. Example VEB model fitness for a 45 nm poly layer, showing model error for different kernel combinations: two Gaussian kernels, three Gaussian kernels, and two Gaussian kernels plus one visible kernel.
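
    A minimal sketch of the density-kernel idea behind VEB models follows. The base bias, weights, and kernel ranges are invented, and the line-of-sight "visible" kernel is omitted for brevity; only the Gaussian pattern-density terms are shown.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def etch_bias_nm(resist_density, weights=(0.8, 0.3),
                         sigmas_px=(5.0, 50.0), b0=1.2):
            # Bias = b0 + weighted pattern density seen through each Gaussian kernel
            bias = np.full(resist_density.shape, b0)
            for w, s in zip(weights, sigmas_px):
                bias += w * gaussian_filter(resist_density, sigma=s)
            return bias

        density = (np.random.default_rng(1).random((256, 256)) > 0.5).astype(float)
        print(etch_bias_nm(density).mean())  # ~1.2 + (0.8 + 0.3) * 0.5 = ~1.75 nm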

    So there you have the two main OPC model types. Next I’ll talk about how they actually work in practice, including the concepts of sparse vs. dense simulation and how the OPC software addresses the principal requirements of accuracy and predictability, ease of calibration, and runtime.


    OPC Model Accuracy and Predictability – Evolution of Lithography Process Models, Part III
    Mask and Optical Models – Evolution of Lithography Process Models, Part IV

    John Sturtevant, Mentor Graphics [post_title] => Evolution of Lithography Process Models, Part II [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => closed [post_password] => [post_name] => evolution-of-lithography-process-models-part-ii [to_ping] => [pinged] => [post_modified] => 2019-06-14 20:51:54 [post_modified_gmt] => 2019-06-15 01:51:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.semiwiki.com/word5/uncategorized/evolution-of-lithography-process-models-part-ii.html/ [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 10 [current_post] => -1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 458 [post_author] => 3 [post_date] => 2011-04-07 16:53:00 [post_date_gmt] => 2011-04-07 16:53:00 [post_content] => Introduction
    In the early days we made paper plots of an IC layout, measured the width and length of interconnect segments with a ruler to add up all of the squares, and then multiplied by the resistance per square. It was tedious, error-prone, and took way too much time, but we were rewarded with accurate parasitic values for our SPICE circuit simulations.
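
    In today's terms that hand calculation is just a couple of lines of Python; the sheet resistance value below is illustrative, not from any particular process.

        def wire_resistance_ohms(length_um, width_um, rs_ohms_per_sq):
            squares = length_um / width_um   # each W-by-W segment is one "square"
            return squares * rs_ohms_per_sq

        # A 100 um long, 0.5 um wide line at 0.08 ohm/sq: 200 squares -> 16 ohms
        print(wire_resistance_ohms(100.0, 0.5, 0.08))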

    Today we have many automated technologies to choose from when it comes to extracting parasitic values from an IC layout. These parasitic values ensure that our SPICE simulations provide the right timing values, detect glitches, and measure the effects of cross-talk.


    Accuracy vs Speed

    [Chart: extraction approaches plotted by accuracy vs. run-time performance]

    The first automated parasitic extraction tools used per-layer rules that give resistance and capacitance as a function of width, length, and proximity to other layers. These tools are fast and reasonably accurate for nodes with wide, low-height interconnect. As interconnect height has grown, the accuracy of these rules has diminished because of the complex 3D nature of nearby layers.
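
    A toy version of such a rule deck shows the flavor; the coefficients and the simple coupling term are invented, not taken from any real extraction rules.

        def wire_capacitance_ff(length_um, width_um, spacing_um,
                                c_area=0.04, c_fringe=0.03, c_couple=0.05):
            area = c_area * length_um * width_um        # plate term to substrate
            fringe = c_fringe * 2 * length_um           # fringing from both edges
            couple = c_couple * length_um / spacing_um  # grows as neighbors close in
            return area + fringe + couple

        print(f"{wire_capacitance_ff(100.0, 0.5, 0.2):.1f} fF")  # 33.0 fF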

    3D field solvers have been around for over a decade and offer the ultimate in accuracy, with the major downside being slow run times. The chart above places 3D field solvers in the upper left-hand corner: high accuracy, low performance.

    Here's a quick comparison of four different approaches to extracting IC parasitics:

    Approach                    | Plus                                            | Minus
    ----------------------------|-------------------------------------------------|--------------------------------------------------------------------------
    Rule-based/Pattern Matching | Status quo; familiar; full-chip                 | Unsuitable for complex structures; unable to reach within 5% of reference
    Traditional Field Solver    | Reference accuracy                              | Long run times; limited to devices
    Random-Walk Field Solver    | Improved integration                            | 3 to 4X slower than deterministic
    Deterministic Field Solver  | Reference-like accuracy; as fast as rule-based  | Multiple CPUs required (4-8)


    What if you could find a tool in the upper right-hand corner, offering both high accuracy and fast run times?

    That corner is the goal of a new breed of 3D field solvers, where the highest accuracy and fast run times co-exist.

    Mentor's 3D Field Solver
    I learned more about 3D field solvers from Claudia Relyea, a TME at Mentor for the Calibre xACT 3D tool, when we met last month in Wilsonville, Oregon. The xACT 3D tool is a deterministic 3D field solver that uses multiple CPUs to achieve faster run times. A white paper is available for download here.

    Q: Why shouldn't I try a 3D field solver with a random-walk approach?

    A: Well, your results with a random-walk tool will have a higher error level. Let's say you have 1 million nets in your design; with a sigma of 1%, roughly 3,000 nets will be more than 3% off from a reference result. For sensitive analog circuits and data converters, that level of inaccuracy will make your chip fail.
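
    The arithmetic checks out if the random-walk error is roughly Gaussian per net; the nets more than 3% off are the ones beyond 3 sigma:

        from math import erfc, sqrt

        nets = 1_000_000
        sigma = 0.01                         # 1% random-walk noise per net
        frac = erfc(0.03 / sigma / sqrt(2))  # P(|error| > 3%) = P(beyond 3 sigma)
        print(round(nets * frac))            # ~2700, i.e. roughly 3,000 nets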

    Q: What is the speed difference between xACT 3D and random walk tools?

    A: We see xACT 3D running about 4X faster.

    Q: What kind of run times can I expect with your 3D field solver?

    A: About 120K nets/hour when using 32 CPUs, and 65K nets/hour with 16 CPUs.

    Q: How is the accuracy of your tool compared to something like Raphael?

    A: On a 28 nm NAND chip we saw xACT 3D numbers that were within +1.5% to -2.9% of Raphael results.

    Q: Which customers are using xACT 3D?

    A: Over a dozen; the ones we can mention are STARC, eSilicon, and UMC.

    Q: For a device level example, how do you compare to a reference field solver?

    A: xACT 3D ran in 9 seconds versus 4.5 hours, and the error versus the reference was between +4.5% and -3.8%.

    Q: What kind of accuracy would I expect on an SRAM cell?

    A: We ran an SRAM design and found xACT 3D was within 2.07% of reference results.

    Q: How does the run time scale with the transistor count?

    A: Calibre xACT 3D run time scales linearly with transistor count. Traditional field solvers have an exponential run time with transistor count, making them useful only for small cells.

    Q: What is the performance on a large design?

    A: A memory array with 2 million nets runs in just 28 hours when using 16 CPUs.
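
    Those throughput and runtime figures are roughly self-consistent; quick arithmetic on the numbers quoted in this Q&A:

        rate_16, rate_32 = 65_000, 120_000  # quoted nets/hour at 16 and 32 CPUs
        print(rate_32 / rate_16)            # ~1.85x throughput for 2x the CPUs
        print(2_000_000 / rate_16)          # ~30.8 hours for the 2M-net array,
                                            # in line with the quoted 28 hours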

    Q: Can your tool extract inductors?

    A: Yes, it's just an option you can choose.

    Q: How would xACT 3D work in a Cadence IC tool flow?


    Q: Can I cross-probe parasitics in Cadence Virtuoso?

    A: Yes, that uses Calibre RVE.


    Q: Where would I use this tool in my flow?

    A: Every place that you need the highest accuracy: device, cell and chip levels.

    Summary
    3D field solvers are not just for device-level IC parasitics; they can be used at the cell and chip levels as well when running on multiple CPUs. The deterministic approach from Mentor gives me a safer feeling than the random-walk method because I don't have to worry about accuracy.

    I've organized a panel discussion at DAC on the topic of 3D field solvers, so I hope to see you in San Diego this June.

    [post_title] => Who Needs a 3D Field Solver for IC Design? [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => closed [post_password] => [post_name] => who-needs-a-3d-field-solver-for-ic-design [to_ping] => [pinged] => [post_modified] => 2019-06-14 20:52:07 [post_modified_gmt] => 2019-06-15 01:52:07 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.semiwiki.com/word5/uncategorized/who-needs-a-3d-field-solver-for-ic-design.html/ [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [comment_count] => 0 [current_comment] => -1 [found_posts] => 7457 [max_num_pages] => 746 [max_num_comment_pages] => 0 [is_single] => [is_preview] => [is_page] => [is_archive] => [is_date] => [is_year] => [is_month] => [is_day] => [is_time] => [is_author] => [is_category] => [is_tag] => [is_tax] => [is_search] => [is_feed] => [is_comment_feed] => [is_trackback] => [is_home] => 1 [is_privacy_policy] => [is_404] => [is_embed] => [is_paged] => 1 [is_admin] => [is_attachment] => [is_singular] => [is_robots] => [is_favicon] => [is_posts_page] => [is_post_type_archive] => [query_vars_hash:WP_Query:private] => a2ab2e1650c3e06a51b11dc75b06bdec [query_vars_changed:WP_Query:private] => 1 [thumbnails_cached] => [stopwords:WP_Query:private] => [compat_fields:WP_Query:private] => Array ( [0] => query_vars_hash [1] => query_vars_changed ) [compat_methods:WP_Query:private] => Array ( [0] => init_query_flags [1] => parse_tax_query ) [tribe_is_event] => [tribe_is_multi_posttype] => [tribe_is_event_category] => [tribe_is_event_venue] => [tribe_is_event_organizer] => [tribe_is_event_query] => [tribe_is_past] => [tribe_controller] => Tribe\Events\Views\V2\Query\Event_Query_Controller Object ( [filtering_query:protected] => WP_Query Object *RECURSION* ) )
    Who Needs a 3D Field Solver for IC Design?
    by Daniel Payne on 04-07-2011 at 4:53 pm

    Introduction
    In the early days we made paper plots of an IC layout then measured the width and length of interconnect segments with a ruler to add up all of the squares, then multiplied by the resistance per square. It was tedious, error prone and took way too much time, but we were rewarded with accurate parasitic values for our SPICE… Read More


    Wanted: FPGA start-up! …Dead or Alive?
    by Eric Esteve on 04-05-2011 at 6:23 am

    The recent announcement from Tabula about the $108 million raised in its Series D round of funding is putting the focus on FPGA technology, and on FPGA startups in particular. Who are these FPGA startups, what is their differentiation, and where is the innovation: in the product or the business model?

    When you say FPGA, you first think: … Read More


    Samsung is NOT a Foundry!
    by Daniel Nenni on 04-01-2011 at 5:47 pm


    Samsung is the #1 electronics company, the #2 semiconductor company, and for 20+ years the world’s largest memory chip maker. Analysts expect Samsung to catch Intel by the year 2014. In the foundry business, however, Samsung is a distant #9 after more than a five-year investment, and here’s why:

    Foundry 2010 Revenue:
    (1) TSMC $13B
    (2) … Read More


    DRC/DFM inside of Place and Route
    by Daniel Payne on 03-31-2011 at 10:19 am

    Intro
    Earlier this month I drove to Mentor Graphics in Wilsonville, Oregon and spoke with Michael Buehler-Garcia, Director of Marketing and Nancy Nguyen, TME, both part of the Calibre Design to Silicon Division. I’m a big fan of correct-by-construction thinking in EDA tools and what they had to say immediately caught my… Read More


    Andrew Yang’s presentation at Globalpress electronic summit
    by Paul McLellan on 03-30-2011 at 3:15 pm

    Yesterday at the Globalpress electronic summit, Andrew gave an overview of the Apache product line, carefully avoiding saying anything he couldn’t due to the filing of Apache’s S-1. From a financial point of view, the company has had 8 years of consecutive growth, has been profitable since 2008, and has no debt. During 2010 when the… Read More


    2011 Semiconductor Design Forecast: Partly Cloudy!
    by Daniel Nenni on 03-29-2011 at 11:39 am

    This was my first SNUG (Synopsys User Group) meeting as media, so it was a groundbreaking event. Media was still barred from some of the sessions but hey, it’s a start. The most blog-worthy announcement on day 1 was that Synopsys signed a deal with Amazon to bring the cloud to mainstream EDA!

    Even more blog-worthy was a media roundtable… Read More


    Dawn at the OASIS, Dusk for GDSII
    by Beth Martin on 03-28-2011 at 1:53 pm

    For an industry committed to constant innovation, changes in any part of the design flow are only slowly adopted, and only when absolutely necessary. Almost 10 years ago, it became clear that shrinking process technologies would bring a massive growth of layout and mask data, roughly 50% per node. This avalanche of data seriously… Read More


    ARM and GlobalFoundries
    by Eric Esteve on 03-25-2011 at 9:49 am


    Although there has always been a strong relationship between ARM and GlobalFoundries, it is interesting to note that Intel helped to boost it and make it even stronger. Indeed, when AMD renegotiated its x86 licensing deal with Intel in 2009, one of the most significant long-term changes was a marked reduction in how much of… Read More


    Process Design Kits: PDKs, iPDKs, openPDKs
    by Paul McLellan on 03-24-2011 at 5:28 pm

    One of the first things that needs to be created when bringing up a new process is the Process Design Kit, or PDK. Years ago, back when I was running the custom IC business line at Cadence, we had a dominant position with the Virtuoso layout editor and so creating a PDK really meant creating a Virtuoso PDK, and it was a fairly straightforward… Read More


    Evolution of Lithography Process Models, Part II
    by Beth Martin on 03-24-2011 at 3:56 pm

    In part I of this series, we looked at the history of lithography process models, starting in 1976. Some technologies born in that era, like the Concorde and the space shuttle, came to the end of their roads. Others did indeed grow and develop, such as the technologies for mobile computing and home entertainment. And lithography … Read More