So, Why Not Just Write Better Rules?
by glforte on 10-14-2010 at 4:00 pm

In my submission about TSMC making some DFM analysis steps mandatory at 45nm (see “TSMC’s DFM Announcement”), I ended with a question about why the foundries can’t just write better design rules (and rule decks) to make sure all designs yield well. Here’s my take on this complicated question.

If we take a step back for a moment, there is something generic about DFM analysis that needs to be considered. Each type of DFM analysis has a “sphere of influence” in its scope. For CMP analysis, the analysis window size is around 20um. That’s large compared to a standard cell. For Critical Area Analysis (my favorite tool), the analysis scope is the size of the largest random particle to be considered, typically anywhere from 2um to 10um diameter. For lithography analysis (LPC), the scope is a little smaller, roughly 1-2um.

How does this get back to rules? Well, what’s the scope of a generic DRC rule—a shape all by itself, a shape within a shape, or a shape and its nearest neighbors? In common DRC practice, those are pretty much the available options, which is too limited for advanced rule checking.

That’s why Mentor developed equation-based DRC (eqDRC), an extension of Calibre technology that lets you write equations to express rules instead of using fixed values. However, even with eqDRC you still can’t easily get past the nearest neighbor in a DRC rule. Yes, you can write complex rules that go one or two shapes past the nearest neighbors, but the complexity of the rule and its runtime go up exponentially the farther you try to reach from the original shape. In practice, it is difficult if not impossible to write rules that take into account enough of the context of the shape being checked. DFM tools, on the other hand, automatically take context into account. That’s the big advantage of model-based over rule-based analysis.
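To make that contrast concrete, here is a minimal sketch in Python (purely illustrative; real eqDRC rules are written in the Calibre rule-deck language, and the limits below are invented) of a fixed-value rule versus an equation-based rule, where the required spacing depends on the width of the neighboring wire:

```python
# Toy contrast between a fixed-value DRC rule and an equation-based rule.
# Illustrative only: not Calibre/eqDRC syntax, and the numbers are invented.

def fixed_rule_ok(spacing_um: float) -> bool:
    """Classic DRC: one hard limit, no context."""
    MIN_SPACING_UM = 0.10  # hypothetical minimum spacing
    return spacing_um >= MIN_SPACING_UM

def eq_rule_ok(spacing_um: float, neighbor_width_um: float) -> bool:
    """Equation-based DRC: the limit is a function of a property of the
    neighboring shape (here its width) instead of a fixed constant."""
    required_um = 0.10 + 0.05 * neighbor_width_um  # hypothetical equation
    return spacing_um >= required_um

# A 0.12um gap passes the fixed rule, but fails next to a wide 0.60um
# neighbor under the equation-based rule (0.12 < 0.10 + 0.05 * 0.60 = 0.13):
print(fixed_rule_ok(0.12))     # True
print(eq_rule_ok(0.12, 0.60))  # False
```

Even this toy version shows the limitation described above: the equation still sees only one neighbor. Reaching the second- or third-nearest shape means enumerating neighbor combinations explicitly, which is exactly where rule complexity and runtime blow up.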

At advanced process nodes (45nm and below), DRC is “necessary but not sufficient.” The more “not sufficient” it becomes, the greater the need for DFM tools that see the extended context of all shapes in the design that are close enough to have any adverse effect. Of course, the farther upstream you find and fix a DFM issue, the easier it is to fix. That’s why I expect this trend of pushing the designers to do DFM analysis to continue, and for more foundries to follow TSMC’s lead.

By the way, the comment about DRC being necessary but not sufficient is not meant to belittle DRC. DRC is, of course, mandatory at all nodes. DRC is also absolutely necessary in a DFM flow because, for one thing, all DFM tools assume the design is (essentially) DRC clean. If the design is not clean and the analysis drifts too far outside its valid range, the results can be inaccurate.

So yes, you still need DRC—more than ever. But you also need some new tricks in your designer bag!

– Simon Favre, Technical Marketing Engineer for Calibre YieldAnalyzer

TSMC’s DFM Announcement
by glforte on 10-14-2010 at 4:00 pm

If you are a TSMC customer, no doubt you have heard that TSMC is requiring lithography and planarity analysis for all 45nm designs. Their website says customers can either run it themselves or contract TSMC services to do it. The most cost-effective way would be for customers to run it themselves, but some might not have the resources to do that. Of course, by the time you pay TSMC to do it three or four times, you could have bought the tools and run them yourself. That’s good for Mentor and other EDA vendors, right? Probably, but there has to be more to it than that.

So, what’s really behind this? TSMC isn’t generally known for making things easy for EDA vendors. Why make a new step mandatory, the way DRC is? Is it because they experienced real yield issues at 45nm and want customers to find and fix the issues? Or is it because the DFM tools are finally mature enough to be a required part of the flow? Well, maybe they are, but that’s probably not the reason.

I think it has to do with money. As they say, “Follow the money.” Having low-yielding parts in the fab doesn’t do anybody any good. Most TSMC customers buy wafers at a pre-negotiated price. If the part yields poorly, the customer will likely have to buy more wafers to make up the volume, and will try to renegotiate the price. How is it bad for TSMC if they buy more wafers? Because that makes TSMC’s production starts more unpredictable. A small company with one product could go out of business if good die are costing them too much as a result of low yield. Orders from medium-sized companies could fluctuate wildly. That would really make things unpredictable for TSMC.

Low yield would also hurt TSMC’s reputation. They like being #1 in the foundry business. They like being thought of as the best. Having lots of customers complaining about price and yield puts that at risk. But to resolve low yields, TSMC has to devote more resources to these problem chips, which would cost them real money. Even worse, some large customers actually buy only good die. Low yield for those customers would directly impact TSMC’s bottom line, as TSMC would have to make up the difference.

Follow the money. Having happy customers who sell more product, make more money, and come back for more high-yielding wafers probably makes the most sense for TSMC. The trend seems to be to make the customers more responsible for DFM. Expect other foundries to follow suit.

So why can’t the foundry just write better rules to make sure that all designs yield well? Hmmm…that probably deserves its own discussion (see “So, Why Not Just Write Better Rules?”).

– Simon Favre, Technical Marketing Engineer for Calibre YieldAnalyzer
Effects of Inception
by glforte on 10-14-2010 at 10:00 am

I finally got to watch the critically acclaimed sci-fi movie “Inception” last weekend, and life has not been the same since. Without giving away too much detail for those who have not watched it yet, the main plot involves dreams within dreams within dreams (three levels, to be precise) to “incept” an idea into someone’s subconscious mind. Are you still with me? Never mind; the first thing that came to my mind when I was exposed to the concept of dreams within dreams was nested domains in multi-voltage (MV) designs. Blame the nerd gene for triggering this reaction, but the truth remains.

One thought led to another, and before long I was dreaming about nested multi-voltage designs with donut-shaped domains, which happen to be real. The donut-shaped nested domain is one of the new emerging flavors of nested multi-voltage design, and it brings a new set of requirements and challenges for the MV flow. Some of the key considerations for donut-shaped nested domains are:

• Number of levels of nested hierarchy
• Defining donut domains in the UPF
• Hierarchy and netlist management for the top level and the donut domains
• Placement of cells based on connectivity in the donut hole and the top level
• Handling of level shifters based on connectivity (need to be placed in the donut hole or the top level)
• Handling of isolation cells if the donut domain has a switching supply
• Power routing to the donut hole if the donut domain has a switching supply
• Power supply routing to the donut domain if the top level has a switching supply
• Handling power switches if either the donut or the top level has a switching supply
• Building a balanced clock tree for the donut domain
• Signal routing within the donut domain boundary and meeting timing requirements
• Always-on buffer handling for the donut hole or the top level
• Ensuring power integrity for all the domains, etc.

(Figure: Nested Donut Domains in Multi-Voltage Designs)

If there are more than two levels of nesting with donut shapes, this list gets even longer and much more complex. Why exactly a designer would need a donut domain is beyond me, but whoever planted the idea is playing a cruel practical joke. Now, if you will excuse me, I need to go and spin my top.

--Arvind Narayanan, Product Marketing Manager, Place and Route Product Line

Semiconductor Manufacturing International Corporation 2010
by Daniel Nenni on 10-13-2010 at 11:01 pm

In celebrating the 10th anniversary of SMIC, CEO David Wang ushers in a new era of China semiconductor manufacturing with triumphs versus promises. By triumphs David means profits, which SMIC saw for the first time in Q2 2010. The future looks even brighter for SMIC as the China semiconductor demand versus supply gap is an estimated $30B versus $3B.

SMIC is definitely positioned for growth with 10k+ people and $1.5B in 2010 revenue versus $1B in 2009. 2010 has been a banner year for the foundry industry with close to $30B in total revenues, which is approximately 10% of the $300B in total semiconductor revenue. Outsourcing from semiconductor IDM’s (fab-light strategy) continues to push foundry growth as well as mobile internet devices and emerging markets in China, India, and South America. It is interesting to note that Cadence CEO Lip-Bu Tan is on the SMIC board of directors. Lip-Bu’s Walden Venture Fund is heavily invested in the China fabless semiconductor market and he can spell cloud computing, so expect a strong move from Cadence in China.

Other interesting datapoints:

2010 Numbers
Electronics $1.36T +12%
Semi $300B +31.5%
EDA $5B +0%
CAPS $41.8B +90%
Fab Equip $28B +120%

SMIC Revenue
60% USA
30% China
10% Taiwan, UK, Israel, Korea
20% of revenue from 90nm and below

Capacity Expansion Plans
8” 150k per month
12” 130/90nm 20k
12” 65/55nm 60k
12” 45/40nm 50k
12” 32/28nm 60k

Aart De Geus was the keynote speaker with an updated version of his “Systemic Collaboration: The New Smart Skill” presentation. This presentation is looking more and more like an EDA360 pitch! I actually experienced déjà vu from conversations with EDA360 Chief Anarchist John Bruggeman!

One of Aart’s slides highlighted the System, SoC, and Silicon Realization companies Synopsys has acquired over the years, an impressive list for sure. In fact, Synopsys dedicates 20%+ of revenue to M&A activity (inorganic growth) versus 30%+ to R&D (organic growth). Unfortunately, the Synopsys “Realization” strategy is FPGA-based, which will never work for bleeding-edge semiconductor products that account for 90% of the silicon shipped in a year. The Cadence EDA360 vision is simulation/emulation-based, which is much better suited for “Realization.” Correct me if I’m wrong here, Synopsys fans; this is just my impression/opinion.

The importance of IP re-use was also mentioned in regard to the increasing quality (yield) and time-to-market pressure the semiconductor industry faces. Better IP equals better yield; better yield equals faster time to market and better margins. As Aart says, the semiconductor design ecosystem is systemic. The results are not a SUM but a PRODUCT. If there is a zero anywhere in the semiconductor design and manufacturing equation, the result will be a bad wafer, die, chip, or electronic device, which supports the increasing importance of IP re-use.

I’m a Semiconductor IP person by experience, have blogged about it many times, and will do it again next week. Soft and hard IP cores continue to have a profound impact on SoC design. The trend I see is more soft IP versus hard, which presents a different type of qualification and integration challenge, but more on that next week.

TSMC OIP Conference 2010 Critique!
by Daniel Nenni on 10-10-2010 at 10:18 pm

Okay, this is more of a “What I would do if I were TSMC” than a critique, but I needed a one-word descriptor for the title. This was the third TSMC OIP Conference, and I would guess about 250 people attended. This was the first time I have seen TSMC in “reactive” mode versus “proactive” leadership mode, so I was a bit disappointed. TSMC is THE industry leader and should NOT be looking in the rear-view mirror at competitors that are barely visible.

The semiconductor landscape has dramatically changed during the contraction phase of the current business cycle. The strong got stronger through acquisition and aggressive business practices, and the rest of the fabless semiconductor companies were acquired, got smaller, or became IP companies. So TSMC, being a customer-driven company, must also change strategies, and the Open Innovation Platform IS the delivery system for that change.


The Pareto principle (also known as the 80-20 rule or the law of the vital few) states that, for many events, roughly 80% of the effects come from 20% of the causes. For semiconductors this is definitely the case. In fact, as a result of the recent economic chaos and consolidations I would guess that 90% of the silicon is shipped by 10% of the companies.

The foundry strategy for the top semiconductor companies is three-fold: Early Access, Capacity, and Wafer Pricing. TSMC is working hard on capacity and wafer pricing 24/7, believe it! There is no doubt in my mind that TSMC will continue to be the capacity and margin leader for 40nm, 28nm, and 20nm, which will keep the top foundry customers engaged. Early access, however, is a continuing challenge. For example, Design Rule Manuals (DRMs) are still in PDF format, 1,300+ pages long, and rapidly changing. Some of the rules are so complicated they are impossible to describe, and even harder to code and communicate, even within the foundry teams. This should be the focus of the TSMC OIP for the top semiconductor companies: a more automated and simplified information exchange, one that uses vendor-neutral formats so customers cannot be held hostage by short-sighted EDA vendors. The iPDK initiative is an excellent start, but there is much more that can be accomplished.

For the other 90% of semiconductor companies, the ones that cannot afford to develop custom design flows, PDKs, and IP, the ones that cannot afford an in-house foundry team for early access, TSMC OIP is a critical enabler. Unfortunately, one of the messages of the conference was “TSMC will not compete with partners,” which was a clear response to public relations pressure from the GlobalFoundries mantra, “We don’t compete with partners!”

Competition is what has made the semiconductor industry, and semiconductors themselves, what they are today! Competition is what drives innovation and keeps costs down. Not destructive competition, where the success of one depends on the failure of another, but constructive competition that promotes mutual survival and growth, where everybody can win. The semiconductor design ecosystem is the poster child for destructive competition, which is why EDA (SNPS, CDNS, MENT, LAVA) valuations are a fraction of what they should be.

The TSMC Open Innovation Platform should be the cornerstone of the semiconductor design ecosystem. The ecosystem must NOT hold designers hostage with proprietary formats! The ecosystem MUST innovate to compete! The TSMC Open Innovation Platform MUST lead the way! TSMC is the #1 foundry and that will not change within my lifetime. TSMC must also be #1 in customer satisfaction, and the design ecosystem IS where customer satisfaction begins.

Critical Area Analysis and Memory Redundancy
by SStalnaker on 10-08-2010 at 8:08 pm

Simon Favre, one of our Calibre Technical Marketing Engineers, presented a paper on Critical Area Analysis and Memory Redundancy at the 2010 IEEE North Atlantic Test Workshop in Hopewell Junction, NY, just up the road from Fishkill. As Simon says…

Fishkill, New York. IBM is in Fishkill. IBM invented Critical Area Analysis in, what, the 1960s? Venturing into IBM country to speak on CAA is kind of like being the court jester. Fortunately, no one said, “Off with his head.” :) But seriously, it amazes me how little is known about this topic.

There have been other papers on the subject. I’m merely bringing the topic up to date. I did come up with a way of writing out the formula that appears new, though the underlying principle is the same. Memory redundancy is about having repair resources available to fix an embedded RAM, typically an SRAM. Whatever structure you have repair resources for can be thought of as a unit. You have to calculate the unrepaired yield for all units, then adjust for repair resources. You have to add in the probabilities of having all units good, one unit bad, and so on, until there are not enough repair resources to make repairs. It can make a dramatic difference in yield if the memories are large enough or defect rates are high enough.
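As a rough sketch of the bookkeeping involved (my own simplified model, not the formula from the paper or the Calibre YieldAnalyzer implementation): assume each repairable unit fails independently with unit yield Y_u, and each of r spares can repair exactly one bad unit. The repaired yield is then the probability that at most r of the n + r units are bad:

```python
# Simplified memory-redundancy yield model. Assumptions (mine, for
# illustration): independent unit failures, a Poisson defect model, and
# each spare unit repairs exactly one bad unit.
from math import comb, exp

def unit_yield(critical_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield for a single repairable unit."""
    return exp(-critical_area_cm2 * defect_density_per_cm2)

def repaired_yield(n_units: int, n_spares: int, y_unit: float) -> float:
    """P(at most n_spares bad units among n_units + n_spares total)."""
    total = n_units + n_spares
    return sum(
        comb(total, k) * (1 - y_unit) ** k * y_unit ** (total - k)
        for k in range(n_spares + 1)
    )

y_u = unit_yield(0.002, 0.5)        # invented numbers
print(repaired_yield(256, 0, y_u))  # no redundancy: about 0.77
print(repaired_yield(256, 4, y_u))  # four spares: above 0.9999
```

With these invented numbers, four spare units lift the memory yield from roughly 77% to essentially 100%, the kind of dramatic difference Simon describes when memories are large or defect rates are high.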

There are two extreme schools of thought on memory redundancy. One says, “Why bother? I can’t fix the logic, so don’t bother on the memories. Just pressure the foundry to reduce defect rates.” The other extreme says, “Redundancy is good. Put redundancy everywhere.” In between, the designer is either taking an educated guess, or just following the memory IP provider’s guidelines. Those guidelines may have been published when the process was new, and may be pessimistic. The only way to know for sure if adding redundancy helps yield significantly, or is just a waste of chip area and tester time, is to do memory redundancy analysis tied to current foundry defect rates. If a design is going to go through a re-spin, either to a half-node, or to add functionality, it may be the ideal time to ask, “Is the current memory redundancy scheme adequate, or is it overkill?” Calibre YieldAnalyzer has this capability. If you analyze the design with redundancy in mind, the redundancy configuration can be adjusted and the yield rapidly recalculated to facilitate a what-if analysis. It’s the best way to determine the optimal redundancy scheme based on actual foundry defect rates.

The downside of overdesign in this area is very real. Let’s say a large SOC is 50% embedded SRAM. If you add 2% to the area of each SRAM for redundancy, you just increased the chip area 1%. That’s 1% fewer die per wafer over the entire life of the design. It better be worth doing. There’s also tester time to consider. A chip tester is a large, expensive piece of hardware. Every millisecond it spends testing chips is accounted for. If you factor the depreciation cost of that hardware over its lifetime, every extra millisecond that tester spends testing embedded memories and applying the repair resources adds to chip cost. Again, that cost is over the entire life of the chip. Designers may have some idea of how much area they are adding, but the impact on good die vs. gross die may be missed without analysis. Designers probably have much less information about how redundancy overdesign impacts tester time and back-end manufacturing cost. My test expert brethren at Mentor can probably add a lot to this discussion in future submissions.
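For a back-of-the-envelope feel for the gross-die side of that tradeoff, here is a small sketch using one common die-per-wafer approximation (the die size, wafer size, and 1% area adder are all invented for illustration):

```python
# Rough gross-die-per-wafer impact of adding redundancy area.
# Uses a common approximation with an edge-loss correction term;
# all numbers are invented for illustration.
from math import pi, sqrt

def gross_die_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    radius = wafer_diameter_mm / 2
    return (pi * radius ** 2 / die_area_mm2
            - pi * wafer_diameter_mm / sqrt(2 * die_area_mm2))

base = gross_die_per_wafer(100.0)    # hypothetical 100 mm^2 SOC
padded = gross_die_per_wafer(101.0)  # +1% chip area after adding redundancy
print(base, padded)                  # about 640 vs. about 634 gross die
print((base - padded) / base)        # roughly a 1% loss in gross die
```

Whether that ~1% of gross die (plus the extra tester time) is paid back depends on how much the repaired yield improves, which is exactly the what-if analysis described above.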

Want to hear more of what Simon says? Watch the on-demand web seminar “CAA Using Calibre YieldAnalyzer: It’s Not Just a Fab Problem Anymore” to learn more about CAA...

You can also download a related paper by clicking here...

Semiconductor Forecast 2010-2011 Update!
by Daniel Nenni on 10-03-2010 at 10:35 pm

It’s that time of the quarter again, when the semiconductor analysts revise forecasts, passing off glorified guesstimates as valid financial planning data. They aren’t forecasts! They are observations! I blame these hacks for the 12.5% Silicon Valley unemployment rate! I blame these hacks for the dwindling available capital for emerging fabless, EDA, and IP companies. I even blame these hacks for global warming! Okay, maybe not global warming, but the other stuff for sure!

iSuppli is first out of the gate with a downward observation (forecast) of 32% versus 35%. Semiconductor revenues around the world are now expected to hit $302 billion this year, a gain of 32 percent from $228 billion in 2009. The drop is attributed to “weaker consumer demand for certain electronic devices and higher industry inventory” rather than “just bad forecasting.” Revenue in the fourth quarter is expected to drop by 0.3 percent, which would be the first sequential drop since the semiconductor market took an “unforecasted” nose dive in the fourth quarter of 2008 and first quarter of 2009.
The Semiconductor Intelligence observation (forecast) was 36%, so expect a revision from Bill Jewell. Bill also warns us that, according to the National Bureau of Economic Research (NBER), the current recession is the longest since World War II:


The NBER is generally seen as the authority for documenting US recessions and defines them as:
“a significant decline in the economic activity spread across the country, lasting more than a few months, normally visible in real GDP growth, real personal income, employment (non-farm payrolls), industrial production, and wholesale-retail sales.”

By definition, the end of a recession means the U.S. economy has stopped contracting, not that it has returned to the level it was at the start, so we have a way to go yet. The US Real GDP & Durable Goods graphic is based on data from the U.S. Department of Commerce and shows quarterly U.S. real gross domestic product (GDP) indexed to the 4th quarter of 2007, the peak prior to the recession.

Speaking of the semiconductor ecosystem, next week I will be at the 2010 TSMC OIP Partner Forum on Tuesday and the SMIC 2010 Technology Symposium on Friday, two free lunches, the life of a world famous blogger! It would be a pleasure to meet you, that is, of course, if you recognize me without the Porsche hat!

TSMC GigaFab Tour!
by Daniel Nenni on 10-01-2010 at 9:44 am

During my most recent Taiwan trip I was not only afforded a meeting with Dr. Mark Liu, Sr. VP of TSMC, but also a guided tour of GigaFab #12. Even more impressive, I’m now considered “Elite” by Eva Airlines, so I automatically get the good seats, better food, and VIP service. My wife, however, is not impressed with my Elite status, so I still have to do chores around the house.

Mark Liu ramped up TSMC’s first 200mm fab in 1993 and has been building fabs for TSMC ever since. Mark’s favorite topic is the 300mm GigaFabs: Fab 12, Fab 14, and Fab 15 (which TSMC just broke ground on last month). Clearly TSMC has learned a valuable lesson from the 40nm wafer shortage experience: not having enough capacity is far more costly than having too much. After 40nm, customer priorities have certainly changed: capacity is now the 1st concern, price a close 2nd, and last but not least, design enablement. Please note that the perceived value of semiconductor design enablement is often overlooked, but it is clearly the key enabler of TSMC’s expansive customer base.

After putting on the clean room space suit and being lightly air washed, I entered a GigaFab for the first time and was literally speechless. If you know me personally, you know that being speechless is not one of my strong suits, so this was a new experience.

The insignias on the machines were logos and acronyms that I recognized, but what struck me was the total automation of a GigaFab. Machines outnumbered people exponentially, with 99% automation. Shuttles zoomed around on tracks overhead, delivering thousands of 40nm wafers to the 300+ steps in the semiconductor manufacturing process. The few people I did see were at monitoring stations. Even more impressive than the billions of dollars of hardware in a GigaFab are the millions of lines of software developed to run it: Automated Material Handling Systems (AMHS) for transporting, storing, and managing semiconductor wafer carriers and reticles, plus Manufacturing Execution Systems (MES) software to manage overall production efficiency.

This year TSMC will spend a record $5.9B on capital expenditures. Approximately 75% will be used to expand TSMC’s 65/40/28nm technology capacity and 15% will be used for mainstream processes. The remainder will be used for equipment, R&D expenses, and new business such as solar and LED. TSMC’s newest Gigafab, Fab 15, will cost an estimated $9.4B. TSMC is also set to complete Phase 5 expansion at Fab 12, and Phase 4 expansion at Fab 14.


According to the most recent management report, TSMC has accelerated its capacity expansion plan for 2010. Total managed capacity was 2,749K 8-inch equivalent wafers in 2Q10, an increase of 7% from 2,566K in 1Q10. The current capacity plan calls for an overall increase of 14% to 11,299K 8-inch equivalent wafers for the year, compared with the 11,247K planned in the last quarter.

Demand for TSMC’s advanced technology wafers in all major semiconductor market segments again increased quarter over quarter. Among the advanced technologies, 40nm not only added 2% to TSMC’s revenue share, but the output of GigaFab wafers processed using 40nm technology also increased by 30% sequentially.


The 40nm race is officially over: TSMC wins by a landslide in regard to capacity, price, and design enablement. The race for 28nm dominance, however, is still on between TSMC, Samsung, and GlobalFoundries. Samsung is in production at 32nm, so moving to 28nm should just be a process shrink. For TSMC and GlobalFoundries, 28nm is a completely new node, which will bring new technical challenges. Still, in my opinion, the foundry race to 28nm is too close to call today, and it will certainly be an exciting finish!