
Formally verifying protocols
by Paul McLellan on 11-15-2011 at 1:19 pm

I attended much of the Jasper users' group a week ago. There were several interesting presentations that I can't just blog about because companies are shy, and some that would only be of interest if you were a user of Jasper's products on a daily basis.

But for me the most interesting presentations were several on an area where I hadn't realized this sort of formal verification was being used. The big driver is that modern multi-core processors now require much more sophisticated cache control than before. ARM in particular has created some quite sophisticated protocols under the AMBA4 umbrella, which it announced at DAC.

In the old days, cache management was largely done in software, invalidating large parts of the cache to ensure no stale data could get accessed, and forcing the cache to gradually be reloaded from main memory. There are several reasons why this is no longer appropriate. Caches have got very large and the penalty for off-chip access back to main memory is enormous. Large amounts of data flowing through a slow interface is bad news.

As a historical note, the first cache memory I came across was on the Cambridge University Titan computer. It had 32 words of memory accessed using the bottom 7 bits as the key and was only used for instructions (not data). This architecture sounds too small to be useful, but in fact it ensures that any loop of less than 32 instructions runs out of the cache and so that trivial amount of additional memory made a huge performance difference.
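
The effect is easy to reproduce with a toy model. The sketch below is purely illustrative (it is not a model of the actual Titan hardware, and the cache size and loop length are just example numbers): a direct-mapped instruction cache indexed by the low address bits misses only on the first pass through a short loop.

    # Toy direct-mapped instruction cache, indexed by the low address bits.
    # Purely illustrative; not a model of the actual Titan machine.
    CACHE_LINES = 32
    cache = {}                 # line index -> tag of the instruction held there
    hits = misses = 0

    def fetch(addr):
        global hits, misses
        index, tag = addr % CACHE_LINES, addr // CACHE_LINES
        if cache.get(index) == tag:
            hits += 1
        else:
            misses += 1
            cache[index] = tag

    # A 20-instruction loop executed 100 times: it fits entirely in the cache,
    # so only the first pass misses.
    for _ in range(100):
        for pc in range(0x100, 0x100 + 20):
            fetch(pc)

    print(f"hits={hits} misses={misses}")   # hits=1980 misses=20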

Anyway, caches now need to have more intelligence. Instead of invalidating lots of data that might turn out to be stale, the cache controllers need to invalidate on a line-by-line basis in order to ensure that anybody reading an address gets the latest value written. This even needs to be extended to devices that don't have caches themselves, since a DMA device cannot simply go to main memory due to delayed write-back.

Obviously these protocols are pretty complicated, so how do you verify them? I don't mean how you verify that a given RTL implementation of the protocol is good; that is the normal verification problem that formal and simulation techniques have been applied to for years. I mean how you verify that the protocol itself is correct: in particular, that the caches are coherent (any reader correctly gets the last write) and that the system is deadlock-free (all operations will eventually complete, or, as a weaker condition, at least one operation can always make progress).

Since this is a Jasper User Group Meeting it wouldn't be wild to guess that you use formal verification techniques. The clever part is a table-driven product called Jasper ActiveModel. It creates a circuit that is a surrogate for the protocol. Yes, it has registers and gates, but it is not something implementable; it captures the fundamental atomic operations of the protocol. Then, using standard proof techniques, this circuit can be analyzed to make sure it has the good properties that it needs.
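
To make the distinction concrete, here is a tiny Python sketch of the underlying idea: enumerate every reachable state of a deliberately toy two-cache MSI-style protocol and check a coherence invariant in each one, rather than simulating any particular implementation. It has nothing to do with how ActiveModel works internally; the protocol, states and transitions are all made up for illustration.

    # Exhaustively explore a toy two-cache MSI protocol and check a coherence
    # invariant in every reachable state. Illustrative only; not ActiveModel.
    def next_states(state):
        """Successor states for each atomic operation of the toy protocol."""
        for cache in (0, 1):
            other = 1 - cache
            # Read: requester goes to Shared; a Modified peer is downgraded.
            r = list(state)
            r[cache] = "S"
            if r[other] == "M":
                r[other] = "S"
            yield tuple(r)
            # Write: requester goes to Modified; the peer is invalidated.
            w = list(state)
            w[cache], w[other] = "M", "I"
            yield tuple(w)
            # Eviction: requester drops back to Invalid.
            e = list(state)
            e[cache] = "I"
            yield tuple(e)

    def coherent(state):
        # At most one Modified copy, and never Modified alongside Shared.
        return state.count("M") <= 1 and not ("M" in state and "S" in state)

    seen, frontier = set(), [("I", "I")]       # both caches start Invalid
    while frontier:
        st = frontier.pop()
        if st in seen:
            continue
        seen.add(st)
        assert coherent(st), f"coherence violated in {st}"
        frontier.extend(next_states(st))

    print(f"{len(seen)} reachable states, invariant holds in all of them")

A real protocol has vastly more states than this toy, which is exactly why exhaustive proof tools, rather than hand analysis or simulation, are needed to find the corner cases described below.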

It was a very worthwhile exercise. It turned out that the original spec contained some errors. Of course they were weird corner cases, but that is what formal verification is so good at; simulation hits the stuff you already thought of. There was one circumstance under which the whole cache infrastructure could deadlock, and another in which it was possible to get access to stale data.

Similar approaches have been taken for verifying communication protocols, which also have similar issues: they might deadlock, the wrong data might get through and so on.





Media Tablet & Smartphones to generate $6 Billion market in… power management IC segment by 2012, says IPnest
by Eric Esteve on 11-15-2011 at 10:59 am

With the worldwide annual media tablet shipment forecast changing (growing) almost every quarter, the latest from ABI Research calling for shipments to approach 100 million units in 2012 and pass 150 million in 2014, and the same kind of forecast for smartphones, passing 400 million units this year (438 million) and approaching a billion units shipped in 2016, there is no doubt that these two applications are the key drivers of the semiconductor market for the next five years.




But the real question is: which type of semiconductor? Let's have a look at the comparative bill of materials (BOM) for a media tablet and a smartphone:



From the first column (smartphone) we extract a total value of about $200, and about $300 for the column associated with the media tablet. Clearly, a significant part of the value comes from the display, touch screen and mechanics, especially for the media tablet. Because this site is SemiWiki (and not DisplayWiki or MechaWiki… even if that would be interesting) we will concentrate on the semiconductor (SC) content only. We have built the following table, restricted to SC, where we can see that the BOM ratio is no longer 2 to 3, but rather 8 to 9. The SC content of the two is very similar: NAND flash, DRAM and application processor can be seen as identical (which is no longer true if you compare a media tablet with 32 GB of NAND flash to a smartphone with 16 GB, so this is a "theoretical" case). The NAND flash and DRAM suppliers are well known (Samsung, Elpida, SanDisk…), and it is highly doubtful that a fabless newcomer will emerge in these two segments. That is why we have decided to "zoom in" on the semiconductor content excluding the memories. The BOM then goes down to $42 for the smartphone and $50 for the media tablet.




We have addressed the application processor segment in a previous post. It is pretty crowded, as we count nine big players (Broadcom, Freescale, Intel, Marvell, Nvidia, Qualcomm, Renesas, ST-Ericsson and TI) and a bunch of newcomers, some of which are starting to be well established now (Mtekvision or Spreadtrum):

  • Anyka Technologies Corporation
  • Beijing Ingenic Semiconductor Co., Ltd.
  • Chongqing Chongyou Information Technologies Company
  • Fuzhou Rockchip Electronics Co., Ltd
  • Hisilicon Technologies
  • Leadcore Technology
  • MagicEyes Digital, Inc
  • MStar Semiconductor
  • Mtekvision
  • Novatek Microelectronics
  • Spreadtrum
  • Shanghai Jade Technologies Co., Ltd.

Because this segment is so crowded, why not look elsewhere? It could be connectivity (WiFi, WLAN or Bluetooth) or sensor (gyroscope or accelerometer) chips… but we have selected the power management (PM) IC: the PM semiconductor content reaches 40% of the SC BOM for media tablets and 20% for smartphones.

Let's try to make a quick assessment of the PM IC Total Addressable Market (TAM). IPnest has already built a smartphone shipment forecast for 2010-2016 (see the blog on SemiWiki), and we have forecast information available for media tablets from ABI Research. To derive the PM IC TAM forecast for 2011-2016, we have to:

  • Consolidate these two forecasts (media tablet and smartphone shipments)
  • Assess the price erosion, or ASP evolution, for PM ICs

First, the combined forecast by unit shipments, for 2010-2016:





Then we can calculate the total available market for the power management IC in both the smartphone and media tablet applications, applying the respective PM IC ASP in each of these applications. We have neglected any potential decline in PM IC usage in media tablets or smartphones, as it is unlikely to happen: to satisfy the end user by increasing the time between charges, the trend is to provide systems with better power management capabilities. We have also neglected the potential for power management devices to become more pervasive. We think such growth would lead to a higher TAM, a higher TAM would increase the number of competitors, and that would lead to more drastic price erosion. Instead we have assumed the same price erosion rate as for application processors, or 33% over a five-year period. With these assumptions, the power management IC market segment is expected to reach $6 billion by next year, and up to $8 billion by 2016.
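
For readers who want to see the shape of that calculation, here is a back-of-the-envelope sketch. Every input below is an illustrative placeholder, not an IPnest or ABI Research number: the PM IC content per system is simply taken as roughly 20% of a $42 smartphone SC BOM and 40% of a $50 tablet SC BOM, as discussed above, and the unit forecasts are rounded guesses.

    # Back-of-the-envelope TAM sketch: units x ASP with annual price erosion.
    # All inputs are illustrative placeholders, not IPnest/ABI figures.
    smartphone_units = {2012: 520e6, 2013: 610e6, 2014: 700e6, 2015: 810e6, 2016: 950e6}
    tablet_units     = {2012: 100e6, 2013: 120e6, 2014: 150e6, 2015: 160e6, 2016: 175e6}

    asp_2012 = {"smartphone": 8.40, "tablet": 20.00}   # PM IC $ content per system
    yearly_keep = (1 - 0.33) ** (1 / 5)                # 33% cumulative erosion over 5 years

    def pm_ic_tam(year):
        decay = yearly_keep ** (year - 2012)
        return (smartphone_units[year] * asp_2012["smartphone"]
                + tablet_units[year] * asp_2012["tablet"]) * decay

    for year in sorted(smartphone_units):
        print(year, f"${pm_ic_tam(year) / 1e9:.1f}B")
    # With these made-up inputs the result lands close to $6B in 2012 and
    # roughly $8B in 2016, the same ballpark as the figures quoted above.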



The power management market segment has been relatively neglected by analysts and bloggers so far, at least when compared with the application processor segment, where you can find multiple articles. Looking at this IC segment's market weight allows a better understanding of the long-term strategy of companies like Texas Instruments, which is trying to consolidate its position in the PM segment (which, by the way, is NOT part of the Wireless Business Unit) by making high-priced acquisitions, even as some feel they are defocusing from the wireless market. That impression is completely wrong! They are just focusing more on the power management segment... the application becoming a kind of enabler for the PM IC!

Eric Esteve from IPnest


Jen Hsun Huang's Game Over Strategy for Windows 8!
by Ed McKernan on 11-15-2011 at 6:00 am

It is always a treat to listen to the nVidia earnings conference call as Jen Hsun Huang offers his take on the industry as well as a peek at his company's future plans. Invariably a Wall St. analyst will ask about Windows 8 and Project Denver – the code name for the ARM-based processor designed to run Windows 8 with great graphics performance, in categories that Jen Hsun describes in meticulous detail as tablets and clamshells. In last week's call, Jen Hsun went out a little further on the ski tips as he claimed that he is going to take ARM's architecture into market segments where it hasn't gone before through "extensions." Let me cut to the chase: he is going to build a CPU with x86 instruction translation with the help of the cadre of engineers imported from Transmeta.

Before I go any further, let me back up for those of us who didn't get the Microsoft update. Up until a Microsoft analyst meeting in September, the standard line from the Redmond folks was that everything that ran on Windows 7 would run on Windows 8, regardless of the processor (x86 or ARM). Renee James of Intel mentioned at a spring Intel conference that x86 apps would not run on ARM-based Windows 8 machines. Microsoft had a cow and let the world know that Ms. James was incorrect. Turns out she was correct, and she should know, since she heads up the Software Group at Intel that makes sure Windows 8 and applications will run on Intel's newest processors. Uh oh, the Emperor Just Lost His Clothes!

Just to set some leveling here… Intel and Microsoft are going through a long divorce. It will be messy and stretch out for years, maybe even decades. Renee James’ comments are the type that are strategic and the wording is reviewed in great detail by multiple people, including CEO Paul Otellini. So Paul is saying to Steve Ballmer, "Time to come clean buddy." And to ARM, Otellini is saying, "x86 isn’t dead in PCs by a long shot." Or maybe nVidia has a different answer.

That brings us up to Microsoft’s September statement from Steven Sinofsky on what Windows 8 can run:

STEVEN SINOFSKY: Sure. I don't think I said quite that. I think I said that if it runs on a Windows 7 PC, it'll run on Windows 8. So, all the Windows 7 PCs are X86 or 64-bit.

We've been very clear since the very first CES demos and forward that the ARM product won't run any X86 applications. We've done a bunch of work to enable that -- enable a great experience there, particularly around devices and device drivers. We built a great deal of what we call class drivers, with the ability to run all sorts of printers and peripherals out of the box with the ARM version.

Oh what would we do without analysts to tell us the future?

Back in the days when Transmeta was still around, it shopped itself to the usual suspects. Jen Hsun passed on a direct buyout and instead hired some of the best engineers. Rumors flew around about nVidia building a direct x86 competitor to Intel; however, the true value of Transmeta engineering was in the x86-compatible software translator that sat on top of a VLIW core with some hardware hooks for performance. The significance of Transmeta was both the translator and the discovery that the world was about to ditch MHz and go for mobile wireless devices with extra-long battery life that emphasized the visual experience over some PC Mag benchmark. All the benchmarks of the day, however, centered around jumping in and out of Office applications, making tweaks here and there, demonstrating why 1 GHz Pentiums were much more valuable than 933 MHz ones.

Bear in mind that Jen Hsun Huang expects the Project Denver question at every quarterly earnings call, so here in his own words:

“Project Denver, our focus there is to supplement, add to ARM’s capabilities by extending the ARM’s architecture to segments in a marketplace that they're not, themselves, focused on. And there are some segments in the marketplace where single-threaded performance is still very important and 64 bit is vital. And so we dedicated ourselves to create a new architecture that extends the ARM instruction set, which is inherently very energy-efficient already, and extend it to high-performance segments that we need for our company to grow our market. And we're busily working on Denver. It is on track. And our expectation is that we'll talk about it more, hopefully, towards the end of next year. And until then, until then I'm excited, as you are.”

Essentially, nVidia's model is that for most of the PC market what matters is compatibility and graphics performance. In the nVidia model the x86 CPU is a sidecar. In the future you will pay more for a better graphics experience than for CPU performance. If the performance of Jen Hsun's multicore ARM is way beyond what a typical Microsoft Office user expects, then an x86 software translator on top of the ARM cores running at 20% of native performance should be just fine. I picked 20%; maybe it's 25% or 30%, but you get the idea. To be unique and to get away from the pack (TI and Qualcomm), nVidia will implement some instruction extensions to enable the translator. Since nVidia already has the gaming community on its side writing games that go directly to the graphics GPU, Jen Hsun can envision a scenario where it is Game Over!




Not your father's Tensilica
by Paul McLellan on 11-14-2011 at 5:27 pm

Tensilica has been around for quite a long time. Their key technology is a system for generating a custom processor, the idea being to better match the requirements of the processor for performance, power and area as compared with a fully general-purpose control processor (such as one of the ARM processors). Of course, generating a processor on its own isn't much use: how would you program it? So the system also generates custom compilers, virtual platform models and so on, everything that you need to be able to use the processor.

I've said before in the context of ARM that what is most valuable is not the microprocessor design itself, it is the ecosystem that surrounds it. That is the barrier to entry, not the fact that ARM does a reasonable job of implementing processors.

In the early days of Tensilica, this technology was what they sold. Early adopters who needed a custom processor could buy the system, design their processor, put it on an SoC, and program it using the compiler and model. ARC (now part of Synopsys via Virage) was the other reasonably well-known competitor. I remember talking to them once and they admitted that lots of people really wanted a fixed processor because, for example, they wanted to know the performance in advance.

Tensilica found the same thing. There isn't a huge market of people wanting to design their own processor. But there is a huge market of people who want a programmable block that has certain characteristics, and a market for people who want a given function implemented without having to write a whole load of Verilog to create a fully-customized solution.

So Tensilica has been taking its own technology and using it to create blocks that are easier to use. Effectively, they are the custom processor design experts so that their customers don't have to be. The first application that got a lot of traction was 24-bit audio.

More recently, there is the ongoing transition to LTE (which stands for Long Term Evolution, talk about an uninformative and generic name) for 4G wireless. This is very complicated, and will be high-volume (on the handset side anyway, base-station not so much).

Difficult-to-use but flexible technologies often end up finding a business like this. The real experts are in the company, and it is easier for them to "eat their own dogfood" than it is to teach other people to become black-belt users.




Semiconductor Power Consumption and Fingertop Computing!
by Daniel Nenni on 11-13-2011 at 4:38 pm

Can semiconductor devices change the temperature of the earth? The heat from my Dell XPS changes the temperature of my lap! A 63” flat screen TV changes the temperature of my living room. I just purchased six of the latest iPhones for my family (under duress) and signed up for another two years with Verizon, so our carbon footprint changes once again.

As computing goes from the desktop to the laptop to the fingertop, with a total available market (TAM) of 7B+ people, power has become a critical mess. A recent Time Magazine article, "2045: The Year Man Becomes Immortal," suggests that we will successfully reverse-engineer the human brain by the mid-2020s. Replicating the computing power of the human brain is one thing; unfortunately, replicating its power efficiency is quite another!

Power efficiency in the semiconductor design and manufacturing ecosystem is overwhelming design reviews: cost and power, performance and power, temperature and power, etc.... Who knows this better than Chris Malachowsky, co-founder and CTO of NVIDIA? Chris and the IEEE Council on EDA bought me lunch at ICCAD last week. Chris talked about everything from superphones to supercomputers and the semiconductor power challenges ahead.

Chris was quick to point out that NVIDIA is a processor company, not a graphics (GPU) company, with $3.5B in revenue, 6,900 employees, and 2,000+ patents. Chris currently runs a 50+ PhD research group inside NVIDIA. One of the projects his group is working on is a supercomputer capable of a MILLION TRILLION calculations per second for the Department of Energy Exascale Program, all in the name of science. The hitch is that it can only consume 20 MW!
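
Those two numbers imply a striking efficiency target. A quick check (simple arithmetic only, no NVIDIA-specific data):

    # An exaflop inside a 20 MW power envelope implies ~50 GFLOPS per watt.
    target_flops   = 1e18      # "a MILLION TRILLION calculations per second"
    power_budget_w = 20e6      # 20 MW

    gflops_per_watt = target_flops / power_budget_w / 1e9
    print(f"required efficiency: {gflops_per_watt:.0f} GFLOPS/W")   # 50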

NVIDIA has 3 very synergistic market segments for their technology:

  • Mobile (Tegra)
  • Gaming/Visual Computing (GeForce/Quadro)
  • Supercomputing (Tesla)

    The largest market today is computer gaming at $35B+, but fingertop computing (mobile) is where the hypergrowth is, as it intersects all three markets. We now live in a pixel-based world and whoever controls those pixels wins!

    One of the worst-kept secrets is NVIDIA's new Tegra 3 architecture, which is an example of what Chris Malachowsky called a multi-disciplinary approach to semiconductor power management. The best write-up is Anandtech's NVIDIA's Tegra 3 Launched: Architecture Revealed.

    The Tegra 3 is a quad-core SoC with almost twice the die size of its predecessor, from 49 mm^2 to around 80 mm^2, built on the TSMC 40nm LPG process. The performance/throughput of Tegra 3 is about five times better than last year's Tegra 2, with 60%+ less power consumption. The Tegra 4 (code name Wayne) has already been taped out on TSMC 28nm and will appear in 2012. We will never see a better process shrink than 40nm to 28nm in regard to performance and power, so expect the Tegra 4 to be extra cool and fast!

    The key to the Tegra 3's low power consumption is the fifth ARM Cortex-A9 "companion" core, running at 500 MHz. You can run one to four cores at 1.3 GHz, or just the companion core for background tasks; thus the power savings, and thus the multi-disciplinary approach to low-power SoC realization. You will see a flood of Tegra 3-based devices at CES in January, so expect NVIDIA to have a Very Happy New Year!
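
A rough illustration of why a slow companion core pays off for background work: dynamic power scales roughly with frequency times voltage squared. The voltages and the capacitance factor below are made-up placeholders, not NVIDIA specifications, and the sketch ignores leakage entirely.

    # Dynamic CMOS power scales roughly as C * V^2 * f. Illustrative numbers only.
    def dynamic_power(freq_ghz, voltage, capacitance=1.0):
        return capacitance * voltage ** 2 * freq_ghz

    main_core      = dynamic_power(freq_ghz=1.3, voltage=1.1)   # one fast core
    companion_core = dynamic_power(freq_ghz=0.5, voltage=0.9)   # slow background core

    ratio = companion_core / main_core
    print(f"companion core: ~{ratio:.0%} of one main core's dynamic power")   # ~26%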

    The fingertop computing technology appetite is insatiable, and with a TAM of 7B+ units you can expect many more multi-disciplinary approaches to low power semiconductor design and manufacturing, believe it!




Physical Verification of 3D-IC Designs using TSVs
by Daniel Payne on 11-12-2011 at 10:36 am

3D-IC design has become a popular discussion topic in the past few years because of the integration benefits and potential cost savings, so I wanted to learn more about how the DRC and LVS flows are being adapted. My first stop was the Global Semiconductor Alliance web site, where I found a presentation about how DRC and LVS flows were extended by Mentor Graphics for the Calibre tool to handle TSV (through-silicon via) technology. This extension is called Calibre 3DSTACK.



    With TSVs each die now becomes double-sided in terms of metal interconnect. DRC and LVS now have to verify the TSVs, plus the front and back metal layers.



    The new 3DSTACK configuration file controls DRC and LVS across the stacked die:



    A second source that I read was at SOC IP where there were more details provided about the configuration file.

    This rule file for the 3D stack has a list of dies with their order number, the position of each die, rotation, orientation, and the location of the GDS layout files and associated rule files and directories.

    Parasitic extraction requires new information about the size and electrical properties of the microbumps, copper pillars and bonding materials.

    One methodology is to first run DRC, LVS and extraction on each die separately, then add the interfaces. The interface between the stacked dies uses a separate GDS, and LVS/DRC checks are run against this GDS.

    For connectivity checking between dies text labels are inserted at the interface microbump locations.
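
Pulling those pieces together, here is a small plain-Python sketch of the flow as described: a per-die list carrying the stack information (order, placement, rotation, GDS and rule files), each die signed off on its own, then the die-to-die interface checked against its own GDS using the microbump text labels. This is not Calibre 3DSTACK syntax, and the run_* helpers are hypothetical stand-ins for the real tool runs.

    # NOT Calibre 3DSTACK syntax: a plain data structure mirroring what the stack
    # configuration is described as containing, plus the per-die-then-interface
    # flow. The run_* / check_* helpers are hypothetical stand-ins.
    from dataclasses import dataclass, field

    @dataclass
    class Die:
        name: str
        order: int                  # position in the stack, bottom die = 0
        x_um: float = 0.0           # placement of the die origin
        y_um: float = 0.0
        rotation_deg: int = 0       # 0 / 90 / 180 / 270
        gds_file: str = ""
        rule_file: str = ""
        microbump_labels: list = field(default_factory=list)

    def run_drc(gds, rules):        print(f"DRC        {gds} with {rules}")
    def run_lvs(gds, rules):        print(f"LVS        {gds} with {rules}")
    def run_extraction(gds, rules): print(f"extraction {gds} with {rules}")
    def check_interface(gds, labels, rules): print(f"interface  {gds}: {len(labels)} labels")

    def verify_stack(stack, interface_gds, interface_rules):
        # 1. Sign off each die separately, front and back metal included.
        for die in sorted(stack, key=lambda d: d.order):
            run_drc(die.gds_file, die.rule_file)
            run_lvs(die.gds_file, die.rule_file)
            run_extraction(die.gds_file, die.rule_file)
        # 2. Check the die-to-die interface against its own GDS; text labels at
        #    the microbump locations provide the connectivity between dies.
        labels = [lbl for die in stack for lbl in die.microbump_labels]
        check_interface(interface_gds, labels, interface_rules)

    verify_stack(
        [Die("logic",  0, gds_file="logic.gds",  rule_file="logic.rules",
             microbump_labels=["VDD", "CLK", "D0"]),
         Die("memory", 1, rotation_deg=180, gds_file="memory.gds",
             rule_file="memory.rules", microbump_labels=["VDD", "CLK", "D0"])],
        interface_gds="stack_interface.gds",
        interface_rules="stack.rules",
    )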

    With these new 3D extensions, Calibre can run DRC, LVS and extraction on the entire 3D stack.
    A GUI helps you visualize the 3D rules and the results from DRC, LVS and extraction:


    TSMC Partner of the Year Award
    Based on this extension of Calibre into the 3D realm, TSMC has just announced that Mentor was chosen as the TSMC Partner of the Year. IC designers continue to use the familiar Calibre rule decks with the added 3DSTACK technology.


    Summary
    Yes, 3D-IC design is a reality today where foundries and EDA companies are working together to provide tools and technology to extend 2D and 2.5D flows for DRC, LVS and extraction.





SPICE Circuit Simulation at Magma
by Daniel Payne on 11-11-2011 at 11:36 am

All four of the public EDA companies offer SPICE circuit simulation tools for use by IC designers at the transistor level, and Magma has been offering two SPICE circuit simulators:

    • FineSIM SPICE (parallel SPICE)
    • FineSIM PRO (accelerated, parallel SPICE)

    An early advantage offered by Magma was a SPICE simulator that could be run in parallel on multiple CPUs. The SPICE competitors have all now followed suit and re-written their tools to catch up to FineSim in that feature.



    I also blogged about FineSIM SPICE and FineSIM Pro in June at DAC.

    When I talk to circuit designers about SPICE tools they tell me that they want:

    • Accuracy
    • Speed
    • Capacity
    • Compatibility
    • Integration
    • Value for the dollar
    • Support

    The priority of these seven attributes really depends on what you are designing.

    Feedback from anonymous SPICE circuit benchmarks suggests that FineSim SPICE can compare favorably with Synopsys HSPICE:

    • Accuracy - about the same, qualified at TSMC for 65nm, 40nm and 28nm
    • Speed - FineSim SPICE can be 3X to 10X faster
    • Capacity - around 1.5M MOS devices, up to 30M RC elements
    • Compatibility - uses inputs: HSPICE, Spectre, Eldo, SPF, DSPF. Models: BSIM3, BSIM4. Outputs: TR0, fsdb, WDF.
    • Integration - co-simulates with Verilog, Verilog-A and VHDL
    • Value - depends on the deal you can make with your Account Manager
    • Support - excellent

    Room for Improvement
    Cadence, Synopsys and Mentor all have HDL simulators that support Verilog, VHDL, SystemVerilog and SystemC. These HDL simulators have been deeply integrated with their SPICE tools, letting you simulate accurate analog with the SPICE engine in context with digital. Magma has no Verilog or VHDL simulator and only does co-simulation, which is really primitive in comparison to these deeper integrations using single-kernel technology.

    Memory designers use hierarchy, and FineSim Pro does offer a decent simulation capacity of 5M MOS devices, although it is not a hierarchical simulator, so you cannot simulate a hierarchical netlist with 100M or more transistors in it. Both Cadence and Synopsys offer hierarchical SPICE simulators. With FineSim Pro you have to adopt a methodology of netlist cutting to simulate just the critical portions of your hierarchical memory design.

    Summary
    You really have to benchmark a SPICE circuit simulator on your own designs, your models, your analysis, and your design methodology to determine if it is better than what you are currently using. This is a highly competitive area for EDA tools and by all accounts Magma has world-class technology that works well for a wide range of transistor-level netlists, like: custom analog IP, large mixed-signal designs, memory design and characterization.

    We've set up a wiki page for all SPICE and fast SPICE circuit simulators to give you a feel for which companies have tools.




Old standards never die
by Paul McLellan on 11-09-2011 at 4:14 pm

I just put up a blog about the EDA interoperability forum, much of which is focused on standards. Which reminded me just how long-lived some standards turn out to be.

    Back in the late 1970s Calma shipped workstations (actually re-badged Data General minicomputers) with a graphic display. That was how layout was done. It's also why, before time-based licenses, EDA had a hardware business model, but that's a story for another day. The disk wasn't big enough to hold all the active designs, so the typical mode of operation was to keep your design on magnetic tape when you weren't actually using the system. Plus you could use a different system next time rather than having to get back on the same system (this was pre-ethernet). The Calma system was called the graphic design system and the second generation was (surprise) labeled with a two. That tape backup format was thus called "graphic design system 2 stream format". Or more concisely GDS2. Even today it is the most common format for moving physical layout design data between systems or to mask-makers, over 30 years later.

    My favorite old standard is the cigarette lighter outlet that we all have in our cars. It was actually designed in the 1950s as a cigar lighter (well, everyone smoked cigars then I guess. Men, anyway). When people eventually wanted a power source, one way to get it was to design a plug that would take power from the cigar lighter outlet. That meant no wiring was required. That was about the only good thing about it. It is an awful design as an electrical plug and socket, with a spring-loaded pin trying to push the plug out of the socket and nothing really holding it solidly in place. Despite this, fifty years later every car has one (or several) of these and we use them to charge our cell phones (now there's a use that wasn't envisioned in the 1950s).

    Even more surprising, since you could already buy a PC power-supply that ran off a cigar outlet, when they first put power outlets for laptops on planes, they (some airlines anyway) used the old cigar lighter outlet. Imagine talking to the guy in the 1950s who designed the outlet, and you told him that in 2011 you'd use that socket to power your computer on a plane. He'd have been astounded. The average person could not afford to go on a plane in those days and, as for computers, they were room-sized, far too big to put on a plane.

    Talking of planes, why do we get on from the left-hand side? It's another old standard living on. Back 2000 years ago, before the invention of the stern-post rudder, ships were steered with a steering oar. For a right-hander (that's most of you: we left-handers are the 10%) that was most conveniently put on the right side of the ship, hence the steerboard or, as we say today, starboard side. The other side was the port side, the side of the ship put against the quay for loading and unloading, without the steering oar getting in the way. When planes first had passenger service, they were sea-planes, so naturally they kept the tradition. Eventually planes got wheels, and jet engines (and cigar outlets for our computers). 2000 years after the steering oar became obsolete that standard lives on.





EDA Interoperability Forum
by Paul McLellan on 11-09-2011 at 3:06 pm

The 24th Interoperability Forum is coming up at the end of the month, on November 30th, to be held at the Synopsys campus in Mountain View. It lasts from 9am until lunch (and yes, Virginia, there is such a thing as a free lunch). I think it looks like a very interesting way to spend a morning.

    Here are the speakers and what they are speaking about:


    • Philippe Margashack, VP of central R&D at ST, will talk about 10 years of standards. Somehow I guess SystemC and TLM may figure prominently.
    • John Goodenough, VP Design technology and automation at ARM
    • Jim Hogan, long-time veteran of Cadence and Artisan and now a private investor, will talk about The sequel: a fistful of dollars (which I believe is on exit strategies)
    • Mark Templeton (another Artisan alumnus) and now president of Scientific Ventures will talk about Survival of the fittest and the DNA of interoperability
    • Mike Keating, a Synopsys fellow and author of the Low Power Methodology Manual, will talk about (surprise) low power: Treading water in a rising flood
    • Shay Gal-On, director of software engineering at EEMBC Technology Center, will talk on Multicore Technology: To Infinity and beyond in complexity. I firmly believe that writing software for high-count multicore processors is as big a challenge as anything that we have on the semiconductor side in the coming decades.
    • Shishpal Rawat, chair of Accellera, will talk on The evolution of standards organizations: 2025 and beyond. Gulp. What technology node are we meant to be on by then?

    Following the presentations there will be a wrap up, a prize drawing (let me guess, an iPad2) then lunch and networking.

    I'll see you there.

    To register, go here.


Synopsys Awarded TSMC's Interface IP Partner of the Year
by Eric Esteve on 11-09-2011 at 9:19 am

Is it surprising to see that Synopsys has been selected Interface IP Partner of the Year by TSMC? Not really, as the company is the clear leader in this IP market segment (which includes USB, PCI Express, SATA, DDRn, HDMI, MIPI and other protocols like Ethernet, DisplayPort, HyperTransport, InfiniBand, Serial RapidIO…). But looking five years back (to 2006), Synopsys was competing with Rambus (no longer active in this type of activity), ARM (still present, but not very involved), and a bunch of "defunct" companies like ChipIdea (bought by MIPS in 2007, then sold to Synopsys in 2009) and Virage Logic (acquired by Synopsys in 2010)… At that time, the interface IP market was worth $205M (according to Gartner) and Synopsys had a decent 25% market share. Since then, the growth has been sustained (see the picture showing the market evolution for USB, PCIe, DDRn, SATA and HDMI) and in 2010 Synopsys is enjoying a market share of… be patient, I will disclose the figure later in this document!




    What we can see in the above picture is the negative impact of the Q4 2008 to Q3 2009 recession on the growth rate of every segment – except the DDRn memory controller. Even if the market recovered in 2010, we should only come back to 20-30% growth rates in 2011. What will happen in 2012 depends, as always, on the health of the global economy. Assuming no catastrophic event, the 2010/2011 growth should continue, and the interface IP market should reach a $350M level in 2012, or 58% larger than in 2009 (a 17% CAGR over these three years).
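
Just to check the arithmetic in that last sentence (the 2009 base value below is back-calculated from the stated figures, not a quoted number):

    # "Reach $350M in 2012, 58% larger than in 2009, a ~17% CAGR over 3 years."
    market_2012    = 350e6
    growth_vs_2009 = 1.58
    market_2009    = market_2012 / growth_vs_2009
    cagr           = growth_vs_2009 ** (1 / 3) - 1

    print(f"implied 2009 market: ${market_2009 / 1e6:.0f}M")   # ~$222M
    print(f"implied CAGR:        {cagr:.1%}")                  # ~16.5%, i.e. the ~17% cited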

    The reasons for growth are well known (at least for those who read SemiWiki frequently!): the massive move from parallel I/Os to high-speed serial, and the ever-increasing need for more bandwidth, not only in networking but also in the PC, PC peripheral, wireless and consumer electronics segments, simply because we (the end users) exchange more data through email and social media and watch movies or listen to music on various, and new, electronic systems. Also, these protocol standards are not falling into commoditization (which would badly impact the price at which you sell interface IP), as the various organizations (SATA, USB, PCIe and DDRn, to name the most important) keep releasing new protocol versions (PCIe gen-3, USB 3.0, SATA 6G, DDR4) which help to keep the IP selling price high. For the mature protocols, the chip makers expect the IP vendors to port the PHY (the physical, technology-dependent part) to the latest technology node (40 or 28 nm), which again helps to keep prices in the high range (half a million dollars or so). Thus the market growth will continue, at least for the next three to four years. IPnest has built a forecast dedicated to these interface IP segments, up to 2015, and we expect to see sustained growth, with the market climbing to a $400M to $450M range (don't expect IPnest to release a three-digit-precision forecast; that is simply anti-scientific!)…





    But what about Synopsys' position? Our latest market evaluation (one week old), integrated in the "Interface IP Survey 2005-2010 – Forecast 2011-2015", shows that for 2010 Synopsys has not only kept the leader position but has consolidated it, moving from a 25% market share in 2006 to a 40%+ share in 2010. Even more impressive, the company holds at least a 50% market share (sometimes more than 80%) in the segments where it plays, namely USB, PCI Express, SATA and DDRn, with the exception of HDMI, where Silicon Image is really too strong; on a protocol they invented, that makes sense!




    All of the above explains why TSMC has made the right choice, and any other decision would not have been rational… except maybe deciding to develop (or at least market) the interface IP functions themselves, as the FPGA vendors are doing…

    By the way, if you plan to attend IP-SoC 2011 on December 7-8 in Grenoble, don't miss the presentation I will give on the interface IP market; see the conference agenda.


    Eric Esteve from IPNEST – Table of Contents for "Interface IP Survey 2005-2010 - Forecast 2011-2015" available here.
