NetApp Enables Secure B2B Data Sharing for the Semiconductor Industry

NetApp approach to security

Data sharing between semiconductor companies and EDA software companies has been critical to the advancement of the industry. But it's had security issues and associated loss of trust along the way. For instance, there have been cases of customer designs shared as a testcase finding their way into a product demo without the consent of the customer. How did this happen? There was no malicious intent. The primary cause was that the shared data was not controlled within a secure vault and there was no tracking of how the data was used and by whom. There was also no clear way to return the data that was sent or ensure that all instances of the data were deleted. This has led to major B2B trust issues, which in turn lead to longer bug-fix cycles because data is not easily shared. A new approach is needed. Read on to see how NetApp is working to improve secure B2B data sharing for the semiconductor industry.

Why the Industry Needs Secure and Trusted B2B Data Sharing

As I have shared in previous articles, data is the ever-growing lifeblood of semiconductor design.  Double digit data growth between 7, 5 and 3nm design nodes is straining design infrastructure.  At the same time the value of that data is increasing. Data once deleted after successful or failed analysis is being saved so AI/ML models can train or learn from past design runs. Data shared for the joint development of AI/ML models is just one example of the importance of robust secure B2B data sharing solutions.

Let’s examine some of the key reasons for B2B data sharing in the semiconductor industry. These items won’t necessarily make big headlines, but they represent a crucial process to advance chip design. The following points highlight some scenarios of interest.

EDA vendor debug

EDA vendors will always require access to customer designs for software debug – this need will never go away. Concerns around sharing testcase data result in delays to gain access to the data, creating longer debug and resolution times. I have even heard stories of EDA teams trying to guess the cause of a problem when access to data was not an option. Rapid access to data is critical for fast resolution of issues and for meeting time-to-market goals.

AI development

EDA tools are rapidly building AI-enabled solutions. Machine learning (ML)/deep learning (DL) can reduce algorithm complexity, increase design efficiency and improve design quality. Training complex ML and DL models requires massive amounts of data. And in most cases, it is data EDA vendors don't have. The data EDA vendors need is their customers' design data. Secure data sharing is critical to the rapid advancement of AI in the semiconductor industry. The challenge is that the volume and proprietary nature of the data further complicate sharing.

NDA compliance

We have an NDA in place, so we're covered, right? Most data sharing NDAs require that data be returned and/or deleted once it is no longer needed. Verifying that all copies of sensitive data were fully deleted in compliance with an NDA is difficult at best.

Collaboration

Modern chip design is a team sport.  IP providers, library vendors, tool vendors and design services teams all work together to meet critical design timelines and design goals.  Secure data sharing to facilitate collaboration is critical for this process to work.

Can we change the way we think about secure data sharing?

Let’s talk about the roles and responsibilities of Data Owners and Data Users. 

  • Data Owners should be able to share data into a Data User's secure, walled-off datacenter while still retaining complete visibility and control over WHO can access the data and WHAT systems can access the data. There should be visibility into how often the data is accessed, with the ability to highlight anomalous data access patterns. Data Owners should be able to monitor the security attributes of the systems that have access to the data.

Data Owners should also be able to securely revoke (or even securely wipe) the data from the system including removing key access.  Data Owners should not find data sitting on a data user’s system unused or after the terms of use have expired or the data has turned cold.  Data Owners should have full visibility of their data at any time even when it is in the Data Users’ datacenter or cloud environment.

  • Data Users should be able to use or share data in their own secure, walled-off datacenter where they have access to their own resources and tools. They should be able to access the data for approved processes such as test case debug, AI model development and design collaboration. Data sets are often so large that it is impractical to expect the Data Owners to host the compute and storage resources for development. So, it is often critical to have access to the data in the Data User's own datacenter.

The NetApp Approach

NetApp's ONTAP storage operating system is used by all of the top semiconductor and EDA companies. ONTAP is also used in all of the 3-letter acronym government facilities today for data sharing. This means that secure B2B data sharing is most likely already a possibility. Because NetApp's ONTAP storage operating system runs in all of the commercial clouds, B2B data sharing can be done datacenter-to-datacenter, datacenter-to-cloud or cloud-to-cloud, all with the same controls and monitoring. You can learn more about ONTAP from this prior post.

You can also get a broad view of NetApp’s approach to security here. There is a very useful technical report available from NetApp. A link is coming.

First, let’s take a look at some of the capabilities that allow NetApp to enable secure B2B data sharing for the semiconductor industry.

  • Support for Zero-Trust security architectures
  • Storage Virtual Machine (SVM) – this enables data to be walled off on a shared storage system. This is effectively a secure multi-tenant data storage environment. SVMs provide role-based access controls that let Data Owners monitor the storage environment, even inside the Data User's datacenter, for real-time auditing
  • Secure data transfer via SnapMirror or FlexCache means no more downloading and untar'ing data. Data is automatically transferred from one ONTAP filer to another with data encryption both at rest and in flight. An added benefit is the data is always up to date in the case of rapidly changing data sets
  • Data encryption is supported with either encrypted or unencrypted drives, using an external key manager
  • Secure data shredding is supported
  • NFS and SMB security with Kerberos is supported
  • Military-grade data security credentials are supported. ONTAP is EAL 2+ and FIPS 140-2 certified
  • File-level granular event monitoring, with integration with security information and event management (SIEM) partners, is available and supports:
    • Log management and compliance reporting
    • Real-time monitoring and event management. This provides visibility of WHO is accessing the data, what systems are accessing the data and how often the data is being accessed.
  • Integration into third party security tools like:
    • Splunk-based system monitoring to report changes to the system
  • Cloud Secure technology also monitors for anomalous access patterns, alerting the Data Owners to suspicious activity

The B2B Data Owner has the ability to securely transmit data, revoke data, monitor the usage and access pattern of data, monitor and alert when the secure Zero-Trust infrastructure has been changed, etc. 

I’ve only scratched the surface here. NetApp offers a lot of capability to create a trusted, secure environment. NetApp is working to improve secure B2B data sharing for the semiconductor industry.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Semiconductor Shortage - No Quick Fix - Years of neglect & financial hills to climb

  • Semi situation stems from long-term systemic neglect
  • Will require much more money & time than thought
  • Fundamental change is needed to offset the financial bias
  • Auto industry is just the hint of a much larger problem

Like recognizing global warming when the water is up to your neck

The problem with the semiconductor industry has finally been recognized, but only after it stopped the production of the beloved F150 pickup truck and Elon's Tesla. Many analysts and news organizations wrongly blame the Covid pandemic and its many consequences and assume this is just another example of the Covid fallout. Wrong! This has been a problem decades in the making. It's not new. The fundamental reasons have been in the works for years. The only thing the pandemic did was to bring the issue to the surface more quickly. The issue could have been brought to the surface just as easily, and with worse consequences, by a conflict between China and Taiwan, or perhaps another trade spat between Japan and Korea.

The semiconductor industry is perhaps not as robust as would otherwise be thought given that it hasn't seen a significant problem before. The reality is that the "internationalization" of both the industry and its supply chain has opened it up to all manner of disruption coming at any point along that long chain. The consolidation has further concentrated the points of failure into a small handful of players, and perhaps one, TSMC, that is 50+% of the non-memory chip market.

Tamagotchi Toys were the Canary in a Coal Mine

Most people may not remember those digital pets called Tamagotchi that were a smash hit in the late 90's. Many in the semiconductor industry in Taiwan do remember them. In the summer of 1997 they sucked up a huge amount of semiconductor capacity in Taiwan and whacked out the entire chip industry for the entire summer, causing delays and shortages of all types of chips.

Tamagotchi Tidal Wave Hits Taiwan

In essence, a craze over a kids' toy created shortages of critical semiconductor chips. Semiconductor capacity is much greater now than it was 20 years ago, but the industry remains vulnerable to demand spikes and slowdowns.

The memory industry is an example of the problem

Perhaps the best example of the chip industry's vulnerability is the memory semiconductor market. The market lives on the razor's edge of supply and demand and the balance maintained between the two. Too much demand and not enough supply and prices skyrocket... too little demand and excess supply and prices collapse. The memory industry is clearly the most cyclical and volatile in the semiconductor universe. One fab going offline for even a short while, due to a power outage or similar, causes the spot market for memory chips to jump.
Kim Jong-Un should buy memory chip futures

All it would take is one "accidentally" fired artillery round from North Korea that hit a Samsung fab in South Korea and took it out of commission. Memory prices would go through the roof for a very long time, as the rest of the industry could never hope to make up for the shortage caused in any reasonable amount of time.

Other industries, such as oil, do not have the same problem

When you look at other industries in which a product is a commodity like memory is, you do not have the same production problem. The oil industry, which is also a razor's balance between supply and demand, does not have the same issue, as there is a huge amount of excess capacity ready to come on line at a moment's notice. The cost of oil pumps and derricks sitting around idle waiting to be turned on is very, very low as compared to the commodity they pump. This means the oil industry can flex up and down as needed by demand and easily make up for the shortage if someone goes off line (like Iran). Imagine if the oil industry kept pumping, at full output, never slowing, for each new oil field drilled. In the semiconductor industry the capital cost is essentially the whole cost, so fabs never ever go offline, as the incremental cost to produce more chips is quite low. This means there is no excess capacity in the chip industry of any consequence and they run 24x7. Capacity is booked out months in advance and capacity planning is a science (perfected by TSMC). The semiconductor industry has all the maneuverability of a super tanker that takes many miles to slow down or speed up... you just can't change capacity that easily.

There is no real fix to the capacity issue due to financials

To build capacity that could be brought on line in a crisis or time of high demand would require an "un-natural" act. That is, spending billions to build a fab only to have it sit there unused, waiting for the capacity to be needed. This scenario is not going to happen... even the government isn't dumb enough to spend billions on a "standby" factory that needs a constant spend to keep up with Moore's law. It's just not going to happen.

Moving fabs "on shore" just reduces supply risk, not demand risk

Rebuilding fabs in the US would be a good thing as it would mean fabs that are no longer an artillery shell away from a crazy northern neighbor or an hour boat ride away from a much bigger threat that still claims to own you. That will certainly help reduce the supply side risk, assuming we don't build the new fabs on fault lines or flood zones. The demand side variability will still exist but could be managed better.

Restarting "Buggy Whip" manufacturing

The other key thing that most people do not realize is that most semiconductors used in cars, toys and even defense applications are made in very old fabs. All those older fabs that used to make 386 and 486 chips and 1 megabit memory parts have long ago been sold for scrap by the pound and shipped off to Asia (China) and are now making automotive and toaster oven chips. Old fabs never die... they just keep making progressively lower value parts. As I have previously mentioned in a prior note, you don't make a 25 cent microcontroller for a car in a $7B, 5nm fab... the math simply doesn't work. This ability to keep squeezing value out of older fabs has worked as demand for trailing edge has not exceeded capacity.
For a typical chip company, the leading edge fab makes the highest value CPU, the next generation older fab maybe makes a GPU, the next older fab maybe some I/O chips or comms chips, the older fab makes consumer chips and the oldest fabs make chips for TV remotes. In bleeding edge fabs the equipment costs are the vast majority, with labor being a rounding error. In older fabs, with fully depreciated equipment, labor starts to become a factor, so many older fabs are better suited to be packed up and shipped off to a low labor cost country. The biggest problem is that demand for older chip technology seems to have exceeded the amount of older capacity in the world, as chips are now in everything and IoT doesn't need bleeding edge. Equipment makers for the most part don't make 6 inch (150mm) tools anymore; some still make their old 8 inch (200mm) tools, some don't. As we have previously mentioned, demand for 200mm now exceeds what it was at its peak.

Old Tools are being Hoarded

Summary

Fixing not only the shortage issue but the risk issue will take not only a lot of time but a lot of money. The problem is systemic and has been dictated by financial math that has incentivized what we currently have in place. In order to change the behavior of anyone who runs a chip company and can add, we need to put in place financial incentives, legal decrees, legislative incentives and use multiple levers to change the current dynamics of the industry. Even with all the written motivation in place it will still take years for the physical implementation of the incentivized changes. TSMC has been under enormous pressure for years about a fab in the US. Now they are planning one in Arizona that is still years away, will be old technology when it comes on line and will barely be a rounding error... all that from a multi-billion dollar effort... but it's a start. A real effort is likely to be well north of $100B and 10 to 20 years in the making before we could get back to where the US was in the semiconductor industry 20 years ago.

The Stocks

As the saying goes, buying semiconductor equipment company stocks is like buying a basket of the semiconductor industry. They can also be viewed as the "arms merchants" in an escalating war. It doesn't matter who wins or loses in the chip industry, but building more chip factories is obviously good for the equipment makers, in general. In the near term, foreign makers such as Tokyo Electron, ASM International, Nova Measuring and others may make for an interesting play. There is plenty of time, as we are sure that no matter what happens we will see zero impact from government sponsored activities in 2021, and it will likely take a very long time to trickle down, so we would beware of "knee jerk" reactions that may drive the stocks near term.

TECHTALK: Hierarchical PI Analysis of Large Designs with Voltus Solution

Power integrity analysis in large chip designs is especially challenging thanks to the huge dynamic range the analysis must span. At one end, EM estimation and IR drop through interconnect and advanced transistor structures require circuit-level insight—very fine-grained insight, but across a huge design. At the other, activity modeling requires system-level insight and rolling EM-IR analytics up to the full-chip power delivery network (PDN). Watch this CadenceTECHTALK on hierarchical PI analysis on March 11 to learn about a new approach to meet this need. REGISTER NOW to make sure you don't miss the webinar.

The need

These are real design problems today, such as in the giant AI chips you are likely to see in hyperscaler installations, or perhaps in a CPU cluster together with eight giant GPUs. These are already way too big to run full-flat EM-IR analysis across the whole chip. Yet they are very important analyses to get right, because marketable implementations depend on finding the narrow window between under-design and over-design: between a design that may fail on timing and/or reliability in production because critical areas of the PDN were not sufficiently sized up, and a design for which, to overcompensate for an uncertain analysis, you sized up too much of the network, pushing chip area outside a profitable bound. Cadence has introduced a hierarchical analysis methodology in the Voltus IC Power Integrity Solution, which is particularly well suited to large designs with multiple repeated elements like those GPUs. (Come to think of it, this may well cover most super-large designs. After all, who is going to build such a design purely out of unique functions?) This latest release will generate models for IP blocks that can stand in for those blocks in full-chip analysis. These models have an order-of-magnitude-lower memory demand yet preserve accuracy within a few percent of a full-flat analysis—a very practical approach to managing EM-IR analysis across huge designs.

Summary: Hierarchical PI Analysis of Large Designs with Voltus Solution

Memory requirements and runtime for full-chip EM-IR analysis have become a major challenge at advanced nodes. It is not uncommon to see designs with 100s of millions of cells, and some even in the multi-billion range. Running a flat analysis requires multiple terabytes of memory over a distributed network. To mitigate these issues, the Cadence® Voltus™ IC Power Integrity Solution enables designers to run hierarchical analysis using IP modeling technology. This helps designers create xPGV models for their IP blocks, accurately capturing the demand current and electrical parasitics. These xPGV models are an order of magnitude smaller compared to the fully extracted block. When used in the chip-level analysis, they can help significantly reduce runtime and memory. The modeling methodology used in the Voltus IC Power Integrity Solution ensures minimal result difference relative to a fully flat analysis. This TechTalk will cover the generation of xPGV models, including the package model, and their use in IC-level analysis. Attend this CadenceTECHTALK to learn how to:
  • Run your largest designs much faster with lower memory
  • Perform very accurate sub-chip analysis, including impact of chip-level demand current and parasitics
  • Reuse IP models in different designs or for multiple instantiations within a design

USB4 Makes Interfacing Easy, But is Hard to Implement

USB made its big splash by unifying numerous connections into a single cable and interface. At the time there were keyboard ports, mouse ports, printer ports and many others. Over the years USB has delivered improved performance and greater functionality. However, as serial interfaces became more popular and started being used for what were previously parallel interfaces, there was a proliferation of new serial cables and protocols. The latest version of USB, referred to as USB4, makes a new bold move to unify many of these different interfaces. USB4 naturally works for USB data streams, but it also can tunnel PCIe, Thunderbolt 3, and DisplayPort data streams. USB4 supports 20 Gbps and can go up to 40 Gbps. It specifies use of the USB Type-C connector, which further simplifies the user experience. And like its predecessors, it manages power distribution with USB PD. It offers one connector for device interfaces, storage, peripherals and display output.

However, with this unification comes complexity under the hood. Many legacy and new features are included in the host and device specification for USB4. One of the hallmarks of the USB interface is its backward compatibility. And so, USB4 is USB 2 and USB 3 compatible, as one might expect. USB4 is a multi-lane interface, so lane bonding is essential. Higher data rates call for more sophisticated encoding and error correction algorithms. Layers of abstraction for routing and tunneling have added complexity. Indeed, the list of features inside a properly functioning USB4 interface is lengthy.

Implementing USB4 is not a trivial task. At each stage of development, it is essential to have the ability to verify that everything conforms to the specification and is implemented properly. It is imperative to have a verification environment that can exercise all the functionality and provide designers information to help isolate and pin down issues. Last summer Truechip, a leading provider of verification IP (VIP), announced the customer shipment of their USB4 and eUSB VIP.

USB4 Verification IP

Truechip has a truly impressive offering of VIP for nearly every category of design. These include storage, bus & interfaces, USB, automotive, memory, PCIe, networking, MIPI, AMBA, display, RISC-V, and defense & avionics. Their VIP includes coverage, assertions, BFMs, monitors, scoreboards and testcases. In addition, they support error injection scenarios that can be crucial for finding problems that could otherwise crop up in the field. Their VIP works on a wide range of platforms - UVM, OVM, VMM and Verilog. Truechip's USB4 VIP is fully compliant with the v1.0 specification. It includes backward compatibility with USB 2.0.
As expected, it also includes Power Delivery for USB 3.0 and Type-C v2.0. Truechip's VIP also supports all logical layer ordered sets. It has 64b/66b, 128b/132b and Reed-Solomon FEC encoding and decoding. In reality, the list of features it supports is too long to list here.

The deliverables for the USB4 VIP are also comprehensive. In addition to the host and device models, it includes bus functional models and agents for the electrical layer, logical layer, transport layer, configuration layer and the protocol adapter layer. It comes with a monitor and scoreboard. There are test suites for basic and directed protocol tests. It has low power tests, error scenario tests, stress tests, random tests and compliance tests. Truechip's USB4 VIP is highly configurable and contains everything needed to verify any portion of a USB4 interface design. With it designers can be assured that their finished products will fully conform to the specification and will work reliably in silicon. For more information on this VIP check out the Truechip website.

Features of Resistive RAM Compute-in-Memory Macros

Resistive RAM (ReRAM) technology has emerged as an attractive alternative to embedded flash memory storage at advanced nodes. Indeed, multiple foundries are offering ReRAM IP arrays at 40nm nodes, and below. ReRAM has very attractive characteristics, with one significant limitation:
  • nonvolatile
  • long retention time
  • extremely dense (e.g., 2x-4x density of SRAM)
  • good write cycle performance (relative to eFlash)
  • good read performance
but with
  • limited endurance (limited number of ‘1’/’0’ write cycles)
These characteristics imply that ReRAM is well-suited for the emerging interest in compute-in-memory architectures, specifically for the multiply-accumulate (MAC) computations that dominate the energy dissipation in neural networks. To implement a trained NN for inference applications, node weights in the network would be written to the ReRAM array, and the data inputs would be (spatially or temporally) decoded as the word lines accessing the array weight bits. The multiplicative product of the data/wordline = '1' and the stored weight_bit = '1' would result in significant memory bitline current that could be readily sensed to denote the bit product output – see the figure below.

[Figure: bitcell current in a ReRAM array]

At the recent International Solid State Circuits Conference (ISSCC), researchers from Georgia Tech and TSMC presented results from an experimental compute-in-memory design using TSMC's 40nm ReRAM macro IP. [1] Their design incorporates several unique features – this article summarizes some of the highlights of their presentation.

Background

As the name implies, ReRAM technology is based on the transitions of a thin film material between a high-resistance and a low-resistance state. Although there are a large number of different types of materials (and programming sequences) used, a typical metal-oxide thin-film implementation is depicted in the figure below.

[Figure: filament formation in resistive RAM]

The metal oxide thin film material shown incorporates the source and transport of oxygen ions/vacancies under an applied electric field of high magnitude. (The researchers didn't elaborate on the process technology in detail, but previous TSMC research publications on ReRAM development did utilize a TiO-based thin film programming layer. Multiple metal-oxide thin film materials are also used.)

As depicted in the figure above, an initial "filament forming" cycle is applied, resulting in transport of oxygen ions in the thin film. In the Reset state ('0'), a high electrical resistance through the metal-oxide film is present. During the application of a Set ('1') write cycle, oxygen ion migration occurs, resulting in an extension of the filament throughout the thin film layer, and a corresponding low electrical resistance. In the (bipolar operation) technology example depicted above, the write_0 reset cycle breaks this filament, returning the ReRAM cell to its high resistance state. The applied electric field across the top thin film for the (set/reset) write operation is of necessity quite large; the applied "read" voltage to sense the (low or high) bitcell resistance utilizes a much smaller electric field.

There are several items of note about ReRAM technology:
  • the bitcell current is not a strong function of the cell area
The filamentary nature of the conducting path implies that the cell current is not strongly dependent on the cell area, offering opportunities for continued process node scaling.
  • endurance limits
There is effectively a "wearout" mechanism in the thin film for the transition between states – ReRAM array specifications include an endurance limit on the number of write cycles (e.g., 10^4 – 10^6). Commonly, there is no limit on the number of read cycles. The endurance constraints preclude the use of ReRAM as a general-purpose embedded "SRAM-like" storage array, but it is the evolutionary approach adopted as an eFlash replacement, and a compute-in-memory offering where pre-calculated weights are written, and updated very infrequently.
  • resistance ratio, programming with multiple write cycles
The goal of ReRAM technology is to provide a very high ratio of the high resistance to low resistance states (HRS/LRS).  When the cell is being accessed during a read cycle – i.e., data/wordline = ‘1’ – the bitline sensing circuit is simplified if i_HRS << i_LRS. Additionally, it is common to implement a write to the bitcell using multiple iterations of a write-read sequence, to ensure the resulting HRS or LRS cell resistance is within the read operation tolerances.  (Multiple write cycles are also initially used during the forming step.)
  • HRS drift, strongly temperature dependent
The high-resistance state is the result of the absence of a conducting filament in the top thin film, after the oxygen ion transport during a write '0' operation. Note in the figure above the depiction of a high oxygen vacancy concentration in the bottom metal oxide film. Any time a significant material concentration gradient is present, diffusion of this material may occur, accelerated at higher temperatures. As a result, the HRS resistance will drift lower over extended operation (at high temperature).

Georgia Tech/TSMC ReRAM Compute-in-Memory Features

The researchers developed a ReRAM-based macro IP for a neural network application, with the ReRAM array itself providing the MAC operations for a network node, and supporting circuitry providing the analog-to-digital conversion and the remaining shift-and-add logic functionality. The overall implementation also incorporated three specific features to address ReRAM technology issues associated with: HRS and LRS variation; low (HRS/LRS) ratio; and HRS drift.

low HRS/LRS ratio

One method for measuring the sum of the data inputs to the node multiplied times a weight bit is to sense the resulting bitline current drawn by the cells whose data/wordline = '1'. (Note that unlike a conventional SRAM block with a single active decoded address wordline, the ReRAM compute-in-memory approach will have an active wordline for each data input to the network node whose value is '1'. This necessitates considerable additional focus on read-disturb noise on adjacent, unselected rows of the array.) However, for a low HRS/LRS ratio, the bitline current contribution from cells where data = '1' and weight = '0' needs to be considered. For example, if (HRS/LRS) = 8, the cumulative bitline current of eight (data = '1' X weight = '0') products will be equivalent to one LRS current ('1' X '1'), a binary multiplication error.

The researchers chose to use an alternative method. Rather than sensing the bitline current (e.g., charging a capacitor for a known duration to develop a readout voltage), the researchers pumped a current into the active bitcells and measured Vbitline directly, as illustrated below.

[Figure: Vbitline sensing]

The effective resistance is the parallel combination of the active LRS and HRS cells. The unique feature is that the current source value is not constant, but is varied with the number of active wordlines – each active wordline also connects to an additional current source input. Feedback from Vbitline to each current source branch is also used, as shown below.

[Figure: bitline current source with feedback]

This feedback loop increases the sensitivity of each current source branch to Reffective, thus amplifying the resistance contribution of each (parallel) LRS cell on the bitline, and reducing the contribution of each (parallel) HRS cell. The figure below illustrates how the feedback loop fanout to each current branch improves the linearity of the Vbitline response, with an increasing number of LRS cells accessed (and thus, parallel LRS resistances contributing to Rtotal).

[Figure: Vbitline linearity]
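
To make the multiplication error that motivated this alternative sensing scheme concrete, here is a minimal numerical sketch (my own illustration, not from the paper); it models each accessed bitcell as a simple resistor at an assumed read voltage, with an assumed 10 kΩ LRS and the HRS/LRS = 8 ratio from the example above.

```python
# Toy model of current-sensed MAC on one ReRAM bitline.
# All values are illustrative assumptions, not the paper's actual numbers.
V_READ = 0.2        # read voltage in volts (assumed)
R_LRS = 10e3        # low-resistance state, weight = '1' (assumed 10 kOhm)
R_HRS = 8 * R_LRS   # high-resistance state, weight = '0' (HRS/LRS = 8)

def bitline_current(weights, data):
    """Sum the currents of all bitcells whose data (wordline) input is '1'."""
    total = 0.0
    for w, d in zip(weights, data):
        if d == 1:                          # wordline active for this input
            total += V_READ / (R_LRS if w == 1 else R_HRS)
    return total

i_one_product = V_READ / R_LRS              # current of a single '1' x '1' product

# Eight (data='1', weight='0') cells draw the same total current as one
# (data='1', weight='1') cell when HRS/LRS = 8 -- the binary multiplication error.
print(bitline_current(weights=[0] * 8, data=[1] * 8) / i_one_product)  # -> 1.0
print(bitline_current(weights=[1], data=[1]) / i_one_product)          # -> 1.0
```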

LRS/HRS variation

As alluded to earlier, multiple iterations of write-read are often used to confirm the written value into the ReRAM cell.

[Figure: iterative write-verify sequence]

The technique employed here to ensure a tight tolerance on the written HRS and LRS value evaluates the digital value read after the write, and increases/decreases the pulse width of the subsequent (reset/set) write cycle iteration until the (resistance) target is reached, ending the write cycle.

HRS drift

The drift in HRS resistance after many read cycles is illustrated below (measured at high operating conditions to accelerate the mechanism).

[Figure: HRS drift over read cycles]

To compensate for the drift, each bitcell is periodically read – any HRS cell value which has changed beyond a pre-defined limit will receive a new reset write cycle to restore its HRS value. (The researchers did not discuss whether this "mini-reset" HRS write cycle has an impact on the overall ReRAM endurance.)

Testsite Measurement Data

A micrograph of the ReRAM compute-in-memory testsite (with specs) is shown below.

[Figure: testsite micrograph]

Summary

ReRAM technology offers a unique opportunity for computing-in-memory architectures, with the array providing the node (data * weight) MAC calculation. The researchers at Georgia Tech and TSMC developed a ReRAM testsite with additional features to address some of the technology issues:
  • HRS/LRS variation:  multiple write-read cycles with HRS/LRS sensing are used (a generic sketch of such a write-verify loop follows this list)
  • low HRS/LRS ratio:  a Vbitline voltage-sense approach is used, with a variable bitline current source (with high gain feedback)
  • HRS drift:  bitcell resistance is read periodically, and a reset write sequence applied if the read HRS value drops below a threshold
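
As a rough illustration of that write-verify idea, here is a generic sketch (not the authors' algorithm; the cell interface, starting pulse width, step factors and tolerance are all hypothetical):

```python
# Generic ReRAM write-verify loop: after each write pulse, read the cell back
# and adjust the next pulse width until the resistance lands in a target window.
# The `cell` object and every numeric value here are illustrative assumptions.

def write_verify(cell, is_set, target_ohms, tol=0.10, max_iters=16):
    """Write a cell (SET -> LRS, RESET -> HRS) until resistance is within tol of target."""
    width_s = 50e-9                             # assumed initial pulse width: 50 ns
    for _ in range(max_iters):
        cell.apply_pulse(is_set=is_set, width_s=width_s)
        r = cell.read_resistance()              # low-voltage read after the write pulse
        if abs(r - target_ohms) / target_ohms <= tol:
            return True                         # target window reached; stop writing
        # SET pulses drive resistance down, RESET pulses drive it up: lengthen the
        # next pulse if the target has not been reached, shorten it on overshoot.
        undershoot = (r > target_ohms) if is_set else (r < target_ohms)
        width_s *= 1.25 if undershoot else 0.8
    return False                                # give up after max_iters attempts
```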

I would encourage you to review their ISSCC presentation.

-chipguy

References

[1] Yoon, Jong-Hyeok, et al., "A 40nm 64kb 56.67TOPS/W Read-Disturb-Tolerant Compute-in-Memory/Digital RRAM Macro with Active-Feedback-Based Read and In-Situ Write Verification", ISSCC 2021, paper 29.1.

It's Energy vs. Power that Matters

In tiny devices, such as true wireless headphones, the battery life of the device is usually determined by the chips that execute the device's functions. Professor Jan Rabaey of UC Berkeley, who wrote the book on low power, also coined the term "energy frugal" a number of years ago, and this term is even more valid today with the proliferation of true wireless devices. When optimizing the battery lifetime, power and energy are often used interchangeably. However, they are not interchangeable: the device's battery stores energy, while reducing power can actually consume more energy. Techniques to reduce energy by reducing voltage are being deployed more broadly as demand takes off for true wireless products. In this blog, I'm going to illustrate what's behind this trend through several examples that demonstrate the relationship between energy, power and voltage.

Let's start by reviewing the basic equations for energy and power, shown below in Figure 1. They look similar but there are a few critical takeaways: 1) energy consumption cannot be reduced by reducing frequency, 2) leakage cannot be reduced without reducing VDD (excluding process options) and finally, 3) because of the quadratic relationship, VDD is by far the most effective method of reducing energy.

Figure 1: Basic equations for energy and power
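
The equation figure itself does not survive in this text capture. For reference, the standard CMOS forms consistent with these takeaways and with the worked examples below are approximately (my reconstruction, not the original figure):

```latex
% Reconstruction of the Figure 1 relations (not the original figure)
\begin{align*}
P_{dyn}  &\approx \alpha \, C_{eff} \, V_{DD}^{2} \, f  &  E_{dyn}  &\approx \alpha \, C_{eff} \, V_{DD}^{2} \, (f \cdot t) \\
P_{leak} &\approx V_{DD} \, I_{leak}                    &  E_{leak} &\approx V_{DD} \, I_{leak} \, t
\end{align*}
```

Since the number of clock cycles f·t is fixed by the amount of work to be done, Edyn does not depend on frequency, while Eleak grows with the time t the circuit stays powered - consistent with takeaways 1) and 2) above.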

Let's look at the takeaways with some examples. For takeaway 1), an example is simple: reducing frequency by 10% increases Eleak by 10% (as t increases 10%) while Edyn remains unchanged. This "fallacy" is mostly seen in "run-to-complete" strategies. For example, let's say that your processor consumes 90% dynamic energy and 10% leakage energy at its nominal voltage. If you run to complete (i.e. run the processor as fast as you can) and then let it leak (i.e. no power gating), neither dynamic energy nor leakage changes (see the equations). But the fallacy shows up if you try to run faster for the sake of shutting down earlier. For example, let's say a 10% frequency increase for a 10% VDD increase to run to complete 10% faster. Your new energy consumption is E = 0.9*(1.1)² + 0.1*1.1*0.9 = 119%. Clock gating doesn't change this equation as it equally affects all dynamic energy cases, but let's look at power gating's effect. If your power gating switches super-fast and doesn't cost active energy, then the theoretical maximum you can save is the leakage energy (10%). How about running as fast as you can and then power gating? The dynamic energy increase is quadratic and the leakage linear, so you can't win. For the 10% frequency increase case above, you would still end up consuming more energy (0.9*(1.1)² + 0 = 109%).

For takeaways 2) and 3) above, let's turn to examples that employ reduced voltage. These are not hypothetical examples, as we are working with companies to deploy solutions based on reduced voltage today. I'll need to explain a few assumptions to start. Assume that your computation time linearly depends on VDD (a realistic assumption up to a point). Let's say that this is a slow operating mode (you also have modes that take more of the clock cycle), so your processor (at the same 90% dynamic / 10% leakage energy as above) finishes in 50% of the clock cycle. Let's use the remaining 50% of the clock cycle to reduce VDD (i.e. halve VDD). This would result in a huge reduction in energy. For those interested in the exercise: E = 0.9*(0.5)² + 0.1*0.5*2 = 32.5%. It gets even better, as Ileak reduces exponentially with voltage. Let's say that Ileak reduces by 90% when VDD is halved as above. Your new energy is reduced further to only 23.5% (E = 0.9*(0.5)² + 0.1*0.1*(0.5*2) = 23.5%).

In case you are thinking that I'm writing this from an ivory tower, there are also cases where reducing voltage does not make sense when looking at the total chip. Let's say that you have an old PLL which consumes as much energy as your processor but which can be shut off with no leakage. Then the 50% VDD drop case from above would end up consuming more energy (2*0.5 + 0.5*(0.9*(0.5)² + 0.1*0.1*(0.5*2)) = 112%). It's not an uncommon story in the IC industry that the overhead ends up cancelling out the gains, and in upcoming blogs I'll show you how to avoid that with dynamic voltage and frequency scaling (DVFS) systems, based on our experience working with design teams working on true wireless devices.

https://minimaprocessor.com/
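
For readers who want to replay the arithmetic in the examples above, this small script (using the same normalized 90%/10% dynamic/leakage split and the author's expressions as written) reproduces the quoted numbers:

```python
# Reproduce the normalized energy figures quoted in the post.
# Energies are relative to the baseline case: nominal VDD, nominal frequency,
# 90% dynamic / 10% leakage energy split.
DYN, LEAK = 0.9, 0.1

# Run 10% faster by raising VDD 10% (so t shrinks to ~0.9 of baseline).
e_faster = DYN * 1.1**2 + LEAK * 1.1 * 0.9
print(f"+10% VDD, finish 10% sooner:      {e_faster:.0%}")            # ~119%

# Same, but with ideal power gating removing the leakage term entirely.
e_faster_pg = DYN * 1.1**2
print(f"  ... with ideal power gating:    {e_faster_pg:.0%}")         # ~109%

# Halve VDD and let the computation stretch across the full cycle (2x time).
e_half_vdd = DYN * 0.5**2 + LEAK * 0.5 * 2
print(f"half VDD, 2x time:                {e_half_vdd:.1%}")          # 32.5%

# Additionally assume leakage current drops by 90% at half VDD.
e_half_vdd_low_ileak = DYN * 0.5**2 + LEAK * 0.1 * (0.5 * 2)
print(f"  ... with 90% lower Ileak:       {e_half_vdd_low_ileak:.1%}") # 23.5%

# Counter-example: a fixed PLL overhead (author's expression, as written)
# makes the half-VDD case consume more energy overall.
e_with_pll = 2 * 0.5 + 0.5 * e_half_vdd_low_ileak
print(f"half VDD plus fixed PLL overhead: {e_with_pll:.0%}")          # ~112%
```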

Webinar: Achronix and Vorago Deliver Innovation to Address Rad-Hard and Trusted SoC Design

Radiation hardening is admittedly not a challenge every SoC design team faces. Methods to address this challenge typically involve a new process technology, a new library or both. Trusted, secure design is something more design teams worry about and that number is growing as our interconnected world creates new and significant attack surfaces. This challenge typically requires the introduction of new IP, new process tweaks or both. There is a webinar coming on SemiWiki that explains how to deal with both of these challenges with minimal perturbation to both the IP and process strategy. The work here is significant. Read on to learn how Achronix and Vorago deliver Innovation to address rad-hard and trusted SoC design.

The webinar presents the collaboration of two companies. Achronix brings embedded FPGA technology to the table and Vorago brings a unique and low-impact approach to radiation hardened design. Together, these two companies solve a lot of rather difficult problems in an elegant way. First, a bit about the speakers.

Dr. Patrice Parris

The webinar begins with a presentation by Dr. Patrice Parris, chief technology officer at Vorago Technologies. With several degrees in EE, CS and physics from MIT and a diverse career in innovative work at NXP, Freescale and Motorola, Patrice provides a comprehensive overview of radiation hardened design that is easy to follow. He describes Vorago’s unique and patented capabilities to provide technology solutions to address radiation hardening and extreme temperature requirements. More on this technology in a moment.

The next speaker is Raymond Nijssen, vice president and chief technologist at Achronix. Raymond has deep background in ASIC/FPGA design as well as EDA product development. He is driving both the software systems to support Achronix FPGAs as well as key aspects of its FPGA architectures. Both of these gentlemen hold multiple patents. The depth of their technical understanding is substantial. More relevant for the webinar is that they both are able to explain complex concepts in ways that are easy to understand.

Raymond Nijssen

If the topics of radiation hardening or trusted, secure design are of interest, I highly recommend this webinar. You will come away with new tools and new insights. I will provide an overview of the topics covered in the webinar and then provide a link to register.

We'll start with Vorago. The company provides an innovative technology called HARDSIL® that adds radiation hardening cost-effectively to existing production fab capability. The approach is to add a small number of mask steps and implants to achieve rad-hard performance. These additions are straightforward and don't impact transistor performance or yield. So, there is minimal impact on the design flow and IP. If this sounds too good to be true, watch the webinar. You will be treated to a very comprehensive overview of how this all works, including SEM photos. Patrice also does a great job explaining the various types of circuit events that occur during radiation dosing of semiconductors. There are several, with different implications for short- and long-term performance of the circuit. I thought I understood these issues. I wound up learning some new and interesting concepts.

Throughout the webinar, Patrice and Raymond interleave their presentations to build the complete story. Achronix is a unique company that provides both stand-alone and embedded FPGA solutions. I previously covered the offerings of Achronix in this post. There are many other excellent posts about Achronix on its SemiWiki page. Raymond provides an overview of the threats that exist in the semiconductor supply chain. There are many opportunities for theft, tampering and reverse engineering. A trusted flow is daunting for sure. What is quite interesting are the benefits of using embedded FPGA technology in chip design. You need to see Raymond unfold the benefits in detail, but the primary point is that the function and implementation of a circuit are separated in an FPGA and that makes a big difference regarding security.

Raymond and Patrice also describe how HARDSIL is being applied to the Achronix embedded FPGA technology to complete the picture. There is a lot of very useful information presented in this webinar. The tight collaboration between Achronix and Vorago comes across quite well. This webinar will be presented on Tuesday, March 9, 2021 at 10AM Pacific time. You can register for the webinar here. I highly recommend you attend and see how Achronix and Vorago deliver Innovation to address rad-hard and trusted SoC design.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

TSMC ISSCC 2021 Keynote Discussion

Now that semiconductor conferences are virtual there are better speakers, since they can prerecord, and we have the extra time to do a better job of coverage. Even when conferences go live again I think they will also be virtual (hybrid), so our in-depth coverage will continue. ISSCC is one of the conferences we covered live since it's in San Francisco, so that has not changed. We will however be able to cover many more sessions as they come to our homes on our own time.

First off is the keynote by TSMC Chairman Mark Liu: Unleashing the Future of Innovation. Given the pandemic-related semiconductor boom that TSMC is experiencing, Mark might not have had time to do a live keynote, so this was a great opportunity to hear his recorded thoughts on the semiconductor industry, the foundry business model, and advanced semiconductor technologies. Here are some highlights from his presentation/paper intermixed with my expert insights:
  • The semiconductor industry has been improving transistor energy efficiency by about 20-30% for each new technology generation and this trend will continue.
  • The global semiconductor market is estimated at $450B for 2020.
  • Products using these semiconductors represent 3.5% of GDP ($2T USD).
  • From 2000 to 2020 the overall semiconductor industry grew at a steady 4%.
  • The fabless sector grew at 8% and foundry grew 9% compared to IDM at 2%.
  • In 2000 fabless revenue accounted for 17% of total semiconductor revenue (excluding memory).
  • In 2020 fabless revenue accounted for 35% of total semiconductor revenue (excluding memory) – a share consistent with those growth rates, as the quick check after this list shows.
  • Unlike IDMs, innovators are only limited by their ideas, not capital.
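
As a quick sanity check on those figures (my own arithmetic, not from the keynote): a segment growing about 8% per year inside a market growing about 4% per year roughly doubles its share over 20 years, which matches the quoted move from 17% to 35%.

```python
# Consistency check of the keynote figures: implied 2020 fabless share of
# (ex-memory) semiconductor revenue, given the quoted 2000 share and growth rates.
share_2000 = 0.17      # fabless share of revenue in 2000 (quoted)
fabless_cagr = 0.08    # fabless annual growth (quoted)
market_cagr = 0.04     # overall industry annual growth (quoted)
years = 20

implied_2020_share = share_2000 * ((1 + fabless_cagr) / (1 + market_cagr)) ** years
print(f"implied 2020 fabless share: {implied_2020_share:.0%}")   # ~36% vs. 35% quoted
```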

Nothing like a subtle message to the new Intel CEO. It will be interesting to see if the Intel - TSMC banter continues. I certainly hope so. The last one, which started with Intel saying that the fabless model was dead, did not end so well. Mark finished his IDM message with: "Over the previous five decades, the most advanced technology had been available first to captive integrated device manufacturers (IDMs). Others had to make do with technologies that were one or several generations behind. The 7nm logic technology (mass production in 2017) was a watershed moment in semiconductor history. In 2017, 7nm logic was the first time that the world's most advanced technology was developed and delivered by pure-play foundries first, and made available broadly to all fabless innovators alike. This trend will likely continue for future technology generations…"

As we all now know, Intel will be expanding TSMC outsourcing at 3nm. TSMC 3nm will start production in Q4 of this year, with high volume manufacturing beginning in 2H 2022. The $10B question is: will Intel get the Apple treatment from TSMC (early access, preferred pricing, and custom process recipes)? I'm not sure everyone understands the possible ramifications of Intel outsourcing CPU/GPU designs to TSMC, so let's review:
  • Intel and AMD will be on the same process so architecture and design will be the focus. More direct comparisons can be made.
  • Intel will have higher volumes than AMD so pricing might be an issue. TSMC wafers cost about 20% less than Intel if you want to do the margins math.
  • Intel will have designs on both Intel 7nm and TSMC 3nm so direct PDK/process comparisons can be made.

Bottom line: 2023 will be a watershed moment for Intel manufacturing, absolutely!

https://www.youtube.com/watch?v=8mI8l7jQzHg&t=122s

The Chip Market / China Conundrum

In its February 20, 2021 edition, the Economist published an article entitled "How to kill a democracy; China faces fateful choices, especially involving Taiwan". It went on to quote "To many Chinese, the island's conquest is a sacred national mission" as well as a by-line "America is losing its ability to deter a Chinese attack on Taiwan. Allies are in denial."

The thought of such an attack should send cold shivers down the chip industry's spine given that, were this to happen, a pivotal part of the western world's chip supply would dry up overnight. Chip inventories would quickly become exhausted and end equipment production lines everywhere would grind to a halt within a matter of weeks, even days. The near instant impact on global trade and the world economy would be orders of magnitude greater than the 2008 Lehman Brothers crash or the 2020 Covid-19 lockdown.

This problem has been brewing for years, the combined result of an efficient out-sourcing regime, driven faultlessly by TSMC, aided and abetted by super-efficient chip-design tools. Both trends have been manna from heaven to chip firms, users and their investors alike, as they offered lower chip costs and allowed firms to deploy outsourcing-rich, asset-lite manufacturing strategies, increasing profits and diverting their cash flows from investments to dividends and share buy-back schemes. It was accounting Excel sheet heaven.

No-one paid any attention to the loss of control of a key strategic manufacturing industry; why should they? Taiwan was the West's friend and TSMC an outstanding company and, in any case, chips were just another commodity. The 'Real men have fabs' naysayers were ridiculed as out of touch, out of date, twentieth century dinosaurs.

The current chip shortage, and its devastating impact on the automotive industry, has to a limited extent stirred the chip-supply hornet's nest, but this will all blow over once the supply-demand imbalance gets sorted. Knee-jerk initiatives, such as the US 'Chips for America' and the EU 'European Initiative on Processors and Semiconductor Technologies', are the wrong answer to the right problem. They fail to address the fundamental issue that chip firms do not want to own wafer fabs (it screws up their balance sheet) and the chip users don't care where the chips come from (so long as they're cheap). There's neither market pull nor push!

China has been aware of this out-sourced dependency risk for years, hence its drive for national self-sufficiency in chip production, but any fast-follower catch-up strategy is notoriously hard to achieve.
As a benchmark, it took TSMC over twenty-five years to come close to manufacturing parity with the best-in-class manufacturers, and only in the past five has it moved into pole position, yet it is, without doubt, the best chip firm in the world. If it took TSMC that long to catch up, what chance does anyone else have? Hence, even before the US-imposed sanctions, China had made only modest progress.

But, as the Economist points out, the Taiwan conundrum represents unfinished business from the 1949 war, when the defeated Nationalist regime fled into exile in Taiwan. Whether President Xi fulfils China's pledge to bring the "23rd Province of China" under Communist Party control is more a matter of when than if, with D-Day shaped mostly by the judgement call of whether America would (or could) stop him.

The big question is America's ability to deter such an invasion, but as America's starving Huawei of chips has shown, invasion today no longer entails tanks and troops on the ground, or the streets of Taipei scorched by fire and stained with blood; simply cutting off the electricity and shutting down TSMC's factories is all it would take to bring America and the rest of the western world to their knees.

For the hawks in China, what better time to do that than now, whilst the non-China world is still struggling with the Covid-19 pandemic, US democracy and government have been battered by a brutal and divisive presidential election, the world is wrestling with a global chip shortage, and there is no global consensus on whether Taiwan's independence is worth angering China, especially for countries where China is their largest, or a crucial, trade partner.

Taiwan's return to the Communist fold is not just a sacred national mission; it would also signal that American global leadership had come to an end. The only deterrent is China's fear that it cannot complete the task at a bearable cost. Once that fear is resolved, there is little doubt China will act and, from a chip-supply perspective, there will be nothing the rest of the world can do. As the automotive industry has realized, there is no Plan B.

https://www.futurehorizons.com/


Accelerating AI-Defined Cars
by Manouchehr Rafie on 02-28-2021 at 10:00 am

Convergence of Edge Computing, Machine Vision and 5G-Connected Vehicles

Today's societies are becoming ever more multimedia-centric, data-dependent, and automated. Autonomous systems are hitting our roads, oceans, and air space. Automation, analysis, and intelligence are moving beyond humans to "machine-specific" applications. Computer vision and video for machines will play a significant role in our future digital world. Millions of smart sensors will be embedded into cars, smart cities, smart homes, and warehouses using artificial intelligence. In addition, 5G technology will provide the data highways of a fully connected intelligent world, promising to connect everything from people to machines and even robotic agents; the demands will be daunting.

The automotive industry has been a major economic sector for over a century, and it is heading towards autonomous and connected vehicles. Vehicles are becoming ever more intelligent and less reliant on human operation. Vehicle-to-vehicle (V2V) and connected vehicle-to-everything (V2X) communications, where information from sensors and other sources travels via high-bandwidth, low-latency, and high-reliability links, are paving the way to fully autonomous driving.

The main compelling factor behind autonomous driving is the reduction of fatalities and accidents. Given that more than 90% of all car accidents are caused by human failures, self-driving cars will play a crucial role in accomplishing the automotive industry's ambitious vision of "zero accidents", "zero emissions", and "zero congestion". The only obstacle is that vehicles must possess the ability to see, think, learn and navigate a broad range of driving scenarios.

The market for automotive AI hardware, software, and services will reach $26.5 billion by 2025, up from $1.2 billion in 2017, according to a recent forecast from Tractica. This includes machine learning, deep learning, NLP, computer vision, machine reasoning, and strong AI. Fully autonomous cars could represent up to 15% of passenger vehicles sold worldwide by 2030, with that number rising to 80% by 2040, depending on factors such as regulatory challenges, consumer acceptance, and safety records, according to a McKinsey report. Autonomous driving is currently a relatively nascent market, and many of its benefits will not be fully realized until the market expands.

Figure 1 – Automotive AI market forecast for the period of 2017 through 2025

AI-Defined Vehicles

The fully autonomous driving experience is enabled by a complex network of sensors and cameras that recreate the external environment for the machines. Autonomous vehicles process the information collected by cameras, LiDAR, radar, and ultrasonic sensors to tell the car about its distance to surrounding objects, curbs and lane markings, as well as visual information such as traffic signals and pedestrians. Meanwhile, we are witnessing the growing intelligence of vehicles and mobile edge computing, driven by recent advancements in embedded systems, navigation, sensors, visual data, and big data analytics.

It started with Advanced Driver Assistance Systems (ADAS), including emergency braking, backup cameras, adaptive cruise control, and self-parking systems. Fully autonomous vehicles are gradually expected to come to fruition following the introduction of the six levels of autonomy defined by the Society of Automotive Engineers (SAE), as shown in Figure 2. These levels range from no automation, through conditional automation (human in the loop), to fully automated cars.
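For reference, the six SAE levels just mentioned can be written down as a simple enumeration. The level names below are paraphrased from the public SAE J3016 taxonomy, and the ADAS grouping anticipates the Level 1–2 mapping discussed next.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (names paraphrased)."""
    NO_AUTOMATION          = 0   # human driver does everything
    DRIVER_ASSISTANCE      = 1   # e.g. adaptive cruise control
    PARTIAL_AUTOMATION     = 2   # combined steering and speed control, driver supervises
    CONDITIONAL_AUTOMATION = 3   # system drives, human must take over on request
    HIGH_AUTOMATION        = 4   # no driver needed within a defined domain (e.g. highways)
    FULL_AUTOMATION        = 5   # no driver needed under any conditions

# ADAS features sit at Levels 1-2; "self-driving" generally refers to Levels 4-5.
ADAS_LEVELS = {SAELevel.DRIVER_ASSISTANCE, SAELevel.PARTIAL_AUTOMATION}
```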
With increasing levels of automation, the vehicle takes over more functions from the driver. ADAS mainly belongs to Level 1 and Level 2 of automation. Automotive manufacturers and technology companies, such as Waymo, Uber, Tesla, and a number of tier-1 automakers, are investing heavily in higher levels of driving automation.

Figure 2 – Levels defined by SAE for autonomous vehicles

With the rapid growth of innovation in AI technology, there is broader acceptance of Level 4 solutions, targeting vehicles that mostly operate under highway conditions. Although the barrier between Level 3 and Level 4 is mainly regulatory at this time, the leap is much greater between Levels 4 and 5. The latter requires the technological capability to navigate complex routes and unforeseen circumstances that currently necessitate human intelligence and oversight. As the automation level increases, there will be a need for more sensors, processing power, memory, power efficiency, and networking bandwidth management. Figure 3 shows the various sensors required for self-driving cars.

Figure 3 – Sensors (camera, LiDAR, radar, ultrasound) required at each autonomous vehicle level

The convergence of deep learning, edge computing, and the Internet of Vehicles is driven by recent advancements in automotive AI and vehicular communications. Another enabling technology for machine-oriented video processing and coding is the emerging MPEG Video Coding for Machines (MPEG-VCM) standard. Two specific technologies are investigated for VCM:
  • Efficient compression of video/images
  • A shared feature-extraction backbone (a conceptual sketch of this "send features, not pixels" idea follows this list)
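The sketch below illustrates the idea behind these two items: an edge camera runs a shared feature-extraction backbone and transmits compressed features instead of a compressed frame. It is a conceptual toy, not the MPEG-VCM codec; the pooling "backbone", tensor shapes and zlib compression are all assumptions made purely for illustration.

```python
import zlib
import numpy as np

def shared_backbone(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a shared CNN backbone: returns a small feature map, not pixels."""
    h, w, c = frame.shape
    # Hypothetical backbone: average-pool the frame down to a 16x16xC tensor.
    return frame.reshape(16, h // 16, 16, w // 16, c).mean(axis=(1, 3)).astype(np.float32)

def encode(payload: np.ndarray) -> bytes:
    """Generic entropy-coding stand-in (zlib) for either pixels or features."""
    return zlib.compress(payload.tobytes(), level=9)

frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)   # one camera frame
video_payload   = encode(frame)                    # "compress, then analyze in the cloud"
feature_payload = encode(shared_backbone(frame))   # "analyze at the edge, then compress"
print(len(video_payload), len(feature_payload))    # the feature payload is far smaller
```

In a distributed or hybrid VCM-V2X architecture, this is what lets intelligent cameras share analysis results over bandwidth-constrained links rather than streaming full video.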
Powerful AI accelerators for inferencing at the edge, standards-based algorithms for video compression and analysis for machines (MPEG-VCM), and 5G-connected vehicles (V2X) play a crucial role in enabling the full development of autonomous vehicles. The 5G-V2X and emerging MPEG-VCM standards enable the industry to work towards harmonized international standards. The establishment of such harmonized regulations and international standards will be critical to the global markets of future intelligent transportation and the AI automotive industry.

There are a number of possible joint VCM-V2X architectures for the future autonomous vehicle (AV) industry. Depending on the requirements of a given AV infrastructure scenario, these can be centralized, distributed, or hybrid VCM-V2X architectures, as shown in Figure 4. Currently, most connected-car automakers are experimenting with the centralized architecture and low-cost cameras. However, as cameras become more intelligent, distributed and hybrid architectures can become more attractive thanks to their scalability, flexibility, and resource-sharing capabilities. The emerging MPEG-VCM standard also provides the capability of transporting compressed extracted features rather than sending compressed video/images between vehicles.

Figure 4 – Centralized, distributed, and hybrid VCM-V2X architectures

Gyrfalcon Technology Inc. is at the forefront of these innovations, using the power of AI and deep learning to deliver a breakthrough solution for AI-powered cameras and autonomous vehicles: unmatched performance, power efficiency, and scalability for accelerating AI inferencing at the device, edge, and cloud level. The convergence of 5G, edge computing, computer vision, deep learning, and Video Coding for Machines (VCM) technologies will be key to fully autonomous vehicles. Standard, interoperable technologies such as V2X, the emerging MPEG-VCM standard, and powerful edge and onboard inferencing accelerator chips bring low-latency, energy-efficiency, low-cost, and safety benefits to the demanding requirements of the AI automotive industry.

About Manouchehr Rafie, Ph.D.

Dr. Rafie is the Vice President of Advanced Technologies at Gyrfalcon Technology Inc. (GTI), where he is driving the company's advanced technologies in the convergence of deep learning, AI edge computing, and visual data analysis. He is also serving as co-chair of the emerging MPEG Video Coding for Machines (VCM) standard. Prior to joining GTI, Dr. Rafie held executive and senior technical roles at various startups and large companies, including VP of Access Products at Exalt Wireless, Group Director and fellow-track positions at Cadence Design Systems, and adjunct professor at UC Berkeley. He has over 90 publications and has served as chairman, lecturer, and editor for a number of technical conferences and professional associations worldwide.
NetApp Enables Secure B2B Data Sharing for the Semiconductor Industry
by Mike Gianfagna on 03-03-2021 at 10:00 am


Data sharing between semiconductor companies and EDA software companies has been critical to the advancement of the industry. But it has had security issues, and an associated loss of trust, along the way. For instance, there have been cases of customer designs shared as a testcase finding their way into a product demo without the consent of the customer. How did this happen? There was no malicious intent. The primary cause was that the shared data was not controlled within a secure vault and there was no tracking of how the data was used and by whom. There was also no clear way to return the data that was sent or to ensure that all instances of the data were deleted. This has led to major B2B trust issues, which in turn lead to longer bug-fix cycles because data is not easily shared. A new approach is needed. Read on to see how NetApp is working to improve secure B2B data sharing for the semiconductor industry.

Why the Industry Needs Secure and Trusted B2B Data Sharing

As I have shared in previous articles, data is the ever-growing lifeblood of semiconductor design. Double-digit data growth between the 7, 5 and 3nm design nodes is straining design infrastructure. At the same time, the value of that data is increasing. Data that was once deleted after a successful or failed analysis is now being saved so AI/ML models can train on and learn from past design runs. Data shared for the joint development of AI/ML models is just one example of the importance of robust, secure B2B data sharing solutions.

Let’s examine some of the key reasons for B2B data sharing in the semiconductor industry. These items won’t necessarily make big headlines, but they represent a crucial process to advance chip design. The following points highlight some scenarios of interest.

EDA vendor debug

EDA vendors will always require access to customer designs for software debug; this need will never go away. Concerns around sharing testcase data result in delays in gaining access to the data, creating longer debug and resolution times. I have even heard stories of EDA teams trying to guess the cause of a problem when access to data was not an option. Rapid access to data is critical for fast resolution of issues and for meeting time-to-market goals.

AI development

EDA tools are rapidly building in AI-enabled solutions. Machine learning (ML) and deep learning (DL) can reduce algorithm complexity, increase design efficiency and improve design quality. Training complex ML and DL models requires massive amounts of data, and in most cases it is data EDA vendors don't have. The data EDA vendors need is their customers' design data. Secure data sharing is critical to the rapid advancement of AI in the semiconductor industry. The challenge is that the volume and proprietary nature of the data further complicate sharing.

NDA compliance

We have an NDA in place, so we're covered, right? Most data-sharing NDAs require that data be returned and/or deleted once it is no longer needed. Verifying that all copies of sensitive data were fully deleted, in compliance with an NDA, is difficult at best.

Collaboration

Modern chip design is a team sport.  IP providers, library vendors, tool vendors and design services teams all work together to meet critical design timelines and design goals.  Secure data sharing to facilitate collaboration is critical for this process to work.

Can we change the way we think about secure data sharing?

Let’s talk about the roles and responsibilities of Data Owners and Data Users. 

  • Data Owners should be able to share data into a Data User's secure, walled-off datacenter while still retaining complete visibility and control over WHO can access the data and WHAT systems can access it. There should be visibility into how often the data is accessed, with the ability to highlight anomalous data access patterns. Data Owners should be able to monitor the security attributes of the systems that have access to the data.

Data Owners should also be able to securely revoke (or even securely wipe) the data from the system, including removing key access. Data Owners should not find their data sitting unused on a Data User's system, or still there after the terms of use have expired or the data has turned cold. Data Owners should have full visibility of their data at any time, even when it is in the Data User's datacenter or cloud environment.

  • Data Users should be able to use or share data in their own secure, walled-off datacenter, where they have access to their own resources and tools. They should be able to access the data for approved processes such as testcase debug, AI model development and design collaboration. Data sets are often so large that it is impractical to expect Data Owners to host the compute and storage resources for development, so it is often critical to have access to the data in the Data User's own datacenter. (A minimal sketch of this owner-controlled sharing model follows this list.)
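The split of responsibilities above can be made concrete with a small sketch. This is a generic illustration of an owner-controlled sharing grant, not NetApp's API; all class, field and host names are hypothetical.

```python
# Minimal sketch of the Data Owner / Data User model described above.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class SharingGrant:
    owner: str
    user: str
    allowed_accounts: set          # WHO may read the data
    allowed_systems: set           # WHAT systems may mount it
    expires: datetime              # terms of use
    revoked: bool = False
    access_log: list = field(default_factory=list)

    def check_access(self, account: str, system: str) -> bool:
        """Owner-defined policy evaluated on every access attempt, with full audit trail."""
        ok = (not self.revoked
              and datetime.utcnow() < self.expires
              and account in self.allowed_accounts
              and system in self.allowed_systems)
        self.access_log.append((datetime.utcnow(), account, system, ok))
        return ok

    def revoke(self) -> None:
        """Owner pulls the data back: all future access checks fail immediately."""
        self.revoked = True

grant = SharingGrant("chip_co", "eda_vendor",
                     allowed_accounts={"debug_team"},
                     allowed_systems={"vault-host-01"},
                     expires=datetime.utcnow() + timedelta(days=90))
print(grant.check_access("debug_team", "vault-host-01"))   # True: within policy
grant.revoke()
print(grant.check_access("debug_team", "vault-host-01"))   # False: access revoked
```

The key design point is that the grant, its expiry and its audit log belong to the Data Owner even though the data physically sits in the Data User's environment.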

The NetApp Approach

NetApp's ONTAP storage operating system is used by all of the top semiconductor and EDA companies. ONTAP is also used today for data sharing in all of the three-letter-acronym government facilities. This means that secure B2B data sharing is most likely already a possibility. Because NetApp's ONTAP storage operating system runs in all of the commercial clouds, B2B data sharing can be done datacenter-to-datacenter, datacenter-to-cloud or cloud-to-cloud, all with the same controls and monitoring. You can learn more about ONTAP from this prior post.

You can also get a broad view of NetApp’s approach to security here. There is a very useful technical report available from NetApp. A link is coming.

First, let’s take a look at some of the capabilities that allow NetApp to enable secure B2B data sharing for the semiconductor industry.

  • Support for Zero-Trust security architectures
  • Storage Virtual Machine (SVM) – this enables data to be walled off on a shared storage system, effectively creating a secure multi-tenant data storage environment. SVMs provide role-based access control, giving Data Owners controlled access to monitor the storage environment, even inside the Data User's datacenter, for real-time auditing
  • Secure data transfer via SnapMirror or FlexCache means no more downloading and untar'ing data. Data is automatically transferred from one ONTAP filer to another, with data encryption both at rest and in flight. An added benefit is that the data is always up to date in the case of rapidly changing data sets
  • Data encryption is supported on both encrypted and unencrypted drives, with an external key manager
  • Secure data shredding is supported
  • NFS and SMB security with Kerberos is supported
  • Military-grade data security credentials are supported; ONTAP is EAL 2+ and FIPS 140-2 certified
  • File-level granular event monitoring, with integration into security information and event management (SIEM) partners, is available and supports:
    • Log management and compliance reporting
    • Real-time monitoring and event management. This provides visibility into WHO is accessing the data, WHAT systems are accessing the data and how often the data is being accessed.
  • Integration into third party security tools like:
    • Splunk-based system monitoring to report changes to the system
  • Cloud Secure technology also monitors for anomalous access patterns, alerting Data Owners to suspicious activity (a simple illustration of this kind of check follows this list)
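As a rough illustration of what "highlighting anomalous access patterns" can mean in practice, the toy monitor below flags a user whose daily file-read count jumps far above their own baseline. It is generic statistics written for illustration only, not how Cloud Secure is implemented; the account name and thresholds are assumptions.

```python
# Toy anomalous-access monitor: flag a user whose activity jumps well above baseline.
from collections import defaultdict
from statistics import mean, pstdev

class AccessMonitor:
    def __init__(self, threshold_sigmas: float = 3.0):
        self.history = defaultdict(list)      # user -> list of daily file-read counts
        self.threshold = threshold_sigmas

    def record_day(self, user: str, files_read: int) -> bool:
        """Store today's count and return True if it looks anomalous vs. the baseline."""
        past = self.history[user]
        anomalous = False
        if len(past) >= 7:                    # need a week of baseline first
            mu, sigma = mean(past), pstdev(past)
            anomalous = files_read > mu + self.threshold * max(sigma, 1.0)
        past.append(files_read)
        return anomalous

monitor = AccessMonitor()
for day in range(10):
    monitor.record_day("eda_debug_svc", 40 + day % 3)    # normal debug activity
print(monitor.record_day("eda_debug_svc", 5000))          # sudden bulk read -> True
```

A real deployment would feed events like these into the SIEM integrations listed above rather than a standalone script.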

The B2B Data Owner has the ability to securely transmit data, revoke data, monitor data usage and access patterns, and monitor and alert when the secure Zero-Trust infrastructure has been changed.

I’ve only scratched the surface here. NetApp offers a lot of capability to create a trusted, secure environment. NetApp is working to improve secure B2B data sharing for the semiconductor industry.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.


NetApp Enables Secure B2B Data Sharing for the Semiconductor Industry
by Mike Gianfagna on 03-03-2021 at 10:00 am

Data sharing between semiconductor companies and EDA software companies has been critical to the advancement of the industry.  But it’s had security issues and associated loss of trust along the way.  For instance, there have been cases of customer designs shared as a testcase finding their way into a product demo without the … Read More


Semiconductor Shortage – No Quick Fix – Years of neglect & financial hills to climb
by Robert Maire on 03-03-2021 at 8:00 am

– Semi Situation Stems from long term systemic neglect
– Will require much more money & time than thought
– Fundamental change is needed to offset the financial bias
– Auto industry is just the hint of a much larger problem

Like recognizing global warming when the water is up to your neck

The problem… Read More


TECHTALK: Hierarchical PI Analysis of Large Designs with Voltus Solution
by Bernard Murphy on 03-03-2021 at 6:00 am

Power integrity analysis in large chip designs is especially challenging thanks to the huge dynamic range the analysis must span. At one end, EM estimation and IR drop through interconnect and advanced transistor structures require circuit-level insight—very fine-grained insight but across a huge design. At the other, activity… Read More


USB4 Makes Interfacing Easy, But is Hard to Implement
by Tom Simon on 03-02-2021 at 10:00 am

USB made its big splash by unifying numerous connections into a single cable and interface. At the time there were keyboard ports, mouse ports, printer ports and many others. Over the years USB has delivered improved performance and greater functionality. However, as serial interfaces became more popular and started being used… Read More


Features of Resistive RAM Compute-in-Memory Macros
by Tom Dillinger on 03-02-2021 at 8:00 am

Resistive RAM (ReRAM) technology has emerged as an attractive alternative to embedded flash memory storage at advanced nodes.  Indeed, multiple foundries are offering ReRAM IP arrays at 40nm nodes, and below.

ReRAM has very attractive characteristics, with one significant limitation:

  • nonvolatile
  • long retention time
  • extremely
Read More

It's Energy vs. Power that Matters
by Lauri Koskinen on 03-02-2021 at 6:00 am

In tiny devices, such as true wireless headphones, the battery life of the device is usually determined by the chips that execute the device’s functions. Professor Jan Rabaey of UC Berkeley, who wrote the book on low power, also coined the term “energy frugal” a number of years ago, and this term is even more valid today with the proliferation… Read More


Webinar: Achronix and Vorago Deliver Innovation to Address Rad-Hard and Trusted SoC Design
by Mike Gianfagna on 03-01-2021 at 10:00 am

Radiation hardening is admittedly not a challenge every SoC design team faces. Methods to address this challenge typically involve a new process technology, a new library or both. Trusted, secure design is something more design teams worry about and that number is growing as our interconnected world creates new and significant… Read More


TSMC ISSCC 2021 Keynote Discussion
by Daniel Nenni on 03-01-2021 at 6:00 am

Now that semiconductor conferences are virtual there are better speakers since they can prerecord and we have the extra time to do a better job of coverage. Even when conferences go live again I think they will also be virtual (hybrid) so our in depth coverage will continue.

ISSCC is one of the conferences we covered live since it’s… Read More


The Chip Market / China Conundrum
by Malcolm Penn on 02-28-2021 at 2:00 pm

In its February 20, 2021 edition, the Economist published an article entitled “How to kill a democracy; China faces fateful choices, especially involving Taiwan”.  It went on to quote “To many Chinese, the island’s conquest is a sacred national mission” as well as a by-line “America is losing its ability to deter a Chinese attack… Read More


Accelerating AI-Defined Cars
by Manouchehr Rafie on 02-28-2021 at 10:00 am

Convergence of Edge Computing, Machine Vision and 5G-Connected Vehicles

Today’s societies are becoming ever more multimedia-centric, data-dependent, and automated. Autonomous systems are hitting our roads, oceans, and air space. Automation, analysis, and intelligence is moving beyond humans to “machine-specific” … Read More