Ansys Multiphysics Platform Tackles Power Management ICs

Ansys addresses complex Multiphysics simulation and analysis tasks, from device to chip to package and system. When I was at eSilicon we did a lot of work on 2.5D packaging and I can tell you tools from Ansys were a critical enabler to get the chip, package and system to all work correctly.

Ansys recently published an Application Brief on how they address analysis of power management ICs. The tool highlighted is Ansys Totem, a foundry-certified transistor-level power noise and reliability platform for power integrity analysis on analog mixed-signal IP and full custom designs. I had the opportunity to speak with Karthik Srinivasan, Sr. Corporate Application Engineer Manager, Analog & Mixed Signal and Marc Swinnen, Director of Product Marketing at Ansys.

I began by probing the genealogy of Totem. Did it come from an acquisition? Interestingly, Totem is a completely organic tool that builds on the Multiphysics platform at Ansys that powers other tools such as the popular Ansys Redhawk.  Organic development like this is noteworthy – it speaks to the breadth and depth of the underlying infrastructure. As Totem is a transistor-level tool, it delivers Spice-like accuracy according to the Application Brief. I probed this a bit with Karthik. Was Totem actually running Spice, and if so, how do you get an answer for a large network in less than geologic time?

Totem changes the modeling paradigm for the network to deliver results much faster than traditional Spice. All non-linear elements are converted to a linear model. All transistors are modeled as current sources and capacitors. These models are then connected to the parasitic network of the power grid. An IR-drop and electromigration analysis is then performed. This cuts the computational complexity of the problem down quite a bit. Totem provides targeted accuracy for the analysis of interest, typically within 5-10 mV of Spice, even for advanced technology nodes.
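
As a toy illustration of that linearized approach (a minimal sketch with invented numbers, not how Totem is actually implemented), consider a single series rail segment where every device has been replaced by a constant current source: the IR drop at each node is simply the current flowing through each rail segment times that segment's resistance.

#include <iostream>
#include <vector>

// Toy IR-drop calculation on one series power-rail segment.
// Each tap models a transistor that has been linearized into a constant
// current source; resistances are the rail segments between taps.
// All values below are invented for illustration only.
int main() {
    const double vdd = 0.75;                               // supply at the pad (V)
    std::vector<double> segment_ohms = {0.05, 0.05, 0.08, 0.10};
    std::vector<double> tap_amps     = {0.020, 0.010, 0.015, 0.005};

    double node_v = vdd;
    for (size_t i = 0; i < segment_ohms.size(); ++i) {
        // Current through segment i is everything drawn downstream of it.
        double downstream = 0.0;
        for (size_t j = i; j < tap_amps.size(); ++j) downstream += tap_amps[j];

        node_v -= downstream * segment_ohms[i];            // V = I * R per segment
        std::cout << "node " << i << ": " << node_v << " V ("
                  << (vdd - node_v) * 1000.0 << " mV drop)" << std::endl;
    }
    return 0;
}

A real power grid is a two-dimensional mesh solved as a large sparse linear system rather than a simple chain, but the linearization step is what keeps that system linear and therefore fast to solve.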

We discussed other applications of this approach. Power management ICs contain very wide power rails to handle the large currents involved in their operation. These structures are typically analyzed with a finite element solver, resulting in very long run times, typically multiple days. Using the Totem approach, a result with similar accuracy can typically be delivered 5-6X faster.

Using the Ansys Multiphysics platform, analysis can be performed from transistor and cell library level all the way to the system level. One platform, one source of models. IP vendors are also developing and delivering Totem macro models along with the IP to facilitate this kind of multi-level analysis. Marc pointed out that custom macro models are a key enabling technology to support this kind of transistor to system analysis. One first does the detailed analysis in Totem and then creates a macro model of the result to drive Redhawk.

The Ansys Application Brief goes into a lot more detail about the analysis capabilities of Totem. You can access the Application Brief here. To whet your appetite, here are some of the topics covered:

  • Advanced Analysis: Power FETs, RDSON & sensitivity, guard ring weakness checks, transient power
  • Early Analysis: device R maps, interconnect R maps, guard ring weakness maps
  • PDN Noise Sign-Off: power, DvD, substrate noise

With DAC approaching, you can visit the Ansys virtual booth. Registration for DAC can be found here. There’s more to see from Ansys at DAC.  The company has an incredible 25 papers accepted in the designer track (that’s not a misprint). Four of them focus on Totem. I also hear that Ansys is planning a special semiconductor-focused virtual event in the Fall. Watch your inbox and SemiWiki for more information on that as it becomes available.

Hierarchical CDC analysis is possible, with the right tools

Back in my Atrenta days (before mid-2015), we were already running into a lot of very large SoC-level designs – a billion gates or more. At those sizes, full-chip verification of any kind becomes extremely challenging. Memory demand and run-times explode, and verification costs also explode, since these runs require access to very expensive servers in-house or in the cloud. Verifying hierarchically seems like an obvious solution, but it presents new problems in abstracting blocks for the analysis. Immediate ideas for abstraction invariably hide global detail which is critical to accuracy and dependability for sign-off. Implementing hierarchical CDC (clock domain crossing) analysis provides a good example.

The need for hierarchical CDC

The factors that make for a CDC problem don’t neatly bound themselves inside design hierarchy blocks. Clocks run all over an SoC and many domain crossings fall between functional blocks. You might perhaps analyze two or more such blocks together, but you still have to abstract the rest, adding unknown inaccuracies to your analysis. Even this solution may fail for more extended problems like re-convergence or glitch-prone logic. Add in multiple power domains and reset domains, and the range of combinations you may need to test can become overwhelming. Unfortunately, clever user hacks can’t get around these issues. The unavoidable answer is to develop much better abstractions which can capture that global detail, detail that is necessary for CDC analysis but not captured in conventional constraints or other design data. That direction started at Atrenta and continues to evolve at Synopsys through the concept of sign-off abstract models (SAMs). A SAM is a reduced and annotated model, much smaller than the full model, but it still contains enough design and constraint detail to support an accurate CDC analysis at the next level up.

Hierarchical analysis

The analysis methodology, which can extend through multiple levels of hierarchy, will typically start at a block/IP level where an engineer will first fully validate CDC correctness, then generate a SAM through an automatic step. These models strip out internal logic except for logic at the boundaries, where that logic has relevance to CDC. The SAM will also include the assumptions made in the block-level analysis. At the next level up, CDC analysis first checks consistency between the assumptions at that level (e.g. sync/async relations between clocks) and those block-level assumptions. When you have fixed any consistency problems at one level, you can run CDC analysis at the next level up. Fix any problems there, generate a SAM for that level, and so on, up the hierarchy.
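
To make the consistency step concrete, here is a hypothetical sketch (invented data structures, not the actual SAM format or any VC SpyGlass API) of checking the clock-relation assumptions recorded in block-level abstracts against the clock specification declared at the next level up:

#include <iostream>
#include <set>
#include <string>
#include <utility>
#include <vector>

// A block-level assumption recorded in the abstract model: the block was
// signed off believing these two clocks are synchronous (or asynchronous).
struct ClockAssumption {
    std::string clk_a;
    std::string clk_b;
    bool assumed_synchronous;
};

// Hypothetical abstract ("SAM"-like) view of one already-verified block.
struct BlockAbstract {
    std::string name;
    std::vector<ClockAssumption> assumptions;
};

// Top-level truth: clock pairs that are actually synchronous,
// stored with the two names in sorted order.
using SyncSpec = std::set<std::pair<std::string, std::string>>;

bool is_synchronous(const SyncSpec& spec, std::string a, std::string b) {
    if (a > b) std::swap(a, b);
    return spec.count({a, b}) != 0;
}

// Flag every block assumption that contradicts the top-level clock spec;
// such a block must be re-analyzed before the next level up can be trusted.
void check_consistency(const std::vector<BlockAbstract>& blocks,
                       const SyncSpec& top_level) {
    for (const auto& blk : blocks) {
        for (const auto& a : blk.assumptions) {
            bool actual = is_synchronous(top_level, a.clk_a, a.clk_b);
            if (actual != a.assumed_synchronous) {
                std::cout << "MISMATCH in " << blk.name << ": " << a.clk_a
                          << " / " << a.clk_b << " assumed "
                          << (a.assumed_synchronous ? "sync" : "async")
                          << " but top level says "
                          << (actual ? "sync" : "async") << "\n";
            }
        }
    }
}

int main() {
    SyncSpec top = {{"clk_bus", "clk_core"}};   // only these two are synchronous
    std::vector<BlockAbstract> blocks = {
        {"dma_ctrl", {{"clk_core", "clk_bus", true}}},   // consistent
        {"usb_phy",  {{"clk_usb", "clk_core", true}}},   // contradicts top level
    };
    check_consistency(blocks, top);
    return 0;
}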

Hierarchy simplifies CDC review

There’s another obvious benefit to this approach: CDC noise becomes much more manageable. There is no need to wade through gigabytes of full-chip reports to find potential problems; you can now work through reasonably sized reports at each level. Synopsys already uses lots of clever techniques to reduce noise further within a level. The secret sauce in this process is the detail captured in the SAM model, in how it is generated, and in the consistency checks between levels. These ensure that hierarchical analysis is entirely consistent with a full flat analysis, while subtracting the detail that would have been reported inside whatever you have abstracted. You can still run a final signoff before handoff, to be absolutely certain; hierarchical CDC helps you be a lot more efficient about how you get there. You can learn more about the VC SpyGlass hierarchical CDC analysis flow HERE.

SystemC Methodology for Virtual Prototype at DVCon USA

DVCon was the first EDA conference in our industry impacted by the pandemic and travel restrictions in March of this year, and the organizers did a superb job of adjusting the schedule. I was able to review a DVCon tutorial called "Defining a SystemC Methodology for your Company", given by Swaminathan Ramachandran of CircuitSutra. His company provides ESL design IP and services, and their main office is in India.

Why SystemC

The SystemC language goes all the way back to a DAC 1997 paper, and the first draft version was released in 1999. SystemC is defined by Accellera and even has an IEEE standard, 1666-2011. The Accellera SystemC/TLM (Transaction Level Modeling) 2.0 standard provides a solid base to start building, integrating and deploying models for use cases in various domains. The ability to model a virtual platform of both SoC hardware and software concurrently using SystemC is the big driver. SystemC is a library built in C++, which has a rich and robust ecosystem of libraries and development tools.

Virtual Prototypes

Virtual prototypes are fast software models of the hardware, typically at a higher level of abstraction, sacrificing cycle accuracy for simulation speed. Virtual platforms based on SystemC have been leading the charge for ‘left-shift’ in the industry. They have had a profound impact in the fields of pre-silicon software development, architecture analysis, verification and validation, and hardware-software co-design and co-verification. SystemC/TLM2.0 has become the de facto standard for development and exchange of IP and SoC models for use in virtual prototypes.

SystemC Methodology for Virtual Prototypes

SystemC, a C++ library, offers the nuts and bolts to model hardware at various abstraction levels. Developing each IP model from scratch with low-level semantics and boilerplate code can be a drain on engineering time and resources, leading to lower productivity and higher chances of introducing bugs.
There is a need for a Boost-like utility library on top of SystemC that provides a rich collection of tool-independent, re-usable modeling components that can be used across many IPs and SoCs. One of the strengths of SystemC, and also its biggest weakness, is its versatility. SystemC allows you to develop models at the RTL level, similar to Verilog / VHDL. It also allows you to develop models at higher abstraction levels which can simulate as fast as real hardware. To effectively deploy SystemC in your projects, just learning the SystemC language is not sufficient; you need to understand the specific modeling techniques that make models suitable for a specific use case. The modeling methodology, or Boost-like library on top of SystemC, for the virtual prototyping use case should provide the re-usable modeling classes and components that encapsulate the modeling techniques required in virtual prototyping. Any model developed using this library will automatically be at a higher abstraction level, fully suitable for virtual prototypes. Virtual prototyping tools from many EDA vendors come with such a library; however, models developed with these become tightly coupled with the tools. Most semiconductor companies working on virtual platform projects end up developing such a library in-house, in a tool-independent fashion.

While defining such a methodology, one should try to identify and leverage recurring patterns in model development. There will be some code sections or features that are similar in all models. Instead of each modeling engineer implementing their own versions of these code sections, it is better to maintain them in a common library to be used by all modeling engineers. In addition, there may be a set of common, re-usable modeling components required while developing models of the various IPs of the same application domain, e.g. audio or video. Every company has to carefully evaluate its needs and come up with requirement specs for these common components. Most of the time, there is a central methodology team that develops and maintains this library and keeps it up to date with the latest standards. This presentation covered a select list of components and features that may be used to build such a high-productivity suite. These may be useful for semiconductor and system companies willing to start virtual prototyping activities.

Over the years the team at CircuitSutra has built up their own SystemC library to accelerate virtual prototype projects. The CircuitSutra Modeling Library (CSTML) has been successfully used in a wide variety of virtual platform projects for over a decade, and has become highly stable over that period of time. Using CSTML as the base for your projects right from the beginning will ensure that your models are compliant with standards and can be integrated with any EDA tool. You may also use it as the base and further customize it to define your own modeling methodology.

Feature List

Some of these library elements are presented here:
  • Register Modeling
  • Smart TLM sockets
  • Configuration
  • Reporting/Logging
  • Model Generator
  • Smart Timer
  • Generic Router
  • Generic Memory
  • Python Integration

Register Modeling

Registers provide the entry point for embedded programmers to configure an IP, and as such are found in almost all IPs. Registers come in all shapes and sizes and are usually described using IPXACT register specifications. Memory-mapped registers are mapped into CPU address maps. Registers may be further composed of bit-fields, each of which may control one or more aspects of an IP and report its status. Register read and write requests are typically handled via a TLM 2.0 target socket. We can marry the TLM 2.0 (smart) target socket to the register library to provide seamless and automatic communication between the two.

Registers and bit-fields have five access types. The bit-field read has three variants and the write has ten variants. The number of permutations and combinations this can offer is mind-boggling, but with a register library, accompanied by code generation, this complexity can be tucked away under a lightweight and consistent API to access registers and bit-fields. Further, array-like access semantics provide syntactic sugar. If we want to associate an action with a register access, we can enable it by registering a pre/post callback with the appropriate register. For example, if the CNTL_BIT0 bit-field is set for an IP, then take some action; this may be implemented by providing a debug post callback. This approach also simplifies code reviews, as the functionality associated with a register access operation is localized, and this code can be kept separate from generated code. The code excerpt below illustrates the pattern:

static const int ADDR_CNTL = 0x104;
// Setup registers and associated bit-fields
// note: generated
void IP::register_setup() {
    // ...
}

// debug-write/post-cb (User written)
void IP::reg_cntl_cb(addr_t addr, value_t val) {
   if (m_reg[addr][CNTL_BIT0]){
       bar();
   }
}

// note: Register IP behavior
IP::IP() {
    register_setup();
    m_reg.attach_cb(ADDR_CNTL, &IP::reg_cntl_cb,
                    REG_OP_DBG_WRITE, REG_CB_POST);
}

Smart TLM Sockets

The Accellera tlm_utils library provides some convenient sockets which simplify modeling TLM2.0 transactions; however, they do not provide support for some commonly used features like Direct Memory Interface (DMI) management in LT modeling and tlm_mm (TLM memory management) in the case of AT transactions. The TLM smart initiator socket provides built-in support for tlm_mm and a DMI manager that is transparent to the end user. The tlm_mm may also be extended to support buffer, byte-enable and tlm_extension memory management. Similarly, the TLM smart target socket provides a memory-mapped registration feature for resources such as registers and internal memory. It also handles gaps in memory maps based on configurable policies, such as ignoring them or raising an exception.

Configuration

In a virtual platform you can quickly change any memory size or cache size, set policies and control debug levels using configuration. A library handles the configuration aspects; it reads in different file formats and then configures all of the IPs used in an SoC. A configuration database provides a file-format-agnostic (XML, JSON, Lua, etc.) way to store and retrieve configuration values, and this can be leveraged by SystemC/CCI for configuring the system. It can support both static (config-file based) and dynamic (tool based) configuration updates. Using a broker design pattern, it can also help to limit the visibility of certain parameters as desired by the IP or integration engineer.

Reporting/Logging

SystemC provides the hooks, albeit basic ones, to support reporting with log-source capture, multiple log levels, associating actions with logging, and so on. What is missing is a convenience class that can simplify log management at the IP and integration level, which is provided by the CST Log module. At the IP level we need capabilities to log not just (char*) strings, but also integers, registers, internal states, etc. At the integration level we need capabilities to filter out messages based on the log source(s) in addition to log levels. For non-interactive runs, and for debugging, we may want to capture logs in files. Tool configuration is also simplified if it has access to a centralized logging module.

Smart Timer

It is well known that introducing clocks, especially in LT simulation, can drastically slow down the simulation. While developing models for virtual platforms, the clock is generally abstracted away and the timing functionality is implemented in a loosely timed fashion. Every SoC has one or more timer IPs, so developing LT models of these timers can be very tedious and error prone. CSTML has a generic ‘Smart Timer’ that can be mapped to any of your timer IP needs with either loosely timed or clocked styles. This class is highly configurable and provides support for most of the commonly required timer features: up or down counting, pre-scaling, enable or pause control, and cyclic or one-shot operation.
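
To illustrate the loosely timed style (a minimal sketch of the general pattern only; the class names and interface here are invented and are not the CSTML Smart Timer API), a one-shot timer can avoid modeling the clock entirely and instead schedule a single event at the time the counter would expire:

#include <systemc>
#include <iostream>

// Minimal loosely timed, one-shot timer: instead of toggling a clock and
// counting edges, the expiry time is computed up front and a single event
// is scheduled. Names and structure are illustrative only.
struct LooselyTimedTimer : sc_core::sc_module {
    sc_core::sc_event expired;

    SC_CTOR(LooselyTimedTimer) {}

    // Compute the expiry time from the programmed count and the nominal
    // clock period, then notify one event at that offset (no per-cycle work).
    void start(unsigned count, sc_core::sc_time period) {
        expired.notify(count * period);
    }
};

struct Testbench : sc_core::sc_module {
    LooselyTimedTimer timer;

    SC_CTOR(Testbench) : timer("timer") {
        SC_THREAD(run);
    }

    void run() {
        // Program a 1000-cycle one-shot at a nominal 10 ns period.
        timer.start(1000, sc_core::sc_time(10, sc_core::SC_NS));
        wait(timer.expired);
        std::cout << "timer expired at " << sc_core::sc_time_stamp() << std::endl;
    }
};

int sc_main(int, char*[]) {
    Testbench tb("tb");
    sc_core::sc_start();
    return 0;
}

A clocked variant of the same model would count edges and create far more simulation activity for the same behavior, which is exactly the overhead the loosely timed style avoids.
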
Model Generator

Given an IP specification, there is a fair amount of boilerplate code needed to implement registers, internal memory, interface handling, and configuration. Manually transcribing the specification document into code can be time consuming and can introduce bugs in the process. Machine-readable specifications like IPXACT, custom XMLs, and Excel sheets are becoming common. The Model Generator (Python based) accepts file inputs (in different formats) to describe any IP block, and then it automatically creates the boilerplate code needed for:
  1. IP scaffolding including interfaces, registers, any internal memories, tlm-socket to register/memory binding, configuration params
    1. Doxygen comments provide contextual info drawn from the Inputs.
    2. User-code to be written is generated in separate sources, so that the IP code can be regenerated, if required, without loss of user customizations.
  2. Unit testbench (UT) with complementary interfaces, sanity test cases for testing memory map, registers, configurations.
  3. A Top module to instantiate and connect IP and UT.
  4. Configuration file(s) for IP/UT and Top.
  5. Build scripts (Cmake based) for building and testing IP.
  6. README.md to provide basic information on the IP, how to build, test.

You don’t have to start with a blank screen and hand-code all of the low-level details when you use the Model Generator approach. It even creates code that conforms to your own style guidelines for consistency.

Generic Router

Once we have a set of master and slave IPs, the next logical step is to connect them together based on the system memory map. This is a common IP block required in a system, and CircuitSutra has made their generic router configurable to enforce your routing policy; it is DMI aware and follows your security policies. All of the options are configurable with an external file. The generic router provides a way to configure N initiators and M targets. The target memory map is configurable for each initiator, and the router can optionally base-adjust the outgoing transaction address. Error handling of unmapped regions can also be configured. Alternate routing policies like round-robin, fixed routing and priority routing can also be implemented. The router can also be made DMI aware, handling not only the normal/debug transport APIs, but also the DMI forward transport interface with base adjustment and the invalidate-DMI backward interface. It handles both LT and AT style TLM requests. Logging the configured memory maps and time-stamped transactions is very helpful during debugging.
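
The core of any such router is the address decode step. The sketch below is a simplified, hypothetical illustration (not the CSTML generic router and not tied to any TLM socket code): a per-initiator memory map is searched for a matching region, the outgoing address is base-adjusted, and unmapped accesses are reported according to policy.

#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

// One mapped region in an initiator's view of the system memory map.
struct Region {
    uint64_t base;      // first address covered by the region
    uint64_t size;      // size of the region in bytes
    int      target;    // index of the target port behind this region
    uint64_t offset;    // base adjustment applied to the outgoing address
};

struct Route {
    int      target;
    uint64_t address;   // address as seen by the selected target
};

// Decode an incoming address against a per-initiator memory map. Returns
// nothing for an unmapped gap, which a real router could ignore or turn
// into an error response depending on the configured policy.
std::optional<Route> decode(const std::vector<Region>& map, uint64_t addr) {
    for (const Region& r : map) {
        if (addr >= r.base && addr < r.base + r.size) {
            return Route{r.target, addr - r.base + r.offset};
        }
    }
    return std::nullopt;
}

int main() {
    // Hypothetical map for one initiator: a memory at 0x0000_0000 and a
    // peripheral block at 0x4000_0000 that its target sees from offset 0.
    std::vector<Region> map = {
        {0x00000000, 0x10000000, /*target=*/0, /*offset=*/0},
        {0x40000000, 0x00100000, /*target=*/1, /*offset=*/0},
    };

    if (auto r = decode(map, 0x40000104)) {
        std::cout << "route to target " << r->target
                  << " at offset 0x" << std::hex << r->address << std::endl;
    } else {
        std::cout << "unmapped access" << std::endl;
    }
    return 0;
}
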
Generic Memory

Many SoC devices have over 50% of their area filled with memory IP blocks. It is good to have a generic memory model that can range in size from a few MB up to a multi-GB array. You configure each memory IP, define read/write permissions, use logging and tracing for debug, and model single- or multi-port instances. Multiple configuration knobs are supported, such as the size of the memory, read-write permissions and latency, byte initialization at reset, and retention. It may also provide a feature to save and restore memory state to files. LT-friendly memory implementations also provide support for DMI. Logging and tracing of memory transactions are provided to help in debugging. More complex implementations may provide multiple ports with configurable arbitration policies.

Python Integration

Test engineers do not have to be C++/SystemC experts to test the IP functionality. If the test scenarios are enumerated, they may be coded in any (scripting) language. A Python front-end for SystemC is quite popular due to its ease of interfacing with C/C++ code and the general familiarity of engineers with the Python language. Writing tests in Python makes them more readable with fewer lines of code, and consequently fewer bugs. CSTML provides a generic testbench infrastructure that allows creating consistent self-checking unit test cases.

Summary

A well-designed SystemC modeling methodology can be a big productivity boost, letting you create a virtual platform more quickly, with less engineering effort and shorter debug, than starting from scratch. The engineers at CircuitSutra have been honing their ESL design skills over the past decade using SystemC and their libraries across a wide range of domains:
  • Automotive
  • Storage
  • Application processors
  • IoT

They are working with leading EDA, semiconductor and systems companies. View the archived tutorial from DVCon, starting at time point 21:40.

Menta CEO Update 2020

Vincent Markus, CEO of Menta

What products are Menta offering today?

Menta is a semiconductor IP provider. We are the only proven European provider of programmable logic to be embedded inside customers’ SoCs and ASICs. This programmable logic is in the form of embedded FPGA IP. So, we offer our customers the possibility to have a small portion of their SoC as a low-power FPGA which can be programmed by them or their customers in the field. This is, if you like, ‘design insurance’ in a world where algorithms and requirements are changing at a much faster pace than SoC design cycles.

So, your eFPGA IP is essentially the core fabric of an FPGA?

It is indeed tempting to summarize it like that. However, this does not accurately reflect the complete reality – and we found this out the hard way during our early years at Menta. When we started with what was then version 1 of our eFPGA IP, we designed it with a ‘standalone FPGA mindset’. There was virtually no conceptual difference between our eFPGA IP and what you could find at that time from commercial FPGA vendors regarding their core fabric. We produced v1 and v2 of these cores, and in 2011, even an MRAM-based FPGA core fabric – a world first, but still with an FPGA mindset. These offerings gave us good market exposure and prospects started knocking at our door. However, after the excitement of the PowerPoint presentations passed, we discovered that the enthusiasm of the prospects was fading rapidly – for reasons we didn’t initially appreciate. We experienced this the hard way with a large Japanese prospect in 2014. While we were negotiating big numbers and large volumes with their business team, their ASIC engineers raised many questions and issues regarding integration, simulation, verification, yield, final test, etc. None of those were major hurdles for us to overcome, but it meant additional risk, cost, and time to integration for those engineers. We ended up losing this account, and many others thereafter, for similar reasons. When I invested in Menta and decided to lead the company, we made a re-start to change our mindset. Our customers were SoC and ASIC designers, so we hired SoC and ASIC designers to understand their product expectations and the fundamental barriers we were experiencing in the adoption of our IP. The cooperation of our FPGA specialists with our ASIC design specialists led to a new generation of Menta’s unique eFPGA IP – what we called v3 back in 2015 – which was born with an ASIC IP mindset, delivering a complete journey for our customers from cradle to production.
And soon after we gained our first customer – a top US Aerospace & Defense company. Moving fast forward, we are now selling our v5 IP, which was released in 2018. The same principles apply but with much improved PPA with each generation.

What are those principles?

As I said, Menta eFPGA IP is designed to be integrated into an ASIC or SoC – so our primary aim is to make the complete journey of our customers from design to full production free of any friction or worry. First, we don’t want to dictate to our customers what foundry to use, the process node, interfaces or the EDA design flow. Of course, the earlier the decision to integrate an eFPGA IP is made, the more benefits can be gained from that integration. However, Menta eFPGA IP can still be integrated very late in the design process because of the extreme flexibility of our approach. Let’s expand on that flexibility aspect: our eFPGA IP is based completely on standard cells, provided by the foundry, the customer or a third party – not a single custom cell is needed to use our IP. Even for the bitstream storage we use DFFs for extreme portability, while most other solutions would need a custom SRAM bitcell design, which limits their choice of fab or process. We don’t require any specific library, process step or metal stack for our users to deploy our IP. As a side note, DFFs also make our designs much more radiation hardened compared to SRAM-based designs – an important consideration for automotive and of course space and defense. The same thing applies regarding the interfaces to the eFPGA IP: these are all external to our blocks. Connections and communication with the IP are as simple as connecting a memory block. We have developed and patented a standard scan chain DfT for the same reason, and allow our customers to verify and simulate within their own EDA toolchain at every stage, like for any other digital IP. We realize that our eFPGA IP must not introduce any yield or reliability issues into the design of our customers. Finally, our customers are doing ASIC design – which stands for ‘application specific’. So, we made our IP completely ‘design adaptive’ – or even ‘application adaptive’ – so it evolves with the needs of our customers. If you need a new AI algorithm, you can program it as opposed to burning it into hard gates. I could go on for a long time with a list of requirements like verification, simulation, trust, etc. One thing we know for sure, though, is that what it takes to provide a good eFPGA IP cannot be oversimplified to the physical density of look-up tables. There are many other factors that influence the silicon area for a given RTL design, like DSPs, whether memories can be integrated inside the IP itself, read/write circuitry, test circuitry, and so on. When you look at it holistically, our customers are very happy with the small silicon-area trade-off given the design flexibility they get.

What about Menta Software?

Our customers don’t want to introduce any complexity for their customers. If they have to buy third-party software to program the chip, that is an additional degree of user friction and cost which must be avoided. That is why, very early on, we made the correct strategic decision to develop and deliver a complete design environment for our eFPGA IP: our very own Origami programming platform, which is available to all our customers. We also ensure compatibility with our customers’ existing RTL code by integrating the Verific HDL parser.
It takes only a couple of hours for an FPGA engineer to master our design flow and move their existing RTL to Menta eFPGA IP with ease. This is how our customers typically evaluate our IP and design flow before committing to a design, and it has been a cornerstone of our success with a growing number of design wins.

How long does it take and what does it cost to port a Menta eFPGA IP to a given process?

Thanks to our strategy of using only standard cells, the portability of our IP and our design flow, it takes us only 1 to 6 months to deploy our eFPGA IP in a new process node. To date, our IP has been delivered on 10 different nodes across 4 different foundries – all the way from 180nm down to 6nm, and we are getting ready to work on 5nm. As we don’t need custom cells, we do not require going through a test chip or silicon characterization. As a result, all our deliveries have been ‘right first time’. Our methodology has been audited several times by partners and customers, we have been qualified by GLOBALFOUNDRIES on 32SOI and 12LP, and we are 22FDX’celerator ecosystem members. That tells you how serious we are when it comes to quality and portability.

Why use eFPGA IP when one can buy a stand-alone FPGA?

FPGAs do a great job for those low-volume, high-value applications which require a huge number of programmable logic resources – we are speaking millions of LUTs here. In the datacenter for example, AI workloads on stand-alone FPGAs are making great inroads against the GPUs. When it comes to workloads on the edge, however, where cost and low power are paramount considerations, stand-alone FPGAs do not make much sense – except when prototyping. In these markets, ASICs and SoCs are the real winners for the foreseeable future. However, as I said earlier, in a world where algorithmic IP is changing at a rapid pace, it does not make sense to hardwire these algorithms into gates in an ASIC. Otherwise your chip may be still-born by the time it hits the market. This is a trend we are seeing in AI/ML, computational storage, 5G and encryption – constant change. This is where the eFPGA shines – you allocate typically around 20% of your chip for those rapidly changing algorithms, with the comfort that even if you need a different algorithm you can still program it into your ASIC, even after production. It is true that your chip will be slightly bigger (compared to hardwired gates), but that small ‘insurance premium’ is worth it in making your chip fit for purpose well into the future. We are also seeing another phenomenon among our customers – configurability. Prior to eFPGAs some customers would have 100s of different chips with slightly varying functionality. With a tiny amount of eFPGA, they can now have a single die from which they can produce 100s of different SKUs with no inventory risk. This is priceless for them. Finally, especially in cryptography, eFPGA works as an additional level of security. If the encryption is hardwired into gates, it can always be reverse engineered. If it is only loaded into the ASIC at run-time (which you can do with eFPGA), it is much harder to reverse engineer. In summary, we are now seeing an endless stream of new use cases which we did not envisage when we started this journey.

What is new since the last time we talked?

It has been a while, so there are actually quite a lot of updates. First, we released v5 of our IP with improved PPA.
Second, we introduced new features, especially in handling memories within the IP in a completely automated and transparent way, as well as a new adaptive DSP with some patented breakthrough features that are already in use by early adopters. We’ll tell you more in the coming months.

Where do you see Menta eFPGA IP used?

We address four main market segments. Our early adopters have been Aerospace & Defense companies. We have multiple customers all over the world (European Defence Agency, Thales Alenia Space). Our capability to deliver trusted eFPGA IP and the various radiation hardening options we have are some of the strengths that push A&D actors to adoption. We also have customers in computing-intensive applications such as High-Performance Computing (EPI) or 5G base stations (Chongxin Beijing communication). IoT (edge) is another segment where our low-power, small-area and low-cost eFPGA IPs have a lot of success. Automotive is an evolving segment for us where deals typically take longer, but we have a strong position here and recently had the chance to discuss publicly some of the work we do with Karlsruhe Institute of Technology, Infineon and BMW.

I saw several partnership announcements – can you tell me more?

We aim to bring our customers not only an eFPGA IP, but also all the collateral IPs and tools that will increase the value add of using Menta eFPGA IP. For this, we’ve been quietly building an exciting ecosystem. Some partners are offering their expertise to our customers to enable applications – for example security and cryptography with Rambus and Secure-IC. Some partners bring ease of use of our eFPGA, like Verific for VHDL/Verilog/SystemVerilog parsing or Mentor Graphics Catapult to allow our customers to program our eFPGA IP in a high-level language such as SystemC. We also have partners that bring SoC-level applications such as an eFPGA IP and CPU combination, like Andes – and others that bring technology options to our customers such as GLOBALFOUNDRIES, IMEC, Surecore or Synopsys. Finally, we have a growing ecosystem of algorithmic IP providers who are offering their wares to our customers to enable vertical applications – from TinyML to security and cryptography applications, including those from Rambus and Secure-IC. Watch this space!

About Menta

Menta is a privately held company based in Sophia-Antipolis, France. The company provides embedded FPGA (eFPGA) technology for System on Chip (SoC), ASIC or System in Package (SiP) designs, from EDA tools to IP generation. Menta’s programmable logic architecture is based on a scalable, customizable and easily programmable architecture created to provide programmability for next-generation ASIC design with the benefits of FPGA design flexibility.
For more information, visit the company website at: www.menta-efpga.com

Will AI rescue the world from the impending doom of cyber-attacks or be the cause?

There has been a good deal of publicized chatter about impending cyberattacks at an unprecedented scale and how Artificial Intelligence (AI) could help stop them. Not surprisingly, much of the discussion is led by AI vendors in the cybersecurity space. Although they have a vested interest in raising an alarm, they do have a point. But it is only half the story.

There is a new ‘largest’ cyber-attack almost every year. Sometimes it is an overwhelming Distributed-Denial-of-Service (DDoS) attack; other times it has been a deeper penetrating worm, a more powerful botnet, a massive data breach, or a bigger financial heist. This is not unexpected. Rather, it is a result of the world embracing Digital Transformation (DT), with more assets and reliance on the growing digital ecosystem. Although I do not think there will be some cataclysmic cyber-attack that brings everything down in the foreseeable future, we are likely to experience an ever-increasing rate and impact of attacks.

I find the AI discussions to be interesting, not for the arguments for how AI can help, but for what is omitted. You see, AI is just a tool. A powerful one, which will be used by both attackers and defenders. AI can greatly enhance cybersecurity prediction, prevention, detection, and response capabilities to improve defenses, adapt faster to new threats, and lower the overall cost of security. Attackers are also attracted to AI capabilities because of the very same attributes of speed, scale, automation, and effectiveness that empower them to relentlessly pursue targets, gain access, seize assets and undermine attempts by security to detect and evict them. AI can be used to attack and undermine other AI systems, which is becoming a problem. Adversarial attacks are one such class of exploitation, where the inputs to an AI system are modified by the opposition in such a way that the output is intentionally manipulated. These and other types of offensive systems that undermine AI represent a serious and growing risk to consumers, militaries, critical infrastructure, and transportation.

Yes, AI can help with the next ‘largest’ attacks, but it is also very likely that AI will be behind those attacks as well. So, let’s have a balanced discussion about the risks that increase every day, for all of us with roots in the digital domain. AI will grow and play a pivotal role in how technology influences the lives of every person on the planet. It will be very important to both cybersecurity and cyber-attackers in how they can maneuver. The game is on and the stakes are high. Welcome to the new AI cyber-arms race.

A Tour of This Year’s DAC IP Track with Randy Fish

DAC is a complex event with many “moving parts”. While the conference has gone virtual this year (as all events have), the depth of the event remains the same. The technical program has always been of top quality, with peer-reviewed papers presented across many topics and from across the world. This is also the oldest part of DAC, dating back 57 years. DAC has grown to include many other events that make up the entire experience: a major trade show with topical events presented in pavilions on the show floor, workshops, tutorials, a designer track and an IP track, to name a few.

IP is a relatively new addition to DAC and the EDA segment in general. This is especially true if you consider DAC is 57 years old. I had the opportunity to chat with Randy Fish, the chair of the IP track for DAC this year. I learned some interesting things about how this track is put together and how it interacts with the rest of the conference.

First, a bit about Randy. He began his career as a design engineer at Intel. From there, he worked in applications, sales and marketing across an array of EDA and IP companies, both large and small. He is currently vice president of market development at UltraSoC, a company that has recently been acquired by Siemens.

So, how does one get involved with the DAC Executive Committee? In Randy’s words, he’s been going to DAC since the mid-1980’s. Like many of us, he’s had lots of great experiences, both technical and social over the years at DAC. If you’re in the EDA or IP business, this show punctuates your yearly existence in many ways. A couple of years ago, Randy was chatting with Mike McNamara, a past DAC general chair and Michelle Clancy, DAC’s publicity and marketing chair. They were giving Randy the recruiting speech – join the force of DAC.  Randy decided it was time to “give back” and so he joined the Executive Committee and he is heading the IP Track this year.

At the start of our discussion Randy pointed out that there really isn’t a large, mainstream event for semiconductor IP. DAC is the best venue for such a focus and Randy believes this is as it should be. He went on to explain that the regular technical program at DAC is aimed at the “researcher”, but the IP program is aimed at the “practitioner” – those using IP to design chips. The choices of what IP to use as a practitioner are quite large – there are a lot of vendors to explore and a lot of new technologies. A virtual show environment helps this agenda quite a bit, since “sampling” many presentations and vendor booths is much easier in this format.

Next, Randy explained the scope and focus of the IP track. There are six folks on the IP committee. One aspect of their job is to develop invited sessions – topics of interest and possible presenters.  This is the “proactive” part of the content development if you will. There is also the review and selection of submitted papers on IP and organizing them into topical groups. This is the “reactive” part. Working both as a proactive and reactive organization, Randy and his team have put together an excellent program this year. Here are the top-level sessions:

Randy and his team were also working on a functional safety track and decided the topic was better served as a tutorial, so the team “donated” the topic to a different track at DAC for the good of the agenda. This one also looks quite interesting, check it out:

IP also impacts the technical agenda at DAC. Thanks to the RISC-V movement, there are internal designs and designs from companies like SiFive, Codasip and Andes, which are all driving the need for processor verification and creating a renewed interest in this topic for the DAC technical agenda.

I think the IP track at DAC this year looks quite strong and I congratulated Randy and his team on the excellent work. Randy closed with a call to action that may resonate with some of you, at least I hope so. He said that his committee, and others as well at DAC are always looking for interested parties to get involved. So, if you’d like to help shape future DACs, just contact Randy, or anyone on the DAC Executive Committee.

The 57th DAC will be hosted virtually Monday, July 20 – Friday, July 24, with on-demand access to sessions through August 1, 2020.  Registration for DAC is now open.  There are three ways to attend DAC virtually – complimentary I LOVE DAC pass, Designer/IP/Embedded Track Special $49.00 or Full Conference pass starting at $199.00.

For more information on the Virtual DAC program and registration please visit: www.dac.com

 

Sensors, AI, Tiny Power in a Turnkey Board

Got a great idea for an intelligent device at the extreme edge? Self-contained and able to run on a coin cell battery, maybe even harvested energy? Needs to fit in a space not much larger than a quarter? Eta Compute has a board for you. It comes with 2 MEMS microphones, a pressure/temperature sensor, a 6-axis MEMS accelerometer/gyroscope, their ultra-low-power neural sensor processor, extensibility through a UART port and a micro-SD slot, BLE with antenna for communication, and a battery cradle, all on a 1.4”x1.4” board. You can learn more at a free workshop they are hosting on July 14th (I was told that all the free promotional boards have already been taken!). Users can develop their AI solution through partner Edge Impulse’s TinyML development pipeline, uploading the completed solution through the UART port. One enthusiast was able to develop, upload and test an alarm detection system in under one hour.

Sensors, AI on a Tiny Board with Tiny Power

I talked to Semir Haddad (Sr. Director of Product Marketing) at Eta Compute to understand why they developed this board. He told me that a lot of their customers want to prove out a solution (sensors, AI and communication), but they were having to hack together their own solutions from multiple boards or adapt evaluation boards, all of which takes time and creates debug and scaling problems. Those users wanted to get quickly to a proof of concept, even a solution they could deploy quickly in the field, say in an agricultural application. They wanted to prove the solution out and pilot at a modest scale before deciding whether to go to volume production in a custom design. Semir discussed some additional use cases, including vibration detection for machine monitoring, or detecting doors or windows opening or closing. He mentioned pressure detection, saying that it is common to fuse this kind of sensing with motion for more accurate motion/position detection. Also, together with the Edge Impulse solution, the microphones can be used to recognize learned sounds (a chicken squawking for example – Warning! Fox in the chicken pen!) or wake words and command phrases (unlock or lock the gate). Similarly, the 6-axis motion sensor can be used for gesture detection. Between these two you have a pretty wide range of options to control your edge device.

Tiny Power through Self-Timed Logic, CVFS

The system is built around Eta Compute’s ECM3532 neural sensor processor, on which I’ve written before. This has all the capabilities of a hybrid multi-core Cortex-M plus DSP solution, but built on self-timed logic with continuous voltage and frequency scaling (CVFS). That’s continuous, unlike conventional DVFS, which can only switch between a small number of voltage and frequency options. These features allow this processor to get under 1mW for inference operations and to keep always-on operation (in support of the sensors) under 1uA. Eta Compute’s software partner (Edge Impulse) is known for their TinyML pipeline, which I’m told, together with this development board, provides a pretty much turnkey solution – no code needs to be written to get a proof of concept up and running very quickly.

Register for Workshop

Remember to register for the free workshop. You can also learn more about the board HERE. Also, you can buy the boards through DigiKey.

Achronix Blog Roundup!

Blogging is not an easy thing to do. It takes time, patience, commitment, and creativity. SemiWiki brought blogging to the semiconductor industry and many companies have followed. Very few have been successful with personal or corporate blogs, but as a premier semiconductor blogger I have developed a proven recipe over the last ten years and can spot a winner when I see one. As a corporate blog success story I will point to the Achronix blog site. Over the last three years Achronix has posted 28 blogs. My preference would be one per month without fail, but 28 in 37 months is a serious commitment. There is a nice mix of authors from different aspects of the company (engineering, marketing, applications, C level, etc.):
  • Kent Orthner, Systems Architect
  • Alok Sanghavi, Sr. Marketing Manager
  • Steve Mensor, Vice President, Marketing
  • Katie Purcell, Senior Staff Applications Engineer
  • Volkan Oktem, Sr. Director of Application
  • Manoj Roge, VP of Strategic Planning & Business Development
  • Raymond Nijssen, Vice President and Chief Technologist
  • Bob Siller, Director, Product Marketing
  • Huang Lun, Sr. Field Applications Engineer

While the author is important, the blog title and the first paragraph are everything for both direct and search traffic. You need to speak to a specific problem to get high-quality traffic. For semiconductor sites, quality versus quantity is important, and be very careful about clickbait because it is a double-edged sword. Again, the Achronix blogs are a great example for titles and summaries:

Embedded FPGAs for Next-Generation Automotive ASICs (Bob Siller, Sr. Marketing Manager)
For anyone who has looked at new cars lately, it's hard not to notice how quickly automotive electronics are advancing. Looking at automotive safety technology from just three years ago vs. today, you see a significant increase in the number of cameras to support applications such as surround-view display, driver distraction monitors, stereo vision cameras, forward-facing and multiple rearview cameras.

Increase Performance Using an FPGA with 2D NoC (Huang Lun, Sr. Field Applications Engineer)
Achronix Speedster7t FPGAs feature a revolutionary new two-dimensional network on chip (NoC), which provides >20 Tbps ultra-high bandwidth connectivity to external high-speed interfaces and for routing data within the programmable logic fabric. The NoC is structured as a series of rows and columns spread across the Speedster7t FPGA fabric. Each row or column has two 256-bit data paths using the industry-standard AXI data format, which support 512 Gbps data rates.

What is an FPGA and Why the Answer is Changing? (Bob Siller, Director, Product Marketing)
With the advent of new FPGA architectures, the answer has changed more in the last two years than ever before. Traditionally, an FPGA or field programmable gate array, is a reconfigurable semiconductor device comprising programmable logic gates and interconnect or routing, connected to multipurpose I/O pins.  An FPGA can be reprogrammed to perform any function, and its functionality can be changed over time. (For a great summary and history of the FPGA industry and technology. Insights from the Next FPGA Platform Event Manoj Roge, VP of Strategic Planning & Business Development It was exciting to participate in Next FPGA Platform on January 22nd at the Glasshouse in San Jose. I found it was particularly exciting to have Achronix share in a panel discussion with Xilinx and Intel. The Next Platform co-editors Nicole Hemsoth and Timothy Prickett Morgan did a great job in interviewing experts from FPGA ecosystem with insightful questions. The best part of Next Platform events is their format, where they keep marketing pitches to minimum with no presentations, just discussions. FPGAs in the 2020s – The New Old Thing Bob Siller, Director, Product Marketing FPGAs are the new old thing in semiconductors today. Even though FPGAs are 35 years old, the next decade represents a growth opportunity that hasn’t been seen since the early 1990s. Why is this happening now? Mine Cryptocurrencies Sooner Part 1-3 Raymond Nijssen, Vice President, Marketing Cryptocurrency mining is the process of computing a new cryptocurrency unit based on all the previously found ones. The concept of cryptocurrency is nearly universally recognized by the publicity of the original cryptocurrency, Bitcoin. Cryptocurrencies were supposed to be a broadly democratic currency vehicle not controlled by any one entity, such as banks, governments, or small groups of companies. Much of a cryptocurrency’s acceptance and trustworthiness is based on that proposition. However, with Bitcoin, that is not how it unfolded. Getting the first wave of blog views is actually the easy part. Keeping readership is critical and that is all about the quality of content. If I had to credit one thing for the success of SemiWiki over the last 10 years it would be excellent content.  All content on a company website is important but done correctly blogs can bring a consistent stream of high quality traffic and improve your website SEO and rankings, absolutely. You should also check out the Achronix videos and webinars, very well done!   About Achronix Semiconductor Corporation Achronix Semiconductor Corporation is a privately held, fabless semiconductor corporation based in Santa Clara, California and offers high-performance FPGA and embedded FPGA (eFPGA) solutions. Achronix’s history is one of pushing the boundaries in the high-performance FPGA market. Achronix offerings include programmable FPGA fabrics, discrete high-performance and high-density FPGAs with hardwired system-level blocks, datacenter and HPC hardware accelerator boards, and best-in-class EDA software supporting all Achronix products. The company has sales offices and representatives in the United States, Europe, and China, and has a research and design office in Bangalore, India.. Follow Achronix Website: www.achronix.com The Achronix Blog: /blogs/ Twitter: @AchronixInc LinkedIn: https://www.linkedin.com/company/57668/ Facebook: https://www.facebook.com/achronix/ [post_title] => Achronix Blog Roundup! 

Interface IP Category to Overtake CPU IP by 2025?
by Eric Esteve on 07-09-2020 at 6:00 am

The interface design IP market exploded in 2019, growing by 18% to $870 million, while the CPU IP category grew by 5% to $1,460 million. In fact, the interface IP market is forecast to sustain a high growth rate for the next five years, as calculated by IPnest in the “Interface IP Survey 2015-2019 & Forecast 2020-2024”, reaching $1,800 million by 2025. Obviously the CPU IP category will not stay at the 2019 level and is expected to grow as well, but we think the 2020-2025 CAGR for CPU will be more modest, in the 4% range.

Why such a modest growth rate for the CPU IP category? The first reason is that the CPU IP market is shaky: the licensing business model has been in upheaval since the arrival of the RISC-V CPU. The second reason is the uncertainty about ARM’s future IP revenues in China (estimated to be in the 30% range), because of the exit of the JV built to support ARM IP sales in the country. The post “Tears in the Rain – ARM and JVs in China” from Jay Goldberg on SemiWiki gives a very detailed explanation of the complete story. I strongly suggest you read it; where we were only guessing, it puts the feeling into clear words.

But the goal today is to explain why the interface IP category will see such a high growth rate until 2025. The picture below shows that CPU IP market share has been declining since 2017 (from 40.8% to 37.2%), while interface IP share has grown over the same period from 18% to 22.1%. This trend has held for the last three years, and we will see why it should continue through the 2020s.

IP market share by category, 2017 vs. 2019

In the 2010s the smartphone was the strong driver for the IP industry, pushing the CPU/GPU categories and interface protocols like LPDDR, USB and MIPI. The smartphone industry is still active but has reached a peak. The new growth drivers for IP sales are data-centric applications, including servers, datacenters, wired and wireless networking and emerging AI. All of these applications share the need for ever higher bandwidth for in-system data exchange (with memory and between chips) as well as across the global network, to support faster and wider interconnects between datacenters and networking equipment. This translates into high-speed memory controllers (DDR5, HBM or GDDR6), faster releases of interface protocols (PCIe 5, 400G and 800G Ethernet, 112G SerDes) and the emergence of protocols supporting chiplets (HBI or SerDes).

Looking at the interface IP segments, this directly impacts the memory controller, PCI Express, Ethernet and SerDes segments, plus a new segment that we could call “Die2Die” (D2D). We have already seen significant IP revenue growth in these segments in 2019, ranging from 12% (memory controller) to 20% (PCIe) and even more for the Ethernet and SerDes segments. The drivers have been the adoption of emerging protocols as well as new technology nodes, like 7nm and 5nm.
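As a quick sanity check on the forecast quoted above ($870 million in 2019 growing to roughly $1,800 million by 2025), the implied compound annual growth rate can be computed directly. Whether the survey counts that as a five- or six-year span is an assumption on my part, so both are shown; the helper function is purely illustrative.

```python
# A minimal sanity check on the forecast numbers quoted in the article:
# $870M in 2019 growing to roughly $1,800M by 2025. Which endpoints the
# survey actually uses for its CAGR is an assumption here; both spans are shown.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

interface_2019 = 870.0    # $M, from the article
interface_2025 = 1800.0   # $M, low end of the forecast range

for years in (5, 6):      # 2020-2025 vs. 2019-2025 spans (assumption)
    print(f"Implied CAGR over {years} years: {cagr(interface_2019, interface_2025, years):.1%}")
```

The result lands in the 13-16% range, consistent with the roughly 15% CAGR the survey attributes to its fastest-growing protocols.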
For PCIe the driver has been the adoption of PCIe 4 (16 Gbps data rate per lane). For the memory controller segment, we have seen several drivers, like DDR4 adoption in the datacenter, and also the adoption of High Bandwidth Memory (HBM2) and graphics memory (GDDR6) in numerous applications, some of them new and linked with AI. When a design project starts on the latest available technology node (7nm in 2019) and integrates the latest release of a protocol, the license ASP is impacted and is more expensive than before (an n-1 release on an n-1 node). So the growth for a specific IP segment is generated by the number of design starts (higher than before, because there are more developments in applications like the datacenter and AI) multiplied by the license ASP increase, because the protocol is more complex and the target node is advanced.

What we are starting to see clearly is that data-centric applications (servers, datacenter, networking, AI, …) are strongly pushing the interface IP market, more specifically memory controller, PCIe, Ethernet and SerDes. For SerDes, we can consider 2019 the year in which 112G PAM4 SerDes started to be adopted, positively impacting SerDes IP revenues, but also Ethernet, as 400G MAC IP (and 800G MAC) started to sell. In fact, we have seen growth in the high 30% range for this category, illustrated by Synopsys (thanks to the Silabtech acquisition), Cadence (thanks to the Nusemi acquisition) and the three-year-old SerDes start-up Alphawave, which reached $25 million in revenues in 2019!

Top 5 Forecast 2020-2024

Don’t forget other protocols like USB, as the introduction of USB 4 should boost USB IP sales in 2021 and beyond. USB 4 offers much higher bandwidth at 40 Gbps (compared with 10 Gbps for USB 3.2 or 20 Gbps for USB 3.2x2) and clarifies the USB nomenclature, making it easier to understand for the end user (the consumer). It also supports DisplayPort and Thunderbolt, a new capability to make life easier for consumers who want to watch movies. The MIPI protocol, part of the top 5 interfaces, is massively used in smartphones. The change is coming from the automotive segment with the adoption of MIPI CSI (camera) and MIPI A-PHY, defined to support long-range (LR) SerDes-based interconnect in a car. Nevertheless, the IPnest forecast for USB and MIPI predicts a CAGR in the 10% range for 2020-2024 for these two protocols, slightly less than the 15% CAGR associated with the three other protocols.

IPnest has used a methodology based on design starts by protocol, forecasting new project growth with respect to the target market segment (like datacenter, networking or ADAS) and predicting the license price (as a function of the technology node for the PHY and linked with the protocol release for the controller). This approach is quite complex, but we expect it to deliver accurate results and, more importantly, a realistic forecast. This is the 12th version of the survey, which started in 2009 when the interface IP category was a $250 million market ($870 million in 2019), and we can affirm that the five-year forecasts have stayed within a +/-5% error margin! So, when IPnest predicts in 2020 that the interface IP category will be in the $1,800-$2,000 million range in 2025, passing the CPU IP category, this affirmation is backed up by experience…

If you’re interested in this “Interface IP Survey” released in June 2020, just contact me: eric.esteve@ip-nest.com.

Eric Esteve from IPnest

Arm Rings the Bell in Supercomputing
by Bernard Murphy on 07-08-2020 at 6:00 am

Late last year I wrote about Arm’s efforts to play a role in servers, in AWS, and particularly in Arm-based supercomputing, in the Sandia Astra roadmap and in partnering with NVIDIA, whose GPUs are in the Oak Ridge Summit supercomputer. These steps came, at least for me, with an implicit “Good for them, playing a role on the edges of these challenging applications.” Well, they just blew right past that theory. The Fugaku Arm-based supercomputer was just named this year’s fastest in the world. Arm isn’t helping in some peripheral role; Arm cores are the CPUs in this supercomputer. What’s more, Fugaku earlier also topped the list of the world’s most efficient supercomputers.

Some Fugaku specs

Fujitsu and RIKEN developed Fugaku jointly, around the Fujitsu A64FX processor. Fujitsu built these processors around a many-core Arm CPU, with 48 compute cores connected through a NoC, together with either 2 or 4 helper cores. In addition, each processor connects in-package to 32GB of high-bandwidth memory (HBM) supporting streaming memory accesses, which are also the types of accesses common in AI applications. The processor uses the Armv8.2-A architecture, plus the Scalable Vector Extension with a 512-bit vector implementation. One processor alone is a serious machine. A full rack holds 384 of these, there are 396 such racks in the system, plus a number of half racks, and together these add up to nearly 160k nodes in Fugaku. The nodes interconnect through a torus-architecture network called TofuD (a neat name for a Japanese supercomputer network). Theoretical peak performance is eye-watering. In boost mode, the system reaches 1.07 exaflops in 32-bit single precision, 2.15 exaflops in AI training (16-bit) and 4.3 exaops in 8-bit inference, with a theoretical peak memory bandwidth of 163 petabytes/second. Peak power for this monster is about 28 MW and, no surprise, it depends on a closed-circuit water-cooling system.
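The headline numbers hang together under some simple arithmetic. The sketch below multiplies out the rack figures quoted above and back-calculates the per-node memory bandwidth; treating every rack as full and rounding the node count to 160k are simplifications on my part.

```python
# A quick arithmetic check against the Fugaku figures quoted above. The per-rack
# count, rack count, and system memory bandwidth are taken from the article;
# ignoring the half racks and rounding to 160k nodes are simplifications here.

nodes_per_rack = 384
full_racks = 396
nodes = nodes_per_rack * full_racks
print(f"Nodes from full racks alone: {nodes:,}")        # ~152k, consistent with "nearly 160k"

# System peak memory bandwidth of 163 PB/s spread over ~160k nodes implies
# roughly 1 TB/s of HBM bandwidth per processor.
total_bw_pb_s = 163.0
approx_nodes = 160_000
per_node_tb_s = total_bw_pb_s * 1000.0 / approx_nodes
print(f"Implied per-node memory bandwidth: ~{per_node_tb_s:.1f} TB/s")
```

Roughly 1 TB/s of memory bandwidth per processor is what the in-package HBM described above would be expected to deliver, so the system-level figure is consistent.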

COVID applications

RIKEN is working with the Japanese Ministry of Education, Culture, Sports, Science and Technology to use Fugaku on a number of projects targeting COVID. One is a project to search for new drug candidates, using molecular dynamics modeling to find candidates with a high affinity for the spike proteins on the virus. They are applying this analysis to 2,000 existing drug candidates. A different analysis is looking at the molecular dynamics of the spike protein to find features which may not be experimentally detectable, to gain a better understanding of the mechanisms behind binding to ACE2 receptors on cell surfaces. A third team plans to model infection in indoor environments through virus droplets, with a view to testing possible counter-measures such as airflow control. I like this one simply because it’s an incredibly complex many-body fluidics problem. How else would you model it other than on a monster supercomputer?

Cray announces their Arm supercomputer

Fugaku isn’t the only Arm-based supercomputer on record. HPE/Cray have announced the Cray CS500, based on the Fujitsu A64FX processor, which provides a Cray programming environment on the system. SUNY Stony Brook, DOE Los Alamos National Laboratory and ORNL have already signed up for these systems. No more patronizing Arm in supercomputing. They’re on the leader board, and one of their customers is at the top of it. I’ve heard that Cray plans to reclaim that spot next year. Wow! You can read more about Arm’s journey in supercomputing HERE and you can learn more about Fugaku HERE.

Ansys Multiphysics Platform Tackles Power Management ICs
by Mike Gianfagna on 07-14-2020 at 10:00 am

Ansys addresses complex Multiphysics simulation and analysis tasks, from device to chip to package and system. When I was at eSilicon we did a lot of work on 2.5D packaging and I can tell you tools from Ansys were a critical enabler to get the chip, package and system to all work correctly.

Ansys recently published an Application Brief on how they address the analysis of power management ICs. The tool highlighted is Ansys Totem, a foundry-certified transistor-level power noise and reliability platform for power integrity analysis on analog mixed-signal IP and full custom designs. I had the opportunity to speak with Karthik Srinivasan, Sr. Corporate Application Engineer Manager, Analog & Mixed Signal, and Marc Swinnen, Director of Product Marketing at Ansys.

I began by probing the genealogy of Totem. Did it come from an acquisition? Interestingly, Totem is a completely organic tool that builds on the Ansys Multiphysics platform, which also powers other tools such as the popular Ansys Redhawk. Organic development like this is noteworthy – it speaks to the breadth and depth of the underlying infrastructure. As Totem is a transistor-level tool, it delivers Spice-like accuracy according to the Application Brief. I probed this a bit with Karthik. Was Totem actually running Spice, and if so, how do you get an answer for a large network in less than geologic time?

Totem changes the modeling paradigm for the network to deliver results much faster than traditional Spice. All non-linear elements are converted to linear models: transistors are modeled as current sources and capacitors. These models are then connected to the parasitic network of the power grid, and an IR-drop and electromigration analysis is performed. This cuts the computational complexity of the problem down quite a bit. Totem provides targeted accuracy for the analysis of interest, typically within 5-10 mV of Spice, even at advanced technology nodes.
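To make the linearization idea concrete, here is a minimal sketch of that style of analysis on a toy power grid: resistive segments become a conductance matrix, switching devices become constant current sinks, and a single linear solve yields the static IR drop everywhere. The grid size, resistances, currents and pad model are made-up values for illustration; this is not Ansys Totem’s actual algorithm or data model.

```python
# Toy linearized power-grid IR-drop analysis: resistors -> conductance matrix,
# devices -> constant current sinks, one linear solve -> node voltages.
import numpy as np

N = 5                      # assume a tiny N x N power-grid mesh
R_SEG = 0.05               # ohms per grid segment (assumption)
VDD = 0.75                 # supply voltage at the pad (assumption)
g = 1.0 / R_SEG

n_nodes = N * N
G = np.zeros((n_nodes, n_nodes))   # nodal conductance matrix
I = np.zeros(n_nodes)              # current injected at each node (sinks are negative)

def idx(r, c):
    return r * N + c

# Stamp the resistive mesh: each neighboring pair of nodes shares a conductance g.
for r in range(N):
    for c in range(N):
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < N and cc < N:
                a, b = idx(r, c), idx(rr, cc)
                G[a, a] += g; G[b, b] += g
                G[a, b] -= g; G[b, a] -= g

# Model switching devices as constant current sinks (the linearization step).
rng = np.random.default_rng(0)
I -= rng.uniform(0.0, 2e-3, n_nodes)     # 0-2 mA drawn per node (assumption)

# Tie the corner node to VDD through a strong pad conductance (Norton equivalent).
pad = idx(0, 0)
G[pad, pad] += 1e3
I[pad] += 1e3 * VDD

v = np.linalg.solve(G, I)                # one linear solve replaces Spice iterations
print(f"Worst-case static IR drop: {(VDD - v.min()) * 1e3:.2f} mV")
```

Replacing non-linear device equations with fixed current sources is exactly what turns an iterative Spice problem into one sparse linear solve, which is where the speed-up described above comes from.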

We discussed other applications of this approach. Power management ICs contain very wide power rails to handle the large currents involved in their operation. These structures are typically analyzed with a finite element solver, resulting in very long run times, typically multiple days. Using the Totem approach, a result with similar accuracy can typically be delivered 5-6X faster.

Using the Ansys Multiphysics platform, analysis can be performed from the transistor and cell-library level all the way up to the system level. One platform, one source of models. IP vendors are also developing and delivering Totem macro models along with their IP to facilitate this kind of multi-level analysis. Marc pointed out that custom macro models are a key enabling technology for this kind of transistor-to-system analysis: one first does the detailed analysis in Totem and then creates a macro model of the result to drive Redhawk.
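The macro-model idea can be illustrated generically: a detailed linear network is collapsed into an equivalent that behaves identically at its ports, so a chip-level tool only has to carry the small reduced model. The three-node example below uses a standard Schur-complement (Kron) reduction; it is a sketch of the general concept, not the actual Totem/Redhawk macro-model format.

```python
# Generic illustration of port-level macro-modeling via Kron (Schur-complement)
# reduction: eliminate internal nodes, keep behavior identical at the ports.
import numpy as np

# Node 0 and node 2 are ports; node 1 is internal. Two 1-ohm resistors in series.
g = 1.0
G = np.array([[ g,  -g,  0.0],
              [-g, 2*g,  -g],
              [0.0, -g,   g]])

ports, internal = [0, 2], [1]
Gpp = G[np.ix_(ports, ports)]
Gpi = G[np.ix_(ports, internal)]
Gip = G[np.ix_(internal, ports)]
Gii = G[np.ix_(internal, internal)]

# Eliminate the internal node: the reduced matrix behaves identically at the ports.
G_macro = Gpp - Gpi @ np.linalg.solve(Gii, Gip)
print(G_macro)   # off-diagonal -0.5 S -> the expected 2-ohm port-to-port resistance
```

Here the internal node of two 1-ohm resistors in series is eliminated, leaving the 0.5 S (2-ohm) port-to-port equivalent you would expect; a real macro model plays the same role for a full IP-level power grid.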

The Ansys Application Brief goes into a lot more detail about the analysis capabilities of Totem. You can access the Application Brief here. To whet your appetite, here are some of the topics covered:

  • Advanced Analysis: Power FETs, RDSON & sensitivity, guard ring weakness checks, transient power
  • Early Analysis: device R maps, interconnect R maps, guard ring weakness maps
  • PDN Noise Sign-Off: power, DvD, substrate noise

With DAC approaching, you can visit the Ansys virtual booth. Registration for DAC can be found here. There’s more to see from Ansys at DAC: the company has an incredible 25 papers accepted in the designer track (that’s not a misprint), and four of them focus on Totem. I also hear that Ansys is planning a special semiconductor-focused virtual event in the fall. Watch your inbox and SemiWiki for more information on that as it becomes available.


Ansys Multiphysics Platform Tackles Power Management ICs
by Mike Gianfagna on 07-14-2020 at 10:00 am

Ansys addresses complex Multiphysics simulation and analysis tasks, from device to chip to package and system. When I was at eSilicon we did a lot of work on 2.5D packaging and I can tell you tools from Ansys were a critical enabler to get the chip, package and system to all work correctly.

Ansys recently published an Application Brief… Read More


Hierarchical CDC analysis is possible, with the right tools
by Bernard Murphy on 07-14-2020 at 6:00 am

Design complexity demands hierarchical CDC

Back in my Atrenta days (before mid-2015), we were already running into a lot of very large SoC-level designs – a billion gates or more. At those sizes, full-chip verification of any kind becomes extremely challenging. Memory demand and run-times explode, and verification costs explode also since these runs require access to … Read More


SystemC Methodology for Virtual Prototype at DVCon USA
by Daniel Payne on 07-13-2020 at 10:00 am

DVCon was the first EDA conference in our industry impacted by the pandemic and travel restrictions in March of this year, and the organizers did a superb job of adjusting the schedule. I was able to review a DVCon tutorial called “Defining a SystemC Methodology for your Company”, given by Swaminathan Ramachandran… Read More


Menta CEO Update 2020
by Daniel Nenni on 07-13-2020 at 6:00 am

What products are Menta offering today?
Menta is a semiconductor IP provider. We are the only proven European provider of programmable logic to be embedded inside customers’ SoCs and ASICs. This programmable logic is in the form of embedded FPGA IP. So, we offer our customers the possibility to have a small portion of their SoC as… Read More


Will AI rescue the world from the impending doom of cyber-attacks or be the cause
by Matthew Rosenquist on 07-12-2020 at 6:00 am

There has been a good deal of publicized chatter about impending cyberattacks at an unprecedented scale and how Artificial Intelligence (AI) could help stop them. Not surprisingly much of the discussion is led by AI vendors in the cybersecurity space. Although they have a vested interest in raising an alarm, they do have a … Read More


A Tour of This Year’s DAC IP Track with Randy Fish
by Mike Gianfagna on 07-10-2020 at 10:00 am

DAC is a complex event with many “moving parts”. While the conference has gone virtual this year (as all events have), the depth of the event remains the same. The technical program has always been of top quality, with peer-reviewed papers presented across many topics and across the world. This is also the oldest part of DAC, dating… Read More


Sensors, AI, Tiny Power in a Turnkey Board.
by Bernard Murphy on 07-10-2020 at 6:00 am

Got a great idea for an intelligent device at the extreme edge? Self-contained and can run on a coin cell battery, maybe even harvested energy? Needs to fit in a space not much larger than a quarter? Eta Compute has a board for you. This comes with 2 MEMS microphones, a pressure/temperature sensor, a 6-axis MEMS accelerometer/gyroscope,… Read More


Achronix Blog Roundup!
by Daniel Nenni on 07-09-2020 at 10:00 am

Blogging is not an easy thing to do. It takes time, patience, commitment, and creativity. SemiWiki brought blogging to the semiconductor industry and many companies have followed. Very few have been successful with personal or corporate blogs but as a premier semiconductor blogger I have developed a proven recipe over the last… Read More


Interface IP Category to Overtake CPU IP by 2025?
by Eric Esteve on 07-09-2020 at 6:00 am

The interface design IP market exploded in 2019, growing by 18% to $870 million, while the CPU IP category grew by 5% to $1,460 million. In fact, the interface IP market is forecast to sustain a high growth rate for the next five years, as calculated by IPnest in the “Interface IP Survey 2015-2019 & Forecast 2020-2024”, to reach $1,800… Read More


Arm Rings the Bell in Supercomputing
by Bernard Murphy on 07-08-2020 at 6:00 am

Late last year I wrote about Arm’s efforts to play a role in servers, in AWS, and particularly Arm-based supercomputing, in the Sandia Astra roadmap and in partnering with NVIDIA who are in the Oak Ridge Summit supercomputer. These steps came, at least for me, with an implicit “Good for them, playing a role on the edges of these challenging… Read More