Architecture Exploration of Processors and SoC to Trade Off Power and Performance

Podcast EP19: The Emergence of 2.5D and Chiplets in AI-Based Applications
Dan and Mike are joined by Sudhir Mallya, vice president of corporate and product marketing at OpenFive. We explore 2.5D design and the role chiplets play. Current technical and business challenges are discussed, as well as an assessment of how the chiplet market will develop and what impact it will have. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker's employer, organization, committee or any other group or individual.

Sudhir Mallya is Vice President of Corporate and Product Marketing at OpenFive. He is responsible for custom silicon product marketing, technology roadmaps and business model innovation, corporate marketing initiatives, and strategic customer and partner alliances. He was previously at Toshiba, where he led their North American silicon business unit with a focus on data center and automotive applications. He is based in Silicon Valley and has held executive positions in engineering, marketing, and business development at leading semiconductor companies. He has led multiple $100M+ global strategic customer engagements from very early concept to high-volume production. He has a BSEE from the Indian Institute of Technology, Bombay, and an MSEE from the University of Cincinnati.
CEO Interview: Srinath Anantharaman of Cliosoft

Srinath Anantharaman founded Cliosoft in 1997 and serves as the company's CEO. He has over 40 years of software engineering and management experience in the EDA industry. Srinath graduated with a Bachelor of Technology from IIT Kanpur and an MSEE from Washington University in St. Louis.

The last time we talked to you was 2017. Tell us a little bit about how the company has grown since then and how you've evolved your strategy.

The company has grown steadily and significantly over the last few years. Oddly, we have seen a big uptick in our business during the COVID lockdown. Our SOS family of design management solutions has become the backbone of design data collaboration for many of the largest semiconductor companies in the world. We have engineers spread all over the world, from the US to Australia, developing and supporting the software that these multinationals depend on to share data efficiently across their design centers and the cloud. Our business mantra has really never changed: develop the best product we can, support our customers at the highest level and treat each other with respect. We never focus on revenue or growth. These are by-products that will come if we deliver on our fundamentals. Apparently, we are delivering.

Your business model and solutions have gone to the heart of some of the biggest challenges of IP reuse. What are the challenges to IP reuse that you are seeing?

IP reuse is the holy grail of design that we have been talking about for a while. It promises to bring about the next significant leap in design team productivity, design cost savings, and reduced time-to-market. Unfortunately, reality has not caught up with the vision. While there is ad hoc IP reuse within a team, it rarely crosses over to other business units and/or across the enterprise. IPs are often trapped in silos while companies continue to acquire and grow globally. There are many factors that limit IP reuse. There is some overhead to develop IPs for reuse, and it requires good documentation to assist potential reuse. It must be easy and convenient for designers to find the right IP and gauge its quality. When reusing an IP, designers need the ability to get help with the IP if needed, report issues found, and be notified if there are updates. Effective IP reuse requires a change in mindset, perhaps enforced through a mandate, along with an IP-based design methodology and a good software infrastructure to enable it all. Cliosoft is trying to evangelize the benefits of IP reuse and provide the tools needed to help design teams make it a reality.

Consolidation seems constant in the EDA industry. First, where do you see opportunities for new efficiencies? And where do you see opportunities for startups with disruptive technology?
SA: Indeed, we have seen several acquisitions in our customer base. ON Semiconductor acquired Fairchild and Aptina, Microchip acquired Microsemi, Intel picked up eASIC and Soft Machines, Marvell bought Inphi, Skyworks acquired Avnera, and Synopsys has snapped up several IP vendors. We see this as a great opportunity. Most mid-to-large size companies are the result of several acquisitions, globally distributed with different cultures and expertise. To be more than the sum of its parts, engineers need to collaborate and share expertise across these boundaries. Our SOS design management platform helps teams in different business units work together on exciting new projects.

However, we saw a much bigger opportunity in providing a solution to help harness the power of all the intelligence and expertise spread across the enterprise. We introduced a new product called HUB which, as the name implies, lets people across the enterprise share their intellectual property and expertise with others. It enables problems to be solved quickly by crowdsourcing, and designs to be completed faster without reinventing the wheel. I recently heard a talk from Erica Dhawan, the author of a book named 'Get Big Things Done: The Power of Connectional Intelligence', where among other things she talks about how difficult problems can be solved by leveraging the expertise of a broad network. Creative new solutions may come from unexpected sources looking at the problem from a different perspective. HUB was designed to do just that: provide a platform to enable the use of connectional intelligence within the enterprise by making it easy to share and reuse IP and expertise. Using HUB, an engineer in one business unit needing a silicon IP may find that it has already been developed by an acquired company. They can now quickly access the IP and leverage not only the expertise of the authors but also other users in the enterprise who may have integrated that IP into their designs. All the interaction is recorded in HUB and becomes a knowledge base that future users of the IP can leverage.

Does the rise and popularity of RISC-V make design management any more difficult for companies? Put another way, how do Cliosoft solutions help those companies who are embracing RISC-V IP?

SA: From a design management perspective, our SOS design management platform helps design teams manage their RISC-V IP and designs exactly the same as any other IP and design. However, given the open-source nature of RISC-V and the fact that any user can collaborate, extend the ISA with new instructions and innovate on the micro-architecture of RISC-V processors, our HUB IP management platform helps manage and track this collaboration. HUB provides IP traceability for RISC-V IPs along with their knowledge base to help proliferate the evolution, reuse and integration of RISC-V IP.

Tell us a little bit about improvements you've made to the SOS platform since we last talked.

SA: SOS is a very mature platform, with well over 300 organizations using the software. As teams and projects have become larger, our focus has been to improve performance and scalability. We have seen an increase in IP-based design methodologies, and so we have added features to lubricate this design flow. Since SOS is primarily used in IC design flows, with a large number of large binary files, optimizing the use of network storage has been a key differentiator. We are working on some new capabilities to improve storage optimization even more.
The other trend we have seen is that design teams may have multiple flows. A team using Cadence Virtuoso may also use Keysight ADS for designing some RF components. Some architects may use MathWorks MATLAB, and project leads may manage specifications and other documentation using Microsoft Office. We work with a variety of vendors so that engineers can invoke SOS revision control features directly from their preferred tools, and all the design data and documentation is managed by SOS. Another trend is a result of acquisitions. A company using Cadence Virtuoso may acquire other companies that use Synopsys Custom Compiler or Siemens Tanner. Since SOS is integrated and production-tested with all these flows, the company can use the same design management solution for all of them.

How do you see the rise of cloud services affecting your business?

Frankly, it has not affected our business in any significant way. Whether engineers are working in their private cloud or using rented cloud services, they are using our solutions in the same way. We have expertise with Amazon AWS, Google GCP and Microsoft Azure. Since we have a globally distributed workforce, we use the cloud ourselves, and of course we use our own software in the cloud to manage our software development. Many startups use the Cadence Cloud-Hosted Design Solution. Our applications engineers have a great working relationship with the engineers managing the hosted solution at Cadence. Since the Cadence engineers are very familiar with our solution, they help onboard a new company. This almost eliminates the setup effort for a new startup, which is often low on CAD expertise and resources. We can't thank the Cadence hosted solution engineers enough.

The competitive landscape has changed a little bit around Cliosoft. What's your take on the impact of those changes for users of IP management solutions?

Cliosoft has always been focused on meeting the data collaboration needs of design engineers. Our competition has changed in that their focus has become diluted with acquisitions or interest in entirely different domains. So we are now the only vendor left whose sole focus is on helping semiconductor companies manage their crown jewels: their IP and design data. Customers trust that we will be laser focused on solving their problems, and this has given us further credibility. We have seen a steady migration of customers moving to our solutions.

In recent years, we've seen a big increase in the number of large, vertically integrated companies that design their own SoCs: Apple, Google, Facebook, Amazon, to name a few. Have they embraced commercial IP management solutions, or do they roll their own solutions simply because they can?

SA: The large semiconductor companies still have the largest design teams and remain the bulk of our focus. The companies you mentioned clearly have the software expertise to build any solutions they want. However, software for managing IC design data and IP reuse is very specialized and not their area of expertise. Some of them already use our solutions.

As you embark on your 24th year in business, what's your vision for how IP use and reuse will evolve in the coming years, and how can Cliosoft address any challenges there?

We continue to see vigorous growth of new startups. Many of these will get acquired, and we will see more consolidation.
As design teams are required to move faster to accommodate shrinking market windows, I expect that upper management will push to make reuse of existing IP a reality and to purchase third-party IP when necessary. Tracking all this reuse information and managing dependency trees will be of paramount importance, both for design integrity and quality and for avoiding legal or financial jeopardy with third-party IP vendors. Our HUB solution is well positioned to address these needs, and we have seen growing interest, especially with large multinationals. We expect to learn more from these engagements and further enhance the product to meet these challenges. Cliosoft.com

A New Approach for Clock Analysis below 10nm

Proper clock functionality and performance are essential for SoC operation. Static timing analysis (STA) tools have served well for verifying clocks, yet with new advanced process nodes, lower operating voltages, higher clock speeds and higher reliability requirements, STA tools alone can no longer perform the kinds of analysis needed for clock sign-off. At 7nm and below, a clock failure due to rail-to-rail, duty cycle distortion or aging issues can jeopardize an entire project. To help find and solve these problems, San Jose-based Infinisim has developed a product called ClockEdge that uses advances in simulation in conjunction with software specifically devoted to analyzing clocks. ClockEdge is most relevant for clock speeds in excess of 1GHz and clocks designed at process nodes below 10nm. It can handle traditional clock tree structures and also works with grid, mesh and spine-based clocks.

ClockEdge overcomes the limitations that STA encounters, offering deeper insights into clocks that help with performance, power, reliability and more. At advanced process nodes, STA tools guard-band their results, leading to over-design and unnecessary power consumption. STA also suffers due to lower operating voltages and non-linear device behavior. Results from STA miss rail-to-rail failures, aging effects and supply-induced jitter, all of which can lead to chip failures. Infinisim's ClockEdge does more than just look at timing; it ensures that the clock is also functionally correct. It delivers SPICE-accurate results, typically with overnight turnaround even for the largest SoCs. ClockEdge performance is achieved through linear scaling using LSF jobs, unlike multithreading, which plateaus at around 10-20X. ClockEdge analyzes clock performance at multiple PVT corners and will perform HCI and NBTI aging analysis. Another benefit of ClockEdge is its ability to compute peak-to-peak, average power and leakage current for each gate in the clock.
[caption id="attachment_298616" align="aligncenter" width="1114"]Clock analysis rail to rail Clock analysis rail to rail[/caption] Unlike STA, ClockEdge analyzes the entire clock domain and looks at every clock path for its timing and electrical analysis. Going beyond looking at one path at a time can uncover situations where there may be excessive guard-banding or lurking failures. In advanced process node clock designs, duty cycle distortion or asymmetry in high and low pulse widths and rail-to-rail failures are often missed by STA but accurately predicted by ClockEdge. If not detected, both these errors can cause a host of problems and lead to timing problems in the finished chip. ClockEdge does full analog signal analysis to catch and report these issues. ClockEdge is easy to use because the entire flow is focused on clock analysis. It automatically performs gate level tracing and sensitization, which is followed by transistor level simulation. ClockEdge has comprehensive post processing to generate the reports and the information needed to interpret clock functionality and performance results. As for inputs, ClockEdge uses the same information and data that are used by STA. Clocks are too important to leave to STA at advanced nodes. A lot needs to be looked at, including power, rail-to-rail and aging to ensure design success. This is especially true for designs below 10nm, where many of these issues can slip through if only STA is used to look at clock issues. Infinisim has put a lot of work into ClockEdge, and they have gained acceptance with major semiconductor companies working on leading edge designs. Their website includes more information on the flow for ClockEdge. About Infinisim Infinisim, Inc is a privately funded EDA company providing design verification solutions. Founded by industry luminaries, the Infinisim team has over 50 years of combined expertise in the area of design and verification. Infinisim customers are leading edge semiconductor companies and foundries that are designing high-performance mobile, AI, CPU and GPU chips. Infinisim has helped customers achieve unprecedented levels of confidence in design robustness prior to tape-out. Customers have been able to eliminate silicon re-spins, reduce chip design schedules by weeks and dramatically improve product quality and production yield. www.infinisim.com [post_title] => A New Approach for Clock Analysis below 10nm [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => closed [post_password] => [post_name] => a-new-approach-for-clock-analysis-below-10nm [to_ping] => [pinged] => [post_modified] => 2021-05-06 07:26:34 [post_modified_gmt] => 2021-05-06 14:26:34 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=298614 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [3] => WP_Post Object ( [ID] => 298331 [post_author] => 16 [post_date] => 2021-05-06 06:00:57 [post_date_gmt] => 2021-05-06 13:00:57 [post_content] => Remember the days when verification meant running a simulator with directed tests? (Back then we just called them tests.) Then came static and formal verification, simulation running in farms, emulation and FPGA prototyping. We now have UVM, constrained random testing and many different test objectives (functional, power, DFT, safety, security, cache coherence). Over giant designs now needing hierarchies of test suites. 
ClockEdge is easy to use because the entire flow is focused on clock analysis. It automatically performs gate-level tracing and sensitization, followed by transistor-level simulation. ClockEdge has comprehensive post-processing to generate the reports and the information needed to interpret clock functionality and performance results. As for inputs, ClockEdge uses the same information and data that are used by STA.

Clocks are too important to leave to STA at advanced nodes. A lot needs to be looked at, including power, rail-to-rail behavior and aging, to ensure design success. This is especially true for designs below 10nm, where many of these issues can slip through if STA alone is used to examine clocks. Infinisim has put a lot of work into ClockEdge, and they have gained acceptance with major semiconductor companies working on leading-edge designs. Their website includes more information on the ClockEdge flow.

About Infinisim: Infinisim, Inc. is a privately funded EDA company providing design verification solutions. Founded by industry luminaries, the Infinisim team has over 50 years of combined expertise in design and verification. Infinisim customers are leading-edge semiconductor companies and foundries designing high-performance mobile, AI, CPU and GPU chips. Infinisim has helped customers achieve unprecedented levels of confidence in design robustness prior to tape-out. Customers have been able to eliminate silicon re-spins, reduce chip design schedules by weeks and dramatically improve product quality and production yield. www.infinisim.com

Verification Management the Synopsys Way

Remember the days when verification meant running a simulator with directed tests? (Back then we just called them tests.) Then came static and formal verification, simulation running in farms, emulation and FPGA prototyping. We now have UVM, constrained-random testing and many different test objectives (functional, power, DFT, safety, security, cache coherence). Add giant designs that now need hierarchies of test suites, and giant regressions to ensure backward compatibility, compliance and coverage while aiming to optimize use of compute farms and clouds. It's all become a bit more complicated than it used to be. To achieve the productivity and efficiency gains needed to keep up, automating verification management of this complex and diverse set of objectives becomes essential.

[Figure: Comprehensive verification management]

In a recent recorded video, Kirankumar Karanam (AE Manager, Synopsys Verification Group) walks through Synopsys VC Execution Manager (ExecMan), their answer to this need. The ExecMan solution has five primary goals:
  • Provide a systematic path linking from testplan to execution, debug, coverage and trend analysis
  • Optimize regression turn-around times
  • Minimize debug turn-around times
  • Optimize time to closure
  • Utilize the grid as effectively as possible
The planning phase always intrigues me, linking a design plan to a test plan and subsequently through to analysis and debug. In a short overview there wasn’t time to go into more detail on this topic. I could see this being very useful in establishing traceability between specs and testing.

Optimizing regression turn-around time and debug productivity

One important consideration in optimizing regression throughput is simply load-balancing: packing jobs so that the total turn-around time per regression pass is minimized to the greatest extent possible. The manager helps optimize this balancing, as sketched in the example below. It also apparently does some level of redundant-test identification and reduction, using coverage analytics. There's also a note in the slides on VCS engine performance enhancement in this release (I believe VCS 2020.12). To optimize debug productivity, the manager provides help in several ways. First, it automatically sets up debug runs to run in parallel with ongoing regression runs; you can supply debug hooks up front to drive such runs. There's also mention in the slides of ML-based failure triage and debug assistants, though these were not elaborated in the talk. They are topics I cover from time to time and could be very helpful.
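As a concrete picture of the load-balancing problem mentioned above, here is a minimal sketch of the classic longest-processing-time-first heuristic for packing jobs onto grid hosts. This is not how ExecMan is implemented; the runtimes and host count are invented for illustration.

```python
# A minimal sketch of regression load-balancing as a makespan problem, using
# the classic longest-processing-time-first (LPT) heuristic: sort jobs by
# expected runtime, then always give the next job to the least-loaded host.
# Illustrative only; not VC Execution Manager's actual algorithm.

import heapq

def pack_jobs(job_runtimes, n_hosts):
    """Assign jobs (expected runtimes in minutes) across n_hosts grid hosts."""
    # Min-heap of (accumulated_load, host_id, jobs_on_host).
    hosts = [(0.0, i, []) for i in range(n_hosts)]
    heapq.heapify(hosts)
    for runtime in sorted(job_runtimes, reverse=True):  # longest jobs first
        load, host_id, jobs = heapq.heappop(hosts)      # least-loaded host
        jobs.append(runtime)
        heapq.heappush(hosts, (load + runtime, host_id, jobs))
    return sorted(hosts)

# Hypothetical test runtimes taken from a previous regression pass:
runtimes = [90, 75, 60, 45, 30, 30, 20, 10]
for load, host_id, jobs in pack_jobs(runtimes, n_hosts=3):
    print(f"host {host_id}: {jobs} -> busy {load} min")
# The regression turn-around time is the largest per-host load (the makespan).
```

In a real flow the "expected runtime" would come from historical regression data, which is exactly the kind of information the manager analyzes.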

Optimizing closure turn times and grid utilization

Here there's more focus on test grading by coverage, to filter out tests which don't contribute significantly. Synopsys has also just introduced a feature called Intelligent Coverage Optimization (ICO), which uses ML to bias constraints for randomization, again to minimize low-value simulations. They claim a 5X reduction in turn-around time using this technique for stable constrained-random regressions. Finally, on this general optimization theme, the manager optimizes for grid efficiency, looking at the best way to assign tasks to specific grid hosts. It does this by analyzing the environment and historical data.

More goodies

ExecMan adds further automation for results binning, re-run and debug through Verdi. It further supports coverage analysis through test-grading and plan-grading tools, and can link with bugs tracked in the Redmine issue tracker. Kirankumar wraps up by describing a use-case developed with a memory customer, based in this instance on VC SpyGlass regressions. An interesting point here is that this customer uses Jenkins for regression management, requiring that ExecMan work with that flow. I don't know how far that customer takes their use of Jenkins, but it's encouraging to see tools from the agile world appearing in hardware regression flows. You can watch the recorded video HERE.

Spot-On Dead Reckoning for Indoor Autonomous Robots

One meaning of the word "reckoning" says it is the action or process of calculating or estimating something. But dead reckoning? What does that mean? Believe it or not, we have all deployed dead reckoning to varying degrees of success on different occasions. As an example, consider driving on a multi-lane winding highway when direct sunlight hits our eyes. Although we lose visibility momentarily, we still navigate our vehicle without hitting the median barrier or another vehicle. Of course, if we had been distracted and intermittently ignoring visual cues of the surroundings, the result may have been different. As per Wikipedia: "In navigation, dead reckoning is the process of calculating current position of some moving object by using a previously determined position, or fix, and then incorporating estimations of speed, heading direction, and course over elapsed time." Prior to modern-day navigation technologies, the dead reckoning technique was used for navigation at sea.
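Sketched below in Python is the basic dead-reckoning update that definition describes: advance the last known position using heading, speed and elapsed time. Purely illustrative; the function name and numbers are mine, not CEVA's.

```python
# A minimal dead-reckoning update: advance the last known 2D position using
# heading, speed and elapsed time. Illustrative only; names and numbers are
# invented, and real robot software fuses this with sensor corrections.
import math

def dead_reckon(x, y, heading_rad, speed, dt):
    """Return the new (x, y) position estimate after dt seconds."""
    return (x + speed * math.cos(heading_rad) * dt,
            y + speed * math.sin(heading_rad) * dt)

# A robot at the origin heading 30 degrees at 0.5 m/s for 2 s:
x, y = dead_reckon(0.0, 0.0, math.radians(30), speed=0.5, dt=2.0)
print(f"estimated position: ({x:.3f}, {y:.3f}) m")   # -> (0.866, 0.500) m
# Any error in heading or speed accumulates every step, which is why the
# multi-sensor fusion discussed below matters.
```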
Can this technique still be useful? The answer is yes, as we saw with the highway example. How about in the technology world? How useful can the technique be there? Last month, CEVA unveiled MotionEngine™ Scout, a highly accurate dead reckoning software solution for navigating indoor autonomous robots. And on April 27th, they hosted a webinar titled "Spot-On Dead Reckoning for Indoor Autonomous Robots" to provide deeper insights into that solution. The main presenters were Doug Carlson, Senior Algorithms Engineer in CEVA's Sensor Fusion Business Unit, and Charles Chong, Director of Strategic Marketing at PixArt Imaging. Even with multiple sensors feeding position, orientation and speed data to the navigation system, trajectory error can build up as sensor data is momentarily interrupted or corrupted. Doug and Charles explain how CEVA's solution helps reduce the trajectory error by a factor of up to 5x in challenging surface scenarios. The following are some excerpts based on what I gathered by listening to the webinar.

MotionEngine Scout avoids expensive camera and LiDAR technology-based sensors. Instead, it uses optical flow (OF) sensors. The figure below shows the three different types of sensors that the solution uses, how the sensors are used and what type of data they provide.

[Figure: Sensor characteristics]

For optical flow sensing, CEVA's solution uses PixArt's optical track sensor, part number PAA5101. The PAA5101 is a dual-light LASER/LED hybrid optical technology implementation. This approach yields the best results over a wide range of surfaces: LED performs better on carpets, and LASER works better on hard surfaces. Nonetheless, all three types of sensors can be severely impacted by the environment and thus introduce errors in measurement data, which directly impacts dead reckoning calculations. Refer to the figure below for details on obstacles to accurate dead reckoning performance.

[Figure: Dead reckoning obstacles]

CEVA's solution fuses measurements from these three sensors to achieve significantly better accuracy and robustness. Sensor fusion is the process of combining sensory data from multiple types of sensing sources in a way that produces a more accurate result than is possible with the individual sensors' data alone. MotionEngine Scout leverages 15+ years of CEVA R&D in sensor calibration and fusion. The solution is able to minimize absolute error by a factor of 5-10x over relying on wheel encoder or optical flow sensor data alone. Refer to the figure below.

[Figure: Sensor fusion highlights]

MotionEngine Scout is the software package being released to address the indoor autonomous robot market. It can support residential, commercial and industrial settings. Evaluation hardware will become available to customers in May/June 2021, in the form of a single PCB module that is simple to integrate with a customer's robot platform. As a backgrounder, MotionEngine™ is CEVA's core sensor processing software system. More than 200 million products leveraging the MotionEngine system have been shipped by leading consumer electronics companies into various markets. Check here for a list of MotionEngine-based software packages supporting different market segments. For all the details from the webinar, I recommend you register and listen to it in its entirety. If you are developing indoor autonomous robots, you may want to have deeper discussions with CEVA. Their software package may help you address the challenging pricing requirements of your market.

Why Near-Threshold Voltage is an Excellent Choice for Hearables

In the previous blogs on this topic, we've seen that utilizing near-threshold voltage (NTV) saves incredible amounts of energy, theoretically up to 10x and in practice from 2x to 4x. But there is a price, which makes some applications more suited to NTV than others. This is due to the inevitable performance (speed) loss of NTV as transistor current decreases with operating voltage.
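To see where a theoretical figure like 10x comes from, here is some back-of-envelope arithmetic. The supply voltages are my assumptions for illustration, not Minima's numbers.

```python
# Back-of-envelope arithmetic with assumed voltages (not Minima's numbers):
# switching energy per operation scales roughly with C * V^2, so dropping the
# supply from a nominal 1.1 V to a near-threshold 0.35 V ideally saves:
nominal_v, ntv_v = 1.1, 0.35
ideal_saving = (nominal_v / ntv_v) ** 2
print(f"ideal dynamic-energy saving: {ideal_saving:.1f}x")   # -> 9.9x

# In practice, leakage accumulates over the much longer cycle time at NTV,
# eroding the ideal figure toward the 2x-4x range quoted above.
```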
While some applications require full speed all of the time, almost all IoT applications have widely varying performance requirements. Here, I'll dig into one of the hottest IoT applications, which happens to be an excellent fit for NTV: wireless audio hearables, also known as true wireless systems. To be state-of-the-art in the very competitive hearables space, the system must include keyword spotting (KWS) such as Alexa or Siri. These are always-on systems: because the keyword can come at any random time, the system cannot be (completely) shut down. This already rules out long sleep times, the most common energy-saving method of IoT systems.

A typical KWS system consists of a feature extractor and a neural-network-based classifier, a form of artificial intelligence (AI). For energy efficiency, these are usually preceded by an energy/voice activity detector. This allows the system to run the low-performance energy detection as the only always-on component and to wake up the main processor (via interrupt, etc.) only when energy resembling speech is detected. Of these components, the energy detector is an excellent fit for NTV. A minimal sketch of this gating structure follows below.

In a conventional energy-optimized system, the energy detector is often a hardwired block. Ultimately, time-to-market demands programmability as algorithms and architectures change, and anything hardwired severely limits this. One option would be a "big-little" type of system: a small CPU sharing memory and peripherals with a bigger CPU, such as an Arm Cortex-M0 with a Cortex-M33, or two RISC-V cores. But even this solution has task-switching limitations on memory and switching time. If your software team gets to decide, all tasks will be run on the same core. Then there are the extra silicon and verification costs that go into a multi-core solution.

Minima's approach to an NTV system makes a single-core solution possible, one that can scale its energy together with its performance. As seen in Figure 1, using all of the energy curve (and not just a small sliver at the top) allows for optimizing energy no matter how much performance spread your application requires, such as with keyword detection in hearables.

[Figure 1: Minima's approach to NTV operation enables a CPU to scale its energy for simple parts of an algorithm as well as the more complex parts, such as in hearables IoT applications.]

Even better, Minima's approach to an NTV system maximizes the use cases of the CPU. Modern KWS algorithms are heavily optimized for small, embedded-class CPUs, but today's deep-submicron processes mean that often there is still room left at the top to design the system for more performance. So when your algorithm and software teams want more performance for a product with a bigger battery, you can reuse the system. For example, adding a 0.9V operating point in Figure 1 might allow the same chip to be used as a speaker-driver feedback DSP. Greater task granularity in your application may also be possible, for example running different Bluetooth layers or neural-network layers at different operating points. These examples of being energy frugal apply to a large number of other applications. Anywhere you need AI, there are probably energy-saving possibilities in using simpler algorithms part of the time, enabled by Minima's approach to an NTV system.
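Here is that always-on gating idea as a minimal Python sketch. The threshold, frame format and placeholder classifier are invented for illustration; this is not CEVA or Minima code.

```python
# A minimal sketch of the always-on structure described above: a cheap energy
# detector gates a (placeholder) keyword classifier, so the expensive stage
# only runs when an audio frame looks like speech. Everything here is a
# hypothetical stand-in for the real feature extractor and neural network.

def frame_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def keyword_classifier(frame):
    """Placeholder for the neural-network classifier stage."""
    return "keyword" if max(frame) > 0.9 else "no keyword"

ENERGY_THRESHOLD = 0.01  # tuned per microphone/environment in a real system

def always_on_step(frame):
    # Low-performance stage: runs every frame, a natural fit for NTV operation.
    if frame_energy(frame) < ENERGY_THRESHOLD:
        return "asleep"
    # Wake the main processor (a higher operating point) on candidate speech.
    return keyword_classifier(frame)

print(always_on_step([0.001] * 160))   # quiet room -> "asleep"
print(always_on_step([0.95] * 160))    # loud, speech-like -> runs classifier
```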
https://minimaprocessor.com/

Transistor-Level Static Checking for Better Performance and Reliability

My first transistor-level IC design job was with Intel, doing DRAM designs by shrinking the layout to a smaller process node. It also required running lots of SPICE simulations with manually extracted parasitics to verify that everything was operating OK, meeting the access time specifications and power requirements across PVT corners. I'd do a SPICE run and show the waveform and timing results to a senior circuit designer, and then he'd ask for the schematics and say, "Go and make this change, then rerun SPICE again." What a slow and laborious process it was to migrate a high-volume memory part. Eventually I learned the art and science of DRAM design and became a senior circuit designer myself, responsible for eyeballing other designers' schematics, reviewing their waveforms, and telling them, "Go and make this change, then rerun SPICE again." The EDA industry has paid attention to the challenges of transistor-level circuit designers over the years and has come up with something beyond just running lots of SPICE circuit simulations. That tool category is known as static checking, and it complements what SPICE can tell you. A new white paper from Siemens EDA was just released on static checking, and I'll give you an overview of what static checks can help you quickly verify.

Power-Intent Checks

IP blocks within an SoC can employ many power domains, and that requires some transistor-level design control with:
  • Voltage regulators
  • Header and footer switches
  • Level shifters
  • Isolation cells
  • State retention cells
In the following block diagram, the circuit designer needs to place and verify that there is a level shifter between the power domains connected to VCC1 and VCC2, that transistors with thick oxide are connected to the high-voltage supply, and that an isolation cell is placed between Blocks 2 and 3, because Block 2 has gated power.

[Figure: Power-intent checks]

A static checker can automatically detect every power domain crossing in a chip and verify that level shifters and isolation cells are properly placed. You really want to be using a transistor-level verification tool for tricky tasks like this to ensure thorough verification. Catching and fixing a power-intent bug before tape-out makes economic sense.
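As a flavor of what such a rule looks like mechanically, here is a minimal sketch of a power-domain crossing check. The netlist representation and all names are invented; a real power-intent checker works from UPF and the actual transistor-level netlist.

```python
# A minimal sketch of the kind of rule a static power-intent checker applies:
# walk every net that crosses between power domains and flag crossings that
# lack a level shifter, or an isolation cell when the source domain is gated.
# The netlist format and names below are invented for illustration.

# (net, source_domain, sink_domain, special_cell_on_net)
crossings = [
    ("d_bus", "VCC1", "VCC2", "level_shifter"),
    ("ctrl",  "VCC1", "VCC2", None),            # missing level shifter
    ("en",    "VCC2_gated", "VCC3", "isolation_cell"),
    ("ack",   "VCC2_gated", "VCC3", None),      # missing isolation cell
]

def check_crossings(crossings):
    for net, src, dst, cell in crossings:
        if src.endswith("_gated") and cell != "isolation_cell":
            yield f"{net}: {src}->{dst} needs an isolation cell"
        elif not src.endswith("_gated") and src != dst and cell != "level_shifter":
            yield f"{net}: {src}->{dst} needs a level shifter"

for violation in check_crossings(crossings):
    print(violation)
```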

ESD Protection Verification

I remember placing DRAM chips onto a tester during characterization; we always wore a conductive strap connected to the tester, because walking on carpet builds up electrostatic charge, and the resulting electrostatic discharge (ESD) creates a large voltage that can damage the IC as current flows into the chip. IO cells on a chip using multiple power domains have special diodes to shunt the high ESD currents away from the rest of the chip.

[Figure: ESD with multiple power/ground domains]

A static checker can find all of these ESD elements, including parasitic resistances and capacitances, and calculate ESD safety limits much faster than running an exhaustive number of SPICE simulations. Vias on interconnect layers in the ESD path can also be statically checked for electromigration compliance.
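One simple static check in this category can be sketched as summing parasitic resistance along a discharge path and comparing the resulting voltage drop against a limit. This is a deliberately simplified picture; the numbers are made up, and real limits come from ESD targets and process data.

```python
# Illustrative only: sum the parasitic resistance along the discharge path
# from a pad, through the clamp diodes, to the supply rail, and check that
# the voltage drop at the target ESD current stays under a failure limit.
# All values below are invented for the example.

ESD_CURRENT_A = 1.33          # e.g. roughly a 2 kV human-body-model event
MAX_DROP_V = 5.0              # hypothetical failure threshold for thin oxides

# Resistances (ohms) of segments along one pad-to-rail discharge path.
path_segments = {"pad_to_diode": 0.4, "diode_on": 1.1,
                 "diode_to_clamp": 0.6, "clamp_to_rail": 0.5}

total_r = sum(path_segments.values())
drop = total_r * ESD_CURRENT_A
status = "PASS" if drop <= MAX_DROP_V else "FAIL"
print(f"path R = {total_r:.2f} ohm, drop = {drop:.2f} V -> {status}")
```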

Voltage-Aware Spacing Checks

There's a reliability concern called Time-Dependent Dielectric Breakdown (TDDB), where the allowed spacing of wires depends on the voltages on those wires.

[Figure: Voltage-aware DRC checking]

A typical DRC tool doesn't know about voltages, and trying to run dynamic SPICE simulations on a full chip isn't practical, so the smarter approach is a tool that can do static voltage propagation and topology checks for TDDB.
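Here is a minimal sketch of the idea: propagate static supply voltages to nets, then require extra spacing between neighbors as a function of their voltage difference. The rule numbers are invented; real voltage-dependent spacing rules come from the foundry rule deck.

```python
# A minimal sketch of a voltage-aware spacing check: assign each net its
# statically propagated voltage, then require spacing between neighboring
# wires as a function of their worst-case voltage difference.
# The rule below is a toy; real rules come from the foundry deck.

def required_spacing_nm(delta_v):
    """Toy TDDB rule: base spacing plus an increment per volt of difference."""
    return 40 + 20 * delta_v   # nm

net_voltage = {"vdd_core": 0.75, "vdd_io": 1.8, "net12": 0.75}

# (net_a, net_b, actual_spacing_nm) pairs from layout extraction.
neighbors = [("vdd_core", "net12", 42), ("vdd_io", "net12", 48)]

for a, b, spacing in neighbors:
    delta_v = abs(net_voltage[a] - net_voltage[b])
    need = required_spacing_nm(delta_v)
    if spacing < need:
        print(f"TDDB violation: {a}-{b} spaced {spacing} nm, needs {need:.0f} nm")
```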

Analog Layout-Dependent Checks

AMS designers know about taking layout precautions to ensure robust operation, like:
  • Device layout symmetry
  • Current orientation matching
  • Dummy device insertions
  • Common centroid and pitch between devices
  • Electrical parameter matching
In the following schematic for a fully differential mixer, the layout designer needs to use symmetry between:
  • M1, M2, M4, M5
  • M3, M6
  • M7, M8
  • RI

[Figure: Fully differential mixer]
Small layout differences can impact performance, and a static checker can quickly identify any layout-dependent violations.

Summary

Yes, SPICE circuit simulation is still heavily used for transistor-level IP design and verification, but at advanced FinFET process nodes there are so many more effects to verify that it makes sense to add a static checker to confirm that your designs meet power intent, ESD protection, voltage-aware spacing, and analog layout-dependent requirements. Using the right tool for the right task makes the life of a chip designer less stressful, and it ensures that silicon will perform to spec and operate reliably. The full white paper is online here.


Achronix Next-Gen FPGAs Now Shipping

Earlier in April, Achronix made a product announcement with the headline "Achronix Now Shipping Industry's Highest Performance Speedster7t FPGA Devices." The press release drew attention to the fact that the 7nm Speedster®7t AC7t1500 FPGAs have started shipping to customers ahead of schedule. In the complex product world of semiconductors, hitting a production silicon milestone ahead of schedule is a significant accomplishment. The copy stated that the product includes innovative architectural features making it ideal for data acceleration applications. It also spotlighted the industry's first 2D network-on-chip (NoC), an architectural innovation that eliminates complex routing bottlenecks found in traditional FPGAs.

Amid these highlighted aspects, it is easy to miss the bigger story. No, I'm not talking about the upcoming SPAC merger with ACEV to become a publicly traded company. Yes, that's a significant story as well and promises to bring a lot of benefits to Achronix's customers. But there is an even bigger story, and that story is about the effect Achronix's solutions and product strategy are expected to have on the industry. To fully appreciate the potential impact, we have to review the changes that have taken place from the 1980s to today. That backdrop provides the rationale for what, why and how Achronix's timeless solutions are expected to solve long-standing chronic problems.

1980s to Now

Market evolution: There was a time when integrated circuit (IC) chips were referred to as computer chips, because there was only one market: the computing market. Product performance was critical; cost and power consumption were secondary. That was the situation up through the 1980s. From the 1990s, the communications market started growing rapidly. Product performance was still the dominant driver, although cost also started becoming important. From the 2000s, the consumer market for electronic products started growing rapidly, and cost and power consumption became dominating factors. The late 2000s saw the lines between the computing, communications and consumer markets fade away in a major way. From the 2010s, big data, e-commerce, data security and cloud computing became major drivers. And starting around 2015, we entered the artificial intelligence (AI) era, with its emphasis on the edge computing paradigm. Refer to the figure below.

[Figure: 1980s to now, market changes]

But this evolution did not lead to a super-monolithic market segment. On the contrary, a number of smaller market segments have been created, with requirements primarily driven by the use case the devices are deployed for.

Semiconductor process node/technology: To support the above market evolution, foundries have been pushed to develop multiple flavors of each process node: one for high performance/speed, one for low power, one for ultra-low power, and so on.

Chip design cost and cycle times: With the introduction of each advanced process node, chip design cycle times got longer and chip design costs went higher.

Market attraction of ASICs: An ASIC-based product was a no-brainer when the target market segment was large, the development cost was low and the design cycle time was short. This was the case up until the early 2010s. As the market evolved, monolithic large market segments fragmented into many smaller ones. This, combined with increased development cost and longer design cycle times, made it difficult to make a business case for ASICs.

Product cycles: Electronic product cycles that used to be around three years back in the 1990s kept getting shorter. With explosive growth in AI-driven applications and rapid advances in AI techniques, product cycles got compressed tremendously.

Market attractiveness of traditional FPGAs: Although both the ASIC market and the traditional FPGA market were founded in the 1980s, the two did not experience the same growth path. For a long period, FPGAs were predominantly used for prototyping and in low-volume, high-margin products. It remained this way until the communications market started taking off and high-speed I/O were added to FPGAs.

Chronic problems:
  • Processors: consume too much power as a tradeoff for maximum flexibility
  • ASICs: lack flexibility (once implemented) as a tradeoff for optimized performance and power
  • Traditional FPGAs: not as optimized as ASICs, not as flexible as processors, with intrachip performance bottlenecks
  • ASSPs: not as optimized as ASICs, a tradeoff for amortizing R&D cost over a larger market

Now and Onward

What if we could solve the chronic problems without too much of a tradeoff? How about increasing addressable market size and extending product life without having to re-spin silicon? With the market moving toward an AI-driven, edge-centric, fast-changing, data-accelerated product space with short life cycles, the stage is set for a timeless solution to fill the demand. The increased interest in leveraging a chiplet methodology for developing semiconductor products is not a coincidence.

Achronic (timeless) solutions: Achronix, through its ACE design tools and partner tool ecosystem, makes it easy for customers to design their products. Customers may tap the Speedster® family of products, which eliminates traditional FPGA problems. Customers could also tap Achronix's various eFPGA cores to implement their products and eliminate other chronic problems. I expect product developers to adopt a chiplet methodology and incorporate eFPGAs as and when applicable. In essence, customers could achieve earlier tape-out, address more SKUs and extend product lifecycles.

[Figure: Differentiated competitive position]

If you are developing products in any of these high-growth, fast-changing markets (refer to the figure above), you may want to explore ways to benefit from Achronix's offerings by holding deep-dive discussions with them.

You know you have a problem when 60 Minutes covers it!

  • Chip shortage on 60 Minutes: the average Joe is now aware of the chip issue
  • Intel sprinkling fairy dust (money) on New Mexico & Israel
  • Giving up on buybacks and dividends
  • Could Uncle Sam give a handout to Intel?

You normally don't want to answer the door if the 60 Minutes TV crew is outside, as it likely doesn't mean good things. But in the case of the chip industry, the shortage that has been talked about in all media outlets has finally come home to prime time. The chip shortage has impacted industries across the board, from autos to appliances to cigarettes, so it has now gotten prime-time attention in a CBS 60 Minutes program on the chip shortage. 60 Minutes got hold of some of our past articles, including our recent ones about the shortage and China risks, and contacted us. We gave them a lot of background information and answered questions about the industry and the shortages, as we wanted to help provide an accurate picture. Overall, we think they did a great job representing what is going on in the industry and were both accurate and informative.

Does Intel have its hand out?

We have previously mentioned that we thought Intel was looking for government help and maybe a handout, which was touched upon up front in Pat Gelsinger's interview. While certainly not directly asking for money, it certainly sounds like Intel wouldn't say no. Intel was clearly shopping the idea under the previous administration in the White House as well as under previous Intel management.
The chip shortage both amplifies that prior request and makes it more timely. It gets even more timely when it is put under the banner of infrastructure repair.

Intel is going to hemorrhage money

We have said that Intel's financials were going to get a lot worse before they got any better. We suggested they would triple spend: 1) spend to have TSMC make product, 2) spend to catch up to TSMC (such as on EUV and other tools), and 3) spend to build extra capacity to become a foundry. Intel's Gelsinger even said on 60 Minutes that they are not going to be doing stock buybacks.

Intel in Israel & New Mexico

Intel has just announced that, in addition to the $20B for the two new fabs it is building in Arizona, it is spending $3.5B in New Mexico on packaging technology and capacity. Intel is also spending $200M on a campus in Haifa, $400M for Mobileye in Israel and $10B to expand its 10nm fab in Kiryat Gat, Israel. It's interesting to note that the spend in Israel is not mentioned on Intel's newsroom website, likely because it doesn't fit the "move technology & jobs back to the US" message that Gelsinger espoused on 60 Minutes. Between spending on production at TSMC, fixing Intel, building foundries, New Mexico, Mobileye and Israel (likely Ireland as well), Intel is going to be raining down money all over.

Mark Liu on 60 Minutes

Mark Liu was also interviewed, as the clear leader in technology and capacity in the chip industry. We think that Liu was very accurate and straightforward when he said that TSMC was surprised that Intel had fumbled. He is also clearly on the side of the industry that downplays the shortages and thinks they will be short-lived. As to the "repatriation" of the chip industry to the US, as expected he sees no reason for it. He also stayed away from commentary about the "Silicon Shield" that leadership in chips provides to Taiwan. TSMC is clearly in the driver's seat, and that is not likely to change any time soon.

The Stocks

Given the spending and the gargantuan task ahead, we have suggested avoiding Intel's stock, as it is going to both take longer and cost more than anyone suggests, and the odds of success aren't great. Gelsinger is on a world tour sprinkling fairy dust around, and he will need plenty of luck as we go forward. We would not be surprised if the government does indeed write Intel a check, as Intel is the US's only and last hope of getting back in the semiconductor game, which is so critical to our future, not to mention our short-term needs. All this spend will do zero to help the shortage, but the shortage did at least bring these issues (many of which we have been talking about for years) to the forefront of people's minds. We do continue to think that the semiconductor equipment industry will likely benefit big time, especially ASML, as they have a lock on EUV. We also think equipment companies can make a few bucks on their old 6" and 8" tools if they can resurrect manufacturing, as those are the fabs in shortest supply.
Synopsys Debuts Major New Analog Simulation Capabilities

Just prior to this year's Synopsys User Group (SNUG) meeting, I had a call with Hany Elhak, Group Director of Product Management and Marketing at Synopsys, to talk about their latest announcements for analog simulation. Synopsys usually has big things to talk about each year around this time, and this year is no exception. Hany had a set of announcements to discuss that represent a major leap forward for their entire analog simulation lineup. Under the moniker of PrimeSim Continuum, they are rolling out a unified workflow for all their circuit simulation technologies, a GPU-accelerated SPICE and a new FastSPICE architecture.

Hany talked about how, over the last five years, advanced-node SoCs have changed such that increased capacity and speed are needed for transistor-level simulation. IOs are running at data rates over 100 Gb/s, embedded memories are larger and faster, and analog/custom content is increasing steadily. All of these factors and others translate into the need to perform more analog simulations across many parts of today's SoCs and their memory subsystems.

[Figure: Synopsys analog simulation]

Previously there has been a balkanization of analog simulation technologies for each domain, such as libraries, memories, power distribution networks, IO circuits, etc. Likewise, with the rate of CPU performance scaling slowing down, the increases in capacity and throughput needed to keep up with design size and multi-corner analysis have not been available. Advanced process node designs have reduced margins and increased parasitics, making analog simulation analysis even more important.

For the first time ever, Synopsys is taking advantage of GPUs to gain massive performance improvements. GPUs always seemed like they could offer big performance gains, but until they supported high-precision floating point they were not suitable for SPICE. GPU performance is improving at roughly 1.5X year over year, a rate that compounds to roughly 1000X over CPUs alone within two decades. Of course, Synopsys has also improved the overall performance of PrimeSim SPICE by 3X in the newest release, and there is good scaling with additional CPUs. However, adding GPUs can add an additional 10X over this.

PrimeSim Pro, their FastSPICE offering, now has a new architecture that runs 2-5X faster and can scale to handle billions of elements, opening the doors to running on larger designs for more complete SoC and memory subsystem verification. GPU acceleration is also available now in PrimeSim Pro. Synopsys has enhanced power block detection, optimized event propagation, and developed core-independent partitioning for PrimeSim Pro. There are also improvements in advanced modeling: it supports math-based load models and net-based coupling cap handling, and it has a new RC reduction algorithm.
These changes open up capacity for massive PDNs and simulation with full parasitics. In particular, Hany pointed out how this will enable full simulation of multi-die SoC memory subsystems. His example showed full simulation of six 16 Gb die and their controller. All of this would still be cumbersome to set up, run, and interpret without a unified simulation environment. Synopsys also announced PrimeWave, which provides a comprehensive environment for all advanced analyses. It works with all the PrimeSim Continuum simulators for all modes of analysis. According to Hany, it will be used for analog, memory, signal/power integrity, and RF. PrimeWave has integrated waveform viewing and post-processing, inside a flexible and programmable environment. PrimeSim Continuum is tightly coupled with Custom Compiler for analog design acceleration. It is also integrated with VCS, so mixed simulation will be easier and faster. Lastly, IP characterization will also benefit from PrimeLib’s integration into this design platform.

As is always the case, Synopsys has done extensive work during product development with leading semiconductor companies to ensure the flow is fully tested and meets end-user requirements. Their announcement is accompanied by endorsements from KIOXIA, Samsung Memory, NVIDIA and Samsung Foundry. They each report consistent improvements in runtime and accuracy. They also point to the common workflow as a big step forward in productivity.

Synopsys continues to maintain its track record of innovation and investment in its leading suite of circuit simulation tools. This should come as no surprise to anyone, but the ambitious changes in this announcement are gratifying to see. What is especially interesting is that they are making use of the formidable power of GPUs for the first time. GPUs offer massively parallel computing for applications that can be architected to take advantage of them. With GPUs now supporting double-precision floating-point operations and high-capacity, high-bandwidth memory, they are an excellent vehicle for SPICE-based simulation. The full announcement and all the details are available on the Synopsys website. [post_title] => Synopsys Debuts Major New Analog Simulation Capabilities [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => closed [post_password] => [post_name] => synopsys-debuts-major-new-analog-simulation-capabilities [to_ping] => [pinged] => [post_modified] => 2021-05-01 10:43:37 [post_modified_gmt] => 2021-05-01 17:43:37 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=298490 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 10 [current_post] => -1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 298816 [post_author] => 28 [post_date] => 2021-05-07 10:00:12 [post_date_gmt] => 2021-05-07 17:00:12 [post_content] =>
Dan and Mike are joined by Sudhir Mallya, vice president of corporate and product marketing at OpenFive. We explore 2.5D design and the role chiplets play. Current technical and business challenges are discussed, as well as an assessment of how the chiplet market will develop and what impact it will have. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Sudhir Mallya is Vice President of Corporate and Product Marketing. He is responsible for custom silicon product marketing, technology roadmaps and business model innovation, corporate marketing initiatives, and strategic customer and partner alliances. He was previously at Toshiba, where he led their North American silicon business unit with a focus on data center and automotive applications. He is based in Silicon Valley and has held executive positions in engineering, marketing, and business development at leading semiconductor companies. He has led multiple $100M+ global strategic customer engagements from very early concept to high-volume production. He has a BSEE from the Indian Institute of Technology, Bombay, and an MSEE from the University of Cincinnati.
[post_title] => Podcast EP19: The Emergence of 2.5D and Chiplets in AI-Based Applications [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => closed [post_password] => [post_name] => podcast-ep19-the-emergence-of-2-5d-and-chiplets-in-ai-based-applications [to_ping] => [pinged] => [post_modified] => 2021-05-07 10:08:51 [post_modified_gmt] => 2021-05-07 17:08:51 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?post_type=podcast&p=298816 [menu_order] => 0 [post_type] => podcast [post_mime_type] => [comment_count] => 0 [filter] => raw ) [comment_count] => 0 [current_comment] => -1 [found_posts] => 7658 [max_num_pages] => 766 [max_num_comment_pages] => 0 [is_single] => [is_preview] => [is_page] => [is_archive] => [is_date] => [is_year] => [is_month] => [is_day] => [is_time] => [is_author] => [is_category] => [is_tag] => [is_tax] => [is_search] => [is_feed] => [is_comment_feed] => [is_trackback] => [is_home] => 1 [is_privacy_policy] => [is_404] => [is_embed] => [is_paged] => [is_admin] => [is_attachment] => [is_singular] => [is_robots] => [is_favicon] => [is_posts_page] => [is_post_type_archive] => [query_vars_hash:WP_Query:private] => e64def1ba05c75a0d2020dce1b92d1d6 [query_vars_changed:WP_Query:private] => 1 [thumbnails_cached] => [stopwords:WP_Query:private] => [compat_fields:WP_Query:private] => Array ( [0] => query_vars_hash [1] => query_vars_changed ) [compat_methods:WP_Query:private] => Array ( [0] => init_query_flags [1] => parse_tax_query ) [tribe_is_event] => [tribe_is_multi_posttype] => [tribe_is_event_category] => [tribe_is_event_venue] => [tribe_is_event_organizer] => [tribe_is_event_query] => [tribe_is_past] => [tribe_controller] => Tribe\Events\Views\V2\Query\Event_Query_Controller Object ( [filtering_query:protected] => WP_Query Object *RECURSION* ) )

Podcast EP19: The Emergence of 2.5D and Chiplets in AI-Based Applications
by Daniel Nenni on 05-07-2021 at 10:00 am

Dan and Mike are joined by Sudhir Mallya, vice president of corporate and product marketing at OpenFive. We explore 2.5D design and the role chiplets play. Current technical and business challenges are discussed, as well as an assessment of how the chiplet market will develop and what impact it will have. The views, thoughts, and… Read More

CEO Interview: Srinath Anantharaman of Cliosoft
by Daniel Nenni on 05-07-2021 at 6:00 am

Srinath Anantharaman founded Cliosoft in 1997 and serves as the company’s CEO. He has over 40 years of software engineering and management experience in the EDA industry. Srinath graduated with a Bachelor of Technology from IIT/Kanpur and an MSEE from Washington University in St. Louis.

The last time we talked to you was in 2017. Read More


A New Approach for Clock Analysis below 10nm
by Tom Simon on 05-06-2021 at 10:00 am

Proper clock functionality and performance are essential for SoC operation. Static timing analysis (STA) tools have served well for verifying clocks, yet with new advanced process nodes, lower operating voltages, higher clock speeds and higher reliability requirements, STA tools alone can’t perform the kinds of analysis… Read More


Verification Management the Synopsys Way
by Bernard Murphy on 05-06-2021 at 6:00 am

Remember the days when verification meant running a simulator with directed tests? (Back then we just called them tests.) Then came static and formal verification, simulation running in farms, emulation and FPGA prototyping. We now have UVM, constrained random testing and many different test objectives (functional, power,… Read More


Spot-On Dead Reckoning for Indoor Autonomous Robots
by Kalar Rajendiran on 05-05-2021 at 10:00 am

One meaning of the word “reckoning” is the action or process of calculating or estimating something. But dead reckoning? What does that mean? Believe it or not, we have all deployed dead reckoning, with varying degrees of success, on different occasions. As an example, when driving on a multi-lane winding highway and direct … Read More
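As an aside, here is a minimal sketch of the dead-reckoning computation itself, an illustration of the general technique rather than code from the article; the function name, speeds, and headings below are assumptions. Position is propagated from a known start using only speed and heading over time, so any sensor bias compounds into drift, which is why sensor quality matters so much for indoor robots.

import math

def dead_reckon(x, y, heading_rad, speed, dt):
    """One dead-reckoning step: advance position by speed * dt along heading."""
    x += speed * math.cos(heading_rad) * dt
    y += speed * math.sin(heading_rad) * dt
    return x, y

# Hypothetical robot: start at the origin, heading 30 degrees, 0.5 m/s,
# updated every 0.1 s for 10 s. With no absolute position fix, any bias in
# heading or speed accumulates into position error over time.
x, y = 0.0, 0.0
for _ in range(100):
    x, y = dead_reckon(x, y, math.radians(30.0), 0.5, 0.1)
print(f"Dead-reckoned position after 10 s: ({x:.2f} m, {y:.2f} m)")  # ~(4.33, 2.50)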


Why Near-Threshold Voltage is an Excellent Choice for Hearables
by Lauri Koskinen on 05-05-2021 at 6:00 am

In the previous blogs on this topic, we’ve seen that utilizing near-threshold voltage (NTV) saves incredible amounts of energy, theoretically up to 10x and in practice from 2x to 4x. But there is a price, which makes some applications better suited for NTV than others. This is due to the inevitable performance (speed) loss of NTV as … Read More
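For a rough sense of where savings in that range come from, here is a back-of-the-envelope illustration (not from the article; the voltage operating points are assumptions): dynamic energy per operation scales roughly as CV², so dropping from a nominal ~0.9 V supply to a near-threshold ~0.5 V yields about (0.9/0.5)² ≈ 3.2X, consistent with the 2x to 4x practical range cited.

# Back-of-the-envelope NTV saving, assuming dynamic energy per op ~ C * V^2.
# The voltage operating points are illustrative assumptions, not article figures.
v_nominal = 0.9  # assumed nominal supply voltage, in volts
v_ntv = 0.5      # assumed near-threshold operating voltage, in volts

# Capacitance cancels in the ratio, leaving (V_nominal / V_ntv)^2.
saving = (v_nominal / v_ntv) ** 2
print(f"Approximate dynamic-energy saving at NTV: {saving:.1f}x")  # ~3.2x

# Caveat: circuits run slower at NTV, so leakage energy's share grows, which is
# part of the performance/energy price the article describes.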


Transistor-Level Static Checking for Better Performance and Reliability
by Daniel Payne on 05-04-2021 at 10:00 am

My first transistor-level IC design job was with Intel, doing DRAM designs by shrinking the layout to a smaller process node. The job also required running lots of SPICE simulations with manually extracted parasitics to verify that everything was operating OK, meeting the access-time specifications and power requirements across PVT … Read More


Achronix Next-Gen FPGAs Now Shipping
by Kalar Rajendiran on 05-04-2021 at 6:00 am

Earlier in April, Achronix made a product announcement with the headline “Achronix Now Shipping Industry’s Highest Performance Speedster7t FPGA Devices.” The press release drew attention to the fact that the 7nm Speedster®7t AC7t1500 FPGAs have started shipping to customers ahead of schedule. In the complex product world… Read More


You know you have a problem when 60 Minutes covers it!
by Robert Maire on 05-03-2021 at 2:00 pm

-Chip shortage on 60 Minutes- Average Joe now aware of chip issue
-Intel sprinkling fairy dust (money) on New Mexico & Israel
-Give up on buybacks and dividends
-Could Uncle Sam give a handout to Intel?

You normally don’t want to answer the door if a 60 Minutes TV crew is outside, as it likely doesn’t mean good things.… Read More


Synopsys Debuts Major New Analog Simulation Capabilities
by Tom Simon on 05-03-2021 at 10:00 am

Just prior to this year’s Synopsys User Group (SNUG) meeting, I had a call with Hany Elhak, Group Director of Product Management and Marketing at Synopsys, to talk about their latest announcements for analog simulation. Synopsys usually has big things to talk about each year around this time – this year is no exception. Hany… Read More