
Malcolm Penn and the Semiconductor Industry Update Webinar
by Daniel Nenni on 09-06-2024 at 6:00 am



If you think my 40 years of semiconductor experience is impressive, Malcolm Penn has been at it for 55 years, starting with the first commercial integrated circuit. Malcolm’s company Future Horizons has been providing legendary semiconductor industry forecasts since 1989. I have attended dozens of these events over the years, both live and virtual, and Malcolm has been featured on SemiWiki many times in our blogs and podcasts. His next industry update is on Tuesday, 9/10/2024, at 3pm UK time, which is 7am PT. It is definitely worth your time (a cup of tea is also a good idea). I will see you there.

I have been getting weird vibes from the ecosystem over the summer so it will be interesting to see what Malcolm has to say. I will provide my feedback on the Friday following the webinar so stay tuned.

Semiconductor Industry Update Webinar – Registration Now Open

Taken at face value, the headline annualized growth numbers look good, with Memory leading the charge and AI GPUs flying off the shelf, but delving into the detail reveals a much bleaker picture. Inventory levels remain stubbornly high, dampening unit demand, and the global economy is weak, neither of which bodes well for the industry moving forward. Now, more than ever, is the time for cool heads and sound reasoning.

• Is the 2024 growth momentum sustainable or a false dawn?
• When will IC unit growth return to the industry?
• Will China flood the chip market with non-leading-edge ICs?
• Is the current AI boom hype, hope or reality?
• What does the outlook for 2025 hold?

Find out the answers to these and other key questions at Future Horizons’ IFS2024 Autumn industry update outlook webinar, Tuesday 10 September 2024, 3pm BST. Registration is now open.

Full Details HERE. Register HERE.

What Will You Hear
This one-hour broadcast will focus on the chip industry outlook, including:
• What happened so far in 2024 vs. January IFS forecast?
• What is the updated market outlook for 2024?
• What’s happening in CapEx, investment, and onshoring trends?
• What are the likely opportunities and implications for the industry?
• How to build resilient business strategies, plus
• Opportunity to ask specific questions in advance, during and after the webinar.

Who Should Attend
All companies, small and large, from startups to established market leaders.
• Key decision-makers in the design, manufacture, or supply of semiconductors.
• Government organizations in industry, trade, and investment.
• Those involved in M&A, investment, or finance within the electronics industry.
• Senior industry executives planning future marketing strategy.

Your Trusted Industry Advisor
Founded in 1989, Future Horizons has been in the business of forecasting and analyzing the semiconductor market for 35 years and has been a trusted advisor to governments, investors, startups, and most of the top global semiconductor firms. Our forecast track record and hands-on industry experience, dating back to the first commercial IC, are longer than those of any other analyst and most industry execs, making this a must-attend event for key decision makers in semiconductors, electronics, and all related industries. Our accurate and insightful analysis consistently helps our clients save time and money.

Fee
For a small investment of £150 plus £30 UK VAT, you will gain accurate industry insight to make good strategic decisions in these uncertain times.
• Discount available for 3 or more attendees from the same company/organisation
• Can’t attend? No need to miss out, order the webinar video recording and slides instead.
• If you have already registered, or the event is not directly relevant to you, please pass it along to a colleague or associate.
• The event can also be repeated on-line or in-person in-house for your added convenience and flexibility.

Malcolm Penn
Chairman & CEO

Also Read:

The Recovery has Started and it’s off to a Great Start!

The Billion Dollar Question – Single or Double Digit Semiconductor Decline

Podcast EP136: Semiconductor Industry Update with Malcolm Penn



Powering the Future: The Transformative Role of Semiconductor IP
by Saroj Chouhan on 09-05-2024 at 10:00 am


In the rapidly changing landscape of technology, the semiconductor industry serves as a cornerstone, fueling innovation across multiple sectors. Central to this industry is semiconductor Intellectual Property (IP), a vital component that frequently escapes public attention yet is instrumental in determining the future of electronic devices.

Understanding Semiconductor IP
Semiconductor IP, or Semiconductor Intellectual Property, encompasses pre-designed and pre-verified components, which are essential in the development of semiconductor chips and Integrated Circuits (ICs). This comprises critical elements that semiconductor companies can license or reuse, such as memory controllers, processor cores, and interface protocols. They play a crucial role in the creation and development of advanced and innovative electronic products ranging from smartphones to automotive systems.

The semiconductor IP market is witnessing significant growth due to the increasing demand for semiconductor solutions that offer greater performance and enhanced energy efficiency. Fortune Business Insights states that the global market for semiconductor IP will generate revenue of USD 8.53 billion by 2029.

Benefits of Semiconductor IP in Modern Chip Design

Time-to-Market Acceleration
Semiconductor IP enables designers to make use of pre-designed and pre-verified components, minimizing development time because devices need not be created from scratch. This accelerates chip development and reduces time-to-market, providing companies with a competitive edge.

Cost Savings
The creation of semiconductor chips from scratch requires significant time, expertise, and resources. However, with the adoption of semiconductor IP, businesses can reduce their development expenses by reusing or licensing the existing IP elements without the need to develop them again internally.

Performance Optimization
Semiconductor IP blocks offer optimal performance and energy efficiency. Companies can benefit greatly from the expertise and optimization efforts of IP providers by incorporating these blocks into their designs.

Design Quality and Reliability
Semiconductor IP is usually extensively tested and validated, thereby delivering enhanced quality and reliability. Chip designers can take advantage of IP providers’ knowledge and experience by integrating proven IP blocks into their designs, leading to the creation of more durable and dependable semiconductors.

Access to Advanced Technology
Semiconductor IP providers frequently remain at the leading edge of technological innovation, offering IP blocks equipped with advanced features and functionalities. Semiconductor designers can take advantage of the most recent technological developments and stay ahead of the competition by using these IP solutions to create cutting-edge semiconductor products.

Driving Forces Behind Semiconductor IP Growth

Growing Embrace of Wireless Technology Devices
The rising use of wireless technology devices and increased investments by key players in advanced and innovative wireless products are driving market growth. Major companies operating in this market are investing globally in the development of wireless technologies to cater to consumer demand. This surge in wireless technology development boosts demand for Intellectual Property (IP) solutions, including interface IP, silicon-based design IP (ASIC), and processor IP. These components are essential for manufacturing mobile and wireless devices.

Surging Popularity of Advanced Technology-driven Consumer Electronics
The increasing adoption and advancement of technology-driven consumer electronics worldwide have significantly fueled market growth for semiconductor IP. Semiconductor IP solutions are integral to the production of various electronic devices, including smartphones, wearables, headphones, and other innovative home products. Memory and interface IP are utilized in wearable devices to enhance everyday experiences by providing real-time feedback. As the demand for wearables and smart connected devices rises globally, this trend is projected to propel market growth further.

Understanding the Limitations of Semiconductor IP
IP theft and counterfeiting result in high costs, especially in ASIC and FPGA semiconductor designs, and also reputational risks for organizations. This issue is a significant concern in the semiconductor market, with counterfeit components posing a major threat. Another significant challenge faced by semiconductor companies is the complexities involved in technological upgrades. To remain competitive, companies must continuously innovate and adapt, which necessitates substantial investments in R&D and a workforce equipped to navigate the complexities of modern technology.

Emerging Trends in the Semiconductor IP Landscape

Artificial Intelligence and Machine Learning: The integration of AI and ML into semiconductor IPs is increasingly prevalent and aims to optimize performance, enhance security, and improve energy efficiency.

5G Technology Advancement: The roll-out of 5G technology is significantly boosting the demand for advanced IP that supports faster and more reliable communication networks.

Automotive Electronics Development: The rise of autonomous and electric vehicles necessitates specialized IP in automotive electronics.

Security-Centric IP: As cyber threats evolve, there is a growing emphasis on developing IP solutions with robust security features.

Customization and Flexibility: There is a notable trend toward more customizable and flexible IP solutions tailored to meet specific customer needs.

Asia Pacific’s Semiconductor IP Market: A Dominant Force in Innovation and Growth

Asia Pacific dominates the global semiconductor IP market and is projected to capture the largest share in the coming years, driven by increased investments from the region’s leading players in electronics manufacturing.

Samsung Group’s announcement in May 2022 to invest approximately USD 489 billion over five years is a significant factor in Asia Pacific’s dominance. This investment primarily focuses on developing semiconductors and biopharmaceuticals and aims to strengthen domestic supply chains and enhance competitiveness in strategic sectors.

Additionally, the concentration of electronics manufacturers and the growing export of electronics components from Asia Pacific are significant factors contributing to market expansion. The Invest Asean Organization reports that the consumer electronics industry constitutes approximately 50% of the total exports for several Asia Pacific nations, including India, China, and Japan.

Major Players Operating in the Semiconductor IP Space

Arm Holdings Ltd (U.K.)
ARM, a prominent semiconductor IP solutions provider, offers a diverse range of graphics processors, processor cores, and System-on-Chip (SoC) designs for various applications, including automotive, mobile, and IoT. ARM’s offerings include ARM Cortex-A series CPUs, ARM Artisan physical IP, and Mali GPUs, which provide high performance and energy efficiency for a wide range of semiconductor applications.

Synopsys Inc. (U.S.)
Synopsys, a leading provider of semiconductor design tools and IP core verification solutions, offers an extensive portfolio that includes analog, digital, and mixed-signal IP for FPGA and ASIC designs. The aim is to provide designers with configurable and reusable semiconductor IP solutions.

Cadence Design Systems, Inc. (U.S.)
Cadence, a prominent provider of software for semiconductor design and verification, features a wide range of verification IP, IP cores, and design tools for SoC development. It innovates with advanced functionalities such as Denali memory IP, Tensilica DSPs, and the Virtuoso design platform, equipping designers with tools for managing complex SoC projects.

Ceva Inc. (U.S.)
CEVA focuses on semiconductor IP solutions for AI acceleration, Digital Signal Processing (DSP), and wireless connectivity. It provides licensable IP cores for Bluetooth, Wi-Fi, cellular networks, and audio processing and offers improved performance and energy-efficient solutions for various applications.

Lattice Semiconductor Corporation (U.S.)
Lattice Semiconductor is a leading provider of semiconductor IP solutions specializing in signal processing, programmable logic, and interface bridging. The company aims to provide CPLD and FPGA solutions for diverse applications in automotive, industrial, and consumer markets.

Rambus Inc. (U.S.)
Rambus, a well-recognized provider of semiconductor IP solutions for security, memory interfaces, and chip-to-chip interconnects, delivers low-latency, high-speed solutions for various applications, including storage, networking, and AI. The aim is to offer designers IP solutions that are highly configurable and scalable.

eMemory Technology, Inc (Taiwan)
eMemory is a leading provider of semiconductor IP solutions specifically for Non-Volatile Memory (NVM) such as MTP, OTP, and RRAM. The company aims to provide designers with secure and scalable solutions for various functions such as security, data storage, and code storage.

Recent Key Developments in the Semiconductor IP Industry
In June 2022, Vissonic Electronics Ltd. launched its Vissonic 5G Wi-Fi Wireless Conference System. This innovative setup includes a wireless microphone, a presentation system, and a 5G Wi-Fi access point, ensuring seamless local audio and video solutions for meeting rooms. Users enjoy a clutter-free environment and quick access to essential features, enhancing the overall meeting experience.

Earlier, in May 2022, Faraday Technology Corporation unveiled Soteria’s advanced security IP subsystems, featuring a custom SoC design that enhances hardware security for IoT applications. This includes comprehensive software solutions to streamline secure SoC development, addressing the growing demand for robust security in connected devices.

Final Thoughts on Semiconductor IP Dynamics
The semiconductor IP market is undergoing rapid evolution, driven by trends such as AI acceleration, complex SoC designs, and RISC-V adoption. These factors enhance device performance and functionality and position semiconductor companies for success through innovation. As the market faces challenges, a strategic approach is essential for leveraging emerging opportunities. The ongoing evolution of this market is crucial for powering innovative devices in our interconnected world. Ultimately, understanding and adapting to these trends will be key for companies navigating this dynamic landscape.

For More Information: https://www.fortunebusinessinsights.com/semiconductor-ip-market-106877

Also Read:

AI Booming is Fueling Interface IP 17% YoY Growth

Alphawave Semi Unlocks 1.2 TBps Connectivity for HPC and AI Infrastructure with 9.2 Gbps HBM3E Subsystem

Analog Bits Momentum and a Look to the Future

Spatial audio concepts targeted for earbuds and soundbars


Calibre DesignEnhancer Improves Power Management Faster and Earlier
by Mike Gianfagna on 09-05-2024 at 6:00 am


Anyone who has attempted to implement a custom design in an advanced process node knows that effective power management can be quite challenging. Effects such as voltage (IR) drop and electromigration (EM) can present significant headaches for both design teams and foundries. Optimizing layouts for these kinds of issues is tricky. Design and P&R tools are intended for optimal design creation and implementation, so layout optimization can be an afterthought. Design for manufacturing (DFM) layout tools are good at optimization, but they are used at signoff, while the goal is to optimize earlier in the design flow. There is a technical paper from Siemens Digital Industries Software that details an effective solution to this dilemma. A link is coming, but let’s first examine some details to see how Calibre DesignEnhancer improves power management faster and earlier.

About the Publication and the Author


The paper from Siemens is entitled Calibre DesignEnhancer Design-Stage Layout Modification Improves Power Management Faster and Earlier. The author is Jeff Wilson, a product management director for DFM applications in the Calibre organization at Siemens Digital Industries Software.

Jeff is responsible for the development of products that analyze and modify IC layouts to improve the robustness and quality of the design. Before joining Siemens, he worked at Motorola and SCS. He holds a B.Sc. in design engineering from Brigham Young University and an MBA from the University of Oregon.

Jeff has quite a passion for layout optimization, and it comes through clearly in this well organized and insightful technical paper. You can see a short video of Jeff providing an overview of Calibre DesignEnhancer. A link to that is coming as well.

The Problem

At advanced process nodes, the challenge of managing capacitance and resistance impacts rises sharply. For example, going from 16nm to 5nm we see a maximum resistance increase of around 6X. Over-designing the power grid results in wasted area. Under-designing can result in the IC never meeting IR and EM requirements. What is needed is a solution where the power grid is efficient for most of the design and optimized for the areas of the layout that must support greater power usage.

Custom/analog designers and design implementation engineers have all the physical details needed for accurate and efficient layout optimization. However, applying layout modifications during design and implementation has typically been difficult and/or time-consuming. As discussed, most design and P&R tools provide native options that let engineers apply some optimization changes, but these tools are designed and intended for design creation and implementation. The result is sub-optimal layout optimization.

Some of the specific challenges to be met here include IR drop and electromigration. For the first item, the overall size of the chip may be similar at advanced nodes, but the transistors and interconnect are packed into a smaller area. Typically, that results in the interconnect becoming narrower, which increases the unintended (parasitic) resistance, causing the voltage to drop along the length of the interconnect path (IR drop).

For the second item, metal atoms can be “pushed” out of place by the flow of current through the interconnect. Over time, the movement of these metal atoms creates both empty spaces (voids) and piles of atoms (hillocks) in the interconnect. If the voids become wide and/or deep enough, they create an open circuit in the interconnect, while hillocks can grow high enough to connect to other interconnects, creating a short. This is a ticking time bomb in advanced circuits that needs to be handled carefully. The figure below illustrates what can happen.

Electromigration Issues
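For a concrete feel for both effects, here is a minimal Python sketch (not from the Siemens paper) of the standard first-order models: Ohm's law for IR drop and Black's equation for electromigration lifetime. All constants are illustrative placeholders rather than foundry data.

```python
import math

def ir_drop(current_a, r_per_um, length_um):
    """IR drop along a wire: V = I * R_total (Ohm's law)."""
    return current_a * r_per_um * length_um

def em_mttf(j_rel, a=1.0, n=2.0, ea_ev=0.8, temp_k=358.0):
    """Black's equation: MTTF = A * J^-n * exp(Ea / (k*T))."""
    k_ev = 8.617e-5  # Boltzmann constant, eV/K
    return a * j_rel ** (-n) * math.exp(ea_ev / (k_ev * temp_k))

# If resistance per um rises ~6X between nodes (as cited above), the same
# current over the same route loses ~6X the voltage:
print(ir_drop(0.001, 0.6, 500) / ir_drop(0.001, 0.1, 500))  # ~6.0

# Doubling current density J (narrower wires, same current) cuts the
# electromigration lifetime by ~4X with the typical exponent n = 2:
print(em_mttf(2.0) / em_mttf(1.0))  # 0.25
```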

The Solution – Design Stage Layout Modification

To ensure a design remains compliant with design rule checking (DRC) constraints, all layout modifications must be applied with a deep understanding of complex design rules and connectivity requirements. The Calibre DesignEnhancer tool provides an analysis-based solution integrated with both design and P&R flows to help custom designers and P&R engineers efficiently and accurately reduce IR drop and EM issues without negatively impacting performance and area. The tool is used early in the design process, creating a Shift-Left solution that optimizes results and avoids long design/analysis loops.

This approach prepares layouts for physical verification more quickly, with minimal issues encountered. Multiple automated layout enhancement use models accessing proven, foundry-preferred rule decks provide optimized layout modifications while ensuring all layout changes are Calibre-clean. The technical paper goes into significant detail about the challenges in advanced designs and how Calibre DesignEnhancer addresses those challenges. If advanced node design is in your future, you need to download this technical paper to improve your chances of success.

A download link is coming. First, I will summarize the solutions presented in the paper.

The Calibre DesignEnhancer tool currently provides three use models:

• Via insertion, which automatically adds Calibre-clean vias to reduce IR drop and moderate the impact of via resistance on manufacturability and reliability.
• Parallel run length insertion, with the power grid enhancement (Pge) use model, which automatically reduces resistance by finding open tracks and inserting Calibre-clean metal and vias to create parallel runs.
• Filler/DCAP cell insertion. Open areas left between cells after P&R must be filled before physical verification can be run. These gaps are filled with filler cells (non-functional cells used to continue the rails as required for layer continuity and alignment, such as power/ground and Pwell/Nwell) and DCAP cells (temporary capacitors added between power and ground rails to counter functional failures due to IR drop).

By replacing time-consuming and limited P&R filler cell insertion processes with the push-button Calibre DesignEnhancer Pvr use model and its knowledge of Vt rules, design teams can ensure not only Calibre-clean filler and DCAP cells, but also electrically-correct layouts, while also reducing filler cell and DCAP cell insertion runtimes.

These three use models are the inspiration for the graphic at the top of this post. The paper also details the design flow and compatibility with commercial tools, and presents several detailed results.

To Learn More

Now it’s time to get your copy of this technical paper and prepare for your next advanced node design. You can download the paper here. And if you have a couple of minutes you can watch a great overview video presented by Jeff Wilson at the top of this page. And that’s how Calibre DesignEnhancer improves power management faster and earlier.



Podcast EP245: A Conversation with Dr. Wally Rhines about Hardware Security and Caspia Technologies
by Daniel Nenni on 09-04-2024 at 10:00 am

Dan is joined by Dr. Walden Rhines. Wally is a lot of things: CEO of Cornami, board member, advisor to many, and friend to all. In this conversation, Wally discusses his decision to join Caspia Technologies as Chairman of the Board. You can read the press release announcing this here:


Wally explains his strong interest in hardware security as a new EDA field and his connections to the University of Florida in Gainesville, where Caspia Technologies was formed. He explains the unique skills of the founding team, the products being developed by Caspia, the momentum the company has achieved, and his views on the impact Caspia will have on semiconductor design.

Wally also discusses the addition of a new CEO, CRO, and VP of engineering at Caspia. You can read the press release announcing these new additions here:


The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Emotion AI: Unlocking the Power of Emotional Intelligence
by Ahmed Banafa on 09-04-2024 at 6:00 am


Emotion AI, also known as affective computing or artificial emotional intelligence, is a rapidly growing field within artificial intelligence that seeks to understand, interpret, and respond to human emotions. This technology is designed to bridge the gap between human emotions and machine understanding, enabling more natural and empathetic interactions between humans and machines. As AI continues to evolve, the ability to recognize and respond to emotions is becoming increasingly important, not only for enhancing user experiences but also for applications in mental health, education, customer service, and more.

Definition of Emotion AI

Emotion AI refers to the subset of artificial intelligence that is focused on detecting, analyzing, and responding to human emotions. It combines techniques from computer science, psychology, and cognitive science to develop systems that can recognize emotional cues from various sources, such as facial expressions, voice tone, body language, and even physiological signals like heart rate or skin conductivity. By interpreting these signals, Emotion AI can make inferences about a person’s emotional state and respond accordingly.

Emotion AI systems typically rely on machine learning algorithms, natural language processing (NLP), and computer vision to analyze emotional data. These systems are trained on large datasets of emotional expressions and behaviors, allowing them to recognize patterns and make predictions about emotional states. Over time, as these systems are exposed to more data, they become more accurate in their emotional assessments.
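As a toy illustration of the text-analysis slice of such a system, the sketch below trains a tiny emotion classifier with scikit-learn. The example sentences and labels are invented for illustration; real systems train on large, curated emotion corpora and typically fuse text with audio and vision signals.

```python
# Minimal text-emotion classifier sketch (illustrative data, not a real corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I am so happy with this product!",
    "This is wonderful news, thank you!",
    "I am extremely frustrated with the service.",
    "This is infuriating, nothing works.",
    "I feel anxious about the delay.",
    "I'm worried this won't arrive in time.",
]
labels = ["joy", "joy", "anger", "anger", "fear", "fear"]

# Vectorize the text and fit a simple linear classifier over emotion labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["The support team made everything so easy!"]))
```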

Applications of Emotion AI

Emotion AI has a wide range of applications across various industries. Some of the key areas where Emotion AI is being utilized include:

  • Customer Service: Emotion AI is being integrated into customer service platforms to enhance interactions between customers and service representatives. By analyzing the tone of voice and word choice, Emotion AI can detect if a customer is frustrated, confused, or satisfied. This allows customer service agents to tailor their responses to better meet the emotional needs of the customer, leading to improved customer satisfaction.
  • Mental Health: In the field of mental health, Emotion AI is being used to monitor and support individuals with mental health conditions. For example, AI-driven chatbots can provide real-time emotional support by recognizing signs of distress in a person’s language and offering appropriate interventions. Additionally, Emotion AI can be used in therapy sessions to help therapists understand their patients’ emotions more accurately, leading to more effective treatment plans.
  • Education: Emotion AI is being applied in educational settings to create more personalized learning experiences. By analyzing students’ facial expressions and body language, Emotion AI can gauge their engagement levels and emotional responses to different teaching methods. This information can then be used to adjust the curriculum or teaching style to better suit the individual needs of each student.
  • Marketing: In marketing, Emotion AI is being used to create more emotionally resonant advertisements. By analyzing how consumers react to different ads, companies can gain insights into what emotional triggers are most effective for their target audience. This enables marketers to craft campaigns that are more likely to evoke the desired emotional response, leading to increased brand loyalty and sales.
  • Human-Computer Interaction: Emotion AI is transforming the way humans interact with computers and other devices. For example, voice-activated virtual assistants like Siri and Alexa can use Emotion AI to detect the user’s emotional state and respond in a more empathetic manner. This creates a more natural and engaging user experience, making technology feel more human.
  • Autonomous Vehicles: In the automotive industry, Emotion AI is being integrated into autonomous vehicles to enhance safety and passenger experience. For instance, Emotion AI can monitor a driver’s facial expressions and physiological signals to detect signs of drowsiness or stress. The vehicle can then take appropriate actions, such as issuing a warning or taking control of the vehicle to prevent accidents.
Advantages of Emotion AI

Emotion AI offers numerous advantages across different sectors:

  • Enhanced User Experience: By understanding and responding to human emotions, Emotion AI can create more personalized and empathetic interactions. This leads to higher levels of user satisfaction and engagement.
  • Improved Mental Health Support: Emotion AI can provide real-time emotional support and monitoring, making it a valuable tool in mental health care. It can help individuals manage their emotions and access appropriate interventions when needed.
  • Increased Productivity: In the workplace, Emotion AI can be used to monitor employee well-being and stress levels. By addressing emotional challenges early, companies can reduce burnout and improve overall productivity.
  • Better Decision-Making: Emotion AI can provide insights into human emotions that might not be immediately apparent. This can help businesses make more informed decisions, whether it’s in customer service, marketing, or product development.
  • Safety Improvements: In industries like automotive and healthcare, Emotion AI can enhance safety by monitoring emotional and physiological states, leading to timely interventions that prevent accidents or errors.
Disadvantages of Emotion AI

Despite its advantages, Emotion AI also has several disadvantages and challenges:

  • Privacy Concerns: Emotion AI relies on the collection and analysis of personal data, including facial expressions, voice recordings, and physiological signals. This raises significant privacy concerns, as individuals may not be comfortable with their emotional data being monitored and analyzed.
  • Bias and Inaccuracy: Like all AI systems, Emotion AI is susceptible to biases in the data it is trained on. If the training data is not representative of diverse populations, the system may make inaccurate or biased assessments of emotions. This can lead to unfair treatment or misinterpretation of emotions.
  • Ethical Issues: The use of Emotion AI raises ethical questions about consent, manipulation, and the potential for misuse. For example, companies could use Emotion AI to manipulate consumers’ emotions for profit, or governments could use it for surveillance purposes.
  • Over-Reliance on Technology: There is a risk that individuals and organizations may become overly reliant on Emotion AI, leading to a reduction in human empathy and emotional intelligence. This could have negative consequences for interpersonal relationships and social interactions.
  • Technical Limitations: Emotion AI is still in its early stages, and there are technical limitations to its accuracy and reliability. Emotions are complex and can be expressed in many different ways, making it challenging for AI systems to accurately interpret them in all contexts.
Challenges Facing Emotion AI

As Emotion AI continues to develop, it faces several challenges that must be addressed:

  • Data Diversity: One of the biggest challenges in Emotion AI is ensuring that the training data is diverse and representative of different populations. Emotions can be expressed differently across cultures, genders, and age groups, so it’s important for Emotion AI systems to be trained on data that reflects this diversity.
  • Real-Time Processing: For Emotion AI to be effective in applications like customer service or autonomous vehicles, it needs to be able to process emotional data in real-time. This requires significant computational power and efficient algorithms that can quickly analyze and interpret emotional signals.
  • Contextual Understanding: Emotions are often influenced by context, and the same emotional expression can have different meanings in different situations. Developing Emotion AI systems that can understand and interpret context is a major challenge that researchers are working to overcome.
  • Ethical and Legal Frameworks: As Emotion AI becomes more widespread, there is a need for clear ethical and legal frameworks to govern its use. This includes regulations around data privacy, consent, and the potential for misuse. Developing these frameworks will require collaboration between policymakers, researchers, and industry stakeholders.
  • Integration with Existing Systems: Emotion AI needs to be seamlessly integrated with existing technologies and systems. This can be challenging, especially in industries like healthcare or automotive, where there are strict regulations and standards that must be adhered to.
The Future of Emotion AI

The future of Emotion AI is promising, with many exciting developments on the horizon. As technology continues to advance, Emotion AI is expected to become more accurate, reliable, and widely adopted across various industries.

  • Advancements in AI and Machine Learning: Ongoing advancements in AI and machine learning are likely to lead to more sophisticated Emotion AI systems. These systems will be better able to understand complex emotions and respond in a more nuanced and empathetic manner.
  • Greater Integration into Daily Life: As Emotion AI becomes more advanced, it is likely to be integrated into a wider range of devices and applications. From smart homes to wearable technology, Emotion AI will play a key role in creating personalized and emotionally aware environments.
  • Personalized Mental Health Care: Emotion AI has the potential to revolutionize mental health care by providing highly personalized and real-time emotional support. This could lead to more effective treatment plans and better outcomes for individuals with mental health conditions.
  • Ethical AI Development: As the field of Emotion AI grows, there will be an increasing focus on developing ethical AI systems. This includes ensuring that Emotion AI is transparent, fair, and used in a way that respects individuals’ rights and privacy.
  • Global Adoption and Regulation: Emotion AI is likely to see global adoption, with countries around the world integrating it into various sectors. However, this will also require the development of international regulations and standards to ensure its ethical and responsible use.
  • Collaboration Across Disciplines: The future of Emotion AI will require collaboration across disciplines, including computer science, psychology, neuroscience, and ethics. By working together, researchers and practitioners can develop Emotion AI systems that are both technically advanced and socially responsible.

Emotion AI represents a significant advancement in the field of artificial intelligence, with the potential to transform the way humans interact with machines. By enabling machines to understand and respond to human emotions, Emotion AI can create more natural, empathetic, and personalized experiences across a wide range of applications.

However, the development and deployment of Emotion AI also come with challenges, including privacy concerns, biases, ethical dilemmas, and technical limitations. Addressing these challenges will require ongoing research, collaboration, and the development of robust ethical and legal frameworks.

Ahmed Banafa’s books

Covering: AI, IoT, Blockchain and Quantum Computing

Also Read:

AI: Will It Take Your Job? Understanding the Fear and the Reality

Bug Hunting in NoCs. Innovation in Verification

Alphawave Semi Unlocks 1.2 TBps Connectivity for HPC and AI Infrastructure with 9.2 Gbps HBM3E Subsystem


Accellera and PSS 3.0 at #61DAC
by Daniel Payne on 09-03-2024 at 10:00 am


Accellera invited me to attend their #61DAC panel discussion about the new Portable Stimulus Standard (PSS) v3.0, and the formal press release was also just announced. The big idea with PSS is to enable seamless reuse of stimulus across simulation, emulation and post-silicon debug and prototyping.

Tom Fitzpatrick from Siemens EDA was the panel moderator, and he shared that a leading challenge is creating sufficient tests to verify the design; debug time is the bottleneck. UVM is good for modular, reusable verification environments, but only a small group of people understand how to verify and bring up a chip. UVM isn’t great for creating test content: scoreboard checkers are manual, and UVM isn’t scalable for concurrency, resources, and memory management. PSS + UVM provides the required abstraction, with PSS providing scheduling for a UVM environment, UVM providing structural features, and PSS providing features tailored for test scenario creation. PSS really complements UVM rather than replacing it.

Major new features

  • Added “behavioral coverage” clause / Added “Formal semantics of behavioral coverage” annex
    • Coverage plays a critical role in assessing the quality of tests. PSS has supported data coverage (think SystemVerilog covergroups) since its first release, but being able to collect coverage on key sequences of behavior is critical for system-level verification, where PSS focuses. You can think of PSS behavioral coverage as “SystemVerilog assertions (SVA) for the system level”. It allows users to capture key sequences of behavior and key behaviors that must occur concurrently, and to collect coverage data proving that these key behaviors were executed, as sketched below.
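By way of analogy only (Python, not PSS syntax), behavioral coverage boils down to asking whether a key ordering of behaviors was observed in an execution trace:

```python
# Toy illustration of "did the key sequence of behaviors occur?" --
# the idea behind behavioral coverage, not actual PSS code.
def sequence_covered(events, key_sequence):
    """True if key_sequence occurs in order (not necessarily adjacent)."""
    it = iter(events)
    return all(step in it for step in key_sequence)

trace = ["init", "dma_setup", "irq_enable", "dma_start", "irq", "dma_done"]
print(sequence_covered(trace, ["dma_setup", "dma_start", "dma_done"]))  # True
print(sequence_covered(trace, ["dma_start", "dma_setup"]))              # False
```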

Minor new features and incremental changes to existing features

  • Added address space group
    • PSS models memory management as a first-class language feature that allows users to characterize different regions of memory (e.g., DDR, SRAM, flash, read-only, read-write) and specify test requirements for memory. When combining PSS models from different sources that access the same overall address space, it can happen that the different model creators characterized memory in different ways. Address-space groups improve reuse by allowing models that characterize memory differently to work together.
  • Added support for “sub-string operator” and string methods
    • PSS has had support for a string datatype since its initial release. The new operators and methods provide new ways for users to manipulate the values stored in strings.
  • Added support to allow collection of reference types
    • This enables more-complex PSS models to be created.
  • Added support for comments in template blocks
    • PSS supports a code-generation feature that is used for targets that require very specific control over the test structure (e.g., Perl scripts or specific assembly-language tests). This enhancement allows adding comments to the ‘template’ blocks used to generate the output code. This helps users as they develop PSS models by allowing them to temporarily disable portions of a template with less effort.
  • Added support for yielding control with cooperative multitasking
    • Test code running on a single core runs cooperatively. The yield statement allows the user to explicitly code in points at which other concurrently-running test code should be allowed to execute – for example, while polling a register to detect when an operation is complete (see the generator analogy after this list).
  • Added PSS-SystemVerilog mapping for PSS lists
    • PSS defines interoperability with SystemVerilog, C, and C++. This enhancement allows PSS list variables to be passed to SystemVerilog, enabling more-complex data to be passed.
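The cooperative-yield semantics can be pictured with ordinary Python generators (an analogy, not PSS code): each task yields control at explicit points, for example while polling, and a tiny round-robin scheduler resumes the tasks in turn.

```python
# Each task is a generator that yields wherever it is willing to give up
# control; a round-robin scheduler resumes each task until it finishes.
def poll_for_done(name, done_at, clock):
    while clock["t"] < done_at:
        yield  # explicit yield point while polling
    print(f"{name}: operation complete at t={clock['t']}")

def run(tasks, clock):
    while tasks:
        clock["t"] += 1
        for task in list(tasks):
            try:
                next(task)  # resume the task up to its next yield
            except StopIteration:
                tasks.remove(task)

clock = {"t": 0}
run([poll_for_done("dma", 3, clock), poll_for_done("uart", 5, clock)], clock)
```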

Clarifications, etc

  • Added support to allow platform qualifiers on function prototype declarations
  • Clarified static const semantics

Panel Discussion

Panel members were:

  • Dave Kelf – Breker
  • Sergey Khaikin – Cadence
  • Hillel Miller – Synopsys
  • Freddy Nunez – Agnisys
  • Santosh Kumar – Qualcomm

Q&A

Q: What is your favorite PSS feature?

Sergey: The resource allocation feature, because I can create a correct by construction test case for DMA channels as an example.

Hillel: State objects with schedule. Sequences are correct by constraint.

Santosh: Inferencing of actions.

Freddy: One standard for multiple platforms.

Dave: Reusability.

Q: What are the challenges to learn another language like PSS?

Santosh: Folks from SV already know about constraints, and newbies learn the declarative nature quickly, so the ramp-up time with PSS is not too high. Once you adopt it, there’s no going back.

Sergey: We’re teaching PSS in classes, and it’s a concise language. There’s a mindset change from procedural to declarative; we let the tool do the magic work.

Hillel: Freshers can learn PSS in just a week or two.

Dave: Mindset is the real barrier. Follow a path of learning the methodology first, then the language second. Expert users or services companies are the first to adopt PSS. There’s some adoption from those who create VIP models. PSS will grow just like formal verification grew. VIP with PSS and C behind it is growing.

Q: Why isn’t everyone using PSS yet?
Dave: It’s a new language; system VIP is the killer app. More libraries are added each year.

Hillel: The PSS standard enables more validation and verification.

Q: UVM enabled reusable test environments, what about a PSS methodology library for VIP?

Dave: Absolutely. Getting new people to adopt it means showing the ease of use, so it’s an education issue.

Santosh: We need a standard methodology around using the standard. Applying PSS to verification, vs the UVM learning curve.

Freddy: PSS can do so much, but many don’t know how to first adopt it efficiently.

Q: How about formal vs stimulus free approaches? What about gen AI to create more stimulus?

Dave: The declarative approach in PSS is similar to the formal approach for verification.

Hillel: PSS is better than Excel and Word documents for an executable spec.

Santosh: Scenario specification is like an executable spec, so use the PSS language on how to program IPs.

Dave: PSS is really an executable spec standard.

Q: PSS usage is too low, who is using it today?

Sergey: There’s already a wide range of PSS users, and most of the verification pain is coming from multi-core embedded designers where they have a UVM environment with more than a dozen agents and too many virtual sequences.

Dave: I see PSS being used for large multi-core SoCs, verifying coherency and power domain testing, really, across the board use, even DSP applications.

Hillel: Wherever verification managers have pain in their present methodology.

Q: Why join the PSS committee?

Dave: It’s a great learning experience about verification and you get to talk with so many different users.

Sergey: We’ve been using this for many years but are new to the committee, and I see both vendors and customers working on real challenges. New volunteers bring in new requirements.

Hillel: Constraint solvers are improving and need to be more scalable.

Freddy: We always need new eyes on the language standard for feedback.

Santosh: More people will just strengthen the standard features. Start out with questions, then build to requesting new features.

Q: When does PSS get into IEEE standards?

Tom: PSS 3.0 is coming out in about August or so. Likely 3.1 is required before going to IEEE standards in a year or two.

Q: Will IP vendors provide PSS models and IP-XACT models?

Tom: Yes, that’s ideal. IP vendors should provide the models.

Freddy: PSS will complement IP-XACT, not compete with it.

Conclusion

The tone at this #61DAC panel was very upbeat and forward-looking. Verification engineers should consider adopting PSS 3.0 in their methodologies along with UVM. The Accellera committee has been accepting new feature requests for the PSS specification and forging improvements along the way.

Read the press release for PSS 3.0 from August 29th.

Related Blogs


Nvidia Pulled out of the Black Well
by Claus Aasholm on 09-03-2024 at 6:00 am


Despite a severe setback, Nvidia pulled it off once again

There have been serious concerns about the ROI on AI and yield problems with Blackwell, but Nvidia pulled it off again and delivered a result significantly above guidance.

Nvidia beat its revenue guidance of $28B by $2B, reaching just above $30B and representing 15% QoQ and 122% YoY growth. As usual, the increase was driven by the Data Centre business, which reached $26.3B, demonstrating that the H100 is not just filling the void before Blackwell takes over; H100 demand is still solid.

Despite the excellent result and a mixture of “Maintain” and “Outperform” ratings from the analyst communities, the investor community was less impressed, and the Nvidia stock responded negatively.

It looks like the worry of some of the larger financial institutions and economists about AI’s ROI has taken hold, and investors are starting to believe in it. What I know for sure is that I know as much about AI’s future return as anybody else: Nothing!

Mark Zuckerberg of Meta formulated it well when dividing the Meta AI investment into two buckets: a practical AI investment with very tangible returns and a more speculative long-term generative AI investment.

As I have lived through the dot-com crash of the early millennium, I know that a fairy tale is only a fairy tale when you choose the right moment to end the story. Many stocks that tanked and rightfully were seen as bubble stocks are with us today and incredibly valuable. I had shares in a small graphic company that tanked during that period – fortunately, I kept the shares or else I would not have been able to write this article. It is too early to tell how the AI revolution will end, but companies are still willing to invest (bet) in AI.

Not surprisingly, the analyst community was interested in Jensen Huang’s view of this, and he was very willing to attack the likely most significant growth inhibitor of the Nvidia stock.

While I will not comment on the stock price, I believe Jensen did an excellent job framing the company’s growth thesis. As opposed to how critics have presented it, it is not only a question of AI ROI—it should be seen in the much larger frame of accelerated computing.

Without being too specific on the actual numbers and growth rates, Jensen presented his growth thesis based on the combined value of the current traditional data centre estate of around $1T.

While we can be criticised for working without exact numbers, we believe that viewing research at a high level with approximate numbers can provide value when the impact is large enough that precision is not required for insight. Fortunately, this is the foundation of any Nvidia analysis at the moment.

It is possible to judge whether the $1T datacenter value is reasonable. The Property, Plant and Equipment (PPE) value of the top 5 data centre owners is above $650B, and the same companies have a quarterly depreciation of $28B; the rough average write-off period is 5.8 years, suggesting the PPE is heavy on server equipment with 4-5 year write-off periods.

The $1T value is a reasonable approximation for the Nvidia growth thesis.

This is what we extracted from Nvidia’s investor call and would frame as Nvidia’s growth thesis:

Nvidia is at a tipping point between traditional CPU-based computing and GPU-based accelerated computing in the data center, and Blackwell represents a step function in this tipping point. In other words – you ain’t seen anything yet!

The fertile lands for Nvidia’s GPUs are not only the new fields of AI but also the existing and well-established data centres of today. They will also have to convert their workloads to accelerated computing for cost, power and efficiency reasons.

We calculate the current depreciation of the $1T data center value at $43B/quarter; in other words, this is what is needed just to maintain the value of the existing data centres. This depreciation is likely to increase if Nvidia’s growth thesis is right that data centers will have to convert their existing capacity to accelerated computing.
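The arithmetic behind the 5.8-year write-off and the $43B/quarter figure is simple to check; here is a quick Python sketch using the article's own round numbers:

```python
# Back-of-envelope check of the data-center numbers (all figures in $B).
top5_ppe = 650               # PPE of the top-5 data centre owners
quarterly_depreciation = 28  # their combined quarterly depreciation

writeoff_quarters = top5_ppe / quarterly_depreciation
print(round(writeoff_quarters / 4, 1))         # 5.8 years average write-off

dc_value = 1000              # the ~$1T installed data-center value
print(round(dc_value / writeoff_quarters, 1))  # ~43.1 -> ~$43B per quarter
```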

The current results of Nvidia will pale in comparison with the post-Blackwell era.

A prediction is not a result, but Jensen did an excellent job of framing the opportunity into a very tangible $1T+ opportunity and a more speculative xxxB$ AI opportunity that shows that Nvidia is not opportunity-limited. There is plenty of room to grow into a very tangible market.

It is time to dive into the details.

The status of Nvidia’s business

Looking at the product perspective, the GPU business dominates, but both the Software and Networking products also did well.

From a quarterly perspective, software outgrew the rest with 27% growth, while from a YoY perspective the GPU took the prize with 130% growth, followed by 100% networking growth and 70% software growth. We already know that Nvidia has transformed from a component to a systems company, but the next transformation, to services, could be in sight. This reveals that Nvidia’s moat is more than servers and that it is expanding.

From a Data center platform perspective, this was expected to be the Blackwell quarter but no meaningful revenue was recorded.

The revenue is completely dominated by the H100 platform, while the A100 is close to being phased out. The Chinese revenue kept growing at a strong rate despite having been set back by the restrictions imposed by the US government on GPU sales to China. We categorise all the architectures allowed in China (A800 and H800) as H20 (specially designed for China).

While Nvidia’s revenue by country can be hard to decipher as it is a mixture of direct and indirect business through integrators, the China business is purely based on what is billed in China.

As can be seen, the China revenue is showing strong growth. In the last quarter it grew by more than 47%, bringing revenue back close to pre-embargo levels. Nvidia highlighted that China is a highly competitive market (lower prices) for AI, but it is obvious that Nvidia competes well in China.

This is also a strong indication of Nvidia’s market position: even in a low-cost market bound by embargoes, Nvidia’s competitive power is incredibly strong.

The increased GPU revenue is not really showing in the CapEx of the known cloud owners in China. We will continue following this topic over time.

The systems perspective

With AMD’s acquisition of ZT Systems accelerating the company’s journey from a components to a systems company, it is worth analysing Nvidia with that lens.

Nvidia had already made this transition back in Q3-23, when the first H100 revenue became visible.

From then on, revenue was no longer concentrated in Nvidia silicon (GPU + networking) but also included memory silicon, predominantly from SK Hynix, and an increasing “Other” category that represents the systems transformation.

The “Other” category also includes advanced packaging, including the still very costly build-up substrates necessary for the H100 and later for Blackwell.

This demonstrates that while the ZT Systems makes AMD more competitive, the company is not overtaking Nvidia but catching up to a similar level of competitiveness from a systems perspective.

The Q3 result in more detail

As can be seen from the chart, there was significant growth in Nvidia’s revenue, gross margin, and operating margin, but not to the same degree as the last couple of quarters.

The growth rates are declining and this is a cause for concern in the analyst community and likely the reason the stock market response has been less than ecstatic.

Indeed, the quarterly revenue growth rate was down from 17.8% last quarter to 15% this quarter, and both gross profit margin and operating profit margin declined. In isolation, this looks like the brakes are slightly slowing Nvidia’s hypergrowth. Numbers don’t lie, but they always exist in a context that seems to have eluded the analyst community.

Three months ago, Jensen Huang declared that there would be a lot of Blackwell revenue in both the quarter and the rest of the year, but shortly after, a design flaw allegedly impacting yields was found, and a metal mask layer of Blackwell had to be reworked. In reality, a key growth component vaporised and should have left the quarter in ruins. Nevertheless, Nvidia delivered a result just shy of the growth performance of the last few stellar quarters.

Knowing how complex the semiconductor supply chain is, this is a testament to Nvidia’s operational agility and ability to deliver. The company did not get sufficient credit for this.

A dive into the supply machine room can add to the story.

The supply machine room

Assuming that the Q2 Cost of Goods Sold represents a balanced level (under continuous steep growth), Nvidia requires $215M worth of COGS to generate $1B in revenue. The Q3-24 increase in revenue represents an additional COGS of $860M, bringing the total COGS needed to $6.5B.

The COGS actually grew to $7.5B, while the inventory build also accelerated, from $600M/qtr to $800M/qtr.

In total, the COGS/inventory position grew by $1B-1.2B above what Nvidia would need to deliver the result in H100 terms. This represents the impact of the unexpected problems with Blackwell.

In other terms, Nvidia was probably preparing for around $5B worth of Blackwell revenue that now had to be replaced with H100 revenue.
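As a sanity check of this chain of reasoning, here is the arithmetic in a short Python sketch; the prior-quarter revenue of roughly $26B is inferred from the 15% QoQ growth rather than quoted directly.

```python
# Supply-room arithmetic from the figures above (all in $B, approximate).
cogs_per_rev = 0.215            # Q2 run rate: $215M of COGS per $1B of revenue
revenue = 30.0                  # this quarter, just above guidance
prior_revenue = revenue / 1.15  # ~26.1, implied by the 15% QoQ growth

extra_cogs = (revenue - prior_revenue) * cogs_per_rev
print(round(extra_cogs, 2))     # ~0.84, close to the ~$860M quoted above

expected_cogs = revenue * cogs_per_rev
print(round(expected_cogs, 2))  # ~6.45, the "~$6.5B needed"

actual_cogs = 7.5
inventory_accel = 0.2           # build-up accelerated from ~$600M to ~$800M/qtr
excess = (actual_cogs - expected_cogs) + inventory_accel
print(round(excess, 2))         # ~1.25, the "$1B-1.2B" overhang

# Translating the overhang back into revenue at the same COGS ratio:
print(round(excess / cogs_per_rev, 1))  # ~5.8 -> roughly $5B of Blackwell revenue
```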

Simultaneously, TSMC’s HPC revenue jumped, which could be caused by other customers but undoubtedly also includes some extra work for Nvidia due to the Blackwell issues.

TSMC’s HPC revenue took a bump of $2.2B, which can easily contain the $1B-1.2B of additional COGS/inventory that Nvidia is exhibiting.

No matter what, the Blackwell issue was significant, and Nvidia dealt with it without taking credit, downplaying the issue on the investor call. From my experience working in semiconductors, this was like a direct hit close to a battleship’s ammunition stores, with everybody below decks in a panic. On the outside and on the investor call, it was treated like a glancing blow.

Demand and Competition

Rightfully, a good analysis should include the competitive situation and a view of the demand side of the equation. We recently covered those two areas in the article: OMG It’s alive. The conclusion of that article is that Nvidia’s competitive advantage remains strong and that the CapEx of the large cloud providers is growing in line with their cloud businesses. The visibility of CapEx spend is also favorable to Nvidia and the AI companies in general.

Conclusion

As always, we do not try and predict any share price but concentrate on the underlying business results. Sometimes the two are connected, other times not.

This analysis shows that Nvidia pulled off a very good result while the company took a direct hit to the hull. The impact of the Blackwell issue was significant but was handled with some damage to revenue growth and profitability. This will likely recover soon.

It reconfirmed Nvidia’s journey towards becoming a systems company, with strong networking and software growth, and an increase in CapEx could signal that something interesting is brewing. While the ZT Systems acquisition is good for AMD, it does not represent a tangible threat to Nvidia.

The H100 platform is executing impressively and now accounts for $19B/qtr, or 73% of the DC business and more than 63% of total Nvidia revenue. Despite the Blackwell problems, the H100 supply chain pulled through and Nvidia blew through the $28B guidance.

China is becoming important again, with strong growth of over 47% QoQ. The Chinese share of revenue is now back to more than 12% of total revenue. Nvidia has clearly struck a balance between cost and performance that does not hurt the profitability of the company. While it is not visible where the GPUs go, it is safe to assume the Chinese growth does not stop here.

For us, the most interesting thing in the Nvidia call was the revelation of the Nvidia Growth Thesis (our term) as a response to the worries about ROI on AI spread by banks and economists based on short-term returns. We think that Jensen Huang laid out an excellent growth thesis with plenty of opportunity to grow while at the same time addressing the ROI on AI.

A more pressing issue will be the ROI on the $1T of traditional CPU-based data center value that will depreciate by $43B (our analysis) per quarter. Jensen argues that this equipment will very soon be unable to compete with accelerated computing.

If Jensen is right here, there is no need to worry about the ROI on AI for some time. The cloud companies will have to invest just to protect their cloud business.

It looks like the growth thesis has escaped most of the analyst community, which is more interested in calculating the next quarter than lifting its gaze to the horizon. The future looks bright for Nvidia.

While our ambition is to stay neutral, we allow ourselves to be impressed every once in a while, and that is what we were by this investor call.


Also Read:

Robust Semiconductor Market in 2024

Semiconductor CapEx Down in 2024, Up Strongly in 2025

Automotive Semiconductor Market Slowing


Intel and Cadence Collaborate to Advance the All-Important UCIe Standard

Intel and Cadence Collaborate to Advance the All-Important UCIe Standard
by Mike Gianfagna on 09-02-2024 at 10:00 am

Intel and Cadence Collaborate to Advance the All Important UCIe Standard

The Universal Chiplet Interconnect Express™ (UCIe™) 1.0 specification was announced in early 2022 and a UCIe 1.1 update was released on August 8, 2023. This open standard facilitates the heterogeneous integration of die-to-die link interconnects within the same package. This is a fancy way of saying the standard opens the door to true multi-die design, sourced from an open ecosystem that can be trusted and validated. This standard is very important to the future of semiconductor system design. It’s also quite complex and presents many technical hurdles to practical usage. Intel and Cadence recently published a white paper that details how the two companies are working together to get to the promised land of a chiplet ecosystem. If multi-die design is in your future, you will want to get your own copy. A link is coming, but let’s first examine some history and innovation as Intel and Cadence collaborate to advance the all-important UCIe standard.

Some History

It turns out Cadence and Intel have a history of collaborating to bring emerging standards into the mainstream. In 2021, the companies collaborated on simulation interoperability between an Intel host and Cadence IP for the Compute Express Link™ (CXL™) 2.0 specification. Like UCIe, this work aimed to have a substantial impact on chip and system design.

The CXL 2.0 specification, along with the latest PCI Express® (PCIe®) 5.0 specification, provided a path to cache-coherent, low-latency transport for many high-bandwidth applications such as artificial intelligence, machine learning, and hyperscale workloads, with specific use cases in newer memory architectures such as disaggregated and persistent memories.

The ecosystem to support this standard was rapidly evolving. Design IP, verification IP, protocol analyzers, and test equipment were all advancing simultaneously. This situation could lead to design issues not being discovered until prototype chips became available for interoperability testing. Finding the problem this late in the process would delay product introduction for sure.

So, Intel and Cadence collaborated on interoperability testing through co-simulation as the first proof point to successfully run complex cache coherent flows. This “shift-left” approach demonstrated the ability to confidently build host and device IP, while also providing essential feedback to the CXL standards body.

You can read about this project here.

Addressing Present Day Challenges

In 2023 Cadence and Intel began collaborating again, this time to advance the UCIe standard and help achieve on-package integration of chiplets from different foundries and process nodes – the promise of an open chiplet ecosystem. UCIe is expected to enable power-efficient and low-latency chiplet solutions as heterogeneous disaggregation of SoCs becomes mainstream.  This work is critical to keep the exponential complexity growth of Moore’s Law alive and well. Monolithic strategies won’t be enough.

To achieve a chiplet ecosystem, design IP, verification IP, and testing practices for compliance will be needed, and that is the focus of the work summarized in this white paper. Here are the topics covered in the white paper – a link is coming so you can get the whole story.

UCIe Compliance Challenges. Topics include the electrical, mechanical, die-to-die adapter, protocol layer, physical layer, and integration of the golden die link to the vendor device under test. The PHY electrical and adapter compliances include the die-to-die high-speed interface as well as the RDI and FDI interfaces. The mechanical compliance of the channel is tightly coupled with the type of reference package used for integration. Many technical and design-specific challenges are discussed in this section.

The Role of Pre-Silicon Interoperability. There are many parts to each of the standards involved in multi-die design. The entire system is designed concurrently, resulting in all layers going through design and debug at the same time. Like the work done on CXL, “shift-left” strategies are explored here to allow testing and validation to be done before fabrication. The figure below illustrates the relation of the various specifications.

UCIe – A Multi Layered Subsystem

UCIe Verification Challenges. Some of the unique challenges to the verification environment are discussed here; a concept-level sketch follows the list below. Topics covered include:

  • D2C (data-to-clk) Point Testing
  • PLL Programming Time
  • Length of D2C Eye Sweep Test
  • Number of D2C Eye Sweep Tests
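To make the trade-off concrete, here is a hypothetical, concept-level sketch of a D2C eye sweep: step the sampling phase across one unit interval and record pass/fail at each point to locate the open eye. The data rate, eye boundaries, and function names are invented for illustration; this is not the Cadence/Intel verification environment or any UCIe PHY API.

```python
# Hypothetical, concept-level sketch of a D2C (data-to-clock) eye sweep.
# All numbers and names below are invented for illustration only.

def sample_passes(phase_offset_ps: float) -> bool:
    """Stand-in for one D2C point test; assume the eye is open
    between 12 ps and 48 ps of phase offset in this toy example."""
    return 12.0 <= phase_offset_ps <= 48.0

UI_PS = 62.5    # one unit interval at an assumed 16 GT/s lane rate
STEP_PS = 2.5   # sweep granularity; finer steps lengthen the simulation

offsets = [i * STEP_PS for i in range(int(UI_PS / STEP_PS) + 1)]
passing = [p for p in offsets if sample_passes(p)]

if passing:
    print(f"Eye open from {min(passing)} ps to {max(passing)} ps "
          f"(width {max(passing) - min(passing)} ps) in a {UI_PS} ps UI")
else:
    print("No open eye found at this sweep granularity")
```

The length of each sweep and the number of sweeps, multiplied across lanes and compounded by PLL programming time, is precisely what makes these simulations expensive, which is why the sweep parameters are called out as verification challenges.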

UCIe Simulation Logistics. For this project, the Cadence UCIe advanced package PHY model with x64 lanes was used for pre-silicon verification with Intel’s UCIe vectors. Topics covered include:

  • Initial Interoperability
  • Simulation – Interoperability over UCIe
  • Controller Simulation Interoperability

The piece concludes with UCIe Benefits to the Wider Community.

To Learn More

If multi-die design is in your future, you need to understand the UCIe standard. And more importantly, you need to know what strategies exist for early interoperability validation. The white paper from Cadence and Intel is a must read. You can get your copy here. And that’s how Intel and Cadence collaborate to advance the all-important UCIe standard.

Also Read:

Overcoming Verification Challenges of SPI NAND Flash Octal DDR

The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation

The Future of Logic Equivalence Checking


WEBINAR: Workforce Armageddon: Onboarding New Hires in Semiconductors

WEBINAR: Workforce Armageddon: Onboarding New Hires in Semiconductors
by Daniel Nenni on 09-02-2024 at 6:00 am

CHIPQUEST WEB1

The semiconductor industry is undergoing an unprecedented inflection—not in its technology, but in its very structure. This transformation is happening at a time of phenomenal growth, presenting both opportunity and crisis. The ingredient most critical to meeting the growth demands, but which also poses the greatest risk, is workforce. There will not be nearly enough skilled workers to fill all roles. The history of such industrial inflections suggests many companies will under-prepare, and then over-react. To their detriment.

This webinar addresses a key, often overlooked, and perhaps unexpected ingredient to weathering such crises—employee onboarding.

Join us at 13:00 EST on September 5th, 2024, hosted by Chipquest. Register here.

The Compounding Forces Behind the Workforce Crisis

This workforce crisis is not driven by one or two independent factors, but by several compounding forces that are reshaping the industry landscape:

Expanded Demand Across Multiple Fronts: The demand for semiconductors is skyrocketing across various sectors:

  • Consumer Electronics: More laptops, smartphones, and other devices are being produced than ever before.
  • Data Centers: The surge in digital transformation during the COVID-19 pandemic has increased the need for server farms to support cloud computing, e-commerce, and streaming platforms.
  • IoT and Automotive: The proliferation of IoT devices and the shift toward electric and autonomous vehicles are driving an exponential increase in use cases.
  • Artificial Intelligence: AI and machine learning applications are generating a new wave of need for advanced, high-performance chips.

Supply Chain Redundancy and Geopolitical Tensions: Geopolitical tensions have led to a push for on-shoring, reshoring, or near-shoring semiconductor manufacturing:

  • Companies like TSMC and Amkor are expanding their manufacturing footprint to countries where they never had a presence before.
  • This duplication of infrastructure requires additional skilled workers, further stretching the already limited talent pool.

Technology Sovereignty as National Security: The global race for semiconductor supremacy has become a matter of national security:

  • Governments are investing heavily in domestic semiconductor capabilities. Newcomers like India and Vietnam are entering the semiconductor race, intensifying competition for talent.
  • The CHIPS and Science Act and similar initiatives in other nations aim to secure technology sovereignty, further escalating the need for skilled professionals.

Workforce Dynamics and a Changing Labor Landscape: The semiconductor workforce is already greatly reduced from its earlier peak, and the industry is facing a significant workforce gap due to early retirements, layoffs, and competition from other tech sectors:

  • A net exodus of workers due to layoffs, early retirements and pilfering of key talent by adjacent industries.
  • Declining interest in manufacturing roles, particularly among younger demographics.

What’s Being Done—and What’s Missing

Public-private partnerships, government funding, and renewed focus on education and apprenticeships are all steps in the right direction. While these initiatives do create a more knowledgeable pool to draw from, they do not serve to integrate new workers into the actual workplace, where the immensity of systems, procedures, and policies can readily overwhelm them.

A New Approach: Modernized Onboarding and Training

One critical aspect that continues to be overlooked is the effectiveness of onboarding and training within individual companies. Traditional methods—relying on static PDFs and uninspiring safety training—fail to engage new employees. This not only leads to costly mistakes but also impacts retention rates.

To address these challenges, the semiconductor industry needs innovative solutions that can modernize onboarding and training. Methods like gamification and microlearning offer a glimpse into how training can become more engaging and effective, better aligning with the expectations of today’s digital-native workforce.

Join Us to Learn More

The semiconductor industry is transforming, and companies must adapt their workforce strategies to stay competitive. Join Chipquest’s upcoming webinar, “Workforce Armageddon: Onboarding New Hires in Semiconductors,” to explore these critical challenges and the innovative solutions that can help your organization thrive.

Register now to secure your spot!

Also Read:

Elevate Your Analog Layout Design to New Heights

Introducing XSim: Achieving high-quality Photonic IC tape-outs

Synopsys IP Processor Summit 2024


Podcast EP244: A Review of the Coming Post-Quantum Cryptography Revolution with Sebastien Riou

Podcast EP244: A Review of the Coming Post-Quantum Cryptography Revolution with Sebastien Riou
by Daniel Nenni on 08-30-2024 at 10:00 am

Dan is joined by Sebastien Riou, Director of Product Security Architecture at PQShield. Sebastien has more than 15 years of experience in the semiconductor industry, focusing on achieving “banking grade security” on resource-constrained ICs such as smart cards and mobile secure elements. Formerly of Tiempo-Secure, he helped create the world’s first integrated secure element IP to achieve CC EAL5+ certification.

Sebastien discusses post-quantum cryptography and why the US Government’s National Institute of Standards and Technology (NIST) is pushing for implementation of new, quantum-resistant security now. Sebastien explains how the new standards are evolving and what dynamics are at play to deploy those standards across a wide range of systems, both large and small. The special considerations for open source are also discussed.

Sebastien describes the broad hardware and software offerings of PQShield and the rigorous verification and extensive documentation that are available to develop systems that are ready for the coming quantum computing threat to traditional security measures.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.