
Navigating the AI Horizon: Trends and Challenges in 2024

by Ahmed Banafa on 01-05-2024 at 6:00 am


Stepping into the year 2024, the landscape of artificial intelligence (AI) continues to evolve at an unprecedented pace, presenting both exciting opportunities and formidable challenges. In this era of technological advancement, we find ourselves at the intersection of innovation and responsibility, where emerging trends in AI are reshaping industries and influencing the way we live and work.

As we explore the future of AI, several compelling trends come to the forefront, each poised to leave a significant impact on technology and society. These trends include the promises of Quantum AI, the infusion of AI into creative processes, the transformation of work through augmented capabilities, the evolution of multi-modal AI, and the increasing emphasis on ethical considerations.

Emerging Trends:
  • Quantum AI: Quantum computing promises to solve problems intractable for classical computers, enabling the development of more powerful and efficient AI models. Potential applications include drug discovery, materials science, and climate modeling. Requires significant hardware and software advancements.
  • AI-Enhanced Creativity: AI algorithms will assist humans in creative endeavors like art, design, and writing. Tools will generate new ideas, personalize experiences, and create unique artistic expressions. Raises concerns about artistic originality and ownership.
  • Augmented Working: AI will automate repetitive tasks, freeing humans for higher-order thinking and collaboration. Tools will support project management, decision-making, and communication. Requires careful planning to minimize job displacement and ensure a smooth transition.
  • Next-Generation Multi-Modal AI: AI models will understand and process diverse data modalities like text, images, and audio. This will lead to more natural and intuitive human-computer interactions. Requires advances in data fusion, representation learning, and multi-modal architectures.
  • Ethical Considerations: Increasing focus on ethical considerations like data privacy, algorithmic bias, and potential misuse. Development of guidelines and regulations for responsible AI development and deployment. Transparency and accountability are crucial for building trust in AI.
Challenges and Risks:
  • Data Bias and Fairness: AI models can amplify existing biases in data, leading to discriminatory outcomes. Techniques need to be developed for ensuring fairness and accountability in AI systems. Requires diverse training data and robust bias detection algorithms.
  • Explainability and Transparency: Understanding how AI models make decisions is often difficult, hindering trust and accountability. Methods need to be developed for explaining AI decisions in a human-understandable way. Interpretable AI models and explainable AI frameworks are crucial.
  • Job displacement: Automation by AI can lead to widespread job displacement, particularly in routine tasks. Strategies for retraining and reskilling workers are essential. Investing in education and lifelong learning programs is crucial.
  • Security and Privacy: AI systems are vulnerable to attacks that can compromise data privacy and security. Robust security measures need to be developed to protect against malicious use of AI. Secure hardware and software, along with cybersecurity awareness, are essential.
  • Global AI Governance: As AI adoption accelerates globally, coordinated efforts are needed for responsible development. International standards and regulations need to be established for ethical AI governance. Collaboration between governments, industry leaders, and researchers is key.
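The "Data Bias and Fairness" bullet above calls for robust bias detection. As a minimal illustrative sketch (the function name and sample data are invented for this example, not drawn from any particular toolkit), a demographic-parity gap compares positive-prediction rates across groups:

```python
def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate between groups.

    A gap of 0 means every group receives positive predictions at the
    same rate; larger gaps flag potential bias worth investigating.
    """
    rates = {}
    for g in set(group):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" is approved 2 times in 3, group "b" only 1 in 3
gap = demographic_parity_gap([1, 0, 1, 1, 0, 0],
                             ["a", "a", "a", "b", "b", "b"])
```

In practice, a threshold on such a gap would trigger deeper auditing; production fairness toolkits offer far more complete metrics than this single number.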
Navigating the Future:
  • Investing in AI education and training: Equipping individuals with the skills and knowledge needed to understand, develop, and utilize AI responsibly. Educational programs and training initiatives should be accessible to all.
  • Prioritizing ethical AI development: Establishing clear ethical guidelines and best practices for AI development and deployment. Ensuring transparency, accountability, and fairness in AI systems. Building public trust through responsible AI development.
  • Fostering collaboration: Addressing the challenges of AI requires collaboration between researchers, policymakers, industry leaders, and the public. Open dialogue and information sharing are essential. Fostering an inclusive and collaborative AI ecosystem is crucial.
  • Promoting open-source AI: Open-source platforms can accelerate AI progress and ensure transparency and accessibility. Sharing knowledge and resources can benefit the entire AI community. Building open-source AI repositories and tools is important.
  • Investing in AI research: Continued research and development are essential for pushing the boundaries of AI and unlocking its full potential. Funding for basic and applied AI research is crucial. Supporting diverse research teams and promoting international collaboration is important.

By embracing emerging trends and addressing potential challenges, we can leverage AI for a better future. Building a responsible, ethical, and inclusive AI ecosystem is essential for ensuring that this powerful technology benefits all. As we navigate the AI horizon, let us strive to create a future where AI empowers humanity and builds a more equitable and sustainable world for all.

Ahmed Banafa’s books

Covering: AI, IoT, Blockchain and Quantum Computing

Also Read:

AI and Machine Unlearning: Navigating the Forgotten Path

The Era of Flying Cars is Coming Soon

AI and the Future of Work


2024 Outlook with Matt Burns of Samtec

by Daniel Nenni on 01-04-2024 at 6:00 am


Matt develops go-to-market strategies for Samtec’s Silicon to Silicon solutions. Over the course of 20+ years, he has been a leader in design, technical sales and marketing in the telecommunications, medical and electronic components industries. It has been an honor working with Matt and his team for the last 3 years and I value his input, absolutely.

Tell us a little bit about yourself and your company.

Samtec designs and manufactures high-performance interconnect solutions including copper connectors and cable assemblies, RF connectors and cable assemblies, and a growing portfolio of embedded optical transceivers. Currently, I lead a highly experienced team of marketing and technical professionals evangelizing Samtec’s high-performance interconnect solutions and services.

What was the most exciting high point of 2023 for your company?

You’d think that would be an easy question to answer, but I think the fact that we survived 2023 was the highlight. I find joy when we can provide our customers and partners the right interconnect solution for their specific challenge. AI HW system architectures are evolving quickly. I am sure SemiWiki.com readers see that at the chiplet, substrate or package level. At Samtec, we see it at the system level. Density, higher speeds and flexible, scalable interconnects are a must. In 2023, we were able to deliver new solutions for several application-specific AI implementations. As we move into 2024, we are challenged to develop new solutions for the next generation of AI systems. We can’t wait!

What was the biggest challenge your company faced in 2023?

Surviving. There were a lot of headwinds in 2023. Supply chains obviously normalized post-pandemic. Inflation seems to be coming under control. Global conflicts seem to be on the rise, and that can disrupt business cycles. There are many more.

How is your company’s work addressing this biggest challenge?

Samtec is very flexible and nimble. We can respond to challenges, whatever they may be, pretty quickly. Our focus is always on the customer. It’s a culture deeply ingrained in our ethos of finding solutions to the challenges our customers and partners face. As we continually execute to answer those challenges, long-term relationships and trust are formed. Obviously, we are not immune to business cycles. However, our laser focus on serving the customer over the long term is always a winning strategy.

What do you think the biggest growth area for 2024 will be, and why?

I can’t predict the future, but I wouldn’t be surprised if AI from the data center to the intelligent edge continues to provide new opportunities. The release of ChatGPT was obviously an inflection point. LLMs and related AI applications are going to grow in complexity. The need for more computing is not going to slow down. New AI chipsets are emerging. Linking those chipsets is what we do best.

How is your company’s work addressing this growth?

It’s all about getting data from Point A to Point B as quickly and as reliably as possible. We are introducing our expanded 224 Gbps PAM4 product portfolio at the upcoming DesignCon 2024 trade show. Our 112 Gbps PAM4 solutions are still being designed into new products. We see new applications for our embedded optical transceivers, even in the data center. Our expanding 18-110 GHz RF cable, cable assembly and connector portfolio targets semiconductor development and test applications as well.

What conferences did you attend in 2023 and how was the traffic?

Coming out of the pandemic, I was curious to see if engineers really wanted to live in a virtual or digital world most of the time. I was blown away at how many attendees we saw at our trade shows and conferences around the world. North America and EMEA led the way in 2022. We started seeing the same trends with growing attendance in APAC events in 2H23. Engineers want to see new technology. They want to interact with the thought leaders and technologists that make it happen. Trade shows and conferences are essential for this. So we added a few events to our trade show schedule in 2023. Key annual events for Samtec include DesignCon, OFC, embedded world, the global PCI-SIG DevCons, IMS, ECOC, OCP, SuperComputing, and the AI Hardware and Edge AI Summits, among others.

Will you attend conferences in 2024? Same or more?

In general, we will be at about the same number of conferences and tradeshows in 2024. Going into the new year, we see some US-based shows like the AI Hardware and Edge AI Summit going to Europe. embedded world is coming to Austin in October 2024. electronica is always a big deal in Munich. We take advantage of those and more.

Additional questions or final comments?

It’s always a pleasure talking with the SemiWiki.com team. Thank you for your hard work keeping the semiconductor and interconnect industries updated and informed so thoroughly.

Thank you for kicking off this series of interviews, Matt. I look forward to seeing you at DesignCon.
Also Read:

Samtec Welcomes You to the Future with Proven 224G PAM4 Interconnect Solutions

Samtec Increases Signal Density Again with Analog Over Array™

Samtec Innovates a New Approach for High-Frequency Analog Signal Propagation


CEO Interview: Oshri Cohen of Cybord

by Daniel Nenni on 01-03-2024 at 10:00 am


Oshri Cohen is an executive with 20+ years of management experience in operations, engineering, and supply chain at multinational high-tech companies. Before joining Cybord, Oshri headed supply chain at Nvidia Networks. Prior to that, Mr. Cohen led global procurement and logistics at Mellanox (acquired by Nvidia). Oshri holds a BSc in Industrial Engineering from the Ruppin Academic Center and an MSc in Systems Engineering from the Technion.

Tell us about your company?

Cybord is pioneering the use of visual AI to ensure electronic component quality, authenticity, and traceability, and is leading the AI revolution in the electronics manufacturing industry. Cybord’s technology analyzes 100% of the components directly on the assembly line, helping OEMs and EMSs ensure that only reliable, defect-free components are integrated into products, with unmatched 99.9% accuracy. By providing manufacturers with complete transparency, Cybord helps identify defects, counterfeits, and irregularities that could otherwise go unnoticed.

Our solution empowers OEMs and EMSs to minimize costly scrapping, rework, and recalls, while ensuring only reliable, defect-free components are used in production. Designed to be both cost-effective and highly efficient, Cybord is transforming the electronics manufacturing industry by uncovering the “unknown unknowns” and driving unparalleled reliability across the supply chain.

What role does AI have in your product?

AI is at the heart of Cybord’s solution, transforming the way manufacturers ensure the quality and reliability of electronic components. By applying big data and AI across the assembly line, Cybord gives manufacturers unparalleled visibility into their production processes. Leveraging a vast and growing database of billions of unique electronic components, our machine learning model constantly evolves and learns from a diverse data set spanning multi-vendor environments and high-volume production lines across a variety of industries, providing accurate analysis and detecting quality, authenticity, and traceability issues within milliseconds. The visual AI engine analyzes data from 100% of electronic components on the assembly line, inspecting images of both the top and bottom of each component to detect and alert on even the most subtle abnormalities, including defects, damage, corrosion, and structural irregularities; verify component origins; and prevent defective parts from advancing in the manufacturing line, all in real time.

What problems are you solving?

Electronic components are the lifeblood powering everything from telecoms to automobiles to data centers. By addressing the risk of faulty, damaged, and low-quality electronic components entering production lines, Cybord is the first line of defense for OEMs and EMSs to safeguard product quality and ensure brand reputation. Additionally, geopolitical factors around the world have led to “point of origin” restrictions being levied on components. For instance, Country A may impose restrictions on automobiles that prohibit the use of any components sourced from Country B. Cybord can identify and corroborate a component’s country of origin and original manufacturer, providing OEMs and EMSs with the ability to verify authenticity and standard compliance to align with any governmental regulations and industry benchmarks.

What application areas are your strongest?

Cybord’s solutions serve all industries where electronic component reliability is key (i.e., all industries that rely on electronic circuit boards), including data centers and servers, telecoms, medical devices, automotive, aerospace, and more. One of Cybord’s standout features is its ability to provide “MRI-style” detection and inspection, allowing us to detect even the most subtle abnormalities in electronic components. For example, in automotive applications, we ensure that smart vehicles perform safely and reliably, while in telecommunications, we guarantee continuous network performance. In medical devices, our solutions help maintain the highest standards of component integrity for life-critical applications. These are just a few of many examples of how Cybord is verifying the quality and traceability of the electronics that are so essential to the fabric of modern life.

What keeps your customers up at night?

Recalls—plain and simple. For an OEM, a recall is the worst-case scenario. Not only do they involve a hefty financial cost to address, but they also pose significant reputational damage that can take years to recover from. The disruption to operations, customer trust, and the long-term impact on brand value are substantial.

Cybord helps mitigate these risks by preventing recalls and minimizing their reach when they do happen. Our AI-driven solution ensures that every component is inspected for defects, authenticity, and traceability before it is placed on a board. By providing real-time insights and accurate analysis, Cybord gives OEMs the peace of mind that their products are safe, secure, and of the highest quality. This leads to fewer defects, reduced rework costs, and fewer recalls—allowing companies to focus on innovation without the constant worry of quality control failures.



Alphawave Semiconductor Powering Progress

by Daniel Nenni on 01-03-2024 at 6:00 am


Do you know who had another great year? Alphawave Semi did. Despite being relatively young in the industry (founded in 2017), the company has quickly gained recognition for its advancements in high-speed connectivity solutions.

They specialize in developing high-speed connectivity solutions for data centers, AI, 5G wireless infrastructure, data networking, autonomous vehicles, and solid-state storage. Their focus on advanced connectivity and signal processing technologies has positioned them as a key player in the industry.

Alphawave Semi is all about pushing the boundaries of high-speed connectivity. They design and develop semiconductor IP (Intellectual Property) solutions that enable fast and efficient data transfer within electronic devices. Their expertise lies in working closely with customers and partners to develop advanced connectivity and signal processing technologies.

It has been an honor to work with Alphawave Semi CEO Tony Pialis and his growing team of top-notch professionals. As far as semiconductor ecosystem CEOs go, I would put Tony in my top 10, absolutely.

In my opinion chiplets will be one of the most disruptive semiconductor technologies and Alphawave Semi will be one of the companies leading the way starting with:

IO Chiplets

Reconfigurable ZeusCORE100 SerDes IO with integrated protocol controllers, security IP, and AresCORE (D2D UCIe) IP that enables up to 1.6T of throughput at MR, XLR, and PCIe/CXL reaches.

  • Medium Reach Optical Driver Chiplet
  • Extra Long Reach Ethernet Chiplet
  • Combo PCIe/CXL/Ethernet Chiplet
  • 1.6T high speed IO Chiplet
Accelerator Chiplets

High-performance, Arm®- or RISC-V-based accelerator chiplets that enable data acceleration through Arm or RISC-V multi-core accelerator SoCs.

Memory Chiplets

Low-latency, high-speed DDR5 memory controller; includes a multi-core CPU with L1 and L2 caches.

Chiplets can deliver significant cost reductions by shortening chip design time and making more efficient use of wafers, while improving flexibility and scalability in both design and manufacturing.

While the adoption of chiplets may involve initial challenges in terms of design and integration, the long-term benefits, particularly the ability to keep Moore’s Law moving along, more than justify the investment.

Bottom line: To further reduce the overall risk of implementing a chiplet based design strategy, working with ecosystem experts like Alphawave Semi is critical, absolutely.

Alphawave Semiconductor in 2023:

Alphawave Semi Partners with Keysight to Deliver Industry Leading Expertise and Interoperability for a Complete PCIe 6.0 Subsystem Solution

Alphawave Semi Elevates Chiplet-Powered Silicon Platforms for AI Compute through Arm Total Design

Alphawave Semi Spearheads Chiplet-Based Custom Silicon for Generative AI and Data Center Workloads with Successful 3nm Tapeouts of HBM3 and UCIe IP

Alphawave Semi Expands Collaboration with Samsung, Adds 3nm Connectivity IP to Meet Accelerated AI and Data Center Demand

Alphawave Semi Showcases 3nm Connectivity Solutions and Chiplet-Enabled Platforms for High Performance Data Center Applications

Alphawave IP Achieves Its First Testchip Tapeout for TSMC N3E Process

About Alphawave Semi

Alphawave Semi is a global leader in high-speed connectivity for the world’s technology infrastructure. Faced with the exponential growth of data, Alphawave Semi’s technology services a critical need: enabling data to travel faster, more reliably and with higher performance at lower power. We are a vertically integrated semiconductor company, and our IP, custom silicon, and connectivity products are deployed by global tier-one customers in data centers, compute, networking, AI, 5G, autonomous vehicles, and storage. Founded in 2017 by an expert technical team with a proven track record in licensing semiconductor IP, our mission is to accelerate the critical data infrastructure at the heart of our digital world. To find out more about Alphawave Semi, visit: awavesemi.com.

Also Read:

Unleashing the 1.6T Ecosystem: Alphawave Semi’s 200G Interconnect Technologies for Powering AI Data Infrastructure

Disaggregated Systems: Enabling Computing with UCIe Interconnect and Chiplets-Based Design

Interface IP in 2022: 22% YoY growth still data-centric driven


Will the Package Kill my High-Frequency Chip Design?

by Bryan Preble on 01-02-2024 at 6:00 am


Understanding the electromagnetic (EM) coupling between various elements of a high-frequency semiconductor device is vital for meeting design specifications and ensuring reliable operation in the field. These EM interactions include not only the silicon chip but also extend to the package that encloses it. However, it may be only towards the end of a project that the IC or systems designer gets around to creating and simulating EM models that include both on-die metals as well as the package layers. It is not uncommon to find that including the package layers with the on-die metals model degrades performance to the point that specifications are violated. To avoid this, Ansys provides a solution that can easily add package layers to a silicon technology’s metal stack-up in order to extract complete models with both on-silicon and package layers early in the design process.

Ansys’ suite of on-chip electromagnetic analysis tools operates on IC layouts at the pre-LVS design stage (Ansys RaptorX™) and the post-LVS signoff stage (Ansys Exalto™). The chip analysis can include portions of the package layout and/or package layers to extract a complete EM model that can be simulated with a SPICE circuit simulator. The Ansys tools rely on precise information about the interconnect process technology used in the manufacture of each layer. Process information is provided by silicon foundries in various formats, including Design Rule Manuals (DRMs) and technology files – such as iRCX, ITF, and ICT files – that may be unencrypted or encrypted. The process for capturing the technology stack-up compiles a collection of Ansys format technology files by mapping foundry-provided process technology information onto physical layout information in OpenAccess or GDSII stream format (see Figure 1). These compiled technology files also support other Ansys on-chip EM tools including Ansys VeloceRF™ (inductive device layout synthesis) and Ansys RaptorQu™ (for superconducting quantum design).

RaptorX is a silicon-optimized electromagnetic solver, and it comes with a very useful wizard called Process Configurator that makes it easy to create and modify Ansys technology files, even for complex chip-package configurations. As shown in Figure 1, Process Configurator creates Ansys technology files that can contain just the foundry metal stack-up or can contain the foundry metal stack-up plus selected additional package layers. The input to the Process Configurator wizard for the foundry metal stack-up is the process information provided by the foundry. If die and package layers need to be co-extracted, then the package layer information for the layers of interest also needs to be included.

Figure 1: The Ansys Process Configurator wizard gives designers easy control of the chip-package configuration and enables what-if analyses

If the foundry technology file is unencrypted, or the package layer information is unencrypted, the Process Configurator wizard will let you explore various process-related “what-if” scenarios by editing the properties of the die and/or package layers and compiling different versions of the Ansys technology files. The Process Configurator allows designers to add or subtract substrates, backplanes, conductors, dielectrics, and vias including Through-Silicon Vias (TSV). The technology properties that can be edited with Process Configurator are metal thickness, metal conductivity, dielectric thickness, and dielectric constant. In order to complete the Ansys technology files the compiler also requires the GDS stream layer map file and the layer mapping information.

Some examples of modifying an unencrypted technology for “what-if” experiments include:

  • modifying the substrate thickness and properties to explore effects of coupling through the substrate
  • adding TSVs in an exploratory 3DIC stack up
  • setting up a technology file for Wafer-on-Wafer (WoW) technology

  • adding package layers to see their effect on the EM device, as will be shown in the following example

The input files and information for Process Configurator can be processed using both a UI and a batch-mode command script. The outputs of Process Configurator are the compiled Ansys process technology files used by the Ansys EM tool suite. The Process Configurator has the very useful capability to visualize a technology cross-section, which makes it easy to verify the correct sequence and connectivity of the technology layers. Unencrypted technology layer properties like thickness, resistivity, and dielectric constant are also displayed in the cross-section viewer. If the technology is encrypted, then the cross-section viewer shows the layer sequence and connectivity, but the layer thicknesses are not to scale, and material properties are not reported.

Figure 2 below shows a stack-up of a fictional example technology file. The left panel displays the substrate characteristics on the bottom layer, the cumulative layer height starting from the substrate, the layer and via names on the left, and the dielectric thickness and dielectric constant (εr) on the right. The Conductor section in the right panel lists the conductors with their thickness and resistivity (ρ), and the Vias section shows the via resistance and area.

Figure 2: Example of Process Configurator display of an unencrypted silicon stack-up with all parameters reported and conductor thicknesses shown to scale

The red box in Figure 3 below highlights a via and package layer that have been added to the stack-up. This stack-up, with the package layer and via included, was used for the simulation results described in the following paragraphs that show how the package layer can affect the performance of an EM device.

Figure 3: Example of an unencrypted silicon stack-up with added package layers highlighted in the red box

To illustrate how Process Configurator can be used to explore the effect of a package on a chip, we created a simple layout example: it consists of an EM device (a single-ended octagonal spiral inductor) that was extracted using RaptorX. The resulting electrical model was then simulated in a SPICE-level circuit simulator to analyze the performance first with, and then again without, a package layer placed above it. Figure 4 below shows RaptorX’s physical mesh for the inductor without the package layer.

Figure 4: Ansys RaptorX’s physical mesh for the inductor without a package layer

Next, the same inductor was used, but a rectangle of the package layer was placed above it. Figure 5 below shows the RaptorX mesh of the inductor with the package layer included.

Figure 5: Ansys RaptorX’s physical mesh for the inductor including a covering package layer

RaptorX generated an S-parameter model for each inductor, and both models were then simulated for inductance and quality factor across a frequency range. Figure 6 shows the inductance of the two inductors plotted across frequency. At 3 GHz, the model with the package layer included (green) shows a 28% decrease in inductance and a 33% decrease in resonance frequency versus the model without the package layer (red).

Figure 6: Inductance over frequency plot showing the significant impact of adding package layers to the simulation

In Figure 7 below, the quality factor (Q) of the two inductors is plotted across frequency. The model with the package layer included (green) shows a 38% decrease in peak Q and a 21% decrease in the frequency of peak Q versus the model without the package layer (red).

Figure 7: Quality factor over frequency plot showing the significant impact of adding package layers to the simulation
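The inductance and quality-factor curves above are derived from the extracted one-port S-parameter models. As a rough sketch of the underlying math (this is generic RF arithmetic, not Ansys tooling; the 50 Ω reference impedance and the ideal 1 nH test inductor are assumptions for illustration), S11 converts to effective inductance and Q as follows:

```python
import numpy as np

def inductor_metrics(s11, freq_hz, z0=50.0):
    """Convert one-port S-parameters to effective inductance and Q."""
    s11 = np.asarray(s11, dtype=complex)
    freq_hz = np.asarray(freq_hz, dtype=float)
    # Input impedance seen at the port: Z = Z0 * (1 + S11) / (1 - S11)
    z_in = z0 * (1 + s11) / (1 - s11)
    # Effective series inductance: L = Im(Z) / (2*pi*f)
    inductance = np.imag(z_in) / (2 * np.pi * freq_hz)
    # Quality factor: Q = Im(Z) / Re(Z)
    q_factor = np.imag(z_in) / np.real(z_in)
    return inductance, q_factor

# Sanity check with an ideal 1 nH inductor plus 1 ohm series resistance
f = np.array([1e9, 3e9])
z = 1.0 + 1j * 2 * np.pi * f * 1e-9
L, Q = inductor_metrics((z - 50.0) / (z + 50.0), f)
```

Running both extracted models (with and without the package layer) through a conversion like this yields the curves compared in Figures 6 and 7.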

In summary, these simulation results illustrate the stark changes in device behavior that are seen when package layers are included in a simulation. Modeling package layers together with on-die metals can reveal degradation in performance that may violate a specification or cause the device to fail. Ansys has developed the Process Configurator to make it very easy for IC and System designers to capture even the most complex multi-layer packaging configurations and to facilitate quick experimentation. It encourages a shift-left approach with early what-if exploration to help designers find the best possible solution for optimizing their final product and avoid late-stage surprises.

Also Read:

Keynote Speakers Announced for IDEAS 2023 Digital Forum

Ansys Revving up for Automotive and 3D-IC Multiphysics Signoff at DAC 2023

Keynote Sneak Peek: Ansys CEO Ajei Gopal at Samsung SAFE Forum 2023


2024 Big Race is TSMC N2 and Intel 18A

by Daniel Nenni on 01-01-2024 at 6:00 am


There is a lot being said about Intel getting the lead back from TSMC with their 18A process. Like anything else in the semiconductor industry there is much more here than meets the eye, absolutely.

On the surface, TSMC has a massive ecosystem and is in the lead in both process technology and foundry design starts, but Intel is not to be ignored. Remember, Intel first brought us High-K Metal Gate, FinFETs, and many more innovative semiconductor technologies. One of the latest is backside power delivery (BPD). BPD can certainly bring Intel back to the forefront of semiconductor manufacturing, but we really need to take it in proper context.

Backside power delivery refers to a design approach where power is delivered to the back side of the chip rather than the front side. This approach can have advantages in terms of thermal management and overall performance. It allows for more efficient heat dissipation and can contribute to better power delivery to the chip components. It’s all about optimizing the layout and design for improved functionality and heat distribution.

Backside power delivery has been talked about at conferences, but Intel will be the first company to bring it to life. Hats off to Intel for yet another incredible step in keeping Gordon Moore’s vision alive.

SemiWiki blogger Scotten Jones talks about it in more detail in his article: VLSI Symposium – Intel PowerVia Technology. You can see other new Intel technology revelations here on SemiWiki: https://semiwiki.com/category/semiconductor-manufacturers/intel/.

TSMC and Samsung will of course follow Intel into backside power delivery a year or two behind. The one benefit TSMC has is the sheer force of customers that intimately collaborate with TSMC, ensuring its success, not unlike TSMC’s packaging success.

Today any comparison between Intel and TSMC is like comparing an apple to a pineapple; they are two completely different things.

Right now Intel makes CPU chiplets internally and outsources supporting chiplets and GPUs to TSMC at N5-N3. I have not heard about an Intel TSMC N2 contract as of yet. Hopefully Intel can make all of their chiplets internally at 18A and below.

Unfortunately, Intel does not have a whale of a customer for the Intel foundry group as of yet. Making chiplets internally does not compare to TSMC manufacturing complex SoCs for whales like Apple and Qualcomm. If you want to break up the BPD competition into two parts, internal chiplets and complex SoCs, that is fine. But to say Intel is a process ahead of anybody while only doing chiplets is disingenuous, in my opinion.

Now, if you want to do a chiplet comparison let’s take a close look at Intel versus AMD or Nvidia as they are doing chiplets on TSMC N3 and N2. Intel might actually win this one, we shall see. But to me if you want the foundry process lead you need to be able to make customer chips in high volume.

Next you have to consider what the process lead means if you don’t have customer support. It will be one of those ribbons on the wall, one of those notes on Wikipedia, or a press release like IBM does. It will not be the billions of dollars of HVM revenue that everybody looks for. Intel needs to land some fabless semiconductor whales to stand next to TSMC; otherwise they will stand next to Samsung or IBM.

Personally I think Intel has a real shot at this one. If their version of BPD can be adopted by customers in a reasonable amount of time, it could be the start of a new foundry revenue stream versus the NOT TSMC business I have mentioned before. We will know in a year or two, but for me this is the exciting foundry competition we have all been waiting for, so thank you Intel and welcome back!

There is an interesting discussion in the SemiWiki forum on TSMC versus Intel in regards to risk taking. I hope to see you there:

Intel vs TSMC in Risk Taking

Also Read:

IEDM Buzz – Intel Previews New Vertical Transistor Scaling Innovation

Intel Ushers a New Era of Advanced Packaging with Glass Substrates

How Intel, Samsung and TSMC are Changing the World

Intel Enables the Multi-Die Revolution with Packaging Innovation



Podcast EP200: Dan and Mike’s Top Ten List For the Semiconductor Industry
by Daniel Nenni on 12-29-2023 at 10:00 am

Dan is joined by podcast producer and collaborator Mike Gianfagna for Semiconductor Insiders episode 200. Dan and Mike look over the past two years (and 200 podcasts) to develop a top ten list of changes and innovation in the semiconductor industry. There is a lot of back-story detail on each topic in this far-reaching discussion.

The topics discussed are:

Dan: Innovation and advances at TSMC, Intel, and SMIC, changes in the automotive industry, and the RISC-V movement.

Mike: Semiconductors becoming a household word, the explosion of AI, the impact of AI on chip design, and the CEO change at Synopsys.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



The Transformation Model for IP-Centric Design
by Kalar Rajendiran on 12-29-2023 at 6:00 am


Semiconductor designs have been progressing over time to address wider product varieties and designs of increasing complexity. Organizations have been addressing intense time-to-market pressures by leveraging globally dispersed team resources. The project-centric design methodology, which once worked well with smaller projects and longer timelines, is struggling to meet the demands of the modern semiconductor landscape. It creates isolated silos, discourages reuse, and fuels redundancies. Traceability becomes difficult at best, and global collaboration strains under the weight of manual coordination and disparate data management systems. Mistakes and design gaps have become prohibitively expensive, and the global distribution of design teams presents new complexities, especially with dynamic geopolitical realities.

More efficient design practices are needed to achieve ambitious goals while controlling costs and meeting time-to-market demands. One promising solution is the IP-Centric Design approach.

IP-Centric Design: A Paradigm Shift

An IP-Centric Design approach reframes the entire design process, placing reusable intellectual property (IP) blocks at the heart of the development cycle. These pre-verified, optimized modules become the building blocks of new chips, fostering a host of benefits. Re-purposing proven IP allows design teams to leapfrog past repetitive tasks and costly respins, propelling them ahead of the competition. A centralized repository of IP fosters seamless collaboration and end-to-end project tracking, streamlining workflows and ensuring accountability. Pre-optimized IP guarantees reliability and performance, translating into robust, dependable products that consumers trust. IP-Centric Design is inherently scalable, effortlessly adapting to accommodate ever-growing design footprints and larger product portfolios.

A Roadmap for IP-Centric Design

The switch from a project-centric to an IP-Centric Design methodology will not happen overnight, and transitioning is not without its challenges. It’s a mindset shift, not merely a methodology change. Legacy systems, ingrained habits, and cultural resistance may pose initial hurdles. Different teams may have varying levels of maturity in specific areas, and implementing the model requires careful planning and organizational buy-in.

Perforce has published a whitepaper presenting a Transformation Model that provides a blueprint for organizations to navigate this journey. The Model describes five key levels of transformation that organizations have to undergo in order to successfully achieve IP-Centric Design methodology.

Level One: Embracing IP-Centric Design

Design blocks are modeled as intellectual properties (IPs), each with a list of dependencies. This creates a versionable, hierarchical model of the design outside regular data management tools. The definition of IP broadens, encompassing not only pre-packaged design blocks from third-party providers but any block in the design, including those specific to a project, shared between projects, or delivered by central teams.
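As a minimal sketch of this idea (not Perforce’s actual data model; the IP names and fields here are invented for illustration), design blocks can be modeled as versioned objects with explicit dependencies, forming a hierarchical view of the design that lives outside the underlying data management tool:

```python
# A minimal sketch (not Perforce's actual model) of Level One: design blocks
# modeled as versioned IPs, each with a list of dependencies, forming a
# hierarchical model of the design independent of the underlying data store.
from dataclasses import dataclass

@dataclass(frozen=True)
class IP:
    name: str
    version: str
    dependencies: tuple = ()  # other IP objects this block depends on

    def flatten(self, seen=None):
        """Walk the hierarchy, yielding each unique IP once."""
        seen = set() if seen is None else seen
        if (self.name, self.version) in seen:
            return
        seen.add((self.name, self.version))
        yield self
        for dep in self.dependencies:
            yield from dep.flatten(seen)

# Hypothetical design hierarchy for illustration.
serdes = IP("serdes_phy", "2.1.0")
noc = IP("noc_fabric", "1.4.2", (serdes,))
soc = IP("soc_top", "0.9.0", (noc, serdes))

print([f"{ip.name}@{ip.version}" for ip in soc.flatten()])
# → ['soc_top@0.9.0', 'noc_fabric@1.4.2', 'serdes_phy@2.1.0']
```

Even this toy model captures the key Level One property: any block, whether third-party, shared, or project-specific, is addressed by name and version, and the full hierarchy can be resolved from the top-level IP.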

Level Two: Discovery & Reuse

The modeled IPs, independent of underlying data management, evolve into a dynamic catalog. The catalog allows users to search, filter, and comprehend available IPs based on metadata, irrespective of their location and content. This leads to seamless IP discovery and standardizes project configuration, embedding traceability into the design process. Teams can now make informed build-vs-buy decisions, fostering a culture of reuse that streamlines design efforts.

Level Three: Enterprise Development at Scale

Level Three addresses the challenge of scaling the system to meet the needs of a widespread community of users around the globe. The system must be architected for horizontal scaling, allowing teams to add hardware resources as the team and user numbers grow. Local or near-local response times are critical for effective IP discovery and reuse, especially as different teams collaborate across projects and design centers.

Level Four: Built-In Traceability

The central system becomes the single source of all data and metadata pertaining to designs. Users and workflows can find any design data and metadata they need, ensuring full traceability. This level is crucial for industries governed by standards, where compliance requires proof and provenance of designs. Effective integrations at this level enable organizations to confirm adherence to standards like ISO 26262, ITAR, or CFR21.

Level Five: Planning at Platform Scope

The highest level of IP-Centric Design involves modeling all components of a platform in a unified system. It goes beyond tracking work-in-progress to provide a platform for planning projects and anticipating upcoming needs. This level enables users across the enterprise to view existing IPs and plans for IPs in progress, fostering collaboration, influencing designs, and streamlining efforts across design teams.

Summary

The Transformation Model for IP-Centric Design emerges as a strategic blueprint for success to overcome the challenges of exponential complexity, global collaboration, and intense time-to-market pressures. The journey to IP-Centric Design may seem daunting, but the rewards are undeniable. By taking the first step and partnering with trusted solution providers, organizations can navigate this journey and unlock the full potential. Perforce offers semiconductor solutions that provide the foundation, structure, and scalability necessary for successful implementation. Visit the product page.

Organizations leveraging Perforce can expect improved collaboration, accelerated design cycles, informed build-vs-buy decisions, and streamlined efforts across design teams. Click here to request information.

Also Read:

Chiplets and IP and the Trust Problem

Insights into DevOps Trends in Hardware Design

IP Lifecycle Management for Chiplet-Based SoCs



Achieving a Unified Electrical/Mechanical PCB Design Flow – The Siemens Digital Industries Software View
by Mike Gianfagna on 12-28-2023 at 10:00 am


Let’s face it, designs are getting harder, much harder. Gone are the days when the electrical and mechanical design of a system occurred separately. Maybe ten years ago this practice was acceptable. Once the electrical design was completed (either the chip or the board), the parameters associated with the design were then given to the package or PCB design team to implement the physical delivery of the design. The handoff was done once, and each team lived in its own world. Those days are gone. With current design complexity, the electrical design impacts the mechanical design in subtle ways. Similarly, the mechanical design of the system, including things like the choice of materials, has a profound impact on what is possible electrically. One must break down the walls and collaborate continuously, or accept the likelihood of project overruns and failure. A comprehensive and informative white paper was recently published on this topic for PCB design. Read on to understand achieving a unified electrical/mechanical PCB design flow – The Siemens Digital Industries Software view.

Why Now?

Entitled Unifying ECAD-MCAD PCB design through collaborative best practices, the Siemens white paper begins by setting the stage for why a unified PCB design flow is so important now. Most SemiWiki readers will be familiar with this trend. The overall demands for PCB design have also been discussed in detail in this SemiWiki post.  The new Siemens white paper cites four trends in electronic design that are making a unified flow so urgent:

Compute power: Since the advent of the microprocessor, there’s been an astronomical increase in the compute power that chips can deliver – a trillion-fold over six decades. Given the slowing of Moore’s Law, future performance gains in semiconductors will be driven by, among other factors, advanced packaging flows.

Engineering discipline convergence: The “smaller, denser, faster” mantra associated with today’s products is magnifying the importance of ensuring that electromechanical compatibility is addressed prior to the first fabrication – waiting until manufacture to validate electronic and mechanical compatibility clearly leaves things too late.

Sustainability: The environmental impact of the manufacture of electronic devices is starting to get more scrutiny, as is the worldwide energy consumption of devices during their working life. This one is quite important to Siemens.

AI in electronics design: The fourth trend is the rise of AI in electronics design. AI might be considered a product of electronics, yet AI can also help with electronics design.

The white paper goes into a lot more detail on these topics. Links are coming so you can see the whole picture, as well as learn more about the Siemens approach.

What’s Needed?

The white paper covers a lot of ground. Here are some of the topics that are examined:

The importance of ECAD-MCAD collaboration: An integrated ECAD/MCAD collaboration environment enables electrical and mechanical design teams to work together throughout the entire design process in real time. And this can spell the margin of victory for a complex design project. The specific benefits of a well-integrated approach are discussed.

Ways ECAD-MCAD collaboration can be improved: A lot of engineering development teams still struggle to break free of legacy practices, which were perfectly good in their day but fall short in the present day. Specific approaches to improving the process are discussed.

A multi-discipline, multi-domain workflow supporting real-time collaboration

Keys to successful ECAD-MCAD collaboration: Efficient collaboration between ECAD and MCAD domains enables both to optimize an electronics design within tight form-factor constraints while still meeting quality, reliability, and performance requirements. In this section, specific approaches to design methodology and data sharing are presented. The goal is a multi-discipline, multi-domain workflow that supports real-time collaboration, as illustrated in the figure.

A toolkit for collaborative engineering: Now that some of the reasons collaboration is so important and some of the obstacles to its adoption have been discussed, this section looks at the solutions available to support ECAD-MCAD collaboration.

Accelerating PCB design: The Siemens Xcelerator business platform ecosystem is presented, with details of scope, capabilities, and benefits for design teams worldwide.

To Learn More

If you’re involved in complex system design this white paper is a must read. You can access the full text here.  There is also a great podcast from the authors of the white paper available here.  You can now learn about achieving a unified electrical/mechanical PCB design flow – The Siemens Digital Industries Software view.


Will Chiplet Adoption Mimic IP Adoption?

Will Chiplet Adoption Mimic IP Adoption?
by Eric Esteve on 12-28-2023 at 6:00 am

Adoption theory

If we look at the semiconductor industry expansion during the last 25 years, adoption of design IP in every application appears to be one of the major factors of success, along with silicon technology’s incredible development by nearly a factor of 100, from 250nm in 1998 to 3nm (if not 2nm) in 2023. We foresee the move to chiplet-based architecture soon playing the same role that SoC chip-based architecture and massive use of design IP played in the 2000s.

The question is how to precisely predict the chiplet adoption timeframe and what the key enablers for this revolution will be. We will see if the diffusion of innovation theory can help fine-tune a prediction and determine what type of application will be the driver. Chip-to-chip interconnect protocol standard specifications, which allow fast industry adoption and serve driving applications like AI or smartphone application processors, seem to be the top enabler, but EDA tool efficiency, new packaging technologies, and dedicated fab creation, among others, are certainly key.

Introduction: emergence of chiplet technology

During the 2010s, the benefits of Moore’s law began to fall apart. Moore’s law stated that as transistor density doubled every two years, the cost of compute would shrink by a corresponding 50%. The breakdown is due to increased design complexity and the evolution of transistor structure from planar devices to FinFETs, which require multiple-patterning lithography to achieve device dimensions at nodes below 20nm.

At the end of this decade, computing needs exploded, mostly due to the proliferation of datacenters and the amount of data being generated and processed. In fact, the adoption of Artificial Intelligence (AI) and techniques like Machine Learning (ML) to process ever-increasing data has led servers to significantly increase their compute capacity. Servers have added many more CPU cores, have integrated larger GPUs used exclusively for ML rather than graphics, and have embedded custom ASIC AI accelerators or complementary FPGA-based AI processing. Early AI chip designs were implemented as large monolithic SoCs, some of them reaching the size limit imposed by the reticle, about 700mm2.

At this point, disaggregation into a smaller SoC plus various compute and IO chiplets appears to be the right solution. Several chip makers, like Intel, AMD, or Xilinx, have selected this option for products going into production. In the excellent white paper from The Linley Group, “Chiplets Gain Rapid Adoption: Why Big Chips Are Getting Small”, it was shown that this option leads to better costs compared to monolithic SoCs, due to the yield impact of larger dies. These chip makers have designed homogeneous chiplets, but the emergence and adoption of interconnect standards like the Universal Chiplet Interconnect Express (UCIe) IP is easing the adoption of heterogeneous chiplets.
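The yield argument can be sketched with a simple Poisson yield model, Y = exp(−A·D0). The defect density and die areas below are illustrative, not actual fab data or figures from the Linley Group paper:

```python
# Illustrative sketch of why smaller dies cost less per good die.
# Poisson yield model: Y = exp(-A * D0), with hypothetical numbers.
import math

def die_yield(area_cm2, d0_per_cm2):
    """Probability that a die of a given area is defect-free."""
    return math.exp(-area_cm2 * d0_per_cm2)

d0 = 0.1  # defects/cm^2 (hypothetical)

# One 700 mm^2 monolithic SoC versus four 175 mm^2 chiplets.
mono_yield = die_yield(7.0, d0)      # ≈ 0.50
chiplet_yield = die_yield(1.75, d0)  # ≈ 0.84 per chiplet

# Ignoring packaging and test overhead, relative silicon cost per good die
# scales like 1 / yield, so the large monolithic die pays a steep premium.
print(f"monolithic relative cost: {1 / mono_yield:.2f}")
print(f"chiplet relative cost:    {1 / chiplet_yield:.2f} per chiplet")
```

In practice the chiplet approach adds packaging, test, and die-to-die interconnect costs, which any real comparison, like the Linley Group analysis, has to weigh against the yield gain.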

The evolution of the newer, faster, protocol standards is picking up speed as the industry keeps asking for higher performance. Unfortunately, the various standards are not synchronized by a single organization. New PCIe standards can come one year (or more) earlier or later than the new Ethernet protocol standard. Using heterogeneous integration allows silicon providers to adapt to the fast-changing market by changing the design of the relevant chiplet only. Considering advanced SoC design fabrication requires massive capital expenditures for 5nm, 4nm or 3nm process nodes, the impact of chiplet architectures is tremendous to drive future innovation in the semiconductor space.

Heterogeneous chiplet design allows us to target different applications or market segments by modifying or adding just the relevant chiplets while keeping the rest of the system unchanged. New developments could be launched more quickly, with significantly lower investment, as a redesign will only impact the relevant chiplet and the package substrate used to house the chiplets. For example, the compute chiplet can be redesigned from TSMC 5nm to TSMC 3nm to integrate a larger L1 cache, higher-performing CPUs, or more CPU cores, while keeping the rest of the system unchanged. A chiplet integrating SerDes can be redesigned for faster rates on new process nodes, offering more IO bandwidth for better market positioning.

Using heterogeneous chiplets offers better Time-to-Market (TTM) when updating a system, since the parts of the system that do not change can be reused as-is if they are designed as chiplets. It is also a way to minimize cost, by keeping some functional chiplets on less advanced nodes that are cheaper than the most advanced ones. But the main question is: when will chiplet technology create a significant segment of the semiconductor market? We will review the IP adoption history, as chiplets and IP are similar; both had to break the NIH (Not-Invented-Here) syndrome to become successful. We will extract the main causes of chiplet adoption and build a forecast, using the innovation theory and its defined categories (Innovators, Early Adopters, etc.; see Figure below).

Figure 1: Innovation Theory (Reminder)

We will review ARM CPU IP adoption from 1991 to 2018 and IP adoption history from 1995 to 2027, and check how well these adoption rates track with the Innovation Theory.

We will explain why chiplet adoption will be boosted, reviewing the technology and marketing related reasons:

  • From IP-based SoC to chiplet-based system
  • Interoperability, thanks to chiplet interconnect preferred protocol standard
  • Explaining why high-end Interface IP are key for Chiplet adoption
  • Design-related challenges to solve.
  • Last but not least, investment made by foundry

Finally, we can build a tentative chiplet adoption forecast, based on innovation theory. Just to mention, the industry just moved in the “Early adopters” phase, seeing numerous IP and chiplet vendors serving HPC and AI.

If you download the white paper, you will enjoy the full text and numerous pictures, some of them created exclusively for this work.

By Eric Esteve (PhD.) Analyst, Owner IPnest

Alphawave sponsored the creation of this white paper, but the opinions and analysis are those of the author. The article can be found here:

https://awavesemi.com/resource/will-chiplet-adoption-to-mimic-ip-adoption/

Also Read:

Unleashing the 1.6T Ecosystem: Alphawave Semi’s 200G Interconnect Technologies for Powering AI Data Infrastructure

Disaggregated Systems: Enabling Computing with UCIe Interconnect and Chiplets-Based Design

Interface IP in 2022: 22% YoY growth still data-centric driven