
Petri Nets Validating DRAM Protocols. Innovation in Verification

by Bernard Murphy on 05-01-2023 at 6:00 am


A Petri nets blog scored highest in engagement last year. This month we review application of the technique to validating an expanding range of JEDEC memory standards. Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Fast Validation of DRAM Protocols with Timed Petri Nets. The authors presented the paper at the 2019 MEMSYS conference and are from Fraunhofer and TU Kaiserslautern in Germany.

JEDEC standards for memory protocols describe the complexities of command behaviors and timing through a mix of state machine diagrams, tables, and timing diagrams. Validating an implementation through simulation depends on creating meaningful tests and checks through manual comparison with the standard document. JEDEC itself acknowledges that this reference, a combination of FSMs, tables and timing diagrams, is not fully complete, which makes automated test generation problematic. This paper uses Timed Petri Nets to provide a full model of the DRAM states, the logical command dependencies and the internal timing dependencies of the system under test, from which a complete SystemC reference model can be generated automatically and used as a reference with popular DRAM simulators for verification.

In addition to the value of the ideas, this paper provides a useful intro to the mechanics of DRAM operation for novices like me!

Paul’s view

This is an easy to read, self-contained paper, providing a great example of the opportunity for domain specific languages in design and verification. In this paper the authors tackle the problem of verifying DDR interfaces. They build on prior art describing the DDR3 protocol using a Petri Net, a wonderful graph-based notation for visually representing interactions between concurrent state machines.

The authors’ key contribution is to upgrade this prior art to a “Timed” Petri Net containing additional arcs and annotations to model the timing dependences between commands in the DDR protocol standard. They create a nice little textual language, DRAMml, for describing these Timed Petri Nets which is able to represent the complete DDR3 command protocol on one page. Nice!

They also develop a compiler for DRAMml to generate simulatable SystemC code which can be used as a “golden” model as a reference for verification. As final icing on the cake, they use their golden model to find a bug in DRAMSys, a well cited popular DRAM simulator in the literature. I would be really interested to see if this work could be applied to other protocols such as PCIe or Ethernet.

Raúl’s view

This is the first paper on the validation of dynamic memory controllers for DDR that we have reviewed in this series. JEDEC has issued 19 standards since the original DDR in 2000, including DDR5 and HBM3. It is easy to see that the specification of such memories – with multiple banks, 2¹² to 2¹⁸ rows, where each row can store 512 B to 2 KB of data, plus precharge and refresh requirements – can get very complex, particularly regarding the timing requirements. As an example, the authors cite that modeling the complete state space of a DDR4 with 16 banks requires 65,536 states with over a billion transitions.
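As a quick sanity check on the scale of those numbers (my own back-of-the-envelope arithmetic; I am assuming the 65,536 figure arises from the cross product of 16 banks each tracked as a two-state machine, which the blog text does not state explicitly):

```python
# Back-of-the-envelope check on the DDR4 state-space figures quoted above.
# Assumption (mine, not from the paper): 65,536 states is the cross product
# of 16 two-state (e.g. idle/active) banks, and the transition count is
# bounded by ordered pairs of states.
banks = 16
states = 2 ** banks
print(states)           # 65536
print(states * states)  # 4294967296, i.e. "over a billion" transitions
```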

To simplify and formalize memory specifications, the paper builds models using extended Petri Nets. Petri nets are bipartite graphs with places and transitions connected by weighted directed arcs (section 3.2). They are extended by:

-> [t1, t2]   timed arc with guard [t1, t2], meaning the transition can only fire in that time interval

->>           reset arc, which clears a place of all tokens

-o            inhibitor arc, which prevents a transition from firing

With such extensions Petri Nets become as powerful as Turing Machines. These Petri Nets model DRAMs with reasonable complexity, e.g. 4 places and 8 transitions, plus a place and 6 transitions per bank. Power can be modeled directly; timing is a bit more complicated, additionally requiring timing dependencies between transitions.
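To make these extensions concrete, here is a minimal Python sketch of a timed Petri net with inhibitor and reset arcs (my own illustration, not the authors’ implementation; the place names and the tRCD value of 12 cycles are invented for the example):

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    inputs: dict                                  # place -> tokens consumed
    outputs: dict                                 # place -> tokens produced
    window: tuple = (0, float("inf"))             # timed-arc guard [t1, t2]
    inhibitors: set = field(default_factory=set)  # -o arcs: block if marked
    resets: set = field(default_factory=set)      # ->> arcs: clear on firing

class TimedPetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)              # place -> token count

    def enabled(self, t, age):
        t1, t2 = t.window
        if not t1 <= age <= t2:                   # timing guard [t1, t2]
            return False
        if any(self.marking.get(p, 0) for p in t.inhibitors):
            return False                          # inhibitor arc blocks firing
        return all(self.marking.get(p, 0) >= w for p, w in t.inputs.items())

    def fire(self, t, age=0):
        assert self.enabled(t, age)
        for p, w in t.inputs.items():
            self.marking[p] -= w
        for p in t.resets:                        # reset arc clears the place
            self.marking[p] = 0
        for p, w in t.outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# Hypothetical DRAM fragment: RD is only enabled tRCD cycles after ACT.
net = TimedPetriNet({"bank_idle": 1})
act = Transition(inputs={"bank_idle": 1}, outputs={"row_open": 1})
rd = Transition(inputs={"row_open": 1}, outputs={"row_open": 1},
                window=(12, float("inf")))        # tRCD = 12 cycles (assumed)
net.fire(act)
print(net.enabled(rd, age=5))    # False: firing now would violate tRCD
print(net.enabled(rd, age=12))   # True
```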

The paper then goes on to define a DSL (domain specific language) called DRAMml, using the MPS software from JetBrains, to describe these Petri Nets and convert them to SystemC. The generated executable model was simulated with several DRAM simulators, namely DRAMSim2, DRAMSys, Ramulator, and the DRAM controller in gem5, uncovering a timing violation in DRAMSys.
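The full DRAMml-to-SystemC flow is beyond a blog snippet, but the code-generation idea can be illustrated with a toy: a hypothetical one-line-per-rule timing grammar compiled into a Python protocol checker (everything here is invented for illustration; the command names and delay values are example numbers, not taken from any JEDEC standard):

```python
import re

# Toy spec in an invented mini-grammar: "timing PREV -> NEXT : cycles".
SPEC = """
timing ACT -> RD  : 12   # tRCD-style gap (example value, not from JEDEC)
timing ACT -> PRE : 30   # tRAS-style gap (example value)
"""

def compile_spec(spec):
    """Parse the spec into {(prev_cmd, next_cmd): min_delay} rules."""
    rules = {}
    for line in spec.splitlines():
        m = re.match(r"\s*timing\s+(\w+)\s*->\s*(\w+)\s*:\s*(\d+)", line)
        if m:
            rules[(m.group(1), m.group(2))] = int(m.group(3))
    return rules

class Checker:
    """Reference-model-style checker: flags commands issued too early."""
    def __init__(self, rules):
        self.rules, self.last_seen = rules, {}

    def command(self, cmd, cycle):
        for (prev, nxt), delay in self.rules.items():
            if nxt == cmd and prev in self.last_seen:
                if cycle - self.last_seen[prev] < delay:
                    return f"violation: {cmd} needs {delay} cycles after {prev}"
        self.last_seen[cmd] = cycle
        return "ok"

chk = Checker(compile_spec(SPEC))
print(chk.command("ACT", 0))    # ok
print(chk.command("RD", 5))     # violation: RD needs 12 cycles after ACT
print(chk.command("RD", 20))    # ok
```

In the paper the equivalent generated artifact is a SystemC model used as a golden reference against existing simulators; the toy above only shows why generating the checker from a formal spec, rather than hand-writing it from the standard document, removes a manual comparison step.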

I found the paper interesting and easy to read, given that I have worked with Petri Nets before. Following the DRAM specs in detail is more for the specialist but can be educational. The claim that DRAMml “describes all the timing, state and command information of the JEDEC standards in a formal, short, comprehensive, readable and understandable format” is not obvious. It requires an understanding of Petri Nets, which may be a barrier to adoption of the methodology, despite improved simplicity and expressive power. It would be interesting to know what JEDEC thinks of this approach, since in principle it should allow them to provide, or at least build, definitive reference models for new standard releases.


Podcast EP158: The Benefits of a Unified HW/SW Architecture for AI with Quadric’s Nigel Drego

by Daniel Nenni on 04-28-2023 at 10:00 am

Dan is joined by Nigel Drego, the CTO and Co-founder at Quadric. Nigel brings extensive experience in software and hardware design to his role at Quadric. Nigel is an expert in computer architectures, compiler technology, and software frameworks.

Dan explores the unique and unified HW/SW architecture developed by Quadric with Nigel. The benefits of a single architecture programmable approach to on-chip AI is explained, along with specific examples of how to adapt the system to various AI processing challenges.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Ravi Thummarukudy of Mobiveil

by Daniel Nenni on 04-28-2023 at 6:00 am

Ravi Thummarukudy

Mobiveil Marks 11th Anniversary

Ravi Thummarukudy is Mobiveil’s Chief Executive Officer and a founder. He and I recently spent an enjoyable afternoon getting acquainted as I learned more about Mobiveil. It’s an inspiring story of a technology company in the semiconductor space helping customers and prospering.

Eleven-year-old Mobiveil is noted for its silicon IP, application platforms and engineering services for Flash Storage, data center, 5G, AI/ML, automotive and IoT applications. In those 11 years, Mobiveil thrived and continues to do so with around 500 employees located around the world – Mobiveil has R&D centers in Silicon Valley, Bangalore, Chennai, Hyderabad, and Rajkot in India.

Product Development teams at the world’s largest product companies in U.S., Europe, China, Japan, Korea, Israel, and Taiwan have leveraged Mobiveil’s IP blocks, hardware platforms or its specialized engineering services to accelerate their innovation and product development schedule. Mobiveil is unique in its business by continuously investing in R&D to develop reusable components and platforms to increase the value add for its engineering capabilities.

What’s your background? What about your co-founders’ backgrounds?

All the founders of Mobiveil have electronics engineering backgrounds with 20-plus years’ experience in the electronic product development marketplace, working for either system OEMs or their supply chains, such as EDA or semiconductor companies. Another thing in common among the founders is our passion for new product development and the opportunity to accelerate this process.

This is the second company we founded together as a leadership team. We founded GDA Technologies in the late 1990s and grew it to an organization of more than 600 employees before it was acquired by L&T Infotech.

I received a Master of Science degree in electrical engineering from IIT Chennai and an MBA from Santa Clara University, and worked at the Indian Space Research Organization (ISRO), Tata Consultancy Services and Cadence Design Systems before venturing into entrepreneurship.

Did you see yourself as an entrepreneur?

Like many people working in Silicon Valley, becoming an entrepreneur was always on my mind, and my co-founders had similar aspirations. During my time at Cadence, I learned the contours of the electronics industry as well as customer-centered business practices. I also established deep working relationships in the EDA and semiconductor industry. The industry was primed for growth due to the standardization of design languages and the advent of IP-centric SoC design methodologies. When we started the first company, it was a boom time in Silicon Valley: VCs were funding several semiconductor startups and there was tremendous demand for outsourced engineering, which helped us scale the business with no external investment. This jumpstart allowed us to chart our own destiny with our engineering intuitions and scale the business, targeting our investments toward the disruptions that were taking place in the marketplace.

After we had a successful exit from our first venture, we worked at large public companies but again came together for this venture we named Mobiveil. I would say that we are passionate about this business and enjoy being serial entrepreneurs.

What convinced you and your co-founders to start Mobiveil?

It’s our firm conviction that no company, big or small, can do their entire R&D by themselves. There was a time when product companies were fully vertically integrated from product definition to manufacturing. The cost and efficiency offered by specialized companies drove product companies to slowly outsource manufacturing, chip development, EDA tools and finally IP. And once the industry became standardized, outsourcing EDA tools, IP and engineering services became a no-brainer. We helped this process by offering capabilities in the U.S. as well as from India that met our customers’ needs at affordable cost then, and that continues even today.

Mobiveil began by targeting the mobile apps space and later moved into the storage area. Why the shift?

Our first company was acquired in 2008, and when we started to look for a restart several years later, the main themes of the time were mobility and smartphones, and we wanted to contribute to this space. However, we soon realized that our passion belonged in product development and soon pivoted back to silicon IP, platforms, and engineering services.

During this time, storage technology was transforming from hard disks to Flash storage and the NVM Express standard emerged. We quickly developed NVM Express IP and got it certified by the University of New Hampshire (UNH). That enabled us to help many of our customers accelerate the hard disk to flash, or SATA to NVMe, transformation. This trend was further accelerated by the exponential growth of data center and cloud service providers who needed the latency and throughput that PCI Express and NVM Express offered. Over the years, we developed many IP blocks as well as acquired IP assets in data storage, and augmented them with standard platforms and specialized engineering services.

One other example is CXL technologies. We were one of the first companies to develop CXL design IP and get interoperability with Intel’s Sapphire Rapids platform. Today we have several silicon IP blocks around high-speed interfaces and error-correction technologies for memory and storage along with several readymade FPGA platforms. We also hold a few important patents in this space.

How has the SIP market changed and evolved over the past 10 years? How much have the market dynamics changed?

One of my first jobs at Cadence in the 1990s was to work with companies like NEC, LSI Logic, and Toshiba to convince them to use Cadence EDA tools instead of their internally developed tools and methodologies. Even though these customers loved their fully customized and locally supported EDA tools, those tools did not stand a chance against more sophisticated and ever-improving third-party EDA tools and the economies of scale they offered.

When we started the first company, we were convinced that SoC customers sooner or later would outsource standard IP to third parties, and ventured into standards-based IP like HyperTransport, RapidIO and PCI Express. Today, using third-party IP and engineering services is as common a practice as getting a chip manufactured at TSMC or Samsung foundries.

While the IP industry grew as a separate segment, it was quickly absorbed by the EDA companies as it was an adjacent market for them. They started offering many of the standards-based IP along with their EDA tools. Conversely, independent IP companies became specialized and are offering highly complex IP to differentiate themselves from the EDA companies.

Our approach is to focus on technology verticals like data storage and 5G and offer a portfolio of digital IP combined with engineering services as our major differentiator.

How important are industry standards for IP? Is Mobiveil active in standards organizations?

Standardization is extremely important for IP. In fact, standardization is the main reason the EDA and SIP industries were created. The huge success of Verilog and VHDL design languages and PCIe, USB, DDR, Ethernet protocols and electrical standards fueled the growth of EDA and IP business along with outsourced engineering services.

Standardization allows engineers working from anywhere in the world to design standard IP components in an EDA environment that can be quickly and easily integrated into SoCs. The democratization of the semiconductor industry was further strengthened by the advent of open-source initiatives like RISC-V and the availability of lower-cost manufacturing and engineering talent, primarily in Asian countries.

Mobiveil currently participates in several standard bodies such as PCIe SIG, MIPI Alliance and the NVM Express Consortium. Being part of these standard bodies and accelerating the adoption of these technologies early is a critical strategy for our growth.

What’s next for Mobiveil?

For Mobiveil, we continue to be passionate about helping our customers realize their products faster and cheaper. And to that extent, we continue to invest in innovation that creates standards-based IP blocks and platforms augmented by specialty services. We are confident of the growth of this industry and our ability to scale the company for greater success in the years to come. We have had several successes in entering a new space like Flash storage and making valuable contributions.

Looking forward, I see Mobiveil growing and becoming a major contributor of IP and services in 5G wireless where we have an initiative to develop mathematics-based data path IP like data encoding, decoding and transformation. We are also developing a platform for offering 5G services for private cells (licensed and unlicensed band) and gateways.  On the AI front, we focus on computer vision, image recognition and processing. For geographic expansion, we will soon open our office in Munich, Germany, where we plan to focus on 5G wireless, automotive, and industrial automation including robotics.

Also Read:

Developing the Lowest Power IoT Devices with Russell Mohn

CTO Interview: Dr. Zakir Hussain Syed of Infinisim

CEO Interview: Axel Kloth of Abacus


TSMC 2023 North America Technology Symposium Overview Part 5

by Daniel Nenni on 04-27-2023 at 10:00 am

Global Footprint

TSMC also covered manufacturing excellence. The TSMC “Trusted Foundry” tagline has many aspects to it, but manufacturing is a critical one. TSMC is the foundry capacity leader but there is a lot more to manufacturing as you will read here. Which brings us to the manufacturing accomplishments from the briefing:

To meet customers’ growing demand, TSMC has accelerated its fab expansion rate:
  • From 2017 to 2019, TSMC built around 2 phases of fabs on average per year.
  • From 2020 to 2023, the average will significantly increase to around 5 phases per year.
  • In the past two years, TSMC started the construction of 10 new phases in total, including 5 phases of wafer fabs in Taiwan, 2 phases of advanced packaging fabs in Taiwan, and 3 phases of wafer fabs overseas.
  • The overseas capacity of 28nm technology and below will be 3X larger in 2024 than it was in 2020.
  • In Taiwan, phases 5, 6, and 8 of Fab 18 in Tainan are the base of TSMC’s N3 volume production. In addition, TSMC is preparing new fabs, Fab 20 in Hsinchu and a new site in Taichung, for N2 production.
  • In the US, TSMC is planning for 2 fabs in Arizona:
    • The first fab, for N4, has started tool move-in, and volume production will begin in 2024.
    • The second fab is under construction now and is planned for N3 production.
    • Combined capacity for both fabs will reach 600K wafers per year.
  • In Japan, TSMC is building a fab in Kumamoto to provide foundry services for 16/12nm and 28nm family technologies to address strong global market demand for specialty technologies. Construction of this fab has begun and volume production will be in 2024.
  • In China, a new phase for 28nm technology started volume production in 2022.

TSMC’s leadership on advanced technology defect density (D0) and defective parts per million (DPPM) has demonstrated its manufacturing excellence.

    • The process complexity of N5 is much higher than N7, but N5’s yield improvement has been even better than N7’s at the same stage.
    • TSMC’s N3 technology has demonstrated industry-leading yield in high-volume production, and its D0 performance is already on par with N5 at the same stage.
    • TSMC’s N7 and N5 technologies have demonstrated industry-leading DPPM in products including smartphones, computers, and cars, and TSMC is confident that N3 DPPM will catch up with N5 very soon.

3DFabric™ Manufacturing

    • By leveraging TSMC’s industry-leading 3DFabric™ manufacturing, customers can overcome the challenges of system-level design complexity and speed up product innovation.
    • The CoWoS and InFO families reached high yields soon after entering volume production.
    • The integrated yield of SoIC combined with advanced packaging will achieve the same level as the CoWoS and InFO families.
Green Manufacturing
    • To achieve the goal of net zero emissions by 2050, TSMC continues to evaluate and invest in all types of opportunities to reduce greenhouse gas emissions.
    • In 2022, TSMC’s direct greenhouse gas emissions have significantly dropped to 32% from 2010 levels.
    • This was achieved through reducing process gas consumption, replacing global warming potential gases, installing on-site abatement systems, and improving gas removal efficiency.

TSMC aims to double production energy efficiency for every process node after five years of volume production.

    • For N7 technology, the energy efficiency improved by 5X in the fifth year of its volume production.
    • For N5 technology, TSMC expects to see energy efficiency improvement by 5X by 2024.
    • TSMC has built an innovative chiller system with AI capabilities, which significantly contributed to improving cooling energy efficiency.

Last year, TSMC’s first water reclamation plant in southern Taiwan started supplying 5,000 metric tons of water per day. Today, it’s 20,000 tons per day.

    • By 2030, TSMC’s tap water consumption per unit of production will be reduced to 60% of 2020 level.
    • At TSMC Arizona, TSMC plans to build an industrial water reclamation plant to help the company reach near-zero liquid discharge. When completed, TSMC Arizona will be the greenest semiconductor manufacturing facility in the U.S.

After attending a handful of conferences in 2023 I must say that the TSMC Technical Symposium was by far the best. I don’t know the final attendance numbers but more than 1,600 people registered to attend this event. The exhibit hall was very busy and well stocked with food. Quite a few of the companies we work with on SemiWiki were exhibiting and I was told that for the cost it had by far the best ROI of semiconductor conferences.

The TSMC Technical Symposium will next go to Austin, Boston, Taiwan, Europe, Israel, China, and Japan. TSMC certainly knows how to build an ecosystem of customers, partners, and suppliers, absolutely.

Also Read:

TSMC 2023 North America Technology Symposium Overview Part 1

TSMC 2023 North America Technology Symposium Overview Part 2

TSMC 2023 North America Technology Symposium Overview Part 3

TSMC 2023 North America Technology Symposium Overview Part 4


TSMC 2023 North America Technology Symposium Overview Part 4

by Daniel Nenni on 04-27-2023 at 8:00 am

TSMC Specialty Technology 2023

TSMC covered their specialty technologies in great detail. Specialty is what we inside the ecosystem used to call “weird stuff”, meaning non-mainstream and fairly difficult to do on leading edge processes. Specialty technologies will play an even more important part in semiconductor design with the advent of chiplets, where die from specialty processes can be integrated with mainstream process die.

Specialty processes also fill fabs. As you can see TSMC is pushing heavily on N6RF to fill the N7 fabs. Here is the lengthy list of specialty accomplishments from the media kit:

TSMC offers the industry’s most comprehensive specialty technology portfolio, covering Power Management, RF, CMOS image sensing, and much more for a broad range of applications:

  • Automotive
    • As the automotive industry moves toward autonomous driving, compute requirements are increasing at a very fast rate and need the most advanced logic technology. By 2030, TSMC expects that 90% of all cars will have ADAS functions, with L1, L2, and L2+/L3 each taking up 30% of that market.
    • In the past three years, TSMC rolled out ADEP (Automotive Design Enablement Platform) by offering industry-leading Grade-1 qualified N7A and N5A to unleash customers’ automotive innovation.
    • To give customers a head start on automotive product design before technology is auto-ready, TSMC introduced Auto Early as a steppingstone to enable an early design start and shorten product time-to-market.
      • N4AE, based on N4P, enables customers to start risk production in 2024.
      • N3AE serves as a steppingstone to N3A, which will be fully automotive qualified in 2025.
      • N3A, once qualified and released, will be the world’s most advanced automotive logic technology.
  • Advanced RF Technologies for 5G & Connectivity
    • In 2021, TSMC released N6RF with best-in-class transistor performance, including speed and power efficiency, based on our record-setting 7nm logic technology.
    • Combining superb RF performance with excellent 7nm logic speed and power efficiency, TSMC’s customers can enjoy a 49% power reduction on an RF SoC chip (half digital, half analog) through migration from 16FFC to N6RF, freeing up the power budget of mobile devices to support other growing features.
    • Today, TSMC announced the most advanced RF CMOS technology, N4PRF, that will be released in the second half of 2023.
      • Offers a 77% logic density increase and 45% logic power reduction at the same performance when moving from N6RF.
      • Also offers a 32% MOM capacitor density increase compared with its predecessor, N6RF.
  • Ultra-Low Power
    • TSMC’s ultra-low power solutions continue to drive Vdd reduction to push power saving, which is essential to electronics.
    • With continued technology enhancement to lower minimum Vdd from 0.9V at 55ULP to less than 0.4V in N6e, TSMC offers a wide range of voltage operation to enable dynamic voltage scaling design for optimal power/performance.
    • TSMC’s coming N6e solution can provide around 9X logic density with >70% power reduction vs. the N22 solution, an attractive solution for wearables.
  • MCU / Embedded Nonvolatile Memories
    • TSMC’s most advanced eNVM technology has progressed to 16/12nm FinFET-based technology, which allows customers to leverage superb performance in compute from FinFET transistors.
    • Due to the growing complexity of traditional floating gate-based eNVM or ESF3, TSMC has also heavily invested in new embedded memory technologies, such as RRAM and MRAM.
    • Both new technologies have now come to fruition, going into production at 22nm & 40nm nodes.
    • TSMC is planning for 6nm development.
  • RRAM: moved into 40/28/22 RRAM production during the first quarter of 2022.
    • TSMC’s 28RRAM is also progressing well, with reliable performance that is automotive capable.
    • TSMC is now developing the next generation 12RRAM, which is expected to be ready by the first quarter of 2024.
  • MRAM: 22MRAM started production in 2020 for IoT applications. Now TSMC is working with customers to bring MRAM technology to future automotive applications and expects to qualify for automotive Grade 1 in the second quarter of 2023.
  • CMOS Image Sensing
    • While the smartphone camera has been the main driving force of CMOS image sensing technology, TSMC expects that automotive cameras will drive the next wave of CIS growth.
    • To serve future sensor requirements and achieve even higher-quality and more intelligent sensing, TSMC has been working on multi-wafer stack solutions, demonstrating new sensor architectures such as stacked pixel sensors, the smallest footprint for global shutter sensors, event-based RGB fusion sensors, and AI sensors with integrated memory.
  •  Display
    • TSMC is focusing on higher resolution and lower power consumption for many new applications, driven by 5G, AI, and AR/VR.
    • The next generation high-end OLED panel will require more digital logic and SRAM content, and a faster frame rate. To address this need, TSMC is bringing its HV technology down to 28nm generation for better energy efficiency and higher SRAM density.
    • TSMC’s leading µDisplay on silicon technology can deliver up to 10X pixel density to achieve the higher resolution needed for near-eye displays like those used in AR and VR.

You can see more detailed descriptions of TSMC’s specialty offerings of MEMS Technology, CMOS Image Sensor, eFlash, MS/RF, Analog, HV, and BCD HERE.

Also Read:

TSMC 2023 North America Technology Symposium Overview Part 1

TSMC 2023 North America Technology Symposium Overview Part 2

TSMC 2023 North America Technology Symposium Overview Part 3

TSMC 2023 North America Technology Symposium Overview Part 5


TSMC 2023 North America Technology Symposium Overview Part 3

by Daniel Nenni on 04-27-2023 at 6:00 am

3DFabric Technology Portfolio

TSMC’s 3DFabric initiative was a big focus at the symposium, as it should be. I remember when TSMC first went public with CoWoS, the semiconductor ecosystem, including yours truly, let out a collective sigh, wondering why TSMC was venturing into the comparatively low-margin world of packaging. Now we know why, and it is absolutely brilliant!

In 2012 TSMC, together with Xilinx, introduced by far the largest FPGA available at that time, composed of four identical 28nm FPGA slices mounted side by side on a silicon interposer. They also developed through-silicon vias (TSVs), micro-bumps and redistribution layers (RDLs) to interconnect these building blocks. Based on its construction, TSMC named this IC packaging solution Chip-on-Wafer-on-Substrate (CoWoS).

This building-block-based and EDA-supported packaging technology has become the de facto industry standard for high-performance and high-power designs. Interposers, up to three stepper fields large, allow combining multiple die, die stacks and passives, side by side, interconnected with sub-micron RDLs. The most common applications have been combinations of a CPU/GPU/TPU with one or more high-bandwidth memories (HBMs).

In 2017 TSMC announced the Integrated Fan-Out (InFO) technology. Instead of the silicon interposer in CoWoS, it uses a polyimide film, reducing unit cost and package height, both important success criteria for mobile applications. TSMC has already shipped tens of millions of InFO designs for use in smartphones.

In 2019 TSMC introduced the System on Integrated Chip (SoIC) technology. Using front-end (wafer-fab) equipment, TSMC can align very accurately, then compression-bond designs with many narrowly pitched copper pads, to further minimize form-factor, interconnect capacitance and power.

Today TSMC has 3DFabric, a comprehensive family of 3D Silicon Stacking and Advanced Packaging Technologies. Here are the TSMC related accomplishments from the briefing:

  • TSMC 3DFabric consists of a variety of advanced 3D Silicon Stacking and advanced packaging technologies to support a wide range of next-generation products:
    • On the 3D Si stacking portion, TSMC is adding a micro bump-based SoIC-P in the TSMC-SoIC® family to support more cost-sensitive applications.
    • The 2.5D CoWoS® platform enables the integration of advanced logic and high bandwidth memory for HPC applications, such as AI, machine learning, and data centers. InFO PoP and InFO-3D support mobile applications and InFO-2.5D supports HPC chiplet integration.
    • SoIC stacked chips can be integrated in InFO or CoWoS packages for ultimate system integration.
  • CoWoS Family
    • Aimed primarily for HPC applications that need to integrate advanced logic and HBM.
    • TSMC has supported more than 140 CoWoS products from more than 25 customers.
    • All CoWoS solutions are growing in interposer size so they can integrate more advanced silicon chips and HBM stacks to meet higher performance requirements.
    • TSMC is developing a CoWoS solution with up to 6X reticle-size (~5,000mm2) RDL interposer, capable of accommodating 12 stacks of HBM memory.
  • InFO Technology
    • For mobile applications, InFO PoP has been in volume production for high-end mobile since 2016 and can house larger and thicker SoC chips in smaller package form factor.
    • For HPC applications, the substrateless InFO_M supports up to 500 square mm chiplet integration for form factor-sensitive applications.
  • 3D Silicon stacking technologies
    • SoIC-P is based on 18-25μm pitch μbump stacking and is targeted for more cost-sensitive applications, like mobile, IoT, client, etc.
    • SoIC-X is based on bumpless stacking and is aimed primarily at HPC applications. Its chip-on-wafer stacking schemes feature 4.5 to 9μm bond pitch and has been in volume production on TSMC’s N7 technology for HPC applications.
    • SoIC stacked chips can be further integrated into CoWoS, InFo, or conventional flip chip packaging for customers’ final products.
  • 3DFabric™ Alliance and 3Dblox Standard
    • At last year’s Open Innovation Platform® (OIP) Forum, TSMC announced the new 3DFabric™ Alliance, the sixth OIP alliance after the IP, EDA, DCA, Cloud, and VCA alliances, to facilitate ecosystem collaboration for next-generation HPC and mobile designs by:
      • Offering 3Dblox Open Standard,
      • Enabling tight collaboration between memory and TSMC logic, and
      • Bringing Substrate and Testing Partners into Ecosystem.
    • TSMC introduced 3Dblox™ 1.5, the newest version of its open standard design language to lower the barriers to 3D IC design.
      • The TSMC 3Dblox is the industry’s first 3D IC design standard to speed up EDA automation and interoperability.
      • 3Dblox™ 1.5 adds automated bump synthesis, helping designers deal with the complexities of large dies with thousands of bumps and potentially reducing design times by months.
      • TSMC is working on 3Dblox 2.0 to enable system prototyping and design reuse, targeting the second half of this year.

Above is an example of how TSMC 3DFabric technologies can enable an HPC chip. It also supports my opinion that one of the big values of the Xilinx acquisition by AMD was the Xilinx silicon team. No one knows more about implementing advanced TSMC packaging solutions than Xilinx, absolutely.

Also Read:

TSMC 2023 North America Technology Symposium Overview Part 1

TSMC 2023 North America Technology Symposium Overview Part 2

TSMC 2023 North America Technology Symposium Overview Part 4

TSMC 2023 North America Technology Symposium Overview Part 5


TSMC 2023 North America Technology Symposium Overview Part 2

by Daniel Nenni on 04-26-2023 at 8:00 pm

TSMC N3 Update 2023

The next topic I would like to cover is an update to the TSMC process node roadmap, starting with N3. As predicted, N3 will be the most successful node in the TSMC FinFET family. The first version of N3 went into production at the end of last year (Apple) and will roll out to other customers in 2023. There is a reported record number of N3x design starts in process, and from what I have heard from the IP ecosystem, that will continue.

Not only is N3 easy to design to, the PPA and yield are exceeding expectations. While I’m hearing good things about N2, I still think mainstream chip designers will stick with N3 for quite some time, and the ecosystem agrees.

Meanwhile, the competition is still working on 3nm. Intel 3 for foundry customers is still in process, and Samsung 3nm was skipped by all. I still have not heard of a successful tape-out to Samsung 3nm from a customer name that I recognize.

Here are the TSMC N3 accomplishments from the briefing:

  • N3 is TSMC’s most advanced logic technology and entered volume production in the fourth quarter of 2022 as planned; N3E follows one year after N3 and has passed technology qualification and achieved the performance and yield targets.
  • Compared with N5, N3E offers 18% speed improvement at the same power, 32% power reduction at the same speed, a logic density of around 1.6X, and a chip density of around 1.3X.
  • N3E has received the first wave of customer product tape-outs and will start volume production in the second half of 2023.
  • Today, TSMC is introducing N3P and N3X to enhance technology values and offer additional performance and area benefits while preserving design rule compatibility with N3E to maximize IP reuse.
  • For the first 3 years since inception, the number of new tape-outs for N3 and N3E is 2.5X that of N5 over the same period, because of TSMC’s technology differentiation and readiness.
  • N3P: Offers additional performance and area benefits while preserving design rule compatibility with N3E to maximize IP reuse. N3P is scheduled to enter production in the second half of 2024, and customers will see 5% more speed at the same leakage, 5-10% power reduction at the same speed, and 1.04X more chip density compared with N3E.
  • N3X: Expertly tuned for HPC applications, N3X provides extra Fmax gain to boost overdrive performance at a modest trade-off with leakage. This translates to 5% more speed versus N3P at drive voltage of 1.2V, with the same improved chip density as N3P. N3X will enter volume production in 2025.
  • Today, TSMC introduced the industry’s first Auto Early technology on 3nm, called N3AE. Available in 2023, N3AE offers automotive process design kits (PDKs) based on N3E and allows customers to launch designs on the 3nm node for automotive applications, leading to the fully automotive-qualified N3A process in 2025.

TSMC N3 will be talked about for many years. Not only did TSMC execute as promised, but the competition did not, so it really is a perfect semiconductor storm. The result is a very N3-focused industry ecosystem that will be impossible to beat, absolutely.

Here are the TSMC N2 accomplishments from the media briefing:

  • N2 volume production is targeted for 2025; N2P and N2X are planned for 2026.
  • Performance of the nanosheet transistor has exceeded 80% of TSMC’s technology target while demonstrating excellent power efficiency and lower Vmin, which is a great fit for the energy-efficient compute paradigm of the semiconductor industry.
    • TSMC has exercised N2 design collateral in the physical implementation of a popular ARM A715 CPU core to measure PPA improvement: Achieved a 13% speed gain at the same power, or 33% power reduction at the same speed at around 0.9V, compared to the N3E high-density 2-1 fin standard cell.
  • Part of the TSMC N2 technology platform, a backside power rail provides additional speed and density boost on top of the baseline technology.
    • The backside power rail is best suited for HPC products and will be available in the second half of 2025.
    • Improves speed by 10-12% by reducing IR drop and signal RC delay.
    • Reduces logic area by 10-15% by freeing up routing resources on the front side.

Remember, N2 is nanosheet-based, and nanosheets, unlike FinFETs, are not an open-source technology, so this is really going to be a challenge for design and the supporting ecosystem, which gives TSMC a very strong advantage. TSMC also mentioned what follows nanosheets, which I found quite interesting. I’m sure we will hear more about this at IEDM 2023:

  • Transistor architecture has evolved from planar to FinFET and is about to change again to nanosheet.
  • Beyond nanosheet, TSMC sees vertically stacked NMOS and PMOS, known as CFET, as one of the key process architecture choices going forward.
    • TSMC estimates the density gain would fall between 1.5 and 2X after factoring in routing and process complexity.
  • Beyond CFET, TSMC made breakthroughs in low dimensional materials such as carbon nanotubes and 2D materials which could enable further dimensional and energy scaling.

For the record, TSMC has deployed 288 distinct process technologies and manufactured 12,698 products for 532 customers and counting. There is no stopping this train so you might as well jump on with the rest of the semiconductor industry.

Also Read:

TSMC 2023 North America Technology Symposium Overview Part 1

TSMC 2023 North America Technology Symposium Overview Part 3

TSMC 2023 North America Technology Symposium Overview Part 4

TSMC 2023 North America Technology Symposium Overview Part 5


TSMC 2023 North America Technology Symposium Overview Part 1

by Daniel Nenni on 04-26-2023 at 6:00 pm

Advanced Technology Roadmap

The TSMC 2023 North America Technology Symposium happened today, so I wanted to start writing about it as there is a lot to cover. I will do summaries, and other bloggers will do more in-depth coverage on the technology side in the coming weeks. Having worked in the fabless semiconductor ecosystem for the majority of my 40-year semiconductor career, and having written about it since 2009, I may have a different view of things than other media sources, so stay tuned.

First, some items from the opening presentation. As I have mentioned before, AI is driving the semiconductor industry, and North America is leading the way with a reported 43% of the worldwide AI business. With AI you have 5G, since tremendous amounts of data have to be processed and communicated from the edge to the cloud and back, again and again.

Due to this tremendous industry driver, TSMC expects the global semiconductor market to approach $1 trillion by 2030 as demand surges from HPC-related applications with 40% of the market, smartphone at 30%, automotive at 15%, and IoT at 10%.
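Those segment shares are easy to translate into rough dollar figures against the ~$1 trillion 2030 estimate. A back-of-the-envelope sketch (my own arithmetic, not TSMC's; the listed shares total 95%, so I attribute the remainder to other segments):

```python
market_2030 = 1_000_000_000_000  # ~$1T global semiconductor market by 2030

# Segment shares as stated in the briefing
shares = {"HPC": 0.40, "Smartphone": 0.30, "Automotive": 0.15, "IoT": 0.10}

for segment, share in shares.items():
    print(f"{segment}: ${share * market_2030 / 1e9:.0f}B")

# The four listed shares total 95%; the remaining ~5% presumably
# covers other applications (consumer, industrial, etc.)
other = 1 - sum(shares.values())
print(f"Other: ${other * market_2030 / 1e9:.0f}B")
```

So HPC alone would be roughly a $400B market, larger than the entire semiconductor industry was not many years ago.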

Of course, in 2023 we will experience a revenue pothole, which C.C. Wei joked about. C.C. said he would not give a forecast this year since he was wrong in saying TSMC would again experience double-digit growth in 2023. It is now expected to be a single-digit decline, and it could be even worse than that if you believe other industry sources. Since the TSMC forecast is derived from customer forecasts, they were wrong too; there is plenty of blame to share and joke about, which C.C. did.

I still blame the pandemic for the horrible forecasting of late, truly a black swan event. Personally I think the foundry business and TSMC specifically is in the strongest position today so I have no worries whatsoever.

During C.C. Wei’s presentation I had flashbacks to when Morris Chang spoke at the symposiums. I see a lot of Morris in C.C., but I also see a very focused man who is not afraid to ask for purchase orders. I also see a much stronger competitive nature in C.C., and I would never want to be on the wrong side of that, absolutely.

“Our customers never stop finding new ways to harness the power of silicon to create innovations that shall amaze the world for a better future,” said Dr. C.C. Wei, CEO of TSMC. “In the same spirit, TSMC never stands still, and we keep enhancing and advancing our process technologies with more performance, power efficiency, and functionality so their pipeline of innovation can continue flowing for many years to come.”

I sometimes tell my family that I don’t want to talk about my accomplishments because it will seem like bragging and I’m much too humble to brag. This is actually true with TSMC so here are some of their accomplishments from the briefing:

  • Together with partners, TSMC created over 12,000 new, innovative products, on approximately 300 different TSMC technologies in 2022.
  • TSMC continues to invest in advanced logic technologies, 3DFabric, and specialty technologies to provide the right technologies at the right time to empower customer innovation.
  • As our advanced nodes evolve from 10nm to 2nm, our power efficiency has grown at a CAGR of 15% over a span of roughly 10 years to support the semiconductor industry’s incredible growth.
  • The CAGR of TSMC’s advanced technology capacity growth will be more than 40% during the period of 2019 to 2023.
  • As the first foundry to start volume production of N5 in 2020, TSMC continues to improve its 5nm family offerings by introducing N4, N4P, N4X, and N5A.
  • TSMC’s 3nm technology is the first in the semiconductor industry to reach high-volume production, with good yield, and the Company expects a fast and smooth ramping of N3 driven by both mobile and HPC applications.
  • In addition to pushing scaling to enable smaller and better transistors for monolithic SoCs, TSMC is also developing 3DFabric technologies to unlock the power of heterogeneous integration and increase the number of transistors in a system by 5X or more.
  • TSMC’s specialty technology investment experienced more than 40% CAGR from 2017 to 2022. By 2026, TSMC expects to grow specialty capacity by nearly 50%.
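The power-efficiency claim above is easy to sanity-check: a 15% CAGR compounded over roughly 10 years works out to about a 4X cumulative improvement. A minimal sketch (the helper name is mine, not TSMC's):

```python
def total_improvement(cagr: float, years: int) -> float:
    """Compound an annual growth rate over a number of years."""
    return (1 + cagr) ** years

# TSMC's stated 15% power-efficiency CAGR from 10nm to 2nm (~10 years)
factor = total_improvement(0.15, 10)
print(f"Cumulative power-efficiency gain: {factor:.2f}X")  # ≈ 4.05X
```

In other words, the same workload at 2nm should need roughly a quarter of the energy it needed at 10nm, if the stated CAGR holds across the whole span.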

The two customer CEO presentations that followed C.C. were quite a contrast. ADI has been a long and trusted TSMC customer, whereas Qualcomm has been foundry hopping since the beginning of fabless time. I remember working with QCOM on a 40nm design that was targeted to four different fabs. TSMC did the hard work first, then it went to UMC, SMIC, and Chartered for high-volume manufacturing. QCOM has a new CEO and TSMC has C.C. Wei, so that may change. The benefits of being loyal to TSMC have grown dramatically since the planar days, so we shall see.

Also Read:

TSMC 2023 North America Technology Symposium Overview Part 2

TSMC 2023 North America Technology Symposium Overview Part 3

TSMC 2023 North America Technology Symposium Overview Part 4

TSMC 2023 North America Technology Symposium Overview Part 5


Podcast EP157: The Differentiated Role Andes Plays in the US with Charlie Cheng

by Daniel Nenni on 04-26-2023 at 10:00 am

Dan is joined by Charlie Cheng, Managing Director of Polyhedron. Prior to that, Charlie was the CEO of Kilopass Technology, where he grew the core memory business into a successful acquisition by Synopsys. Before that, Charlie was an Entrepreneur in Residence at US Venture Partners and a Corporate VP at Faraday Technology, a Taiwanese semiconductor company. He joined Faraday after co-founding Lexra, a CPU IP company. Charlie started his career at General Electric and IBM before focusing on the microprocessor, semiconductor, EDA, and IP businesses.

Charlie joins this podcast in his capacity as Board Advisor for Andes Technology. Dan explores the market position Andes occupies in the US, which is focused on higher-end applications compared to its position in other parts of the world. They also discuss how some of Andes’ unique qualities are leveraged in the US market.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


AI and the Future of Work

by Ahmed Banafa on 04-26-2023 at 8:00 am


Artificial Intelligence (AI) is a rapidly growing field that has the potential to revolutionize the way we work, learn, and interact with technology. The term AI refers to the ability of machines to perform tasks that would typically require human intelligence, such as decision-making, problem-solving, and natural language processing. As AI technology continues to advance, it is becoming increasingly integrated into various aspects of the workplace, from automating repetitive tasks to helping professionals make more informed decisions.

The impact of AI on the future of work is a topic of much discussion and debate. Some experts believe that AI will lead to the displacement of human workers, while others argue that it will create new opportunities and lead to increased productivity and economic growth. Regardless of the outcome, it is clear that AI will have a profound effect on the job market and the skills needed to succeed in the workforce.

In this context, it is crucial to understand the potential benefits and risks of AI in the workplace, as well as the ethical implications of using AI to make decisions that affect human lives. As AI continues to evolve, it is essential that individuals and organizations alike stay informed and adapt to the changing landscape of work.

AI is set to transform the future of work in a number of ways. Here are some possible angles:

  • The Impact of AI on Jobs: One of the biggest questions surrounding AI and the future of work is what impact it will have on employment. Will AI create new jobs or displace existing ones? What types of jobs are most likely to be affected?
  • The Role of AI in Workforce Development: As AI becomes more prevalent in the workplace, it’s likely that workers will need to develop new skills in order to keep up. How can companies and organizations help workers develop these skills?
  • The Future of Collaboration Between Humans and AI: Many experts believe that the future of work will involve collaboration between humans and AI. What might this collaboration look like? How can companies and organizations foster effective collaboration between humans and AI?
  • AI and Workforce Diversity: AI has the potential to reduce bias and increase diversity in the workplace. How can organizations leverage AI to improve workforce diversity?
  • The Ethical Implications of AI in the Workplace: As AI becomes more prevalent in the workplace, there are a number of ethical considerations that need to be taken into account. How can companies and organizations ensure that their use of AI is ethical and responsible?
  • AI and the Gig Economy: AI has the potential to transform the gig economy by making it easier for individuals to find work and for companies to find workers. How might AI impact the future of the gig economy?
  • AI and Workplace Automation: AI is likely to automate many routine tasks in the workplace, freeing up workers to focus on higher-level tasks. What types of tasks are most likely to be automated, and how might this change the nature of work?

Advantages and disadvantages of AI in the context of the future of work:

Advantages:

  • Increased Efficiency: AI can automate many routine tasks and workflows, freeing up workers to focus on higher-level tasks and increasing productivity.
  • Improved Accuracy: AI systems can process large amounts of data quickly and accurately, reducing the risk of errors.
  • Better Decision-Making: AI can analyze data and provide insights that humans may not be able to identify, leading to better decision-making.
  • Cost Savings: By automating tasks and workflows, AI can reduce labor costs and improve the bottom line for businesses.
  • Enhanced Customer Experience: AI-powered chatbots and other tools can provide fast, personalized service to customers, improving their overall experience with a company.

Disadvantages:

  • Job Displacement: As mentioned earlier, AI and automation could displace many workers, particularly those in low-skill jobs.
  • Skill Mismatch: As AI and automation become more prevalent, workers will need to develop new skills in order to remain competitive in the workforce.
  • Bias and Discrimination: AI systems are only as unbiased as the data they are trained on, which could lead to discrimination in hiring, promotion, and other workplace practices.
  • Ethical Concerns: As AI and automation become more prevalent, there are a number of ethical concerns that need to be addressed, including issues related to privacy, transparency, and accountability.
  • Cybersecurity Risks: As more and more data is collected and processed by AI systems, there is a risk that this data could be compromised by cybercriminals.
  • Loss of Human Interaction: AI systems may replace some forms of human interaction in the workplace, potentially leading to a loss of social connections and collaboration between workers.
  • Uneven Access: As mentioned earlier, not all workers and organizations have equal access to AI and automation technology, which could widen the gap between those who have access to these tools and those who do not.

These are just a few of the advantages and disadvantages of AI and the future of work. As AI continues to evolve, it’s likely that new advantages and disadvantages will emerge as well.

In conclusion, the impact of AI on the future of work is a complex and multifaceted issue that requires careful consideration and planning. While AI has the potential to revolutionize the way we work and improve productivity, it also poses significant challenges, including job displacement and ethical concerns.

To prepare for the future of work, individuals and organizations must prioritize upskilling and reskilling to ensure that they have the skills and knowledge necessary to thrive in an AI-driven world. Additionally, policymakers must address the potential impacts of AI on employment and work towards creating policies that ensure the benefits of AI are shared equitably.

Ultimately, the successful integration of AI into the workplace will require collaboration and dialogue between industry, academia, and government to ensure that AI is used in a way that benefits society as a whole. By staying informed and proactive, we can navigate the changes brought about by AI and create a future of work that is both efficient and equitable.

Ahmed Banafa, Author of the Books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

Also Read:

AI is Ushering in a New Wave of Innovation

Narrow AI vs. General AI vs. Super AI

10 Impactful Technologies in 2023 and Beyond