5G Requires Rethinking Deployment Strategies
by Tom Simon on 03-10-2022 at 6:00 am


5G’s Departure from Its Predecessors

In each move from 1G to 4G, people became accustomed to seeing the new generation as primarily offering increased bandwidth and efficiency. It would be a mistake to view the transition to 5G along these same lines. 5G takes radio access networks (RANs) from a use model centered on cell phone communications to a service supporting multiple use models, including industrial IoT, machine-to-machine communication, and even real-time applications such as automotive navigation. 5G also comes with new operational bands and modes that work in environments from rural areas to dense urban ones.

The 5G Evolution

The increased sophistication of 5G means major shifts in how mobile network operators (MNOs) build out and deploy their networks. An informative white paper by Achronix, a leading supplier of FPGA components and eFPGA IP, discusses the changes coming in 5G and how they will affect every aspect of its architecture. The paper is titled “Enabling the Next Generation of 5G Platforms”. Indeed, 5G is still in the process of being specified: 3GPP, the organization developing the specifications, is currently on Rel-18, with specification releases planned up to Rel-21 in 2028. Each release includes new features that add essential functionality.

Three New 5G Use Cases

5G adds three new use cases to the existing 4G fixed broadband implementation. The Achronix white paper describes them as follows:

    • Massive machine-type communication (mMTC) supports machine-to-machine connections, with an eye toward large numbers of IoT devices requiring high efficiency, low cost, and deep indoor coverage.
    • Enhanced mobile broadband (eMBB) aims to meet the new requirements for interactive applications on mobile devices. The focus is on 8K streaming video, augmented reality and other high-bandwidth uses.
    • Ultra-reliable low-latency communication (URLLC) is the third use case, focusing on high-performance, low-latency, real-time connectivity. It will be used for things like vehicle control, remote medicine and time-critical factory automation.

Because of the number of devices that will be connected and the new use models, the existing monolithic base station and backhaul building blocks would create bottlenecks and limit flexibility. A new split architecture consisting of centralized units (CUs), distributed units (DUs) and radio units (RUs) will be used instead. This change allows coordination of performance features, load management, and real-time performance optimization, all of which enable adaptation to the various use cases. CU, DU and RU elements can be co-located or physically distributed to accommodate the needs of the workloads and environment. This shift requires more intelligence in each type of equipment, along with the development of standard interfaces to allow a high level of interoperability.

5G RAN architecture
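
To make the disaggregation concrete, here is a minimal sketch of how a split deployment might be described. The CU/DU/RU roles come from the discussion above; the data model, field names, and sites are hypothetical, invented purely for illustration.

    from dataclasses import dataclass

    # Minimal sketch of a disaggregated RAN deployment description.
    # The CU/DU/RU roles follow the text above; everything else
    # (class, field names, sites) is a hypothetical illustration.
    @dataclass
    class RanElement:
        kind: str   # "CU", "DU", or "RU"
        site: str   # physical hosting location

    # Dense-urban example: several RUs at antenna sites feed a DU at an
    # aggregation point, while the CU sits in a regional data center.
    # In a rural cell, the same three elements might be co-located.
    deployment = [
        RanElement("CU", "regional-datacenter-1"),
        RanElement("DU", "aggregation-site-12"),
        RanElement("RU", "rooftop-12a"),
        RanElement("RU", "rooftop-12b"),
    ]

    for element in deployment:
        print(f"{element.kind} @ {element.site}")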

Where CPUs Fall Short, FPGAs Provide More Processing Power

It’s more than simply disaggregation that is driving the need for more intelligence and processing power within all elements of the 5G network. The white paper offers an excellent example of this in the expanding need for beamforming in the RU. Bandwidth is going from 20 MHz to 100 MHz, transmission intervals are moving to 0.5 ms, and antenna arrays will grow to 64 by 64 elements. Future 5G releases will include AI/ML processing in the RU to improve signal quality. All of this will require significant processing power.
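
For a rough sense of scale, a back-of-envelope calculation shows why these numbers compound so quickly. The 5G figures below come from the paragraph above; the 4G baseline and the simple linear scaling model are my own assumptions, not formulas from the white paper.

    # Back-of-envelope RU processing-load scaling. 5G numbers are from the
    # text above; the 4G baseline (20 MHz, 8 antenna paths, 1 ms intervals)
    # and the linear model (load ~ bandwidth x antenna paths x scheduling
    # rate) are assumptions for illustration only.
    lte = {"bw_mhz": 20, "antenna_paths": 8, "tti_ms": 1.0}
    nr = {"bw_mhz": 100, "antenna_paths": 64, "tti_ms": 0.5}

    def relative_load(new, base):
        return ((new["bw_mhz"] / base["bw_mhz"])                  # more spectrum
                * (new["antenna_paths"] / base["antenna_paths"])  # more beamforming paths
                * (base["tti_ms"] / new["tti_ms"]))               # tighter deadlines

    print(f"~{relative_load(nr, lte):.0f}x the baseline processing load")  # ~80x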

While it is tempting for the MNOs to fall back on virtualization of network functions running on traditional CPUs, the reality is that for systems far from the central office the requirements for power, cooling and size quickly begin to rule out multi-CPU solutions. With an evolving specification for 5G equipment, designing custom ASICs for DUs, CUs and RUs is not feasible. The Achronix white paper points out that FPGAs offer an excellent middle ground for these systems. They offer programmability and also provide high performance with very efficient power consumption.

FPGAs will allow upgrades to installed systems as new revisions are ratified. One additional way that Achronix offers to improve power and system size is by adding an embedded FPGA (eFPGA) fabric to an ASIC. This eliminates costly and inefficient I/O conversions between processing elements. Achronix eFPGA IP is fully configurable to specific system requirements, which means no silicon real estate is wasted on unused resources. Furthermore, Achronix offers AI/ML processing units as part of their core functionality.

Every stage in the 5G radio access network (RAN) can benefit from the features offered by Achronix FPGAs and eFPGA IP. The Achronix white paper dives deeper than I can in this article to illustrate the changes coming in 5G, both now and in future releases, and how Achronix FPGAs and eFPGAs can effectively address them. The full white paper is available for download on the Achronix website.

Also read:

Integrated 2D NoC vs a Soft Implemented 2D NoC

2D NoC Based FPGAs Valuable for SmartNIC Implementation

Webinar on Dealing with the Pain Points of AI/ML Hardware

Semiconductor Packaging History and Primer
by Doug O'Laughlin on 03-09-2022 at 10:00 am


From DIP to Advanced, semiconductor packaging has become strategic

For ease of reading – I am going to be splitting this primer into two parts. First is the technical overview of everything. Next will be the company-specific writeups that follow over time – specifically Teradyne, Formfactor, Advantest, and Camtek. Maybe Keysight and others over time. The concepts in this primer will likely be referenced over and over. This is a primer I wish someone had written sooner, as this will become a must-know in semiconductors going forward.

Why is Packaging Important Now

Packaging used to be an afterthought in the process of semiconductor manufacturing. You made the little piece of silicon magic, and then you attached it and moved on your merry way. But as Moore’s law has stretched, engineers realized that they could utilize all parts of their chip, including the packaging, to make the best possible product. Improving packaging delivers significant benefits: package-level metal is thicker and more conductive, and I/O (input/output) remains one of the greatest bottlenecks for semiconductors.

What is more amazing is that none of the packaging companies were considered as important as the traditional front-end manufacturing processes in the past. The packaging supply chain was often considered “back-end” and viewed as a cost center, similar to the front office and back office in banking. But now as the front end struggles to scale geometry, a whole new field of focus has emerged, and this is the emphasis on packaging. We will discuss the variety of processes so you will never feel lost again while looking into this segment of semicap and understand what 2.5D or 3D packaging means.

A Brief History of Packages

This is a brief hierarchy of package technologies I found in a wonderful YouTube lecture. If you have some time, the series is worth a watch. Importantly, it shows the hierarchy of technology from past to present.

A simplified evolution is DIP > QFP > BGA > PoP/SiP > WLP

There are clearly a lot of different package technologies, but we are going to go over the simple ones that are broadly representative of each type and then slowly bring things to the present. I also really like the high-level overview below (it’s dated but still correct).

In the very beginning of packaging, things were often put in ceramic or metal cans and hermetically (airtight) sealed for maximum possible reliability. This mostly applied to aerospace and military applications, where the highest level of reliability was required. However, this was not really feasible for most day-to-day use cases, and so we started to use plastic packaging and the dual in-line package (DIP).

DIP Packaging (1964-1980s)

DIP was introduced in the mid-1960s and became the law of the land for over a decade before surface-mount technologies were introduced. DIPs used plastic enclosures around the actual semiconductor and had two parallel rows of protruding electrical pins, part of a metal leadframe, that connected to the printed circuit board (PCB) below.

The actual die is connected by bonding wire to two leadframes which can be attached to a printed circuit board (PCB)

DIP, like so many early semiconductor inventions, was created in 1964 by Fairchild Semiconductor. DIP packages are kind of iconic in a retro way, and the design choices are understandable. The actual die would be completely sealed in resin, leading to high reliability and low cost, and many of the first iconic semiconductors were packaged this way. Notice that the die is connected to the external leadframe via wire, and this makes this a “wire-bonding” method of packaging. More on that later.

Below is the Intel 8008 – effectively one of the first modern microprocessors. Notice its iconic DIP packaging. So if you ever see those funky photos of semiconductors that look like little spiders, that is just a DIP-packaged semiconductor.

The original microprocessor from Intel, the 8008 family

Each of those little metal pins then gets soldered onto a PCB, where it makes contact with other electrical components and the rest of the system. Below is how the package is soldered onto a PCB.

The PCB itself is typically made of copper traces laminated within a non-conductive material. PCBs route electricity from place to place and let the components interconnect and talk to each other. Notice the fine lines between each of the components soldered to the PCB; those are embedded wires that serve as conduits from piece to piece. That is the “package” part of packaging, and PCBs are the highest level of packaging.

While there are other renditions of DIP, it’s actually time to move on to the next paradigm of packaging technology that began in the 1980s: surface mount packages.

Surface Mount Packaging (1980s-1990s)

Instead of mounting the package through holes as with DIP, the next step-change was the introduction of surface-mount technology (SMT). As implied, the package is mounted directly onto the surface of the PCB, allowing more components and lower cost on a piece of substrate. Below is a picture of a typical surface-mounted package.

There are many variations of this package, and it was a workhorse for a long time in the heyday of semiconductor innovation. Something I want you to notice: instead of two rows of leads mounting to the PCB, there are now leads on all four sides of the package. This follows the general desire of packaging to take up less space and increase connection bandwidth, or I/O. Each additional advancement will have that in mind, and it is a pattern to watch for.

This process was once manual but is now highly automated. It also created quite a slew of issues for PCBs, such as popcorning. Popcorning is when moisture absorbed inside the plastic package vaporizes during the rapid heating and cooling of the soldering process, cracking the package and causing issues on the PCB. Another thing to note is that with each advance in the packaging process comes an additional increase in complexity and failure modes.

Ball Grid Array Packaging and Chip Scale Packaging (1990s – 2000s)

As the demands on semiconductor speed continued to pick up, so did the need for better packaging. While QFN (quad-flat no-leads) and other surface-mount technologies clearly continued to proliferate, I want to introduce you to the beginning of a package design that we will have to know about in the future. This is the beginning of solder balls – or, broadly, ball grid array (BGA) packaging.

Those balls or bumps are called solder bumps/balls

This is what the ball grid array looks like; it can directly mount a piece of silicon to a PCB or substrate from below, rather than just tacking down the leads on all four sides like the previous surface-mount technology.

So this is just another continuation of the trend I listed above: taking less space and having more connections. Now, instead of wires finely connecting the package on each side, we are directly attaching one package to another. This leads to increased density, better I/O (effectively a synonym for performance), and the added complexity of how you check whether a BGA package is working. Up until this point, packages were primarily visually inspected and tested. Now the connections are hidden underneath the package, so there was no way to inspect them visually. Enter X-rays for inspection, and eventually more sophisticated techniques.

Solder bumps are also something I want you to remember as the primary way things are bonded to each other now, as this is the most common type of package interconnection pattern.

Modern Packaging (2000s-2010s)

We are now stepping into the modern era of packaging. Many of the packaging schemes described above are still in use today; however, there are increasingly more package types that you will start to see, and they will become more relevant in the future. I will start to describe these now. To be fair, many of these upcoming technologies were invented in previous decades but, because of cost, were not widely used until later.

Flip Chip

This is one of the most common packages you will likely read or hear about. I’m happy I can define it for you because I’ve never had a satisfying explanation in a primer I’ve read so far. Flip chip was invented by IBM very early on and is often abbreviated as C4 (controlled collapse chip connection). Flip chip really isn’t a stand-alone package form factor but rather a style of packaging. It’s pretty much just whenever there is a solder bump on a die: the chip is not wire-bonded for interconnect but flipped to face the other chip with a connecting substrate in between, hence “flip chip”.

I don’t expect you to understand just from that awkward sentence, so I want to give you an example from Wikipedia, which actually has some of the best work on this I’ve seen. Let me walk you through the steps.

  1. ICs are created on the wafer
  2. Pads are metalized on the surface of the chip
  3. A solder dot is deposited on each of the pads
  4. Chips are cut
  5. Chips are flipped and positioned so that the solder balls are facing the connecting circuitry
  6. Solder balls are then remelted
  7. The mounted chip is underfilled with an electrically insulating adhesive

Wire Bond

Notice how flip chip differs from wire bond. Remember that DIP package up top? That was wire bonding, where a die uses wires to bond to a metal leadframe that is then soldered to the PCB. Once again, wire bonding is not a specific technology but rather an older set of technologies that encompasses a lot of different types of packaging. I think it’s best described in contrast to flip chip. To be clear, wire bonding is a precursor to flip chip.

Honestly, if you’ve made it this far – you’re a champ. I think that really is all you need to know for this segment. There are a large number of variations on each form factor; just think of these as the overarching themes that dictate them. KLIC (Kulicke & Soffa), by the way, is the market leader in this segment, and when you think of old packaging technology you should think of them.

Advanced Packaging (2010s to Today)

We are ever so slowly creeping into the “advanced packaging” semiconductor era, and I wanted to maybe touch on some higher-level concepts now. There are actually various levels of “package” that kind of fit within this thought process. Most of the packaging we have talked about before has been focused on chip package to PCB, but the beginning of advanced packaging really starts with the phone.

The mobile phone in a lot of ways is the huge precursor to so many aspects of advanced packaging. It makes sense! A phone packs a lot of silicon content into the smallest space possible, much denser than your laptop or desktop. Everything must be passively cooled and, of course, as thin as possible. Every year Apple and Samsung would announce a faster but, more importantly, thinner phone, and this pushed packaging to new limits. Many of the concepts that I will discuss began in the smartphone package and have now spread to the rest of the semiconductor industry.

Chip Scale Packaging (CSP)

Chip scale packaging is actually a bit broader than it sounds; it originally meant chip-size packaging. The technical definition is a package that is no greater than 1.2x the size of the die itself, and it must be single-die and directly attachable. I have actually already introduced you to the concept of CSP, through flip chip. But CSP was really taken to the next level via smartphones.
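
The 1.2x rule reduces to a one-line check; here is a tiny sketch (with invented example numbers) just to pin the definition down:

    # The CSP rule of thumb quoted above: package area <= 1.2x die area.
    # Example areas (in mm^2) are invented for illustration.
    def is_chip_scale(package_area_mm2, die_area_mm2):
        return package_area_mm2 <= 1.2 * die_area_mm2

    print(is_chip_scale(package_area_mm2=30.0, die_area_mm2=26.0))  # True: 30 <= 31.2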

The 2010s made CSP the law of the land; everything in this photo is within 1.2x the size of the chip die and is focused on saving as much space as possible. There are a lot of different flavors of the CSP era, with flip-chip, rigid-substrate, and other technologies all part of this classification. But I don’t think knowing the specifics is of much benefit to you.

Wafer-level packaging (WLP)

But there is one level smaller – and this is the “ultimate” chip scale package size: wafer-level packaging. This is pretty much putting the packaging on the actual silicon die itself. The package IS the silicon die. It’s thinner, with the highest level of I/O, and obviously runs hot and is hard to manufacture. The advanced packaging revolution is currently at the CSP scale, but the future is all focused on the wafer.

It’s an interesting evolution: the package is something that got subsumed by the actual silicon itself. The chip is the package and vice versa. This is really expensive compared to just soldering some balls onto the chip, so why are we doing this? Why is there such an obsession with advanced packaging now?

Advanced Packaging: The Future

This is the culmination of trends I have been writing about for a long time. Heterogeneous computing is not only the story of specialization, but also of how we put all those specialized pieces together. Advanced packaging is the crucial enabler that makes it all work.

Let’s look at the M1 – a classic heterogeneous compute configuration, specifically with its unified memory structure. The M1 to me is not a “wow, cool” moment but a singular dividing line between pre- and post-heterogeneous compute. The M1 is ringing in what the future looks like, and many will be following Apple’s lead pretty shortly. Notice the actual SoC (system on chip) is not heterogeneous – but the custom package that brings the memory close to the SoC is.

This could be an edited photo – but notice the PCB has no wires – this is likely because of their awesome 2.5D Integration

Another great example of a very good advanced package is the new A100 by Nvidia. Once again notice no wires on the PCB.

Check out this quote from their earlier P100 whitepaper, which describes the same HBM-on-interposer approach.

Rather than requiring numerous discrete memory chips surrounding the GPU as in traditional GDDR5 GPU board designs, HBM2 includes one or more vertical stacks of multiple memory dies. The memory dies are linked using microscopic wires that are created with through-silicon vias and microbumps. One 8 Gb HBM2 die contains over 5,000 through-silicon via holes. A passive silicon interposer is then used to connect the memory stacks and the GPU die. The combination of HBM2 stack, GPU die, and Silicon interposer are packaged in a single 55mm x 55mm BGA package. See Figure 9 for an illustration of the GP100 and two HBM2 stacks, and Figure 10 for a photomicrograph of an actual P100 with GPU and memory.

The takeaway here is that the best silicon in the world is being made one particular way, and this revolution is not stopping. Let’s learn a little bit more about the words above and translate them into English. First, some more about the two overarching categories of advanced packaging: 2.5D and 3D packages.

2.5D Packaging

2.5D is kind of like a turbo version of the flip chip we mentioned above, but instead of mounting a single die onto a substrate, multiple dies are stacked on top of a single interposer. I think this graphic puts it well.

2.5D is like having a basement door into your neighbor’s house: physically it is either a bump or a TSV (through-silicon via) into the silicon interposer beneath you, and that connects you to your neighbor. It isn’t faster than your actual on-chip communication, but since your net output is decided by total package performance, the shorter distances and increased interconnection between the two pieces of silicon outweigh the downsides of not having everything on a single SoC. The benefit of this is you can use “known good die” – smaller pieces of silicon – to piece together larger, more complex packages very quickly. It would be better for everything to be done on one piece of silicon, but this approach makes fabrication a lot easier, especially at smaller geometries.

Those little pieces of silicon are the “chiplets” you’ve heard all about. Now you can take chiplets – small functional blocks of silicon designed to be combined – connect them on a single flat silicon substrate, and boom! 2.5D package at your service.
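
A toy yield model shows why known good die make chiplets so attractive. The exponential defect model below is a standard textbook approximation, not something from this article, and the defect density and die areas are invented for illustration.

    import math

    # Toy Poisson yield model: die yield ~ exp(-defect_density * die_area).
    # Defect density and areas are invented example numbers.
    def die_yield(area_cm2, defects_per_cm2=0.1):
        return math.exp(-defects_per_cm2 * area_cm2)

    monolithic = die_yield(8.0)  # one big 800 mm^2 die
    chiplet = die_yield(2.0)     # one 200 mm^2 chiplet

    # Chiplets are tested before assembly, so only known-good die get
    # packaged; the small-die yield advantage is kept rather than
    # multiplied away across the package.
    print(f"monolithic: {monolithic:.0%}, per-chiplet: {chiplet:.0%}")
    # monolithic: 45%, per-chiplet: 82%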

Chiplets and 2.5D packaging are probably going to be used for a long while; the approach has a very workhorse-like quality to it and will likely be easier to make than outright 3D, and much cheaper as well. Additionally, it scales well and can be reused: new chips can be made in the same package format by just replacing a chiplet. The recent Zen 3 improvements are an example of this, where the package is similar but some of the chiplets got upgraded. However, this still stops short of the final version of packaging, which is 3D.

3D Packaging

3D packaging is the holy grail, the ultimate endpoint of packaging. Think of it this way: instead of having all those separate little one-story houses on the ground connected by basements, we can have one giant skyscraper that is custom-made with whatever process is needed to fit the function. This is 3D packaging – all the packaging is done on the piece of silicon itself. It is the single fastest and most power-efficient way to build larger, more complex structures that are purpose-built to the task, and it will “extend” Moore’s law significantly. We may not be able to get more feature shrink in the future, but with 3D packaging we could still improve our chips in a way similar to Moore’s law of old.

And what’s interesting is we have a clear example of an entire semiconductor market that went 3D – memory. Memory’s push into 3D structures is a very good indication of what’s to come. Part of the reason why NAND had to go 3D was that it struggled to scale at smaller geometries. Imagine memory as a large 3D skyscraper, with each of the floors connected by elevators. Those elevators are TSVs, or through-silicon vias.

This is what the future looks like, and it’s even possible we will be stacking GPU/CPU chips on each other or stacking memory on CPU. This is the final frontier and one we are quickly approaching. You will likely start to see 3D packaging pop up over and over in the next 5 years.

A Quick Overview of 2.5D/3D Packaging Solutions

Instead of going much further into 3D and 2.5D packaging theory, I think it’s best to just lay out some processes that are being used and that you might have heard of before. I want to focus here on processes done by the fabs, which are the ones driving 3D/2.5D integration forward.

TSMC’s CoWoS

This is seemingly the workhorse of the 2.5D integration process; TSMC pioneered it with Xilinx as the first major customer.

This process is mostly focused on putting all the logic dies onto a silicon interposer, and then onto a package substrate. Everything is connected via microbumps or balls. This is a classical 2.5D structure.

TSMC SoIC

This is TSMC’s 3D packaging platform, and it is the relatively new kid on the block.

Notice this amazing graph on bump density and bonding pitch: SoIC is not even close to flip chip or 2.5D in size; rather, it is pretty much a front-end process in terms of density and feature size.

This is a good comparison of their technologies; note that SoIC actually uses chip-on-chip stacking akin to 3D stacking, instead of interposer-based 2.5D integration.

Samsung XCube

Samsung has become a much more important foundry partner in recent years, and of course not to be outmatched Samsung has a new 3D packaging scheme. Check out the video for their XCube below.

There isn’t exactly a lot of information here, but I want to highlight that Nvidia’s consumer Ampere GPUs were fabbed on a Samsung process (the A100 itself is fabbed at TSMC), so Samsung could well have a role in packaging Nvidia’s recent chips. Additionally, of all the companies here, Samsung likely has the most experience with TSVs due to its 3D memory platform, so clearly, they know what they are doing.

Intel Foveros

Last but not least is Intel’s Foveros 3D packaging. We will likely see more implementations from Intel in their “hybrid CPU” products in their 7nm-and-beyond generations. They have been pretty explicit at their Architecture Day events that this is their focus going forward.

Something that is interesting is that there really isn’t much differentiation between Samsung, TSMC, or Intel in the 3D process.

Winners from the Advanced Packaging Revolution

So if you remember the post that introduced the semicap series, advanced packaging is firmly the “mid-end” that I referenced. Why this is so interesting is that it is all incremental growth.

In the past, packaging estimates were excluded from annual WFE (wafer fab equipment) estimates, but as of 2020, they are starting to include wafer-level packaging. This is a signal of the wind shifting and why the mid-end is very interesting from here. Another name for the mid-end is back end of line (BEOL). For a more in-depth discussion of packaging-related companies, refer to my packaging stocks follow-up.



How Intel will Beat Samsung
by Daniel Nenni on 03-09-2022 at 6:00 am


Now that Intel is back in the foundry business, and with the Tower Semiconductor acquisition they are definitely back in the foundry business, Samsung will be the biggest foundry loser here.

You can break the IDM foundry business into two parts: first, and foremost, the NOT TSMC Business; second, the Better PPA (power, performance, area) Business.

The semiconductor industry wants multiple sources, it’s a natural business reaction and has very good merit. Take Intel and AMD for example. It wasn’t long ago that AMD had single digit market share and that was the NOT Intel Business. Today AMD has double digit market share and that now includes the Better PPA Business.

With foundries, everyone wants second and even third source manufacturing options, and that is why chip companies like Qualcomm and Nvidia send business to Samsung. It’s not just reducing risk, it’s PPA and wafer availability. Today, though, Samsung mostly rides the NOT TSMC bus.

Now that Intel is back in the foundry business, Samsung is facing a formidable challenge. All of Samsung’s missteps will now bear increased scrutiny, and as long as Intel executes its IDM 2.0 strategy, Samsung Foundry is in serious trouble.

The IDM foundry business is an interesting story. In fact, IDMs founded the foundry business when they leased out fab space in the 1970s when business was slow. Then came the pure-play foundries in the 1980s who did not compete with their customers, the fabless companies, and the rest as they say is semiconductor history.

Fabless: The Transformation of the Semiconductor Industry

Samsung entered the foundry business with Apple 15+ years ago. The first Apple ASIC (for the iPod) was actually done by fabless ASIC vendor eSilicon, and the volumes were drastically underestimated, so eSilicon profited greatly. In 2006 Steve Jobs went to Intel CEO Paul Otellini and pitched the iPhone in hopes of getting a manufacturing agreement. Paul did not share Steve’s vision and passed on the deal:

“We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we’d done it. The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do. At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”

As a result, not only did Intel miss the mobile market, they missed the opportunity of being a world class foundry like TSMC is today.

Apple then went to Samsung, which produced the first A4 SoC for the iPhone 4 using the Samsung 45nm process. The A5 SoC was also 45nm. Back then we named processes differently, so it was not unusual to reuse a process; all was well and the iPhone dynasty had begun. The A6 was Samsung 32nm and the A7 was Samsung 28nm.

Unfortunately, Samsung showed their true IDM colors by competing directly with Apple and even borrowed some Apple IP. The result was Apple filing more than 50 legal actions around the world which would ultimately be settled for billions of dollars in Apple’s favor.

For the A8 (iPhone 6) Apple turned to TSMC 20nm. The iPhone 6 was one of the better smartphones (I had one). Unfortunately, when Apple turned to FinFETs, TSMC could not supply enough chips, so Apple also used Samsung 14nm for the A9. Apple was back to TSMC for the A9X and thereafter.

Apple absolutely did change the foundry business by writing some really big checks, accounting for 20+% of TSMC’s annual revenue, but also by driving the yearly process cadence. Rather than taking big risky steps, TSMC did yearly half nodes matched to the yearly fall iProduct launches. This allowed them to perfect double patterning before adding FinFETs, introduce partial EUV before going to full EUV, and many other process innovations. It’s called yield learning for a reason.

Now Samsung and Intel both follow the half node process methodology and you can thank Apple for that, absolutely.

Which brings us back to the recent Samsung missteps. Samsung did VERY well at 14nm, getting a piece of the Apple business and many other customers including Qualcomm. GlobalFoundries also licensed Samsung 14nm for their Malta fab and has done quite well with it, so Samsung 14nm customers are far-reaching. Unfortunately, 10nm was not so kind to Samsung, with single-digit yields at launch time. Samsung was forced to ship good die instead of wafers, causing Qualcomm and others to miss market windows and customer commitments.

Samsung did a much better job at 7nm but now we are hearing about a serious unreported yield problem at 5nm. In fact, there is a formal investigation inside Samsung:

“The company’s management suspects a forgery of the report on the release of microcircuits by the Samsung Semiconductor Foundry division. Information about the production of 5-, 4- and 3-nm products is now being verified…”

Samsung also recently had an environmental incident in Texas that wasn’t reported until more than 3 months after the fact, which reflects badly on Samsung’s safety monitoring systems.

“While it is unknown how much waste entered the tributary, Watershed Protection Department (WPD) staff found virtually no surviving aquatic life within the entire tributary from the Samsung property to the main branch of Harris Branch Creek, near Harris Branch Parkway….”

Not good, considering Samsung wants to expand in Texas.

Bottom line: if chip designers have to decide between Intel and Samsung as a second source to TSMC, Samsung will be the biggest loser. If Intel provides PPA competitive with TSMC and Samsung using its GAA processes, as Pat Gelsinger has promised, then the IFS decision gets even easier, absolutely.

Also read:

Intel Evolution of Transistor Innovation

Intel 2022 Investor Meeting

The Intel Foundry Ecosystem Explained

Intel Discusses Scaling Innovations at IEDM


Webinar: Beyond the Basics of IP-based Digital Design Management
by Daniel Payne on 03-08-2022 at 10:00 am


According to the ESD Alliance, the single biggest revenue category in our industry is semiconductor IP, so the concept of IP reuse is firmly established as a way to get complex products to market more quickly while reducing risk. On the flip side, with hundreds or even thousands of IP blocks in a complex SoC, how does a team, division or corporation know where all of its commercial and internal IP is being used, and which versions should be used? This brings up the whole topic of digital design management.

I did visit Cliosoft at DAC in San Francisco back in December and blogged about it, so I have been familiar with their general approach to digital design management. Recently I viewed their latest webinar on demand, IP-based Digital Design Management that Goes Beyond the Basics.

If you’re new to the concepts of digital design management, then the first section of the webinar is a great place to learn about how files are managed, version control, and labeling releases. The digital design flow has many steps, EDA tools, IP blocks, and a variety of engineers with specific duties, so orchestrating this complex process requires a structured approach.

Digital Design Flow and Personas

In the webinar you will learn how each of the personas on your team will use a methodology to handle the IP bill of materials (BOM), specifications, memory maps, documentation, forums and information sharing. Since IP reuse is a major tenet, you need a way to quickly search and find the exact IP required for new projects. Having both internal IP and commercial IP in a catalog makes this search more efficient at the start of a new project.

Working across geographies is important, especially in larger companies, so you’ll need a way to define where each part of an SoC is going to be designed and managed. Knowing the bug-fix history of each IP block is essential to getting the correct behavior when using an IP instance. Having two versions of the same IP block in a design should be quickly flagged, because in most cases you really want a single version of each IP on the same SoC.
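
As a sketch of what that duplicate-version flagging could look like (the BOM data model here is invented for illustration and is not Cliosoft’s actual API):

    from collections import defaultdict

    # Hypothetical IP BOM for one SoC; each entry names an IP block and
    # the version instantiated somewhere in the design.
    soc_bom = [
        {"ip": "usb3_phy", "version": "2.1"},
        {"ip": "ddr_ctrl", "version": "1.4"},
        {"ip": "usb3_phy", "version": "2.3"},  # same IP, second version
    ]

    versions = defaultdict(set)
    for entry in soc_bom:
        versions[entry["ip"]].add(entry["version"])

    for ip, used in sorted(versions.items()):
        if len(used) > 1:
            print(f"WARNING: {ip} instantiated with multiple versions: {sorted(used)}")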

The tasks of verification engineers are discussed, along with tips on how to minimize the storage of EDA tool results.

Controlling who has access to each IP block, and at what levels (read, write, geography), was presented as a way of enforcing methodology and meeting legal requirements. Automotive (ISO 26262), defense and aerospace design teams need to know exactly where each IP block has been used, for example for ITAR (International Traffic in Arms Regulations) compliance.

They even included a checklist for teams that are considering which digital design management system to use for their complex projects; that way, you know what to look for during an evaluation when comparing different vendor approaches.

Summary

Cliosoft has a long history of supporting digital design management requirements with tools and a methodology that help ensure compliance and deliver automation benefits. IP reuse is the leading methodology for SoC projects today, so having a way to use IP more effectively is a competitive advantage for systems companies.

IP Centric Digital Design

View the Cliosoft webinar on demand online; it’s 26 minutes long and requires a brief registration.



Prototype enables new synergy – how Artosyn helps their customers succeed
by Daniel Nenni on 03-08-2022 at 6:00 am


Artosyn Microelectronics, a leading provider of AI SoCs for drones and other sophisticated applications, finds itself at the intersection of hardware architecture and software development. “Our customers are advancing the state of AI programming every day,” said Shen Sha, Senior R&D Manager of Artosyn’s AI Chip Department. “They need early access to the hardware to develop their software. It’s imperative.”

Since 2015, Artosyn has specialized in Image Signal Processors (ISPs) and Neural Processing Units (NPUs), highly advanced ASIC devices that must meet the demanding requirements of low-power and high-performance. These systems represent the most advanced generation of controllers today, employing artificial intelligence for such tasks as object detection, object classification, and object tracking. Artosyn’s chips are at the heart of the latest generation of UAVs (drones), as well as being used in smart surveillance systems, robots, autonomous vehicles, and more.

These are designs of enormous size and complexity, and verifying their correct operation presents its own challenges. “We’ve used many of S2C’s FPGA development systems, including their single and dual Prodigy VU440 systems,” explains Sha. “We were able to successfully complete the validation of several complex chips in the hundred-million-gate range quickly and efficiently.”

But validating hardware still leaves millions of lines of code to test and verify. As a company dedicated to the customer experience, Artosyn looks for creative ways to help: “In some cases we’ve used the large-capacity FPGA-based systems from S2C to directly build a prototype for the customer’s evaluation,” said Sha. “This really accelerates their software development process.”

Sharing the power of S2C’s Prodigy VU440 enabled both companies to leverage a unique form of cooperation – Artosyn was able to swiftly validate their 100-million gate design, and their customer gained access to the hardware critical to furthering their development schedule. Moreover, Artosyn benefits from additional insight by working so closely with their customer; knowledge that helps drive improvements in their own designs. It’s a win for both firms.

S2C specializes in providing leading-edge prototyping platforms for a broad range of design sizes and applications. Built around the world’s largest and fastest FPGAs – such as the Xilinx UltraScale+ VU19P, and the Intel Stratix 10 – S2C’s Prodigy series can scale up to accommodate the largest designs in the hyperscale class. With a rich library of memories, daughter cards, and interface modules, Prodigy systems can be quickly adapted and configured for any purpose. If you’re facing the challenge of large-scale design validation, let us help. Together, we can find a win for you too.

About Artosyn

Artosyn Microelectronics is a leading embedded AI SoC supplier. Their chips serve markets such as drones, robots, smart surveillance, and more. Artosyn is known for its solid experience in wireless communication, computer vision and deep learning.

About S2C

S2C is a global leader in FPGA prototyping solutions for today’s innovative SoC/ASIC designs. S2C has been successfully delivering rapid SoC prototyping solutions since 2003. With over 500 customers and more than 3,000 systems installed, our highly qualified engineering team and customer-centric sales team understand our users’ SoC development needs. S2C has offices and sales representatives in the US, Europe, Israel, mainland China, Hong Kong, Korea, Japan, and Taiwan.

Also read:

S2C’s FPGA Prototyping Solutions

DAC 2021 Wrap-up – S2C turns more than a few heads

PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions


Analog IC Layout Automation Benefits
by Daniel Payne on 03-07-2022 at 10:00 am


I viewed a recent webinar from Paul Clewes of Pulsic on the topic of Balancing Analog Layout Parasitics in MOSFET Differential Pairs. This topic interests me because back in 1982 I wrote my first IC layout automation tool at Intel, which automatically created 15% of the layout of a GPU chip called the 82786; I then joined Silicon Compilers in 1986, where IC layout automation really was push-button for users. Historically, the automation of digital layout blocks came first, as analog IC layout has many more requirements than digital and was just too difficult.

For a differential pair amplifier there are a number of specific requirements to ensure robust performance, like:

  • Matching transistor W and L values in amplifier
  • Interconnect parasitic balancing
  • Use of common centroid layout to reduce layout-dependent effects
  • Current mirror layout with smallest parasitic RC values

Analog IC Layout Automation

Paul showed how the Animate Preview plugin works inside the Cadence Virtuoso environment: it automatically identifies schematic structures like current mirrors and differential pairs, then constrains the layout placement just like a skilled IC layout designer would do manually. You quickly get to see multiple layout scenarios within a minute or so, and each layout is already DRC-clean by construction, saving you even more time.

Example Analog Schematic

Clicking on the first automatically generated layout brings up the Animate Preview dialogs, showing windows for Hierarchy, Schematic, Layout, Results and Constraints. The Layout window showed nine generated layout topologies. In the Results window there are analytics for each of the nine layouts, like aspect ratio, width, height and area. Every auto-generated constraint from the schematic is listed in the Constraints window.

Nine Layout Choices

Animate identified the differential pair from the schematic, and by zooming into the schematic you can view the options for the transistor layout, like the number of layout rows used. Clicking anything in the schematic will cross-probe and highlight that device in the layout window.

Transistors M19 and M20 in the schematic form the differential pair, and the layout shows how these devices were placed in regular rows and columns known as a common centroid, which helps minimize the effect of process variation and also gives matched spacings in both the vertical and horizontal directions. The poly heads are oriented in the same direction as part of the matching.

M19 and M20, schematic and layout
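
To illustrate the idea (this sketch is mine, not how Animate actually places devices), one simple common-centroid arrangement for a matched pair is a checkerboard of unit transistors: both devices’ centroids land at the array center, so a linear process gradient affects each device equally.

    # One simple common-centroid pattern for matched devices A and B
    # (think M19/M20 split into unit transistors): a checkerboard puts
    # both centroids at the array center, cancelling linear gradients.
    # Illustrative only; real tools weigh many more constraints.
    def common_centroid(rows, cols):
        return [" ".join("A" if (r + c) % 2 == 0 else "B" for c in range(cols))
                for r in range(rows)]

    for row in common_centroid(4, 4):
        print(row)
    # A B A B
    # B A B A
    # A B A B
    # B A B A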

To further refine this layout and improve the vertical matching, a new dummy row was added above and below devices M19 and M20 by selecting a menu option:

Dummy rows added

The metal routing details can also be viewed in Animate, with the ability to view just the Poly, Metal 1, Metal 2, Metal 3, Metal 4 or Metal 5 layers, so that you can confirm that the interconnect in the differential pair is identical and balanced. Routing to the source and drain nodes was visually compared and confirmed to be balanced.

Common centroid layout was also shown for the four current mirror devices: M8, M10, M11, M12. Routing between the current mirror and differential pair is also minimized and symmetrical, by design. Examining the routing between current mirror and differential pairs revealed that the metal layers were indeed symmetrical and identical.

Current Mirror Devices

The output of the differential pair connects to two more devices, and even the placement and routing of these devices is balanced. A constraint choice was changed from Base Analog to Mirrored Base Analog to show how you can control the symmetric layout for devices on the left-hand side (Red), and right-hand (Green) side. The butterfly layout choices can be seen below:

Mirrored Base Analog

Summary

In the old days of analog IC design the circuit designer drew the schematic and maybe added some annotations or notes for the layout designer, then threw the schematic over the wall. The layout designer read the annotations, made some placements and routing, then threw the layout back over the wall to the circuit designer. Finally, the circuit designer would examine the IC layout for symmetry and matching, and request refinements, creating a loop that had to iterate until matching constraints were met.

The new method from Pulsic enables a circuit designer to quickly create a balanced and symmetric layout in minutes, not days, all because of the inherent automation designed into Animate.

View the 29 minute archived webinar online.



Non Volatile Memory IP is Invaluable for PMICs
by Tom Simon on 03-07-2022 at 6:00 am


Power management ICs (PMICs) are a vitally important part of system design. Evidence of this is cited in a Synopsys white paper, which mentions how Apple acquired a portion of PMIC developer Dialog Semiconductor, previously Apple’s exclusive PMIC supplier. Clearly Apple had decided that PMIC design was a strategic, differentiating element of its products. PMICs play an outsized role in mobile and automotive systems. They are historically analog circuits, yet now contain increasing digital content. One challenge has been how designers combine the need for analog power circuits with CMOS digital for control.

The Synopsys white paper, titled “Calibrate and Configure Your Power Management IC with NVM IP” and written by Krishna Balachandran, Product Marketing Manager, Sr. Staff, talks about the history of PMICs, starting with their implementation on purely analog nodes. It discusses how the advent of BCD (Bipolar-CMOS-DMOS) technology, which combined all the device types needed for optimal PMIC design in one process, was a game changer. The white paper offers a good analysis of the trade-offs between BJT, MOSFET and IGBT devices.

The white paper also focuses on the need to store critical calibration information and operation configuration settings using a secure and reliable method. It explores the many benefits that non-volatile memory IP offers for both the analog and digital elements in a PMIC design.

For the analog circuit there is variation from unit to unit that requires calibration and the storage of that information on the individual die. This ensures that the output meets specifications. Because many PMIC circuits are used in multiple applications, there is also a need to store configuration settings on the die as well.

One-time programmable (OTP) or multiple-time programmable (MTP) non-volatile memory (NVM) is ideally suited to the needs of PMICs. Antifuse NVM offers high compatibility with existing process mask layers because it requires no new layers; it works with BCD or CMOS nodes with no additional process steps. Its high reliability, including operation at high temperatures, makes it a good fit for automotive applications. Antifuse NVM is also very compact, which further saves on costs.

Applications for NVM in PMICs

NVM is very secure from unauthorized access and tampering. Antifuse NVM bits are not readable using scanning electron microscopes or other optical or mechanical methods. This gives them a big advantage over traditional fuse technology, which can also develop reliability issues due to bridging from electromigration. Lastly, traditional fuse technology requires a lot of area, which makes it much less cost effective.

When antifuse NVM is used, it is possible to emulate MTP with OTP, provided sufficient storage is provisioned to hold multiple copies of the data. Even though it is not infinitely re-writable, given reasonable estimates of lifetime rewrites, OTP can work well as a practical alternative.
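
A minimal sketch of the idea, assuming a simple slot scheme (my own illustration, not Synopsys’s implementation): provision N one-time-programmable record slots, burn the next blank slot on each “write”, and read back the most recently burned slot.

    # Emulating multi-time-programmable (MTP) storage with one-time-
    # programmable (OTP) bits: a hypothetical slot-based illustration.
    BLANK = None

    class OtpEmulatedMtp:
        def __init__(self, slots=8):
            self.slots = [BLANK] * slots   # antifuse bits start unprogrammed

        def write(self, value):
            for i, slot in enumerate(self.slots):
                if slot is BLANK:
                    self.slots[i] = value  # one-time burn; never erased
                    return
            raise RuntimeError("all OTP slots consumed; no rewrites left")

        def read(self):
            written = [s for s in self.slots if s is not BLANK]
            return written[-1] if written else BLANK

    cal = OtpEmulatedMtp(slots=4)
    cal.write({"vout_trim": 3})
    cal.write({"vout_trim": 5})   # a recalibration supersedes the old record
    print(cal.read())             # {'vout_trim': 5}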

The Synopsys white paper also explores the trade-offs surrounding process selection for PMICs. While a large part of the market uses 180nm BCD, increasing digital content is encouraging a move to smaller BCD nodes. Designers also face a decision about whether to migrate to a two-die solution, with one die for analog and another for digital. More complex power-on and power-off requirements, such as power ramping and standby modes, are causing digital content to grow.

The Synopsys white paper is thorough and does a good job of articulating the issues surrounding PMIC design. PMICs are definitely a key competitive component in most systems today. Designers need to look at every opportunity to add needed features and optimize them as they reduce costs and ensure high reliability. OTP NVM technology is proving itself as a key element in this effort. The full Synopsys white paper is available here for download.

Also read:

Why It’s Critical to Design in Security Early to Protect Automotive Systems from Hackers

Identity and Data Encryption for PCIe and CXL Security

High-Performance Natural Language Processing (NLP) in Constrained Embedded Systems


Semiconductor Capital Equipment Series: Introduction
by Doug O'Laughlin on 03-06-2022 at 10:00 am


Semicap is in some ways the unsung hero of American global dominance in semiconductors. The US punches above its weight in market share relative to demand, specifically in three categories: EDA, IP, and equipment.

I hope to write about everything there is to say about semiconductors, and EDA is a place I understand a bit but not as much as I’d like. I still really like the Scuttleblurb EDA primer – and refer back to it from time to time. But semicap – that is a place I feel pretty confident about, and I think the industry structure, capital returns, and defensiveness of the businesses are truly attractive. It’s a great subsector and always seems to be reasonably priced.

Industry Overview and Map

First, I am going to start with this industry map and break it down further. I really like this simplistic semiconductor industry map that @Fritz844 made on Twitter (he seems to have deleted it).

You can see here where the semicap companies exist – and who their key customers are. There is another level deeper that I wanted to make a graphic for. Also, the “mid-end” of semicap is somewhat emerging and new, so if you haven’t heard of it before, that is okay. I would put advanced packaging firmly in this segment, and this is where the back end is starting to look a lot more like the front end. More on that later.

I think this is a decent industry map, with the customers on one end (purchasers) and the suppliers on the other (Semicap and materials). I think right now there is a bit more blurring of Semicap lines than there was in the past, with materials and mid-end kind of being the emerging points of relevance on the map. Obviously, the front end is going to continue to be the most important part, but with the end of Moore’s law, we increasingly need new materials, new technologies (advanced packaging!), and new methods to keep the pace of improvement constant. If you have no idea what Front End, Back End, or Mid End mean, that’s where the next part comes into play.

The 10,000+ foot view of Semicap

My favorite quick infographic on how a semiconductor is made is from the ASML Annual report. I will briefly walk through a few of the steps.

Let’s start with photoresist. Photoresist is a light-sensitive polymer put on top of a silicon wafer; when exposed to light, it turns into a soluble material. The exposure step is called lithography: light is shone through a patterned plate called a photomask, printing the mask’s pattern onto the wafer. The process is similar to old-fashioned photography and film, with the photomask playing the role of the negative and the wafer the role of the film.

After exposure, you smooth out the resist using a bake, which helps development. Development is similar to developing a photo and uses aqueous bases to create the shape of the photoresist profile. Next comes etching, deposition, or ion implantation. Each has a different role in building a semiconductor transistor pattern, such as subtracting material, adding material, or doping the substrate.

Etching is the most common and is usually performed using wet chemicals or plasma; it is commonly used to dig deeper trenches. The photoresist resists the etch and protects the material it covers, which is how the pattern gets transferred onto the substrate.

Last is stripping the photoresist, to then move on to another cycle or step in the fabrication process. Each of these cycles often prints just a single layer, and many modern semiconductors can have hundreds of layers built on top of each other; any single mistake will create a defect in the semiconductor. This explains why fabs are obsessed with cleanliness. A single micron-sized speck of dust can destroy a die and ruin yields.
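
A toy model makes the point about cumulative risk. The step names follow the flow described above; the per-step defect probability and layer count are invented for illustration.

    import random

    # Toy model: each layer repeats the patterning cycle, and a single
    # defect at any step kills the die. Probabilities are invented.
    STEPS = ["apply resist", "expose (litho)", "bake", "develop",
             "etch/deposit/implant", "strip resist"]

    def build_die(num_layers=100, p_defect_per_step=0.0005):
        for _ in range(num_layers):
            for _step in STEPS:
                if random.random() < p_defect_per_step:
                    return False  # one speck of dust ruins the die
        return True

    random.seed(0)
    dies = [build_die() for _ in range(1000)]
    print(f"yield: {sum(dies) / len(dies):.0%}")  # ~74% with these toy numbers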

For me – I think of building a semiconductor like laying a city filled with skyscrapers one floor at a time. Each step in the process either adds or subtracts a floor. After hundreds of steps, you then take a step back and you have your fully built “city”, complete with the hundreds of miniature skyscraper-like transistors.  

Everything I just described is what is called the front end of semicap. The front end usually refers to the equipment that goes into physical transistor creation from silicon, and it tends to be the largest source of spending for fabs. The emerging “mid-end” is post-transistor but pre-die-cutting: in other words, advanced packaging.

With wafer-level advanced packaging such as the CoWoS process and others, parts of packaging that were historically firmly in the back end are moving toward the front end. This is really important and likely one of the best opportunities in growth and misunderstood businesses for now.

Lastly, the back end. After the semiconductor leaves the fab, the product is not done yet. More steps of assembly and testing have to go into making a working end product from pieces of silicon. Even the typical CPU you see has some level of packaging applied to it after fabrication. The back end has historically tended to be more cyclical and was considered a worse business, sitting at the end of the tail-whip of the semiconductor supply chain. But with the importance of packaging rising in heterogeneous compute, many back-end companies have been thrown a strategic lifeline.

Each of these steps is “owned” by a particular company. We will be diving into market share, positioning, and company descriptions in focused paid write-ups – but for now, know that the top 5 semicap companies have approximately ~65% market share. This is an oligopoly and a profitable one at that. There are some niches in each business that are truly wonderful. I think you’ll also learn a lot about the core science, the barriers to entry, and some of the most wonderful technology we’ve come up with to date. These will all be paid posts to come.



The Metaverse: A Different Perspective
by Ahmed Banafa on 03-06-2022 at 6:00 am


The term Metaverse has been a hot topic of conversation recently, with many tech giants like Facebook and Microsoft staking claims. But what is The Metaverse?

Author Neal Stephenson is credited with coining the term “metaverse” in his 1992 science fiction novel “Snow Crash”. He envisioned lifelike avatars living in realistic 3D buildings and other virtual reality environments. Correspondingly, in a technical sense, the Metaverse is another name for the Internet of Everything (#IoE), a concept that started in the early 2000s and led to the Internet of Things (#IoT), whose applications are a scaled-down version of the IoE. Since then, various developments have marked milestones on the way toward a real Metaverse, an online virtual world that incorporates augmented reality (AR), virtual reality (VR), 3D holographic avatars, video, and other means of communication. As the Metaverse expands, it will offer a hyper-real alternative world, or what comic fans call a parallel universe. But this description is like talking about the “frontend” of app development without explaining the “backend” side. To understand that side of this new X-verse, we need to look at the Metaverse from a different perspective.

Different Perspective of The Metaverse

The Metaverse “is bringing together people, processes, data, and things (real and virtual) to make networked connections more relevant and valuable than ever before – turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries”. In simple terms, the Metaverse is the intelligent connection of people, processes, data, and things. It describes a world where billions of objects have sensors to detect, measure, and assess their status, all connected over public or private networks using standard and proprietary protocols. The main pillars of the #Metaverse, as depicted in Fig. 1, are:

  • People: connecting people in more relevant, valuable ways;
  • Data: converting data into intelligence to make better decisions;
  • Processes: delivering the right information to the right person (or machine) at the right time;
  • Things: physical and virtual devices and objects connected to the Internet and each other for intelligent decision-making (modeled in the sketch below).

Figure 1: The pillars of The Metaverse.
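
To make the four pillars concrete, here is a minimal, purely illustrative Python sketch; all names in it are hypothetical and not drawn from any real Metaverse SDK or standard. A connected thing emits data, a process turns that data into a decision, and a person receives the result.

```python
# Purely illustrative sketch of the four pillars; every name here is
# hypothetical and not from any real Metaverse SDK or standard.
from dataclasses import dataclass
import time

@dataclass
class SensorReading:
    """'Data' pillar: a raw measurement from a connected object."""
    device_id: str
    metric: str
    value: float
    timestamp: float

def assess(reading: SensorReading) -> str:
    """'Processes' pillar: deliver the right information to the right person."""
    if reading.metric == "temperature_c" and reading.value > 30.0:
        return f"ALERT operator: {reading.device_id} is overheating"
    return "no action"

# 'Things' pillar: a device (real or virtual) emits a reading; the 'People'
# pillar is the operator who acts on the resulting alert.
reading = SensorReading("hvac-unit-7", "temperature_c", 34.2, time.time())
print(assess(reading))  # -> ALERT operator: hvac-unit-7 is overheating
```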

Challenges Facing The Metaverse

No new technologies or concepts come without challenges, and the Metaverse is no exception:

  1. Identity Management: confirming identity is already difficult in today’s Web 2.0 apps; in the Metaverse, the problem scales up as the use of products and services expands. The last thing you want is a wild west in the Metaverse (see the signature sketch after this list).
  2. Security, Safety, and Privacy (SSP): as devices and people become more connected and collect more data, accelerating the Metaverse’s expansion at a pace approaching that of the real universe, privacy, safety, and security concerns will grow too. How companies balance customer SSP against this wealth of Metaverse data will be critical for the future of the Metaverse and, more importantly, for customers’ trust in it and in any future X-verse.
  3. Finance in the Metaverse: cryptocurrency is a challenge by itself; using it as a means of payment in the Metaverse adds further complications to what is still an unregulated payment system. One option to overcome this is a #CBDC (Central Bank Digital Currency).
  4. Laws, regulations, and protections: this is a new world and new territory for the law, which must define the responsible parties and create new regulations to protect everyone using the Metaverse, including intellectual property in newfound businesses like #NFTs.
  5. The emotional and mental impact of living in the Metaverse: the familiar problems of non-stop social media use and online gaming will transfer to the Metaverse at large scale, with the added dimension of near-real-time interaction. This could create many mental-health issues in the real world, and the line between the real and the imaginary will blur as actions and words carry over between both worlds.
  6. Standardization of the Metaverse: this is usually one of the toughest parts early in the lifecycle of any new technology, since everyone wants to be “the standard” and dominate the market. Standards will need to cover all hardware, software, processes, and protocols, and make interoperability fundamental to the design and implementation of the Metaverse.
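
On the identity-management point, a digital signature is one standard building block for proving control of an identity. The sketch below is illustrative only: it uses the third-party Python “cryptography” package, the claim format is hypothetical, and a real system would also need to bind public keys to verified identities (for example, through a decentralized identifier registry).

```python
# Illustrative only: proving control of an avatar identity with an Ed25519
# signature (third-party "cryptography" package). The claim format below is
# hypothetical; a real system must also bind the key to a verified identity.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held only by the user
public_key = private_key.public_key()        # published with the avatar profile

claim = b"avatar:alice@example-metaverse"    # hypothetical identity claim
signature = private_key.sign(claim)          # user signs the claim

try:
    public_key.verify(signature, claim)      # raises InvalidSignature on forgery
    print("identity claim verified")
except InvalidSignature:
    print("rejected: possible impersonation")
```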

The Future?

Data are embedded in everything we do; every business needs its own flavor of data strategy, which requires comprehensive data leadership. The Metaverse will create tens of millions of new objects and sensors, all generating real-time data that companies using the Metaverse as another avenue of business can turn into more valuable products and services. As a result, enterprises will make extensive use of Metaverse technology, and a wide range of products and services will be sold into various markets, both vertical and horizontal.

For example, in e-commerce the Metaverse provides a whole new revenue stream for digital goods, sold synchronously instead of through the traditional 2D click-and-buy flow. In human resources (HR), much training will be delivered with virtual reality (#VR) and augmented reality (#AR), overlaying instructions on a real-world environment and giving someone a step-by-step playbook for assembling a complex machine, running a device, or trying a new product, all with virtual objects at the heart of the Metaverse. In sales and marketing, connecting with customers virtually and sharing a virtual experience of a product or service will become common, similar to our virtual meetings during the past two years of Covid, but the Metaverse will make it more real and more productive. Crypto products, including NFTs, will be the natives of the Metaverse, adding another block to the Web 3.0 puzzle.

The pandemic forced us to be more online and to accept many virtual interactions, which was like a 2D preview of the Metaverse; the real Metaverse is 3D, with time as the fourth dimension. Still, in the Metaverse we control time and space, because we create both.

Finally, as with cloud computing, we will have Private, Hybrid, and Public Metaverses, with all possible applications and services in each type. Companies will benefit from whichever options match their capabilities and needs. The main goal is to reach Metaverse as a Service (MaaS) and to see a “Metaverse Certified” label on products and services.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

Read more articles at: Prof. Banafa website

Article published in IEEE-IoT

References

https://lucidrealitylabs.com/blog/7-challenges-of-the-metaverse

https://cointelegraph.com/news/new-tribes-of-the-metaverse-community-owned-economies

https://biv.com/article/2021/11/top-business-applications-metaverse

https://www.usatoday.com/story/tech/2021/11/10/metaverse-what-is-it-explained-facebook-microsoft-meta-vr/6337635001/

http://www.cisco.com/web/about/ac79/innov/IoE.html

http://internetofeverything.cisco.com/

http://www.cisco.com/web/solutions/trends/iot/overview.html

http://time.com/539/the-next-big-thing-for-tech-the-internet-of-everything/

http://www.gartner.com/newsroom/id/2621015

http://www.livemint.com/Specials/34DC3bDLSCItBaTfRvMBQO/Internet-of-Everything-gains-momentum.html

http://www.tibco.com/blog/2013/10/07/gartners-internet-of-everything/

http://www.eweek.com/small-business/internet-of-everything-personal-worlds-creating-new-markets-gartner.html


Podcast EP65: Trust But Verify – The Backstory of Applied Materials and Cornami with Wally Rhines

Podcast EP65: Trust But Verify – The Backstory of Applied Materials and Cornami with Wally Rhines
by Daniel Nenni on 03-04-2022 at 10:00 am

Dan is joined by Cornami CEO Wally Rhines. Wally discusses the recent strategic investment in Cornami made by Applied Ventures (the venture capital arm of Applied Materials Inc.), and explores what types of business models and manufacturing optimizations become possible if fully homomorphic encryption (FHE) can be enabled with Cornami technology.
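
For readers new to the concept, the short sketch below illustrates the principle behind computing on encrypted data. It is not Cornami’s implementation: it uses the additively (partially) homomorphic Paillier scheme via the third-party python-paillier (“phe”) package, whereas FHE generalizes the same idea to arbitrary computation on ciphertexts.

```python
# Illustration of the homomorphic principle with Paillier (additively
# homomorphic only). This is NOT Cornami's technology; FHE extends the idea
# to arbitrary computation on encrypted data.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# A client encrypts sensitive values before sending them to an untrusted server.
enc_a = public_key.encrypt(15)
enc_b = public_key.encrypt(27)

# The server adds the ciphertexts without ever seeing the plaintexts.
enc_sum = enc_a + enc_b

# Only the private-key holder can decrypt the result.
print(private_key.decrypt(enc_sum))  # -> 42
```

The business appeal is that a third party, such as a fab, a cloud, or an analytics vendor, can operate on data it can never read.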

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Also read:

Conquering the Impossible with Aspiration and Attitude

I Have Seen the Future – Cornami’s TruStream Computational Fabric Changes Computing

CEO Interview: Wally Rhines of Cornami