
Shipments of 5G Smartphones Will Surge to 900 Million Units in 2024

by Robert Castellano on 11-03-2019 at 6:00 am

Shipments of 5G smartphones will increase from just 13 million units in 2019 to 900 million in 2024, while shipments of older 2G/3G/4G smartphones decline slightly over the 2019-2024 period, with the two reaching parity in 3Q 2023, as shown in the chart below.

According to The Information Network’s report “Hot ICs: A Market Analysis of Artificial Intelligence, 5G, CMOS Image Sensors, and Memory Chips,” 5G smartphones will surge at a CAGR of 130% during this period. 2G/3G/4G smartphones will exhibit a CAGR of -10%, as their share of the overall smartphone market drops from 99% in 2019 to 48% in 2024.
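
As a sanity check on those growth figures, the implied CAGR can be computed directly from the shipment endpoints given above (13 million units in 2019, 900 million in 2024); the short sketch below is illustrative only:

```python
# Sanity check of the 5G shipment CAGR implied by the article's figures:
# 13M units in 2019 growing to 900M units in 2024, i.e. 5 compounding years.

def cagr(start, end, years):
    """Compound annual growth rate as a fraction (0.30 == 30%)."""
    return (end / start) ** (1 / years) - 1

five_g = cagr(13, 900, 5)
print(f"Implied 5G CAGR 2019-2024: {five_g:.1%}")
```

The result comes out around 133%, close to the roughly 130% figure cited in the report.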

In addition to the technical benefits of 5G, which is designed to transfer data 10 to 100 times faster than current 4G technology, prices will also drop significantly during this timeframe. In 2019, 5G phones sell for twice the price of comparable 2G/3G/4G models, but by 2024, 5G smartphones will carry only a 10% premium over older phones.


VW Drops Connected Car Bombshell

by Roger C. Lanctot on 11-03-2019 at 6:00 am

A senior executive from Volkswagen North America kicked off Enterprise Ireland’s first annual “CASE: Driving the Future” mobility symposium last week by announcing the launch of the next generation Car-Net connected car platform. The new solution represents a breakthrough by allowing customers to select their preferred wireless carrier and to add a new connected VW to an existing consumer wireless plan.

The VW announcement was a bombshell not only for this novel multiple-carrier configuration but also because it marks Verizon Wireless’ return to the connected car market six years after General Motors’ OnStar service opted for AT&T Mobility over Verizon. The announcement is something of a poke at AT&T as well, which is VW’s existing connectivity provider. AT&T may yet be added to the new Car-Net platform in 2020 as a customer option.

The new Car-Net system also marks a change in business model for VW – offering new car customers five years of free remote access functionality (remote start, remote lock/unlock) from the company’s mobile app with OnStar-like automatic crash notification via the on-board modem as a $99/year add-on. The Car-Net service was previously offered as a complete package at $199/year.

New cars with the updated Car-Net service will start arriving soon in the U.S. – with a line-wide upgrade expected to be completed during the course of 2020. Verizon is the first carrier enabled on the Car-Net platform thanks to software and service provided by Ireland-based supplier Cubic Telecom. (Cubic Telecom investors include Volkswagen’s Audi division, Qualcomm, and eSIM supplier Valid.) T-Mobile is expected to be added soon to Car-Net via remote provisioning of the on-board eSIM by Cubic. Volkswagen also expects AT&T to be an option in the future.

The announcement opens a new chapter in vehicle connectivity – one in which consumers can tap a re-provisionable connectivity device in a connected car in order to simply add their car to their existing wireless plan. The new connectivity means cars may actually come to be seen by consumers as smartphones on wheels.

VW intends to use the new platform to deliver streaming services, usage-based insurance, vehicle diagnostics and service scheduling, and integrated e-commerce capabilities for paying for parking, tolls, and fuel. All of the richness of the VW/Cubic value proposition will be realized in due time. The announcement shows VW stealing the connectivity innovation flag from Detroit’s GM and planting it firmly in Herndon, Va., VW NA’s headquarters. The multiple-carrier solution and device add-on functionality are currently enabled only in the U.S. and Canada, largely due to regional regulatory limitations elsewhere in the world.



AMD Intel TSMC menage a trois and the trouble with trouples

by Robert Maire on 11-01-2019 at 10:00 am

  • It’s “Complicated” – A 3-Way Chip Relationship
  • Competing for Wafers, Moore’s Law & Love
  • Who’s Competing with Whom?
  • All’s Fair

The 3-way relationship is more complex than it seems

On the surface it seems simple. AMD and TSMC compete with Intel, with Intel making its own chips and TSMC making them for AMD. But below the surface the real competition is actually between Intel and TSMC for supremacy in Moore’s Law, as that will determine chip performance, value and cost. Maybe not… Dig down another layer and maybe it’s a competition between the US and a foreign competitor. Dig a little deeper and it’s a clash of China versus the US.

So the competition is AMD & China versus Intel? Or is it? Could it be Intel & AMD versus TSMC? Could it be Intel & TSMC versus AMD? Maybe all three.

Falling down the rabbit hole

Everything seems simple until you ask who TSMC’s biggest customers are. You might answer with the obvious choices: Apple, Huawei, Qualcomm, AMD, and who else? You might answer Nvidia, HiSilicon, Marvell, Broadcom, or Mediatek as number 5, but what if Intel was a top customer of TSMC? That strange revelation may actually be the case: TSMC has been making chips for Intel, and Intel has been short of capacity. Intel may be freeing up capacity by offloading production to TSMC, perhaps a lot of it.

A secret, convenient, affair?

It’s not like either Intel or TSMC would want the “relationship” publicized. AMD might get mad at TSMC for cheating behind its back, and Intel might be embarrassed that it had to go to its competitor to get needed capacity… much better to keep the “side” relationship quiet and discreet.

Is Intel really serious about Competing with TSMC?

Is Intel actually reducing “leading edge” capex?

On Intel’s recent conference call they spoke about wanting to get back on a 2 or 2.5 year Moore’s Law cadence after stumbling around for 5 years with the 14/10nm transition. Sounds good, but they may not be putting their money where their mouth is.

Intel announced a $500M increase in capex, which sounds like a lot of money but in the scheme of things is just a bit over a 3% increase from their current capex budget, a fairly paltry increase. More importantly, Intel said on the call that a significant portion of their capex was going to increase capacity. We would interpret this “significant” capacity increase as likely more than 3% of their capex spend. The capacity increase is aimed more at 14nm and other non-leading-edge geometries, so the spend is not aimed at pushing Moore’s Law but just at making more of the same parts.

The math suggests that actual capex spend on leading-edge 5nm and 7nm may actually be down after you take out the capacity spend.
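
That back-of-the-envelope math can be made explicit. In the sketch below, the roughly $16B capex base is an assumption inferred from the article’s “a bit over 3%” characterization, and the capacity share is purely hypothetical:

```python
# Back-of-the-envelope check of the capex argument above.
# The ~$16B base budget is an assumption inferred from the article's
# "a bit over 3%" characterization, not a reported figure.

base_capex_b = 16.0   # assumed current Intel capex budget, in $B
increase_b = 0.5      # announced increase, in $B

increase_pct = increase_b / base_capex_b
print(f"Capex increase: {increase_pct:.1%}")  # a bit over 3%

# If the "significant" capacity portion exceeds the total increase,
# leading-edge spend shrinks in net terms. 5% is a hypothetical share.
capacity_share = 0.05
leading_edge_delta = increase_pct - capacity_share
print(f"Implied leading-edge change: {leading_edge_delta:+.1%}")  # negative
```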

This hardly seems like a way for Intel to get back on a 2 to 2.5 year Moore’s Law cadence let alone catch up to TSMC which had a huge (much bigger than 3%) increase in their capex.

So this raises the question of whether Intel is truly serious about catching TSMC… The numbers would indicate not… Maybe Intel really doesn’t want to compete…

Could Intel go “fab lite”? Shades of “real men have fabs” and Jerry Sanders at AMD

Maybe the financial math would be better for Intel to go fabless and throw its manufacturing to TSMC. It’s clearly working better than what they have done lately. It works for Apple, and it has gotten AMD back in the race.

Maybe like Apple, AMD, Qualcomm and others you do the design and keep the IP and hand the dirty and capital intensive work to TSMC.

It would be funny to have both AMD and Intel CPUs made by TSMC….not a lot different from Apple and Huawei both getting their chips made by TSMC or Qualcomm and Broadcom or MediaTek and Marvell….. seems to be the model….

It raises the question that has been asked many times: why does Intel still have fabs?

Everybody is competing for TSMC’s love and capacity

People might say that Intel would never put itself in a position where it had to compete with AMD for capacity at TSMC, but the truth is that Intel is already there… just not in bleeding-edge CPUs. Apple is obviously TSMC’s favorite… Qualcomm always wants to keep a relationship with Samsung to keep TSMC honest and get more capacity.

It’s like a bunch of teenagers fighting over who loves whom more. In this case, TSMC may be the object of everyone’s desire.

More plot twists & strange relationships than an opera

There are a lot of moving parts in these strange relationships. TSMC is a “frenemy” to Intel, and Samsung is a “frenemy” to Apple. All these are small sub-plots against the giant overarching drama between the US and China, which desperately wants to take over both Taiwan and the chip industry, which are in many ways one and the same. The trade war is a backdrop to that drama.

Could Apple turn it into a four way drama?

What if Apple decides to dump Intel and x86 in favor of its own processors, made by TSMC, for laptops and desktops? Or could Apple go in the opposite direction and ask Intel to make its custom processors in Intel fabs in order to stamp them “Made in America” and avoid the China takeover and IP risk? (Not likely… but stranger things have happened.)

The risk of making almost every significant chip on the planet (other than Intel’s) at a single company on a small island an hour-and-a-half sail from China seems strange when you think about it.

The stocks

Right now the drama continues to play out without a lot of definitive conclusions. We need to monitor Intel’s progress in both capacity and Moore’s Law and see if they can get it together. Can AMD get the attention and capacity it needs from TSMC? Will Intel increase or decrease business with TSMC? What will happen with the trade war? When and how will Samsung come back? Will they give up on being a foundry?

Right now the best positioned company appears to be TSMC, which sits at the nexus of the drama and is the friend that everyone wants to have, which is an enviable place to be. Intel is in wait-and-see mode, and AMD has potential but needs to execute.


Samsung 2019 Technology Day Recap!

by Daniel Nenni on 11-01-2019 at 6:00 am

Samsung is a complicated company with a VERY long history. We attempted to capture the Samsung Experience in chapter 8 of our book “Mobile Unleashed: The Origin and Evolution of ARM Processors In Our Devices”. If you are a registered SemiWiki member you can download a free PDF copy in our Books section.

Here is the chapter 8 introduction:

To Seoul, via Austin

Conglomerates are the antithesis of focus, and Samsung is the quintessential chaebol. From humble beginnings in 1938 as a food exporter, Samsung endured the turmoil and aftermath of two major wars while diversifying and expanding. Its early businesses included sugar refining, construction, textiles, insurance, retail, and other lines mostly under the Cheil and Samsung names.

Today, Samsung is a global leader in semiconductors, solid-state drives, mobile devices, computers, TVs, Blu-ray players, audio components, major home appliances, and more. Hardly an overnight success in technology, Samsung went years before discovering the virtues of quality, design, and innovation. The road from follower to leader was long and rocky.

And here are the final thoughts of the chapter:

A bigger question is how Samsung, and others, continue to innovate in smartphones beyond just more advanced SoCs. There are also other areas of growth, such as smartwatches and the IoT, where Samsung is determined to play. There are me-too features, such as Samsung Pay, and new ideas like wireless charging and curved displays. (More ahead in Chapter 10.)

How this unfolds, with Samsung both supplier and competitor in an era of consolidation for the mobile and semiconductor industries, depends on adapting the strategy. Innovations in RF, battery, and display technology will be highly sought after. Software capability is already taking on much more importance. As Chinese firms improve their SoC capability, the foundry business may undergo dramatic changes – and the center of influence may shift again.

History says Samsung invests in semiconductor fab technology and capacity during down cycles, preparing for the next upturn. Heavy investments in 3D V-NAND flash, the SoC foundry business, and advanced processes such as 10nm FinFET and beyond are likely to accelerate, and competition with TSMC and other foundries will intensify as fab expenses climb.

This book was published in December of 2015, and while there have been lots of changes at Samsung, many things remain the same. Remember, they have the full support of South Korea, including the government and more than 51 million people.

Bottom line: Samsung is a brute force technology innovator and we are very lucky to have them as a leader in the semiconductor industry, absolutely!

The Samsung Technology Day featured three key announcements introduced by the president of Samsung Semiconductor:

“Samsung is focused on harnessing the most advanced semiconductor technologies to power innovation across key markets,” said JS Choi, president, Samsung Semiconductor. “From System LSI devices that are perfectly adapted for real-world 5G and AI, to advanced solid-state drives (SSDs) that handle mission-critical tasks and offload CPU workload, we are determined to deliver infrastructure capabilities that are built to enable every wave of innovation.”

  • Exynos 990 and 5G Exynos Modem 5123: Delivers unprecedented AI-powered user experiences on-device with a dual-core neural processing unit (NPU) and enhanced digital signal processor (DSP) that can perform over ten trillion operations per second. The Exynos 990 and 5G Exynos Modem 5123 harness the most advanced chipmaking technologies to date with a 7-nanometer (nm) process using extreme ultraviolet (EUV) lithography.
  • Third-generation 10nm-class (1z-nm) DRAM: Delivers the industry’s highest performance, energy efficiency and capacity, and has been in mass production since September. Optimized for premium server platform development, the 1z-nm DRAM will open the door to a lineup of cutting-edge memory solutions such as DDR5, LPDDR5, HBM2E and GDDR6 products as early as the beginning of next year.
  • 12GB LPDDR4X uMCP (UFS-based multichip package): Combines four 24Gb LPDDR4X chips and an ultra-fast eUFS 3.0 NAND storage into a single package, breaking through the current 8GB package limit in mid-range smartphones and bringing more than 10GB of memory to the broader smartphone market.
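
The arithmetic behind the 12GB figure in the last bullet is simple unit conversion, sketched below:

```python
# Capacity math behind the 12GB uMCP: four 24Gb LPDDR4X dies combined
# in one package, converted from gigabits to gigabytes (8 bits per byte).

dies = 4
gigabits_per_die = 24

total_gb = dies * gigabits_per_die / 8
print(f"Package capacity: {total_gb:.0f} GB")  # 12 GB
```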

Personally, I found the event well organized and the presentations very well done. They were personalized and entertaining. One of the comments was that Samsung will dramatically increase their cloud silicon business. Currently they have 0% market share, so the sky is the limit.


BittWare PCIe server card employs high throughput AI/ML optimized 7nm FPGA

by Tom Simon on 10-31-2019 at 6:00 am

Back in May I wrote an article on the new Speedster7t from Achronix. This chip brings together Network on Chip (NoC) interconnect, high speed Ethernet and memory connections, and processing elements optimized for AI/ML. Speedster7t is a very exciting new FPGA that can be used effectively to accelerate a wide range of processing tasks. Naturally with an announcement like this, the question of how to deploy this chip arises. Not everyone who could benefit from this new technology has the skills, time or resources to build it into a system. Data center operators who want to deploy this chip need a ready-to-go accelerator to make this happen.

Fortunately, Achronix just announced a major design win for their Speedster7t that will help end users get this chip into their server farms and data centers. Achronix and BittWare, a subsidiary of Molex, have teamed up to produce the VectorPath S7t-VG6 Accelerator Card. With it there is now an enterprise class PCIe accelerator card that can be used to provide best in class FPGA acceleration for cloud and edge computing.

The trend of adding data center accelerators has been heating up recently, and the annual market is estimated to be around $2.8B for 2019. Forecasts have this growing to around $21B by 2023. Of this, the FPGA accelerator segment should be the fastest growing, with a size of over $5B by 2023. This is because FPGA-based accelerators fire on all cylinders when it comes to meeting business and technical needs.

FPGA accelerators offer very high performance per watt for a number of applications. Because they are reconfigurable, they allow the agility to take advantage of new algorithms or to be adapted for new applications. Because the BittWare VectorPath S7t-VG6 uses PCIe, it is easily scalable with the addition of any number of needed cards. Deployment is made easy with a full suite of development tools and BittWare’s support resources.

The VectorPath S7t-VG6 is a full-height, ¾-length (GPU size), double-wide card with passive, active or liquid cooling options. The on-board hardware is well thought out. There is 8GB of GDDR6 with 4Tbps of bandwidth, as well as 4GB of DDR4. The PCIe interface is Gen3 x16; the card is expected to support Gen4 with qualification. The Ethernet interfaces use hard MAC and FEC IP that support a wide range of standard protocols and line rates. There is a 1x 400GbE interface that can be configured as 2x 200, 4x 100, or 8x 10/25/40/50GbE. There is also a 1x 200GbE interface that can be configured as 2x 100 or 4x 10/25/40/50GbE.

To make the card even more useful, there are clock and interface expansion options. On the front of the card there are clock inputs for 1PPS + 10MHz. On the back there are 3.3V GPIOs that are useful for control, triggers and adding support for legacy interfaces. Additionally, on the back there is an OCuLink expansion port that adds a lot of flexibility. It can be used for PCIe Gen4 or for general-purpose SerDes. It offers low-latency card-to-card connections for deterministic scaling. Or, it can be used to add extra network ports, add NVMe flash, or define custom serial I/O interfaces.

The news release from Achronix and BittWare has a lot more information about customization options and the developer’s toolkit, and goes into more depth on the advantages of the Speedster7t FPGA. One of the key takeaways is that BittWare has the resources and technology to make deployment of the S7t-VG6 accelerator card practical for a wide range of end users. I suggest reading the full release on their websites for more information.


Efficiency – Flex Logix’s Update on InferX™ X1 Edge Inference Co-Processor

by Randy Smith on 10-30-2019 at 10:00 am

Last week I attended the Linley Fall Processor Conference held in Santa Clara, CA. This blog is the first of three blogs I will be writing based on things I saw and heard at the event.

In April, Flex Logix announced its InferX X1 edge inference co-processor. At that time, Flex Logix announced that the IP would be available and that a chip, InferX X1, would tape out in Q3 2019. Speaking at the fall conference, Cheng Wang, Co-founder and Senior VP of Engineering, announced that indeed, the chip did tape out in Q3. Also, Cheng said that first silicon/boards would be available sometime in March 2020, there would be a public demo in April 2020 (perhaps at the next Linley Conference?), and that mass production will be in 2H 2020. While this means that Flex Logix is delivering on the announced schedule, there was certainly a specific focus to Cheng’s presentation beyond that message. In a word – Efficiency.

In engineering fields, we often compare different efforts or approaches to the same problems using benchmarks. When it comes to looking at finished products, these benchmarks can be straightforward. For example, we review miles per gallon, acceleration, stopping distance, and other factors when analyzing the performance of a car. For processors, it has always been a bit more difficult to do benchmarking. I remember working with BDTI nearly 20 years ago when trying to compare the performance of various processors for video processing with widely different architectures. It took an organization like BDTI to give an unbiased analysis, though it was still challenging to see how the results related to your real-world needs.

There is an increasing number of processing options now being developed and deployed for neural network inference at the edge. More and more, we see attempts to standardize the benchmarks for these solutions. One example is Stanford University’s DAWNBench, a benchmark suite for end-to-end deep learning training and inference. But reading through this information, you will still come to realize that it is your specific application that truly matters. Why look at benchmark results for “93% accuracy” if you must meet “97% accuracy”? Does ResNet-50 v1 accurately represent the model you will be running? In particular, DAWNBench ranks results based on either cost or time. As engineers, though, we typically face criteria in a different manner: hard constraints and efficiency.

Hard constraints are easy to understand when looking at these benchmarks as there will be simple constraints for area, power, and performance. Likely, multiple architectures may be able to meet all or most of these constraints, though perhaps not simultaneously. But to understand which approach meets them best, you need to consider efficiency – inferences per $, and inferences per watt. This method of showing performance is where Flex Logix’s InferX X1 approach seems to separate itself from the competition, at least for the devices shown. From the Flex Logix presentation at the Fall Conference:

DRAM costs money, so it is important to be efficient in your use of DRAM. If you are not considering DRAM efficiency in making your selection of IP, then you are not measuring your true costs. The DRAM requirements to hit a certain performance level are not equal between the various processors.

The one thing that has been clear to me this year, especially having attended both the AI Hardware Summit and the Linley Fall Processor Conference, is that simply measuring TOPS is a waste of time. See below the information presented by Flex Logix on TOPS across a few well-known solutions. In this example, InferX X1 would seem to be a minimum of 2x more efficient than the Nvidia solutions.
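
To make the TOPS argument concrete, here is a small sketch ranking three hypothetical accelerators; every number is invented for illustration and none corresponds to an actual Flex Logix or Nvidia part:

```python
# Illustration of why peak TOPS alone is a poor benchmark: ranking three
# hypothetical accelerators by TOPS vs. by inferences per watt and per
# dollar. All numbers are made up; they are not vendor figures.

devices = {
    #            peak TOPS, inferences/sec, watts, price ($)
    "device_a": (400, 1000, 75, 2000),
    "device_b": (100,  900, 15,  400),
    "device_c": ( 50,  500, 10,  300),
}

for name, (tops, ips, watts, price) in devices.items():
    print(f"{name}: {tops} TOPS, "
          f"{ips / watts:.1f} inf/s/W, "
          f"{ips / price:.2f} inf/s/$")

# device_a leads on raw TOPS, but device_b wins on both efficiency
# metrics, which is the kind of gap the Flex Logix comparison highlights.
```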

The entire Linley Fall Processor Conference presentation from Flex Logix is available on their website. It is not possible to share all the details in a blog, but I encourage you to see the entire presentation. There is more information in the presentation about how this efficiency is achieved and how to accurately predict inference performance (how Flex Logix confirmed their performance pre-silicon).


Arm Reveals Custom Instructions, Mbed Partner Governance

by Bernard Murphy on 10-30-2019 at 6:00 am

Tipping the scale

At TechCon Arm announced two more advances against competitive threats, one arguably tactical and the other strategic, at least in this writer’s view. The tactical move was to add support for custom instructions, the ability to collapse multiple instructions into a single instruction through customer-added logic which hooks into the CPU pipeline. This supports customer differentiation in performance and power consumption for IoT devices, for example for trig functions used in GPS location. Software developers apparently access these new functions as intrinsics.

Custom instruction capability has been around for a while, especially in DSP IPs (where I believe it is extensively used for vectorized operations) and more recently and obviously in the RISC-V architecture. Of course in RISC-V you can do anything because it’s an open ISA, but I imagine you may need to curb your enthusiasm for over-exotic capabilities if you want to remain compliant with the ecosystem. Perhaps then the customization advantages over the Arm offering will not be so great.

Custom instruction support will initially be provided in Cortex-M33, available in the first half of 2020, and will be extended further in the Cortex-M family at a later date. The capability comes at no additional cost for new and existing licensees. Further ability to differentiate at no added cost would I imagine give pause to anyone thinking about a switch to a different architecture.

The more strategic move is in opening up the governance of the Mbed OS to silicon partners. The ecosystem has always been a powerful advantage for Arm and will (in my view) remain a major hurdle for any competing solutions. They have also built a big ecosystem over the last 10 years around their open-source IoT operating system, Mbed OS (about half a million third-party software developers and 150 Mbed-enabled boards and modules so far).

So far that’s been under Arm’s direction. Apparently silicon partners have been asking for more insight and input into Mbed OS future directions. Arm proposed this new governance approach which was well received and is now implemented in a technical working group and a product working group. The product working group meets monthly to prioritize and vote on new capabilities. As one example they’re already working on new low-power battery optimizations based on contributions from partners. Analog Devices, Cypress, Maxim Integrated, Nuvoton, NXP, Renesas, Realtek, Samsung, Silicon Labs and u-blox, among others, are already active in the WG and any Mbed silicon program partner is welcome to join at no cost.

This is strategic because it will help further establish the ecosystem and the technical investment partners make in a solution. In turn they’ll become more and more unwilling to switch to another solution which may not provide all those nice features ready-made. Not to say that Arm (or the competition) couldn’t cross a line at some point where a switch would become more compelling. But so far at least, Arm seems to be making all the right moves to reinforce their position, conceding just enough in tactical areas while continuing to reinforce their strategic advantages. They continue to impress me as a thoughtful and well-managed company. They just keep adding more reasons to tip the scale in their direction.


“Connecting the Divide” at SEMICON Europa

by admin on 10-29-2019 at 2:00 pm

Connecting the Divide between Design and Manufacturing is an overarching theme within the ESD Alliance as these two essential semiconductor disciplines become more reliant on each other. It’s also the reason we’re hosting SMART Design, the first system-centric series showcasing advances in electronic system design, to be held at SEMICON Europa, November 12 through November 15 in Munich, Germany.

SMART Design’s program, “Designing Electronic Systems for Future Applications,” includes presentations and a panel discussion underscoring how the growing range of applications for advanced electronic system design, including automotive and medical, poses new challenges that demand closer collaboration between design and manufacturing. Our goal is to create an opportunity for attendees to deepen their understanding of the links between Design and Manufacturing and throughout the supply chain. This will foster the collaborations essential to addressing technical challenges and ushering exciting new electronic products from concept to consumer.

The 2.5-hour program begins with Babak Taheri, Silvaco’s CEO and CTO, who will assess “Next Generation SoC Design: From Atoms to Systems.” “Near-Threshold Logic Benefits the Full Application Stack,” will be addressed by Lauri Koskinen, CTO of Minima Processor. Next up will be “Deep Learning for Electronics Manufacturing” by Javier Cabello, software and vision engineer at Mycronic AB.

“Cloud-Accelerated Innovation for Semiconductor Design and Verification,” a topic of interest to a wide audience, will be given by David Pellerin, head of worldwide business development, Hitech/Semiconductor for Amazon Web Services. Ian Campbell, OnScale’s CEO, follows with another talk on cloud-based Design titled, “Cloud Engineering Simulation: A Game Changer for Engineers.” The last presentation before a panel session is titled, “Addressing the ‘New-Space’ Paradigm Shift in Development and Production of High Reliability, Space Grade Semiconductor Components.” The presenter will be Christian Sayer, field applications engineer from Cobham Advanced Electronics Solutions.

Noted industry executive Jim Hogan, managing partner of Vista Ventures, will moderate “The Risk of Obsolete Design and Verification Environments in the RISC-V Era,” an ideal topic in the open-source era. Panelists include Gabriele Pulini, senior business development manager at Mentor, a Siemens Business; Silvaco’s Babak Taheri; Adnan Hamid, Breker Verification Systems’ CEO; Raik Brinkmann, president and CEO at OneSpin; and Paul Cunningham, corporate vice president and general manager of the System Verification Group from Cadence Design Systems, Inc.

SMART Design is scheduled for Thursday, November 14, from 2:30 p.m. until 5 p.m. in TechARENA 1, Hall B1. A networking hour hosted by the ESD Alliance and SEMI immediately follows.

Also debuting this year at SEMICON Europa is the SMART Transportation Forum led by SEMI’s Global Automotive Advisory Council (GAAC) with presentations from the Design, semiconductor equipment and materials suppliers and automotive OEM communities. The SMART Transportation Forum, “Connected-to-Everything Automated Mobility,” will be held Wednesday, November 13, from 9:30 a.m. until 3:30 p.m. in Room 14C at International Congress Center Munich.

As the SMART Design program shows, “Connecting Design and Manufacturing” is not only a catchphrase. While Design may be where electronics begins, it’s not the whole picture. With the complexity of systems being designed and manufactured, connecting Design and Manufacturing must be more than just talk. Connecting them will enable smarter, faster, more powerful, smaller, more reliable and more affordable electronic products produced by the $2-trillion global electronic product manufacturing and supply chain. This is a huge responsibility. Meeting it demands cooperation and collaboration across multiple disciplines including semiconductor Design, packaging, software development, materials and manufacturing, system integration and testing.

We look forward to seeing SemiWiki readers at SMART Design at SEMICON Europa as we extend design expertise in the worldwide electronics industry by “Connecting the Divide between Design and Manufacturing.”


Synopsys’ New Die-to-Die PHY IP – What It Means

by Randy Smith on 10-29-2019 at 10:00 am

This morning, Synopsys announced its new Die-to-Die PHY IP. This announcement is critically important as it addresses two major market drivers – the growing need for faster connectivity in the datacenter and similar markets, and a path to better exploit the latest processes by dealing with yield issues for larger dies in a different manner. Also, this seems to be just the first step in this area, and we will anxiously await further advances in die-to-die connectivity. I believe Synopsys is trying to take the lead here and potentially help drive for industry standards that do not yet exist. Please read the press release for the details. Below, I will focus on what solutions this announcement can enable for use by chip architects and designers.

I have written a few times in the past few months on SerDes and other high-speed connectivity paths in the datacenter. Given the seemingly ever-growing demands for cloud computing, whether in e-commerce, machine learning, AI, or gaming – the list is growing daily – datacenter administrators are hungry to find ways to deliver high performance. This pursuit has seen many gains in PCIe (inside the chassis), computer-to-computer, rack-to-rack, and datacenter-wide areas. High-speed optical solutions now target link lengths of up to 10 km. But these solutions still come with latency and area penalties that make them prohibitive for a die-to-die solution.

By connecting multiple dies on a substrate in a point-to-point manner using the new Synopsys die-to-die PHY, which is available now, you can create a larger piece of functionality with less latency between the blocks. Admittedly, there has not been enough standardization for this type of solution. BGA-style connections within multi-chip modules (MCMs) are not new, but there has not been much standardization of the PHYs connecting them. Initially, this solution will only be available on a single 7nm FinFET process, so it does not yet support a heterogeneous MCM solution. However, I expect that will certainly be coming soon. For now, this advancement alone is impressive.

As you make larger and larger semiconductor dies on the latest manufacturing processes, yield usually drops dramatically, which significantly increases cost. If you take that same design and split it into multiple smaller dies on the same process, you can see a huge cost saving from the improved yield alone. To achieve a lower overall cost, the yield savings need only exceed the additional cost of the substrate used to connect the dies. If the original die was already going to sit on a substrate, then this is an easy decision. If not, it is still an option worth exploring, as it may very well be less expensive to produce.
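To make the yield argument concrete, here is a back-of-the-envelope sketch using a simple Poisson yield model. The defect density, die areas, and silicon cost are invented for illustration only, not taken from any foundry or from the Synopsys announcement:

```python
import math

def die_yield(area_cm2, defect_density=0.1):
    """Poisson yield model: fraction of good dies for a given die area.
    defect_density is defects per cm^2 (hypothetical value)."""
    return math.exp(-defect_density * area_cm2)

def cost_per_good_die(area_cm2, cost_per_cm2=50.0, defect_density=0.1):
    """Silicon cost divided by yield gives the cost per *good* die."""
    return (area_cm2 * cost_per_cm2) / die_yield(area_cm2, defect_density)

# Compare one large 8 cm^2 die against four 2 cm^2 chiplets.
monolithic = cost_per_good_die(8.0)
chiplets = 4 * cost_per_good_die(2.0)
substrate_budget = monolithic - chiplets  # headroom left to pay for the substrate
print(f"monolithic: ${monolithic:.0f}, 4 chiplets: ${chiplets:.0f}, "
      f"substrate budget: ${substrate_budget:.0f}")
```

With these made-up numbers the four smaller dies come out hundreds of dollars cheaper per good unit, so the split wins as long as the connecting substrate costs less than that difference, which is exactly the trade-off described above.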

Another interesting consideration is what this technology can enable in conjunction with other technologies. For example, I can envision a design where multiple chiplets are placed in a row on a substrate, forming a datapath (e.g., data flowing left to right, from die to die). If you need chunks of nearby memory, you have a choice: place it north or south of the datapath elements on the substrate, or place it on top of the datapath element, perhaps using another substrate. In other words, can you have BGA connections both above and below a die? It is an interesting thought. Of course, that may also bring up thermal and other EM considerations. Stacked die is not a new idea. So how far can we take this new development from Synopsys? My imagination is starting to run wild.

Two figures from the announcement stand out:

“1.8 terabit-per-second per millimeter unidirectional bandwidth for high throughput die-to-die connectivity.”

“One picojoule per bit (pJ/bit) for ultra-low-power.”
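Multiplying those two quoted figures together gives a quick sense of the power involved at full throughput:

```python
# Back-of-the-envelope combination of the two quoted specs.
bandwidth_bps_per_mm = 1.8e12   # 1.8 Tbps per mm of die edge
energy_per_bit_j = 1e-12        # 1 pJ per bit

power_w_per_mm = bandwidth_bps_per_mm * energy_per_bit_j
print(f"{power_w_per_mm:.1f} W per mm of die edge at full throughput")
```

That works out to roughly 1.8 W per millimeter of die edge when the link runs flat out, a useful number to keep in mind when budgeting thermals for a multi-die package.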

What will designers do with that? We should all be excited to find out.



New ARC VPX DSP IP provides parallel processing punch

New ARC VPX DSP IP provides parallel processing punch
by Tom Simon on 10-29-2019 at 6:00 am

The transition from a mostly analog world to the digital age really began with the invention of the A-to-D and D-to-A converters. However, scalar processors can easily be overwhelmed by the copious data produced by something as simple as an audio stream. To solve this problem, and to really jumpstart the digital age, the digital signal processor (DSP) was developed, catalyzing the sweeping changes we are still witnessing today. If you are old enough, you probably remember when early DSPs were added to audio systems to enhance sound. Because of their immense usefulness, the applications of DSPs have expanded to an ever-growing list of domains. However, it is a safe generalization to say that they are most useful in helping computing systems deal with the external world.

With applications as diverse as RF signal processing, automotive sensor RADAR, LIDAR, sensor fusion, vision, and in some cases machine learning, DSPs have needed to support a wider range of operations and increasing parallelism. To help SoC designers meet these challenges, Synopsys has just introduced the impressive ARC VPX family of DSP processor IP. The two new entrants in this family are the VPX5 and a functional safety version called the VPX5FS. So, what makes these DSPs different?

The answer, in a word, is parallelism at every level. Each VPX core offers four VLIW execution slots. VLIW introduces parallelism by encoding multiple instructions for parallel execution in the same processor cycle. VLIW can be tricky to program because intermediate results in long expressions need to be cascaded through multiple VLIW operations. However, Synopsys has announced ARC MetaWare Tools that hide the mechanics of VLIW operation from developers, so they can write C++ code as usual and reap the benefits of VLIW acceleration.

The next level of parallelism for the VPX DSP IP cores is support for SIMD. SIMD lets one instruction operate on many (up to 512 for VPX5) data items. Three of the VLIW slots support SIMD, providing massive acceleration. Once again, Synopsys has made sure that the ARC MetaWare Tools help software developers easily take advantage of SIMD with minimal effort.
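As a rough illustration of the scalar-versus-SIMD distinction, the sketch below contrasts a one-element-at-a-time loop with a vectorized operation. NumPy here is only a software stand-in for the idea of one instruction touching many data lanes; it is not the Synopsys tooling, and a VPX5-style engine would do this across up to 512 lanes in hardware:

```python
import numpy as np

def scalar_mac(a, b):
    """Scalar view: one multiply-accumulate per loop iteration."""
    acc = 0
    for x, y in zip(a, b):
        acc += x * y
    return acc

def simd_mac(a, b):
    """SIMD view: a single vectorized operation over all lanes at once."""
    return int(np.dot(np.asarray(a), np.asarray(b)))

a = list(range(8))
b = [1] * 8
assert scalar_mac(a, b) == simd_mac(a, b) == 28
```

The results are identical; the difference is that the SIMD form exposes the whole vector to the hardware in one step, which is what the MetaWare tools aim to extract automatically from ordinary C++ loops.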

On top of this, VPX can scale up to 4 cores, adding a third layer of parallelism in VPX based SoCs. This ‘three dimensional’ parallelism gives the VPX based SoCs the ability to tackle a range of problems far beyond the now seemingly quaint uses for the first generations of DSPs.

A trend in machine learning, where parallel processing pays off, is the move toward 8- or 16-bit integer operations to speed up ML recognition algorithms. However, the real world is a messy place filled with data that can only be characterized by floating-point values. Before an ML algorithm can be applied, high-quality sensor data is needed, often in the form of sensor fusion output, to properly characterize what is happening in the real world. Floating-point data offers high dynamic range and higher accuracy than integer values. The VPX cores support floating-point operations in their VLIW slots. To further help with processing, VPX offers a linear algebra math unit that can perform sine, cosine, arctan, sqrt, log, exponent, and other operations. Of course, there is also a VLIW slot that can use SIMD on 8-, 16-, and 32-bit integer data.
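A tiny sketch of why floating point matters for messy sensor data. The sample values and the naive single-scale int8 quantization below are invented for illustration; real quantization schemes are more sophisticated, but the dynamic-range problem is the same:

```python
import numpy as np

# Sensor-style values spanning several orders of magnitude.
samples = np.array([0.001, 0.5, 12.0, 900.0], dtype=np.float32)

# Naive int8 quantization: one scale factor for the whole range.
scale = samples.max() / 127.0
quantized = np.round(samples / scale).astype(np.int8)
recovered = quantized.astype(np.float32) * scale

# The small values are wiped out; float32 keeps them.
print(recovered)  # the 0.001 and 0.5 entries collapse to 0.0
```

With a single scale chosen for the 900.0 outlier, everything below about half the scale step quantizes to zero, which is exactly the kind of loss that makes floating point the safer representation for raw sensor fusion data.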

In its announcement, Synopsys highlights several key areas where it sees the VPX DSPs playing an important role: LIDAR, RADAR, sensor fusion, and 5G communications. The low power and configurability of the VPX cores mean that they can be applied wherever needed to help process enormous amounts of data. One result of having higher performance in the system is a potential reduction in the number and complexity of sensors. For 5G, where signal processing becomes more important because of channel complexity, additional processing power can help improve data rates and reduce power. The full announcement, with more details on the VPX5 and VPX5FS, can be found on the Synopsys website.