5G, the planned successor to earlier mobile network standards, holds all kinds of promise for new capabilities beyond LTE, but for a while seemed stuck in debate over exactly what the standard should cover. Several problems are apparent: the path to higher bit-rates is complicated by spectrum shortage and fragmentation (plans are apparently underway to ameliorate this), the standard must support a wide range of applications with very dissimilar needs, and the IoT demands support for massive numbers of devices, growing exponentially beyond traditional cellular demand.
This wide scope implies an even wider range of capabilities, particularly at base stations/cells. Massive IoT will operate at low data rates per device but at very high connection densities (many devices within a limited area). Enhanced mobile broadband (eMBB) needs extremely high data rates to support 4K screens and AR/VR, for example. Meanwhile mission-critical applications, such as ADAS in automotive, medical devices and industrial uses, can operate only with high expectations of reliability and low latency.
Verizon, with their V5GTF consortium, had already been developing an early version of the standard, but apparently not fast enough (or maybe not independently enough?) for others. An impressive group of telcos, chip and equipment providers announced on the first day of Mobile World Congress this year that they were promoting their own early version of the standard, 5G NR. Since this is ahead of a finalized spec (slated for 2020), solutions developed at this stage will need to be flexible enough to adjust to intermediate milestones in the run-up to the final release; either way, we may see solutions earlier than originally expected. What once may have been relatively relaxed schedules for designing products in this space may become more of a scramble.
Why is this a big deal? Because 5G-NR support, particularly in macro cells and small cells, is significantly more challenging than for LTE. As understood today, it aggregates simultaneous use of LTE, LTE-A Pro, 5G NR, WiFi 11ax/ad and WiGig in a unified protocol (an evolutionary rather than revolutionary approach, providing backward compatibility with those standards). It must limit round-trip latency to 1ms, or 0.5ms for ultra-low-latency applications. It must support massive MIMO (256+ antennas) and multi-user MIMO, and in order to support high-density UEs/edge nodes (as many as a million per km²) it must handle advanced beamforming.
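To give a sense of where those 1ms/0.5ms budgets come from, 5G NR scales its slot duration with subcarrier spacing (the "numerology" μ, per 3GPP TS 38.211): a 1ms subframe is divided into 2^μ slots. A quick back-of-the-envelope sketch (illustrative only; real round-trip latency also includes processing and HARQ turnaround time):

```python
def slot_duration_ms(mu: int) -> float:
    """NR slot duration: a 1 ms subframe divided into 2**mu slots."""
    return 1.0 / (2 ** mu)

# mu = 0..3 maps to 15, 30, 60, 120 kHz subcarrier spacing
for mu in range(4):
    scs_khz = 15 * 2 ** mu
    print(f"mu={mu}: SCS={scs_khz} kHz, slot={slot_duration_ms(mu)} ms")
# mu=1 (30 kHz) already gives a 0.5 ms slot, so a one-slot turnaround
# fits the ultra-low-latency budget -- at the cost of proportionally
# less processing time per slot for the baseband hardware.
```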
This is already incredibly challenging – processing multi-user, multi-protocol, multi-IO connectivity through many antennas with exceptionally low latency (and low power), while computing and juggling complex beamforming strategies to optimize communication in a dense network. Maintaining flexibility, both in protocol handling and in adapting to spec evolution, calls for SDR (software-defined radio) strategies, so significant software is needed to build out a solution – yet high rates and low latencies demand hardware implementation wherever possible. And on top of that, the standard isn't yet finalized, so designers know that whatever they build today must inevitably evolve. This doesn't look like a game for the faint of heart, but then again, waiting for the standard to freeze before you jump in doesn't look like a winning strategy either.
That's where the CEVA XC12 cluster architecture comes in, shown in the CEVA reference design above. We're used to clustered CPUs; clustered DSPs are not a new idea either, but they make perfect sense in this context, where massive parallel processing is absolutely essential. CEVA states that the XC12 has been designed from the ground up for this domain, able to operate at 1.8GHz in 10nm processes and supporting massive computation through quad vector processing engines and up to 256×256 matrix processing. Each 5G-NR carrier is processed by a single cluster (4 XC12 cores). To squeeze latency to a minimum, each XC12 pair within a cluster is connected by fast interconnect busses and can share memory, allowing the pair to share task workloads such as channel estimation or data symbol processing.
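The workload-sharing idea is easy to picture: channel estimation is computed independently per pilot subcarrier, so a core pair sharing memory can simply split the subcarrier range. The sketch below models this with two threads standing in for the two cores, using a simple least-squares estimate (H[k] = rx[k] / pilot[k]); this is a guess at the kind of partitioning being described, not CEVA's actual scheduler or API:

```python
from concurrent.futures import ThreadPoolExecutor

def ls_estimate(rx, pilots):
    """Least-squares channel estimate per subcarrier: H[k] = rx[k] / pilot[k]."""
    return [r / p for r, p in zip(rx, pilots)]

def estimate_split(rx, pilots):
    """Split the subcarrier range across a core pair (modeled as two threads).

    Each worker estimates its half independently; results are stitched
    back together, mimicking the shared-memory hand-off within a cluster.
    """
    mid = len(rx) // 2
    chunks = [(rx[:mid], pilots[:mid]), (rx[mid:], pilots[mid:])]
    with ThreadPoolExecutor(max_workers=2) as pool:
        parts = pool.map(lambda c: ls_estimate(*c), chunks)
    return [h for part in parts for h in part]
```

On a real DSP cluster the split would of course be fixed-function and zero-copy; the point is only that per-subcarrier independence makes the pairwise partition natural.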
The reference design is completed with other components from CEVA – L2 cache, an X2 for scheduling and control, and hardware accelerators including FFT/IFFT, forward error correction encode and decode, and beamforming, each implemented in hardware for maximum performance. CEVA also provides optimized 3G, 4G and 5G libraries for all physical-layer control and data channels, so an OEM can build a complete PHY much faster, along with drivers and libraries running on the X2 and XC12 to control the hardware accelerators included in the reference design.
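The FFT/IFFT accelerators earn their place because OFDM demodulation is, at its core, one FFT per received symbol – a fixed, regular kernel that is far cheaper in dedicated hardware than on a general DSP core. A minimal NumPy sketch of the receive path (sizes are chosen for illustration, not taken from the reference design):

```python
import numpy as np

N_FFT, CP = 2048, 144  # illustrative FFT size and cyclic-prefix length

def ofdm_demodulate(time_samples):
    """Strip the cyclic prefix, then FFT back to per-subcarrier values."""
    return np.fft.fft(time_samples[CP:CP + N_FFT])

# Round trip: modulate (IFFT + cyclic prefix), then demodulate.
subcarriers = np.exp(2j * np.pi * np.random.rand(N_FFT))
t = np.fft.ifft(subcarriers)
symbol = np.concatenate([t[-CP:], t])  # prepend cyclic prefix
recovered = ofdm_demodulate(symbol)
assert np.allclose(recovered, subcarriers)
```

At eMBB symbol rates this transform runs continuously for every antenna stream, which is exactly the kind of load you want off the programmable cores.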
This looks like a pretty good start for building out a 5G-NR-capable SoC. You can also be confident that you won't be the first provider heading down this path: CEVA has already signed deals for a 5G base-station DSP with one OEM and for a 5G UE modem DSP with another OEM targeting the Winter Olympics in South Korea. Click HERE to watch the webinar on CEVA's solution for 5G-NR.