
PCB Design in the Cloud
by Daniel Payne on 02-28-2018 at 12:00 pm

I remember meeting John Durbetaki at Intel, where we both worked in 1980. It was an exciting time: something called the Personal Computer had just been introduced by companies like Commodore, Apple and Radio Shack (yes, Radio Shack). IBM was rather late to the party with its PC in 1981, but once IBM entered the market, the business world decided that personal computers could be used for business and even scientific purposes. I bought a Radio Shack TRS-80 home computer, while my Intel friend John Durbetaki bought the IBM PC and soon started coding what would become the SDT schematic capture system, released under the company name OrCAD in 1985. The growth of the PC and of OrCAD continued: OrCAD went public in 1995 and was acquired by Cadence in 1999. What impresses me most is that Cadence has continued to invest in OrCAD over the years, so to learn more I talked with Kishore Karnane in February and discovered that OrCAD has now jumped from the desktop into the cloud.

Back in 2017 the OrCAD Capture Cloud tool was introduced as a subset of the desktop version, providing lots of benefits like:

  • No need to install software on your desktop, instead just use a web browser with an account
  • Search the Arrow parts catalog
  • Platform independent design

New for 2018 is something they've dubbed OrCAD Entrepreneur, which adds lots of useful features for front-end PCB design:

  • Schematic capture in the cloud
  • Arrow parts catalog with some 4,000,000 parts and symbols
  • Quickly create a BOM
  • Find out lead times for each component
  • See if the parts are in stock
  • All for just $99/year as a time-based license (TBL)

When you browse the Arrow catalog you’ll quickly notice that parts not in stock are shown in bright red, so it’s a best practice to find another comparable part early in your design process instead of much later when it becomes harder to make changes.

The cloud is where you do all of the schematic capture; once that looks good, simulation and PCB layout move to the desktop. In the future you can expect PCB layout in the cloud too, so stay tuned. If you are doing IoT design, then spending just $99/year for schematic capture in the cloud sounds like a bargain, and it's backed by the Arrow parts catalog and the 12,000+ reference designs Arrow has assembled, so you don't need to start your next design from a blank schematic.

I was curious, so I created a free account at https://orcad.arrow.com/arrow/signin to see what OrCAD Capture Cloud was like, and was pleasantly surprised to see a reference design appear using the venerable 80C51 microcontroller:

I was able to move components around, add components, wire between pins, and generate a parts report; it's pretty slick and quite intuitive. Here's a quick comparison between OrCAD Capture Cloud (free) and the new OrCAD Entrepreneur ($99/year):

When you choose Cadence tools like OrCAD Entrepreneur you can scale up to OrCAD on the desktop, or even Allegro for the most sophisticated PCB layout, analog/mixed-signal simulation, signal integrity analysis, or even FPGA design. You can also get expert advice by hiring Arrow for design-services consulting, another way to reduce risk for a new product introduction. The free version of OrCAD Cloud already has 7,000 users, so you can expect many of those engineers and students to upgrade to OrCAD Entrepreneur soon.

You can read the original press release about OrCAD Entrepreneur from January 29, 2018 here online.

Summary

If you’re doing system-level design or IoT design and want to start working in the cloud to save time, effort and money, then consider checking out what Arrow Electronics and Cadence have done with the new OrCAD Entrepreneur approach that uses the convenience of the cloud at just $99/year.



CEO Interview: Rene Donkers of Fractal Technologies
by Daniel Nenni on 02-28-2018 at 7:00 am

We (SemiWiki) have been working with Fractal for close to five years now, publishing 25 blogs that have garnered more than 100,000 views. Generally speaking, QA people are the unsung heroes of EDA, since the only time you really hear about them is when something goes wrong and a tapeout is delayed or a chip is respun.

FinFETs really changed QA by introducing many more complexities that require an increasing number of timing, power, and noise characterization checks, for example. Foundries and leading-edge companies are forgoing internal tools in favor of Crossfire from Fractal, where they can collaborate (crowdsource) and increase QA confidence.

The Fractal people and I have crossed paths many times over the years, working for some of the same companies, and recently I joined Fractal for business development work that you will be reading about moving forward. Fractal really is an impressive company with a unique approach to a very challenging market segment. It is my hope that Fractal can be an example to other emerging EDA companies that want to solve problems others have not addressed in a very sensible way, absolutely.

Tell me a little about your background. What brought you to running Fractal?

I started my career at Sagantec, which is how I got involved in the EDA industry. I have a financial background, and later became responsible for worldwide customer support and operations management. Seven years ago we noticed a need in the design community for a standardized approach to quality assurance that could replace internal solutions, so Fractal was established by the three founding members and I ended up taking the CEO role.

Given our previous experience, we decided to build Fractal with the three founders as the only shareholders. Our strategy is to grow the company by adding customers and investing the returns from purchase orders in software and application engineers. Looking at where we are today, I would say we successfully pulled this off. Fractal now has 20+ customers, 50% of which are top-20 semiconductor companies, and we always have several evaluations ongoing, so we're looking at good continued growth prospects.

Why in your view is library quality important? What is the impact of library errors?
It's interesting that you ask about libraries, because when we look at the usage of Crossfire, our IP validation solution, 25% is on standard cell libraries and 75% is on other IP such as IOs, analog, SRAM, mixed-signal, SerDes, etc.

Any error in your IP design data is a potential risk for your design, and finding errors in IP models at the end of a project can create a huge problem for meeting your tapeout deadline. Characterization issues, for example, are notoriously hard to spot without rigorous checking. For a standard-cell library at an advanced node we are literally talking terabytes of .lib files. Now suppose one of those process corners has characterization issues: without QA incoming inspection, these will pop up as issues during timing and power verification. It is then very difficult, late in the design cycle, to trace them back to characterization and get everything fixed by the library provider, only to then uncover the real timing and power issues coming from the design itself.
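To make the corner-coverage point concrete, here is a minimal sketch that assumes a hypothetical delivery directory of Liberty files named `<library>__<corner>.lib` with made-up corner names; it is not Crossfire or Transport, just an illustration of what one incoming-inspection check might look for:

```python
# Hypothetical incoming-inspection sketch: check that a standard-cell library
# delivery contains a .lib file for every expected PVT corner. This is not
# Crossfire or Transport; the corner names and the <library>__<corner>.lib
# naming convention are assumptions for illustration only.
from collections import defaultdict
from pathlib import Path

EXPECTED_CORNERS = {"ss_0p72v_125c", "tt_0p80v_25c", "ff_0p88v_m40c"}

def corner_coverage(delivery_dir):
    """Return the set of corners found on disk for each library in the delivery."""
    found = defaultdict(set)
    for lib_file in Path(delivery_dir).glob("*.lib"):
        name, sep, corner = lib_file.stem.partition("__")
        if sep:                            # skip files that don't match the convention
            found[name].add(corner)
    return found

def report_missing(delivery_dir):
    for library, corners in sorted(corner_coverage(delivery_dir).items()):
        missing = EXPECTED_CORNERS - corners
        if missing:
            print(f"{library}: missing corners {sorted(missing)}")

if __name__ == "__main__":
    report_missing("lib_delivery")         # hypothetical delivery directory
```

A real IP validation flow covers far more than file presence, of course, but catching a missing or mis-named corner at delivery time is exactly the kind of issue that otherwise surfaces during timing and power verification.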

This is why our philosophy at Fractal is to look beyond just qualifying the library or IP before it is used in the design: IP vendors and users need to be aligned on IP quality requirements. For this we have designed the Transport formalism for unambiguously describing QA requirements, which can be used as a hand-off between IP consumers and providers, very much like how a DRC deck is used.

Where is this an issue – foundation IP, hard IP, hardened versions of soft IP, internal IP, external IP, …? Which of these tends to create more problems?
First, a general remark applicable to all IP used by our customers: IP validation is all about the quality of your design data. Having an IP validation solution in place that is used by all of your internal design teams enforces a minimum level of quality on your design data. If that same IP validation can also be applied to external IP providers, you ensure this base level applies to all IP used in your design flow. Our customers do this through Crossfire Transport, a formalism in which they can unambiguously express their quality requirements. These Transport decks can then be used by their IP providers, either internal or external, to ensure IP releases are compliant before they are shipped.

The next step is to gradually add more checks to the Transport decks and to discuss the criteria with the IP providers, so that both parties support these QA requirements and see a benefit in making sure they are met.

This brings me to the different categories you indicated in your question. In general, regardless of the category, the problem is the same: if an IP issue shows up very late in the design flow, it risks blowing your schedule and at the very least demands extra effort from the design team that cannot be spent on the design itself. The mechanisms differ by IP category, though. A hard IP block may show GDS or LEF vs. SPICE mismatches when a black-box representation is replaced by the real model before final verification. A hardened soft IP may deviate slightly from its RTL, perhaps for good reason, but your LVS checker or router doesn't know that!

What we see with our customers as an important distinction is the difference between internally designed IP and externally sourced IP. Internally designed IP is almost always easier to deal with as the design groups will be using the same CAD flow, base-libraries, naming conventions, etc. So the result is very likely matched with the design in which it needs to be integrated.

External IP, on the other hand, has none of these "natural synergies". However skilled the external designers, there's a fairly high chance some aspect of it won't match your design or verification tools, or the other IP blocks deployed in the design. That's why it's important to capture the characteristics that make an IP block integrate seamlessly in a formalism like Transport. It takes a couple of iterations to get there, because many of these issues are so obvious to you as a user that it's hard to imagine they're not obvious to your IP provider.

If we can agree on a standard for IP validation, at least everybody involved in IP quality will use the same solution, which makes the exchange of setups and reports possible. If you can provide your IP supplier with an IP validation setup that meets your QA standards, and ask them to run it before shipping any IP, we are confident that problems with using external IP on your SoC will be minimal.

Why shouldn’t I expect library providers to fix their own problems?
First of all, in spite of all good intentions, library providers are not library users. Unless you explicitly inform them of your library QA requirements you cannot be sure that a library delivery is compliant.

Another part of the answer is that for library providers to fix their own problems, we should give them the means to do so. From a QA point of view I am convinced that you should check your design data with a different, preferably independent tool. Checking your design data with the same tool or provider that created it in the first place is a problem: how can you expect that tool to find its own issues? Part of the reason for our existence is that we are tool and provider independent; Crossfire is not an IP or library generation tool, nor is it part of an SoC design flow. This makes it ideally suited as an independent, third-party validation solution.

Don’t design teams figure out ways to patch together their own solutions? What’s wrong with that approach?
This is the historic approach we see at all our customers since there was no commercial alternative available on the market. And let’s face it, that’s what engineers are really good at: give them a problem and they’ll find some way of fixing it or working around it. Consequently, each design-company built its own IP validation solution, mostly a scripting environment. With no alternative solution available there is nothing wrong with this approach. During most of our evaluations we are benchmarked against such an internal solution.

Of course, we think proprietary solutions do have disadvantages:

  • Who maintains the in-house solution? What if its engineers leave the company?
  • Similarly, who updates it to add new formats and checks, for example when the company moves to a smaller technology node?
  • Proprietary solutions usually only work for internal IP; incoming inspection of external IP is not possible, even though 40-50% of a design can consist of external IP.

Our typical take on this subject is to integrate those proprietary checks that are really design-style or tooling specific into Crossfire, and leave the bulk of the generic checks to a future-proof tool like Crossfire. This way customers get a better overall solution and yet continue to benefit from years of investment in unique proprietary checks.

Why isn’t this a one-time check? Why do we need continuous checking?
The cornerstone of QA is a continuous feedback cycle to improve the overall quality level. If you only run incoming inspection on IP, you’ll be finding the same issues over and over again. What you need is a feedback loop that addresses the root-causes of these issues.

We strongly believe in an IP validation flow used as part of your IP design regression testing. Once you have design data available, why wait until the end of your design project to run validation checks? If there are issues, you would rather find them as early as possible!

How do you see some of the biggest houses using this today (no names)?
Of our 20+ customers, 50% are top-20 semiconductor companies. Certainly, in the last couple of years we have been able to convince these big companies to replace existing internal solutions with Crossfire, our IP validation solution. One of the reasons is that more and more companies agree that maintaining an internal validation solution is not their core business, provided a state-of-the-art commercial solution is available on the market.

Another major reason is the adoption of Transport, our formalism for specifying QA requirements. Transport decks allow customers to export existing Crossfire setups and send them to other internal groups or external IP providers. What we are now seeing is that some of the largest fabless customers are demanding Transport compliance for external IP deliveries, very much like a foundry requires DRC-correctness of the GDS. With an internal scripting environment, this would never have been possible.

Why do you think this problem is going to become even harder and more important to solve going forward?
With smaller technology nodes we see an increase in on-chip variability. This drives the data volume up into the terabyte range, so your QA tools also need to be designed from the ground up to deal efficiently with that amount of data. That's another way of saying "forget scripting".

On the other hand, we see an increasing interconnectedness of design aspects like timing, power, signal integrity and reliability. Your design needs to be optimized for all of these aspects at the same time; you simply cannot leave any of them as an afterthought. This leads to increasing demands on the consistency of the different models.

Can you give us any insight into your other thoughts for future trends in the market and for Fractal?
I think that smaller technology nodes will mean more design groups turning towards Fractal as their internal solutions will no longer be adequate. Investing more time in such internal solutions is a waste for our customers: they should focus on new, better, faster designs and let us worry about the QA of the design data.

Another opportunity is in the shakeout happening among the providers of smaller nodes; in the end we will see only a very few foundries offering, for example, N7 manufacturing. This is an excellent opportunity to standardize the QA aspects of libraries and IP blocks targeted at these nodes, using Transport and Crossfire as the validation tool. And even if Moore's Law were suddenly to come to an end, we believe our customers are going to focus even more on their core competencies, and their usage of third-party IP will remain strong.

I would say, “let’s talk Crossfire” whenever we talk about the Quality of Design Data. If everybody speaks the same Crossfire language, exchange of IP (internal and external) should become easier.

Read more about Fractal on SemiWiki

Also Read:

CTO Interview: Ty Garibay of ArterisIP

CEO Interview: Michel Villemain of Presto Engineering, Inc.

CEO Interview: Jim Gobes of Intrinsix


Hardware Configuration Management – A Key Enabler for Startups & Big Companies Alike
by Mitch Heins on 02-27-2018 at 12:00 pm

Software configuration management (SCM) has been around for a long time, with commercial SCM offerings such as ClearCase and Perforce and open-source mainstays such as CVS and Subversion. Similarly, over the last two decades we've seen a big uptake in the adoption of hardware configuration management (HCM) methodologies, driven by the exponential growth in system-on-a-chip (SoC) complexity, larger amounts of binary design data, an increased need for better control over data security, and the use of larger, geographically dispersed design teams.

More recently, the complexity growth is being exacerbated by the newer heterogeneous SoC architectures required by internet-of-things (IoT) devices. These devices fuse data from multiple different sensors, and some even employ artificial intelligence techniques that combine both hardware and embedded software to process data before sending actionable information back to the cloud.

Managing SoC design data is particularly challenging when one considers that the design data is a composite of many different CAD abstractions and views. Design teams regularly use CAD tools from multiple electronic design automation (EDA) vendors, each of which has its own data representations, with different and often incompatible databases. Layer on top of this the fact that designs also use multiple IP libraries, some built internally while others come from outside vendors.

Add to this the fact that design teams are comprised of engineers with varied backgrounds who are working on different steps of the design process, on different networks and different hardware platforms while geographically dispersed across the globe. These engineers have different responsibilities and access rights to project data that must be strictly enforced.

For any SoC design, it is necessary to effectively manage the sharing of completed design data while isolating data that is still in progress (e.g. shared libraries vs scratch libraries). Hardware teams have traditionally relied on human-based data gate-keeping to ensure engineers don’t inadvertently overwrite each other’s work when copying changes from scratch areas to master libraries. It’s a practice that is fraught with error and almost unmanageable for teams that cross multiple time zones.

Teams have tried to mitigate the time zone problem using multiple master libraries, which they try to keep in sync on a regular basis. The use of hierarchical design complicates this practice as changes to lower level cells may not be seen due to latency between updates to the different master libraries and the lack of a clean bill of materials detailing cell versions to be used by the project. A much bigger problem occurs when changes are not detected and the project tapes out. This sort of error can necessitate a very expensive re-spin. File management is also cumbersome in this arrangement with multiple copies being kept on each site for both use and archival, which increases the cost of the associated storage devices.

The biggest issue, aside from the logistical management of files and databases, is the lack of a common process for managing the numerous revisions of all views of the design. This is where a hardware configuration management tool comes in. Many companies have taken different approaches to resolving the issues unique to the hardware designer. While some have opted to build layers on top of existing SCMs such as Subversion, others have taken the route of creating the HCM from the ground up, providing a better platform that can be easily customized to the different needs of hardware design teams.

SOS7, an HCM from ClioSoft, is a good example. ClioSoft's SOS7 streamlines the design process and significantly improves a team's productivity. It acts as a gatekeeper and protects users from accidentally losing or overwriting valuable data, eliminating the need for manual bookkeeping. SOS7 employs a distributed client-server architecture that allows access to data irrespective of a user's location. Data is stored once in a common project repository, and the system makes use of remote cache servers to reduce network bandwidth and minimize the effects of network latency.

Most importantly, SOS7 ensures that design changes are seen immediately by all other members of the team, regardless of the hardware platform used, as SOS7 works cross-platform and is available on both Linux and Windows. SOS7 also provides sandbox development areas to isolate changing data. Objects checked out for edit carry write-locks to prevent accidental overwrites by others, with the ability to revert to or view previous versions.

Especially important for safety critical applications requiring ISO 26262 certification is that SOS7 maintains audit trails of all changes made to the design. SOS7 also employs gate keeping policies for data access control and integrates data management with requirements and issue tracking systems such as Jira, Bugzilla and Trac.

While SCM systems deal with source code in the form of ASCII text files, HCM systems must deal with data in different EDA formats. EDA tools create many different types of side files used to manage their own data. Knowing which of these files to archive can be cumbersome, but SOS7 takes care of that automatically, making it easier to add or exchange tools within the design flow as needed. This is enabled by the EDA vendors providing application programming interface (API) support that allows SOS7 to manage their data for them. DM APIs enable the design flow to seamlessly support revision control with automatic check-out and check-in capabilities without requiring the designer to know all the nuances of which EDA files need to be stored and which can be ignored.

It is easy to do a diff with text files, but it is a different problem when it comes to binary files such as schematics or layout views. SOS7 easily handles text diffs like an SCM, but it also goes the extra mile by providing a mechanism to highlight differences between versions of a schematic or layout. In addition, ClioSoft has added design management GUIs directly into the EDA tool library browsers and design editors to give engineers the capability to browse libraries and design hierarchies, examine the status of cells and perform revision control operations without leaving the design environment or learning a new interface.

For most SoC design teams, given the large amount of design data generated and the increased number of globally dispersed designers, disk storage remains a major concern. An HCM such as SOS7 works hard to ensure that the size of the repository remains as small as possible. It achieves this by intelligently using symbolic links to optimize disk space usage for static libraries and design files. All design files in the designer's workspace remain read-only symbolic links, which minimizes disk usage considerably. Only when the designer wants to edit a file is a writable view of that design file made available in the workspace.
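To make the copy-on-edit idea concrete, here is a minimal sketch in Python; it is not ClioSoft's implementation, and the repository and workspace paths are hypothetical. It shows how a workspace can be populated with read-only symlinks into a shared repository, with a file only copied locally and made writable when a designer checks it out for edit:

```python
# Minimal sketch of a symlink-based workspace with copy-on-edit semantics.
# Illustrative only; this is not ClioSoft's SOS7 implementation.
import shutil
import stat
from pathlib import Path

def populate_workspace(repo, workspace):
    """Fill the workspace with symbolic links to every repository file."""
    for src in Path(repo).rglob("*"):
        if src.is_file():
            dst = Path(workspace) / src.relative_to(repo)
            dst.parent.mkdir(parents=True, exist_ok=True)
            if not dst.exists():
                dst.symlink_to(src.resolve())      # no local copy until edit time

def checkout_for_edit(repo, workspace, rel_path):
    """Replace the symlink with a private, writable copy of one design file."""
    dst = Path(workspace) / rel_path
    if dst.is_symlink():
        dst.unlink()
        shutil.copy2(Path(repo) / rel_path, dst)   # materialize a local copy
        dst.chmod(dst.stat().st_mode | stat.S_IWUSR)
    return dst

# Example usage (hypothetical paths):
# populate_workspace("/projects/soc_a/repo", "/home/alice/ws_soc_a")
# checkout_for_edit("/projects/soc_a/repo", "/home/alice/ws_soc_a", "lib/adc/adc.sch")
```

The design point is simple: hundreds of workspaces can point at one copy of the static data, so the only files consuming extra disk space are the ones actually being edited.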

The takeaway from all of this is that with the advent of more complex SoCs being designed for IoT applications, hardware configuration management will no longer be just for the big enterprises. Even small teams will need to embrace HCM, not only because of design complexity, but also for the capability to do safety-critical designs that require an audit trail and good version control. And remember, if you are a startup you are likely hoping to be acquired for your IP. Being able to show that your design process and data are clean and under control can make all the difference to an acquiring company as to whether your IP is considered valuable or a pile of bones that only a few people can make work.

This all bodes well for ClioSoft and their DM solutions and I expect we will be hearing more from them as the IoT revolution continues to explode.

See also:
ClioSoft Products Overview
ClioSoft SOS – Virtuoso
ClioSoft Visual Design Diff


Connecting Coherence
by Bernard Murphy on 02-27-2018 at 7:00 am

If the CPU or CPU cluster is the brain of an SoC, then the interconnect is the rest of the central nervous system, connecting all the other processing and IO functions to that brain. This interconnect must enable these functions to communicate with the brain, with multiple types of memory, and with each other as quickly and predictably as each function requires. But it must also be efficient and ensure error-free operation.

Pulling off this trick has led to a plethora of bus protocol standards, most widely represented by the AMBA family, now complemented by CCIX, which I'll get to later. There's a nice summary of the various AMBA protocols here, ranging from APB and ASB, through multiple flavors of AHB and multiple flavors of AXI, all the way up to ACE (also in a couple of flavors) and finally CHI. Why so many? Because one protocol simply can't serve the needs of functions running at tens of MHz alongside functions running at GHz, with quality of service (QoS) requirements ranging from best-effort (e.g. web response) to guaranteed (e.g. a phone call).

Network-on-chip (NoC) architectures, like the FlexNoC solution from Arteris, have become pervasive in mixed-protocol SoC designs because of the flexibility, performance, QoS, and layout and power efficiency advantages they offer in contrast to more traditional switch-matrix solutions. You don't need to construct tiered hierarchies of interconnect to bridge between different protocols; the NoC architecture seamlessly manages bridging and communication and can be tuned to deliver the PPA and QoS you need.

These days, there's another wrinkle: cache-coherent protocols have become popular thanks to CPU clusters and other devices that need to communicate with them. When cores read and write memory, they do so first through their caches as a fast shortcut to reading and writing main memory. But if a core updates memory address X in its private cache just before a function F reads X, from its own private cache or directly from main memory, then F is going to read a stale value. Cache-coherency protocols manage these potential mismatches through a variety of techniques to ensure that memory views stay in sync when needed. The ACE and CHI protocols were introduced to cover this need; ACE first, then CHI later to handle the more complex configurations appearing in more recent SoCs.
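A toy model makes the hazard easy to see. The Python below is purely illustrative, not a model of ACE or CHI: two cores with private caches share one memory location, and without an invalidation step the second core keeps returning a stale value:

```python
# Toy illustration of the coherence hazard described above: two cores with
# private caches over one shared memory. Not a model of any AMBA protocol.
memory = {"X": 0}

class Core:
    def __init__(self, name):
        self.name = name
        self.cache = {}                      # private cache: addr -> value

    def read(self, addr):
        if addr not in self.cache:           # miss: fill from main memory
            self.cache[addr] = memory[addr]
        return self.cache[addr]

    def write(self, addr, value, peers=(), coherent=False):
        self.cache[addr] = value
        memory[addr] = value                 # keep main memory up to date
        if coherent:
            for p in peers:                  # coherence step: invalidate peer copies
                p.cache.pop(addr, None)

core0, core1 = Core("core0"), Core("core1")

core1.read("X")                              # core1 now holds X == 0 in its cache
core0.write("X", 42)                         # no coherence action taken
print(core1.read("X"))                       # prints 0: stale value, the hazard

core0.write("X", 99, peers=[core1], coherent=True)
print(core1.read("X"))                       # prints 99: cache miss, re-fetch, coherent view
```

Real protocols do this with snooping or directory hardware rather than software calls, but the invalidate-on-write step is the essence of what ACE and CHI automate.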

Now of course many design enterprises have a mix of IPs with either ACE interfaces or CHI interfaces. Arteris introduced their Ncore version 3 cache coherent interconnect at the October 2017 Linley conference to manage both ACE and CHI protocols in one interconnect, so you can manage a complete cache-coherent domain with just one interconnect solution. This technology is very configurable, not just in the expected parameters but also in topology: Ncore 3 supports tree, ring and mesh topologies and even 3D options, all allowing different ways to manage bandwidth, latency and fault-tolerance.


Typically, your whole design won't require cache coherence; much of what you repurpose from legacy subsystems (or even many new subsystems) won't depend on this capability. You can connect all of those non-coherent subsystems and hardware accelerators using the standard FlexNoC solution, but again with a wrinkle: a hardware accelerator or subsystem in this non-coherent domain can share address space with the coherent domain, allowing memory references from that accelerator/subsystem to be coherent. You accomplish this by connecting these non-coherent subsystems to the Ncore 3 fabric through interfaces containing proxy caches, which loop them into the coherence management logic. You can even connect multiple non-coherent accelerators to a single proxy cache, thereby creating a cluster that can interact with the rest of the system as a coherent peer to the cache-coherent CPU clusters.

Kurt Shuler (VP Marketing at Arteris) told me that this need to integrate non-coherent subsystems and accelerators with the coherent domain is becoming increasingly important in machine-learning use-cases. As the number of hardware accelerators required to process neural net and image processing algorithms increases, it becomes harder to manage data communications without using cache coherence for critical parts of the system. Incidentally, it's also possible to connect, cache coherently, to other die/devices through the CCIX interface (in a 2.5D/3D assembly solution, for example). Ncore 3 supports this kind of connection with a CCIX interface connecting coherent domains between multiple chips.

There is one more important set of capabilities in Ncore 3 that is highly relevant to automotive and other safety-critical applications. This solution provides (within the fabric) ECC generators and checkers for end-to-end data protection, intelligent unit duplication and checking similar to dual-core lockstep (DCLS), and a fault controller with BIST that is automatically configured and connected based on the designer's data protection and hardware duplication settings. These capabilities can be combined to provide sufficient diagnostic coverage to meet automotive ISO 26262 functional safety certification requirements, as well as the more general IEC 61508 specification.

There’s a lot of technology here which should be immediately interesting to anyone building heterogeneous coherent/non-coherent SoCs and anyone wanting to build added safety into those systems. You can learn more HERE.


Developing Affordable IoT Systems
by Daniel Payne on 02-26-2018 at 12:00 pm

The IoT market opportunities in segments like wearables, vehicles, home, cities and industrial are all growing thanks to the combination of semiconductors, sensors, software and systems technology. New hardware designs for IoT edge devices appear on a daily basis, and the companies behind these new products can often be start-ups or just a handful of people in a larger company doing something totally different. Of course to run a successful business you have to manage cash flow, so ideally when starting a new IoT project the expenses need to be managed closely during the design phase. Maybe you need to get an early IoT prototype completed as proof of concept in order to secure funding for production.

IC Insights produced a report in June 2017 showing that the IoT market in 2016 was $74.6 billion, projected to reach $124.1 billion by 2020 across the five categories mentioned above. The IoT edge market doesn't include gateways, servers, computers, smartphones or tablets.

The five IoT market segments fuel semiconductor revenue in the following proportions, with smart cities the largest segment at 59%, or $10.82 billion, followed by Industrial IoT at $4.02 billion and connected vehicles at $2.14 billion:

Custom SoCs are a popular implementation approach for IoT edge devices in order to get the longest battery life, highest performance, lightest weight or smallest product size. Alternative approaches, like placing discrete components on a PCB, may not meet requirements. A custom SoC provides several benefits over discrete parts:

  • Lower BOM costs
  • Smallest size
  • Lowest power, longer battery life
  • Higher performance
  • Better reliability
  • No more obsolete components
  • Greater IP protection, harder for competitors to copy
  • Higher barriers to entry for your competitors

Before you get enamored with the idea of developing a custom SoC, it is wise to consider your costs, market size and segmentation, time to market, your competitors and the proper process node. Fabricating on a 180nm node is much cheaper than choosing 28nm, plus with 180nm you can still use 3.3V supplies, which provide a high dynamic range and better noise margins, something quite useful for RF antennas.

You'll hear terms like non-recurring engineering (NRE) cost, which includes the price of EDA design software, semiconductor IP blocks from third parties, and the first silicon run to get your samples. Mentor, a Siemens business, provides a 30-day, no-cost evaluation of its Tanner EDA tools for design and simulation of your custom SoC:

  • Schematic capture of AMS design using S-Edit
  • Processor IP – Arm Cortex-M0 or Arm Cortex-M3
  • Analog simulation using T-Spice
  • Digital simulation using ModelSim

Once your proof of concept is ready the next step is to begin implementation using software tools and semiconductor IP. Here’s the flow from Mentor:

Pushing down into the EDA tooling box there are four distinct engineering tasks:

  • IC design
  • Embedded Software
  • System Exploration
  • PCB Design

Analog Mixed-Signal (AMS) design and MEMs design are done with the Tanner EDA tools, and this is also where you model all of the IoT sensors. Here’s more detail on what the AMS IC design flow looks like:

If your IoT device needs to measure something like pressure, rotation, acceleration, speed or humidity, then MEMS can be modeled in 2D and 3D and then analyzed for physical effects.

For embedded software development Mentor Embedded has a real-time operating system (RTOS) and other tools for IoT edge devices. The Nucleus RTOS is well-equipped for battery powered IoT devices and has been used in some 3 billion devices so far. During embedded software development you would use Sourcery CodeBench:

With Sourcery CodeBench your team can use micro-controllers or microprocessors, then understand system execution, measure performance and even debug your apps.

For system-level design and documentation Mentor has the SystemVision Cloud tool that can model both electronics and mechatronics systems, then simulate them so that you can explore the best design approach.

To finally place your SoC and sensors onto a PCB it’s time to use software called PADS Standard, which has both schematic capture and board layout features at an affordable price.

The most popular processor architecture in the world comes from Arm, and they have put together a program called DesignStart Eval that allows you to design and prototype at no cost; then, when you're ready for production, you upgrade to DesignStart Pro.

Having IC samples produced at low cost can be accomplished with multi-project wafers (MPW), where you share the IC mask costs with other companies on the same silicon wafer. Foundries and companies like MOSIS, eSilicon and EUROPRACTICE can assist you with the MPW logistics. It costs about $16K to get 45 IC samples on a 180nm process, according to EUROPRACTICE, while a second order of 45 samples has an even lower price of $2K.
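As a quick back-of-the-envelope check using only the EUROPRACTICE figures quoted above, the per-sample cost drops sharply once the shared mask charge has been paid:

```python
# Back-of-the-envelope MPW sample cost, using the EUROPRACTICE 180nm figures above.
first_run_cost, repeat_run_cost, samples_per_run = 16_000, 2_000, 45

print(f"first run : ${first_run_cost / samples_per_run:,.0f} per sample")   # ~$356
print(f"repeat run: ${repeat_run_cost / samples_per_run:,.0f} per sample")  # ~$44

# Two runs (90 samples) blended together:
blended = (first_run_cost + repeat_run_cost) / (2 * samples_per_run)
print(f"blended   : ${blended:,.0f} per sample")                            # ~$200
```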

Your particular IoT SoC may have unique requirements that drive up the cost, such as more IP blocks, design consulting, a smaller-geometry process, more EDA tools, PCB fabrication, or more MEMS analysis.

Full production is the final step, once your proof of concept has been accepted and you have raised enough capital; you then choose a foundry partner and get quotes for mask costs and production. At the 180nm node you can expect mask costs to be around $150K, while at more advanced nodes like 90nm you can expect mask costs around $500K.

Summary
The IoT market is very promising, and with the right approach you can minimize engineering costs both for a proof of concept and on into production, using vendors like Mentor and Arm.

There's a 14-page white paper from Mentor on this topic, available for download.



The hierarchical architecture of an embedded FPGA
by Tom Dillinger on 02-26-2018 at 7:00 am

The most powerful approach to managing the complexity of current SoC hardware is the identification of hierarchical instances with which to assemble the design. The development of the hierarchical design representation requires judicious assessment of the component definitions. The goals for clock distribution, power management, and circuit/routing utilization require partitioning that is neither too fine nor too coarse – e.g., the management of multiple power domains within a large partition is difficult, while too fine a partitioning results in more pin constraints to manage and fewer opportunities for timing-driven physical design optimizations.

It struck me that the tradeoffs to the hierarchical representation directly apply to the architecture of an FPGA, as well. I recently chatted with Cheng Wang, SVP of Engineering at Flex Logix Technologies, about how they approached the hierarchical decomposition of the design complexity of their embedded FPGA cores – it was an extremely enlightening discussion.

First, I needed to study up on the typical hierarchical architecture of an FPGA. The programmable logic is implemented with n-input look-up tables (LUTs). A logic block consists of multiple LUTs, with additional storage bits. Dedicated local routing connects the LUTs within the block. The traditional FPGA uses an island-style architecture, with logic blocks separated by wiring channels. (This architecture is also denoted as a "mesh" style design.)
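As a reminder of what a LUT actually is, here is a generic sketch, not Flex Logix's specific circuit: an n-input LUT is simply a 2^n-entry truth table addressed by the input bits, and the configuration word decides which logic function it implements:

```python
# Generic n-input LUT model: the config word is the truth table, and the input
# bits form the index into it. Illustrative only, not any vendor's LUT circuit.
def lut_eval(config_bits, inputs):
    """Evaluate an n-input LUT whose truth table is packed into config_bits."""
    index = 0
    for bit_position, value in enumerate(inputs):
        index |= (value & 1) << bit_position
    return (config_bits >> index) & 1

# A 4-input AND gate needs a 16-entry truth table with only the top entry set.
AND4 = 1 << 0b1111
print(lut_eval(AND4, [1, 1, 1, 1]))   # 1
print(lut_eval(AND4, [1, 0, 1, 1]))   # 0
```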


Figure 1. General FPGA island architecture. (From: Rose and Betz, “How Much Logic Should Go in an FPGA Logic Block?”, IEEE D&T of Computers, January 1998.)

The input and output signals of logic blocks are connected to segmented wires in the channels. The logic block-to-channel wire assignment is denoted as the “connection box”. The pins of the logic block are connected to a percentage of the wires in the channel (Fc), typically between 50% and 100% of the channel track width.

Figure 2. Expanded view of the connection box and switch box of an FPGA mesh architecture. (From: D. Markovic, “FPGA Architecture”, UCLA EE216B.)

The figure above depicts "un-segmented" channel wires and pass transistors for logic block connections. Alternatively, segmented wires are commonly used; the figure below illustrates a block input pin connected to three segments, with the active segment (using a buffer + MUX) shown in red.


Figure 3. Segmented wires in the channel connected to a logic block input. (From: V. Betz, “FPGA Architecture”, University of Toronto).

The channel wires are connected to programmable switches, located in the “Switch Box”. The Switch Box design defines how channel wires may connect to wires on other sides – the “flexibility” of the switch box is a parameter that indicates how many other wires are potential connections.
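For a rough feel of the numbers, the textbook island-style parameters can be used to estimate how many programmable switches one tile needs. The formulas below are generic, the switch-box flexibility (often written Fs) and the pin count are assumed values for illustration, and none of this reflects Flex Logix's actual architecture:

```python
# Rough island-style routing-switch estimate per tile, using the textbook
# parameters: channel width W, connection-box fraction Fc, and switch-box
# flexibility Fs. Generic illustration only, with assumed example values.
import math

def switches_per_tile(W, Fc, Fs, block_pins):
    connection_box = block_pins * math.ceil(Fc * W)  # pin-to-track switches
    switch_box = W * Fs                              # track-to-track switches
    return connection_box + switch_box

# Example: 100 tracks per channel, Fc = 0.5, Fs = 3, 40 logic-block pins.
print(switches_per_tile(W=100, Fc=0.5, Fs=3, block_pins=40))   # 2300 switches
```

Numbers like these are why switch count, and hence routability versus area and delay, dominates the architectural tradeoffs discussed below.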

Note in the figures above that clock wires are not shown – the common approach is to include specific global and local wiring tracks for clocks to the logic block storage elements. The dedicated clocks include distributed buffering and clock management units.

FPGA architecture design involves balancing multiple tradeoffs related to the implementation hierarchy:

  • Logic block functionality needs to address performance, utilization, and routability. A fine-grained block design will require more programmable interconnect resources, more switches, and correspondingly, less performance. A very rich (coarse-grained) logic block design will be inefficient for small logic functions. The goal is to find an optimum logic block functionality, which aligns with the capabilities of the logic synthesis and physical design tools. FPGA implementations have commonly ranged from 4 to 10 LUTs connected locally in the logic block. As FPGA synthesis has improved, the common LUT design has also evolved, from 4- to 5- to 6-input (with dual 5-input) functionality, as is the case for the current Flex Logix EFLX architecture.

  • FPGA design has also evolved to include special-purpose blocks. The hierarchical implementation needs to be able to readily support the unique programmable logic design of arithmetic and DSP functions.

  • The FPGA routing architecture needs to provide sufficient resources to satisfy both utilization and performance targets.

With that background, I asked Cheng, “How did Flex Logix approach these implementation hierarchy decisions?”

He answered, “Rather than the island architecture, we adopted a hierarchical switch network. The number of switch connections required for routes with high locality is reduced, improving performance.”

Figure 4. Hierarchical switch network for FPGA connectivity. (From: US Patent 9,503,092.)

"Of specific importance are the radix and depth of the hierarchical network tree, which were chosen to optimize overall routability; the top level of the switch network utilizes the mesh routing of the island architecture," Cheng continued.

"What other hierarchical tradeoffs were faced?", I asked.

Cheng replied, “We recognized two key design goals for embedded FPGA IP. For many applications, customers need to implement power gating on some of their eFPGA functionality. And, for performance, customers require optimal, low-skew clock distribution, with support for integrating multiple clock domains. To meet these requirements, we introduced a hierarchical component denoted as a tile.”

The Flex Logix hierarchical tile functionality includes ~2,500 6-input, 2-output LUTs (16nm), with two optional flops per LUT.

Cheng highlighted, “Within a tile, the programmable logic can be power gated for a low-power application. The tile design includes an optimized H-tree clock, supporting either one or two clock domains. We implemented a novel method for balanced H-tree construction to distribute a clock input across multiple tiles.”



Figure 5. Clock distribution within and between tiles, for balanced H-tree distribution. A clock may enter a tile at any edge, with multiplexing to distribute through a consistent number of buffers throughout multiple tiles. (From: US Patent 9,882,568.)

"With the introduction of the hierarchical switch network and the tile hierarchy for clock and power management, we needed to develop our own netlist placement and routing technology. These algorithms provide improved performance, with a reduced number of switches for logic localized to the lower levels of the hierarchical network," Cheng said.

The design of eFPGA IP requires supporting a range of end-customer logic capacities with aggressive utilization and performance targets, while supporting varied clock and power domain designs. The introduction of the hierarchical “tile” achieves these goals.


The next time we get together for coffee, Cheng is going to share how the tile boundary design enables efficient signal communication between adjacent tiles – it should be an interesting discussion.


For more information on these eFPGA hierarchical implementation design options, please follow this link.

-chipguy


Read more about Flex Logix on SemiWiki


LithoVision 2018 The Evolving Semiconductor Technology Landscape and What it Means for Lithography
by Scotten Jones on 02-25-2018 at 5:00 pm

I was invited to present at Nikon's LithoVision event, held the day before the SPIE Advanced Lithography Conference in San Jose. The following is a write-up of the talk I gave. In this talk I discuss the three main segments of the semiconductor industry, NAND, DRAM and logic, and how technology transitions will affect lithography. Please note that the slide numbering used in the article matches the slide numbers in the presentation.
Continue reading “LithoVision 2018 The Evolving Semiconductor Technology Landscape and What it Means for Lithography”


First Line of Defense for Cybersecurity: AI
by Ahmed Banafa on 02-25-2018 at 7:00 am

The year 2017 wasn't a great year for cybersecurity: we saw a large number of high-profile cyber attacks, including Uber, Deloitte, Equifax and the now infamous WannaCry ransomware attack, and 2018 started with a bang too, with the hacking of the Winter Olympics. The frightening truth about these increasing cyber-attacks is that most businesses, and the cybersecurity industry itself, are not prepared.

Beyond the lack of preparedness on the business level, the cybersecurity workforce itself is also having an incredibly hard time keeping up with demand. By 2021 there are estimated to be an astounding 3.5 million unfilled cybersecurity positions worldwide, and current staff are overworked, averaging 52 hours a week, not an ideal situation for keeping up with non-stop threats.

Given the state of cybersecurity today, the implementation of AI systems into the mix can serve as a real turning point. New AI algorithms use Machine Learning (ML) to adapt over time, and make it easier to respond to cybersecurity risks. However, new generations of malware and cyber-attacks can be difficult to detect with conventional cybersecurity protocols. They evolve over time, so more dynamic approaches are necessary.

Another great benefit of AI systems in cybersecurity is that they will free up an enormous amount of time for tech employees. AI systems can also help by categorizing attacks based on threat level. While there's still a fair amount of work to be done here, when machine learning principles are incorporated into your systems they can actually adapt over time, giving you a dynamic edge over cyber criminals.

Unfortunately, there will always be limits to AI, and human-machine teams will be the key to solving increasingly complex cybersecurity challenges. But as our models become effective at detecting threats, bad actors will look for ways to confuse them. It's a field called adversarial machine learning, or adversarial AI. Bad actors will study how the underlying models work and try either to confuse the models (what experts call poisoning the models, or machine learning poisoning) or to focus on a wide range of evasion techniques, essentially looking for ways to circumvent the models.

Four Fundamental Security Practices
With all the hype surrounding AI, we tend to overlook a very important fact: the best defense against a potential AI cyber-attack is rooted in maintaining a fundamental security posture that incorporates continuous monitoring, user education, diligent patch management and basic configuration controls to address vulnerabilities. All are explained below.

Identifying the Patterns
AI is all about patterns. Hackers, for example, look for patterns in server and firewall configurations, use of outdated operating systems, user actions and response tactics and more. These patterns give them information about network vulnerabilities they can exploit.

Network administrators also look for patterns. In addition to scanning for patterns in the way hackers attempt intrusions, they are trying to identify potential anomalies like spikes in network traffic, irregular types of network traffic, unauthorized user logins and other red flags.

By collecting data and monitoring the state of their network under normal operating conditions, administrators can set up their systems to automatically detect when something unusual takes place — a suspicious network login, for example, or access through a known bad IP. This fundamental security approach has worked extraordinarily well in preventing more traditional types of attacks, such as malware or phishing. It can also be used very effectively in deterring AI-enabled threats.
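As a tiny illustration of this baseline-then-flag approach, here is a deliberately simple statistical sketch; it is not any product's detection engine, and the traffic numbers are invented. It learns normal behavior from historical measurements, then flags anything that strays too far from that baseline:

```python
# Minimal baseline-and-threshold anomaly detector for a single network metric.
# Purely illustrative; real systems use far richer features and models.
from statistics import mean, stdev

# Hypothetical hourly login counts observed under normal operating conditions.
baseline = [42, 39, 45, 41, 44, 40, 43, 38, 46, 41]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation, k=3.0):
    """Flag observations more than k standard deviations from the baseline mean."""
    return abs(observation - mu) > k * sigma

for sample in (44, 47, 120):          # 120 might be a credential-stuffing burst
    print(sample, "ANOMALY" if is_anomalous(sample) else "normal")
```

Production systems replace the single metric and fixed threshold with many features and learned models, but the underlying idea, characterize "normal" and alert on deviation, is the same.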

Educating the Users
An organization could have the best monitoring systems in the world, but the work they do can all be undermined by a single employee clicking on the wrong email. Social engineering continues to be a large security challenge for businesses because workers can easily be tricked into clicking on suspicious attachments, emails and links. Employees are considered by many to be the weakest links in the security chain, as evidenced by a recent survey that found that careless and untrained insiders represented the top source of security threats.

Educating users on what not to do is just as important as putting security safeguards in place. Experts agree that routine user testing reinforces training. Agencies must also develop plans that require all employees to understand their individual roles in the battle for better security. And don’t forget a response and recovery plan, so everyone knows what to do and expect when a breach occurs. Test these plans for effectiveness. Don’t wait for an exploit to find a hole in the process.

Patching the Holes

Hackers know when a patch is released, and in addition to trying to find a way around that patch, they will not hesitate to test whether an agency has implemented the fix. Not applying patches opens the door to potential attacks, and if the hacker is using AI, those attacks can come much faster and be even more insidious.

Checking Off the Controls

The Center for Internet Security (CIS) has issued a set of controls designed to provide agencies with a checklist for better security implementations. While there are 20 actions in total, implementing at least the top five — device inventories, software tracking, security configurations, vulnerability assessments and control of administrative privileges — can eliminate roughly 85 percent of an organization’s vulnerabilities. All of these practices — monitoring, user education, patch management and adherence to CIS controls — can help agencies fortify themselves against even the most sophisticated AI attacks.

Challenges Facing AI in Cybersecurity

AI-Powered Attacks
AI/Machine Learning (ML) software has the ability to “learn” from the consequences of past events in order to help predict and identify cybersecurity threats. According to a report by Webroot, AI is used by approximately 87% of US cybersecurity professionals. However, AI may prove to be a double-edged sword as 91% of security professionals are concerned that hackers will use AI to launch even more sophisticated cyber-attacks.

For example, AI can be used to automate the collection of certain information — perhaps relating to a specific organization — which may be sourced from support forums, code repositories, social media platforms and more. Additionally, AI may be able to assist hackers when it comes to cracking passwords by narrowing down the number of probable passwords based on geography, demographics and other such factors.

More Sandbox-Evading Malware

In recent years, sandboxing technology has become an increasingly popular method for detecting and preventing malware infections. However, cyber-criminals are finding more ways to evade this technology. For example, new strains of malware are able to recognize when they are inside a sandbox, and wait until they are outside the sandbox before executing the malicious code.

Ransomware and IoT
We should be very careful not to underestimate the potential damage IoT ransomware could cause. For example, hackers may choose to target critical systems such as power grids. Should the victim fail to pay the ransom within a short period of time, the attackers may choose to shut down the grid. Alternatively, they may choose to target factory lines, smart cars and home appliances such as smart fridges, smart ovens and more.

This fear was realized with the massive distributed denial-of-service attack that crippled the servers of services like Twitter, Netflix, the NYTimes and PayPal across the U.S. on October 21, 2016. It was the result of an immense assault involving millions of Internet addresses and malicious software, according to Dyn, the prime victim of that attack: "One source of the traffic for the attacks was devices infected by the Mirai botnet." The attack came amid heightened cybersecurity fears and a rising number of Internet security breaches. Preliminary indications suggest that countless Internet of Things (IoT) devices that power everyday technology, like closed-circuit cameras and smart-home devices, were hijacked by the malware and used against the servers.

A Rise of State-Sponsored Attacks

The rise of nation state cyber-attacks is perhaps one of the most concerning areas of cyber-security. Such attacks are usually politically motivated, and go beyond financial gain. Instead, they are typically designed to acquire intelligence that can be used to obstruct the objectives of a given political entity. They may also be used to target electronic voting systems in order to manipulate public opinion in some way.

As you would expect, state-sponsored attacks are targeted, sophisticated, well-funded and have the potential to be incredibly disruptive. Of course, given the level of expertise and finance that is behind these attacks, they may prove very difficult to protect against. Governments must ensure that their internal networks are isolated from the internet, and ensure that extensive security checks are carried out on all staff members. Likewise, staff will need to be sufficiently trained to spot potential attacks.

Shortage of Skilled Staff

By practically every measure, cybersecurity threats are growing more numerous and sophisticated with each passing day, a state of affairs that doesn't bode well for an IT industry struggling with a security skills shortage. With less security talent to go around, there's a growing concern that businesses will lack the expertise to thwart network attacks and prevent data breaches in the years ahead.

IT infrastructure
A modern enterprise has just too many IT systems, spread across geographies. Manual tracking of the health of these systems, even when they operate in a highly integrated manner, poses massive challenges. For most businesses, the only practical method of embracing advanced (and expensive) cybersecurity technologies is to prioritize their IT systems and cover those that they deem critical for business continuity. Currently, cybersecurity is reactive. That is to say that in most cases, it helps alert IT staff about data breaches, identity theft, suspicious applications, and suspicious activities. So, cybersecurity is currently more of an enabler of disaster management and mitigation. This leaves a crucial question unanswered — what about not letting cybercrime happen at all?

The Future of Cybersecurity and AI

In the security world AI has a very clear-cut potential for good. The industry is notoriously unbalanced, with the bad actors getting to pick from thousands of vulnerabilities to launch their attacks, along with deploying an ever-increasing arsenal of tools to evade detection once they have breached a system. While they only have to be successful once, the security experts tasked with defending a system have to stop every attack, every time.

Given the advanced resources, intelligence and motivation behind high-level attacks, and the sheer number of attacks happening every day, victory eventually becomes impossible for the defenders.

The analytical speed and power of our dream security AI would be able to tip these scales at last, leveling the playing field for the security practitioners who currently have to constantly defend at scale against attackers who can pick a weak spot at their leisure. Instead, even the well-planned and concealed attacks could be quickly found and defeated.

Of course, such a perfect security AI is some way off. Not only would this AI need to be a bona fide simulated mind that can pass the Turing Test, it would also need to be a fully trained cyber security professional, capable of replicating the decisions made by the most experienced security engineer, but on a vast scale.

Before we reach the brilliant AI seen in science fiction, we need to go through some fairly testing intermediate stages, although these still have huge value in themselves, and truly astounding breakthroughs are happening all the time. When AI matures as a technology, it will be one of the most significant developments in history, changing the human condition in ways comparable to, and bigger than, electricity, flight and the Internet, because we are entering the AI era.


Ahmed Banafa was named the No. 1 Top Voice to Follow in Tech by LinkedIn in 2016. Read more articles at IoT Trends by Ahmed Banafa

References

https://www.csoonline.com/article/3250086/data-protection/7-cybersecurity-trends-to-watch-out-for-in-2018.html
https://gcn.com/articles/2018/01/05/ai-cybersecurity.aspx
https://www.darkreading.com/threat-intelligence/ai-in-cybersecurity-where-we-stand-and-where-we-need-to-go/a/d-id/1330787?
https://www.itproportal.com/features/cyber-security-ai-is-almost-here-but-where-does-that-leave-us-humans/
https://www.linkedin.com/pulse/wake-up-call-iot-ahmed-banafa

All figures: Ahmed Banafa


Herb Reiter on the Challenges of 2.5D ASIC SiPs

Herb Reiter on the Challenges of 2.5D ASIC SiPs
by Daniel Nenni on 02-23-2018 at 12:00 pm

Years ago my good friend Herb Reiter promoted the importance of 2.5D packaging to anybody and everybody who would listen, including myself. Today Herb's vision is in production and the topic of many papers, webinars, and conferences. According to Herb, and I agree completely, advanced IC packaging is an important technology for leading-edge chip companies focused on high performance and low power. TSMC agrees of course, as shown by their CoWoS and InFO packaging technologies, which have been adopted by leading semiconductor companies (Apple, Nvidia, Xilinx, etc.).

The latest trend in 2.5D packaging is the leading ASIC companies enabling the masses, and as we write about it the word is spreading quickly to emerging AI chip companies (Nervana, DeePhi, Mythic, Groq) and the systems companies that are now doing their own chips (Google, Amazon, Facebook). On March 6th you have the opportunity to hear it from Herb himself via a webinar sponsored by leading ASIC company Open-Silicon:

Solutions and Strategies to Mitigate the Physical Design, Assembly and Packaging Challenges of 2.5D ASIC SiPs

This Open-Silicon webinar, moderated by Herb Reiter of eda 2 asic Consulting, Inc., will address the unique physical design, assembly and packaging challenges of 2.5D ASIC SiPs, and outline the proven solutions and strategies available to mitigate these issues in order to successfully ramp ASIC SiP designs into volume production. Using a 2.5D HBM2 ASIC SiP as a case study, the panelists will cover all aspects of physical design of the interposer and ASIC, signal integrity analysis and STA, rail analysis and power integrity analysis. They will also address package design, assembly and testing, both at the wafer level and the SiP level.

 


The panelists will emphasize the importance of understanding the entire 2.5D ASIC SiP manufacturing supply chain ecosystem and all of its stakeholders, such as the HBM2 memory, ASIC, interposer, package substrate, assembly house, foundry and more. Attendees will learn about system planning, 2.5D ASIC SiP requirements and implementation strategies, package assembly flows, verification, test, and signoff. By understanding the implementation and manufacturing challenges associated with 2.5D ASIC SiPs and the solutions available, designers and architects will be better equipped to achieve high volume manufacturing with lower risk, higher performance and faster time-to-market.

This webinar is ideal for chip designers and SoC architects of the next generation of high bandwidth applications in HPC, networking, deep learning, virtual reality, gaming, cloud computing and data centers.

Herb has more than 30 years of semiconductor experience and he has been a tireless promoter of 2.5D packaging for many years. Herb writes for and works with industry organizations on 2.5D work groups and events at conferences around the world. I have worked with Herb on various conferences and recommend him professionally at every opportunity. Herb’s company EDA 2 ASIC Consulting started with single die designs in 2002 and now helps with the transition to multiple dies in a single package. This is one webinar that you don’t want to miss, absolutely.

About Open-Silicon

Open-Silicon transforms ideas into system-optimized ASIC solutions within the time-to-market parameters desired by customers. The company enhances the value of customers’ products by innovating at every stage of design — architecture, logic, physical, system, software and IP — and then continues to partner to deliver fully tested silicon and platforms. Open-Silicon applies an open business model that enables the company to uniquely choose best-in-industry IP, design methodologies, tools, software, packaging, manufacturing and test capabilities. The company has partnered with over 150 companies ranging from large semiconductor and systems manufacturers to high-profile start-ups, and has successfully completed 300+ designs and shipped over 130 million ASICs to date. Privately held, Open-Silicon employs over 250 people in Silicon Valley and around the world. www.open-silicon.com


An AI assist for 5G enhanced Mobile Broadband for mobile platforms

An AI assist for 5G enhanced Mobile Broadband for mobile platforms
by Bernard Murphy on 02-23-2018 at 7:00 am

If you’re not up to speed on 5G, there are three use-cases: eMBB (enhanced mobile broadband) for mobile platforms (Gbps rates, immersive gaming, VR, AR; spectrum usage also extends up to mmWave, but that’s a different topic), mMTC for massive machine-type communication (ultra-low cost, ultra-low power, very dense networks) and URLLC or ultra-reliable low-latency communication (for tele-surgery, traffic safety and aspects of industrial automation). CEVA is announcing their PentaG platform in support of eMBB at Mobile World Congress (MWC) next week.

Leveraging their skills in DSP IP, CEVA has built considerable experience and a long track record in wireless standards support, from 2G on up, and is now at 9B+ devices shipped to date across their product lines. That means they already have a lot of credibility with the handset and base-station OEMs who are preparing for 5G. I blogged last year (One Cellular Technology to Rule Them All) on their work in this area in support of base-stations. Now they’re announcing what they’re doing in support of UEs (user equipment, aka mobile devices to the rest of us).

Enhanced Mobile Broadband (eMBB) is a tough standard to support; versus LTE it requires much higher capacity and bandwidth, much lower latency, multi-mode/RAT support for smooth evolution and coexistence with existing standards, and support for massive MIMO (multiple antennas at both the base-station and the UE). But it looks like it will be worth the effort. CCS Insight expects 1B subscribers by 2023 and 2.5B by 2025. The network operators, handset makers and semiconductor companies are already actively engaging, most in support of the 5G-NR priority, while Verizon apparently is still doing its own thing (with support from some cities and countries) in the mmWave part of the standard.

Emmanuel Gresset (Director of Business Development in the CEVA Wireless Unit) told me that an important aspect of providing support for 5G in these relatively early days, while the standard is still evolving, is balancing performance against flexibility. They put a lot of effort into looking at tradeoffs, and into the ability for customers to reuse legacy software with enhancements only where needed for eMBB. He cited as an example their choice to use the already widely deployed XC4500 in the Vector MAC Unit processor (VMU). This has 64 MACs versus their XC12, which has more MACs, but the XC12 solution might have implied more software rework for existing customers. Instead they added extensions to the 4500 architecture to support 5G with minimal disruption to legacy code (and they incidentally pick up the MAC shortfall in the VMU).

One part of the PentaG solution I found especially interesting is an AI processor based on neural nets, which they use for link adaptation. Adaptation is a phase in which the UE and the base-station communicate to optimize the quality of the link: the base-station sends information, the UE receives it and evaluates the available options to optimize that signal, then sends feedback to the base-station to guide reconfiguration of the link.

In earlier standards, the UE method for deciding how to optimize was algorithmic. As standards evolved this had to be extended to algorithms plus lookup tables tuned to needs. As standards evolved further (LTE-Advanced), those tables had to grow significantly to meet link quality expectations. For CEVA it was very unclear that this approach could scale into 5G without a loss in quality, hurting both transmission rates and power. PentaG instead uses a neural-net approach which can be trained (by the OEM) to optimize adaptation. CEVA demonstrates improved throughput and significantly reduced power over their earlier-generation (LTE-A) solution. They also believe this approach will be much more flexible in adjusting to evolution in the 5G standard.
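To make the idea concrete, here is a minimal, hypothetical sketch of neural-net-based link adaptation: a tiny feed-forward network maps per-subband SNR measurements to a recommended MCS index, the role the lookup tables used to play. The layer sizes, feature set and untrained placeholder weights are all assumptions for illustration; this is not CEVA's architecture.

    # Minimal sketch (not CEVA's implementation): a tiny feed-forward network
    # that maps per-subband SNR measurements to an MCS index, replacing the
    # algorithm-plus-lookup-table approach described above. Weights here are
    # random placeholders; in practice an OEM would train them offline.
    import numpy as np

    rng = np.random.default_rng(0)

    N_SUBBANDS = 16      # hypothetical number of measured subbands
    N_HIDDEN   = 32
    N_MCS      = 29      # hypothetical number of selectable MCS indices

    # Placeholder (untrained) weights
    W1 = rng.standard_normal((N_SUBBANDS, N_HIDDEN)) * 0.1
    b1 = np.zeros(N_HIDDEN)
    W2 = rng.standard_normal((N_HIDDEN, N_MCS)) * 0.1
    b2 = np.zeros(N_MCS)

    def select_mcs(snr_db: np.ndarray) -> int:
        """Forward pass: channel measurements in, recommended MCS index out."""
        h = np.maximum(snr_db @ W1 + b1, 0.0)   # ReLU hidden layer
        scores = h @ W2 + b2                    # one score per MCS
        return int(np.argmax(scores))           # pick the best-scoring MCS

    # Example: a noisy channel snapshot for one UE
    snr_snapshot = rng.uniform(-5.0, 25.0, N_SUBBANDS)
    print("Recommended MCS index:", select_mcs(snr_snapshot))

In a real modem the network would be trained offline against simulated and measured channels, then run on the AI processor during the adaptation phase, which is where the flexibility to track an evolving standard comes from.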

The VMU is designed to handle the massive MIMO requirement of 5G, where you have a greatly increased number of antennas on the base-station and on the UE, resulting in 5X the channel bandwidth to be processed compared with LTE. 5G MIMO also means that the UE has to deal with 10X the beamforming options it had to handle in LTE. The VMU assists here through parallelism, a matrix engine and yet more MAC processing, again providing higher performance and lower power than earlier-generation solutions.
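For a feel of why a wide vector MAC unit matters, the following toy sketch (my illustration, not the VMU datapath) shows the dense complex multiply-accumulate behind equalizing a handful of spatial streams; scale the stream count and beam options up to 5G levels and this arithmetic load grows accordingly.

    # Minimal sketch (illustrative only): the complex matrix/MAC work behind
    # MIMO equalization. The UE estimates an effective channel for a few
    # spatial streams and applies a zero-forcing equalizer, which reduces to
    # dense complex multiply-accumulate -- the operation a vector MAC unit
    # exists to parallelize. Stream count and channel model are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    STREAMS = 4   # hypothetical number of spatial streams at the UE

    # Effective channel after beamforming, and QPSK symbols for each stream
    H = (rng.standard_normal((STREAMS, STREAMS)) +
         1j * rng.standard_normal((STREAMS, STREAMS))) / np.sqrt(2)
    tx = (rng.choice([-1, 1], STREAMS) + 1j * rng.choice([-1, 1], STREAMS)) / np.sqrt(2)

    # Received vector with a little noise, then zero-forcing: x_hat = H^-1 y
    noise = 0.01 * (rng.standard_normal(STREAMS) + 1j * rng.standard_normal(STREAMS))
    rx = H @ tx + noise
    equalized = np.linalg.solve(H, rx)   # in hardware: many parallel complex MACs

    print("transmitted:", np.round(tx, 2))
    print("equalized  :", np.round(equalized, 2))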

A cluster of CEVA-X2 DSPs optimizes modem control for latency and performance across multi-RAT/5G operation and multiple simultaneous events. It also provides an optimized connect solution/queue manager to handle traffic housekeeping between units directly, without getting bogged down in interrupt-driven transfers, which is again important in managing throughput and latency to 5G expectations.
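As a rough illustration of the kind of data structure behind such a queue manager (a generic sketch, not CEVA's connect solution), a single-producer/single-consumer ring buffer lets one unit hand descriptors to another by polling head and tail indices rather than raising interrupts:

    # Generic sketch (not CEVA's queue manager): a single-producer /
    # single-consumer ring buffer. One unit pushes work descriptors, another
    # polls for them; no interrupt is needed for the handoff.
    class RingQueue:
        def __init__(self, size: int):
            self.buf = [None] * size
            self.size = size
            self.head = 0   # next slot the consumer reads
            self.tail = 0   # next slot the producer writes

        def push(self, item) -> bool:
            """Producer side: returns False if the queue is full (caller retries)."""
            nxt = (self.tail + 1) % self.size
            if nxt == self.head:
                return False
            self.buf[self.tail] = item
            self.tail = nxt
            return True

        def pop(self):
            """Consumer side: returns None if empty (caller polls again later)."""
            if self.head == self.tail:
                return None
            item = self.buf[self.head]
            self.head = (self.head + 1) % self.size
            return item

    q = RingQueue(8)
    q.push("descriptor-0")
    q.push("descriptor-1")
    print(q.pop(), q.pop(), q.pop())   # descriptor-0 descriptor-1 None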

Finally, the platform – and it is a platform, multiple IPs and software – offers a set of hardware accelerators for the encode and decode functions required for 5G: polar and LDPC. They also offer software libraries optimized for 5G (also for LTE-A / WCDMA / TD-SCDMA) and a HW/SW development kit with reference board. The software also includes the AI training suite that an OEM would use to train the neural nets. Emmanuel stressed that PentaG is not a full modem – you still have to add RF and cache for example. But it certainly seems to be the heart and soul of a 5G eMBB modem.
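As a small aside on what those accelerators compute, here is a textbook polar encoder, one of the two 5G code families named above (in 5G NR, LDPC handles the data channel and polar codes the control channel). This is purely illustrative: it omits frozen-bit selection and rate matching and is not the PentaG accelerator.

    # Textbook polar encoder sketch: apply the F = [[1,0],[1,1]] kernel
    # recursively to a block whose length is a power of two. Frozen-bit
    # selection, bit reversal and rate matching are omitted for brevity.
    import numpy as np

    def polar_encode(u: np.ndarray) -> np.ndarray:
        """Encode a length-2^n bit vector with the basic polar transform."""
        n = len(u)
        if n == 1:
            return u.copy()
        upper = polar_encode(u[: n // 2])
        lower = polar_encode(u[n // 2:])
        # Butterfly: (a, b) -> (a XOR b, b)
        return np.concatenate([upper ^ lower, lower])

    # Example: 8-bit block
    u = np.array([0, 0, 0, 1, 0, 1, 1, 1], dtype=np.uint8)
    print("codeword:", polar_encode(u))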

At MWC next week, CEVA will be showing an impressive demo based on a UE with 4 antennas and a base-station with 8 antennas. The UE will be in a car driving in the city, among high rises with quickly varying reception, at times with no line of sight to the base-station. They’ll show how actual reception/transmission rates compare to theoretical optimum values. Sounds like they’re pretty confident. You can learn more about PentaG HERE. If you want to learn more about 5G in general, there’s a useful reference HERE.