
A Smarter Way to Do Multi-Board PCB Systems

by Daniel Payne on 03-23-2019 at 2:15 pm

Many electronic product ideas start out as sketches on the back of a napkin, then migrate to diagrams drawn in Visio or PowerPoint, and are finally entered into EDA-specific tools. With that methodology there’s a big disconnect between the diagrams drawn with a purely graphical tool and the EDA tools: there’s no data linkage, so there’s no consistency and no automation when the specification changes. Necessity is the mother of invention, so I recently spoke with Gary Hinde at Cadence to learn how this need for system-level capture was turned into a new product, named Allegro System Capture, announced earlier this year.

Q: How new is System Capture?

It’s been in development for a few years now, and it’s a platform for hardware design of systems with multiple boards, packages, cables and harnesses.

Q: Why was System Capture created in the first place?

I’ve had previous roles at Cadence as an AE and AE director, visiting customers across Europe that design PCB systems for automotive, industrial, mil-aero, networking and even formula racing. These teams often started their designs in PowerPoint or Visio to capture the big picture, but there was never any linkage to the electrical system and requirements. So we created System Capture to automate those graphical diagrams while also capturing connectivity and the partitioning of a system into multiple boards.


Electronic system definition that drives the detailed implementation

Q: Does System Capture work with any schematic or PCB layout tool?

Our System Capture tool works with the Cadence Allegro and OrCAD tools only at this point.

Q: Can you give me an example of how using your approach is beneficial?

Sure, consider a two-board system with an RF board and a digital board. With this approach you can partition your system into two boards and have design engineers assigned to each board working in parallel, while the interconnect between the two boards is entered first in System Capture, maintaining consistency of the board-to-board interconnect.

Q: What types of engineers would be using your new tool?

It really depends on the project. Typical users would be system architects, hardware architects, lead engineers, senior engineers, electrical engineers, some MCAD users, PCB designers, SI experts and even manufacturing engineers.

Q: What problems does this new approach help mitigate?

Well, it eliminates the surprises that happen when bringing together two or more boards. Let’s say something changed, like pin positions or pin names; with this approach you catch those changes so there are no surprises. The system-level connectivity is defined top-down, and there are consistency checks as you create the PCB layout for each board.


System connectivity mismatches identified and highlighted for users to resolve
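The kind of top-down consistency check described above can be sketched in miniature. Everything here is invented for illustration (the net names, connector pins and data structures); Allegro System Capture’s actual data model is far richer.

```python
# Hypothetical sketch of a system-to-board connectivity consistency check:
# compare a system-level interconnect definition against each board's
# connector pinout and flag renamed or moved pins.

# System-level definition: net name -> list of (board, connector pin) endpoints
system_nets = {
    "SPI_CLK":  [("rf_board", "J1.3"), ("digital_board", "J2.3")],
    "SPI_MOSI": [("rf_board", "J1.4"), ("digital_board", "J2.4")],
}

# What each board's schematic actually implements: connector pin -> net name
board_pinouts = {
    "rf_board":      {"J1.3": "SPI_CLK", "J1.4": "SPI_MOSI"},
    "digital_board": {"J2.3": "SPI_CLK", "J2.4": "SPI_MISO"},  # renamed pin!
}

def check_consistency(system_nets, board_pinouts):
    """Return a list of (board, pin, expected_net, actual_net) mismatches."""
    mismatches = []
    for net, endpoints in system_nets.items():
        for board, pin in endpoints:
            actual = board_pinouts.get(board, {}).get(pin)
            if actual != net:
                mismatches.append((board, pin, net, actual))
    return mismatches

for board, pin, expected, actual in check_consistency(system_nets, board_pinouts):
    print(f"{board} {pin}: expected {expected}, found {actual}")
```

The point of the sketch: because the interconnect is defined once at the system level, a rename on either board is caught mechanically instead of at board bring-up.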

Q: How much time can an engineer expect to save with this approach?

With the connectivity already defined in System Capture, you can expect a 2X to 5X speedup during schematic capture. Placing decoupling capacitor rails on schematic pages is now up to 10X faster.

Q: What types of simulation analysis are supported?

We support two types: signal integrity analysis with the Allegro Sigrity SI tool, and power integrity analysis with Allegro Sigrity PI.

Q: Where can I learn more about System Capture?

On the web there’s a product page and data sheet, contact your local Cadence AE, or come visit us at PCB West in Santa Clara.



Semiconductor Market Downturn in 2019

Semiconductor Market Downturn in 2019
by Bill Jewell on 03-23-2019 at 5:00 am

The global semiconductor market grew 13.7% in 2018, according to World Semiconductor Trade Statistics (WSTS). Each year, we at Semiconductor Intelligence review semiconductor forecasts and compare them to the final WSTS data. We used projections which were publicly released from late 2017 through early 2018, prior to the release of January 2018 WSTS data in March 2018. These forecasts ranged from 5.9% from Mike Cowan to 21.3% from Future Horizons. Most were in the 6% to 8% range. Our Semiconductor Intelligence projection in February 2018 was 12%, the closest to the final number of 13.7%.

We were set to award ourselves the (virtual) trophy for forecast accuracy. However, in researching recent forecasts for 2019, we found Objective Analysis posted a report on its forecast accuracy since 2008. The December 2017 video from VLSI Research shows a chart with the Objective Analysis statement “strong start supports 10%+ growth” for 2018. But in the video Jim Handy of Objective Analysis said their forecast was 14%. Thus, Objective Analysis wins the (virtual) trophy for 2018. The 2017 semiconductor market grew 21.6%. Last year we awarded our (again virtual) trophy to Future Horizons for its 11% projection. However, Objective Analysis would have won that year also, with a forecast of ~20%.

What is the outlook for 2019? The 2018 semiconductor market finished weak, with an 8.2% decline in the fourth quarter from the third, according to WSTS. The first quarter of 2019 will be even weaker. Most major semiconductor companies expect up to double-digit declines in 1Q 2019 from 4Q 2018. The exceptions are Qualcomm, which expects a 0.9% increase (with 9.3% at the high end), and Infineon, which sees a flat 1Q 2019. Weak end demand and inventory adjustments are cited as key factors in the declines. Memory companies are the hardest hit, with Samsung down 24.3% in 4Q 2018 and SK Hynix down 13.0%. Micron just reported a 26.3% revenue decline in its fiscal quarter ended February 28, 2019. Micron’s outlook for the quarter ending May 31 is a 17.7% decline.

The global economic outlook points to slower growth in 2019. The International Monetary Fund (IMF) January 2019 forecast is for world GDP growth to slow from 3.7% in 2018 to 3.5% in 2019. The decline is led by the advanced economies, with the U.S. slowing from 2.9% in 2018 to 2.5% in 2019 and the Euro area slowing from 1.8% to 1.6%. China is expected to drag down growth in the emerging/developing economies as its GDP growth decelerates from 6.6% in 2018 to 6.2% in 2019. On the positive side, India continues to show growth of over 7% and accelerating, the ASEAN-5 (Indonesia, Malaysia, the Philippines, Thailand and Vietnam) exhibit steady growth of around 5% and Latin America is recovering. Key factors cited by the IMF for the slowdown are trade tensions (especially between the U.S. and China) and the uncertainty of the U.K.’s exit from the European Union (Brexit). The outlook for 2020 shows slight improvement, with acceleration to 3.6% world GDP growth led by emerging/developing economies.

The outlook for key end equipment is also bleak. IDC in March forecast a 0.8% decline in smartphone unit shipments in 2019 and a 3.3% decline in combined PC and tablet unit shipments.

Recent 2019 semiconductor market forecasts are generally negative. Our latest projection from Semiconductor Intelligence is a 10% decline. Several forecasts are in the -5% to -1% range. Objective Analysis has a chance at a three-peat forecast trophy for 2019, but would have to share it with Morgan Stanley if -5% is closest to the final number. IC Insights expects a slight 1.6% gain for the IC market while Gartner projects a 2.6% gain for semiconductors. Memory is the weakest link in 2019. IC Insights projects the IC market excluding memory will grow 6.7%. WSTS expects 2.6% growth for semiconductors excluding memory. Our Semiconductor Intelligence forecast is for a 2% decline in semiconductors excluding memory.

The current outlook for the semiconductor industry for 2019 assumes lower memory prices, slower electronic equipment demand, inventory corrections, and slower growth for the global economy. Despite all the uncertainty, few analysts expect a global recession in 2019. The expectations for the 2020 semiconductor market are mixed. VLSI Research and Gartner forecast a rebound in 2020 of 7.0% and 8.1% respectively. IC Insights projects a 1.9% decline in the 2020 IC market. Our preliminary 2020 forecast from Semiconductor Intelligence is 5% to 10% growth.


SPIE Advanced Lithography Conference – ASML EUV Update

by Scotten Jones on 03-23-2019 at 12:00 am

At the SPIE Advanced Lithography Conference ASML gave an update on both the current 0.33NA system and the 0.55 high-NA system development. I saw the presentations and got to sit down with Mike Lercel (Director of Strategic Marketing).


Attend Parts of DAC For Free, Really

by Daniel Payne on 03-22-2019 at 5:00 am

The Design Automation Conference (DAC) is the must-see annual event for semiconductor professionals who design chips, use EDA software, and buy semiconductor IP. Like all conferences there’s an entrance fee, but for the 11th year now you can get a free pass, courtesy of three sponsors: Avatar Integrated Systems, ClioSoft and Truechip. The free pass is part of the I Love DAC promotion going on now, but you must act before the deadline of May 17th. DAC is in Las Vegas this year from June 2-6, so make your airline and hotel arrangements early to get the best deals.

Here’s what you’re going to experience with the I Love DAC pass:

  • Four daily Keynote sessions
  • Access to the Exhibition Floor with 170+ exhibitors
  • Access to two pavilions with daily presentations
  • DAC Pavilion, sponsored by Cadence

    • SKYTalks (mini-keynotes)
    • Industry leader discussions
    • Hot industry topic panels
    • Tear-downs
  • Design-On-Cloud Pavilion in Design Infrastructure Alley

    • Daily presentations focused on cloud-based and IP Topics
  • Chip Essentials Village

    • Demonstrations from leading companies providing essentials to SoC design.
  • Daily networking receptions Sunday – Wednesday

Design Infrastructure Alley
In 2018 there were IT and specialty vendors galore, with familiar names like:

  • Google Cloud
  • Microsoft
  • Cadence
  • Amazon Web Services
  • Metrics
  • Alibaba Cloud
  • IBM
  • Dell EMC
  • Univa
  • Footprintku
  • PureStorage
  • Rescale
  • Six Nines
  • Altair
  • Suse
  • ICmanage

These companies provide the hardware, software and services needed to run EDA tools, manage licensing, store massive data sets, maintain security and even support cloud-based flows.

Chip Essentials Village
If your company wants to exhibit at DAC on a budget, check out the Chip Essentials Village: it offers an exhibit kiosk at a value price, presentation time in a theater and more.

DAC Experience
I’ve been attending the DAC conference since 1987 and I always come away filled with new insights learned from the Keynotes, foundries, EDA and IP vendors. You’ll rub elbows with system designers, architects, RTL designers, circuit designers, IC layout designers, CAD engineers, researchers, C-level executives, and of course the team of SemiWiki bloggers. There are about 60 technical sessions to attend, exhibits to peruse, and many networking opportunities.

About DAC
The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, and for electronic design automation (EDA) and silicon solutions. A diverse worldwide community of more than 1,000 organizations attends each year, represented by system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives as well as researchers and academicians from leading universities. Nearly 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, methodologies and technologies. A highlight of DAC is its exhibition and suite area, with approximately 200 of the leading and emerging EDA, silicon, and intellectual property (IP) companies and design services providers. The conference is sponsored by the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers (IEEE), and is supported by ACM’s Special Interest Group on Design.


Narrow-Band IoT Adoption Grows as IP Options Narrow

by Bernard Murphy on 03-22-2019 at 12:00 am

Cellular as a method to communicate with the IoT is on a tear for obvious reasons. It’s long-range with no concerns about the lesser reach of Bluetooth or Wi-Fi, it needs no added infrastructure since it already works with 2G/3G/4G (and ultimately 5G I presume) and it’s designed for ultra-low power, supporting those devices expected to run on a coin-cell battery for 10 years. Commercial cellular IoT networks are blossoming across the world, with a total of 69 launches by 33 operators in 34 countries as of Q4 2018; and NB-IoT represents 80% of all deployments.

For the big cellular players with in-house communications design expertise this is just another direction to grow. But this is IoT, with lots of new silicon design teams, so the market is likely to be more fragmented than more familiar mobile markets. Many of these players, not all new ventures, lack silicon communications expertise so depend on proven IP to handle the modem.

There used to be a number of providers in the NB-IoT space. CEVA, still very much active, has well-established expertise in cellular and introduced their first Dragonfly NB-IoT solution early last year and their eNB/Rel 14 release of that product more recently. ARM was pursuing NB-IoT with its Cordio platform but announced late last year that they would no longer pursue this direction. Commsolid, another IP supplier in this space, was acquired by Goodix and now makes chips rather than IP. When you’re building an IoT solution, modem chips are one way to go of course but if you want ultra-low power and ultra-low cost (which you generally do for high volume edge devices) it’s a lot more attractive to look at integrated ASIC solutions with the modem in an IP.

Which puts CEVA in an enviable position in serving this expanding market. In their eNB-IoT release they have also added multi-constellation GNSS positioning support, satisfying a need for location services in the majority of new IoT products, whether mobile or fixed (an interesting market wrinkle in itself; I have written about this before). A report from DNB Markets (on Nordic Semiconductor following MWC 19) confirms this. DNB are confident in cellular IoT prospects based on what they saw at the event and noted CEVA’s enabling position in driving competition in this space, citing interest coming from semiconductor companies who don’t have cellular expertise, but also from non-semiconductor companies who want to build their own chipsets and modules.

Nurlink, a China-based IC design company specializing in cellular IoT wireless communications, recently announced the introduction of their NK6010 eNB-IoT SoC powered by CEVA-Dragonfly. This supports all eNB-IoT frequency bands and major global carriers, as required to support certification of devices on any eNB-IoT commercial network around the world. Nurlink’s goal is to drive adoption of their chip in IoT devices such as smart meters, wearables, asset trackers and industrial sensors. They added that they’re now engaged with (mobile network) operators worldwide to certify their SoC.

That certification step shouldn’t be ignored. To be allowed onto the networks, you have to prove your device will play well with others in real life (not just in the lab), according to MNO expectations. If you’re not already a communications expert, this can be daunting. CEVA works hard to make this transition as smooth as possible. While MNOs will not certify IP, CEVA have built their own silicon based on the IP, which they have been running through test trials at Vodafone’s IoT Future Lab in Düsseldorf, Germany. Using those open lab facilities, which provide a realistic end-to-end live environment for NB-IoT technology, CEVA connected to the Vodafone NB-IoT network and demonstrated end-to-end IP connectivity with its test chip running an eNB-IoT compliant software stack. This provides a “pre-certification”: not an official signoff, but as close to compliance as possible short of proving it in the end product, which should simplify certification for product developers.

Lastly, how low can you go on power? Integrating the modem into your ASIC automatically reduces power compared to a multi-chip solution. On top of that, Dragonfly is designed for additional power reduction down to a few micro-amps in sleep mode, through dedicated instructions to support power-saving mode (LTE PSM) and through support for LTE eDRX (extended discontinuous reception). Since communication should be relatively infrequent for applications intended for eNB-IoT, getting to 10-year battery life should be achievable as long as you don’t hog power in your application or sensors.
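As a rough illustration of how such 10-year claims are reasoned about, here is the duty-cycle arithmetic. Every figure below is an assumption chosen for the example, not a CEVA or Dragonfly specification.

```python
# Back-of-the-envelope battery-life estimate for an NB-IoT edge node using
# PSM sleep between infrequent reports. All numbers are illustrative.

SLEEP_CURRENT_A = 5e-6      # assumed PSM sleep current: a few micro-amps
TX_CURRENT_A = 0.15         # assumed average radio current while active
TX_SECONDS_PER_DAY = 10     # assumed: one short report per day
BATTERY_MAH = 2000          # assumed battery capacity

def battery_life_years(sleep_a, tx_a, tx_s_per_day, capacity_mah):
    """Duty-cycle average current -> battery life in years (self-discharge ignored)."""
    avg_a = sleep_a + tx_a * tx_s_per_day / 86400.0   # time-weighted average
    hours = (capacity_mah / 1000.0) / avg_a
    return hours / (24 * 365)

years = battery_life_years(SLEEP_CURRENT_A, TX_CURRENT_A,
                           TX_SECONDS_PER_DAY, BATTERY_MAH)
print(f"~{years:.1f} years")  # roughly 10 years under these assumptions
```

The takeaway matches the article’s point: with sleep current in the micro-amp range, battery life is dominated by how often and how long the application keeps the radio and sensors active.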

Want to learn more about CEVA Dragonfly? Click HERE.


ARM, NXP Share Usage, Challenges at Synopsys Lunch

by Bernard Murphy on 03-20-2019 at 7:00 am

Synopsys runs an “Industry verifies with Synopsys” lunch at each DVCon, which isn’t as cheesy as the title might suggest. The bulk of the lunch covers user presentations on their use of Synopsys tools, which I find informative and quite open, sharing problems as much as successes. This year, Eamonn Quiqley, FPGA engineering manager from ARM, and Amol Bhinge, R&D emulation and verification HW director from NXP, shared their experiences.

Eamonn hails from Ireland where they are great spellers but terrible pronouncers as I think the saying goes (half of my relatives are from the Cork area); pronouncing his name challenged most of the other speakers (it’s “Aymon” by the way). He talked about providing enterprise-class FPGA-based verification at ARM at their Trondheim, Redhill and Austin facilities. Here FPGA means FPGA-prototyping using HAPS.

I’m guessing this isn’t the only enterprise-scale use of FPGA prototyping, but it’s the first I have seen and it’s pretty impressive. We’re getting more familiar with datacenter-based emulation, but this is HAPS prototyping in long aisles of cabinets (I counted at least 12 per side in one image), each with multiple bays of prototyping systems. Looks just like a regular datacenter aisle but without the flashing lights on the cabinets (all the flashing lights are on the systems inside).

The goal of course is to provide global access and resource sharing with resilience (reliability, maintainability), to optimize use of resources, and to provide flexibility in how these systems can be used. The trick in meeting the flexibility goal is to provide configurability in a controlled/limited range of options. This they accomplish through a number of widely-used (for them) configurations, from 1 to 16 FPGAs. The most heavily used configuration has 4 FPGAs, with each FPGA connected to the others. They add another S104 system to extend support to 8 FPGAs, which he said was designed to cover many needs and could be adapted if needed. They use these configs most commonly for CPU debug. For GPU debug they double this up again, allowing for up to 16 FPGAs. Cabling and configurations are designed to support multi-design mode (MDM) to maximize usage at all times.

Debug on FPGA prototypes is always tricky; after all, they’re designed for speed rather than deep and broad visibility. Eamonn said they find that deep-trace debug works really well if you know what you want to look at, capturing up to 2k signals at 17MHz, whereas global state visibility, running at 100k cycles/hour, works well if you know roughly when you want to look but not where.

Amol (NXP) opened with an interesting stat. Did you know that every new car contains at least 100 NXP products? I didn’t but it doesn’t sound unreasonable given the level of automation we’re now seeing even in entry-level cars. Rather than talking about specific verification objectives, Amol provided an entertaining and enlightening tour through challenges he still sees in SoC verification.

He kicked off with an interesting statement. Verification tools provide many flavors of coverage, but in his view it is already difficult to address just one type at a reasonable level across multiple domains. He views coverage closure as a long pole for multiple reasons: exclude files for IPs are not as reusable as they should be, it is difficult to deal with tie-offs, constants and parameters (he suggested these need added focus in verification flows) and they’re still struggling to get coverage on IPs.

He made an interesting point: there should be more investment in coverage for IO muxing. I know this is an area already covered as an app by formal tools (in fact he mentioned this area when he discussed formal tools), but I also know that IO muxing architectures can be highly custom, even from design group to design group within a company. I wonder how much effort is required to configure these apps for custom structures? Perhaps so much that many verification groups still resort to simulation-based signoff, in which case coverage metrics would certainly be interesting.

Amol said they worry particularly about false passes, whether checkers, assertions or VIPs may themselves contain errors or may overlook certain possibilities. He noted they had found parameter errors and tieoff errors which should have been caught but were not. He particularly likes Certitude, Z01X and the VC Formal FTA App in tracking down problems of this nature.

Gate-level verification continues to be important (thanks to automotive I believe) and they have found defects at this level which escaped RTL verification. A problem here is turn-around time, in tool run-time (he mentioned running 44 gate-level test cases took many months) and also in debug. He likes shaking out possible bugs earlier in RTL, and he cited VC Formal FXP as a useful tool in this area. But he still sees need for more work in tools and methodologies.

Amol wrapped up with a request for more support in performance verification, particularly along targeted paths such as PCIe to DDR or core to DDR. He mentioned need for more standardization and innovation in this area.

Overall, this was entertainment and insight into what is possible for enterprise-level FPGA prototyping and where yet more development is needed. And a free lunch – what more could you ask for? To watch the event, click HERE.


The Revolution Evolution Continues – SiFive RISC-V Technology Symposium – Part II

by Camille Kokozaki on 03-20-2019 at 5:00 am

During the afternoon session of the Symposium, Jack Kang, SiFive VP of Sales, addressed the RISC-V Core IP for vertical markets, from consumer/smart home/wearables to storage/networking/5G to ML/edge. Embedding intelligence from the edge to the cloud can occur with U Core 64-bit application processors, S Core 64-bit embedded processors, and E Core 32-bit embedded processors. Embedded intelligence allows mixing of application cores with embedded cores, extensible custom instructions, configurable memory for application tuning and other heterogeneous combinations of real-time and application processors. Recently announced products include Huami in wearable AI, the Fadu SSD controller in enterprise, and Microsemi/Microchip’s upcoming FPGA architecture. Customization comes in two forms: customization of cores by configuration changes, and custom instructions in a reserved opcode space on top of the base instruction set and standard extensions, guaranteeing no instruction collision with existing or future extensions and preserving software compatibility.
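That reserved opcode space is why vendor extensions cannot collide with standard ones: the RISC-V base opcode map sets aside major opcodes such as custom-0 (0b0001011) for custom instructions. A minimal sketch of packing an R-type instruction into that space follows; the “mac” mnemonic and its field values are made up for illustration.

```python
# Encode a hypothetical R-type custom instruction in the RISC-V custom-0
# opcode space. Field layout per the base ISA:
# funct7[31:25] | rs2[24:20] | rs1[19:15] | funct3[14:12] | rd[11:7] | opcode[6:0]

CUSTOM0 = 0b0001011  # "custom-0" major opcode, reserved for vendor use

def encode_rtype(funct7, rs2, rs1, funct3, rd, opcode=CUSTOM0):
    """Pack an R-type instruction word from its bit fields."""
    assert 0 <= funct7 < 128 and 0 <= funct3 < 8
    assert all(0 <= r < 32 for r in (rs2, rs1, rd))
    return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) | opcode

# e.g. an invented "mac x5, x6, x7" multiply-accumulate in custom-0 space
word = encode_rtype(funct7=0b0000001, rs2=7, rs1=6, funct3=0b000, rd=5)
print(f"0x{word:08x}")  # -> 0x0273028b
```

Since standard extensions are assigned other major opcodes, any instruction emitted this way decodes unambiguously as vendor-defined, which is the software-compatibility guarantee the presentation referred to.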

Palmer Dabbelt, SiFive software lead manager, compared the complexity of x86 instructions to the simplicity of RISC-V. The current state of RISC-V specifications includes:

  • User Mode ISA spec: M extension for multiplication, A extension for atomics, F and D for single and double precision floating point, C extension for compressed 16-bit instructions
  • Privileged Mode ISA spec: Supervisor, Hypervisor, Machine modes
  • External Debug spec: Debug machine mode software over JTAG

Some resources are listed below [1].

Mohit Gupta, SiFive VP of SoC IP, covered the SoC IP solutions for vertical markets. SoC/ASIC design is nowadays turning into an IP-integration task for cost and design-cycle-time reasons. No single memory technology is applicable to all designs; power, bandwidth and latency tradeoffs are needed for each custom requirement; and SerDes interfaces vary by application, with selection based on power and area optimization. The DesignShare partners provide their differentiated IP at no initial cost for verification and integration, built on top of RISC-V cores and foundational IP, lowering development cost and reducing the expertise needed.

Krste Asanovic, SiFive Co-Founder and Chief Architect, outlined the Customizable RISC-V AI SoC Platform.


The AI accelerator design metrics for ‘inference at the edge’ are cost, performance and power; for ‘inference in the cloud’, latency and throughput/cost matter most. For ‘training in the cloud’, the only metric that matters is performance.

In order to lower cost, power and service latency, Unix servers can be dropped and replaced by self-hosting accelerator engines alongside SiFive RISC-V Unix multi-cores.

SiFive’s Freedom Revolution consists of:

  • High-bandwidth AI and networking applications in TSMC 16nm/7nm
  • SiFive 7-series RISC-V processors with vector units
  • Accelerator bays for custom accelerators
  • Cache-coherent TileLink interconnect
  • 2.4-3.2 Gb/s HBM2 memory interface
  • 28/56/112 Gb/s SerDes links
  • Interlaken chip-to-chip protocol
  • High speed 40+ Gb/s Ethernet

In development: E7/S7/U7 vector and custom extensions, accelerator bays, 3.2 Gb/s HBM2 in 7nm, and higher-performance RISC-V processors.

Jay Hu, DinoplusAI CEO, rounded out the presentation portion of the Symposium. He provided a market overview showing Deep Learning ASICs having the highest growth.

He emphasized that 5G edge cloud computing and ADAS/autonomous driving require predictable and consistent ultra-low latency (~0.2ms ResNet-50), high performance, and high-reliability, high-precision inference; the CLEAR diagram summarizes the technology platform and positioning.

DinoplusAI provided a latency comparison with the NVIDIA Tesla V100 and Google TPU 2.

___
[1] Software resources: Fedora, Debian, OpenEmbedded/Yocto, SiFive blog


(Part I can be found here)


Qualcomm Intel Facebook and Semiconductor IP

by Daniel Nenni on 03-20-2019 at 12:00 am

What do Qualcomm, Intel, and Facebook have in common? Well, for one thing, they all bought network-on-chip (NoC) IP companies. As I have mentioned before, semiconductor IP is the foundation of the fabless semiconductor ecosystem, and I believe this trend of acquisitions will continue. So, if you are going to start a company inside the fabless ecosystem, make it semiconductor IP, absolutely.

In 2013 Qualcomm acquired the Arteris FlexNoC product portfolio, but Arteris retained existing customer contracts and continued to license and enhance FlexNoC for customers. Qualcomm does not maintain any ownership interest in Arteris, and Arteris is now ArterisIP. This was one of my top 5 IP acquisitions along with Denali, Virage, MIPS, and ARM. Not only did Arteris get a VERY nice exit, they parlayed to play again another day. And today ArterisIP has more than 100 employees, more than $20M in revenue, and new technology and products coming out every year. Make no mistake about it, ArterisIP is a fierce competitor and owns the NoC market. We have been covering Arteris since 2011 with 77 blogs viewed close to 300,000 times.

In 2018 Intel purchased the NetSpeed team. We started covering NetSpeed in 2015 and published 26 blogs garnering close to 100,000 views. NetSpeed worked closely with Jim Keller (former Apple) at Tesla. Jim went to Intel and the NetSpeed team followed, simple as that.

In 2019 Facebook acquired Sonics which, from what I have heard, was another team acquisition. It is supposed to be a secret as to where they went but there are no secrets in the fabless semiconductor ecosystem or, more importantly, on LinkedIn:

Sonics Our Next Chapter
More than 20 years ago, we started this business with the belief that the next generation of chips would be defined by networking techniques. Together, we have helped our customers achieve massive success, putting more than 5 billion chips into the marketplace – including, perhaps, the device on which you read this post.

It’s been amazing to see our vision come to fruition. Two decades of development later, we continue to believe that silicon IP solutions are the key to developing groundbreaking products.

Today, we’re excited to announce that we are moving on as a team. As part of this opportunity, we will be winding down our business.

We are deeply proud of our journey from a small Silicon Valley startup to a viable business. We could not have accomplished this without the support of our customers, vendors and our team. Thank you all for believing in our vision and supporting us.
Thank You!

We started covering Sonics in 2013 and published 38 blogs that earned more than 100,000 views. In 2016 Sonics pivoted from NoC to energy processing (power management), so they have not really been focused on NoC, but they were definitely still in the game and have an excellent team. From what I have been told, a former Sonics employee already worked at Facebook, so that is how it started.

With more than 100 IP companies (that I know of) inside the semiconductor ecosystem, you are probably wondering why there were only three NoC contenders. I know I was, until I talked to some IP folks and found out that network-on-chip design is INCREDIBLY hard. Congratulations to ArterisIP, well played.


The Revolution Evolution Continues – SiFive RISC-V Technology Symposium – Part I

by Camille Kokozaki on 03-19-2019 at 5:00 am

SiFive held a RISC-V Technology Symposium on February 26 at the Computer History Museum in Mountain View. Keith Witek, SiFive SVP of Corporate Development and Strategy, kicked off the event and introduced the first keynote speaker, Martin Fink, Western Digital CTO and at the time acting CEO of the RISC-V Foundation (as of this writing, Calista Redmond was just appointed the new CEO of the RISC-V Foundation). He shared a slide showing the growing RISC-V ecosystem, from tools vendors to IP/semi chip providers and design/foundry services. He stated that, moving forward, the areas of focus will include standards/specs, ecosystem growth, awareness and education.


Sunil Shenoy, SVP of the RISC-V IP BU, outlined how SiFive is leading the semiconductor design revolution: a global presence and reach including 12 offices, 320+ employees and 300+ tape-outs so far; expertise in cloud chip design, RTL, physical design, silicon and design platforms; the nucleus of the key RISC-V inventors; and a deep pool of technical and management talent.

Industry adoption of RISC-V is growing: Western Digital is transitioning 1 billion+ cores per year to RISC-V, NVIDIA is using RISC-V in all future GPUs, India has adopted RISC-V as a national ISA, and US DARPA is mandating RISC-V in recent security proposals. Design wins are occurring in microcontrollers, wearables, networking, communications and storage.

The current challenges in designing minimum viable hardware products include cost, time and expertise. Currently a leading-edge technology node chip development costs $500M+ for 7nm, takes 2 to 4 years, and requires numerous experts in at least 14 disciplines.

To accelerate the innovation cycle and address slow development, rising cost and a growing dependency on many experts, SiFive is reshaping the silicon business by enabling a free and open instruction set architecture, simplifying custom silicon development with templates, and providing easy access through design in the cloud.


SiFive offers its Core IP in product series: the E and S Series (2/3/5/7) of 32-bit and 64-bit embedded cores, some multi-core capable, and the U Series (5/7) of 64-bit, high-performance or multi-core application cores. The graph above summarizes the features by application, performance and cost targets across the core portfolio. Embedded RISC-V development can occur on platform boards such as the HiFive 2 and the higher-performance, Linux-capable HiFive Unleashed with its expansion board.

Taking a page from the software industry, which is built on software stacks, SiFive is enabling a hardware stack where customization is the focus and the lower levels of the stack are largely automated, reducing cost and improving schedules while keeping the design flow consistent.


SiFive is also enabling a DesignShare ecosystem in which third-party IP providers allow early integration and verification of their content in a safe, cloud-based environment, and SiFive manages the NDA/contract and collects NRE/royalties at the appropriate time, simplifying the customer-vendor interface and process.
The figure below shows a sample of the growing list of participating IP providers. SiFive had the world's first cloud tape-out with Microsoft (Freedom U540) and the world's first RISC-V SSD controller (FADU).


Simon Davidmann, Imperas CEO, then followed, addressing how to get the best from RISC-V with application-targeted custom instructions. RISC-V adopters may be developing internal cores, providing IP, creating open-source IP, building a business around enabling the open-source ecosystem, or taking advantage of open-source RTL by incorporating it into their products. Adopters fall into two types: those creating new architectures, for whom 'freedom to innovate' is the key requirement, and those for whom 'free' is the key requirement, supporting research, education, FPGA platforms and open-source communities.

The innovators want to add their own custom extensions and need quality simulation models to debug and analyze the custom instructions and to do RTL design verification. Imperas provides high-performance CPU models, modeling technology with instruction-accurate verification simulators, and tools and debuggers for embedded software development, debug and test. Imperas Fixed Platform Kits (FPK) allow delivery of models to customers and partners for pre-sales evaluation and development.
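To make "instruction-accurate" concrete, here is a minimal sketch (not Imperas code; the `mac` opcode, program format and register-file details are all illustrative assumptions) of how a model of a tiny RISC-V-like core can be extended with one hypothetical custom multiply-accumulate instruction:

```python
REGS = 32  # RISC-V-style register file, x0 hardwired to zero

def step(state, instr):
    """Execute one instruction. Instruction-accurate means the
    architectural state (registers, pc) is correct after every
    instruction, with no modeling of pipelines or cycle timing."""
    op = instr[0]
    regs = state["regs"]
    if op == "addi":                 # addi rd, rs1, imm
        _, rd, rs1, imm = instr
        regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF
    elif op == "mul":                # mul rd, rs1, rs2
        _, rd, rs1, rs2 = instr
        regs[rd] = (regs[rs1] * regs[rs2]) & 0xFFFFFFFF
    elif op == "mac":                # hypothetical custom extension:
        _, rd, rs1, rs2 = instr      # rd += rs1 * rs2
        regs[rd] = (regs[rd] + regs[rs1] * regs[rs2]) & 0xFFFFFFFF
    else:
        raise ValueError(f"illegal instruction: {op}")
    regs[0] = 0                      # writes to x0 are discarded
    state["pc"] += 4
    return state

def run(program):
    state = {"regs": [0] * REGS, "pc": 0}
    for instr in program:
        step(state, instr)
    return state

# Dot product of (2,3) . (4,5) using the custom 'mac' instruction.
prog = [
    ("addi", 1, 0, 2), ("addi", 2, 0, 4),
    ("addi", 3, 0, 3), ("addi", 4, 0, 5),
    ("mac", 5, 1, 2),   # x5 += 2*4
    ("mac", 5, 3, 4),   # x5 += 3*5
]
final = run(prog)
print(final["regs"][5])  # 23
```

The point of the sketch is the workflow, not the toy ISA: once a custom instruction exists in a golden model like this, compiled code can be debugged against it and the RTL implementation can be checked instruction by instruction against the model's architectural state.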


Megan Wachs, SiFive VP of Engineering, discussed the HiFive Freedom RISC-V development platforms. Two SiFive chip platforms, Freedom Everywhere (TSMC 180nm) and Freedom Unleashed (TSMC 28nm), allow customers to customize their SoC by combining pre-integrated configurable SoC architectures, processors, interconnect, off-chip interfaces and on-chip IP from a catalog of SiFive and DesignShare partner IP, along with the customer's own IP. The HiFive1 Arduino-compatible RISC-V development board is sold out, but a refresh is coming soon. The HiFive Unleashed, the first multi-core RISC-V Linux development board, can be ordered on crowdsupply.com for $999.

The Freedom SDK, along with SiFive's platforms, allows configuring the number and type of cores, and includes peripherals and a memory map with per-peripheral configurability. SiFive's open-source repository lets you build your own Linux-capable FPGA image.

Ravi Thummarukudy, Mobiveil CEO, highlighted RISC-V-based platforms for SSD and IoT applications. Mobiveil provides IP and services for high-speed interfaces, switches, bridges and memory controllers. To address data center energy and I/O bottlenecks, Mobiveil has a computational storage solution with filtering, scanning, data compression and many other features, such as reduced I/O contention and improved power (up to 70% power savings), resulting in better network performance and lower cost and latency. Mobiveil presented its configurable NVMe SSD controller platform and reference design and its IoT SoC development platform.

Shafy Eltoukhy, SiFive SVP/GM of the Custom SoC business unit, outlined the SSD custom SoC solution, with CPU features for storage applications: 64-bit real-time addressability for Big Data and real-time applications for Fast Data. AI use cases for SSDs, such as failure prediction, storage tuning and adaptive caching, are examples of what RISC-V can help address.

The physical design platform features a robust silicon engineering methodology with comprehensive checklists throughout the analysis, exploration, implementation and tape-out phases. Custom SoC metrics include 140+ million units shipped at an average of 25 DPPM, and 300+ tape-outs spanning 100 different end applications for 150 unique customers, from tier-1 system companies to startups.

Darrin Jones, Sr. Director of Technology Development for Azure Cloud Services Infrastructure, put the semiconductor market in perspective: it exceeded 1 trillion devices in 2018, with about 6% CAGR since 2000. Azure boasts 2 million miles of intra-data-center fiber, 55 Azure regions, and 100+ data centers with millions of servers deployed. Azure silicon development supports design execution and verification, place and route, and physical verification for advanced nodes such as 7nm, with faster product iterations. SiFive, the Cadence Design Portal and EDA tools, and the TSMC VDE are all deployed on top of the Microsoft Azure cloud infrastructure. Performance, price and agility are combining to offer compelling solutions that are seeing adoption.

End Part I


Surviving in the Age of Digitalization

Surviving in the Age of Digitalization
by Daniel Nenni on 03-18-2019 at 7:00 am

There was an interesting keynote at DVCon last month titled "Thriving in the Age of Digitalization," which introduced the concept of digital twins for design and production. It was presented by Fram Akiki, who is a relative newcomer to EDA but has an interesting history, so I will start there.

Fram and I got started in the semiconductor industry at about the same time (early 1980s). He spent 21 years at IBM, then 10 years at Qualcomm, and is now in EDA, so I asked him about his journey:

After two internships with IBM in Burlington, VT, I started full-time with IBM after college as a mixed-signal IC designer. My first 10 years with IBM were in design and development roles across mixed-signal, logic and microprocessor ICs. I had some very interesting experiences, ranging from some of the industry's first analog CMOS designs for networking and graphics to PowerPC designs for Apple Macs (remember those?).

During my next ten years at IBM, I held management/executive positions within the Custom Logic group of IBM Microelectronics. I had the opportunity to lead some large engagements with graphics companies like nVidia and processors for leading gaming console manufacturers like Microsoft, Sony and Nintendo. These engagements led IBM to bring up a 300mm facility in East Fishkill, NY and launch the “Common Alliance” technology development platform.

The move to a fabless company (and the West Coast) is what I refer to as my “mid-life crisis.” After many years of engagements and travel to California, I decided to make the move to Qualcomm to experience life further down the supply chain. After a few years leading the foundry operations team, I had the opportunity to be one of the executives helping to lead Qualcomm’s diversification from the smartphone to the broader connected market. Two key projects in this diversification were Gobi (the industry’s first globally enabled cellular module) and Windows on Snapdragon (Qualcomm’s ARM-based processors targeted for compute applications).

Joining Siemens (including the Mentor acquisition) was attractive to me for a number of reasons. Throughout my time at IBM and Qualcomm, I had the opportunity to be a heavy user of EDA software, including Mentor products for IC design, simulation and PCB. My experience has spanned a broad portion of the semiconductor and electronics supply chain. The concepts of Ideation, Realization and Utilization that we talk about as part of the digital twin and digital thread really resonate not only with the challenges I have seen, but with the opportunities I see moving forward.


The digital twin concept really hit home with me after the recent Boeing 737 MAX problems. As a pilot myself, I can tell you that flying is 99% boredom and 1% sheer terror. The 1% sheer terror for me is stalling the plane. A stall happens when the wings' angle of attack is too high for the airspeed, the airflow separates, and the plane literally stops flying and drops out of the sky. The response, of course, is to jam the nose down to restore airflow over the wings and get the plane flying again, then resume climbing.
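For the aerodynamically curious, the textbook lift equation (my addition, not part of the keynote) makes that tradeoff concrete:

```latex
% In level flight, lift must balance weight W:
L \;=\; \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_L(\alpha) \;=\; W
\quad\Rightarrow\quad
C_L \;=\; \frac{2W}{\rho\, v^{2}\, S}
```

Here ρ is air density, v airspeed, S wing area, and C_L the lift coefficient, which rises with angle of attack α. As airspeed falls, the required C_L climbs, so α must increase; past the critical angle of attack, the flow separates and lift collapses, which is the stall.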

From what I have read, the Boeing 737 MAX planes falsely indicated stalls, which the planes automatically "corrected" by forcing the nose down. The pilots tried to recover but were defeated by the plane's "failsafe" automation. There have been two similar crashes, so the 737 MAX planes are grounded until a fix can be provided. We will have to wait until the investigation is completed, but over-automation will always be a concern. I have the same concern with fully autonomous cars. There will definitely have to be a fully grown digital twin before I trust my life to one, absolutely.

Bottom line:
With the increased end-product complexity that semiconductors enable, verification will continue to be the biggest challenge, and in some cases it will be a matter of life or death.