
Apple Going (IP) Vertical with no Imagination!

by Eric Esteve on 04-03-2017 at 12:00 pm

What conclusion could we derive from the recent (April 3) PR from Imagination where we learn that the company “has been notified by Apple Inc. (“Apple”), its largest customer, that Apple is of a view that it will no longer use the Group’s intellectual property in its new products in 15 months to two years time, and as such will not be eligible for royalty payments under the current license and royalty agreement.”?



Apple A9 Application Processor

The first and most basic lesson is about good business practices: it's always dangerous for a company to rely on a single customer for more than 50% of its revenues. The royalties paid by Apple to Imagination in fiscal year 2015 (ending in April 2016) were US$89.7 million, out of total revenue of US$177 million. If you consider the GPU IP only (excluding the MIPS CPU royalty revenues of US$34.7 million), Apple's royalty payments represent 78% of the royalty revenue linked to the company's flagship product…
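A quick back-of-the-envelope check of the figures above (a purely illustrative Python sketch; the implied GPU royalty base is derived from the quoted 78% figure, it is not stated directly in the PR):

```python
# Figures from Imagination's FY2015 results, in $ million (as quoted above)
apple_royalties = 89.7   # royalties paid by Apple
total_revenue = 177.0    # Imagination total revenue
mips_royalties = 34.7    # MIPS CPU royalty revenue

# Apple's share of total revenue -- the customer-concentration risk
share_of_total = apple_royalties / total_revenue
print(f"Apple share of total revenue: {share_of_total:.0%}")   # ~51%

# Implied GPU-related royalty base, derived from the 78% figure quoted above
gpu_share = 0.78
implied_gpu_royalty_base = apple_royalties / gpu_share
print(f"Implied GPU royalty revenue: ${implied_gpu_royalty_base:.0f}M")  # ~$115M
```

The first ratio confirms the "more than 50% of revenues" concentration; the second just inverts the 78% figure to recover the GPU royalty base it implies.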

The second lesson is about differentiation: Apple continues to deploy a strategy based on internal development of the essential application processor functions. Since the acquisition of PA Semi in 2008, Apple has developed its own ARM-compliant CPUs (under an architecture license with ARM). By the way, this internal CPU development allowed Apple to be the first to launch a smartphone based on a 64-bit application processor. If internal IP development enables this kind of differentiation, that's a win!
To reinforce this second lesson, take a look at Qualcomm: the fabless chip maker has been, and still is, the worldwide #1 application processor vendor. If you look at the Snapdragon architecture, you will see:

• ARM 64-bit compliant CPU (architecture license)
• Adreno GPU (internal design)
• Hexagon DSP (internal design)

And, if you remember the last time Qualcomm integrated a standard ARM Cortex 64-bit CPU, it was in reaction to Apple's launch of a 64-bit CPU while Qualcomm's internal 64-bit CPU was not yet available, and the result was a Snapdragon product with such high power consumption that it was almost unusable by system manufacturers…

The big lesson here is that if you want to design the most differentiated product, you need to invest enough to develop the main IP (CPU, GPU or DSP) internally. The real lesson is that the leaders, system manufacturer (Apple) and chip maker (Qualcomm) alike, did exactly that, and it paid off!

When I read the PR from Imagination, I am afraid that the next lesson will be "be prepared to pay a fortune to lawyers, for years". In fact, Imagination is preparing to fight on legal grounds:

“Imagination believes that it would be extremely challenging to design a brand new GPU architecture from basics without infringing its intellectual property rights, accordingly Imagination does not accept Apple’s assertions”. This may be a way to put pressure on Apple and push them to pay a license, even if Apple is no longer using Imagination's GPU… I am neither a legal expert nor a GPU architecture expert, so I can't really comment.

Last point (and maybe a business lesson): I can comment on Imagination's strategy over the last 5 or 6 years. When you look at their annual report, as I did as recently as Saturday (a coincidence), you can see that, on top of the GPU and CPU IP businesses, Imagination lists 8 to 10 other businesses or product lines (like Ensigma IP). When you are the #1 GPU IP vendor, as Imagination was, it's really better to keep focus and stay #1, instead of diversifying into products with low synergy, if any.

By Eric Esteve from IPnest


CEO Interview: Sanjay Keswani of Consensia

by Daniel Nenni on 04-03-2017 at 7:00 am

Sanjay Keswani founded Consensia in 2013. He has deep experience in the high-tech industry, guiding some of the world’s high profile technology brands through complex innovation and business transformation projects including companies such as Atmel, KLA-Tencor, Hughes Aircraft, and Motorola Mobility. Consensia customers include ARM, NXP, Qualcomm, Broadcom, Intel, Microsemi, and Cavium.

Who is Consensia? What is your vision for the company?
Consensia is a Dassault Systèmes channel partner, focused on the semiconductor, high-tech (i.e. electronic systems / connected devices) and medical device industries. We provide software-based solutions for efficient product development, in addition to consulting and training services, and technical support for the entire range of Dassault products. These include design productivity, semiconductor IP and product lifecycle management.

With significant M&A activity in the semi and systems markets, we see more vertical integration; so companies that were selling a few IC products to integrators and systems companies are now selling a diverse range of ICs, board based products and complete systems. We take a systems engineering view from IC to complete systems and enable companies to manage the product development process at all levels of their product. Taking a holistic view of product development is very critical to delivering high quality products to market. We are uniquely placed to offer a wide range of solutions to our customers.

Our vision is to enable our customers to serve a variety of market segments by helping them adhere to the processes their customers require. For example, a semiconductor company selling into the automotive market has to adhere to APQP and PPAP requirements along with DFMEAs and PFMEAs. Companies are also subject to increased compliance requirements like RoHS, WEEE, REACH, and Conflict Minerals. Our solutions enable compliance activities on the same innovation platform.

What are your key offerings for the semi industry?
Our solutions connect design engineering, product engineering, and operations engineering in the semiconductor world. We provide solutions for both domains – design and operations.

In the design domain, (Synchronicity) DesignSync is a product that has been around for over 15 years. It's been publicly disparaged by our competitors, but it's interesting that all the 'new' DDM features that have been released by competitors over the last 2-3 years – local read/write caching, virtual workspaces, change-based or manifest version control – have been part of DesignSync for as long as 10 years. There are significant updates to it every year, so this year we are going to be running a series of 'What's New in DesignSync' webinars to show off what it's capable of. DesignSync is the most widely used DDM solution in the world in terms of number of users, and Dassault Systèmes continues to invest heavily in new capabilities based on customer feedback.

Pinpoint is a unique analytics solution that enables ASIC/SOC and FPGA design teams to get a more accurate picture of their design progress. For example, if you have 100 RTL blocks and you've frozen 90, it doesn't necessarily mean your design is 90% complete. Pinpoint brings information from multiple data sources – STA, power, physical verification – into a single dashboard so that you can see how many failing timing paths you have at a certain clock speed, how many of a certain type of DRC you have, and so on. That gives you a much more realistic real-time and historic trend-line view of progress, so you know exactly how mature your design is and when you are likely to reach design closure.
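To make the "frozen does not mean closed" point concrete, here is a tiny hypothetical sketch in Python (the block names and metrics are invented for illustration; this is not Pinpoint's actual data model):

```python
# Hypothetical per-block metrics pulled from separate STA and physical-
# verification runs, rolled up into one progress view (illustrative only).
blocks = [
    {"name": "cpu_core", "rtl_frozen": True,  "failing_paths": 12, "drc_violations": 3},
    {"name": "ddr_phy",  "rtl_frozen": True,  "failing_paths": 0,  "drc_violations": 0},
    {"name": "usb_ctrl", "rtl_frozen": False, "failing_paths": 47, "drc_violations": 21},
]

frozen = sum(b["rtl_frozen"] for b in blocks)
print(f"RTL frozen: {frozen}/{len(blocks)} blocks")

# A frozen block is not necessarily closed: check the cross-source metrics too.
closed = [b["name"] for b in blocks
          if b["rtl_frozen"] and b["failing_paths"] == 0 and b["drc_violations"] == 0]
print(f"Blocks actually at closure: {closed}")  # only ddr_phy
```

Here two of three blocks are frozen, but only one is actually at closure once the timing and DRC data are folded in, which is precisely the gap such a dashboard is meant to expose.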

Issue and Defect Management is part of our IP Management offering – something that our semi customers use to help design teams track, trace and address post-manufacturing issues that arise in the field. This is critical for semiconductor companies serving the automotive market, where traceability is mandatory.

On the operations side, we have the only 'out of the box' solution for managing the configuration of a chip – from wafer to finished goods. Our solution has the ability to capture product and customer requirements and create only valid and available product configurations. As a chip moves through its process lifecycle (Fab -> Sort -> Assembly -> Package -> Test -> Mark -> Finished Good), the IC BOM grows, pulling together parts and information from many different processes and their locations. We manage the entire configuration along with product costing.
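As an illustration of how such an IC BOM accumulates through the lifecycle stages named above (a hypothetical Python sketch; the per-stage parts are invented and this is not Consensia's actual data model):

```python
# Process lifecycle stages as listed above
STAGES = ["Fab", "Sort", "Assembly", "Package", "Test", "Mark", "Finished Good"]

# Hypothetical parts/information contributed at each stage (invented examples)
stage_parts = {
    "Fab": ["wafer"],
    "Sort": ["probe card data"],
    "Assembly": ["die attach", "bond wires"],
    "Package": ["substrate", "mold compound"],
    "Test": ["test program rev"],
    "Mark": ["marking spec"],
    "Finished Good": ["tray/reel"],
}

# The BOM grows monotonically as the chip moves down the line
bom = []
for stage in STAGES:
    bom.extend(stage_parts[stage])
    print(f"{stage:>13}: BOM now has {len(bom)} items")
```

The point is simply that the finished-good configuration is the accumulation of everything contributed along the way, which is why it has to be managed as one configuration rather than per-stage records.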

We offer a comprehensive set of quality processes like CAPA, Non-Conformance, Audits which are of value to an operations executive. They use the same innovation platform as the other processes we enable.

What is your differentiation in terms of solution offerings and value proposition?
The difference between us and our competitors is that (i) we take a platform based approach to IP Management and (ii) we’ve been in the design data management business for a long time (since Synchronicity).

Taking a platform-based approach to solving the complete product lifecycle challenges of our semiconductor customers means we work with them on how best to optimize design productivity, configuration (design data and manufacturing configuration) management, and management of internal and external IP, and on how to increase the asset value of the company by using best practices to drive efficiencies in their product development processes.

Several of our competitors focus on DDM – and some have recently (last 12 months) added IP Management. Dassault Systemes and Consensia have been in the IP management business for many years. We are not a “one trick” pony. While DDM is a critical part of design infrastructure, there are many other processes which are of strategic value which our competition does not address.

As far as IP Management is concerned, every customer manages their IP differently; they have different business rules and operating models – there isn't a one-size-fits-all out of the box. So we take a consultative approach to designing a system that fits their MO AND allows them to understand and monetize their IP assets by reducing NRE, increasing re-use and maximizing their use of externally licensed IP.

The design data management solution is just one component in a bigger IP/SOC management picture that involves IP security, compliance (ITAR, 3rd-party IP tracking and tracing), role-based access, issue and defect management – and many more components that are active long after an SOC has been released.

We come at it from a ‘top down’, pragmatic approach with IP Management – so DesignSync becomes the ‘hub’ that manages other DDM repositories, like SVN or Perforce, which become just another repository from which to pull or publish BOMs.

We engage with companies at an early stage and they can grow with Consensia’s support and solutions to a billion dollar revenue company. Our solutions are modular; we can start with virtually any process. Our engagement model is very flexible and our objective is to have a close relationship with our customers to enable their profitable growth.

Innovation and collaboration are overused terms; how do you specifically address those areas?
All of our solutions are designed for collaboration – we bring innovation at multiple levels. PLM solutions address collaboration at the project-governance level for global teams, whereas DesignSync enables individual IC design team members to collaborate by automating the complete lifecycle of an SOC, from management of individual IP components to the automated SOC release management process.

Also read: Webinar: Top Five Challenges Preventing Design Closure!


Everything a Designer Wants to Ask About FDSOI

by Adele Hars on 04-01-2017 at 12:00 pm

So you've got questions about FD-SOI? For chip designers in Silicon Valley, there's a great opportunity to get answers from some of the world's leading design experts. It's coming up fast: on April 14th, the SOI Consortium is organizing a full day of FDSOI tutorials for chip designers. Bear in mind that it's not a sales day. It's a learning day.

The courses will be given by top professors at top universities (including UC Berkeley, Stanford, U. Toronto and Lund). They’ve all also spent years working closely with industry, so they really understand the challenges designers face. They’ve helped design real chips, and have stories to tell.

Each one will address FD-SOI specific design techniques for: analog and RF integration (millimeter wave to high-speed wireline), ultra-low-power memories and microprocessor architecture, and finally energy-efficient digital and analog-mixed signal processing designs.

The tutorial day will be held in San Jose, beginning at 8am and running until 3pm. Each professor’s course will last one hour. Click here for registration information.

(BTW, the Tutorial Day comes the day after the annual SOI Silicon Valley Symposium in Santa Clara, which will be held on April 13th.)

Here's a quick preview of what each professor will be addressing during the FDSOI Tutorial Day.

FDSOI Short Overview and Advantages for Analog, RF and mmW Design
– Andreia Cathelin, Fellow, STMicroelectronics, France

Professor Cathelin (yes, in this case she’s an ST Fellow, but she’s also a professor) has deep experience designing ground-breaking chips.


Summary slide from Professor Andreia Cathelin's course at the upcoming FDSOI Tutorial (Courtesy: SOI Consortium and ST)


She'll start with a short overview of basic FDSOI design techniques and models, as well as the major analog and RF technology features of 28nm FDSOI technology. Then the focus shifts to the benefits of FD-SOI technology for analog/RF and millimeter-wave circuits, considering the full advantages of wide voltage-range tuning through body biasing. For each category of circuits (analog/RF and mmW), she'll show concrete design examples such as an analog low-pass filter and a 60GHz power amplifier (an FDSOI-aware evolution of the one featured on the cover of Sedra/Smith's Microelectronic Circuits, 7th edition, which is probably on your bookshelf). These will highlight the main design features specific to FD-SOI and offer silicon proof of the resulting performance.


Unique Circuit Topologies and Back-gate Biasing Scheme for RF, Millimeter Wave and Broadband Circuit Design in FDSOI Technologies
– Sorin Voinigescu, Professor, University of Toronto, Canada

Professor Voinigescu is particularly well-known for his work in millimeter wave and high-speed wireline design and modeling (which are central to IoT and 5G). He’s worked with SOI-based technologies for over a decade. His course will cover how to efficiently use key features of FD-SOI CMOS technology in RF, mmW and broadband fiber-optic SoCs. He’ll first give an overview at the transistor level, presenting the impact of the back-gate bias on the measured I-V, transconductance, fT and fMAX characteristics. The maximum available power gain (MAG) of FDSOI MOSFETs will be compared with planar bulk CMOS and SiGe BiCMOS transistors through measurements up to 325 GHz.

Summary slide from Professor Sorin Voinigescu’s course at the upcoming FDSOI Tutorial (Courtesy: SOI Consortium and U. Toronto)


Next, he’ll provide design examples including LNA, mixer, switches, CML logic and PA circuit topologies and layouts that make efficient use of the back-gate bias to overcome the limitations associated with the low breakdown voltage of sub-28nm CMOS technologies. Finally, he’ll look at a 60Gb/s large swing driver in 28nm FDSOI CMOS for a large extinction-ratio 44Gb/s SiPh MZM 3D-integrated module, as a practical demonstration of the unique capabilities of FDSOI technologies that cannot be realized in FinFET or planar bulk CMOS.


Design Strategies for ULV Memories in 28nm FD-SOI
– Joachim Rodrigues, Professor, Lund University, Sweden

Having started his career as a digital ASIC process lead in the mobile group at Ericsson, Professor Rodrigues has a deep understanding of ultra-low power requirements. His tutorial will examine two different design strategies for ultra-low voltage (ULV) memories in 28nm FD-SOI.

For small storage capacities (below 4kb), he'll cover the design of standard-cell-based memories (SCMs), which are built around a custom latch. Trade-offs in area cost, leakage power, access time, and access energy will be examined using different read-logic styles. He'll show how the full-custom latch is seamlessly integrated into an RTL-to-GDSII design flow.


Summary slide from Professor Joachim Rodrigues’ course at the upcoming FDSOI Tutorial (Courtesy: SOI Consortium and Lund U.)


Next, he'll cover the characteristics of a 28nm FD-SOI 128 kb ULV SRAM based on a 7T bitcell with a single bitline. He'll explain how the overall energy efficiency is enhanced by optimizations at all abstraction levels, from bitcell to macro integration. Performance and reliability degraded by ULV operation are recovered by selectively overdriving the bitline and wordline with a new single-cycle charge pump. A dedicated sense-amplifier-less read architecture with a new address-decoding scheme delivers 90MHz read speed at 300mV, dissipating 8.4 fJ/bit-access. All performance data is silicon-proven.

Energy-Efficient Processors in 28nm FDSOI
– Bora Nikolic, Professor, UC Berkeley, USA

Considered an "awesome" teacher by his students at Berkeley, Professor Nikolic has research activities spanning digital, analog and RF integrated circuit design as well as communications and signal-processing systems. An expert in body biasing, he's now working on his 8th generation of energy-efficient SOCs. During the FDSOI tutorial, he'll cover techniques specific to FDSOI design in detail, and present the design of a series of energy-efficient microprocessors. They are based on the open and free Berkeley RISC-V architecture and implement several techniques for operation over a very wide voltage range in 28nm FDSOI. To enable agile dynamic voltage and frequency scaling with high energy efficiency, the designs feature an integrated switched-capacitor DC-DC converter. A custom-designed SRAM-based cache operates over a wide 0.45-1V supply range. Techniques that enable low-voltage SRAM operation include 8T cells, assist techniques and differential read.



Summary slide from Professor Bora Nikolic’s course at the upcoming FDSOI Tutorial (Courtesy: SOI Consortium and UC Berkeley)


Pushing the Envelope in Mixed-Signal Design Using FD-SOI
– Boris Murmann, Professor, Stanford University, USA

If you’ve ever attended a talk by Professor Murmann, you know that he’s a really compelling speaker. His research interests are in the area of mixed-signal integrated circuit design, with special emphasis on data converters and sensor interfaces. In this course, he’ll look at how FD-SOI technology blends high integration density with outstanding analog device performance. In same-generation comparisons with bulk, he’ll review the specific advantages that FD-SOI brings to the design of mixed-signal blocks such as data converters and switched-capacitor blocks. Following the review of such general benchmarking data, he’ll show concrete design examples including an ultrasound interface circuit, a mixed-signal compute block, and a mixer-first RF front-end.

Summary slide from Professor Boris Murmann’s course at the upcoming FDSOI Tutorial (Courtesy: SOI Consortium and Stanford U.)



Key Info About the FD-SOI Tutorial Day

Event: Designing with FD-SOI Technologies
Where: Samsung Semiconductor’s Auditorium “Palace”, San Jose, CA
When: April 14th, 2017, 8am to 3pm
Cost: $475
Organizer: SOI Industry Consortium
Pre-registration required – click here to sign up on the SOI Consortium website.


Caution: Reset Domains Crossing

by Bernard Murphy on 04-01-2017 at 7:00 am

Because you can never have too much to worry about in verification, reset domain crossings (RDCs) are another hazard lying in wait to derail your design. Which hardly seems fair. We like to think of resets as dependable anchors to get us back on track when all else fails, but it seems their dependability is not absolute, especially in modern designs.

We all know about clock domain crossings (CDCs), a problem that has been amplified by the integration onto SoCs of multiple interface standards along with high-performance compute engines, each needing to support different clock speeds. Signals passing between different clock domains on these devices are at risk of lock-up through metastability and/or loss of data. Finding and correcting these potential problems takes careful analysis.

Reset domains have also been with us for a while, but have especially proliferated in SoC design and design for low power where, in addition to standard blanket resets like POR and software reset, we now find an abundance of local reset options, controlled at the IP or functional-domain level. In the spirit of providing maximum controllability over power saving, IPs may have separate reset inputs for hard reset, soft reset, reset preserving retention registers, and other options. At the system level, application of reset has become more complex, requiring that application or release of reset be sequenced between functions; release on many blocks must wait at least until the controlling CPU has booted, to ensure that startup from reset in those downstream blocks is well controlled.

But this complexity is not the root-cause of RDCs, which start with asynchronous resets. The complexity, along with realities of multi-sourced IP design, simply makes RDCs harder to anticipate and isolate. An “ideal” way to fix the problem might be to forbid use of asynchronous reset. A lot has been written on the relative merits of synchronous versus asynchronous reset. Without getting into that debate, it is enough to observe that any place you need to ensure a reset where you may not yet have a clock (e.g. in the presence of gated clocks or switchable power domains) requires an asynchronous control to ensure the reset is applied. Then there’s the multi-sourced IP; you may have dominion over reset practices in your own IP, but you can’t control how other IP suppliers choose to reset. So RDCs can’t be banished – you have to learn to deal with them.

There are several different ways in which an RDC hazard can be created. One simple case that can occur is a path crossing between two flops, quite possibly using the same clock, where the first flop is asynchronously gated by RST_A and the second is asynchronously gated by RST_B. If RST_A and RST_B are not related, this becomes an asynchronous crossing and there is risk the second flop can become metastable or may sample incorrect data.

Another case has a reset synchronized in clock domain 1 but used asynchronously in clock domain 2. Because the reset is not synchronized to the second clock domain, again there is a metastability and/or incorrect data sampling hazard.

Even if you carefully generate reset signals synchronous to the domain clock and you’re not crossing between clock domains, you aren’t necessarily off the hook. In the example above, where both domains even use the same clock, there is still an RDC hazard because the path marked in red is not timed in STA and crosses between two potentially asynchronous reset domains.

Problems of this nature can be particularly dangerous for configuration registers, which are often exempt from warm-resets to speed-up recovery after reset. If an upstream warm-reset is applied while the configuration register is being written, an async crossing can corrupt the contents. A similar problem can occur in drivers for memory controller logic. If signals like chip select and write-enable can be asynchronously reset, again you may have a hazard.

There are plenty of other examples, distinguished more by the varieties of havoc they can wreak on the correct operation of your design than by differences in root cause. Correction is often not difficult, through more careful selection of resets and use of reset synchronizers. The real challenge is in finding potential hazards scattered across large SoC designs. That's where a tool like Meridian RDC from Real Intent can help.
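To illustrate the standard fix, here is a minimal behavioural sketch of a two-flop reset synchronizer, written in Python purely for readability (a real design would of course be RTL, and true asynchronous assertion does not wait for a clock edge as this simplified model does): reset asserts immediately, but its release ripples through two destination-clock flops, so downstream logic never sees an asynchronous removal edge.

```python
# Behavioural model of a two-flop reset-release synchronizer (illustrative).
class ResetSynchronizer:
    def __init__(self):
        self.ff1 = 0
        self.ff2 = 0  # synchronized reset_n seen by downstream logic

    def clock_edge(self, async_reset_n):
        if async_reset_n == 0:
            # Assertion: both stages clear (modelled at the edge for simplicity)
            self.ff1 = self.ff2 = 0
        else:
            # Release: a '1' ripples through two flops, giving any
            # metastability on the first stage a full cycle to resolve
            self.ff2 = self.ff1
            self.ff1 = 1
        return self.ff2

sync = ResetSynchronizer()
outs = [sync.clock_edge(rst_n) for rst_n in [0, 1, 1, 1]]
print(outs)  # reset released only after two clean clock edges: [0, 0, 1, 1]
```

The two-cycle release delay is the whole point: every downstream flop in that domain sees reset removed synchronously, which is what breaks the asynchronous crossing between reset domains.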

You can learn more about finding and correcting RDC hazards by registering for the Real Intent white paper.

More articles by Bernard…


Lowering Costs for Custom SoC Development – ARM and Tanner EDA

by Daniel Payne on 03-31-2017 at 12:00 pm

Cost is a major barrier when an electronic design company starts to consider developing a custom SoC for a particular market segment. But what if there was a way to lower the development cost, or even get to an SoC proof of concept for no cost except of course for your engineering expenses? That value proposition caught my attention immediately so I attended a webinar hosted by ARM (part of Softbank) and Tanner EDA (part of Mentor Graphics, soon to be part of Siemens) on March 28th to see if there was any catch involved.

Phil Burr from ARM was up first and he was able to categorize who is designing new SoCs today into three segments:


• Sensor and mixed-signal companies designing IoT
• Start-ups wanting to innovate
• OEMs wanting to reduce cost, power and be unique

ARM sees seven industries driving new SoC development:

• Retail
• Smart lighting
• Medical
• Industrial
• Home
• Agriculture
• Building automation

Companies that choose the SoC route can expect benefits like an 85% reduction in PCB area compared to component-based designs, a 90% reduction in their BOM, differentiation from competitors, better protection of their IP because the SoC is not built from easily bought catalog components, and finally an easier way to control the supply chain. The folks at ARM have responded to these market conditions by creating something called Cortex-M0 DesignStart, which lets you integrate your own semiconductor IP with the popular 32-bit Cortex-M0 processor, along with a system design kit. If all you need to do is show proof of concept, then you can do a virtual design for free. Two other options from ARM are using their FPGA prototype system for $995, or buying a license for the Cortex-M0 along with the SDK, Keil MDK and support for $40K:

The Cortex-M0 is the most compact 32-bit processor offered by ARM, which also means lower costs to designers. Code density with the Cortex-M0 is actually better than competitors' 8- and 16-bit offerings as measured with CoreMark code, giving you lower power consumption and smaller flash size:

So with DesignStart you get the processor IP from ARM, evaluation EDA tools from Mentor Graphics and, if needed, design partner services that are recommended and vetted by ARM (Open-Silicon, Sondrel, SOC Solutions, HDL Design House, eInfochips, ADT). For process nodes you can choose to fab your ARM-based SOC at foundries across a wide range, from 350nm all the way down to 28nm. Using shared wafers you can get test chips produced in 180nm for as little as $16K, so think affordable.

Related blog – IoT Device Designers Get Help from ARMv8-M Cores

Next up in the webinar was Jeff Miller from Tanner EDA, and he shared how the company started way back in 1988 and has customers doing both AMS and MEMS designs. Mentor acquired Tanner in 2015, and it is one of the few EDA mergers still functioning quite happily. Customers of Tanner EDA include JPL (image sensors on the Mars rovers), FLIR, Knowles, Proteus, NeuroPace and Second Sight. Jeff talked about some characteristics of IoT designs: high volume, low cost, small physical size, and low power.

One IoT design example shared was from Swindon Silicon Systems, with their Tire Pressure Measurement System (TPMS) that has shipped hundreds of millions of units. They used two dies combined into a single package, which is then placed inside a tire to measure pressure and send that data by RF to the car. The pressure sensor is designed as a MEMS and occupies one chip; the second chip does the RF transmission, analog, ADC, and digital control:

Typical building blocks for an IoT device are shown below:

Mentor Graphics has three categories of software to help in the design of an IoT device:

• Tanner EDA
  • Analog/Mixed-Signal
  • MEMS
  • Digital
  • RF
• Nucleus
  • Embedded Software
• PADS
  • Printed circuit board

Related blog – Managing the IoT

Focusing on the Tanner EDA offering, there are several tools for both the front end and back end of an analog mixed-signal design:

For the grand finale, Jeff demonstrated how an ADC block found in most IoT devices could be entered using the schematic capture tool (S-Edit) and simulated with a circuit simulator (T-Spice); he then connected the custom block to the AMBA peripheral bus, wrote some test firmware in C using the Keil IDE, and finally simulated the ARM Cortex-M0 running the test firmware with the custom ADC block. Here's a block diagram of the demo design:

Analog waveform results show up in the Tanner EDA viewer, while digital results show up in the ModelSim tool:

Summary
Instead of starting from scratch to create your next IoT design project that needs a processor and AMS blocks, you can consider using the approach offered by ARM and Tanner EDA to quickly get to your IoT proof of concept at very low to no extra cost, outside of the engineering time. Enjoy getting a head start, because Tanner EDA includes all of the demo files shown in the webinar. If you are short on engineering experience, then talk to one of the ARM-recommended design partners to get started.

View the archived webinar online today.


SNUG and Robots

by Bernard Murphy on 03-31-2017 at 7:00 am

    I got an invite to the SNUG (Synopsys User Group meeting) keynotes this year. I could only make it to the second keynote but what a treat that was. The speaker was Dr. Peter Stone, professor and chair of CS at UT Austin. He also chaired the inaugural panel for the Stanford 100-year study on AI. This is a guy who knows more about AI than most of us will ever understand. In addition to that, he’s a very entertaining speaker who knows both how to do and how to communicate very relatable research.


    His team’s research is built around a fascinating problem which I expect will generate new discoveries and new directions for research for many years to come. Officially this is work in his Learning Agents Research Group (LARG), slightly less officially it is around learning multi-agent reasoning for autonomous robots but mostly (in this talk at least) it’s about building robot teams to play soccer.

    The immediate appeal of course is simply watching and speculating on how the robots operate. These little guys play rather slowly with not much in the way of nail-biting moments but it’s fascinating to watch how they do it, especially in cooperative behavior between robots on the same team and competitive behavior between teams. When one robot gets the ball, forwards move downfield so they can take a pass. Meanwhile competitors move towards the player with the ball or move back upfield to intercept a pass or to block shots. The research behind this is so rich and varied that the speaker said he could easily spend an hour just presenting on any one aspect of what it takes to make this happen. I’m going to touch briefly on a few things he discussed that should strike you immediately.

    When we think about AI, we generally think about a single intelligent actor performing a task – recognition, playing Jeopardy or Go or driving a car. The intelligence needs to be able to adapt in at least some of these cases but there is little/no need to cooperate, except to avoid problems. But robot soccer requires multi-agent reasoning. There are multiple actors who must collaboratively work to meet a goal. We talk about cars doing something similar someday, though what I have seen recently still has each car following its own goal with adjustments to avoid collision (multi-agent reasoning would focus on team goals like congestion management).

    You might think this could be handled by individual robot intelligence handling local needs plus a master intelligence handling team strategy, or perhaps through collaborative learning. From the outset, master command-center options were disallowed, and team learning was nixed by the drop-in team challenge, which asserts that any team player can be replaced with a new member with whom the players have not previously worked. Each team player must therefore be able to assess during the game what other team members can and cannot do, and should be able to strategize action around that knowledge. Obviously, they also need to be able to adapt as circumstances change. The “master” strategy becomes a collective/emergent plan rather than a supervising intelligence.

    A second consideration is managing the time dimension. In the “traditional” AI examples above, intelligence is applied to analysis of a static (or mostly static) context. There can be a sequence of static contexts, as in board games, but each move is analyzed independently. Autonomous cars (as we hear about them today) may support a little more temporal reasoning, but perhaps only enough to adjust corrective action in a limited time window. Soccer robots, by contrast, must deal with a constantly changing analog state space: they must recognize objects in real time, track multiple cooperating and opposing players and a ball somewhere on the field, and cooperatively reason about how to advance a future objective – scoring a goal while defending their own.

    A third consideration is how these agents learn. Again, the “traditional” approach, based on massive and carefully labeled example databases, is impractical for soccer robots. The LARG group uses guided self-learning through a method called reinforcement learning (RL). Here a robot makes decisions starting from some policy, takes actions and is rewarded (possibly through human feedback, which was cited as a way to accelerate learning) based on the result of the action. This is the reinforcement. Over time, policies improve through closed-loop optimization. An important capability here is understanding sequences of actions, which can be formalized as Markov decision processes with probabilistic behaviors.
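The soccer setting is far beyond a blog snippet, but the core reinforcement-learning loop described above can be sketched on a toy Markov decision process. Everything below – the one-dimensional "field", the reward, the hyperparameters – is a made-up illustration of tabular Q-learning, not the LARG setup.

```python
import random

# Toy MDP: a robot on a 1-D "field" of 5 cells, trying to reach the goal
# at cell 4. Actions: 0 = step left, 1 = step right. Reward (+1) arrives
# only on reaching the goal -- this is the reinforcement signal.
N_STATES, GOAL = 5, 4
ACTIONS = (0, 1)

def step(state, action):
    """Environment transition: deterministic move, clipped to the field."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(q_s, rng):
    """Pick the best-valued action, breaking ties randomly."""
    best = max(q_s)
    return rng.choice([a for a in ACTIONS if q_s[a] == best])

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: improve a policy from rewarded experience."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the current policy, sometimes explore
            a = rng.choice(ACTIONS) if rng.random() < epsilon else greedy(q[s], rng)
            s2, r, done = step(s, a)
            # move Q(s,a) toward reward + discounted best future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# After training, the policy in every pre-goal state should be "step right"
policy = [greedy(q[s], random.Random(1)) for s in range(GOAL)]
```

The closed loop is visible in the update line: the reward feeds back into the value estimates, which in turn shape the next episode's decisions.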

    One other component caught my attention. Team soccer is a complex activity; you can’t train it as a single task. In fact, even getting a robot to learn to walk stably is a complex task. So they break complex tasks down into simpler ones in what they call layered learning. One example he gave was walking quickly to the ball and then doing something with the ball. Apparently you can train walking quickly as a single task, but then the robot falls over when it reaches the ball. They had to break the larger task down into 2-3 layers to manage it effectively.

    I should in fairness add that this is not all about teaching robots to play soccer. What the speaker and his team learn here they are applying to practical problems as varied as collaborative traffic control at intersections, building-wide intelligence and trading agents. But what a great way to stimulate that research 😉

    There is much more information you can find HERE and HERE. HERE is one of many games they have played and HERE is a fun blooper reel. And there are many more video examples!


    Cadence Expands Integrated Photonics Beachhead

    by Mitch Heins on 03-30-2017 at 4:00 pm

    In November of 2016, I made a bold statement that October 20, 2016 would stand as a watershed day in integrated photonics. The reason for this claim was that GLOBALFOUNDRIES proclaimed that integrated photonics was real and here to stay. The same week I wrote an article about Cadence Design Systems securing a photonic beachhead when they, Lumerical Solutions and PhoeniX Software held their first joint training class for over 70 prospective customers on a new fabless integrated electronic-photonic design automation (EPDA) flow. It’s now five months later, and I am more convinced than ever of my statement about that October. Several things have happened in those short five months.

    First, I’ve had the chance to be in a lot of conversations with potential users of the triumvirate EPDA flow. Interest has been high from the fabless community, adding weight to GLOBALFOUNDRIES’ proclamation about integrated photonics. So far, feedback from users has been very consistent. They are looking for a production-worthy design flow that promises to bring a much-needed formalism to electronic-photonic circuit design. The integration of photonic and electronic simulations along with the formalism of a schematic-driven layout flow seems to have answered a need that has heretofore been missing in photonic design. The fact that users are looking for such formalism says much to me about their seriousness in making production electronic-photonic designs.

    Second, users quickly noted the improved productivity they can get in the layout phase of the design when using Cadence Virtuoso in combination with the PhoeniX Software tools. In many ways, this even surpasses the productivity boost users saw when adopting automation for analog IC layout as photonic curvilinear shape generation can be very time-consuming if not automated. The joint EPDA flow goes a long way towards improving the engineer’s life when it comes to doing photonic layouts.

    Third, Cadence, PhoeniX and Lumerical are now planning to expand the flow into the 2.5D, 3D and SiP (system-in-package) domain. This is a major and important step, as most high-volume applications will want to take advantage of silicon-photonics manufacturing cost advantages. These solutions will require photonic light sources, amplification and detection, and that means working with III-V materials like InP or InGaAs in combination with the silicon photonic IC (PIC). Si-based PICs will also need to be tightly tied to both digital and analog electrical ICs. While there are many ways to put these solutions together, one of the most obvious and near-term is a SiP using an interposer capable of both electronic and photonic die-to-die and die-to-package connections.

    This is a major undertaking for a fabless company as it requires a significant investment in design tools and very good relationships with ecosystem partners. Consider, however, that except for the photonics part, Cadence has had a flow for some time now to enable heterogeneous electronic SiP designs, and they have been working with several partners in this area. With the new EPDA flow, much of the work and associated risk for a heterogeneous electronic-photonic EPDA-SiP design flow has already been addressed. Simulation of electronic SiP designs is already handled in Cadence’s Virtuoso ADE environment and likewise, with the integration of Lumerical’s INTERCONNECT circuit simulator, so too is simulation of the EPDA-SiP design. All the necessary plumbing exists. Similarly, layout of an electrical-optical interposer with waveguides can also be done in Virtuoso using the combination of Virtuoso and PhoeniX Software’s OptoDesigner.

    As a quick review, here is the current Cadence portfolio for 2.5D, 3D and SiP design.

    • OrbitIO Interconnect Designer: Used for die-to-die and die-to-package connectivity planning.
    • Genus Synthesis Solution and Modus Test Solution: Used for generating design-for-test (DFT) logic for the electrical portions of the SiP.
    • Innovus Implementation System and Physical Verification System (PVS): Used for digital design implementation and verification. Innovus has a plugin that provides for through-silicon-via (TSV) and micro-bump placement, while PVS can do DRC and LVS checking across multiple dice in the package.
    • Virtuoso ADE and Spectre: Used for simulation of electronic and photonic systems in combination with Lumerical’s INTERCONNECT photonic circuit simulator.
    • Virtuoso: Used for layout of analog and photonic designs in combination with PhoeniX Software’s OptoDesigner layout tool. Virtuoso also supports TSVs and the mapping of memory die bumps to logic die.
    • Cadence SiP Layout: Enables 3D displays of both silicon and package layers for multi-die integration.
    • Quantus QRC Extraction Solution: Enables parasitic extraction of interposer metal traces as well as TSVs and micro-bumps.
    • Tempus Timing Signoff: Enables static timing analysis and signal integrity checks across multiple die and power domains.
    • Voltus IC Power Integrity Solution: Enables multi-die, 2.5D, 3D and SiP power analysis.
    • Sigrity PowerDC: Voltus can forward information to Sigrity, which can then determine a temperature distribution map based on power consumption data. This power map can then be fed back to Voltus for temperature-dependent IR drop analysis.

    So, what is still missing for the EPDA-SiP flow? Board-based photonics is well on its way to being part of the overall solution. It would also make sense to enable designers to account for heating effects on temperature-sensitive PIC devices caused by hot electrical ICs in a SiP. One last big issue that needs to be tackled, by not just Cadence but the entire industry, is what design-for-test looks like in photonics and heterogeneous electronic-photonic SiPs.

    Nonetheless, I repeat my opinion that October 20, 2016 was indeed a watershed event for integrated photonics as it saw the launching of a very comprehensive fabless EPDA flow that will likely be host to many heterogeneous electronic-photonic designs to come. And, from what we are seeing now, it appears that the flow will become even more comprehensive, allowing Cadence to expand their integrated photonic beachhead.



    Analyzing All of those IC Parasitic Extraction Results

    by Daniel Payne on 03-30-2017 at 12:00 pm

    Back at DAC in 2011 I first started to hear about an EDA company named edXact that specialized in reducing and analyzing IC parasitic extraction results. Silvaco has since acquired edXact, so I wanted to get an update on what is new with their EDA tools, which help you analyze and manage the massive amount of extracted R, L, C and even K values that all impact IC design performance, timing and power. I attended their webinar last week, where Jean-Pierre Goujon presented.

    Just take a quick look at the 3D interconnect in the diagram below: with each smaller technology node, interconnect delays rise, and the worst-case interconnect delay due to crosstalk rises dramatically:

    Related blog – RLCK reduction tool at DAC

    The basic idea is that if you can find the sources of these delays, then you can do something about them through cell placement, block placement, routing or transistor sizing. At 60nm and smaller technology nodes, the interconnect in an IC causes more delay than the gates from your cell library. By analyzing your extraction results prior to SPICE simulation, you can actually reduce the number of SPICE runs required.
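To see why wire delay overtakes gate delay as geometries shrink, a back-of-the-envelope Elmore delay calculation for a uniform RC wire is enough: delay grows roughly as r·c·L²/2, so thinner (more resistive) wires hurt quadratically with length. The per-micron values below are hypothetical illustrations, not data for any specific node.

```python
# Elmore delay of a distributed RC wire, modeled as a ladder of RC segments.
# Illustrative sketch only; r/c values below are assumptions, not node data.

def elmore_delay_uniform_wire(r_per_um, c_per_um, length_um, segments=100):
    """Elmore delay = sum over segments of (upstream resistance * segment cap).

    For a uniform wire this converges to r * c * L^2 / 2 as segments grow.
    """
    dr = r_per_um * length_um / segments   # resistance of one segment (ohms)
    dc = c_per_um * length_um / segments   # capacitance of one segment (farads)
    delay, r_upstream = 0.0, 0.0
    for _ in range(segments):
        r_upstream += dr                   # resistance seen from the driver
        delay += r_upstream * dc
    return delay                           # seconds

# Example: 1 mm wire at r = 1 ohm/um, c = 0.2 fF/um (hypothetical numbers)
d = elmore_delay_uniform_wire(1.0, 0.2e-15, 1000.0)   # ~100 ps
```

Doubling the wire length quadruples this delay, which is why finding and shortening the worst interconnect paths pays off so quickly.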

    Let’s take a look at the IC design flow for analyzing two slightly different extracted netlists:

    The Viso tool shown in the lower right corner provides design analysis and exploration of parasitics through the following features:

    • Viewing resistance and capacitance values, RC time delays
    • Analysis in numerical, tabular views
    • Graphical views in both 2D and 3D
    • Detection of cut nets, dangling nets, sanity checks on DSPF/SPEF/CalibreView

    Related blog – CEO Interview: David Dutton of Silvaco

    In the middle of the flow is the Belledonne tool, useful for:

    • Comparing different extracted netlists (DSPF, SPEF, CalibreView)
    • Comparing statistics of: resistance, capacitance, static delays
    • Batch or interactive analysis
    • PDK optimization and validation

    At the top middle is Brenner, an EDA tool that matches pin and net names. For example, if one netlist had a MOS transistor with four fingers named ABCD, it could be matched with another netlist where the same four fingers appear in a different order but are still equivalent.
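The essence of that kind of order-insensitive matching can be sketched in a few lines: reduce each device to a canonical form in which finger order no longer matters, then compare canonical forms. The data model below is a made-up illustration, not Silvaco's actual netlist format.

```python
# Sketch of order-insensitive device matching: two transistors with the same
# fingers listed in different orders should compare equal.
# The tuple-based data model here is hypothetical, for illustration only.

def canonical_device(name, fingers):
    """Canonical form: device name plus fingers as an order-free, sorted tuple."""
    return (name, tuple(sorted(fingers)))

netlist_a = [("M1", ["A", "B", "C", "D"])]
netlist_b = [("M1", ["D", "C", "B", "A"])]   # same fingers, different order

match = {canonical_device(n, f) for n, f in netlist_a} == \
        {canonical_device(n, f) for n, f in netlist_b}
```

Once both netlists are in canonical form, matching reduces to a set comparison, which is what makes it fast even on large designs.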

    Related blog – It’s Time to Put Your SPICE Netlists on a Diet

    Live Demo
    I’m a big believer in showing EDA tools live instead of using canned screenshots, because it gives a real feeling for how responsive and speedy the tools are. Jean-Pierre started from the Unix command prompt and invoked the GUI, called alps, then showed how he uses Belledonne to compare two DSPF files with different pin names. The actual comparison of 1 million resistors completed in just seconds. Here’s a graphical view comparing resistance values, where red depicts more than a 5% difference:
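The comparison being demonstrated boils down to flagging resistors whose values differ by more than a tolerance between the two extractions. Here is a minimal sketch of that check; the dictionaries stand in for parsed DSPF data with hypothetical values, not Belledonne's internals.

```python
# Sketch of a netlist-to-netlist resistance comparison: flag resistors whose
# values differ by more than 5% between two extractions (hypothetical data).

def compare_resistors(a, b, tolerance=0.05):
    """Return names of resistors present in both netlists whose values differ
    by more than `tolerance`, relative to the first netlist's value."""
    flagged = []
    for name in a.keys() & b.keys():          # only devices common to both
        ref = a[name]
        if ref and abs(b[name] - ref) / abs(ref) > tolerance:
            flagged.append(name)
    return sorted(flagged)

ext_a = {"R1": 100.0, "R2": 250.0, "R3": 10.0}
ext_b = {"R1": 103.0, "R2": 270.0, "R3": 10.2}   # R2 differs by 8%
diffs = compare_resistors(ext_a, ext_b)           # only R2 exceeds 5%
```

A production tool adds name matching, reporting and visualization on top, but this per-device relative-difference test is the core of the red/green coloring in the demo.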

    For the second demo circuit we looked at the global statistics from the netlist and then quickly sorted them by maximum RC values shown in a tabular format:

    You can click on a particular net and then visualize it in either 2D or 3D views to better understand the physical topology:

    To understand the context of this extracted interconnect you can overlay the results on top of the GDSII layout:

    In the third demo we looked at resistance analytics to understand where the maximum pin to pin resistance values were located. Once a high resistance path was found, the layer contribution to resistance was displayed to further pinpoint the greatest contributor. For this selected net it was the poly1 layer contributing to 92% of the total resistance.
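The layer-contribution report in that demo can be sketched as a simple aggregation over the segments of an extracted pin-to-pin path: sum resistance per layer, divide by the path total. The segment data below is invented for illustration, loosely mirroring the poly1-dominated example from the webinar.

```python
# Sketch of per-layer resistance contribution analysis for one extracted path.
# Segment values are made up for illustration.

def layer_contributions(segments):
    """segments: list of (layer_name, resistance_ohms) tuples for one path.
    Returns {layer: fraction_of_total}, ordered by descending contribution."""
    total = sum(r for _, r in segments)
    by_layer = {}
    for layer, r in segments:
        by_layer[layer] = by_layer.get(layer, 0.0) + r
    return dict(sorted(((l, r / total) for l, r in by_layer.items()),
                       key=lambda kv: -kv[1]))

path = [("poly1", 460.0), ("metal1", 25.0), ("via1", 10.0), ("poly1", 5.0)]
shares = layer_contributions(path)   # poly1 dominates this path
```

Sorting by contribution puts the worst offender first, which is exactly the "where do I look first" answer the tool gives the designer.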

    2D and 3D results showed resistance values by color, giving a quick graphical sense of where to start looking. We saw pin-to-pin RC delay values and net-to-net capacitance tables, and could see which capacitance was grounded versus coupled.

    In the fourth and final demo I got to see how sanity checks could be run to identify opens in the IC layout interconnect caused by missing vias.
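A missing via turns one logical net into two disconnected pieces of metal, so an open check amounts to connectivity analysis: model the shapes as graph nodes and the vias/abutments as edges, then flag any net whose shapes fall into more than one connected component. The union-find sketch below illustrates the idea on invented data; it is not the tool's actual algorithm.

```python
# Sketch of a missing-via sanity check via union-find connectivity analysis.
# Shape and connection data below are hypothetical.

def find(parent, x):
    """Find a shape's component root, with path halving for speed."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def count_components(shapes, connections):
    """Union all connected shape pairs, then count distinct roots."""
    parent = {s: s for s in shapes}
    for a, b in connections:
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb
    return len({find(parent, s) for s in shapes})

# One net with three shapes; the via joining m1_seg to m2_seg is missing,
# so the net splits into two pieces -> an open.
shapes = ["m1_seg", "m2_seg", "m2_stub"]
connections = [("m2_seg", "m2_stub")]       # absent: ("m1_seg", "m2_seg")
is_open = count_components(shapes, connections) > 1
```

Union-find keeps this near-linear in the number of shapes, which matters when checking every net of a full-chip extraction.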

    Summary
    This was a very practical webinar where we got to see live EDA tools that will help IC designers and PDK developers better understand the effects of parasitic RLC values on their particular designs. In the past you may have been tempted to just run extraction and then immediately start running SPICE simulations; the recommended flow, however, is to analyze the extracted netlists before starting SPICE simulation in order to understand and quantify the portions of your design with the most resistance and capacitance. Circuit designers can now quickly see where coupling capacitance is impacting their layouts, then decide whether to make topology changes or go back and resize transistors.

    The archived webinar is online now here.


    SNUG 2017 Keynote: Aart de Geus on EDA Fusion!

    by Daniel Nenni on 03-30-2017 at 7:00 am

    I spoke with Aart before his SNUG keynote and found him to be very relaxed and upbeat about EDA and our future prospects which reminded me of my first ever (cringe-worthy) blog, “EDA is Dead”. Now, eight years later, we have what Aart calls “EDA Fusion” to thank for the reemergence of EDA as a semiconductor superpower, absolutely.

    If you look at EDA’s recent revenue numbers you will see why Aart is smiling. Synopsys stock (SNPS) started 2016 in the $45 range and is now trading above $70. In fact, if you look at EDA as a whole, we had a very good year. At the end of this blog is a report, “Large divergence in EDA suppliers’ latest quarterly revenues,” compliments of SemiWiki member Gerry Byrne. Gerry is the founder of edalics, which provides EDA budget management services to semiconductor companies. But first, back to Aart’s keynote.

    The first example of EDA Fusion was the integration of IP into EDA. Synopsys currently has the industry’s largest IP portfolio, which generated more than $500M in revenue last year. More importantly, the Synopsys IP is designed with Synopsys tools, resulting in deep design experience other EDA companies can only dream of. This direct design experience is critical as we move into FinFETs and increasingly complex process technologies with compressed design cycles.

    A more recent example of EDA Fusion is the integration of software quality and security into EDA with the Synopsys acquisition of Coverity. This has allowed Synopsys to swim upstream at the systems companies (Automotive, Mobile, IoT, etc…) thus increasing their total available market.

    I really admire Aart’s ability to come up with engaging keynotes for us every year. You can see his last 5 SNUG keynote videos HERE and I strongly suggest you do. Hopefully this year’s keynote will be up soon because it is something you really have to see to fully appreciate. And now for the 2016 EDA revenue report from edalics:

    Large divergence in EDA suppliers’ latest quarterly revenues

    During Q4 2016 global semiconductor revenue growth accelerated to 12.3% vs. Q4 2015 (SIA). With this positive semiconductor industry background, Synopsys returned to strong 14.8% Q4* growth vs Q4* 2015, only to be significantly eclipsed by Mentor’s 41.7% growth rate, while Cadence reported steady 6.3% growth:

    For comparison, the % growth rates in the previous quarter, Q3* 2016 versus Q3* 2015: Synopsys 7.9%, Cadence 2.9%, Mentor 11% and semiconductor revenues 3.6%.

    Examining the delta in revenue growth in dollars, Synopsys and Cadence’s Q4* 2016 vs Q4* 2015 revenues increased by $84.2M and $27.9M respectively, while Mentor’s revenues increased impressively by $140.7M. Synopsys and Cadence surpassed their own mid-point revenue guidance for the quarter by $14.3M and $1.0M respectively:
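The growth percentages and dollar deltas above can be cross-checked against each other: together they imply each company's prior-year quarterly revenue (delta divided by growth rate). A quick sketch, using only the figures quoted in this report:

```python
# Cross-check of the reported Q4* 2016 vs Q4* 2015 figures: a growth rate and
# a dollar delta together imply the prior-year quarterly revenue ($M).

def implied_base_revenue(growth_pct, delta_musd):
    """Prior-year revenue implied by growth % and absolute revenue delta."""
    return delta_musd / (growth_pct / 100.0)

synopsys = implied_base_revenue(14.8, 84.2)    # roughly $569M
cadence  = implied_base_revenue(6.3, 27.9)     # roughly $443M
mentor   = implied_base_revenue(41.7, 140.7)   # roughly $337M
```

The implied bases are mutually consistent with the relative sizes of the three companies, which is a useful sanity check on the reported percentages.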

    Mentor stands out when comparing the latest quarterly results. To get a deeper understanding why, it is worthwhile comparing the results for the latest 12 month period (comparing the last 12 months of each company’s results most closely matching the calendar year 2016). During 2016 global semiconductor revenue was a record $338.9 billion, 1.1% higher than in 2015. The top 3 EDA suppliers all comfortably beat this semiconductor revenue growth rate. Synopsys grew fastest, by 10.5%, extending its EDA market share leadership. Cadence grew revenues by 6.7%, while Mentor achieved an 8.6% growth rate for the last 12 months (which includes 41.7% for Q4 2016):

    Mentor’s quarterly revenues fluctuate the most of the big 3, and this was more accentuated than usual in 2016, a year of two contrasting halves: revenues decreased in Q1 (-16.4%) and Q2 (-9.5%), then grew strongly in Q3 (+11%) and Q4 (+41%), yielding the 8.6% growth rate for the whole of 2016, with an increased portion of 2016 revenues booked in the second half versus 2015.

    * Based on each company’s reported financial quarterly data which most closely match that calendar quarter.