Meet Kandou’s Frank Lavety, Behind the Scenes Point Person for Timely Product Delivery
by Lauro Rizzatti on 05-17-2022 at 10:00 am

I learned about Kandou a year ago and liked what I heard, as should anyone who wants higher res displays and faster downloads from their electronic devices. I’ve been tracking Kandou since and believe it’s living up to its goal to be the undisputed innovator in high-speed, energy-efficient chip-to-chip link solutions to improve the way the world connects and communicates.

My blog last year looked at Kandou’s Matterhorn USB-C multiprotocol retimers with USB4 support. Two variations of Matterhorn, named for the famous Alpine peak, have already been adopted and successfully implemented by OEMs and ODMs developing mobile and desktop PCs. Matterhorn often overshadows Kandou’s equally impressive Chord signaling technologies, optimized for high-speed, ultra-low-power chiplet and chip-to-chip interconnects and delivered as blocks of IP.

Recently, I chatted with Frank Lavety, Kandou’s General Manager and the point person behind the scenes ensuring timely delivery of its entire product line. He described how USB4, the latest USB standard, will do even more to help quench our thirst for information. As an aside, Frank noted that, with a worldwide workforce, it’s nothing short of remarkable that Kandou operated efficiently and with little productivity impact in a pandemic scenario few of us imagined.

Frank joined Kandou in 2018 as VP of Operations, initially responsible for building Kandou’s fabless manufacturing operations. In 2019 he assumed the role of General Manager, leading engineering, manufacturing and the commercialization of Kandou’s Matterhorn product family.

Here is a condensed version of our conversation.

Q: Frank, tell us about your background.

A: After graduating from University, I spent seven years with the medical device manufacturer Haemonetics Corporation, where I learned so much about high-quality, cost-efficient manufacturing. After finishing my MBA, I returned to Scotland from Boston, joining Wolfson Microelectronics in 2001, a fabless semiconductor start-up based in Edinburgh. Wolfson successfully IPO’d on the FTSE in 2003, eventually reaching a market cap of more than $1.5 billion.

During my 10 years at Wolfson, I established global manufacturing operations to support annual revenues in excess of $230 million. As VP of Operations, I shipped billions of mixed-signal ICs to the world’s largest consumer electronics brands, establishing Wolfson as a world leader in audio technologies.

In 2012, I was looking for a new technology challenge and joined Adlens Ltd., an Oxford University (Oxford, U.K.) spin-out developing adaptive lens technologies used in eyewear and AR/VR applications, where I was COO and CEO before joining Kandou two years ago.

Q: What attracted you to Kandou?

A: It reminded me of my early days at Wolfson and the satisfaction I had building a very successful global business from the ground up. I also saw parallels with my time at the Oxford spin-out Adlens, given that Kandou is itself an EPFL spin-out.

Furthermore, when I met with CEO and founder Amin Shokrollahi and the Board, I was inspired by the energy behind the vision. I felt strongly that I could add a depth of experience and a broad range of skills to help Kandou navigate from start-up to market leader.

Q: What surprised you most about Kandou?

A: The passion and ideas around revolutionizing wired connectivity, the talent pool we built and the genuine interest from customers in our technologies.

Q: What differentiates Kandou?

A: The vision around disrupting wired connectivity and challenging the status quo. Also, the charisma of Amin Shokrollahi and the culture he and the team have developed.

Q: How have the semiconductor supply chain challenges affected Kandou’s ability to deliver products?

A: We correctly anticipated strong demand for Matterhorn, engaged our tier 1 supply chain early, and secured foundry capacity plus back-end IC packaging and test services to support our customers’ volume requirements through 2022.

Q: What keeps you up at night?

A: I’m constantly thinking about what our customers really want. I then reflect on our decisions and their significance to our business. I strongly believe that being vigilant and nimble drives innovation which in turn creates radically successful businesses. Flawless execution is key!

Thanks for your time, Frank.

About Kandou
Kandou, an innovative leader in high-speed, energy-efficient chip-to-chip link solutions, is revolutionizing wired connectivity to improve the way the world connects and communicates. It enables a better-connected world by offering disruptive technology, through licensing and standard products, for smaller, more energy-efficient and cost-effective electronic devices. Kandou has a strong IP portfolio that includes Chord™ signaling, adopted by the OIF and JEDEC standards organizations. Kandou offers fundamental advances in interconnect technology that lower power consumption and improve the performance of chip links, unlocking new capabilities for customer devices and systems. Kandou is a fabless semiconductor company founded in 2011 and headquartered in Lausanne, Switzerland, with offices in Europe, North America and Asia.

Also Read:

CEO Interview: Chuck Gershman of Owl AI

CEO Interviews: Dr Ali El Kaafarani of PQShield

CEO Interview: Dr. Robert Giterman of RAAAM Memory Technologies


224G Serial Links are Next
by Daniel Nenni on 05-17-2022 at 6:00 am

The tremendous increase in global data traffic over the past decade shows no sign of abating.  Indeed, the applications for all facets of data communications are expanding, from 5G (and soon, 6G) wireless communications to metropolitan area networks serving autonomous vehicles to broader deployment of machine learning algorithms.  As a result, there is a continuing need for faster signaling bandwidth among data centers – i.e., within racks, through the switch hardware, and between distant sites across a network.

There is also a greater diversity in the physical interfaces and signaling media to support communication between nodes, from long reach (LR) to ultra-short reach (USR), and from copper traces on an advanced package or printed circuit board to electro-optical conversion and optical cables.

The foundation for the increasing signal datarate support is the SerDes IP available for integration in SoCs and switch hardware (excluding the unique short-reach cases where clock-forwarded data-parallel interfaces may apply).

At the recent DesignCon conference at the Santa Clara Convention Center, Wendy Wu, Product Marketing Director in the IP Group at Cadence, provided both a historical perspective and a future roadmap outlook for serial data communications in her presentation “The Future of 224G Serial Links”.  This article summarizes the highlights of her talk.

224G links are critical in the next-generation datacenter

The first diagram below depicts a spine-leaf data center network architecture – the server to top-of-rack switch interface is in blue, and the rack-to-spine bandwidth interface is in orange.  The spine-leaf architecture is emerging in importance, due to the increasing “east-west” data traffic between servers for distributed applications.

The second figure below provides a roadmap for the switch hardware, highlighting the upcoming 224G SerDes generation, in support of a 102.4T switch bandwidth. This switch design incorporates 512 lanes of 224G SerDes to provide 102.4Tbps, or 800G/1.6T Ethernet utilizing 4/8 transceiver lanes.
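
A quick back-of-the-envelope check of that configuration (my own arithmetic, assuming each 224G lane nets roughly 200 Gbps of Ethernet payload after encoding and FEC overhead):

```python
# Sanity check of the 102.4T switch configuration described above.
# Assumption: each "224G" lane carries ~200 Gbps of Ethernet payload;
# the raw 224G PAM4 line rate includes FEC/encoding overhead.
LANES = 512
PAYLOAD_PER_LANE_GBPS = 200

print(LANES * PAYLOAD_PER_LANE_GBPS / 1000, "Tbps")  # 102.4 Tbps
print(LANES // 4, "x 800G Ethernet ports")           # 128 (4 lanes each)
print(LANES // 8, "x 1.6T Ethernet ports")           # 64  (8 lanes each)
```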

Switch Hardware Implementation Options

Recent “More than Moore” silicon and advanced packaging technology enhancements have enabled sophisticated, heterogeneous 2.5D implementations for the next generation of switch hardware.  Wendy presented potential design examples, as shown below.

The 224G long-reach differential SerDes lanes are illustrated in blue.  Extra/ultra short-reach die-to-die interfaces are shown in pink.  The implementation on the right in the figure above adopts a “chiplet” approach, where the silicon technology solution for the core SoC and the 224G SerDes tiles could use different process nodes.

224G Standards Activity

There are two organizations collaborating on the standards activity for the 224G datarate definition.  The IEEE 802.3df working group and the Optical Internetworking Forum (OIF) consortium are focused on the definition of the 224G interface.

The goal of their efforts is to enable interoperability across the SerDes interface, encompassing:  the physical media layer (both copper and fiber cable), data link layer, and link management parameters.

The OIF table below indicates some of the key targets for the 224G standard, and how it compares to previous datarate definitions.

Note that there is still active discussion on several parameters – specifically, the signal modulation format and the allowable electrical channel insertion loss for the physical medium and reach target.  The transition to higher datarates introduced pulse-amplitude modulation with 2 bits per unit interval – i.e., PAM4 – using four signal levels in place of the 1-bit-per-UI NRZ signaling used previously.  The PAM definition for 224G is still being assessed, with difficult tradeoffs between:

  • more bits per UI and more signal levels (e.g., PAM-6, PAM-8, or a quadrature phase and amplitude modulation)
  • signal-to-noise ratios
  • channel bandwidth required
  • power dissipation per bit
  • target bit error rate (pre-forward error correction)
  • link latency

More bits per UI reduces the channel bandwidth requirements, but increases the SNR sensitivity.  Continuing with PAM-4 leverages compatibility with the 112G interface, but would then require 2X the channel bandwidth (as the UI duration will be halved).
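
To make the tradeoff concrete, here is a small sketch (my illustration, not from the presentation) of the symbol rate each candidate modulation would need in order to carry a nominal 224 Gbps per lane:

```python
import math

RAW_RATE_GBPS = 224  # nominal per-lane data rate

# bits per unit interval = log2(number of signal levels)
for name, levels in [("NRZ", 2), ("PAM-4", 4), ("PAM-6", 6), ("PAM-8", 8)]:
    bits_per_ui = math.log2(levels)
    baud = RAW_RATE_GBPS / bits_per_ui  # required symbol rate in GBd
    print(f"{name:5s}: {bits_per_ui:.2f} bits/UI -> {baud:5.1f} GBd")

# NRZ  : 1.00 bits/UI -> 224.0 GBd
# PAM-4: 2.00 bits/UI -> 112.0 GBd
# PAM-6: 2.58 bits/UI ->  86.7 GBd  (practical PAM-6 schemes differ slightly)
# PAM-8: 3.00 bits/UI ->  74.7 GBd
# A lower symbol rate relaxes the channel bandwidth requirement, but
# packing more levels into the same swing shrinks the eye and tightens
# the SNR budget.
```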

Note that the “raw” bit error rate for the high datarate PAM signaling interface is unacceptably high.  A forward error correction (FEC) symbol encoding method is required, extending “k bits” in a message to “n bits” at the transmitter, with error detection and correction decoding applied at the receiver.
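
As a concrete illustration (my example; the 224G FEC scheme itself is still being defined): the Reed-Solomon RS(544,514) “KP4” code used at 112G-class Ethernet rates extends 514 data symbols to 544 codeword symbols:

```python
# Overhead and correction strength of a Reed-Solomon RS(n, k) code,
# using the RS(544,514) "KP4" code from current Ethernet as an example.
n, k = 544, 514              # codeword symbols / message symbols
t = (n - k) // 2             # correctable symbol errors per codeword

print("correctable symbols per codeword:", t)  # 15
print(f"coding overhead: {(n - k) / k:.1%}")   # ~5.8%
# The line must run ~5.8% faster than the payload rate to carry the
# parity symbols that pull the post-FEC BER down to target.
```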

The table above includes a forecast for when the interoperability targets for 224G will be finalized.

Physical Topologies for 224G Links

Wendy presented several physical topologies for the 224G link (in orange), including some unique approaches, as shown below.

The top part of the figure above illustrates how 224G SerDes would be used in more traditional single PCB-level implementations – e.g., chip-to-chip (possibly with a mezzanine card for an accelerator, medium reach), and chip-to-optoelectronic conversion plug-in module (very short reach).

The bottom part of the figure shows two emerging designs for serial communications:

  • a die-to-die, extra short reach integration of the optoelectronic circuitry in a novel 2.5D package accommodating optical fiber coupling
  • the use of an “overpass cable” on the PCB itself to reach the connector, enabling a long(er) reach

The signal insertion loss for the combination of the SoC-to-overpass connector plus the overpass cable is significantly improved compared to conventional PCB traces (at the frequency of 224G PAM-4 signaling).  The overpass connector is mounted on the PCB adjacent to the SoC, or potentially, integrated into an expanded package.  The twin-axial overpass cable carries the differential SerDes data signal.

DSP-based Receiver Equalization

In order to double the SerDes speed from 112G to 224G, the analog front-end bandwidth needs to increase by 2X for PAM4, or by 1.5X for PAM6, to allow the higher-frequency components to pass through. Improved accuracy and reduced noise also need to be considered for the analog-to-digital converter (ADC). To compensate for the additional loss at the higher data rate (i.e., the higher Nyquist frequency), stronger equalization may be required, which could mean more taps in the FFE and DFE. In addition, advanced algorithms such as maximum likelihood sequence detection will become more important at 224G. Last but not least, because the UI is reduced by 50%, ideally the PLL clock jitter should be reduced by 50% as well.
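
As an illustration of what “more taps in the FFE” means (a conceptual sketch, not Cadence’s implementation), a feed-forward equalizer is essentially a FIR filter applied to the ADC samples:

```python
import numpy as np

def ffe(samples: np.ndarray, taps: np.ndarray) -> np.ndarray:
    """Feed-forward equalizer: a FIR filter over ADC samples.

    Each output is a weighted sum of neighboring samples; in a real
    receiver the tap weights are adapted (e.g., via LMS) to cancel the
    inter-symbol interference introduced by channel loss."""
    return np.convolve(samples, taps, mode="same")

# Toy 5-tap example: a strong main cursor plus small correction taps.
taps = np.array([-0.05, 1.0, -0.25, -0.10, -0.04])
adc_samples = np.array([0.10, 0.90, 1.10, -0.80, -1.00, 0.95, -0.90])
print(ffe(adc_samples, taps))
# Moving from 112G to 224G, the shorter UI and higher Nyquist loss
# generally call for more taps like these (and tighter clock jitter)
# to hold the same bit error rate.
```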

Wendy indicated, “Cadence is an industry leader in DSP-based SerDes.  There is extensive expertise in 112G SerDes IP development, spanning 7nm, 6nm, and 5nm process nodes.  Customer adoption has been widespread.  That expertise will extend to the upcoming 224G SerDes IP, as well.”

The drive for increased SerDes lane bandwidth continues at a torrid pace, with the 224G standard soon to emerge, enabling higher data traffic within and between servers.

For more information on Cadence high-speed SerDes IP offerings, please follow this link.

Also Read:

Tensilica Edge Advances at Linley

ML-Based Coverage Refinement. Innovation in Verification

Cadence and DesignCon – Workflows and SI/PI Analysis


Cybersecurity Threat Detection and Mitigation
by Daniel Payne on 05-16-2022 at 10:00 am

Every week in the technology trade press I read about cybersecurity attacks against web sites, apps, IoT devices, vehicles and even ICs. At the recent IP SoC Silicon Valley 2022 event in April, I watched a cybersecurity presentation from Robert Rand, Solution Architect for Tessent Embedded Analytics at Siemens EDA. Common jargon in cybersecurity discussions includes terms like vulnerability, threat, attack, risk, asset and attack surface.

Regulated industries with safety-critical usage like automotive, aerospace and medical all have a keen awareness of cybersecurity threats, so there’s a culture of detection and mitigation. There’s even a national vulnerability database maintained by NIST (National Institute of Standards and Technology), and they show a growing trend of vulnerabilities over time.

Embedded Analytics

Hardware security allows an electronic system to detect a threat quickly, understand it, and respond, avoiding a dangerous outcome by placing the system into a safe mode and keeping everyone safer. The specific technology that Siemens EDA brings to cybersecurity is called Embedded Analytics: extra hardware that gives developers and operators of embedded systems the visibility needed to see what’s happening inside their SoC, and the ability to quickly respond to a cyber threat. This added analytics IP inside your SoC subsystem connects to all of the IP, giving it system-level visibility:

Tessent Embedded Analytics

This added IP is customized for your specific SoC, and the data it analyzes can be communicated over existing IO (USB 2, USB 3, JTAG, Aurora SerDes, PCI Express, Ethernet).  There’s an Eclipse-based IDE tool for SW/HW debug of the complete system, or you can use a Python API to get the visibility and control desired.

SystemInsight IDE

Adding embedded analytics gives your SoC team detailed insight into how their subsystem is being used in real time, provides visibility of the whole system from boot, allows monitoring of the interconnect at a transaction level, and lets you control and trace each processor. All of the common ISAs and interconnect protocols are supported, so you’re not wasting any time. You get to decide which subsystems are connected to the embedded analytics.

Sentry IP

To detect and respond to cybersecurity threats in just a few clock periods there’s the Sentry IP, a hardware block placed on a bus manager interface. Sentry filters all transactions: it passes through only safe transactions, blocks illegal transactions, and can redirect when an attack is detected.

Sentry IP

There are 1,024 possible matches to identify vulnerabilities, so this system can adapt to new threats as they emerge.
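
Conceptually, a bus sentry acts as a transaction filter. Here is a minimal software analogy (my sketch only – the real Sentry IP is hardware that decides within a few clock cycles, and its actual rule format is not public):

```python
from dataclasses import dataclass

@dataclass
class Txn:
    initiator: int   # ID of the bus manager issuing the transaction
    addr: int        # target address
    write: bool

# Hypothetical match rules: (initiator, address range, write allowed).
RULES = [
    (0, range(0x4000_0000, 0x4001_0000), True),   # CPU0 -> peripheral OK
    (1, range(0x8000_0000, 0x9000_0000), False),  # DMA -> RAM read-only
]

def sentry(txn: Txn) -> str:
    for who, region, may_write in RULES:
        if txn.initiator == who and txn.addr in region:
            if txn.write and not may_write:
                return "BLOCK"        # illegal write: block and flag
            return "PASS"             # safe transaction passes through
    return "REDIRECT"                 # unmatched: redirect for analysis

print(sentry(Txn(initiator=1, addr=0x8000_1000, write=True)))  # BLOCK
```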

Embedded Security Subsystem

Robert shared an example system that had several CPUs, RAM, Flash, CAN and interconnect components shown in light blue:

Embedded Security System

The embedded analytics IP modules are shown in teal, and the Bus Sentry blocks are outlined in bright green. The subsystem can be run in either the application domain or the security domain. The sentry blocks enforce your specific filters for each connected IP block. When your subsystem runs in the application domain, embedded analytics provides complete visibility of what’s going on:

Application Domain

In the security domain all of the bus sentries are activated, providing hardware-level threat detection and mitigation:

Security domain

Summary

Cybersecurity threats are a growing concern for companies providing electronic systems, but there’s good news: adding Embedded Analytics hardware to detect, filter and mitigate threats is now possible. Siemens EDA’s Tessent Embedded Analytics provides analytic tools for building signatures of normal operation, so that signatures of threats can be quickly detected. Monitors can continuously collect fine-grained data to detect new threats and enable your security team to develop mitigations.

The Bus Sentry IP blocks allow your chip to mitigate any threats at the hardware level. Over time you can always add new filters and mappers to provide improved security hardening. On-chip monitors and sentries provide new promise against cybersecurity attacks.

There’s also a 19-minute YouTube video of this presentation.


Bringing Prototyping to the Desktop
by Kalar Rajendiran on 05-16-2022 at 6:00 am

A few months ago, I wrote about Corigine and their MimicPro FPGA prototyping system and MimicTurbo GT card solutions. That article covered the various features and benefits of the two solutions, with the requirements for next-generation prototyping solutions as the backdrop. You can access that article here. With 250+ employees spread over 10+ sites around the world, the startup is well established. It is on a mission to make prototyping solutions accessible to a wider audience of engineers, right on their desktops. Customers have been providing feedback to Corigine based on hands-on experience since the launch of these solutions last year.

I followed up with Corigine to identify some key aspects that differentiate their solutions from traditional competitive offerings. This post is based on my conversation with Mike Shei, Head of R&D, and Ali Khan, VP of Business & Product Development.

Competitive Comparison

Corigine has incorporated a lot of run-time capabilities into its prototyping solutions compared to traditional offerings such as Synopsys HAPS. For example, the “Local Debug and System Scope” functionality provides high visibility into data right on the desktop, for quickly identifying and resolving bugs. The memory analyzer capability provides backdoor runtime memory access to both user design memories and external DDR memories. HAPS does not offer these kinds of capabilities.

Corigine’s solutions use the ASIC’s clock scheme, whereas Cadence’s Protium uses a cycle-based fast-clock scheme when running the design. Since the real user clock will always be half of the fast clock, system performance is degraded by 50%. Protium also relies on the Palladium platform for its debug capability; Mimic does not rely on any emulator for debug.

Corigine’s auto-partitioning tool takes a system-level point of view to minimize the hops when routing between FPGAs in a multi-FPGA design. Partitioning tools of competitive solutions treat logical partitioning, physical partitioning and system routing as three independent steps, whereas Corigine considers these three aspects in an integrated fashion, yielding better system performance.
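
To illustrate the “hop” metric (a toy example of mine, not Corigine’s algorithm): a net that crosses FPGAs which are not directly cabled must pass through intermediate FPGAs, and every extra hop adds latency that drags down the achievable system clock:

```python
from collections import deque

# Board topology: which FPGAs are directly cabled to which (a 4-FPGA chain).
LINKS = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def hops(src: int, dst: int) -> int:
    """Breadth-first search for the minimum number of inter-FPGA hops."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nxt in LINKS[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    raise ValueError("unreachable")

# Two candidate placements of the same design's nets (src FPGA, dst FPGA):
nets_a = [(0, 3), (0, 2), (1, 3)]   # partition chosen ignoring topology
nets_b = [(0, 1), (1, 2), (2, 3)]   # topology-aware partition
print(sum(hops(s, d) for s, d in nets_a))  # 7 total hops
print(sum(hops(s, d) for s, d in nets_b))  # 3 total hops
```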

Customer Feedback

Software Development

Competitive prototyping and emulation systems are expensive, so engineers’ access to them is prioritized. Corigine is hearing from its customer base that software engineers are competing with hardware verification engineers for access to these resources. That puts a crimp in the bigger goal of hardware/software co-development and co-verification. Hardware/software co-verification is a very important aspect of product development, enabling integration of software (sw) with hardware (hw) well before final chips and boards become available. A good prototyping system should enable not only easy hardware verification but also hardware validation, software development, debugging and hw/sw integration.

With Corigine’s MimicTurbo GT card solution, software engineers now have access right on their desktops, wherever in the world they may be based. They can do their software co-development right from their desks without having to wait for access to expensive emulation platforms.

Customers also want their software developers to have access in a distributed environment and the MimicTurbo GT card solution makes that possible.

Scalability

While each MimicTurbo GT card can handle a design up to 48 million gates, multiple cards can be used to handle larger designs. Corigine uses QSFP cables to interconnect multiple servers to deliver this expanded capability. Customers are able to tap just the right amount of hardware resources as called for.

Time to Market

There are many scenarios where customers have multiple versions of cores that need to be verified. For instance, multiple customized versions of RISC-V cores. These multiple verification runs can be done simultaneously on a number of desktops with MimicTurbo GT cards, thereby cutting down on time.

Opportunity For IP Vendors

IP vendors could verify their IP cores using the MimicTurbo GT card solution and then provide the card and the IP cores to their customers for quick verification. This should be attractive to customers, as it would save them the time and effort of setting up a new verification system.

Also read:

A Next-Generation Prototyping System for ASIC and Pre-Silicon Software Development

Scaling is Failing with Moore’s Law and Dennard

High-speed, low-power, Hybrid ADC at IP-SoC

Why Software Rules AI Success at the Edge


Taxis Don’t Have a Prayer
by Roger C. Lanctot on 05-15-2022 at 6:00 am

Ever since the emergence of Uber and DiDi and Gett and Yandex and all the rest of the app-based ride hailing operators, I have been worried about taxis. I had a sneaking suspicion that the ride hail operators were exploiting a loophole that put taxis at a disadvantage, creating a mortal threat.

My suspicions were borne out by taxi driver protests throughout the world and by individual driver suicides. In this context, I was heartened when map-maker HERE introduced the world to HERE Mobility – an Israel-based operation dedicated to the creation of a fleet management platform – which had all the earmarks of a global taxi aggregation solution. A way for taxi Davids around the world to take on ride hailing Goliaths, maybe?

HERE Mobility looked to me like a global taxi-based alternative to ride hailing – a way for taxi operators to fight back against the ride hailing phenomenon. While HERE Mobility’s mission was broader than this, the ultimately doomed initiative (HERE Mobility shuttered its operations almost two years ago) highlighted the basic challenges facing the taxi industry in its efforts to take on app-based ride hailers.

That challenge – competing with an app – was further highlighted for me upon my arrival in Israel Saturday, to attend Ecomotion. While local taxi regulations in Israel supposedly require all taxis to accept credit card payments, the reality is something quite different and it highlights the fatal flaw in the taxi eco-system.

As I was entering my taxi at Ben Gurion airport, I asked the driver if he would accept credit card payment, as I was Shekel-less. We were well on our way to Tel Aviv before it became clear that the driver had no means of accepting credit card payment. He suggested we could stop at an ATM – they are ubiquitous in the city.

There are few things sketchier than a taxi driver driving a customer to an ATM to obtain cash to pay for the ride. It has all the earmarks of a hostage situation – particularly considering the need to leave the vehicle along with one’s possessions – to obtain that essential cash.

Everything worked out fine and there were no hard feelings, but the experience was a head scratcher. Didn’t this taxi driver understand that cashless payment was one of the core value propositions of app-based transportation from operators such as Gett and Yango in Israel?

The overwhelming convenience of a line of waiting taxis at the airport – or any other similar taxi “rank” – is vastly diminished if those taxis are insisting on cash payment. But the challenges facing taxi operators are even greater than that.

The cards are indeed stacked against individual taxi operators because the sector is both highly regulated and fragmented. The appeal of HERE Mobility lay in its ability to attract hundreds of taxi operators around the world to its aggregation platform.

Aggregation alone, though, was not enough to overcome the structural impediments to successful competition with ride hailing operators. Taxi app operator Curb has learned this reality the hard way.

In the U.S., where Curb operates, taxi regulators impose two burdens that are nearly impossible for drivers to overcome. First, taxis are generally not allowed to quote a fixed fare for a ride – with the exception of zoned fares which are predetermined in many markets. And, second, taxi operators are usually confined to operating within or out of a specific geographic area.

Curb has sought to overcome the first restriction by obtaining waivers – one market at a time in the U.S. – to allow participating drivers to quote fixed fares for a particular trip. Quoting a fixed fare is simply taken for granted for ride hail operators – but generally forbidden for taxis. The only thing working in favor of taxis, with respect to fares, is the inability of taxis to use surge pricing – but, if the traffic is heavy, the regular taxi passenger will pay more anyway.

As for the second issue, geographic constraints, taxi drivers’ hands are simply tied. A taxi operating out of Washington, DC, for example, can take a fare to the suburbs (which span three different states), but cannot turn around and bring a fare from the suburbs back into the city – the Washington-based taxi must dead-head back to DC.

Ubers and Lyfts can pick up fares from almost anywhere and drop them almost anywhere with few, if any, constraints. We won’t even bother to dwell on the fact that these drivers are using their own or a rented car with limited regulatory oversight of the vehicle’s condition or operation.

All of these impediments – nearly unlimited geographic operating area, fixed pricing, personal/rental vehicle operation, non-existent regulatory oversight, app-based cashless convenience – mean that taxis simply do not have a prayer. Without significant regulatory review – less regulations for taxis or more regulations for ride hail operators – the days of taxis are limited as is their scope of operation.

The one policy strategy that seems to make sense and to have some enduring credibility is the option – taken in countries such as Germany and Turkey – to require ride hail companies to cooperate with existing taxi operators. This seems to be an effective working solution – but not one that has been universally adopted.

Taxi operators – highly regulated and fragmented – with taxi drivers clinging to their cash-centric ways of doing business, will need help to survive. If regulators are unwilling to impose a co-operative approach, then it’s time to lighten the regulatory burden to give the sector a fighting chance. This is especially true if existing regulations are either not enforced or routinely ignored.

We need taxis. We should not forget that ride hailing is a luxury proposition and transportation a societal necessity. The ride hailers should not be allowed to skim off the cream and leave the less profitable and less favorable fares to taxis. That’s not fair.

Also read:

The Jig is Up for Car Data Brokers

ITSA – Not So Intelligent Transportation

OnStar: Getting Connectivity Wrong


Podcast EP79: Alphacore’s Capabilities and Growth Plans with Ken Potts
by Daniel Nenni on 05-13-2022 at 10:00 am

Dan is joined by Ken Potts, Alphacore’s Chief Operating Officer. Ken has over 30 years of successful entrepreneurship in both Fortune 100 and emerging technology companies. He has held numerous executive and operational leadership roles in semiconductor products, semiconductor IP, and electronic design automation.

Dan and Ken discuss Alphacore’s current complement of high performance IP and chip services. The portfolio and customer base are detailed by Ken, with a discussion of what lies ahead in terms of customer growth and product expansion.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Chuck Gershman of Owl AI
by Daniel Nenni on 05-13-2022 at 6:00 am

Chuck Gershman is CEO and co-founder of Owl Autonomous Imaging, Inc. Chuck is a Drexel University College of Engineering inductee into the Alumni Circle of Distinction, the highest honor bestowed upon alumni. He has been honored as a finalist for CMP Media’s (EE Times) prestigious ACE award as High Technology Executive of the Year and was previously named a Top 40 Healthcare Transformer by Medical Marketing & Media for his work on clinical AI decision support for cancer patients. Chuck holds three US patents for his contributions to microprocessor architecture.

Chuck brings over 30 years of technology and semiconductor industry experience in executive management, marketing, engineering, business development, sales, consulting, and executive advising. Including Owl Autonomous Imaging, he has served as CEO/COO and a board director for three companies; he knows what it takes to lead a vision to reality, having led successful exits with acquisitions by Intel and PMC-Sierra.

What is the backstory of Owl AI?
The foundation of Owl’s technology was created under a challenge grant from the US Air Force to track ballistic missiles in flight. Leveraging this technology and the associated patent portfolio, Owl has developed a monocular Thermal Ranging™ camera that provides HD thermal video with precision ranging, delivering 150x better spatial resolution than LIDAR (500x that of radar). A number of our team members come from Kodak, where they helped develop the first commercial digital cameras and first optical scanner. With regards to thermal imaging, our team has developed two thermal cameras that are currently deployed in space. The team also recently completed a military uncooled thermal design for one of the most advanced military-grade thermal cameras developed to date.

What problems/challenges are you solving?
We are basically improving the sensing and perception of living things such as humans and animals with our 3D dense range map, regardless of time of day and regardless of visual impairments such as fog, rain, sleet, snow, exhaust, glare and speed, to name a few.

What markets does Owl address today?
Owl addresses automotive safety markets such as ADAS and AVs, industrial off-road markets that require robotic mobility, and select military applications. With regards to automotive safety, automatic emergency braking (AEB) has quickly evolved into a must-have feature. This capability has now moved beyond automated braking for large objects like cars, buses or trucks to braking for pedestrians and animals, known as Pedestrian AEB (PAEB). Though these systems have been shown to dramatically reduce accidents, the current class of systems completely fails when operating at night. Testing completed earlier this year by the Insurance Institute for Highway Safety (IIHS) reported a 32% reduction in daytime pedestrian crashes for systems with PAEB versus those without. However, it also found absolutely no difference in crash rate when operating at night. A complete fail. That is where Owl AI comes in.

What makes Owl AI and Owl AI’s products unique?
Owl’s new monocular ranging sensor system, called the Thermal Ranger, outputs a megapixel (MP) of thermal (night vision) video in parallel with optically fused 3D range maps that are similar in appearance to LiDAR and radar range map formats, while delivering orders of magnitude more data points per second. Owl’s solution is analogous to recently announced 3D single-camera computer vision systems operating in the visual domain; however, Owl’s Thermal Ranger is unique in delivering rich detail and 3D response day or night, including operation in the extreme conditions known as degraded visual environments (DVE).

The Thermal Ranger is composed of a first-of-its-kind Megapixel Digital Focal Plane Array (MP-DFPA) semiconductor chip producing nearly four times the resolution of today’s analog-based VGA thermal cameras. The Thermal Ranger also includes a multi-aperture optical component (MLA) and a suite of Convolutional Neural Network (CNN) ranging software for true thermal computer vision. The sensor operates in the thermal spectrum (longwave infrared), allowing it to see the world clearly, in high resolution, through adverse DVE and any lighting condition for instant classification and 3D location of pedestrians, cyclists, animals, vehicles, and other objects of interest. This is a true no-light system, not to be confused with a low-light camera (NIR or SWIR).

This low-cost, compact, single-lens (monocular) system outputs megapixel HD thermal video with vivid clarity while simultaneously generating 3D range maps of up to 90 million points per second – orders of magnitude more angular and spatial resolution than LiDAR or radar sensors. For PAEB systems, the novel MLA enables simultaneous capture of both wide-angle and telephoto fields of view (FOV) through a single main lens, providing wide-angle curb-to-curb response (100 degrees) while extending 2D long-range response well beyond 300 meters and delivering high-accuracy 3D range response at distances of over 185 meters. Removing the MLA and installing a telephoto lens with a FOV suited to long-haul highway scenes makes the system ideal for long-haul AV trucking applications, with object detection response up to 400m – well beyond any other sensor available today, including LiDAR.

What’s next for Owl AI? Or what is Owl AI’s future direction?
Owl currently has paying customers. Owl recently completed a Series A financing round of $15M to help us accelerate our development and we are focused on executing on our technology roadmap and expanding our go-to-market resources. We are starting to engage higher volume opportunity customers as well as identify and plan for future optimizations of our roadmap with key customer input. We believe our solution is cost effective today and we will continue to align our products with a strong value proposition over the long term.

Additional thoughts? 
We believe that today’s ADAS, AV and Robotic Mobility systems will be improved through the sensor diversity achieved by adding this fourth sensor modality. Lastly, our solutions are being designed with automotive quality standards in mind and we intend to meet the needs of the massive opportunity in this market.

Also Read:

CEO Interviews: Dr Ali El Kaafarani of PQShield

CEO Interview: Dr. Robert Giterman of RAAAM Memory Technologies

Experts Talk: RISC-V CEO Calista Redmond and Maven Silicon CEO Sivakumar P R on RISC-V Open Era of Computing


Semiconductor Crash Update
by Daniel Nenni on 05-12-2022 at 10:00 am

Earlier this year semiconductor oracle Malcolm Penn did his 2022 forecast, which I covered here: Are We Headed for a Semiconductor Crash? The big difference with this update is the black economic clouds now looming, which may again highlight Malcolm’s forecasting prowess. I spent an hour with Malcolm and company on his Zoom cast yesterday and now have his slides. Great chap, that Malcolm.

The $600B question is: When will the semiconductor CEOs start issuing warnings?

RECAP: PERFECT STORM BROKE IN JULY 2020 (BUT NOBODY WAS PAYING ATTENTION)

The 6.5% growth in 2020 set the stage for a big 2021, which ended up at 26%. Previous semiconductor records were 37% in 2000 and 32% in 2010, so 26% is not that big a number, meaning we have a shorter distance to fall. Covid was the trigger for our recent shortages, but it really was a supply/demand imbalance camouflaged by a crippled supply chain.

The big difference today is the daunting economic and geopolitical backdrop that could possibly raise Malcolm to genius forecasting level: horrific geopolitics with Russia and China, rampant inflation, workforce challenges around the world, and of course Covid, which is not done with us yet – not even close.

Let’s take a look at the four key drivers from Malcolm’s presentation:

  1. Economy: Determines what consumers can afford to buy.
  2. Unit Demand: Reflects what consumers actually buy plus/minus inventory adjustments.
  3. Capacity: Determines how much demand can be met (under or over supply).
  4. ASPs: Sets the price units can be sold for (supply – demand + value proposition).

The Economy is the big change since I last talked to Malcolm. In my 60 years I have never experienced a more uncertain time, other than the 2008 housing crash, when a significant amount of my net worth disappeared overnight. Today, however, I am a financial genius for holding fast. Property values here are about double the 2008 peak, which is great but also a little concerning.

Bottom line: I think we can all agree the economy is in turmoil with the inflation spike and the jump in interest rates and debt. Maybe some financial experts can chime in here but in my experience this trend will get worse (recession?) before it gets better.

Unit Demand is definitely increasing due to the digital transformation we have been working on for years. Here is a slide from a keynote at the Siemens EDA Users meeting last week; I will be writing about this in more detail next week. Unit volume is the great revealer of truth versus revenue, so this is the one to watch. Unfortunately, “take or pay” and “prepay” contracts are becoming much more common, and that can distort unit demand as a forecasting metric.

Bottom line: Long-term semiconductor unit demand will continue to grow, in my opinion (though not at the rate we experienced in 2021), but much of that will be due to the Covid backlog and inventory builds. The big risk here is China. China is in turmoil and it is the largest consumer of semiconductors. China stemmed the first Covid surge with draconian measures, which it is again employing, and again the electronics supply chain is impeded. Other parts of the world that are not paying attention to what is happening in China will suffer the consequences in the months to come – my opinion.

Capacity is a tricky one. Let’s break it into two parts: leading-edge nodes (FinFETs) and mature nodes (not FinFETs). We are building leading-edge capacity at a furious pace. It’s a PR race between Intel, Samsung, and TSMC, and Intel’s outsourcing of significant FinFET capacity to TSMC makes it even trickier.

To be clear, mature-node capacity is also being rapidly added – much of it in China, since China does not have access to leading-edge technology – but it will pale in comparison to FinFET capacity. Reshoring of semiconductor manufacturing and the record-setting CAPEX numbers are also an important part of this equation, which makes the oversupply argument even easier.

On the other side of the equation, the semiconductor equipment companies are hugely backlogged no matter who you are, and the electronics supply chain is still crippled, so announcing CAPEX and actually spending it are two different things.

In my opinion, if all of the announced CAPEX is actually spent there will be some empty fabs waiting for equipment and customers. Remember, Intel had an empty fab in AZ for years and there are still empty fabs all over China. Staffing new fabs will also be a challenge since the semiconductor talent pool is seriously strained.

Bottom line: We did not have a wafer manufacturing capacity problem before Covid, we do not have one today, and I don’t see an oversupply risk in the future. We did have a surge in chip demand due to Covid, but that will end soon, and the crippled supply chain (the inability to get systems assembled and delivered to customers) is easing fab pressures; that will continue this year and next, depending on Covid and how we respond to it.

ASPs are being propped up by the shortage narrative. Brokers, distributors and middlemen are hoarding and raising prices, causing ever more supply chain issues. I have heard of 10x+ price increases for $1 MPUs. Systems companies are paying a premium for off-the-shelf chips, and foundries are raising wafer prices by record amounts which, at some point, will calm demand.

Bottom line: Malcolm is convinced a significant crash is coming, but I do not agree, based on my ramblings above. If someone asked me to place a 10% over/under bet for semiconductor revenue growth in 2022, I would bet the farm on over. My personal number was and still is 15% growth in 2022.

Let me know if you agree or disagree in the comment section and we can go from there. Exciting times ahead, absolutely.

Also read:

Design IP Sales Grew 19.4% in 2021, confirm 2016-2021 CAGR of 9.8%

Semiconductor CapEx Warning

Chip Enabler and Bottleneck ASML

The ASIC Business is Surging!


Scaling is Failing with Moore’s Law and Dennard
by Dave Bursky on 05-12-2022 at 6:00 am

Looking backward and forward, the white paper from Codasip, “Scaling is Failing” by Roddy Urquhart, provides an interesting history of processor development from the early 1970s to the present. It doesn’t stop there, however, and extrapolates what the chip industry has in store for the rest of this decade. Moore’s Law, an observation regarding the number of transistors that can be integrated on a chip, was crafted half a century ago by Gordon Moore, one of the founders of Intel Corp. That observation was followed by Robert Dennard of IBM Corp., who, in addition to inventing the single-transistor DRAM cell, defined the rules for transistor scaling, now known as Dennard Scaling.

In addition to scaling, Amdahl’s Law, formulated by Gene Amdahl while at IBM Corp. in 1967, deals with the theoretical speedup possible when adding processors in parallel: any speedup is limited by the parts of the software that must execute sequentially. Thus Moore’s Law, Dennard Scaling, and Amdahl’s Law have guided the semiconductor industry over the last half century (see the figure). However, Codasip claims they are all failing, and that the industry and its processor paradigms must change with them. Some of those changes include the creation of domain-specific accelerators, customized solutions, and new companies that create disruptive solutions.
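
Amdahl’s Law puts a number on that sequential limit. If a fraction p of the work can be parallelized across n processors, the speedup is 1/((1 - p) + p/n); a quick check (my example figures):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup from n processors when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelized, 16 cores give only ~6.4x,
# and the ceiling as n grows without bound is 1 / (1 - 0.9) = 10x.
for n in (2, 4, 16, 1024):
    print(f"{n:5d} cores: {amdahl_speedup(0.9, n):5.2f}x")
```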

Supporting the paper’s premise that semiconductor scaling is failing are numerous examples in the microprocessor world. The examples start with the Intel x86 family as an illustration of how scaling failed as chip complexities and clock speeds increased with each new generation of the single-core CPUs. As each CPU generation’s clock frequency increased from the MHz to the GHz level thanks to the improvements in scaling, chip thermal limits became a restraining factor for performance. The performance limitation was the result of a dramatic increase in power consumption as clock speeds hit 3 GHz and higher and complexities hit close to a billion transistors on a chip. The smaller size of the transistors also resulted in increased leakage currents, and the higher leakage currents caused the chips to consume more power even when idling.
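
The thermal wall follows directly from the CMOS dynamic power relation P ≈ α·C·V²·f: once scaling stopped delivering a lower supply voltage with each node, higher clock frequency translated into proportionally higher power. A rough illustration (my numbers, for scale only):

```python
# CMOS dynamic power: P = alpha * C * V^2 * f
# alpha: activity factor, C: switched capacitance, V: supply, f: clock.
def dyn_power(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
    return alpha * c_farads * v_volts ** 2 * f_hz

base = dyn_power(0.2, 1e-9, 1.0, 3.0e9)    # ~3 GHz core at 1.0 V
pushed = dyn_power(0.2, 1e-9, 1.1, 4.5e9)  # 1.5x clock, needing a V bump
print(f"{pushed / base:.2f}x the dynamic power")  # ~1.82x
# With voltage no longer dropping each node, power density climbs with
# frequency -- and leakage adds a power floor even at idle.
```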

To avoid thermal runaway caused by increasing clock frequencies, designers opted for multi-core architectures, integrating two, four or more CPU cores on a single chip. These cores could operate at lower clock frequencies, share various on-chip resources, and thus consume less power. The additional benefit of the multiple cores was the ability to multitask, allowing the chip to run multiple programs simultaneously. However, the multicore approach was not enough for the CPUs to handle the myriad tasks that new applications such as graphics, image and audio processing, artificial intelligence, and still other functions.

Thus, Codasip is proposing that further processor specialization will deliver considerable performance improvements – the industry must change from adapting software to execute on available hardware to tailoring computational units to match their computational load. To accomplish this, many varied custom designs will be needed, permitting companies to design for differentiation. Additionally, new approaches to processor design must be considered – especially the value of a processor design language and processor design automation.

Using the RISC-V modular architecture as an example, with its ability to create specialized cores and its flexibility to craft specialized instructions, Codasip sees RISC-V as an excellent starting point for tailored processing units. Cores are typically classified in one of four general categories – MCU, DSP, GPU, and AP (application processor) – with each type optimized for a range of computations, some of which may not match what is actually required by the on-chip subsystem. Some companies have already developed specialized cores (often referred to as application-specific instruction processors, ASIPs) that efficiently handle a narrowly-defined computational workload. However, crafting such cores requires specialized skills to define the instruction set, develop the processor microarchitecture, create the associated software tool chain, and finally, verify the core.

Codasip suggests that the only way to take specialization a step further is to create innovative architectures to tackle specialized processing problems. Hardware should be created to match the software workload – achievable by customizing the instruction set architecture, creating special microarchitectures, or creating novel processing cores and arrays. ASIPs can be considered a subset of domain-specific accelerators (DSAs), a category defined in a 2019 paper by John Hennessy and David Patterson – “A New Golden Age for Computer Architecture”.

They characterized DSAs as exploiting parallelism (such as instruction-level parallelism, SIMD, or systolic arrays) where the class of applications benefits from it. DSAs can better match their computational capabilities to the intended application. One example is the Tensor Processing Unit (TPU) developed by Google, a systolic array working with 8-bit precision. The more specialized the processor, the greater the efficiency in terms of silicon area and power consumption; the less specialized the DSA, the greater its flexibility. On the DSA continuum there is the possibility of fine-tuning a core for performance, area, and power – enabling design for differentiation.
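
To make the TPU example concrete, here is a heavily simplified sketch of mine of the 8-bit multiply-accumulate arithmetic at the heart of such a systolic matrix unit (the real TPU streams operands through a grid of MAC cells so each operand is reused many times):

```python
import numpy as np

def int8_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """8-bit matrix multiply with a 32-bit accumulator, the arithmetic a
    TPU-style systolic array performs (done directly here for clarity)."""
    assert a.dtype == np.int8 and b.dtype == np.int8
    return a.astype(np.int32) @ b.astype(np.int32)

a = np.array([[1, -2], [3, 4]], dtype=np.int8)
b = np.array([[5, 6], [-7, 8]], dtype=np.int8)
print(int8_matmul(a, b))
# Narrow 8-bit datapaths are why such a DSA packs far more MACs per
# mm^2 and per watt than a general-purpose FP32 core.
```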

Specialization is not only a great opportunity; it also means that many different designs will be created. Those designs will require a broader community of designers and a greater degree of design efficiency. Codasip sees four enablers that can contribute to efficient design – the open RISC-V ISA, a processor design language, processor design automation, and existing verified RISC-V cores for customization.

They feel that RISC-V – a free and open standard that covers only the instruction set architecture, not the microarchitecture – has garnered widespread support and does not prescribe a licensing model, so both commercially licensed and open-sourced microarchitectures are possible. If designers use a processor design language such as Codasip’s CodAL, they have a complete processor description capable of supporting the software, hardware, and verification aspects. Custom instructions are implemented by adding to the processor design language source and are thus reflected in the software toolchain and verification environment as well as the RTL.

Also read:

Optimizing AI/ML Operations at the Edge

Podcast EP60: Knowing your bugs can make a big difference to elevate the quality of verification


Podcast EP78: A Tour of DAC 2022 with Rob Oshana, General Chair
by Daniel Nenni on 05-11-2022 at 10:00 am

Dan is joined by Rob Oshana, general chair of this year’s DAC. Rob is vice president of software engineering R&D for the Edge Processing business line at NXP.  He serves on multiple industry advisory boards and is a recognized international speaker.  He has published numerous books and articles on software engineering and embedded systems.  He is an adjunct professor at the University of Texas and Southern Methodist University and is a Senior Member of IEEE.

Dan and Rob discuss the program for this year’s DAC: what the various parts of the conference will offer, along with a surprising discussion about the dynamics of moving back to a live event. This year’s DAC is shaping up to be a memorable event with many relevant topics and focus areas. You definitely want to hear the backstory.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.