2022 Phil Kaufman Award Ceremony and Banquet Honoring Dr. Giovanni De Micheli
by Daniel Nenni on 01-10-2023 at 8:00 am


One of the first events on the 2023 EDA calendar is the Phil Kaufman Award ceremony and banquet honoring the 2022 recipient Dr. Giovanni De Micheli. The event, hosted by the Electronic System Design Alliance (ESD Alliance) and the IEEE Council on Electronic Design Automation (CEDA), will be held Thursday, February 23, starting at 6:30pm at The GlassHouse in San Jose, Calif. It’s an absolute must-attend, absolutely!

Nanni, as he’s known, is professor and director of the Institute of Electrical Engineering (IEL) and of the Integrated Systems Centre at the École Polytechnique Fédérale de Lausanne (EPFL) in Lausanne, Switzerland. He was also a professor of Electrical Engineering at Stanford for many years and DAC chair in 2000. Nanni is recognized for his many contributions to EDA, including research on EDA tools and methodologies, helping drive advances in the academic field of design automation and incorporating many of them into commercial EDA solutions. His work expanded the fields of high-level synthesis, logic synthesis and network-on-chip.

In addition to celebrating Nanni’s accomplishments, it will be a good opportunity to network and catch up on what’s new. Absolutely. Act fast to take advantage of early bird ticket pricing available through Friday, January 13, at $175 per individual from member companies and $225 each for non-members. After that, member tickets are $225 each and $275 per non-member. Member pricing is offered for individuals or companies that are active SEMI or IEEE members.

Registration is found on the ESD Alliance website or the CEDA website.

Corporate sponsorship opportunities are available as well. Contact Bob Smith, executive director of the ESD Alliance, at bsmith@semi.org for more information.

Anyone who knew Phil Kaufman will tell you that he was a great visionary and a popular figure in our industry. At the time of his death in 1992 at age 50, he was president and CEO of Quickturn Systems (now part of Cadence). Before Quickturn, he was president of Silicon Compilers, which was acquired by Mentor Graphics, now Siemens EDA.

The yearly Phil Kaufman Award named in his honor was established in 1994 and is co-sponsored by the ESD Alliance and CEDA. In the spirit of Phil Kaufman’s innovation and entrepreneurism, the award honors individuals who have had a demonstrable impact on the field of electronic system design, turning innovative technologies into commercial businesses. The recipients are noted for their technology innovations, education/mentoring and business and industry leadership. Visit the Phil Kaufman Award webpage for more details and an impressive list of previous recipients.

The Electronic System Design Alliance (ESD Alliance), a SEMI Technology Community, is an international association of companies providing goods and services throughout the semiconductor design ecosystem and a forum to address technical, marketing, economic and legislative issues affecting the entire industry. It acts as the central voice to communicate and promote the value of the semiconductor design industry as a vital component of the global electronics industry.

Also Read:

IEDM 2022 – TSMC 3nm

9 Trends of IoT in 2023

The Smartphone Snitch in Your Pocket


Arteris IP Acquires Semifore!
by Daniel Nenni on 01-10-2023 at 5:45 am


The semiconductor ecosystem consolidation continues with an interesting acquisition of an EDA company by an IP company. Having worked with both Arteris and Semifore over the past few years, I can tell you from personal experience that this is one of those 1+1=3 types of acquisitions, absolutely.

Semifore was founded in 2006 by a team of system architecture experts focused on the hardware/software interface. Like many EDA companies back then, Semifore grew organically from blood, sweat and beers, amassing one of the most impressive customer lists I have seen for a company of its size. Semifore cofounders Rich Weber and Jamsheed Agahi are true EDA heroes.

Arteris came to SemiWiki in 2011 so we experienced this incredible piece of IP history in real time. To date, we have published more than 100 blogs with Arteris garnering more than 1 million views. Semiconductor IP has always been a SemiWiki audience favorite and Arteris is a big part of that. As I have said before, never ever bet against Charlie Janac!

Arteris was founded in 2003 and not only originated but also dominated the commercially available NoC market. In an interesting plot twist, in 2013 Qualcomm acquired the Arteris FlexNoC product portfolio, but Arteris retained the customers and licensing rights. Shortly thereafter Arteris launched new FlexNoC products and has continued to do so, again dominating the commercial NoC market.

In another interesting twist Arteris acquired EDA company Magillem adding IP-XACT based software development products. We worked with Magillem from 2012 up until their acquisition in 2020.

Which brings us back to Semifore. The product synergy between Magillem and Semifore was obvious and discussed in detail prior to the Arteris acquisition. Now that Arteris has both Magillem and Semifore, another IP disruption is taking place right before our eyes.

Another exciting start to another amazing year inside the semiconductor ecosystem!

 

Arteris Acquires Semifore to Accelerate System-on-Chip Development

Augmenting leading network-on-chip IP and IP deployment automation with the leading hardware/software interface automation solution

CAMPBELL, Calif. – January 10, 2023 – Arteris, Inc. (Nasdaq: AIP), a leading provider of system IP which accelerates system-on-chip (SoC) creation, today announced that it has completed the acquisition of Semifore, Inc., a leading provider of hardware/software interface (HSI) technology. Semifore is used to effectively design, verify, document and help in the validation of the hardware-software integration that is essential to every SoC. Semifore’s technology is used by leading semiconductor and system companies across automotive, consumer electronics, communications, enterprise computing and other applications.

“The SoC is not done until the software drivers run,” said Richard Weber, founder and CEO of Semifore, Inc. “The combination of Arteris and Semifore will provide the scale needed to further deploy our register management technology for hardware-software interface to benefit new and existing customers looking to accelerate SoC designs.”

The addition of Semifore technologies and team expertise augments Arteris system IP and IP deployment automation with best-in-class register management products for effective software control of the IP and SoC hardware. This provides a single-source specification that auto-generates the SoC views needed across hardware designs and hardware-dependent software development including device drivers, firmware, verification and documentation. The unified view and automation of this critical SoC integration layer allow customers to accelerate hardware-software development and reduce the risks of costly SoC redesigns.

“Hardware-software integration is a key part of SoC development which our customers are trying to execute quickly and effectively, leveraging best-in-class system IP and SoC integration automation,” said K. Charles Janac, president and CEO of Arteris. “The addition of Semifore will complement our network-on-chip interconnect IP and expand our SoC solutions, addressing complex challenges that every project team faces today.”

The terms of the transaction were not disclosed. The acquisition is not expected to be material to 2023 revenue or earnings.

About Arteris
Arteris is a leading provider of system IP for the acceleration of system-on-chip (SoC) development across today’s electronic systems. Arteris network-on-chip (NoC) interconnect IP and IP deployment technology enable higher product performance with lower power consumption and faster time to market, delivering better SoC economics so its customers can focus on dreaming up what comes next. Learn more at arteris.com.

About Semifore
Semifore, Inc. provides the CSRSpec™ CSR authoring language and CSRCompiler™, a complete register design solution for hardware-software interface verification and documentation. Semifore’s tools enable CSR design management from a single source specification. CSR specifications expressed in CSRSpec, SystemRDL, IP-XACT or spreadsheets are inputs to CSRCompiler. CSRCompiler then automatically generates Verilog and VHDL RTL; Verilog or C headers; Perl, IEEE IP-XACT, UVM, HTML web pages and Word or FrameMaker documentation. Learn more at semifore.com.

Forward-Looking Statements
This news release contains forward-looking statements regarding the transaction(s) described in this release, including regarding anticipated benefits. Forward-looking statements allow potential investors an opportunity to understand Company management’s beliefs and opinions regarding potential future outcomes, which may be used as a factor by potential investors in evaluating an investment. Although forward-looking statements are based upon what Company management believes may be reasonable future outcomes, there can be no assurance that forward-looking statements will prove to be accurate, as actual results and future events could differ materially from those anticipated in a forward-looking statement. Therefore, such statements are not guarantees. Arteris assumes no obligation to update any forward-looking statement in this release.

© 2004-2023 Arteris, Inc. All rights reserved worldwide. Arteris, Arteris IP, the Arteris IP logo, and the other Arteris marks found at https://www.arteris.com/trademarks are trademarks or registered trademarks of Arteris, Inc. or its subsidiaries. All other trademarks are the property of their respective owners.

Media Contact:
Gina Jacobs
Arteris
+1 408 560 3044
newsroom@arteris.com

Investor Relations Contact:
Erica Mannion or Mike Funari
Sapphire Investor Relations, LLC
+1 617 542 6180
ir@arteris.com

Also Read:

Arm and Arteris Partner on Automotive

Coherency in Heterogeneous Designs

Scalability – A Looming Problem in Safety Analysis


Secondary Electron Blur Randomness as the Origin of EUV Stochastic Defects
by Fred Chen on 01-09-2023 at 10:00 am


Stochastic defects in EUV lithography have been studied over the last few years. For years, the Poisson noise from the low photon density of EUV had been suspected as the root cause [1,2]. EUV distinguishes itself from DUV lithography in that secondary electrons function as intermediary agents in generating reactions in the resist. Therefore, noise or randomness associated with the secondary electrons should also be expected [3,4]. There should be randomness not only in the number of secondary electrons generated, but also in the distances they travel. The latter is effectively a randomness in the blur.

Poisson noise combined with a randomized local blur was studied to see if stochastic defects would arise naturally. Poisson statistics were applied twice on a 1 nm pixel grid, once for the absorbed photon dose of 30 mJ/cm2, and a second time for the secondary electron quantum yield (QY) of 8 per photon. The 50 nm pitch image (from a binary 1:1 line/space grating as the object) on a 0.33 NA EUV system was then convolved with a local Gaussian blur function, where the sigma is a random number in the range [0, sigma_max], and sigma_max, essentially the upper limit of local blur, is itself randomly selected from an exponential distribution. To prevent excessive roughness, the random local blur values are subject to a 3 nm x 3 nm rolling average, and extrapolated at the grid edges.
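
As a rough, hypothetical illustration (not the author’s actual code), the procedure above can be sketched in a few lines of numpy/scipy. The dose, quantum yield, 1 nm pixel grid, 3 nm x 3 nm rolling average and the 5 nm exponential scale cited below are taken from the article; the aerial-image stand-in, the photon-energy conversion and the binned approximation of the spatially varying blur are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

rng = np.random.default_rng(0)

# Parameters stated in the article; everything else is illustrative.
pitch, pixel = 50.0, 1.0      # nm; 1:1 line/space grating on a 1 nm pixel grid
dose_abs = 30.0               # absorbed photon dose, mJ/cm^2
qy = 8                        # secondary electron quantum yield per photon
blur_scale = 5.0              # nm; scale of the exponential sigma_max distribution
photon_mJ = 1.47e-14          # ~92 eV EUV photon expressed in mJ

n = 100                       # 100 nm x 100 nm patch (two pitches)
x = np.arange(n) * pixel

# Aerial-image stand-in for the 0.33 NA system (only 0th and +/-1st orders pass).
aerial = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / pitch))
photons_per_px = dose_abs * (pixel * 1e-7) ** 2 / photon_mJ   # ~20 photons/pixel mean
mean_photons = np.tile(aerial, (n, 1)) * photons_per_px

# Poisson statistics applied twice: photon absorption, then secondary electron yield.
photons = rng.poisson(mean_photons)
electrons = rng.poisson(photons * qy).astype(float)

# Randomized local blur: upper limit sigma_max drawn from an exponential
# distribution, local sigma uniform in [0, sigma_max], then a 3 nm x 3 nm
# rolling average with nearest-value extrapolation at the grid edges.
sigma_max = rng.exponential(blur_scale)
sigma_map = uniform_filter(rng.uniform(0.0, sigma_max, (n, n)), size=3, mode="nearest")

# Spatially varying Gaussian blur, approximated by quantizing sigma into bins.
blurred = np.zeros_like(electrons)
bins = np.linspace(0.0, sigma_map.max(), 8)
which = np.digitize(sigma_map, bins) - 1
for k, s in enumerate(bins):
    mask = which == k
    if mask.any():
        blurred[mask] = gaussian_filter(electrons, max(s, 1e-3), mode="nearest")[mask]

pct = 100.0 * (1.0 - np.exp(-sigma_max / blur_scale))
print(f"sigma_max = {sigma_max:.1f} nm ({pct:.2f}th percentile of the exponential)")
```

Printing the percentile makes the connection to the 46th and 99.75th percentile cases discussed next explicit: 1 - exp(-3.1/5) ≈ 0.46 and 1 - exp(-30/5) ≈ 0.9975.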

The stochastic defect occurrence is found to hinge on the upper limit of the local secondary electron blur. For a ‘typical’ value of 3.1 nm (46th percentile on the exponential distribution), the image was practically unaltered, whereas for a ‘rare’ value of 30 nm (99.75th percentile on the exponential distribution), the image of the feature was essentially disrupted, indicating a microbridge-type (unexposed) defect.

Poisson statistics may be an aggravating factor but are not the true triggers for stochastic defects. The 5 nm scale exponential distribution containing upper limits of blur as high as 30 nm is the key aspect. This would be a natural consequence of the cascade of secondary electrons scattering in the resist, due to the range of energies starting from ~80 eV down to ~0 eV, as well as mean free paths rising sharply at low energies [5,6]. This is different from, e.g., acid diffusion in chemically amplified resists, which is suppressed as acids move further out due to the reduced concentration gradient. A recent disclosure of the EUV-induced hydrogen plasma [7] reveals mean free paths on the order of cm, which, in principle, could significantly worsen the stochastic defects issue. However, how much this new factor is suppressed is still not clear.

References

[1] R. L. Brainard et al., SPIE 5374, 74 (2004).

[2] M. Neisser et al., J. Photopolym. Sci and Tech. 26, 617 (2013). https://www.jstage.jst.go.jp/article/photopolymer/26/5/26_617/_pdf

[3] H. Fukuda, J. Micro/Nanolith. MEMS MOEMS 18, 013503 (2019).

[4] F. Chen, https://www.linkedin.com/pulse/adding-random-secondary-electron-generation-photon-shot-chen

[5] O. Yu et al., J. Elec. Spec. and Rel. Phen., 241, 146824 (2020). https://www.sciencedirect.com/science/article/pii/S0368204818302007

[6] Seah, M.P. and W.A. Dench, Surface and Interface Analysis 1, 2 (1979).

[7] M. van de Kerkhof et al., https://arxiv.org/ftp/arxiv/papers/2105/2105.10029.pdf

This article first appeared in LinkedIn Pulse: Secondary Electron Blur Randomness as the Origin of EUV Stochastic Defects

Also Read:

Predicting EUV Stochastic Defect Density

Electron Blur Impact in EUV Resist Films from Interface Reflection

Where Are EUV Doses Headed?

Application-Specific Lithography: 5nm Node Gate Patterning


Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken
by Robert Maire on 01-09-2023 at 6:00 am


-Samsung off the same chip cliff as Micron- “No skid marks”
-Samsung may be winning at a game of “Capex Chicken”
-No expectation of recovery any time soon – Consumers weak
-2023 a write off- Recovery will be delayed if spending isn’t

Samsung’s worst quarter in 8 years is no surprise

Samsung pre-released its Q4 earnings, calling out a 69% drop in profits. In a cosmic coincidence, Micron said its revenues were down 69% a few weeks ago (profits down even more, into a loss condition).

Obviously memory pricing is a primary culprit along with general overall weakness in consumer and corporate spending.

It’s not like we didn’t see this coming, as we have known about and talked about the memory industry falling off for over 6 months now, so no one should be surprised. Samsung said memory pricing was off in the mid-20% range in the quarter. As we have stated in prior notes, it will likely get worse before it gets better….

Q1 is always the seasonally weakest quarter of the year for chips

Chip demand, and therefore pricing, is always strongest in the fall with back-to-school buying, holiday electronics purchases and the release of the new iPhone. Q1 is always the worst due to the post-holiday slump, Chinese New Year, etc. We have seen this pattern for decades, but when the tide is low it becomes much more apparent, as is the case now. Simply put, Q1 will get uglier.

If capex is not cut, the depression will last much longer and get much worse, potentially fatally wounding some players

It doesn’t take a rocket scientist to understand that cutting supply is the way to support higher pricing. OPEC understands this very well. Aside from just cutting wafer starts, cutting capex is the way to get there, as it also saves more cash when you are likely losing money already (as is the case with Micron).

That is the rational thing to do unless you have other ideas in mind such as gaining market share or hurting competitors by sucking the oxygen (profits) out of the room. If you are big enough, like Samsung, you perhaps can try to price the market for memory where it is barely profitable for you but the smaller players are under water.

But Samsung wouldn’t do something that nasty, would they?

Samsung may win at a game of “Capex Chicken”

“Chicken” is a game where protagonists hurtle directly at one another in souped-up cars, and the first one to swerve away to avoid a collision is the loser.

In the current three-way game of “Capex Chicken” in the memory industry, Samsung is driving a massive 18-wheeler, SK Hynix is driving a dump truck and Micron is the pickup truck with Idaho plates and sacks of Simplot’s potatoes in the back.

Micron, with diminishing funds in the bank, realized a while ago that it couldn’t win this game of chicken and swerved off the collision course by cutting capex to sustenance levels.

A picture is worth a thousand words

Capex comparison of Samsung/SK Hynix/Micron/Intel/TSMC

So far Samsung shows no signs of slowing capex (but has done so in the past). The most rational thing to do on Samsung’s part would be to announce a capex reduction so investors could breathe a sigh of relief and assume that supply will be coming down and pricing will get better eventually. But Nooooooo……

Samsung has put the proverbial pedal to the metal. Whether it’s true or just a head fake to scare off the competition, it may have the same effect of further trashing pricing as buyers expect a flood of memory to come.

It may be the case of stepping on the accelerator to make everyone else think you are crazy and swerve out of the way of the accelerating 18 Wheeler. We do think even Samsung will slow, but not before they do some damage, as even they are not suicidal.

For the moment we will leave Chinese Memory manufacturer Yangtze out of the Capex Chicken game as they are the equivalent of Chinese electric vehicle maker BYD, but without brakes, so they won’t stop and can’t stop, until they hit the brick wall of US sanctions.

Potential Double Whammy on equipment companies

Just when semiconductor equipment companies are putting out the “hair on fire” fire drill from the October China embargo, they could be getting hit with capex reductions in the memory market.

The last time Samsung cut capex, they did so very abruptly and without any warning but only slowed for a short while, a couple of quarters.

Much as with the China situation before it, it may be hard to get a handle on what impact the memory market collapse will have on the major equipment makers: AMAT, LRCX, TEL and KLAC.

Expect more delays and pushouts

More and more fab projects and equipment purchases will be pushed out or delayed. Equipment makers will have to do some fast shuffling of the order books to try to pull in orders to fill voids and changes.

So far, the unusually long backlog going into the downturn has been a buffer that has allowed the equipment companies some breathing room and some wiggle room. But as backlog starts to fade away it will become much more difficult.

Lam had billions of dollars worth of unfinished goods waiting in crates in the field for parts and completion. That buffer of money in the field will cushion the downturn but for only so long. Once those systems are complete and signed off there is not a lot of buffer left to make up for declining orders.

Intel has pushed back nearly all of the fab projects it has announced. TSMC seems to be charging ahead. Samsung completed a major fab not too long ago and for the time being hasn’t said anything about its US projects. Micron simply doesn’t have the financial strength right now as net cash fades, has already slashed capex, and will see any new fabs delayed significantly. GlobalFoundries has no “real” plans for another fab in the US, only Asia.

To EUV or Not To EUV….that is the question for memory makers

One of our major concerns in the memory market is a repeat of the EUV “haves” and the “have nots” we saw in the foundry/logic space. When TSMC went whole hog into EUV before any of its competitors it gained a huge lead in technology over everyone else…. no one else even comes close to this day. Their lead was in fact so overwhelming that GloFo simply gave up, canceled its EUV program and relegated itself to the dustbin of technology. The great Intel now has to source chips from TSMC.

EUV versus no EUV is a very wide technology chasm.

In the memory space, Samsung and SK Hynix are the EUV “haves” and Micron and Yangtze the EUV “have nots”. This will not change for Yangtze due to the embargo and likely not in the near term for Micron due to financials.

Once the smoke clears, the current oversupply of memory chips finally goes away and we are off to the races again, Micron and Yangtze could be left in the dust much as GloFo was, due to their lack of EUV. Maybe ASML, with all its excess cash, should start an EUV leasing or “rent to own” business for those less fortunate.

The stocks

Obviously, even though chip stocks rallied today, the Samsung news should not be taken in any way, shape or form as positive. It is just a confirmation of exactly how bad the chip situation is, and it is getting worse.

There is no calculable end in sight; it could be 3 quarters, 4 quarters, 5 quarters, 2 years or more. It’s just unknowable right now.

We think quarterly reports from the semiconductor industry as well as the equipment industry will likely be similarly ugly. We would not assume, as many investors may, that the worst is over, that this is the bottom and that it’s time to buy….it’s not.

We are not in a one- or two-quarter downdraft; we are in a multi-faceted, deep downturn of a kind we haven’t seen in a decade.

Secular demand and macro trends remain positive at a high level view but the near term (year or more) remains very rough.

We are not attracted to any of the stocks based on a false “the worst is over” rally. We are not at or near a bottom level for the industry, especially going into the seasonally weakest quarter.

About Semiconductor Advisors LLC‌

Semiconductor Advisors is an RIA (a Registered Investment Advisor) specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Micron Ugly Free Fall Continues as Downcycle Shapes Come into Focus

AMAT and Semitool Deja Vu all over again

KLAC- Strong QTR and Guide but Backlog mutes China and Economic Impact

LRCX down from here – 2023 down more than 20% due to China and Downcycle


Podcast EP135: Democratizing HPC & AI
by Daniel Nenni on 01-06-2023 at 10:00 am

Dan is joined by Doug Norton, VP of Business Development for Inspire Semiconductor, an Austin-based high performance computing chip design company.  He is also the President of the Society of HPC Professionals, a vendor neutral, non-profit organization educating and connecting the High Performance Computing user community.

Doug explains the core technology strengths, product plans and mission of Inspire Semiconductor. He outlines the products that will soon be on the market that provide massive compute support for AI-augmented high-performance computing (HPC). Inspire delivers very high performance per watt capability with a flexible, easy to program interface. These qualities will allow Inspire to help many types of companies and applications in their journey to AI-augmented HPC.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Chris Eliasmith and Peter Suma, of Applied Brain Research Inc.
by Daniel Nenni on 01-06-2023 at 6:00 am

Peter Suma and Dr. Chris Eliasmith

Professor Chris Eliasmith (right) is co-CEO and President of Applied Brain Research Inc. Chris is also the co-inventor of the Neural Engineering Framework (NEF), the Nengo neural development environment, and the Semantic Pointer Architecture, all of which are dedicated to leveraging our understanding of the brain to advance AI efficiency and scale. His team has developed Spaun, the world’s largest functional brain simulation. He won the prestigious 2015 NSERC Polanyi Award for this research. Chris has published two books and over 120 journal articles and patents, and holds the Canada Research Chair in Theoretical Neuroscience. He is jointly appointed in the Philosophy and Systems Design Engineering faculties, as well as being cross-appointed to Computer Science. Chris has a Bacon-Erdos number of 8.

Peter Suma (left) is a co-CEO of Applied Brain Research Inc. Prior to ABR, Peter led start-ups in robotics and financial services as well as managed two seed venture capital funds. Peter holds degrees in systems engineering, science, law and business.

What is ABR’s vision?
ABR’s vision is to empower the world’s devices with intelligent, concept-level conversations and decision-making abilities using our innovative Time Series Processor (TSP – https://appliedbrainresearch.com/products/tsp/) chips.

Whether it’s enabling full voice and language processing on a small, low-power chip for consumer electronics and automotive applications, processing radar signals faster and for less power, bringing cloud-sized AI signal processing on to devices, or integrating situational awareness AI to give robots the ability to understand and respond to complex commands to interact with people in a natural and intuitive way, our TSP chip family is poised to revolutionize the way devices sense and communicate.

ABR has been delivering advanced AI R&D projects since 2012 to clients including the US DoD, Intel, BMW, Google, Sony and BP. Some examples of our work include developing the world’s largest functional brain simulation, building autonomous drone controllers for the US Air Force, and building small, powerful voice control systems for cars, appliances and IoT devices. Our TSP chips are our latest innovation as we work to fit more and better AI models into devices to give devices better artificial ‘brains’.

How did ABR begin?
ABR was founded out of Dr. Chris Eliasmith’s lab at the Centre for Theoretical Neuroscience at the University of Waterloo. Applied Brain Research Inc. (ABR) is now a leading brain-inspired AI engineering firm. Our AI engineers and neuroscientists develop technologies to improve AI inspired by work in AI and brain research at the lab.

You mentioned you have some recent developments to share. What are they?
We are very excited to announce that ABR has been admitted to the ventureLab and Silicon Catalyst Incubator programs to support the development of our new Time Series Processor (TSP) family of edge AI chips, which allow cloud-sized speech and signal AI models to run at the edge at low cost, power, and latency. We will be exhibiting at CES in the Canada-Ontario Booth in the Venetian Expo Hall D at booth number 55429 from Jan 5th to Jan 8th, 2023, in Las Vegas. ABR is also a CES Innovation Awards Honoree (https://appliedbrainresearch.com/press/2022-11-21-ces-innovation-awards/) this year.

Tell us about these new chips you are building?
Most electronic devices already do, or soon will have to, utilize AI to keep pace with the smart features in their markets. More powerful AI networks are larger AI networks. Today’s edge processors are too small to run large enough AI models to deliver the latest possible features, and CPUs and GPUs are too expensive for many electronic devices. Cloud AI is also expensive, and for many products a connection cannot be guaranteed to be available and is often not configured correctly by the customer.

What device makers need is a small, inexpensive, low-power chip that can run large AI models to enable the products to lead their respective markets. A very efficient, economical, and low-power way to achieve this is to compress large AI models and design a computer chip that runs these compressed models.

ABR has done exactly this with a new patented AI time-series compression algorithm called the Legendre Memory Unit or LMU. With this compression algorithm we have developed a family of small but very powerful time series processing AI processors that run speech, language and signal inference AI models in devices that previously would have required a cloud server.

This enables more powerful and smarter devices with low power consumption. Batteries last longer, devices converse in full natural language sentences, and sensors process more events with greater accuracy. ABR is enabling a new generation of intelligent devices with our revolutionary low-power, low-cost and low-latency powerful AI Time Series Processor (TSP) for AI speech, language and signal processing.

What are the chips in the ABR TSP family?
There are currently two chips in the ABR TSP family. The Chat-Chip TSP and the Signal-TSP.

The ABR Chat-Chip TSP is the world’s first all-in-one full voice dialog interface, low-power chip. Low-cost and low-power speech chips until now have been limited to keyword-spotting AI models, which are limited to understanding 50 or so words. These chips deliver those oh-so-frustrating speech interfaces in cars, toys and other speech-enabled, sometimes-disconnected, low-BOM-cost devices. ABR’s Chat-Chip TSP replaces those chips at the same cost with a full natural language experience, dramatically upgrading the customer’s experience.

The ABR Chat-Chip enables a full natural language voice assistant in one chip, including noise filtering, speech recognition (ASR), natural language processing (NLP), dialog management, and text-to-speech (TTS) AI. The ABR Chat-Chip TSP can run cloud-sized speech and language AI models in one chip, consuming less than 50 milli-watts of power. This combination of low-cost, low-power and large speech and language AI model processing means the ABR Chat-Chip TSP brings full Alexa-like natural language dialog to all devices including devices that, until now, could never have implemented full language dialog systems due to cost, latency and model size limitations when using existing chips.

Cameras, appliances, wearables, hearables, robots, and cars can all carry on complex, real-time, full language dialog with their users. People can hear better with larger de-noising and attention-focusing AI models in earpieces. People can interact with devices, more privately, instantly, and more hygienically without touching buttons. The many robots in our lives now and the near future can interact verbally without a cloud connection. Devices can also explain to users how to use them, offer verbal troubleshooting, deliver their user manuals verbally, offer hygienic, touchless interfaces, handsfree operation, and market their features to consumers. All of this without needing an internet connection, but able to take advantage of one if present. Voice interfaces delivered locally are more private, as they do not send sound recordings to the cloud, eliminating the risk of leaking background noise and emotional context. As well, local dialog processing is faster, without the latency of a cloud network. Local dialog processing reduces device makers’ costs per device and in the cloud, by removing large portions of the cloud processing needed for voice interfaces and performing the local processing at up to 10x less in-device processor cost.

The ABR Signal-TSP performs AI signal pattern and anomaly detection by running larger AI models, faster and for less power than existing CPUs and GPUs. In a market where larger AI models are typically much more accurate AI models, device makers need inexpensive, low-power, large AI model processors to make their devices smarter than the competition’s. ABR’s Time Series Processors (TSPs) cost just a few dollars but run large AI models that would otherwise require a full CPU or GPU costing between $30 and $200 USD to execute the same workload in real time. ABR’s Signal-TSP typically reduces power consumption by 100x, latency by 10x and cost by 10x over functionally equivalent CPUs or GPUs.

How are the TSP chips programmed?
ABR supports the TSP chips with an API and an AI hardware deployment SaaS platform called NengoEdge (edge.nengo.ai). AI models can be imported from TensorFlow and then optimized for deployment to the TSP and other chips using NengoEdge. With NengoEdge you can pick a network, set various hardware-aware optimizations, and then have NengoEdge train and optimize the network using hardware specific optimizations, including quantization and utilization of any available AI acceleration features, such as the LMU fabric if a TSP is targeted. The result is an optimal packing of the AI network onto the targeted chips to deliver the fastest, lowest-power and most economical solution for delivering the chosen network onto the target hardware. All without buying each chip to test or learning the details of each chip. Users can see the TSP shine on all time series workloads, for example for voice assistants or radar processing AI systems.

Can you tell us more about your LMU compression algorithm?
The Legendre Memory Unit (LMU) was engineered by emulating the algorithm used by time cells in the human brain and specifically how time cells are so efficient at learning and identifying event sequences. The LMU makes the ABR TSP’s large gains in efficiency, performance and cost possible for inferencing all time series and sequence-based AI models. We patented the LMU worldwide in 2019 and announced it NeurIPS in December 2019. We then published the software versions of the LMU on our website and GitHub in 2020. There are many papers now published using the LMU and achieving state of the art results on time series workloads by other groups. We have many clients who have licensed the LMU software running on CPUs, GPUs or MCUs for signal and speech processing in devices such as wearables, medical devices and drone controllers. Many of those are now waiting to move to a TSP chip to extend their battery life and support even larger models at lower power, cost and latency levels.
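
For readers curious about the underlying math, here is a minimal numpy sketch of the linear memory at the heart of the LMU as published (state-space matrices per the open 2019 NeurIPS formulation, with a simple forward-Euler discretization). This is an illustrative approximation only, not ABR’s TSP implementation; the order, window length and toy signal are arbitrary choices.

```python
import numpy as np

def lmu_matrices(order):
    """State-space (A, B) of the Legendre Memory Unit per the published formulation."""
    q = np.arange(order)
    col, row = np.meshgrid(q, q)                  # row = i, col = j
    a = np.where(row < col, -1.0, (-1.0) ** (row - col + 1)) * (2 * row + 1)
    b = ((2 * q + 1) * (-1.0) ** q)[:, None]
    return a, b

def lmu_memory(u, order=8, theta=1.0, dt=0.01):
    """Compress the sliding window [t - theta, t] of signal u into `order` coefficients."""
    a, b = lmu_matrices(order)
    a_d = np.eye(order) + (dt / theta) * a        # forward-Euler discretization of
    b_d = (dt / theta) * b                        # theta * dx/dt = A x + B u
    x = np.zeros((order, 1))
    states = []
    for sample in u:
        x = a_d @ x + b_d * sample
        states.append(x.ravel().copy())
    return np.array(states)

# Toy usage: a noisy sine compressed into 8 coefficients per time step.
t = np.arange(0.0, 4.0, 0.01)
signal = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)
memory = lmu_memory(signal, order=8, theta=1.0, dt=0.01)
print(memory.shape)   # (400, 8): one compact summary of the last second at each step
```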

When will the TSP chips be available?
We are working to have first silicon TSP chips for both the Chat-Chip and Signal designs available by Q1 2024. We are signing pre-orders and design LOIs now. Contact Peter Suma, co-CEO of ABR, at peter.suma@appliedbrainresearch.com or at 1-416-505-8973 to learn how we can supercharge your devices to be the smartest in their class.

Also Read:

CEO Interview: Ron Black of Codasip

CEO Interview: Aleksandr Timofeev of POLYN Technology

CEO Interview: Coby Hanoch of Weebit Nano

CEO Interview: Jan Peter Berns from Hyperstone


The Smartphone Snitch in Your Pocket
by Roger C. Lanctot on 01-05-2023 at 6:00 am


The story in the New York Times came with a sensational headline: “Couple in Car Survive 300-foot Fall into a Canyon.” The canyon in question was Monkey Canyon in the Angeles National Forest outside Los Angeles and the couple survived, so the story goes, thanks to their satellite-connectivity-enhanced iPhone.

This is the kind of story that can transform what consumers think they know about SOS calling. It might lead one to believe that automatic crash notification, of the sort provided by OnStar-like services, is unnecessary in a world populated with iPhones and Google Pixel phones equipped with satellite-based SOS calling capability.

You might buy that idea until you read further in the Times story and learn that the iPhone that “saved” the couple in California was found by them 10 yards away from the car with a smashed screen – though somehow still working. The device prompted the couple that it could call for help with the new satellite functionality.

This couple is clearly lucky to be alive and lucky they had an iPhone. Had they been unconscious or unable to find the phone, the outcome might have been different.

The Times story contrasts with reports from across the Internet of iPhones at amusement parks mistaking rollercoaster rides for car crashes. Of course, users could leave their iPhones behind or turn off the emergency function before getting on a roller coaster – but the iPhone misinformation is likely creating at least minor headaches for emergency call centers.

The increasing promotion of smartphone-based automatic crash notification is unfortunate but expected given the steadily expanding role of smartphones in cars. Every new car today comes with a companion application that allows for locating the car, operating the car remotely, determining the car’s functional status, and monitoring driver behavior.

If you have bought a new car in the past year or two, or intend to in the next year or two, your car will provide you with a driving score that you may use to obtain insurance quotes. Simultaneous with this shift has been an industry-wide embrace of mobile apps by insurers.

Insurers also want to evaluate your driving – for obvious reasons – but they also want you to use your phone to report claims. In fact, leading claims management company CCC Intelligent Solutions tells us that 20% of repairable claims are reported today using photo-based estimates derived from smartphones. More than 80% of consumers prefer using mobile claims management, the company says.

Smartphone-based insurance claims management does sound attractive, particularly from the standpoint of accelerating the claims process. But surely consumers will want to retain control of this process.

New technology from companies such as Sfara and Cambridge Mobile Telematics allows for the smartphone-based detection of low-speed crashes. These are precisely the kinds of vehicular interactions that many consumers prefer not to report to their insurance companies.

As we connect our cars and our insurance companies via mobile apps, we might all take care to ensure that we understand precisely which data is being collected and shared and under what circumstances. It’s not clear to me that the default mode for these applications is “opt out,” but it should be.

Smartphones are amazing devices and it is possible for a smartphone – these days – to be a life-saving tool. But the potential for misuse or abuse of personal data is enough to give any smartphone user pause before jumping into this particular pool.

It is also a heads up that the best form of OnStar-like automatic crash notification is built into the vehicle and able to detect the airbag deployment and gather important data from vehicle sensors to be shared with SOS call centers and first responders. Smartphones simply cannot replace this function.

Also Read:

Regulators Wrestle with ‘Explainability’​

Functional Safety for Automotive IP

Don’t Lie to Me


Formal Datapath Verification for ML Accelerators
by Bernard Murphy on 01-04-2023 at 10:00 am

(Figure: datapath complexity)

Formal methods for digital verification have advanced enormously over the last couple of decades, mostly in support of verification in control and data transport logic. The popular view had been that datapath logic was not amenable to such techniques. Control/transport proofs depend on property verification; if a proof is found in a limited state space it is established absolutely. For large designs the check is often “bounded” – proven correct out to some number of cycles but not beyond. Experienced product teams generally had no problem with this limitation. Most electronic products allow for some threshold – perhaps 2 weeks or a month – before a reset is needed. But for one class of functions, datapaths, we will not tolerate any errors. These require a completely different approach to formal verification, which proves to be very important for math-intensive ML accelerators.

What are datapaths and why are they hard to verify?

A datapath is the part of a compute engine that does data processing, particularly math processing. It typically supports integers, fixed-point numbers and floating-point numbers, with a range of precision options. Operations span from basic arithmetic to exponents and logs, trig, hyperbolic trig and more. Our intolerance of even occasional errors in such functionality was most infamously demonstrated by the Pentium FDIV bug, estimated to occur in only one in 9 billion operations yet considered responsible for a $475M charge for replacement and write-off and a significant black eye for Intel. (In fairness, Intel is now a leader in applying and advancing state-of-the-art formal methods for more complete proving.)

The complexity of datapath verification (DPV) far exceeds the reach of simulation, as illustrated in the figure above. Faster machines and massive parallelism barely dent these numbers. Formal methods should shine in such cases. But property checking quickly runs out of gas because bounded model checkers (like SAT-based engines) can only search out through so many cycles before an exponentially expanding design state space becomes unmanageable. Instead, formal methods for datapaths are based on equivalence checking. Here equivalence is tested not between RTL and gate-level designs, but between RTL and reference C (or C++ or SystemC) models. If the reference model is widely trusted (such as Soft-float), this comparison should provide high confidence in the quality of the implementation.

VC Formal DPV and the Synopsys ARC NPX6 NPU Processor for AI

Synopsys recently hosted a webinar on application of their formal datapath verifier built on these principles. After an intro to the tool from Neelabja Dutta, Shuaiyu Jiang of the ARC group described how he used VC Formal DPV to verify datapath logic for their ARC NPX6 Neural Processing Unit (NPU) IP.

The convolution accelerator example is useful to understand how the ARC team decomposed the verification task for what I think of as an assume/verify strategy though here applied to equivalence checking. The multiply phase is one such sub-component. Here assumptions would be that inputs to the C reference and the RTL implementation must be the same. In place of an output property check the proof defines a “lemma” requiring the outputs are the same. A similar process is run over each component in the convolution accelerator, followed by a top-level check for the assembled sub-proofs.
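
As a purely conceptual, hypothetical sketch of that lemma-based equivalence mindset (this is not VC Formal DPV syntax or the ARC team’s code), consider a trusted reference model and an “implementation” of the multiply phase, compared under the assumption of identical inputs; the lemma is simply that the outputs match for every legal input. A formal tool proves this symbolically over the full bit-widths, whereas the brute-force loop below only works because the toy operands are 4 bits wide.

```python
from itertools import product

WIDTH = 4
MASK = (1 << (2 * WIDTH)) - 1

def reference_mul(a, b):
    """Trusted reference model (plays the role of the golden C/C++ model)."""
    return (a * b) & MASK

def impl_mul(a, b):
    """'Implementation': a shift-and-add multiplier standing in for the RTL."""
    acc = 0
    for i in range(WIDTH):
        if (b >> i) & 1:
            acc += a << i
    return acc & MASK

# Lemma for the multiply sub-component: same inputs (the assumption) imply
# same outputs, checked here by brute force over all legal 4-bit operands.
for a, b in product(range(1 << WIDTH), repeat=2):
    assert impl_mul(a, b) == reference_mul(a, b), (a, b)

print("multiply-phase lemma holds for all 4-bit inputs")
```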

Shuaiyu also talks about application to the ARC Generic Tensor Ops Accelerator (GTOA). Briefly, ML frameworks (TensorFlow, TF-Lite, PyTorch, JAX, etc) work with tensor objects – here 2D image x color depth x sample size for a 4D tensor. These build on large sets of operators somewhat unique to each network (>1000 for TF), impeding portability, uniformity, etc. Following the ISA philosophy, Arm developed and open-released TOSA – Tensor Operator Set Architecture with ~70 basic instructions. TOSA-compliant frameworks and inference platforms should eliminate such inconsistencies. Though Shuaiyu did not comment on this point I assume ARC GTOA is built in line with the TOSA approach. The ARC ALU for these operations is necessarily even more math intensive than the convolution example, making it an even better example for DPV proofs, suitably decomposed.

To learn more

You can register to watch the webinar HERE. Also I suggest you read the second edition of “Finding Your Way Through Formal Verification”. This has been updated in several areas since Synopsys released the first edition five years ago. There is a whole chapter now dedicated to DPV. Well worth a read – I’m a co-author 😀


How to Efficiently and Effectively Secure SoC Interfaces for Data Protection
by Kalar Rajendiran on 01-04-2023 at 6:00 am


Before the advent of the digitized society and computer chips, things that needed protection were mostly hard assets such as jewelry, coins, real estate, etc. Administering security was simple and depended on strong guards who provided security through physical means. Then came the safety box services offered by financial institutions such as banks. The bank vaults themselves were not easily penetrable and the assets remained safe. But the service itself was not of much value if the assets couldn’t be taken out and put back in whenever the bank customer wanted. And therein was the vulnerable aspect of the service, which was at the time and point of access. What if an unauthorized party gets hold of the safety box key and accesses the contents? The institutions offering the service instituted a two-step process. The first step was to authenticate the party who wants to access the box contents. This was accomplished by checking the person’s relevant identity credentials. The second step was to use the appropriate key to open the box itself. To prevent any bad actors within the institution itself from opening the box without the customer being present, a dual-key mechanism was deployed.

Fast forward to the digitized society: other than the house we live in and the vehicles we drive, most of our assets are not physical in nature. Stocks, bonds, intellectual property, fiat currencies, crypto currencies, etc. The list goes on. These assets are secured not by physical means but rather through encryption and storage as zeroes and ones in electronic form around the world. In other words, security is provided through a combination of electronic hardware/software solutions. For every security solution that is deployed, cyber criminals are always working to identify a weakness to break in and steal assets. The goal of digital security mechanisms deployed in electronic systems is efficiency and effectiveness. At a conceptual level, the mechanism is similar to the bank safety box access method: authenticate the user and decrypt the data using valid keys. Given this, how do we protect and secure the interfaces without compromising fast access time for legitimate users of the assets? Low-latency authentication and encryption are key.
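
As a purely software-level, conceptual illustration of the “authenticate the user and decrypt with a valid key” model (a hypothetical sketch, not a representation of any Synopsys interface IP), an AEAD scheme such as AES-GCM performs both steps in a single low-latency pass: decryption succeeds only if the ciphertext and its associated metadata authenticate against the key.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # provisioned secret, e.g. a per-link key
aead = AESGCM(key)

nonce = os.urandom(12)                       # must be unique per message
payload = b"register write: addr=0x40001000 data=0xDEADBEEF"   # illustrative payload
header = b"link-id=7"                        # authenticated but unencrypted metadata

ciphertext = aead.encrypt(nonce, payload, header)

# The receiver authenticates and decrypts in one step; tampering with the
# ciphertext, nonce or header raises InvalidTag and the data is rejected.
plaintext = aead.decrypt(nonce, ciphertext, header)
assert plaintext == payload
```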

Starts with the Design

The added complication with digital security mechanisms is that they have to deal with many different types of interfaces to the data. This is pushing the industry to look at security as an integral part of electronic design architecture, not as an afterthought. The block diagram below showcases the various types of data interfaces in an electronic system.

Securing all of these interfaces at the hardware level and implementing a zero-knowledge architecture, so that the data is encrypted and can’t be used maliciously, is critical. To add complexity to the mix, the interface standards bodies are regularly upgrading existing protocol specifications and bringing out new interface standards as well. These changes need to be implemented in designs either at the controller level, the PHY level or both, without compromising throughput and latencies.

The Demand for Secure Interface Solutions Keeps Growing

As an example, while the autonomous vehicle market is still in its early stage, it has already exposed security risks that are being addressed by today’s specifications used in cars for networking, ADAS camera/sensor connectivity, and displays. As advances in various fields of technology and markets happen, better security implementations will be needed. For example, quantum computing will have the capability to break today’s public key algorithms. Interface standards will need to adapt with quantum-safe algorithms over the coming years.

Implementing data security will continue to be on the top of the list of SoC designers’ tasks.

Synopsys Secure Interfaces

Synopsys offers the entire spectrum of interfaces that designers need for a variety of different applications. Their Interface IP products are pre-verified solutions that include silicon-proven Synopsys controllers integrated with security features, offering reduced risk and optimal security, low latency and area without compromising on performance. This makes it easier for SoC designers to address and implement data protection and security for quick time-to-market.

For more details on the following interfaces, visit the interfaces portfolio page.

Summary

Incorporating security into SoCs is a fundamental requirement for complying with international laws and regulations and satisfying privacy and data protection requirements of electronic systems users. Synopsys offers the industry’s broadest secure interfaces built for various applications such as HPC, Mobile, Automotive and IoT. For more details on Synopsys’ secure interface IP products, visit the product page.

Also Read:

Synopsys Crosses $5 Billion Milestone!

Configurable Processors. The Why and How

New ECO Product – Synopsys PrimeClosure


9 Trends of IoT in 2023
by Ahmed Banafa on 01-03-2023 at 6:00 am


The year 2023 will hit all 4 components of the IoT model:

  • Sensors,
  • Networks (Communications),
  • Analytics (Cloud)
  • Applications

With different degrees of impact.

IoT Trend 1: Growth in Data and Devices with More Human-Device Interaction

By the end of 2019 there were around 3.6 billion devices actively connected to the Internet and used for daily tasks. The introduction of 5G will open the door for more devices and more data traffic.

You can add to this trend the increased adoption of edge computing, which will make it easier for businesses to process data faster and closer to the points of action.

IoT Trend 2: AI a Big Player in IoT (again)

Making the most of data, and even understanding on a basic level how modern infrastructure functions, requires computer assistance through artificial intelligence.

The major cloud vendors, including Amazon, Microsoft, and Google, are increasingly looking to compete based on their AI capabilities.

Various startups hope to increase their market share through AI algorithms able to leverage machine learning and deep learning, allowing businesses to extract more value out of their ever-growing volumes of data.

Artificial intelligence is the fundamental ingredient needed to make sense of the vast amount of data collected these days, and increase its value for business. AI will help IoT data analysis in the following areas:

  • data preparation,
  • data discovery,
  • visualization of streaming data,
  • time series accuracy of data,
  • predictive and advance analytics,
  • real-time geospatial and location (logistical data)

IoT Trend 3: Voice User Interface (VUI) will be a Reality

It’s a battle among industry leaders who would like to dominate the market of IoT at an early stage.

Digital assistant devices, including Alexa, Siri and Google Assistant, are the future hubs for the next phase of smart devices, and companies are trying to establish “their hubs” with consumers, to make it easier for them to keep adding devices with less struggle and no frustration.

Voice represents 80% of our daily communications. Taking a chapter from sci-fi movies, talking to robots is the common way of communicating: R2D2, C-3PO and Jarvis, to name a few.

The use of voice for setting up devices, changing those setups, giving commands and receiving results will be the norm, not only in smart homes and factories but in everything in between, such as cars and wearables.

IoT Trend 4: More Investments in IoT

IoT’s indisputable impact has lured, and will continue to lure, more startup venture capitalists towards highly innovative projects in hardware, software and services.

Spending on IoT will hit 1.4 trillion dollars by 2023.

IoT is one of the few markets that have the interest of the emerging as well as the traditional venture capitalists.

The spread of smart devices and customers’ increasing dependency on them for many of their daily tasks will add to the excitement of investing in IoT startups.

Customers will be waiting for the next big innovation in IoT—such as

  • Smart mirrors that will analyze your face and call your doctor if you look sick,
  • Smart ATMs that will incorporate smart security cameras,
  • Smart forks that will tell you how to eat and what to eat,
  • Smart beds that will turn off the lights when everyone is sleeping

IoT Trend 5: Finally, a Real Expansion of Smart IoT

IoT is all about connectivity and processing, and nothing is a better example than smart cities; but smart cities have been in a bit of a holding pattern recently.

Smart sensors around the neighborhood will record everything from walking routes, shared car use, building occupancy, sewage flow, and temperature choice 24/7 with the goal of creating a place that’s comfortable, convenient, safe, and clean for those who live there.

Once the model is perfected, it could be the model for other smart neighborhoods and eventually smart cities. The potential benefits for cities, however, make IoT technology especially compelling.

Cities of all sizes are exploring how IoT can lead to better efficiency and safety, and this infrastructure is increasingly being rolled out around the world.

Another area where smart IoT is spreading is the auto industry, with self-driving cars becoming a normal occurrence in the next few years; today, tons of vehicles already have a connected app that shows up-to-date diagnostic information about the car.

This is done with IoT technology, which is the heart of the connected vehicle.  Diagnostic information is not the only IoT advancement that we will see in the next year or so. Connected apps, voice search, and current traffic information are a few other things that will change the way we drive.

IoT Trend 6: The Rise of Industrial IoT & Digital Twin Technology

An amalgamation of technologies is pushing this new techno-industrial revolution, and IoT plays a big part in making manufacturing more efficient, less risky, and more profitable.

Industrial IoT brings enhanced efficiency and productivity through data integration and analysis in a way that isn’t possible without an interconnected manufacturing process.

Another notion that is gaining popularity is “digital twin” technology. Through its use, organizations can create a clear picture of how their IoT devices are interacting with the manufacturing process.

This gives keen businesses insight into how the life cycle of their machines operates, and allows them to predict changes that may be needed ahead of time.

According to a Gartner survey, 48% of smart manufacturing adopters have made plans to make use of the digital twin concept.

IoT Trend 7: More Movement to the Edge

Edge computing is a technology that distributes the processing load and moves it closer to the edge of the network (the sensors, in the case of IoT).

The benefits of using edge (fog) computing are very attractive to IoT solution providers.

Some of these benefits allow users to:

  • Minimize latency,
  • Conserve network bandwidth,
  • Operate reliably with quick decisions,
  • Collect and secure a wide range of data,
  • Move data to the best place for processing, with better analysis and insights from local data.

Edge computing has been on the rise in recent years, but the growing scope of IoT technology will make this move even more pronounced. Two factors are leading this change:

  • Powerful edge devices in various form factors are becoming more affordable
  • Centralized infrastructure is becoming more stressed.

Edge computing also makes on-device AI a realistic proposition, as it allows companies to leverage real time data sets instead of having to sift through terabytes of data in a centralized cloud in real time. Over the coming years and even decades, it’s likely that tech will shift to a balance between the cloud and more distributed, edge-powered devices.

Hardware manufacturers are building edge-specific infrastructure designed to be more physically rugged and secure, and security vendors will start to add endpoint security solutions to their existing services to prevent data loss, give insights into network health and threat protection, and include privileged user control and application whitelisting and control. This will help in the fast adoption and spread of edge computing implementations by businesses.

IoT Trend 8: More Social, Legal, and Ethical Issues

IoT devices are a largely unregulated new technology, and IoT will inevitably find itself facing social and legal questions in the near future. This is particularly relevant for data collected by these devices, which may soon fall under the umbrella of the General Data Protection Regulation (GDPR). A regulation governing the handling of personal data and privacy in the European Union, the GDPR extends its reach beyond the European region. Any business that wants to operate successfully within the EU will need to comply with the guidelines laid out in its 88-page document.

Security issues are essential when it comes to legal regulation of personal data. Development teams can ensure the required level of security and compliance on various levels, including data encryption, active consent, various means of verification and other mechanisms. Their goal is to collect data legitimately and keep its accessibility, processing, and storage to the minimum dictated by the software product.

IoT Trend 9: Standardization Still a Problem

Standardization is one of the biggest challenges facing the growth of IoT—it’s a battle among industry leaders who would like to dominate the market of IoT at an early stage. But what we have now is a case of fragmentation. One possible solution is to have a limited number of vendors dominating the market, allowing customers to select one and stick to it for any additional connected devices, similar to the case of operating systems we now have with Windows, Mac and Linux, for example, where there are no cross-platform standards.

To understand the difficulty of standardization, we need to deal with all three categories in the standardization process:

  • Platform,
  • Connectivity,            
  • Applications.

In the case of the platform, we deal with UX/UI and analytic tools, while connectivity deals with the customer’s contact points with devices, and finally, applications are the home of the apps that control, collect and analyze data.

All three categories are inter-related and we need them all; missing one will break the model and stall the standardization process. There is no way to solve the problem of fragmentation without a strong push by organizations like IEEE, or by government regulations, to have common standards for IoT devices.

Ahmed Banafa, author of the books: Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing


Also Read:

Microchips in Humans: Consumer-Friendly App, or New Frontier in Surveillance?

5G for IoT Gets Closer

Webinar: From Glass Break Models to Person Detection Systems, Deploying Low-Power Edge AI for Smart Home Security