Podcast EP141: The Role of Synopsys High-Speed SerDes for Future Ethernet Applications
by Daniel Nenni on 01-27-2023 at 10:00 am

Dan is joined by Priyank Shukla, Staff Product Manager for the Synopsys High-Speed SerDes IP portfolio. He has broad experience in analog and mixed-signal design, with a strong focus on high-performance compute, mobile, and automotive SoCs, and he holds a US patent on low-power RTC design.

Dan explores the use of high-speed SerDes with Priyank. Applications that enable high-speed Ethernet for data center and 5G systems are discussed. The performance, latency, and power requirements for these systems are quite demanding; how Synopsys' advanced SerDes IP is used to address these challenges is also discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CTO Interview: John R. Cary of Tech-X Corporation
by Daniel Nenni on 01-27-2023 at 6:00 am

John R. Cary is professor of physics at the University of Colorado at Boulder and CTO of Tech-X Corporation. He received his PhD from the University of California, Berkeley, in Plasma Physics.  Prof. Cary worked at Los Alamos National Laboratory and the Institute for Fusion Studies at the University of Texas, Austin, prior to joining the faculty at the University of Colorado. At the University of Colorado, Professor Cary has served as department chair, center director, and faculty mentor.

In 1994, he co-founded Tech-X Corporation, which concentrates on computational solutions for a wide variety of science and engineering applications. Prof. Cary has researched multiple areas related to beam and plasma physics and the electromagnetics of structures. He is a fellow of the American Physical Society, past chair of its Division of Plasma Physics, the 2015 recipient of the John Dawson Prize for Numerical Simulation of Plasmas, the 2016 recipient of the NPSS Charles K. Birdsall Award for Contributions to Computational Nuclear and Plasma Sciences, and the recipient of the 2019 IEEE Nuclear and Plasma Sciences Section Particle Accelerator Science and Technology Award.

What is the Tech-X backstory? 
The folks here at Tech-X have been working in high-performance computing, specifically as it relates to physical simulation, since the early '90s. Distributed memory parallelism, where a calculation is split effectively over many separate computers, was in its infancy. Tech-X was bringing the power of parallelism to plasma computations. Specifically, we excelled at simulating laser wakefield acceleration, in which electrons are accelerated to high energies by the wake fields that incident laser pulses generate in plasmas. This work supported experiments at multiple national laboratories, fulfilling their needs for very large simulations. Following these successes, Tech-X branched out over many areas of plasma physics, including magnetic fusion. We further broadened our capabilities to include electromagnetics of structures, such as cavities, antennas, and photonics.

In the process, Tech-X built an experienced cadre of high-performance computing experts. These experts constructed a software stack for efficient computational scaling, which means that the computation does not bog down when performed on a large number of processors. This software, VSim, is licensed for use on our customers' own hardware. In addition, Tech-X engages in consulting projects and partnerships staffed by its 30 employees and multiple long-term consultants.

More recently Tech-X has devoted increasing effort to democratizing High-Performance Computing (HPC), by building out an easy-to-use Graphical User Interface. Known as Composer, it allows users to build and run simulations as well as analyze and visualize the results.  Composer abstracts the process of job submission on HPC clusters so that to the user it is just like working on a desktop.  Tech-X is also developing a cloud strategy, so expect more announcements later this year.

What areas are you targeting for future growth?
Our mission is to provide specific capabilities in two areas. We currently provide VSimPlasma software and consulting services for the modeling of plasmas in semiconductor chambers. We are also in the early phases of productizing software for modeling of nano-photonics for photonic integrated circuits (PICs). Both of these applications present unique challenges because the feature sizes of interest are small compared to the overall system size, which makes them computationally intensive: the range of feature scales is large, requiring fine resolution over a large region. For example, in semiconductor chambers there are small features at the wafer surface, but even if the wafer is uniform the plasma forms sheaths, which represent drops in the electric potential at the edge of the wafer. These sheaths are much smaller than the size of the chamber.

In nano-photonics, the PIC components being designed are typically measured in microns, but manufacturing causes roughness in the sidewalls that is much smaller, on the order of nanometers. In either of these applications the grid must be very fine to resolve these small features for accurate results, and it must also span a large region, leading to the requirement for many billions, or even trillions, of cells. This is where Tech-X software excels.

What makes VSimPlasma software unique?
Plasma chambers involve many different spatial scales, from the scale of the chamber itself down to the details of the plasma close to the wafer. The larger scales have traditionally been modeled with fluid codes. However, to compute the details of the plasma sheath (and consequently the distribution of particles hitting the wafer, which determines, e.g., whether one can etch narrow channels sufficiently deep) one must use a particle-in-cell (PIC) method, as provided by VSimPlasma from Tech-X. For such problems VSimPlasma is the leader due to its extensive physics, including its capability to handle large sets of collisions, its many electromagnetic and electrostatic field solvers, and its multiple algorithms for particle-field interactions. VSim also has the ability to model particle-surface interactions, including the generation of secondary particles and the reactions of particles on the surface. These are crucial for accurately modeling plasma discharges. In semiconductor etching, deep vias require the ions to hit the wafer at a near-vertical angle. VSim models that critical distribution extremely well, and we continue to refine our code with each release, using feedback from our customers in the semiconductor industry.

Another unique aspect of VSim in plasma modeling is how it fits into commercial workflows. It has an easy-to-use interface and integrates with CAD. VSim further allows the development of analyzer plugins so that the user can analyze both the fields and the particles within the plasma.

What keeps your customers up at night?
As everyone knows, moving to smaller critical dimensions is making the problems harder and driving up capex, which causes all kinds of business problems.  There are too many variables in advanced plasma processing to optimize with a pure experimental approach.  Semiconductor companies are augmenting prototyping with simulation. Plasma etch is a difficult area involving many variables, including geometries of the etch, wafer and chamber, the plasma energy and chemistry in the chamber, and the wafer surface and etch profile. Our semiconductor customers’ interests are to reduce time and cost by reducing experimental iterations when tackling an advanced process etching recipe. The ROI from use of simulation is measured in reduced time to production, development cost and machine utilization.

How do customers engage with you?
There are several ways our customers engage with us including directly phoning or emailing our sales team or requesting an evaluation license through our website.  An application engineer (AE) will then contact the customer to determine how our software might best fit their needs.  The AE sets up the download and walks the customer through the software.  Several of our customers have independently set up simulations using the software on their own.  VSim comes with a rich set of examples for modeling of plasmas, vacuum electronics devices, and electromagnetics for antennas and cavities.  In addition, we provide various levels of consulting services, ranging from an AE setting up your problem and guiding you to the solution, to an AE completely solving your problem, including data analysis, and then providing the direct result.

What is next for Tech-X?
We have a number of skunk-works projects under way that will bring exciting new capabilities to plasma and photonics modeling. We are looking at GPU and cloud computing with the aim of making computations fast to reduce development time, the number of fabrication cycles, and the need for capital expenditures. We expect to have improved capabilities for modeling the latest plasma etch reactors, which will be unique in the industry. We have an upcoming webinar demonstrating our current capabilities, and will soon have a series of webinars that showcase our latest features and plans.

Webinar: Learn More About VSim 12.0
Built on the powerful Vorpal physics engine that researchers and engineers have used for over 20 years, VSim 12 offers new reaction capabilities, user examples, and numerous improvements designed to increase user efficiency.

Also Read:

Understanding Sheath Behavior Key to Plasma Etch


ASML – Powering through weakness – Almost untouchable – Lead times exceed downturn
by Robert Maire on 01-26-2023 at 10:00 am

-Demand far exceeds supply & much longer than any downturn
-Full speed ahead-$40B in solid backlog provides great comfort
-ASP increase shows strength- China is non issue
-In a completely different league than other equipment makers

Reports a good beat & Guide

Revenues were Euro6.4B, with system sales making up Euro4.7B of that. EPS was Euro4.60. All beat expectations. 18 EUV systems were shipped and 13 systems recognized. Most importantly, order intake was Euro6.3B, of which EUV was Euro3.4B. In essence, ASML's book-to-bill ratio remains very strong at better than 1.3.
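As a sanity check on that ratio, here is the arithmetic using only the figures quoted above (a back-of-envelope sketch; comparing system orders to system sales is our reading of the "better than 1.3" claim):

```python
# Back-of-envelope book-to-bill from the quarter's figures (EUR billions).
system_sales = 4.7   # system revenue recognized in the quarter
order_intake = 6.3   # new orders booked in the quarter

book_to_bill = order_intake / system_sales
print(f"Book-to-bill ~ {book_to_bill:.2f}")   # ~1.34, i.e. "better than 1.3"
```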

ASML has a huge, multi-year backlog of Euro40.4B, which keeps them very warm at night. Reassuringly, the backlog continues to build.

Backlog timeframe well exceeds any possible downturn length

With Euro40.4B in backlog and continuing strong orders, ASML has a multi-year cushion. The bottom line is that customers never get off the order queue, and the queue keeps growing in length.

Customers understand the long-term growth model of semiconductors and are clearly ignoring short-term weakness, whether it's 6 months, a year, or more. ASML will ride over any expected weak period.

Other equipment makers, who compete for business with quick lead times, are not so fortunate. They will revert to a "turns" business and see orders fall off, as customers can easily get out of the order queue and get back on when the industry picks up again.

ASP increases demonstrate strength

ASML mentioned that its EUV ASPs are increasing from Euro160M to Euro165-170M, which further indicates the level of strength that being a virtual monopoly brings. ASML is the only EUV game in town and can price to market. DUV pricing has also increased. Both increases are based on productivity parameters.

We highly doubt that other semiconductor equipment segments are able to push through price increases in the face of falling orders, even with increased performance, which they usually give away for free.

This is one of the keys that separates ASML from others in the semi equipment market and puts them in a league of their own. ASML is looking at an up 2023 while others are talking about WFE being down 20%.

This also implies that if lithography spend is actually up in 2023, non-litho spend is actually down more than 20%, further separating ASML from other semi equipment makers.

Full speed ahead with high NA and production capacity increases

ASML has been under a lot of pressure to increase production and has spent a huge amount of both money and effort with suppliers, most notably Zeiss, to increase production to an expected 60 EUV and 375 DUV systems in 2023.
ASML will continue to spend, as the job is not over: they need more capacity. Another major expense is the high-NA product, which is seeing a large development spend in advance of any revenue.

This all suggests that ASML's results might be even better without the "headwinds" of the additional spend they currently carry. Clearly the spend is relatively minor: with a Euro7.4B cash balance and strong earnings, they are very comfortably awash in cash.

Results will still vary as to mix and lumpiness

Given the high ASP of systems and the differential between ASPs of DUV & EUV, we expect lumpiness in quarters depending upon what is shipped in which quarter and where customer near-term demand goes. ASML is expecting a slightly weak Q1, which appears to be due primarily to mix and normal lumpiness; we are not in the least concerned.

China remains a non-issue as semiconductors are a global zero sum game

We have repeated many times that the semiconductor industry is a zero-sum game. That is, chip demand remains the same regardless of where the chips are made. If chips are not made in China (due to the embargo), they will be made elsewhere by others, and those others will need the same litho tools that China would have otherwise bought. The only impact is that China is kept out of the leading edge that other countries have access to.

ASML will still sell the same number of EUV tools just shipping them to other places. Although politically sensitive and much talked about, the actual impact on ASML is near zero.

ASML remains above the near term fray maintaining focus on long term

Management, while certainly cautious about near-term issues, is rightly more focused on long-term issues of capacity and technology. This 5-to-10-year focus is very appropriate given the business that they are in. EUV took decades to develop as ASML struggled through successive advances, but the company was rewarded in the long term for its dedication to the cause of technology. Building capacity is a long-term and costly struggle, as is technology, and ASML is investing for the future.

The stocks

We continue to view ASML's valuation as well above the rest of the semi equipment makers, in a league of their own. They are also unique in that their view is of an up year, versus everyone else's expectation of a down year.

Although ASML talked about a potential recovery of the industry in H2 2023, we are a bit more cautious given the depth of this downturn being one of the worst we have seen in a long time. But none of this matters to ASML given their horizon.

We would remain an owner/buyer of ASML stock but would remain light on the rest of the group, especially LRCX and AMAT, given their shorter-term equipment model in the face of widespread weakness coupled with China issues, a double whammy that ASML does not face.

As with the lenses and focal lengths that ASML knows so well, being focused on the long term means the short term is out of focus and less relevant to them…

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and have been involved with more transactions than any other financial professional in the space. We provide research, consulting, and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Where there’s Smoke there’s Fire: UCTT ICHR LRCX AMAT KLAC Memory

Samsung Ugly as Expected Profits off 69% Winning a Game of CAPEX Chicken

Micron Ugly Free Fall Continues as Downcycle Shapes Come into Focus


Designing a ColdADC ASIC For Detecting Neutrinos
by Kalar Rajendiran on 01-26-2023 at 6:00 am

The DUNE Experiment

Cliosoft recently hosted a webinar where Carl Grace, a scientist from Lawrence Berkeley National Laboratory (LBNL), talked about a cutting-edge project for detecting neutrinos: the Deep Underground Neutrino Experiment (DUNE). Many of us know what a neutron is, but what is a neutrino? Before we get to that, here is some background on Cliosoft and some insights into LBNL.

Cliosoft: The company has been serving the semiconductor industry for more than 25 years. Its product offerings fall into three main categories: hardware design data management, IP reuse, and highlighting differences between two designs directly on a schematic or layout. The relevance of Cliosoft to the DUNE project is directly tied to Cliosoft's Hardware Design Data Management tool suite. This tool suite empowers multi-site design teams to efficiently collaborate on complex hardware designs, and the DUNE project is quite a complex one with demanding requirements. The project involves collaboration among three national labs: LBNL, Fermilab, and Brookhaven National Laboratory.

LBNL: Many of us have heard of LBNL but may not be aware of its expertise, excellence, and diversity. With 3,500 employees and 1,000 students, it is much larger than many would have imagined, and it hosts 1,750 visiting researchers. With this much brain power directed at the physical sciences, computing, biosciences, earth and energy sciences, and material and nanotechnologies, it is the most diverse US National Laboratory. It offers the following user facilities for researchers to tap into: the Advanced Light Source, the National Energy Research Scientific Computing Center, the Energy Sciences Network, the Joint Genome Institute, and the Molecular Foundry, including the National Center for Electron Microscopy. The Lab has 14 Nobel prizes to its credit, the most recent in chemistry won by Prof. Jennifer Doudna (co-discoverer of CRISPR gene editing).

Whether one is in particle physics and/or semiconductors, there is something of interest and value in this webinar. To watch this on-demand webinar, go here.

What is a Neutrino and why study them?

Neutrinos are fundamental particles that have very low mass, travel close to the speed of light, and interact only through gravity and the weak nuclear force. Neutrinos could help answer questions such as: why is there matter in the universe, do isolated protons decay, and how can we witness the birth of a black hole?

How to detect Neutrinos?

Neutrinos can travel at almost the speed of light and can pass through 60 light years of water on average before interacting with any matter. This makes them very difficult to detect. The solution is the DUNE detector, the largest cryogenic particle detector ever made, which can detect neutrinos from an intense neutrino beam directed toward it from 800 miles away. A tight neutrino beam is produced when protons from an accelerator at Fermilab hit a target. This tight neutrino beam is sent over 800 miles through solid underground rock to a detector sitting one mile under the ground. This setup prevents cosmic rays from having any impact on the experiment. The detector itself is an extremely large tank filled with liquid argon. Liquid argon, being very dense, provides a lot of targets for the neutrinos to potentially hit. Being chemically inert, argon does not cause any chemical reactions that would disturb the experiment and pollute the collected data.

 

When a neutrino interacts

When a neutrino interacts with an atom of argon, the atom is ionized. The freed electrons form a charge that drifts through the liquid argon in the tank. The tank is placed under an enormous electric field that drifts this charge onto planes of wires. When the charge reaches those wires, it induces very small currents that can then be recorded. Reading out and digitizing these tiny currents induced in the wires is a key part of the experiment. A key function of the detection electronics is analog-to-digital conversion (ADC). Immersing the detector electronics in liquid argon greatly reduces the cabling capacitance, allowing lower achievable noise, and serves as an enabling technology for the DUNE project.

ColdADC Requirements

  • 2 MS/s sampling rate per channel, 16 channels, at 12-bit resolution
  • Sub-LSB noise performance
  • 30-year reliability in a cold environment (−184°C)
  • Operation at both room temperature and cryogenic temperature for testing purposes

Readily available off-the-shelf ADCs cannot meet the above requirements. Custom ADCs need to be built and integrated into ASICs implementing the detection electronics.
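For a sense of scale, here is a quick sketch of the raw data rate those requirements imply (illustrative arithmetic only; it ignores framing and protocol overhead, which the real readout certainly adds):

```python
# Raw output rate implied by the ColdADC requirements above
# (illustrative; ignores framing, headers, and protocol overhead).
channels    = 16        # channels per ColdADC
sample_rate = 2e6       # samples per second per channel
resolution  = 12        # bits per sample

bits_per_sec = channels * sample_rate * resolution
print(f"Per-ASIC raw rate: {bits_per_sec/1e6:.0f} Mb/s")   # 384 Mb/s

# Scaled to the full detector (40,000 ASICs, per the summary below):
print(f"Detector-wide: {40_000 * bits_per_sec / 1e12:.1f} Tb/s")  # ~15.4 Tb/s
```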

Collaboration among teams from the three labs

A small team from each of LBNL, Fermilab, and Brookhaven National Laboratory collaborated to design the detection electronics for the DUNE project. With different pieces of the required design IP developed by geographically separated teams, the Cliosoft data management solution enabled automated, design-aware, surgical data synchronization. This allowed fine-grained access controls for each participating national lab and optimized network storage at each participating site.

Summary

The three-lab team has successfully developed the ColdADC ASICs to instrument the neutrino detector immersed in liquid argon. Approximately 40,000 ColdADC ASICs will be deployed at the DUNE Far Detector complex, immersed in liquid argon. Each ColdADC will read out 16 channels, for a total of 640,000 wire channels. The detector electronics can be operated over a 250°C range and have achieved better noise performance than the commercial ADC solution used in the Short Baseline Neutrino Detector (SBND) experiment. The DUNE experiment will be conducted over a 30-year period.

Also Read:

Design to Layout Collaboration Mixed Signal

Webinar: Beyond the Basics of IP-based Digital Design Management

Agile SoC Design: How to Achieve a Practical Workflow


10 Impactful Technologies in 2023 and Beyond
by Ahmed Banafa on 01-25-2023 at 10:00 am

There are many exciting technologies expected to shape the future. The following are some that will impact our lives over the coming 5 years, at different levels and depths:

Generative AI, also known as generative artificial intelligence, is a type of #AI that is designed to generate new content or data based on a set of input parameters or a sample dataset. This is in contrast to traditional AI, which is designed to analyze and interpret existing data.

There are several different types of generative AI, including generative models, which use machine learning algorithms to learn the underlying patterns and structures in a dataset, and then generate new data based on those patterns; and generative adversarial networks (GANs), which are a type of machine learning model that consists of two neural networks that work together to generate new data.
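To make the two-network idea concrete, here is a minimal GAN training step in PyTorch. This is a toy sketch with made-up dimensions and random stand-in data, not any production model:

```python
# Minimal GAN sketch: generator vs. discriminator (toy dimensions, random data).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise vectors to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores samples as real (1) or generated (0).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(32, data_dim)   # stand-in for a batch of real training data

# Discriminator step: learn to separate real samples from generated ones.
fake = G(torch.randn(32, latent_dim))
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: learn to produce samples the discriminator calls real.
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```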

Generative AI has a wide range of potential applications, including image and video generation, music composition, and natural language processing. It has the potential to revolutionize many industries, including media and entertainment, advertising, and healthcare.

A voice user interface (VUI) is a type of user interface that allows people to interact with devices, applications, or services using voice commands. VUIs are becoming increasingly popular due to their ease of use and the increasing capabilities of natural language processing (NLP) technology, which enables devices to understand and respond to human speech.

VUIs are used in a variety of applications, including smart speakers, virtual assistants, and home automation systems. They allow users to perform tasks or access information simply by speaking to the device, without the need for manual input or navigation.

#VUIs can be designed to understand a wide range of commands and queries, and can be used to control various functions and features, such as setting reminders, playing music, or turning on the lights. They can also be used to provide information and answer questions, such as providing weather updates or answering queries about a particular topic.

Edge computing is a distributed computing paradigm that brings computing and data storage closer to the devices or users that need it, rather than relying on a central server or cloud-based infrastructure.

In edge computing, data is processed and analyzed at the edge of the network, where it is generated or collected, rather than being sent back to a central location for processing. This can help to reduce latency, improve performance, and increase the scalability of systems that require real-time processing or decision-making.

Edge computing is used in a variety of applications, including the Internet of Things (IoT), where it allows devices to process and analyze data locally, rather than sending it over the network to a central server. It is also used in applications that require low latency, such as video streaming and virtual reality, as well as in industrial and military applications where a central server may not be available.

5G networks use a range of technologies and frequencies to provide coverage, including millimeter wave bands, which are high-frequency bands that can provide very fast data speeds, but have limited range. They also use lower-frequency bands, which can provide wider coverage but lower data speeds.

#5G networks are expected to offer data speeds that are much faster than previous generations of mobile networks, with some experts predicting speeds of up to 10 gigabits per second. They are also expected to offer lower latency, or the time it takes for a signal to be transmitted and received, which is important for applications that require real-time responses, such as video streaming and online gaming.

5G technology is still in the early stages of deployment, and it is expected to roll out gradually over the coming years. It is likely to be used in a variety of applications, including mobile devices, IoT devices, and a wide range of other applications that require fast, reliable connectivity.

A Digital Twin is a virtual representation of a physical object or system. It is created by using data and sensors to monitor the performance and characteristics of the physical object or system, and using this data to create a digital model that reflects the current state and behavior of the physical object or system.

Digital twins can be used in a variety of applications, including manufacturing, healthcare, and transportation. In manufacturing, for example, a digital twin can be used to simulate the performance of a production line or equipment, allowing manufacturers to optimize their operations and identify potential issues before they occur. In healthcare, digital twins can be used to model the body or specific organs, allowing doctors to better understand the patient’s condition and plan treatment.

Digital twins are created using a combination of sensors, data analytics, and machine learning techniques. They can be used to visualize and analyze the behavior of the physical object or system, and can be used to optimize performance, identify issues, and make decisions about how to improve the physical object or system.
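As a concrete illustration, a digital twin can be as simple as a virtual state model kept in sync with sensor readings and queried for decisions. Here is a minimal sketch, with hypothetical names and thresholds (a real twin would use richer physics models and ML):

```python
# Minimal digital-twin sketch: a virtual model synced from sensor readings
# and used to flag anomalies (all names and thresholds are hypothetical).
from dataclasses import dataclass, field

@dataclass
class MotorTwin:
    temperature_c: float = 25.0
    rpm: float = 0.0
    history: list = field(default_factory=list)

    def update(self, reading: dict) -> None:
        """Sync the virtual state from a physical sensor reading."""
        self.temperature_c = reading["temperature_c"]
        self.rpm = reading["rpm"]
        self.history.append(reading)

    def needs_attention(self) -> bool:
        """Simple analytics on the mirrored state (a real twin would use ML)."""
        return self.temperature_c > 90.0 or self.rpm > 5000.0

twin = MotorTwin()
twin.update({"temperature_c": 95.2, "rpm": 3100.0})
if twin.needs_attention():
    print("Alert: physical motor likely overheating")
```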

Quantum Computers are different from classical computers, which use bits to store and process information. Quantum computers can perform certain types of calculations much faster than classical computers, and are able to solve certain problems that are beyond the capabilities of classical computers.

One of the key benefits of quantum computers is their ability to perform calculations that involve a large number of variables simultaneously. This makes them particularly well-suited for tasks such as optimization, machine learning, and data analysis. They are also able to perform certain types of encryption and decryption much more quickly than classical computers.

Quantum computing is still in the early stages of development, and there are many challenges to overcome before it becomes a practical technology. However, it has the potential to revolutionize a wide range of industries, and is likely to play an increasingly important role in the future.

A Chat Bot is a type of software that is designed to engage in conversation with human users through a chat interface. Chat bots are typically used to provide information, answer questions, or perform tasks for users. They can be accessed through a variety of platforms, including messaging apps, websites, and social media.

There are several different types of chat bots, including rule-based chat bots, which are designed to respond to specific commands or queries; and artificial intelligence (AI)-powered chat bots, which use natural language processing (NLP) to understand and respond to more complex or open-ended queries, for example #ChatGPT.
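To make the distinction concrete, the rule-based variety can be sketched in a few lines; an AI-powered bot would replace the keyword lookup with an NLP model. All rules here are hypothetical:

```python
# Minimal rule-based chat bot: match a keyword, return a canned reply.
# Rules are hypothetical; an AI-powered bot would swap the dict lookup
# for an NLP model that handles open-ended queries.
RULES = {
    "hours":   "We are open 9am-5pm, Monday to Friday.",
    "returns": "You can return any item within 30 days.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand. A human agent will follow up."

print(reply("What are your hours?"))   # matched rule -> canned answer
print(reply("Tell me a joke"))         # no rule -> fallback to a human
```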

Chat bots are commonly used in customer service, where they can handle routine inquiries and help customers resolve simple issues without the need for human intervention. They are also used in marketing, where they can help businesses to connect with customers and provide information about products and services.

XR (extended reality) is a term that is used to refer to a range of technologies that enable immersive experiences, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).

Virtual reality (VR) is a technology that allows users to experience a simulated environment as if they were physically present in that environment. VR is typically experienced through the use of a headset, which allows users to see and hear the virtual environment, and sometimes also to interact with it using handheld controllers or other input devices.

Augmented reality (AR) is a technology that allows users to see virtual elements superimposed on their view of the real world. #AR is often experienced through the use of a smartphone or other device with a camera, which captures the user’s surroundings and displays virtual elements on top of the real-world view.

Mixed reality (MR) is a technology that combines elements of both VR and AR, allowing users to interact with virtual elements in the real world. #MR typically requires the use of specialized hardware, such as a headset with a built-in camera, which captures the user’s surroundings and allows virtual elements to be placed within the real-world environment.

Distributed ledger technology (DLT) is a type of database that is distributed across a network of computers, rather than being stored in a central location. It allows multiple parties to share and update a single, tamper-evident record of transactions or other data, without the need for a central authority to oversee the process.

One of the most well-known examples of #DLT is the blockchain, which is a decentralized, distributed ledger that is used to record and verify transactions in a secure and transparent manner. Other examples of DLT include distributed databases, peer-to-peer networks, and consensus-based systems.
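The tamper-evident property comes from chaining each record to a cryptographic hash of the one before it. Here is a minimal sketch of that idea; real DLTs add consensus, digital signatures, and replication across nodes:

```python
# Minimal hash-chained ledger: each block commits to the previous block's
# hash, so altering any past record breaks every hash after it.
# Illustrative only; real DLTs add consensus, signatures, and replication.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]

def append(data: str) -> None:
    chain.append({"index": len(chain), "data": data,
                  "prev": block_hash(chain[-1])})

def verify() -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

append("alice pays bob 5")
append("bob pays carol 2")
print(verify())                            # True: chain is intact
chain[1]["data"] = "alice pays bob 500"    # tamper with history...
print(verify())                            # False: the chain detects it
```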

DLT has a wide range of potential applications, including financial transactions, supply chain management, and identity verification. It is also being explored for use in the development of new products and services, such as smart contracts and decentralized applications (dApps).

The Internet of Things (IoT) is a network of connected devices that are able to communicate and exchange data with each other. These devices can range from simple sensors and actuators to more complex devices such as smart thermostats and appliances.

The #IoT is made possible by the widespread availability of broadband internet, as well as the development of low-cost sensors and other technologies that enable devices to be connected to the internet. These devices are often equipped with sensors that allow them to gather data about their environment or their own operation, and are able to communicate this data to other devices or systems.

The IoT has the potential to transform many aspects of our lives, including how we live and work. For example, smart home systems that use IoT technology can allow users to control and monitor their home appliances and systems remotely, and can provide alerts and notifications about potential issues. The IoT is also expected to play a significant role in the development of smart cities, which are urban environments that use technology to improve the quality of life for residents.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

References

1. "Secure and Smart IoT Using Blockchain and AI", book by Prof. Ahmed Banafa

2. "Blockchain Technology and Applications", book by Prof. Ahmed Banafa

3. ChatGPT (Chat Generative Pre-trained Transformer)

Also Read:

CES 2023 and all things cycling

9 Trends of IoT in 2023

Microchips in Humans: Consumer-Friendly App, or New Frontier in Surveillance?


Effective Writing and ChatGPT. The SEMI Test
by Bernard Murphy on 01-25-2023 at 6:00 am

ChatGPT is a hot topic, leading a few of my colleagues to ask me, as a writer, what I think of the technology. I write content for tech companies, and most of my contacts freely confess that they, or more often their experts, struggle with writing. If a tool could do that job for them, they would be happy, and I would have to find a different hobby. Overlooking the usual hype around anything AI, I have seen a few examples of ChatGPT rewrites which I found impressive. Since I can't get onto the site to test it myself (overload?), I must base what follows on those few samples.

As a promoter of AI, I can’t credibly argue that my expertise should be beyond AI’s reach. Instead, I spent some time thinking about where it might help and where it probably would not be as helpful. This I condensed into four objectives I consider important in effective writing: style, expertise, message, and impact (SEMI, conveniently 😊). Think of these as layers which progressively build an impression for a reader, rather than sequential components.

SEMI

Style: Inexperienced writers commonly spend too much time here, suggesting a possible advantage for novices. Write a first pass in your own style, then run it through the tool. ChatGPT will output reasonable-length sentences and paragraphs in a conversational style, probably easier to read than your first pass. I haven't been able to check whether it supports conference-paper style (3rd person, passive voice, etc.). The technology seems like it could offer a real advantage to anyone agonizing over their awkward prose or endlessly circling around the right way to phrase a particular sentence. That said, I advise reading the output carefully and correcting as you see fit.

Expertise: AI isn’t magical. ChatGPT is trained over huge amounts of data but almost certainly not huge amounts in your specialized niche. It can provide well-written general prose embedding your technical keywords or phrases, but your readers are looking for expert substance or better yet novel expert substance, not prose decoration around tech/buzz keywords. Only you can provide that depth, through examples and analysis. ChatGPT can still help with style after you have written this substance.

Message: Your target readers are looking for articles with a point. What is the main idea you want to convey? Implicitly perhaps “buy my product”, but raw commercials have a small audience. The message should be a useful and informative review of a general opportunity, need or constraint in the market you serve. Something readers will find valuable whether or not they want to follow up. The message should shape the whole article, from opening to closing paragraph. I very much doubt that ChatGPT can do this for you unless that message is already written into the input text.

Impact: What should I remember the day after or a week after I have read your article? We don’t remember lists. Your article should build around one easily remembered message. We also don’t remember “more of the same” pitches. We remember novelty, a new idea or twist which stands out from an undifferentiated background of “me too” claims from others. Novelty can be in the message, in the expert examples you present, or in a (product independent) claim of the characteristics of a superior solution. You should also consider that your article leaves an impression about yourself and about your company, as a source to be trusted. Or otherwise.

One last note. Readers develop impressions in SEMI order. I don't approach writing in this order. I first think about the message. For expertise, I specialize in a relatively narrow range of technologies, and I talk to client experts before I write, to provide me with strong and current examples. Style is something I have developed over the years, though I will certainly experiment with ChatGPT when the site again becomes available. Finally, lasting impact starts with the message. I finish the first draft, then move on to something else for at least a day. Coming back later gives me time to mull over and consider improvements to better meet each of the SEMI objectives.

I’d be interested to hear about your ChatGPT experiments 😊

Also Read:

All-In-One Edge Surveillance Gains Traction

2022 Retrospective. Innovation in Verification

Formal Datapath Verification for ML Accelerators


All-In-One Edge Surveillance Gains Traction
by Bernard Murphy on 01-24-2023 at 10:00 am

Like it or not, the surveillance market is growing, at a CAGR approaching 10%. Big brother concerns sometimes cloud the picture but overlook the much larger practical, if less hype-worthy, applications for surveillance: home and industrial security, enhanced traffic flow management, monitoring for fire and other fast-growing threats, support for crime scene investigations, and bodycams. What is common to all these applications is that they must be deployable at volume, low power, sometimes portable, always connected, supporting ever-increasing resolution while intelligently filtering data upload to only significant activity for review by human monitors.

Profile of a state-of-art surveillance platform

Start with the basics. A surveillance camera can no longer afford to stream a full video channel; the communication overhead and cost would simply be too high. Video must be cleaned up through image signal processing (ISP), then run through object detection to filter out all but significant frames for upload (a person, a possible fire, an emergency vehicle). Now multiply this basic framework to support several cameras, some perhaps pointed in different directions, some providing a stereo view through 2 image sensors. Some may be fisheyes, supporting a 360° image. These might be complemented by other types of sensors, such as motion and perhaps range detectors.
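In code terms, this basic framework amounts to a filter-before-upload loop. The sketch below uses hypothetical stand-ins for the camera, ISP, detector, and uplink; it is not any vendor's API:

```python
# Filter-before-upload loop described above (hypothetical names; stubs
# stand in for the ISP, object detector, and network uplink).
import random

SIGNIFICANT = {"person", "fire", "emergency_vehicle"}

def clean(frame):                 # stand-in for ISP: denoise, HDR, de-warp
    return frame

def detect(frame):                # stand-in for the object-detection network
    return random.sample(["person", "car", "tree", "fire"], k=2)

def upload(frame, labels):        # stand-in for the network uplink
    print(f"frame {frame}: uploading {labels}")

for frame_id in range(10):        # pretend camera stream
    labels = set(detect(clean(frame_id)))
    hits = SIGNIFICANT & labels
    if hits:                      # only significant frames leave the device
        upload(frame_id, hits)
    # everything else is dropped locally, saving bandwidth and power
```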

Processing each image view depends on strong ISP functionality for noise reduction, dynamic range optimization, de-warping of fisheye views, and much more, followed by image recognition through inferencing on one or more trained networks. However, these independent views become much more powerful when combined through sensor fusion. The views from 2 stereo sensors together can provide a depth assessment. Motion or range input can further refine those approximations. Detection across multiple sensors provides path information for a moving object. All this detection demands an advanced platform that can simultaneously process object detection and fusion from multiple sensor inputs.

Novatek Microelectronics Corp recently released their NT98530 multi-sensor IP Camera SoC targeting surveillance, retail, smart city, and transportation applications. This SoC is a good example of a platform targeting just these goals. It supports 8 megapixels at 60 frames/sec while simultaneously performing advanced object recognition on each frame, all in an SoC providing almost all the electronics required by that multi-sensor platform, at a unit cost that will support wide deployment at low power.

CEVA SensPro2 inside

CEVA is well known for their DSP-based solutions for computer vision, audio, wireless, and AI, and has been working with Novatek for almost ten years now. SensPro2 builds on the earlier SensPro Gen-1, increasing performance across a range of neural net benchmarks by up to a factor of 2. Computer vision and SLAM benchmarks improve by as much as 5X. There are even bigger improvements in speech and radar processing. All continue to be supported by rich software libraries and tools for mapping standard networks onto the neural net platform. Novatek deployed their NT98530 on top of SensPro2, claiming it delivers superior real-time performance for computer vision, AI-based analytics, and multi-sensor fusion at the edge, for a powerful, flexible edge AI camera solution which customers can mold to their requirements.

CEVA sensor fusion IPs are already deployed in a wide range of OEM products, from smart TVs to fixed wireless access devices, robot vacuums and VR/AR headgear. You can learn more in this press release with Novatek and this product page.

Also Read:

CEVA’s LE Audio/Auracast Solution

CEVA Accelerates 5G Infrastructure Rollout with Industry’s First Baseband Platform IP for 5G RAN ASICs

5G for IoT Gets Closer


Podcast EP140: Hyperstone’s Unique Position as a Supplier to the Industrial Market
by Daniel Nenni on 01-24-2023 at 6:00 am

Dan is joined by Steffen Allert. Since 2007, Steffen has been the Vice President of Sales for Hyperstone, a producer of Flash Memory Controllers for Industrial Embedded Storage Solutions. Steffen brings more than 20 years of sales and management experience in the semiconductor and electronics market.

Dan explores the unique position Hyperstone has as a supplier to the industrial market. Steffen explains the special demands this market presents regarding quality, low power, security, supply, and reliability. Steffen discusses some of the new developments for the coming year at Hyperstone, including architectural enhancements to support greater security.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Webinar: Achieving Consistent RTL Power Accuracy
by Daniel Nenni on 01-23-2023 at 10:00 am

A comprehensive report from the US Department of Energy (DOE), "Semiconductor Supply Chain Deep Dive Assessment" (February 2022), calls for a 1000X energy efficiency improvement to meet future compute needs given a finite amount of world energy production. Energy efficiency is at the top of designers' minds today. A holistic approach to energy efficiency must start at the earliest stages of the design flow, at the architectural and micro-architectural levels. It is at those levels of abstraction where designers can evaluate power-performance-area tradeoffs and create energy-efficient architectures. Later stages in the design process present limited opportunities for power reduction.

Energy efficiency and power optimization efforts must be guided by power analysis at all levels of abstraction. Early power analysis is intrinsically less accurate than signoff power analysis in the implementation phases, but the goal of early power analysis must be to provide consistent accuracy to the designers to enable them to make informed decisions.

Traditional Register Transfer Level (RTL) power analysis tools take into account a very limited amount of information related to the actual implementation of the design. Fast synthesis technologies used in such tools are typically not timing driven, have limited parasitic capacitance estimation capabilities, build simple fanout-driven clock tree structures, and use imprecise heuristic methods for glitch power calculation.

These considerations call for the new generation of RTL power analysis tools to provide consistent accuracy by leveraging technology-related information, timing constraints, and accurate glitch power modeling. Such tools deliver a tight correlation of RTL power analysis vs. final signoff analysis in a consistent manner. This consistent accuracy is made possible only with the deep understanding of the implementation and signoff power calculation algorithms.
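For context, the first-order dynamic power model that all of these estimates feed is simple; the hard part, as described above, is getting its inputs (switching activity, effective capacitance, clock structure, glitches) right at RTL. The numbers below are purely illustrative:

```python
# First-order dynamic power estimate underlying RTL power analysis:
#   P_dyn = alpha * C_eff * Vdd^2 * f
# Illustrative numbers only (hypothetical; not from the webinar).
alpha = 0.15          # average switching activity (toggles per cycle)
c_eff = 120e-12       # effective switched capacitance, farads
vdd   = 0.75          # supply voltage, volts
f     = 1.5e9         # clock frequency, Hz

p_dyn = alpha * c_eff * vdd**2 * f
print(f"Dynamic power ~ {p_dyn*1e3:.1f} mW")   # ~15.2 mW
```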

If you are designing chips for high-performance computing (HPC) and data center applications, bandwidth is, of course, a key consideration. However, as data centers get bigger and the required compute power increases, keeping power consumption to a minimum becomes a priority. In addition to power, latency is another key concern for HPC and data center SoC designers, as access to the available memory pool is becoming a bottleneck and must be accomplished in nanoseconds.

Webinar: Achieving Consistent RTL Power Accuracy

Register Transfer Level (RTL) power analysis, performed early in the design cycle, is a key component of end-to-end methodology to maximize energy efficiency. Such analysis has become a critical requirement for many IC designs today and in the future. Although RTL power analysis technology has been available to designers for many years, traditional approaches have relied on heuristic methods – thus lacking consistent accuracy. This webinar, presented by Alexander Wakefield, Synopsys Scientist, will focus on Synopsys RTL power analysis technology and best practices that achieve consistent accuracy in the design flow.

First, the motivation for RTL power analysis will be briefly outlined in the context of the overall methodology. Next, the basics of power consumption and associated calculations will be reviewed. Following that, several key factors affecting RTL power accuracy will be examined: fast synthesis and mapping, clock tree modeling, and parasitics estimation. These factors will also be compared with heuristic methods, and finally, some of the best practices to achieve good correlation and consistent accuracy will be covered.

REGISTER HERE

Presenter Bio:
Alex Wakefield is a Synopsys Scientist, with degrees in Engineering and Computer Science from the University of Adelaide, Australia. He has worked for more than 20 years in many areas including synthesis, simulation, SystemVerilog, UVM, constraints, coverage closure, and embedded software. He has presented papers at many conferences and holds multiple patents. For the last few years, Alex has primarily been focused on power estimation, leading the Synopsys global rollout of power estimation with ZeBu. He is involved in all Synopsys power estimation engagements worldwide.

Also Read:

Synopsys Crosses $5 Billion Milestone!

Configurable Processors. The Why and How

New ECO Product – Synopsys PrimeClosure

UCIe Specification Streamlines Multi-Die System Design with Chiplets


Podcast EP139: The Third Quarter ESDA Market Data Report with Dr. Walden Rhines
by Daniel Nenni on 01-23-2023 at 8:00 am

Dan discusses the recent Q3 2022 ESDA report with Wally Rhines, Executive Sponsor of the SEMI Electronic Design Market Data report. While not matching the record-breaking growth seen over the past few quarters, growth is generally quite strong across the world. Dan explores the numbers with Wally, including some surprising growth results in certain regions.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.