
Power Delivery Network Analysis in DRAM Design
by Daniel Payne on 03-27-2023 at 10:00 am


My IC design career started out with DRAM design back in 1978, so I’ve kept an eye on developments in this area of memory design, noting the design challenges, process updates and innovations along the way. Synopsys hosted a memory technology symposium in November 2022, and I had a chance to watch a presentation from SK hynix engineers Tae-Jun Lee and Bong-Gil Kang. DRAM chips have reached high densities and per-pin data rates of 9.6 gigabits per second, as in the LPDDR5T announced on January 25th. Data rates can be limited by the integrity of the Power Delivery Network (PDN), yet analyzing a full-chip DRAM with the PDN included can slow simulation down dramatically.

The peak memory bandwidth per x64 channel has shown steady growth across several generations (a quick arithmetic check follows the list):

  • DDR1, 3.2 GB/s at 2.5V supply
  • DDR2, 6.4 GB/s at 1.8V supply
  • DDR3, 12.8 GB/s at 1.5V supply
  • DDR4, 25.6 GB/s at 1.2V supply
  • DDR5, 51.2 GB/s at 1.1V supply
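These numbers follow directly from each generation’s transfer rate and the 64-bit channel width. The short Python check below reproduces the table, assuming the top standard speed grade commonly quoted for each generation (DDR-400 through DDR5-6400); those rates are my assumption for illustration rather than something stated above.

```python
# Peak bandwidth of a x64 channel = transfer rate (MT/s) * 64 bits / 8 bits per byte.
# Transfer rates are the commonly quoted top speed grades per generation (assumed here).
top_rate_mts = {"DDR1": 400, "DDR2": 800, "DDR3": 1600, "DDR4": 3200, "DDR5": 6400}

for gen, rate in top_rate_mts.items():
    gb_per_s = rate * 64 / 8 / 1000   # MT/s * 8 bytes per transfer -> GB/s
    print(f"{gen}: {gb_per_s:.1f} GB/s")
```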

A big challenge in meeting these aggressive timing goals is controlling the IR drop caused by parasitics in the layout of the DRAM array. Shown below is an IR drop plot, where red marks the areas of highest voltage drop, which in turn slow the performance of the memory.

IR drop plot of DRAM array

The extracted parasitics for an IC are saved in the SPF format. Adding the PDN parasitics to a SPICE netlist slowed the circuit simulator down by a factor of 64X, as the PDN contributes 3.7X more parasitic RC elements than the signal nets alone.

The team at SK hynix came up with a pragmatic approach to reduce simulation run times when using the PrimeSim™ Pro circuit simulator on SPF netlists that include the PDN, applying three techniques:

  1. Partitioning of the netlist between Power and other Signals
  2. Reduction of RC elements in the PDN
  3. Controlling simulation event tolerance

PrimeSim Pro uses partitioning to divide up the netlist based upon connectivity, and by default the PDN and other signals would combine to form very large partitions, which in turn slowed down simulation times too much. Here’s what the largest partition looked like with default simulator settings:

Largest partition, default settings

An option in PrimeSim Pro (primesim_pwrblock) was used to cut down the size of the largest partition, separating the PDN from other signals.

Largest partition, using option: primesim_pwrblock

The extracted PDN in SPF format had too many RC elements, which slowed down circuit simulation run times, so an option called primesim_postl_rcred was used to reduce the RC network, while at the same time preserving accuracy. The RC reduction option was able to decrease the number of RC elements by up to 73.9%.
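As a rough illustration of what RC reduction does, here is a toy Python sketch (my own simplification, not the primesim_postl_rcred algorithm) that merges adjacent series resistances along a rail and lumps their node capacitances, cutting the element count roughly in half while preserving total resistance and capacitance:

```python
# A PDN rail modeled as a ladder: each segment is a series R plus a C to ground at its node.
segments = [(0.05, 1e-15)] * 40          # 40 segments = 80 parasitic elements

def merge_pairs(segs):
    """Crude reduction: combine adjacent segments by adding series R and lumping node C."""
    out = []
    for i in range(0, len(segs) - 1, 2):
        r1, c1 = segs[i]
        r2, c2 = segs[i + 1]
        out.append((r1 + r2, c1 + c2))
    if len(segs) % 2:                     # keep a leftover odd segment unchanged
        out.append(segs[-1])
    return out

reduced = merge_pairs(segments)
print("elements:", 2 * len(segments), "->", 2 * len(reduced))
print("total R:", sum(r for r, _ in segments), "->", sum(r for r, _ in reduced))
print("total C:", sum(c for _, c in segments), "->", sum(c for _, c in reduced))
```

Real reducers are far more careful about preserving the network’s frequency response, but the bookkeeping idea is the same: fewer nodes and elements for the matrix solver to handle.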

Circuit simulators like PrimeSim Pro use matrix math to solve for the currents and voltages in the netlist partitions, so runtime is directly related to matrix size and to how often a voltage change forces a recalculation. The simulator option primesim_evtgrid_for_pdn reduces the number of times a matrix needs to be solved when there are only small voltage changes in the PDN. In the chart below, the purple X marks show each point in time where a matrix solve of the PDN was required by default, while the white triangles show the solve points with the option enabled. The white triangles occur far less often than the purple X’s, enabling faster simulation speeds.

Power Event Control, using option: primesim_evtgrid_for_pdn

A final PrimeSim simulator option used to reduce runtimes was primesim_pdn_event_control=a:b, and it works by applying an ideal power source for a:b, resulting in fewer matrix calculations for the PDN.
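The general idea behind these event controls can be sketched in a few lines of Python: only re-solve the PDN matrix when the supply voltage has moved by more than a tolerance since the last solve. This is a conceptual illustration only; the waveform, tolerance values and solve_pdn stand-in are invented here and do not represent PrimeSim’s actual algorithm.

```python
import math

def solve_pdn(v):
    # Stand-in for one factor/solve of the PDN matrix partition.
    return v

def run(event_tol):
    solves, v_last = 0, None
    for n in range(10_000):
        # Supply with a small ripple plus an occasional droop (illustrative waveform).
        v = 1.1 + 0.002 * math.sin(n / 20) + (0.05 if n % 2500 == 0 else 0.0)
        if v_last is None or abs(v - v_last) > event_tol:
            solve_pdn(v)
            v_last, solves = v, solves + 1
    return solves

print("tight tolerance:  ", run(0.0001), "matrix solves")
print("relaxed tolerance:", run(0.01), "matrix solves")
```

A looser tolerance skips most of the solves while still reacting to the large droop events, which is why it trades a little waveform detail for a large runtime saving.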

Using all of the PrimeSim options combined delivered a 5.2X simulation speed-up.

Summary

Engineers at SK hynix have been using both the FineSim and PrimeSim circuit simulators for analysis of their memory chip designs. Using four options in PrimeSim Pro has provided sufficient speed improvements to allow full-chip PDN analysis with SPF parasitics included. I expect that Synopsys will continue to innovate and improve their circuit simulator family to meet the growing challenges of memory chips and other IC design styles.



Siemens Keynote Stresses Global Priorities
by Bernard Murphy on 03-27-2023 at 6:00 am

Space Perspective

Dirk Didascalou, Siemens CTO, gave a keynote at DVCon, raising our perspective on why we do what we do. Yes, our work in semiconductor design enables the cloud and 5G and smart everything, but these technologies push progress for a select few. What about the big global concerns that affect us all: carbon, climate, COVID and conflict? He made the point that industry collectively has a poor record against green metrics: a 27% contributor to carbon, more than 33% of energy consumption, and less than 13% of products recycled.

We need industries globally, for food and clothing, energy, health, education, and opportunity. Returning to a pastoral way of life isn’t an option, so we must help industries become greener, while also adapting faster to demands and constraints that evolve rapidly thanks to geopolitical instability. Add in demographic ageing, relentlessly chipping away at the pool of critical manufacturing skills. Siemens aims at these global challenges by helping industries to become more efficient, more automated, and more nimble through digital transformation.

Optimizing industry

Manufacturing industries are very process driven. For conventional production flows, global optimization on-the-fly – reworking flows or product mixes – is very difficult. Improvements in these contexts are more commonly limited to local optimizations, tweaking the process recipe where possible. Global optimization through trial-and-error experiments is simply not practical. Auto manufacturers ran into exactly this problem, the intrinsic inflexibility of the Henry Ford manufacturing model. To their credit, they are already adjusting, often with Siemens’ help.

Digital transformation allows industries to model whole product lines digitally and experiment with options. Not only to model but also to plan how to adapt those lines quickly in real life, and to plan for predictive maintenance. This is the digital twin concept, though going far beyond the familiar autonomous car training example. Here Dirk is talking about a digital twin to model a continuous, context-driven process for business through manufacturing.

Siemens is itself a manufacturing company. They have a factory in southern Germany producing many of the products they use to help other companies in their automation goals. The Amberg site manufactures 17 million products a year, from a portfolio of 1,200. Each day they must reconfigure the factory 350 times to serve many different types of order. Siemens put their own digital transformation advice and products to work in this factory, delivering 14X productivity improvements on products with 2X the complexity in the same factory with the same number of people. The World Economic Forum has named the site one of its lighthouse factories.

What difference does this make to the big goals and to what we do? Siemens doesn’t need to produce 14X more products today. For the same product volume, those improvements drive lower energy consumption and therefore a lower carbon footprint. Digital transformation also minimizes need for trial-and-error modeling in the real world, a faster turnaround with less waste to produce better, greener manufactured goods. And it allows for more flexibility in quickly switching product features and mixes. Consumers get exactly the options they want at a similar cost, from more eco-friendly manufacturing. All enabled by digital twin models, sensing, compute and communication technologies and of course AI.

Real applications

One example is Space Perspective, a carbon-neutral spacecraft powered by a balloon! It can carry eight people in a 12 mile-per-hour ascent to 100,000 feet. The craft was designed completely digitally using Siemens Simcenter STAR-CCM+. Soon you won’t have to be a billionaire to go to space!

A more widely important example is vertical farming. 80 Acres Farms designed their indoor, vertically stacked farms using Siemens products. An 80 Acres farm can produce up to 300 times more food than a regular farm in the same footprint, using renewable energy, no pesticides, and 95% less water. These farms produce food locally to serve local needs, minimizing trucking costs and consequent impact on the environment.

Where does COVID fit into this story? Remember BioNTech? They produced the Pfizer vaccine, the first widely available shot. Designing the vaccine was a great accomplishment, but production then needed to be ramped to billions of doses in 6 months. That required more research on boosting immune response. Siemens products assisted with solutions to help simulate the impact of modeled molecular structures on immune response. A combination of simulations, AI, and results from clinical trials led to the vaccine many of us received, following a record development and production cycle for biotech.

Northvolt is another example. This is a Swedish company building lithium-ion batteries for EVs and other applications. This is a serious startup with $30 billion in funding, not a wishful one-off. Batteries are integral to making renewable energy more pervasive, but we hear lots of concerns about environmental issues in their manufacture. Northvolt’s mission is to deliver batteries with an 80% lower carbon footprint than those made in other factories, and they recycle material from used batteries into new products. These guys are committed. Again, the whole operation was designed digitally with Siemens – creation, commissioning, manufacture, deployment, and recycling.

There are more examples. Milling machines as a service – yes that’s a real thing. A German company offers a machine which can be de-featured to do just the basics, competing on price with cheap Asian counterparts. When needed you can pay for an upgrade, enabled naturally through an app, which will turn on a more advanced feature. Naturally there are multiple such features 😊.

Closer to home for automotive design, safety analysis and ML training through digital twins are enabled by Siemens EDA. Samsung presented later in the same conference on using Siemens Xcelerator tools to reduce functional safety iterations by 70% and to generate an integrated final validation report across the formal, simulation and emulation engines they used for ISO 26262 certification.

An inspiring keynote. Next time a relative asks you what you do for a living, aim a little higher. Tell them you design products that ultimately drive greener manufacturing, faster response to pandemic crises, and (who knows) maybe ultimately more constructive approaches to resolving conflict.


Gordon Moore’s legacy will live on through new paths & incarnations
by Robert Maire on 03-25-2023 at 6:00 pm

Gordon Moore RIP

-Gordon Moore’s passing reminds us of how far we have come
-One of many pioneers of chip industry-but most remembered
-The most exponential & ubiquitous industry of all time
-“No exponential is forever”- Gordon Moore was an exponential

Remembering Gordon Moore

He will be remembered most for his observation (some say prediction) of the exponential growth of semiconductor technology.

This could further be described as cost per transistor or cost per square inch of silicon or whatever metric you use to describe the basic value of the semiconductor industry.

He was much, much more than the “Moore’s Law” that bears his name. He was one of the “traitorous eight” who left Shockley Semiconductor to form Fairchild Semiconductor, and later went on to found Intel with fellow Shockley and Fairchild alum Robert Noyce, joined by Fairchild colleague Andy Grove.

Birth of the semiconductor industry

Some would suggest that the semiconductor industry started 75 years ago with the invention of the transistor. I would suggest that the true start was 65 years ago with the formation of Fairchild, which pioneered the “integrated circuit”, building and connecting multiple transistors on a single piece of silicon, and was the genesis of a new way of building devices at huge scale.

Had we continued to use discrete transistors, we would not have seen that exponential growth; we would simply have replaced vacuum tubes with smaller, more efficient transistors.

Gordon Moore- Both Pioneer & Father to an industry

Gordon Moore (along with many others) was there not just for the birth of the industry but, perhaps more importantly, for the development that set the industry on the trajectory of advancement and growth that later became the famous observation.

It would have been enough if he had just helped invent the integrated circuit at Fairchild. If he had just started Intel. If he had just made the observation we now call “Moore’s Law”. But he did all this and much more: he helped create an entire industry that, 65 short years later, has its presence felt in virtually everything we touch and don’t touch today.

Semiconductors are like the unseen yet enabling oxygen we breathe every minute of every day

We rely on semiconductors in the most critical ways every second of our lives, yet they didn’t exist a mere 65 years ago. The semiconductor industry was created out of thin air to become the most pervasive, fastest-growing industry on the planet.

This is truly the legacy of Gordon Moore and others more so than any single phrase or observation.

“No exponential is forever”- Gordon Moore

We know from both math and everyday life and death that no true exponential goes on forever, and that is sadly the case with Gordon Moore himself. His legacy, however, will go on forever, both for the myriad roles he played in the semiconductor industry and for his philanthropic endeavors later in life.

We will miss a truly great person…

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Sino Semicap Sanction Screws Snugged- SVB- Aftermath more important than event

Report from SPIE- EUV’s next 15 years- AMAT “Sculpta” braggadocio rollout

AMAT- Flat is better than down-Trailing tool strength offsets memory- backlog up


Podcast EP149: The Corporate Culture of Axiomise with Laura Long
by Daniel Nenni on 03-24-2023 at 10:00 am

Dan is joined by Laura Long, Director of Business Development at Axiomise. She has over 15 years of experience in business development and has built strong expertise working with clients based across the European Union, the UK and the Americas.

Dan explores the corporate culture at formal verification company Axiomise with Laura. Inclusiveness, diversity and collaboration among other topics are discussed. Laura provides a view into the development environment at Axiomise and what impact these strategies can have on results.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Developing the Lowest Power IoT Devices with Russell Mohn
by Daniel Nenni on 03-24-2023 at 6:00 am

InPlay NanoBeacon Technology

Russell Mohn is the Co-Founder and Director of RF/AMS Design at InPlay Inc., and his team has been using WiCkeD from MunEDA for several years. We thought the rest of the world would like to learn about his experiences.

How did you get started in semiconductors and what brought you to InPlay?
I was initially drawn to analog and mixed-signal chip design because it seemed like a direct path to start using what I had learned in engineering school. I’ve stayed in the same field because there’s always something for me to learn and there are always interesting problems to solve, both of which I really enjoy. I like building things, and I’ve always been fascinated by all the fields that make the microelectronics industry possible: photolithography, material science, physics, robotics, chemistry, microscopy, not to mention all the algorithms, mathematics, and computer science that is pushing breakthroughs in the tools we use. It’s a field that keeps capturing my imagination in new ways. I like the idea of casting a design in a mask and having it produced nearly flawlessly millions of times over. I enjoy the pressure in trying to get it right the first time, and I take pride in the fact that there is a lot at stake. The feeling of getting a new part in the lab and seeing it work as designed is incredibly rewarding. And when there are problems, figuring them out is also rewarding.

I joined InPlay because our current CEO asked me to lead the RF and analog/mixed-signal design for InPlay’s chips at the end of 2016. I had worked with the other co-founders at my previous employer, which had gone through two acquisitions in the previous two years or so. I had a lot of respect for them and enjoyed working with them in the past. I always dreamed of starting my own company, so I thought it was a golden, albeit risky, opportunity. The team had a lot of complementary domain knowledge, and knowing the others were great in their fields gave me the confidence to join.

What does InPlay do?
InPlay is a fabless semiconductor company. We design and develop chips that enable wireless connectivity in applications that require low latency, many devices, and low power … all at the same time. We are also enabling a new generation of active RFID smart sensors and beacons with our NanoBeacon product line. It doesn’t require firmware. The BOM is tiny. And power consumption is very low, so it can be powered by unique batteries and energy harvesting technologies.

What type of circuits do you design?
We design and develop all the necessary circuits for a radio transceiver. Some examples are low-noise amplifiers, mixers, programmable amplifiers, analog-to-digital converters, digital-to-analog converters, low-dropout regulators, phase-locked loops, and power amplifiers. We also design the power management circuitry necessary for the chip, which includes DC-DC converters, very low-power oscillators, references, and regulators.

Which MunEDA tools do you use?
We use WiCkeD and SPT.

How do you apply the MunEDA tools to your day-to-day job?
We’ve done some porting work over the past couple years. It was necessary with the foundry wafer shortage, especially for startup companies like us. Using SPT to get the schematics all ported over has been really helpful.

We also use WiCkeD for both optimization and for design centering over process/voltage/temperature variation. If the circuit is small enough, an opamp for example, after choosing the right topology, the optimizer can do the work of a designer to get the needed performance, all while keeping the design centered over PVT.

We’ve also used it for intractable RF matching/filtering tasks and for worst case analysis on startup issues for metastable circuits.

What value do you see from the MunEDA tools?
I see the MunEDA tools as basically another designer on my team. This is huge since we’re a small team, so the impact has been significant.

How about the learning curve?
MunEDA’s support is really great; they care about their customers, no matter how small. The learning curve is not too bad after some built-in tutorials. I see value from the tools every time I use them, from the first time, until now.

What advice would you give a circuit designer considering the MunEDA tools?
I would advise that they keep an open mind, and really look at the resulting data. I think many designers would be happy by the amount of time they can save, and the insight they can gain into the trade-offs in their designs.

Also Read:

Webinar: Post-layout Circuit Sizing Optimization

Automating and Optimizing an ADC with Layout Generators

Webinar: Simulate Trimming for Circuit Quality of Smart IC Design

Webinar: AMS, RF and Digital Full Custom IC Designs need Circuit Sizing


Mercedes, VW Caught in TikTok Blok
by Roger C. Lanctot on 03-23-2023 at 10:00 am


Thirteen years ago, General Motors announced the introduction of a voice-enabled integration of Facebook in its cars. The announcement reflected the irresistible urge to please consumers and lead the market.

Today, multiple car makers are introducing games, streaming video, and social media apps, the most prominent of which is TikTok – with a billion users across 150 countries, including 200M+ downloads in the U.S. alone. Automotive integration looks like a no-brainer – it is, but not in a good way.

Volkswagen and Mercedes are at the forefront of the movement, Volkswagen with its announced plans for its Harman Ignite app store and Mercedes with its Faurecia Aptoide-sourced app store. Both car companies would do well to look back to the original social media integrations of GM, Mercedes, and others – which included Twitter. It all sounded like a great idea at the time – Facebook and Twitter in the dash! – but very soon, as the British say, there was no joy.

It didn’t take a rocket scientist to perceive that social media is ill-suited for automotive integration – with the possible exception of rear-seat use by passengers. Car companies tried creating automated links from navigation apps to Twitter – for posts indicating departures and arrivals – or emphasizing voice interaction, to no avail. It was soon clear that these apps simply didn’t belong.

The problem is that social media apps demand attention. Their entire business models are built on distraction. They simply don’t belong in cars.

TikTok has the added baggage of being a threat to privacy and national security in the eyes of many governments around the world. I’d argue connected cars are by definition a threat to privacy. Actually, based on the amount of CCTV deployed around the world, I’d say leaving your home is a threat to privacy.

TikTok appears to be a special case because of its ability to spread Chinese government propaganda and misinformation. In other words, it’s not enough that it is distracting and invading privacy, it may also invade and alter users’ political beliefs.

Car companies could not resist the Siren song of TikTok. They simply couldn’t ignore those billion users and included TikTok in their app stores. If ever there were a “red flag” moment in in-car app deployment, this is it.

With governments around the world having either already banned TikTok or with plans to do so, perhaps auto makers will take a hint. The Washington Post details the breadth of the growing official rejection of TikTok.

India – initially banned in 2020, permanent ban in January 2021
U.S. – government agencies have 30 days to delete TikTok from government-issued devices; dozens of state-level bans
Canada – banned on government-issued phones
Taiwan – banned on government devices since last December, considering nationwide ban
European Union – banned on government/staff devices
Britain – banned on government devices
Australia – banned on government staff devices
Indonesia – temporary ban in 2018, later lifted
Pakistan – various temporary bans
Afghanistan – banned in 2021 – but workarounds possible

As auto makers such as Volkswagen and Mercedes reconsider the wisdom of TikTok integration in cars, maybe they’ll rethink some of the other crazy stuff – or at least confine it to the rear seat or limit access to when vehicles are parked or charging. Angry Birds? Really, Mercedes?

It’s a good time to pause and rethink what we are putting into cars. Car makers have a history of wanting to integrate the latest and greatest tech in their cars, which explains the growing number of announcements regarding in-vehicle ChatGPT and Meta integrations. The good news is that these days, with over-the-air software update technology, apps can be removed as quickly as they can be deployed. Let’s hope so.

Within a year of its launch of Facebook in its dashboards, General Motors changed course and dropped the plan. I think we can expect a similar outcome in this case.

Also Read:

AAA Hypes Self-Driving Car Fears

IoT in Distress at MWC 2023

Modern Automotive Electronics System Design Challenges and Solutions


Webinar: Enhance Productivity with Machine Learning in the Analog Front-End Design Flow
by Daniel Payne on 03-23-2023 at 6:00 am

Analog Circuit Optimization

Analog IC designers can spend way too much time and effort re-using old, familiar, manual iteration methods for circuit design, just because that’s the way it’s always been done. Circuit optimization is an EDA approach that can automatically size all the transistors in a cell, by running SPICE simulations across PVT corners and process variations, to meet analog and mixed-signal design requirements. Sounds promising, right?

So which circuit optimizer should I consider using?

To answer that question there’s a webinar coming up, hosted by MunEDA, an EDA company started back in 2001, and it’s all about their circuit optimizer named WiCkeD. Inputs are a SPICE netlist along with design requirements like gain, bandwidth and power consumption. Outputs are a sized netlist that meets or exceeds the design requirements.

Analog Circuit Optimization

The secret sauce with WiCkeD is how it builds up a Machine Learning (ML) model to run a Design Of Experiments (DOE) to calculate the worst-case PVT corner, find the transistor geometry sensitivities, and even calculate the On Chip Variation (OCV) sensitivities. This approach creates and updates a non-linear, high-dimensional ML model from simulated data.

Having an ML model enables the tool to solve the optimization challenge, then do a final verification by running a SPICE simulation. Iterations are automated until all requirements are met. That sounds much faster than the old manual iteration methods. Training the ML model is fully automatic and quite efficient.
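A heavily simplified sketch of that fit-optimize-verify loop is shown below. The response curve, the quadratic surrogate and the sizing range are all invented for illustration; this is not how WiCkeD is implemented, just the general shape of surrogate-model optimization.

```python
import numpy as np

def fake_spice_gain(width_um):
    # Stand-in for a SPICE run at the worst-case PVT corner (hypothetical response).
    return 40.0 - 0.5 * (width_um - 8.0) ** 2 + np.random.normal(0, 0.1)

# 1. Design of Experiments: simulate a handful of candidate sizings.
widths = np.linspace(2.0, 14.0, 7)
gains = np.array([fake_spice_gain(w) for w in widths])

# 2. Fit a simple surrogate model (quadratic here; real tools use richer ML models).
coeffs = np.polyfit(widths, gains, 2)

# 3. Optimize on the cheap surrogate instead of the simulator.
candidates = np.linspace(2.0, 14.0, 1001)
best_w = candidates[np.argmax(np.polyval(coeffs, candidates))]

# 4. Verify the chosen sizing with one more "SPICE" run.
print(f"surrogate-optimal width = {best_w:.2f} um, verified gain = {fake_spice_gain(best_w):.1f} dB")
```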

Circuit designers will also learn:

  • Where to use circuit optimization
  • What types of circuits are good to optimize
  • How much value circuit optimization brings to the design flow

Engineers at STMicroelectronics have used the circuit optimization in WiCkeD, and MunEDA talks about their specific results in time savings and improvements in meeting requirements. Power amplifier company InPlay Technologies showed circuit optimization results at the DAC 2018 conference.

Webinar Details

View the webinar replay by registering online.

About MunEDA
MunEDA provides leading EDA technology for analysis and optimization of yield and performance of analog, mixed-signal and digital designs. MunEDA’s products and solutions enable customers to reduce the design times of their circuits and to maximize robustness and yield. MunEDA’s solutions are in industrial use by leading semiconductor companies in the areas of communication, computer, memories, automotive, and consumer electronics. www.muneda.com.



Narrow AI vs. General AI vs. Super AI
by Ahmed Banafa on 03-22-2023 at 10:00 am


Artificial intelligence (AI) is a term used to describe machines that can perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. AI is classified into three main types: Narrow AI, General AI, and Super AI. Each type of AI has its unique characteristics, capabilities, and limitations. In this article, we will explain the differences between these three types of AI.

Narrow AI  

Narrow AI, also known as weak AI, refers to AI that is designed to perform a specific task or a limited range of tasks. It is the most common type of AI and is widely used in various applications such as facial recognition, speech recognition, image recognition, natural language processing, and recommendation systems.

Narrow AI works by using machine learning algorithms, which are trained on a large amount of data to identify patterns and make predictions. These algorithms are designed to perform specific tasks, such as identifying objects in images or translating languages. Narrow AI is not capable of generalizing beyond the tasks for which it is programmed, meaning that it cannot perform tasks that it has not been specifically trained to do.
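As a concrete, if tiny, illustration, the sketch below trains a scikit-learn classifier on images of handwritten digits. It performs that single task reasonably well, but it has no ability to do anything it was not trained for. The choice of library and model is mine for illustration; the article does not name a specific toolkit.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A narrow, single-purpose model: classify 8x8 images of handwritten digits.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
print("digit accuracy:", round(model.score(X_test, y_test), 3))

# The same model knows nothing outside its training task: it cannot translate text,
# recognize faces, or explain what a digit means -- it only maps pixels to 0..9.
```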

One of the key advantages of Narrow AI is its ability to perform tasks faster and more accurately than humans. For example, facial recognition systems can scan thousands of faces in seconds and accurately identify individuals. Similarly, speech recognition systems can transcribe spoken words with high accuracy, making it easier for people to interact with computers.

However, Narrow AI has some limitations. It is not capable of reasoning or understanding the context of the tasks it performs. For example, a language translation system can translate words and phrases accurately, but it cannot understand the meaning behind the words or the cultural nuances that may affect the translation. Similarly, image recognition systems can identify objects in images, but they cannot understand the context of the images or the emotions conveyed by the people in the images.

General AI  

 General AI, also known as strong AI, refers to AI that is designed to perform any intellectual task that a human can do. It is a theoretical form of AI that is not yet possible to achieve. General AI would be able to reason, learn, and understand complex concepts, just like humans.

The goal of General AI is to create a machine that can think and learn in the same way that humans do. It would be capable of understanding language, solving problems, making decisions, and even exhibiting emotions. General AI would be able to perform any intellectual task that a human can do, including tasks that it has not been specifically trained to do.

One of the key advantages of General AI is that it would be able to perform any task that a human can do, including tasks that require creativity, empathy, and intuition. This would open up new possibilities for AI applications in fields such as healthcare, education, and the arts.

However, General AI also raises some concerns. The development of General AI could have significant ethical implications, as it could potentially surpass human intelligence and become a threat to humanity. It could also lead to widespread unemployment, as machines would be able to perform tasks that were previously done by humans. Here are a few examples often cited as steps toward General AI (though by the definition above they remain Narrow AI systems):

1.    AlphaGo: A computer program developed by Google’s DeepMind that is capable of playing the board game Go at a professional level.

2.    Siri: An AI-powered personal assistant developed by Apple that can answer questions, make recommendations, and perform tasks such as setting reminders and sending messages.

3.    ChatGPT: a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a chatbot. The language model can answer questions, and assist you with tasks such as composing emails, essays, and code.

Super AI

Super AI refers to AI that is capable of surpassing human intelligence in all areas. It is a hypothetical form of AI that is not yet possible to achieve. Super AI would be capable of solving complex problems that are beyond human capabilities and would be able to learn and adapt at a rate that far exceeds human intelligence.

The development of Super AI is the ultimate goal of AI research. It would have the ability to perform any task that a human can do, and more. It could potentially solve some of the world’s most pressing problems, such as climate change, disease, and poverty.

Possible examples from movies: Skynet (The Terminator), VIKI (I, Robot), Jarvis (Iron Man).

Challenges and Ethical Implications of General AI and Super AI

The development of General AI and Super AI poses significant challenges and ethical implications for society. Some of these challenges and implications are discussed below:

  1. Control and Safety: General AI and Super AI have the potential to become more intelligent than humans, and their actions could be difficult to predict or control. It is essential to ensure that these machines are safe and do not pose a threat to humans. There is a risk that these machines could malfunction or be hacked, leading to catastrophic consequences.
  2. Bias and Discrimination: AI systems are only as good as the data they are trained on. If the data is biased, the AI system will be biased as well. This could lead to discrimination against certain groups of people, such as women or minorities. There is a need to ensure that AI systems are trained on unbiased and diverse data.
  3. Unemployment: General AI and Super AI have the potential to replace humans in many jobs, leading to widespread unemployment. It is essential to ensure that new job opportunities are created to offset the job losses caused by these machines.
  4. Ethical Decision-making: AI systems are not capable of ethical decision-making. There is a need to ensure that these machines are programmed to make ethical decisions, and that they are held accountable for their actions.
  5. Privacy: AI systems require vast amounts of data to function effectively. This data may include personal information, such as health records and financial data. There is a need to ensure that this data is protected and that the privacy of individuals is respected.
  6. Singularity: Some experts have raised concerns that General AI or Super AI could become so intelligent that they surpass human intelligence, leading to a singularity event. This could result in machines taking over the world and creating a dystopian future.

Narrow AI, General AI, and Super AI are three different types of AI with unique characteristics, capabilities, and limitations. While Narrow AI is already in use in various applications, General AI and Super AI are still theoretical and pose significant challenges and ethical implications. It is essential to ensure that AI systems are developed ethically and that they are designed to benefit society as a whole.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

 


Also Read:

Scaling AI as a Service Demands New Server Hardware

10 Impactful Technologies in 2023 and Beyond

Effective Writing and ChatGPT. The SEMI Test


Intel Keynote on Formal a Mind-Stretcher
by Bernard Murphy on 03-22-2023 at 6:00 am


Synopsys has posted on the SolvNet site a fascinating talk given by Dr. Theo Drane of Intel Graphics. The topic is datapath equivalency checking. Might sound like just another Synopsys VC Formal DPV endorsement but you should watch it anyway. This is a mind-expanding discussion on the uses of and considerations in formal which will take you beyond the routine user-guide kind of pitch into more fascinating territory.

Intellectual understanding versus sample testing

Test-driven simulation in all its forms is excellent and often irreplaceable in verifying the correctness of a design specification or implementation. It’s also easy to get started. Just write a test program and start simulating. But the flip side of that simplicity is that we don’t need to fully understand what we are testing to get started. We convince ourselves that we have read the spec carefully and understand all the corner cases, but it doesn’t take much compounded complexity to overwhelm our understanding.

Formal encourages you to understand the functionality at a deep level (at least if you want to deliver a valuable result). In the example above, a simple question – can z ever be all 1’s – fails to demonstrate an example in a billion cycles on a simulator. Not surprising, since this is an extreme corner case. A formal test provides a specific and very non-obvious example in 188 seconds and can prove this is the only such case in slightly less time.

OK, formal did what dynamic testing couldn’t do, but more importantly you learned something the simulator might never have told you: that there was only one possible case in which that condition could happen. Formal helped you better understand the design at an intellectual level, not just as a probabilistic summary across a finite set of test cases.
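A toy analogy in Python shows why random simulation can miss a one-in-65,536 corner case that an exhaustive, formal-style search both finds and proves unique. The design below is my own stand-in, not the block from Theo’s talk.

```python
import random

def dut(a):
    # Hypothetical 16-bit datapath block: scale by 3, truncate to 16 bits.
    return (a * 3) & 0xFFFF

# "Simulation": random stimulus almost never hits the all-ones output.
hits = sum(1 for _ in range(1_000) if dut(random.randrange(1 << 16)) == 0xFFFF)
print("random hits in 1000 cycles:", hits)                  # almost always 0

# "Formal-style" complete search: find every input that reaches all-ones,
# which also proves there is exactly one such case.
solutions = [a for a in range(1 << 16) if dut(a) == 0xFFFF]
print("inputs producing 0xFFFF:", solutions)                # [21845]
```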

Spec issues

Theo’s next example is based on a bug vending machine (so called because when you press a button you get a bug). This looks like a pretty straightforward C to RTL equivalence check problem, C model on the left, RTL model on the right. One surprise for Theo in his early days in formal was that right-shift behavior in the C-model is not completely defined in the C standard, even though gcc will behave reasonably. However, DPV will complain about a mismatch in a comparison with the RTL, as it should. Undefined behavior is a dangerous thing to rely on.

Spec comparison between C and RTL comes with other hazards, especially around bit widths. Truncation or loss of a carry bit in an intermediate signal (#3 above) are good examples. Are these spec issues? Maybe a gray area between spec and implementation choices.
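The Python sketch below mimics both hazards on made-up 16-bit and 8-bit signals: an arithmetic versus logical right shift of a negative value, and a carry lost in an intermediate signal that is one bit too narrow. The widths and values are chosen purely for illustration.

```python
MASK16 = 0xFFFF

# 1. Right shift of a negative value: a C model may shift arithmetically (sign-extending),
#    while RTL treating the signal as an unsigned vector shifts in zeros.
x = -2                                    # 0xFFFE as a 16-bit two's-complement pattern
arith   = (x >> 1) & MASK16               # Python's >> is arithmetic: 0xFFFF
logical = (x & MASK16) >> 1               # shift the raw 16-bit pattern: 0x7FFF
print(hex(arith), "vs", hex(logical))     # the kind of mismatch a checker would flag

# 2. A carry lost in an intermediate signal that is one bit too narrow.
a, b = 200, 100
wide   = a + b                            # 300 needs 9 bits
narrow = (a + b) & 0xFF                   # 44: the carry bit is silently dropped
print(wide, "vs", narrow)
```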

Beyond equivalence checking

The primary purpose of DPV, it would seem, is to check equivalence between a C or RTL reference and an RTL implementation. But that need is relatively infrequent and there are other useful ways such a technology might be applied, if a little out of the box. First a classic in the implementation world – I made a change, fixed a bug – did I introduce any new bugs as a result? A bit like SEQ checking after you add clock gating. Reachability analysis in block outputs may be another useful application in some cases.

Theo gets even more creative, asking trainees to use counterexamples to better understand the design, solve Sudokus or factorize integers. He acknowledges DPV may be an odd way to approach such problems but points out that his intent is to break the illusion that DPV is only for equivalence checking. An interesting idea and certainly brain-stretching to think through such challenges. (I confess I immediately started thinking about the Sudoku problem as soon as he mentioned it.)

Wrap up

Theo concludes with a discussion on methodologies important in production usage, around constraints, regressions and comparisons with legacy RTL models. Also the challenges in knowing whether what you are checking actually matches the top-level natural language specification.

Very energizing talk, well worth watching here on SolvNet!


eFPGA goes back to basics for low-power programmable logic
by Don Dingee on 03-21-2023 at 10:00 am

Renesas ForgeFPGA Evaluation Board features Flex Logix EFLX 1K low-power programmable logic tile

When you think “FPGA,” what comes to mind? Massive, expensive parts capable of holding a lot of logic but also consuming a lot of power. Reconfigurable platforms that can swallow RTL for an SoC design in pre-silicon testing. Big splashy corporate acquisitions where investors made tons of money. Exotic 3D packaging and advanced interconnects. But probably not inexpensive, small package, low pin count, low standby power parts, right? Flex Logix’s eFPGA goes back to basics for low-power programmable logic that can take on lower cost, higher volume, and size-constrained devices.

Two programmable roads presented a choice

At the risk of dating myself, my first exposure to what was then called FPGA technology was back when Altera brought out their EPROM-based EP1200 family in a 40-pin DIP package with its 16 MHz clock, 400 mW active power and 15 mW standby power. It came with a schematic editor and a library of gate macros. Designers would draw their logic, “burn” their part, test it out, throw it under a UV lamp and erase it if it didn’t work, and try again.

Soon after, a board showed up in another of our labs with some of the first Xilinx FPGAs. These were RAM-based instead of EPROM-based – bigger, faster, and reprogramming without the UV lamp wait or removing the part from the board. The logic inside was also more complex, with the introduction of fast multipliers. These parts could not only sweep up logic but could also be used to explore custom digital signal processing capability with rapid redesign cycles.

That set off the programmable silicon arms race, and a bifurcation developed between the PLD – programmable logic device – and the FPGA. Manufacturers made choices, with Altera and Xilinx taking the high road of FPGA scalability and Actel, Lattice, and others taking the lower road of PLD flexibility for “glue logic” to reduce bill-of-materials costs.

eFPGA shifts the low-power programmable logic equation

All that sounds like a mature market, with a high barrier to entry on one end and a more commoditized offering on the other. But what if programmable logic was an IP block that could be designed into any chip in this fabless era – including a small, low-power FPGA? That would circumvent the barrier (at least in the low and mid-range offerings) and commoditization.

Flex Logix took on that challenge with the EFLX 1K eFPGA Tile. Each logic tile has 560 six-input look-up tables (LUTs) with RAM, clocking, and interconnect. Arraying EFLX tiles gives the ability to handle various logic and DSP roles. But its most prominent features may be its size and power management.

Fabbed in TSMC 40ULP, the EFLX 1K tile fits in 1.5 mm² and offers power-gating for deep sleep modes with state retention – much more aggressive than traditional PLDs. EFLX 1K also has production-ready features borrowed from FPGAs. It presents AXI or JTAG interfaces for bitstream configuration, readback circuitry enabling soft error checking, and a test mode with streamlined vectors improving coverage and lowering test times.

See the chip in the center of this next image? That’s a ForgeFPGA from Renesas in a QFN-24 package, based on EFLX 1K IP, which Renesas offers at sub-$1 price points in volume. Its standby target current checks in at less than 20 µA. Smaller size, lower cost, and less power open doors previously closed to FPGAs. The lineage of ForgeFPGA traces back to Silego Technology, then to Dialog Semiconductor, acquired by Renesas in 2021.

Renesas brings the Go Configure IDE environment, putting a graphical user interface on top of the Flex Logix EFLX compiler. It supports mapping ForgeFPGA pins, compiling Verilog, generating a bitstream, and includes a lightweight logic analyzer.

Among the pre-built application blocks for the ForgeFPGA there is an interesting one that Flex Logix’s Geoff Tate points out: a UART. Creating a UART in logic isn’t all that difficult, but it turns out that everyone has gone about it differently, and it’s just enough logic to need more than a couple of discrete chips. A ForgeFPGA is a chunk of reconfigurable logic that can solve that problem, allowing one hardware implementation to adapt quickly to various configurations.
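To see how much room for variation there is, here is a hypothetical Python helper that builds a single UART frame as a list of bits. Changing the parity or stop-bit knobs produces a different frame for the same byte, which is exactly the kind of divergence a small block of reconfigurable logic can absorb.

```python
def uart_frame(byte, parity=None, stop_bits=1):
    """Build one UART frame: start bit, 8 data bits LSB-first, optional parity, stop bit(s).
    Hypothetical helper -- real implementations differ in exactly these knobs."""
    data = [(byte >> i) & 1 for i in range(8)]
    bits = [0] + data                              # start bit is low, then data LSB-first
    if parity == "even":
        bits.append(sum(data) % 2)
    elif parity == "odd":
        bits.append(1 - sum(data) % 2)
    return bits + [1] * stop_bits                  # stop bits are high

print(uart_frame(0x55))                               # 8N1 framing
print(uart_frame(0x55, parity="even", stop_bits=2))   # 8E2: same byte, different frame
```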

ForgeFPGA is just one example of what can be done with the Flex Logix EFLX 1K eFPGA Tile. Flex Logix can adapt the IP for various process nodes, and the mix-and-match tiling capability offers scalability. It achieves new lows for low-power programmable logic and allows chip makers to differentiate solutions in remarkable ways. For more info, please visit:

Flex Logix EFLX eFPGA family

Also Read:

eFPGAs handling crypto-agility for SoCs with PQC

Flex Logix: Industry’s First AI Integrated Mini-ITX based System

Flex Logix Partners With Intrinsic ID To Secure eFPGA Platform