
VC Formal Enabled QED Proofs on a RISC-V Core
by Bernard Murphy on 08-10-2023 at 6:00 am

The Synopsys VC Formal group has a real talent for finding industry speakers to talk on illuminating outside-the-box topics in formal verification. Not too long ago I covered an Intel talk of this kind. A recent webinar highlighted the use of formal methods together with a cool technique I have covered elsewhere called Quick Error Detection (QED). This for me is a good example of what really makes formal so fascinating – not so much the engines behind the scenes as the intellectual freedom they enable in solving a problem. Frederik Möllerström Lauridsen, a verification engineer at SyoSil, shared his experience using this method with Synopsys VC Formal for proofs on a RISC-V core.

VC Formal Enabled QED Proofs

The verification objective

Considering only the base ISA plus possible custom extensions, Frederik wanted a generic setup for RISC-V cores, in part through how they define their SVA assertions. He doesn’t go into detail in his talk, but I believe this means assertions which reference only the start and end of the pipeline, not the internals or the number of cycles required to complete. His goal is to detect both single-instruction bugs and multi-instruction bugs. Single-instruction bugs are relatively easy to find, but multi-instruction bugs are harder to uncover because they depend on context, for example on stalls without which register read/write conflicts might occur.

Single-instruction bugs (e.g. does an ADD really add) are not context dependent, so they can be checked by running the instruction through an otherwise empty pipeline. But multi-instruction bugs are context specific. How can you verify against all legal contexts? To see how, first you need to understand a little about QED.

QED

Quick Error Detection (QED) is a method first invented for post-silicon validation. There you start with machine-level code and regularly duplicate instructions reading and writing through a parallel set of registers / memory locations. You then compare original values with duplicated values; a difference signals an error. Similar techniques are migrating to pre-silicon verification, for an interesting reason. The intent is to regularly compare consistency between parallel implementations, with the promise that root cause errors may be caught long before being flagged by some more functionally meaningful assertion we might think to write. (Incidentally, this technique is not limited to formal verification. It is just as valuable in dynamic verification.)
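
To make the duplicate-and-compare idea concrete, here is a minimal Python sketch. It is purely illustrative and not the flow described in the webinar: the toy pipeline model, register values, and the injected missing-stall bug are all assumptions invented for the example.

```python
# Illustrative QED-style check: every instruction is executed against the
# original registers and against a duplicated (shadow) set, and the two
# results are compared. The toy "pipeline" has a deliberate
# context-dependent bug so the check has something to catch.

def buggy_execute(op, a, b, prev_dst, src1):
    """Toy pipeline model: if an operand was written by the immediately
    preceding instruction, pretend a missing stall forwards a stale zero."""
    a_eff = 0 if src1 == prev_dst else a   # injected multi-instruction bug
    if op == "ADD":
        return a_eff + b
    if op == "SUB":
        return a_eff - b
    raise ValueError(op)

def run_with_qed(program):
    regs, shadow = [1, 2, 3, 4], [1, 2, 3, 4]
    prev_dst = None
    for op, dst, s1, s2 in program:
        regs[dst] = buggy_execute(op, regs[s1], regs[s2], prev_dst, s1)
        # The duplicated copy runs in an "empty" context (no preceding write),
        # mimicking a single-instruction reference execution.
        shadow[dst] = buggy_execute(op, shadow[s1], shadow[s2], None, s1)
        if regs[dst] != shadow[dst]:
            print(f"QED mismatch after {op} r{dst}: {regs[dst]} != {shadow[dst]}")
            return
        prev_dst = dst
    print("no mismatch")

# The second instruction reads r0 immediately after it was written,
# so the injected bug corrupts the original copy but not the shadow copy.
run_with_qed([("ADD", 0, 1, 2), ("SUB", 3, 0, 1)])
```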

Combining formal methods and QED

To apply QED you need a reference design and a design under test (DUT). Here the reference design is a single-instruction pipeline test, e.g. pushing an ADD instruction through an otherwise empty pipeline. In parallel the DUT will push through the same instruction, but how do you define context as an arbitrary selection of possible surrounding instructions? For this Frederik used a variant on QED called C-S2QED.

Without dropping too much into the technical weeds, S2 means “symbolic state”, which allows for arbitrary instructions going through the pipeline, constrained so that the first instruction entering the pipeline is the same as the instruction entering the reference pipeline. The “symbolic” part of this is key. It is not necessary to define what other instructions are going through. These are only constrained to be legal instructions. Since we are applying formal methods, all possibilities will be considered together in proofs. The other neat trick Frederik applied was first to demonstrate that all instructions would pass through the pipeline within at most a fixed number of cycles, providing a limit for bounded proofs.

Now using the QED methodology, comparing the reference design and DUT through formal methods provides proof that there are no multi-instruction bugs in the pipeline implementation, or it provides a counterexample. Pretty cool! Frederik did acknowledge that they had not extended their method to any of the standard RISC-V ISA extensions (M, A, F, etc.), though you could use VC Formal DPV for the M extension, and no doubt clever folks can come up with creative possibilities for other extensions.

Very cool stuff. You can register to watch the webinar HERE.

For enthusiasts of this line of thinking check out a blog I wrote back in 2018, on the Wolper method to verify the correctness of data transport logic in network switches or on-chip interconnects or memory subsystems. I love the way formal has been applied so creatively in QED and in Wolper. There must be more opportunities like this 😊


Elon Musk is Self-Aware
by Roger C. Lanctot on 08-09-2023 at 10:00 am

“I think we’ll be better than human by the end of the year.” – Elon Musk, CEO, Tesla

Parsing the impact of the latest Tesla earnings call featuring CEO Elon Musk has become an eerie out-of-body experience. The comments of the CEO are simultaneously assessed in real time and in retrospect as they are being spoken. It is automotive history in the making – a latter-day Henry Ford undoing much of what that pioneer created. Think: re-vision (not division) of labor.

There is, of course, the prosaic assessment of projected earnings hits or misses – and the “markets” chose a negative response to Musk’s otherwise euphoric take on the company’s prospects. What was hard to ignore was the company’s ongoing success in the face of multiple macroeconomic obstacles and Musk’s own musings on his own path.

At one point he described himself as “the boy who cried FSD” – referring to the controversial full-self-driving capability available to new Tesla buyers for $15,000. This is the same FSD that is still not quite living up to its name.

Musk’s level of self-awareness is hard to ignore or avoid. One can only imagine what it’s like to read about yourself on a daily, hourly basis. In real-time Musk must come to grips with who he is, who he thinks he is, and who everyone else thinks he is or what they think of him.

Maintaining one’s grip on reality in these circumstances is itself no small feat. For Musk it is made even more complex by the fact that there is the Tesla Musk, the SpaceX Musk, the Twitter Musk, the x.AI Musk etc. etc. Everyone has their own Musk.

The Tesla Musk is probably the most interesting and palatable. But the Tesla Musk is not without his skeptics and critics taking into account unfulfilled full-self-driving forecasts, ongoing investigations of fatal crashes, and price cut and vehicle delivery flip-flops.

The most disturbing aspect of the latest Tesla earnings call with Musk is his comprehensive grasp of the technical issues (software, AI, battery tech) facing his company and the industry and his willingness to discuss those challenges and the company’s plans to overcome them. Perhaps even more important is Musk’s discussion of how the company has already overcome them.

Musk wastes no time getting to two of what may be the biggest questions facing the automotive industry:

  • How to enhance cars in such a way to improve safety and reduce highway fatalities.
  • How to hire and retain talent to work on cars.

Musk says nothing of “vision zero” platitudes and plans. After all, talking about vision zero, these days, is like talking about climate change. We feeble little humans have set off global climate shifts that will require decades if not centuries to reverse. In the same way, a million human beings are dying annually on a global scale on roadways – a reality that will be equally difficult to correct.

As the pied piper of electric vehicles, Musk is taking on both these global challenges at once – and can already point to some success.

For Musk the answer lies in a unified theory of “autonomy.” It will take mountains of data to improve and achieve full-self-driving, which will require a limitless supply of processing power (much of it from Nvidia), to achieve the objective of superhuman driving capability – a 10x-100x improvement on human driving – which still won’t get “us” to zero fatalities.

Just as Musk acknowledged, on the earnings call, the expanding adoption of Tesla’s fast charging connector and network technology by car makers such as General Motors and Ford Motor Company, he hinted at the prospect of the first car maker licensee of Tesla FSD technology. No names yet.

No other car company is even close to the required level of data collection and processing that Musk has already put in place and is expanding daily. In the context of achieving this ultimate goal of safe self-driving, the $15,000 price tag for FSD will seem trivial, he says, but even so a subscription-based alternative could be made available.

In a world where we have routinely been “sold” by “legacy” auto makers on the wonders and attractions and liberation of human driving, Musk has made machine-assisted driving aspirational. It is for this and other reasons that analysts and shareholders hang on his every word.

Notably, Tesla is fundamentally rewiring the consumer mindset regarding cars and driving in such a way that it is now short-circuiting the value of mass market automobile advertising. Increasingly, television, radio, or Internet advertising targeted at traditional internal combustion vehicle value propositions is missing its mark. Tesla does little advertising of its own.

I may only be speaking for myself, but as an EV owner my experience of TV advertising for ICE vehicles has been permanently altered. These ads are only interesting to me, now, as historical artifacts.

As for hiring and retaining the personnel necessary to achieve Musk’s dreams and Tesla’s objectives, Musk talks about interviewing and recruiting candidates who essentially don’t want to work for Tesla. By expanding his endeavors with SpaceX and, most recently, x.AI, Musk has been able to hire and retain top performers whose contributions to other efforts convey a collateral benefit to Tesla.

Musk is following in the footsteps of auto industry founders who also diverted the efforts of their engineers into non-automotive endeavors. Car companies today have strangely lost the luster of past non-automotive forays.

At the very beginning of the earnings call Musk noted record vehicle production (nearing 2M annualized) and revenue ($25B) and talked about anticipating “quasi-infinite” demand for a future dedicated “robotaxi.” If any organization could make robotaxis popular, it would be Tesla.

Musk has thrust Tesla to the forefront of autonomous vehicle and artificial intelligence development. While we worry about the machines becoming self-aware, a self-aware Elon Musk is oddly reassuring. He knows how he sounds. He knows what we’re thinking – even as he is altering the way we think. Don’t be frightened, but do be aware, like Elon.

Also Read:

Xcelium Safety Certification Rounds Out Cadence Safety Solution

Sondrel Extends ASIC Turnkey Design to Supply Services From Europe to US

Automotive IP Certification


Insights into DevOps Trends in Hardware Design
by Bernard Murphy on 08-09-2023 at 6:00 am

DevOps

Periodically I like to check in on the unsung heroes behind the attention-grabbing world of design. I’m speaking of the people responsible for the development and deployment infrastructure on which we all depend – version control, testing, build, release – collectively known these days as DevOps (development operations). I met with Simon Butler, GM of the Methodics BU at Perforce to get his insights on directions in the industry. Version control proved to be just the tip of what would eventually become DevOps. I was interested to know how much the larger methodology has penetrated the design infrastructure (hardware and software) world.

Software and DevOps

DevOps grew up around the software development world, where it is evolving much faster than in hardware development. Early in-house Makefile scripts and open-source version control (RCS, SCCS) quickly progressed into more structured approaches, built around better open-source options combined with commercial tools. As big systems based on a mix of in-house and open/commercial development grew and schedules shrank, methods like CI/CD (continuous integration / continuous deployment) and agile became more common, spawning tools like Jenkins. Cloud-based CI/CD added further wrinkles with containers, Kubernetes and microservices. How far we have come from the early days of ad-hoc software development.

Why add all this complexity? Because it is scalable, far more so than the original way we developed software. Scalable to bigger and richer services, to larger and more distributed development teams, to simplified support and maintenance across a wide range of platforms. It is also more adaptable to emerging technologies such as machine learning, since the infrastructure for such technologies is packaged, managed, and maintained through transparent cloud/on-prem services.

What about hardware design?

Hardware design and design service teams have been slower to fully embrace DevOps, in some cases because not all capabilities for software make sense for hardware, in other cases because hardware teams are frankly more conservative, preferring to maintain and extend their own solutions rather than switch to external options. Still, cracks are starting to appear in that cautious approach.

Version control is one such area. Git and Subversion are well established freeware options but have scaling problems for large designs across geographically distributed development, verification, and implementation organizations. Addressing this challenge is where commercial platforms like Perforce Helix Core can differentiate.

In more extensive DevOps practices, some design teams are experimenting with CI/CD and Agile. During development, a new version of a lower-level block is committed after passing quality checks. That commit triggers workspaces, ready to roll with subset regression tests, which run the new candidate automatically, all managed by Jenkins.
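
A hedged sketch of how such a commit-triggered subset regression might be wired up; the block-to-test mapping, file names, and make targets below are invented for illustration and are not a real Perforce or Jenkins API.

```python
import subprocess

# Map design blocks to the regression subset worth running when they change.
BLOCK_TESTS = {
    "rtl/alu":   ["tests/alu_smoke", "tests/alu_corner"],
    "rtl/fetch": ["tests/fetch_smoke"],
}

def select_regression(changed_files):
    """Reduce a commit's changed files to the subset of tests worth running."""
    blocks = {b for f in changed_files for b in BLOCK_TESTS if f.startswith(b)}
    return [t for b in sorted(blocks) for t in BLOCK_TESTS[b]]

def run_subset(changed_files):
    for test in select_regression(changed_files):
        # In a real flow a CI engine such as Jenkins would dispatch these
        # jobs to a compute farm rather than running them locally.
        subprocess.run(["make", "-C", test, "run"], check=False)

run_subset(["rtl/alu/adder.sv"])   # a commit touching the ALU triggers only ALU tests
```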

Product lifecycle management (PLM) has been common in large system development for decades. Cars, SoCs, and large software applications are built around many components, some legacy, some perhaps open source, some commercial. Each evolves through revisions, some of which have known problems discovered in design or in deployment, and some of which are adapted to special needs. Certain components may work well with other components but not with all. PLM can trace such information, providing critical input to system audits and signoffs.

In managing such functions in DevOps, design teams have two choices – fully develop their own automation or build around widely adopted tools. Some go for in-house for all the usual reasons, though management sentiment is increasingly leaning to proven flows in response to staffing limitations, risks in adding yet more in-house software, and growing demand for documented traceability between requirements, implementation, and testing. While management attitudes are still evolving, Simon believes organizations will inevitably move to proven flows to address these concerns.

Cloud

The state of DevOps adoption in hardware is somewhat intertwined with cloud constraints. For software there are real advantages to being in the cloud since that is often the ultimate deployment platform. The same case can’t be made for hardware. Simon tells me that based on multiple recent customer discussions there is still limited appetite for cloud-based flows, mostly based on cost. He says all agree with the general intent of the idea, but these plans are still largely aspirational.

This is true even for burst models. For hardware design and analytics, input and output data volumes are unavoidably high. Cloud costs for moving and storing such volumes are still challenging, undermining the frictionless path to elastic expansion we had hoped for. Perhaps at some point big AI applications only practical in the cloud (maybe generative methods) may tip the balance. Until then, heavy cloud usage by in-house design groups may struggle to move beyond the aspirational.

Interest in unifying hardware and software DevOps

Are there other ways in which software and hardware can unify in DevOps? One trend that excites Simon is customers looking for a unified software and hardware Bill of Materials.

The demand is for clear visibility into dependencies between software and hardware, for example, does this driver work with this version of the IP? Product teams want to understand re-use dependencies between hardware and software components in the stack. They need insight into questions which PLM and traceability can answer. In traceability, one objective is to prove linkage between system requirements, implementation, and testing. Another is to trace between component usages and known problems in other designs using the same component. If I find a problem in a design I’m working on right now, what other designs, quite possibly already in production, should I worry about? Traceability must cross from software to hardware to be fully useful in such cases.
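
That last question maps naturally onto a simple bill-of-materials query. Here is a minimal sketch of the idea; the data model and the entries in it are invented for illustration and do not represent the Perforce/Methodics data model.

```python
# design -> list of (component, version) pairs it consumes
DESIGN_BOM = {
    "soc_a": [("ddr_phy", "2.1"), ("usb_ctrl", "1.0")],
    "soc_b": [("ddr_phy", "2.1"), ("pcie_ctrl", "3.0")],
    "soc_c": [("ddr_phy", "2.2")],
}

def designs_using(component, version):
    """Return every design whose BOM includes this exact component version."""
    return [d for d, bom in DESIGN_BOM.items() if (component, version) in bom]

# A problem found in ddr_phy 2.1 while working on soc_a also implicates soc_b.
print(designs_using("ddr_phy", "2.1"))   # ['soc_a', 'soc_b']
```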

Interesting discussion and insights into the realities of DevOps in hardware design today. You can learn more about Perforce HERE.


Breakthrough Gains in RTL Productivity and Quality of Results with Cadence Joules RTL Design Studio
by Kalar Rajendiran on 08-08-2023 at 10:00 am

Joules RTL Design Studio Benefits

Register Transfer Level (RTL) is a crucial and valuable concept in digital hardware design. Over the years, it has played a fundamental role in enabling the design of complex digital chips. By abstracting away implementation details and technology-dependent aspects, RTL provides a manageable, technology-agnostic description of digital behavior, and this has contributed significantly to the advancement and widespread adoption of digital design methodologies. RTL also provides a basis for design exploration and optimization: engineers can modify the RTL code to explore various design alternatives and identify the most efficient solutions.

While the chip design process benefits tremendously from the use of RTL, the designs need to be synthesized and taken through the layout process before the chips can be manufactured. Tools for synthesis and place and route rely on RTL as input to generate the physical layout of the chip. This transition comes with several challenges that designers need to address to ensure a successful and optimal chip implementation. Physical design constraints such as area, power and routability constraints must be satisfied during the layout process while considering the characteristics and limitations of the target process technology and manufacturing process. Power integrity, signal integrity, design for manufacturability (DFM) and many more requirements need to be addressed as well.

As designs grow in complexity, productivity and turnaround time become significant challenges during the RTL-to-layout transition. That transition often involves iterative processes where designers must go back to the RTL level to make modifications and then repeat the layout process. Efficient iteration management is crucial to avoid time-consuming and costly iterations. It is in this context that Cadence’s recent announcement of the Joules RTL Design Studio takes on significance. It promises to deliver up to 5X faster RTL convergence and up to 25% improved Quality of Results (QoR) when compared with traditional RTL design approaches.

Actionable Intelligence

The driving force behind the Joules RTL Design Studio lies in its ability to provide RTL designers with actionable intelligence and rapid insight into physical effects. This capability enables design teams to address potential issues early in the design process, leading to reduced iterations, thus speeding time to market. Front-end designers can now access digital design analysis and debugging capabilities from a single, unified cockpit, streamlining the design process and ensuring a fully optimized RTL design before implementation handoff. This provides the physical design tools a strong starting point.

Intelligent RTL Debugging Assistant System

Joules RTL Design Studio further distinguishes itself with an intelligent RTL debugging assistant system. It provides early power, performance, area and congestion (PPAC) metrics and actionable debugging information throughout the design cycle, including logical, physical, and production implementation stages. Engineers can thoroughly explore “what-if” scenarios and identify potential resolutions with ease. This not only saves valuable time but also improves the overall design outcomes, leading to more efficient chip designs.

Integrated AI Platform

A key highlight of this solution is its integration with Cadence Cerebrus, an AI-driven solution for design flow optimization, and the Cadence JedAI Platform, which facilitates big data analytics. By leveraging generative artificial intelligence (AI) for RTL design exploration and comprehensive analytics with Cadence’s leading AI portfolio, designers gain new insights into design space scenarios, floorplan optimization, and frequency versus voltage tradeoffs. This opens up new possibilities for creative exploration and significantly enhances design productivity.

The software’s capabilities are based on proven engines, shared with Cadence’s Innovus Implementation System, Genus Synthesis Solution, and Joules RTL Power Solution. This integration allows users to access all analysis and design exploration features from a single intuitive graphical user interface (GUI), ensuring an optimal QoR and a seamless design experience.

Incorporating lint checker integration, Joules RTL Design Studio empowers engineers to run lint checkers incrementally. This capability helps rule out data and setup issues upfront, effectively reducing errors and accelerating the design completion process. The unified cockpit experience offered by the software caters to the specific needs of RTL designers, providing physical design feedback, localization, and categorization of violations, bottleneck analysis, and cross-probing between RTL, schematic, and layout. This user-friendly interface streamlines the design workflow and fosters productivity.

Intelligent System Design

Joules RTL Design Studio plays a vital role in Cadence’s broader digital full flow. This integrated flow offers customers a faster path to design closure, ensuring efficient and successful chip design. The tool aligns well with Cadence’s Intelligent System Design strategy, empowering engineers to achieve excellence in system-on-chip (SoC) design.

Summary

The impact of this innovation extends to all aspects of physical design, from power and performance to area and congestion. By incorporating advanced technologies like machine learning, big data analytics, and generative artificial intelligence, Cadence has engineered a powerful solution that empowers designers to achieve optimized RTL designs faster with improved QoR.

Customers from various industries have endorsed its powerful capabilities and the benefits it brings to their design processes. For details, refer to the Joules RTL Design Studio press release.

For more information, visit the Joules RTL Design Studio product page.

Also Read:

Cadence and AI at #60DAC

Automated Code Review. Innovation in Verification

Xcelium Safety Certification Rounds Out Cadence Safety Solution


DVCon India 2023 | Keynote: “Journeying Beyond AI: Unleashing the Art of Verification”
by Daniel Nenni on 08-08-2023 at 10:00 am

Keynote by Sivakumar DVCon India 2023

DVCon India 2023 | Keynote: “Journeying Beyond AI: Unleashing the Art of Verification” by Sivakumar P R, Founder & CEO, Maven Silicon

Get Ready for an Epic Tech Odyssey with the keynote, ‘Journeying Beyond AI: Unleashing the Art of Verification’, by P. R. Sivakumar, Founder, and CEO, Maven Silicon.

The semiconductor industry is undergoing a transformative shift, embracing novel design methodologies and innovative flows to meet the demands of a rapidly evolving technological landscape. In this keynote address, we will explore how these advancements, such as AI-driven Electronic Design Automation (EDA), System of Chips (SoCs) utilizing Chiplets with UCIe, and cutting-edge 2.5D and 3D advanced packaging techniques, are revolutionizing chip production. This transformative journey positions the semiconductor industry to emerge as a trillion-dollar market by 2030, fueled by the creation of complex chips boasting trillions of transistors.

The rise of disruptive technologies, such as AI, cloud computing, and autonomous vehicles, has sparked a pressing need for sophisticated SoCs and chips specially designed to cater to these domains. These intricate designs incorporate standard CPUs, GPUs, FPGAs, and specialized AI accelerators, providing the foundation for groundbreaking innovation. With AI serving as a key driver for progress, its pervasive influence is permeating every industry sector.

Within the realm of EDA, machine learning has emerged as a vital tool, significantly enhancing the efficiency of the design and verification processes. Leveraging the power of machine learning, we are propelled towards the adoption of AI-driven EDA, facilitating the creation of advanced chips that fuel the growth and proliferation of emerging technologies. During this keynote, we will delve into the uncharted territory of verification challenges stemming from these new designs. Furthermore, we will illustrate how AI-driven EDA empowers verification engineers to efficiently validate these state-of-the-art chips, enabling them to unleash their creative potential and innovate with unprecedented freedom.

To know more, click here

About Maven Silicon
Maven Silicon is a trusted VLSI training partner that helps organizations worldwide build and scale their VLSI teams. We provide outcome-based VLSI training across a variety of learning tracks, e.g. RTL Design, ASIC Verification, DFT, Physical Design, RISC-V, and ARM, delivered through our cloud-based customized training solutions. To know more about us, visit our website.

Also Read:

Upskill Your Smart Soldiers and Conquer the Chip War in Style!

Chip War without Soldiers

Maven Silicon’s RISC-V Processor IP Verification Flow


The Era of Flying Cars is Coming Soon
by Ahmed Banafa on 08-08-2023 at 6:00 am

For decades, the concept of flying cars has captivated our imagination, fueling visions of a future where we can soar above the ground, free from the constraints of traffic and congestion. While once considered purely the stuff of science fiction, recent advancements in technology have brought us closer to turning this fantasy into a reality. Electric vertical takeoff and landing (eVTOL) vehicles, commonly known as flying cars, hold the promise of revolutionizing transportation, offering new levels of efficiency, convenience, and accessibility. It’s important to explore the needs driving the development of flying cars, the challenges they face, the benefits they offer, the risks involved, and what the future holds for this transformative technology.

Needs for Flying Cars

·      Congestion and Traffic Woes: Growing urbanization and population density have led to increasingly congested roads in cities around the world. Commuting times have become longer, and frustration levels have risen. Flying cars could alleviate these problems by utilizing the airspace, bypassing traffic and reducing travel times. This could lead to more efficient transportation and improved overall mobility.

·      Transportation Accessibility: Flying cars have the potential to address accessibility issues by providing transportation options for areas with limited infrastructure. Remote regions, islands, and disaster-stricken areas could benefit greatly from the ability to fly above ground-based obstacles, connecting previously isolated communities. Flying cars could bridge the gap between urban and rural areas, fostering economic development and social integration.

·      Rapid Emergency Response: Flying cars could revolutionize emergency services by enabling faster response times and facilitating the transportation of medical supplies, organs for transplantation, and injured individuals to hospitals. In situations where time is critical, such as during natural disasters or in hard-to-reach locations, flying cars could make a significant difference in saving lives and minimizing the impact of emergencies.

Challenges of Flying Cars

·      Infrastructure Requirements: The widespread implementation of flying cars requires the development of a comprehensive infrastructure framework. This includes establishing designated landing and takeoff zones, creating charging stations for electric vehicles, designing efficient air traffic management systems, and establishing regulations to ensure safe and efficient operations. Building this infrastructure will be a significant challenge that requires careful planning and coordination.

·      Safety and Reliability: Ensuring the safety and reliability of flying cars is of paramount importance. New technologies, such as autonomous flight systems, collision avoidance mechanisms, and fail-safe protocols, must be developed and rigorously tested to minimize the risk of accidents and malfunctions. Safety standards and certifications will need to be established to instill public confidence in this emerging mode of transportation.

·      Noise Pollution: Flying cars introduce the challenge of managing noise pollution in urban areas. The sound of numerous flying vehicles could disrupt the tranquility of residential neighborhoods and potentially cause annoyance or discomfort. Efforts must be made to design quieter propulsion systems and establish regulations to minimize noise emissions, ensuring that the benefits of flying cars do not come at the expense of quality of life for those on the ground.

Benefits of Flying Cars

·      Efficient Urban Mobility: Flying cars have the potential to significantly reduce commuting times by bypassing congested roads. This could lead to increased productivity, improved work-life balance, and enhanced overall quality of life for urban dwellers. Imagine being able to travel across a crowded city in minutes instead of hours, with the freedom to avoid gridlock and traffic congestion.

·      Environmental Sustainability: Electric-powered flying cars have the potential to contribute to environmental sustainability, provided they are powered by renewable energy sources. By shifting transportation from ground-based vehicles to the sky, flying cars could help reduce carbon emissions and mitigate the impacts of climate change. This transition to clean energy-powered transportation could have a positive impact on air quality and the overall health of our planet.

·      Economic Opportunities: The development and deployment of flying cars can stimulate economic growth and create new job opportunities. Manufacturing flying cars, building and maintaining the necessary infrastructure, and managing air traffic control systems all require a skilled workforce. Additionally, new industries and services could emerge around flying car technology, further boosting local economies and fostering innovation.

Risks Associated with Flying Cars

·      Air Traffic Management: The integration of flying cars into existing airspace systems poses significant challenges in terms of air traffic management. Ensuring the safe coexistence of conventional aircraft, drones, and flying cars requires the development of robust communication and navigation systems. Cooperation between aviation authorities, technology providers, and regulators is crucial to establishing effective protocols and infrastructure to manage the complex airspace environment.

·      Cybersecurity: As flying cars become increasingly reliant on software and connectivity, the risk of cybersecurity threats arises. Safeguarding against hacking, system breaches, and data privacy breaches is crucial to ensure passenger safety and protect against potential malicious activities. Strong cybersecurity measures and protocols must be implemented to ensure the integrity and privacy of the systems controlling flying cars.

·      Regulatory Framework: The development of comprehensive regulations and policies is essential to govern the use of flying cars. Striking a balance between innovation and safety, while addressing concerns related to privacy, noise pollution, and liability, requires careful consideration. Governments and regulatory bodies need to collaborate with industry stakeholders to establish a robust regulatory framework that ensures the safe and responsible deployment of flying car technology.

Future Outlook

·      Technology Advancements: Ongoing advancements in electric propulsion, battery technology, autonomous systems, and materials science will contribute to improving the performance, safety, and affordability of flying cars. Continued research and development will likely lead to more efficient and environmentally friendly flying car models in the future.

·      Urban Air Mobility Ecosystems: The successful integration of flying cars will involve the creation of urban air mobility ecosystems. This will require collaboration between vehicle manufacturers, infrastructure developers, air traffic control authorities, policymakers, and communities. Establishing a robust framework that encompasses infrastructure, regulations, and public acceptance is essential for the widespread adoption and safe operation of flying cars.

·      Public Acceptance: Public acceptance is critical for the successful integration of flying cars into society. Transparency in terms of safety, privacy, and environmental impact will play a vital role in fostering public confidence in this revolutionary mode of transportation. Educating the public about the benefits and addressing concerns through effective communication and public engagement initiatives will be crucial for the widespread acceptance and adoption of flying cars.

Flying cars hold the potential to transform transportation and reshape our urban environments. By addressing the needs for efficient mobility, accessibility, and emergency response, flying cars offer promising solutions to the challenges faced by our current transportation systems. However, significant hurdles related to infrastructure development, safety, and regulation must be overcome. With careful management of risks and continued technological advancements, flying cars could usher in a new era of transportation that is efficient, sustainable, and accessible to all. The future of flying cars depends on collaboration between industry, government, and society as we work together to turn this futuristic vision into a tangible reality.

Ahmed Banafa’s books

Covering: IoT, Blockchain and Quantum Computing

Also Read:

Xcelium Safety Certification Rounds Out Cadence Safety Solution

Sondrel Extends ASIC Turnkey Design to Supply Services From Europe to US

Automotive IP Certification


Cadence and AI at #60DAC
by Daniel Payne on 08-07-2023 at 10:00 am

Paul Cunningham from Cadence presented at the #60DAC Pavilion and gave one of the most optimistic visions of AI applied to EDA that I’ve witnessed, so hopefully I can convey some of his enthusiasm and outright excitement in my blog report. Mr. Cunningham reviewed the various ages of EDA design with each era providing about a 10X productivity improvement: Transistor-level, cell-based, RTL reuse, AI-driven system design.

Paul Cunningham, Cadence

Human chip designers are good at intuition, judgement, remembering experiences and understanding context; however, we are limited by our serial thinking patterns. AI, on the other hand, has merits like scalability, parallelization, access to massive data, and the ability to classify data. Reaching the next 10X productivity improvement will take an approach that accelerates design efficiency by keeping the human in the loop, rather than replacing engineers.

In EDA 1.0, there were all of these separate EDA tools, each with their own silo of data, and most of the time was spent waiting to get tool results back so that an engineer could analyze them. Now, with EDA 2.0, all of the tool results are collected as big data, then cataloged and indexed, creating a more holistic viewpoint on the design process.

Within Cadence the data platform is called JedAI—Joint Enterprise Data and AI Platform—which lets engineering teams visualize workflow and design data across some of their tools, so expect it to grow across all of their tools in the future. Another use of AI at Cadence is in running the combination of logic synthesis and P&R tools to achieve better PPA results, and that’s called Cerebrus. In just a short period of time, Cerebrus has been used on more than 180 tapeouts, and using this methodology allows one engineer to do the work of 10 previous engineers, so that’s a big productivity boost and allows engineers to focus on more strategic projects.

On the PCB tool side, the application of AI is called Allegro X AI, and there, engineers are seeing 30-50X improvements on placement and routing, while achieving better QoR.

Functional verification is another hot topic area for applying AI, and the basic question remains, “Is verification ever done?” Verification engineers still need to debug why a test just failed and why coverage goals are not being reached. AI technology can help by creating a triage funnel and answering basic questions like, “Who just checked in recent changes?” AI is used to rank bug locations and help pinpoint which change caused the latest failures. Cadence has also found that machines are better at applying language-processing techniques to waveforms to find patterns and signatures of failures. The product for AI applied to verification is Verisium.

In general, AI can be applied to most NP-complete problems in computer science. Using constraint solving in randomization also shows promise with AI techniques, since AI can learn what was randomized before, so you are not starting all over again. The Xcelium logic simulator uses ML to reach coverage up to 5X faster with the same CPU usage as previous approaches.

For formal logic verification, the Jasper apps have a method where AI guides and helps choose the best proof techniques and can create about 30% more properties versus a manual approach.

Summary

At Cadence, the product groups have been adding AI capabilities to help IC designers, PCB designers and verification engineers become more productive, explore more alternatives, and even improve the quality of results. Yes, a human engineer still has to direct the EDA tools and choose the best results to meet their specific PPA, DFT and DFM goals. The first application of AI is in digital flows where models are trained at the customer site—they don’t get shipped pre-trained.

Cadence has applied AI techniques successfully across many of their tools, so I look forward to more announcements to come.

Related Blogs

 


How Taiwan Saved the Semiconductor Industry
by Daniel Nenni on 08-07-2023 at 6:00 am

Now that semiconductors are front page news and a political football, I would like to write more about how we got to where we are today to better understand where semiconductors will go tomorrow. I will start this article with a provocative quote that really made me laugh and will put some context to what I am trying to accomplish here:

Trump accuses Taiwan of taking away America’s semiconductor business: “Taiwan, they took our business away. We should have stopped them. We should have taxed them. We should have tariffed them,” said Donald Trump, Fox News, July 2023.

I know, American politics is an acquired taste but it is funny just the same. We already covered the fabless transformation in our book “Fabless: The Transformation of the Semiconductor Industry” so let’s start with a quick look at the history of the semiconductor industry in Taiwan (GPT 4.0):

The semiconductor industry in Taiwan has a rich history that spans several decades, starting from the 1970s to the present day. The industry has played a crucial role in Taiwan’s economic transformation from an agrarian economy to one of the world’s top high-tech industrialized nations.

Here are some of the key milestones:

“Formation of Electronics Research and Service Organization (ERSO): In 1973, the Taiwanese government established ERSO under the Industrial Technology Research Institute (ITRI). Its aim was to develop the electronics industry in Taiwan and provide a platform for semiconductor research and development.

Birth of Taiwan Semiconductor Manufacturing Company (TSMC): In 1987, TSMC, the world’s first dedicated semiconductor foundry, was founded by Dr. Morris Chang. The establishment of TSMC initiated the foundry model in the global semiconductor industry, which separated the manufacturing and design aspects of chip production. This allowed fabless companies to focus on designing innovative chip solutions without the need for significant capital investments in manufacturing facilities.

Hsinchu Science and Industrial Park: The Taiwanese government established this park in 1980 to serve as a hub for high-tech companies. It now houses numerous semiconductor companies and has played a significant role in the growth of Taiwan’s semiconductor industry.

Expansion of other major semiconductor companies: Over the years, several other Taiwanese companies have become key players in the global semiconductor industry. These include United Microelectronics Corporation (UMC), the world’s second-largest pure-play semiconductor foundry, MediaTek, a leading fabless semiconductor company, and ASE Technology Holding Co., the world’s largest provider of independent semiconductor manufacturing services in assembly and test.

Leadership in Advanced Process Technology: In recent years, TSMC, in particular, has emerged as a global leader in advanced process technologies, successfully achieving mass production at ever smaller node sizes (7nm, 5nm, 3nm and planning for 2nm and beyond). This has placed Taiwan at the forefront of the global semiconductor industry.

While this summary provides a brief overview, the history of Taiwan’s semiconductor industry is rich and complex, driven by strategic government initiatives, visionary leadership, strong educational programs, and the rise of the global digital economy. As of 2023, Taiwan is one of the world’s largest and most important centers for semiconductor manufacturing.”

Great summary, here is a little color on what happened. When I joined the semiconductor industry in the 1980s it was a challenging decade. Minicomputer companies such as IBM, Hewlett-Packard, Digital Equipment, Data General, Prime Computer, and Wang all had their own fabs across the United States. Unfortunately, due to over-regulation (especially here in California) and the inability to hire skilled workers (sound familiar?), manufacturing of all types left the US for more friendly countries.

Additionally, in the 1980s there were quite a few economic ups and downs, including the crash of 1985. Keeping these very expensive fabs running was difficult, which spawned the IDM foundry business, where US and Japanese semiconductor companies accepted designs from outside customers for contract manufacturing to fill their fabs.

One of the first big fabless companies to do this was FPGA vendor Xilinx (founded in 1984, now owned by AMD). Seiko Epson (Japan) was Xilinx’s first IDM foundry partner. Xilinx quickly outgrew the relationship and moved to UMC and then TSMC, which is where they are today.

Clearly IDM foundries were a stop-gap solution back then since they routinely competed with their customers, and the foundry business had lower margins than the products they manufactured internally, so those internal products always had priority in the fabs.

Also in the 1980s, the ASIC business model was developed by VLSI Technology (founded in 1979) and LSI Logic (founded in 1980). VLSI and LSI accepted designs from fabless companies and manufactured them using internal fabs. But again the cost of the fabs was prohibitive. The ASIC business model is again thriving but it is now populated by fabless ASIC companies who do the design and manage manufacturing through the foundries.

Bottom line: The early IDM foundries and ASIC companies created the perfect storm for the pure-play foundry business model that fully evolved in the 1990s and that is where Dr. Morris Chang comes in.

To be continued… Morris Chang’s journey to Taiwan.

Also Read:

Morris Chang’s Journey to Taiwan and TSMC

Intel Enables the Multi-Die Revolution with Packaging Innovation

TSMC Redefines Foundry to Enable Next-Generation Products


Podcast EP175: The Complexities of Compliance for a Worldwide Supply Chain with Chris Shrope
by Daniel Nenni on 08-04-2023 at 10:00 am

Dan is joined by Chris Shrope. Chris leads high tech product marketing at Model N, a compliance leader for high-tech manufacturers. Chris has deep experience defining product market fit and related new product development activities. He received his MBA and holds certifications in Economics, Law, Product Management and Marketing. For any ocean lovers out there, like Dan, Chris is also an advisory board member of the Inland Ocean Coalition.

Dan explores the evolving government rules associated with semiconductor sales with Chris. The impact geopolitical tensions create is outlined by Chris, along with a discussion of how semiconductor suppliers can ensure compliance. A multi-tier supply chain consisting of distributors and resellers can make it challenging to know exactly where parts are used.

Chris offers several strategies to manage this problem that are based on collaboration and forward visibility, among other approaches.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Harry Peterson of Siloxit
by Daniel Nenni on 08-04-2023 at 6:00 am

Harry Peterson is a mixed-signal chip designer with a BS in Physics from Caltech. He managed IC design groups within Fairchild, Kodak, Philips, Northern Telecom, Toshiba and Pixelworks. During sabbaticals he helped fly experiments on NASA’s orbiting satellite observatory (OSO-8) and build telescopes in the Canary Islands. He is CEO and a co-founder of Siloxit, a startup that has developed an Industrial IoT (IIoT) module that securely monitors and controls operation of the electric grid. He has produced many publications, patents, presentations, and short courses. For fun he swims, hikes and thinks about astronomy.

Tell me about your journey to Siloxit.

I co-founded Siloxit, a startup and a portfolio company in the Silicon Catalyst Incubator. Siloxit is in the business of Distributed Online Condition Monitoring (DOCOM) of infrastructure. The specific infrastructure market we focus on is the energy grid.

Long ago when I was an undergraduate at Caltech, I was lucky enough to get a job building telescopes.  That continued through my grad school days.  Eventually I transitioned into integrated circuit design at the venerable Fairchild Semiconductor, which was a great place to learn device physics and a nice starting point for a lifetime of adventures in mixed signal circuit design.  In 1988 some of my Fairchild colleagues invited me to join them as employee number seven at ACM research, which was the IoT startup founded by Mike Markkula before IoT was a thing.  About three years ago, my friends Nick Tredennick and Eudes Prado Lopes and I decided that IoT for infrastructure was a very compelling concept that nobody had really implemented well, and we seized the opportunity to fix that by founding Siloxit.

Can you talk a little bit more about Fairchild? Because that’s a significant job to have early in your career.

When an opportunity came up to join the team at Fairchild Semiconductor, working at the R&D fab in Palo Alto, I jumped on it. The first assignment that they gave me was working out the device physics of RAM chips to be built in an exotic fabrication process.  IBM loved it, and bought a bunch of the chips.  But the chip consumed too much power to be competitive.

At Siloxit, you are working on technology improvements that can slash the cost of building and operating the grid.  Tell me, what problems are you solving?

The use cases for these kinds of devices come from the challenges of expanding the electric-grid infrastructure. The grid is not doing so well these days. The grid needs to run more efficiently and more cost effectively, the distribution network needs to be expanded, and reliability and security need to be better. Also, here in California, we need to get rid of the failure modes that start wildfires and burn down forests. And big technical challenges remain as we develop solutions that have the agility and stability to deal with low-cost distributed energy resources (DER) such as wind and solar.

The good news is that most of the grid’s problems can be fixed by just using IIoT to properly control and manage the system.  The even better news is that there are some pretty sharp declines in costs on the power-generation side, so the spiraling costs could actually be pushed back down. Going after some of the easy-to-solve problems on the transmission side will likely have a big financial impact.  Technology is going through very fundamental shifts; this is something that has been a hundred years in the making, but now renewable energy and other factors are redefining the grid.

Infrastructure fails way too often. So what do you mean by helping the grid to do its job?

Let’s focus on a key example – power transformers. At first glance, these are just boring hundred-ton hunks of iron that generally don’t even contain any silicon. That’s deceptive. Effectively monitoring and managing their condition turns out to be a sweet physics problem that can be solved with cost-effective IIoT devices.

Now, why should this matter to you? Well, the cost implications are significant.   Each major transformer failure leads to extensive disruptions and subsequent downstream consequences. The good news is that when we deploy IIoT that does the obvious diagnostic homework, we can easily identify which ones are likely to fail next.

By doing so, we can transform an expensive catastrophe into a more affordable scheduled maintenance event. This is the ultimate goal. How do we achieve this? By detecting the failure of an insulator. Insulators have a service life of up to half a century or more, enduring high levels of stress, particularly in harsh climates like my home state of Arizona. As an insulator approaches failure, it sends warning signs, indicating its impending demise. By leveraging our understanding of plasma physics, we can recognize these signals.

These signals manifest as electrical impulses that can be captured from the associated power lines. It’s not a complex task – the main thing we need is a reliable analog-to-digital converter capable of sampling at around 100 megasamples per second with a precision of approximately 14 bits.  Also required is an IIoT system that will be described below.

In a production run of a few hundred thousand units, the cost per unit can be reduced to about a thousand dollars. Considering the financial consequences of transformer failures, which are estimated at approximately $15 million per incident according to ABB, one of the leading transformer manufacturers, the cumulative impact of one in every 200 transformers failing annually is substantial.
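
As a quick sanity check, here is the expected-loss arithmetic implied by the numbers quoted above. The failure rate, incident cost, and module cost are the figures cited in the interview; the comparison itself is my own illustration and assumes monitoring converts failures into scheduled maintenance.

```python
failure_rate = 1 / 200          # one in every 200 transformers failing per year
cost_per_failure = 15_000_000   # ~$15M per incident (ABB estimate cited above)
module_cost = 1_000             # per-unit IIoT monitor cost at volume

expected_annual_loss = failure_rate * cost_per_failure
print(expected_annual_loss)               # $75,000 expected loss per transformer per year
print(expected_annual_loss / module_cost) # ~75x the cost of a monitoring module
```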

To address this, we propose investing a small fraction of the cost of a single transformer failure on an insurance policy, or rather an Industrial Internet of Things solution. This IIoT solution is capable of detecting the precursors to such failures and relaying this information to a central control center. With this system in place, we can organize a prompt and efficient response.

In conclusion, by leveraging advanced sensor technology and IIoT solutions, we have the potential to significantly mitigate the financial and operational risks associated with power transformer failures. Through proactive monitoring and early detection of failure precursors, we can transform these potential disasters into manageable maintenance events.

This is kind of old-school technology, power grids, meeting new-school IoT sensors. What tech trends are you leveraging?

Chiplets, energy harvesting, and the cognitive edge are some of the trends that we leverage. Chiplets facilitate cost-effective heterogeneous integration of the odd mix of technologies required for the needed IIoT. Energy harvesting is often the only practical solution that meets the cost and longevity constraints of our applications. The service lifetime of much of the infrastructure that needs to be monitored and managed by our IIoT devices is very long, commonly exceeding half a century. The notion of meeting such requirements with batteries that ‘only’ last a decade is a complete non-starter. Magnetic-field energy harvesting (MFEH) turns out to be a good solution for many of the use cases we address.

So what is the cognitive edge and why is this essential to you?

It is essential to process data at the edge. The need becomes evident when you consider the cost of transferring a terabyte of data to a distant data center, which includes significant transportation expenses. Ideally, all calculations should be performed at the sensing point itself, following a hierarchical approach.
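
A rough back-of-the-envelope calculation, using the roughly 100 Msps, 14-bit ADC figure quoted earlier (the rates and totals below are my own arithmetic, not figures from the interview), shows why streaming raw sensor data off-site is a non-starter.

```python
# Approximate raw data rate of one ~100 Msps, 14-bit sensing channel.
samples_per_second = 100e6
bits_per_sample = 14

bits_per_second = samples_per_second * bits_per_sample   # ~1.4 Gbit/s
bytes_per_second = bits_per_second / 8                    # ~175 MB/s
hours_to_one_terabyte = 1e12 / bytes_per_second / 3600
print(round(hours_to_one_terabyte, 1))                    # ~1.6 hours to fill a terabyte, per sensor
```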

You can also think of this as a bio-inspired architecture.  The partition for distributing the processing workload in hardware is quite similar to the body’s partition of processing, which puts, for example, a lot of the processing of vision into biological interconnect between the photoreceptors in your retina and the neurons in your brain. Some processing is done at the sensing point, while additional processing can take place at the gateway and other levels.  Successive layers of processing lead to successive layers of data compression.

Cognitive-edge architecture brings additional benefits.  It allows for better control over communication costs and enhances security. Currently, the security of the grid is a major concern due to frequent hacking attempts. However, existing efforts in this regard have proven inadequate. Therefore, our perspective is that specific applications, such as ensuring the integrity of transformers, would benefit from reevaluating the entire process from scratch. This doesn’t imply discarding the existing infrastructure and legacy grid; rather, it means incorporating new components that are not reliant on the ineffective elements of the old system. For instance, leveraging advanced communication chips or low earth orbit satellites, which are already starting to become available, can greatly enhance the system’s capabilities.

Artificial Intelligence is essential for many of the applications we address.  Most engineers and system architects are starting to realize the huge benefits that accrue when you distribute processing rather than just sending all the data from sensors to some central signal-processing block.

What big infrastructure development projects are affected by these developments?

Let’s begin by emphasizing the significance of these projects, even if they are not big in the sense that they always grab major headlines. An example worth highlighting is the recent press release from Iberdrola, a prominent Spanish company in the grid sector. They announced their commitment to invest half a billion dollars in expanding transmission line capacity in Brazil. While individual projects may not seem monumental, the cumulative effect of such opportunities can be substantial.

These developments involve the installation of thousands of kilometers of wire, alongside various accompanying components. The challenge lies in managing an increasingly complex grid with each new generation. Furthermore, the rise of affordable, renewable energy sources adds another layer of complexity to the business landscape. To achieve cost-efficiency, it is crucial to have an agile grid that can adapt to the fluctuations in power supply from sources like wind, solar, and emerging forms of nuclear energy.

Brazil, in particular, is experiencing substantial growth, constructing thousands of kilometers of new transmission lines. These installations must withstand challenging environmental conditions, ensuring both reliability and cost-effectiveness. Additionally, they need to be safeguarded against potential threats, including acts of vandalism or unintended failures caused by external factors. The key to achieving this lies in implementing more advanced condition monitoring systems. Siloxit aims to specialize in providing highly effective condition monitoring solutions while seamlessly integrating them into the communication networks and management structures of the grid.

Why did you choose to work with Silicon Catalyst?

When we decided to build IIoT that would help the grid do its job, we knew we would have to dance with elephants.  In the early days we were just half a dozen folks in a little startup explaining to billion-dollar customers that they should embrace our out-of-the-box thinking about better solutions for the problems they have been working on for a century.  When you first think about how a dozen folks in a little startup can actually interact with the people and institutions that are driving these big-league developments, the answers are not immediately obvious.  Shortly after we founded Siloxit, we joined Silicon Catalyst.  It has been a wonderful experience to see how Silicon Catalyst has helped us connect the dots, allowing us to partner with TSMC and ST Microelectronics and IMEC and Leti and many others.  The list of partners we’ve had the good fortune to work with thanks to Silicon Catalyst is just overwhelming.

I saw that you have a partnership coming up with YorChip. Is there something there you would like to talk about?

Our partnership with YorChip is very exciting. I have collaborated with Kash Johal, the CEO of YorChip, on various projects for many years. It is clear that chiplets offer significant advantages for specific use cases. Siloxit faces challenges that encompass multiple disciplines, requiring us to bring together various resources like threshold sensors, security widgets, processor widgets, A-to-D conversion solutions, and communication components within a very compact module while ensuring cost-effectiveness.

The contemporary approach is to leverage chiplets, which harness the best attributes of highly efficient technologies but in a size format that avoids unnecessary overprovisioning. For instance, an A-to-D conversion task doesn’t require a square centimeter-sized chip; a millimeter-sized chip may suffice. Furthermore, advancements in packaging technology, particularly in sensor, processor chip, A-to-D converter, and antenna domains, enable seamless integration of these chiplets. It’s not merely about assembling IP components from different catalogs but designing chiplets that effectively harmonize with the entire system.

Security is one critical aspect that demands special attention, requiring careful consideration at the logic design and device design levels.

By successfully integrating these chiplets, while utilizing only a small percentage of the available space, we achieve a significant outcome that is far from trivial but well worth the effort. YorChip’s focus aligns precisely with these objectives, and we anticipate incorporating this technology into more Siloxit products moving forward.

Also Read:

CEO Interview: Rob Gwynne of QPT

CEO Interview: Ashraf Takla of Mixel

CEO Interview: Dr. Sean Wei of Easy-Logic