Ramping Up Software Ideas for Hardware Design
by Bernard Murphy on 12-16-2021 at 6:00 am

This is a topic in which I have a lot of interest, covered in a panel at this year’s DAC; Raúl Camposano chaired the session. I had earlier covered a keynote by Moshe Zalcberg at Europe DVCon late in 2020; he now reprises the topic. Given the incredible pace of innovation and scale in software development these days, I don’t see what we have to lose in looking harder for parallels, and in ramping up software ideas for hardware design.

Moshe Zalcberg on why we should think about this

Moshe makes the point that chip design is outrageously expensive, and designers are understandably averse to risky experiments. But as design grows still more expensive, the downside of not looking for new ideas becomes harder to ignore.

He cites the relatively slow change in, for example, verification methodologies versus the rapid evolution of mobile phone technology, semiconductor processes, and the most popular software languages. Verification effort and respin rates are holding level as complexity continues to grow, but he wonders if we could do better. And we aren’t competing only with complexity; we’re also competing with each other. Any team that finds a significant advantage in some way will jump ahead of the rest of us. Yes, change is risky, but so is stasis.

He suggests a range of ideas we might borrow from the software world: open source, Python as a language (especially for test), Agile, continuous integration and deployment (CI/CD), more effective use of data, and of course AI. Tentative steps are already being taken in some areas; we should always be thinking about what else we might borrow from our software counterparts.
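To make the “Python for test” idea concrete, here is a minimal sketch of a constrained-random, self-checking test written in plain Python. The saturating-adder DUT model and all names are hypothetical illustrations, not anything from the talks; real flows would drive an actual simulator through a framework such as cocotb rather than a Python model.

```python
import random

# Hypothetical DUT model: an 8-bit saturating adder. In a real flow this
# would be the RTL, driven through a simulator interface.
def saturating_add(a: int, b: int) -> int:
    return min(a + b, 255)

def run_regression(seed: int, num_tests: int = 1000) -> int:
    """Constrained-random stimulus with a self-checking comparison
    against a reference model; returns the number of mismatches."""
    rng = random.Random(seed)  # seeded so failures are reproducible
    failures = 0
    for _ in range(num_tests):
        a, b = rng.randrange(256), rng.randrange(256)
        expected = min(a + b, 255)  # reference model
        if saturating_add(a, b) != expected:
            failures += 1
    return failures

print(run_regression(seed=42))  # → 0
```

The appeal is less the language itself than the ecosystem around it: seeded randomization, rich libraries, and easy hookup to CI jobs.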

Rob Mains on Open-Source Chip Design

I hear a lot of enthusiasm for open-source EDA, but what about open-source design? The RISC-V ecosystem is showing this can work. Rob Mains is executive director at CHIPS Alliance, whose mission is to encourage collaboration and open-source practices in hardware. CHIPS Alliance is part of the Linux Foundation, which is a good start, and has heavyweight support from Google, Intel, SiFive, Alibaba and many other companies and universities.

Rob sees a primary focus in promoting an open ecosystem, through for example standard bus protocols like OmniXtend and the Advanced Interface Bus between chiplets. He also sees opportunity for certain open-source EDA directions which could change the game, for example an open PDK infrastructure. In this spirit he also mentioned Chisel and Rocket Chip, as well as the BAG family of generators from Berkeley, the FASoC family of tools from the University of Michigan, and layout synthesis from UT Austin.

Rob has some interesting predictions for this decade, for example that 50% or more of designs will be open-source based, and that the path from design entry to implementation will no longer require human intervention. Bold claims. Viewed as moonshots, I’m sure they’ll drive some interesting progress.

Neil Johnson on Agile Design

Neil Johnson, now at Siemens, is a very accomplished thinker and speaker in this domain. He has embraced Agile and related methods wholeheartedly, yet accepts that he lives in a world of skeptics who “don’t buy any of this Agile nonsense”. He starts with his own ten-year journey in Agile, a testament to his credibility in this domain. He follows that with a poem he wrote titled “Your Agile is for Chumps”, a gentle but persuasive walk through counterarguments to the opposition he has heard to Agile methods.

I won’t ruin the experience by attempting to summarize this presentation. You should really watch the video (link below). I will say that he had me convinced, not by beating me over the head with claims that my arguments are wrong, but by gentle reasoning that there’s a different way to look at the components of Agile, and that perhaps traditional approaches aren’t as solid as we think.

Vicki Mitchell on MLOps

This talk, presented by Vicki Mitchell, may require a couple of cognitive jumps for most of us. First you need to understand what DevOps is in the software world. According to AWS, it is “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity.” In other words, not the end software products, but all the infrastructure and ecosystems that support the development of those products. These concepts are creeping into hardware design through adoption of tools like Jama, Jenkins and others. Vicki has presented multiple times on the value of DevOps practices in hardware design.

Now apply that philosophy to ML, particularly ML as adopted in design practices. Hang on tight; this does make sense, but it is mind-bending. Vicki presents it as putting data and machine learning together. The summary I find easiest to understand is that the use of ML in design cannot depend on a one-time training activity. It must continuously improve as new designs are encountered and new data is generated. MLOps is a way to make ML adjust flexibly yet robustly to this landscape of changing data, changing requirements and quite possibly changing models.

When ML becomes part of even a waterfall flow with regressions, or of a CI/CD flow, it must fit into the DevOps machinery: automated testing, pipelining and deployment, so that failing or slow components don’t roadblock the whole flow as tests, design data and constraints change. In a CI/CD flow, everything must support continuous integration and be continuously deployable. There’s a lot more good stuff here and in all the talks. Watch the video.
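The core MLOps loop described above can be sketched in a few lines: retrain a candidate model whenever new regression data arrives, and promote it only if it validates better than the deployed one, so the ML component never silently decays. The “model” here (a per-test mean-runtime predictor), the test names and the promotion rule are all illustrative assumptions, not anything Vicki presented.

```python
# Sketch of an MLOps-style gate in a regression flow: retrain on new
# data, then promote only if the candidate beats the deployed model.
# Names, data and the toy "model" are illustrative assumptions.

def train(history):
    """'Model' = mean runtime per test, retrained on the full history."""
    return {t: sum(runs) / len(runs) for t, runs in history.items()}

def score(model, actuals):
    """Mean absolute prediction error over the latest regression run."""
    errs = [abs(model.get(t, 0.0) - rt) for t, rt in actuals.items()]
    return sum(errs) / len(errs)

history = {"test_a": [10.0, 12.0], "test_b": [30.0]}
deployed = train(history)

# New regression data arrives; retrain a candidate model.
history["test_a"].append(20.0)
history["test_b"].append(34.0)
candidate = train(history)

latest = {"test_a": 20.0, "test_b": 34.0}
# Promotion gate: deploy the candidate only if it predicts better.
if score(candidate, latest) <= score(deployed, latest):
    deployed = candidate
print(round(score(deployed, latest), 2))  # → 4.0
```

The same gate-then-promote pattern is what CI/CD tools automate at scale; the point is that the validation step runs on every batch of new data, not once at initial training.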

Finally, a shout-out to Raúl, my partner with Paul Cunningham on the Innovation in Verification blogs. He started with a remembrance of Jim Hogan, who we all miss, and asked several insightful questions at the end of each talk. This blog would run to many thousands of words if I did justice to his questions and each of the talks. Again, watch the video!

Also Read:

Verification Completion: When is Enough Enough? Part I

Verification Completion: When is Enough Enough? Part II

On Standards and Open-Sourcing. Verification Talks
