
Insights into DevOps Trends in Hardware Design
by Bernard Murphy on 08-09-2023 at 6:00 am

Periodically I like to check in on the unsung heroes behind the attention-grabbing world of design. I’m speaking of the people responsible for the development and deployment infrastructure on which we all depend – version control, testing, build, release – collectively known these days as DevOps (development operations). I met with Simon Butler, GM of the Methodics BU at Perforce, to get his insights on directions in the industry. Version control proved to be just the tip of what would eventually become DevOps. I was interested to know how much the larger methodology has penetrated the design infrastructure (hardware and software) world.

Software and DevOps

DevOps grew up around the software development world, where it is evolving much faster than in hardware development. Early in-house Makefile scripts and open-source version control (RCS, SCCS) quickly progressed into more structured approaches, built around better open-source options combined with commercial tools. As big systems based on a mix of in-house and open/commercial development grew and schedules shrank, methods like CI/CD (continuous integration / continuous deployment) and agile became more common, spawning tools like Jenkins. Cloud-based CI/CD added further wrinkles with containers, Kubernetes and microservices. How far we have come from the early days of ad-hoc software development.

Why add all this complexity? Because it is scalable, far more so than the original way we developed software. Scalable to bigger and richer services, to larger and more distributed development teams, to simplified support and maintenance across a wide range of platforms. It is also more adaptable to emerging technologies such as machine learning, since the infrastructure for such technologies is packaged, managed, and maintained through transparent cloud/on-prem services.

What about hardware design?

Hardware design and design service teams have been slower to fully embrace DevOps, in some cases because not all capabilities for software make sense for hardware, in other cases because hardware teams are frankly more conservative, preferring to maintain and extend their own solutions rather than switch to external options. Still, cracks are starting to appear in that cautious approach.

Version control is one such area. Git and Subversion are well-established freeware options but have scaling problems for large designs across geographically distributed development, verification, and implementation organizations. Addressing this challenge is where commercial platforms like Perforce Helix Core can differentiate.
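
For a sense of what the developer-facing side of Helix Core looks like, here is a minimal sketch using Perforce’s P4Python bindings; the server address, workspace name, and depot path are hypothetical placeholders.

```python
from P4 import P4, P4Exception  # Perforce's P4Python bindings (pip install p4python)

p4 = P4()
p4.port = "ssl:helix.example.com:1666"  # hypothetical Helix Core server
p4.user = "designer"
p4.client = "designer-soc-ws"           # a pre-defined client workspace

try:
    p4.connect()
    # Sync the latest revisions of one block's RTL. The server sends only
    # what changed, part of why Helix Core scales to large design databases
    # shared across distributed sites.
    p4.run_sync("//depot/soc/blocks/pcie/...")
except P4Exception:
    for err in p4.errors:
        print(err)
finally:
    if p4.connected():
        p4.disconnect()
```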

In more extensive DevOps practices, some design teams are experimenting with CI/CD and agile. During development, a new version of a lower-level block is committed after passing its quality checks. That commit triggers pre-configured workspaces to run subset regression tests on the new candidate automatically, all managed by Jenkins.
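
As a sketch of how such a hookup could work: Helix Core supports server-side triggers that fire after a changelist is committed, and a trigger script can notify Jenkins through its buildWithParameters endpoint. Everything below (server URL, job name, credentials, depot path) is a hypothetical illustration, not a description of any specific customer flow.

```python
#!/usr/bin/env python3
"""Hypothetical Helix Core change-commit trigger.

Installed in the Perforce triggers table along the lines of:
  regress change-commit //depot/soc/... "python3 notify_jenkins.py %change%"
A change-commit trigger fires after the changelist is already committed,
so a failure here logs an error but does not undo the submit.
"""
import sys
import requests  # assumes the requests package is installed on the server

JENKINS_URL = "https://jenkins.example.com"  # hypothetical
JOB = "soc-subset-regression"                # hypothetical Jenkins job
AUTH = ("ci-bot", "api-token")               # kept in a secret store in practice

def main() -> int:
    changelist = sys.argv[1]  # %change% expands to the submitted changelist number
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
        params={"CHANGELIST": changelist},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return 0

if __name__ == "__main__":
    sys.exit(main())
```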

Product lifecycle management (PLM) has been common in large system development for decades. Cars, SoCs, and large software applications are built around many components: some legacy, some perhaps open source, some commercial. Each evolves through revisions, some of which have known problems discovered in design or in deployment, and some of which are adapted to special needs. Certain components may work well with other components but not with all. PLM can trace such information, providing critical input to system audits and signoffs.
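
As a toy illustration of the kind of query a PLM system answers, here is a sketch under assumed data structures; all component names, versions, and issue IDs are invented.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass(frozen=True)
class ComponentRev:
    """One revision of a reusable component (IP block, driver, library)."""
    name: str
    version: str
    known_issues: Tuple[str, ...] = ()  # issue IDs found in design or deployment

@dataclass
class Design:
    name: str
    bom: List[ComponentRev] = field(default_factory=list)

def affected_designs(designs: List[Design], comp: str, version: str) -> List[str]:
    """The audit/signoff question: which designs consume this component revision?"""
    return [d.name for d in designs
            if any(c.name == comp and c.version == version for c in d.bom)]

# Hypothetical data: an older DDR PHY revision with a known erratum.
ddr_old = ComponentRev("ddr-phy", "1.2", known_issues=("ERR-101",))
ddr_new = ComponentRev("ddr-phy", "1.3")
designs = [Design("soc-a", [ddr_old]), Design("soc-b", [ddr_new])]
print(affected_designs(designs, "ddr-phy", "1.2"))  # ['soc-a']
```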

In managing such functions in DevOps, design teams have two choices – fully develop their own automation or build around widely adopted tools. Some opt for in-house for all the usual reasons, though management sentiment is increasingly leaning toward proven flows in response to staffing limitations, the risk of adding yet more in-house software to maintain, and growing demand for documented traceability between requirements, implementation, and testing. While management attitudes are still evolving, Simon believes organizations will inevitably move to proven flows to address these concerns.

Cloud

The state of DevOps adoption in hardware is somewhat intertwined with cloud constraints. For software there are real advantages to being in the cloud since that is often the ultimate deployment platform. The same case can’t be made for hardware. Simon tells me that based on multiple recent customer discussions there is still limited appetite for cloud-based flows, mostly based on cost. He says all agree with the general intent of the idea, but these plans are still largely aspirational.

This is true even for burst models. For hardware design and analytics, input and output data volumes are unavoidably high. Cloud costs for moving and storing such volumes are still challenging, undermining the frictionless path to elastic expansion we had hoped for. Perhaps at some point big AI applications only practical in the cloud (maybe generative methods) may tip the balance. Until then, heavy cloud usage by in-house design groups may struggle to move beyond the aspirational.

Interest in unifying hardware and software DevOps

Are there other ways in which software and hardware can unify in DevOps? One trend that excites Simon is customers looking for a unified software and hardware bill of materials (BoM).

The demand is for clear visibility into dependencies between software and hardware: for example, does this driver work with this version of the IP? Product teams want to understand re-use dependencies between hardware and software components in the stack. They need insight into questions which PLM and traceability can answer. In traceability, one objective is to prove linkage between system requirements, implementation, and testing. Another is to trace between component usages and known problems in other designs using the same component. If I find a problem in a design I’m working on right now, what other designs, quite possibly already in production, should I worry about? Traceability must cross from software to hardware to be fully useful in such cases.
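
As a toy sketch of the driver-versus-IP question, assuming a hypothetical table of validated pairings (all names and version numbers invented):

```python
# Hypothetical qualification table: which driver versions have been
# validated against which revisions of the IP they control.
COMPAT = {
    ("eth-driver", "2.4"): {("eth-mac-ip", "1.1"), ("eth-mac-ip", "1.2")},
    ("eth-driver", "3.0"): {("eth-mac-ip", "2.0")},
}

def driver_supports_ip(driver: str, drv_ver: str, ip: str, ip_ver: str) -> bool:
    """Does this driver version work with this version of the IP?"""
    return (ip, ip_ver) in COMPAT.get((driver, drv_ver), set())

print(driver_supports_ip("eth-driver", "2.4", "eth-mac-ip", "1.2"))  # True
print(driver_supports_ip("eth-driver", "3.0", "eth-mac-ip", "1.2"))  # False
```

A real unified BoM would of course draw these pairings from the PLM database rather than a hand-written table; the point is that the query itself is simple once the cross-domain dependency data lives in one place.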

Interesting discussion and insights into the realities of DevOps in hardware design today. You can learn more about Perforce HERE.
