
Data Management for the Future of Design
by Bernard Murphy on 08-31-2020 at 6:00 am

Data management is one of those core technologies which is absolutely essential in any professional design operation. You must use a data management system; you just want it to be as efficient as possible. Most of us settled on one of a few commercial or open-source options and the problem seemed more or less solved. As usual in chip design, though, the problem has continued to scale beyond existing solutions. Now we have to contend with design databases on the order of petabytes; even a modest 50TB database will take 4 days or longer to transfer to the cloud, a remote site or a foundry. Design activity is now much more distributed and interdependent. And we have a new way to scale compute demand in the cloud, adding a new dimension of complexity to data storage and access. Competitive advantage demands a new approach to data management.
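
As a quick sanity check on that transfer time, here is a back-of-envelope calculation (assuming a sustained 1 Gb/s wide-area link, an illustrative figure rather than anything from IC Manage):

# Back-of-envelope transfer time for a design database over a WAN link.
# The 1 Gb/s sustained bandwidth is an illustrative assumption.
def transfer_days(size_tb, link_gbps=1.0):
    size_bits = size_tb * 1e12 * 8            # terabytes -> bits
    seconds = size_bits / (link_gbps * 1e9)   # bits / (bits per second)
    return seconds / 86400                    # seconds -> days

print(f"{transfer_days(50):.1f} days")        # ~4.6 days for 50TB at 1 Gb/s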

Data management for the future

Start with the implications for design in general, where there is increasing interest in agile methods and continuous integration. Components, like an IP or a subsystem, can be evolving in multiple directions at once, pulled into designs which create demands for fixes or derivative enhancements. Teams want to know status, and whether perhaps they should switch to a more promising option for their needs. Components no longer evolve along a simple linear path; we need to be able to use the best available fit, as it becomes available.
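
A minimal sketch of what non-linear evolution means for a consuming team, using a hypothetical release index (the component names, versions and status values are invented for illustration, not an IC Manage API):

# Hypothetical release index for an IP evolving on several branches at once.
releases = [
    {"ip": "pcie_ctrl", "branch": "main",     "version": "2.3",      "status": "released"},
    {"ip": "pcie_ctrl", "branch": "gen5_fix", "version": "2.4-rc1",  "status": "verifying"},
    {"ip": "pcie_ctrl", "branch": "lowpower", "version": "3.0-beta", "status": "in_dev"},
]

def best_available(ip_name, allow_rc=False):
    """Pick the most advanced release a consuming design can safely pull in."""
    ok = {"released"} | ({"verifying"} if allow_rc else set())
    candidates = [r for r in releases if r["ip"] == ip_name and r["status"] in ok]
    # Naive string comparison on version is good enough for this sketch.
    return max(candidates, key=lambda r: r["version"], default=None)

print(best_available("pcie_ctrl", allow_rc=True))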

Fast storage caches

Cost and latency are growing for design jobs. This is partly in compute; we always want faster compute engines, and what was state-of-the-art yesterday looks barely acceptable today. But this is just as much a problem in storage. Disks (cold storage) are slow and expensive, and the IT world continues to advance. Now we can cache in much faster and cheaper NVMe storage (warm storage), close to the compute engines, a very important consideration when you think of the constant syncing and re-syncing of workspace data that can happen mid-flow.
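
A minimal sketch of the warm-cache idea, assuming a local NVMe path fronting a slower cold store (the mount points and layout here are hypothetical):

import shutil
from pathlib import Path

COLD_STORE = Path("/mnt/cold_store")   # slow disk or object storage mount (assumed)
NVME_CACHE = Path("/nvme/cache")       # fast local NVMe tier (assumed)

def read_file(rel_path):
    cached = NVME_CACHE / rel_path
    if not cached.exists():                       # cache miss: pull once from the cold tier
        cached.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(COLD_STORE / rel_path, cached)
    return cached.read_bytes()                    # repeated syncs now hit NVMe only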

Storage hierarchies of this type are already supported in cloud services, which suggests a segue to hybrid cloud bursting, a popular method to push excess demand to the cloud as needed. Maybe you’re not ready to switch entirely to the cloud, partly because you have a lot of sunk cost in your datacenter and you can’t move over until that’s depreciated. (Maybe you also have some residual security concerns. Different topic.)

Managing huge workspaces with the cloud

But there’s a data challenge with the hybrid approach. In many cases you have to carry along unmanaged data with the managed data: data generated in earlier steps which is needed in later steps. Physical data, corners, that sort of thing. This unmanaged data quickly comes to dominate the total data size. FTP or rsync methods to send all of this data from your in-house NFS network to a cloud machine can become unmanageable, so much so that they might negate a lot of the advantage of running in the cloud.

Instead, on-demand loading at a granular level, from the in-house network to cloud storage, can minimize the data that needs to be transferred. And once that data is loaded into the NVMe cache in the cloud, cold storage is no longer needed; compute can work directly with the cache for higher performance at lower cost (you pay for cold storage in the cloud for as long as you are tying it up).
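
To make the contrast concrete, here is a toy comparison of bulk copying versus on-demand loading for a single cloud job (the workspace contents and the files the job touches are invented for illustration):

# Hypothetical workspace: path -> size in GB.
workspace = {
    "rtl/top.v": 0.2,
    "libs/stdcell_corners.db": 1800.0,
    "layout/chip.oasis": 950.0,
    "results/prev_run": 2200.0,
}
job_touches = {"rtl/top.v", "libs/stdcell_corners.db"}   # what this run actually reads

bulk_gb = sum(workspace.values())
on_demand_gb = sum(size for path, size in workspace.items() if path in job_touches)
print(f"bulk copy: {bulk_gb:.0f} GB, on-demand: {on_demand_gb:.0f} GB")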

Data management analytics

There’s one more thing to gain from this completely unified data management, across product groups, regions, in-house data centers and clouds: you can track data analytics and access control much more easily. Data churn, phase completeness, check-in status, who is allowed access to licensed or otherwise privileged IPs. You can see in one place where a project really stands, who might need additional help, and who is adding unexpected royalty costs to your products.
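
As a rough illustration of what seeing it all in one place might look like, here is a hypothetical roll-up over unified check-in records (the record fields and values are invented for illustration):

from collections import Counter

# Hypothetical check-in log unified across sites, product groups and clouds.
checkins = [
    {"user": "amy", "site": "austin",    "ip": "pcie_ctrl", "licensed": True},
    {"user": "raj", "site": "bangalore", "ip": "ddr_phy",   "licensed": True},
    {"user": "li",  "site": "cloud-us",  "ip": "pcie_ctrl", "licensed": False},
]

churn_by_site = Counter(c["site"] for c in checkins)     # data churn per region
unlicensed = [c for c in checkins if not c["licensed"]]  # flags access-control issues
print(churn_by_site, unlicensed)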

You can get more detail from this IC Manage white paper, “A Blueprint for EDA Infrastructure for 2021 and Beyond”.

Also Read

Effectively Managing Large IP Portfolios For Complex SoC Projects

CEO Interview: Dean Drako of IC Manage
