I mentioned some time ago (a DVCon or two ago) that Accellera had started working on a standard to quantify IP security. At the time I talked about some of the challenges in the task but nevertheless applauded the effort. You’ve got to start somewhere and some way to quantify this is better than none, as long as it doesn’t deliver misleading… Read More
Author: Bernard Murphy
Design Perspectives on Intermittent Faults
Bugs are an inescapable reality in any but the most trivial designs, and they usually trace back to very deterministic causes – a misunderstanding of the intended spec or an incompletely thought-through implementation of some feature. Either way, the result is a reliably reproducible failure under the right circumstances. You run diagnostics,… Read More
Acceleration in a Heterogeneous Compute Environment
Heterogeneous compute isn’t a new concept. We’ve had it in phones and datacenters for quite a while – CPUs complemented by GPUs, DSPs and perhaps other specialized processors. But each of these compute engines has a very specific role, each driven by its own software (or training in the case of AI accelerators). You write software… Read More
Webinar: Finding Your Way Through Formal Verification
Formal verification has always appeared daunting to me and I suspect to many other people also. Logic simulation feels like a “roll your sleeves up and get the job done” kind of verification, easily understood, accessible to everyone, little specialized training required. Formal methods for many years remained the domain of … Read More
Virtually Verifying SSD Controllers
Solid State Drives (SSDs) are rapidly gaining popularity for storage in many applications, from gigabytes of storage in lightweight laptops to tens or hundreds of terabytes per drive in datacenters. SSDs are intrinsically faster, quieter and lower-power than their hard disk drive (HDD) equivalents, with roughly similar lifetimes,… Read More
How Should I Cache Thee? Let Me Count the Ways
The intent of caching largely hasn’t changed since we started using the concept – to reduce average latency in memory accesses and to reduce average power consumption in off-chip reads and writes. The architecture started out simple enough: a small memory close to a processor, holding most-recently accessed instructions and data … Read More
Glasses and Open Architecture for Computer Vision
You know that AI can now look at an image and detect significant objects like a pedestrian or a nearby car. But had you thought about a need for corrective lenses or other vision aids? Does AI vision decay over time, like ours, so that it needs increasing help to read prescription labels and identify road signs at a distance?
In fact no.… Read More
Tcling Your Way to Low Power Verification
OK – maybe that sounds a little weird, but it’s not a bad description of what Mentor suggests in a recent white-paper. There are at least three aspects to power verification – static verification of the UPF and the UPF against the RTL, formal verification of state transition logic, and dynamic verification of at least some critical… Read More
AI, Safety and the Network
If you follow my blogs you know that Arteris IP is very active in these areas, leveraging their central value in network-on-chip (NoC) architectures. Kurt Shuler has put together a front-to-back white-paper to walk you through the essentials of AI, particularly machine learning (ML) and its application for example in cars.
He… Read More
Lint for Implementation
When I was at Atrenta, we took advantage of opportunities to expand our static tool (aka linting), first to clock domain crossing (CDC) analysis and DFT compatibility, and later to static analysis of timing constraints, all of which are important in implementation. CDC is commonly thought of as an RTL-centric analysis; however,… Read More
Rapidus, IBM, and the Billion-Dollar Silicon Sovereignty Bet