
Embedded Analytics Becoming Essential

SoC integration offers huge benefits through reduced chip count in finished systems, higher performance, improved reliability, etc. A single die can contain billions of transistors, with multiple processors and countless subsystems all working together. The result has been rapid growth of semiconductor content in many old and new products, including automotive, networking, telecommunications, medical, mobile, entertainment, etc. While higher levels of integration are largely beneficial, they bring new challenges in system-level integration, debug and verification. Embedded Analytics will play an important role in implementing and verifying these large and complex systems.

Many SoCs have large numbers of blocks and subsystems connected through on-chip bus or network interfaces. They use on-chip memory and registers and incorporate complex software running application code. In previous generations, system-level observation and debug were challenging but still possible through external connections or in-circuit emulators (ICE). Modern SoCs require a completely new approach to understand dynamic system operation, with sufficient visibility and control to make sense of what is occurring during operation.

Siemens EDA writes about this in a white paper called “Embedded Analytics - A platform approach”. They cite the causes of increased design complexity leading to increased difficulty in design, optimization, verification and lifecycle management.

First on their list is multi-source IP, where one SoC will contain IP from numerous sources, both internal and external. These IP elements can include heterogeneous processors, interfaces, and a host of other kinds of blocks. Next comes the software for each of these processors. The software could be algorithmic, or it could manage chip operations or security. Each of these software packages in turn will probably be built on a software stack.

Complexity in these SoCs can come from hardware and software interactions. The Siemens white paper correctly points out that the problems caused by these interactions are often non-deterministic, and efforts to observe them can make them disappear or change behavior. System-level validation can cost tens of millions of dollars. Functional validation needs to start early in the design process and continue through system installation. System-level interactions need to be examined using simulation, emulation, prototyping and in finished systems. Even after product shipment, software updates can cause system-level issues that will need to be investigated.

By now it is clear that system level visibility into hardware and software is necessary. Without enough detail it may be difficult to pinpoint problems. On the other hand, too much data can also be an issue. The white paper points out that a truly effective observation and data gathering system for SoCs needs to have sophisticated control over what data is collected and when.

[Figure: Embedded Analytics]

Siemens EDA has developed the Tessent Embedded Analytics platform to allow system designers to get their arms around the problems of system-level real-time observation and analysis. There are several pieces to this platform, allowing it to be integrated with the target SoC and then used to collect and interpret data on system operation.

Tessent Embedded Analytics has an IP subsystem that is integrated into the target SoC. This IP is easily parameterized to make integration efficient and easy. There is also a hierarchical message passing fabric used to transfer the collected data efficiently with minimal added silicon overhead. The message passing fabric can handle local or cross-chip data transfers and is separate from the mission mode interconnect.

To help filter the data collected there are programmable filters, counters and matchers that enable smart and configurable data filtering and event triggering in real time at the frequency of the SoC. There are secure data links for collecting data and interacting with the outside world. Tessent Embedded Analytics contains a software interface layer that communicates between the application layers and the analytics IP.

The Tessent Embedded Analytics platform includes the tools to create applications that interact with its IP components to enable sophisticated monitoring of the SoC. There is a software development kit (Embedded SDK) that lets user-developed applications configure, control and process the analytics data. The Configuration API, Data API and Verification API are available for use in either the Tessent Embedded Analytics IDE or 3rd-party IDE environments through plugins.
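
To make the "configure, control and process" idea more concrete, here is a small, purely hypothetical sketch of what driving an on-chip monitor from host-side software might look like. The class and method names below are invented for illustration; they are not the actual Tessent Embedded SDK API.

# Hypothetical sketch only: these names are invented for illustration and are
# not the actual Tessent Embedded SDK API.
from dataclasses import dataclass, field

@dataclass
class TriggerCondition:
    """A match rule evaluated in hardware by a monitor's filters/matchers."""
    address_min: int
    address_max: int
    access_type: str = "write"      # e.g. only trap writes

@dataclass
class MonitorConfig:
    monitor_id: str
    triggers: list = field(default_factory=list)
    capture_depth: int = 1024       # trace-buffer entries to collect

def build_config() -> MonitorConfig:
    # Watch for writes into a protected register window and capture the
    # surrounding bus traffic for later analysis on the host.
    cfg = MonitorConfig(monitor_id="axi_monitor_0")
    cfg.triggers.append(TriggerCondition(0x4000_0000, 0x4000_0FFF, "write"))
    return cfg

if __name__ == "__main__":
    cfg = build_config()
    print(f"{cfg.monitor_id}: {len(cfg.triggers)} trigger(s), depth={cfg.capture_depth}")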

The Siemens white paper describes in more detail how the entire process works and how it can support prototyping through FPGAs or emulators, as well as in-system silicon. Without an embedded analytics platform, system designers face an almost intractable problem when it comes to verifying and optimizing present day SoCs. Siemens seems to appreciate that while an embedded analytics platform must be comprehensive, it must not require excess silicon resource or interfere with system operation. The full white paper is worth reading to gain a better understanding of how Siemens EDA has assembled a powerful solution for these difficult challenges. The white paper is available on the Siemens EDA website.

Adaptive Power/Performance Management for FD-SOI

A vexing chip design issue is how to achieve (or improve) performance and power dissipation targets, allowing for a wide range of manufacturing process variation (P) and dynamic operating voltage and temperature fluctuations (VT).  One design method is to analyze the operation across a set of PVT corners, and ensure sufficient design margin across this multi-dimensional space.  Another approach is to dynamically alter the applied voltages (globally, or in a local domain), based on sensing the changing behavior of a reference circuit.

The introduction of fully-depleted silicon-on-insulator (FD-SOI) device technology has led to a resurgence in the opportunities for incorporating circuitry to adaptively modify the device bias conditions, to compensate for PVT tolerances.  This interest is further advanced by the goals of many applications to operate over a wider temperature range, and especially, to operate at a reduced VDD supply voltage to minimize power dissipation.

At the recent International Solid State Circuits Conference (ISSCC 2021), as part of a collaboration with GLOBALFOUNDRIES and CEA-Leti, Dolphin Design presented an update on their IP offering to provide adaptive body bias (ABB) to FD-SOI devices to compensate for PVT variation and optimize power/performance, with minimal overhead. [1]  This article provides some of the highlights of their presentation.

Background

For decades, designers have implemented methods to modify circuit performance based on real-time sensor feedback.

One of the first techniques addressed the issue of PVT variation on the output current of off-chip driver circuits – a critical design parameter is to maintain an impedance match to the package and printed circuit board traces to minimize signal reflections.  The delay of a performance-sense ring oscillator (PSRO, an odd-length inverter chain) was used as a real-time PVT measurement.  The frequency of the PSRO was compared to a reference, and additional parallel devices were enabled or disabled in the off-chip driver pullup and pulldown device stacks – see the figure below. [2]

[Figure: PSRO]

Another method that was commonly used to adjust the operational behavior was to dynamically alter the substrate bias applied to the design.  Recall that the threshold voltage of the FET is a function of the source-to-substrate voltage difference across the semiconductor p-n junction – by modulating the substrate bias, the Vt would be adjusted and the variation in circuit performance improved.  (As the magnitude of the junction reverse bias increases, Vt increases as well;  reducing the reverse bias reduces the Vt magnitude and improves performance.  A small bulk forward bias is also possible to further improve performance – e.g., ~100-200mV – without excessive junction diode current.)
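
As a reminder of why well/substrate bias moves the threshold voltage, the standard textbook body-effect relation (my addition, not from the article) is:

    Vt = Vt0 + gamma * ( sqrt(2*phiF + V_SB) - sqrt(2*phiF) )

where Vt0 is the zero-bias threshold voltage, gamma is the body-effect coefficient, phiF is the Fermi potential, and V_SB is the source-to-body reverse bias. Increasing the reverse bias raises Vt (slower, lower leakage); reducing it, or applying a small forward bias, lowers Vt and improves performance – exactly the knob the adaptive schemes described here exploit.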

In the very early nMOS processes, the p-type substrate was common to all devices (enhancement-mode and depletion-mode nFETs).  A charge pump circuit on the die generated a negative Vsub, periodically pulling capacitive-coupled current out of the substrate.  By sensing a reference device Vt real-time, the duration and/or magnitude of this charge pump current could be modified. [3]  For a 5V nMOS process, with a nominal Vsub = -3V, there was plenty of range available to modify the back bias.

The transition to CMOS processes introduced the need to consider both the p-substrate and n-well as potential body bias nodes.  Designers developed a strategy for inserting p-sub and n-well taps throughout the die to connect to bias supplies, separate from the VDD and GND rails connected to the circuitry.

With the ongoing Dennard CMOS scaling associated with Moore’s Law, body bias techniques were less viable.  The additional reverse-bias electric field across the junction is a breakdown issue at scaled dimensions.  As a result, Vsub typically became the same rail connection as GND, and Vnwell was the same rail as VDD.  If distinct substrate bias control to the devices was required, CMOS processes were extended to include a triple-well option – see the figure below.

[Figure: triple well]

An additional p-well inside a deep n-well inside the p-substrate allowed unique reverse-bias voltages to be applied to the n-well (pMOS) and local p-well (nMOS) devices.

Rather than using body bias, designers increasingly looked to adjust the VDD power supply for PVT compensation, a technique commonly denoted as dynamic voltage and frequency scaling (DVFS).  Parenthetically, the use of DVFS methods was expanded beyond adaptive compensation for a target frequency to also provide boost modes of higher frequency operation at higher supplies, as well as a variety of power management states at reduced supply values.  (And, the market for PMICs exploded, as well.)

The introduction of SOI device technology – and FD-SOI, in particular – changed the landscape for adaptive body bias techniques.  An FD-SOI cross-section is shown in the figure below.

[Figure: FD-SOI cross-section (FBB)]

Note the use of the triple-well fabrication technique, allowing a unique back bias to be independently applied to nMOS and pMOS devices.  Also, note the devices shown above are different from those in a conventional CMOS process.  The nMOS above is situated above an n-well, while the pMOS is situated above a p-well – this topology would be reversed in a typical CMOS process.  This unique FD-SOI process option is used to implement low-Vt devices.

The presence of the thin isolating buried dielectric layer (BOX) below the device channel re-introduces the option of applying a p-well and n-well bias.  This technique involves etching the well contacts through the (thin) silicon channel and dielectric layers of the FD-SOI device.

The p-n junction breakdown electric field issues of scaled CMOS are eliminated – the allowable electric field across the BOX dielectric is greater.

The FD-SOI device topology shown above offers the opportunity to apply an effective forward bias to the body, reducing the threshold voltage magnitude and boosting performance. (In a conventional CMOS process, the nMOS device would be subjected to a reverse bias relative to the channel, applied to the p-substrate.)

The BOX dielectric isolates the channel region – there is no source/drain-to-substrate diode junction.  Note that bias restrictions remain for the p-n junctions of the device wells below the BOX.

Although the forward body bias technique increases the device leakage current, the supply voltage required to meet the target frequency can be reduced, with an overall power savings – more on that shortly.

Thus, there is renewed emphasis on the integration of ABB circuitry for FD-SOI designs, to compensate for PVT variations and/or optimize the operational frequency and power dissipation.

Dolphin ABB IP

A block diagram of the ABB IP is shown below.

[Figure: ABB IP architecture]

A primary input to the IP is the target operational frequency for the controlled domain, Ftarget.  (For the Arm Cortex M4F core testcase design, the Ftarget was in the range ~10MHz – 1.5GHz.)

A coarse timing lock to this target is provided by a frequency-lock loop (FLL) circuit, composed of a (digital) frequency comparator that generates adjust pulses to modulate the currents into the n-well and p-well.  Specifically, the lock is based on two separate divider ratios, R and N, one for Ftarget and one for an internally generated clock, Finternal.  Lock is achieved when (Ftarget/R) = (Finternal/N).
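
As a rough sketch of the coarse lock condition and the fine increment/decrement loop described here and in the following paragraphs (the divider values, step size and sign convention are my own illustrative assumptions, not Dolphin's published control law):

# Conceptual sketch of the FLL lock condition and fine adjustment described
# in the text. Frequencies, step sizes and the sign convention are invented
# for illustration; this is not Dolphin's actual control law.

def locked(f_target, r, f_internal, n, tol=1e-3):
    """Coarse lock: (Ftarget / R) == (Finternal / N) within a tolerance."""
    return abs(f_target / r - f_internal / n) <= tol * (f_target / r)

def fine_adjust(n, monitor_slack):
    """Distributed timing monitors report slack; nudge divider ratio N to
    compensate. Negative slack (logic too slow) decrements N here, but the
    sign convention is arbitrary in this sketch."""
    if monitor_slack < 0:
        return n - 1
    if monitor_slack > 0:
        return n + 1
    return n

if __name__ == "__main__":
    f_target, r = 1.0e9, 4      # 1 GHz target, divider R = 4
    f_internal, n = 2.0e9, 8    # internal oscillator and divider N
    print("coarse lock:", locked(f_target, r, f_internal, n))  # True: 250 MHz == 250 MHz
    n = fine_adjust(n, monitor_slack=-0.05)  # temperature rose, logic slowed
    print("new N:", n)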

The internally-generated clock reference for the frequency-lock loop in the ABB controller also includes PVT sense circuitry, to reduce the variation in the ring-oscillator frequency.

When the coarse monitor-based FLL is locked, the dynamic fine-grain adaptive bias is enabled.  The detailed adjust to the n-well and p-well current drivers uses feedback from timing monitors distributed throughout the design block to be controlled by the ABB IP.

As voltage(s) and temperature(s) within the block fluctuate, the monitor(s) signal the ABB controller to increment/decrement the divider ratio “N” to adjust the well current drivers, maintaining the lock to the target frequency, as illustrated in the figure below.

[Figure: coarse/fine regulation]

The implementation of the ABB IP is all-digital for the FLL control and feedback, and the distributed timing monitors in the block.  The exception is the charge pump circuitry that provides the p-well and n-well currents – in the Dolphin ABB IP, a VDDA=1.8V supply is used, the same supply as provided to the I/O cells.  This enables a range of back bias voltage values from the charge pump.

Testsite and Measurement Results

The Dolphin team incorporated the ABB IP with an Arm Cortex M4F core, in a 22nm FD-SOI testsite fabricated at GLOBALFOUNDRIES – see the micrograph below, with the related specs.

[Figure: testsite micrograph and specs]

For this testsite, Dolphin chose to implement the Arm core using LVT-based cells and forward-body bias, with the device cross-section shown above.  The focus of this experiment was to achieve the target frequency at a low core supply voltage, thereby reducing overall power dissipation.  The available forward body bias values were:
  • LVT nMOS – Vnw:  0V to 1.5V (FBB)
  • LVT pMOS – Vpw:  0V to -1.5V  (FBB)

Measurement data examples are shown below, illustrating how the Vnw and Vpw bias voltages vary with sweeps in temperature and supply voltage, to maintain lock to the target frequency.

[Figure: voltage and temperature compensation]

Note that the independent current sources for the p-well and n-well imply that these bias voltages may be asymmetric.

Of critical importance is the ability to use ABB to reduce the (nominal) core supply voltage, while maintaining the target frequency specification.  For this design testsite, the use of LVT cells and ABB with forward bias enabled a reduction of ~100mV in the supply, while still meeting the target frequency – e.g., from 0.55V to 0.45V.  This results in a ~20% overall power savings, as illustrated below (shown across three temperature corners, including both the power dissipation of the Arm core and the additional ABB IP).

[Figure: VDD versus power]

Summary

FD-SOI technology has reinvigorated the interest in using adaptive body bias techniques for maintaining the operational target frequency over PVT variations.  Both reverse-body bias (RBB) and forward-body bias (FBB) techniques can be applied, to RVT and LVT device configurations.  At the recent ISSCC, Dolphin Design demonstrated how their ABB IP integrated with a core block can utilize FBB to achieve and dynamically maintain a target frequency.  This technique relaxes the corner-based design margin constraints that typically define the supply voltage – a low supply can be selected, with the corresponding power savings.

Here is a link with additional information on adaptive body bias techniques in FD-SOI – link.

-chipguy

References

[1]  Moursy, Y., et al., “A 0.021mm**2 PVT-Aware Digital-Flow-Compatible Adaptive Back-Biasing Regulator with Scalable Drivers Achieving 450% Frequency Boosting and 30% Power Reduction in 22nm FDSOI Technology”, ISSCC 2021, paper 35.2.

[2]  Dillinger, T., VLSI Design Methodology Development, Prentice-Hall, 2019.

[3]  US Patent # US-4553047, “Regulator for Substrate Voltage Generator”.

Arm Announces v9 Generation - Custom, DSP, Security, More

This wasn’t as much of a big bang announcement as others I have seen. More a polishing of earlier-announced reveals and positioning updates, together with some new concepts. First, you probably remember the Cortex-X announcement from about a year ago, allowing users to customize their own instructions into the standard instruction set. A response to similar flexibility in RISC-V. I get the impression this started as a tactical response to specific customer needs. Understandable, but you could see how that could get out of control as interest spreads more widely. Richard Grisenthwaite, Sr VP, Chief Architect and Fellow at Arm, talked about rationalizing standardization versus customization in a spectrum of support. Details are not revealed yet, but it makes sense.

Who’s interested?

In his talk Richard nodded to the Fugaku supercomputer as one design that possibly took advantage of this v9 flexibility. And he nodded to AWS Graviton2 as another potential beneficiary. But he added that really the rush to differentiate dominates every place we compute, from the cloud to the edge. Hence the balancing act in v9: preserving all the benefits of standardization and compatibility with a massive ecosystem, while still allowing integrators to add their own secret sauce.

There’s more

That’s not all there is to v9. They have launched a rolling program with enhancements to machine learning, DSP and security in CPU, GPU and NPU platforms. Take machine learning first. Arm continues to stress that the range of ML-applications can’t be met with a one-size-fits-all solution. So they continue to extend support in A, M and R processors, working closely with colleagues at NVIDIA. (Jem Davies followed with more detail on this topic.)

DSPs?

This one took me a little by surprise. Looking backward, Arm didn't make much noise about DSPs. Perhaps because they didn’t see a big enough opportunity? But the range of DSP-related applications has been exploding. In automotive for infotainment audio, communication, sensing, driver alertness, road noise suppression, V2X. More in consumer audio (wireless earbuds for example). No doubt again a spectrum of needs where Arm sees an opportunity for enhanced standard processors rather than dedicated DSPs? Richard didn’t elaborate. He did, however, mention the scalable vector extensions (SVE) they developed with Fujitsu for Fugaku, expecting this capability to be extended to a much wider range of applications. He mentioned they have already created SVE2 to work well with 5G systems. I assume baseband applications you might normally expect high-end processors or DSPs to fill today. That can only be good; room for more kinds of embedded solution.

Security

Arm continues to emphasize security, and thank goodness for that, because I see no other central force to drag us towards building secure distributed systems. Impressively, a following panel on this topic included a panelist from Munich RE Group. Investors care about liabilities. The easy-going days of “we can figure this out on our own” are drawing to a close.

Confidential Compute Architecture

Arm sees security stakes being raised by the distribution of compute between edge devices and compute nodes through wireless and backhaul networks. Application developers and service providers will want to run tasks where it makes most sense, without having to worry about which compute nodes have what security. Here Richard talked about a confidential compute architecture to preserve data security. Arm plans to reveal more details on this architecture later in the year. One concept they will introduce is dynamically created realms, zones in which ordinary programs can run safely, separate from the secure and non-secure zones we already understand. Service customer apps and data need to be assured high levels of security, yet the current view of secure versus insecure zones on a device doesn’t really address that need. (Where would such a task run? In the secure zone? Heck no. In the insecure zone? Ditto.) Realms provide a separate computation world outside the secure and non-secure zones, designed to depend on only a small level of trust in the rest of the system. Even if a hack compromises components of the system, an app and its data running inside a realm can still be secure.

More security extensions on the way

Arm has also been working with Google on memory tagging extensions to protect against the memory safety issues we will never eliminate in our software. They’ve been working with Cambridge University on the Capability Hardware Enhanced RISC Instructions (CHERI) architecture to further bound vulnerabilities, all the way down to the ISA level. And they’re working with the UK government on a program called Morello, designed to bound the scope of any breach that does get a foothold.

Lots of interesting work: rationalization of the extensions program, more ML-everywhere and an interesting start into DSP markets. You can read the press release HERE.

Addressing SoC Test Implementation Time and Costs

In business we have all heard the maxim, "Time is Money." I learned this lesson early in my semiconductor career when doing DRAM design, discovering that the packaging costs and time on the tester were actually higher than the fabrication costs. System companies like IBM were early adopters of Design For Test (DFT), adopting scan design with special test flip-flops and then using Automatic Test Pattern Generation (ATPG) software to create test implementations that had high fault coverage with a minimum amount of time on the tester. It took a while for logic designers at IDMs to adopt DFT techniques, because they were hesitant to give up silicon area in order to improve fault coverage numbers, instead favoring the smallest die size to maximize profits.

Challenges

Today there are many challenges to test implementation time and costs:
  • Higher Design Complexity
    • >100,000,000 cell instances
    • 100's of cores
  • Subtle Defects
    • >50% of failures not found with standard tests
  • In-system testing required
  • Fewer test pins
    • Designs/core with <7 test pins
Consider a modern GPU design with 50 billion transistors and 100 million cell instances: how do you create enough test patterns to meet fault coverage goals while spending the minimum time on a tester?

A Solution

Adding scan flip-flops is a great start and a proven DFT methodology, but what if you want to meet these newer challenges? It's all about controllability and observability in the test world, and by adding something called a Test Point, you make controlling and observing a low-coverage point in your logic much, much easier. Consider the following cloud of logic, followed by a D flip-flop:

[Figure: logic cone without test points]

If the D input to the FF is difficult to control, or set, then we never observe a change in the output at point B. By adding a Test Point, we can now control the D input, thus improving the fault coverage:

[Figure: logic cone with a test point added]

Ideally, a test engineer wants Test Points that are compatible with existing scan compression, have minimal impact on Power/Performance/Area (PPA), and are easy to use.
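
To see why controllability matters so much, here is a toy simulation (my own illustration, not how any Synopsys tool models it): a node driven by a wide AND is almost never set to 1 by random stimulus, so faults behind it go undetected, while a control-type test point that forces the node in test mode makes it trivially controllable.

# Illustrative only: a toy controllability estimate, not TestMAX's analysis.
import random

WIDTH = 12           # an AND of 12 signals feeds the D input
TRIALS = 100_000

def random_pattern_controllability():
    """Probability that random stimulus drives the AND output to 1."""
    hits = sum(all(random.random() < 0.5 for _ in range(WIDTH))
               for _ in range(TRIALS))
    return hits / TRIALS

if __name__ == "__main__":
    p_no_tp = random_pattern_controllability()   # roughly (0.5)**12 = 0.00024
    p_with_tp = 1.0                               # control point forces the node in test mode
    print(f"P(node=1) without test point: {p_no_tp:.5f}")
    print(f"P(node=1) with a control test point: {p_with_tp:.1f}")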

Synopsys TestMAX Advisor

I spoke with Robert Ruiz, Director of Product Marketing, Test Automation Products, and Pawini Mahajan, Sr. Product Marketing Manager, Digital Design Group, over a Zoom call to learn how the Synopsys TestMAX Advisor tool fits into an overall suite of test tools. Robert and I first met back in the 1990s when he worked at Sunrise Test Systems and I was at Viewlogic, so he has a deep history in the test world. Pawini and I both worked at Intel, still the number one semiconductor company in the world by revenue.

[Figure: TestMAX Advisor]

Here's the tool sub-flow for TestMAX Advisor:

[Figure: TestMAX Advisor flow]

The TestMAX Advisor tool will analyze and rank all of the FFs used in a design, then determine which nets need added controllability or observability, so a test engineer can then decide how many Test Points should be inserted and see how much the fault coverage improves and how much shorter the test patterns become. The engineer can even set percentage coverage goals and allow TestMAX Advisor to add Test Points automatically to meet them. Side note to users of other ATPG tools: yes, you can use TestMAX Advisor with your favorite ATPG tool. You get to see the incremental fault coverage improvement from adding Test Points in a table format:

[Figure: test point analysis table]
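
The "set a coverage goal and let the tool insert points until it is met" flow described above is, at its core, an iterative selection loop. The sketch below shows that general idea with invented candidate scores; it is a conceptual model, not TestMAX Advisor's actual algorithm.

# Conceptual greedy test-point selection loop; candidate gains are invented.
def select_test_points(candidates, base_coverage, goal):
    """candidates: list of (net_name, estimated_coverage_gain), pre-ranked.
    Returns the inserted points and the projected coverage."""
    coverage = base_coverage
    inserted = []
    for net, gain in sorted(candidates, key=lambda c: c[1], reverse=True):
        if coverage >= goal:
            break
        inserted.append(net)
        coverage += gain          # simplistic: gains assumed independent
    return inserted, coverage

if __name__ == "__main__":
    cands = [("u_core/alu/n123", 1.2), ("u_io/fifo/n77", 0.8), ("u_dsp/n9", 0.5)]
    pts, cov = select_test_points(cands, base_coverage=96.5, goal=98.0)
    print(pts, f"projected coverage {cov:.1f}%")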

Test Implementation

Adding extra Test Points is going to add new congestion to the routing, so the developers figured out how to make Test Point placement physically aware, so that congestion is minimized and timing impacts are reduced. Just look at the congestion map comparison below:

[Figure: implementation improvement comparison]

Seven examples of fault coverage improvements (up to 30%) and pattern count reductions (up to 40%) were provided:

[Figure: coverage and ATPG pattern count results]

Summary

DFT using scan and ATPG tools is recommended to achieve fault coverage goals, and when the shortest test times are important you can consider adding new Test Points to improve controllability and observability. EDA developers at Synopsys have built the features to make Test Point insertion an easy task, and it is producing some promising results in addressing test implementation time and costs. TestMAX Advisor looks to be another worthy tool in your toolbox.


5G Calls for New Transceiver Architectures

5G phones are now the top-tier devices from many manufacturers, and 5G deployment is accelerating in many regions. While 4G/LTE has served us well, 5G is necessary to support next-generation telecommunication needs. It will be used heavily by consumers and industry because it supports many new use cases. There is an excellent white paper by Omni Design Technologies that discusses the new applications for 5G, the technological changes that are necessary, and the hardware architectures needed to support it.

The white paper, titled “5G Technology and Transceiver Architecture”, lists the three main use cases as enhanced mobile broadband (eMBB), ultra-reliable and low-latency communications (URLLC), and massive machine-type communications (mMTC). Each has specific technical requirements aligned with the scenarios it will be used for. Each will vary in terms of peak data rate, spectral efficiency, latency, connection density, reliability, and many more. Consumers will see 4K and 8K streaming, AR/VR improvements and much lower latency and higher speed access to the cloud. To fully realize higher bandwidth, 5G will open up new communications bands from 24GHz to 100GHz. URLLC will be used for applications that require real-time performance, like automotive or some industrial applications. It calls for 1ms latency and 99.999% reliability. mMTC will be used for billions of connected devices such as wearables, IoT and sensors that use lower bandwidth and require low power.

I already alluded to one of the key technologies, millimeter-wave (mmWave), that will be essential to 5G. The white paper says that 5G deployment is going to move first and most rapidly in the sub-6GHz bands with the help of infrastructure reuse. The bands above 24GHz will offer much greater bandwidth but come with additional technical complexity. One of the key issues is that propagation losses will be much higher due to obstacles and signal absorption in the atmosphere. MIMO will be used to improve signal performance through spatial diversity, spatial multiplexing and beamforming. Spatial diversity takes advantage of multipath to gain information using multiple antennas. The different antennas see different signals that can be used to mathematically determine the transmitted signal. Spatial multiplexing creates multiple spatially separated signals between the transmitter and receiver to boost data rates.

As a consequence of these technology changes, the hardware architecture for 5G is also changing. Beamforming is one of the biggest drivers for the changes we see in the hardware implementation. The white paper points out that current commercial solutions include 64TX and 64RX for base station deployments. This is a large increase from the 2x2 or 4x4 arrays used in 4G. It is no longer feasible to perform all the beamforming operations in pure analog or pure digital.

If a pure digital approach is used, then each array element must have its own RF chain. This causes increased power consumption and adds components. Going with a pure analog approach requires only one RF chain but gives up a lot of the reconfigurability and spatial resolution.

[Figure: 5G Architecture]

Omni Design suggests that a hybrid approach can meet all system objectives and is easier to implement. Much of the processing can be done on the digital side. Fewer RF chains are needed, with the analog side handling phase shifting individually for each antenna.
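
As a back-of-the-envelope illustration of why the hybrid split matters (the stream count and per-chain power below are my own placeholder numbers, not figures from the white paper), the RF-chain count scales with the number of digitally driven streams rather than with the number of antenna elements:

# Back-of-the-envelope sketch of RF-chain counts for a 64-element array;
# per-chain power is an invented placeholder, not a figure from the paper.
ELEMENTS = 64            # antenna elements in the array
STREAMS = 8              # digitally beamformed streams in a hybrid design
POWER_PER_CHAIN_W = 1.0  # placeholder per-RF-chain power

def rf_chains(architecture: str) -> int:
    if architecture == "digital":
        return ELEMENTS      # one full RF chain (incl. data converters) per element
    if architecture == "analog":
        return 1             # single chain; phase shifters do the steering
    if architecture == "hybrid":
        return STREAMS       # one chain per stream, analog phase shift per element
    raise ValueError(architecture)

if __name__ == "__main__":
    for arch in ("digital", "analog", "hybrid"):
        n = rf_chains(arch)
        print(f"{arch:>7}: {n:2d} RF chains (~{n * POWER_PER_CHAIN_W:.0f} W in this toy model)")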

Omni Design offers data converter IP solutions for 5G applications. Their IP is suited for below 6GHz or above 24GHz, using an IF architecture. Their solutions are offered in multiple processes, from 28nm to advanced FinFET nodes. Omni Design has patented technologies that enable data converters to operate at higher sampling rates and precision while significantly reducing power consumption. The white paper goes into detail on the performance characteristics of their IP for 5G. It also talks about the verification requirements and how their IP offering includes the necessary deliverables to ensure rigorous verification.

With much of the 5G deployment still ahead of us, there will be an increasing need for data converter semiconductors. The Omni Design white paper, which is available here, is a good source of information useful for teams working to develop products for 5G telecommunication systems.

Die-to-Die Interface PHY and Controller Subsystem for Next Generation Chiplets

In early April, Gabriele Saucier kicked off Design & Reuse’s IPSoC Silicon Valley 2021 Conference. The IPSoC conference, as the name suggests, is dedicated to semiconductor intellectual property (IP) and IP-based electronic systems. There were a number of excellent presentations at the conference, categorized into eight different subject matter tracks: Advanced Packaging Solution and Chiplet, Analog and Memory Blocks, Design and Verification, Interface IP, Security Solutions, Automotive IP and SoC, Video IP, and High-Performance Computing.

One of the presentations I listened to was titled “Die-to-Die Interface PHY and Controller Subsystem for Next Generation Chiplets” and was presented by Ketan Mehta, Senior Director Product Marketing, Interface IP, from OpenFive, a business unit of SiFive, Inc. The term chiplet has been behind a lot of hot discussions in the industry over the last few years, and the volume and velocity of these discussions have increased of late. As addressing the needs of next-generation chiplets is the key focus of Ketan’s presentation, it is a good idea to clarify what a chiplet is, how much it is talked about, and why. That provides the proper backdrop for the solution that Ketan discusses in his presentation.

Chiplets are neither chips nor packages. They are what we end up with after architecturally disintegrating a large integrated circuit into multiple smaller dies; the smaller dies are referred to as chiplets. The benefits are at least two-fold. The multiple smaller dies could avoid sub-10nm process nodes and reduce the development cost, and the smaller dies benefit from a better yield rate per wafer.
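
A quick way to see the yield argument is the simple Poisson die-yield model Y = exp(-D0*A). The defect density, die areas and known-good-die assumption below are mine, not numbers from the presentation:

# Sketch of the chiplet yield/cost argument using the Poisson die-yield model.
import math

D0 = 0.1          # defects per cm^2 (assumed)
A_MONO = 8.0      # cm^2 monolithic die (assumed)
A_CHIPLET = 2.0   # cm^2 per chiplet, 4 chiplets per system (assumed)
N_CHIPLETS = 4

def die_yield(area):
    return math.exp(-D0 * area)

def silicon_per_good_system_monolithic():
    return A_MONO / die_yield(A_MONO)

def silicon_per_good_system_chiplets():
    # Assume chiplets are tested before assembly (known-good die), so a
    # defect scraps only one small die, not the whole system.
    return N_CHIPLETS * A_CHIPLET / die_yield(A_CHIPLET)

if __name__ == "__main__":
    print(f"monolithic: {silicon_per_good_system_monolithic():.1f} cm^2 of silicon per good system")
    print(f"chiplets:   {silicon_per_good_system_chiplets():.1f} cm^2 of silicon per good system")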

An internet search for the term “chiplets” displays seventeen pages of results. With the exception of a few entries that talk about Lieber’s chocolate chiplets, all other entries refer to semiconductor-related chiplets. The reason for the intensified discussion on chiplets is the projected market opportunity. According to research firm Omdia, the chiplet-driven market is expected to grow to $6B by 2024 from just $645M in 2018. That’s an impressive nine-fold projected increase over a six-year period.

The following is a summary of what I gathered by listening to Ketan’s talk. For complete details, please register and listen to Ketan’s presentation.

As a full-service provider for custom silicon, OpenFive offers services as well as a broad array of differentiated IP to enable the chiplets market. At a basic level, partitioning of a large die into chiplets results in primarily logic-bound, memory-bound or I/O-bound chiplets. To integrate all the chiplets into a System-in-Package (SiP) product, the interconnect IP has to be flexible, comprehensive and easy to integrate with their customers’ products. OpenFive offers D2D IO to enable the chiplets market. D2D IO is a parallel I/O interface at low latency and low power, delivering high throughput for die-to-die connectivity. It includes a controller and a PHY. For artificial intelligence (AI), high-performance computing (HPC), storage, or simply chiplet-to-chiplet interconnect, a D2D PHY interface may be better suited than other types of interfaces. For a comparison of D2D PHY and a generic extra short reach/ultra short reach (XSR/USR) SerDes, refer to Figure 1.

[Figure 1: Comparison of D2D PHY and XSR SerDes]

The D2D controller has been designed with flexibility in mind. The controller is designed to interface with not only the D2D PHY but also with many other types of interfaces. Depending on the particular need and constraints, the controller can interface with Bunch of Wires (BoW), Open High Bandwidth Interface (OHBI), Advanced Interface Bus (AIB) or an XSR SerDes. Refer to Figure 2 to see how the D2D controller handles the data as it flows between the framing layer, the protocol layer and the client adaptation layer.

[Figure 2: D2D Controller Features]

Ketan wraps up his presentation by showcasing how a RISC-V based CPU system and an 800G/400G Ethernet I/O system could benefit from using the D2D IO. If you are interested in benefiting from a chiplet implementation approach, I recommend you register and listen to Ketan's entire talk and then discuss with OpenFive ways to leverage their different IP offerings and services for developing your products.

Demystifying Angel Investing

Recently we published the article Semiconductor Startups – Are they back? which went SemiWiki viral with 30k+ views. It's certainly a sign of the times, with M&A activity still running at a brisk rate. During the day I help emerging companies with business development, including raising money and sell-side acquisitions, so brisk is not just an observation but my personal experience, absolutely. If you are considering starting your own technology company, or have one in progress, this would be a great place to start. I cannot stress enough how important angel investors can be, not just for seed funding, but also as mentors and guidance counselors, which brings us to the upcoming event:

Demystifying Angel Investing

Monday, April 26, 2021, 4:30pm to 6pm Pacific Time

The Silicon Catalyst Angels group is pleased to announce the next Guest Speaker Series event, open to both members and non-members. The Zoom webinar is scheduled to take place on Monday, April 26th, 2021, starting at 4:30pm Pacific time. The event will include a presentation by Dr. Ron Weissman entitled “Demystifying Angel Investing”, followed by a panel session with angel investors who have a long history of participating in the funding of early-stage / seed-stage entrepreneurial teams focused on building new semiconductor companies. Participation is open to all investors and potential angel investors; if you're part of an early-stage startup hunting for investors, you don't want to miss this informative presentation. Registration for the webinar can be made at: Register in advance for this webinar

Agenda

4:30 to 5:15 – “Demystifying Angel Investing” – Dr. Weissman

Guest speaker Ronald Weissman (Angel Capital Association Board Member) will provide an introduction to angel investing. Learn the secrets of angel investing from a twenty-year industry veteran and member of the Angel Capital Association's Board of Directors who has invested in more than 40 startups and has served on dozens of startup boards of directors. Key topics to be covered include:
  • Who qualifies to be an angel investor?
  • Why become an angel investor?
  • What are the personal and community benefits of angel investing?
  • What is the process of finding and executing an angel deal?
  • What are the risks and rewards of angel investing?
  • How does one get started?
  • How do you find and evaluate deals?
  • Should you invest individually or join an angel group?

5:15 to 6pm – Panel Session with Semiconductor Industry Angel Investors

Moderator: Dr. Ron Weissman, Angel Capital Association

Panelists:
  • Manthi Nguyen, experienced entrepreneur, angel investor, and member of Sand Hill Angels and Band of Angels
  • Amos Ben-Meir, Silicon Catalyst Angels, President, and active angel investor
  • Rick Lazansky, Silicon Catalyst LLC, Chairman, long-time angel investor and serial entrepreneur

Dr. Ronald Weissman is Chairman of the Software Group of the Band of Angels, Silicon Valley’s oldest angel organization, and is a member of the Board of Directors of the Angel Capital Association, North America’s umbrella organization for angel investors. He has more than twenty years of experience in venture and angel capital. Ron was a Partner and portfolio manager for seventeen years at global venture capital and private equity firm Apax Partners, where he focused on North American and cross-border investing. He has invested in or advised more than 60 companies and has served on more than 40 corporate boards. Today, Ron advises financial and corporate venture funds, national and regional governments and G2000 corporate innovation programs. He is a frequent conference speaker and advisor on startup ecosystems, entrepreneurship, venture and angel capital trends, AI, startup governance, term sheets and valuation, M&A and other aspects of venture and corporate investing. He has advised governmental and private organizations in Emilia-Romagna (Italy), Armenia, Chile, Israel and the Republic of Georgia, as well as the US White House, on developing effective startup ecosystems. He lectures regularly at Stanford, the University of Santa Clara and other universities in the US and abroad on venture and angel capital trends.

Manthi Nguyen is a lead investor in Portfolia’s Rising Tide Fund, Portfolia Consumer Fund and Portfolia Enterprise Fund. Manthi led the Rising Tide’s investments in Unaliwear and Envoy and co-led its investments in Tenacity and OtoSense. She led the investment in B.Well in the Enterprise Fund. Manthi is one of the most active deal syndicators in Silicon Valley, putting together investments across the Band of Angels, Sand Hill Angels, and Sierra Angels. Manthi and her husband Jim run their own early-stage investment company. Manthi has led investments in 30+ deals in the past 5 years, and served as acting CEO at Peloton Trucking. Ms. Nguyen was involved in a series of early startups developing routing and networking technologies that were later acquired by NEC, Cabletron, Tut Systems, and Cisco. In the early part of her career, Ms. Nguyen was part of General Motors' Advanced Manufacturing Research group, working on developing technologies for office and factory automation. She participated in developing international standards for Open System Interconnect with the National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO). Ms. Nguyen worked on modeling of business processes, information flow, and supply chain management for the General Motors enterprise. Ms. Nguyen's experience at General Motors was invaluable in helping her build the foundation of understanding for how technologies are applied to solve real-life problems. In the last 15 years, Ms. Nguyen has brought her executive experience to focus on small businesses, mentoring entrepreneurs and angel investing. Ms. Nguyen received her Bachelor of Science from the University of Washington and her Master of Business Administration from the University of Michigan.

Amos Ben-Meir is currently an active angel and venture investor in the San Francisco Bay Area. He is passionate about technology, business and the entrepreneurial ecosystem as it relates to start-ups, venture capital and angel investing. As an active angel/venture investor and a Member and Board Director of Sand Hill Angels and Silicon Catalyst Angels, Amos looks to invest in and work with great founding teams that are harnessing cutting-edge technology to deliver great products and services that will result in significant outcomes for all stakeholders. Prior to Amos’s angel and venture investing career, he was involved in six startups, either as an early employee or founder. Four of the startups had successful outcomes and two failed. Amos has held Director and VP Engineering positions during his entrepreneurial career. During these roles, Amos built and managed large engineering teams. This experience in the start-up world has driven him to stay involved in the San Francisco Bay Area start-up ecosystem as an investor in start-ups and mentor to entrepreneurs. In addition, Amos holds various board observer and advisor positions in companies where he is an active investor. Since 2012, Amos’s startup investment portfolio has grown to more than 300 portfolio companies. A partial list of Amos’s investments can be found on his profile page on the AngelList website: https://angel.co/abenmeir-me-com

Rick Lazansky is a serial entrepreneur, active investor, and coach of many startups. Rick was inspired to start Silicon Catalyst by the growth of software startups, supported by incubators, accelerators, and open source software, and the need for ‘hard’ technologies to have the same level of ecosystem support. Rick has invested in more than 40 startups as an angel investor with Sand Hill Angels and as an LP in several venture funds. He has coached startup projects and classes at Stanford, Carnegie Mellon University, UC Santa Cruz and Berkeley. His startups include Vantage Analysis Systems, Denali Software, and RedSpark. He has served as a Board Director at three other incubators – i-GATE Hub in Livermore, Batchery in Berkeley, and Barcelona Ventures in Catalonia. He has a BA/BS in Economics and Information Science and an MS in SC/CE from Stanford.

I hope to see you there!

Your Car Is a Smartphone on Wheels—and It Needs Smartphone Security

Your modern car is a computer on wheels—potentially hundreds of computers on a set of wheels. Heck, even the wheels are infested with computers—what do you think prompts that little light on your dashboard to come on if your tire pressure is low? And computers don’t just run your infotainment system, backup camera, dashboard warning lights, and the voice that tells you to buckle your seatbelt. They direct the fundamental vehicle functions too—acceleration, braking, steering, and transmission.

The Synopsys Automotive Group has coined a term for how vehicles are changing: the “SmartPhonezation of Your Car™.” Which means the transformation of the worldwide vehicle fleet is about much more than a bunch of new features and creature comforts. It means your car is part of the vast internet of things (IoT). This has enabled convenience, luxury, efficiency, safety, and the march toward autonomous driving, but it also makes it part of the equally vast IoT attack surface.

As speakers at security conferences have warned for years, if hackers get control of a connected car, they could take over the acceleration, steering and brakes, demand a ransom from an owner simply to start the car, disable the locks and steal it, and more. That makes security just as important as safety in a car. If it’s not secure, it’s not safe.

Automotive Security Standards in Focus

Fortunately, that reality has prompted an increasing focus on vehicle cybersecurity. There are now multiple frameworks and standards aimed at improving it. One of the most recent is the National Highway Traffic Safety Administration’s (NHTSA’s) draft of “Cybersecurity Best Practices for the Safety of Modern Vehicles.” And while the timing of the draft (it was released in mid-December) was a bit earlier than Chris Clark expected, it did not come as a surprise. Clark, senior manager, automotive software and security, with the Synopsys Automotive Group, declared in a blog post he coauthored earlier this year that he expected 2021 to be “the year of automotive standards.”

Not that standards are new. ISO 26262, from the International Organization for Standardization (ISO), addresses safety-related systems that include one or more electrical and/or electronic (E/E) systems. It has been around for a decade and was updated in 2018. As a Synopsys blog post puts it, the focus of that standard is on “ensuring that automotive components do what they’re supposed to do, precisely when they’re supposed to do it.”

More recently, ISO/SAE 21434, created by ISO and the Society of Automotive Engineers, calls for “OEMs and all participants in the supply chain (to) have structured processes in place that support a ‘Security by Design’ process” covering the development and entire lifecycle of a vehicle. Those include requirements engineering, design, specification, implementation, test, and operations. A first draft of ISO/SAE 21434 was released a year ago, with the final standard expected by the middle of this year.

But those two are private-sector, industry initiatives. ISO is “an independent, non-governmental international organization with a membership of 165 national standards bodies.” That, as Clark puts it, illustrates that “the automotive industry has historically been very strong proponents of self-regulation.” And while in the past that self-regulation had more to do with physical functionality and safety, more recently the industry has also been proactive in looking at how it can address cybersecurity.

But the NHTSA best-practices document means government is going to play a more direct role. “It’s a good starting point for automotive organizations to say this is a real thing,” Clark said. “NHTSA isn’t just saying, ‘Do something about cybersecurity.’ It’s outlining explicit items that have to be addressed.” And he thinks NHTSA’s best practices along with ISO/SAE “are going to provide the automotive industry a good sounding board to look at how we address cybersecurity from a risk-based perspective. I think everybody could agree that the biggest concern is the risk of autonomous driving.”

The goal isn’t perfection. “We’re not building a space shuttle, we’re building a car,” Clark said. “If we wanted to have every single security feature to ensure that a vehicle never failed, we couldn’t afford it.” But that doesn’t mean vehicle cybersecurity can’t improve—a lot.

Automotive Cybersecurity Framework Prescribes Layered Approach

NHTSA recommends that the automotive industry follow the National Institute of Standards and Technology’s (NIST’s) documented Cybersecurity Framework, which is “structured around the five principal functions, ‘Identify, Protect, Detect, Respond, and Recover,’ to build a comprehensive and systematic approach to developing layered cybersecurity protections for vehicles.” That layered approach, it says, “assumes some vehicle systems could be compromised, reduces the probability of an attack’s success and mitigates the ramifications of unauthorized vehicle system access.”

If that sounds more general than specific, that is by design. The goal, which Synopsys supports, is for standards to mandate what results an industry must achieve, not prescribe how to achieve them. “Not all standards are prescriptive,” Clark said. “Standards organizations are trying to minimize the impact on innovation and eliminate a check-box mentality.” Indeed, the reality of human nature is that if government set out a list of rules or specific requirements, “then everybody in the industry would do those things and nothing more,” he said. “But if we say organizations must design a security program that focuses on the cybersecurity of hardware and software to meet the needs of both the customer and the organization, then everybody’s going to be a little bit different, and some are going to be better than others. It starts to create the competitive landscape that we are really interested in.”
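To make the layered, risk-based idea a bit more concrete, here is a minimal illustrative sketch in Python (my own, not from NHTSA, NIST, or Synopsys) that maps the five NIST CSF functions to example vehicle-cybersecurity activities and reports any function a hypothetical program has left uncovered. All of the activity names are invented for illustration only.

```python
# Illustrative only: map the five NIST CSF functions to example
# vehicle-cybersecurity activities and report coverage gaps.
# The activity names are hypothetical, not NHTSA or Synopsys terminology.

NIST_CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# A hypothetical OEM security program, expressed as activities per function.
program = {
    "Identify": ["asset inventory of ECUs", "threat analysis and risk assessment"],
    "Protect":  ["secure boot", "signed over-the-air updates", "CAN message authentication"],
    "Detect":   ["in-vehicle intrusion detection", "anomaly monitoring of bus traffic"],
    "Respond":  ["incident response playbook", "fleet-wide recall/patch process"],
    # "Recover" intentionally left out to show how a gap is reported.
}

def coverage_gaps(program: dict) -> list:
    """Return the CSF functions with no planned activities."""
    return [fn for fn in NIST_CSF_FUNCTIONS if not program.get(fn)]

if __name__ == "__main__":
    for fn in NIST_CSF_FUNCTIONS:
        activities = program.get(fn, [])
        print(f"{fn:8s}: {', '.join(activities) if activities else '-- no activities planned --'}")
    print("Gaps to address:", coverage_gaps(program) or "none")
```

The point of such a sketch is only to show that a layered program touches every function, rather than piling controls into one layer and leaving others empty.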

“Standards organizations are trying to minimize the impact on innovation and eliminate a check-box mentality.”

–Chris Clark
The key overall objectives of the Synopsys Automotive Group are what it calls the four pillars of automotive cybersecurity:
  • Safety: For the vehicle and its occupants
  • Security: Of the vehicle and data
  • Reliability: Of items and features
  • Quality: Of vehicle items
Those goals aren’t prescriptive either, but how to achieve them will become much more specific over the next several months, as this blog features a series of posts covering the major elements of automotive cybersecurity addressed in the NHTSA best practices and other standards. Planned topics include:
  • Risk assessment and validation
  • Sensor vulnerability
  • Cryptographic credentials, crypto agility, and vehicle diagnostics
  • After-market devices
  • Wireless paths in vehicles
  • Software updates/modifications and over-the-air software updates
The goal is to share insights that will help organizations evaluate and improve their security practices. “Many organizations feel that they have addressed cybersecurity—they know it’s important, but they never take the steps to figure out if the actions they are taking are effective,” Clark said. “Are they just meeting a requirement pushed down from an OEM, or are they changing how they do business to ensure that security is a core component and that any standards requirements that come down are easily met?”

Another overall goal of the Automotive Group is to help organizations achieve NHTSA’s call for leadership making cybersecurity a priority. That, according to NHTSA, includes:
  • Providing resources for “researching, investigating, implementing, testing, and validating product cybersecurity measures and vulnerabilities”
  • Facilitating seamless and direct communication channels through organizational ranks related to product cybersecurity matters
  • Enabling an independent voice for vehicle cybersecurity-related considerations within the vehicle safety design process
The Synopsys role in enabling that, Clark said, will be to give automotive clients the range of tools and services they need in one place. “No matter what the need is, all the way from SoC to a functional security problem or developing a new brake control system, we’ll provide the hardware technology that will address that and then go through your security testing and evaluation and software development. It’s an under-one-roof solution,” he said.
Dark Data Explained

Dark data is defined as the information assets organizations collect, process, and store during regular business activities but generally fail to use for other purposes (for example, analytics, business relationships, and direct monetizing). Similar to dark matter in physics, dark data often comprises most of an organization’s universe of information assets. Thus, organizations often retain dark data for compliance purposes only, and storing and securing it typically incurs more expense (and sometimes greater risk) than value.

Dark data is a type of unstructured, untagged, and untapped data that is found in data repositories and has not been analyzed or processed. It is similar to big data, which is large and complex unstructured data (images posted on Facebook, email, text messages, GPS signals from mobile phones, tweets, TikTok videos, Snaps, Instagram pictures, and other social media updates) that cannot be processed by traditional database tools, but dark data differs in that it is mostly neglected by business and IT administrators in terms of its value. Dark data is also known as dusty data.

Dark data is found in log files and data archives stored within large enterprise-class data storage locations. It includes all data objects and types that have yet to be analyzed for any business or competitive intelligence or to aid in business decision making. Typically, dark data is complex to analyze and is stored in locations where analysis is difficult, so the overall process can be costly. It can also include data objects that have not been captured by the enterprise, or data that is external to the organization, such as data stored by partners or customers. Up to 90 percent of big data is dark data.
Dark Data Explained
With the growing accumulation of structured, unstructured and semi-structured data in organizations -- increasingly through the adoption of big data applications -- dark data has come specifically to denote operational data that is left unanalyzed. Such data is seen as an economic opportunity for companies if they can take advantage of it to drive new revenue or reduce internal costs. Some examples of data that is often left dark include server log files that can give clues to website visitor behavior, customer call detail records that can indicate consumer sentiment, and mobile geolocation data that can reveal traffic patterns to aid in business planning. Dark data may also describe data that can no longer be accessed because it has been stored on devices that have become obsolete.

Types of Dark Data

1) Data that is not currently being collected.
2) Data that is being collected, but that is difficult to access at the right time and place.
3) Data that is collected and available, but that has not yet been productized or fully applied.

Unlike dark matter -- a form of matter thought to account for approximately 85% of the matter in the universe, composed of particles that do not absorb, reflect, or emit light and so cannot be detected by observing electromagnetic radiation -- dark data can be brought to light, and so can its potential ROI. What’s more, a simple way of thinking about what to do with the data -- a cost-benefit analysis -- can remove the complexity surrounding previously mysterious dark data.

Value of Dark Data

The primary challenge presented by dark data is not just storing it, but determining its real value, if any at all. In fact, much dark data remains un-illuminated because organizations simply don’t know what it contains. Destroying it might be too risky, but analyzing it can be costly, and it’s hard to justify that expense if the potential value of the data is unknown. To determine whether their dark data is even worth further analysis, organizations need a means of quickly and cost-effectively sorting, structuring, and visualizing it. An important point in getting a handle on dark data is to understand that this isn’t a one-time event. The first step in understanding the value of dark data is identifying what information it includes, where it resides, and its current status in terms of accuracy, age, and so on. Getting to this state will require you to do the following (a brief code sketch follows the list):
  • Analyze the data to understand the basics, such as how much you have, where it resides, and how many types (structured, unstructured, semi-structured) are present.
  • Categorize the data to begin understanding how much of what types you have, and the general nature of information included in those types, such as format, age, etc.
  • Classify your information according to what will happen to it next. Will it be archived? Destroyed? Studied further? Once those decisions have been made, you can send your data groups to their various homes to isolate the information that you want to explore further.
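As promised above, here is a brief, hypothetical Python sketch of the analyze/categorize/classify steps: it walks a directory tree, buckets files by type and age, and proposes a next action for each bucket. The file-type sets, age threshold, and action labels are assumptions made purely for illustration, not guidance from the article.

```python
# Illustrative dark-data triage: walk a data store, categorize files by
# type and age, and propose a next step (analyze / archive / review for
# deletion). Thresholds and action labels are arbitrary assumptions.

import os
import time
from collections import Counter

STRUCTURED = {".csv", ".parquet", ".db", ".sqlite"}
SEMI_STRUCTURED = {".json", ".xml", ".yaml", ".log"}
ONE_YEAR = 365 * 24 * 3600

def classify(path: str) -> tuple:
    """Return (category, proposed_action) for a single file."""
    ext = os.path.splitext(path)[1].lower()
    age = time.time() - os.path.getmtime(path)
    if ext in STRUCTURED:
        category = "structured"
    elif ext in SEMI_STRUCTURED:
        category = "semi-structured"
    else:
        category = "unstructured"
    if age > ONE_YEAR and category == "unstructured":
        action = "review for deletion or archive"
    elif category == "structured":
        action = "analyze for business value"
    else:
        action = "tag and index before analysis"
    return category, action

def triage(root: str) -> Counter:
    """Summarize how many files fall into each (category, action) bucket."""
    buckets = Counter()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                buckets[classify(os.path.join(dirpath, name))] += 1
            except OSError:
                continue  # unreadable file: skip rather than fail the scan
    return buckets

if __name__ == "__main__":
    for (category, action), count in triage("./data").most_common():
        print(f"{count:6d}  {category:15s} -> {action}")
```

Even a crude inventory like this supports the cost-benefit question: it tells you how much of each kind of data you hold before you spend money analyzing any of it.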
Once you’ve identified the relative context for your data groups, you can focus on the data you think might provide insights. You’ll also have a clearer picture of the full data landscape relative to your organization, so you can set information governance policies that alleviate the burden of dark data while also putting it to work.

Future of Dark Data

Startups going after dark data problems are usually not playing in existing markets with customers who are self-aware of their problems. They are creating new markets by surfacing new kinds of data and creating previously unimagined applications with that data. But when they succeed, they become big companies with, ironically, big data problems. The question many people are asking is: what should be done with dark data? Some say data should never be thrown away, since storage is so cheap and the data may have a purpose in the future.

Ahmed Banafa, author of the books Secure and Smart Internet of Things (IoT) Using Blockchain and AI and Blockchain Technology and Applications

Read more articles at: Prof. Banafa website

Why I made the world’s first on-demand formal verification course

Verification Challenge

As chip design complexity continues to grow astronomically, with hardware accelerators running riot alongside the traditional hardware comprising CPUs, GPUs, networking, and video and vision hardware, concurrency, control, and coherency will dominate the landscape of verification complexity for safe and secure system design. Even a quintillion (10^18) or a sexdecillion (10^51) simulation cycles will not be adequate for ensuring the absence of bugs. Bug escapes continue to cause pain, and some of them will end up causing damage to life. This becomes blatantly obvious when you look at the best industrial survey in verification, conducted every two years by Harry Foster and Wilson Research. With only 68% of ASIC/IC designs getting built the first time around, and the same number running late, the story for FPGA-based designs is even worse, with only 17% hitting the first-time-around mark. I’m not a pessimist, but every time I look at the trends in such reports, it doesn’t make me feel that we are accelerating the deployment of the best verification methods known to mankind.

The Promise of Formal

Formal methods are a mathematical way of analysing requirements, providing clear specifications, and making use of computational logic under the hood to confirm or deny the presence of bugs in a model. This model can be a hardware design expressed in Verilog or VHDL, in other languages such as Chisel or Bluespec SV, or even a gate-level netlist. The only way of obtaining 100% guarantees that a given model doesn’t have functional defects or violate security or safety requirements is to verify it with the rigour of formal methods, assisted by a great methodology. A great methodology doesn’t exist in a vacuum; it is built as a collection of best practices on top of the technologies and describes ‘how’ those technologies can be used. The how is therefore an important question to answer.
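To give a flavor of what “confirming or denying the presence of bugs in a model” looks like in practice, here is a small property-checking sketch using the Z3 SMT solver’s Python bindings (my choice of tool for illustration; the article does not prescribe any particular solver). A toy 8-bit saturating counter is checked against the property that it never exceeds its limit: the solver either reports that the property holds or produces a concrete counterexample.

```python
# A toy property check with the Z3 SMT solver (pip install z3-solver).
# Design under test: an 8-bit saturating counter that must never exceed LIMIT.
# We ask Z3 whether any state satisfying the invariant can violate the
# property in one step; "unsat" means no such violation exists.

from z3 import BitVec, BitVecVal, If, ULE, Not, Solver, sat

LIMIT = BitVecVal(200, 8)

count = BitVec("count", 8)                                 # current state (symbolic)
next_count = If(ULE(count, LIMIT - 1), count + 1, count)   # saturating update

s = Solver()
s.add(ULE(count, LIMIT))            # assume the invariant holds now
s.add(Not(ULE(next_count, LIMIT)))  # look for a state that breaks it next cycle

if s.check() == sat:
    print("Property violated, counterexample:", s.model())
else:
    print("Property holds: counter never exceeds 200")
```

Industrial property checking is run on RTL with assertion languages rather than hand-built models like this, but the confirm-or-refute flow is the same idea.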

Challenge with formal methods: Lack of good training

One reason why formal methods adoption has been limited is the lack of know-how. While there are myriad reasons for this, the foremost reason formal is not everyone’s cup of tea is the absence of good, comprehensive training. Formal methods have an exciting history, with numerous landmark contributions from eminent computer scientists, but for engineers the subject continues to be enigmatic. It is still perceived as abstruse; the main exception is the use of apps by engineers, thanks to the EDA companies who have provided automated solutions to help solve bespoke problems. The formal market is now estimated to be 40% of the size of the simulation market. That automation provided easy-to-use tools that offer an easy starting point at one end, through static lint analysis, and at the other extreme solve problems like connectivity checking, register checking, X-checking, CDC, and so on. Between the two extremes sits the rest of the interesting landscape of verification problems solvable through model checking (also known as property checking) as well as equivalence checking.

Methodology is the key to success in everything, and property checking is no different. But a good methodology has to focus on problem solving and should not tie you to a particular tool. While I’m a huge fan of property checking and production-grade equivalence checking technologies, they do not solve all verification problems. For example, if I’m interested in making sure that a compiler works correctly, or that an interconnect protocol model doesn’t have deadlocks, I may have to look beyond the run-of-the-mill property checking solutions. This is where theorem proving comes in. Theorem provers do not suffer from the capacity issues of dynamic simulation or property checking tools, and if you know how to use them well, they can be used to verify theoretically infinite-sized systems, including operating systems and compilers as well as huge hardware designs. There are several questions:
  1. Where do you go to learn about all these formal technologies?
  2. Why should you learn formal?
  3. What is formal?
  4. How does one find an accelerated path of learning formal with support without getting locked in a vendor tool?

Formal Verification 101

Welcome to Formal Verification 101 - the world’s first on-demand, self-paced video course that provides a comprehensive introduction to all essential aspects of formal methods, leading to a certification at the end. The course comes with an online lounge accessible to enrolled students, where they can discuss questions and engage with experts. Let me first give you a personal perspective on why I decided to do this.

A Personal Perspective

The course is designed and delivered by yours truly, and in creating it I took time to understand what has worked for me and what hasn’t. When I started learning formal over two decades ago, we did not have video courses in computational logic. While I took many courses during my master’s, as an electrical and electronics engineer it was a steep learning curve. A large part of that was that we were not given many practical perspectives: I was learning something in theory but didn’t know why or where it was useful. I was lucky to work on my doctorate with Tom Melham at the University of Oxford, and really lucky to have had a few hours with Mike Gordon from the University of Cambridge, who taught me how to use the HOL 4 theorem prover. If you’re not aware, Tom Melham and Mike Gordon were among the first computer scientists to use higher-order logic and formal methods for hardware verification. However, not everyone can get the opportunities I got at Oxford and Cambridge. I have been working on industry projects and training engineers in the practical use of formal, and have trained nearly 200 engineers in the semiconductor industry, including designers and verification folks. Working on cutting-edge designs with shrinking schedules gives me a strange sort of excitement and joy, but teaching and sharing what worked and what didn’t gives me an equal thrill. So, as it happens, I love sharing knowledge and enjoy teaching.

Two decades later

When I founded Axiomise three years ago, there were still no video courses covering all the key formal technologies from an industrial perspective. In fact, there wasn’t even a structured course covering all three formal technologies from an industrial perspective. There have been a few tutorials on theorem proving, and scanty material on property checking here and there, but no comprehensive introduction to all the key formal technologies in one place, with a practical perspective, delivered as a standalone course with online support. Meanwhile, we built a range of instructor-led courses spanning one to four days, designed to offer in-person tutorials in a structured manner covering theory, labs, and quizzes. The goal is to provide production-grade training to engineers in the industry. In its third year, this training is in demand, and we continue to deliver it face-to-face via Zoom. We issue certificates of completion. The main advantage of these courses is that I deliver them in person: students gain insights into real-world problems, get a chance to ask questions live during the training, and get their hands dirty on hard problems where we learn together how to solve them.

Bridging the gap in industry

What we discovered was a gap in the industry and in our own portfolio. While our instructor-led courses are great for a newbie or a practising professional, they are not self-paced, and they require a commitment to multiple consecutive days, which can be a challenge for some people. The Formal Verification 101 course is designed to bridge this gap. You can take this comprehensive introductory course at your own pace, in your own time, and learn the fundamentals of formal methods covering all the key formal technologies: theorem proving, property checking, and equivalence checking. We take an interactive approach to learning, providing hands-on demos that you can redo yourself by downloading the source code, so you gain experience of seeing formal methods in action. Once you’re comfortable with this course, have passed the final exam, and would like to explore more advanced concepts, you can take the multi-day instructor-led courses.

Expert Opinions

When I completed the course design, I invited several peers from industry and academia to take the course, review it, and offer feedback. I was conscious in choosing my first audience: I wanted a spread of experience levels as well as geographic spread. We gave the course to Iain Singleton, a formal verification engineer; Rajat Swarup, a manager at AWS; Supratik Chakraborty, professor at IIT Bombay; and Harry Foster, chair of the IEEE 1850 Property Specification Language Working Group. They provided candid and open feedback, available to read at https://elearn.axiomise.com. Harry Foster humbled me with this comment: “I’ve always said that achieving the ultimate goal of advanced formal signoff depends on about 20% tool and 80% skill. Yet, there has been a dearth of training material available that is essential for building expert-level formal skills. But not anymore! Dr Ashish Darbari has created the most comprehensive video course on the subject of applied formal methods that I have ever seen. This course should be required by any engineer with a desire to master the art, science, and skills of formal methods.”

With videos, text, downloadable source code, interactive demos, quizzes, and a final exam leading to a certificate, we have got everything covered for you. I have sat down and recorded all the content myself, and created captions for all the videos so people with challenges can also enjoy the course. It has been 20+ years of living in the trenches, years of planning, and several months of production that have gone into this work. I hope you will give this course a chance and join me in my love for formal. Let us collectively design and build a safer and more secure digital world. Sign up for this unique course in formal methods at https://elearn.axiomise.com.

Embedded Analytics Becoming Essential

SoC integration offers huge benefits through reduced chip count in finished systems, higher performance, improved reliability, and more. A single die can contain billions of transistors, with multiple processors and countless subsystems all working together. The result has been rapid growth of semiconductor content in many old and new products, including automotive, networking, telecommunications, medical, mobile, and entertainment. While higher levels of integration are largely beneficial, they bring new challenges in system-level integration, debug, and verification. Embedded analytics will play an important role in implementing and verifying these large and complex systems. Many SoCs have large numbers of blocks and subsystems connected through on-chip bus or network interfaces. They use on-chip memory and registers and incorporate complex software running application code.
In previous generations, system-level observation and debug were challenging but possible through what were then external connections or in-circuit emulators (ICE). Modern SoCs require a completely new approach to understand dynamic system operation, with sufficient visibility and control to make sense of what is occurring during operation. Siemens EDA writes about this in a white paper called “Embedded Analytics - A platform approach.” They cite the causes of increased design complexity leading to increased difficulty in design, optimization, verification, and lifecycle management. First on their list is multi-source IP, where one SoC will contain IP from numerous sources, both internal and external. These IP elements can include heterogeneous processors, interfaces, and a host of other kinds of blocks. Next comes the software for each of these processors. The software could be algorithmic or could manage chip operations or security, and each of these software packages will probably be built on a software stack. Complexity in these SoCs can come from hardware and software interactions. The Siemens white paper correctly points out that the problems caused by these interactions are often non-deterministic, and efforts to observe them can make them disappear or change behavior.

System-level validation can cost tens of millions of dollars. Functional validation needs to start early in the design process and continue through to system installation. System-level interactions need to be examined using simulation, emulation, prototyping, and finished systems. Even after product shipment, software updates can cause system-level issues that will need to be investigated. By now it is clear that system-level visibility into hardware and software is necessary. Without enough detail it may be difficult to pinpoint problems; on the other hand, too much data can also be an issue. The white paper points out that a truly effective observation and data-gathering system for SoCs needs sophisticated control over what data is collected and when.

Siemens EDA has developed the Tessent Embedded Analytics platform to allow system designers to wrap their hands around the problems of system-level real-time observation and analysis. There are several pieces to the platform, allowing it to be integrated with the target SoC and then used to collect and interpret data on system operation. Tessent Embedded Analytics has an IP subsystem that is integrated into the target SoC; this IP is easily parameterized to make integration efficient and easy. There is also a hierarchical message-passing fabric used to transfer the collected data efficiently with minimal added silicon overhead. The message-passing fabric can handle local or cross-chip data transfers and is separate from the mission-mode interconnect. To help filter the data collected, there are programmable filters, counters, and matchers that enable smart, configurable data filtering and event triggering in real time at the operating frequency of the SoC. There are secure data links for collecting data and interacting with the outside world. Tessent Embedded Analytics also contains a software interface layer that communicates between the application layers and the analytics IP.
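To give a feel for what programmable filters, counters, and matchers do, here is a purely conceptual Python model (my own simplification; it does not represent the actual Tessent Embedded Analytics hardware or SDK interfaces): transactions stream through an address matcher, a counter tallies the matches, and a trigger fires once a threshold is crossed so that only the interesting window of activity is captured.

```python
# Conceptual software model of on-chip filter/counter/matcher behaviour.
# This illustrates the idea only; it does not represent the actual
# Tessent Embedded Analytics hardware or SDK interfaces.

from dataclasses import dataclass

@dataclass
class Transaction:
    address: int
    is_write: bool
    data: int

class MatchCounterTrigger:
    """Count writes hitting an address window; trigger past a threshold."""

    def __init__(self, lo: int, hi: int, threshold: int):
        self.lo, self.hi, self.threshold = lo, hi, threshold
        self.count = 0
        self.triggered = False

    def observe(self, txn: Transaction) -> bool:
        """Return True when this transaction should be captured."""
        if self.lo <= txn.address <= self.hi and txn.is_write:
            self.count += 1
            if self.count >= self.threshold:
                self.triggered = True
        return self.triggered  # capture everything once the trigger fires

if __name__ == "__main__":
    monitor = MatchCounterTrigger(lo=0x4000_0000, hi=0x4000_0FFF, threshold=3)
    stream = [Transaction(0x4000_0010, True, i) for i in range(5)]
    captured = [t for t in stream if monitor.observe(t)]
    print(f"Captured {len(captured)} of {len(stream)} transactions after trigger")
```

The design intent behind this kind of filtering is exactly what the white paper describes: keep the firehose of on-chip traffic on chip, and forward only the window of data that matters.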
The Tessent Embedded Analytics platform includes the tools to create applications that interact with its IP components to enable sophisticated monitoring of the SoC. There is a software development kit (Embedded SDK) that lets user-developed applications configure, control, and process the analytics data. The Configuration API, Data API, and Verification API are available for use in either the Tessent Embedded Analytics IDE or third-party IDE environments through plugins. The Siemens white paper describes in more detail how the entire process works and how it can support prototyping through FPGAs or emulators, as well as in-system silicon. Without an embedded analytics platform, system designers face an almost intractable problem when it comes to verifying and optimizing present-day SoCs. Siemens appreciates that while an embedded analytics platform must be comprehensive, it must not require excessive silicon resources or interfere with system operation. The full white paper is worth reading to gain a better understanding of how Siemens EDA has assembled a powerful solution for these difficult challenges. The white paper is available on the Siemens EDA website.

Embedded Analytics Becoming Essential
by Tom Simon on 04-22-2021 at 6:00 am

SoC integration offers huge benefits through reduced chip count in finished systems, higher performance, improved reliability, etc. A single die can contain billions of transistors, with multiple processors and countless subsystems all working together. The result of this has been rapid growth of semiconductor content … Read More


Adaptive Power/Performance Management for FD-SOI
by Tom Dillinger on 04-21-2021 at 10:00 am

A vexing chip design issue is how to achieve (or improve) performance and power dissipation targets, allowing for a wide range of manufacturing process variation (P) and dynamic operation voltage and temperature fluctuations (VT).  One design method is to analyze the operation across a set of PVT corners, and ensure sufficient… Read More


Arm Announces v9 Generation – Custom, DSP, Security, More
by Bernard Murphy on 04-21-2021 at 6:00 am

This wasn’t as much of a big bang announcement as others I have seen. More a polishing of earlier-announced reveals, positioning updates, together with some new concepts. First, you probably remember the Cortex-X announcement from about a year ago, allowing users to customize their own instructions into the standard instruction… Read More


Addressing SoC Test Implementation Time and Costs
by Daniel Payne on 04-20-2021 at 10:00 am

In business we all have heard the maxim, “Time is Money.” I learned this lesson early on in my semiconductor career when doing DRAM design, discovering that the packaging costs and time on the tester were actually higher than the fabrication costs. System companies like IBM were early adopters of Design For Test (DFT)… Read More


5G Calls for New Transceiver Architectures
by Tom Simon on 04-20-2021 at 6:00 am

5G phones are now the top tier devices from many manufactures, and 5G deployment is accelerating in many regions. While 4G/LTE has served us well, 5G is necessary to support next-generation telecommunication needs. It will be used heavily by consumers and industry because it supports many new use cases. There is an excellent white… Read More


Die-to-Die Interface PHY and Controller Subsystem for Next Generation Chiplets
by Kalar Rajendiran on 04-19-2021 at 10:00 am

In early April, Gabriele Saucier kicked off Design & Reuse’s IPSoC Silicon Valley 2021 Conference. IPSoC conference as the name suggests is dedicated to semiconductor intellectual property (IP) and IP-based electronic systems. There were a number of excellent presentations at the conference. The presentations had been… Read More


Demystifying Angel Investing
by Daniel Nenni on 04-19-2021 at 6:00 am

Recently we published the article Semiconductor Startups – Are they back? which went SemiWiki viral with 30k+ views. It’s certainly a sign of the times with M&A activity still running at a brisk rate. During the day I help emerging companies with business development including raising money and sell-side acquisitions… Read More


Your Car Is a Smartphone on Wheels—and It Needs Smartphone Security
by Taylor Armerding on 04-18-2021 at 10:00 am

Your modern car is a computer on wheels—potentially hundreds of computers on a set of wheels. Heck, even the wheels are infested with computers—what do you think prompts that little light on your dashboard to come on if your tire pressure is low? And computers don’t just run your infotainment system, backup camera, dashboard warning… Read More

Dark Data Explained
by Ahmed Banafa on 04-18-2021 at 8:00 am

Dark data defines as the information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes (for example, analytics, business relationships and direct monetizing). Similar to dark matter in physics, dark data often comprises most organizations’… Read More


Why I made the world’s first on-demand formal verification course
by Ashish Darbari on 04-18-2021 at 6:00 am

Verification Challenge
As chip design complexity continues to grow astronomically with hardware accelerators running riot with the traditional hardware comprising CPUs, GPUs, networking and video and vision hardware, concurrency, control and coherency will dominate the landscape of verification complexity for safe … Read More