
Arm Announces v9 Generation – Custom, DSP, Security, More

by Bernard Murphy on 04-21-2021 at 6:00 am


This wasn’t as much of a big bang announcement as others I have seen. More a polishing of earlier-announced reveals and positioning updates, together with some new concepts. First, you probably remember the Cortex-X announcement from about a year ago, allowing users to add their own custom instructions to the standard instruction set. A response to similar flexibility in RISC-V. I get the impression this started as a tactical response to specific customer needs. Understandable, but you could see how that could get out of control as interest spreads more widely. Richard Grisenthwaite, Sr VP, Chief Architect and Fellow at Arm, talked about rationalizing standardization versus customization in a spectrum of support. Details not revealed yet, but it makes sense.

Who’s interested?

Richard nodded to the Fugaku supercomputer in his talk, suggesting it possibly took advantage of this v9 flexibility. And he nodded to AWS Graviton2 as another potential beneficiary. But he added that really the rush to differentiate dominates every place we compute, from the cloud to the edge. Hence the balancing act in v9: preserving all the benefits of standardization and compatibility with a massive ecosystem, while still allowing integrators to add their own secret sauce.

There’s more

That’s not all there is to v9. They have launched a rolling program with enhancements to machine learning, DSP and security in CPU, GPU and NPU platforms. Take machine learning first. Arm continues to stress that the range of ML applications can’t be met with a one-size-fits-all solution. So they continue to extend support in A, M and R processors, working closely with colleagues at NVIDIA. (Jem Davies followed with more detail on this topic.)

DSPs?

This one took me a little by surprise. Looking backward, Arm didn’t make much noise about DSPs, perhaps because they didn’t see a big enough opportunity. But the range of DSP-related applications has been exploding: in automotive for infotainment audio, communication, sensing, driver alertness, road noise suppression and V2X; more in consumer audio (wireless earbuds, for example). No doubt again a spectrum of needs where Arm sees an opportunity for enhanced standard processors rather than dedicated DSPs? Richard didn’t elaborate.

He did however mention the scalable vector extensions (SVE) they developed with Fujitsu for Fugaku, expecting this capability to be extended to a much wider range of applications. He mentioned they have already created SVE2 to work well with 5G systems. I assume baseband applications that you might normally expect high-end processors or DSPs to fill today. That can only be good; room for more kinds of embedded solution.

Security

Arm continues to emphasize security and thank goodness for that because I see no other central force to drag us towards building secure distributed systems. Impressively a following panel on this topic included a panelist from Munich RE Group. Investors care about liabilities. The easy-going days of “we can figure this out on our own” are drawing to a close.

Arm sees security stakes being raised by the distribution of compute between edge devices and compute nodes through wireless and backhaul networks. Application developers and service providers will want to run tasks where it makes most sense, without having to worry about which compute nodes have what security.

Here Richard talked about a confidential compute architecture to preserve data security. Arm plans to reveal more details on this architecture later in the year. One concept they will introduce is dynamically created realms: zones in which ordinary programs can run safely, separate from the secure and non-secure zones we already understand. Service customers’ apps and data need assured high levels of security, yet the current view of secure versus insecure zones on a device doesn’t really address that need. (Where would such a task run? In the secure zone? Heck no. In the insecure zone? Ditto.) Realms provide a separate computation world outside the secure and non-secure zones, designed to depend on only a small level of trust in the rest of the system. Even if a hack compromises components of the system, an app and its data running inside a realm can still be secure.

More security extensions on the way

Arm has also been working with Google on memory tagging extensions to protect against the memory safety issues we will never eliminate in our software. They’ve been working with Cambridge University on Capability Hardware Enhanced RISC Instructions (CHERI) to further bound vulnerabilities, all the way down to the ISA level. And they’re working with the UK government on a program called Morello, designed to bound the scope of any breach that does get a foothold.
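The core idea behind memory tagging can be illustrated with a toy model. This is a hedged conceptual sketch, not Arm’s actual MTE design: every allocation gets a small random tag, every pointer carries the tag it was issued with, and any access where the two disagree faults. The class and method names here are purely illustrative.

```python
# Conceptual sketch of memory tagging (NOT Arm's MTE implementation):
# each allocation and each pointer carry a small tag; a load is allowed
# only when the two tags match. All names here are illustrative.
import secrets

class TaggedHeap:
    def __init__(self):
        self.allocations = {}   # base address -> (tag, size)
        self.next_addr = 0x1000

    def malloc(self, size):
        tag = secrets.randbelow(16)          # MTE-style 4-bit tag
        base = self.next_addr
        self.next_addr += size + 16          # pad between allocations
        self.allocations[base] = (tag, size)
        return (tag, base)                   # "pointer" = (tag, address)

    def free(self, ptr):
        _, base = ptr
        tag, size = self.allocations[base]
        # retag on free so stale pointers no longer match
        self.allocations[base] = ((tag + 1) % 16, size)

    def load(self, ptr, offset):
        tag, base = ptr
        mem_tag, size = self.allocations[base]
        if tag != mem_tag or offset >= size:
            raise MemoryError("tag check failed (use-after-free or overflow)")
        return 0  # placeholder for the stored value

heap = TaggedHeap()
p = heap.malloc(32)
heap.load(p, 0)        # fine: tags match
heap.free(p)
# heap.load(p, 0)      # would raise MemoryError: pointer tag is now stale
```

Both use-after-free and linear overflows trip the same check, which is why tagging catches so many memory safety bugs probabilistically without tracking each one explicitly.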

Lots of interesting work: rationalization of the extensions program, more ML-everywhere and an interesting start into DSP markets. You can read the press release HERE.


Addressing SoC Test Implementation Time and Costs

by Daniel Payne on 04-20-2021 at 10:00 am


In business we all have heard the maxim, “Time is Money.” I learned this lesson early on in my semiconductor career when doing DRAM design, discovering that the packaging costs and time on the tester were actually higher than the fabrication costs. System companies like IBM were early adopters of Design For Test (DFT) by adopting scan design with special test Flip-Flops and then using Automatic Test Pattern Generation (ATPG) software to create test implementations that had high fault coverage, with a minimum amount of time on the tester.

It took a while for logic designers at IDMs to adopt DFT techniques, because they were hesitant to give up silicon area to improve fault coverage numbers, instead favoring the smallest die size to maximize profits.

Challenges

Today there are many challenges to test implementation time and costs:

  • Higher Design Complexity
    • >100,000,000 cell instances
    • 100s of cores
  • Subtle Defects
    • >50% of failures not found with standard tests
  • In-system testing required
  • Fewer test pins
    • Designs/cores with <7 test pins

Consider a modern GPU design with 50 billion transistors and 100 million cell instances: just how do you create enough test patterns to meet fault coverage goals while spending the minimum time on a tester?

A Solution

Adding scan Flip-Flops is a great start and a proven DFT methodology, but what if you want to meet these newer challenges?

It’s all about controllability and observability in the test world, and by adding something called a Test Point, you make controlling and observing a low coverage point in your logic much, much easier. Consider the following cloud of logic, followed by a D FF:

If the D input to the FF is difficult to control, or set, then we never observe a change in the output at point B. By adding a Test Point, we can now control the D input, thus improving the fault coverage:

Ideally, a test engineer wants Test Points that are compatible with existing scan compression, have minimal impact on Power/Performance/Area (PPA), and are easy to use.
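The controllability argument above can be made concrete with a small simulation. This is an illustrative sketch, not any Synopsys algorithm: a node driven by a wide AND cone is almost never set to 1 by random patterns, so faults behind it escape detection, while an OR-type control test point makes the node trivially controllable.

```python
# Hedged illustration of why test points help: a node behind an 8-input
# AND cone is hard to control with random patterns; a control test point
# that ORs in a test signal makes it easy to set. Names are illustrative.
import random

random.seed(1)

def node_value(inputs, test_point=False, test_signal=0):
    v = all(inputs)                  # AND cone output: rarely 1 at random
    if test_point:
        v = v or bool(test_signal)   # control point ORs in a test signal
    return int(v)

def controllability(trials, **kw):
    """Fraction of random input patterns that set the node to 1."""
    hits = sum(node_value([random.random() < 0.5 for _ in range(8)], **kw)
               for _ in range(trials))
    return hits / trials

print(controllability(10_000))                                  # ~1/256
print(controllability(10_000, test_point=True, test_signal=1))  # 1.0
```

The same reasoning applies in reverse for observability: an observation test point taps a node whose effect otherwise rarely propagates to an output.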

Synopsys TestMAX Advisor

I spoke with Robert Ruiz, Director of Product Marketing, Test Automation Products and Pawini Mahajan, Sr. Product Marketing Manager, Digital Design Group over a Zoom call to learn how the Synopsys TestMAX Advisor tool fits into an overall suite of test tools. Robert and I first met back in the 1990s when he worked at Sunrise Test Systems, and I was at Viewlogic, so he has a deep history in the test world. Pawini and I both worked at Intel, still the number one semiconductor company in the world by revenue.

Here’s the tool sub-flow for TestMAX Advisor:

The TestMAX Advisor tool analyzes and ranks all of the FFs used in a design, then determines which nets need added controllability or observability, so a test engineer can decide how many Test Points should be inserted and see how much fault coverage improvement and test-pattern reduction to expect. The engineer can even set percentage coverage goals and allow TestMAX Advisor to add Test Points automatically to meet them.

Side note to users of other ATPG tools, yes, you can use TestMAX Advisor with your favorite ATPG tool.

You get to see the incremental fault coverage improvement by adding Test Points in a table format:

Test Implementation

Adding extra Test Points is going to add new congestion to the routing, so the developers figured out how to make the placement of Test Points be physically-aware, so that congestion is minimized and timing impacts are reduced. Just look at the congestion map comparison below:

Seven examples of fault coverage improvements (up to 30%) and pattern count reductions (up to 40%) were provided:

Summary

DFT using scan and ATPG tools is recommended to achieve fault coverage goals, and when the shortest test times are important you can consider adding Test Points to improve controllability and observability. EDA developers at Synopsys have coded the features to make Test Point insertion an easy task, producing promising results that address test implementation time and costs. TestMAX Advisor looks to be another worthy tool in your toolbox.


5G Calls for New Transceiver Architectures

by Tom Simon on 04-20-2021 at 6:00 am

5G Architecture

5G phones are now the top tier devices from many manufacturers, and 5G deployment is accelerating in many regions. While 4G/LTE has served us well, 5G is necessary to support next-generation telecommunication needs. It will be used heavily by consumers and industry because it supports many new use cases. There is an excellent white paper by Omni Design Technologies that discusses the new applications for 5G, the technological changes that are necessary, and the hardware architectures needed to support them.

The white paper, titled “5G Technology and Transceiver Architecture” lists the three main use cases as enhanced mobile broadband (eMBB), ultra-reliable and low-latency communications (URLLC), and massive machine-type communications (mMTC). Each has specific technical requirements aligned with the scenarios that each will be used for. Each will vary in terms of peak data rate, spectral efficiency, latency, connection density, reliability, and many more.

Consumers will see 4K and 8K streaming, AR/VR improvements, and much lower latency and higher speed access to the cloud. To fully realize higher bandwidth, 5G will open up new communications bands from 24GHz to 100GHz. URLLC will be used for applications that require real-time performance, like automotive or some industrial applications; it calls for 1ms latency and 99.999% reliability. mMTC will be used for billions of connected devices such as wearables, IoT and sensors that use lower bandwidth and require low power.

I already alluded to one of the key technologies, millimeter-wave (mmWave), that will be essential to 5G. The white paper says that 5G deployment is going to move first and most rapidly in the sub-6GHz bands with the help of infrastructure reuse. The bands above 24GHz will offer much greater bandwidth but come with additional technical complexity. One of the key issues is that propagation losses will be much higher due to obstacles and signal absorption in the atmosphere.

MIMO will be used to improve signal performance through spatial diversity, spatial multiplexing and beamforming. Spatial diversity takes advantage of multipath to gain information using multiple antennas. The different antennas see different signals that can be used to mathematically determine the transmitted signal. Spatial multiplexing creates multiple spatially separated signals between the transmitter and receiver to boost data rates.
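The spatial multiplexing idea can be sketched in a few lines. This is a toy illustration, not a 5G implementation: with two TX and two RX antennas and a channel matrix H known at the receiver (in practice estimated from pilot symbols), two simultaneously transmitted streams arrive mixed together, and inverting H separates them again.

```python
# Toy 2x2 spatial multiplexing (illustrative, noiseless): the receiver
# recovers two simultaneous streams by solving y = H x for x.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2))          # known 2x2 channel matrix
x = np.array([1.0, -1.0])            # two independent symbol streams
y = H @ x                            # both streams arrive superimposed
x_hat = np.linalg.solve(H, y)        # zero-forcing style recovery
print(np.allclose(x_hat, x))         # True: both streams recovered
```

With noise and more antennas, real receivers use more robust equalizers than a plain matrix inverse, but the principle is the same: distinct spatial signatures make the mixed streams mathematically separable.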

As a consequence of technology changes the hardware architecture for 5G is also changing. Beamforming is one of the biggest drivers for changes we see in the hardware implementation. The white paper points out that current commercial solutions include 64TX and 64RX for base station deployments. This is a large increase from the 2×2 or 4×4 arrays used in 4G.

It is no longer feasible to perform all the beamforming operations in pure analog or pure digital. If a pure digital approach is used, then each array element must have its own RF chain. This causes increased power consumption and adds components. Going with a pure analog approach requires only one RF chain but gives up a lot of the reconfigurability and spatial resolution.
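The analog side of that trade-off, per-element phase shifters, can be sketched numerically. This is an illustrative model with assumed numbers (8 elements, half-wavelength spacing), not a commercial design: setting each element’s phase to match a chosen direction gives full array gain there, while an unsteered array does not.

```python
# Sketch of analog beamforming with per-element phase shifters
# (illustrative parameters, not a commercial 5G array design).
import numpy as np

n_elem = 8
d = 0.5                     # element spacing in wavelengths
theta = np.deg2rad(30)      # desired beam direction

k = 2 * np.pi               # wavenumber, distances in wavelengths
pos = np.arange(n_elem) * d
steer = np.exp(-1j * k * pos * np.sin(theta))   # phase-shifter settings

def gain(weights, angle):
    """Normalized array response magnitude toward `angle`."""
    a = np.exp(1j * k * pos * np.sin(angle))
    return abs(weights @ a) / n_elem

print(gain(steer, theta))             # ~1.0: full gain on the steered beam
print(gain(np.ones(n_elem), theta))   # near zero: unsteered array
```

In a hybrid architecture the digital side chooses which beams to form and the analog phase shifters implement them, which is why far fewer full RF chains are needed than in an all-digital array.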


Omni Design suggests that a hybrid approach can meet all system objectives and is easier to implement. Much of the processing can be done on the digital side. Fewer RF chains are needed, with the analog side handling phase shifting individually for each antenna. Omni Design offers data converter IP solutions for 5G applications. Their IP is suited for below 6GHz or above 24GHz, using an IF architecture. Their solutions are offered in multiple processes from 28nm to advanced FinFET nodes. Omni Design has patented technologies that enable data converters to operate at higher sampling rates and precision while significantly reducing power consumption.

The white paper goes into detail on the performance characteristics of their IP for 5G. It also talks about the verification requirements and how their IP offering includes the necessary deliverables to ensure rigorous verification. With much of the 5G deployment still ahead of us, there will be an increasing need for data converter semiconductors. The Omni Design white paper, which is available here, is a good source of information useful for teams working to develop products for 5G telecommunication systems.


Die-to-Die Interface PHY and Controller Subsystem for Next Generation Chiplets

by Kalar Rajendiran on 04-19-2021 at 10:00 am


In early April, Gabriele Saucier kicked off Design & Reuse’s IPSoC Silicon Valley 2021 Conference. The IPSoC conference, as the name suggests, is dedicated to semiconductor intellectual property (IP) and IP-based electronic systems. There were a number of excellent presentations at the conference, categorized into eight subject matter tracks: Advanced Packaging Solution and Chiplet, Analog and Memory Blocks, Design and Verification, Interface IP, Security Solutions, Automotive IP and SoC, Video IP, and High-Performance Computing.

One of the presentations I listened to was titled “Die-to-Die Interface PHY and Controller Subsystem for Next Generation Chiplets” and was presented by Ketan Mehta, Senior Director Product Marketing, Interface IP, from OpenFive, a business unit of SiFive, Inc. The term chiplet has been behind a lot of hot discussion in the industry over the last few years, and the volume and velocity of these discussions have increased of late. As addressing the needs of next-generation chiplets is the key focus of Ketan’s presentation, it is a good idea to clarify what chiplet means, how much it is talked about and why. That provides the proper backdrop for the solution Ketan discusses in his presentation.

Chiplets are neither chips nor packages. They are what we end up with after architecturally disaggregating a large integrated circuit into multiple smaller dies; the smaller dies are referred to as chiplets. The benefits are at least two-fold: some of the smaller dies can avoid sub-10nm process nodes, reducing development cost, and the smaller dies benefit from a better yield rate per wafer.

An internet search for the term “chiplets” displays seventeen pages of results. With the exception of a few entries that talk about Lieber’s chocolate chiplets, all other entries refer to semiconductor-related chiplets. The reason for the intensified discussion on chiplets is the projected market opportunity. According to research firm Omdia, the chiplet-driven market is expected to grow to $6B by 2024 from just $645M in 2018. That’s an impressive nine-fold projected increase over a six-year period.

The following is a summary of what I gathered by listening to Ketan’s talk. For complete details, please register and listen to Ketan’s presentation.

As a full-service provider for custom silicon, OpenFive offers services as well as a broad array of differentiated IP to enable the chiplets market. At a basic level, partitioning a large die into chiplets results in primarily logic-bound, memory-bound or I/O-bound chiplets. To integrate all the chiplets into a System-in-Package (SiP) product, the interconnect IP has to be flexible, comprehensive and easy to integrate with customers’ products.

OpenFive offers D2D IO to enable the chiplets market. D2D IO is a low-latency, low-power parallel I/O interface delivering high throughput for die-to-die connectivity. It includes a controller and a PHY. For artificial intelligence (AI), high-performance computing (HPC), storage, or simply chiplet-to-chiplet interconnect, a D2D PHY interface may be better suited than other types of interfaces. For a comparison of the D2D PHY and a generic extra-short-reach/ultra-short-reach (XSR/USR) SerDes, refer to Figure 1.

Figure 1:

The D2D controller has been designed with flexibility in mind. The Controller is designed to interface with not only the D2D PHY but also with many other types of interfaces. Depending on the particular need and constraints, the controller can interface with Bunch of Wires (BoW), Open High Bandwidth Interface (OHBI), Advanced Interface Bus (AIB) or an XSR SerDes. Refer to Figure 2 to see how the D2D Controller handles the data as it flows between the framing layer, the protocol layer and the client adaptation layer.

Figure 2:

Ketan wraps up his presentation by showcasing how a RISC-V based CPU system and an 800G/400G Ethernet I/O system could benefit from using the D2D IO.

If interested in benefiting from a chiplets implementation approach, I recommend you register and listen to Ketan’s entire talk, and then discuss with OpenFive ways to leverage their different IP offerings and services for developing your products.

Also Read:

Enabling Edge AI Vision with RISC-V and a Silicon Platform

WEBINAR: Differentiated Edge AI with OpenFive and CEVA

Open-Silicon SiFive and Customizable Configurable IP Subsystems


Demystifying Angel Investing

by Daniel Nenni on 04-19-2021 at 6:00 am


Recently we published the article Semiconductor Startups – Are they back?, which went SemiWiki viral with 30k+ views. It’s certainly a sign of the times, with M&A activity still running at a brisk rate. During the day I help emerging companies with business development, including raising money and sell-side acquisitions, so brisk is not just an observation but my personal experience, absolutely.

If you are considering starting your own technology company, or have one in progress, this would be a great place to start. I cannot stress enough how important angel investors can be, not just for seed funding, but also as mentors and guidance counselors, which brings us to the upcoming event:

Demystifying Angel Investing

Monday April 26, 2021
4:30pm to 6pm Pacific Time

The Silicon Catalyst Angels group is pleased to announce the next Guest Speaker Series event, open to both members and non-members. The Zoom webinar is scheduled to take place on Monday April 26th, 2021, starting at 4:30pm Pacific time.

The event will include a presentation by Dr. Ron Weissman entitled, “Demystifying Angel Investing”, followed by a panel session with Angel investors that have a long history of participating in the funding of early-stage / seed-stage entrepreneurial teams focused on building new semiconductor companies.

Whether you are an investor, a potential angel investor, or part of an early-stage startup hunting for investors, you don’t want to miss this informative presentation.

Registration for the webinar can be made at: Register in advance for this webinar

Agenda

4:30 to 5:15 – “Demystifying Angel Investing” – Dr. Weissman

Guest Speaker Ronald Weissman (Angel Capital Association Board Member) will provide an introduction to angel investing. Learn the secrets of angel investing from a twenty-year industry veteran and member of the Angel Capital Association’s Board of Directors who has invested in more than 40 startups and has served on dozens of startup boards of directors. Key topics to be covered include:

  • Who qualifies to be an angel investor?
  • Why become an angel investor?
  • What are the personal and community benefits of angel investing?
  • What is the process of finding and executing an angel deal?
  • What are the risks and rewards of angel investing?
  • How does one get started?
  • How do you find and evaluate deals?
  • Should you invest individually or join an angel group?

5:15 to 6pm – Panel Session with Semiconductor Industry Angel Investors

Moderator: Dr. Ron Weissman, Angel Capital Association

Panelists: Manthi Nguyen, Experienced Entrepreneur, Angel Investor, and member of Sand Hill Angels and Band of Angels.

Amos Ben-Meir, Silicon Catalyst Angels President and active Angel Investor

Rick Lazansky, Silicon Catalyst LLC Chairman, long-time Angel Investor and serial entrepreneur

Dr. Ronald Weissman is Chairman of the Software Group of the Band of Angels, Silicon Valley’s oldest angel organization and is a member of the Board of Directors of the Angel Capital Association, North America’s umbrella organization for angel investors. He has more than twenty years of experience in venture and angel capital.  Ron was a Partner and portfolio manager for seventeen years at global venture capital and private equity firm Apax Partners where he focused on North American and cross-border investing. He has invested or advised more than 60 companies and has served on more than 40 corporate boards.

Today, Ron advises financial and corporate venture funds, national and regional governments and G2000 corporate innovation programs.  He is a frequent conference speaker and advisor on startup ecosystems, entrepreneurship, venture and angel capital trends, AI, startup governance, term sheets and valuation, M&A and other aspects of venture and corporate investing.  He has advised governmental and private organizations in Emilia-Romagna (Italy), Armenia, Chile, Israel and the Republic of Georgia as well as the US White House on developing effective startup ecosystems. He lectures regularly at Stanford, the University of Santa Clara and other universities in the US and abroad on venture and angel capital trends.

Manthi Nguyen is a lead investor in Portfolia’s Rising Tide Fund, Portfolia Consumer Fund and Portfolia Enterprise Fund. Manthi led the Rising Tide’s investments in Unaliwear and Envoy and co-led its investments in Tenacity and OtoSense.  She led the investment in B.Well in the Enterprise Fund.

Manthi is one of the most active deal syndicators in Silicon Valley, putting together investments across the Band of Angels, Sand Hill Angels, and Sierra Angels. Manthi and her husband, Jim, run their own early-stage investment company. Manthi has led investments in 30+ deals in the past 5 years and served as acting CEO at Peloton Trucking.

Ms. Nguyen was involved in a series of early startups developing routing and networking technologies that were later acquired by NEC, Cabletron, Tut Systems, and Cisco. In the early part of her career, Ms. Nguyen was part of the General Motors Advanced Manufacturing Research group, developing technologies for office and factory automation. She participated in developing international standards for Open Systems Interconnection with the National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO). Ms. Nguyen worked on modeling of business processes, information flow, and supply chain management for the General Motors enterprise. Her experience at General Motors was invaluable in helping her build a foundation of understanding for how technologies are applied to solve real-life problems.

In the last 15 years, Ms. Nguyen has brought her executive experience to focus on small businesses, mentoring entrepreneurs and angel investing. Ms. Nguyen received her Bachelor of Science from University of Washington, and her Master of Business Administration from University of Michigan.

Amos Ben-Meir is currently an active angel and venture investor in the San Francisco Bay Area. He is passionate about technology, business and the entrepreneurial eco-system as it relates to start-ups, venture capital and angel investing.

As an active angel/venture investor and a Member and Board Director of Sand Hill Angels and Silicon Catalyst Angels, Amos looks to invest and work with great founding teams that are harnessing cutting edge technology to deliver great products and services and that will result in significant outcomes to all stakeholders.

Prior to Amos’s angel & venture investing career, he was involved in six startups, either as an early employee or founder. Four of the startups had successful outcomes and two failed. Amos has held Director and VP Engineering positions during his entrepreneurial career. During these roles, Amos built and managed large engineering teams. This experience in the start-up world has driven him to stay involved in the San Francisco Bay Area start-up eco-system as an investor in start-ups and mentor to entrepreneurs. In addition, Amos holds various board observer and advisor positions in companies where he is an active investor.

Since 2012, Amos’s startup investment portfolio has grown to more than 300 portfolio companies. A partial list of Amos’s investments can be found on his profile page on the AngelList website: https://angel.co/abenmeir-me-com

Rick Lazansky is a serial entrepreneur, active investor, and coach of many startups. Rick was inspired to start Silicon Catalyst by the growth of software startups, supported by incubators, accelerators, and open source software, and the need for ‘hard’ technologies to have the same level of ecosystem support. Rick has invested in more than 40 startups as an angel investor with Sand Hill Angels and as an LP in several venture funds. He has coached startup projects and classes at Stanford, Carnegie Mellon University, UC Santa Cruz and Berkeley. His startups include Vantage Analysis Systems, Denali Software, and RedSpark. He has served as a Board Director at three other incubators – i-GATE Hub in Livermore, Batchery in Berkeley, and Barcelona Ventures in Catalonia. He has a BA/BS in Economics and Information Science and an MS in SC/CE from Stanford.

I hope to see you there!

Also Read:

Silicon Catalyst and mmTron are Helping to Make mmWave 5G a Reality

Silicon Catalyst’s Semi Industry Forum – All-Star Cast Didn’t Disappoint

Chip Startups are Succeeding with Silicon Catalyst and Partners Like Arm


Your Car Is a Smartphone on Wheels—and It Needs Smartphone Security

by Taylor Armerding on 04-18-2021 at 10:00 am


Your modern car is a computer on wheels—potentially hundreds of computers on a set of wheels. Heck, even the wheels are infested with computers—what do you think prompts that little light on your dashboard to come on if your tire pressure is low? And computers don’t just run your infotainment system, backup camera, dashboard warning lights, and the voice that tells you to buckle your seatbelt. They direct the fundamental vehicle functions too—acceleration, braking, steering, and transmission.

The Synopsys Automotive Group has coined a term for how vehicles are changing: the “SmartPhonezation of Your Car™.” That means the transformation of the worldwide vehicle fleet is about much more than a bunch of new features and creature comforts. It means your car is part of the vast internet of things (IoT). This has enabled convenience, luxury, efficiency, safety, and the march toward autonomous driving, but it also makes your car part of the equally vast IoT attack surface.

As speakers at security conferences have warned for years, if hackers get control of a connected car, they could take over the acceleration, steering and brakes, demand a ransom from an owner simply to start the car, disable the locks and steal it, and more.

That makes security just as important as safety in a car. If it’s not secure, it’s not safe.

Automotive Security Standards in Focus

Fortunately, that reality has prompted an increasing focus on vehicle cybersecurity. There are now multiple frameworks and standards aimed at improving it. One of the most recent is the National Highway Traffic Safety Administration’s (NHTSA’s) draft of “Cybersecurity Best Practices for the Safety of Modern Vehicles.” And while the timing of the draft (it was released in mid-December) was a bit earlier than Chris Clark expected, it did not come as a surprise. Clark, senior manager, automotive software and security, with the Synopsys Automotive Group, declared in a blog post he coauthored earlier this year that he expected 2021 to be “the year of automotive standards.”

Not that standards are new. ISO 26262, from the International Organization for Standardization (ISO), addresses safety-related systems that include one or more electrical and/or electronic (E/E) systems. It has been around for a decade and was updated in 2018.

As a Synopsys blog post puts it, the focus of that standard is on “ensuring that automotive components do what they’re supposed to do, precisely when they’re supposed to do it.”

More recently, ISO/SAE 21434, created by ISO and the Society of Automotive Engineers, calls for “OEMs and all participants in the supply chain (to) have structured processes in place that support a ‘Security by Design’ process” covering the development and entire lifecycle of a vehicle. Those include requirements engineering, design, specification, implementation, test, and operations. A first draft of ISO/SAE 21434 was released a year ago, with the final standard expected by the middle of this year.

But those two are private-sector, industry initiatives. ISO is “an independent, non-governmental international organization with a membership of 165 national standards bodies.” That, as Clark puts it, illustrates that “the automotive industry has historically been very strong proponents of self-regulation.”

And while in the past that self-regulation had more to do with physical functionality and safety, more recently the industry has also been proactive in looking at how it can address cybersecurity. But the NHTSA best-practices document means government is going to play a more direct role. “It’s a good starting point for automotive organizations to say this is a real thing,” Clark said. “NHTSA isn’t just saying, ‘Do something about cybersecurity.’ It’s outlining explicit items that have to be addressed.”

And he thinks NHTSA’s best practices along with ISO/SAE “are going to provide the automotive industry a good sounding board to look at how we address cybersecurity from a risk-based perspective. I think everybody could agree that the biggest concern is the risk of autonomous driving.” The goal isn’t perfection. “We’re not building a space shuttle, we’re building a car,” Clark said. “If we wanted to have every single security feature to ensure that a vehicle never failed, we couldn’t afford it.”

But that doesn’t mean vehicle cybersecurity can’t improve—a lot.

Automotive Cybersecurity Framework Prescribes Layered Approach

NHTSA recommends that the automotive industry follow the National Institute of Standards and Technology’s (NIST’s) documented Cybersecurity Framework, which is “structured around the five principal functions, ‘Identify, Protect, Detect, Respond, and Recover,’ to build a comprehensive and systematic approach to developing layered cybersecurity protections for vehicles.” That layered approach, it says, “assumes some vehicle systems could be compromised, reduces the probability of an attack’s success and mitigates the ramifications of unauthorized vehicle system access.”
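As a rough illustration of how an organization might audit a program against those five functions, here is a minimal sketch; the `ecu_program` controls and the gap-checking helper are invented for illustration and are not from NHTSA or NIST.

```python
# Hypothetical sketch: treat the five NIST CSF functions as a checklist and
# flag the ones a (fictional) vehicle-ECU security program has not covered.
# The function names are from the framework; the controls are invented.
NIST_CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

def coverage_gaps(program_controls):
    """Return the CSF functions with no mapped control in this program."""
    return [f for f in NIST_CSF_FUNCTIONS if not program_controls.get(f)]

ecu_program = {
    "Identify": ["asset inventory of ECUs"],
    "Protect": ["signed firmware updates", "CAN message authentication"],
    "Detect": ["intrusion detection on the in-vehicle network"],
    # "Respond" and "Recover" not yet addressed
}

print(coverage_gaps(ecu_program))  # → ['Respond', 'Recover']
```

A checklist like this is deliberately results-oriented: it records *what* each function must achieve, not *how*, which matches the non-prescriptive intent of the framework.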

If that sounds more general than specific, that is by design. The goal, which Synopsys supports, is for standards to mandate what results an industry must achieve, not prescribe how to achieve them.  “Not all standards are prescriptive,” Clark said. “Standards organizations are trying to minimize the impact on innovation and eliminate a check-box mentality.”

Indeed, the reality of human nature is that if government set out a list of rules or specific requirements, “then everybody in the industry would do those things and nothing more,” he said. “But if we say organizations must design a security program that focuses on the cybersecurity of hardware and software to meet the needs of both the customer and the organization, then everybody’s going to be a little bit different, and some are going to be better than others. It starts to create the competitive landscape that we are really interested in.”

“Standards organizations are trying to minimize the impact on innovation and eliminate a check-box mentality.”

–Chris Clark

The key overall objectives of the Synopsys Automotive Group are what it calls the four pillars of automotive cybersecurity:

  • Safety: For the vehicle and its occupants
  • Security: Of the vehicle and data
  • Reliability: Of items and features
  • Quality: Of vehicle items

Those goals aren’t prescriptive either, but how to achieve them will become much more specific over the next several months, when this blog will feature a series of posts covering the major elements of automotive cybersecurity addressed in the NHTSA and other best-practices standards. Planned topics include:

  • Risk assessment and validation
  • Sensor vulnerability
  • Cryptographic credentials, crypto agility, and vehicle diagnostics
  • After-market devices
  • Wireless paths in vehicles
  • Software updates/modifications and over-the-air software updates

The goal is to share insights that will help organizations evaluate and improve their security practices. “Many organizations feel that they have addressed cybersecurity—they know it’s important, but they never take the steps to figure out if the actions they are taking are effective,” Clark said. “Are they just meeting a requirement pushed down from an OEM, or are they changing how they do business to ensure that security is a core component and that any standards requirements that come down are easily met?”

Another overall goal of the Automotive Group is to help organizations achieve NHTSA’s call for leadership making cybersecurity a priority. That, according to NHTSA, includes:

  • Providing resources for “researching, investigating, implementing, testing, and validating product cybersecurity measures and vulnerabilities”
  • Facilitating seamless and direct communication channels through organizational ranks related to product cybersecurity matters
  • Enabling an independent voice for vehicle cybersecurity-related considerations within the vehicle safety design process

The Synopsys role in enabling that, Clark said, will be to give automotive clients the range of tools and services they need in one place.  “No matter what the need is, all the way from SoC to a functional security problem or developing a new brake control system, we’ll provide the hardware technology that will address that and then go through your security testing and evaluation and software development. It’s an under-one-roof solution,” he said.

Also Read:

Global Variation and Its Impact on Time-to-Market for Designs

VC Formal SIG Virtually Conferences in Europe

Key Requirements for Effective SoC Verification Management


Dark Data Explained
by Ahmed Banafa on 04-18-2021 at 8:00 am


Dark data is defined as the information assets organizations collect, process, and store during regular business activities but generally fail to use for other purposes (for example, analytics, business relationships, and direct monetizing). Similar to dark matter in physics, dark data often comprises most of an organization’s universe of information assets. Organizations thus often retain dark data for compliance purposes only, even though storing and securing it typically incurs more expense (and sometimes greater risk) than value.

Dark data is a type of unstructured, untagged, and untapped data that sits in data repositories and has not been analyzed or processed. It is similar to big data, the large and complex unstructured data (images posted on Facebook, email, text messages, GPS signals from mobile phones, tweets, TikTok videos, Snaps, Instagram pictures, and other social media updates) that cannot be processed by traditional database tools, but dark data differs in that it is mostly neglected by business and IT administrators in terms of its value.

Dark data is also known as dusty data.

Dark data is found in log files and data archives stored within large enterprise-class data storage locations. It includes all data objects and types that have yet to be analyzed for business or competitive intelligence or to aid business decision-making. Typically, dark data is complex to analyze and stored in locations where analysis is difficult, so the overall process can be costly. It can also include data objects that have not been captured by the enterprise, or data external to the organization, such as data stored by partners or customers.

Up to 90 percent of big data is dark data.

With the growing accumulation of structured, unstructured, and semi-structured data in organizations, increasingly through the adoption of big data applications, dark data has come especially to denote operational data that is left unanalyzed. Such data is an economic opportunity for companies if they can take advantage of it to drive new revenue or reduce internal costs. Examples of data that is often left dark include server log files that can give clues to website visitor behavior, customer call detail records that can indicate consumer sentiment, and mobile geolocation data that can reveal traffic patterns to aid in business planning.
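As a small, hypothetical illustration of illuminating one kind of dark data, the sketch below mines a few invented Apache-style access-log lines for the most-visited pages; the log format and the `top_pages` helper are assumptions for illustration, not from any particular system.

```python
import re
from collections import Counter

# Invented sample of the server-log "dark data" the article mentions.
sample_log = """\
10.0.0.1 - - [18/Apr/2021:08:00:01] "GET /pricing HTTP/1.1" 200
10.0.0.2 - - [18/Apr/2021:08:00:05] "GET /docs HTTP/1.1" 200
10.0.0.1 - - [18/Apr/2021:08:00:09] "GET /pricing HTTP/1.1" 200
"""

def top_pages(log_text, n=2):
    """Count requested paths to reveal visitor behavior hidden in the log."""
    pages = re.findall(r'"GET (\S+) HTTP', log_text)
    return Counter(pages).most_common(n)

print(top_pages(sample_log))  # → [('/pricing', 2), ('/docs', 1)]
```

Even a few lines of analysis like this turn an otherwise write-only archive into a source of behavioral insight, which is the economic opportunity the article describes.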

Dark data may also be used to describe data that can no longer be accessed because it has been stored on devices that have become obsolete.

Types of Dark Data

1) Data that is not currently being collected.

2) Data that is being collected, but that is difficult to access at the right time and place.

3) Data that is collected and available, but that has not yet been productized, or fully applied.

Unlike dark matter, which is thought to account for approximately 85% of the matter in the universe and to be composed of particles that do not absorb, reflect, or emit light (and so cannot be detected by observing electromagnetic radiation), dark data can be brought to light, and so can its potential ROI. What’s more, a simple way of thinking about what to do with the data, a cost-benefit analysis, can remove the complexity surrounding previously mysterious dark data.

Value of Dark Data

The primary challenge presented by dark data is not just storing it but determining its real value, if any at all. In fact, much dark data remains unilluminated because organizations simply don’t know what it contains. Destroying it might be too risky, but analyzing it can be costly, and it’s hard to justify that expense when the potential value of the data is unknown. To determine whether their dark data is even worth further analysis, organizations need a means of quickly and cost-effectively sorting, structuring, and visualizing it. An important point in getting a handle on dark data is to understand that doing so isn’t a one-time event.

The first step in understanding the value of dark data is identifying what information it includes, where it resides, and its current status in terms of accuracy, age, and so on. Getting to this state will require you to:

  • Analyze the data to understand the basics, such as how much you have, where it resides, and how many types (structured, unstructured, semi-structured) are present.
  • Categorize the data to begin understanding how much of what types you have, and the general nature of information included in those types, such as format, age, etc.
  • Classify your information according to what will happen to it next. Will it be archived? Destroyed? Studied further? Once those decisions have been made, you can send your data groups to their various homes to isolate the information that you want to explore further.

Once you’ve identified the relative context for your data groups, now you can focus on the data you think might provide insights. You’ll also have a clearer picture of the full data landscape relative to your organization so that you can set information governance policies that will alleviate the burden of dark data, while also putting it to work.
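The analyze/categorize/classify triage above can be sketched in a few lines; the records, the seven-year retention window, and the dispositions below are all invented for illustration.

```python
from datetime import date

def classify(record, today=date(2021, 4, 18)):
    """Route a data record to a disposition using simple, invented rules."""
    age_years = (today - record["created"]).days / 365
    if record["type"] == "unknown":
        return "study"    # can't value it until we know what it contains
    if age_years > 7:
        return "destroy"  # past an assumed retention window
    return "archive"

records = [
    {"name": "call_logs_2012.csv", "type": "structured", "created": date(2012, 1, 1)},
    {"name": "sensor_dump.bin", "type": "unknown", "created": date(2020, 6, 1)},
    {"name": "emails_2019.mbox", "type": "unstructured", "created": date(2019, 3, 1)},
]

print([(r["name"], classify(r)) for r in records])
```

The point of the sketch is the shape of the process, not the rules themselves: once every record has a disposition, the groups you chose to study further are exactly the dark data worth a deeper cost-benefit analysis.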

Future of Dark Data

Startups going after dark data problems are usually not playing in existing markets with customers self-aware of their problems. They are creating new markets by surfacing new kinds of data and creating un-imagined applications with that data. But when they succeed, they become big companies, ironically, with big data problems.

The question many people are asking is: What should be done with dark data? Some say data should never be thrown away, as storage is so cheap, and that data may have a purpose in the future.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Read more articles at: Prof. Banafa website



Why I made the world’s first on-demand formal verification course
by Ashish Darbari on 04-18-2021 at 6:00 am



Verification Challenge
As chip design complexity continues to grow astronomically, with hardware accelerators running riot alongside the traditional hardware of CPUs, GPUs, networking, and video and vision blocks, concurrency, control, and coherency will dominate the landscape of verification complexity for safe and secure system design. Even a quintillion (10^18) or a sexdecillion (10^51) simulation cycles will not be adequate for ensuring the absence of bugs. Bug escapes continue to cause pain, and some will end up endangering lives.

This becomes blatantly obvious when you look at the best industrial verification survey, conducted every two years by Harry Foster and Wilson Research. Only 68% of ASIC/IC designs work the first time around, with a similar number running late, and the story for FPGA-based designs is even worse, with only 17% hitting the first-time-around mark. I’m not a pessimist, but every time I look at the trends in such reports, it doesn’t feel like we are accelerating the deployment of the best verification methods known to mankind.

The Promise of Formal

Formal methods are a mathematical way of analysing requirements and providing clear specifications; they use computational logic under the hood to confirm or deny the presence of bugs in a model. This model can be a hardware design expressed in Verilog or VHDL, in other languages such as Chisel or Bluespec SV, or even a gate-level netlist. The only way to obtain a 100% guarantee that a given model has no functional defects and violates no security or safety requirements is to verify it with the rigour of formal methods, assisted by a great methodology.

A great methodology doesn’t exist in a vacuum: it is built as a collection of best practices on top of the technologies, describing ‘how’ those technologies can be used.

The how is therefore an important question to answer.

Challenge with formal methods: Lack of good training

One reason formal methods adoption has been limited is a lack of know-how. There are myriad reasons why this is the case, but the foremost reason formal is not everyone’s cup of tea is the lack of good, comprehensive training. Formal methods have an exciting history, with numerous landmark contributions from eminent computer scientists, but for engineers the field remains enigmatic. It is still perceived as an abstruse subject; the one exception is the use of apps, thanks to the EDA companies who provided automated solutions to solve bespoke problems. The formal market is now estimated to be 40% the size of the simulation market. This automation provided easy-to-use tools, from static lint analysis as an easy starting point at one end, to solutions for connectivity checking, register checking, X-checking, CDC and so on at the other.

Between the two extremes sits the rest of the interesting landscape of verification problems, solvable through model checking (also known as property checking) as well as equivalence checking. Methodology is the key to success in everything, and property checking is no different. But a good methodology has to be built around problem solving and should not tie you to a particular tool.
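To make property checking concrete, here is a minimal, tool-agnostic sketch of explicit-state model checking: exhaustively explore the reachable states of a toy design and check an invariant (a safety property) on every one. The toy counter and the invariant are invented for illustration; production property checkers use far more sophisticated engines (BDDs, SAT/SMT) than this brute-force search.

```python
from collections import deque

def check_invariant(initial, next_states, invariant):
    """BFS over all reachable states; return a violating state or None."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return s              # counterexample: a reachable bad state
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None                   # invariant holds on all reachable states

# Toy design: a counter that wraps at 6, so it can never reach 7.
def step(s):
    return [(s + 1) % 6]

print(check_invariant(0, step, lambda s: s != 7))  # → None (property holds)
```

Because the search is exhaustive over reachable states, a `None` result is a proof of the property for this model, not just the absence of a failing test, which is exactly the guarantee that distinguishes formal from simulation.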

While I’m a huge fan of property checking and production-grade equivalence checking technologies, they do not solve all verification problems. For example, if I’m interested in making sure that a compiler works correctly, or that an interconnect protocol model doesn’t have deadlocks, I may have to look beyond run-of-the-mill property checking solutions. This is where theorem proving comes in.

Theorem provers do not suffer from the capacity issues of dynamic simulation or property checking tools, and if you know how to use them well, they can verify theoretically infinite-sized systems, including operating systems and compilers as well as huge hardware designs.

This raises several questions.

  1. Where do you go to learn about all these formal technologies?
  2. Why should you learn formal?
  3. What is formal?
  4. How does one find an accelerated path of learning formal with support without getting locked in a vendor tool?

Formal Verification 101

Welcome to Formal Verification 101 – the world’s first on-demand, self-paced, video course that provides a comprehensive introduction to all essential aspects of formal methods leading to a certification at the end.

This course comes with an online lounge accessible to the enrolled students where they can discuss any questions and engage with experts.

Let me first give you a personal perspective on why I decided to do this.

A Personal Perspective

In designing and delivering this course myself, I took time to understand what has worked for me and what hasn’t. When I started learning formal over two decades ago, we did not have video courses in computational logic. I took many courses during my master’s degree, but as an electrical and electronics engineer it was a steep learning curve. A large part of the problem was that we were not given much practical perspective: I was learning something in theory but didn’t know why or where it was useful.

I was lucky to work on my doctorate with Tom Melham at the University of Oxford, and really lucky to have had a few hours with Mike Gordon of the University of Cambridge, who taught me how to use the HOL 4 theorem prover. If you’re not aware, Tom Melham and Mike Gordon were among the first computer scientists to use higher-order logic and formal for hardware verification. However, not everyone can get the opportunities I got at Oxford and Cambridge.

I have been working on industry projects and training engineers in the practical use of formal, nearly 200 so far, including designers and verification folks across the semiconductor industry. Working on cutting-edge designs with shrinking schedules gives me a strange sort of excitement and joy, but teaching and sharing what worked and what didn’t gives me an equal thrill. As it happens, I love sharing knowledge and enjoy teaching.

Two decades later

When I founded Axiomise three years ago, there were still no video courses covering all the key formal technologies from an industrial perspective. In fact, there wasn’t a structured course covering all three formal technologies from an industrial perspective at all. There had been a few tutorials on theorem proving, and scanty material on property checking off and on, but no comprehensive introduction to all the key formal technologies in one place, with a practical perspective, delivered as a standalone course with online support.

Meanwhile, we built a range of instructor-led courses, spanning one to four days, designed to offer in-person tutorials in a structured manner covering theory, labs, and quizzes. The goal is to provide production-grade training to engineers in the industry. Now in its third year, this training is in demand, and we continue to deliver it face-to-face via Zoom, issuing certificates of completion. The main advantage of these courses is that I deliver them in person: students gain insights into real-world problems, get a chance to ask questions live during the training, and get their hands dirty on hard problems where we learn together how to solve them.

Bridging the gap in industry

What we discovered was a gap in the industry and in our own portfolio. Whereas our instructor-led courses are great for a newbie or a practising professional, they are not self-paced, and the commitment to multiple consecutive days can be a challenge for some people. The Formal Verification 101 course is designed to bridge this gap. You can take this comprehensive introductory course at your own pace, in your own time, and learn the fundamentals of formal methods across all the key formal technologies: theorem proving, property checking, and equivalence checking. We take an interactive approach to learning, providing hands-on demos that you can redo yourself by downloading the source code, so you gain experience of seeing formal methods in action. Once you’re comfortable with this course, have passed the final exam, and would like to explore more advanced concepts, you can take the multi-day instructor-led courses.

Expert Opinions

When I completed the course design, I invited several peers from industry and academia to take this course, review it and offer feedback.

I had to be conscious in choosing my first audience, so I wanted a spread of experience levels as much as a geographic spread. We gave this course to Iain Singleton, a formal verification engineer; Rajat Swarup, a manager at AWS; Supratik Chakraborty, professor at IIT Bombay; and Harry Foster, chair of the IEEE 1850 Property Specification Language Working Group.

They provided candid and open feedback, available to read at https://elearn.axiomise.com

Harry Foster humbled me with this comment.

“I’ve always said that achieving the ultimate goal of advanced formal signoff depends on about 20% tool and 80% skill. Yet, there has been a dearth of training material available that is essential for building expert-level formal skills. But not anymore! Dr Ashish Darbari has created the most comprehensive video course on the subject of applied formal methods that I have ever seen. This course should be required by any engineer with a desire to master the art, science, and skills of formal methods.”

With videos, text, downloadable source code, interactive demos, quizzes, and a final exam leading to a certificate, we have everything covered for you. I sat down and recorded all the content myself, and created captions for all the videos so that people with challenges can also enjoy the course.

It has been 20+ years of living in the trenches, years of planning, and several months of production that have gone into this work. I hope you will give this course a chance and join me in my love for formal.

Let us collectively design and build a safer and secure digital world. Sign up for the unique course in formal methods at https://elearn.axiomise.com.

Also read:

Life in a Formal Verification Lane

Accelerating Exhaustive and Complete Verification of RISC-V Processors

CEO Interview: Dr. Ashish Darbari of Axiomise


Podcast EP16: Hyperscale Computing & Changes in the Datacenter
by Daniel Nenni on 04-16-2021 at 10:00 am

Dan is joined by Frank Schirrmeister, senior group director of solutions marketing at Cadence Design Systems. Frank has extensive experience in complex system design from his work at companies such as Cadence, Synopsys, Imperas, and ChipVision. He has also advised Vayavya Labs and CriticalBlue.

Dan and Frank discuss the many challenges of building hyperscale datacenters and the innovations that are helping to make this massive compute infrastructure buildout a reality. Domain-specific compute architectures and the required design tool support are some of the items explored in this conversation.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


TSMC Ups CAPEX Again!
by Daniel Nenni on 04-16-2021 at 6:00 am

TSMC 1Q21 Revenue by Platform

We were all pleasantly surprised when TSMC increased its 2021 CAPEX to a record $28 billion. To me this validated the talk inside the ecosystem that Intel would be coming to TSMC at 3nm. We were again surprised when TSMC announced a $100B investment over the next three years, which dwarfed Intel’s announcement that it would spend $20B on two new fabs in Arizona.

It wasn’t clear what the TSMC investment included but we now know (via the Q1 2021 Investor Call) that it’s predominantly CAPEX starting with $30B in 2021 and the rest over 2022 and 2023. Personally, I think TSMC CAPEX will end up being more than $100B because TSMC tends to be conservative with their numbers, absolutely.

Let’s take a look at CC Wei’s opening statement on yesterday’s investor call:

CC Wei: First, let me talk about the capacity shortage and demand outlook. Our customers are currently facing challenges from the industry-wide semiconductor capacity shortage, which is driven by both a structural increase in long-term demand as well as short-term imbalance in the supply chain. We are witnessing a structural increase in underlying semiconductor demand as a multi-year megatrend of 5G and HPC-related applications are expected to fuel strong demand for our advanced technologies in the next several years. COVID-19 has also fundamentally accelerate the digital transformation, making semiconductors more pervasive and essential in people’s life.

D.A.N. The short term imbalance is of course the drop in utilization last year due to the uncertainty brought by the pandemic and now the hockey stick shape rebound which includes some panic buying. The bottom line is that we have enough capacity today and more than enough capacity coming tomorrow so no worries here.

CC Wei: To address the structural increase in the long-term demand profile, we are working closely with our customers and investing to support their demand. We have acquired land and equipment and started the construction of new facilities. We are hiring thousands of employees and expanding our capacity at multiple sites. TSMC expects to invest about USD 100 billion through the next 3 years to increase capacity, to support the manufacturing and R&D of leading-edge and specialty technologies. Increased capacity is expected to improve supply certainty for our customers and help strengthen confidence in global supply chains that rely on semiconductors.

D.A.N. Based on what we have seen on the SemiWiki job board TSMC is indeed hiring thousands of employees and the TSMC job posts are getting 2x more views than average. And yes TSMC is already spending that $100B, $8.8B was consumed in Q1 2021.

CC Wei:  Our capital investment decisions are based on 4 disciplines: technology leadership, flexible and responsive manufacturing, retaining customers’ trust and earning the proper return. At the same time, we face manufacturing cost challenges due to increasing process complexity at leading node, new investment in mature nodes and rising material costs. Therefore, we will continue to work closely with customers to sell our value. Our value includes the value of our technology, the value of our service and the value of our capacity support to customers. We will look to firm up our wafer pricing to a reasonable level.

D.A.N. Translation: there will be pricing adjustments to compensate for the added capacity.

CC Wei:  Next, let me talk about the automotive supply update. The automotive market has been soft since 2018. Entering 2020, COVID-19 further impact the automotive market. The automotive supply chain was affected throughout the year, and our customers continued to reduce their demand throughout the third quarter of 2020. We only began to see sudden recovery in the fourth quarter of 2020.

However, the automotive supply chain is long and complex with its own inventory management practices. From chip production to car production, it takes at least 6 months with several tiers of suppliers in between. TSMC is doing its part to address the chip supply challenges for our customers.

D.A.N. Some car companies have shortages and some don’t, it all depends on inventory and who cut orders in 2020. Toyota I’m told has the best managed inventory and is still making cars. Other car companies not so much.

CC Wei: Finally, I will talk about the N5 and N3 status. TSMC’s N5 is the foundry industry’s most advanced solution with the best PPA. N5 is already in its second year of volume production with yield better than our original plan. N5 demand continue to be strong, driven by smartphone and HPC applications, and we expect N5 to contribute around 20% of our wafer revenue in 2021.

D.A.N. I was told by a gaming chip leaker that there is panic buying in crypto and gaming which may explain TSMC’s big HPC numbers. Also, the word inside the ecosystem is that Samsung is having problems so there is a burst of 5N and 3N design activity. In fact, 80% of the 2021 CAPEX is being spent on 5N and 3N (which are pretty much identical fabs using different process recipes).

CC Wei: N3 will be another full node stride from our N5 and will use FinFET transistor structure to deliver the best technology maturity, performance, and cost for our customers. Our N3 technology development is on track with good progress. We continue to see a much higher level of customer engagement for both HPC and smartphone applications at N3 as compared with N5 and N3 at a similar stage.

D.A.N. This is due to Samsung’s failure at 3nm. Scotten Jones did a nice blog on this earlier this year:

ISS 2021 – Scotten W. Jones – Logic Leadership in the PPAC era

CC Wei: Risk production is scheduled in 2021. The volume production is targeted in second half of 2022. Our 3-nanometer technology will be the most advanced foundry technology in both PPA and transistor technology. Thus, we are confident that both our 5-nanometer and 3-nanometer will be large and long-lasting nodes for TSMC.

D.A.N. Apple iProducts will be 3N next year which means HVM in 2H 2022. The IDM foundries (Intel and Samsung) do initial product introductions and spend a year or two ramping up to HVM so it is hard to compare new process introduction dates.

You can join a more detailed discussion here in the experts forum: TSMC Q1 2021 Earnings Conference Call