Rwanda is Building Africa’s Very Own Silicon Valley – Known as Kigali Innovation City (KIC)
by Nicky Verd on 12-05-2019 at 6:00 am

A multi-billion-dollar project inspired by America’s Silicon Valley, dedicated to developing and commercializing new technology, is being built in Rwanda’s capital, Kigali.

Kigali Innovation City

This is the first effort of its kind on the continent. The aim is to build a critical mass of talent, research and innovative ideas that will transform Africa. The government of Rwanda plans to attract domestic and foreign universities, technology companies, biotech firms, and agriculture, healthcare and financial services businesses, along with supporting infrastructure including commercial and retail real estate.

According to Paula Ingabire, Rwanda’s Minister of ICT and Innovation, the KIC project has set itself the objectives of creating more than 50,000 jobs, generating $150 million in ICT exports per year and attracting more than $300 million in foreign direct investment.

The project started in 2018 and is said to cost US$2 billion. I believe this is the kind of initiative other African leaders and governments should emulate, instead of promising the masses jobs that no longer exist as a means to secure more votes.

Times are changing, and so should the approach to solving economic and social ills. The new era being ushered in by Industry 4.0 demands that people be given the right resources and infrastructure; in so doing, they’ll create their own employment and livelihoods. As mentioned in my new book Disrupt Yourself Or Be Disrupted, “You can do more today with your life having just an internet connection, and that’s an opportunity our parents and grandparents never had.”

And so, Rwanda is committed to becoming the gateway to a technologically developed Africa and it is realizing this with a consistent development strategy that is a sight to behold. Rwanda is one of the world’s fastest-growing economies and leads the African continent in technological advancement and infrastructural development.

Rwanda is also preparing to build a US$5 billion model green city in Kigali, starting in January 2020. This will be another African first, focusing on green technologies and innovations for climate-resilient urbanization.

Many African nations complain about the damage to their development from the legacy of colonialism, which positioned Africa to be perpetually at the mercy of the Western world. Rwanda, however, has chosen the noble and daring act of rewriting the script of her future. Rwanda is consciously positioning herself for the Fourth Industrial Revolution (4IR) and genuinely giving future generations the tools to succeed in an era of exponential technologies.

In the 25 years since the 1994 genocide, Rwanda has come a long way. Despite a near-total lack of natural resources, the country continues to rise, making it one of the fastest-growing economies in the world. One key to this turnaround is technology. President Paul Kagame, also known as the “digital president,” is positioning the country for an extraordinary leapfrog in the Fourth Industrial Revolution.

“We want technology to sort of permeate all the professions, all the jobs,” says Jean Philbert Nsengimana, Rwanda’s Minister of Youth and Information Technology.

It’s really interesting to note that Rwanda has a Minister of Youth and Information Technology. This means the country is paying attention to its greatest asset – the youth!

Hopefully, the rest of the continent is watching and learning from Rwanda – a purposeful government and leadership abreast of what is trending, positioning itself as a reference point for economic advancement and innovation through cutting-edge technology.

In my new book, Disrupt Yourself Or Be Disrupted, I mentioned that “Africa’s hopeful transformation lies in viewing entrepreneurship as a viable career path, not only as a last resort for joblessness. As the Fourth Industrial Revolution takes centre stage, Africa needs more entrepreneurs, innovators, start-ups, disruptors, inventors, pioneers and thought leaders and we cannot afford to be reckless about what will transform our continent.”

The time for disruption is now! The Fourth Industrial Revolution is Africa’s greatest opportunity to leapfrog and compete on a global scale.

“The Fourth Industrial Revolution is not about technology; It is about a new era, new ways of thinking and new ways of doing business” ~Nicky Verd

We have a continent to build! We are the heroes we’ve been waiting for. Wakanda is Real!

Get My New Book “Disrupt Yourself Or Be Disrupted” – a book born out of an authentic passion to ignite human potential in an era of Artificial Intelligence (AI). You cannot out-work, out-learn or out-efficient the machines, but you can out-human them!

Available on Amazon


Intersection of Technologies
by gskahlon on 12-04-2019 at 10:00 am

Monitoring brain activity and translating the signals into commands to control devices is truly a ‘Cool Idea’. Nurio is the latest winner of Protolabs’ Cool Idea award – a $250k grant toward manufacturing services to rapidly prototype the product and accelerate it to market. (Click here to learn more about the Cool Idea Award)

What amazes me the most is the close intersection of different hardware and software technologies that have enabled this and similar devices:

Integrated circuit (IC) miniaturization – As smartphones continue to push the form factor of ICs (thanks to Moore’s law), we are able to fit more functionality in our pockets. This has led to an ecosystem of devices that can be re-purposed for innovative new applications – the brain-computer interface (BCI) being one of them.

Moore’s Law Graphic

Computing Power – Instead of having a separate computer to crunch the massive amount of data from the device, we can use the supercomputer in our pocket (the smartphone).

Here is one of my favorite facts – Apple recently released their new iPhone with the A13 chip, which has 8.5 billion transistors. In comparison, the entire cluster of 117 servers used to render Toy Story (1995) had roughly 1 billion CPU transistors. (Image below; source: a16z podcast.) So you have 8.5 times the transistor count in your pocket compared to what Pixar had to make the first Toy Story movie. 🤯

Steve Jobs Toy Story CPUs
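
Purely to make the arithmetic explicit, here is the comparison above as a few lines of Python (the figures are the ones quoted in this post):

    a13_transistors = 8.5e9       # Apple A13, as quoted above
    toy_story_cluster = 1.0e9     # ~117 servers used to render Toy Story (1995), as quoted above
    print(a13_transistors / toy_story_cluster)   # 8.5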

Artificial Intelligence (AI) – The hardest part with this amount of data is separating the signal from the noise. However, with self-learning systems, we are now able to process tons of data without manual pre-processing by humans.

Protolab’s digital platform

Digital Manufacturing – Protolabs’ digital platform allows innovators to iterate on their design, build prototypes in days to get user feedback, and get expert advice on material selection and cost reduction to control the product’s entry price point for increased market adoption. They are also able to scale production with demand. All of this can be done within a few weeks to months! Without digital manufacturing this speed does not exist, and entrepreneurs are stuck in long product development cycles while other technologies continue to progress, potentially leaving their product obsolete by the time they bring it to market.

Digital Manufacturing

As a tech geek, I find it beautiful to see innovators bring all these different technologies together to build devices that advance the human race.


Arm Inches Closer to Supercomputing
by Bernard Murphy on 12-04-2019 at 6:00 am

Summit supercomputer

When it comes to Arm, we think mostly of phones and the “things” in the IoT. We know they’re in a lot of other places too, such as communications infrastructure, but that’s a kind of diffuse image – “yeah, they’re everywhere”. We like easy-to-understand targets: phones and IoT devices, we get those. More recently Arm started to talk about servers – those mega-compute monsters, stacked aisle by aisle in datacenters. Here, in our view, they were taking on an established giant – Intel. And we thought – OK, this should be interesting.

First moves were promising: Qualcomm was going to build servers; Cavium/Marvell, Ampere, Fujitsu and others jumped in. Then Qualcomm jumped back out. It turned out performance didn’t get anywhere near Xeon and (AMD) Epyc levels. We smiled smugly – told you so. No way an Arm-based system could compete with real servers.

We didn’t feel quite so smug when AWS (Amazon) announced that A1 instances were now available for cloud compute. These are built on Arm-based Graviton processors, developed in AWS. Still, we argued, these are only half the speed of Xeon instances. Half the power also, but who cares about that? So we’re still right, these instances are just for penny-pinching cloud users who can’t afford the premium service.

The challenge for many of us in the semiconductor/systems world is that we see compute in terms of the tasks we know best – giant simulations, finite element analyses, that sort of thing, where needs are all about raw compute performance and I/O bandwidth. But the needs of the real world far outweigh our specialized applications. Most users care about video streaming, gaming, searching and of course AI inferencing (still bigger on the cloud than at the edge per McKinsey).

When it comes to those kinds of applications, it turns out that raw performance isn’t the right metric, even for premium users. The correct metric is some function of performance and cost, and it isn’t necessarily uniform across different parts of the application. If you’re serving videos at YouTube or Netflix volumes, even if you are Google or Netflix, you still want to do so profitably. Arm instances can be more cost-effective than Intel/AMD for such functions.
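
To make that performance-versus-cost point concrete, here is a minimal sketch in Python; the instance names, throughput figures and prices are invented for illustration, not published benchmarks or cloud pricing:

    # Hypothetical illustration of performance-per-dollar as the deciding metric.
    instances = {
        # name: (relative throughput, USD per hour) -- made-up numbers
        "x86_large": (1.00, 0.10),
        "arm_large": (0.80, 0.06),
    }
    for name, (throughput, price_per_hour) in instances.items():
        print(f"{name}: {throughput / price_per_hour:.1f} work units per dollar-hour")

Under these assumed numbers the Arm instance delivers less raw throughput but more work per dollar, which is exactly the trade-off described above.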

So Arm found a foothold in servers; how do they build on that? They have a roadmap to faster cores, a progression from the Cosmos platform (on which the Graviton processor was based) to the Neoverse platforms, starting with N1, each generation with a ~30% improvement in performance. AWS just released a forward look at their next-generation A2 instances, based on N1, with (according to Arm) a 40% improvement in price-performance.

They’re also pushing towards an even more challenging objective: supercomputing. Sandia Labs is already underway on their experimental Astra system, using the Marvell/Cavium ThunderX platform – Arm-based. An interesting start, but then where are all the other exascale Arm-based computers? By their nature there aren’t going to be a lot of these around, but still, you might have expected some chatter along these lines.

Now Arm is priming the pump more in this direction through a partnership with NVIDIA. NVIDIA GPUs are already in the current fastest supercomputer in the world, the Summit at Oak Ridge National Labs. They announced earlier in the year that they are teaming with Arm and Arm’s ecosystem partners, to accelerate development of GPU-accelerated Arm-based servers, and to broaden adoption of the CUDA-X libraries and development tools. Cray and HPE are also involved in defining a reference platform.

You can see a picture emerging here: building more footholds in the cloud and in supercomputing, establishing that Arm has a role to play in both domains. I’m pretty sure they’re never going to be the central faster-than-anything-on-the-planet part of computation, but clouds and supercomputers have much more diverse needs than can be satisfied by mainstream servers alone. You want the right engines at the right price point for each task – top-of-the-line to compute really fast where that’s what you need, Arm-managed GPUs for massive machine-learning training or inferencing, Arm-powered engines for smart storage and software-defined networking, and application-optimized engines for web serving and other functions where performance per dollar is a better metric than raw performance.

You can get more info from Arm’s announcement at the SC ’19 conference.


Webinar on coping with the complexities of 3D NAND design
by admin on 12-03-2019 at 10:00 am

In order to beat Moore’s Law, NAND Flash memories have moved from a planar topology to 3D construction. This allows for increased memory size in much the same way a multistory building provides more square footage on the same size building lot. Just as in building construction, adding a third dimension to the mix increases complexity in almost every way. Some 3D NAND designs stack up to 64 devices high. Observing the problems encountered is a bit like watching a 3D chess game. The only way to understand all the read and write behaviors, along with issues like program disturb errors that cause a shift in the threshold voltage of adjacent memory cells, is to use advanced modeling methods to simulate what is happening at the device level.

Silvaco recently presented a webinar on the topic of optimizing the select gate transistor in 3D NAND memory cells. The presenter, Dr. Jin Cho, did an excellent job of providing an overview of the construction and operation of 3D NAND Flash memories. He started by describing the topology of 3D NAND ICs, and then he covered some of their fabrication-related issues. Among these is bowing that may occur during the high-aspect-ratio etch of the channel. The staircase region at the end also presents challenges during its construction. Finally, there is film deformation that can occur after the slit etch.

Role of select gate transistor on Erase

Dr. Cho talked about the large top and bottom select gate transistors, and the dummy gates that are required to prevent coupling to the active gates. Together they consume a lot of the column height, limiting space for the active gates. However, the best way to precisely determine the optimal topology is to perform detailed TCAD modeling and simulation. For the purposes of the webinar Dr. Cho constructed a full 3D structure for simulation. It included implantation and diffusion; he omitted oxidation and nitridation. To properly model disturb phenomena he included at least 4 cell strings. For the TCAD device simulation he used the Fowler-Nordheim (FN) tunneling model for the write/erase operations, a band-to-band tunneling model for the gate-induced drain leakage (GIDL) characteristics, and a trap-assisted leakage model for leakage current simulation.
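
For background – this is the standard textbook form of the model, not a detail taken from the webinar – the Fowler-Nordheim tunneling current density through the tunnel oxide is usually written as

    J_FN = A · E_ox^2 · exp(−B / E_ox)

where E_ox is the field across the oxide and the constants A and B depend on the barrier height and the carrier effective mass. The strong exponential dependence on field is why program/erase behavior is so sensitive to the select-gate and channel geometry being optimized here.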

Silvaco offers a cell-mode simulation for the circular shape of the channel region. Without it, using voxels, a choice must be made between slow but accurate results at ~1nm resolution, or fast but inaccurate results at ~5nm resolution. Cell mode removes that tradeoff, combining high-precision shape representation with fast computation times. He emphasized that this approach fits well with 3D NAND simulation needs.

The webinar then covered the details of the cell characteristics. After this, the simulation of worst-case read D0 and D1 conditions was discussed, illustrating how the entire bit line contributes to cell behavior. The program operation was analyzed as well to set the stage for talking about program disturb. In 3D NAND there are three cases for program disturb – X, Y and XY. The net result of Dr. Cho’s detailed analysis presented in the webinar is that using dummy gates strongly improves program disturb behavior. It is also shown that increasing the channel in the bottom select transistor reduces leakage and increases GIDL, which has beneficial effects on erase speed.

The webinar, available for replay, goes into much more depth than can be provided here and is well worth viewing if you are interested in the details of 3D NAND performance optimization. I suggest going to the Silvaco website to view the replay in its entirety.


Why is Intel Still Short on Chips?
by Daniel Nenni on 12-03-2019 at 6:00 am

Intel is again apologizing to customers for 14nm chip shortages. As a result, PC makers are revising down revenue expectations for 2020. Something does not smell right here. I have also read that, due to the continued shortages, Intel will be second-sourcing CPU chips to Samsung 14nm. This smells even worse, absolutely.

First and foremost, the Intel 14nm process is by far superior to Samsung 14nm. If the rumor were Intel 22nm moving to Samsung 14nm, that would be much more believable. Do you remember when Apple dual-sourced the A9 SoC to Samsung and TSMC (chipgate)? That did not end well, and the Samsung 14nm and TSMC 16nm processes have much more in common with each other than either does with Intel 14nm.

Second, Intel has significantly increased 14nm capacity this year (25%?), and from what I have been told, 14nm yield is excellent. Intel is also in production at 10nm, which should free up some 14nm capacity – unless of course Intel borrowed 14nm equipment for 10nm, which is highly unlikely.

Third, what about AMD eating up Intel’s market share? Intel chips are more expensive than AMD’s, and now there are long lead times? AMD revenue numbers should be spiking.

Fourth, PC makers have no shame, so Intel could be the scapegoat for missed revenue numbers. We will know more after the Q4 reports, but I’m not 100% convinced either way here.

Fifth, the Samsung rumor started in the Korean press, which is the least reliable source for semiconductor news, in my opinion.

This is a copy of the letter Intel sent customers, with the paragraph that is probably the source of the false outsourcing rumors highlighted. Clearly it says Intel is freeing up capacity so they can build more CPUs using Intel fabs, right?

November 20, 2019

To our customers and partners,

I’d like to acknowledge and sincerely apologize for the impact recent PC CPU shipment delays are having on your business and to thank you for your continued partnership. I also want to update you on our actions and investments to improve supply-demand balance and support you with performance-leading Intel products. Despite our best efforts, we have not yet resolved this challenge.

In response to continued strong demand, we have invested record levels of Capex increasing our 14nm wafer capacity this year while also ramping 10nm production. In addition to expanding Intel’s own manufacturing capability, we are increasing our use of foundries to enable Intel’s differentiated manufacturing to produce more Intel CPU products.

The added capacity allowed us to increase our second-half PC CPU supply by double digits compared with the first half of this year. However, sustained market growth in 2019 has outpaced our efforts and exceeded third-party forecasts. Supply remains extremely tight in our PC business where we are operating with limited inventory buffers. This makes us less able to absorb the impact of any production variability, which we have experienced in the quarter. This has resulted in the shipment delays you are experiencing, which we appreciate is creating significant challenges for your business. Because the impact and revised shipment schedules vary, Intel representatives are reaching out with additional information and to answer your questions.

We will continue working tirelessly to provide you with Intel products to support your innovation and growth.

Sincerely,

/s/ Michelle Johnston Holthaus

Michelle Johnston Holthaus

Executive Vice President

General Manager, Sales, Marketing and Communications Group

Forward-Looking Statements: This letter includes forward-looking statements based on expectations as of November 20, 2019, which are subject to many risks and uncertainties that could cause actual results to differ materially from those anticipated. Important factors that could cause actual results to differ materially are set forth in Intel’s Q3 2019 earnings release and our most recent reports on Forms 10-K and 10-Q, available at intc.com.


ASML Will Take Semiconductor Equipment Lead from Applied Materials in 2019
by Robert Castellano on 12-02-2019 at 10:00 am

For the first time since 1990, Applied Materials is poised to lose its lead in the semiconductor equipment market, according to my recently published report “The Global Semiconductor Equipment: Markets, Market Shares, Market Forecasts.”

Applied Materials, which has been losing market share in the wafer front end (WFE) equipment market for the past three years, is poised to lose its lead in 2019. ASML will take over the lead on the strength of its shipments of pricey EUV lithography equipment.

The chart below shows market shares for the top five individual equipment companies.  Applied Materials, which had a market share of 19.2% in 2018 (down from 23.0% in 2015), will increase its share of the total market slightly to 19.4% in 2019. However, ASML, which held an 18.0% share in 2018, will jump to a 21.6% share in 2019.

ASML Will Take Semiconductor Equipment Lead from Applied Materials in 2019

Applied Materials competes directly with several companies:

  • ASML in metrology/inspection
  • Lam Research in deposition and etch
  • Tokyo Electron in deposition and etch
  • KLA in metrology/inspection

Lam’s market share will decrease from 15.6% in 2018 to 14.2% in 2019, due to the company’s high exposure to memory, and in particular NAND, which is being impacted by low ASPs and a high inventory overhang. Also, ASML’s EUV lithography systems minimize the need for the deposition and etch equipment used in multiple patterning with DUV lithography.

Tokyo Electron’s market share will decrease from 15.6% in 2018 to 14.8% in 2019. The company recently reported that its consolidated financial results (cumulative) for the first half of the current fiscal year were:

  • Net sales of 508,442 million yen (year-on-year decrease of 26.4%)
  • Operating income of 102,454 million yen (year-on-year decrease of 41.6%)
  • Ordinary income of 106,692 million yen (year-on-year decrease of 41.1%)
  • Net income attributable to owners of parent of 78,722 million yen (year-on-year decrease of 41.8%).

KLA’s market share will increase from 6.2% in 2018 to 6.9% in 2019. Metrology/inspection equipment is critical to assuring high yields during semiconductor manufacturing, particularly as new technology nodes are reached. Metrology systems are used to measure parameters such as thin film thickness or linewidths, and inspection systems are used to detect defects and monitor abnormalities in production.

Based on a modest recovery of 5% in the overall WFE market in 2020 and on capex spends planned by semiconductor manufacturers, ASML will increase its market share in 2020 to 22.8%, while Applied Materials will maintain its share of 19.3%.


Intel vs AMD vs Google vs Amazon vs NUVIA
by Daniel Nenni on 11-29-2019 at 6:00 am

NUVIA founders John Bruno Manu Gulati Gerard Williams III

Arguably the cloud was the quickest road to riches for chip designers large and small. As an emerging company, if you wanted to raise money just put “Datacenter” in your pitch deck and you were assured millions. You would be competing with semiconductor’s version of David and Goliath (AMD and Intel) but that was a good thing, right? Hundreds of start-up chip companies certainly thought so.

Intel currently owns 90% of the server processor market, followed by AMD. Intel literally cannot make chips fast enough to satisfy demand, and AMD seems to be stuck in the “just friends” zone amongst cloud companies, so there is certainly opportunity.

Back in the heyday of fabless semiconductor companies it seemed like anything was possible. The asset-heavy IDMs were slow to innovate, creating opportunity for the quicker-moving fabless companies with budgets big and small. Today IDMs are moving faster and lighter, but the biggest obstacle I see is the cloud companies doing their own chips, absolutely.

Google started it with their Tensor Processing Unit (TPU) in 2016 (now in its third generation), followed by the Edge TPU in 2018. Last year Amazon announced their own Graviton chip, which is based on the Arm architecture (now in its second generation). In fact, the only cloud companies among the top five (Microsoft, Amazon, Google, IBM, Oracle) that are NOT making their own chips are IBM and Oracle. IBM sold their semiconductor division to GlobalFoundries in 2015, and Oracle recently abandoned the Sun Microsystems IC team they acquired in 2010 for $5.6B, ouch.

That is not to say that money is NOT available for silicon startups gunning for Intel and AMD. Chip company NUVIA recently had their $53M coming-out party. Before you get your hopes up about raising $53M, note that the company’s founders (John Bruno, Manu Gulati and Gerard Williams III) are silicon experts who have taped out chips at Google, Apple, Arm, Broadcom and AMD and have more than 100 patents granted to date:

“The world is creating more data than it can process as we become increasingly dependent on high-speed information access, always-on rich media experiences and ubiquitous connectivity,” said Gerard Williams III, CEO, NUVIA. “A step-function increase in compute performance and power efficiency is needed to feed these growing user needs. The timing couldn’t be better to create a new model for high-performance silicon design with the support of a world-class group of investors.”

About NUVIA Inc.
Headquartered in Santa Clara, NUVIA was founded on the promise of reimagining silicon design for high-performance computing environments. The company is focused on building products that blend the best attributes of compute performance, power efficiency and scalability. For more information, please visit www.nuviainc.com.

More recently, you needed to have “AI” in your pitch deck if you wanted to raise money to make a chip. Hundreds of companies did, and are now making AI chips, creating our next semiconductor start-up bubble, in my opinion.

But still, the systems companies have again transformed the semiconductor industry. Apple, Google, Amazon, Huawei, etc… are writing some VERY big checks and throwing budgetary caution to the wind. Remember, these companies have seemingly unlimited compute resources and can run simulations and verification in minutes versus hours or days. Systems companies also have stricter time-to-market requirements (Apple for example) and a huge software burden that fabless chip companies will never directly experience.

Moving forward I see the systems companies acquiring even more chip start-ups and semiconductor talent and putting them in an unbridled design environment which will be VERY hard for fabless companies to compete with.

Intel, on the other hand, is mid-pivot with a new CEO and some very clever management hires. It is too soon to tell, but from what I have seen and heard inside Intel, the competitive bar will be raised significantly in the Goliath-vs-Goliath battle with the systems companies, leaving quite a few Davids behind, absolutely.


Webinar Recap: IP Security Threats in your SoC
by Daniel Payne on 11-28-2019 at 10:00 am

Methodics Security SoC

Three years ago my youngest son purchased a $17 smartwatch on eBay, but then my oldest son read an article warning that the watch would sync with your phone and then send all of your contact info to an address in China. My youngest son wisely turned the watch off and never used it again. Hackers have been able to spoof and hide obstacles from a Tesla Model S using Autopilot. In 2015 hackers took control of a Jeep Cherokee through its entertainment system. A researcher in 2016 showed how he could control his, or any other, Nissan Leaf over the Internet.

Clearly we live in a connected world, and security threats to life-critical areas need to be contained. The experts at Methodics just conducted a webinar on this topic, IP Security Threats in Your SoC, so I watched the 25-minute archived video to find out if there’s hope against security breaches.

Vishal Moondhra was the presenter, and he noted how semiconductor IP use is growing, with a single SoC often containing hundreds of IP blocks. The most sensitive electronic design areas are considered life-critical: automotive, medical, aerospace and industrial.

IP use is really quite a patchwork, with blocks coming from multiple sources:

IP patchwork Methodics

The entire process of creating an IP block is quite dynamic, taking months to years, with many versions released along the lifecycle in order to meet changing requirements. IP users often have tough questions that impact security, like:

  • Which version of each IP is really being used?
  • What scenarios for attack have each IP been subjected to?
  • Are there security vulnerabilities in each IP being used within an SoC? 

Let’s consider the example of a USB core, where during post-silicon penetration testing a security issue is uncovered. Which USB core version was being used? Who provided the IP? Did previous versions have the same security issue? Are there other cores using the same IP?

Answers to these questions often take too much time, they can be incomplete, and they require collaboration between multiple engineers across teams, probably using spreadsheets and documents. This is not a good security approach.

There is a systematic way to meet these security challenges by using enterprise traceability with an IP Lifecycle Management approach in the Percipient tool from Methodics. Here’s the big picture of how Percipient fits into an ecosystem:

Starting at the bottom we see the Data Management (DM) layer, which uses industry-standard DM tools. On top of that is the workspace management layer, where each engineer has their specific portion of an SoC design. The IP Lifecycle Management (IPLM) layer is where engineers control the release management process. At the meta-layer there’s information attached to each IP block, like which engineers are using it on which projects and any special IP properties. The Bill of Materials (BOM) layer is fully addressable, so you know exactly what your SoC project really contains, in a fully hierarchical fashion.

In this tool flow multiple vendor tools are integrated, like Jira for bug reporting, Jama and DOORS for requirements gathering, and even simulators from EDA vendors. The analytics portion allows you to quickly find out about any project and know the overall security vulnerability of your system.

The standards body Accellera has an IP Security Assurance (IPSA) Working Group, and Methodics is a member of that, because it takes a village to ensure that IP security is done right.

With a tool like Percipient you can now easily track all IP consumption: who is using an IP block, where each IP block is being used, which versions of an IP exist, bugs reported per IP, and even whether an IP is on a list of approved projects. Once this is all set up, you can measure the security level by looking at a dashboard or traversing the IP hierarchy for security issues, where colors indicate security severity:

For IP BOM management there are three main features provided:

  • IP traceability
  • Versioning with dependencies
  • SoC version history

With Percipient there is traceability for all of the IP blocks and each workspace being used per engineer on the project, which enables the system to identify which users are affected by any IP version change. Event notifications are then automated for each engineer involved, so there are no costly communication delays.

The BOM is traceable and integrated with Jira for bug tracking, so all security-impacted IPs are revealed. This traceability is created by adding a ‘security concern’ field to each bug; Percipient then rolls up all of the security concerns and displays the known security threats.

Security Threat Matrix
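
As an illustration of what such a roll-up involves, here is a minimal sketch in Python; it is not the Percipient API, and the hierarchy, block names and severities are hypothetical:

    from collections import Counter

    ip_tree = {                       # parent -> child IP blocks (hypothetical hierarchy)
        "soc_top":  ["usb_core", "ddr_ctrl"],
        "usb_core": ["usb_phy"],
        "ddr_ctrl": [],
        "usb_phy":  [],
    }
    concerns = {                      # security concerns recorded per block (e.g. from bug tickets)
        "usb_core": ["high"],
        "usb_phy":  ["low", "medium"],
        "ddr_ctrl": ["low"],
    }

    def roll_up(block):
        """Severity counts for a block plus everything instantiated beneath it."""
        totals = Counter(concerns.get(block, []))
        for child in ip_tree.get(block, []):
            totals += roll_up(child)
        return totals

    print(roll_up("soc_top"))         # Counter({'low': 2, 'high': 1, 'medium': 1})

The value of a tool doing this automatically is that the hierarchy, the concern records and the roll-up stay consistent as IP versions change, rather than living in spreadsheets.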

A live demo was then performed, walking us through how to use both the Percipient and Jira tools to report bugs, assign security issues to an IP block, and then see how the security matrix is auto-updated with the latest issues.

The final part of the webinar had Mr. Moondhra answering questions from attending engineers.

Q&A Session

Q: Where do the numbers in the security dashboard come from?

A: All of the security numbers are automatically rolled up from the entire IP hierarchy of an SoC. In our demo, we had 28 concerns, 4 high, 1 medium, and 23 low. This live number is auto-updated as time progresses, and the integration ensures consistency between Jira and Percipient.

Q: How easy is it to bring 3rd party IP into Percipient?

A: We’ve made it easy to import IP, so each new IP version is typically brought in as a tar file and checked in with a tool like Perforce; then you release a new version into Percipient, and it takes just seconds.

Q: Who fills out security concern field?

A: The users who track bugs and security issues, using the Jira tool.

Summary

The challenges of IP security threats are a real and growing concern for teams building new SoCs. Using tools like Percipient and Jira for safety-critical designs in automotive, medical and other human-life fields makes a lot of sense. Instead of trying to create your own home-grown software patchwork, why not give Methodics a call to see how their approach would fit into your methodology?

Watch the archived video here. 

Related Blogs


GM’s CES No Show: EmBARRAssing!
by Roger C. Lanctot on 11-28-2019 at 6:00 am

After failing in 2017 and 2018 to put a single woman on-stage to deliver a high profile keynote, the Consumer Technology Association featured four female keynoters in 2019. Until recently, two women were on the docket for the 2020 show in January, but news arrived last week that General Motors’ CEO Mary Barra had cancelled her previously announced appearance.

All indications and expectations had been pointing to an announcement from Barra of more details regarding GM’s plans for new electric vehicles with the potential of a high profile EV reveal. Alas, GM claimed that the distraction of the United Auto Workers strike negated the General’s ability to put an EV prototype together in time.

There are few in the industry who buy that excuse or explanation. Any EV prototype would have been assembled without much assistance from the UAW rank and file. The more likely and credible explanation is that GM got wind of what Ford Motor Company was going to show last Sunday and realized that its own plans were half-baked.

GM executives generally, and Barra in particular, can talk a good game on earnings calls regarding the progress of the company’s Cruise Automation autonomous vehicle development or its plans for electric pickup truck production in Hamtramck in 2021. But when it comes to taking the stage at the largest technology event in the world it appears that GM lost its nerve.

There will be no shortage of auto makers touting their tech at the Las Vegas event. But Daimler will now grab the limelight as the sole car maker keynote.

Since her appointment as CEO on January 15, 2014, Barra’s signature strategy has been to downsize the once-massive GM organization – selling off Opel to PSA, closing plants, and exiting multiple overseas markets. The moves, though severe and maybe even necessary, have been greeted with unmitigated admiration by investors as GM’s profits have spiked.

When the UAW walked out, an observer might have been forgiven for thinking that GM had decided the soundest financial move for the company would be to eliminate vehicle production entirely – in the interest of improving profits. Sadly enough, the reality set in, after a more than month-long work stoppage, that actually making cars was essential to the company’s fiscal well-being.

Nobody buys the explanation – from GM – that the UAW is to blame for Barra’s keynote cancellation. Just as GM failed to deliver a final negotiated agreement to the UAW in time to forestall the walkout, Barra and GM determined that there was more to be gained from NOT taking the stage than from taking the stage with a half-baked tale to tell.

GM is getting quite adept at not doing things, like not supporting California’s effort to preserve its exceptional status regarding emission controls and not launching a comprehensive EV strategy. GM’s decision to opt out of CES might even free up the stage for a rival – maybe even Ford, which would welcome the opportunity to talk about the new Mustang Mach-E.

If Mary Barra isn’t careful she may well be remembered more for what she didn’t do than for what she did while leading GM. Not showing up for CES is simply embarrassing.


Webinar Recap: Challenges of Autonomous Vehicle Validation
by Daniel Payne on 11-27-2019 at 10:00 am

Waymo Jaguar

Autonomous vehicle progress is in the daily news, so it’s quite exciting to watch it develop with the help of SoC design, sensors, actuators and software from engineering teams spanning the entire globe. Tesla vehicles have reached Level 2 autonomy, the Audi e-tron is at Level 3, and Waymo is nearly at Level 5, with robot taxis being tested in Phoenix and Silicon Valley with a human driver ready to take the wheel. How are EDA, IP and systems companies facing the challenges of delivering electronic systems to enable autonomous vehicles that are safe under all conditions?

Tesla Model 3
Waymo Jaguar
Audi e-tron

To help answer that question I attended a webinar from Mentor, a Siemens business, presented by Dave Fritz – he’s the Global Technology Manager for Autonomous and ADAS. Three key concepts were shared to frame the webinar discussion on how to validate Autonomous Vehicles:

  1. Correct operation can only be determined in the context of the entire vehicle and the environment within which it is operating.
  2. Constrained random testing cannot guarantee coverage of corner cases that are not reachable on physical platforms, and it requires correlation between virtual and physical models.
  3. Consolidation of functionality is inevitable and will follow the path of other industries that have gone through the same transformation.

There are similarities in how a Smartphone Application Processor gets designed and validated compared to an ADAS/AV Controller, as they both have closed loop modeling as shown below with inputs on the left, outputs on the right, and a stack in the middle:

Smartphone App Processor
ADAS/AV Controller

With an AV, the input stimulus comes from real-world driving conditions, so the number of states is huge – far beyond what an app processor would encounter – and a new validation methodology is needed. Instead of a hardware-driven development process where hardware is developed first, with software and validation following in sequence, a continuous integration process in which hardware and software are developed in parallel is a better approach.
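
As a toy illustration of what “closed loop” means here, the sketch below (hypothetical, not PAVE360) feeds a noisy sensor measurement into a simple controller that commands a basic vehicle model, and the vehicle’s motion in turn changes what the sensor sees:

    def perceive(true_gap_m, error_m=0.5):
        """Sensor stub: measured distance to the lead vehicle, with a fixed error."""
        return true_gap_m + error_m

    def controller(gap_m, target_gap_m=30.0, kp=0.2):
        """Simple proportional longitudinal controller; returns an accel command in m/s^2."""
        return kp * (gap_m - target_gap_m)

    def step_speed(speed_mps, accel_mps2, dt_s=0.1):
        return speed_mps + accel_mps2 * dt_s

    gap_m, ego_mps, lead_mps = 50.0, 25.0, 22.0
    for _ in range(100):                            # 10 seconds of simulated driving
        ego_mps = step_speed(ego_mps, controller(perceive(gap_m)))
        gap_m += (lead_mps - ego_mps) * 0.1
    print(f"final gap: {gap_m:.1f} m, ego speed: {ego_mps:.1f} m/s")

Every element in the real tool chain – sensor models, the controller stack, the vehicle dynamics and the environment – is vastly more detailed, but the loop structure is the same, which is why the input state space explodes so quickly.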

The ecosystem for automotive design is quite complex, with multiple vendors, which in turn calls for increased collaboration to meet the stringent ISO 26262 requirements for functional safety.

Automotive Ecosystem

Shown in blue above is the Virtual Model, and this early model is what allows AV design teams to get to market quicker by simulating and validating the entire environment.

From a product viewpoint, Siemens has assembled a wide swath of technology called PAVE360 that allows automotive scenarios to be modeled, video to be viewed, and sensor models to generate raw data for LIDAR, radar and camera.

PAVE360

The beauty of this methodology is that a complete system like an AV can be modeled and validated even prior to silicon. In the example, Dave talked about how the ECU was modeled with a PowerPC core, the braking used physics-based control, the transmission was modeled in Matlab, and even the dashboard was instrumented. Scenarios can be brought into PAVE360 from accident databases or created from scratch. Big questions are answered when running scenarios, like:

  • Did we avoid the accident?
  • Did the occupants survive?

Yes, you can even model what happens to the passengers in terms of airbag interactions.

Air bag deployment

It’s wonderful to hear about AV companies driving millions of actual miles to build up experience in real driving, but to reach sufficient safety levels it has been estimated that you need to drive billions of miles, a goal not likely to be achieved on the road. With virtual scenarios you can drive those billions of miles under any conditions.
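
A simple back-of-the-envelope calculation shows why the number is so large. Using the statistical “rule of three” (roughly 3/p failure-free trials are needed to claim a failure rate below p with 95% confidence), and assuming a benchmark of one serious failure per 100 million miles (an assumed figure, not one from the webinar):

    target_rate = 1 / 100_000_000      # assumed failure rate to demonstrate: 1 per 100M miles
    miles_needed = 3 / target_rate     # rule of three: ~3/p failure-free miles for 95% confidence
    print(f"{miles_needed:,.0f} failure-free test miles")   # 300,000,000

And that is for a single software release under one set of operating conditions; re-validating every release across many conditions quickly pushes the total into the billions of miles cited above.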

Summary

The webinar also addressed issues like the fidelity of modeling abstraction, using formal methods, correlating physical and virtual models, and handling corner cases. What I came away with was that using PAVE360 as a platform creates high confidence that virtual models indeed match physical ones, and that you can catch corner-case issues in the lab before they show up in the field. Of course, you want to continue on-road testing to ensure there are no surprises from virtual testing.

To view the archived webinar, start here.