Emulation Webinar SemiWiki

KLAC Solid QTR and Guide With No China Worries

Robert Maire, Semiconductor Advisors

  • KLA reports excellent June quarter and better September guide
  • Orbotech diversification helps provide growth
  • Mix is perfect; China working fine
  • Foundry is solid

Solid execution and Financials
KLAC reported $1.46B in revenues and non-GAAP EPS of $2.73, with gross margins of 60.3%. That is versus expectations of $1.42B and EPS of $2.41, a handy beat. The guide is for September revenues of $1.48B ±$75M and EPS of $2.42 to $3.06, versus the street at $1.4B and $2.29 in EPS.

Perhaps most important, management feels that the second half will be up versus the first half with a chance of more memory in the December quarter.

Diversification works
Perhaps the only negative in the quarter was that the core "process control" business (the old KLA) was down slightly, 2%, quarter over quarter. The new business segments of semiconductor process and PCB & Component were up sharply, 18% and 26% respectively.

Quarters like this are one of the key reasons for KLA's diversification, and it worked like it should. Wafer inspection was down 9%, with patterning up 3% Q/Q. Overall business was up 3% Q/Q.

Given that ASML saw a very sharp drop in orders for litho tools, we are not surprised to see a bit of softness in wafer inspection. The fact that patterning was up is a modest positive, as it could have been soft as well. We would expect patterning sales at KLA to trail litho sales at ASML by a bit.

China sales continue unabated and untouched
Much as we saw with Lam, KLA's China sales seem to have seen near-zero impact. China business with KLA is neck and neck with Taiwan (TSMC) business at 26% each. Korea is not far behind at 21% (Samsung), with the US at 11% (Intel and a smidge of Micron?).

From management's perspective there seems to be no expectation of China slowing. You certainly wouldn't know there were any restrictions whatsoever on China sales of US semiconductor technology.

Financials remain the best in the business
With gross margins of 60.3%, operating margin at 35%, and an expectation of further improvement, it's hard to complain. The dividend was increased again, to $0.90 per quarter, and free cash flow was great at $411M for the quarter.

Customer demand remains strong
Management commented at several points that end-user demand remained very strong, with no expectation of anything other than getting better. Covid-19 seems almost a distant memory, as nearly all supply chain issues appear long since resolved. Even though the country and the world seem preoccupied with Covid and political instability, you wouldn't know it from KLA's report, which reads much like any other up cycle in the industry.

Intel's recent admission of its 7nm stumble doesn't look to have an impact on KLA, as Intel, if anything, should likely be spending more with KLA to try to find and fix the yield-busting problems.

We also continue to hear that Samsung still has yield issues; the only company that seems to be flying along is TSMC... perhaps they buy a lot of KLA tools to keep things running so smoothly.

China domestic chip producers are obviously spending big time to get up the steep learning curve. There seems to be no shortage of money to spend on the semiconductor industry in China. KLA tools are likely at the top of their wish list as they accelerate learning.

The Stock
The stock was off slightly in the aftermarket, likely because the core business was flattish and the stock was up big today in advance of earnings. We would not be surprised to see some profit taking given the strong ride KLA stock has been on.

The sector as a whole has been on fire, as semiconductors continue to be a way to play the anti-Covid investment strategy driven by demand for work-from-home and remote learning. The strength seems to be lasting longer than expected, especially if it continues all the way through the second half of the year unabated.

KLA remains one of our favorite all-time holdings in the group.
WEBINAR: Security Verification of Root of Trust for Xilinx

Tortuga Logic is hosting a webinar on Tuesday, August 18th from 12 to 1PM PDT, in which Xilinx will present their experiences using the Tortuga Logic Radix-S and Radix-M products for security verification of the root of trust in their advanced SoC FPGAs. REGISTER HERE to attend the webinar.

Security Verification of Root of Trust

Security Challenges
In general, security verification is problematic for several reasons:
  • Traditional dynamic methods, even with constrained random, struggle to find the “abuse” type of problems that are common in security attacks. Even the best of directed+random tests still explore around nominal expected behaviors.
  • Directed (+random) tests only exercise specific behavior instances, lacking the completeness you want for robust security signoff.
  • Security problems often span between hardware and software. Formal would be helpful for completeness in the hardware but cannot help with the software part of the problem.
By their nature, Xilinx products are highly configurable, which makes security verification an even more challenging problem. That Xilinx considers the Radix products an effective way to address these challenges is a pretty hefty endorsement.

Xilinx Use to Verify RoT Security

From an advance viewing, I know that Nathan will be talking about application to security testing in several key areas:
  • To verify zeroization of key material so that confidential information cannot be leaked. Previous methods depended on sampling, which was necessarily incomplete.
  • To verify that flows of key material will be restricted within the root of trust.
  • To verify that the integrity of key information will be controlled through access controls, so that, for example, key data cannot be modified from the outside.
  • He will also address the bitstream security question, always a concern for FPGA-based logic.
A key point Nathan will discuss in all of this is the importance of Tortuga Logic's information flow verification in this security testing, a capability which goes right to the heart of the completeness challenge I mentioned earlier.

Summary

Xilinx products are popular in a wide range of applications where hardware-enabled system security is a requirement. Security for Xilinx platforms is provided by a root of trust subsystem, for which a large number of security requirements must be verified to provide a sufficient level of assurance. Pre-silicon security verification is a difficult problem due to design complexity, the fact that security issues often span hardware and software, and the fact that existing tools target functional verification rather than security verification. This presentation will cover how Xilinx uses Tortuga Logic's Radix to verify several root of trust security requirements more efficiently throughout the development lifecycle. Radix extends existing simulation and emulation flows to efficiently verify confidentiality and integrity requirements, enabling an effective secure development lifecycle for hardware.

Speakers

Dr. Nicole Fern is a Senior Hardware Security Engineer at Tortuga Logic. Her primary role is providing security expertise and defining future features and applications for the product line. Before joining Tortuga Logic in 2018 she was a postdoc at UC Santa Barbara, where her research focused on hardware verification and security.

Nathan Bolger is a Senior Verification Engineer at Xilinx Inc. He has been with Xilinx for 8 years as part of the front-end verification team and is responsible for the processor subsystem of Xilinx's SoC devices. He has been primarily responsible for verification of the security and configuration center for two generations of products, focusing on verification of cryptographic algorithm accelerators and the device root of trust.

REGISTER HERE to attend the webinar.
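Information flow tracking is the capability at the heart of that completeness argument: rather than checking specific leak scenarios, secret data is labeled (tainted) at its source and the verification proves it can never propagate to an untrusted observation point. The sketch below is a toy, simulator-level illustration of the idea in Python; it is not the Radix implementation or API, and the signal names are invented for the example.

    # Toy information-flow (taint) tracker: values carry a taint bit, and any
    # operation on a tainted value produces a tainted result. Names are
    # invented for illustration; this is not the Radix API.
    class Tainted:
        def __init__(self, value, tainted=False):
            self.value = value
            self.tainted = tainted
        def __and__(self, other):
            return Tainted(self.value & other.value, self.tainted or other.tainted)
        def __xor__(self, other):
            return Tainted(self.value ^ other.value, self.tainted or other.tainted)

    def leaks(outputs):
        # Confidentiality property: no key-derived (tainted) value may reach
        # an observable output such as a debug or test port.
        return [name for name, sig in outputs.items() if sig.tainted]

    key = Tainted(0x3C, tainted=True)    # key material, tainted at the source
    mask = Tainted(0xFF)                 # ordinary untainted input
    masked = key & mask                  # taint propagates through the logic
    print(leaks({"debug_out": masked}))  # ['debug_out'] -> a flagged leak

A real flow states such properties over RTL signals and proves or simulates them across the whole design, regardless of the specific input stimulus, which is where the completeness benefit comes from.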

HCL Webinar Series - HCL VersionVault Delivers Version Control and More

HCL is an interesting organization. You may know them as an Indian company that provides software and hardware services. At about $10B US and over 110,000 employees working around the world, they are indeed a force in the industry. They've also created a software company called HCL Software that develops tools and technologies of interest to IC designers. The webinar discussed here presents VersionVault, which delivers version control and more to the designer.

HCL Software is doing a series of webinars on their DevOps offerings. For the uninitiated, the term refers to the combination of software development (Dev) and IT operations (Ops). It's all about improving the system development lifecycle and the quality of the delivered product. On July 22, they conducted a webinar on VersionVault. The format included live video of the presenters interacting with each other, with slides only at the beginning; I found the style of this delivery quite engaging.

The webinar was presented by Steve Boone, head of product management at HCL Software DevOps; Howie Bernstein, product manager for HCL Compass and HCL VersionVault; and John Kohl, chief architect for VersionVault (for 27 years).

Webinar Content

Steve began by providing an overview of the DevOps portfolio from HCL Software. Steve presented an eye-popping statistic: by 2022, 80% of revenue growth will depend on digital offerings and operations. That makes you think long and hard about what's important. Steve presented a compelling overview of the acceleration of digital transformation and what it means to all of us and to business health in general.

This discussion alone makes the webinar worth watching. You can find an overview of the secure, data-driven business agility offered by HCL DevOps here.

Howie then weighed in on just what VersionVault is. It turns out there are a lot of descriptions for this product. Paraphrasing, here are a few:

  • Version control software
  • Enterprise class (unlimited scalability and the ability to handle very complex structures)
  • Easy to use with a built-in configuration management process
  • Good for regulated industries thanks to built-in authoritative auditing capabilities
  • Reduces time for embedded system development
  • Can synchronize design among a globally disparate development team

Bottom line: HCL VersionVault delivers version control and more. So the leading question posed by Howie was: "OK, if it's so great, how come I never heard of it?" The response is quite interesting: you have heard of it, but under another name. VersionVault is functionally equivalent to ClearCase, except it is fully 64-bit on all platforms. I'm willing to bet most folks have heard of ClearCase, so this was starting to get really interesting.

What followed was a detailed discussion from John about the unique and powerful capabilities of VersionVault. The VersionVault virtual file system delivers some significant advantages. To learn more, you’ll need to watch the webinar. The rest of the webinar consisted of a back-and-forth conversation between Howie and John as they discussed the product and fielded questions from the audience. I felt like I was eavesdropping on a planning meeting – very engaging and informative. The chemistry between these two folks made the whole thing work really well.

I’ll conclude with a short description of VersionVault from the VersionVault web page:

HCL VersionVault can help organizations by balancing flexibility with the organization's need for control. It provides controlled access to soft assets, including code, requirements, design documents, models, schematics, test plans, and test results. User authentication and authoritative audit trails help your organization meet compliance requirements with minimal administrative hassle. With access virtually anytime, anywhere, HCL VersionVault gives you the freedom to work efficiently where and when you need to.

The webinar is a little under an hour, but the time flew by for me; it was that good. You can watch the webinar here.

CDN Live

As CDN Live is approaching, you can visit HCL’s virtual booth there as well. They will have a broad range of staff at the booth to answer questions and interact with visitors in real-time via online chat. There will be lots of downloadable content to explain how they can help your design process. It’s definitely worth a look.

They’ve even provided a code for instant approval for those who choose to register: SilSP6


Murphy's Law vs Moore's Law: How Intel Lost its Dominance in the Computer Industry

Last week, Intel announced its second-quarter financial results, which easily beat the analysts' consensus expectations by a handsome margin. Yet the stock price plummeted by over 16% right after the earnings call with management. Seven analysts downgraded the stock to a sell, and the common theme in all the downgrades was that the 7-nanometer process was delayed again, which meant that Intel had fallen behind in process technology and was lagging TSMC by a wide margin.


On its earnings call, Intel posted $19.7BN in revenues vs. the street at $18.54BN and generated EPS of $1.23 vs. $1.12, but it also delivered more bad news on its manufacturing process technology, summarized in a note from Jefferies analyst Jared Weisfeld after the earnings call.

There's a lot to unpack in that note, but the key question it raises is: how could Intel, which was for decades the undisputed leader in manufacturing process technology, allowing it to deliver the highest-performance and highest-margin CPUs for PCs and servers, lose its lead so dramatically?

I believe that Intel is in the middle of what Andy Grove called a "Strategic Inflection Point" (SIP) that goes to the heart of its business. The last time Intel faced such an existential SIP was in 1984, when it was losing market share in memory chips, its core business, and it pivoted to become a microprocessor company. In Grove's book, Only the Paranoid Survive, he described how Intel made that transition and, as a result, by 1995 became the world's largest and most influential semiconductor company. Clearly Grove made the right decision to pivot into microprocessors, as that got the company out of a highly competitive, low-margin commodity business and into a ridiculously high-margin business of providing a highly differentiated and proprietary product with well over 90% market share.

Intel continued to hold its position as the world's largest semiconductor company until 2017, when Samsung overtook it by revenues and TSMC caught up on manufacturing process technology. To understand how Intel became the dominant semiconductor company, held that position for over 20 years, and then lost it, one has to go back to 1985, when Intel's 386 chip was its lead product in the then-nascent PC market.

What Grove could not have fully appreciated at the time he was navigating the memory-to-microprocessor pivot was how significant that move would later turn out to be. At the time, PCs were still a relatively small market and ran MS-DOS from Microsoft, which had a clunky text-based user interface. PCs were underpowered compared with mainframes and were relegated to simple tasks such as word processing and spreadsheets, while mainframes, minicomputers, and workstations continued to be used for "real" computer work. Nobody back then could have imagined that microprocessors from Intel would become the brains behind the entire computer industry for decades to come. When I joined Intel in 1983, nobody even at Intel used PCs for their work. As a software developer, I was coding for a Digital Equipment Corporation (DEC) VAX minicomputer and Apollo workstations. Neither of those two companies exists anymore.

What Intel was able to accomplish was to harness the advanced manufacturing expertise it had honed making memory chips in volume and create an engine of innovation that leveraged the tight coupling of Intel's microprocessor design with a manufacturing capability that gave the CPU designers more and more transistors for each generation of microprocessor, making it faster and cheaper. Moore's law was less of a law than a mandate to the manufacturing side of Intel to keep refining the process technology and shrinking the size of transistors, so as to pack more onto a single chip of silicon while also clocking the processors at faster speeds. To put it in perspective, the 386 had 275,000 transistors and the next generation, the 486, which I worked on, had over one million. Today's Core i7 has around 3 billion transistors.

I remember going to meetings with engineers working at the mainframe and minicomputer companies to get their feedback on the 486, because we, of course, wanted them to use that chip in their next-generation systems, and they laughed at us for being so presumptuous as to assume they would ever use what they considered a toy. However, they were very generous with their suggestions and explained to us how their advanced designs worked and why we would never meet their requirements. What they failed to take into account was that as transistor budgets grew, Intel engineers were able to add all of those kinds of advanced capabilities, and more, to its CPUs, eventually overtaking their proprietary systems in performance, and at far lower cost.

The result of Intel's rapid innovations and improvements in the price-performance of the x86 CPUs was that it created a Strategic Inflection Point for the entire computer industry. To explain this, I'll refer back to Only the Paranoid Survive, where Grove described the vertically integrated structure of the old computer industry.

This vertically integrated approach had its pros and cons. The advantage is that when a company develops every piece itself, the parts work better together as a whole. The disadvantage is that customers got locked into one vendor, which limited choice. The other disadvantages, which are more important, are that the rate of innovation was only as fast as the slowest link in the chain, and that the market was more fragmented, which prevented any one company from reaching economies of scale. The end result was that the computer industry was made up of independent islands with no interoperability or scale. Once a customer chose one solution, they were stuck with it for a very long time and paid a lot more.

Then the microprocessor came along, and as it became the basic building block for the industry, economies of scale kicked in. This greatly accelerated the rate of improvement and vastly expanded the market for PCs, then later servers, eventually replacing the proprietary systems.

As a result of the reorientation of the industry from vertical to horizontal, many computer companies did not survive their Strategic Inflection Point. DEC, Unisys, Apollo, Data General, Prime, Wang, and many others went out of business or got acquired by PC companies, as in Compaq's acquisition of DEC. Grove drew key lessons from this massive change in the computer industry.

This modularization theory was also explored in Clayton Christensen's paper on disruption, disintegration and the dissipation of differentiability, and in more detail in his book The Innovator's Solution.

By 1995, this transformation was in full swing, the transition from the "Old Computer Industry" to the "New Computer Industry" was complete, and Intel had won. However, Intel missed the next inflection point, and in doing so planted the seeds of the problems it faces today.

The strategic inflection point that Intel missed was mobile, more specifically Apple's iPhone, which was launched in January 2007. Since the Intel x86 CPUs used too much energy, Apple chose to go with chips based on the much more power-efficient ARM architecture. As it happened, Intel had acquired StrongARM from Digital Equipment Corporation, which evolved into the XScale processor, a low-power chip designed for mobile (subsequently sold to Marvell in 2006). Intel certainly had the engineering and manufacturing capabilities to design and supply the kind of chips that Apple needed in the new iPhone. Apple had already switched from the IBM PowerPC chips to x86 chips in its Macs, and Steve Jobs had a very good relationship with Andy Grove and later Paul Otellini, who had become CEO at the time the iPhone was being designed. Otellini described the decision in a 2013 interview with Alexis Madrigal in the Atlantic.

Not being willing to win over Apple meant that Intel was shut out of participating in the mobile phone market, but more importantly, it gave TSMC an opening to become the manufacturer of choice for the chips going into the Apple iPhone and then all the other Android-based mobile devices. This took the horizontal layer that Intel controlled in semiconductors and split it into two narrower layers: one, the ARM-based CPU architecture, which ended up dominating the mobile phone market, and below it, the manufacturing of these devices, of which TSMC ended up with the lion's share. Since ARM licenses its design to other so-called "fabless" chip design companies, such as Qualcomm, it vastly increased the number of companies innovating around the ARM architecture, at a magnitude Intel couldn't dream of matching. This accelerated the rate of innovation and variety, all still compatible with the ARM architecture, so software designed for mobile phones had a larger market. In addition, TSMC, as the chip fab of choice, was able to scale up and enjoy massive economies of scale, which allowed it to push its process technology forward at an even faster rate and eventually surpass Intel, as it recently did. Even Samsung was able to catch up with Intel, first as a supplier of memory chips and later of more advanced logic chips. The result is that leadership in the mobile era has shifted from Intel and Microsoft, sometimes referred to as Wintel, to ARM and TSMC.


This trend was thoroughly analyzed in Chips and Geopolitics and Intel and the Danger of Integration by Ben Thompson in his Stratechery blog, an excellent account of how TSMC was able to get ahead of Intel in manufacturing advanced semiconductor products.

Exacerbating these external events, Intel also had the misfortune of poor leadership under Brian Krzanich, which resulted in an increased pace of senior management turnover. More recently, Apple has successfully lured many of Intel's top VLSI chip designers in Israel and Oregon to work on designing Apple's processors for its next-generation iPhones and iPads and, soon, even MacBooks.

Intel is potentially facing as big a strategic inflection point today as it did in 1984, but the major difference is that its core data center business is highly profitable and still growing due to the continued strength of cloud computing, and Intel still dominates as a supplier of CPUs used in cloud data centers. That sector requires very high-performance CPUs, and power consumption, although a factor, is not as important as in a battery-powered mobile device. However, a growing number of fabless chip design groups, including Amazon, Google and Huawei, as well as startups such as Ampere, are working on ARM-based high-performance CPUs for data centers, and that market will get even bigger with the future growth of edge computing in 5G networks. Since TSMC will be manufacturing these ARM server chips, and the new ARM-based chips in the MacBooks are rumored to be faster than Intel's CPUs (and lower power), Intel's advantage will erode even in its core data center business.

In a January research report, Marc Lipacis, an equity analyst at Jefferies, makes a strong case that Intel should spin off its manufacturing business, which could then compete directly with TSMC. His analysis was that Intel would add $19 to its share price by going fabless.


By not playing a leading role in the growth of the mobile phone market, Intel lost more than market share and revenue; it lost its leadership role in the next era of computing and communications. Intel's role in the PC era went beyond supplying the CPUs: it controlled the ecosystem, which allowed it to influence the direction of technology to its benefit. In the mobile computing era, Intel is absent, having left mobile ecosystem leadership to ARM and TSMC.

As mentioned earlier, Intel lost a large number of senior managers and has recruited new management from fabless companies, who may be more receptive to spinning off the manufacturing part of Intel. The new CEO, Bob Swan, was brought in from outside the company, first as CFO and later promoted to CEO, so he may have less attachment to the old ways and the courage to do something bold. The board, on the other hand, still has a lot of legacy directors who may not be receptive to such a bold but necessary move. The market has certainly made its position clear, as the stock price reflects. History is against Intel's chances of catching up on process technology against the external 10x force of TSMC and ARM. As Grove wrote in Only the Paranoid Survive, such changes cannot be wished away. Even if Intel has enough inertia to continue on its current path, the macro trends will eventually catch up with it, and if it does not manage its way through this strategic inflection point, it will lose its preeminent position in the technology industry.

How Samtec Puts the Customer First

Samtec and SnapEDA

An exceptional customer experience starts before the sale. Successful companies realize it never ends. Dedicated post-sales support and a robust ecosystem for aftermarket product extensions are ingredients that tend to delight the customer. These comments are relevant in the consumer sector, but they apply to high tech as well. In my last post about Samtec, I discussed the company's commitment to customer service. I'd like to review a couple of recent examples that illustrate how the company puts a "customer first" philosophy into action.

It’s Easy to Integrate Samtec Products

Samtec recently published an overview of their expanded signal integrity (SI) evaluation kit portfolio that drives home the importance of "try before you buy" as an element of customer delight. The company even has a trademarked term for its commitment to customer service: Sudden Service®. Samtec has a broad portfolio of products to support high-speed data communication at the system level. They back that up with a broad portfolio of SI eval kits that let you test their products in your target system to ensure they meet performance requirements in a real setting. Here are some highlights of recent additions to the portfolio.

NovaRay® SI Evaluation Kit

Samtec NovaRay® 112 Gbps PAM4 Extreme Density Arrays combine fully shielded differential pair design with two reliable points of contact. NovaRay is ideally suited for the high-performance, dense systems found in the data center. The kit provides system designers and SI engineers an easy-to-use solution for testing NovaRay connectors. It delivers a high-quality system with robust mechanical design.

NovaRay Flyover® SI Evaluation Kit

As data rates increase, trace lengths on PCBs decrease. Samtec's high-speed Flyover® cable assemblies simplify PCB design and limit signal degradation in high data rate applications. Samtec's NovaRay Extreme Density & Performance Socket Cable Assembly uses 34 AWG Eye Speed® twinax cables. The NovaRay Cable Terminal includes rugged metal latching for mating with the NovaRay cable assembly. The NovaRay Flyover® SI Evaluation Kit offers engineers an easy-to-use platform for testing this bleeding-edge solution.

FQSFP-DD to NovaRay Flyover SI Evaluation Kit

Samtec's Flyover QSFP Systems provide improved signal integrity and architectural flexibility by routing critical high-speed signals through low-loss, ultra-low-skew twinax cables instead of through expensive, lossy PCBs. The ultra-high-density design includes sideband signaling via press-fit contacts to help increase airflow, and a multitude of "End 2" options that allow for maximum design flexibility. The FQSFP-DD to NovaRay Flyover SI Evaluation Kit offers an off-the-shelf option for testing Double Density Flyover QSFP Cable Systems with a NovaRay End 2 option.

AcceleRate® HD SI Evaluation Kit

Samtec AcceleRate® HD Ultra-Dense Multi-Row Mezzanine Strips support high-speed, high-cycle applications with maximum routing and grounding flexibility. AcceleRate HD features milled Edge Rate® contacts with smooth mating surfaces to reduce wear and increase durability. The AcceleRate HD SI Evaluation Kit offers an off-the-shelf, easy-to-use system for testing AcceleRate HD. Test engineers benefit from the high-quality system targeted at lab use.

UEC5-2 SI Evaluation Kit

The Samtec 20+ Gbps FireFly™ Edge Card Socket Assembly is one part of a two-piece system within the FireFly Micro Flyover System™. It offers both a tiny footprint on the PCB and flexibility for both FireFly copper and optical cable assemblies. The UEC5-2 SI Evaluation Kit gives engineers an easy-to-use solution for testing UEC5-2 and edge card connectors. UEC5-2 connectors are popular on FPGA/SoC evaluation and development kits, embedded computing boards and many other applications.

It’s Easy to Design with Samtec Products

In a recent press release, Samtec announced over 200,000 symbols and footprints for its interconnect products. Engineers can spend days creating the digital models, such as symbols and footprints, for each component on their circuit boards. Samtec and a company called SnapEDA teamed up to make these models readily available to engineers, because both companies believe that engineers deserve the best in ease of use, quality, and convenience. In the second quarter of 2020, SnapEDA created over 120,000 new Samtec connector models, including high-speed and micro-pitch board-to-board, edge card, and rugged connectors. With these new connector models, engineers can now easily discover and design-in Samtec products.

"Samtec is an inspiration when it comes to their dedication to the customer experience. Whether it's their 24-hour free sample program, or their endless pursuit to expand the availability of design resources for their products, Samtec is truly world-class. These new models are yet another example of that dedication to their customers," said Natasha Baker, CEO and Founder of SnapEDA, based in San Francisco, CA.

All models can be downloaded from Samtec's website. They are also available on SnapEDA, as well as through over a dozen of its collaborators, including Digi-Key and Mouser. Formats supported include Cadence OrCAD, Allegro, Altium, Autodesk Eagle, Mentor PADS, KiCad, PCB123, and Proteus. Does your system-level cable/connector vendor do all this for you? If not, you may want to give Samtec a call.

Structural CDC Analysis Signoff? Think Again.

Talking not so long ago to a friend from my Atrenta days, I learned that the great majority of design teams still run purely structural CDC analysis. You should make sure asynchronous clock domains are suitably represented in the SDC, and find all places where data crosses between those domains, requiring a synchronizer, gray-coded FIFO or similar approved macro at those boundaries. But purely structural CDC analysis (and for that matter RDC analysis too) isn't good enough.

Why not? Some people still don't see why they shouldn't handle the whole business in STA, throwing in an extra bit of Tcl programming. The reason that doesn't work is that purely structural analyses generate gigabytes of potential problems, almost all of them spurious. Whether you use a commercial tool or custom-craft your own in STA, no human reviewer can sensibly process that output, so they either discard it ("yeah, I ran CDC, but I didn't have time to review the results") or they subjectively filter a big chunk at a time, not knowing what they might miss in the process.

The answer is to filter more intelligently, and that's where this gets tricky. Some automation is part of the answer, carefully clustering results (which can get pretty sophisticated using ML methods).
But that doesn't reduce those gigabytes enough, and it may miss important considerations in power domain crossings and AMS crossings. Getting this right requires more intelligence beyond a simple structural analysis: more functional and implementation awareness.
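To make the structural baseline concrete, here is a minimal sketch, in Python over an invented netlist model, of what a purely structural crossing check does: walk from each flop to its fanout and flag any crossing between asynchronous domains that does not land on an approved synchronizer cell. This is not any commercial tool's implementation, and it shows the limitation discussed above: every unsynchronized crossing is reported, with no sense of which ones are functionally impossible.

    # Minimal structural CDC check over a toy netlist. Each flop records its
    # clock domain, its fanout, and whether it is an approved sync cell.
    # The netlist and names are invented for illustration.
    from collections import namedtuple

    Flop = namedtuple("Flop", "domain fanout is_sync_cell")

    netlist = {
        "cfg_reg":  Flop("clkA", ["sync1", "data_reg"], False),
        "sync1":    Flop("clkB", ["sync2"], True),   # first stage of a 2-DFF sync
        "sync2":    Flop("clkB", [], True),
        "data_reg": Flop("clkB", [], False),         # raw crossing: violation
    }

    def structural_cdc(netlist):
        # Report every domain crossing whose destination is not a sync cell.
        violations = []
        for src, f in netlist.items():
            for dst in f.fanout:
                d = netlist[dst]
                if d.domain != f.domain and not d.is_sync_cell:
                    violations.append((src, dst))
        return violations

    print(structural_cdc(netlist))  # [('cfg_reg', 'data_reg')]

On a real SoC this traversal produces the mountain of raw crossings described above, which is exactly why the intelligent filtering that follows matters.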

Why structural checks aren't enough

Take the example in the picture above: a REQ/ACK handshake between two asynchronous clock domains in different power domains. Synchronizer cells alone don't help. You can check that the REQ and ACK signals go through sync cells, but it is wrong to synchronize the bits of a data bus individually; the data bus crossing must be checked functionally. And since the two sides of the handshake sit in two distinct power domains, there are additional crossing checks to be considered on power-down and wake-up. Certainly, you must sequence clocks and resets correctly across these domains under different shut-down/power-up or voltage scaling conditions. But you also need to remember that under voltage scaling, timing behaviors will change.

Some filtering must be based on a system-level understanding of intent. Quasi-static signal assignments are one such example. Quasi-statics are nominal domain crossings for signals which only rarely change; configuration signals are a very common example. Quasi-static signals are the biggest contributor to false-positive violations in domain crossing analysis, and correctly constraining these signals alone can massively reduce noise in the analysis. But then you have to check that those signals truly are quasi-static, which demands assertion checks on those signals in all your formal and dynamic verification environments.
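As an illustration of what such a check might look like, here is a minimal sketch of a trace-level quasi-static monitor in Python, under the invented assumption that a signal constrained as quasi-static may only change while a hypothetical config_mode enable is asserted. In practice this would be an SVA assertion bound into simulation, or a formal property, rather than a script; the signal names and trace format are made up for the example.

    # Toy quasi-static check over a recorded trace of (time, config_mode,
    # value) samples for one signal constrained as quasi-static: the value
    # may only change while config_mode is high. Names/format are invented.
    def check_quasi_static(trace):
        violations = []
        prev = None
        for t, config_mode, value in trace:
            if prev is not None and value != prev and not config_mode:
                violations.append(t)  # changed outside a config window
            prev = value
        return violations

    trace = [
        (0, 1, 0),   # programmed during configuration: allowed
        (10, 1, 1),  # still in the config window: allowed
        (20, 0, 1),  # stable in mission mode: fine
        (30, 0, 0),  # changed in mission mode: violation at t=30
    ]
    print(check_quasi_static(trace))  # [30]

A violation here means the quasi-static constraint used to waive crossings in the CDC analysis was unsound, so the waived crossings must be re-examined.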

Cadence/TI CDC/RDC tutorial

Cadence participated in a tutorial at (virtual) DAC 2020, "Prevent Domain Crossings from Breaking Your Chip", which discussed the importance of CDC/RDC analysis beyond structural analysis. Bijitendra Mittra from Cadence India, Venkataraman Ramakrishnan from TI and Sudhakar Surendran from TI India each presented their perspectives. Venkat gave a good general overview of the increasing challenges and the requirements for comprehensive CDC/RDC signoff. Bijitendra drilled down into the analysis details, emphasizing the importance of connecting the structural with the functional and the implementation. He made the point that these need to be closely coupled so that, for example, assumptions/constraints set in CDC/RDC analysis become an intrinsic part of the verification plan for formal/dynamic verification. Folding coverage metrics for CDC/RDC into a comprehensive metrics-driven plan is equally important. Sudhakar gave a design perspective on real-life issues he has run into in both RTL and implementation stages. These included re-convergence, initialization sequences, cascaded asynchronous events, glitch verification, performance analysis and handling hard IP integration, where CDC/RDC issues are not typically understood in .lib or STA models.

The tutorial presented a very comprehensive front-to-back view of true CDC/RDC signoff achieved using Cadence verification solutions. To learn more about the Cadence JasperGold Clock Domain Crossing App, an app that lets users perform comprehensive CDC signoff, please visit the Cadence product page.

All-In-One Extreme Edge with Full Software Flow

What do you do next when you've already introduced an all-in-one extreme edge device, supporting AI and capable of running at ultra-low power, even harvested power? You add a software flow to support solution development and connectivity to the major clouds. For Eta Compute, that means their TENSAI flow.

The vision of a trillion IoT devices only works if the great majority of those devices can operate at ultra-low power, even harvested power. Any higher, and the added power generation burden and field maintenance make the economics of the whole enterprise questionable. Alternatively, reduce the number of devices we expect to need, and the economics of supplying those devices looks shaky. The vision depends on devices that are close to self-sufficient in power.

Adding to the challenge, we increasingly need AI at the extreme edge. This is in part to manage the sensors, to detect locally and communicate only when needed. When we do most of what we need locally, there's no need to worry about privacy and security. Further, we often need to provide real-time response without the latency of a round trip to a gateway or the cloud. And operating expenses go up when we must leverage network or cloud operator services (such as AI).

All-in-one extreme edge

Eta Compute has already been leading the charge on ultra-low-power (ULP) compute, building on their proprietary self-timed logic and continuous voltage and frequency scaling technology. Through partnerships with Arm for Cortex-M and NXP for CoolDSP, they have already established a multi-core IP platform for ULP AI at the extreme edge. This typically runs below 1mW when operating and below 1uA in standby. It can handle a wide range of use cases – image, voice and gesture recognition, sensing and sensor fusion, among other applications. It can run any neural network and supports the flexible quantization now commonly seen in many inference applications.

TENSAI Flow

Semir Haddad (Sr. Director of Product Marketing) told me that Eta Compute's next step along this path is to provide a full-featured software solution to complement their hardware. It is designed to maximize ease of adoption, requiring little to no embedded programming, and they will supply reference designs in support of this goal. The software flow (in development) is called TENSAI. Networks are developed and trained in the cloud in the usual way, e.g. through TensorFlow, then reduced through TensorFlow Lite. The TENSAI compiler takes the handoff to optimize the network for the embedded Eta Compute platform. It also provides all the middleware: the AI kernel, FreeRTOS, the hardware abstraction layer and sensor drivers. With the goal, as I said before, that not a single line of new embedded code should be needed to bring up a reference design.
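The cloud-side reduction step is the standard TensorFlow-to-TensorFlow Lite flow, which looks roughly like the sketch below. The saved-model path, input shape and calibration generator are placeholders, and the proprietary TENSAI compiler stage that follows the .tflite handoff is not shown.

    # Standard TensorFlow -> TensorFlow Lite reduction with post-training
    # quantization; model path and calibration data are placeholders.
    import tensorflow as tf

    def representative_data():
        # Yield a few calibration samples so the converter can pick
        # quantization ranges; shape must match the model's input.
        for _ in range(100):
            yield [tf.random.normal([1, 96, 96, 1])]

    converter = tf.lite.TFLiteConverter.from_saved_model("trained_model/")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    tflite_model = converter.convert()

    with open("model.tflite", "wb") as f:
        f.write(tflite_model)  # handoff point to a downstream edge compiler

The quantized .tflite file is what a platform-specific compiler then maps onto the embedded multi-core target.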

Azure, AWS, Google Cloud support

Data collection connects back to the cloud through a partnership with Edge Impulse (who I mentioned in an earlier blog). They support connections with all the standard clouds – Azure, AWS and Google Cloud (he said they see a lot of activity on Azure). Semir stressed there is an opportunity here to update training for edge devices, harvesting data from the edge to improve the accuracy of abnormality detection, for example. I asked how this would work, since sending a lot of data back from the edge would kill power efficiency. He told me that this would be more common in an early pilot phase, when you're refining training and not so worried about power. Makes sense.

Semir also said that their goal is to provide a platform that is as close to turnkey as possible, except for the AI training data. They even provide example trained networks in the NN Zoo. I doubt they could make this much easier. TENSAI flow is now available. Check HERE for more details.

Cadence on Automotive Safety: Without Security, There is no Safety

Attack vectors and EDA countermeasures

One of the Designer Track sessions at this year's DAC focused on the popular topic of automotive electronics. The title was particularly on-point: The Modern Automobile: A Safety and Security "Hot Zone". The session was chaired by Debdeep Mukhopadhyay, a professor at the Indian Institute of Technology Kharagpur.

This special, invited session can be summarized as follows:

The advent of the Automotive 2.0 era has driven increased integration of electronics and networking into the conventional automobile. The modern electrified automobile can be simply viewed as a connected, embedded system on wheels. Not surprisingly, safety and security concerns are coming increasingly to the forefront. This special session will focus on answers to multiple questions related to automotive safety and security - what are the issues at the system level, what are the standards available today, how do safety and security co-exist (or collide!), and what does it mean to build and verify security in our chips.

Presenters included:

  • Chuck Brokish, Director of Transportation Business Development, Green Hills Software LLC
  • David Foley, Semiconductor Architect, Texas Instruments, Inc.
  • Steve Carlson, Director, Cadence Design Systems

While all the presentations were relevant and on-point, I'd like to focus on the presentation from Steve Carlson at Cadence. I've known Steve for a long time; his career began at LSI Logic, arguably the birthplace of the ASIC. Steve began by pointing out the magnitude of the cybersecurity problem. The attacks are ubiquitous, targeting everything from shipping vessels to pacemakers. Governments are getting involved, and we can expect lots more compliance requirements.

If one looks at the attack vectors for this problem, many of them are at the hardware or hardware/software interface level, so EDA should be able to help. Said another way, after all the time and money invested in software security, it's time for hardware to take center stage.

Steve pointed out that we've seen a lot of work in the functional safety area regarding standards compliance and certification. These techniques will now transfer to the security domain. Steve talked about pre-silicon attack verification – basically a way to validate the robustness of security layers with simulated attacks on the design before tapeout. Formal methods hold great promise for this activity, as they are not dependent on input vectors and the associated "blind spots" they can bring. More on this in a moment.

A comprehensive overview of the various attack vectors and the countermeasures EDA offers was presented. This diagram, included at the top of this post, really drove home the breadth of the problem. Rather than spend an hour on the chart (DAC presentations are short), Steve chose to focus on formal methods. It turns out there are a number of specialized formal security applications that can prove things like data integrity, so this is a promising approach to verify compliance. The breadth of this technology is summarized in the diagram below.

Functional and Security Formal Analysis

Steve ended his discussion with a vision of top-down verification of hardware security. Similar to approaches used for early hardware/software verification, he advocated a top-down approach to model the entire system in package, including the chip, interposer, package and board. This will allow the development of attack tests at a high level that can be used later in the design flow to verify the robustness of the system.

There are industry-level efforts to advance the cause as well. Cadence is working with several organizations to advance the state of testing and compliance, including Accellera. Security is a daunting task; it was good to hear about some positive momentum from Cadence.

If you have a DAC pass, I encourage you to watch this entire designer track session. I believe the material will be available online for an extended period of time.  You can find this session on The Modern Automobile: A Safety and Security “Hot Zone” here.

A lot has been written and even more spoken about artificial intelligence (AI) and its uses; case in point, the use of AI to make autonomous vehicles (AV) a reality. But, surprisingly, not much is discussed about pre-processing the inputs that feed AI algorithms. Understanding how input signals are generated, pre-processed and used by AI algorithms ultimately leads to the need to tightly combine advanced digital signal processing (DSP) with AI processing.

Figure 1: VSORA MPU for DSP and AI processing. Source: VSORA

Today's AI computing units – CPUs, GPUs, FPGAs, ASICs, hardware accelerators, etc. – focus on the execution of the algorithms, overlooking input signal management, which perhaps explains why the issue has never been raised before. Until now, that is. VSORA devised a compact and efficient approach combining advanced DSP with AI algorithmic acceleration on the same silicon, exchanging data via large on-chip memory, setting a new standard for performance, power consumption, efficiency, area and cost. See figure 1.

Fundamentals of Autonomous Vehicles

To understand the issue, let's consider a key AI application: autonomous vehicles. The brain or controller of a self-driving car operates on a three-stage loop: Perception, Planning, and Action. See figure 2.

Figure 2: Autonomous vehicle three-stage loop controller. Source: Shutterstock

Perception

In the Perception stage, the controller learns the environmental characteristics of the vehicle's surroundings. This is accomplished by collecting a variety of data produced by a range of AV sensors, beyond the ordinary sensors monitoring the car's status, such as oil and water temperature, oil and tire pressure, battery charge, light bulb functionality, and the like. The AV sensors encompass a combination of different types of cameras, radars, lidars, ultrasonic devices, etc. The actual type and quantity depend on the vehicle manufacturer; for example, Tesla elected not to use lidar sensors. The data generated by these sensors is processed via compute-intensive DSP algorithms to extract the accurate and vital information needed to ensure safe AV driving. The higher the level of autonomy of the vehicle, the more the vehicle relies on the accuracy and timeliness of what the sensors provide.

Autonomous Vehicle Sensors

AV sensors can be grouped into two broad classes: "cost-effective" and "high-performance." Both types capture data from the vehicle's environment and process it in situ via pre-defined algorithmic processing before sending it to the controller. The difference is in the handling of the pre-processed data.

In cost-effective sensors, pre-processed data is further run through local algorithms, for instance tracking, which generates lists of tracked objects dispatched upstream to the controller. For example, a sensor, be it radar, lidar or camera, may detect a series of objects in front of the car and then run them through a local image classification algorithm in an attempt to identify a pedestrian about to cross a road.

In high-performance sensors, the pre-processed data from all sensors is fed straight through to the controller, where it is run through a series of algorithms: fusing the data with data captured from other sensors for the same object, clustering that combines multiple detections into bigger units, distance transforms, or a particle filter, typically some type of Kalman filter. While the data is unique to each sensor, it corresponds to objects that can be represented by vectors in the real world (x,y location, distance, direction of travel, speed, etc.). Once it is ensured that all the vectors from all sensors are aligned and use the same reference frame, each vector is positioned on an x-y grid in a map. The 2D map populated with the vectors encompasses the vehicle environment and can be described using a 2D matrix (a sketch of this grid construction appears at the end of this section). See figure 3.

Figure 3: Cost-optimized vs. high-performance AV sensors. Source: VSORA

The complex processing generates tracking information. To prevent false information, the controller may track many more objects than are being presented, and through a decision process resolve to track-and-show an object, continue tracking it, or delete it from further consideration. An example of the input information to the first stages could be the 3D lidar point cloud and the 2D and/or 3D camera images.

The two types of sensors lead to significant differences in system requirements. The local algorithms in cost-effective sensors reduce the computing power needed in the controller and the data-transfer bandwidth of the communication with the controller. However, these advantages trade off accuracy because of imperfections in the sensors. In poor light or bad weather, sensors may generate incomplete and ambiguous data that can cause serious problems; missing a pedestrian crossing a road in a blizzard because the camera failed to identify the pedestrian may have dramatic consequences. In high-performance sensors, the amount of data traffic between sensors and control unit is significantly higher, demanding larger bandwidth and far more computing power in the controller to combine and process data from several sensors in the time frame available. The upside is more accurate decisions, since they are based on all sensor data.

The bottom line is that the Perception stage in the AV control unit is greatly dependent on powerful DSP, heavy data traffic and intense memory accesses.
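As a concrete, heavily simplified illustration of the grid construction described above, the sketch below places already-aligned object detections on a 2D occupancy grid and computes a distance transform from the occupied cells. The grid size, resolution and detections are invented, and scipy's distance_transform_edt stands in for what would be a hand-optimized DSP kernel in a real controller.

    # Build a toy occupancy grid from fused detections, then compute the
    # distance from every cell to the nearest occupied cell. Grid size,
    # resolution and the detections are invented for illustration.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    GRID = 100          # 100 x 100 cells
    RES = 0.5           # meters per cell -> a 50m x 50m map around the car

    def to_cell(x, y):
        # Map vehicle-frame coordinates (origin at grid center) to indices.
        return int(GRID / 2 + x / RES), int(GRID / 2 + y / RES)

    # Each detection: (x, y) position in meters, already aligned to a common
    # reference frame (the hard part in practice, as the text explains).
    detections = [(4.0, 1.5), (4.2, 1.4), (-10.0, 8.0)]

    occupancy = np.zeros((GRID, GRID), dtype=bool)
    for x, y in detections:
        occupancy[to_cell(x, y)] = True

    # Distance in meters from each free cell to the nearest occupied cell,
    # useful for clustering decisions and collision margins.
    dist = distance_transform_edt(~occupancy) * RES
    print(dist[to_cell(0.0, 0.0)])  # clearance from the vehicle position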

Challenges

The IEEE article titled “Fusion of LiDAR and Camera Sensor Data for Environment Sensing in Driverless Vehicles” states that “heterogeneous sensors simultaneously capture various physical attributes of the environment.” However, “these multimodal sensor data streams are different from each other in several ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent Perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other.” See figure 4.

Figure 4: Environmental Perception by combining sensor raw data. Source: VSORA

These requisites pose a series of challenges: how to create a geometrical model to align the sensor outputs, how to process them to interpolate missing data with quantifiable uncertainty, how to fuse distance data, how to combine a 3D point cloud from a lidar with luminance data from a camera, and more. To make an autonomous vehicle reliable, accurate and safe, it is imperative to solve these challenges.

As discussed above, the collective AV sensor data is typically combined into an occupancy map that stores information on the relevant individual objects. Clustering identifies objects in the occupancy map, and adding distance transforms to clustering increases the accuracy of tracking and of the entire system. See figure 5.

Figure 5: Occupancy grid. Source: IEEE article referenced above

When tracking objects, it is crucial to know where they are at any given time, and to predict where they may be in the near future. While implementing prediction is relatively simple, the problem explodes in complexity as the number of tracked objects increases. The issue is aggravated when objects disappear and reappear for various reasons, for instance, when two objects move in different directions and suddenly one overlays and hides the other. Distance transforms improve algorithmic decisions by identifying distances between objects, and thereby help overcome or reduce sensor-induced errors and anomalies. Clustering also helps prune the decision trees: the probability of having accurate information on an object increases substantially with 300 parallel pings on it versus a single ping. The same techniques help decide when to start and end tracking of a real or false object.

An adaptive particle filter, typically based on a Kalman filter, may be used as the tracking framework. For example, a recursive Bayesian estimation algorithm can handle non-linear and non-Gaussian state-estimation problems. Just as important, low latency in the communication between the Perception and the Decision stages is essential for accuracy and reliability. At a speed of 240 km/h, a vehicle covers 67 meters every second, demanding system responses much faster than one second per iteration to avoid catastrophic outcomes.

These considerations highlight the complexity of the task and the conspicuous computing power required to confront it. Only an advanced DSP implementation in an ad-hoc design architecture can solve these issues.
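To ground the tracking discussion, here is a minimal constant-velocity Kalman filter in Python/NumPy, the linear-Gaussian special case of the recursive Bayesian estimation mentioned above. The motion model, noise covariances and measurement stream are illustrative assumptions; a production AV tracker would use richer motion models and, for non-linear or non-Gaussian cases, a particle filter.

```python
import numpy as np

dt = 0.1  # seconds between sensor updates (assumed)

# Constant-velocity motion model: state = [x, y, vx, vy]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # we only measure position
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01             # process noise (assumed)
R = np.eye(2) * 0.25             # measurement noise (assumed)

x = np.zeros(4)                  # initial state estimate
P = np.eye(4)                    # initial state covariance

def kalman_step(x, P, z):
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with measurement z.
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Feed noisy position measurements of an object moving at ~14 m/s in x.
rng = np.random.default_rng(1)
for t in range(50):
    truth = np.array([14.0 * dt * t, 2.0])
    z = truth + rng.normal(0.0, 0.5, 2)
    x, P = kalman_step(x, P, z)

print("estimated velocity:", x[2:])  # should approach [14, 0]
```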

Planning / Decision

The Perception stage is followed by the Planning or Decision stage, which establishes a collision-free, safe path to the vehicle's destination. The objective is achieved by combining risk assessment, or situation understanding, with mission and route planning. These tasks require marshalling vehicle dynamics, traffic rules, road boundaries and potential obstacles. The traditional procedure for the Planning stage progresses through four steps; a toy illustration of the first step follows the list.

1. Route planning searches for the best route from the origin to the destination. Traffic information generated through C-V2X inputs may be included at this stage.
2. Path planning determines the geometric trace the vehicle should drive along to reach the destination, following the set boundaries (road/lane) and traffic rules while avoiding obstacles.
3. Manoeuvre choice selects, based on vehicle position and speed, the best vehicle actions to realize the path identified in step 2, for example, “turn right,” “go straight,” or “change lane to the left.”
4. Trajectory planning handles the vehicle's actual transition from one state to the next in real time. It involves vehicle constraints and navigation rules (lane/road boundaries, traffic rules, movement, …) while avoiding obstacles on the planned path (other vehicles, road conditions, …). Since trajectory planning is both time and velocity dependent, it can be regarded as the actual motion planning for the vehicle. During this time, the system evaluates the error between the actual location and the planned trajectory and revises the trajectory plan if needed.

The bottom line is that the Planning stage in the AV control unit depends heavily on powerful AI processing and intense memory accesses.
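Route planning is, at its core, a graph search. The sketch below runs a deliberately tiny A* search over a hypothetical obstacle grid (Python, standard library only); real route planners search road-network graphs with traffic-aware edge costs, which this toy omits.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (cost + 1 + h((nr, nc)), cost + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None  # no route found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```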

Action: Execution/Control

The final stage in the autonomous vehicle controller is the Action stage, which executes the trajectory plan computed by the Planning stage, for example, activating a turn signal, moving to an exit lane, and turning off the current road. As the actions are executed, the environmental situation changes, forcing the entire process to restart from the Perception stage.
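The three stages form a closed loop. The skeleton below (Python, with placeholder functions that stand in for the real stages) shows the shape of that loop; the 20 Hz cycle time is an assumption, and an actual controller would run the stages concurrently on dedicated hardware.

```python
import time

CYCLE_S = 0.05  # assumed 20 Hz control cycle

def perceive():        # gather and fuse sensor data (placeholder)
    return {"objects": []}

def plan(world):       # produce a trajectory from the world model (placeholder)
    return {"trajectory": "keep-lane"}

def act(decision):     # drive actuators: steering, throttle, brakes (placeholder)
    pass

def control_loop(cycles):
    for _ in range(cycles):
        start = time.monotonic()
        world = perceive()       # Perception stage
        decision = plan(world)   # Planning/Decision stage
        act(decision)            # Action stage
        # Sleep out the remainder of the cycle; the environment has
        # changed by now, so the next iteration re-perceives it.
        time.sleep(max(0.0, CYCLE_S - (time.monotonic() - start)))

control_loop(cycles=3)
```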

VSORA

VSORA, a startup with decades of experience in creating DSP designs for the wireless and communication industry, conceived a re-programmable, scalable, software-driven multi-core DSP/AI solution that ensures cost-effective development. At a high level, the VSORA architecture is similar to the DSP architecture described in figure 6.

Figure 6: VSORA MPU high-level view

Called the Matrix Processing Unit (MPU) for its handling of multi-dimensional matrices, the device can be configured with a variable number of cores, and each core is in turn configurable through a set of parameters. For DSP applications, the number of ALUs can be programmed in multiples of 8, up to 1,024 per core. For AI applications, the number of MACs can be programmed from 256 to 65,536 per core. The cores can further be programmed in terms of quantization, on-chip memory sizes, and so forth. The architectures of the two types of MPU are similar but not identical.
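As a quick sketch of those configuration rules as code (Python), the class below encodes the stated numeric limits. The class and field names are invented for illustration; only the constraints, ALUs in multiples of 8 up to 1,024 and MACs from 256 to 65,536, come from the description above.

```python
from dataclasses import dataclass

@dataclass
class MpuCoreConfig:
    """Hypothetical per-core configuration, mirroring the stated limits."""
    kind: str          # "dsp" or "ai"
    alus: int = 0      # DSP cores: multiples of 8, up to 1,024
    macs: int = 0      # AI cores: 256 to 65,536

    def validate(self):
        if self.kind == "dsp":
            if not (0 < self.alus <= 1024 and self.alus % 8 == 0):
                raise ValueError("DSP core: ALUs must be a multiple of 8, <= 1,024")
        elif self.kind == "ai":
            if not (256 <= self.macs <= 65536):
                raise ValueError("AI core: MACs must be between 256 and 65,536")
        else:
            raise ValueError("kind must be 'dsp' or 'ai'")

# Example: the configuration reported in the Conclusions below.
perception_core = MpuCoreConfig(kind="dsp", alus=512)
planning_core = MpuCoreConfig(kind="ai", macs=16384)
for core in (perception_core, planning_core):
    core.validate()
print("both core configurations are valid")
```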

VSORA: Combining Signal Processing and AI

As discussed above, the design of a controller managing autonomous vehicles relies heavily on a variety of algorithms for signal processing and for AI, both requiring a high level of performance while keeping power consumption and cost to a minimum. To compound the issue, these algorithms undergo continuous refinement and updating, making a solution cast in hardware unacceptable. The VSORA solution is unique in that the same hardware can easily handle both the Perception and the Planning stages of the controller loop discussed above.

Figure 7: VSORA MPU for Perception & Planning Processing

Specifically, the Perception stage could be mapped onto an optimized DSP MPU, and the Planning stage onto a second MPU configured to accelerate AI algorithms. The two MPUs share a large on-chip memory, with the DSP MPU writing its results into the memory and the AI MPU reading those results out of it, in sequence, preventing memory conflicts. See figure 7. The setup eliminates the performance bottleneck associated with external memories and their restricted data-transfer bandwidth. It also reduces latency and power consumption by drastically shortening the data path to and from memory. The actual implementation of the entire system fits in a small footprint, consumes little power, provides high performance, and is remarkably efficient. The VSORA architecture is ideally suited to serve the AI/ADAS industry.
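The DSP-writes/AI-reads handshake over shared memory behaves like a classic double-buffered producer-consumer pipeline. The Python sketch below models that sequencing with two buffers and a pair of queues; it is a software analogy of the arrangement described above, not a description of VSORA's actual memory controller.

```python
import queue
import threading

# Two shared buffers stand in for the shared on-chip memory banks.
buffers = [None, None]
free = queue.Queue()    # indices of buffers the AI MPU has released
ready = queue.Queue()   # indices of buffers the DSP MPU has filled
for i in (0, 1):
    free.put(i)

def dsp_mpu(frames):
    """Producer: 'Perception' results written into alternating buffers."""
    for i in range(frames):
        buf = free.get()                     # wait for a released buffer
        buffers[buf] = f"occupancy-map-{i}"  # stand-in for DSP output
        ready.put(buf)                       # hand the buffer to the AI MPU

def ai_mpu(frames):
    """Consumer: 'Planning' reads each buffer only after the DSP fills it."""
    for _ in range(frames):
        buf = ready.get()                    # blocks until data is ready
        print("planning on", buffers[buf])
        free.put(buf)                        # return the buffer to the DSP MPU

frames = 4
t = threading.Thread(target=dsp_mpu, args=(frames,))
t.start()
ai_mpu(frames)
t.join()
```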

Conclusions

Early implementations of a VSORA system consisting of 512 ALUs and 16k MACs with 32 MWords of memory, built on a 7 nm process technology node, fit in a silicon area of approximately 25 mm². Running at 2 GHz, the DSP MPU executed about 1 TMAC per second (512 ALUs × 2 GHz ≈ 10¹² MAC/second), and the AI MPU delivered a performance of 65 TOPS (16,384 MACs × 2 GHz × 2 operations per MAC ≈ 65.5 × 10¹² operations/second). These results meet the challenges posed by today's autonomous vehicles. The architecture is scalable to the extent that by doubling the area to 50 mm², it would be capable of handling the requirements of the next generations of autonomous vehicles.

Special thanks to co-author Jan Pantzar, VSORA Sales VP.

Webinar - Designing and Verifying HBM ESD Protection Networks, August 11, 10AM PST
by Tom Simon

Every chip needs ESD protection, especially RF, analog and nanometer-node designs. Because each type of design has specific needs relating to I/Os, pad rings, operating voltage, process, etc., it is important that the ESD protection network be carefully tailored to the design. Also, because of interactions between the design and its ESD protection network, this work cannot wait until the end of the design cycle and simply get slapped on. At each stage of the design process, from library definition and modeling through circuit design and layout to verification, the requirements of ESD protection need to be considered.

Because of the level of expertise required, a lot of design teams look for a comprehensive solution from companies that specialize in ESD design. The solution provided can cover everything from cell design to verification of the completed design for ESD. Sofics and Magwel have teamed up to work with customers, together offering everything that is needed to implement and verify a comprehensive ESD solution. Sofics has deep experience on the design side, and Magwel develops ESD discharge event verification tools. Partnering with Sofics and Magwel brings a lot to the table.

Sofics and Magwel will be hosting a joint webinar to discuss the complete solution they offer by working together with customers. In this webinar, Sofics and Magwel will discuss the areas they each focus on and the touch points between them, the customer and the foundry. There will be a presentation from each company that goes into detail on its portion of the solution, followed by a Q&A.

Sofics has a deep portfolio of work with processes from leading and specialty foundries. As a result, they have learned how to optimize many important performance criteria. For low power, they offer cells with extremely low leakage. Where RF and high-speed performance is important, they minimize capacitance and resistance. At the same time, their ESD protection structures protect against multiple types of ESD failures. In the webinar, Sofics will discuss these and other specific techniques they use.
Magwel is working with Sofics so that its ESDi tool works efficiently with the ESD devices and process parameters of each design. ESDi can verify that the ESD protection network will work correctly during ESD discharge events. Using jointly prepared device models and technology files, Magwel's ESDi can simulate every ESD discharge event, including the voltage drop and current levels for each device and net. The webinar will include an overview of Magwel's ESDi, covering its analysis capabilities, violation reporting and debugging features. The webinar is scheduled for August 11th at 10AM PST. You can sign up for the webinar online here.

KLAC Solid QTR and Guide With No China Worries
by Robert Maire, Semiconductor Advisors

KLA reports excellent June & Better Sept Guide-
Orbotech Diversification Helps provide growth-
Mix is perfect-China working fine-
Foundry is solid-

Solid execution and Financials

KLAC reported $1.46B in revenues and $2.73 non-GAAP EPS with gross margins of 60.3%. That is versus expectations of $1.42B and EPS of $2.41... a handy beat. The guide is for September revenues of $1.48B ±$75M and EPS of $2.42 to $3.06, versus the street's $1.4B and $2.29 in EPS. Perhaps most important, management feels that the second half will be up versus the first half, with a chance of more memory in the December quarter.

Diversification works

Perhaps the only negative in the quarter was that the core "process control" business (the old KLA) was down slightly, 2%, quarter over quarter. The new business segments of semiconductor process and PCB & component were up sharply, 18% and 26% respectively. Quarters like this are one of the key reasons for KLA's diversification, and it worked like it should. Wafer inspection was down 9%, with patterning up 3% Q/Q; overall business was up 3% Q/Q. Given that ASML saw a very sharp drop in orders for litho tools, we are not surprised to see a bit of softness in wafer inspection. The fact that patterning was up is likely a bit better news, as it could have been soft as well. We would expect patterning sales at KLA to trail litho sales at ASML by a bit.

China sales continue unabated and untouched

Much as we saw with Lam, KLA's China sales seem to have seen near zero impact. China business at KLA is neck and neck with Taiwan (TSMC) business at 26% each. Korea is not far behind at 21% (Samsung), with the US at 11% (Intel and a smidge of Micron?). From management's perspective there seems to be no expectation of China slowing. You certainly wouldn't know there were any restrictions whatsoever on China sales of US semiconductor technology.

Financials remain the best in the business

With gross margins of 60.3%, operating margin at 35% and an expectation of further improvement, it's hard to complain.
The dividend was increased again, to $0.90 per quarter, and free cash flow was great at $411M for the quarter.

Customer demand remains strong

Management commented several times that end-user demand remained very strong, with no expectation of anything other than getting better. Covid-19 seems almost a distant memory, as nearly all supply chain issues appear long since resolved. Even though the country and the world seem preoccupied with Covid and political instability, you wouldn't know it from KLA's report, which reads much like any other up cycle in the industry. Intel's recent admission of its 7nm stumble doesn't look to have an impact on KLA; if anything, Intel should likely be spending more with KLA to try to find and fix the yield-busting problems. We also continue to hear that Samsung still has yield issues, while the only company that seems to be flying along is TSMC... perhaps they buy a lot of KLA tools to keep things running so smoothly. China's domestic chip producers are obviously spending big to get up the steep learning curve. There seems to be no shortage of money to spend on the semiconductor industry in China, and KLA tools are likely at the top of their wish list as they accelerate learning.

The Stock

The stock was off slightly in the aftermarket, likely because the core business was flattish and because the stock was up big today in advance of earnings. We would not be surprised to see some profit taking given the strong ride KLA stock has been on. The sector as a whole has been on fire, as semiconductors continue to be a way to play the anti-Covid investment strategy driven by demand for work-from-home, remote learning, etc. The ongoing strength seems to be lasting longer than expected, especially if it continues all the way through the second half of the year unabated.
KLA remains one of our favorite all-time holdings in the group.

KLAC Solid QTR and Guide With No China Worries
by Robert Maire on 08-09-2020 at 8:00 am


KLA reports excellent June & Better Sept Guide-
Orbotech Diversification Helps provide growth-
Mix is perfect-China working fine-
Foundry is solid-

Solid execution and Financials
KLAC reported $1.46B in revenues and $2.73 Non GAAP with gross margins of 60.3%. That is versus expectations of $1.42B and EPS of $2.41…a… Read More


WEBINAR: Security Verification of Root of Trust for Xilinx
by Bernard Murphy on 08-07-2020 at 6:00 am


Tortuga Logic is hosting a webinar on Tuesday, August 18th from 12 to 1PM PDT, in which Xilinx will present their experiences in using the Tortuga Logic Radix-S and Radix-M products for security verification of root of trust in their advanced SoC FPGAs. REGISTER HERE to attend the webinar.

SECURITY CHALLENGES
In general security… Read More


HCL Webinar Series – HCL VersionVault Delivers Version Control and More
by Mike Gianfagna on 08-06-2020 at 10:00 am


HCL is an interesting organization. You may know them as an Indian company that provides software and hardware services.  At about $10B US and over 110,000 employees working around the world, they are indeed a force in the industry. They’ve also created a software company called HCL Software that develops tools and technologies… Read More


Murphy’s Law vs Moore’s Law: How Intel Lost its Dominance in the Computer Industry
by Michael Bruck on 08-06-2020 at 6:00 am


Last week, Intel announced its second-quarter financial results which easily beat the analysts’ consensus expectations by a handsome margin. Yet the stock price plummeted by over 16% right after the earnings call with management. Seven analysts downgraded the stock to a sell and the common theme on all the downgrades was that… Read More


How Samtec Puts the Customer First
by Mike Gianfagna on 08-05-2020 at 10:00 am


An exceptional customer experience starts before the sale. Successful companies realize it never ends. Dedicated post-sales support and a robust ecosystem for aftermarket product extensions are ingredients that tend to delight the customer. These comments are relevant in the consumer sector, but they apply to high tech as… Read More


Structural CDC Analysis Signoff? Think Again.
by Bernard Murphy on 08-05-2020 at 6:00 am


Talking not so long ago to a friend from my Atrenta days, I learned that the great majority of design teams still run purely structural CDC analysis. You make sure asynchronous clock domains are suitably represented in the SDC, find all places where data crosses between those domains that require a synchronizer, gray-coded FIFO… Read More


All-In-One Extreme Edge with Full Software Flow
by Bernard Murphy on 08-04-2020 at 2:00 pm


What do you do next when you’ve already introduced an all-in-one extreme edge device, supporting AI and capable of running at ultra-low power, even harvested power? You add a software flow to support solution development and connectivity to the major clouds. For Eta Compute, their TENSAI flow.

The vision of a trillion IoT… Read More


Cadence on Automotive Safety: Without Security, There is no Safety
by Mike Gianfagna on 08-04-2020 at 10:00 am


One of the Designer Track sessions at this year’s DAC focused on the popular topic of automotive electronics. The title was particularly on-point: The Modern Automobile: A Safety and Security “Hot Zone”. The session was chaired by Debdeep Mukhopadhyay, a Professor at the Indian Institute of Technology in Kharagpur.

This special, invited… Read More


Combining AI and Advanced Signal Processing on the Same Device
by Lauro Rizzatti on 08-04-2020 at 6:00 am


A lot has been written and even more spoken about artificial intelligence (AI) and its uses. Case in point, the use of AI to make autonomous vehicles (AV) a reality. But, surprisingly, not much is discussed on pre-processing the inputs feeding AI algorithms. Understanding how input signals are generated, pre-processed and used… Read More


Webinar – Designing and Verifying HBM ESD Protection Networks, August 11, 10AM PST
by Tom Simon on 08-03-2020 at 10:00 am


Every chip needs ESD protection, especially RF, analog and nm designs. Because each type of design has specific needs relating to IOs, pad rings, operating voltage, process, etc. it is important that the ESD protection network is carefully tailored to the design. Also because of interactions between the design and its ESD protection… Read More