
We Don’t Want IoT Cybersecurity Regulations
by Matthew Rosenquist on 10-14-2020 at 6:00 am


It simply makes no sense to call for IoT devices to be certified safe-and-secure. Before you get bent out of shape, hear me out.

Regulations are unwieldy blunt instruments, best left as a last resort. Cybersecurity regulations are not nimble, tend to be outdated the day they are instituted, and become a lowest common threshold for an industry to follow. This stifles security innovation and the application of best practices. On the upside, regulations do force industries that have ignored basic security practices to meet a common standard. But history has shown those industries rarely go any farther than the regulatory requirements. Consider the data breaches we see in the news every week: almost all of those organizations are compliant with regulations, yet they are losing data records by the billions. Compliance does not equal security!

Yet some are pounding the government drums, advocating for IoT certification regulations. I find their beliefs to be shortsighted and premature.

Regulations are definitely needed in some situations, but only for narrow applications to accomplish specific goals. Protecting the privacy of children online, securing sensitive healthcare records, or requiring controls around credit card transactions are all codified to some extent in regulations.

I am a passionate security advocate, some would even go so far as to say a fanatic, but I don’t like this idea of requiring IoT devices to be certified safe and secure. It is simply too broad and undermines the economic model which is driving rapid innovation.

We don’t require such certification for phones, tablets, personal computers, or servers. So why would anyone think requiring certification for low-powered IoT devices is a good strategy?

Certification adds significant costs and time to product development. IoT devices are emerging for a vast variety of uses and tend to be less expensive than fully-featured computing systems. The scale of validation is another problem, as the number of IoT devices will soon exceed 50 billion. Determining who will certify entirely new classes of devices, and what criteria will be accepted, is a political nightmare. Operationalizing such requirements at that massive scale will be expensive. The bureaucracy and costs will add tremendous friction to the market, pushing out many companies and products.

There is no doubt IoT needs significantly more security, but recommending overly broad regulations is very premature and likely damaging to everyone who benefits from smart devices. There are many other options and solutions that could deliver much better protection at a lower cost without catastrophically impeding innovation, competitiveness, and healthy market cycles. Establishing standards and best practices for design and validation is a great start. Teaching consumers to recognize and value secure designs creates a competitive advantage for manufacturers to challenge each other. Open bug bounties, public security research, and sharing of penetration testing certifications would drive better processes for the IoT industry.

If such practices fail to be adopted or are not sufficient, then we should discuss regulation. But first, we must pursue more optimized avenues to establish safety and security in partnership with the IoT industry, so the ecosystem can become more adaptable to evolving threats, support innovation, and be trustworthy for the benefit of all users. Let us not rush to a model of inflexible regulations, as they should only be considered as the last option.


A Look Inside the Cloud at the Arm DevSummit 2020
by Mike Gianfagna on 10-13-2020 at 10:00 am

Executive roundtable speakers

Virtual conferences are getting better all the time. Easy-to-navigate agendas, good production value in terms of visual presentation, professionally produced video segments and interspersed live events all contribute to the experience. Arm held their developers’ summit in the US on October 6-8, and it had all the attributes of a good virtual conference experience. One of the live events was an executive roundtable that took a look inside the cloud to see what impact Arm is having there.

First, a look at the panel:

Chris was the moderator. His organization is responsible for the proliferation of Arm-based solutions throughout the data infrastructure of today and tomorrow, from cloud computing to the network edge. Prior to joining Arm, Chris served as senior vice president of Devices Products at Western Digital Corporation. Previously, Chris was the vice president of Marketing at Luxtera, a silicon photonics startup, after spending over nine years at Broadcom.

Don founded SmugMug in 2002 with a mission to support a rapidly growing global community of photographers, focusing his passion, expertise, and business on serving the only shareholders he believes truly matter: his customers. Personally investing in everything from culture to code to customer support over the past 17 years, Don successfully bootstrapped SmugMug Inc., which purchased Flickr from Yahoo in 2018, not only to profitability but also into the world’s largest and most influential photographer-focused community.

Liz is a developer advocate, labor and ethics organizer, and Site Reliability Engineer (SRE) with 16+ years of experience. She is an advocate at Honeycomb for the SRE and Observability communities, and previously was an SRE working on products ranging from the Google Cloud Load Balancer to Google Flights. She lives in Brooklyn with her wife Elly, metamours, and a Samoyed/Golden Retriever mix, and in San Francisco and Seattle with her other partners. She plays classical piano, leads an EVE Online alliance, and advocates for transgender rights.

David joined AWS in 2007, as a software developer based in Cape Town, working on the early development of Amazon EC2. Over the last 12 years, he has had several roles within Amazon EC2, working on shaping the service into what it is today. Prior to joining Amazon, David worked as a software developer within a financial industry startup.

The discussion began with Dave Brown responding to Chris’s question, “Why is AWS building Arm-based CPUs?” By the way, Amazon EC2 (Elastic Compute Cloud) is the part of Amazon Web Services that provides elastic, scalable infrastructure. Dave explained the whole thing started around 2012 with a rather familiar scenario: how to reduce the workload on the mainstream processing system by offloading to dedicated accelerators. Dave referred to these accelerators as offload cards. This is rather common today, but I would say it was advanced thinking in 2012.

In 2018, based on the performance they were seeing from this approach, AWS launched the first server chip with an Arm core that could be used as an instance type for AWS customers, and the Graviton was born. In 2019, Graviton2 was launched, providing a significant performance boost. Dave quoted 40 percent better performance for Graviton2 instance types compared to other architectures across a wide range of customer applications.

Don MacAskill then weighed in on SmugMug’s experience with Arm in the cloud. It turns out photo serving was the first production use case for the AWS Graviton. Don’s perspective was actually not performance-centric, but rather economy-based. He explained that image processing workloads are often not CPU-bound. Rather they are bound by network bandwidth, memory, and storage I/O. This creates the situation where these applications typically over-pay for premium compute capacity that is actually not used. The Graviton architecture provided a much more cost-effective processing unit for these workloads that was actually quite fast as well. A perfect match.

Liz discussed her experiences with the Graviton architecture at Honeycomb, an observability tool that gathers telemetry data. It’s interesting to note that this system receives billions of events from cloud applications. Liz explained they began using Graviton2 about eight months ago, making them a relatively new user. So far, it’s a win, with a 20-30 percent performance improvement and a 20 percent price reduction.

The panel then discussed several other perspectives, including the effort to convert to Graviton. It’s an interesting and compelling dialogue. You can view the entire presentation, and other keynotes and panels, by logging in here. If you don’t have an account, it’s easy to set one up. This is a great event to take a look inside the cloud and see what impact Arm is having there.


Randomization Fools Us Some of the Time
by Bernard Murphy on 10-13-2020 at 6:00 am


Though hopefully not some of us all of the time. Randomization is a technique used in verification to improve coverage in testing. You develop tests you know you have to run, then you throw randomization on top of that to search around those starter tests, to explore possibilities you haven’t considered. Truly random tests are not actually very useful. Many won’t represent realistic possibilities, making you waste time and compute resources on useless verification. More useful is to constrain randomization in ways that should ensure the randomized tests you run are still meaningful. Unsurprisingly, this is known as constrained random testing, a mainstay in functional verification today. It’s a low-effort way to increase coverage. Or is it? Constrained randomization fools us sometimes. Dave Rich at Mentor just released a white paper on that topic.

Mixing variable types

SystemVerilog is pretty easy-going about letting you mix types in expressions, a philosophy inherited I assume from C. In SV you can have even more finely specified word sizes than in C, but the same principle holds. From a few hundred feet up they’re all just values. Throw them together into a complex expression and let the compiler figure out the details. Especially in constraints, when we’re not trying to synthesize hardware, we just want to calculate.

But the devil is in those details. Variables in a constraint sub-expression may need to be extended for correct evaluation. Expressions may overflow, with unexpected consequences. We’re pretty careful about this kind of thing in synthesis, perhaps less so in constraints. Dave uses an example expression A+B>>C>D to illustrate. This is already ugly in relying on implicit operator precedence, but beyond that, in his example A is 3-bit, B and D are 4-bit, and C is an integer. The sum A+B is evaluated at a fixed width, so it may be truncated before the shift, and as a result the comparison may not deliver what you expected. This is the first problem: an expression will evaluate the way the language reference manual says it should, possibly not the way you intended. There is no option for “do what I mean, not what I say” in the language.
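To make the width effect concrete, here is a small Python sketch (my own, not from the white paper) that emulates the fixed-width evaluation of Dave’s A+B>>C>D example, with A 3-bit and B, D 4-bit as in his write-up; the mask mimics SystemVerilog evaluating the left-hand side at the widest operand width, four bits:

```python
def sv_eval(a, b, c, d):
    """Emulate the SystemVerilog evaluation of (A + B) >> C > D when
    A is 3-bit and B, D are 4-bit: the left-hand side is evaluated at
    max(3, 4) = 4 bits, so the carry out of A + B is simply lost."""
    MASK = (1 << 4) - 1
    lhs = ((a + b) & MASK) >> c   # sum wraps modulo 16 before the shift
    return lhs > d

# With unbounded integers, (7 + 15) >> 1 = 11, and 11 > 5 holds...
print(((7 + 15) >> 1) > 5)   # True
# ...but at 4 bits, 7 + 15 wraps to 6, and 6 >> 1 = 3 is not > 5.
print(sv_eval(7, 15, 1, 5))  # False
```

The same expression quietly gives two different answers depending on the evaluation width, which is exactly the kind of surprise a constraint solver will happily explore for you.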

Rich shares other examples such as comparing signed and unsigned variables, where a signed value unintentionally overflows from a positive value to a negative value. Like most bugs, obvious when you see it, but easy to overlook when you forget one of the variables is signed.

Randomization adds more devilry

So far this is about being careful with calculations in SystemVerilog, a generally good practice whether or not you’re using those expressions in constraints. However, it’s one thing to carefully reason your way through each sub-expression when the values involved make sense to you. Though Dave doesn’t mention this, I suspect there’s an additional level of danger when those variables are randomized. Did you really reason your way through all the possible randomized values they could take on?

There’s one other concern in any kind of random generation, especially constrained random. That’s distribution. You want to avoid generated tests heavily weighted to certain variable values with little testing for other values. Constraints skew distributions; this is unavoidable. You need to be able to control that skew. Dave gives some hints on how this can be managed.
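As a toy illustration of that skew (my own example, not from the paper): take two 4-bit random variables constrained so that a < b, and sample the legal pairs uniformly. The marginal distribution of a ends up heavily weighted toward small values:

```python
import random
from collections import Counter

def sample_marginal(n=100_000, width=4, seed=0):
    """Rejection-sample (a, b) uniformly over all pairs with a < b,
    and count how often each value of a survives."""
    rng = random.Random(seed)
    counts = Counter()
    hi = 1 << width
    kept = 0
    while kept < n:
        a, b = rng.randrange(hi), rng.randrange(hi)
        if a < b:           # keep only pairs satisfying the constraint
            counts[a] += 1
            kept += 1
    return counts

counts = sample_marginal()
# a = 0 is legal with 15 choices of b, a = 14 with only one,
# so small values of a dominate by roughly 15:1, and a = 15 never occurs.
print(counts[0], counts[14], counts[15])
```

If your coverage goal cares about large values of a, a solver with this kind of distribution will rarely get there on its own, which is why controlling the skew matters.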

Good white paper. You can read it in full HERE.

Also Read:

Siemens PAVE360 Stepping Up to Digital Twins

Verifying Warm Memory. Virtualizing to manage complexity

Trusted IoT Ecosystem for Security – Created by the GSA and Chaired by Mentor/Siemens


Tempus: Delivering Faster Timing Signoff with Optimal PPA
by Mike Gianfagna on 10-12-2020 at 10:00 am


In July, I explored the benefits of the new Cadence Tempus™ Power Integrity Solution. In that piece, I explored some of the unique capabilities of this new tool with Brandon Bautz, senior product management group director and Hitendra Divecha, product management director in the Digital & Signoff Group at Cadence. I recently had the opportunity to speak with these two gentlemen again. This time, we explored the new 20.1 release of the Tempus Timing Signoff Solution in terms of its ability to deliver faster timing signoff with optimal PPA results. Once again, I was impressed by the information they provided, this time about how they are addressing the customers’ time-to-market constraints.

We began our discussion with a review of the design challenges and customer requirements. It’s well-known that design and modeling complexity are increasing at advanced nodes; and while the competitive marketplace demands higher performing devices (for longer battery life, faster compute, etc.), the time-to-market window for these devices continues to shrink. Hitendra presented a very concise summary of all these forces, included below. I haven’t seen such a coherent view before—it’s worth a look.

In order to address time-to-market challenges, it’s clear that the following five items, in order by priority level, are the key customer requirements:

  1. Fewest iterations
  2. Optimization/the best PPA
  3. Fastest design closure
  4. Usability/ease-of-use
  5. World-class support

Hitendra and Brandon then went into several dimensions of the 20.1 Tempus release, which address these customer requirements while dealing with the myriad of challenges listed above. I’ll provide a short summary of each dimension here. It’s how Cadence delivers faster timing signoff with best-in-class PPA, but above all, it reduces the customers’ time-to-market challenges.

Integration with the Innovus Implementation System

The integration between various signoff quality engines at Cadence has served them well. With this integration, Cadence provides an ECO flow that is physically aware and embeds the power of path-based analysis inside the digital full flow. This seamless integration puts signoff-quality timing and power analysis in the hands of the place-and-route engineer.

As a result of the integration, engineers can achieve higher-quality block-level implementation and smoother signoff at the chip level. One can achieve 2X faster convergence with improved PPA. For example, Renesas presented their results using Tempus ECO at CadenceLIVE. They reported a designer time/effort reduction of 50% with a 10% power reduction as well. It’s not surprising that approximately 80% of Innovus customers are using Tempus ECO.

Machine Learning

Cadence has multiple initiatives to utilize machine learning (ML) techniques to drive runtime and power, performance and area (PPA) gains throughout its product lines. In the case of Tempus ECO, Cadence utilized an “ML outside” application in Cadence-speak to “learn” from numerous advanced-node designs. By analyzing the various design characteristics (slack, congestion, etc.), Cadence enhanced the optimization algorithms to improve runtime by 2X to 3.5X while still maintaining excellent PPA results.

SmartHub

This technology delivers a rich debugging toolbox through a GUI. Besides improving usability, it’s a key method that allows signoff quality information to be delivered to the place-and-route engineer in an easy-to-understand graphical manner.

C-MMMC

C-MMMC stands for concurrent multi-mode/multi-corner static timing analysis (STA). This technology provides a significant speedup in runtime for the analysis of multiple timing views by combining them into a single run. A case study from Inphi highlighted their use of C-MMMC with physically aware ECO to accelerate full-chip closure and signoff by 2X. Impressive.

SmartMMMC Optimization

With the number of views increasing for advanced nodes to 200 or more, it becomes necessary to compact these views to manage turnaround time. SmartMMMC automatically accelerates optimization across a large number of views with virtually no PPA penalty. Designers significantly benefit from this approach because they can more easily close timing across all views in a single optimization pass.

High-Capacity ECO

Beyond view count, there are also unique challenges associated with optimization of very large designs. High-capacity ECO enables the efficient optimization of large, full chip designs in a flat, easy-to-use flow. A CadenceLIVE case study from Marvell was discussed. Using this approach, Marvell was able to reduce runtime from 27 hours (traditional hierarchical Tempus ECO) to 5 hours (Tempus full-chip ECO). More impressive results.

Distributed Static Timing Analysis (DSTA) 

This one has been around a while but is quite critical to signing off extremely large designs that exist at advanced nodes. Think of performing STA on 300 million to 1 billion instances. Doing this on a huge single machine would be prohibitively expensive whereas distributing the problem across multiple, smaller machines is preferable.

The problem here is partitioning the design in a smart way so that the communication between parallel machines doesn’t negate all the benefits. Cadence has figured out a way to do just that. Given DSTA’s scalable nature, the technology is well-suited to cloud deployment. 

Summary

So ends another chapter in the Tempus story. I learned enough during my conversation with Brandon and Hitendra to know the story is far from over. There will be more installments. So, this is how Tempus reduces time-to-market challenges and delivers faster timing signoff with optimal PPA results. The key takeaway for me is that the Tempus integration with Innovus is the key driver, and concurrent power and timing optimizations produce exceptional results. Also, SmartHub is impressive, enabling designers to quickly converge on their designs, while full-chip ECO and DSTA allow faster design closure and shorter turnaround times.

You can learn more about the Cadence Tempus timing signoff solution here.

Also Read

Bug Trace Minimization. Innovation in Verification

Anirudh CadenceLIVE Plays Up Computational Software

Lip-Bu Hyperscaler Cast Kicks off CadenceLIVE


Verification IP Coverage
by Daniel Nenni on 10-12-2020 at 6:00 am


I am pleased to introduce Truechip to the SemiWiki community. Truechip is a leader in the design and verification IP solutions market, one of the fastest growing market segments we track. Truechip has been serving customers for more than 10 years, specializing in VIP integration, customization, and SoC verification.

Founded in 2008, Truechip’s corporate vision is to create world-class verification IP solutions, to provide expert consultancy to the ASIC and SoC design market, to design ASICs and SoCs from architecture to working silicon, and to be the leading provider of semiconductor IP solutions: a one-stop shop for design and verification.

Truechip is well known here in Silicon Valley for their collaborative customer support and services. Nitin Kishore is the Truechip founder and CEO. Nitin and Truechip are both semiconductor success stories worthy of telling so this is a great opportunity for SemiWiki, absolutely.

Nitin started his career as a design engineer at ControlNet then spent 10 years at Freescale Semiconductor as Sr. Engineer and Design Manager. Seeing an opportunity, as natural born entrepreneurs do, Nitin founded Truechip.

Truechip has more than 100 silicon proven Verification IPs for Storage, BUS/Interface, USB, Automotive, Memory, PCIe, Networking, MIPI, AMBA, Display, and Defence/Avionics.

Truechip has a lot of technical content on their website including demos, articles and webinars under the Resources tab in the header. Seriously, there is a lot of IP content on www.truechip.net.

On the demo page you can learn about CXL, PCIe Gen5, Gen4, Gen3, USB 4.0, Ethernet 800 G, TileLink, HBM, GDDR6, LPDDR5, DDR5, AMBA, and MIPI I3C 1.1 verification IPs among many others. There are more than a dozen technical articles and some very interesting webinar replays:

  • CXL – A PCIe based solution for interconnect
  • SD Express -The Future of SD Cards
  • Understanding JESD204C – A high-speed serial link between data converters and logic devices
  • Gen-Z, An Architectural Understanding
  • Revealing USB 3.2 – From Bootup
  • DDR-Exploring DIMMS
  • Ethernet-Unveiling the Basics
  • PCIe Gen4 – Decoding Verification

The next Truechip webinar is on October 13th and 14th:

TileLink – Unveiling The Basics

Who Should Attend :

  • Professionals working on development of TileLink at the SoC/IP/VIP level.
  • Professionals working on verification of TileLink at the SoC/IP/VIP level or any intermediate level.
  • People keen to know how TileLink is shaping a new era of interconnects.
  • Newcomers to the VLSI industry.

Key Takeaways from the Webinar:

  • TileLink Overview
  • Single bus interface TileLink Features
  • Use Cases
  • Truechip TileLink VIP Features & Advantages

IP has been one of the most popular topics we have covered over the last 10 years and I expect that to continue. This is the beginning of a blog series on IP verification so stay tuned.

About Truechip
Truechip is a leading provider of verification IP solutions. We also provide verification, DFT and physical design services. We help accelerate IP/SoC design, lowering the cost and risks associated with the development of ASICs, FPGAs and SoCs. Truechip is a privately held company with solid and seasoned leadership, a global footprint, and coverage across North America, Europe and Asia. Truechip offers the industry’s first 24x7 technical support.

Also read:

Webinar Replay on TileLink from Truechip

TrueChip CXL Verification IP

USB4 Makes Interfacing Easy, But is Hard to Implement


Toshiba Cost Model for 3D NAND
by Fred Chen on 10-11-2020 at 8:00 am


Toshiba (now known as Kioxia) was the first company to propose a 3D stacked version of NAND Flash memory called BICS [1]. BICS (Bit Cost Scalable) Flash used explicit process cost reduction based on depositing and etching multiple layers at once, avoiding multiple lithography steps. This strategy replaced the usual approach of shrinking the unit cell size, and at the same time replaced the floating gate charge storage element with a nitride layer containing charge-trapping defects.

As a result of this new approach, the actual cell footprint is much larger than in previous generations of NAND Flash. The process integration is also very different, which makes the wafer cost difficult to judge. However, a model for the cost per bit was provided by Toshiba [2], relative to that of a planar NAND (32nm when disclosed in 2009). Additionally, hints of the size of the 3D NAND cell have recently become available [3,4]. With these pieces of information, a model of wafer cost is now possible to draw out.

Toshiba’s Cost per Bit model for 3D (2009)

Toshiba disclosed the cost per bit advantage of its BICS 3D NAND scheme in 2009 (Figure 1).

Figure 1. Toshiba’s cost per bit, relative to planar. Since this was published in 2009, 32nm planar Flash is expected to be the reference. Source: Reference [2].

It is quite clear that Toshiba is highlighting the bit cost advantage of its BICS scheme, compared to the standard stacked 3D Flash case. This advantage would also apply to 3D crosspoint schemes for resistance-based memories.

While cost per bit is relevant to selling the product, the technology development is more sensitive to cost per wafer. How does that trend?

Bits per wafer: BICS vs. planar

The size of the BICS unit cell is shown in Figure 2. The ~100 nm diameter of the channel was disclosed as early as 2010 [4], but the actual electron micrographs showing unit cells were not available until the end of 2019 [3].

Figure 2. Plan view of BICS unit cell, based on [3] and [4]. The four sections of the channels at each corner add up to a complete one.

From this picture we get a unit cell area within a single layer of 130 nm x 150 nm, or 19500 nm^2. For N layers, the effective cell size is divided by the factor N. On the other hand, for reference, the 32 nm planar NAND cell stays at a fixed area of 64 nm x 64 nm, or 4096 nm^2. With the bit cell area known, we can calculate the bits per 300mm wafer. Then, by multiplying the cost/bit by bits/wafer, we can get the cost/wafer.
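The bits-per-wafer arithmetic can be sketched in a few lines of Python. The cell areas come from the article; treating the full 300mm wafer area as the array area is my own simplification, and since any fixed area-efficiency factor would apply to all cases equally, it cancels in the ratio:

```python
import math

# Cell areas within a single layer, from the article
BICS_CELL_NM2 = 130 * 150        # 19,500 nm^2
PLANAR_32NM_CELL_NM2 = 64 * 64   # 4,096 nm^2

# Full 300mm wafer area in nm^2, used as a common scale factor
WAFER_AREA_NM2 = math.pi * (150e6) ** 2

def bits_per_wafer(cell_area_nm2, layers=1):
    """For N layers the effective cell area is divided by N."""
    return WAFER_AREA_NM2 * layers / cell_area_nm2

# A 64-layer BICS wafer holds ~13.4x the bits of a 32 nm planar wafer;
# relative cost/wafer is then relative cost/bit times this bit ratio.
ratio = bits_per_wafer(BICS_CELL_NM2, 64) / bits_per_wafer(PLANAR_32NM_CELL_NM2)
print(round(ratio, 1))
```

Multiplying this bits-per-wafer ratio by the relative cost-per-bit curve of Figure 1 is what produces the relative cost-per-wafer curves discussed next.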

Cost per wafer: 3D vs planar

The cost per wafer thus obtained is shown in Figure 3. Also shown is the expected wafer cost for continued 2D planar scaling, assuming the same costs for 22nm and 32nm, and 20% extra cost (e.g., extra patterning for 20% of layers for the SADP-to-SAQP transition) for 16nm and 11nm. Since these costs are all relative to the 32 nm planar case, a further simplifying assumption is made that the area efficiency is the same for all cases.

Figure 3. Wafer cost for BICS vs stacked 3D vs planar scaling. 32 nm planar NAND is the reference for BICS and stacked 3D wafer cost.

The simply stacked 3D cost rises higher than BICS, due to the repetition of many complex lithography-related process steps. On the other hand, planar scaling is relatively flat, but rises slowly for 1X generations. Interestingly, the BICS wafer cost is also flat up to 16 layers, before starting to rise slowly. The close proximity of continued planar scaling and BICS wafer costs could also explain the longevity of the former, as shown in Figure 4. In that figure, as in Figure 3, the layer number axis is kept for the scaled planar case, to reflect the “effective layer number” based on the scaled cell size.

Figure 4. Bit cost of planar scaling compared to stacked 3D and BICS.

It would be expected that 16nm is still competitive with 16-layer BICS. If the assumed 20% extra cost for 16nm were an overestimate, or depreciated, then 16nm could remain competitive even longer. On the other hand, scaled planar NAND has the fundamental disadvantage of always having peripheral CMOS circuitry outside the array, consuming wafer area (therefore adding cost). In contrast, the 3D approach in principle has the option of putting such circuitry under the array. Stacking two layers of 11nm NAND Flash (if it existed) could possibly match the bit cost of 64-layer BICS, but the wafer cost would be 40% higher, leaving it a little more expensive than the 64-layer BICS case. Despite the relatively lower bit cost, the wafer cost of BICS-style 3D NAND still increases with layer count, for 32 layers and more. This is further complicated by the addition of string stacking [5], which is the equivalent of double/multiple patterning in the vertical direction.

The Bigger Picture: total system costs

It should be remembered that even with the lower cost/GB offered by 3D NAND, the total cost of the system using the 3D NAND chip has to include that of the fairly expensive controller chip used to guarantee the reliability (endurance, retention, etc.) of the stored bits. This is especially so when more than one bit is stored per cell. The controller is fabbed at advanced foundry nodes, and is a well-known source of heat (see e.g., [6]). The heat must be dissipated to prevent degradation of the NAND performance, so special heat sinking is needed as well. These all add to the cost of operating a 3D NAND chip.

Alternative memory candidates could offer better reliability, and therefore not require the expensive controller or special heat sink. Furthermore, new in-memory computing paradigms may be supported (see e.g., [7]). It may be expected that although the initial bit costs are high, scaling for such alternative memories would follow the same path as 3D NAND, due to the demonstrated bit cost reduction.

References

[1] A. Nitayama et al., “Bit Cost Scalable (BiCS) technology for future ultra high density storage memories,” 2007 Symposium on VLSI Technology.

[2] https://savolainen.wordpress.com/2011/12/

[3] https://www.systemplus.fr/wp-content/uploads/2019/12/SP19483-3D-NAND-Memory-Comparison-2019_Sample.pdf

[4] T. Ichikawa et al., “Topography simulation of BiCS memory hole etching modeled by elementary experiments of SiO2 and Si etching,” SISPAD 2010.

[5] https://semiengineering.com/how-to-make-3d-nand/

[6] https://www.tomshardware.com/reviews/samsung-980-pro-m-2-nvme-ssd-review

[7] https://arxiv.org/pdf/1906.06603.pdf



Like US and China, India must ensure foreign tech companies are locally owned
by Vivek Wadhwa on 10-11-2020 at 6:00 am


Synopsis: India could learn from both countries, requiring that Facebook India be sold to one of India’s tech tycoons. This would be one step to ensuring that all data be kept locally and tightly protected, and that the algorithms that determine the information that users will receive – which, after all, influences their behaviour — truly reflect India’s culture and values.

For a moment in time, there was complete harmony in the social media world. US President Donald Trump demanded that TikTok be sold to a US company, and China’s propaganda outlet Global Times tweeted: ‘The US restructuring of TikTok’s stake and actual control should be used as a model and promoted globally.’

China and the US agreed that having foreign companies control commonly used apps not only poses a threat to national security, but also distorts a country’s culture and values. Global Times went on to say, ‘Overseas operation of companies such as Google, Facebook shall all undergo such restructure and be under actual control of local companies for security concern.’

India could learn from both countries, requiring that Facebook India be sold to one of India’s tech tycoons. This would be one step to ensuring that all data be kept locally and tightly protected, and that the algorithms that determine the information that users will receive – which, after all, influences their behaviour — truly reflect India’s culture and values. The soulless geeks of Silicon Valley and the ruthless autocrats of China simply cannot be trusted to do that. In fact, it would conflict with their interests.

Facebook started out as a benign, open social media platform to bring friends and family together. Increasingly obsessed with making money, and unhindered by regulation or control, it began selling to anybody who would pay to advertise to its users. It focused on gathering all data it could about them, and on keeping them hooked to its platform. The more sensational Facebook posts attracted more views, and more views meant more data to harvest — and thus greater profit.

As could be expected, sinister players started using these data to incite violence, spread hatred, and rig elections. With information readily available that permitted parties to identify, say, Muslims with extreme views or Hindus who feel marginalised, the troublemakers began to use Facebook’s platforms, including WhatsApp, to target these people with false information. They polarised societies and created ‘information bubbles,’ belief systems isolated from counterbalancing evidence.

In Myanmar, Facebook was used to stir up hatred against the Rohingya Muslim minority. According to the United Nations (UN), it had a ‘determining role’ in their genocide, having ‘substantively contributed to the level of acrimony and dissension and conflict within the public’. Millions of people all over the world now believe that an international paedophile elite has been secretly abducting and sexually abusing children, and even ‘harvesting their blood’ to make a youth serum. According to that belief, this global cabal includes the likes of Hillary Clinton and US investor George Soros — who apparently use the serum ‘to control the world’.

Facebook was Ground Zero for the spread of these claims, and is literally undermining democracies worldwide by putting profits ahead of morals and ethics. One of its employees, data scientist Sophie Zhang, recently wrote a 6,600-word memo documenting the company’s abuses. She said, ‘I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count.’

This has parallels to the junior British autocrats who ruled India and the criminal corporation, the East India Company, that pillaged the country. India should not allow foreign corporations to monitor Indian citizens and influence their social and political preferences and moral values as they crush competitive businesses. This is what technologies such as Facebook enable: a winner-take-all system that captures more information than Big Brother did in George Orwell’s 1949 novel, Nineteen Eighty-Four.

A greater threat comes from China. The Chinese government is oppressing its population and committing genocide in Xinjiang using the information it gains from apps such as WeChat. India has already made the bold move of banning TikTok. It also needs to look critically at other Chinese companies that are dominating its technology industries, such as Xiaomi, Oppo, OnePlus, and Huawei. They, too, should have to divest their Indian operations.

Despite Global Times’ tweets, China long ago locked out foreign companies such as Google, Twitter, and Facebook, allowing its entrepreneurs to build copycat companies that adhere to Chinese cultural values and laws. The problem for India now is that because of China’s National Intelligence Law of 2017, which requires all of its companies and citizens to ‘support, assist and cooperate with the State intelligence work,’ every Chinese information-technology company poses a risk through its capacity to intercept private communications, shut down key services, or even sabotage infrastructure.

For the Facebook India divestiture, in particular, there are two tycoons who are worthy of the prize that Trump handed his supporter Larry Ellison, founder of Oracle, with the TikTok breakup: Mukesh Ambani and Anand Mahindra. My vote goes to Mahindra because he doesn’t have the type of monopoly that Ambani is building with Jio, and because Mahindra is himself an expert in social media.

As Tech Mahindra builds its 5G business, owning the Indian version of Facebook could be quite an asset. India needs to bake competition into its new policies and not allow any one company to become a monopoly.


Three Things You Have Wrong About Intel!

Three Things You Have Wrong About Intel!
by Daniel Nenni on 10-09-2020 at 10:00 am

Three Things You Have Wrong About Intel

First let me tell you that I have nothing but respect for Intel. I grew up with them in Silicon Valley and have experienced firsthand their brilliance and the many contributions they have made to the semiconductor industry. In fact, I can easily say the semiconductor ecosystem would not be what it is today without Intel.

But no company is perfect, and there have been many bumps and bruises along their 50+ year journey. The following is just my Intel opinion of course, but I will put my semiconductor experience up against anyone in the mainstream media without hesitation.

1. Intel will go fabless

It all started with a story leaked a while back that Intel had signed a big wafer deal with TSMC. Then Intel CEO Bob Swan said on a conference call that Intel was in fact looking at outsourcing, and the media’s imagination went crazy after that.

Intel insight: CEO on U.S. manufacturing’s role in driving the digital revolution

To be clear, Intel has been a happy TSMC wafer customer for many years so that was not really news. Most, if not all, of it was the result of Intel acquisitions but the point is there has been a trusted Intel/TSMC relationship in place for a long time.

Intel is a semiconductor legend and manufacturing is in their DNA. Whoever says Intel will become fabless (like AMD did) clearly does not work inside the semiconductor industry. It is NOT going to happen.

Here is my professional Intel CEO assessment, but first it is important to understand the first 30 years of Intel leadership. Intel was led by some of the top technical CEOs the semiconductor industry will ever see:

Robert N. Noyce
Intel CEO, 1968-1975, Co-founder of Fairchild Semiconductor
Education: Ph.D. in physics, Massachusetts Institute of Technology

Gordon E. Moore
Intel CEO, 1975-1987, Co-founder of Fairchild Semiconductor
Education: Ph.D. in chemistry and physics, California Institute of Technology

Andrew S. Grove
Intel CEO, 1987-1998, previously worked at Fairchild Semiconductor
Education: Ph.D. in chemical engineering, University of California-Berkeley

Craig R. Barrett
Intel CEO, 1998-2005, Joined Intel in 1974, chairman from 2005 until 2009.
Education: Ph.D. in materials science, Stanford University

The first 30 years of Intel can best be described by Andy Grove’s famous quote “Only the Paranoid Survive” which resulted in Intel being the most dominant semiconductor company in the world. Unfortunately, the next two Intel CEOs were NOT paranoid technical leaders which brought Intel to where it is today, NOT the most dominant semiconductor company in the world.

The current CEO is not a technical leader but he is a financial one, and he did not grow up at Intel. There is no Intel-born swagger in Bob Swan. This is Bob’s big adventure to make his CEO bones in the business world and he will do whatever it takes to be successful as defined by Wall Street, not Moore’s Law. The one thing Bob Swan will NOT do however is erase the Intel manufacturing legacy and go fabless. Nobody wants that on their semiconductor CEO resume.

Intel Fab 42 in AZ now ready to pump out leading edge products

However, I do believe Bob will outsource Intel designed products to TSMC but only for the price and power competitive markets.

Partnering with TSMC will put Intel on a level manufacturing playing field with competitors and Intel will have much higher volumes so margins will be an advantage. Intel can then better focus their internal manufacturing efforts on HPC chips for the cloud which is where the majority of profits will come from over the next 10 years.

2. Intel will take TSMC wafers from AMD

You should also know that the chances of Intel buying up ALL of the TSMC wafers at a given node so AMD can’t have any is ZERO. Yet another dumb thing non-semiconductor professionals are saying. Wafer agreements are put in place well in advance of the design start, much less manufacturing. TSMC builds fabs based on wafer agreements so there are no capacity surprises, just ask Apple.

To be clear, it takes Intel and AMD longer to design a chip than it does for TSMC to build a fab, do the math.

3. AMD is beating Intel

That is a matter of debate of course. The company financials state otherwise, but remember Intel has not had to look in their competitive rearview mirror at AMD since the AM386 more than 30 years ago. Clearly that is no longer the case, and I can assure you AMD is in Intel’s competitive crosshairs moving forward. Thus the expanded outsourcing to TSMC: that is a clear shot at AMD. AMD acquiring Xilinx is a clear shot at Intel, and Nvidia acquiring Mellanox and Arm is a clear shot at both AMD and Intel.

Bottom line: Competition is the lifeblood of the semiconductor industry so this is all great news for the ecosystem and the rest of the world, absolutely.

Thus far Bob Swan seems to have the right amount of paranoia to pivot Intel back into a dominant position so two thumbs up for Bob.

On a side note, in 2013 I strongly suggested privately and publicly that Intel should acquire Nvidia and make Jensen Huang Intel CEO number six. That would have been one hell of a ride! Instead Intel hired Brian M. Krzanich, probably the biggest semiconductor CEO dumpster fire on record.


CEO Interview: Wally Rhines of Cornami

CEO Interview: Wally Rhines of Cornami
by Daniel Nenni on 10-09-2020 at 6:00 am

Wally Dan 56thDAC

Wally Rhines is President and CEO of Cornami, Inc., a company named for its “tsunami of cores”. The company has developed a “TruStream” programming environment that generates independent executable streams of data and control. They have also designed a chip that provides the computational fabric for multi-core execution of programs, yielding six orders of magnitude or more in performance versus traditional Xeon or Nvidia-based servers, while consuming less than half the power.

Previously, Wally was CEO of Mentor Graphics from 1993 through 2018.  During his tenure, Mentor developed the Calibre family of physical verification products and the world’s leading products for design for test, while growing revenue by 5X and market value by 10X. Before Mentor, Rhines was Exec VP, Semiconductor Group of Texas Instruments with worldwide responsibility for TI’s semiconductor business.

After more than 25 years as CEO of Mentor, why did you seek another CEO position?
I didn’t. I became involved in a variety of consulting activities, board positions, speeches and authoring two books. I was busy. While performing consulting work for DARPA, I was asked to investigate industry progress in fully homomorphic encryption, or FHE, because of the high priority that the Department of Defense has placed upon this ultimate capability for cybersecurity. When I discussed FHE with semiconductor companies, I was told that FHE capability is ten or more years away and that the computational performance requirements would be more than one million times today’s best processors. After writing my report for DARPA, I ran into a friend of mine, Gordie Campbell, who asked me to visit Cornami. That visit turned out to be the stimulus for a major change in direction for me.

What was unique about Cornami?
Cornami started as a software company with an innovative programming environment developed by Fred Furtek and Paul Master. They attacked the “tyranny of non-deterministic p-threads” with a C-like programming environment that generated independently executable streams of data and control. The result was at least an order of magnitude performance improvement in multi-core processor systems. To gain maximum advantage from the software, however, they designed a multi-core processor for machine learning. Emulation of the chip verified a performance improvement of an order of magnitude or more over the best promised results of all the new post-von Neumann chip architectures, as well as von Neumann architectures like Nvidia Ampere. We could have parted friends at that point since I had seen so many post-von Neumann chip architectures that I found the product space to be very crowded. But then Paul said something that surprised me. “We can also do something you’ve never heard of called fully homomorphic encryption, or FHE”. I told him I had heard of FHE and I was very sure that Cornami couldn’t do it, at least not in real time. This was the start of months of analysis of Cornami’s emulation data. By March of 2020, I was convinced. Cornami’s software and hardware will perform FHE in real time.

What is FHE and why is it so important to the future of computing?
FHE was invented by Craig Gentry in 2009 as part of his PhD research at Stanford University. Homomorphic encryption is a form of encryption that allows computation to be performed on the data without decrypting it. DARPA refers to it as “the holy grail of cryptography”. Fully homomorphic encryption extends the capability from simple calculations to any form of arithmetic or logical computation that computers perform. Since no data center, or operating system or chip, can be fully protected from hacking, the ultimate cybersecurity solution is to keep all data encrypted anytime that it is out of the owners’ control. FHE does this. FHE ushers in a new era of data, where encrypted data can be collected and built into encrypted machine learning models. Encrypted queries can be made to these models and the encrypted results returned to the user generating the query. The data is never revealed; it can be sold again and again. Many have heralded this revolution as “Data Is the New Oil” because of the ability to protect and reuse the data with FHE.
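The core idea of computing on encrypted data can be illustrated with a toy sketch. Textbook RSA is not FHE — it is homomorphic only under multiplication — but it shows the essential property: an operation on two ciphertexts decrypts to that operation applied to the plaintexts, without the data ever being exposed. The parameters below are deliberately tiny and insecure, purely for illustration:

```python
# Toy demonstration of a homomorphic property using textbook RSA.
# Real FHE schemes (e.g. BGV, CKKS, TFHE) support arbitrary computation;
# RSA is only *multiplicatively* homomorphic, but the principle is the same.

p, q = 61, 53            # tiny primes -- insecure, illustration only
n = p * q                # modulus, 3233
e, d = 17, 2753          # public / private exponents (e*d = 1 mod phi(n))

def enc(m: int) -> int:
    return pow(m, e, n)  # encrypt: m^e mod n

def dec(c: int) -> int:
    return pow(c, d, n)  # decrypt: c^d mod n

a, b = 7, 3
c = (enc(a) * enc(b)) % n      # multiply ciphertexts only
assert dec(c) == a * b         # decrypts to 21 -- computed without decrypting a or b
```

The untrusted party multiplying the ciphertexts never learns `a` or `b`; only the key holder can decrypt the result. FHE generalizes this from a single operation to arbitrary computation, which is what makes it so expensive.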

With such incredible value, why hasn’t FHE already become the worldwide standard for cybersecurity?
FHE has one major limitation. It is very computationally intensive. To achieve computational performance on the encrypted “ciphertext” that is equivalent to computers operating on plain text, the computer would have to be about one million times as fast as conventional Xeon servers. Fortunately, Cornami’s TruStreams software and chips solve that problem. The processors provide linear performance scalability across processor cores on a chip, multiple chips, multiple printed circuit boards and servers, as verified through detailed emulation of the chips. FHE also presents additional challenges regarding data movement and configurability. Just speeding up a von Neumann architecture chip doesn’t solve the problem, even if you could tolerate the power dissipation. FHE algorithms are changing monthly so FHE chip architectures must be reconfigurable.

If application software must be modified to run with TruStreams, how will Cornami build a large base of software for user applications of machine learning?
For machine learning, the large base of application software already exists and can be run unchanged. Most of these applications depend upon standard interfaces like ONNX, PyTorch, TensorFlow, etc. Cornami has mapped these interfaces into TruStreams. For FHE, there are standard frameworks like TFHE, Palisade, Seal, HElib, etc. whose instructions are directly executed by TruStreams.

Hundreds of companies are already preparing for the era of homomorphic encryption. Gartner predicts that by 2025, at least 25% of all companies will have projects supporting FHE. The most immediate users are in the financial services industry but medical analytics and Department of Defense conversions are not far behind.

When can customers buy chips, boards and servers with this kind of capability?
Next year. We’re in the final stages of verification of the chip design. We still have to raise one more round of funding. We have partners for server development and production as well as FHE services for our customers to implement their new level of data security.

Cornami Achieves Unprecedented 1,000,000x Acceleration to Deliver Real-Time Fully Homomorphic Encryption (FHE)

Also Read:

CEO Interview: Dean Drako of IC Manage

CEO Interview: Murilo Pilon Pessatti of Chipus Microelectronics

CEO Interview: Pengwei Qian of SkillCAD


yieldHUB – A Yield Management Checklist for Startups and a New Look

yieldHUB – A Yield Management Checklist for Startups and a New Look
by Mike Gianfagna on 10-08-2020 at 10:00 am

yieldHub – A Yield Management Checklist for Startups and a New Look

In July, I covered a webinar that described how yieldHUB helps bring a new product to market. That webinar described how to implement new product introduction (NPI) using an array of tools and techniques that should be part of any semiconductor enterprise. In a recent article published by yieldHUB, they took a few steps back and addressed the starting point for all this. What does a new company need to consider as they build their production management infrastructure? The information presented is quite valuable for any new company. It is indeed a yield management checklist for startups.

I’ll review the checklist items here and add some of my thoughts and experiences. I highly recommend you read yieldHUB’s article on the topic, entitled 18 things Fabless start-ups should look for in a Yield Management System.

  1. Customized reports: Any good system will have a set of compelling, good-looking reports built in. I’m here to tell you that’s nice, but your team will very predictably need tweaks to those reports to meet their specific needs, almost immediately. A great system will let you customize those reports easily to meet this need.
  2. User experience: This is another “window dressing” item. A new system demonstrated by the vendor will always seem easy to use. Dig in and try it for yourself. If it’s not easy to use, no one will use it.
  3. Speed: This is a usability item. Your users will be doing a lot of what-if analysis on potentially huge data sets. If it’s too slow, your user base will lose interest. yieldHUB recommends getting users involved in a speed assessment. This is a great idea.
  4. Security: Your data is your business. Is it safe from prying eyes?
  5. Safety: This is a data continuity item. Can a natural disaster take you out, or will your system and its data survive?
  6. Archiving: This one may not seem critical at first, but if you do business with any large company with products that have a long shelf life, you are going to need to access information from a long time ago, and potentially very quickly.
  7. Scalability: This might be one of the most common reasons fabless startups fail. You are managing small amounts of information right up until you land your first big design win. Then all that changes overnight. If you can’t scale, you will fail.
  8. Automated report generation: Everybody wants different reports on a different schedule. Hiring someone to do this is a terrific waste of resources and a good way to make a lot of mistakes. Make sure your chosen system can handle this.
  9. Alerts: Things go wrong. Sometimes in the middle of the night. Don’t count on anyone spotting it. Make sure your system can alert you automatically.
  10. Generalities and detail: Most times, the big picture is what everyone needs. But there will be many times when a drill-down into the data will be needed to optimize or troubleshoot. Your chosen system needs to do both. yieldHUB calls this “high-level to die-level.”
  11. Correlation: Can you correlate issues observed across other parts of the supply chain? This is a critical item to optimize production and yield.
  12. Data format support: OK, STDF is a standard. But you WILL need to support other, potentially proprietary formats as your ecosystem grows. How extensible is your chosen system? Can the vendor help here?
  13. Outlier detection: This is a detailed one. Can you apply advanced algorithms to your data to satisfy advanced quality requirements going forward?
  14. Analysis tools: Your users will want to process the data in your system in a lot of ways, and they can’t really know them all up-front. How robust are the analysis tools of your chosen system?
  15. Integration ease: How will a new system integrate with your current workflow? Don’t underestimate how much effort this can take.
  16. Service: Is your chosen system supported by people who truly understand it?
  17. Roadmap: Will your chosen vendor be able to grow and expand with you?
  18. Cloud support: Cloud is the future. Your chosen vendor better understand that.
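To make item 13 concrete, outlier screens such as Part Average Testing (PAT, per AEC-Q001) flag die whose parametric measurements are statistically anomalous for the lot, even when they fall inside the datasheet limits. The sketch below is a minimal, hypothetical illustration using a robust median/MAD estimate of spread; it is not yieldHUB’s actual algorithm:

```python
import statistics

def pat_outliers(values, k=6.0):
    """Return indices of values outside median +/- k * robust sigma.

    Uses the median absolute deviation (MAD) scaled by 1.4826 as a
    robust estimate of the standard deviation, so a single extreme
    die cannot inflate the limits and hide itself.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    sigma = 1.4826 * mad
    lo, hi = med - k * sigma, med + k * sigma
    return [i for i, v in enumerate(values) if not (lo <= v <= hi)]

# Hypothetical per-die readings for one parametric test; one die is suspect.
readings = [1.02, 0.99, 1.01, 1.00, 0.98, 1.03, 1.75, 1.01]
print(pat_outliers(readings))  # -> [6]
```

The die at index 6 would likely pass a loose datasheet limit, which is exactly why quality-critical customers (automotive in particular) demand this kind of statistical screen on top of pass/fail testing.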

It’s also worth mentioning that yieldHUB recently launched a new website. The site has a great new look and is quite well organized. You can check out the new yieldHUB website here. And while you’re there, check out the cool new video explaining yieldHUB. The link is right on the home page. It’s just over a minute long and well worth a look. As the title says, a yield management checklist for startups and a new look.

And if you wonder whether your competitors are ahead of you in yield management, there is a webinar coming up with that title that will help. You will learn how successful fabless companies solve their yield management problems. The webinar will be broadcast on Tuesday, October 27, 2020 at 1PM Pacific time. You can register for the webinar here.