
CEO Interview: Johnny Shen of Alchip
by Daniel Nenni on 06-10-2020 at 10:00 am

Johnny Shen, CEO, and Kinying Kwan, Chairman of the Board

Alchip was founded in 2003 by a group of Silicon Valley veterans who followed a similar path: working for semiconductor companies, then moving to the EDA/IP/ASIC ecosystem. In fact, I used to play basketball with the Alchip co-founder and chairman during that time and can tell you he is a fierce competitor. Good thing, too, because the ASIC business is fiercely competitive.

I saw in the news that Alchip is moving into the North American ASIC market. Why now?
Coming to North America has always been part of our long-range plan. We started putting assets in place last November. There is huge demand in North America for the high-performance ASICs that are our core competency. We didn't so much make the decision to come as we were asked by a number of large hyperscalers, OEMs, and fabless device companies, both established and start-ups, to make our services more locally available. We've opened an office in Milpitas headed by Hiroyuki Nagashima, who formerly ran our business in Japan, and have staffed both design services and business management capabilities.

I know Alchip has a huge presence in Europe and the Asia-Pacific region, but you're probably not a familiar name to many in the US. Can you tell us a bit about the company?
We were founded in 2003, and all of our executive management have Silicon Valley roots and are alumni of top-tier North American engineering and business schools. As important as our pedigree is the fact that we registered record revenue for the first quarter of this year and anticipate record revenue for 2020, despite the current business environment. But more germane to the question: we became a publicly traded company in 2014. Some of our more important technology milestones include completing 16nm and 12nm AI and automotive devices over the past two years, and we will soon announce the successful tape-out of sub-10nm designs.

ASICs are a huge industry.  Is there any particular area you’re focusing on and why?           
As I mentioned earlier, we are a high-performance computing ASIC company. A lot of ASIC companies try to be everything to everybody. But our legacy has always been to work on the leading edge, hand-in-glove with our foundry partner, TSMC. Right now, for instance, we have one 7nm design in production and multiple 7nm tape-outs underway. This is the legacy that makes us attractive to hyperscalers and start-ups alike for high-performance applications, including artificial intelligence for cloud inference and training.

You talked a great deal about high-performance computing. What are some of the specific attributes you bring to the challenges?
The ASIC business is complex from both a business-model and a technical perspective. We like to think that we put the "S" in Specific with a flexible business model that has five different entry points and three different exit points. We have developed a design methodology for large, high-power ICs, best-in-class 3rd-party IP, and advanced 2.5D package technology.

From both a business-model and a technology perspective, it's important to point out that we have developed a best-in-class IP portfolio to lower design risk and shrink time-to-volume. This allows us to focus on what we do best, differentiate ourselves by working with others, and leverage what they do best into a world-class ASIC flow.

Across the board, our technology optimizes our design flows and methodologies to minimize our clients' time-to-design and time-to-market while maximizing yield.

OK, but it’s all about proof. Can you tell us about some of the other work you’ve done in other parts of the world?
Interestingly, North America is the beneficiary of world-class thinking. We have helped our clients go to market with industry-leading high-performance chips for supercomputing and server applications. To date, we've completed more than 400 advanced-technology tape-outs. That's more than 20 advanced-technology tape-outs a year, a number that I think any other company will have a hard time matching.

Can you give me a bit more detail as to what differentiates your design flow and methodologies?
Alchip's design platform provides a unique clocking methodology that improves overall clock network capacitance and power; minimizes clock skew and insertion delay; and maximizes routing resources. Most importantly, it minimizes on-chip variation to deliver significantly better yields at advanced nodes. Our design platform also offers a knowledge-based design flow for different applications that keeps QoR within a manageable range at each design step, achieving superior power, performance and area within a controllable design cycle for large-scale designs. This translates into faster time-to-market and ensures one-pass silicon success.

Some ASIC companies have decided to develop their own IP, yet you have turned to leading 3rd-party IP partners. Why?
Building an HPC ASIC is complex and requires very specialized IP. Complexity requires collaboration, and strategically we recognize that no one company can cost-effectively be all things to all people. If IP is commercially available, we want to collaborate with, not compete against, our best-in-class IP partners. However, if an IP block is not commercially available, we won't let that be a roadblock to success. So we focus on the specialized IP that our IP partners don't have. That's why, for instance, we provide proprietary macros such as our APLink 1.0 die-to-die (D2D) IP for PFN's 12nm design.

Why is advanced packaging so important to high-performance computing applications? What services do you offer?
Packaging is the new ‘Moore’s Law’ for high-performance computing challenges.  Understanding and applying the technology is critical to meeting the demand for more functionality and greater performance in a smaller physical footprint.

We just rolled out a CoWoS program. This is new for us and new for the industry; it requires a significant investment on our part. But that investment is important to ease our customers' long-range roadmap concerns. CoWoS is critical to today's HPC ASICs because it incorporates multiple side-by-side die on a silicon interposer. The CoWoS service rolled out today covers multiple package designs. Looking further down the road, we're also working with customers and making investments in InFO technology. Yes, this requires a huge investment on our part. But it says to the market: we're committed and ready.

COVID-19 really turned the world economy on its ear. How is Alchip faring?
As I said earlier, we recorded a record first quarter and have forecast a positive outlook for the remainder of the year. Our workforce was minimally impacted, we are looking for a record second quarter, and we will be working on multiple 7nm tape-outs.

The impact of COVID-19 on our existing business has been insignificant. We are working at full capacity, and people can work from home or the office. The associated travel ban may eventually slow business development, but we have repeat business from existing customers that has kept our pipeline full. For us, 2020 will be a record year. The current situation actually favors the outlook for high-performance computing because of the growing importance of, and emphasis on, cloud computing, and that's our sweet spot.

Congratulations.  But what does the rest of the year look like?
The HPC market remains a major revenue contributor in terms of production shipments. Project NREs will also account for a considerable part of the company's total revenue. We are seeing that 7nm HPC shipments will be a critical factor for growth in the second half of the year. At the same time, we are seeing only an insignificant impact from COVID-19.

Looking forward, and in that same vein, what is the outlook for the high-performance computing market, specifically the high-performance computing market in North America?
According to MarketsandMarkets, the global high-performance computing (HPC) market was valued at $35.8 billion in 2019 and is expected to reach $50.32 billion by 2025, a CAGR of 7.02%. Growing adoption of HPC solutions and services across diverse industries such as data centers, financial institutions, autonomous vehicles, and 5G infrastructure is the major growth driver. Geographically, North America became the largest HPC market last year and is expected to hold that position through 2025. With Alchip's track record for high-performance ASICs in cloud service applications in other geographic regions, we are very well positioned to serve North American HPC customers with proven design solutions manufactured on TSMC's leading-edge technologies.

About Alchip
Alchip Technologies Ltd., headquartered in Taipei, Taiwan, is a leading global provider of silicon design and production services for system companies developing complex and high-volume ASICs and SoCs. The company was founded by semiconductor veterans from Silicon Valley and Japan in 2003 and provides faster time-to-market and cost-effective solutions for SoC design at mainstream and advanced nodes, including 7nm processes. Customers include global leaders in AI, HPC/supercomputing, mobile phones, entertainment devices, networking equipment and other electronic product categories. Alchip is listed on the Taiwan Stock Exchange (TWSE: 3661) and is a TSMC-certified Value Chain Aggregator.

For more information, please visit: http://www.alchip.com

Also Read:

Tortuga Logic CEO Update 2020

CEO Interview: Robert Blake of Achronix

Flex Logix CEO Update 2020


Moortec Delivers Distributed, Real-Time Thermal Sensing on TSMC N5 Process
by Mike Gianfagna on 06-10-2020 at 6:00 am


Moortec is known for its innovative in-chip monitoring and sensing products. They're based in the UK and have been delivering this kind of embedded technology since 2010. Dan Nenni covered an overview of the company recently. SemiWiki also hosted a webinar about optimizing power and increasing data throughput in AI with Moortec technology, and you can view the replay here.

Moortec recently launched a new in-chip technology for highly distributed, real-time thermal analysis on TSMC's N5 process. I got the opportunity to hear a live briefing from Moortec's CEO, Stephen Crosher, about this technology and how it expands Moortec's capabilities. At first glance, this appears to be a product-evolution kind of announcement. That is actually not the case, however. This new capability opens up new markets and applications for Moortec that are quite important. Let me elaborate.

Up to now, embedded sensing has focused on monitoring the overall process, voltage and temperature profile of the chip in a coarse-grained manner. The technology addresses device reliability and enhanced performance optimization by supporting power management and voltage/frequency scaling strategies. Moortec's newly announced technology, called Distributed Thermal Sensor (DTS), is 7X smaller than previous versions and offers high-accuracy measurements across a wide temperature range with lower latency through a higher sampling rate.

These improvements open up a new set of in-chip monitoring and optimization capabilities. The increased gate density of advanced FinFET technology presents new challenges in power optimization and lifetime reliability, with a focus on thermal stress and electromigration. Stephen pointed out that the need to monitor and manage aging effects is critical, since FinFET devices have yet to experience 20 years of actual operating life.

DTS sensors are now so small that they can be placed not just at locations inside a cluster of CPU cores but deep within individual cores, much closer to hotspots. The sensor data is then sent to a central hub. This flexibility allows fine-grained monitoring and control of temperature gradients across the chip as well as load balancing of the actual CPUs. The addition of more temperature sensing at strategic locations around the chip facilitates very tight control of temperature and power, allowing improved reliability and performance with lower power consumption. Thanks to the speed of the DTS devices, all this can be done in real time.
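To picture how a stream of per-core temperature readings might drive load balancing, here's a minimal sketch in Python. Everything in it, the function name, the data format and the thresholds, is hypothetical rather than Moortec's actual interface; it only illustrates the "route work to the coolest core" idea described above.

```python
# Hypothetical sketch: route the next task to the coolest core, using
# per-core temperatures collected at a central hub from in-core sensors.
# Names and thresholds are illustrative; this is not Moortec's API.

def pick_core(core_temps_c, t_limit_c=95.0):
    """Return the coolest core below the thermal limit, or None to throttle."""
    candidates = {core: t for core, t in core_temps_c.items() if t < t_limit_c}
    if not candidates:
        return None  # every core is at its limit; back off instead
    return min(candidates, key=candidates.get)

# Example hub snapshot: per-core readings from distributed thermal sensors.
readings = {"core0": 88.5, "core1": 72.3, "core2": 96.1, "core3": 79.8}
print(pick_core(readings))  # -> core1, the coolest core under the limit
```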

Some applications of DTS and the associated market areas include:

  • Workload distribution and thermal load-balancing [Data Center, HPC]
  • Increased core/accelerator utilization under restricted power [AI]
  • Reduction of thermal stress for reliability [Automotive]
  • Enhanced user experience through better battery life [5G & Consumer]

Stephen went on to say: “We’ve seen a clear need for tighter thermal control of semiconductor devices. Multi-core architectures applied to AI, automotive, consumer and many other applications benefit from highly distributed sensing schemes to minimize system-level power consumption, optimize data throughput, and improve product lifetimes. We are confident that this extension to Moortec’s portfolio will enable our customers to maximize the performance of their silicon and further strengthen the long-term collaboration we have with TSMC.”

DTS technology design kits were made available in early 2020 and Moortec reports that the technology has already been licensed to several major customers. You can learn more about Moortec’s new distributed thermal sensor technology here.


WEBINAR: Adnan on Challenges in Security Verification
by Bernard Murphy on 06-09-2020 at 6:00 am


Adnan Hamid, CEO of Breker, has an interesting background. He was born in China to diplomat parents in the Bangladesh embassy. After what I'm sure was an equally interesting childhood, he got his BSEE/CS at Princeton, where, like most of us, he had to make money on the side, in his case working for a professor in the psych lab on artificial intelligence projects. This was before deep learning got hot. They were working on planning algorithms, more of a rule-based approach to AI. You tell the tool what endpoint you want to get to, and it figures out how.

Adnan joined AMD as a new grad, assigned to create testbenches, a task he quickly recognized would be mammoth and never-ending. He thought back to the work he’d done in the psych lab, wanting to try planning as an approach to test generation. Tell a tool this is the result you want to get to, give it a bunch of strategies and let the tool figure out reasonable test cases. From that, Adnan started Breker, where they pioneered portable stimulus starting from graph-based models because they’re easy for us simple humans to understand. Which of course evolved further into an even broader standard for system-level verification.

Breker has, among other capabilities, an app for security verification based on this standard. I wanted to get his take on why security verification is hard and what characteristics a solution needs to have, independent of any product viewpoint. That led to an interesting discussion.

Where Security is Most Important

We started with the business motivation. For all the advantages of the IoT, the disadvantage is that nothing now is a closed system. Everything by default can be hacked, traced, have identities stolen, be caused to misbehave. This is a huge problem for the DoD, who face very sophisticated attackers. It's a huge problem for government and infrastructure, given the antiquated IT equipment on which they depend. It's a huge problem for financial institutions, for businesses whose reputation rests on the security of their systems, and on and on. So #1, security now comes with a societal price tag and a financial price tag, both huge. Much of it demands six-9's (99.9999%) levels of security, which somehow we have to prove.

Defining the Requirement

The second problem is not so much a technology problem as a people problem. How do you know you’ve tested enough? This is of course a classic verification problem but the classic solution – hit some level of coverage through randomized testing – doesn’t work. For security we need to check all the vulnerabilities of a particular target. It would take forever to get to six-9s. Plus a lot of what you’d be testing would be meaningless. What you really want is to test exhaustively within a realistic range of possibilities – like a semi-formal approach to dynamic verification. In fact, formal is already good at doing this at the IP level and in some sub-systems. What we need is to carry that concept over into full system testing.

How do you get there? Definitely not by writing a test, then another test, ad infinitum. Or by randomizing those tests. First, the range of possibilities needs to be captured in a way we can visualize easily. What are the possible master/slave paths in this system? For all the possible access and privilege combinations on those endpoints, which are allowed? Then in the memory map, which regions are accessible to which devices and under what conditions?

This might all seem rather trivial but Breker finds that simply constructing these access rule tables, no tools required, will often reveal gaps in understanding. Architect/designers have to fill out each entry and when they do, they realize they actually don’t know the right answer for some cases. Because we really are human after all.
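To make the exercise concrete, here is a minimal sketch, with entirely hypothetical masters, regions and privileges, of an access-rule table in which the undefined entries, those gaps in understanding, surface mechanically:

```python
# Hypothetical access-rule table: (master, region, privilege) -> allowed?
# Entries the architects have not yet decided are simply missing, which is
# exactly the kind of gap this exercise is meant to expose.
masters = ["cpu0", "dma"]
regions = ["secure_boot_rom", "dram_low"]
privileges = ["privileged", "user"]

rules = {
    ("cpu0", "secure_boot_rom", "privileged"): True,
    ("cpu0", "secure_boot_rom", "user"): False,
    ("cpu0", "dram_low", "privileged"): True,
    ("cpu0", "dram_low", "user"): True,
    ("dma", "dram_low", "privileged"): True,
    # ("dma", "secure_boot_rom", ...) left undefined on purpose
}

# Enumerate every master/region/privilege combination; anything
# unanswered needs an architect's decision before testing can begin.
for m in masters:
    for r in regions:
        for p in privileges:
            if rules.get((m, r, p)) is None:
                print(f"UNDEFINED: {m} -> {r} ({p})")
```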

Testing Compliance

Once the tables are filled out, tools can get involved. Now you’re back to that question of whether you want to hit six-9s or just do the best you can within a schedule. Which will of course depend on the application of the product. If you do have to hit six-9s, you have to do exhaustive testing, that semi-formal kind of testing at the system level. AI planning algorithms, starting from PSS models, are actually pretty good at that.

That’s Adnan’s view. If it’s critical to business or national security, you have to hit a high level of confidence. If you have to hit a high level of confidence, you need to use a semi-formal (realistic exhaustive) approach. And the best way to do that is through planning algorithms to generate exhaustive tests over a PSS graph.

You can learn more by REGISTERING TO WATCH THE BREKER WEBINAR, scheduled for June 16, 2020 at 10am Pacific.

Also Read

Breker Tips a Hat to Formal Graphs in PSS Security Verification

Verification, RISC-V and Extensibility

Build More and Better Tests Faster


Talking Sense With Moortec…Speak No Evil!
by Tim Penhale-Jones on 06-08-2020 at 10:00 am


In the first of this blog trilogy, Talking Sense with Moortec…’Are you listening’, I looked at not waiting for hindsight to be wise after the event, but instead making use of what’s available and acting ahead of time. In the second, Talking Sense with Moortec…’See no evil’, we bizarrely saw how Sir Francis Drake, Admiral Nelson and Clint Eastwood all had something in common with Mizaru, one of the 3 wise monkeys (Kikazaru and Iwazaru being the other two).

In the final blog, I would like to consider Iwazaru…’speak no evil’.

In polite society (think ‘Sense & Sensibility’ or ‘Pride & Prejudice’ in 19th Century British literature), it used to be said that children should be seen but not heard.  Then as they grew up, they were expected to only politely comment on experiences…this politeness has (allegedly) carried through British society. Roll on 200 years and we are now accused of being the masters of the understatement, or more likely, not saying what we really mean!

This can pose a few challenges; under UK Health & Safety law, it is as much an offence not to notify of a potential hazard as it is to be the perpetrator of the offence. That doesn’t sit well with the ‘be positive’ messaging. It’s pretty clear that Iwazaru is likely to keep ‘shtum’.

There are so many rules about verbal etiquette…I’ve heard it said that one shouldn’t discuss money, religion or politics on a first date…wise words I’m sure, yet what do you talk about?! So many negative connotations around talking up or speaking your mind. In history it is typically only the brave who speak out, often under hardship or duress…they are commonly seen as trouble-makers or, more recently, as ‘the whistle-blower’. Yet when does whistle-blowing become heroic, having moved on from ‘telling tales’…I guess it depends on your point of view; for many it’s the definition of the truth.

Back to our SoC, we know we have a solid engineering team who want the truth as a collective entity.  A great way to get at that truth, without relying on heroes or whistle-blowers, is the use of established in-chip monitoring IP. Whether you want to do the basics of monitoring temperature as an ‘insurance policy’ or do more in-depth analysis and optimisation by using multiple types of sensor to support, for example, Adaptive Voltage Scaling (AVS), these monitors can provide a lot of real-time data. My colleague Richard McPartland covered this in his recent blog entitled “Key Applications For In-Chip Monitoring … In-Die Process Speed Detection”

So is there hope for our three monkeys in SoC design? Iwazaru doesn’t want to talk negatively, Mizaru will always ‘turn a blind eye’ and Kikazaru only hears positive things; none of them is prepared to step up. It would appear that in-chip monitoring is the only option to provide real-time data about what is going on in your chip. And let’s be honest, surely no one will keep quiet if they know of an issue that could cost several million dollars in mask costs alone…so maybe UK Health and Safety law should apply to SoC development too?!

If you have missed any of Moortec’s previous “Talking Sense” blogs, you can catch up HERE


Welcome Samtec and System Design on SemiWiki
by Mike Gianfagna on 06-08-2020 at 10:00 am


I always enjoy welcoming new corporate members to the SemiWiki platform. Each company brings new technology, a different perspective and the opportunity for the SemiWiki community to hear about another aspect of chip design and manufacturing. But this introduction is different. This time, a new corporate member is opening up a whole new dimension to the conversation – system design.

Samtec provides connectors, cable assemblies and active optical modules. Their products take the data produced by the chips in a system and deliver it in a fast, reliable and accurate manner to other parts of the system. The company literally provides the infrastructure that allows all the parts of a system to communicate, whether it’s on a board, a backplane or between racks. This is the realm of system design and I’m delighted that Samtec has opened the door to this new chapter of SemiWiki exploration.

I got to know Samtec quite well in my prior life at eSilicon. A key piece of IP for eSilicon was our high-performance SerDes. Most folks will know that a SerDes provides a way to send data through a serial interface at high speed to another part of a system. It’s great to have a top-notch SerDes, but without a high-performance channel to carry the data, well, it’s like being all dressed up with no place to go. The demands for things like accuracy, precision and signal integrity are substantial for a high-performance SerDes link. So, the first misconception I want to dispel is that cables and connectors are easy. Trust me, they are not. We did some pioneering work with Samtec to demonstrate just how good our SerDes was and just how good their interconnect was. More on that in a moment.

First, a bit about the company. Samtec has been around since 1976. They are headquartered in New Albany, Indiana. The company employs over 6,000 people in over 40 international locations, with over 25,000 customers in more than 125 countries. Samtec is a great place to work. Their employee retention rate is 96 percent. The culture is to be envied and imitated. In their own words:

Much more than just another connector company, Samtec puts people first with a commitment to exceptional service, quality products, and convenient design tools. We believe that people matter, and taking care of our customers and our employees is paramount in how we approach our business. This belief is deeply ingrained throughout the organization, and means that you can expect exceptional service coupled with technologies that take the industry further faster.

OK, so how does a company like Samtec fit in the lexicon of chip design? Simply put, Samtec is one of many companies that complete the system. I believe thinking about design in a holistic way like this is very important. A good example of how this works is DesignCon. I first started attending this show a long time ago. It was a mid-size, EDA-oriented event. I got away from it for a bunch of years and recently returned. The size of the exhibit floor, from both a physical and a technology perspective, literally blew me away. All aspects of what it took to build a real product were represented, not just the chip. As I walked into the main hall, I saw floor-to-ceiling banners from the main sponsors of the show. Companies like Samtec, Anritsu and Keysight. What happened to Synopsys, Cadence and Mentor? Was I on another planet? After a few minutes of wandering the show floor, it became clear that this show had “grown up” and was now addressing ALL the technologies needed to build a new product. This is why I’m excited to welcome Samtec to SemiWiki.

I mentioned eye-catching SerDes demos earlier. eSilicon had developed a 56 Gigabits per second SerDes IP block that we implemented in a test chip to showcase its capabilities. Thanks to its robust design, the part was able to drive a signal at maximum speed over very long distances with very low loss. But how do you demonstrate that? Enter Samtec, who delivered a five-meter copper cable. That’s not a misprint, the cable was over 16 feet long and able to deliver high performance, high precision signals if driven properly. And eSilicon could certainly do that. So, we had some fun demonstrating a five-meter copper channel running at 56G at a bunch of trade shows. Most folks who visited us at first refused to believe what we were showing was possible. It was the precision and quality of Samtec’s products that carried the day for us.

Our work with Samtec on this long-reach demo was covered by Dan Nenni in a SemiWiki post back in 2018. We called the demo “reach beyond the rack”. Below is a photo of a unique demo we did at the AI Hardware Summit in 2019. Using Samtec’s ExaMAX Backplane Connector paddle cards and a five-meter ExaMAX Backplane Cable Assembly, we literally ran the “reach beyond the rack” demo between our two booths at the show. I don’t think a demo that spans two booths had ever been done before.

I’ll stop here for now. There will be a lot more interesting information from Samtec over the coming months. Information that will expand your design horizons. In the meantime, you can find out more about Samtec here: https://www.samtec.com.

In closing, I met a lot of very talented folks at Samtec during my time at eSilicon. I’d like to give a shout-out to a couple of them.  I’m sure you’ll be hearing more from these folks. Matt Burns is the technical marketing manager at Samtec. He was interviewed in the SemiWiki story, above. And Ralph Page, system architect at Samtec was the driving force in the design of the five-meter cable demo. He also coined the phrase, “reach beyond the rack”.


Can Threshold Switches Replace Transistors in the Memory Cell?
by Fred Chen on 06-08-2020 at 6:00 am


The overwhelming majority of transistors produced in the world are used in memory cells, either as the memory itself (Flash, SRAM), or as the access device (DRAM). Yet, it is not necessary to have a transistor in every memory cell. In 2015, 3D XPoint, the first major product based on transistor-less memory cells, was announced [1]. Crossbar also disclosed details of their own transistor-less memory cell in the same year [2].

What makes transistor replacement attractive?
The driving force behind these developments is the reduction of memory cell footprint. The memory element can be stacked directly on top of a “selector” that acts to pass or block current, depending on the applied voltage across the cell [3]. The selector itself is smaller than a transistor as it is simply a layer stacked between electrodes. The lack of a transistor also removes the restriction to build the memory array directly on top of the silicon substrate, enabling the stacking of multiple layers to form a 3D memory array. Moreover, the circuitry that normally surrounds the memory array can now be placed underneath the array, further saving chip area.

Brief description of threshold switching
Threshold switches have actually been around for a while, in many forms, but gained particular notice after Stanford Ovshinsky published his observations of the switching behavior in disordered semiconductors in 1968 [4]. In particular, phase change memory includes the use of amorphous chalcogenides, which exhibit the following interesting behavior [3,4]:

(1) The amorphous chalcogenide maintains a very high resistance until a high enough voltage, the threshold voltage (Vth), is reached;

(2) Upon reaching the threshold voltage, the material enters an extremely conductive (“ON”) state, and the voltage across the material is reduced;

(3) The material remains in the conductive state until the voltage across it is reduced below a holding voltage (Vh), which is the voltage needed to sustain a minimum holding current (Ih). At this point, it returns to the initial high-resistance (“OFF”) state.

The behavior can be visualized in an I-V curve below:

Figure 1. I-V curve for a threshold switch. Blue: OFF-to-ON. Red: ON-to-OFF.
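As a toy illustration of steps (1) through (3), here is a minimal two-state model in Python. All parameter values are made up, and the snapback itself is simplified to nothing more than a change of resistance:

```python
# Toy two-state threshold-switch model following steps (1)-(3) above.
# All values are illustrative, not taken from any real device.
V_TH, V_H = 1.5, 0.4      # threshold and holding voltages (V)
R_OFF, R_ON = 1e8, 1e3    # OFF / ON resistances (ohms)

def step(is_on, v):
    """Update the switch state for applied voltage v; return (state, current)."""
    if not is_on and v >= V_TH:   # (2) snap ON once Vth is reached
        is_on = True
    elif is_on and v < V_H:       # (3) return OFF below the holding voltage
        is_on = False
    return is_on, v / (R_ON if is_on else R_OFF)  # (1) high resistance when OFF

# Sweep the voltage up, then down, to trace the hysteresis of Figure 1.
on = False
for v in (0.5, 1.0, 1.6, 1.0, 0.5, 0.3):
    on, i = step(on, v)
    print(f"V={v:.1f} V  {'ON ' if on else 'OFF'}  I={i:.1e} A")
```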

A wide variety of materials have been found to support threshold switching [5,6]; furthermore, a number of mechanisms have been found to be consistent with this switching:

  1. Metal-insulator transition [7]
  2. Electrothermal effect [8]
  3. Movement of chemical (ionic) species [5]
  4. Disappearance of small polarons [9]
  5. Order-disorder transition [6]

Regardless of the mechanism(s) involved, the special characteristics of threshold switches, particularly the occurrence of the “snapback”, i.e., the abrupt reduction of voltage after reaching Vth, lead to some important implications for the use of threshold switches.

Threshold switches can only be used with specific resistance-based memories
Threshold switches involve switching between currents orders of magnitude apart. As a result, any memory element connected in series with the threshold switch must also be able to conduct fairly high currents. This precludes the usual charge storage memories like DRAM or Flash which use insulators. Phase change memory is a more common companion to threshold switches [6]. Moreover, some of the other emerging memories may not be compatible either if they will be damaged by the sudden current surge.

Threshold switches need current compliance
Since the current surge in the ON state can be quite dramatic, a current-limiting element in series with the threshold switch is necessary to keep the current within spec. This can be a fixed resistance or an active device like a diode or a transistor. The details of this operation are quite subtle. The voltage on the cell initially falls entirely across the threshold switch in the OFF state. Once the switch turns ON, the voltage across it is reduced, so the balance of the voltage must fall on the current-limiting element. The I-V characteristic of the current-limiting element then determines how much current is passed.

Figure 2. A threshold-switched cell needs a current-limiting element like a resistor to limit the current from reaching damaging levels.
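A back-of-the-envelope version of that voltage division, with made-up numbers, shows how the series resistor sets the compliance current:

```python
# Hypothetical load-line arithmetic: after snapback, the switch holds only
# V_ON, so the series resistor sees the remainder and sets the current.
V_APPLIED = 3.0   # total voltage across switch + resistor (V)
V_ON = 0.5        # voltage retained across the ON-state switch (V)
R_LIMIT = 10e3    # current-limiting series resistance (ohms)

i_on = (V_APPLIED - V_ON) / R_LIMIT
print(f"compliance-limited ON current: {i_on * 1e6:.0f} uA")  # 250 uA here
```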

Read current must exceed holding current
In order to stay ON, the threshold switch must continue to pass current larger than the holding current Ih. When the resistive memory element is being read, the threshold switch needs to be ON, so there will be a minimum read current.

The minimum read current also sets a limit on how many times the cell may be read before the memory element is disturbed, i.e., accidentally changed from one resistance state to another. This will be covered again later.

Initiation of threshold switches
Some threshold switches require an initiation (“forming”) step. Equivalently, the threshold voltage drops from its initial value, to which it can eventually recover [10]. The main concern here is whether it drops far enough that the half-selected cell voltage [3] can in fact turn ON the threshold switch.

Voltage margin can be tight
Since threshold-switched memory cells will be arranged in a crosspoint array, operation voltages will be designed accordingly. Figure 3 shows the most commonly used half-select scheme, where unselected cells in the same row or column as the selected cell necessarily receive half the voltage that the selected cell receives.

Figure 3. Crosspoint array bias schemes, enabled by the use of threshold-switched memory cells. Left: half-select scheme. Right: third-select scheme. The circle indicates the selected cell.

In this case, the maximum cell operation voltage is twice the threshold voltage Vth. For a third-select scheme, the unselected cells all receive +/- 1/3 the voltage on the selected cell. Therefore, the maximum operating voltage is 3x the threshold voltage Vth. Note that the read and write cell voltages must fall into the allowed ranges: (Vth, 2Vth) for half-select, (Vth, 3Vth) for third-select. Since the read voltage in the half-select scheme will be over half that of the write voltage, the chance of read disturb is extremely high. Even for the third-select scheme, the read voltage being over 1/3 the write voltage still poses significant risk in large arrays, as the read voltage will practically be close to 40% of the write voltage. To mitigate this, the read pulse should be very short, definitely much shorter than the write pulse.
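The allowed windows are easy to tabulate; here is a small sketch with an illustrative Vth:

```python
# Check candidate cell voltages against the windows derived above:
# half-select allows (Vth, 2*Vth); third-select allows (Vth, 3*Vth).
# Vth is illustrative only.
V_TH = 1.5

def in_window(scheme, v):
    upper = {"half": 2 * V_TH, "third": 3 * V_TH}[scheme]
    return V_TH < v < upper

for scheme, v in [("half", 2.8), ("half", 1.6), ("third", 4.0)]:
    print(scheme, v, "OK" if in_window(scheme, v) else "out of window")

# Note the disturb problem: in half-select, any read voltage must exceed
# Vth = 1.5 V while writes stay below 2*Vth = 3.0 V, so reads are always
# more than half the write voltage.
```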

Bottom line: cost of memories based on threshold-switching

In the end, widespread acceptance of a given memory technology depends on how effectively its cost can be driven down. Threshold switches offer a significant starting point due to their smaller footprint compared to transistors. Moreover, they are free from the usual transistor scaling issues such as short-channel effects and contact resistance dependence on doping [11, 12]. An even bigger plus is the large current density (>10 MA/cm2) that is generally available [3].

The cell size for a 1X nm DRAM is 0.0026 um2 [13], while the cell size for a 3D XPoint memory is 0.00176 um2 [14], indicating the cell size advantage already exists for a threshold-switched cell. A future cell size of 0.02 um x 0.02 um has an equivalent cell density to 100 stacked layers of 0.2 um x 0.2 um 3D NAND Flash cells, the current state-of-the-art for cell density. A larger cell size of 0.04 um x 0.04 um needs 4 stacked layers to achieve the same density. Thus, scaling threshold switches to 10 nm would provide a big boost to their becoming mainstream. That said, it also requires the readiness of the resistance-based memory element attached to the threshold switch, as mentioned above. Therefore, the replacement of transistors in memory cells by threshold switches requires the widespread acceptance of resistance-based memory as an alternative to charge-based memory.
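The density comparison in this paragraph can be verified with simple arithmetic:

```python
# Cell-density arithmetic for the comparison above (bits per square micron).
nand_density = 100 / (0.2 * 0.2)        # 100 layers of 0.2 um x 0.2 um cells
xpoint_20nm = 1 / (0.02 * 0.02)         # single-layer 0.02 um x 0.02 um cell
layers_40nm = (0.04 * 0.04) * nand_density  # layers needed at 0.04 um pitch

print(nand_density)  # 2500 bits/um^2
print(xpoint_20nm)   # 2500 bits/um^2 -- matches 100-layer 3D NAND
print(layers_40nm)   # 4.0 -- four stacked layers at 0.04 um x 0.04 um
```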

References
[1] https://en.wikipedia.org/wiki/3D_XPoint

[2] https://ieeexplore.ieee.org/document/7104114?denied=

[3] L. Zhang, S. Cosemans, D. J. Wouters, G. Groesneken, M. Jurczak, B. Govoreanu, “One-Selector One-Resistor Cross-Point Array With Threshold Switching Selector,” IEEE Trans. Elec. Dev. 62, 3250 (2015).

[4] S. R. Ovshinsky, “Reversible Electrical Switching Phenomena in Disordered Structures,” Phys. Rev. Lett. 21, 1450 (1968).

[5] Z. Wang, M. Rao, R. Midya, S. Joshi, H. Jiang, P. Lin, W. Song, S. Asapu, Y. Zhuo, C. Li, H. Wu, Q. Xia, J. J. Yang, “Threshold Switching of Ag or Cu in Dielectrics: Materials, Mechanism, and Applications,” Adv. Func. Mat. 28, 1704862 (2018).

[6] P. Noe, A. Verdy, F. d’Acapito, J-B. Dory, M. Bernard, G. Navarro, J-B. Jager, J. Gaudin, J-Y. Raty, “Toward ultimate nonvolatile resistive memories: The mechanism behind ovonic threshold switching revealed,” Sci. Adv. 6:eaay2830, 2020.

[7] A. L. Pergament, G. B. Stefanovich, A. A. Velichko, S. D. Khanin, “Electronic Switching and Metal-Insulator Transitions in Compounds of Transition Metals,” https://www.researchgate.net/profile/Alex_Pergament/publication/257231373_Electronic_Switching_and_Metal-Insulator_Transitions_in_Compounds_of_Transition_Metals/links/5475ad030cf245eb4370f15e/Electronic-Switching-and-Metal-Insulator-Transitions-in-Compounds-of-Transition-Metals.pdf

[8] J. M. Goodwill, A. A. Sharma, D. Li, J. A. Bain, M. Skowronski, “Electro-Thermal Model of Threshold Switching in TaOx-Based Devices,” ACS Appl. Mater. Interfaces 9, 11704-11710 (2017).

[9] D. Emin, Polarons, Cambridge University Press, 2013, 180-185.

[10] https://thememoryguy.com/nvm-selectors-a-unified-explanation-of-threshold-switching/

[11] A. Razavieh, P. Zeitzoff, D. E. Brown, G. Karve, E. J. Nowak, “Scaling Challenges of FinFET Architecture below 40nm Contacted Gate Pitch,” 75th Annual Device Research Conference, 2017.

[12] https://www.linkedin.com/pulse/contact-resistance-silent-device-scaling-barrier-frederick-chen

[13] https://www.techinsights.com/blog/samsung-18-nm-dram-cell-integration-qpt-and-higher-uniformed-capacitor-high-k-dielectrics

[14] https://www.techinsights.com/blog/intel-3d-xpoint-memory-die-removed-intel-optanetm-pcm-phase-change-memory



Do You Love DAC? Here’s Why I Do
by Mike Gianfagna on 06-07-2020 at 10:00 am


Hello all, and welcome to DAC Season. As you all probably know by now, there are some twists to DAC Season this year. First, it’s being held July 20 – 24 instead of in June. I believe there was one other time the conference spilled into July, so this isn’t the norm. DAC, like pretty much every other conference these days, has also gone virtual. Where to locate DAC has always been the source of a lot of passionate opinion and discussion. This time, we get a pass on that.

The conference chair this year is Dr. Zhuo Li from Cadence. Dr. Li has quite the challenge, presiding over a conference that is doing a lot of things for the first time with respect to both space and time. You can watch his series of video blogs that chronicle the momentum of DAC and the recent decision to make it virtual on DACtv here. They’re quite informative and Dr. Li does have a sense of humor.

I want to talk a bit about the I Love DAC movement – its history, motivation and evolution. First, let’s take a look at DAC 2020. The technical program has always been of high quality, with relevant topics and no-nonsense technical content. This year is no exception.

You can check out the keynotes and SKY Talks on the DAC homepage. There are also a lot of tutorials there. If you want to cruise through the entire conference agenda, you can check out the conference program here. Whether your interests are technically oriented or business motivated, you’ll find things in this agenda you’ll want to see.

The virtual exhibits portion of DAC is taking shape. There will be new content to download, videos to watch, chats with staff from exhibiting companies and the opportunity to schedule private video meetings. Except for the tchotchkes, pretty much everything you would do at the live version with no sore feet at the end of the day and (potentially) no hangover the next morning.

You can register for DAC here. You’ll notice the rates are less expensive than prior years. That’s another benefit of a virtual conference. You’ll see an I Love DAC registration that provides access to the virtual exhibits, daily keynotes, SKY Talks, tech talks, analyst reviews, design-on-cloud presentations, RISC-V presentations and the daily virtual happy hour. The fee is an attractive $0. You can get more details about I Love DAC here. This is a lot of content for free and I want to spend the rest of the post providing some history on I Love DAC and how it evolved to the killer deal offered today. By the way, I believe the virtual happy hour is BYOB. That means break out the good stuff and catch up with colleagues. So, on to a bit of I Love DAC history…

It was April 2009. I was the VP of marketing at Atrenta (the company that brought you SpyGlass). We were brainstorming about ways to increase booth traffic. How do we get more people to DAC? How do we make sure all those suites in our huge booth would be occupied during the show? It was the same every year by the way. Going into the weekend before DAC, our suite appointments would typically be at around 70%. By noon on Monday (the first day of the show), we’d be over 100%. What that means is we started booking the bistro tables in the booth storefront for overflow meetings.

In spite of this track record, we still struggled to figure out how to get more folks to the show and to the Atrenta booth. There was Free Monday, which provided a free exhibit pass to attendees directly from DAC. That program had been on and off over the years, but it was only one day and the exhibits were open for four days in 2009. Then, I got a call from David Lin, the VP of marketing at Denali. Those who’ve been going to DAC for a while will remember the epic Denali parties and their ever-popular EDA Idol contest. Denali was the king of DAC after dark, so when David Lin called with an idea, you definitely wanted to listen.

David had a really great idea. Exhibitors at DAC had the opportunity to purchase exhibit passes for the entire week. David had also been talking with Scott Sandler at SpringSoft and he proposed that the three of us, Denali, SpringSoft and Atrenta jointly purchase 600 week-long exhibit passes and hand them out on a first-come, first-served basis to current and recently laid off EDA users. I thought this was a brilliant idea, so did Scott Sandler. And so, the I Love DAC movement was born.

I Love DAC was always sponsored by three companies. The original three did it for a while. It was VERY popular. We designed a unique I Love DAC badge each year and that became a collectible item. At Atrenta, we had some fun promoting the event with a video of someone who thought I Love DAC was a dating site. After a few years, the DAC Executive Committee brought I Love DAC into the formal program, and it became a DAC institution that continues to this day.

So that’s the I Love DAC story. I suspect there are plenty of SemiWiki readers who were around to see this all unfold. My memory isn’t what it used to be, or at least I don’t think so. Anyway, if I left out any important details please feel free to comment. Have fun at DAC this year.


Google Coming to Your Car
by Roger C. Lanctot on 06-07-2020 at 6:00 am


Casual observers of the automobile industry are quick to compare connected cars to “smartphones on wheels.” It’s a simple way of looking at things that makes some sense now that half of all cars produced in the world are made with a built-in cellular modem…or two. It belies the complexity of connecting cars, but maybe it’s an accurate way to look at things now that Google’s Android operating system is on its way to dominating in-dash infotainment systems.

Strategy Analytics estimates Android’s share of the global smartphone market at 86%. Android is a long way from that kind of dominance in the world of the connected car, but the die is cast. Android is steadily muscling aside Blackberry’s QNX operating system, legacy Microsoft offerings, various Linux distributions, and a handful of other bespoke systems.

Cars are different. Winning the infotainment system OS race is not a zero sum game. Unlike smartphones, cars have multiple operating systems, multiple networks, and multiple microprocessors. Still, Android’s arrival and impending hegemony in the automotive industry has massive implications.

Car makers are attracted to Android because it promises lower development costs. There are many more app developers working in Android, thanks in large part to that smartphone market dominance, which means they are both readily available and less expensive to hire.

Just like those smartphones, though, cars will require frequent software updates – and that’s a trick that is relatively foreign to the average auto maker. Only Tesla Motors has managed to make automotive software updates look easy – and Tesla isn’t even using Android…yet.

Android arrives at a point in time when creating and managing millions of lines of code is beginning to dominate the design process at most auto makers. The emerging and growing mountain of software code is driving massive hiring and pushing auto makers to seek out sources of savings.

In shifting to Android the industry is looking for development savings of 30%-40%, but there’s a catch. Not only will all that “relatively” inexpensive code require updates – it is also likely to demand greater processing power, memory, and storage capacity – in anticipation of dozens of software updates likely to occur over the estimated decade-long life of any given vehicle.

That’s a pretty big fly in the ointment. Under-resourced infotainment systems are a sore point that continues to plague the automotive industry. Cars are being sold and driven today that lack sufficient processing or memory resources to support their Android and, yes, non-Android systems.

In essence, the onset of Android is opening the automotive industry to a veritable ocean of clever code and related applications. It is also contributing to the auto industry’s pivot toward the rapid adoption of over-the-air (OTA) software update technology. That, in turn, is broadening the deployment of cloud-based services and applications including everything from hybrid navigation to digital assistants and edge computing.

It’s also introducing a wider range of failure points, cyber security vulnerabilities, and plain old software bugs. But a properly configured system, equipped with OTA update capability, can enable a car maker to maintain or extend the value of a vehicle or even avoid expensive in-person recalls.

Software-related recalls are a growing challenge for auto makers. An over-the-air software update capability may allow some auto makers to avoid expensive recalls. Recalls are a major inconvenience and, in most cases, a safety threat. Even auto makers hate recalls, which cost $300 per dealer recall visit on average.

Andy Gryc, co-founder of Third Law autotech marketing, was kind enough to compile recall statistics from the National Highway Traffic Safety Administration. Gryc’s recall analysis shows software-related issues have grown in number creating an increased financial exposure for auto makers and driving the adoption of over-the-air (OTA) updating technology.

Third Law recall research: http://www.thirdlawreaction.com/automotive-recalls-infographic-2019/

Multiple suppliers such as Harman International, Wind River, and Aurora Labs have stepped in with OTA solutions, as has the eSync Alliance. The 10-member eSync Alliance has rolled out a software developer kit to accelerate the adoption of OTA updates across the industry.

Excelfore OTA announcement: https://excelfore.com/blog/excelfore-esync-sdk-drives-low-cost-low-risk-integration-of-ota/

There’s just one problem. No OTA system, no matter how clever, can make up for insufficient processing capacity or memory. In their rush to pinch pennies, auto makers are putting themselves in a bind.

Google’s Android operating system is a resource hog. Nevertheless, many auto makers are tacking in Google’s direction, adding the Android Auto smartphone mirroring solution. Renault, Volvo, and General Motors are preparing to launch GAS – Google Automotive Services.

It appears that the auto industry is coming to terms with its FOG – Fear of Google. Resistance remains – as some auto makers worry they will lose control of their customers in a whole-hearted embrace of Google – but resistance need not be futile.

Not all auto makers are adopting Android. Tesla is perhaps the most notable exception, but there are many others. Most auto makers are seeking ways to collaborate with Google without surrendering control of their own platforms.

The bottom line is that Android does not travel light. It’s an OS with a lot of baggage. Auto makers can get to their destination, achieve their objectives, without Android. The growing volume of software code, though, calls for providing adequate hardware resources and OTA capabilities.

Cars need OTA update capability for map updates, cyber security patches and updates, and, perhaps most essentially, to add features, functions, and value to cars after the sale. Cars are increasingly defined by software, and cars defined by software will need connectivity and updates. Make sure your next car is connected and updatable. That’s a solid takeaway.


8 Key Tech Trends in a Post-COVID-19 World
by Ahmed Banafa on 06-05-2020 at 10:00 am


COVID-19 has demonstrated the importance of digital readiness, which allows business and daily life to continue as usual during pandemics. Building the necessary infrastructure to support a digitized world and staying current with the latest technology will be essential for any business or country to remain competitive in a post-COVID-19 world. [3]

The COVID-19 pandemic is the ultimate catalyst for digital transformation and will greatly accelerate several major trends that were already well underway before the pandemic. The #COVID-19 pandemic will have a lasting effect not only on our economy, but on how we go about our daily lives, and things are not likely to return to pre-pandemic norms. While this pandemic has forced many businesses to reduce or suspend operations, affecting their bottom line, it has helped to accelerate the development of several emerging technologies. This is especially true for innovations that reduce human-to-human contact, automate processes, and increase productivity amid social distancing. [2]

The following technologies stand to burgeon in a post-COVID-19 world:

1) Artificial intelligence (AI)
By 2030, #AI products will contribute more than $15.7 trillion to the global economy. A number of technological innovations, such as intelligent data processing and face and speech recognition, have become possible due to AI. [3]

Post-COVID-19, consumer behaviors won’t go back to pre-pandemic norms. Consumers will purchase more goods and services online, and increasing numbers of people will work remotely. As companies begin to navigate the post-COVID-19 world and economies slowly reopen, the application of artificial intelligence (AI) will be extremely valuable in helping them adapt to these new trends. [2]

AI will be particularly useful for those within retail and supply chain industries. Through machine learning and advanced data analytics, AI will help these companies detect new purchasing patterns and deliver a greater personalized experience to online customers. [2]

AI tools analyze large amounts of data to learn underlying patterns, enabling computer systems to make decisions, predict human behavior, and recognize images and human speech, among many other things. AI-enabled systems also continuously learn and adapt. These capabilities will be extremely valuable as companies confront and adapt to the next normal once this pandemic subsides. [2]

AI will increasingly contribute to forecasting consumer behavior, which has become hard to predict, and will help businesses organize effective logistics. Chatbots can provide client support 24/7, one of the ‘must-haves’ during the lockdown. [3]

2) Cloud computing
Fortunately, #cloud companies are weathering the pandemic stress-test caused by the sudden spike in workloads and waves of new, inexperienced users. Microsoft reports a 775% spike in cloud services demand from COVID-19. [6]

In a post-COVID-19 world, cloud technology is likely to see a surge in implementation across all types of apps. As the virus spread, people were forced to work from home (WFH) and online learning models were implemented, and the demand for cloud-based video conferencing and teaching skyrocketed. Various cloud service vendors have actively upgraded their functions and provided resources to meet this demand. Moving forward, businesses and educational institutions are likely to continue to make use of this technology. As demand continues to grow, implementation of this technology into mobile applications for easier access will be key; for the cloud, the sky is the limit. [2]

3) VR/AR
This pandemic increased the number of people using #VR headsets to play video games, explore virtual travel destinations and partake in online entertainment as they isolate at home. They’re also using this technology to seek human interaction through social VR platforms.

Businesses have also been experimenting with VR platforms to train employees, hold conferences, collaborate on projects, and connect employees virtually. For example, scientists worldwide have turned to VR platforms for molecular design, to collaborate on coronavirus research and potential treatments. Now that businesses and consumers know the extent to which this technology can be used, we are likely to see more virtual conferences and human interactions as our new normal sets in. [2]

4) 5G Networks
5G is acknowledged as the future of communication and the cutting edge for the entire mobile industry. Deployment of #5G networks will ramp between 2020 and 2030, making possible zero-distance connectivity between people and connected machines. This type of mobile internet connectivity will provide super-fast download and upload speeds (five times faster than 4G) as well as more stable connections. [3]

The industry buzz surrounding 5G technology and its impact on the next generation of connectivity and services has been circulating for the last year or so. Yet the technology still isn’t widely available, even though it holds the potential to revolutionize the way mobile networks function. Because of COVID-19, the 5G market may materialize sooner than expected. As large numbers of people have been forced to isolate, an increase in working and studying from home has been stressing networks and creating higher demand for bandwidth. People have now realized the need for faster data sharing and increased connectivity speeds, and an acceleration in the rollout of 5G technology to address the bandwidth and capacity challenges of existing infrastructure is more real than ever. [2]

5) Voice User Interface (VUI)
Consumers are becoming increasingly concerned that their mobile devices (which are touched more than 2,600 times per day) can spread #coronavirus. As the fear of spreading germs grows, so will the use of voice tech in the form of voice user interfaces (VUIs), which can reduce the number of times one touches any surface, including our mobile devices. Almost 80% of our communication is verbal, which is why voice usage will continue to increase and extend to other smart-home components implicated as major germ hubs. As more TVs and entertainment components, light switches, appliances, plumbing fixtures, and alarm systems incorporate voice control functionality, there will be less need to touch them.

6) Internet of Things (IoT)
IoT will enable us to predict and treat health issues in people even before any symptoms appear, from smart medication containers and connected monitors for the vital parts of your body that let the doctor check in, to smart forks that tell us if the food is healthy or not. Personalized approaches to prescribing medicines and applying treatments will appear (also referred to as precision medicine). In 2019 there were about 26 billion IoT devices, and statista.com estimates that their number will increase to 30.73 billion in 2020 and to 75.44 billion in 2025. The market value is about $150 billion, with an estimated 15 IoT devices per person in the US by 2030.

#IoT also fuels edge computing: data storage and computation move closer to the points of action, saving bandwidth and lowering latency. IoT will transform the user experience profoundly, providing opportunities that weren’t possible before. Gaining this experience may be forced by the pandemic, with people spending almost all their time at home. IoT devices that improve quality of life and make daily life more comfortable can become quite trendy. For example, telemedicine and IoT devices that help monitor people’s health indicators may grow in popularity. [3]

7) Cybersecurity
Cybersecurity is one of the vital technologies for organizations, especially those whose business processes are based on data-driven technologies. Much more attention is being paid to privacy and data protection since the European Union’s General Data Protection Regulation (GDPR) came into force and, more recently, the CCPA in California.

During the COVID-19 pandemic lockdown, with thousands of people forced to work remotely, large volumes of private data may become vulnerable, or at least not protected properly. This emerging challenge may give another incentive to the implementation of #cybersecurity practices. Cybercriminals have taken advantage of the fear factor of this virus to spread their own viruses; recent examples include fake COVID-19 domains, phishing emails promising virus protection kits, and even fake info about the canceled Summer Olympic Games. In addition, there has been an increase in ransomware attacks on health institutions and even hacking of research centers to steal any information about a possible COVID-19 vaccine. [3]

8) Blockchain Technology
The COVID-19 crisis has revealed a general lack of connectivity and data exchange built into our global supply chains, and future resiliency will depend on building transparent, interoperable, and connected networks. If there were any lingering doubts over the value of blockchain platforms in improving the transparency of businesses that depend on the seamless integration of disparate networks, COVID-19 has all but wiped them away. We should look at this healthcare crisis as a vital learning curve. #Blockchain is supporting efforts around the globe to battle the virus in the following four areas [4]:

1)    Tracking Infectious Disease Outbreaks
2)    Donations Tracking
3)    Crisis Management
4)    Securing Medical Supply Chains

Tracking Infectious Disease Outbreaks

Blockchain can be used for public health data surveillance, particularly for infectious disease outbreaks such as COVID-19. Increased blockchain transparency results in more accurate reporting and more efficient responses. Blockchain can also help develop treatments swiftly, as it allows rapid processing of data, enabling early detection of symptoms before they spread to the level of an epidemic. Additionally, this will enable government agencies to keep track of virus activity, patients, suspected new cases, and more. [5]

Donations Tracking
With the help of blockchain capabilities, donors can see where funds are most urgently required and can track their donations until they receive verification that their contributions have reached the victims. Blockchain would enable the general public to see how their donations have been used and to follow their progress. [5]

Crisis Management
Blockchain could also help manage crisis situations. Using smart contracts, global institutions such as the World Health Organization (WHO) could instantly alert the public about the coronavirus. Beyond alerts, blockchain could also provide governments with recommendations on how to contain the virus. It could offer a secure platform where all the concerned authorities, such as governments, medical professionals, health organizations, and the media, can update each other about the situation and prevent it from worsening further, without censorship. [5]
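
As an illustration of the smart-contract idea (a conceptual Python mock, not code for any real blockchain platform), a rule like "publish an alert once confirmed cases cross a threshold" becomes logic that every participant executes and verifies identically:

    # Conceptual mock of a smart contract: deterministic rules that every
    # node runs against the shared ledger state.
    class OutbreakAlertContract:
        def __init__(self, threshold):
            self.threshold = threshold
            self.cases = {}
            self.alerts = []

        def report_cases(self, region, confirmed):
            # Each report is a ledger transaction; the rule below fires
            # automatically, with no central party deciding when to alert.
            self.cases[region] = confirmed
            if confirmed >= self.threshold and region not in self.alerts:
                self.alerts.append(region)
                print(f"ALERT: {region} passed {self.threshold} confirmed cases")

    contract = OutbreakAlertContract(threshold=1000)
    contract.report_cases("Region A", 250)
    contract.report_cases("Region A", 1200)   # triggers the alert exactly once

Because the rule is code on a shared ledger rather than a decision by any single authority, the alert cannot be suppressed or issued selectively, which is the censorship-resistance property mentioned above.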

Securing Medical Supply Chains
Blockchain has already proven successful as a supply chain management tool in various industries, and it could be similarly beneficial in tracking and tracing medical supply chains. Blockchain-based platforms can be useful in reviewing, recording, and tracking demand, supplies, and the logistics of epidemic-prevention materials. As supply chains involve multiple parties, every record is verified by every party and is tamper-proof, while anyone can track the process. [5]
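
To make the tamper-evidence claim concrete, here is a minimal Python sketch of a hash-chained shipment log; it illustrates only the underlying mechanism, not any particular blockchain platform. Because each record embeds the hash of its predecessor, altering an earlier shipment silently invalidates every hash that follows, which is what lets all parties verify the shared history:

    import hashlib
    import json

    def record_hash(record, prev_hash):
        # Hash the record together with its predecessor's hash.
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(chain, record):
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        chain.append({"record": record, "prev": prev_hash,
                      "hash": record_hash(record, prev_hash)})

    def verify(chain):
        prev_hash = "0" * 64
        for entry in chain:
            if (entry["prev"] != prev_hash
                    or entry["hash"] != record_hash(entry["record"], prev_hash)):
                return False
            prev_hash = entry["hash"]
        return True

    chain = []
    append(chain, {"item": "N95 masks", "qty": 10000, "to": "distributor"})
    append(chain, {"item": "N95 masks", "qty": 10000, "to": "hospital"})
    print(verify(chain))                 # True: the history is intact
    chain[0]["record"]["qty"] = 500      # someone alters an earlier record...
    print(verify(chain))                 # ...and verification now fails

A real deployment adds distributed consensus on top of this structure, so no single party can quietly rewrite the chain and recompute the hashes.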

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Read more articles at: https://medium.com/@banafa

References

[1] https://www.weforum.org/agenda/2020/04/10-technology-trends-coronavirus-covid19-pandemic-robotics-telehealth/

[2] https://clearbridgemobile.com/five-emerging-mobile-trends-in-a-post-covid-19-world/

[3] https://www.sharpminds.com/news-entry/the-future-of-it-covid-19-reality-5-technology-trends/

[4] https://www.weforum.org/agenda/2020/05/why-covid-19-makes-a-compelling-case-for-wider-integration-of-blockchain/

[5] https://medium.com/datadriveninvestor/blockchain-technology-and-covid-19-c504fdc775ba

[6] https://www.zdnet.com/article/microsoft-cloud-services-demand-up-775-percent-prioritization-rules-in-place-due-to-covid-19/


Webinar: Hyperscale SoC Validation with Cloud-based Hardware Simulation Framework
by Daniel Nenni on 06-05-2020 at 6:00 am


S2C has been developing FPGA prototyping platforms since 2003, and, over time, its platforms have supported increasingly large and sophisticated FPGA prototyping projects with three key attributes: 1) scalable prototyping gate capacity, 2) a high-speed interface between the FPGA prototype and software running on a host computer, and 3) support for globally distributed users.  A natural evolution of these three attributes has led S2C to produce its latest FPGA prototyping platform, dubbed the “Prodigy Cloud System”.

WEBINAR REGISTRATION

Hyperscale SoC Validation Gate Capacity – Driven by customer demand for what S2C calls “Hyperscale SoC Validation”, S2C’s Prodigy Cloud System now supports very large SoC prototyping requirements up to 2 billion gates (“Hyper”) in a modularly scalable way … hence “Hyperscale”.  To achieve Hyperscale capabilities, S2C has harnessed the latest and largest Intel FPGA, the Stratix 10 GX 10M.  This Intel FPGA blows away all other FPGAs generally available today with an estimated usable capacity of 80M gates per FPGA.  The GX 10M is fabricated in Intel’s 14nm silicon, so it’s expected to run faster and consume less power.  Intel acknowledges in its GX 10M press release that “One market in particular has a critical interest in always using the largest available FPGAs: the ASIC prototyping and emulation market.”  Intel sees FPGA emulation and prototyping supporting a variety of system development tasks, including:

  • Algorithm development using real hardware
  • Early SoC software development prior to the chip’s manufacture
  • RTOS verification
  • Corner-case condition testing for both hardware and software
  • Regression testing on successive design iterations

The S2C Prodigy Cloud System comes in a standard server rack and can scale up to eight (8) Quad 10M Logic Systems, each with four (4) GX 10M FPGAs, so one server rack can house up to 32 GX 10M FPGAs.  At an estimated 80 million gates per FPGA, that works out to roughly 32 × 80M ≈ 2.56 billion gates, so the Prodigy Cloud System should easily support 2 billion gates of FPGA prototyping!  And if that’s not enough gate capacity, multiple server racks can be connected together.

Fast Interface to Simulation Environment – The second key attribute of the Prodigy Cloud System is an out-of-the-box hardware and software solution for applying large quantities of real-world test data in the form of bus traffic, communications traffic, video images, etc. to the FPGA prototype from your host computer.  This approach creates what S2C calls a “simulation infrastructure” that enables the user to connect the SoC hardware model in the FPGA to a simulation-like verification environment on the host computer.

S2C calls this option ProtoBridge, and it includes PCIe bridge and AXI master/slave logic that is compiled and downloaded into the FPGA together with the SoC prototype design.  The host computer connects to the FPGA prototype with a PCIe cable that supports up to 1 GB/s transfers, and ProtoBridge includes PCIe driver software and a set of C-API function calls to drive AXI bus transactions from the host computer.
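
As a purely illustrative mock (this is not S2C’s C-API, whose actual names and signatures are not reproduced here), the Python sketch below shows the shape of such a host-side flow: the testbench writes stimulus into the design’s AXI address space and reads results back for checking against a reference model:

    # Conceptual mock of a PCIe-attached AXI master; backed by a dict here
    # instead of real FPGA hardware.
    class AxiLink:
        def __init__(self):
            self.mem = {}

        def axi_write(self, addr, data):
            for i, byte in enumerate(data):
                self.mem[addr + i] = byte

        def axi_read(self, addr, length):
            return bytes(self.mem.get(addr + i, 0) for i in range(length))

    link = AxiLink()
    stimulus = bytes(range(16))
    link.axi_write(0x80000000, stimulus)        # push test data into the design
    readback = link.axi_read(0x80000000, 16)    # pull results back out
    assert readback == stimulus                 # compare against expected data

In the real system the class body would be replaced by calls into the ProtoBridge driver, but the testbench structure (write stimulus, run, read back, compare) stays the same.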

Global Remote Access and Control – The third key attribute of the Prodigy Cloud System is support for multiple globally distributed users.  Large SoC design teams today may be spread across multiple locations in the US, Europe, China, India, and Vietnam, so remote access to the FPGA prototyping resources is essential for an optimal ROI from the prototyping investment.

To address this key attribute, S2C has developed what it calls Prodigy Neuro hardware and software.  The Prodigy Neuro hardware, called the Prodigy Neuro Control Module, manages global power control to the FPGA hardware, clocks and resets, self-test, and monitoring of the Prodigy Cloud System FPGA board connections.

Prodigy Neuro Software provides centralized control of the Prodigy Cloud System hardware resources, as well as user and prototyping project management.  It includes a browser-based GUI for easy remote access to Prodigy Cloud System hardware for centralized resource control and multi-design management and monitoring.  Prodigy Cloud System hardware can be allocated to multiple projects running simultaneously, with 3-level permission control for multiple users.  Prodigy Neuro Software also provides FPGA prototyping hardware usage analytics, warnings of hardware faults, and auto-detection of hardware connections with instant messaging upon first check-in.

So, if you thought that your SoC designs were too large for FPGA prototyping, or you needed a shareable FPGA prototyping resource for a globally distributed design team and multiple FPGA prototyping projects, you should register for this webinar.  You will get a copy of the replay if you can’t attend the live broadcast.

WEBINAR REGISTRATION

Also Read:

WEBINAR: Prototyping With Intel’s New 80M Gate FPGA

S2C Delivers FPGA Prototyping Solutions with the Industry’s Highest Capacity FPGA from Intel!

AI Chip Prototyping Plan