
From Now to 2025 – Changes in Store for Hardware-Assisted Verification

by Daniel Nenni on 01-12-2022 at 6:00 am

Jean-Marie Brunet

Lauro Rizzatti recently interviewed Jean-Marie Brunet, vice president of product management and product engineering in the Scalable Verification Solution division at Siemens EDA, about why hardware-assisted verification is a must have for today’s semiconductor designs. A condensed version of their discussion is below.

LR: There were a number of hardware-assisted verification announcements in 2021. What is your take on these announcements?

JMB: Yes, 2021 was a year of major announcements in the hardware-assisted verification space.

Cadence announced a combination of emulation and prototyping focused on reducing the cost of verification by having prototyping take over tasks from the emulator when faster speed is needed.

Synopsys announced ZeBu-EP1, positioned as a fast-prototyping solution. It isn’t clear what the acronym means, but I believe it stands for enterprise prototyping. After several years of maintaining that ZeBu is the fastest emulator on the market, Synopsys launched a new hardware platform as a fast (or faster) emulator. Is it because ZeBu 4 is not fast enough? More to the point, what is the difference between ZeBu and HAPS?

In March 2021, Siemens EDA announced three new Veloce hardware platform products: Veloce Strato+, Veloce Primo and Veloce proFPGA. Each of these products addresses different verification tasks at different stages in the verification cycle. The launch answered a need for hardware-assisted verification to be a staged, progressive path toward working silicon. Customers want to verify their designs at each stage within the context of real workloads where time to results is as fast as possible without compromising the quality of testing.

In stage 1, blocks, IP and subsystems are assembled into a final SoC. At this stage, very fast compile and effective debug are needed, with less emphasis on runtime.

At stage 2, the assembled SoC becomes a full RTL description. Now, design verification requires a hardware platform that can run faster than the traditional emulator: one that needs less compilation and less debug but more runtime.

In stage 3, verification moves progressively toward system validation. Here it’s about full performance, where cabling the interconnect to the hardware allows the design to run as fast as possible.

LR: Let’s look at the question of tool capacity. Some SoC designs exceed 10-billion gates, making capacity a critical parameter for hardware platforms. A perplexing question has to do with capacity scalability. For example, does a complex, 10-billion gate design (one design) have the same requirements as 10, one-billion gate designs (multiple designs) in terms of usable emulation capacity?

JMB: This question always triggers intense discussions with our customers in the emulation and prototyping community. Let me try to explain why it’s so important. Depending on the customer, their total capacity needs may be 10-, 20- or 30-billion gates. In our conversations with customers, we then inquire about the largest design they plan to emulate. The answer depends on the platform they’re using. Today, the largest monolithic designs are in the range of 10- to 15-billion gates. For the sake of this conversation, let’s use 10-billion gates as a typical measure.

The question is, do they manage a single monolithic design of 10-billion gates in the same way they manage 10, one-billion gate designs? The two scenarios have equivalent capacity requirements, but not the same emulation complexity.

Emulating a 10-billion gate design is a complex task. The emulator must be architected to accommodate large designs from the ground up through the chip and subsystem to the system level including requirements at the software level.

A compiler that can map large designs across multiple chips and multiple chassis is necessary. A critical issue is the architecture that drives the emulation interconnect. If not properly designed and optimized, overall performance and capacity scaling drop considerably.

With off-the-shelf FPGAs as the functioning chips on the boards, the DUT is spread across the interconnected FPGAs, lowering the usable capacity of each FPGA. When multiple chassis are interconnected, the overall performance drops below that of one or a few FPGAs.

Synopsys positions its FPGA-based tools as the fastest emulator for designs in the ballpark of one-billion gates. The speed of the system clock is high because FPGAs are fast. When enough hardware is assembled to run 10-billion gates, an engineer ends up interconnecting large arrays of FPGAs that were never designed for this application. And typically, the interconnection network is an afterthought conceived to accommodate those arrays. This is different from a custom chip-based platform where the interconnection is designed as an integral part of the emulator.

Cadence claims support for very high capacity in the 19-billion gate range. The reality is that no customer is emulating that size of design. The key to supporting high-capacity requirements is the interconnect network. It doesn’t appear that the Palladium Z2 interconnect network is different from the network in Palladium Z1, which is known for capacity scaling issues. As a result, customers should ask if Palladium Z2 has the ability to map a 10-billion gate design reliably.

Today, Veloce Strato+ is the only hardware platform that can execute 10-billion gate designs in a monolithic structure reliably with repeatable results without suffering speed degradation.

The challenge concerns the scaling of the interconnect network. Some emulation architectures are better than others. Based on the roadmap taken by different vendors, future scaling will get even more challenging.

By 2025, the largest design sizes will be in the range of 25-billion gates or even more. If today’s engineering groups are struggling to emulate a design at 10-billion gates, how will they emulate 25 billion+ gates?

Siemens EDA is uniquely positioned to handle very large designs, reliably and at speed, and we continue to develop next-generation hardware platforms to stay ahead of the growing complexity and size of tomorrow’s designs.

LR: Besides the necessary capacity, what other attributes are required to efficiently verify complex, 10-billion gate designs?

JMB: Virtualization of the test environment is as important as capacity and performance.

In the course of the verification cycle, the DUT representation evolves from a virtual description (high level of abstraction) to a hybrid description that mixes RTL and virtual models, such as AFMs or QEMU. Eventually, it becomes a gate-level netlist. When an engineer is not testing a DUT in ICE (in circuit emulation) mode, the test environment is described at a high level of abstraction typically consisting of software workloads.

It’s been understood for a while that RTL simulation cannot keep up with execution of high-level abstraction models running on the server. The larger the high-level abstraction portion of the DUT, the faster the verification. The sooner software workloads are executed, the faster the verification cycle. This is the definition of a shift-left methodology. A virtual/hybrid/RTL representation is needed to run real software workloads on an SoC as accurately as possible and as fast as possible. An efficient verification environment allows a fast, seamless move from virtual to hybrid, from hybrid to RTL, and from RTL to gate.

The hybrid environment decouples an engineer from the performance bottleneck of full RTL, which supports much faster execution. In fact, hybrid can also support software development that is not possible in an RTL environment. In hybrid mode, the RTL portion of the DUT runs in the emulator and must interact with the parts of the DUT that run on the server. Here the connection between server and platform, or what we call co-model communication, becomes critical. If not architected properly, the overall performance is unacceptable. Instead of the emulator, the bottleneck is now the communication channel.

We have invested significant engineering resources to address this bottleneck. Our environment excels in virtual/hybrid mode because of our unique co-model channel technology.

Capacity, performance and virtualization are the key attributes to handle designs of 10+ billion gates. When designs hit 25+ billion gates in 2025, the communication channel efficiency becomes even more critical since hybrid emulation becomes prevalent in a wide range of applications.

LR: Thank you, Jean-Marie, for your perspectives and for explaining some of the little-known aspects of successful hardware emulation use.

Also Read:

DAC 2021 – Taming Process Variability in Semiconductor IP

DAC 2021 – Siemens EDA talks about using the Cloud

DAC 2021 – Joe Sawicki explains Digitalization


DAC 2021 – What’s Up with Agnisys and Spec-driven IC Development

by Daniel Payne on 01-11-2022 at 10:00 am


Walking the exhibit floors at DAC in December, I spotted the familiar face of Anupam Bakshi, Founder and CEO of Agnisys, so I stopped by the booth to get an update on his EDA company. My first question for him was about the origin of the company name, Agnisys, and I found out that Agni means Fire in Sanskrit, one of the five elements.

Agnisys at #58DAC

The company vision is the same today as it was at the founding: a tool flow going from specification to implementation, across design, verification, software and device drivers. Having a single source of truth on registers for all engineering groups to know and use is a core idea. IDesignSpec is their EDA tool launched 11 years ago, and the scope of the tool has only grown over time.

IDesignSpec

There are now resellers of Agnisys tools on all continents, the number of licenses has been going up, and the new trend is site licensing instead of just a handful of licenses on one project. When one IC design team starts using IDesignSpec, adjacent teams hear about the benefits and want to give it a try on their projects too.

Another EDA tool at Agnisys is called ISequenceSpec, released about three years ago. It helps engineers capture sequences as stimulus for verification, firmware and even post-silicon validation, and it can convert those sequences into UVM or C. Here’s where ISequenceSpec fits into a design flow:

ISequenceSpec

The newest EDA tool has taken a totally different approach to its introduction because it is being crowd-sourced; it’s called iSpec.ai. What’s unique is that this tool automatically converts English assertions into proper SystemVerilog Assertions (SVA) using Machine Learning (ML) techniques. The company learns what engineers think about when learning SVA, and users can give Yes (Green) or No (Red) feedback, leaving comments about the quality of the conversion. The tool was released about 2-3 months ago; existing customers became aware of it and started testing, and so far about 200 engineers have provided feedback.

iSpec.ai

They have even offered quizzes to see if engineers can answer questions about SVA with or without using iSpec.ai, which is fun and technical at the same time. So this tool is in a way similar to Google Translate, as it translates in both directions, English to SVA and SVA to English. The company plans to productize this web-based tool after a learning phase.

DVCon US 2022 is coming up in February, and Agnisys has a paper on the iSpec.ai tool, so consider attending that online event to see what progress has been made so far.

Co-located with DAC this year was the RISC-V conference, where Agnisys presented “A System Level Verification and Validation Environment using SweRV”. You can watch this 10-minute Lightning Talk on YouTube. SweRV is an open-source RISC-V core from Western Digital.

RISC-V Lightning Talk

Connecting all of the semiconductor IPs together in a system-level environment is something your team does either by hand or with some automation. Using SweRV as the processor, you can then connect tests at the IP or system level. Using the C to UVM interface, both levels can talk to each other: the processor knows C, while the other IPs understand UVM. So you can run your C program, and the tool then drives UVM transactions through SweRV.

Another new tool in 2021 is IDS-FPGA, now part of the IDesignSpec family, which helps FPGA design teams reduce development times through automated code generation, IP generators, and an integrated flow with FPGA vendor software. It supports Xilinx UltraScale+ IP-based design development and integrates with Xilinx Vivado and Intel Quartus Prime.

Summary

Agnisys has a 15-year history of providing its IDesignSpec tool, and it just keeps getting more robust each year. This company is one of the very few EDA vendors that actually demonstrates its tool live, running on a laptop, so it wasn’t just a PowerPoint presentation at DAC. I think engineers are really attracted to seeing an EDA tool running live because they are curious about how the GUI looks, how quickly it operates, and how intuitive the flow is.

Also read:

AI for EDA for AI

What the Heck is Collaborative Specification?

AUGER, the First User Group Meeting for Agnisys

 


Horizontal, Vertical, and Slanted Line Shadowing Across Slit in Low-NA and High-NA EUV Lithography Systems

by Fred Chen on 01-11-2022 at 6:00 am


EUV lithography systems continue to be the source of much hope for continuing the pace of increasing device density on wafers per Moore’s Law. Although EUV systems were originally supposed to help the industry avoid much multipatterning, it has not turned out that way [1,2]. The main surprise has been the rise of stochastic defects and variability [1,2], which challenge both dose and overlay control and have constrained sub-20 nm features to be printed with multipatterning assistance such as SALELE [3]. This has also accelerated the development of the next-generation High-NA EUV tools [4,5] to bring back the opportunity of avoiding multipatterning. On the other hand, High-NA tools have concerns of their own [4-6].

EUV technology requires a substantially different infrastructure from previous optical lithography. A fundamental reason is that it is based on reflective rather than transmissive optics. Even the mask needs to be built on a reflective multilayer substrate. This, in turn, has led to some distinct quirks in the EUV imaging process. Because the reflection is an inherently off-axis process, the illumination of the mask has some inherent asymmetry, as shown in Figure 1 [7].

Figure 1. EUV illumination of the mask is essentially a rotated off-axis angle across an arc-shaped slit. Illustration is based on Figure 1 in [7].

There is an arc-shaped slit, 26 mm across and ~1-2 mm thick (depending on design), through which a central illumination ray angle of 6 degrees is rotated azimuthally. As a result, features in the center of the exposure field are actually illuminated at different angles from features at the edge of the exposure field. Each angle produces a different effective “shadow,” which reflects the light’s propagation through and reflection by the multilayer substrate, as well as its double pass through the mask pattern [8]. Such shadowing can cause loss of image contrast (also known as fading) [9].

Figure 2. A particular illumination at the slit center is rotated at the slit edge. Illustration is based on Figure 9 in [10].

Consequently, the horizontal vs. vertical line shadowing behavior varies across the slit. The appropriate metric for the degree of shadowing is the larger of the two pole incident angles at the mask, measured in the direction perpendicular to the lines, for an ideal dipole illumination setup (targeting sin θ = 0.5 × wavelength/pitch at the wafer) defined at the slit center. Some results are shown in Figure 3 for horizontal and vertical lines. Low-NA (NA=0.33) and High-NA (NA=0.55) systems are plotted side by side.

Figure 3. Horizontal and vertical line shadowing vs slit position, for different pitches on both 0.33 and 0.55 NA systems.

There are several things to point out.

  1. In all cases, the smaller pitch has worse shadowing, i.e., a larger incident angle for one of the illumination poles compared to the other.
  2. The vertical line shadowing varies linearly across slit, because when the azimuthal angle flips sign going from one side of the slit to the other, light is still shining on one side of the line but casts a growing or diminishing shadow.
  3. The horizontal line shadowing is worse than the vertical line shadowing.
  4. High-NA tools do not necessarily provide relief from shadowing, particularly for vertical lines, at pitches targeted for High-NA.
  5. The doubling of demagnification in the High-NA tools from 4x to 8x causes equal shadowing at half the pitch for the latter.
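As an illustration of how the metric above works, here is a minimal sketch of the pole-angle calculation for horizontal lines at the slit center. It is not the simulation setup behind Figure 3, just a back-of-the-envelope model assuming the standard 13.5 nm EUV wavelength, the 6-degree chief ray angle at the mask, and 4x (Low-NA) or 8x (High-NA, in the relevant direction) demagnification; the function name is mine.

```python
import math

EUV_WAVELENGTH_NM = 13.5  # standard EUV wavelength
CHIEF_RAY_DEG = 6.0       # central illumination angle at the mask

def pole_angles_deg(wafer_pitch_nm, demag):
    """Incident angles at the mask of the two dipole poles, for
    horizontal lines at the slit center. The dipole targets
    sin(angle) = 0.5 * wavelength / pitch at the wafer; at the mask
    the sine is reduced by the demagnification factor."""
    sin_pole_mask = EUV_WAVELENGTH_NM / (2 * wafer_pitch_nm * demag)
    pole = math.degrees(math.asin(sin_pole_mask))
    # The 6-degree chief ray tilt adds to one pole angle and
    # subtracts from the other; the larger one sets the shadowing.
    return CHIEF_RAY_DEG - pole, CHIEF_RAY_DEG + pole

# 32 nm wafer pitch on a Low-NA (4x) system:
low, high = pole_angles_deg(32, demag=4)

# Doubling the demagnification to 8x gives identical pole angles at
# half the pitch, which is point 5 above.
low2, high2 = pole_angles_deg(16, demag=8)
```

Under these assumptions the 32 nm-pitch, Low-NA case gives poles near 3 and 9 degrees, so the 9-degree pole sets the shadowing; halving the pitch on an 8x system reproduces exactly the same angles.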

DRAM active areas (Figure 4) present an interesting special case, for they are neither horizontal nor vertical but slanted in between.

Figure 4. Shadowing for DRAM active area lines (angled at 14.5 degrees with respect to the horizontal).  

As may be expected, the shadowing for slanted lines has combined characteristics of horizontal and vertical lines. The High-NA tool does not necessarily provide less shadowing than the Low-NA tool, but the range of shadowing across slit is less. Low-NA tools already show significant shadowing for 16-nm half-pitch, while High-NA tools do so for 10-nm half-pitch.

References

[1] https://m.blog.naver.com/PostView.naver?blogId=jkhan012&logNo=222410469787&categoryNo=30&proxyReferer=https:%2F%2Fwww.linkedin.com%2F

[2] D. De Simone and G. Vandenberghe, “Printability study of EUV double patterning for CMOS metal layers,” Proc. SPIE 10957, 109570Q (2019).

[3] K. Sah et al., “Defect characterization of EUV Self-Aligned Litho-Etch Litho-Etch (SALELE) patterning scheme for advanced nodes,” Proc. SPIE 11611, 116112H (2021).

[4] E. van Setten et al., “High NA EUV lithography: Next step in EUV imaging,” Proc. SPIE 10957, 1095709 (2019).

[5] https://www.imec-int.com/en/articles/high-na-euvl-next-major-step-lithography

[6] A. H. Gabor et al., “Effect of high NA “half-field” printing on overlay error,” Proc. SPIE 11609, 1160907 (2021).

[7] P. C. W. Ng et al., “A Fully Model-Based Methodology for Simultaneously Correcting EUV Mask Shadowing and Optical Proximity Effects with Improved Pattern Transfer Fidelity and Process Windows,” Proc. SPIE 7520, 75200S (2009).

[8] E. van Setten et al., “Multilayer optimization for high-NA EUV mask3D suppression,” Proc. SPIE 11517, 115170Y (2020).

[9] C. van Lare, F. Timmermans, and J. Finders, “Mask-absorber optimization: the next phase,” J. Micro/Nanolith. MEMS MOEMS 19, 024401 (2020).

[10] H. Tanabe, “Classification of EUV masks based on the ratio of the complex refractive index k/(1-n),” Proc. SPIE 11854, 11581416 (2021).

This article originally appeared in LinkedIn Pulse: Horizontal, Vertical, and Slanted Line Shadowing Across Slit in Low-NA and High-NA EUV Lithography Systems


Can you Simulate me now? Ansys and Keysight Prototype in 5G

by Shawn Carpenter on 01-10-2022 at 10:00 am


Ansys and Keysight wanted to see if they could answer a question: if we put virtual cellphones in different locations in a city, can we predict what kind of 5G signal we’re going to get in those locations? To find out, they created and tested a detailed virtual model of a city, including a variety of 5G antennas, receivers, and transmitters typically found in a high-density urban area. It turns out, we can.

The team used Ansys HFSS to construct 5G MIMO base station antenna array models and handset antenna models for a 28 GHz, high-band system and placed them in different locations around a realistic city model. From there, they used HFSS SBR+ to figure out what was really happening between the antennas by using physics to model the propagation of signals between the base station and the handsets.

5G Signal propagation through complex city environments is modeled with a Shooting and Bouncing Rays (SBR) electromagnetic field solver. These signal propagation simulations are linked to detailed Ansys HFSS phased array base station and handset antenna system models.

Together, Ansys and Keysight tested a proof of concept for an accurate, physics-based virtualized process for understanding 5G physical channel behavior. The prototype was a true partnership between Ansys and Keysight capabilities. Ansys methodology was leveraged to model the physical layer—virtual antennas, scattering, and their coupling tendencies—on top of Keysight’s method for modeling the actual 5G radio architecture and beam selection process.

Ansys HFSS and HFSS SBR+ are used to compute the physical channel response for an installed 5G base station array and user equipment antennas, and Keysight SystemVue extracts the time domain channel properties, recreating user signal angle of arrival for MIMO beamforming.

Eventually, virtual modeling will take the place of the “hunt and peck” method of installing and adjusting 5G base stations to maximize coverage. For detailed information on the proof of concept, check out our recent webinar, 5G mm-wave Physical Channel Modeling with EM Physics.

What problems can we solve with 5G virtual modeling?

5G promises mid-band and high-band channels capable of delivering massive quantities of data at blindingly fast speeds. 5G radio equipment vendors and wireless service providers quote impressive capabilities for high-band systems at 28 and 39 GHz. The catch is that these systems are only cost effective in high-population areas, like city centers.

There are four main factors that complicate 5G systems beyond what we were seeing at lower-band frequencies:

More Drops
At high frequencies, the farther a handset is from the base station, the more signals are dropped, and the drop-off is roughly ten times faster in 5G. A highly populated area requires many more access points to service all subscribers.

Low Signal Penetration
Signals have a hard time penetrating common building materials at high bands in the mm-wave frequencies. A 4G mobile phone inside a building can receive signal from a cellular tower miles away because these lower-frequency, longer-wavelength signals can penetrate the structures and surfaces between the phone and the tower. At the higher-frequency 5G bands, buildings go from acting like signal sponges to acting like mirrors. At 28 GHz, a popular 5G high-band frequency, plate glass (1.5-centimeter thickness) reduces signal penetration by a factor of one thousand. Thicker cement and brick attenuate the signal even more. The mirror effect of exterior surfaces creates another problem: delay spread. With signals bouncing everywhere, receivers get delayed copies of the signal, making receiver design much more complicated.
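For readers more used to decibels, the factor-of-one-thousand figure converts directly. This is a quick illustrative helper, not something from the webinar:

```python
import math

def power_ratio_to_db(ratio):
    """Convert a power attenuation ratio to decibels."""
    return 10 * math.log10(ratio)

# The plate-glass figure quoted above: 1000x less signal power,
# i.e. a 30 dB penetration loss.
glass_loss_db = power_ratio_to_db(1000)
```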

Distance Loss
5G systems operate using antennas that concentrate signal energy in spot beams to overcome the signal distance losses that increase more quickly than in 4G. It’s critical to identify the right locations for access points such that every subscriber is covered with minimal overlap. It’s possible to test real-world installations and locations, but it’s time consuming and expensive. High-frequency, high-bandwidth measurement equipment is considerably more expensive, even cost prohibitive at times.
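The frequency dependence behind those distance losses can be sketched with the textbook Friis free-space path loss formula. This generic model is my own illustration (the Ansys HFSS SBR+ solver also accounts for reflections and scattering, which free-space loss ignores), and the example frequencies are simply representative mid- and high-band values:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis): 20*log10(4*pi*d*f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# The same 100 m link at a 4G-style mid-band and a 5G high-band frequency:
loss_low = fspl_db(100, 2.8e9)   # roughly 81 dB
loss_high = fspl_db(100, 28e9)   # roughly 101 dB
# A 10x jump in frequency always costs 20 dB (a factor of 100 in power),
# which is one reason high-band cells must be small and carefully placed.
```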

Bureaucratic Delay

Getting a permit from a city council or other governing body to install a 5G antenna system can be an arduous process. Nobody wants to get approved for 10 mount locations and find out that 5 of them are non-optimal, so they need to go back to the drawing board and re-apply with the city. The key is to identify the right number of access points, in the right locations, to offer consistent coverage with the fewest number of access points possible.

All of these issues can be solved with a virtualized process. Can we let the computer show us how well we’re serving a city’s subscribers? Ansys and Keysight say yes.

Animation of the E-fields of a 28 GHz signal with 400 MHz bandwidth traveling from a phased array base station model into a city environment. The signal bounces off the street (bottom) and the wall of a facing building (right). Single frequency electric field is shown in the top left for 2 cut planes.

What does the future of the electronics industry look like?

The short answer: partnerships. After many years of competing, Ansys and Keysight see more ways to move the industry forward by working as a complementary pair.

“Between Ansys and our partners, we stand a chance at creating the first digital twin of a 5G network that can cooperate with an actual, living network,” said Shawn Carpenter, Program Director 5G & Space at Ansys.

Using the existing prototype, Ansys and Keysight can tell you what signals are transmitted and received, but it’s much harder to address the five or six network layers that might be involved when a subscriber pulls out their smart phone to use a navigational app. How do we identify the shortest route across the network to the Cloud? How do we get data to the Cloud while maintaining data integrity? What delays do we expect in getting the data our handset requests?

Many of the data communications issues are simply outside Ansys HFSS’ purview. The environment we use for our electromagnetics modeling is fantastic for modeling antenna systems or radio frequency components, but it’s not designed to model complete cities with interconnected cars driving up and down the streets, drones flying through it, and aircraft flying over the top.

In 2020, Ansys acquired Analytical Graphics Incorporated (AGI), a specialist in multi-domain mission engineering. AGI has incredible capabilities that extend into the 5G space too. This prototype included a few subscribers, but the real world is a lot denser, and AGI’s know-how will assist in evaluating complex networks and simulating at scale. AGI also partners with Scalable Network Technologies, a master at answering these kinds of questions. Just recently, Scalable Networks was acquired by Keysight. To entangle the knot even further, AGI already has an existing interface to these network modeling tools inside of its STK product. Between Ansys, Keysight, AGI, Scalable Network Technologies, and our other complementary partners, we have what we need to simulate and emulate at scale, and we’ll continue to polish our workflow integration.

Also Read

Cut Out the Cutouts

Is Ansys Reviving the Collaborative Business Model in EDA?

A Practical Approach to Better Thermal Analysis for Chip and Package


IBM at IEDM

by Scotten Jones on 01-10-2022 at 6:00 am


IBM transferred their semiconductor manufacturing to GLOBALFOUNDRIES several years ago but still maintains a multibillion-dollar research facility at Albany Nanotech. IBM is very active at conferences such as IEDM and appears to have a good public relations department because they get a lot of press.

At the Litho Workshop in 2019, I heard an IBM presentation from the Albany research group explaining that IBM had to have the research line because it needed state-of-the-art technology for the processors that run its computers. I personally question this rationale; the Albany research group collaborated with Samsung on the 5nm process Samsung put into production. I estimate that Samsung’s 5nm process, compared to TSMC’s 5nm process, has 1.69x the power consumption (worse), 0.64x the performance (worse), and 0.72x the density (worse). I am sure there are special features in the process to support IBM, but I am also sure the same features could be implemented in the TSMC process without a multibillion-dollar research investment. I also thought it was interesting that they said that while developing the process they turned up the EUV dose until they got good yield, and then they transferred it to Samsung expecting Samsung to reduce the EUV dose. When Samsung began ramping its 5nm process, there were industry rumors that Samsung couldn’t get enough wafers through its EUV tools (high EUV dose leads to low throughput) and that yields were low.

IBM also makes a big splash in the mainstream press every few years with some new development, but in my opinion a lot of these developments don’t live up to the hype. For example, in early 2021 IBM announced the development of a 2nm technology, but as I have previously written it is more like TSMC’s 3nm process than 2nm, and unlikely to be competitive versus expected 2nm processes from Intel and TSMC. You can read my 2nm article here: https://semiwiki.com/semiconductor-services/ic-knowledge/298875-is-ibms-2nm-announcement-actually-a-2nm-node/

This is not to say that IBM doesn’t do important research; years ago they were responsible for many key industry innovations, including copper metallization. I just question whether a multibillion-dollar semiconductor research facility makes sense for a company that doesn’t make semiconductors.

In this article I will discuss three IBM papers from IEDM.

Vertical-Transport Nanosheet Technology for CMOS Scaling beyond Lateral-Transport Devices

In my opinion this paper is another example of an IBM announcement I don’t expect to live up to the hype. (Author’s note: this work was done in cooperation with Samsung.) The mainstream media has already published about this “breakthrough” as if it will be a production solution.

Figure 1 illustrates the Vertical-Transport Nanosheet (VTFET) process.

Figure 1. Vertical-Transport Nanosheet (VTFET) process.

The basic idea here is to make nanosheets but rather than in the horizontal direction, to turn them into the vertical direction. In the paper a vertical nanosheet is compared to a FinFET and shown to offer better performance and area. I see two issues with this.

First, my understanding is that vertical transistors are very favorable for SRAM usage, where the interconnect needs are simple and regular, but don’t work well for random logic designs with complex interconnect needs. Imec has previously shown some very interesting vertical SRAM work, although it doesn’t appear to have gained any traction in the industry. With the advent of chiplets, a simple SRAM process that offers superior density makes a lot of sense. But once again, for logic use the vertical transistor area would likely go up a lot to accommodate the interconnect requirements.

The second issue I see is that it is being compared to FinFETs. The transition away from FinFETs to stacked horizontal nanosheets (HNS) is already under way. HNS offer density and performance advantages over FinFETs, but even more importantly offer a long-term scaling path. HNS can improve performance by stacking more sheets vertically, and they also open the opportunity to introduce a dielectric wall, creating an Imec innovation called Forksheets with reduced n-to-p spacing. Beyond this, stacking n and p HNS in a 3D-CMOS/CFET architecture offers more scaling with zero horizontal n-to-p spacing. Beyond HNS, the sheets can potentially be replaced with 2D materials, providing even more scaling. Drive current, and therefore performance, of vertical devices is driven by the fin size, and I don’t see how these devices can scale the way HNS can. I believe this is why the industry has chosen HNS as the successor to FinFETs: Samsung is already trying to ramp an HNS process (Samsung calls it Multibridge), Intel is planning HNS (Intel calls them RibbonFETs) for 2024, and TSMC has published HNS work and is widely expected to adopt them at 2nm (although they haven’t formally announced their 2nm process technology selection).

Critical Elements for Next Generation High Performance Computing Nanosheet Technology

In my view this paper is a lot more interesting than the previous one because it is addressing issues with the HNS technology that all the major leading edge logic suppliers are facing. IBM has done a lot of good work on HNS in the past and this paper builds on that.

There are two HNS issues addressed in this paper.

The first issue is that pFET mobility is poor for HNS. IBM has previously described two techniques to improve pFET mobility: one is to trim back the channel after release and deposit a SiGe cladding layer; the other is to fabricate the channels on a strain-relaxed buffer layer.

In this paper SiGe channels were formed by depositing lower Ge content channels over higher Ge content sacrificial layers when the original nanosheet stack is deposited. The difference in Ge content is to enable the selective release etch, to etch out the sacrificial films and leave the channels intact. The SiGe channel provides improved mobility, improved performance, and greater reliability.

Figure 2 illustrates the SiGe channel HNS pFET.

Figure 2. SiGe channel HNS pFET.

The second issue addressed here is how to achieve multiple, uniform threshold voltages (Vts) for HNS. For FinFETs the fin-to-fin distance is relatively wide and multiple Vts can be achieved by depositing and selectively removing multiple work function metals. With HNS the sheet-to-sheet (Tsus) spacing is so small that there isn't enough room for a full stack of work function metals. The metals also tend to be thicker on the outside of the nanosheet stack and thinner in between the nanosheets, leading to non-uniform Vts.

IBM pioneered the use of dipoles to control Vt over a decade ago, and that technique is now getting a lot of attention for HNS because dipoles can be created by doping the high-k dielectric and don't require the extra thickness that multiple work function metals do. Dipoles can also fix the Vt non-uniformity issue.

Figure 3 illustrates how work function metals can lead to non-uniform Vts and how volumeless dipoles fix the problem.

Figure 3. Work function metal versus dipoles for Vt control. (a) A pure-metal multi-Vt scheme, which can cause large Vt non-uniformity for high-nVt and high-pVt devices, and (b) a volumeless multi-Vt scheme, which reduces nWFM thickness and shares the metals to improve Vt uniformity.

Gate-Last I/O Transistors based on Stacked Gate-All-Around Nanosheet Architecture for Advanced Logic Technologies

The third paper I wanted to discuss is another paper looking at HNS issues.

Another challenge in HNS implementation is how to create I/O transistors that can operate at higher voltage. In this paper a gate-last process flow creates two different gate oxide thicknesses with a combination of deposited oxide and novel selective oxidation. The selective oxidation creates thick and thin oxides that add to the deposited oxide. The key to this technique is that grown oxide consumes silicon during oxidation, so the thicker grown oxide consumes more silicon than the thin grown oxide, opening up the sheet-to-sheet spacing (Tsus) to accommodate the thicker oxide.
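The silicon-consumption effect can be estimated with the standard thermal-oxidation rule of thumb: roughly 0.44 nm of silicon is consumed for every 1 nm of SiO2 grown. The oxide thicknesses below are invented for illustration, not taken from the paper, but they show why the thick-oxide device naturally opens up more Tsus.

```python
# Back-of-the-envelope: thermal oxidation consumes ~0.44 nm of silicon
# per 1 nm of SiO2 grown (standard rule of thumb). Oxide thicknesses
# here are illustrative, not from the paper.

SI_CONSUMED_PER_NM_OXIDE = 0.44

def tsus_opened(grown_oxide_nm: float) -> float:
    """Silicon thickness consumed from a sheet surface, i.e. spacing freed."""
    return grown_oxide_nm * SI_CONSUMED_PER_NM_OXIDE

thick_io_oxide_nm = 4.0   # hypothetical I/O device grown oxide
thin_core_oxide_nm = 1.0  # hypothetical core device grown oxide

extra = tsus_opened(thick_io_oxide_nm) - tsus_opened(thin_core_oxide_nm)
print(f"thick-oxide device frees ~{extra:.2f} nm more Tsus per surface")
```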

Figure 4 illustrates thick and thin gate oxide HNS devices and the increased Tsus that accommodates the thick oxide.

Figure 4. Thick and thin gate oxide HNS devices with increased Tsus for the thick-oxide I/O devices.

Conclusion

Despite the mainstream media hype about IBM's vertical-transport nanosheet announcement at IEDM, we believe IBM's work on perfecting HNS processes is more likely to have an impact on the industry. The pFET channel mobility, volumeless multi-Vt, and high-voltage I/O solutions address problems the industry is currently wrestling with in the FinFET-to-HNS transition.

Related Blog


Your Smart Device Will Feel Your Pain & Fear

Your Smart Device Will Feel Your Pain & Fear
by Ahmed Banafa on 01-09-2022 at 6:00 am


What if your smart device could empathize with you? The evolving field known as affective computing is likely to make it happen soon. Scientists and engineers are developing systems and devices that can recognize, interpret, process, and simulate human affects or emotions. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While its origins can be traced to longstanding philosophical inquiries into emotion, a 1995 paper on affective computing by Rosalind Picard catalyzed modern progress.

The more smart devices we have in our lives, the more we are going to want them to behave politely and be socially smart. We don’t want them to bother us with unimportant information or overload us with too much information. That kind of common-sense reasoning requires an understanding of our emotional state. We’re starting to see such systems perform specific, predefined functions, like changing in real time how you are presented with the questions in a quiz, or recommending a set of videos in an educational program to fit the changing mood of students.

How can we make a device that responds appropriately to your emotional state? Researchers are using sensors, microphones, and cameras combined with software logic. A device with the ability to detect and appropriately respond to a user’s emotions and other stimuli could gather cues from a variety of sources. Facial expressions, posture, gestures, speech, the force or rhythm of key strokes, and the temperature changes of a hand on a mouse can all potentially signify emotional changes that can be detected and interpreted by a computer. A built-in camera, for example, may capture images of a user. Speech, gesture, and facial recognition technologies are being explored for affective computing applications.

Just looking at speech alone, a computer can observe innumerable variables that may indicate emotional reaction and variation. Among these are a person’s rate of speaking, accent, pitch, pitch range, final lowering, stress frequency, breathlessness, brilliance, loudness, and discontinuities in the pattern of pauses or pitch.
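Two of the simplest cues listed above, pitch and loudness, can be pulled straight out of a raw audio buffer. The sketch below is a minimal illustration using a synthetic tone in place of recorded speech; real affective-computing pipelines use far richer feature sets and robust pitch trackers.

```python
import numpy as np

# Minimal prosodic feature extraction: pitch via autocorrelation and
# loudness via RMS. A synthetic 220 Hz tone stands in for a voice sample.

def pitch_hz(samples: np.ndarray, sample_rate: int) -> float:
    """Estimate fundamental frequency from the first autocorrelation peak."""
    ac = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    # Skip the first 20 lags so we don't pick the trivial peak at lag 0.
    lag = int(np.argmax(ac[20:])) + 20
    return sample_rate / lag

def loudness_rms(samples: np.ndarray) -> float:
    """Root-mean-square amplitude, a crude loudness proxy."""
    return float(np.sqrt(np.mean(samples ** 2)))

sr = 16000
t = np.arange(4000) / sr                      # 0.25 s of audio
voice = 0.5 * np.sin(2 * np.pi * 220 * t)     # synthetic "voice"
print(f"pitch ~ {pitch_hz(voice, sr):.0f} Hz, loudness ~ {loudness_rms(voice):.3f}")
```

In a real system these scalars would be computed per frame and tracked over time, since it is the *changes* in pitch range, rate, and loudness that carry emotional information.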

Gestures can also be used to detect emotional states, especially when used in conjunction with speech and face recognition. Such gestures might include simple reflexive responses, like lifting your shoulders when you don’t know the answer to a question. Or they could be complex and meaningful, as when communicating with sign language.

A third approach is the monitoring of physiological signs. These might include pulse and heart rate or minute contractions of facial muscles. Pulses in blood volume can be monitored, as can what's known as galvanic skin response. This area of research is still relatively new, but it is gaining momentum and we are starting to see real products that implement these techniques.

Source: galvanic skin response, Explorer Research

Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. Some researchers are using machine learning techniques to detect such patterns.
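As a toy illustration of that pattern-recognition step, here is a nearest-centroid classifier over hand-built (pitch, loudness) feature vectors. The features, labels, and numbers are all invented for illustration; production systems learn from large labeled corpora with far more sophisticated models.

```python
import math

# Nearest-centroid sketch: average each label's training vectors, then
# assign a new vector to the closest centroid. All data is invented.

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def train(samples):
    """samples: {label: [feature_vector, ...]} -> {label: centroid}"""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, vec):
    return min(model, key=lambda label: math.dist(model[label], vec))

training = {
    "calm":    [[180, 0.20], [190, 0.25], [175, 0.22]],   # (pitch Hz, loudness)
    "excited": [[260, 0.60], [280, 0.70], [250, 0.65]],
}
model = train(training)
print(classify(model, [265, 0.62]))  # prints: excited
```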

Detecting emotion in people is one thing. But work is also going into computers that themselves show what appear to be emotions. Already in use are systems that simulate emotions in automated telephone and online conversation agents to facilitate interactivity between human and machine.

There are many applications for affective computing. One is in education. Such systems can help address one of the major drawbacks of online learning versus in-classroom learning: the difficulty faced by teachers in adapting pedagogical situations to the emotional state of students in the classroom. In e-learning applications, affective computing can adjust the presentation style of a computerized tutor when a learner is bored, interested, frustrated, or pleased. Psychological health services also benefit from affective computing applications that can determine a client’s emotional state.

Robotic systems capable of processing affective information can offer more functionality alongside human workers in uncertain or complex environments. Companion devices, such as digital pets, can use affective computing abilities to enhance realism and display a higher degree of autonomy.

Other potential applications can be found in social monitoring. For example, a car might monitor the emotion of all occupants and invoke additional safety measures, potentially alerting other vehicles if it detects the driver to be angry. Affective computing has potential applications in human-computer interaction, such as affective “mirrors” that allow the user to see how he or she performs. One example might be warning signals that tell a driver if they are sleepy or going too fast or too slow. A system might even call relatives if the driver is sick or drunk (though one can imagine mixed reactions on the part of the driver to such developments). Emotion-monitoring agents might issue a warning before one sends an angry email, or a music player could select tracks based on your mood. Companies may even be able to use affective computing to infer whether their products will be well-received by the market by detecting facial or speech changes in potential customers when they read an ad or first use the product. Affective computing is also starting to be applied to the development of communicative technologies for use by people with autism.

Many universities have done extensive work on affective computing. One resulting project, and a good starting point, is the galvactivator: a glove-like wearable device that senses a wearer's skin conductivity and maps the values to a bright LED display. Increases in skin conductivity across the palm tend to indicate physiological arousal, so the display glows brightly. This has many potentially useful purposes, including self-feedback for stress management, facilitation of conversation between two people, and visualizing aspects of attention while learning. Along with the revolution in wearable computing technology, affective computing is poised to become more widely accepted, and there will be endless applications for it in many aspects of life.
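The galvactivator's core mapping is simple enough to sketch: clamp a skin-conductance reading into a plausible range, then scale it linearly onto an LED brightness. The microsiemens bounds below are illustrative guesses, not taken from the actual device.

```python
# Hedged sketch of the galvactivator idea: skin conductance in, LED
# brightness (0-255) out. The 1-20 microsiemens range is illustrative.

def conductance_to_brightness(microsiemens: float,
                              lo: float = 1.0,
                              hi: float = 20.0) -> int:
    """Linearly map [lo, hi] uS onto [0, 255], clamping out-of-range input."""
    clamped = max(lo, min(hi, microsiemens))
    return round(255 * (clamped - lo) / (hi - lo))

print(conductance_to_brightness(1.0))    # relaxed -> dim
print(conductance_to_brightness(20.0))   # aroused -> full brightness
```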

One future application will be the use of affective computing in Metaverse applications, humanizing avatars and adding emotion as a fifth dimension, opening limitless possibilities. But all of these advances, racing to make machines more human, will come with challenges, namely SSP (Security, Safety, Privacy), the three pillars of the online user. We need to make sure all three pillars are protected and well defined. That is easier said than done, but clear guidelines on what data is collected, where it goes, and who will use it will speed acceptance of affective computing hardware and software, without replacing physical pain with the mental pain of fearing for the privacy, security, and safety of our data.

References

https://www.linkedin.com/pulse/20140424221437-246665791-affective-computing/

https://www.linkedin.com/pulse/20140730042327-246665791-your-computer-will-feel-your-pain/


Podcast EP56: sureCore Memory, From Ultra-Low Power to Ultra-Low Temperature

Podcast EP56: sureCore Memory, From Ultra-Low Power to Ultra-Low Temperature
by Daniel Nenni on 01-07-2022 at 10:00 am

Dan is joined by Paul Wells, CEO of sureCore. Paul describes a variety of new and innovative applications for sureCore memory products, including ultra-low power applications for consumer connected devices and new applications for quantum computing.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Identity and Data Encryption for PCIe and CXL Security

Identity and Data Encryption for PCIe and CXL Security
by Tom Simon on 01-07-2022 at 6:00 am

Security for Cloud Applications

Privacy and security have always been a concern when it comes to computing. In prior decades for most people this meant protecting passwords and locking your computer. However, today more and more users are storing sensitive data in the cloud, where it needs to be protected at rest and while in motion. In a Synopsys webinar Dana Neustadter, Senior Marketing Manager for Security IP, cites figures from Skyhigh Networks that show as much as 21% of files uploaded to file sharing services contain sensitive data, such as medical, financial or personal information.

While we all assume that data centers and related infrastructure are secure, if you have ever watched a video of a "penetration tester" you will see just how easy it is for bad actors to physically access some sites. If you want to see one of these videos, search for "pen tester" on YouTube. Fortunately, the industry is responding to this issue by adding security to specifications such as PCIe and CXL (Compute Express Link). These additions go a long way toward meeting the requirements of new laws and regulations that mandate system security wherever sensitive data is present.

Security for data in motion in PCIe and CXL, of course, depends on proper on-chip security within SoCs. A trusted execution environment should offer power-on, runtime and power-off security through a Hardware Security Module (HSM). The real key to PCIe and CXL security is the addition of an Integrity and Data Encryption (IDE) component. In the Synopsys webinar, Dana does a thorough job of describing the function and operation of an IDE in conjunction with authentication and key management. The PCIe 5.0/6.0 and CXL 2.0/3.0 specifications call for this additional functionality to afford increased security.

Security for Cloud Applications

The IDE is intended to sit within the PCIe Transaction Layer. This is a critical aspect of the design, because while added security is an essential requirement, it needs to have minimal impact on latency and performance. Right now, the specs allow for IDEs in the handling of TLP streams; FLIT mode will be included in the PCIe 6.0 release. Packets are protected by AES-GCM with 256-bit keys and 96-bit MAC tags. Ideally the addition of IDE should be plug and play, and this is the case for the Synopsys PCIe IDE and Controller IP. FIPS 140-3 certification is also gaining importance in the industry and should be supported through a certification test mode.
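To make the IDE dataflow concrete, here is a toy encrypt-then-authenticate sketch: encrypt a TLP payload, append a 96-bit integrity tag, and verify the tag before decrypting. To stay stdlib-only, this uses a SHA-256 keystream and a truncated HMAC as stand-ins; real PCIe/CXL IDE hardware uses AES-GCM, so treat this as an illustration of the shape of the mechanism, not of the actual cipher.

```python
import hashlib
import hmac
import secrets

TAG_BYTES = 12  # 96-bit MAC tag, matching the IDE spec's tag size

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream from SHA-256 (stand-in for AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Encrypt, then append a truncated 96-bit integrity tag."""
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()[:TAG_BYTES]
    return ct + tag

def open_(key: bytes, nonce: bytes, sealed: bytes) -> bytes:
    """Verify the tag in constant time, then decrypt; reject on mismatch."""
    ct, tag = sealed[:-TAG_BYTES], sealed[-TAG_BYTES:]
    expect = hmac.new(key, nonce + ct, hashlib.sha256).digest()[:TAG_BYTES]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("integrity check failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key, nonce = secrets.token_bytes(32), secrets.token_bytes(12)
sealed = seal(key, nonce, b"TLP payload")
print(open_(key, nonce, sealed))  # -> b'TLP payload'
```

The point the webinar makes still holds here: every packet pays the tag overhead, which is why the IDE must live in the Transaction Layer and be engineered for minimal latency impact.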

CXL operation and support mirrors that of PCIe. Dana includes the flow for both PCIe and CXL when IDE is included. Of course, with CXL there are some differences because of the three types of protocols it supports. IP for the CXL IDE needs to include containment and skid modes, and additions for PCRC when running CXL.cache/mem. Dana also discusses the ins and outs of key management for the large number of streams that can be operating in a design.

This webinar is comprehensive in that it discusses the needs and requirements for PCIe and CXL security in cloud applications. It also goes in depth on the components, architecture and related standards that are supported in the Synopsys DesignWare IP. Near the conclusion of the webinar, Dana shows how several different SOCs for AI or networking can be constructed largely from IP available from Synopsys. The webinar is available for replay on the Synopsys website.

Also read:

High-Performance Natural Language Processing (NLP) in Constrained Embedded Systems

Lecture Series: Designing a Time Interleaved ADC for 5G Automotive Applications

Synopsys’ ARC® DSP IP for Low-Power Embedded Applications


Cliosoft and Microsoft to Collaborate on the RAMP Program

Cliosoft and Microsoft to Collaborate on the RAMP Program
by Kalar Rajendiran on 01-06-2022 at 6:00 am


We have all heard of many advanced technological inventions and products from the defense sector that subsequently got commercialized. While most of the Defense Advanced Research Projects Agency (DARPA) projects are classified secrets, many military innovations have had great influence in the commercial sector in the fields of electronics, communications and computer science. A well-known invention that we all rely on every day, the Internet, had its beginnings from such a project. Since its commercialization, so many advances have happened with and using the Internet including cloud computing.

While the commercial sector has made incredible advances over time, the defense sector has been mostly limited by security concerns and related matters. The Department of Defense (DoD) and the traditional Defense Industrial Base (DIB) still follow obsolete practices and outdated processes in some fields, particularly state-of-the-art (SOTA) custom IC and System-on-a-Chip (SoC) design and the associated physical design. Recognizing this, the Navy and Air Force have embarked on an initiative to leverage commercial capabilities to demonstrate secure, enhanced design: the Rapid Assured Microelectronics Prototypes (RAMP) program. The purpose of this prototype is to facilitate the rapid development of IC hardware for further evaluation and technology enablement of the DoD. The RAMP program is now in its second phase. Microsoft has been tasked with leading the program by collaborating with companies in the electronics, EDA, semiconductor and related fields.

Microsoft has selected the following industry leaders to collaborate with: Ansys, Applied Materials, Inc., BAE Systems, Battelle Memorial Institute, Cadence Design Systems, Cliosoft, Inc., Flex Logix, GlobalFoundries, Intel Federal, Raytheon Intelligence and Space, Siemens EDA, Synopsys, Inc., Tortuga Logic, and Zero ASIC Corporation.

This article will focus on what Cliosoft brings to the RAMP program.

Cliosoft’s sole focus is helping semiconductor companies manage their design data and their IP.

Its SOS family of design management solutions serves as the backbone for design collaboration at many of the largest semiconductor companies. Cliosoft also provides an enterprise IP management platform called HUB that is used by companies to easily create, publish and reuse their design IPs. Their Visual Design Diff (VDD) platform allows design teams to quickly compare two versions of a schematic or layout by graphically highlighting the differences directly in the design editor. Together, the above three data platforms enable easy and secure handling of data and IP through all aspects of microelectronics development and workflow. Let's take a closer look.

Designing Flexibility

Design teams need the flexibility to be able to use multi-vendor tool flow on their designs. SOS is integrated and production tested with design tools from multiple vendors. Efficient management of design data and upkeep of proper documentation requires disciplined effort from the team members. Making it easy for them to invoke SOS revision control & design data management features directly from their preferred tools helps achieve success.

Creativity, IP ReUse and Return on Investment

Cliosoft HUB lets people across an enterprise share their IP and expertise with others. It enables problems to be solved quickly by crowdsourcing and designs to be completed faster without reinventing the wheel. Cliosoft HUB helps manage and track these collaborative efforts.

Effective IP reuse requires an IP-based design methodology and a good software infrastructure to enable it. It also requires an easy way for designers to find the right IP and gauge its quality. When reusing an IP, designers need the ability to get help with the IP if needed, report issues found, and be notified if there are updates. Cliosoft HUB addresses all of these requirements.

Engineers needing a piece of IP may find that it has already been developed in another division. They can now quickly access the IP and leverage the expertise of the IP creators. They can also benefit from other users in the enterprise who may have integrated that IP into their designs. All the interaction is recorded within Cliosoft HUB and becomes a knowledge base that future users of the IP can leverage.

IP Assembly

For situations when a piece of higher-level IP must be developed internally, it is usually a matter of assembly using lower-level IP blocks. In order to successfully complete this assembly process, hierarchical visibility is required along with access to the knowledge base and issues tracking of all IP blocks.

IP Traceability

Traceability is key to understanding the evolution of an IP block, and the modifications that were made for bug fixes or new features. Cliosoft HUB provides IP traceability through a knowledge base that describes the evolution, reuse and integration of IP into various products. This kind of traceability is especially required for compliance reasons in the defense sector, automotive and medical device markets. Standards such as ISO26262 and MIL-STD-882 mandate this kind of documentation. All of Cliosoft’s products are ISO26262 certified.

Also Read

DAC 2021 – Cliosoft Overview

Cliosoft Webinar: What’s Needed for Next Generation IP-Based Digital Design

Webinar – Why Keeping Track of IP in the Enterprise Really Matters


CES 2022 and the Electrification of Cycling

CES 2022 and the Electrification of Cycling
by Daniel Payne on 01-05-2022 at 10:00 am


With the Omicron variant of the COVID-19 virus in the news, some big corporate names withdrew from CES (Peloton, Super73); however, the cycling innovation companies assembled once again in Las Vegas for CES 2022. Data from Statista show strong growth in bicycle revenues in March 2020, when the pandemic started in the US, with electric bike revenue up 85%:

eMobility Experience

This year visitors to CES could go for a test ride on an outdoor track, with about a hundred electric models to ride.

eMobility Test Track

e-Bikes

Alta Cycling Group showcased their eBike brands: Diamondback, IZIP, Redline and Haibike. The Diamondback Union 2 has a class 3 Bosch Performance Line Speed motor, fenders, lights and a rack for commuting and shopping trips:

The IZIP brand categorizes their eBikes into: Adventuring, Commuting, Cruising.

IZIP
Tern Bicycles
Hyper Bicycles
Coaster Cycles
VAAST Bikes
iX Rider
Bianchi Aria E-Road
Magnum Scout
Totem USA – Zen Rider
RKS Motor
SoFlow
Aventon – Adventure Ebike
Dongguan CXM
Euphree – City Robin
Fiil Bikes
Giant – Road E+ 1 Pro
Go Power Bikes – GO Express
Hongji Bike: E-CityMM01
Rad Power Bikes
LeMond Bicycles

I noticed how some of these eBikes have concealed the batteries in the frame, while on others the batteries look just bolted onto the frame. I much prefer the concealed look.

From China there's NIU, with an e-bike called the BQi, designed with a step-through frame, concealed batteries and a 62-mile range, boasting a top speed of 28 mph and priced attractively at just $1,075, which is quite low for an e-bike.

NIU BQi-C1 Pro

Bird started out with scooter rentals, but this year had two new e-Bikes, one with a step-through frame, and the other with a traditional top tube. I liked the built-in light features, concealed batteries, and these look to be commuter bikes.

BirdBike

Hailing from my home state of Minnesota is Benno Bikes, and they showed a lineup of four models: Boost, Ejoy, Escout, Remidemi. Each of these models is aimed at carrying cargo in the back and front.

Benno Bikes – BOOST

If speed is your ultimate goal, then consider Delfast, which boasts a 50 mph top speed and a range of 200 miles.

Delfast (Source: UPI)

Another city e-bike, but with a twist, Urtopia sports a single-speed, carbon frame, fingerprint scanner, LED lighting, turn signals, and a built-in display on the stem.

Urtopia
OKAI EB20

Wise-integration has an e-bike charger that is 6X smaller and 6X more energy efficient by using GaN technology.

Wise-integration

GPS Tuner provides the infrastructure that an e-bike system needs: an IoT Adapter, white label apps, and the cloud.

GPS Tuner

All of those e-bikes need to be parked and charged, especially for commuters, so ParkENT has developed a secure charging station.

ParkENT – secure charging station

Carbon frames provide the lightest weight for bicycles; however, the traditional process to make them is quite labor intensive, which drives prices up. Superstrata instead offers a 3D-printed carbon composite frame, in both traditional and e-bike versions.

Superstrata E

CES 2022 Innovation Awards Honoree

We all love to win awards, right? Bosch got an Honoree award for their eBike Systems, which consists of an eBike Flow app, an LED user interface, color display, rechargeable battery and drive unit. It’s smart enough to support over-the-air updates, something that we take for granted with our smart phones and other electronic devices. With an eBike you really need to know how low the battery charge is, so that you don’t get surprised mid-ride. You’ve likely heard of Bosch as supplying automotive parts, but they’ve also been supporting eBikes with electric motors for several years now too.

JBL has a portable Bluetooth speaker called the Wind 3 that mounts to your handlebars while cycling.

JBL Wind 3

I wear a heart rate monitor while cycling, but now there’s a new sensor product from CORE that measures your core body temperature during a workout. It’s also used by 8 professional cycling teams.

CORE

Indoor Trainers

LG showed off their Virtual Ride, a stationary bike concept along with three vertical 55-inch OLED displays, spanning quite the range of vision to make you feel more immersed while working out:

Echelon has their EX-8S, a Peloton competitor, sporting a 24″ curved display, and priced at $2,399, plus a $34.99 monthly subscription.

Echelon EX-8S

AI workouts targeted to your fitness level are the goal of Renpho and their new Smart Bike Pro.

Renpho – Smart Bike Pro

Cultbike comes with a 22″ touchscreen to view your spin class workouts, and you can view actual outdoor video scenery to pass the time.

Cultbike

Cycling Cameras

There are a couple of use models for adding a camera system to your bike: safety – you now have a record of approaching vehicles in case of a collision or near miss; and social – you like to share video clips or photos from the route with your cycling buddies.

apeman debuted the SEEKER series of 4K HD action cameras for rear-facing or front-facing configurations.

Smart Helmets

How about adding programmable lights to the front and back of a cycling helmet, then adding Bluetooth speakers? That’s what OKAI did with the SH10 smart helmet.

OKAI SH10

Electricity Generation

Growing up as a kid in Minnesota I recall seeing a 3-speed English bike with a wheel-mounted generator that provided electricity for a front light. Now there’s a company called WITHUS & EARTH that generates electricity from a device placed near your rotating wheel, yet not touching it, as magnets placed inside of the wheel help turn the dynamo. The company has won a CES award for the third year in a row now.

WITHUS & EARTH

Cycling App

From Korea comes a cycling app called Veloga Cycle, sporting lots of data fields, analytics, and a way to share your ride with others. Here in the US we’ve already seen many similar apps: Strava, MapMyRide, RideWithGPS. My cycling journey with apps started out with MapMyRide, but then I switched to Strava, because all of my buddies used it, and I wanted to fit into the community.

Veloga Cycle

Daniel's 2021 Cycling

Here are the Strava stats for my cycling in 2021. You're invited to follow me on Strava; I will follow you back, so let's stay in shape together.

My epic endurance ride was from Tualatin, OR to Pacific City and back, 206.5 miles, yes, in one ride.

Here’s a list of all the electronics that I ride with:

On rainy days in Oregon I cycle indoors with:

Zwift

I did a virtual Everest on February 6, 2021 with a buddy, climbing over 29,029 feet, and covering 132 miles on Zwift. Follow me on Zwift, Daniel Payne (VV), and I will follow you back.

Summary

The electrification of the bicycle continues in 2022, with the e-bike category continuing to grow across a wide range of models. Gamification of fitness is another mega-trend, with spin bikes and smart trainers leading the way. Traditional bike companies are trying to catch up with new e-bike models, while the number of untraditional bike competitors continues to rise.

Related Blogs