We’ve heard recently from several sources that millimeter wave radios, once the exclusive realm of defense and satellite use, are now finding homes in applications such as automotive radar and 5G networks. Therein lies a significant opportunity for digital design: moving frequency conversion and filtering from the analog domain into the digital domain. Continue reading “One transistor for the future of mmWave?”
At What Point Does Transistor Gate Length Stop Getting Smaller?
When I started doing IC design back in 1978 we had 6,000 nm gate lengths; today you can buy a smartphone built on 16 nm or 14 nm technology, although the actual gate lengths in those chips are more like 34 nm. The International Technology Roadmap for Semiconductors (ITRS) makes predictions about emerging trends in our industry, and it just released a chart showing transistor gate lengths stopping their typical shrinking trend in the year 2021:
Illustration: Erik Vrielink
Source: IEEE Spectrum
Notice how, in just the two years from 2013 to 2015, the ITRS became more pessimistic about the economics of ever-shrinking transistor gate lengths. Does this mean it's impossible to build transistors with gate lengths shorter than 10 nm? No, but it costs so much that you have to question why you would.
If it simply costs too much to shrink below a 10 nm channel length, then what are semiconductor manufacturers going to do instead? There are lots of ideas, like 3D fabrication to increase transistor density, or even changing the transistor orientation to vertical. Groups like the Semiconductor Industry Association (SIA) will collaborate with the Semiconductor Research Corporation (SRC) to list research priorities that could be used by industry or government programs. There's even an IEEE initiative called Rebooting Computing that could provide some direction for how semiconductor technology can continue to add value.
The semiconductor roadmap from ITRS started back in 1998 and it really helped the equipment manufacturers focus on achieving milestones for the industry. Some 19 companies were developing leading-edge fabs in 2001; today we have only the big four: Intel, TSMC, Samsung and GLOBALFOUNDRIES. You won't find these four competitors sharing much about their detailed technology challenges and directions, but they do drive their equipment and material suppliers.
NAND flash chips are a leading user of 3D structures as a means to increase density; Samsung announced a 256 Gb 3D NAND flash memory in April 2016 that uses 48 memory layers.
FinFET transistors have been used for several years now, starting with Intel's 22 nm TriGate devices, in which the gate wraps three sides of a horizontal, fin-shaped channel where current is controlled. The ITRS roadmap predicts that a lateral gate-all-around device will surpass the FinFET. Beyond the lateral device, the report predicts vertical transistors with pillars or nanowires that stand on end. Even the channel material will change, from silicon to higher-mobility alternatives such as silicon-germanium or compounds of column III and V elements.
Smaller transistor sizes have not delivered proportionally faster chip performance, because the wires used to connect the transistors have become the dominant factor in determining speed. The IEEE even has its own roadmap, the International Roadmap for Devices and Systems (IRDS).
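To see why wires took over, here's a back-of-envelope sketch in Python. The resistance and capacitance figures are illustrative assumptions of mine, not roadmap data; the point is only that a wire's RC delay grows as its cross-section shrinks, while the gates driving it keep getting faster.

```python
# Toy illustration (not vendor or roadmap data): why wire delay now dominates.
# Assumption: shrinking a minimum-width wire raises its resistance per mm
# sharply while capacitance per mm stays roughly constant.

def wire_delay_ps(length_mm, r_ohm_per_mm, c_ff_per_mm):
    """Distributed-RC estimate: delay ~ 0.5 * R_total * C_total."""
    r = r_ohm_per_mm * length_mm          # total resistance, ohms
    c = c_ff_per_mm * length_mm * 1e-15   # total capacitance, farads
    return 0.5 * r * c * 1e12             # delay in picoseconds

# Illustrative numbers only: an older node vs. a scaled node.
old = wire_delay_ps(1.0, r_ohm_per_mm=1_000, c_ff_per_mm=200)
new = wire_delay_ps(1.0, r_ohm_per_mm=4_000, c_ff_per_mm=200)
print(f"1 mm wire delay: {old:.0f} ps (old node) vs {new:.0f} ps (scaled node)")
```

Note also that delay grows with the square of wire length, which is why long global wires need repeaters and why physical data matters so much in any speed or power estimate.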
There's an October event, the first International Conference on Rebooting Computing, and IRDS will be holding a meeting there to continue its roadmap efforts.
I've witnessed first-hand in my lifetime the transition from bipolar to NMOS, NMOS to CMOS, and planar CMOS to FinFET, so I look forward to the continuing saga of semiconductor creativity as it battles to extend Moore's Law to 10 nm gate lengths.
A Credible Player at the Power Table
For a while it seemed like Mentor lived on the margins of the RTL design-for-power game. They had interesting micro-architectural optimization capabilities through their Calypto heritage, but no real industry chops in power estimation, a must-have when you are claiming to reduce power. Better-known offerings in RTL power estimation have dominated the landscape: PowerArtist, SpyGlass Power and, more recently, Joules.
(A quick sidebar to head off complaints from the emulation guys – there are excellent solutions for power estimation built around emulation. But that’s not what you need when you’re tweaking microarchitecture on a block; block designers most commonly use simulation to generate activity files.)
It now looks like Mentor has stepped up to the big table with their PowerPro solution. They have power estimation that looks comparable to the Cadence and Synopsys solutions (with one caveat) and they have further strengthened their tools for finding opportunities for optimization and (if you choose) implementing and verifying those changes automatically.
Accuracy in estimation matters for two reasons: first, you only want to implement proposed changes that give a big saving, so the tool must be in the ballpark when ranking candidates; second, you don't want your picks to wind up increasing power because implementation realities weren't considered.
A large part of the art of getting this (reasonably) right is in acceptably modeling how the design will be physically synthesized. You want realistic tech mapping, you want datapath elements to map to efficient design macros, you want to synthesize a reasonable clock tree with reasonable buffering, you want to derive reasonable Vth distributions from UPF (or estimate them), and you absolutely must fold in realistic physical data: wire lengths, clock tree topology and so on.
PowerPro seems to be covering most of these bases, at least to the extent Mentor are willing to share details. They use SPEF to import physical information, a realistic choice given the range of possible physical platforms. The downside is that this lacks tight coupling with physical design which might allow for further optimization. But there’s a potential upside for IP developers – you don’t want to tie optimization choices to just one implementation. Comparison between a few different implementations could be a good way to guide optimization for more general use.
In any case, tool vendors need to be careful not to over-engineer RTL power estimation. Mentor claims ~15% accuracy to gate-level based on their own correlation studies. This is exactly what everyone else claims. You may have read my comments elsewhere that these things are in a distribution. Perhaps 1σ is at 15%, but all solutions show outliers and it’s not clear any vendor has cracked making the distribution any tighter, so PowerPro seems to be right in the middle of the pack.
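To make the distribution point concrete, here's a toy Monte Carlo sketch (entirely my own illustration, not any vendor's correlation data) of how often a 15% 1σ estimation error flips the ranking of two optimization candidates whose true savings differ by 10%:

```python
import random

# Hypothetical sketch: if an RTL power estimator has ~15% 1-sigma error,
# how often does noise reverse the true ordering of two candidates?
random.seed(1)

def misrank_rate(true_a, true_b, sigma, trials=100_000):
    """Fraction of trials where noisy estimates reverse the true ordering."""
    flips = 0
    for _ in range(trials):
        est_a = true_a * (1 + random.gauss(0, sigma))
        est_b = true_b * (1 + random.gauss(0, sigma))
        if (est_a > est_b) != (true_a > true_b):
            flips += 1
    return flips / trials

# Candidate A truly saves 10 mW, candidate B 9 mW: a 10% gap.
print(f"mis-rank rate at 15% sigma: {misrank_rate(10.0, 9.0, 0.15):.2f}")
```

With these assumed numbers the ranking flips in roughly a third of trials, which is why the width of the distribution, and its outliers, matter more than the headline 1σ figure.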
Next we get to an area where PowerPro may lead the pack: micro-architectural power optimization. This always starts by assuming the first-order clock or memory-enable gating logic has been correctly defined by the designer. Tools then look for additional gating that could be inserted to save power. These methods are formal, looking upstream and downstream of the first-order gates for additional constraints (or strengthening of existing constraints) that the existing logic implies. These cases can add up: I have heard of potential for up to 15% further saving, a large chunk of it coming from memory gating.
Through its Calypto functionality, Mentor is able to look many clock cycles forward and backward in the logic to find opportunities. Since looking this deep could generate an overwhelming number of suggested improvements, PowerPro does a cost-benefit tradeoff analysis, comparing power, area and timing to filter down to just those suggestions that look optimal. A number of the power-saving techniques are quite familiar, such as looking for redundant reads and writes on a memory. Some sounded innovative (at least to me):
- Redundant reset checks for flops in a design which are not directly observable (like many low-power suggestions, making this fix may not be a good idea for other reasons, but in some cases it might be)
- PowerPro is able to find logic for memory enable and light sleep pins – no need for labeling on your part
- PowerPro will look for sequential data-gating opportunities, e.g. gating an expensive operation like a multiply in cases where the output is a don't-care
All changes are formally-verified to be functionally equivalent, using Calypto’s well-proven SLEC.
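Mentor hasn't published its algorithms, but the flavor of sequential data gating, and the equivalence check that must accompany it, can be sketched on a toy design (hypothetical, nothing to do with PowerPro internals): when a downstream mux makes the multiplier result a don't-care, the result register can hold its value, and brute-force simulation confirms the observable outputs are unchanged.

```python
from itertools import product

def run(design, stimulus):
    """Simulate one register (`acc`) over a stimulus sequence; collect outputs."""
    acc, outputs = 0, []
    for a, b, use_mul in stimulus:
        acc = design(acc, a, b, use_mul)
        # Observable output: mux picks the multiply result only when use_mul=1.
        outputs.append(acc if use_mul else a + b)
    return outputs

def original(acc, a, b, use_mul):
    return a * b                      # multiplier register updates every cycle

def gated(acc, a, b, use_mul):
    return a * b if use_mul else acc  # register gated when output is don't-care

# Exhaustively check all 3-cycle stimulus sequences over small operand values.
vals = range(4)
for stim in product(product(vals, vals, (0, 1)), repeat=3):
    assert run(original, stim) == run(gated, stim)
print("original and gated designs are observably equivalent")
```

A real tool does this formally (here via SLEC) rather than by exhaustive simulation, but the obligation is the same: the gated design must match the original on every observable output.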
There's a nice interface for selecting the fixes you want to implement and for exploring other power-reduction options (reducing voltage in some areas, for example), supported by macros that let you explore pre-canned options like DVFS.
PowerPro supports (user-controlled) automatic implementation of selected fixes in the RTL. A few years ago, I remember auto-fixing being looked on with deep disdain by most RTL designers (no tool is going to mess with my RTL). But it seems that’s changing; Mentor said they now see a 50-50 split. I guess that was going to come at some point; I’m just surprised how quickly the user-base is switching.
You might wonder why I opened this blog with a picture of cupcakes. Mentor served these at the lunch and learn. I was told they are very healthy, gluten-free and they were certainly attractive and tasty. Not so sure about the healthy part.
Overall, PowerPro looks like a pretty complete RTL estimation and optimization solution. You can learn more about the product HERE.
efabless: Think GitHub for ICs and IP
For those of you who don't know, GitHub is the crowdsourced hosting service for Git, the de facto industry-standard source code management software. Currently, more than 14 million people have deposited more than 35 million software projects (mostly open source) on GitHub, making it the largest host of source code in the world.
Now think semiconductors. Imagine what could be done with an open crowdsourcing platform that dramatically reduces the cost and administrative barriers of semiconductor design and manufacturing. Sounds disruptive, right? Given the flat nature of the semiconductor industry I think disruption is a very good thing.
Do you remember how the fabless semiconductor transformation started 30 years ago? A pure-play foundry (TSMC) dramatically reduced the cost and administrative barriers of semiconductor manufacturing. Disruption is what made semiconductors the foundation of modern-day life, and disruption is what we need to maintain the cycle of semiconductor innovation that got us to where we are today.
efabless corporation is the world’s first crowdsourcing platform for semiconductors. We harness the creativity of the community and dramatically reduce the cost and administrative barriers that have inhibited semiconductor innovation. In so doing, we create significant new markets for semiconductors and enable system companies to build better products and create new applications.
Another intriguing part of efabless is the people behind the company. The first name that stands out is Lucio Lanza. Lucio is the 2014 Phil Kaufman Award winner (recognizing excellence and vision in EDA) after spending his entire career in electronics. He started with Olivetti and Intel (with Phil Kaufman), then moved to EDA with Daisy Systems and Cadence, then IP with Artisan Components (ARM). Paul McLellan did a nice interview with Lucio HERE which is a must read for semiconductor professionals old and new.
The second intriguing name is Michael Wishart, efabless Chairman and CEO. Michael retired from Goldman Sachs after thirty years covering the technology industry as an investment banker. Michael is currently on the board of Cypress Semiconductor and before that he was on the Spansion board. The first question I asked Mike was why he is coming out of retirement. I already knew the answer but I wanted to hear it in his words. The answer of course is "disruption", and his rationale matched up perfectly with mine.
The second question I asked was how they are going to monetize efabless. I was happy to hear it is a success-based revenue-sharing business model, similar to how Artisan Components disrupted the semiconductor IP industry with their "Free IP Business Model" in 1998. As a competitor to Artisan at the time, I can tell you this was a VERY disruptive move that transformed the fabless semiconductor ecosystem into what it is today, a force of nature.
Bottom line: efabless provides community members with a robust design flow that they can access without cost or NDA. efabless abstracts the foundries' underlying technology away from designers, thereby giving the community access to foundry process technologies, again without the requirement of NDAs. The marketplace and community are inherently collaborative, with proprietary and open IP and ICs that can be forked, customized, or improved by other community members to solve interesting problems and open new opportunities. Again, think GitHub for ICs and IP. You can join the efabless community HERE.
SEMICON West – Harry Levinson and Mike Lercel Interview
On Tuesday morning at SEMICON I had the opportunity to sit down with Harry Levinson, Sr. Director of Technology Research and Sr. Fellow at GLOBALFOUNDRIES, and Michael Lercel, Director of Strategic Marketing at ASML, to discuss the state of lithography.
I opened the discussion with a question about how we are going to address lithography from 10nm down to 5nm.
Mike Lercel – there are two specific themes: the first is control and edge placement, and the second is how multiple patterning introduces sources of variability. The process simplicity of EUV is beneficial at both 7nm and 5nm. A lot of argon fluoride immersion multiple patterning will stay, and EUV will be used for the most challenging layers.
Harry Levinson – from a chip maker’s perspective there has been a lot of concern about EUV maturity for 7nm. They are looking at 7nm as a node that can be done with optical.
I asked a question about line edge roughness (LER) and how much it matters for cut masks. Harry noted that even for cuts you do care about LER, and for contacts and vias you care about regularity. A via or contact on a line-end is one of the most critical applications, so LER does matter. Mike noted that LER affects where the line ends.
Harry noted the good news is that if we introduce EUV at 7nm we aren't pushing it too hard. People are working on understanding LER. Shot noise is at the top of the list, and you can't do much about it other than increase the dose. Photoresist also contributes to LER; you need to control it at a molecular level, and even the building blocks of the polymer are important, so we need smaller building blocks. Mike – some of the novel materials, such as metal-oxide resists, look interesting because they are different from what we have today.
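A quick back-of-envelope calculation (my numbers and feature sizes, not anything Harry or Mike quoted) shows why dose is the blunt instrument against shot noise: the relative photon-count fluctuation in a feature scales as 1/√N for N photons.

```python
# Illustrative EUV shot-noise arithmetic; dose and feature size are assumptions.
EV_PER_EUV_PHOTON = 91.8          # ~13.5 nm photon energy in eV
J_PER_EV = 1.602e-19

def photons_in_feature(dose_mj_cm2, feature_nm):
    """Photons landing on a square feature of the given edge length."""
    area_cm2 = (feature_nm * 1e-7) ** 2
    energy_j = dose_mj_cm2 * 1e-3 * area_cm2
    return energy_j / (EV_PER_EUV_PHOTON * J_PER_EV)

def relative_noise(dose_mj_cm2, feature_nm):
    """1-sigma relative photon-count fluctuation (Poisson statistics)."""
    return photons_in_feature(dose_mj_cm2, feature_nm) ** -0.5

# A 20 nm contact at an assumed 30 mJ/cm2 dose, then at double the dose:
print(f"photons: {photons_in_feature(30, 20):.0f}, "
      f"1-sigma noise: {relative_noise(30, 20):.1%} "
      f"-> {relative_noise(60, 20):.1%} at 2x dose")
```

Doubling the dose only improves the noise by √2, which is why dose trades directly against the throughput and source-power numbers discussed below.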
I asked about smoothing to address LER. Mike said it is spatial-frequency dependent: the high frequencies can be smoothed better than the low frequencies. Harry – there is definite potential, though smoothing contact holes is harder than line/space, as are pesky line ends.
Harry said the 7nm node could be done optically and depending on customer demand could be introduced early with optical and then EUV could come next in 2018.
Mike said ASML systems in the field are at 125 watts and about 85 wafers per hour (wph); ASML's target is 125 wph at 250 watts. Harry noted that the throughput is based on ASML assumptions, and manufacturers have different requirements in terms of fields, dose, etc. Harry went on to say they are struggling to make EUV cost-equivalent to immersion triple patterning, and that 5nm will likely be defined by how far you can push EUV and still have single patterning. Mike – a true shrink that requires 6 immersion layers versus 3 EUV layers is roughly a cost wash.
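The cost-wash arithmetic is easy to sketch; the 2x relative cost of an EUV pass below is purely my assumption for illustration, not an ASML or GLOBALFOUNDRIES figure.

```python
# Hypothetical normalized litho-pass costs; only the ratio matters here.
IMMERSION_PASS = 1.0     # cost of one immersion litho pass (normalized)
EUV_PASS = 2.0           # assumed relative cost of one EUV pass

def layer_cost(euv_passes, immersion_passes):
    return euv_passes * EUV_PASS + immersion_passes * IMMERSION_PASS

multi_patterned = layer_cost(0, 6)   # the same geometry in 6 immersion passes
euv_single = layer_cost(3, 0)        # versus 3 EUV passes
print(f"immersion: {multi_patterned}, EUV: {euv_single}")
```

Under that assumed ratio the two approaches cost the same, which is the "cost wash"; if EUV throughput improves, the ratio drops and EUV wins.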
Harry – a 2.5nm overlay budget is really hard because you are dealing with angstroms. Mike went on to say that is why you need really good mix-and-match of EUV to immersion.
Harry said EUV at 7nm would really help because you could learn before you have to really push it.
Mike noted that there are eight 3300-series systems out in the field running and generating a lot of cycles of learning; 445,000 wafers have been processed through the tools. Without EUV we could be looking at 100 mask layers in a logic technology, slowing down cycles of learning, design verification and manufacturing cycle time. (Author's note: I commented that this really struck me at the Advanced Lithography Conference this year; for the first time there are multiple EUV systems around the world running wafers in volume, and that is what you need for learning.)
Harry said we will see contacts and vias done first, then metal blocks. Mask defects are still a problem, but contact/via layers have a lot of space to cover the defects (dark field with small open area). He is concerned about metal masks and defects (light field with high open area). It would be very desirable if line/space layers could be EUV. At N7, metal takes 3 masks, but he would like to do it with a single EUV mask. Mike also pointed out that at N5 you could be looking at a grating and more than 2 block masks.
Harry said that for 7nm contacts/vias, productivity is still the main issue; they need 250 watts running robustly in the field. I asked him whether, if he had a 250-watt high-uptime tool today, he could do 7nm contacts/vias, and he said yes. For contacts you have local critical dimension uniformity (LCDU), the contact version of LER; they can hit the specs with high dose, and the key is how far you can back off the dose without hurting yield. With respect to mask defects, the ITRS specs were based on planar gates; today there is no rigorous metric, but to get to metal layers you need lower mask blank defectivity. Mike agreed: with contacts, dark field can cover defects, but metal is light field and they can't cover the defects.
Harry – once you have the power you need, you have to see whether there are mask and wafer heating issues. Mike – Samsung saw a mask blister at 40,000 wafers; it wasn't that long ago that immersion hazing occurred on masks at 35,000 wafers.
I asked about pellicles, and Mike said there were no new announcements today (Mike and Harry were both scheduled to present in a session after our interview). They have run 200 wafers on a customer tool and continue to run it at a 40-watt level. They still need to improve the pellicle. Harry jumped in to say the pellicles don't have the transmission we are used to and may also need a filter; we will lose at least 20% of the light.
In closing, Harry said that the front end of line (FEOL) has lower density, so mask blank defectivity is less of an issue there. EUV could enable multiple gate lengths, and Mike noted it could also eliminate multiple cut masks.
1-T SRAMs in high-density, portable applications
For SoCs in mobile, automotive, wearable computing, gaming, virtual reality, PC, imaging, security, and IoT applications, it is incredibly important to keep area (cost) and power as low as possible. Considering the growing percentage of chip area used for memory, it makes sense to choose the optimum memory IP for each application. Among the memory IPs targeted at high-density consumer applications is the single-transistor dynamic random-access memory from Mentor Graphics (Novelics) called coolSRAM-1T.
You can read all about the coolSRAM-1T in Fundamentals of coolSRAM-1T Memory.
The one-transistor (1T) bit cell offers up to a 50% reduction in core area for a given bit capacity compared to the more widely used six-transistor (6T) bit cell. When your focus is density over speed, the 1T architecture is an ideal choice. Figure 1 illustrates the density relationship between the two embedded memory IP architectures.
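Using the 50% figure above, a rough sizing sketch shows what the savings mean for a macro; the absolute bit-cell area below is an assumed placeholder, not a Mentor/Novelics datasheet value.

```python
# Illustrative area arithmetic; only the 1T-vs-6T ratio comes from the article.
BITCELL_6T_UM2 = 1.0                     # assumed 6T cell area (um^2)
BITCELL_1T_UM2 = 0.5 * BITCELL_6T_UM2    # up to 50% smaller core area

def core_area_mm2(bits, cell_um2):
    """Core array area for a given bit count, ignoring periphery overhead."""
    return bits * cell_um2 / 1e6

mbit = 2 * 1024 * 1024
print(f"2 Mbit core: 6T {core_area_mm2(mbit, BITCELL_6T_UM2):.2f} mm^2, "
      f"1T {core_area_mm2(mbit, BITCELL_1T_UM2):.2f} mm^2")
```

In practice the periphery (sense amps, write-back, level shifters) dilutes the saving somewhat, so the macro-level reduction is less than the raw cell-area ratio.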
But what about static power for always-on SoCs? A 6T SRAM uses an active driver to maintain data, so leakage power can be a concern at advanced process nodes. The coolSRAM-1T instead uses passive storage optimized for low leakage: to minimize subthreshold and junction leakage, the Mentor Graphics coolSRAM-1T dynamic memory cell utilizes the thick-oxide or input/output (I/O) transistor option available in all advanced process nodes. The coolSRAM-1T is a nearly seamless replacement for an existing SRAM-6T when you want lower leakage and chip area, and it is cost-effective since it can be implemented in a bulk-CMOS process with no additional mask steps.
The peripheral circuits for the coolSRAM-1T include (1) the sense amplifier and (2) the write-back circuit that restores the charge in the cell after a destructive read. To boost the signal available from a given cell capacitor area, the cell array operates at the I/O voltage, which yields a larger signal for the sense amplifier and improves performance. The interface to the system is at the core (VDD) voltage, so signals must be level-shifted from one voltage domain to the other as they travel between the memory interface and the cell array; we offer three approaches for doing that, with different tradeoffs depending on your needs.
The coolSRAM-1T is integrated into MemQuest, the Mentor Graphics web-based compiler suite that lets you specify and implement custom memories. Compiled instances in the 160nm 1.8V/3.3V, 130nm 1.5V/3.3V, and 110nm 1.2V/3.3V technology nodes have been incorporated into customer products and are in volume production, and the IP is also silicon-proven in 65nm technology. The IP license includes documentation covering the three major steps of the test flow.
The coolSRAM-1T embedded memory IP is the only silicon-proven single-transistor SRAM IP that can be implemented in bulk CMOS. For high-density consumer applications, it can lower the overall system cost and static power consumption by reducing the area and the number of external components.
Learn more about Mentor Graphics coolSRAM-1T and the trade-offs between using the coolSRAM-1T and the coolSRAM-6T in this free Mentor whitepaper Fundamentals of coolSRAM-1T Memory.
Filling out the rest of the mobile device
We spend an inordinate amount of energy tracking the big chip in a mobile device, the application processor. As we've seen, that space is coming down to a handful of players. A more interesting competition is heating up beyond the APU, for the rest of the chips needed to make a phone. Continue reading “Filling out the rest of the mobile device”
Foundry Technology Packaging Solutions
A significant shift is underway in the fabless semiconductor business model. As the application markets have become more diverse (and more cost-sensitive), product requirements have necessitated a new focus on multi-die packaging technology.
Continue reading “Foundry Technology Packaging Solutions”
Limits to Deep Reasoning in Vision
If you are a regular reader, you’ll know I like to explore the boundaries of technology. Readers I respect sometimes interpret this as a laughable attempt to oppose the inevitable march of progress, but that is not my purpose. In understanding the limits of a particular technology, it is possible to envision what properties a successor technology should have. And that to me seems more interesting than assuming all further progress in that direction will be no more than fine-tuning.
Take deep learning and vision. Recent progress in this direction has been quite astounding; in one example, systems have bested humans in identifying dog breeds. These systems are now used in cars for driver assistance and safety applications – detecting lane markings, collision hazards, even traffic signs. Increasingly Google and Facebook use image recognition to search and tag people, animals and objects in images. It seems we’ve almost conquered automated image recognition at a level better than humans. But have we really, and if so, is that good enough?
While progress in deep reasoning has been impressive, there have also been some fairly spectacular fails. Microsoft was forced to retire a chatbot after it developed racist and other unpleasant tendencies. Google had to remove the “gorilla” tag from its Photos app after complaints that it was identifying dark-skinned people as gorillas. And Google released open-source software which identifies surrealist collages of faces in what we would consider perfectly ordinary images (in fairness, Google was pushing the software to see what happened).
You could argue that this is just the normal progression of technology; perhaps once the bugs are worked out, these problems will be rare. But I am skeptical that solutions as they stand just need better training. Our own fallibility in image recognition should be a hint. It's common to see faces and other images in complex, irregular patterns if we stare at them for a while, a phenomenon called pareidolia: a bias of the brain to see patterns, particularly faces, in random images. I can't imagine why deep reasoning should be immune to this problem; after all, we modeled the method on human reasoning, so it would be surprising if it did not also inherit the weaknesses of that approach. In fact, the Google software that produced the surrealist images is known to have this bias.
How good the recognition has to be may depend on the application, but clearly there is room for improvement and for some applications, the bar is going to be very high. More training might help, up to a point. So might more hidden layers, though apparently the value of adding layers drops off sharply after a relatively small number. Ultimately we have to acknowledge that the only straightforward way to fix deep reasoning problems is to try harder, which is not an encouraging place to start when you want to find breakthrough solutions.
Or perhaps we could go back to how we think. Most of us don’t instantly convert what we think we see into action. We consider multiple factors and we pass our conclusions through multiple filters. This is so apparent that we all know people who seem to lack these safeguards; we consider them socially-challenged (or worse). Now think of a cascade of neural nets where each net is trained in different ways. Deep learning methods for particle detection at the Large Hadron Collider (LHC) use similar methods, also combining different approaches – neural nets and binary decision trees – to weed out false positives. This alone might be a good start, with a first order goal to default to “I don’t know” when there is ambiguity in recognition.
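A minimal sketch of that cascade idea, with stand-in voter functions rather than real neural nets: combine several recognizers and default to "I don't know" unless a quorum agrees.

```python
from collections import Counter

def ensemble_label(classifiers, image, quorum=0.75):
    """Return the majority label only if at least `quorum` of voters agree."""
    votes = Counter(clf(image) for clf in classifiers)
    label, count = votes.most_common(1)[0]
    return label if count / len(classifiers) >= quorum else "I don't know"

# Stand-in voters: three return the true label; one pareidolia-prone voter
# sees a face in everything. (Toy functions, not trained models.)
voters = [lambda img: img, lambda img: img, lambda img: img,
          lambda img: "face"]
print(ensemble_label(voters, "dog"))          # 3/4 agree: confident answer
print(ensemble_label(voters, "dog", 0.9))     # stricter quorum: abstain
```

Raising the quorum trades coverage for fidelity, which is exactly the first-order goal suggested above: prefer "I don't know" over a confident mistake when recognition is ambiguous.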
Training more nets and other methods would be more expensive and the outcome may initially be more ambiguous than we might like. But maybe that’s an inescapable reality of improved recognition. Perhaps we should think of what we have today as hind-brain recognition – good for quick reaction (fight-or-flight) response but, like the hind-brain, not good at ultra-high-fidelity recognition where we might need improved tools.
I’m sure however this evolves the field will continue to be called deep learning, but that’s just a label. For one insight into limitations in existing architectures and newer methods, see HERE. You can see the Google surrealist art HERE.
AMD Unveils Full Radeon RX 400 Models And Positioning At E3
At E3 2016 in Los Angeles, California Advanced Micro Devices disclosed the numbering and targeted use cases of their full line of Polaris-based GPUs, branded as the “Radeon RX Series” of graphics cards. Advanced Micro Devices had previously disclosed some details about the new Radeon RX series of graphics cards at Computex 2016 in Taipei just a few weeks ago, but limited most details to the specific RX 480 graphics card. Now that we know more about the full line of AMD’s Radeon RX series of graphics cards, we can officially say that the RX 480 will be the highest end Polaris GPU that Advanced Micro Devices is launching.
AMD CEO Lisa Su chatting it up at E3 on the PC Gaming Show powered by PC Gamer (Photo credit: Anshel Sag)
Advanced Micro Devices said last week that the Polaris family of Radeon RX GPUs includes three different models, the Radeon RX 460, Radeon RX 470 and the Radeon RX 480. AMD also disclosed that mobile GPUs based on the Polaris architecture are expected to be available later this year, taking advantage of the performance-per-watt improvements in the architecture and 14nm process technology to enable “console-class gaming to thin and light notebooks for the first time.”
However, Advanced Micro Devices is only publishing the $199 pricing of the RX 480, which they disclosed at Computex. What is new beyond the names of the full Polaris launch lineup is that AMD is indicating what kinds of performance profiles gamers can expect from each of these GPUs. Advanced Micro Devices also reiterated that the 4GB version of the Radeon RX 480 will be selling for $199, indicating that we might see some higher memory configurations of the RX 480 for more than $199. I believe we will probably see a $249 8GB card.
Radeon RX 480
Advanced Micro Devices says the Radeon RX 480 is expected to deliver “smooth AAA gaming experiences to gamers at 1440p resolutions”. This means that AMD expects this card to perform fairly well compared to the high-end of the previous generation, roughly falling somewhere around a GTX 970. Because we don’t actually know the real performance numbers or have a card to test on our own, we cannot definitively say where the RX 480 will perform when compared to the competition.
I expect the RX 480 to perform well in Vulkan, DX12 and VR games, which may further help AMD’s case in terms of performance. It’ll be interesting to see if the rumors of an NVIDIA GTX 1060 are true as that would certainly make things very interesting depending on pricing, performance and availability.
Advanced Micro Devices is promoting the Radeon RX 400 series Polaris GPU family as “optimized for low-level APIs”, “console-class development” and “high performance VR”. The API point refers to the work done on Vulkan and the console point refers to what associate analyst Anshel Sag and I wrote on shader “intrinsic functions”. One key point on VR is that the RX 480 meets the minimum bar for both Facebook’s Oculus VR and the HTC Vive.
Radeon RX 470 and 460
This leads us to the other two GPUs in AMD’s Radeon RX 400 series line of GPUs, the Radeon RX 470 and Radeon RX 460. AMD is disclosing marketing positioning and model numbering but isn’t disclosing pricing or performance. They’re not doing this to be coy or manipulative, but rather to keep NVIDIA in the dark as long as possible. It’s also intended to drive speculation as evidenced by this column and get every hour possible to optimize drivers.
So here is the positioning…the Radeon RX 470 is being targeted towards “smooth HD gaming” which I believe is AMD’s code for being able to play most games at 1080P at 60 FPS. The Radeon RX 460 is offered as a “cool and efficient GPU for eSports” which probably means that it is mostly aimed towards MMO and RTS gamers where the graphics are less intensive. The mention of cool and efficient very likely means low power and possibly passive cooling in some scenarios.
Wrapping up
Advanced Micro Devices is slowly trickling out information on the RX 400 Series designed to create some ecosystem and end user excitement and to keep NVIDIA guessing. It makes life harder for analysts trying to piece everything together to find meaning, but this is life covering gaming. Everyone in gaming hardware, software and services does this.
While we still don't know the exact performance of AMD's new Radeon RX series of GPUs, AMD has released some telling clues about how each of its GPUs is expected to perform. The only thing really left to the imagination at this point is the pricing of the Radeon RX 470 and RX 460 GPUs. The rumored NVIDIA GTX 1060 could create some chaos, too, if it's a real card, readily available and priced lower than AMD expects.
Also, based on Advanced Micro Devices' positioning of the entire RX 400 series as the "democratization of VR", it will be really interesting to see how many of these three GPUs end up being Oculus Ready. To this point, only the RX 480 is, but no one is saying the 470 or 460 isn't; this is just me speculating. Furthermore, it will be interesting to see how much they will really drive down the price of building a VR system, given the pricing and VR certifications, something we still don't quite know. All we do know is that June 29th will be the big reveal, and until then we can only guess. But guessing makes the gaming world go around.

