
NVIDIA Rounds Out Pascal-Based GeForce Lineup With GTX 1060 And New Software Features

by Patrick Moorhead on 07-31-2016 at 12:00 pm

NVIDIA has been pushing its new Pascal family of GPUs forward ever since the announcement at DreamHack in May 2016 in my hometown of Austin, TX. The announcement included two of NVIDIA’s newest GPUs, the GTX 1080 and GTX 1070, both of which are somewhat available now. I worked with my colleague, Anshel Sag, to review the GTX 1080 and found it to be a fantastic GPU in terms of both performance and enthusiast value at the time. However, many of the software features announced alongside the new Pascal family of GeForce GTX GPUs weren’t quite ready yet, even though they were major capabilities that elevated the GeForce GTX family. I thought those software capabilities deserved some extra attention; they matter to me because any company that can build solution stickiness on top of hardware progress becomes more important to the ecosystem. This is true for any tech product and solution. I also want to comment on the GTX 1060, NVIDIA’s latest card, priced from $249-299.

GeForce GTX 10 Series software
Some of these new Pascal-enabled features are not performance-focused, like NVIDIA Ansel (not to be confused with Anshel), which enables a new way of taking and viewing in-game screenshots. Ansel will finally be released by NVIDIA in mid-July, with support for Mirror’s Edge and The Witcher 3.


NVIDIA CEO Jen-Hsun Huang introduces Ansel at the GTX 1080 launch event in Austin, TX (Photo credit: Patrick Moorhead)

In addition to software like Ansel, NVIDIA is overhauling its GameWorks program to include VRWorks, which accelerates and improves VR experiences by leveraging the strengths of Pascal and NVIDIA’s drivers. While many technologies make up GameWorks and VRWorks, the most important ones for improving the VR experience on NVIDIA hardware are simultaneous multi-projection, ray-traced audio and PhysX. These technologies are designed to improve the overall VR experience through either better performance or added realism.

VR Fun House
NVIDIA has actually created a full-featured game that also serves as a technology demo, called VR Fun House, which it showed at the Pascal launch and subsequent events like E3. NVIDIA also announced that VR Fun House will be coming to Steam this July, which means anyone can play around with this carnival-style experience. The demo uses many of NVIDIA’s technologies, including PhysX, to add realism to VR and improve the overall experience with fluid dynamics, fabric physics and turbulence.


GTX 1080 Launch event attendee playing with NVIDIA Fun House (Photo credit: Patrick Moorhead)

Speaking of turbulence, I was completely blown away by the Everest VR experience that NVIDIA worked with Solfar Studios to create. In part of that experience, Solfar Studios uses PhysX at the peak of Mt. Everest to make the snow swirl around you and the wind blow by you in a way that can only be described as breathtaking. There is definitely something game-changing there; I wasn’t convinced of it until I tried the final build of the Everest VR experience.

VRWorks

One of NVIDIA’s potentially most powerful VRWorks features is simultaneous multi-projection, which increases VR performance with very little compute overhead. However, for the feature to work, game developers must implement it in their games, and convincing individual developers to adopt a feature one by one is very hard. That’s why NVIDIA instead approached the game engine companies directly to get simultaneous multi-projection implemented at the engine level. They have announced that simultaneous multi-projection will be coming to both Unreal Engine and Unity, the two most popular game engines in the world today.


NVIDIA explains SMP benefits to tech analysts and press at an Austin, TX event after the GTX 1080 launch (Photo credit: Patrick Moorhead)

This means that developers don’t have to do as much work to implement simultaneous multi-projection because it will already be at the engine level. It also means that many more games will support this feature than I had originally anticipated, because traditionally NVIDIA-only features are implemented on a case-by-case basis. NVIDIA has already announced 30 games that will support simultaneous multi-projection, including Pool Nation VR, Everest VR, Unreal Tournament, Raw Data, Adr1ft and Obduction.

NVIDIA claims that simultaneous multi-projection can increase VR performance by 50% or more, which could vastly improve the VR performance of NVIDIA GPUs regardless of the model.

NVIDIA GeForce GTX 1060

This leads us into NVIDIA’s latest announcement, the GeForce GTX 1060, which will be available July 19th. Third-party, independent benchmarks are not available yet, but NVIDIA claims that the GTX 1060 delivers GTX 980 performance at $249, less than half the 980’s launch price. This is virtually unheard of when you consider that the GTX 1060 is only one generation newer than the GTX 980 yet just as fast at less than half the price. The GTX 1060 is slated to replace the GTX 960, which initially launched at $199 but delivered roughly half the performance in most cases; the 1060 also carries more RAM than the 960, 970 or 980. This is obviously thanks to the Pascal architecture, which also enables the GTX 1070 and 1080 to be as fast as they are.
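The price arithmetic is easy to sanity-check. As a quick sketch (the $549 GTX 980 launch MSRP is my addition, not stated in the article):

```python
# Compare the GTX 1060's price against the GTX 980's launch MSRP.
# The $549 figure for the GTX 980 is an assumption, not from the article.
gtx_980_launch = 549    # USD, GTX 980 launch MSRP (assumption)
gtx_1060_price = 249    # USD, from the article

ratio = gtx_1060_price / gtx_980_launch
print(f"GTX 1060 costs {ratio:.0%} of the GTX 980's launch price")
```

At roughly 45% of the 980’s launch price, the “less than half the price” part of the claim holds; the performance part awaits independent benchmarks.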

The GTX 1060 features a new GP106 ASIC (application-specific integrated circuit) from NVIDIA, different from the GP104 in the GTX 1080 and GTX 1070. This isn’t a cut-down die, which means NVIDIA purpose-built this ASIC for the middle of the market and it should yield fairly well. It also means the chip should be even more optimized for low-power gaming, which explains the extremely low stated 120W TDP and single 6-pin power connector compared to the 165W GTX 980. The graphics card itself will ship with 6GB of 8Gbps memory, which from my and Anshel’s experience should be enough for most gaming, and helps NVIDIA and its board partners keep costs down compared to the 1070 and 1080. Additionally, NVIDIA told me that the GTX 1060, unlike the 1070 and 1080, will mostly ship in custom board partner designs, and that the NVIDIA Founders Edition will be available only on NVIDIA.com at $299 for those who want it. If NVIDIA can deliver the GTX 1060 in meaningful quantities, then AMD’s just-launched RX 480 has a very real competitor in a few weeks, and that ultimately means great things for the consumer.

Wrapping up
As a whole, NVIDIA has done a very good job of improving its position in VR and has committed immense resources to it. A lot of this has to do with the platform NVIDIA is creating with VRWorks, which enables its GPUs to excel in truly immersive VR experiences like Everest VR. While it remains to be seen how many of the features beyond simultaneous multi-projection and Ansel get adopted by game developers, NVIDIA has done a very good job of introducing its new hardware and software products.

While there may still be some supply issues with the GTX 1070 and 1080, NVIDIA has reassured us that the GTX 1080 is the fastest ramp of a flagship GPU it has ever had and that demand is just that immense. It remains to be seen how well NVIDIA can keep up with demand for the GTX 1060, but I suspect that most supply issues will be resolved by the end of the quarter regardless. I’m stunned, yes, stunned, at just how quickly NVIDIA got the GTX 1060 ready, and I believe they accelerated the timetable when they fully realized the price and positioning of AMD’s RX 480. Who wins? The GTX 1060 isn’t available yet, nor are third-party, independent benchmarks on it, but for sure the consumer wins, as we have a fistfight in the mid-range.

NVIDIA has presented a very strong line-up of new GPUs that range from $699 to $249 and supported it with a broad array of new software, features and capabilities as well.

More from Moor Insights and Strategy


E-Class: Saving Lives with Fine Print

by Roger C. Lanctot on 07-31-2016 at 7:00 am

Television spots for cars are becoming a little like pharmaceutical ads, filled with fine print, warnings about side effects and clarifications. Safety advocates are taking Mercedes to task for its latest TV ads for the 2017 E-Class, claiming that the car company is misleading consumers into thinking the car can drive itself. For me, fine print is the trade-off on the road to saving lives.

Autonews: “Mercedes Challenged over ‘Drive Pilot’ TV Ads” – http://tinyurl.com/hqen85m

I have to say that when I first noticed official online information regarding the capabilities of the new E-Class I, too, was surprised at the volume of fine print. It looks like the future will be filled with it.
The fine print in question notes that the E-Class cannot drive itself but has driver-assistance features, adding that there are frequent reminders for the driver to keep hands on the wheel. The TV ad briefly depicts the driver removing his or her hands from the wheel while appearing to be on a highway, before switching to a self-parking scenario.

But fine print also appears on websites in descriptions of the vehicle’s Car-to-X communication capability and of its Drive Pilot Active Lane Change function – a clear response to the lane-changing proposition offered by Tesla Motors’ Autopilot. The key difference, of course, is that Tesla doesn’t advertise on TV.

We may as well face facts. From today forward increasingly sophisticated cars will be sold with increasingly sophisticated safety systems … and a magnifying glass. The latest J.D. Power APEAL scores bear this out, with makers of safety-system-laden cars leading the list:

“Safety Features Score Big, Boosting New-Vehicle Appeal, J.D. Power Study Finds” – http://tinyurl.com/h5upkg2

My own personal research has shown that new car dealer sales people are generally more attuned to and more excited about explaining safety features than infotainment features. The J.D. Power study bears out Strategy Analytics’ own findings that safety is a higher car purchasing priority than infotainment.

In an ideal world, all safety features will work intuitively and, perhaps, be available and functioning at all times. But the reality is that humans need to learn how and when to activate and de-activate these systems. The pressure on HMI scientists is growing rapidly – along with the deployment of driver monitoring systems.

The fine print is unnerving, but I think Mercedes is getting it right. It’s hard to pursue “The best, or nothing” without getting into some areas where supplemental explanation is necessary. The surfeit of fine print is a small price to pay for lives saved in the future.

Car-to-X fine print:

Drive Pilot Active Lane Change:


“[3] Active Lane Change Assist is no substitute for active driving involvement. It does not predict the curvature and lane layout of the road ahead or the movement of vehicles ahead. It is the driver’s responsibility at all times to be attentive to traffic and road conditions, and to provide the driving inputs necessary to retain control of the vehicle. System may not detect some objects, obstacles or vehicles in the area into which the vehicle would move. See Operator’s Manual for system’s operating speeds and additional information and warnings.”

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Why is AMD Stock Jumping?

by Daniel Nenni on 07-30-2016 at 7:00 am

One of my favorite pastimes is listening to the quarterly investor calls of the leading semiconductor companies. I can then match up the talking points with the calls I do with Wall Street, the conferences I attend, and the other data points I have collected while working inside the fabless semiconductor ecosystem for more than 30 years. Rarely do all the points match up but sometimes they are so off it makes for an interesting conversation.

As usual, the prepared statements were a bit obscure and convoluted, so it was very hard to draw specific conclusions, but several points are worthy of discussion:

First and foremost, it is getting harder to interpret the difference between GAAP and non-GAAP results. For example, AMD sold a majority interest (85%) in its assembly and test operations to another firm to raise cash, and as a result recorded a significant profit on the sale ($127M after provisions for taxes).

This (one-time) profit was included in GAAP results. The CEO proudly said, “We achieved profitability this quarter, for the first time since 2014.” Yet, without this one-time operations spin-out, the net operating results were a significant loss.

It is also interesting to note that there are not many key AMD assets left for sale or spin-out. It was reported that the iconic Sunnyvale campus is for sale so AMD can move to a smaller facility. Unfortunately, AMD does not actually own the property, so it cannot be used to bolster future GAAP results.

Second, AMD is paying an average interest rate of almost 7% on roughly $2B of long-term bonds, which start to mature in 2018. There is clearly insufficient operating income to pay off these bonds, so AMD will have to start rolling them over, assuming its credit rating holds up.
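To put that coupon burden in concrete terms, here is a back-of-envelope sketch using the article’s approximate figures (illustrative only):

```python
# Back-of-envelope annual interest on AMD's long-term bonds, using the
# article's rough figures (~$2B principal, ~7% average rate). Illustrative only.
principal = 2.0e9   # approximate long-term bond principal, USD
avg_rate = 0.07     # average interest rate cited in the article

annual_interest = principal * avg_rate
print(f"Approximate annual interest: ${annual_interest / 1e6:.0f}M")
```

Roughly $140M a year in interest, which is why the weak operating income matters so much here.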

Third, the PC market is still rather weak with “upper single digit declines to be expected this year”, according to the webcast call.

The last Bulldozer-based APU (CPU + GPU) product release occurred earlier this year, but there was no specific mention of this product on the webcast call. (As AMD has already pre-announced the replacement core architecture “Zen” it’s likely that this last APU refresh will not generate much new revenue or market share in the consumer PC or laptop area.)

They did indicate that the initial Zen-based server chip is with first customers for evaluation, and that “sales should begin in the first half of 2017”.

Fourth, AMD is benefiting from both the Xbox One and the PS4 for the coming Christmas season, although one of the analysts said, “OK, that’s a nice ramp in revenue, but the price takedown negotiated with Microsoft and Sony over time means this is likely a low-margin return.”

AMD countered with, “We are also ramping for the mid-life refresh of the consoles, as well — we’re excited about Microsoft’s announcement of the Xbox One S, a new console with improved High-Dynamic Range rendering support.” Unfortunately, Microsoft has also announced another refresh in 2017, which will likely cut into Xbox One sales this year.

Bottom line: AMD is definitely counting on a very well-received Christmas buying season for game consoles which may not happen.

Fifth, AMD announced a refresh of its discrete GPU card product family. The good news is it priced the cards very aggressively, and they appear to be selling well. The bad news is two-fold: (1) this is just a FinFET-based replacement for an existing product line, with better power dissipation but without the significant performance boost one would expect from a new GPU architecture, and (2) NVIDIA just announced a stripped-down version of its new architecture at a competitive performance and price point. Some games play better on one or the other. It is hard to imagine a mid-range, aggressively priced GPU refresh generating sustained revenue growth.

Sixth, there is a new joint venture with a Chinese firm to provide a specific x86-based server. Last quarter, when this JV was announced, AMD recorded “operating revenue” for the R&D applied to the project. On this quarter’s conference call, if I understood correctly, the payment from the Chinese firm was recorded as “IP licensing”; AMD said it had hit the first project milestone and expected the scheduled payments to continue.

Seventh, there was no mention of AMD’s ARM-based server product initiative, either the existing “Seattle” 28nm design (announced late last year) or its supposed refresh (using the ARM A72, in a 14nm process, due early 2017, as I recall). Remember, AMD bought microserver company SeaMicro in 2012 for $334M only to shut it down in 2015.

Eighth, there are high hopes for the Chinese JV, but that’s a couple of years away from generating any significant sales. It’s just R&D reimbursement currently.

Bottom line, expect a strong game console sales bump this year (but with pressure on margins). A GPU refresh will generate a bump in revenue (again, not at the high-end with its commensurately higher margins). PC revenues will decline. Server revenue is still very weak. And, there is a rather large debt redemption pending.

It’s all going to come down to whether the Zen-based x86 refresh captures new server (and subsequently, consumer PC) market share away from Intel. Time will tell, although the fact that the initial 14nm GPU part was somewhat “underwhelming” in power/performance does not bode particularly well for the Zen-based server part in 2017. I should also note that the original architect and team leader of Zen (Jim Keller) now works for Tesla as do several of his team members.


SoC QoS gets help from machine learning

by Don Dingee on 07-29-2016 at 4:00 pm

Several companies have attacked the QoS problem in SoC design, and what is emerging from that conversation is that the best approach may be several approaches combined in a hybrid QoS solution. At the recent Linley Group Mobile Conference, NetSpeed Systems outlined just such a solution, with an unexpected plot twist in synthesis.

The QoS picture isn’t as simple as it looks; there are more factors than slotting traffic into some priority scheme where higher-priority traffic moves through the system with less blocking. NetSpeed’s Joe Rowlands called this “lossy” information transfer, where local decisions on traffic patterns might solve a localized problem but don’t necessarily help overall system performance.

Let’s separate out the fact that IP blocks tend to speak different QoS languages – the case for using a network-on-chip in abstracting QoS in the first place. Complexity versus priority becomes more obvious in this diagram:


It’s interesting how these were classified. The difference between “variable” and “dynamic” is the difference between data traffic and user interaction. Also, assigning the GPU to “low” assumes that it has enough memory bandwidth and typically outruns most of its tasks. And putting the camera in “real-time” is a useful distinction: as with any pixel-processing engine, some latency to get it started is OK, but once it is rolling, operations have to proceed deterministically or there are unacceptable gaps in the output.

Overlay on top of that diagram the steps for power optimization, the problem of sequencing agents, and the issues of cache coherency and memory control. NetSpeed uses what it calls a layered SoC interconnect synthesis solution, a lot of words for multiple approaches working together to solve different aspects of the problem. There are two key elements to the solution: Pegasus, a “last level cache” block that can serve as a traditional memory cache or as configurable cache at other points in the network; and Gemini, NetSpeed’s coherent NoC IP.


With multiple cache controllers and specialized accelerators, Gemini offers massive configurability in a formally proven, deadlock-free interconnect. How should a NoC be configured? NetSpeed has applied machine learning algorithms to the NoC synthesis problem to set router topology, link widths, virtual channels, and buffer sizes. Instead of setting QoS only on a per-router basis, bandwidth is allocated at the system level for end-to-end QoS, accounting for cases with low-power modes.
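NetSpeed hasn’t published details of those algorithms, but the underlying idea of searching a configuration space against a cost model can be sketched in miniature. Everything below (the traffic numbers, the bandwidth and cost models, the parameter ranges) is invented for illustration:

```python
# Toy sketch of automated NoC parameter tuning in the spirit described
# above. NetSpeed's actual algorithms are not public; the traffic numbers,
# bandwidth/cost models, and parameter ranges below are invented.
from itertools import product

traffic_demand = {"cpu": 12.0, "gpu": 20.0, "camera": 8.0}  # GB/s, invented

def cost(link_width, buffers):
    # Rough proxy for area/power: wider links and deeper buffers cost more.
    return link_width * 1.0 + buffers * 0.5

def bandwidth(link_width, buffers):
    # Rough proxy: bandwidth scales with width but saturates without buffering.
    return (link_width / 8.0) * 2.0 * min(1.0, buffers / 4.0)  # GB/s, invented

required = max(traffic_demand.values())  # worst single flow a link must carry

# Exhaustive search over a tiny space; a real tool would use learned models
# to prune a vastly larger one.
best = min(
    ((w, b) for w, b in product([32, 64, 128], [2, 4, 8])
     if bandwidth(w, b) >= required),
    key=lambda wb: cost(*wb),
)
print(f"chosen link width={best[0]} bits, buffer depth={best[1]}")
```

The point is the shape of the problem, not the numbers: feasibility constraints from end-to-end bandwidth, a cost objective, and a search over router and link parameters that quickly outgrows manual tuning.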


The results of traffic-based adaptability, power control, and cache configurability combined with the machine learning router configuration yield solid results. (Comparing results between competing NoC implementations is nearly impossible unless one were to implement the same complex SoC in every variant. Even then, different optimization strategies would produce different results – see the Apple A9 dual sourcing conversation for a case in point.) NetSpeed offers a chart from what they say is a tier 1 mobile OEM using manual bandwidth tuning versus the automated NetSpeed synthesis. It is clear NetSpeed’s automation outperforms on bandwidth in every use case considered, sometimes dramatically.


NetSpeed also claims a development time advantage, and I think that takes into account what would likely be multiple trial-and-error runs in manual iterations. No info was provided on how much simulation goes into the machine learning optimization process, but even a significant up-front simulation effort would appear to be worth the wait. There’s more discussion on what they simulate on the NetSpeed Gemini product page.

I’m hearing this story more and more often – EDA tools are encroaching on system-level expertise by automating very complex design optimization problems. It’s a classic make-versus-buy conflict, where years of experience might seem threatened by adopting a tool that might do things better. But the bottom line in this new industry environment probably isn’t the huge mobile SoC design pursued by a small army of specialists. The target for these types of tools may be mid-range SoC designs in areas like the IoT, where years of SoC design experience isn’t built into the organization. Tools like NetSpeed’s will help teams with moderate system-level experience get better-optimized chips designed faster.


Industry Analyst Perspectives On The Apple WWDC 2016 Keynote

by Patrick Moorhead on 07-29-2016 at 12:00 pm

I was in-person and live at Apple’s WWDC keynote in San Francisco. The following are my quick takeaways from the event.

Watch and watchOS:
Apple is focusing on exactly what it should be with watchOS, and that’s speed, ease of use and upping the ante in health and fitness. With the first Watch and watchOS, Apple solved many of the problems with wrist wearables, but not all of them. The Watch was perceived by many as slow.

If the experience is anything close to what Apple showed, this will make a huge difference and remove many purchase objections, and I believe that, with a little marketing, it will increase sales.

Watch already has the lead in experience, insight and accuracy for health and fitness, and these improvements, from activity sharing to the ability to encourage and smack-talk, only cement that lead.

The Breathe app is a natural extension of Apple’s push into daily health. Breathing is as important as sleeping. What I wish it could do is detect when I’m anxious or stressed and tell me when to breathe, similar to stand reminders.


(Photo credit: Patrick Moorhead)

Apple TV and tvOS:
Given the impressive developer stats, tvOS is getting developer traction. The improvements to search, the addition of many video channels, using the phone as a remote, and instant watching will make huge improvements to the experience. Single Sign-On is the Trojan horse: if cable companies support it, Apple TV could really replace the cable box.

What will determine commercial success will be Apple’s ability to market it as consumers will need to be reminded what the new Apple TV can do. Many consumers see Apple TV as what it was before, and that’s a streamer, not an interactive platform.

I still believe the omission of 4K video capability is an oversight and will limit Apple TV to a mainstream audience, which is OK from a volume standpoint but not from a perceptual one. Apple does premium experiences, and 4K is premium.


(Photo credit: Patrick Moorhead)

Mac and macOS Sierra:
Apple brought some of the biggest improvements to macOS in years. These are black-and-white features, not shades of gray, and they dramatically impact the user experience. Auto Unlock with Watch and Apple Pay on the web are huge, as they change what many of us do every day on our PCs or Macs: log in and buy stuff.


(Photo credit: Patrick Moorhead)
With Optimized Storage, Apple may have finally gotten cloud storage right and will be better positioned to compete with Microsoft and Google. If the service works as reliably and as quickly as what Apple showed on stage, it could drive many consumers to reconsider a Mac. The more people let this feature settle in, the more fanfare it will get.

iPhone and iPad and iOS:
iOS 10 appears to be the biggest release since iOS 8, and Apple has opened up the “crown jewels” of its OS. With iOS 10, Apple has given developers deep access to Siri, iMessage, and Maps, and that’s huge. While Apple was less prescriptive than Facebook, Google, and Microsoft, this serves as Apple’s answer to CaaS (conversations as a service) and bots. The Apple platform is there and ready to do this, and a whole lot more.

Opening up Siri will have an enormous effect on the experience. I don’t see anything Google can do that Apple cannot, and Apple is doing it with the highest levels of privacy, which is unique. It’s apparent Apple has been working on this for a long time and, in many ways, appears ahead of where Google is with its developers on an intelligent agent.


(Photo credit: Patrick Moorhead)

HomeKit and Home app:
There aren’t many HomeKit-enabled devices in aggregate now, but I like what I heard from Apple. Security and privacy are key, and sometimes that stretches out the time it takes for manufacturers to be ready with security-compliant hardware. I was pleasantly surprised to see the Home app, as I thought it was a missing element at launch. The Home app will dramatically improve usability for the consumer.


(Photo credit: Patrick Moorhead)

Privacy and AI:
Apple made many good points on privacy and has found a unique way to maintain privacy while providing personal and differentiated AI services. Essentially, Apple does training in the cloud on non-personal information, then does the inference on the device.


(Photo credit: Patrick Moorhead)

I’m a bit blown away by how they did this, but when you own the hardware, software and the cloud, it opens up a lot of possibilities. I believe Apple has done much more with AI than it talks about. Siri is a complete AI platform, and Apple was first to market with one. Even its phone multitasking and power management use AI schemas. I believe that if Apple lets people under the AI covers, we will see elements of AI leadership.



Dragging RTL Creation into the 21st Century

by Bernard Murphy on 07-29-2016 at 7:00 am

When I was at Atrenta, we always thought it would be great to do as-you-type RTL linting. It’s the natural use model for anyone used to writing text in virtually any modern application (especially on the web, thanks to Google-style spell and grammar checks). You may argue that you create your RTL in Vi or EMACS and you don’t need no stinking GUI. I have bad news for you: you are now officially part of the older generation. “Kids” graduating these days expect GUI support for any code they create. So get used to it.

Naturally there are limits to how far you can take real-time checking. It would be neither practical nor useful to launch CDC or formal analysis every time you hit the space or Return key. But up-and-coming developers do expect the editor to flag and, where appropriate, correct basic errors. This is especially important for VHDL development, which can be particularly challenging for novices (a group in which I count myself). I should add that Sigasi provides similar capabilities for Verilog and for mixed-language designs.

On VHDL, you might argue “who cares – everything I do is in Verilog”. That purist stance is more difficult to sustain these days. Perhaps you have to integrate an Imagination Technologies GPU into your SoC (or one or more of many other IPs) and you need to add power management or other tweaks to support the integration. You’re going to have to deal with VHDL, and the less experience you have, the more mistakes you’re going to make (and the more time you’re going to spend trying to understand those mistakes). I can personally vouch for this. A language-aware editor would have made my life a lot easier.


Sigasi, based in Belgium, has created just such a linting capability, embedded in their Sigasi Studio product line. The base set checks for a wide range of common mistakes in VHDL:
· Unused declarations
· Duplicate declarations
· Declaration could not be found
· VHDL 2008 features in VHDL 93 mode
· Assignment validation
· Case statement validation
· Instantiation statement validation
· Library validation
· Range validation
· Deprecated and non-standard packages
· Duplicate, conflicting design unit names
· Missing return statement in function bodies
· Missing, unnecessary and duplicate signals in the sensitivity list
· Port, signal, variable, constant or generic declarations that are never read or written

A more advanced set checks for:
· Null range error
· Use of deprecated packages
· Redundant use of OTHERS
· Defining function bodies inside packages
· Infinite loops and processes without sensitivity lists
· Incorrect use of whitespace in some contexts
· Reference to unneeded libraries
· Unused declarations for ports, generics, signals, etc
· Incomplete and over-specified sensitivity lists


The most advanced version includes checks for:
· Dead states in FSMs
· Inaccessible code
· Objects never written or never read
· Naming conventions
· Consistent capitalization
· Case references
· Incomplete associate optional
· Positional association in instances

I’d like to call out a couple of these checks, since they may seem like “wow, who really cares” when in fact they can bite you badly. Start with naming conventions. Like it or not, a lot of in-house checking and generation tools depend on consistent naming conventions to drive connectivity creation and checking. An automatic connectivity tool will connect AHB_PCI_SLAVE (on a PCI IP) to AHB_PCI_SLAVE_MIRROR (on the AHB bus), but will silently skip the connection if you didn’t follow the convention. Some relatively simple name checking can save you a whole lot of problems.
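The convention-driven matching can be sketched in a few lines. The port names follow the article’s AHB_PCI_SLAVE example; the matching rule itself is a hypothetical illustration, not any specific tool’s behavior:

```python
# Sketch of convention-driven auto-connection: pair each IP port with its
# *_MIRROR counterpart on the bus only when the names match exactly.
ip_ports = ["AHB_PCI_SLAVE", "AHB_USB_SLAVE", "ahb_spi_slave"]  # note the bad case
bus_ports = ["AHB_PCI_SLAVE_MIRROR", "AHB_USB_SLAVE_MIRROR", "AHB_SPI_SLAVE_MIRROR"]

connections = [(p, p + "_MIRROR") for p in ip_ports if p + "_MIRROR" in bus_ports]
unconnected = [p for p in ip_ports if p + "_MIRROR" not in bus_ports]

print("connected:  ", connections)
print("unconnected:", unconnected)  # the lower-case port is silently skipped
```

The mis-cased port simply drops out with no error, which is exactly the sort of silent failure a naming-convention lint catches before integration.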


Then take consistent capitalization. VHDL doesn’t care about capitalization, but this can lull you into a false sense of security. Capitalization does matter when you get to a Verilog/VHDL interface, because Verilog is case-sensitive and you won’t get a connection if the case is wrong. Both this problem and the preceding one are good examples of things that seem perfectly fine while you’re working on an IP but bite you during integration (and it may take quite a while to figure out why).

Sigasi analysis generates informational, warning and error flags and indicates where quick fixes are available (I really wish I had had those when I was messing with VHDL). The Studio applications in which the linter is available come (optionally) as an Eclipse app, so they should plug easily into common RTL development environments.

You can learn more about Sigasi check-as-you-type capabilities HERE.

More articles by Bernard…


Why Elon Musk’s crazy plans for Tesla aren’t crazy

Why Elon Musk’s crazy plans for Tesla aren’t crazy
by Vivek Wadhwa on 07-28-2016 at 4:00 pm

Elon Musk recently laid out a “master plan” for where his company, Tesla Motors, is heading. The vision is undoubtedly ambitious: four new kinds of Tesla vehicles, solar initiatives, autonomous driving technologies and a ride-sharing program.
Continue reading “Why Elon Musk’s crazy plans for Tesla aren’t crazy”


Stressed out about Electrostatic Discharge (ESD) or Electrical Overstress (EOS)?

Stressed out about Electrostatic Discharge (ESD) or Electrical Overstress (EOS)?
by bkeppens on 07-28-2016 at 12:00 pm

Do not lose sleep worrying that your integrated circuits might fail during EOS/ESD events. Join us for the 38th annual EOS/ESD Symposium in Anaheim, CA in September. Experts in the field will address the latest research on EOS and ESD in the rapidly changing world of electronics.

As electronics become commonplace in every aspect of our lives, including medical applications, the control of our homes, and our cars, cost and reliability are of utmost importance. Meeting these requirements demands progress in creative ESD design; in innovative, comprehensive, and predictive verification methods; and in factory control standards and methods.

The 2016 EOS/ESD Symposium addresses this and more through tutorials, workshops, technical sessions, invited talks, and through the products and services presented in the industry exhibits.

There are 13 technical sessions covering topics like factory and materials, advanced CMOS, high voltage and RF ESD challenges, EOS/ESD case studies, device physics and modeling, ESD EDA tools, system level ESD, and ESD testing.

Download the entire program on our website, register for the event and stop losing sleep over ESD issues.

ESD Fundamentals: A six-part series on Electrostatic Discharge (ESD) prepared by the ESD Association

History & Background
To many people, Electrostatic Discharge (ESD) is only experienced as a shock when touching a metal doorknob after walking across a carpeted floor or after sliding across a car seat. However, static electricity and ESD have been serious industrial problems for centuries. As early as the 1400s, European and Caribbean military forts were using static control procedures and devices to prevent inadvertent electrostatic ignition of gunpowder stores. By the 1860s, paper mills throughout the U.S. employed basic grounding, flame ionization techniques, and steam drums to dissipate static electricity from the paper web as it traveled through the drying process. Every imaginable business and industrial process has had issues with electrostatic charge and discharge at one time or another. Munitions and explosives, petrochemical, pharmaceutical, agriculture, printing and graphic arts, textiles, painting, and plastics are just some of the industries where control of static electricity has significant importance.

The age of electronics brought with it new problems associated with static electricity and electrostatic discharge. As electronic devices become faster and their circuitry smaller, their sensitivity to ESD generally increases, and this trend may be accelerating. The ESD Association’s “Electrostatic Discharge (ESD) Technology Roadmap”, revised April 2010, notes: “With devices becoming more sensitive through 2010-2015 and beyond, it is imperative that companies begin to scrutinize the ESD capabilities of their handling processes”. Today, ESD impacts productivity and product reliability in virtually every aspect of the global electronics environment.

Despite a great deal of effort during the past thirty years, ESD still affects production yields, manufacturing cost, product quality, product reliability, and profitability. The cost of the damaged devices themselves ranges from only a few cents for a simple diode to thousands of dollars for complex integrated circuits. When the associated costs of repair and rework, shipping, labor, and overhead are included, the opportunity for significant improvement is clear. Nearly all of the thousands of companies involved in electronics manufacturing today pay attention to the basic, industry-accepted elements of static control, and ESD Association industry standards are available to guide manufacturers in establishing fundamental static charge mitigation and control techniques (see Part Six – ESD Standards). It is unlikely that any company which ignores static control will be able to successfully manufacture and deliver undamaged electronic parts.


SEMICON West – Leti FDSOI and IOT, status and roadmap

SEMICON West – Leti FDSOI and IOT, status and roadmap
by Scotten Jones on 07-28-2016 at 7:00 am

On Tuesday, July 12th at SEMICON West I had an opportunity to sit down with Marie Semeria, the CEO of Leti, and discuss the status and future of FDSOI. Leti pioneered FDSOI 15 years ago and has been the leading FDSOI research organization ever since.

Two years ago Leti and ST Micro demonstrated products on 28nm that are cost competitive with bulk technology. For the first time the industry could consider two approaches to leading-edge requirements: FinFET for the high end and FDSOI for low-cost, flexible IOT designs. Both technologies can cover multiple technology nodes. Since then ST has licensed its 28nm process to Samsung and Global Foundries, and the 14nm process developed with Leti to Global Foundries. Global Foundries is now preparing to introduce a 22nm technology, 22FDX, based on the Leti-ST 14nm front end with a relaxed back end for cost. FDSOI has moved out of research into foundries and an IDM, and products are coming out.

In terms of scalability:

  • 14nm – demonstrated the technology is scalable to 14nm with ST Micro.
  • 10nm – they have completed modeling and some test devices. They have a full integration scheme and they have shown the modeling matches the actual results allowing them to have confidence when they use modeling to extrapolate to the next node. Strained SOI and silicon germanium are 10nm performance boosters but even with the current substrate they can meet 10nm requirements.
  • 7nm – modeling done.
  • 5nm – beyond 7nm, Leti believes horizontal nanowires will be the next technology.

Author’s note – the following table was added to the article on 8/10/2016

Leti defines the nodes mentioned above as follows where CPP = contacted poly pitch.

Node (nm)        14    10    7     5     3
CPP pitch (nm)   80    60    50    40    30
M1 pitch (nm)    64    48    40    32    24

Commercially, 28nm is running at ST and Samsung and 22nm is coming up at Global Foundries. Global Foundries plans a follow-on to 22nm; Leti has assignees in Dresden working with Global Foundries, and discussions are ongoing. The exact node for the follow-on technology hasn’t been announced yet (author’s note – in a recent interview Samsung also discussed a follow-on to the 28nm process they are running; they want to avoid multi-patterning for cost reasons, so it sounds like a relaxed 22nm technology at Samsung, while at Global Foundries my guess is something in the 12nm to 16nm range will be next).

The ecosystem for FDSOI is completely established, with fabless companies, foundries, IP companies and IDMs all supporting it. Leti has established the Silicon Impulse initiative as a gateway for designers to get trained and use multi-project wafers to evaluate FDSOI. In one year more than 20 companies have joined the initiative to assess the technology, and there are over 60 tape-outs running at ST, Global Foundries and Samsung.

Marie expects to see many more FDSOI products in IOT due to low energy consumption and the ability to support RF and embedded memory. They have demonstrated RF at over 300GHz! Leti is working with ST to develop back-end memory at 28nm or 20nm for a microcontroller; the memory may be PCM or OxRAM. Leti is also working with Spin Tech on magnetic memory; they have a European research grant and are focused on embedded memory and low-voltage operation.

IOT is a very fragmented market today and requires many different types of IP; FDSOI could be the IOT platform. In automotive IOT the technology sits at the connected-device level, plus data processing and security. More and more big companies are developing their own infrastructures and clouds to manage data. SOI has good radiation hardness, which is an advantage for automotive. At DAC Leti demonstrated a new driver assistance system using an ST microcontroller. Automotive needs low cost and coverage of a global environment; Leti has a probabilistic approach that avoids floating-point operations, lowering computing requirements by 100x and power by 200-400x. In IOT you have to think about the specific requirements of the application, and then you can have a tremendous impact on power and cost. You don’t need a lot of computing capacity if you look at the whole system. They are working with automotive companies to optimize the system to keep relevant information close to the sensors, optimized for the type of operation.

In the late nineties IBM introduced partially depleted SOI (PDSOI) in their internal processor line. I suggested to Marie that because PDSOI required an expensive SOI substrate yet didn’t reduce process costs, it was an expensive solution that created an image of SOI as unaffordable, whereas FDSOI greatly reduces process complexity and is far more affordable (author’s note – IBM’s processor needs were for high performance, and cost wasn’t really an issue). My belief is this created a perception of SOI as high cost that FDSOI is still working to overcome, and Marie agreed with me on this.

Today, with Global Foundries poised to ramp 22FDX and Samsung and ST running 28nm, FDSOI is finally poised to take off. Global Foundries and Samsung are both planning follow-on nodes, and FDSOI has a path to continue scaling for many years.


A Chinese smartphone drill in progress

A Chinese smartphone drill in progress
by Don Dingee on 07-27-2016 at 4:00 pm

One of our astute readers caught what looks like a major gaffe in the Linley Group mobile conference presentations from this week. It’s another indication of the speed of change in mobile markets and the instability that is giving Apple and others heartburn.

Here’s the chart in question:


The point of contention is who, exactly, are the China tier 1 vendors? Linley lists Huawei, Lenovo, Xiaomi, Yulong, and ZTE. As it turns out, that is outdated info according to the IDC Worldwide Quarterly Mobile Phone Tracker:


Never heard of OPPO or vivo? I was flipping channels last night and saw some reality show where the participants were holding an OPPO phone. It turns out both brands are owned by BBK Electronics, and there’s a third brand coming soon called imoo. It’s insane how quickly these Chinese brands are appearing and disappearing on the top 5 mobile list, although OPPO and vivo have been out there for several years quietly building.

(For those not familiar with the title reference, a “Chinese fire drill” was a popular game among teenage drivers out on the town with their friends, where everyone would exit the car at a stoplight and run around it until the light turned green, and whomever was nearest the driver door jumped in and took control. Maybe we need to call it the “American fire drill” now.)

The importance of this list in the Linley argument is who is or may soon be doing their own LTE chipsets – a bullet on their slides says the top 3 plus “internal” make up 98% of mobile. Qualcomm still owns the high end, and MediaTek has surpassed the internal vendors: Apple, Huawei, and Samsung combined. We know Xiaomi has their soon-to-release “Rifle” chipset.

The days of premium mobile brands and high-end chipsets may be coming to a close, however, at least in terms of who makes the most money. Even Linley says that most of the remaining mobile growth is at the low end in developing countries, and that MediaTek and Spreadtrum are the primary beneficiaries of that trend.

In response, Qualcomm continues to push their offering lower, and made a compelling argument for a scalable LTE roadmap:


That’s why Qualcomm was all lathered up in their recent earnings report about unnamed Chinese companies not counting chips correctly – these numbers are starting to get pretty big. I suspect we’ll see more change here in the coming quarters; the market has moved substantially since we published “Mobile Unleashed” about 8 months ago.