Apple Car Crumble
by Roger C. Lanctot on 10-21-2016 at 7:00 am

Software, mechanical and electrical engineers working for auto makers received a huge self-esteem injection this week as events unfolding at Apple suggested that the company had abandoned long-rumored plans for building a car. Considering that Apple still hasn’t delivered a decent navigation app with traffic services, it’s hardly a shock that the company would consider making a car a bridge too far.

The bottom line is that making a car IS a pretty tough task. In many respects cross-town rival Tesla Motors has made the whole process look – at least outwardly – entirely too easy, if expensive. The hiring binge at Apple that suggested interest in the automotive industry and the firing binge reflecting its denouement reveal a newly realistic Apple.

It appears that Apple retains an abiding interest in developing automated driving technology, at least according to “observers” and “insiders.” But a car developed in house or in partnership with an existing car maker now appears unlikely.

Observers and analysts were quick to point out that Apple only focuses on high margin opportunities. The theory here is that the low margins in the auto industry created a sour grapes scenario for Apple thereby leading to a bowing out of the race to build a car.

The margin argument is quaint and convenient, but it ignores the fact that Apple changes the economics of the markets it enters. The whole argument for Apple entering the automotive industry was that it would change the game and rewrite the rules.

By definition, if Apple found certain conditions in the market to be unfavorable, Apple would alter those conditions. Apple fans were looking to Apple to bring innovative transportation solutions to the market that would help reduce traffic, congestion, highway fatalities and fossil fuel consumption.

It is more likely that Apple came to the conclusion that it not only lacked any novel new solutions to these transportation challenges, but adding even MORE cars to the equation would be unhelpful and therefore both financially and spiritually unrewarding. There were bigger barriers for Apple to overcome than low margins. Low margins (negative margins?) haven’t stopped Tesla Motors.

The reality is that Apple is not built or positioned to take on the automotive industry, and nothing in its various product offerings suggests a grasp of what it takes to make a car. Creating a car requires an extraordinary level of cooperation and coordination between teams working on different systems.

The internal operational security within Apple, which limits communication between teams, is a big barrier to this kind of collaboration. Additionally, the regulatory oversight and liability exposure characteristic of the auto industry are enough to scare away even the bravest and most robust legal department.

Finally, it’s pretty clear that Apple has always regarded the car as an accessory to the Apple eco-system, and that view appears unchanged. Apple continues to deliver devices produced under highly controlled circumstances, encompassing hardware and software, that work universally and globally in a predictable manner.
Apple’s product development and design discipline have served the company well in its run-up to dominance in the smartphone market. Apple’s automotive integrations, such as CarPlay, are consistently more predictable and easier to use than most alternatives – even if Apple’s resistance to adopting industry standards periodically creates nightmares for car companies.

This resistance to industry standards is yet another limitation for Apple’s ambitions. When it comes to connectors and interfaces it’s Apple’s way or the highway. It looks like Apple will have to hitch a ride on the automotive highway for the foreseeable future. And that’s very reassuring for all those who design and build cars for a living. It’s a tough job. Thank you for seeing and accepting that, Apple.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


The Fabless Empire Strikes Back, Global Foundries and Cadence make moves into Integrated Photonics!
by Mitch Heins on 10-20-2016 at 4:00 pm

In August I wrote an article proclaiming Score 1 for IDMs vs Fabless and discussed Intel’s announcement of volume production of their 100G PSM4 and 100G CWDM4 transceiver products.

This week the Fabless Empire strikes back.
Daniel Nenni and I attended a two-day Photonic Summit and workshop hosted by Cadence Design, PhoeniX Software and Lumerical Solutions.
The keynote speaker for the summit was Ted Letavic, Senior Fellow of Global Foundries. Finally, one of the big players in the fabless ecosystem spoke out about integrated photonics, and his presentation should strike fear into the heart of every large IDM currently vested in integrated photonics. Letavic rolled out a succinct and powerful argument for why integrated photonics is going mainstream, and he made it abundantly clear that Global Foundries is entering the integrated photonics foundry business.

This week should be marked as a watershed in photonics: a major pure-play foundry proclaimed that integrated photonics is real and here to stay, then went on to claim in no uncertain terms that it has demonstrated high-yielding integrated photonics on a 300mm CMOS platform. In fact, Global Foundries is using a 300mm SiGe-on-SOI process to enable active photo detectors and modulators. And, in case there were any nay-sayers in the audience, Ted’s talk was immediately followed by John Bowers of UCSB, who presented UCSB’s accomplishments in bonding III-V materials (most notably on-chip lasers) onto silicon. All of the pieces of the puzzle are now coming together: light sources, high-speed modulators, high-index silicon waveguides for small, low-cost photonic devices, and a growing infrastructure for photonic test and packaging.

The icing on the fabless photonic attack came in two forms. The first was a talk by Aaron Zilkie of Rockley Photonics, a fabless PIC (photonic integrated circuit) chipset startup. Rockley is promising a disruptive change to data center network architectures using an integrated high-speed switching solution comprising a digital packet-switching ASIC and an optical I/O PIC, integrated into a single low-cost package without the power-hungry high-speed RF signal traces between the ASIC and the optical components. The solution promises to collapse multiple levels of switches in the data center network, effectively flattening the network and providing a more flexible, higher-performing data center.

The fact that a small fabless company could in theory team up with Global Foundries to challenge the likes of Intel in the data center was not lost on those in the audience who came from the IDM side.

And if the startups didn’t bother the incumbents, then they should at least have been worried by the likes of Hewlett Packard Enterprise, who showed a complete reticle field of integrated photonic structures with thousands of resonator rings being used to characterize the effects of process variance on their photonic devices.

The second prong of the fabless attack came in the form of Cadence Design stepping up to the photonics plate. They had made some noise at the Optical Fiber Conference in the spring of this year, but the event they co-hosted with PhoeniX Software and Lumerical Solutions, two well-known photonic design automation companies, clearly showed that Cadence is jumping into the photonics fray with both feet. For those of you who have been living under a digital rock for the last 25 years, Cadence virtually owns the analog and mixed-signal implementation market with its Virtuoso franchise, and it has a very sizeable share of the mixed-signal verification market to boot. Virtuoso is ubiquitous for custom and analog design, and Cadence has now combined forces with PhoeniX Software and Lumerical Solutions to produce a state-of-the-art, top-down, electro-optical design automation suite. Lumerical is known for its photonic simulation engines and solvers, while PhoeniX is known for its native curvilinear shape engines. Both companies have participated in literally hundreds of photonic tape-outs over the last decade. The combination of Lumerical and PhoeniX with Cadence, plus the entry of a high-volume pure-play CMOS-based foundry, spells real trouble for the IDMs who have been ruling the high-end integrated photonics markets.

The Fabless vs IDM photonics battle is on. Stay tuned as this story continues to develop!

Also read: Fabless Photonic Design Flow Takes Shape as Cadence teams up with Lumerical and PhoeniX


Why Integrate Bluetooth LE IP in a Single Wearable SoC?
by Eric Esteve on 10-20-2016 at 12:00 pm

Did you know that, in over 800 teardowns of mobile and wearable products from 2012 to 2015, wireless chips outnumbered the products themselves, indicating multiple wireless ICs in some designs (1)? It is worth looking at the advantages of integrating wireless technology such as Bluetooth low energy into a single SoC, especially for systems where bill of materials (BOM) cost and power consumption are real issues, like Internet of Things (IoT) and wearable applications.

Bluetooth was initially defined for short-range use, typically a headset paired with a smartphone, with a paired-device broadcast approach. According to the Bluetooth SIG, the launch of Bluetooth 5 at the end of 2016 or beginning of 2017 will remove these limitations on range and broadcasting capability, and will also double the speed and consequently halve the power consumption. Let’s have a look at the various wireless architectures integrating Bluetooth low energy, and the preferred process technology associated with each option.

  • Standalone RF transceiver: The controller and PHY are integrated in the transceiver chip, which connects to the main SoC housing the software stack and application code. The RF transceiver is implemented in legacy nodes, like 180 nm.
  • Wireless network processor: Several wireless protocols are integrated in a dedicated processor housing the wireless protocol stack, but the application code runs in the application processor SoC. This network processor option targets the mature 90 nm node.
  • Fully integrated wireless SoC: This monolithic, single-die implementation is ideal for Bluetooth low energy in IoT applications. The Link Layer and PHY are integrated into the SoC, which runs all software stacks and application code. The 40 nm and 55 nm technology nodes are becoming popular for this monolithic option.
  • Combo wireless chipset solution: Several wireless technologies such as WiFi and Bluetooth are integrated into a single transceiver connected to the SoC that includes the digital modem. All software, wireless stacks, and application code reside in an external non-volatile memory. This solution is prevalent for mobile application processors and leverages aggressive process nodes like 28 nm or below.

The combo wireless chipset architecture offers virtually unlimited off-chip memory, giving programmers more resources, but the chip count (application processor + transceiver + flash) is too high if the goal is a very low-cost application, such as a wearable or IoT device. Current wearable designs, like fitness wristbands, implementing Bluetooth LE for low-bandwidth wireless connectivity, integrate only two chips: one SoC connected to a Bluetooth LE IC via a UART or I2C bus. It is now possible to push the integration further for this type of application, by integrating the complete Bluetooth IP (Link Layer + PHY) into a single SoC. It’s technically possible, but what are the benefits?

A monolithic solution offers the expected benefits of lower power, lower BOM cost and smaller footprint, all extremely valuable for the target application. It also offers lower latency: data sent over the AMBA AHB bus saves 5 to 10 cycles of latency versus an SPI bus.

Looking more carefully at power consumption, the two-chip solution comprises an SoC, say in 40 nm or 55 nm, and an RF transceiver in 180 nm. Integrating the Bluetooth LE function as an IP in the SoC significantly decreases the power consumed by the Bluetooth function: logic that ran in a 180 nm process (the RF transceiver architecture) now runs in an SoC targeting a 40 nm (or 55 nm) process node at much lower supply voltage, 0.9V for these ultra-low-power processes. This benefit comes on top of the power saving expected from integrating two chips into one. For battery-powered devices like fitness wristbands, lower power consumption translates into longer usage, and the time between charges can make or break the product.
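
To put rough numbers on that, recall that CMOS dynamic power scales as P ≈ αCV²f, so halving Vdd alone cuts dynamic power by 4x before even counting the lower switched capacitance of the finer node. Below is a minimal back-of-the-envelope sketch; the capacitance, frequency and activity numbers are illustrative assumptions, not figures from this article or any datasheet.

```python
# Back-of-the-envelope dynamic power comparison: illustrative only.
# Dynamic power follows P = alpha * C * Vdd^2 * f (switching activity,
# switched capacitance, supply voltage squared, clock frequency).

def dynamic_power(c_eff_farads, vdd_volts, freq_hz, activity=0.1):
    """Classic CMOS dynamic power estimate: P = alpha * C * V^2 * f."""
    return activity * c_eff_farads * vdd_volts**2 * freq_hz

# Hypothetical numbers: a 180 nm transceiver at 1.8 V vs. the same
# function absorbed into a 40 nm SoC at 0.9 V, with lower switched
# capacitance thanks to the smaller geometries.
p_180nm = dynamic_power(c_eff_farads=50e-12, vdd_volts=1.8, freq_hz=16e6)
p_40nm = dynamic_power(c_eff_farads=15e-12, vdd_volts=0.9, freq_hz=16e6)

print(f"180 nm discrete : {p_180nm * 1e6:.1f} uW")
print(f"40 nm integrated: {p_40nm * 1e6:.1f} uW")
print(f"Ratio: {p_180nm / p_40nm:.1f}x")  # ~13x with these assumptions
```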

As mentioned earlier, the price point of a wearable or IoT device can be decisive. If it’s too high, the product will stay in a niche: cool, but too expensive for wide market adoption. Wireless integration removes a complete chip from the bill of materials, reducing packaging and test cost and removing duplicated power management. This can save over $0.15 in packaging costs and the 20-30 extra pads required to support the additional wireless network processor. These savings, in conjunction with a reduced PCB footprint, make the total system cost savings very attractive.
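
As a sketch of how those line items add up per unit, consider the toy comparison below; apart from the $0.15 packaging figure cited above, every number is a hypothetical placeholder chosen only to show the structure of the math.

```python
# Toy per-unit BOM comparison: two-chip vs. single-chip wearable design.
# Only the $0.15 packaging saving comes from the article; all other
# figures are made-up placeholders.

two_chip = {
    "soc": 1.20,              # hypothetical 40/55 nm SoC unit cost
    "ble_transceiver": 0.60,  # hypothetical 180 nm BLE IC unit cost
    "extra_packaging": 0.15,  # packaging/test overhead cited in the article
    "pmic_duplication": 0.10, # duplicated power management (placeholder)
}
single_chip = {
    "soc_with_ble_ip": 1.45,  # hypothetical integrated SoC unit cost
}

saving = sum(two_chip.values()) - sum(single_chip.values())
print(f"Per-unit BOM saving: ${saving:.2f}")
```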

The Bluetooth PHY is available in 180 nm, 55 nm and 40 nm process nodes, allowing designers to take advantage of the power, area and performance benefits of the more advanced processes, especially 55 nm and 40 nm. Moreover, SoC development costs are still reasonable at these nodes compared with nodes at 28 nm and below. Unlike very high volume applications like smartphones, or performance-demanding ones like servers, such low-compute, low-bandwidth systems can achieve enough power and cost savings at 55 nm or 40 nm to be successful.

Last but not least, the DesignWare Bluetooth Low Energy IP solution is qualified by the Bluetooth Special Interest Group (SIG) which is critical for designer success.

Eric Esteve from IPNEST

(1) According to a survey by Teardown.com


Disarming Trolls
by Bernard Murphy on 10-20-2016 at 7:00 am

An unintended consequence of the ubiquity of the Internet, particularly in social media, is the rise of the troll. Trolls post comments of unbelievable vitriol in some cases, comments that if issued in person and in public might lead to arrest and psych evaluations. Then vitriol turns into viral vitriol and the helpless target is bombarded with hate speech. But you can’t just suspend the rights of trolls. Speech is protected, at least up to a point, in many countries and few of us could honestly claim that we have never indulged in a heated response to a post or email. We may not be as vile as the worst offenders, but we share some of their traits.

In fact, theories of what makes for trollish behavior seem to be in flux. Accepted wisdom is that many of these people are socially awkward misfits (particularly teens and young adults) working out aggression through the anonymity of the Internet. But recent research suggests that many trolls are proud of their opinions, which they feel reflect social norms they want to defend. They are quick to anger, in that state perhaps less aware of crossing lines in self-expression, but mostly they are happy to be identified, to garner credit among like-minded thinkers for their vigorous support of those norms. Clickbait and echo chambers certainly play on this all-too-human weakness.

So “outing” trolls won’t necessarily help, and since none of us are perfect we ought to recognize that we too might be tempted to indulge in trollish behavior. Perhaps it would be preferable to try to block bad posts rather than bad posters, which requires some level of recognition and a determination of how to respond. Social media providers are working on various systems along these lines. Twitter, one of the most visible platforms for troll attacks, has an interesting approach in Periscope, which depends on users rather than machine learning to decide whether a comment is abusive or offensive. As soon as one viewer reports a comment, Periscope polls a randomly selected jury of other users viewing the same content to say whether they also find it offensive or abusive. If found guilty, the commenter is put in a 1-minute timeout and their comments are disabled. Repeat offenders are permanently muted. It is a nice approach, depending on human rather than artificial intelligence, and difficult to game (I would think) given random jury selection.
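
The mechanism is simple enough to sketch in a few lines of code. The snippet below is my own reconstruction of the flow as described, not Twitter’s implementation; the jury size, guilty threshold and severity scale are all assumed parameters.

```python
import random

def make_viewer(tolerance):
    """A simulated viewer votes 'abusive' when a comment's severity
    exceeds their personal tolerance."""
    return lambda severity: severity > tolerance

viewers = [make_viewer(t) for t in (0.2, 0.4, 0.5, 0.7, 0.8, 0.3, 0.6)]
reporter = viewers[0]

def poll_jury(viewers, reporter, severity, jury_size=5, threshold=0.5):
    """Poll a random jury of other viewers on one reported comment."""
    pool = [v for v in viewers if v is not reporter]  # reporter excluded
    jury = random.sample(pool, min(jury_size, len(pool)))
    guilty_votes = sum(juror(severity) for juror in jury)
    return guilty_votes / len(jury) >= threshold

# A reported comment with a (hypothetical) severity of 0.65 on a 0..1 scale.
if poll_jury(viewers, reporter, severity=0.65):
    print("Guilty: mute commenter for 60 seconds; repeat offenders muted for good.")
else:
    print("Jury disagrees: no action taken.")
```

The random jury is what makes this hard to game: an attacker cannot know in advance which viewers will be asked to vote.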

Then again, Twitter may not have moved fast enough. According to Jim Cramer, Salesforce.com may have walked away from an acquisition in part because of the public perception of hatred associated with Twitter traffic. Which should be a reminder to other social platforms. It’s not just about being morally righteous – it’s also about company valuation.

In Google, a group called Jigsaw has developed (and no doubt continues to develop) a capability called Conversation AI. This is a machine-learning-based approach trained on 17 million comments on New York Times stories, with moderator flags on offensive/abusive comments, plus data from Wikipedia discussion logs, where a crowd-sourced service flagged reactions. Google claims it can now match judgments against a human panel with ~90% certainty and a ~10% false positive rate. Not bad, but I’m pretty sure these rates need to improve quite a bit to reach reasonable 1st Amendment standards. Meantime Google is planning continued trials with the NYT and Wikipedia.
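
For a flavor of how moderator-flagged comments can train such a system, here is a deliberately tiny baseline classifier. This is a conventional TF-IDF plus logistic-regression sketch on toy data, not Jigsaw’s actual model or dataset.

```python
# Minimal supervised abuse classifier: NOT Conversation AI, just the
# standard bag-of-words baseline that flagged training data enables.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "You make a fair point, thanks for the link.",
    "I disagree, but the data you cite is interesting.",
    "You are an idiot and should shut up forever.",
    "Everyone who thinks this deserves to be hurt.",
]
flags = [0, 0, 1, 1]  # 0 = acceptable, 1 = flagged by moderators

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, flags)

score = model.predict_proba(["shut up, idiot"])[0][1]
print(f"Abuse probability: {score:.2f}")
# Where to set the decision threshold is exactly the certainty vs.
# false-positive trade-off discussed above.
```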

An interesting sidebar here is that Conversation AI was inspired in part by work done by Riot Games on moderating player behavior in their massive multiplayer League of Legends world. Riot Games uses machine learning to analyze conversations, which has led to players being banned. From this they are able to show players in real time where aspects of their comments are offensive or abusive. According to the company, providing this feedback has led to a 92% drop in offending behavior, which to me is an indicator that nipping the problem in the bud may be more effective than post-facto censorship.

Facebook doesn’t seem to be (at least publicly) as active in this area, perhaps because you connect only (mostly) to friends and you can unfollow or unfriend anyone who offends you. They do have some capabilities to detect a related problem – someone impersonating your account with the same name and profile. They’re also testing methods to detect intimate images as instances of revenge porn. In both cases, the potential victim is notified but must choose to have action taken (to avoid problems in purely automated responses).

This seems like an area where collaboration between providers is needed, perhaps even more than in domains like general AI. We could even dream that similar methods might encourage a general rise in the level of civility in on-line debate, out of which all kinds of wonderful things might happen (perhaps sane and effective government, to pick just one random example). Details on Google Jigsaw and the Riot Games work come from this Wired article. A Twitter Periscope article can be found HERE, an article on the characterization of trolls HERE, and what I could find on Facebook’s work in this area HERE.

More articles by Bernard…


Should Cybercrime Victims be Allowed to Hack-Back?
by Matthew Rosenquist on 10-19-2016 at 12:00 pm

Being hacked is a frustrating experience for individuals and businesses, but allowing victims to hack-back against their attackers is definitely a dangerous and ill-advised path.

Compounding the issue is the apparent inability of law enforcement and governments to do anything about it. Cybercrime is expected to reach a dizzying $6 trillion by 2021, according to Cybersecurity Ventures’ crime report. With so much at risk and so little being done, tempers can quickly rise. Many are asking: why not let people and companies hack-back their attackers? Some have gone so far as to say the U.S. Department of Justice (DoJ) and the Federal Bureau of Investigation (FBI) have not declared it to be illegal.

Well, it is. Not only is it illegal, it is a terrible idea fraught with peril and liability.

This is not the Wild West
Individuals are not judge, jury, and executioner. As a society we long ago decided to follow the rules of due process. Otherwise chaos and victimization run rampant at the cost of people’s rights and liberties. The same holds true for cyber hack-back schemes.

Foremost, it is extremely difficult, nearly impossible in fact, to know exactly who is hacking you in a digital environment. Security professionals call this ‘attribution’: knowing who is behind the attack.

In the course of events and investigation, you may see an IP address of the would-be assailant, but it could be false. It is easy to ‘spoof’ an identity and appear as someone else. It is trivial to forge credentials or fake an Internet address, email, machine name, network card number, or just about any other form of digital identity. Even if the offending system is properly identified, it could itself be hacked and under the control of others. You may bring down or impact another innocent victim, just like you. Conversely, someone downstream might inadvertently attack your systems, thinking you were knowingly attacking them.

The risks of unintended consequences are very high
What if your hack-back efforts bring down a hospital, critical infrastructure, or a safety system? Innocent people could be injured or even die. Is that acceptable? You may cause more damage and create more victims, and you have no way of knowing what cascade effects will result. Hack-back actions may end up being disproportionate and viewed as more harmful to the community than the original offense.

Vigilantism is rarely a good path in modern times. People who believe they have the right to dole out justice then begin to define what is a crime and what they can rightfully do about it. The difference between a crime, an injustice, and something they just don’t like can get blurred. This is a dangerous slope.

We do not want just anyone deciding what constitutes being ‘hacked’. There are already cases where people take such situations to the extreme and cry foul. I talked with one shop owner who thought a customer’s bad review of their product was a ‘hack’ and that the customer should be punished. They wanted to hack-back this person’s systems so they could not write any more bad reviews. I was shocked and strongly advised against such actions. I would not want to give them, or anyone else driven by emotion, the latitude to act upon such opinions.

For some it is tough to fathom. Being attacked and choosing not to respond seems cowardly. But as attribution is not clear, we must hold back from brazen and unguided outbursts. If your wallet goes missing in a crowded stadium, should you start tackling people who you think could have been involved? That will likely get you in far more trouble with the crowd and with law enforcement. At the end of the day, I suspect you will end up either in a holding cell or in the hospital. Either way, you will have time to reflect on your poor decision.

A terrible idea

Hacking people, even those whom you suspect are behind attacks against you, is not recommended. The White House describes it as “a terrible idea”. Security professionals echo the same sentiment. Hacking others, even if they are in the wrong, opens you up to significant liability. Any business or individual who pursues this course should be prepared to pay a multiple of the damage they cause to whomever they hack. It does not necessarily matter who started it. No matter how passionate you might feel in the moment, lashing out with a risk of harming other systems and people is not the best path.

So let’s put this issue behind us and rally our efforts to more productive endeavors. We should be working on how to better predict, prevent, detect, and recover from attacks. Governments and law enforcement must continue to develop better tools to quickly track down culprits, remove their ability to victimize others, and gather the evidence necessary to properly prosecute cybercriminals in alignment with established laws and justice procedures. Technology should fuel our evolution forward to a better society, not push us back into feudal states of retribution and individual revenge.

Interested in more? Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.


ARM and SoftBank: A Joint Vision of the Future!
by Daniel Nenni on 10-18-2016 at 8:00 pm

Next week is ARM TechCon and I’m extra excited about this one because of the SoftBank acquisition. In fact, the opening keynote says it all: ARM CEO Simon Segars and SoftBank CEO Masayoshi Son will discuss the next chapter in the book of ARM. To better prepare for this keynote you should probably read our book “Mobile Unleashed: The History of ARM”. SemiWiki members can get a free PDF version HERE. If you are not a member, please join HERE as my guest.

ARM is poised for a new chapter in its growth as part of the SoftBank Group, the global technology player. The technology, the business model, and the partnership remain at the heart of what ARM does, but the future holds greater possibilities.

For the opening keynote of ARM TechCon 2016, we are pleased to welcome Masayoshi Son, the chairman and CEO of SoftBank Group Corp., to the stage for his first public appearance with ARM since this historic acquisition. ARM CEO Simon Segars will begin the session by sharing his perspective on the acquisition, then welcome Mr. Son.

Mr. Son will discuss his own professional journey, the innovation that fuels his passion, and his broad vision of a happier, smarter, better connected world. Following the keynote, Mr. Son and Mr. Segars will take the stage together to talk about the acquisition and what lies ahead for the ARM and SoftBank ecosystems.


ARM TechCon is also a great place to spend time with your favorite EDA company. In fact one of my favorite EDA companies, Mentor Graphics, has an ARM TechCon landing page HERE where you can see a schedule of their technical sessions plus what they are demonstrating in their booth.

Mentor Graphics is the Platinum Sponsor for this year’s ARM TechCon. We support the broad range of ARM®-based architectures to develop today’s advanced digital technologies.

Spoiler Alert: Mentor has a wide range of embedded demos including Automotive, IoT (with ARM TrustZone), Medical, and Industrial Robotics. Here are the embedded technical session abstracts:

Making Full use of Emerging ARM based Heterogeneous Multicore SoCs
The complexity of heterogeneous SoC architectures is increasing at blinding speed. While these complex hardware architectures enable product vision, they also create new and difficult challenges. Running an OS on a single core is child’s play. This is also true for running an SMP-capable OS on homogeneous multicore processors. The modern-day SoC now combines multiple asymmetric cores, graphics processing units, offload engines and more on a single piece of silicon. This session will discuss opportunities for system partitioning and consolidation, and some of the key issues and challenges of developing and debugging software on these complex systems. Presenter: Felix Baum, Product Manager, Mentor Graphics Embedded Systems Division

Hard Real-time Virtualization – How Hard Can It Be?
The ARMv8-R architecture offers effective virtualization while maintaining the hard real-time response needed to control applications in the industrial, automotive, medical, and military markets. Virtualization enables safety, security, and reliability and it can be the key to successful, cost-effective development and deployment of complex software applications. This session brings together engineers from ARM and Mentor Graphics to describe how these processors can be applied in next-generation, highly-assisted automotive driving systems. These safety-related applications are kept free from interference by the underlying isolation present in the new ARMv8-R processor architecture. Presenters: Felix Baum, product manager, Mentor Graphics Embedded Systems Division – Jon Taylor, Product Specialist, ARM

Device Software: Where Safety Meets Security
Safety has been codified in several industry standards, such as ISO 26262 for automotive and IEC 61508 for industrial, where software has become a vital part of both the device and its safety assurance. Security has now become critically important for device manufacturers and their suppliers, including those that supply COTS software.

Existing standards define the lifecycle leading to the creation of safety critical software, but do not say anything directly about security. Cybersecurity, however, is now an important consideration for manufacturers, governmental agencies, and the public at large. Fortunately, there is significant overlap between safety and security software development, and the practices underlying safe software development can be extended to security.

This session discusses the overlap between the two practices, and what to consider when fulfilling governmental and industry recommendations for cybersecurity over and above what is required for safety. Presenter: Robert Bates, Chief Safety Officer, Mentor Graphics Embedded Systems Division.

Making Sure Your UI Makes the Most of the ARM-based SoC: Performance Matters
When architecting user interfaces for embedded devices, it’s not uncommon for developers to hit performance problems on the target hardware. This session looks at the process of implementing compelling UI designs on ARM-based architectures and provides guidance to ensure the UI makes efficient use of the SoC’s performance. Examples from automotive, medical, and industrial applications will be cited. Hands-on performance analysis techniques and tooling will be used throughout the presentation. Presenter: Phil Brumby, Senior Technical Marketing Engineer, Mentor Graphics Embedded Systems Division.

I hope to see you there!


Achieving Lower Power through RTL Design Restructuring (webinar)
by Daniel Payne on 10-18-2016 at 4:00 pm

From a consumer viewpoint I want the longest battery life from my electronic devices: iPad tablet, Galaxy Note 4 smart phone, Garmin Edge 820 bike computer, and Amazon Kindle book reader. In September I blogged about RTL Design Restructuring and how it could help achieve lower power, and this month I’m looking forward to learning even more about this topic at a webinar scheduled for October 26th at 10AM PDT.

As the number of CPUs, GPUs, and IP blocks in today’s SoCs grows, power management is becoming a very complex task, especially during the exploration phase, where design restructuring is used to find the optimal low-power architecture that meets the design requirements for performance, die size and power consumption.

Given the complexity of today’s designs, a multitude of questions need to be answered: Why should a certain block be moved? Where should it be moved? Will there be an impact on timing closure, low power, or both? How long would it take to update the design? And so on.

STAR – RTL Build and Signoff

With its RTL Build and RTL Signoff tools, Defacto Technologies‘ STAR platform is leading design automation towards push-button design restructuring decisions. Now STAR is adding new capabilities to help reconcile RTL design restructuring and power intent decisions. Instead of waiting until the design is updated to assess the impact on low power, a designer can now make the right design changes while fully complying with low-power intent requirements. This is a significant improvement that provides benefits in daily SoC design tasks.

With the STAR software tool you can perform six related functions.

STAR works with industry standard file formats, so it will easily fit into your existing design flow.

Register today for the webinar.

Related Blogs


Soitec – Enabling the FDSOI Revolution
by Scotten Jones on 10-18-2016 at 12:00 pm

Recently I published two blogs on Fully Depleted Silicon On Insulator (FDSOI) and the potential the technology shows for a variety of low-power and wireless applications. To produce FDSOI devices, the device layer has to be thin enough to ensure the device is fully depleted, and ideally the buried oxide has to be thin enough to allow back-gate biasing for performance tuning. Soitec has been the industry leader in developing the substrate technology for FDSOI, and last Thursday I had the opportunity to discuss FDSOI substrates with Christophe Maleville, Executive Vice President of Soitec’s Digital Electronics Business Unit.

My two most recent blogs on FDSOI are available here and here.

Soitec’s proprietary process for FDSOI is Smart Cut™. The basic process is as follows:

  1. The process begins with a high-quality device wafer.
  2. Oxidation is performed on the device wafer to create a thin, high-quality silicon dioxide layer. This oxide is critical because it will become the buried oxide layer.
  3. Hydrogen is ion-implanted into the device wafer to create a “fracture plane”.
  4. The device wafer is cleaned, flipped over and bonded oxide-side down to a “handle wafer”.
  5. The device wafer is “split” along the fracture plane, leaving a thin device layer separated from the handle wafer by the buried oxide. Because only a thin layer is removed, the original device wafer may be reused multiple times.
  6. The final FDSOI wafer is smoothed and annealed.

The Smart Cut™ process has been in use on 300mm wafers for 15 years. Originally the wafers had thicker device layers and were used to make Partially Depleted SOI (PDSOI) devices. The problem with PDSOI was that an expensive SOI substrate still required all the same processing as a standard planar wafer; this relegated PDSOI to niche use. FDSOI still uses an expensive substrate but at the same time eliminates a lot of processing, making it cost-competitive with the alternative technologies.

Around 2004 or 2005 the industry realized that bulk planar technology was going to reach its limit somewhere around the 20nm node. The solution was to move to fully depleted devices with better electrostatic control. Successful FDSOI requires very good device-layer thickness uniformity, and at the time the uniformity being produced was approximately 5x what it needed to be, so the industry began to pursue FinFETs on bulk to create fully depleted devices. Since that time Soitec has been able to achieve five-angstrom uniformity, making FDSOI a viable alternative to FinFETs.

In 2010 Soitec demonstrated it could meet the requirements for FDSOI, and in 2012 ST-Ericsson demonstrated FDSOI devices. As noted in my previous blogs, ST is currently producing 28nm devices, Samsung is ramping 28nm devices, GLOBALFOUNDRIES will soon be ramping 22nm devices, and 12nm is in development.

The following table summarizes FDSOI material requirements by node:

FDSOI Requirements by Node

| Node | Delivered device thickness (nm) | Device thickness after processing (nm) | Buried oxide thickness (nm) |
|------|---------------------------------|----------------------------------------|------------------------------|
| 28nm | 12 ± 0.5 | 6 | 25 |
| 22nm | 12 ± 0.5, improved roughness | 5-6 | 20 |
| 12nm | 12 | ~5 | 20, may go to 15 for improved electrostatic control |
| 7nm | Add strain | ~4 | TBD |


The delivered device thickness is the thickness of the device layer on the wafer as delivered by Soitec. In the course of device fabrication, the device layer is thinned to the “after processing” thickness. Thinning to 5-6nm can be done today with high yield; thinning below 4nm leads to very critical cleaning and epitaxial requirements.

Strain can be introduced by growing a strained silicon layer over a silicon germanium layer for the device layer. After bonding, the silicon germanium layer is removed during the splitting operation, leaving a strained silicon layer over the buried oxide.

Soitec has been manufacturing 300mm PDSOI wafers (mainly for IBM, AMD and Freescale) as well as other products such as power and photonics wafers. PDSOI has been in production since 2012, and Soitec is now converting that capacity to FDSOI and ramping up FDSOI production. Soitec has facilities in France and Singapore; capacity can be converted to FDSOI in 4 to 6 months in France and 6 to 9 months in Singapore, with a potential global capacity of 1.5 million wafers per year between the two sites. Soitec has also licensed Smart Cut™ to two other companies.

FDSOI has demonstrated good analog and RF performance and very low power. ST-Ericsson has shown a 3GHz processor with very little temperature rise during operation, making it ideal for integration near heat-sensitive devices. Due to the thin, isolated device layer, FDSOI has good soft-error-rate immunity and is roughly 1,000x more radiation tolerant than other devices. With a 15nm buried oxide, 0.35 volt operation has been demonstrated by LEAP in Japan.

FDSOI is well suited for 5G, wearables, IoT and smart watches, and provides capabilities that automotive applications need now.


Phish Finding
by Bernard Murphy on 10-18-2016 at 7:00 am

I wrote recently on the biggest hole in security – us. While sophisticated hacks on hardware and software make for good technology reading, fooling users into opening the front door remains one of the easiest and lowest cost ways for evil-doers to break into our systems. And one of the more popular ways to fool us is phishing in all its various guises – dangling a tempting email or link encouraging us to click through to the next level.

An outfit called PhishLabs has published quite detailed surveys for the past few years. The most recent covers both consumer-targeted phishing and business/organization-targeted spear-phishing; I’ll just look at some of the consumer-related highlights (lowlights?) here.

One style of phishing is email-based, often asserting that you need to update account information to keep an account current or avoid a penalty. These have historically been fairly unsophisticated, often looking a little too clumsy and threatening to be taken seriously, but some more recent attempts are much more difficult to spot. A recent phish posing as mail from American Express is so well crafted that all the contents look reasonable, checking the sender’s address doesn’t help, and the only indication that it is a phish is a single-letter spelling change in one link.
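
One simple heuristic against exactly this attack is to flag link domains within a single edit of a brand you trust. The sketch below is illustrative only; real phishing defenses layer many such signals, and the trusted-domain list here is an assumption for the example.

```python
# Flag look-alike domains one edit away from a trusted brand (Python 3.9+).
from urllib.parse import urlparse

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Illustrative allow-list; a real system would use a far larger one.
TRUSTED = ["americanexpress.com", "paypal.com", "apple.com"]

def looks_like_phish(url: str) -> bool:
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    # Distance 0 is the genuine domain; distance 1 is suspiciously close.
    return any(0 < edit_distance(domain, t) <= 1 for t in TRUSTED)

print(looks_like_phish("https://www.americamexpress.com/account"))  # True: one letter off
print(looks_like_phish("https://www.americanexpress.com/account"))  # False: exact match
```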

Industries targeted for consumer-based phishing shouldn’t be a surprise. The most common are:

• Financial services at 33%, in which I would expect credit card targets dominate.
• Cloud storage and file hosting at 20%. Attacks here grew over 150% in 2015 and are apparently targeted primarily at collecting usernames and passwords.
• Webmail and online services at 18%.
• Ecommerce at 12%. Indications are that activity through Alibaba contributes significantly to this rate.
• Payment services at 10%. This category originally attracted over a quarter of attacks but has dropped significantly as a target, for unexplored reasons.

The most rapidly growing among these targets are cloud storage and file hosting and webmail and online services.

A depressing result from the survey is that 77% of attacks worldwide are directed at US consumers, which maybe says something about our wealth, or our gullibility, or possibly both. China is the next closest target at a paltry 5%, again attributed at least in part (per the survey) to growth in Alibaba transactions. Attack rates in both the US and China are growing, though the US so dominates the percentage that it must be nearing saturation. Curiously, the UK and Germany have seen a decrease in this area.

Lest you think that blocking everything but .com sites will save you, the majority of phishing sites are hosted on legitimate but compromised domains. However, outside these common domains, one observation the survey makes is that while known problem top-level domains had been handled by browser blacklists and whitelists, this approach may become difficult to maintain now that ICANN has opened up more free-form naming for top-level domains. At least for the present this point may be moot, as bad actors seem to prefer working with the (relative) trust we already have in .com, .org and .net domains.

PhishLabs also describes some of the ecosystem for phishing malware. As with other types of malware, the majority of users of phishing kits are not sophisticated enough to build the kits themselves, so they acquire them in Dark Web marketplaces. Kits are pretty cheap but, as in legitimate markets, users prefer freeware. There is now a growing trend for kit developers to distribute their kits free, but with unadvertised backdoors. This way the developer can collect whatever the malware user collects, for direct use or for sale.

Phishing is in one sense the last frontier in security. Not in the sense that we are anywhere close to conquering hacks – we’ll never get there. I mean more in the sense of the weakness that is exploited – us, the consumers. To me that makes it a very interesting area, because zero-day defenses depend on an understanding of psychology and culture (what works in the US might not work in China, or vice-versa) as much as they do on technology. PhishLabs themselves have products and services to detect and shut down phishing sites. I have also seen work on discovery in this area based on Deep Learning techniques. This should be an area ripe for innovation.

You can read the full PhishLabs survey HERE.

More articles by Bernard…