
DARPA Flex Logix and TSMC!
by Daniel Nenni on 01-23-2017 at 10:00 am

When I first saw emerging semiconductor IP company Flex Logix actively involved with TSMC, I knew something big was coming, and boy was I right. DARPA announced today that an agreement is in place with Flex Logix to develop EFLX eFPGA technology on TSMC 16FFC for use by companies or government agencies designing chips for the US Government. Wow! GO DARPA!

“Embedded FPGA technology is a game changer in the chip design process and we are pleased to be working with DARPA,” said Geoff Tate, CEO and co-founder of Flex Logix. “Chip development costs and lead times keep increasing and the ability to reconfigure RTL at any time can eliminate expensive chip spins, enable one chip to address many customers and applications, and extend the life of chips and systems. As a result, designers can easily keep up with changing standards and customer requirements.”

Digging a little deeper you will find that this announcement is tied to the DARPA Microsystems Technology Office (MTO) and specifically the CRAFT program:

It can cost up to $100 million and take more than two years for a large team of engineers to design custom integrated circuits for specific tasks, such as synchronizing the activity of unmanned aerial vehicles or the real-time conversion of raw radar data into tactically useful 3-D imagery. This is why Defense Department engineers often turn to inexpensive and readily available general-purpose circuits, and then rely on software to make those circuits run the specialized operations they need. This practice can speed up design and implementation, but it also results in the deployment of unnecessary and power-hungry circuitry. And that, in turn, can lead to technology that requires more power than can be practically supplied on small flying platforms or on warfighters already burdened by too much battery weight.

The Circuit Realization at Faster Timescales (CRAFT) program seeks to shorten the design cycle for custom integrated circuits to months rather than years; devise design frameworks that can be readily recast when next-generation fabrication plants come on line; and create a repository of innovations so that methods, documentation, and intellectual property can be repurposed, rather than reinvented, with each design and fabrication cycle. This novel, less expensive design paradigm also could help diversify the innovation ecosystem by making it practical for small design teams to take on complex custom circuit development challenges that are out of their reach today.

Reducing the time and cost for designing and procuring custom, high-efficiency integrated circuits should drive more of those in the DoD technology community toward best commercial fabrication and design practices. A primary payoff would be a versatile development environment in which engineers and designers make decisions based on the best technical solutions for the systems they are building, instead of worrying about circuit design delays or costs.

The program manager for CRAFT is Dr. Linton Salmon, who came to DARPA from both sides of the semiconductor industry. He spent 15 years in executive roles directing the development of CMOS technology from the 130nm through the 7nm node at GlobalFoundries, Texas Instruments, and Advanced Micro Devices. Prior to that, Dr. Salmon was an academic at Case Western Reserve University and Brigham Young University.

We have been covering Flex Logix on SemiWiki for 12 months now and have a dozen articles on different aspects of the technology and the company. The SemiWiki Flex Logix landing page is HERE.

Congratulations to the hard-working people at DARPA, Flex Logix, and TSMC, absolutely.

About Flex Logix
Flex Logix, founded in March 2014, provides solutions for reconfigurable RTL in chip and system designs using embedded FPGA IP cores and software. The company’s technology platform delivers significant customer benefits by dramatically reducing design and manufacturing risks, accelerating technology roadmaps, and bringing greater flexibility to customers’ hardware. Flex Logix recently secured $7.4 million in venture funding. It is headquartered in Mountain View, California and has sales rep offices in China, Europe, Israel, Taiwan and Texas. More information can be obtained at http://www.flex-logix.com or follow on Twitter at @efpga.


Technology Update With Andrew Faulkner and Jim Lipman of Sidense
by Daniel Nenni on 01-23-2017 at 7:00 am

Sidense is an interesting company in a very important market segment. Sidense was founded in 2004 and their 1T-OTP memory macros are now used in hundreds of chips from 180nm to 16nm for code storage, secure encryption keys, analog and sensor trimming and calibration, ID tags, and chip and processor configuration.

If you are designing chips for mobile, automotive, industrial, consumer, or Internet of Things (IoT) applications, you probably already know Sidense, but just in case you don’t, I was able to catch up with Andrew Faulkner and Jim Lipman for a brief Q&A update on their technology and what we can expect moving forward.

Your one-time programmable memory IP products are based on an antifuse bit cell. Can you briefly explain how the bit cell works?
The antifuse bit cell, when un-programmed, behaves like a capacitor with an insulating SiO2 layer between a transistor gate and silicon substrate. When programmed, the insulating oxide undergoes a permanent and controlled breakdown and the bit cell behaves like a conducting diode. Unlike an electrical fuse, which is normally conductive or “closed” until a current of sufficient magnitude flows through the fuse and interrupts (blows) the conductive path, an antifuse OTP bit cell is normally non-conductive, representing a “0” logic bit state, until it is programmed to a logic bit state of “1.”

The antifuse OTP bit cell is programmed by applying a sufficiently high voltage across the gate and substrate of a thin oxide transistor (around 6V for a 2nm-thick oxide, or 30MV/cm) to break down the oxide between gate and substrate. The positive voltage on the transistor’s gate forms an inversion channel in the substrate below the gate, causing a tunneling current to flow through the oxide. The current produces additional traps in the oxide, increasing the current through the oxide and ultimately melting the oxide and forming a permanent conductive channel from gate to substrate. Once programmed, the antifuse bit cell cannot be un-programmed.

Antifuse-based OTP memory operates very reliably over a wide temperature range compared to other types of NVM memory that depend on charge storage to determine the state of the memory’s bit cells. The memory is also highly secure, since it is extremely difficult to read or modify the OTP memory’s contents.
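
The programming-field figure quoted above is easy to sanity-check. Here is a minimal sketch of the arithmetic, using the round numbers from the answer (illustrative only):

```python
# Quick check of the programming-field figure quoted above (illustrative arithmetic only).
program_voltage = 6.0          # volts applied across the thin gate oxide
oxide_thickness_cm = 2e-7      # 2 nm expressed in centimeters

field_v_per_cm = program_voltage / oxide_thickness_cm
print(f"Oxide field during programming: {field_v_per_cm / 1e6:.0f} MV/cm")  # -> 30 MV/cm
```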

Sidense’s 1T-OTP is one transistor per bit cell – how is this accomplished?
The Sidense bit cell utilizes a unique split channel architecture, 1T-Fuse™, in which the bit cell transistor’s gate overlaps both thick (I/O) and thin (gate) oxide regions. This architecture has been implemented in many foundry and IDM process flows from 180nm down to 16nm.

Why have you been able to successfully use a single transistor antifuse-based bit cell for your OTP and what are the advantages over a two-transistor antifuse bit cell?
The 1T-Fuse bit-cell architecture has several advantages over two-transistor antifuse bit cells. Using only one transistor per bit cell results in very small OTP memory arrays that minimally impact the size and cost of the chips in which they are embedded. Since the programming channel is very small and only occurs over the transistor channel, the programming is very robust and reliable. In addition, it is almost impossible to ascertain whether or not a bit cell is programmed, either by physical methods (de-processing or cross sectioning) or by scanning techniques since the bit-cell state does not depend on charge storage, like a flash bit cell.

Sidense has successfully demonstrated 1T-OTP operation down to 16nm FinFET processes. What do you think is the scaling limit for your technology?
Sidense 1T-OTP macros are designed such that the bit-cell programming voltage, which is higher than normal chip operating voltages, does not affect the macro’s peripheral circuitry. The 1T-OTP bit cell architecture has shown good scalability down to 16nm and we don’t expect to run into problems down to 7nm. However as process technology scales, it becomes more difficult to design peripheral circuitry, such as sense amplifiers, to work at the lower voltages of the process. The technology scaling limit may well be set by these circuits rather than by the 1T-OTP bit cell.

What technical challenges are your customers facing with respect to NVM and how does Sidense address them?
We have seen universal acceptance of our OTP across many applications in the Smart Connected Universe, which comprises the mobile computing and communications, IoT, automotive, industrial, medical and wearable market segments. The challenges our customers are seeing in implementing NVM include low-power and low-voltage operation, an expanding need for high temperature operation for automotive and industrial applications and the ever-increasing need for higher security both for data-in-transit and data-at-rest in storage.

Along with the inherent low power and security of the 1T-OTP bit cell, Sidense has developed various circuit techniques to minimize power consumption and enhance OTP IP security. In addition, 1T-OTP is available at more than 17 foundries and almost 60 process variants, including those targeting highly demanding automotive and low-power IoT and mobile applications.

What are the advantages for your customers to design in 1T-OTP compared to other OTP?
I’ve already mentioned 1T-OTP’s broad scalability, from 180nm to 16nm and below, low power and high security attributes. 1T-OTP arrays are available in several BCD, HV and CIS technologies at more than 11 nodes across more than 5 foundries. Compared to other OTP technologies, such as eFuse, 1T-OTP is easy to program and read and is very reliable with retention greater than 10 years at maximum operating temperature and a 100% read duty cycle. Since the state of an antifuse OTP bit cell is not determined by charge storage it is difficult to determine the state of the bit cell, making it inherently secure.

Antifuse-based 1T-OTP does not require any additional masks or process steps and does not require burn-in, again making it a very cost-effective NVM solution. Furthermore, bit-cell programming may be done either by using an external power supply or using an integrated IPS macro, supporting programming at-test or in the field. Once data programmed into OTP is finalized, either for an entire memory array or part of it, conversion to ROM is simple, just by changing one non-critical mask step.

Where does your 1T-OTP fit in with all the new memory technologies (FRAM, MRAM, RRAM, and PCRAM) that are currently under development or in production?
We feel that 1T-OTP serves markets not targeted by the new memory technologies, which are mostly being developed and used as NAND-based flash and DRAM replacements in high-density cache or data storage applications. Many of the new technologies do and will require additional process steps and hence additional cost. For the most part, they are or will be available as separate chips rather than memory IP cores.

What is your take on the IoT market? Does it represent a big opportunity for Sidense?
Absolutely! However, we look at it in a slightly different way than others. Analysts like to talk about killer apps and after the smartphone wave there was a void – without a doubt, IoT has filled that void.

Unfortunately many folks have difficulty getting their heads around IoT and what it is. Is it devices? Is it big data? Is it everything in between? How do we define which market segments it comprises and, more importantly, how do we target them? In our opinion IoT should really be the IoE, the Internet of Everything!

As we discussed earlier we defined the “Smart Connected Universe” that cuts across traditional markets, including automotive, industrial, consumer and others. Devices in the Smart Connected Universe are defined by a few common characteristics: sensing, smarts and connectivity. In fact wherever we see a combination of these characteristics we find a sweet spot for our OTP and eMTP products. Our products are used to store secure keys and trim and calibration parameters, among many other uses. These Smart Connected devices exist as bridges between the analog world that surrounds us and the digital world and with that in mind, sensors are key components. In the IoT ecosystem, sensors are everywhere and so are opportunities for Sidense NVM products.

www.sidense.com


Another Interesting Thing From TSMC!
by Daniel Nenni on 01-21-2017 at 7:00 am

As I mentioned in my previous post, the TSMC investor call this month was very interesting and Morris Chang was in fine form during the Q&A. As a semiconductor professional I think some of the questions are ridiculous but maybe they have value to the financial people. This one question from Randy, who I think is very astute, is SemiWiki discussion worthy:

Randy Abrams
Yes, thank you. The first question, I wanted to ask: your outlook is more in line with the industry, where you are guiding 5% to 10% for foundry near similar levels. Could you talk about the factors to be more in line after gaining the last few years? And can you also address the China business; we are seeing the China foundries grow faster. SMIC is growing 20% to 30%, how does TSMC combat or defend share more on the mature nodes, where they’re starting to grow faster?


(At the bottom it says SMIC is partially owned by TSMC. TSMC did get a 10% equity stake after the IP litigation, a stake which I thought TSMC had already divested. Please post a comment if you know otherwise.)

First and foremost, TSMC is being conservative as they always are, and they are shielding their #1 customer, which is Apple. There is no way the second half of the year will be 5% growth with Apple single-sourcing 10nm from TSMC for the next iPhone and iPad. TSMC will again be in double digits (10-15% revenue growth) for 2017, as I previously stated.

This is going to be another strong year for the foundries but I do find it interesting that while the semiconductor foundry business is posting double digit gains the semiconductor industry as a whole is relatively flat… Comments?

Second, SMIC is surging on second-source business now that they are shipping a TSMC-compatible 28nm process, most of which is in China. How does TSMC combat or defend the mature nodes? In China they don’t, they push the market to FinFETs. Remember, the TSMC GDS-compatible market stops with FinFETs and SMIC does not expect to have 14nm until 2020 or so. Meanwhile TSMC is getting ready to release a fourth generation (12nm) FinFET process optimized for density and cost. In fact, I hope TSMC shows an updated version of the infamous Intel chip scaling graph shown below.


Remember, this graph was based on a paper done by TSMC before 16nm went into production. TSMC then released 16FF+, 16FFC, and now 12nm.

My guess is that TSMC 12nm will easily be on par with Intel 14nm in regards to chip density and superior in cost per transistor… Comments?

Unfortunately, Intel is still flogging this outdated slide. In fact, just this month at the J.P. Morgan 2017 Tech Forum, Intel Client Computing Group VP Navin Shenoy said Intel 14nm is equivalent to Samsung and TSMC 10nm so they are considering renaming their 10nm:

“I’m confident that when 10 nanometer — our 10 nanometer — comes out, and this is something that maybe we should rename it, I don’t know, we’ll think about that, but when our 10 nanometer comes out, we will have a clear density advantage, and a clear performance and power advantage versus what others in the industry have.”

Well, yes and no. Unfortunately for Intel their 10nm will come out about the same time as TSMC 7nm so no, Intel will not have a clear density advantage:

Morris Chang
I think 2017 will be pretty strong in terms of technology, it will be a pretty strong 16 or 14 FinFET year, and our market share in 16, while it’s quite high, is not as high as I would like, it’s actually in the close to 70% or 65% to 70%. Now that is not quite as high as our 28 nanometer which even now, you know, like almost 80% and now, 2017 is – I think it’s a pretty – we think will be a pretty strong year and result.

Absolutely…


TCAD Simulation of Organic Optoelectronic Devices
by Daniel Payne on 01-20-2017 at 4:00 pm

In my office there are plenty of LED displays for me to look at throughout the day: three 24″ displays from Viewsonic, a 15″ display from Apple, an iPad, a Samsung Galaxy Note 4, a Nexus tablet, a Garmin 520 bike computer, and a temperature display. LED and OLED displays are ubiquitous in all sorts of consumer electronics, so there must be some clever way that engineers simulate and design these. To learn more about OLED and LED devices I spoke with Steve Broadbent by phone; his background includes an MS degree in physics from the University of Maryland plus decades of experience in the world of TCAD. I was surprised to discover that OLED devices are now being used for residential and commercial lighting applications and not just consumer electronics. Steve works at Silvaco and will be hosting a webinar on this TCAD topic on January 26th, from 10 AM to 11 AM PST.

Device simulators in TCAD have been around for many years; they use calibrated models and mesh structures for the predictive analysis of new device structures. With a device simulator you can speed up new development, reduce risks, and even improve reliability. Command-line device simulation was the earliest approach; to make the TCAD experience easier, a GUI is now being used as the starting point for device simulation. The new GUI-based device simulator for LED and OLED devices from Silvaco is called Radiant, first announced in 2015.

Related blog – It’s Better than SUPREM for 3D TCAD

With the Radiant tool you are looking at a 2D cross section of an LED or OLED device, shown below on the left-hand side of the screen, where each layer in the stack is a different color. On the right-hand side of the screen you define the structure of the stack.


The GUI for Radiant

In the webinar Steve will show how to set up each of the seven organic layers of an OLED in the Radiant tool. You specify the properties of each material being used, such as the work function, permittivity, heat capacity, and color.


Making a multi-layer OLED
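
To give a feel for the kind of per-layer information involved, here is a minimal sketch of a stack description as a plain data structure. This is purely hypothetical and is not Radiant’s input format; the layer names and property values are placeholders.

```python
from dataclasses import dataclass

@dataclass
class OLEDLayer:
    name: str
    thickness_nm: float
    work_function_ev: float   # examples of the material properties mentioned above
    permittivity: float       # relative permittivity

# Three of the seven organic layers, with placeholder values only.
stack = [
    OLEDLayer("hole transport layer",     40.0, 5.4, 3.0),
    OLEDLayer("emissive layer",           30.0, 5.6, 3.0),
    OLEDLayer("electron transport layer", 40.0, 2.9, 3.0),
]

for layer in stack:
    print(f"{layer.name:24s} {layer.thickness_nm:5.1f} nm, WF = {layer.work_function_ev} eV")
```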

From the Radiant GUI you can run analyses such as a DC simulation, which generates a DeckBuild input deck to run the simulation. When DeckBuild is finished you can view a current plot. Another type of analysis is an optical simulation, and one example to be discussed is from M. Baldo, as published in Nature, volume 395, page 151:

Emission spectra of two OLEDs

Another way to help optimize the design of LED and OLED devices is to sweep a parameter through a specific range and then run a number of simulations in sequence. This allows the TCAD user to better understand how to make trade-offs in reaching their design requirements.
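
As a rough illustration of that sweep idea, here is a minimal Python sketch. The `run_dc_simulation` helper is a hypothetical stand-in for generating an input deck and running the device simulator; it is not an actual Radiant or DeckBuild API.

```python
def run_dc_simulation(etl_thickness_nm: float) -> float:
    """Hypothetical stand-in for a Radiant/DeckBuild run; returns a figure of merit."""
    # A real flow would generate an input deck, run the device simulator,
    # and post-process the resulting current or luminance data.
    return 1.0 / etl_thickness_nm  # dummy result, for illustration only

# Sweep the electron transport layer thickness from 10 nm to 60 nm in 10 nm steps
# and collect one simulation result per point.
results = {t: run_dc_simulation(float(t)) for t in range(10, 61, 10)}

best = max(results, key=results.get)
print(f"Best figure of merit in this toy model at {best} nm")
```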

Related blog – 3D TCAD Simulation of Silicon Power Devices

Webinar

Register for the webinar today, and learn something new about TCAD for OLED and LED devices on January 26th. Here is the webinar outline:

 

  • TCAD simulation of Organic LED
    • Simulations required in the development of Organic LED
    • Theoretical background (Electrical simulation, Optical simulation)
  • An integrated simulation environment for the LED/OLED devices
    • What is Radiant
    • Features
    • Areas of simulations Radiant will cover
    • Simulation flow (Electrical simulation, Optical simulation)
  • Examples
    • Electrical simulation of a multilayer OLED device
    • Optical simulation of a multilayer OLED device
    • FDTD simulation of light propagation in an OLED device
  • Summary and future enhancement

Even if you register for the live webinar and cannot make it on January 26th, don’t worry, because you will receive an email with a link to the archived webinar. My favorite part of any webinar is always the Q&A time where you can get those nagging questions answered and receive clarification on what you just learned.


Fan-Out Wafer Level Processing Gets Boost from Mentor TSMC Collaboration
by Mitch Heins on 01-20-2017 at 12:00 pm

I caught up with John Ferguson of Mentor Graphics this week to learn more about a recent announcement that TSMC has extended its collaboration with Mentor in the area of Fan-Out Wafer Level Processing (FOWLP).

In March of last year Mentor and TSMC announced that they were collaborating on a design and verification flow for TSMC’s InFO (Integrated Fan-Out) packaging technology using Mentor’s Xpedition Enterprise and Calibre nmDRC/RVE platforms. That flow allowed designers to lay out the InFO structures with Mentor’s Xpedition Package Integrator and then use Calibre nmDRC for design rule checking, with cross probing back into Xpedition using Calibre RVE.

Since then, the Mentor and TSMC teams have been working closely together to enhance the flow to shorten design cycle times, minimize designer effort and ensure higher quality GDS hand-offs to improve first-time success rates. Key to the collaboration were efforts to ensure seamless assimilation of TSMC’s newest technologies, whether single or multiple die on InFO packaging, with or without a substrate, and with or without package-on-package. To achieve this, Mentor has attacked several different areas.

Firstly, Mentor developed new Xpedition Enterprise functionality to make it easier to create InFO-specific fab-ready metal structures such as seal rings, parameterized mesh pad generation, degassing holes and additional metal for balancing metal density.

Mentor next added HyperLynx DRC technology to the flow for in-design InFO-specific manufacturing verification checks. HyperLynx DRC allows designers to find and fix DRC issues while still in the design phase, reducing the number of iterations out to GDSII for DRC checking in Calibre. Final sign-off rule checking is still done with Calibre nmDRC for both die and InFO package design rule checks.

New to the flow with this release is the addition of Calibre 3DSTACK and the capability to do sign-off level layout-vs-schematic (LVS) checks for inter-die connectivity verification of the entire InFO-based package.

For IC designers this may sound trivial, but when you realize that you may be dealing with multiple die, each with its own CAD database, as well as data for the silicon wafer providing the InFO connectivity, you start to see how messy the CAD flow can get. Considering that each die may have thousands of pins, you also realize how easy it would be to get something hooked up wrong and how hard it would be to find a mistake without good LVS tools. This will be a much-appreciated addition to the flow.
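
A toy example makes the point. The sketch below is purely illustrative (it is not Calibre 3DSTACK, and the die and pin names are made up); it simply shows why an LVS-style comparison between the intended netlist and the extracted InFO routing is essential once thousands of pins are involved.

```python
# Intended inter-die connectivity (from the system netlist) vs. what the
# fan-out redistribution layer actually connects (extracted from layout).
intended = {
    ("die0", "TX0"): ("die1", "RX0"),
    ("die0", "TX1"): ("die1", "RX1"),
    ("die0", "VDD"): ("package", "VDD"),
}
extracted = {
    ("die0", "TX0"): ("die1", "RX0"),
    ("die0", "TX1"): ("die1", "RX0"),   # one swapped connection, easy to miss by eye
    ("die0", "VDD"): ("package", "VDD"),
}

for pin, target in intended.items():
    actual = extracted.get(pin)
    if actual != target:
        print(f"Connectivity mismatch at {pin}: expected {target}, got {actual}")
```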

In December of 2016, John was interviewed for an article in Chip Scale Review in which he outlined how TSMC has worked with EDA companies like Mentor to develop EDA solutions for IC and package design with an intent to ensure that InFO designs would be fully compliant with TSMC’s packaging design rules and sign-off requirements. At that time, John mentioned that TSMC was in fact working to expand the InFO tool support into sign-off electrical analysis to enable designers to analyze the parasitic impacts from InFO and its neighboring layers. It appears this is now in place for the Mentor flow, with the addition of signal integrity checking of the InFO interconnects using signal path tracing, extraction, simulation and netlist export.

The flow also now supports integration to thermal analysis and thermally-aware post-layout simulation flows to provide early identification of potential system level heat issues. The connection to the simulation world also enables such things as multi-die reliability analysis including analysis of electromigration and IR drop.

While Fan-Out Wafer Level Processing is catching on with its promises of low cost, small form factors, and low power with high performance, the addition of a fully integrated IC and package design flow goes a long way toward making this a truly usable technology. TSMC is using its extensive expertise in generating process design kits for advanced IC processes, along with its significant experience and long relationships with EDA players like Mentor Graphics, to jump out well ahead of its OSAT (outsourced assembly and test) competitors in bringing FOWLP technology into real production use.



Adversarial Machine Learning
by Bernard Murphy on 01-20-2017 at 7:00 am

It had to happen. We’ve read about hacking deep learning / machine learning, so now there is a discipline emerging around studying and defending against potential attacks. Of course, the nature of attacks isn’t the same; you can’t really write an algorithmic attack against a non-algorithmic analysis (or at least a non-standard algorithmic analysis). But you don’t have to. These methods can be spoofed using the same types of input used in training or recognition, through small pixel-level modifications.

In the link below an example is shown in which, through such modifications, both a school bus and the face of a dog are recognized as an ostrich, though to us the images have barely changed. That’s a pretty major misidentification based on a little pixel tweaking. A similar example is mentioned in which audio that sounds like white noise to us is interpreted as commands by a voice-recognition system. Yet another and perhaps more disturbing vision recognition hack caused a stop-sign to be recognized as a yield sign.

Researchers assert that one reason neural nets can be fooled is that the piece-wise linear nature of matching at each layer of a deep net can be nudged in a direction which compounds as recognition progresses through layers. I would think, though I don’t see this mentioned in the article, that this risk is further amplified through the inevitably finite nature of the set of objects for which recognition is trained. Recognition systems don’t have an option of “I don’t know” so they’re going to tend to prefer one result with some level of confidence and that tendency is what can be spoofed.
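
The mechanism is easy to demonstrate on a toy linear classifier, a crude stand-in for one piece-wise linear region of a deep net. The sketch below is illustrative only (the sizes and numbers are made up); it applies a fast-gradient-sign style perturbation in which every “pixel” moves by a tiny epsilon, yet the classifier’s score swings by epsilon times the L1 norm of the weights, enough to flip the decision.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 1024
w = rng.normal(0.0, 1.0, n_pixels)       # weights of a toy linear classifier
x = np.full(n_pixels, 0.5)               # a bland "image" of mid-gray pixels
b = 5.0 - float(w @ x)                   # bias chosen so the clean margin is +5

def score(v: np.ndarray) -> float:
    return float(w @ v + b)              # positive -> class A, negative -> class B

# Fast-gradient-sign style attack: nudge every pixel by a tiny epsilon in the
# worst-case direction. Each pixel changes by only 0.02, but the score moves
# by epsilon * ||w||_1, which is hundreds of times larger.
epsilon = 0.02
x_adv = x - epsilon * np.sign(w)

print(f"clean score:       {score(x):+.1f}")      # +5.0, correctly classified
print(f"adversarial score: {score(x_adv):+.1f}")  # strongly negative -> misclassified
print(f"max pixel change:  {np.max(np.abs(x_adv - x)):.3f}")
```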

Out of this analysis, they have also devised methods to generate adversarial examples quite easily. And the problem is not limited to deep neural nets of this type. Research along similar lines has shown that other types of machine learning (ML) can also be spoofed and that adversarial examples for these can be generated just as easily. What is even more interesting (or more disturbing) is that adversarial examples generated for one implementation of ML often work across multiple types. One team showed they were able, after a very modest level of probing, to spoof classifiers on Amazon and Google with very high success rates.

This is not all bad news. A big part of the reason for the research is to find ways to harden recognition systems against adversarial attacks. The same teams have found that generating adversarial examples of this kind, then labelling them for correct recognition, provides a kind of vaccination against evil-doers. They look at this kind of training as a pro-active approach to security hardening in emerging ML domains, something that is essential to ensure these promising technologies don’t hit (as many of) the security nightmares we see in traditional computing.

You can read a more complete account HERE.

More articles by Bernard…


Missteps in Securing Autonomous Vehicles
by Matthew Rosenquist on 01-19-2017 at 12:00 pm

Recently an autonomous car company highlighted some plans to keep their vehicles safe from hacking. Yet their plans won’t actually make them secure. Such gaffes highlight issues across many different industries where cybersecurity is not sufficiently understood by manufacturers to deliver products hardened against attack. The result, in the case of autonomous vehicles, could be catastrophic.


In the article Why Some Autonomous Cars Are Going to Avoid the Internet, the CEO of the company told the Financial Times (paywall): “Our cars communicate with the outside world only when they need to, so there isn’t a continuous line that’s able to be hacked, going into the car”. They are choosing to operate the cars in a mostly offline manner to protect against cyber threats.

Sounds Effective
At first glance, this would seem to be a worthwhile protection mechanism against hackers. It is not. The ‘control’ is to reduce connectivity to the Internet, which does provide some security value for the time the car is not connected. But that is where the logic falls apart; in the end, it does not significantly reduce the chances of being attacked.

It seems logical that reducing the overall vulnerability of the system will improve its security. But that is not always the case. Just because you remove 50% of the vulnerabilities does not mean you have cut the chances of being victimized in half. It is more complex, as other dependencies are at work. This mistake is common even among entry-level security professionals, who are taught to think of risk as a pure equation (R = T x V x I): Risk equals Threat times Vulnerability times Impact. That is a fine equation when used properly for a specific purpose; reduce any amount of vulnerability and the resulting risk is also reduced. However, this equation is not applicable to every problem or discussion.

Back to the autonomous vehicle security problem. Intermittent connectivity is a reduced availability tactic. To the attackers, it is simply a network latency problem and can be easily overcome. There is a great deal of precedent and history proving this, which I won’t get into. Instead, let’s think about the problem in a different way by using an analogy.

Building a Wall

Imagine you were tasked with protecting your village from marauders. You employed a security specialist to greatly reduce the risks of bandits getting into your hamlet and causing havoc. A wall is built halfway around your town, visible to all. The security specialist then confidently announces he has reduced the vulnerabilities by half, and therefore reduced the chances of a successful attack by 50%. Nope. The marauders simply need to walk around the wall to get into the town. It might slow them down, as they laugh and walk around the defenses, but it will not deter or prevent an attack.

The same is being proposed here, which is why reducing the Internet connectivity of autonomous vehicles is an ineffective security control. Such tactics have proven futile in the past.

Exploitation
The root of the logic problem is in thinking about security in terms of equal vulnerabilities. Not all weaknesses are the same. There may be a hundred vulnerabilities but only 5 are being used to compromise a system. Only the efforts to close those 5 (the ones being exploited) will be important, while the other 95 are meaningless to the immediate goal of being secure.
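
A toy simulation makes the argument concrete (the numbers are illustrative only): patching half of the vulnerabilities at random barely changes the attacker's odds, while patching the handful actually being exploited stops the attack outright.

```python
import random

random.seed(1)

N_VULNS = 100                  # total weaknesses in the system
EXPLOITED = set(range(5))      # the 5 the attacker actually knows how to use
TRIALS = 10_000

def attack_succeeds(patched):
    # The attacker only needs one of their working exploits left unpatched.
    return any(v not in patched for v in EXPLOITED)

# Strategy A: patch 50 vulnerabilities chosen without regard to what is exploited.
random_patching = sum(
    attack_succeeds(set(random.sample(range(N_VULNS), 50))) for _ in range(TRIALS)
) / TRIALS

# Strategy B: patch only the 5 vulnerabilities that are actually being exploited.
targeted_patching = 1.0 if attack_succeeds(EXPLOITED) else 0.0

print(f"Attack success after patching 50 random vulns:  {random_patching:.1%}")   # ~97%
print(f"Attack success after patching the 5 exploited:  {targeted_patching:.1%}") # 0.0%
```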

Cyberattackers will wait for connectivity to compromise devices, just like thieves will bypass the locked door to enter via the open window, and bandits will walk around a wall to enter a village unimpeded. The chances of attack are not significantly reduced, just the timeliness of when it will occur.


Accountability

In this case, the car company is promoting a security design feature that is really ineffective. Yet they don’t even realize it. As consumers, we must hold manufacturers accountable for the security, safety, and privacy of the products they produce. This is especially true of devices that hold the potential for life-safety risks. It should draw concern when, in marketing and public communications, companies show a lack of cybersecurity knowledge and experience, likely as a result of improper skill sets or executive prioritization, while at the same time exhibiting confidence in the security of their products. It is a dangerous combination.

Getting Security Right

It is important to institute optimal security capabilities as part of the design and core functions (Hardware, Firmware, OS/RTOS, software, endpoints, networks, etc.) to protect passengers and pedestrians from potentially catastrophic accidents resulting from digital compromises. Security must be effective, economical, and not undermine usability.

Understanding cybersecurity can be challenging, but many car companies are investing heavily in autonomous vehicles to make it a reality. As part of that investment, they must employ the right caliber of cybersecurity professionals to develop a proper strategy, architecture, and capabilities. Thankfully, I do know many in the field who are working on more comprehensive solutions, beyond reducing internet connectivity, to manage the broad range of risks that could impact us all. I believe it is time for all the automakers to work together and develop cohesive capabilities that meet the growing expectation of security, privacy, and safety.

Interested in more? Follow me on Twitter (@Matt_Rosenquist), Steemit, and LinkedIn to hear insights and what is going on in cybersecurity.


IP development strategy and hockey
by Tom Dillinger on 01-19-2017 at 7:00 am

One of the greatest hockey players of all time, Wayne Gretzky, provided a quote that has also been applied to the business world — “I skate to where the puck will be, not to where it has been.” It strikes me that this philosophy directly applies to IP development, as well. Engineering firms providing IP must anticipate how customer requirements will evolve, and execute a design and qualification plan well in advance of the demand curve.

I recently had the opportunity to chat with members of the engineering team at Analog Bits, providers of IP for SerDes lanes, PLLs, memories, on-chip sensors, and I/Os for memory (and general-purpose) interfaces. They impressed upon me characteristics of current development projects that are “critical success factors” to the IP business model:

 

  • multi-protocol SerDes IP extends applicability across markets

Analog Bits has focused on development of SerDes IP to be applied for several serial interface protocols.

  • IP providers must lead in the development of (standards for) next generation high-speed SerDes data rates.

The silicon testsite plan at Analog Bits involves demonstration of 25G data rates (at leading process technology nodes).

  • Testsite silicon requires anticipating customer integration, test, and qualification requirements.

To be successful, testsite development requires a “skate to where the puck will be” strategy. Developing testsite shuttles is costly, both in NRE for silicon wafers and board-level testbench development and in engineering development resources. The IP team must invest wisely, to ensure that the resulting test measurement and qualification data will satisfy future customer requirements.

ESD qualification of I/O’s requires addressing the (evolving) CDM and HBM robustness standards demanded by end markets (e.g., JEDEC and AEC-Q100 tests).

SerDes IP on a testsite shuttle requires a test plan that demonstrates an adequate eye opening, using a topology representative of the losses that are likely to be present in the system design environment. The wrapback test specification used for IP evaluation is key — e.g., “total loss less than 22dB at 8GHz (for 16Gbps) through a loop back including 24″ of FR-4 trace”.
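
For a rough feel for what that specification implies, here is a back-of-the-envelope sketch. The arithmetic is illustrative only; it assumes NRZ signaling and pretends the entire loss budget is spent in the trace, ignoring package, connector, and via losses.

```python
bit_rate_gbps = 16.0
nyquist_ghz = bit_rate_gbps / 2.0        # 8 GHz for NRZ signaling (assumption)

total_loss_budget_db = 22.0              # from the quoted spec
trace_length_in = 24.0                   # length of the FR-4 loopback trace

# Allowed average trace loss if the whole budget were spent in the trace:
loss_per_inch_db = total_loss_budget_db / trace_length_in
print(f"Nyquist frequency: {nyquist_ghz:.0f} GHz")
print(f"Allowed average trace loss: {loss_per_inch_db:.2f} dB/inch")  # ~0.92 dB/inch
```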

The SerDes physical (hard IP) implementation on the testsite also requires addressing future customer needs. The granularity of SerDes lanes, with the corresponding pad topology for signals and power, needs to satisfy a wide range of applications. The figure below illustrates the modular approach that Analog Bits has pursued.

Another example of engineering development to address customer requirements is the availability of SerDes IP cells for any die side of the customer’s SoC. At advanced process nodes, recall that an increasing number of mask layers must use unidirectional segments (e.g., device gates, lower-level metals). Unique IP cells are required for the different sides of the die. The figure below illustrates the vertical orientation SerDes cell on silicon testsites, and several examples of floorplanning testcases.

High-speed lanes are becoming more prevalent than other I/O types for performance-driven SoCs. Demonstration of flexible, modular (hard) SerDes IP implementations with many lanes is a must.

The team at Analog Bits is focusing their engineering development and test resources on IP designs and shuttle testsites that anticipate the requirements of new markets for advanced process nodes. They are following the same approach that earned Gretzky the nickname “The Great One”.

For general information on the IP available from Analog Bits, please follow this link.

-chipguy


China moves from manufacturer to full supplier
by Bill Jewell on 01-18-2017 at 12:00 pm

CES 2017 wrapped up last week in Las Vegas. The show had over 175,000 attendees and over 3,800 exhibiting companies, according to the organizer, the Consumer Technology Association (CTA). The U.S. had the most companies exhibiting at CES with 1,755. China was close behind at 1,575 companies according to Benjamin Joffe’s article in Forbes: “The 4 Kinds Of Chinese Tech Firms That Dominated CES 2017”. Joffe believes many Chinese companies have developed innovative technology which is competitive on a global level.

The CTA’s audited data for CES 2016 showed total attendance of 177,393. The largest international presence was China, with 4,867 attendees. China was ahead of traditional electronics leaders South Korea (4,567) and Japan (2,641). The China attendance was over three times its CES 2012 number of 1,568 while overall attendance was up 14% in 2016 versus 2012.

The strong China showing at CES is a reflection of the transition of Chinese companies from manufacturing electronics that were designed, marketed and sold by non-Chinese companies (i.e. U.S., European, Japanese, South Korean and Taiwanese) to integrated companies that design, manufacture, market and sell their own products. The dominance of China in electronics manufacturing is demonstrated by World Trade Organization (WTO) statistics on exports of office and telecom equipment (the combination of two trade categories which comprise most electronics) for 2015. China accounted for over a third of exports.


The shift of Chinese companies to integrated suppliers is evident from the market share rankings for major electronic devices as shown below. Chinese companies are highlighted in red. IDC’s preliminary 2016 PC market share numbers rank Chinese company Lenovo number one at 21.3%, edging out HP at 20.9%. Lenovo became a major player when it acquired IBM’s PC business in 2005. IDC’s 3rd quarter 2016 data shows the tablet market is dominated by Apple and Samsung, but two Chinese companies, Lenovo and Huawei, are in the top five. Korean companies Samsung and LG are the major suppliers of LCD TVs with over a third of the market between them. WitsView, a division of TrendForce, estimated Chinese company LeEco moved into third place in 2016 with the acquisition of U.S. TV company Vizio in July. The fourth and fifth companies, Hisense and TCL, are also Chinese.


The emergence of Chinese suppliers is most apparent in the smartphone market. Third quarter 2016 market share numbers from Counterpoint Research show the continuing dominance of Samsung (20%) and Apple (12%). Seven of the next eight smartphone brands are Chinese companies (in red). If parent companies are considered, a different picture emerges. BBK Electronics of China is the parent company of number four Oppo and number five Vivo. BBK also owns small but growing smartphone supplier OnePlus. The combined market share of Oppo and Vivo is 12.6%, giving BBK the number two ranking ahead of Apple.


As with PCs and TVs, some of the growth in Chinese smartphone market share was driven by acquisitions. Lenovo acquired the Motorola smartphone business in 2014. However, Lenovo has not had as much success with the Motorola acquisition as it did with the IBM PC business acquisition, according to a recent Wall Street Journal article.

The advancement of Chinese electronics companies from mere manufacturers to integrated companies is a continuation of trends over the last 50 years. Japan, South Korea and Taiwan all moved from being primarily sources of low cost manufacturing to being major players in driving innovation in the electronics industry. As these three countries became more prosperous, wages increased and much of the manufacturing went to China. China, with over 1.3 billion people, has a large enough labor force to continue as a low cost manufacturing country while its electronics companies compete on a global level as full service suppliers.


Where the Emerging Tech Jobs Are
by Bernard Murphy on 01-18-2017 at 7:00 am

There’s an article published in InfoWorld on job trends in several emerging tech areas. The trends are based on analysis of job postings and job-seeker searches from the beginning of 2014, sourced by Indeed.com. I would have liked to dig deeper into Indeed.com to get more info on jobs in our industry, but unfortunately it seems you need a magic key to get anything beyond the sample trends (at least for job postings), so I’ll have to stick to what is covered in the InfoWorld article.

This article tracked job trends in 3D printing, Bitcoin/Blockchain/crypto-currency, VR/AR and mixed reality, AI and machine learning, IoT and wearable tech. These aren’t broken down into hardware versus software versus system, so we must take what we can from the gross metrics.

In job postings, AI/ML is climbing fast and is well ahead of all other domains, ending the year a factor of at least 6 over everything except IoT. IoT has also been climbing rapidly (until recently) but even so, AI/ML postings were about 50% ahead. The other four groups are down in the weeds in job postings, led by VR/AR/MR, then 3D printing, then Bitcoin and similar technologies, with wearable tech bumping along the bottom.

The upward slope in AI/ML has been significant since July of last year, easily outpacing IoT which is still rising, but not especially fast. What’s even more interesting is a comparison of job postings versus job searches for AI/ML, which shows postings 50% ahead of searches by the end of last year. There are a lot more jobs being offered in this area than there are people looking for them, even though searches in this area are also on a rapid rise. Seems this is a good area to invest learning/credit time for anyone still in college or planning a career change.

Again, these are gross trends. What these trends mean for hardware or embedded software designers in any of these domains remains a mystery, but I would have to assume that overall demand is something of a pointer for trends in sub-domains. Also, trends change. Maybe AI/ML will hit a speed-bump (it’s happened before). But I’m guessing that won’t happen for a while – AI/ML success in speech and image recognition, also in online advertising and preference detection, will likely continue to stimulate growth in those domains.

Here’s the InfoWorld article. I’ll promise a blog on a topic of your choice to the first person who can show me how to get trend data in multiple domains from Indeed.com (without having to pay for something).

More articles by Bernard…