IMEC is a technology research center in Belgium and one of the premier semiconductor research organizations in the world today. The IMEC Technology Forum (ITF) is a two-day event, attended by approximately 1,000 people, that showcases the work done by IMEC and its partners.
Is GaN Disruptive? Revisiting the Criteria
In March 2010 Efficient Power Conversion (EPC) proudly launched our GaN technology at the CIPS conference in Nuremberg, Germany. Parts and development kits were readily available off-the-shelf, so designers could immediately get started with a new state-of-the-art semiconductor technology.
The Guiding Light & Other Photonic Soaps
I’m a child of the sixties and seventies, and on the occasions when I was sick and couldn’t go to school I got to experience the world of daytime TV soap operas. Back then we only got 3 channels, and it wasn’t until the late 60’s that we got color TV! I remember titles like “The Guiding Light”, “Secret Storm”, and “As The World Turns”. Forty-plus years later, I am again re-living “The Guiding Light”, except now it’s in the form of reading about “guiding-the-light” on silicon photonic integrated circuits (PICs). Like the daytime soaps, there seems to be a never-ending cast of characters (photonic components) being presented. I thought it would be instructive to review one of the characters used to “guide-the-light” on a PIC.
Waveguides are the primary components used in photonic design to guide light. More than conduits for light, they are the building blocks from which other components are created to modulate, filter and switch light. According to section 1.4 of the book Silicon Photonics Design, silicon PIC waveguides are most commonly made from the active device layer of silicon-on-insulator (SOI) wafers (see fig 3.1, a cross-section of an SOI wafer). Much research has gone into engineering waveguide geometries. There are several types of waveguides, but the most commonly used are the strip (or rectangular) and rib waveguides (see fig 3.4). Strip waveguides are typically used for routing and for low-loss tight curves, while rib waveguides are often used to create electro-optic devices such as modulators, since the rib allows electrical connections to be made to the waveguide. And yes, you read that right: photonic routing uses curves, not Manhattan-style shapes, and waveguides are typically single-layer designs because the silicon crystal of the waveguide core is grown, not deposited, on the wafer.
One of the key metrics for photonics is signal loss. The signal intensity must be great enough to be sensed at the end of the signal’s journey, and every piece of waveguide it traverses takes its toll. There are several contributors to waveguide loss, including absorption due to metal in proximity, scattering and reflections due to sidewall roughness, material loss in active doped structures, and surface-state absorption in improperly passivated or unpassivated waveguides. One might think that the best waveguide would be one with no loss at all. However, it is these same loss mechanisms that enable the manipulation of light in the waveguide; without them, waveguides would be just simple conduits, or light wires. Fortunately for us, this is not the case. Before going further, a couple of key points should be noted about photonics and waveguides.
1. A waveguide can carry light of many different wavelengths at the same time, with each wavelength traveling independently of the others.
2. Light is not completely confined to the waveguide: some of the optical field radiates outside the core, and light signals passing through one another do not interfere.
The implications of these two points are far-reaching. In some cases, it is signal absorption in unpassivated waveguides that is used to make detectors. But that’s a topic for yet another article. For waveguide routing, these points enable us to use what is referred to as wavelength division multiplexing (WDM). WDM lets us convert from the spatially parallel electrical domain to a wavelength-parallel optical domain, significantly reducing the number of waveguides needed to transmit large amounts of data (see fig 2.2 from the book Photonic Network-on-Chip Design). Imagine a 64-bit bus worth of data all traveling down one waveguide (see the sketch below)! Secondly, it is the radiation of light outside the waveguide that lets us couple the light in one waveguide into other waveguides. By controlling the resonance points of these specialized resonant waveguides (micro-rings), we can switch one or multiple signals simultaneously. Moreover, we can also use this capability to filter one or more wavelengths from the main signal for additional processing or routing to specific sensors. This is the basis for building high-performance, low-power switching matrices suitable for switching wide parallel data, such as is required for CPU-to-memory applications.
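To make the WDM idea concrete, here is a minimal sketch in Python of mapping that 64-bit bus onto per-lane wavelength channels on a single waveguide. The channel count, grid center and channel spacing below are illustrative assumptions of mine, not figures from the book:

```python
import math

# Illustrative WDM lane mapping; all constants are assumed, not taken
# from the book.
BUS_WIDTH = 64                       # electrical lanes to carry
CHANNELS_PER_GUIDE = 64              # assumed WDM channels one guide supports
CENTER_NM, SPACING_NM = 1550.0, 0.8  # assumed wavelength grid (~100 GHz spacing)

# With one wavelength per bus lane, the whole bus fits in one waveguide.
waveguides_needed = math.ceil(BUS_WIDTH / CHANNELS_PER_GUIDE)

# Assign each bus lane its own carrier wavelength.
lane_to_lambda = {lane: CENTER_NM + lane * SPACING_NM
                  for lane in range(BUS_WIDTH)}

print(waveguides_needed)                                # 1 (vs. 64 electrical traces)
print(lane_to_lambda[0], round(lane_to_lambda[63], 1))  # 1550.0 1600.4
```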
More recently another waveguide character has come onto the scene: silicon nitride (Si₃N₄). Si₃N₄ can also be used as a waveguide material, and unlike crystalline Si, Si₃N₄ can be deposited onto the wafer, which means it can be stacked, much like the metal systems in electrical ICs. While photonics allows us to cross waveguides on the same layer without interference (see point 2 above), a loss penalty is incurred at each crossing. The ability to stack waveguides, with less-lossy crossings, enables switch matrices like the one shown in figure 5.53.
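To see why those crossings matter, here is a back-of-the-envelope loss budget. The per-centimeter and per-crossing loss numbers are assumed placeholders of mine, not measured values, but they show that the crossing term is exactly what a stacked Si₃N₄ route eliminates:

```python
# Toy loss budget for a routed waveguide path. Both loss figures are
# assumed placeholders, for illustration only.
PROP_LOSS_DB_PER_CM = 2.0   # assumed propagation loss of the waveguide
CROSSING_LOSS_DB = 0.2      # assumed insertion loss per waveguide crossing

def path_loss_db(length_cm, n_crossings):
    """Total insertion loss of one routed path, in dB."""
    return length_cm * PROP_LOSS_DB_PER_CM + n_crossings * CROSSING_LOSS_DB

# A same-layer route through a switch matrix may cross dozens of other
# waveguides; a stacked route avoids them entirely.
print(path_loss_db(0.5, 32))  # ~7.4 dB (1.0 dB propagation + 6.4 dB crossings)
print(path_loss_db(0.5, 0))   # 1.0 dB
```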
Just like in the soaps of the 60’s, this new character, Si₃N₄, adds an entirely new twist to the photonics story line. It has already prompted a whole new dialog around subjects such as photonic networks-on-chip, packet-switching vs. circuit-switching networks, and new terms like “selective transmission”, an interesting combination of electronic and photonic networking that uses some of both technologies. We are at the beginning of a truly interesting time for silicon photonics. The plot is starting to evolve and become richer as new characters are added to the story line … and just like the daily viewers of those early TV soaps, I can hardly wait to see what will happen in the next episodes.
OpenPOWER Keeps On Truckin’ At Annual Development Summit
The OpenPOWER Foundation, a collection of companies that have coalesced around IBM’s POWER architecture, recently held its OpenPOWER Summit in San Jose, California. OpenPOWER was founded by IBM, Google, Tyan and Mellanox to open up the POWER architecture to anyone who wishes to license it or build their IP into it. IBM provided the CPU, memory and accelerator architecture, and each of the other founding members provided some other component of HPC needed to create a complete solution.
IBM holds the OpenPOWER Summit every year in San Jose, California, alongside its partner NVIDIA’s conference in the same convention center, with some overlap between the two. The OpenPOWER Foundation keeps making progress year after year, and at this year’s OpenPOWER Summit it moved several steps closer to the deployment of complete OpenPOWER systems. IBM, Google, Rackspace and others offered even more support, products and R&D toward the cause.
IBM’s VP of HPC and Data Analytics Sumit Gupta talks about what POWER brings to the cognitive and data scene (Credit: Patrick Moorhead)
Google delivers on last year’s promises but doesn’t commit to deployment
At this year’s OpenPOWER Summit, one of the biggest moves forward came from Google, in the form of increased commitment and involvement compared to previous years. Google had previously announced motherboards using OpenPOWER and promised to port its software. This year the company announced that the porting is nearly complete, with a “majority” ported to date. However, Google would not commit to deploying these systems en masse. Google’s decision about which compute to use in its datacenters is based purely on what Google’s Maira Mahony calls “performance per TCO dollar.” According to Mahony, for many of Google’s engineers, enabling POWER systems is merely a configuration change and doesn’t put too much of a burden on them.
Google’s Maira Mahony talks through what they have accomplished with OpenPOWER (Credit: YouTube)
Google also announced that they are developing a next-generation OpenPOWER and Open Compute Project form factor server. Google is working with Rackspace to co-develop an open server specification based on the new IBM POWER9 architecture and the two companies will submit a candidate server design to the Open Compute Project.
It’s not in Google’s best interest to commit to a full deployment, as holding back gives it better negotiating leverage with both Intel and the OpenPOWER members.
Rackspace Hosting moves POWER8 to the data center and creates a POWER9 server with Google
Rackspace announced that their ‘Barreleye’ OpenPOWER/Open Compute Project server has now moved from the “lab to the data center”. Rackspace anticipates that ‘Barreleye’ will move into broader availability throughout the rest of the year, with the first applications on what they call the “Rackspace Public Cloud powered by OpenStack”. Rackspace and IBM collectively contributed the ‘Barreleye’ specifications to the Open Compute Project in January 2016, and the specifications were formally accepted by the Open Compute Project in February 2016.
This progress shows how many companies are working together to make OpenPOWER succeed not just in a theoretical capacity, but in a truly competitive manner that could give the competition a run for its money. Rackspace’s own managed cloud computing platform could really stand to benefit from the increased performance of the OpenPOWER platform and the added acceleration from a whole host of accelerators, be they CAPI-based accelerators or NVIDIA accelerators using NVLink. More performance per dollar matters for a company like Rackspace, which wants to deliver the best performance per dollar to its customers.
IBM doubles down with new LC servers
Added support from Google and Rackspace is extremely valuable to OpenPOWER, but IBM’s contributions have been the most critical to OpenPOWER’s success since its founding. IBM, in collaboration with NVIDIA and Wistron, plans to release its second generation of OpenPOWER HPC servers, which include NVIDIA’s Tesla P100 compute co-processors and NVLink interconnects, as well as IBM’s POWER8 processors connected directly through NVLink. These systems will be available in Q4 2016. IBM also announced plans to add Open Compute Project-compliant systems to its POWER Systems LC portfolio for cognitive and big data applications. IBM’s plans come in addition to three other OpenPOWER Foundation members (Mark III Systems, Penguin Computing and Stack Velocity) announcing plans for OpenPOWER-based Open Compute Project-compliant systems.
50 new OpenPOWER infrastructure announcements
In addition to the increased involvement from some of OpenPOWER’s biggest members, there were 50 new OpenPOWER infrastructure announcements from a broad range of members and partners. Bittware, IBM, Mellanox and Xilinx announced more than a dozen new Coherent Accelerator Processor Interface (CAPI) solutions. Alpha Data unveiled a new Xilinx-based FPGA CAPI hardware card at the summit as well. Edico’s DRAGEN processor, developed in collaboration with Xilinx and IBM, is based on a Xilinx FPGA running on IBM’s POWER Systems LC class. Supermicro is also currently developing two new POWER-based servers for IBM based on the company’s ‘Ultra’ architecture, which IBM intends to add to its LC server line. Last but not least was Tyan, which announced its own POWER8-based server solution designed to fit into a 1U form factor, with planned availability in April 2016.
Wrapping up
The OpenPOWER Foundation and its members have shown once again that the OpenPOWER movement is continuing to gain steam and in many respects has eclipsed the ARM-based server initiative. The increased commitment from Google, Rackspace Hosting, IBM, NVIDIA and others clearly shows that OpenPOWER is here to stay and that we are really starting to get close to a complete ecosystem built around OpenPOWER.
More announced products show that companies are investing real money into developing products they think will actually sell, not just experiments. Rackspace Hosting has said it will deploy its solutions into production; Google hasn’t said it will, but it also hasn’t said it won’t, and if the “performance per TCO dollar” is better than Intel’s, then it says it will. Intel has many levers to pull, though, and is a master at this game; don’t assume that any of this is “all or nothing”.
Things are really starting to heat up for OpenPOWER, and it will be interesting to see what further announcements come throughout the year.
The chilling effect Peter Thiel’s battle with Gawker could have on Silicon Valley journalism
Gawker infringes on privacy and publishes tabloid-like stories that damage reputations. It is one of the most sensationalist and objectionable media outlets in the country. It also has not been kind to me. So it’s not a company that I would expect to be defending. But I worry that the battle that billionaire Peter Thiel has clandestinely been waging against it will be damaging to Silicon Valley by furthering distrust of its motives.
For better or worse, Gawker is entitled to the same freedom as any other news outlet. If it crosses the line, as it likely did with wrestler Hulk Hogan, the courts should deal with it. Silicon Valley’s power brokers should not get involved because they have access to resources that rival those of governments. They can outspend any other entity and manipulate public opinion.
Silicon Valley has more than an unfair advantage; its technologies exceed anything the titans of the industrial age had. These technologies were built on the trust of the public — and that trust is essential for an industry that asks customers to share literally every part of their lives. This enormous influence should come with restraint and an understanding that those with power will be scrutinized — sometimes unfairly and unjustly.
What some may find particularly troubling is that Peter Thiel is on the board of Facebook — which has become the world’s most influential media platform. Facebook decides what news a billion people will see and controls a significant portion of the traffic to leading news websites. Publications’ entire businesses can be wiped out based on a change in its algorithm. Thiel is also chairman of the board of security firm Palantir Technologies, which provides intelligence data to the CIA and FBI, and an investor in many other powerful technology firms.
There is almost no chance that any of Thiel’s companies will use their technology to target his opponents and dissenters. But Thiel’s activism only increases concerns at a time when Facebook is under fire for having a perceived liberal bias. And there is only a temporary hiatus in the battles between the FBI and Apple over security and privacy. Silicon Valley doesn’t need another dark cloud hanging over it, yet one seems to be developing.
It’s not just journalists who are affected; the culture of the technology industry is at stake too. Silicon Valley prides itself on openness, diversity, and freedom of thought and expression. You can be competing one day and cooperating the next. Criticism is accepted and dissent is expected. It’s rare to read a story such as this one, in which a prominent figure went to great lengths to silence an adversarial voice.
Other than Gawker’s tech website, Valleywag, which was shut down this year, there are few publications in Silicon Valley that will confront its tech moguls and overhyped start-ups. Witness the ethical breaches committed by Theranos; lives were put at risk. Yet it took an exposé by John Carreyrou of the Wall Street Journal to uncover its corruption. And he had to withstand ugly threats by the company’s lawyers.
Technology businesses should be focused on their credibility and building trust by making their executives more accessible to journalists, not battling media organizations.
As Nick Bilton wrote in Vanity Fair, the valley’s system “has been molded to effectively prevent reporters from asking tough questions. It’s a game of access, and if you don’t play it carefully, you may pay sorely. Outlets that write negatively about gadgets often don’t get prerelease versions of the next gadget.
Writers who ask probing questions may not get to interview the CEO next time he or she is doing the rounds. If you comply with these rules, you’re rewarded with page views and praise in the tech blogosphere. And then there’s the fact that many of these tech outlets rely so heavily on tech conferences.” Investor Jason Calacanis added, “If you look at most tech publications, they have major conferences as their revenue. If you hit too hard, you lose keynotes, ticket buyers, and support in the tech space.”
Technology is the industry of disruption — and that makes people wary. There is growing anxiety everywhere over what will be next to change. As it becomes a greater part of the economy, checks and balances are needed more than ever. The risk is that Thiel’s attempt to quash a reprehensible publication will only weaken what little exists.
For more, visit my website: www.wadhwa.com and follow me on Twitter: @wadhwa
Intel’s New Strategy Is The Right One For The Company
Intel has been the focus of a lot of attention in the last week due to the company’s major restructuring announcement, which came on the heels of Intel’s most recent earnings announcement. The majority of analyses that immediately followed focused singularly on the layoffs, which amount to 11% of the workforce, or 12,000 employees, insinuated that Intel is walking away from the PC, and asked what this means for the future of Intel. I know I did. I didn’t have to wait long for an answer.
CEO Brian Krzanich followed up within the week with a blog post further clarifying the strategy. It was a clear attempt to give a layman’s explanation of what Intel is trying to do. I still wanted more information, so I followed up with a conversation today with Intel’s most senior management and got even more clarity about the company’s future, including its commitment to the PC market and how mobility fits into its new strategy.
Here’s my revised take on Intel’s restructuring and new strategy following that conversation. Intel believes its future lies within the core growth areas of cloud and data center, IoT (Internet of Things), memory and programmable solutions. These growth areas shouldn’t be confused with business segments, however. That’s what tripped me up at first, and why at first blush others may be asking “where’s the PC?” in Intel’s new strategy. I personally thought the growth areas were the company’s priorities, and took data center, for instance, as the #1 priority. That wording, I believe, may have been misunderstood as Intel pivoting away from the PC market.
Intel’s PC division, its biggest division by revenue and profit dollars, is now managed by Murthy Renduchintala, who was brought in late last year to align design, engineering and software enabling, and to focus Intel’s strategy and execution across the company’s client (Intel-speak for PC and mobile devices), communications, device and IoT segments. In combining these divisions under one roof, Intel wants continued and aggressive innovation in its PC business to drive innovation across the fast-growing IoT category. Krzanich called the PC a connected “thing” in his strategy blog. This actually makes sense to me, since the PC was one of the first smart and connected devices and is now foundational to the burgeoning IoT category. There is a big variety now of compute-connected “things”.
It’s important to understand that the PC business is still the bedrock that Intel is built on; it currently drives much of how the company’s core IP and innovations evolve, and it delivers significant fab scale. Ultimately, without the innovations and scale in the PC space, many of Intel’s most profitable technologies today simply wouldn’t be possible. That was true before and it’s true now.
The PC market has been flat to shrinking, but Intel has maintained more than solid profitability there, unlike many others in the PC business. In fact, Intel sees certain segments of the PC market as valuable places to invest further, like 2-in-1s and gaming, which are experiencing explosive growth. Right now, as “things” stand, Intel’s biggest revenue base and profit base both come from the PC sector. To abandon that makes no sense.
Make no mistake – the company is also taking a bullish focus on the cloud and data center. That’s just a smart move. Intel’s cloud growth focus is based on supporting the expansion of virtualization and analytics, which are driving major data center and cloud growth. Intel’s memory and programmable solutions also hold huge growth potential for the company, as many people recognize. Technologies like FPGAs, Rack Scale Architecture and 3D XPoint memory all offer both short-term and long-term growth as the industry starts to move toward more accelerated computing models that utilize faster storage and co-processors. This also increases the market basket for Intel. These innovations have a lot of potential to help Intel continue to push forward in the cloud and data center, and they are very complementary across nearly all of Intel’s other core growth areas. Intel will still need to figure out how to counter the growth of GPU-based accelerators in areas such as the DNN (deep neural network) space.
Mobility is also still critical to Intel’s growth, and what we see is the chip giant reprioritizing and reallocating manpower and resources to push further into areas that make sense, like its long-term 5G initiatives and connectivity strategy. Intel is doing this as part of the restructuring and, as it had mentioned before, it would be looking at certain projects and evaluating their feasibility. I have participated in these excruciating reviews at former employers and they’re like cough syrup: it tastes horrible going down, but it needs to be done. My favorite “endearing” term that my teams used was “hairball.” We had to figure out the giant “hairball”.
As a result of these reviews, I learned today that Intel’s senior management has decided to end the SoFIA projects (specifically 3Gx, LTE, LTE2) as well as the Broxton SoC for smartphones and tablets. The cancellation of these projects is intended to free up Intel’s resources and refocus its brainpower on its modem technology and 5G efforts. Intel has been showing some serious commitment to 5G deployment and penetration, and it clearly believes that 5G is its opportunity to carve out an end-to-end competitive advantage in the future of mobility and in connecting the growing number of smart and connected “things” to the cloud. From my vantage point, Intel has a better chance in 5G than it does in low-end 4G mobile devices. While I would have more confidence in this strategy if Intel had significant 4G installations, I also know Intel is much further ahead on 5G than it was with 4G in the Infineon days, a lag that, frankly, I believe led to Infineon losing the Apple 4G business years ago.
With Intel’s refocused efforts on 5G and commitments to profitable segments of the PC and its continued development, I think we are going to start to see a very different Intel.
Intel’s new strategy is the right strategy for the company, and I don’t take those words lightly. My perspective comes from over 25 years of engagement with Intel, as its largest customer in the ’90s, a competitor in the ’00s, and now researching the company and its ecosystem as an analyst in the ’10s. Some of the decisions that arise from the new strategy are hard and have a human cost, and for that reason I’m sad about things like the layoffs. I can personally attest to how hard these experiences are. Net-net, the company is seriously betting a lot of its resources and manpower on what it calls the virtuous cycle of growth and on things like 5G; it needed to make trade-offs to accelerate development, and some programs had to be cancelled to free up capital. Now it’s up to Intel to execute.
Highlights of the 28nm FD-SOI San Jose Presentations
Most of the presentations from the FD-SOI Symposium in San Jose last month (April 2016) are now available on the SOI Consortium website (click here to see the full list — if they’re posted, you can download them freely from there). If you don’t have time to wade through them all, here are some of the highlights. (Plus since I was there I’ll also cover some that aren’t posted.)
Since there’s so much to cover, I’ll break this into two parts. This is Part 1, focusing on presentations related to some of the products hitting the market using 28nm FD-SOI. Part 2 will focus on the presentations related to 22nm FD-SOI.
Samsung – in 28FDS mass production
Samsung now has a strong 28nm FD-SOI tape-out pipeline for 2016, and interest is rising fast, said Kelvin Low, the company’s Sr. Director of Foundry Marketing, in his presentation “28FDS – Industry’s First Mass-Produced FDSOI Technology for IoT Era, with Single Platform Benefits”. They’ve already done 12 tape-outs and are working on 10 more for various applications (application processors, networking, STB, gaming, connectivity…), and they see more coming up fast in applications such as MCUs, programmable logic, IoT and broader automotive. It is a mature technology, he emphasized, not a niche technology. The ecosystem is growing, and there’s lots more IP ready. 28nm will be a long-lived node. Here’s the slide that summed up the current production status:
As you can see, the production PDK with the RF add-on will be available this summer. Also, check out the presentations by Synopsys (get it here), which has repackaged the key IP from ST for Samsung customers, as well as those from Leti on back-bias (get it here), Ciena (formerly Nortel’s optical networking group) and ST (it’s chock-full of great data on FD-SOI for RF and analog; get it here).
At the San Jose symposium, ST showed once again the enormous advantages FD-SOI provides in analog design.
NXP – integration, differentiation and passion
Ron Martino gave a talk full of energy and passion entitled, “Smart Technology Choices and Leadership Application Processors”.
If you read Ron’s terrific posts here on Semiwiki recently, you already know a lot about where he’s coming from. If you missed them, they are absolute must-reads: here’s Part 1 and here’s Part 2. Really – read them as soon as you’re done reading this.
As he noted there, NXP’s got two important new applications processor lines coming out on 28nm FD-SOI. The latest i.MX 7 series combines ultra-low power (dynamically leveraging the full range of reverse back-biasing – something you can do only with FD-SOI on thin BOX) with a performance-on-demand architecture (boosted when and where it’s needed with forward back-biasing). It’s the first general-purpose microprocessor family in the industry to incorporate both the ARM® Cortex®-A7 and the ARM Cortex-M4 cores (the series includes single and dual A7 core options). The i.MX 8 series targets highly advanced driver information systems and other multimedia-intensive embedded applications. It leverages ARM’s V8-A 64-bit architecture in a 10+ core complex that includes blocks of Cortex-A72s and Cortex-A53s. (They’ve now posted an awesome i.MX 8 demo from FTF2016 on Twitter; see it here.)
In his San Jose presentation, Ron said that FD-SOI is all about smart architecture, integration and differentiating techniques for power efficiency and performance. And the markets for NXP’s i.MX applications processors are all about diversification, in which a significant set of building blocks will be on-chip. The IoT concept requires integration of diverse components, he said, meaning that a different set of attributes will now be leading to success. “28nm FD-SOI offers advantages that allows scaling from small power efficient processors to high performance safety critical processor,” he noted – a key part of the NXP strategy. Why not FinFET? Among other things, it would bump up the cost by 50%. Here are other parts of the comparison he showed:
For NXP, FD-SOI provides the ideal path, leading to extensions of microcontrollers with advanced memory. FD-SOI improves SER (soft error rate) by up to 100x, so it’s an especially good choice when it comes to automotive security. Back-biasing, another big plus, is something he calls “critical and compelling”. The icing on the cake? “There’s so much we can do with analog and memory,” he said. “Our engineers are so excited!”
Sony – GPS (with 1/10th the power!) now sampling
You know how using mapping apps on your smartphone kills your battery? Well, now there’s hope. Sony is getting some super impressive results with its new GPS chips built on 28nm FD-SOI technology. They operate at 0.6V and cut power to one-tenth (!) of the previous generation (which already boasted the industry’s lowest power consumption when it was announced back in 2013).
In San Jose, Sony Senior Manager Kenichi Nakano presented, “Low Power GPS design with RF circuit by the FDSOI 28nm”, proclaiming with a smile, “I love FD-SOI, too!” All the tests are good and the chip is production ready, he said. In fact, they’ve been shipping samples since March.
Analog Bits – Lowest Power SERDES IP
SERDES (Serializer/Deserializer) IP is central to many modern SOC designs, providing a high-speed interface for a broad range of applications from storage to display. It’s also used in high-speed data communications, where it’s had a bad rep for pulling a huge amount of power in data centers. But Analog Bits has been revolutionizing SERDES IP by drastically cutting the power. Now, with a port to 28nm FD-SOI, they’re claiming the industry’s lowest power.
In his presentation, “A Case Study of Half Power SERDES in FDSOI”, EVP Mahesh Tirupattur described FD-SOI as a new canvas for chip design engineers. The company designs parts for multiple markets and multiple protocols. When they got a request to port from bulk to 28nm FD-SOI, they did it in a record time of just a few months, getting power down to one-third with no extra mask steps. Plus, they found designing in FD-SOI cheaper and easier than FinFET, which of course implies a faster time to market. “The fabs were very helpful,” he said. “I’m pleased and honored to be part of this ecosystem.”
Stanford – FD-SOI for the Fog
Listening to a presentation by Stanford professor Boris Murmann gets you a stunning 30,000-foot view of the industry through an amazing analog lens. He has led numerous explorations into the far reaches of analog and RF in FD-SOI, and he concludes that the technology offers significant benefits toward addressing the needs of ultra-low-power “fog” computing for IoT (it’s the next big thing – see a good Forbes article on it here), densely integrated low-power analog interfaces, universal radios, and ultra-high-speed ADCs.
Next in part 2, we’ll look at the 22nm FD-SOI presentations in San Jose.
Facebook and Deep Reasoning with Memory
Neural nets as described in many recent articles are very capable at recognizing objects and written and spoken text. But like anything we can build, or even imagine, they have limitations. One problem is that after training, the neural nets we usually encounter are essentially stateless. They can recognize static patterns but not pattern sequences and they can’t advance their learning without being retrained.
Time sequence patterns are important because that’s where semantic understanding has to start. You cannot claim a system has understanding unless it can make inferences from previously-supplied information. For example, given “Bilbo took the ring. Bilbo went back to the Shire. Bilbo left the ring there”, then answer “Where is the ring?”.
One way to address this limitation is to use recurrent neural nets (RNNs), in which perceptrons support feedback. These can make learning part of the training process, so what is memorized is embedded in the net itself, but RNNs tend to have limited and, in simpler implementations, very short-term memory. Another way is to use Memory Neural Networks (MemoryNNs), which use associative memory in combination with a neural net. Facebook is very active in research and publications in this area (which may come as a surprise to those of you who think Facebook mostly worries about optimizing cat videos).
The MemoryNN approach at Facebook isn’t quite as simple as storing previous sentences. What is stored is a reduced vector of characteristics that enables and simplifies comparisons on essential features. A lookup is then a closest-match comparison on a requested feature set. FB calls these feature sets “feature vectors”. (I would imagine that deciding what the best essential features are, and how to grade object features on those scales, then becomes a major topic in its own right.)
There are several interesting challenges in modelling and matching feature vectors. One is that even with associative memories, we tend to think of exact matches per feature, but it is often more useful to also allow close matches (e.g. synonyms in text). A second interesting dimension is when in the timeline the information was stored. If, following the Lord of the Rings example above, Frodo subsequently picked up the ring, went to Mount Doom and dropped the ring there, the (current) answer to “where is the ring?” should be Mount Doom. But an answer to “Where did Bilbo leave the ring?” would still be the Shire.
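Here is a toy sketch of those two ideas together: closest-match lookup over stored feature sets, plus a recency tie-break, using the ring example above. This is my own simplification, not Facebook’s actual model, which learns dense feature vectors rather than matching raw words:

```python
# Toy memory lookup: facts stored in arrival order, scored by feature
# overlap with the question, ties broken by recency. Illustration only;
# real MemoryNNs learn vector embeddings instead of matching raw words.

def features(text):
    """Crude 'feature vector': the set of lowercased words."""
    return set(text.lower().replace("?", "").replace(".", "").split())

memories = [            # list index doubles as a timestamp
    "Bilbo took the ring",
    "Bilbo went back to the Shire",
    "Bilbo left the ring there",
    "Frodo picked up the ring",
    "Frodo went to Mount Doom",
    "Frodo dropped the ring there",
]

def best_memory(question):
    q = features(question)
    # Highest feature overlap wins; among equal scores, the latest
    # memory wins because the timestamp is the tie-breaker.
    _, _, fact = max((len(q & features(m)), t, m)
                     for t, m in enumerate(memories))
    return fact

print(best_memory("Where is the ring?"))
# -> "Frodo dropped the ring there"  (recency picks the current location)
print(best_memory("Where did Bilbo leave the ring?"))
# -> "Bilbo left the ring there"     (overlap pins it to the Bilbo facts)
```

Resolving the “there” in the retrieved fact back to Mount Doom or the Shire would take a second lookup hop, which is exactly the kind of multi-step inference these systems are built toward.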
A third challenge is to model unknown words, often (but not necessarily) proper names. One way Facebook deals with this is to model the context in which the word appears, determine what known words appear in that context and assume the new word is similar to those words (e.g. it is inferred to be a noun). The Facebook paper below talks about methods to address each of these needs.
MemoryNNs are not restricted to text-based tasks. They can be useful in any application where learning on the fly can improve accuracy. For pictures, Visual Q&A (another Facebook capability; they have a demo – check it out) can answer questions about what is in a picture. You could imagine this being very useful in text- or voice-based feature searches on image libraries (say, find all pictures containing a dog). And MemoryNNs can be particularly helpful in self-training. AlphaGo (the Google Go-player) uses MemoryNNs to self-train on reasonable next moves from the current position in Go. Self-training is a very active area of research given the often high level of investment required in human-directed training for neural nets.
MemoryNNs look like a major evolution in deep reasoning. Certainly Yann LeCun, who runs Facebook AI Research, thinks so. It’s also interesting to think about what is driving AI at Facebook and Google. They’re working on very similar areas with very similar goals, and this competition should drive rapid advances in what we will be able to do. You can go through Yann’s slides on this topic and more HERE, and you can read a Facebook paper on Memory Neural Networks HERE.
Electrostatic Discharge analysis of FinFET technology
Sofics recently had the opportunity to characterize a FinFET technology through cooperation with one of its customers. We analyzed the technology’s ESD behavior and identified several challenges.
Free Webinar: Designing Low-Power IoT Systems
As I have written before, IoT looks to be a key driver for design starts and future semiconductor revenue growth which is why we wrote “PROTOTYPICAL” and included a field guide to FPGA Prototyping. If you want to get funding for your new IoT chip project, having a working prototype is a good thing, absolutely. If you want to take a look at the latest IoT articles on SemiWiki just click “IoT” in the navigation bar above, right between ARM and Automotive.
I don’t want to scare you, but for the past ten years smartphones have been driving the semiconductor industry. In fact, today semiconductor growth depends on them. Unfortunately, a recent report by IDC predicts a sharp drop in smartphone shipment growth, from 10%+ in 2015 to 3% in 2016. Ouch!
So let’s get some IoT designs started. The key to IoT design is, of course, low power, and when you mention low-power design, ANSYS should come to mind, which brings us to the webinar in question:
Designing Low-Power IoT Systems
The Internet of Things (IoT) is a vast network of interconnected devices that communicate with each other wirelessly or over the internet to monitor systems, transmit data and change states of devices. It is already improving our lifestyles, healthcare methods, industrial productivity, and business models. The technologies involved in the design of products and services include smart and autonomous sensor systems, cloud infrastructure, big data analytics, wireless communications and cybersecurity.
Date: June 9, 2016
Time: 11:00 AM – 12:00 PM (EDT)
REGISTER
Attend this webinar to learn how ANSYS engineering simulations can help you to meet the challenges of the IoT. Discover how to validate and even improve the power consumption, lifespan, reliability and overall integrity of this new generation of sensors. The key is maximizing the efficiency/cost ratio and optimizing the productivity of relatively simple, inexpensive systems for a very attractive market.
And the nice thing about webinars is that, even if you can’t make the scheduled time, if you register in advance you will get a friendly reminder when the replay is up. And here is a special offer: if you attend this webinar I will send you a free PDF copy of “PROTOTYPICAL”. Such a deal!
The first half of “PROTOTYPICAL” is a concise history of FPGA-based prototyping. The second half of “PROTOTYPICAL” is an all-new Field Guide titled “Implementing an FPGA Prototyping Methodology” authored by the teams at S2C. It looks at when design teams need an FPGA-based prototyping solution, how to choose one, and how to be sure the platform is scalable including a look at the latest cloud-based implementations. It then dives into the methodology: setting up a prototype, partitioning, interconnect, debugging, and exercising a design. It’s a practical view of the questions teams have and the issues they run into, and how to solve them.

