
Further delays in KLAM deal not a good omen
by Robert Maire on 08-14-2016 at 4:00 pm

Deal likely getting worse as time & remedies go by…
Just a couple of short weeks ago on the earnings conference call, Lam management was adamant that the KLAM deal would get done, and done by the Oct 20th deadline. Martin Anstice, the CEO, went to great lengths to tell us that the deal was under control, was going to happen, was pro-competitive and had "zero overlap", etc.

Now the company is "walking back" those statements, saying that the deal likely will not get done by Oct 20th.

What happened in the last two weeks???
We wonder what happened over the last two weeks that precipitated the change in the story. Obviously there were some unanticipated events, or perhaps further complications that will take more time. Clearly, the more time this takes, the worse the deal gets, as regulators have more time to come up with issues and ask for more remedies. Getting bogged down is not a good thing at all…just ask AMAT.

An out of control process???
Given that the goal posts keep getting moved further out, and have now been moved out of the Oct 20th end zone, it begs the question of whether or not the deal process is under control… it seems not.

One would imagine that the investment bankers and lawyers on the deal should have a pretty good idea of, and experience with, the overall process and timing, but that does not appear to be the case, as the company was forced to change its story in such a short time frame.

We called it…
In our last two notes on KLAC and LRCX we called into question both the timing and the costs of the deal and expressed our concerns…

Lam beats on EPS & Revs and good Q1 (Sept) guide


KLAC accelerates business into Q4 and where are we on the KLAM deal?


We were the only analyst who correctly predicted the problems with the Applied/TEL deal, and so far we have been in the same position on the KLAM deal…

Most of the sell side remains overly cheery and positive and doesn't see the myriad of potential issues. This is complex stuff with many angles, and nothing should ever be assumed to be a "slam dunk"…

What will KLA do??? Ask for more money???
As we brought up in previous notes, KLAC could either walk away or ask for more money as the "deal clock" runs out. We think it would be very reasonable for KLAC to ask for more money, as their performance has been stellar and Lam needs the deal more now than when it was first announced. Lam can't walk away from this deal as easily as AMAT could walk away from TEL.

As we previously pointed out, we had been lightening up on the shares of LRCX based on deal issues and valuation, but we would retain our position in KLAC as they are in the catbird seat right now.

Kost Kreep…
In the end, this deal will not be as attractive as when it was first announced. The remedies are obviously worse than anticipated and we have the added risk of KLAC asking for more money or walking away at the 11th hour.

The extended negotiations are a clear indication that the remedy costs are too high; otherwise Lam would have put this deal to bed a long time ago.

Probability down to a bit over 50/50…
We still think the deal gets done though the costs are likely even worse than they were just two short weeks ago.

We are sure that the company will play down or excuse away any remedy issues as inconsequential but investors will now have to take a much closer look. The regulators have the upper hand and time is on their side and they can negotiate for a better deal.

We are sure that Samsung is pushing Korean regulators for 2 pounds of flesh, and Japanese regulators are probably well aware of the "fox guarding the hen house" problem from both TEL and Hitachi.

Better Street management needed…
Expectations for the deal need to be better managed, as the rapid changes do not instill confidence. Stop using the term "zero overlap", as that has little to do with regulatory approval. "Outer Limits" is an old TV show, not the drop-dead date of the deal. It's also clear from many industry sources that customers are not universally in love with the deal, and to call it "pro-competitive" is a stretch.

Our guess is that the November analyst meeting will have to be pushed out (and should be pushed out) as the deal will either not be done or there will not be enough time to get a coherent story together.

The stocks…
We would not be owning LRCX right now but would hold on to our KLAC. As previously mentioned a strangle option play may work given the current instability in the story and thus the stock.


SEMICON West – Globalfoundries Update
by Scotten Jones on 08-14-2016 at 12:00 pm

On Wednesday of SEMICON West I got to sit down with Gary Patton, CTO of GlobalFoundries, for an update on what has been going on with them.

Gary started the interview by pointing out that it has now been a year since GlobalFoundries' purchase of many of IBM's semiconductor assets, and they have hit every commitment they made. They had a black eye from the ramp-up of 28nm in Dresden, they canceled 20nm, and they had to license 14nm from Samsung. Last year they said they would qualify 14nm at the beginning of this year, and they did. They now have a ton of tape-outs in line, they are in production on multiple parts, and yields are world class. The 14nm process they are running now will also provide a baseline for 7nm development.

Gary confirmed GlobalFoundries will not be offering a 10nm process. They believe it will be a short-lived node and don't see the value proposition in it (author's note: at 20nm, TSMC was really the only foundry that offered it, and they quickly transitioned to 16nm; many believe the 10nm to 7nm transition will be similar).

For 7nm they are designing the process to be done with optical lithography, but they are also positioned to introduce EUV when it is ready. In Gary's opinion, the mistake made at 20nm was that it added multi-patterning but didn't provide much scaling, so the value proposition wasn't there. 7nm is designed to optimize both cost and scaling. He wouldn't comment specifically on 7nm timing, but he did say he thought they would be competitive with other foundries. They will have a base 7nm technology, and they are also looking at further performance kickers to bring in later, plus preparing for EUV.

In terms of EUV, IBM's 3300 EUV tool is at the advanced patterning center in Albany, and GlobalFoundries will put their first tool there in partnership with NY State and CNSE. Prior to 20nm you might have had 8 to 10 1x metal layers; with multi-patterning you can only afford a small number of 1x layers.

In terms of 22FDX (their FDSOI process), progress is right on target. The 0.5 process design kit (PDK) was released in the second quarter as planned. They said they would get high 128Mb SRAM yield this year, and they got it mid-year. They have invested heavily in IP this year – Cadence, Synopsys, Mentor, ARM, Invecas (design) – and now have a pretty large foundational library of IP. With 22FDX they can operate at 0.4 volts and can get to 1pA per micron of leakage for IoT, and they are working on integrating RF. You can have much higher Ft and Fmax than with a FinFET technology, and he thinks that will be important for 5G. They are on track for risk production at the end of this year and volume production next year. FDSOI is optimized for cost and power and is really designed for IoT and the mobile space; for large chips and high performance, you need the performance of FinFETs. FDSOI is also easy to design for: a FinFET design at 14nm is 3x the cost of a 28nm design according to Gartner, and 7nm is expected to be another 3x.
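The Gartner design-cost estimates quoted above compound per node transition. As a quick illustrative sketch of that compounding (the function name and the $10M baseline are my own illustration, not figures from the interview):

```python
# Compounding the Gartner design-cost estimates cited above:
# a 14nm FinFET design is ~3x the cost of a 28nm design,
# and 7nm is expected to be another ~3x on top of that.

def design_cost_multiple(node_steps: int, factor: float = 3.0) -> float:
    """Cost multiple relative to a 28nm baseline after `node_steps` transitions."""
    return factor ** node_steps

cost_14nm = design_cost_multiple(1)  # 3x a 28nm design
cost_7nm = design_cost_multiple(2)   # roughly 9x a 28nm design
```

So under these estimates, a hypothetical $10M 28nm design would come in near $30M at 14nm and around $90M at 7nm, which is why FDSOI's lower design cost matters for the IoT and mobile space.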

In terms of embedded memory, embedded Flash works down to 28nm. GlobalFoundries has a collaboration with Everspin on MRAM and they will include it on the 22FDX platform. They are also looking at it for 14nm and beyond.

The IBM acquisition benefits have really played out as expected. 350 engineers have moved up to Malta to help with development, and they still collaborate with Samsung on development as well. Gary has ramped up RF development; he controls the R&D budget. Burlington is an RF leader, and they are using it to strengthen RF in Dresden, Singapore and Malta. They are number one in the ASIC space, and they have moved it onto 14LPP as well as using it to provide foundry IP. They are already quoting ASICs on 7nm. They got silicon photonics IP and trusted foundry status, and they have completed the certifications. IBM had a fairly large 2.5D and 3D program, and they combined it with GlobalFoundries' program. They were first to do TSVs, at 32nm for the Micron Hybrid Memory Cube. IBM had no focus on wireless; the RF group was really on the outside. They now have one of their best RF development guys in Dresden leading the implementation of RF on the 22FDX technology.


Dealerless Future for Driverless Cars
by Roger C. Lanctot on 08-14-2016 at 7:00 am

The Chevrolet Volt was a technological marvel from its very launch: a so-called plug-in extended-range electric vehicle that could be operated entirely on battery power over short distances, or for hundreds of miles on gasoline. But something happened on the way to the market that suggests deeper troubles in the automotive industry.

The maker of the Chevy Volt, General Motors, is a business-to-business operator. GM sells cars to dealers, not directly to customers. As a result, GM is dependent on dealers to properly market and sell its cars.

I remember my first visit to a Chevy dealer to look into purchasing a Volt and the dealer’s response. At the time, dealers were asking for more than the sticker price for a car that was in limited supply. Then, they were claiming that the car could not be leased, although GM had announced a generous lease plan.

It took a minute or two to realize that dealers were not pushing the Volt. Dealers saw the Volt as a threat to their internal combustion engine-based business – a business where regular service visits were a core value proposition for the longevity of their profitable operations.

Volt buyers were EV enthusiasts and fuel-efficiency fanatics. Dealers could see that these customers were not going to be good for their business, and responded by actively discouraging customers from buying Volts, using Volts as bait for a switch to an ICE vehicle, or ignoring the Volt entirely.

The resulting lackluster sales results speak for themselves and the subsequent plunge in gas prices has only put an exclamation point on the experience – as did the failure of Cadillac’s own Volt version. But with Tesla Motors grabbing headlines with its ongoing sales success with far more expensive EV machines, the GMs and BMWs and Daimlers and Porsches of the world are determined to respond.

But all of these car companies rely on dealers and dealers rely on service and used car sales to remain profitable. The Volt experience highlights the degree to which dealers are out of step with crucial transformative elements of the current automotive landscape.

It also shows them as a barrier to progress on the road to electrification and automation. That path is being rapidly paved by new market entrants and disruptive players that may or may not turn to dealers. The failure of dealers to drive sales of EVs is likely to usher in a direct sales transformation of the auto industry.

Dealers are not only out of step. Dealers are in danger.

In an environment where consumers are looking for new ways to acquire wheels or, alternatively, are seeking to avoid owning cars entirely, dealers are rapidly becoming an anachronism. At the core of that market transformation is dealer intransigence on three core areas vital to car makers:

Recalls
– Dealers – led by their national organization NADA – have resisted the efforts of the auto makers and the National Highway Traffic Safety Administration to emphasize the identification and correction of open recalls in new and used vehicles and among vehicles already on the road.

Software updates
– Dealers refuse to recognize and embrace the need for automated, dealer-less, over-the-air software updating of vehicles.

Electric vehicles
– Dealer commitment to marketing and selling electric vehicles remains, at best, suspect and, at worst, unreliable. The EV is antithetical to the dealer business model as currently conceived.

All of this matters because GM, BMW, Porsche and others are poised to bring their Tesla-mass-market-Model-3-killers to the market through what are likely to be unenthusiastic dealer networks. The least enthusiastic dealer network of all may well be Chevrolet’s, which may be why GM is planning such a limited run of Bolt EVs (25,000) while at the same time promoting Bolt EV leases through the Express Drive program with Lyft.

According to a report from Inside EVs, it will be possible to rent a 2017 Bolt EV for $99/week as part of Lyft’s Express Drive program with GM. Lyft drivers who take the Bolt offer and complete 65 drives/week pay nothing, according to the report.

What’s important to bear in mind here is that GM’s relationship with Lyft and GM’s Maven program can be viewed as the precursors of a direct sales program. Like every other manufacturer of internal combustion engine-based vehicles, GM knows that dealers have yet to find a way to capitalize on EV sales.

Even BMW has struggled to find success selling its i3 through dealers in spite of instituting a loaner program of gas-fueled vehicles for i3 owners in need of a vehicle for longer trips. EVs have been popping up in shared car fleets around the world – primarily for systems that require the users to return the cars to a charging station.

The Bolt's planned 200+ mile range should liberate the vehicle from its charging source for extended periods of time. But the Bolt won't be liberated from the dealer network.

The Volt's disappointing sales results raise questions about the long-term viability of existing new-car dealer networks for handling the demands of new electrified and, eventually, self-driving vehicles. The Lyft and Maven opportunities for GM point to new sources of dealer disruption.

Apps like Beepi and services like Flexdrive threaten to disintermediate dealers completely. Car makers may look to these emerging solutions as new go-to market alternatives to increasingly sclerotic and intransigent dealer networks.

Dealers need to embrace and leverage new technologies from electrification and over-the-air software updates to car sharing and ride hailing and seek out new ways to enable alternative car ownership models and, most importantly of all, reduce the complexity of purchasing a car. Both Beepi and Flexdrive have reduced the car acquisition process to an app and the press of a button while eliminating the need for a dealer.

New car dealers have a lot to gain from mitigating the pain of vehicle acquisition and ownership. They also have a lot to gain from paying closer attention to the changing priorities of car makers.


Qualcomm is Back on Top of the SoC World!
by Daniel Nenni on 08-13-2016 at 7:00 am

In 2015 Qualcomm stunned the fabless semiconductor world with an unprecedented layoff. When I first heard about it the number was 5%, but it kept growing and finally hit 15%. The big misstep was that, after being the SoC leader starting in 2007 with the Snapdragon series of chips that powered the smartphone revolution, QCOM did not make the jump to 64-bit in a timely manner. In fact, Apple beat QCOM to 64-bit with the A7 SoC in the iPhone 5s, and QCOM responded with this now famously ridiculous quote:

“I know there’s a lot of noise because Apple did [64-bit] on their A7. I think they are doing a marketing gimmick. There’s zero benefit a consumer gets from that.”

You can read more about this misstep in embarrassing detail in our book "Mobile Unleashed". Chapter 9, "Press Q to Connect", is a complete semiconductor history of Qualcomm.

QCOM quickly abandoned their custom 32-bit Krait ARM-based architecture and cobbled together off-the-shelf ARM Cortex 64-bit cores. The resulting Snapdragon 810 was built on a leading-edge TSMC 20nm process (the same node as the Apple A8) and failed miserably, costing QCOM their largest customer (Samsung).


Fast forward to the most recent Snapdragon 820 and the new custom 64-bit Kryo ARM-based architecture on the Samsung 14nm process, and now QCOM is again in the SoC lead, winning back Samsung (Galaxy S7 and Note7) plus many other devices around the world.

China is the best example of QCOM's resurgence. Apple and Samsung have both lost market share in China to Chinese smartphone companies: Huawei, OPPO, Xiaomi, and Vivo. All of these companies use QCOM chips and IP. Chinese smartphone companies are now invading other emerging markets like India, so this is a big upside for QCOM.

It has been reported that the next-generation QCOM Snapdragon 830 has already been taped out on the Samsung 10nm process, making silicon available in the first half of 2017. You can bet the next series of Samsung phones and phablets will be packing the latest and greatest from QCOM, beating Apple's 10nm-based devices to market. This will also be the first time QCOM has the process lead over Intel! Yes, QCOM will have 10nm server, modem, and mobile chips BEFORE Intel!

The most recent QCOM quarterly beat ($1.18 earnings per share and revenues of $6 billion against the consensus estimate of 98 cents in EPS and $5.6 billion in revenues) has analysts excited about Qualcomm and semiconductors again, but not excited enough in my opinion.
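For context on the size of that beat, the percentages work out as follows (the reported and consensus figures are from the quarter cited above; the calculation itself is just arithmetic):

```python
# Reported results vs. consensus estimates for the quarter cited above.
eps_actual, eps_consensus = 1.18, 0.98   # earnings per share, dollars
rev_actual, rev_consensus = 6.0, 5.6     # revenue, billions of dollars

# Beat expressed as a percentage above consensus.
eps_beat = (eps_actual - eps_consensus) / eps_consensus * 100  # ~20.4%
rev_beat = (rev_actual - rev_consensus) / rev_consensus * 100  # ~7.1%
```

A 20% EPS beat on a 7% revenue beat also implies margins came in well ahead of expectations, which helps explain the analyst enthusiasm.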

QCOM owning the high-end mobile SoC market again is a great foundation for what is coming next: server chips with their Chinese partnership, which I have written about before, and which brings QCOM head to head with Intel as a semiconductor company. Having already beaten Intel in the mobile and modem chip business, a big server win in China would give Intel serious financial pause and make Qualcomm the heir apparent to the Intel processor fortunes.

The proposed SoftBank acquisition of ARM is viewed as a serious net positive for QCOM, since it has long been rumored that Intel, Apple, or even Samsung would acquire ARM. Not only does SoftBank provide a neutral home for ARM, the added financial strength gives ARM customers a much quicker roadmap into servers, IoT, automotive, drones, and every other device with a chip in it, absolutely.


Memory War Z: Samsung spins antidote to 3D XPoint
by Don Dingee on 08-12-2016 at 4:00 pm

The 2016 edition of the Flash Memory Summit produced more than the usual amount of excitement. Samsung’s response to the Intel/Micron 3D XPoint challenge arrived in new slideware, indicating the war for next-generation SSDs is just starting. Who has the advantage?

We’d all like to think this is about creating a breakthrough technology, leapfrogging the competition and blowing their roadmap away. On the surface, 3D XPoint does that – in fact, in their launch presentation, the suggestion was made that it has been nearly 30 years since a mainstream memory technology was created, MRAM notwithstanding.


Simple math says Intel and Micron are 4 years ahead, based on 3D XPoint work beginning in 2012. Micron is feverishly evaluating 3D XPoint-based SSDs using sample chips and an FPGA controller, supporting rapid iterations as improved chips drop from the fab. They claim 90% reuse in the controller and firmware, but it would seem this is a classic case of the last 10% of the content being 90% of the effort. 3D XPoint behavior is still a moving target, especially in large SSD configurations.

Samsung has taken the Brad Pitt maneuver from World War Z, calmly watching a zombie mob run down the hallway after 3D XPoint while evaluating its survival options. If there is one concept Samsung understands, it is that the winner in non-volatile memory is not the "best" technology. What propelled Samsung to the top in both DRAM and NAND flash was a fab strategy that allowed capacity to be interchanged, creating rapid response to demand swings and shipping product in the face of industry-wide allocation. Good product, solid yield, best availability and pricing.

Meanwhile, Intel is taking its usual brute-force approach. Create a technology so complex that only you and your licensees can build it in silicon. Plow billions of dollars into unique fabs and ruthlessly pursue yield and learning curve reductions. Market the daylights out of the new technology so consumers ask for it – that’s coming next, we’ll see commercials for Intel Optane very shortly.

Here comes Samsung Z-NAND. A quote from the press release:

Samsung has also developed a high performance, ultra-low latency SSD solution, the Z-SSD. Samsung’s Z-SSD shares the fundamental structure of V-NAND and has a unique circuit design and controller that can maximize performance, with four times faster latency and 1.6 times better sequential reading …

In the fine print: this is all from an emulator. It appears from various reports that Samsung has been tinkering with phase-change, if only to debunk some of the 3D XPoint story, and they claim to have comparison data between existing V-NAND SSDs, a (virtual?) prototype of a PRAM-based SSD, and the definitely virtual prototype of a Z-NAND-based SSD.

If Samsung can indeed leverage existing V-NAND technology with increased parallelism – likely by adding planes, similar to the 64 regions in 3D XPoint – and build something that looks more like SLC NAND in this new Z-NAND, they can shave years off the fab timeline and have big-time capacity ready to go before 3D XPoint hits volume.

Bottom line here: Intel has to win, somehow, with its huge investment in 3D XPoint, if not in SSDs then as a DRAM replacement. Samsung is bound and determined to stay at the top of the pack in SSDs, and will use its capacity story to do that with Z-NAND. It's all marketing and darned few public technical details right this second; we won't know how this plays out until 2017 at the earliest.


I already live in the future and so should you
by Vivek Wadhwa on 08-12-2016 at 12:00 pm

I live in the future. I drive a Tesla electric vehicle, which controls the steering wheel on highways. My house in Menlo Park, Calif., is a “passive” home that expends minimal energy on heating or cooling. With the solar panels on my roof, my energy bills are close to zero — and that includes charging the car. My iPhone is encased in a cradle laced with electronic sensors that I can place against my chest to generate a detailed electrocardiogram. Because I have a history of heart trouble, including a life-threatening heart attack, knowing that I can communicate with my doctors in seconds is a comfort.

I spend much of my time talking to entrepreneurs and researchers about breakthrough technologies, such as artificial intelligence and robotics. These entrepreneurs are building a better future, often at a breakneck pace. One team built in three weeks a surgical-glove prototype that delivers tactile guidance to doctors during examinations. Another built visualization software that tells farmers the health of their crops using images taken by off-the-shelf video cameras flown on drones. That technology took four weeks to develop. You get the idea. I do, in fact, live in the future as it is forming. It is forming far faster than most people realize, and far faster than the human mind can comfortably perceive.

In short, the distant future is no longer distant. The pace of technological change is rapidly accelerating, and those changes are coming to you very soon, whether you like it or not.

Such rapid, ubiquitous change has, of course, a dark side. Many jobs as we know them will disappear. Our privacy will be further compromised. Future generations may never drive a car or ride in one driven by a human being. We have to worry about biological terrorism and killer drones. Someone you know — maybe you — will have his or her DNA sequence and fingerprints stolen. Man and machine will begin to merge into a single entity. You will have as much food as you can possibly eat, for better and for worse.

The ugly state of politics in the United States and Britain illustrates the impact of income inequality and the widening technological divide. More and more people are being left behind by innovation and they are protesting in every way they can. Technologies such as social media are being used to fan the flames and to exploit ignorance and bias. The situation will get only worse — unless we find ways to share the prosperity we are creating.

We have a choice: to build an amazing future, such as we saw on the TV series “Star Trek,” or to head into the dystopia of “Mad Max.” It really is up to us; we must tell our policymakers what choices we want them to make. The key is to ensure that the technologies we are building have the potential to benefit everyone equally; balance the risks and the rewards; and minimize the dependence that technologies create. But first, we must learn about these advances ourselves and be part of the future they are creating. That future cannot be ignored.

You could say that I live in a “technobubble,” a world that is not representative of the lives of the majority of the people in the United States or in the world. That’s true. I live a comfortable life in Silicon Valley, and I am fortunate to sit near the top of the technology and innovation food chain. As a result, I see the future sooner than most people. The noted science fiction writer William Gibson, who is a favorite of hackers and techies, once wrote: “The future is here. It’s just not evenly distributed yet.” But from my vantage point at its apex, I am watching that distribution curve flatten, and quickly. Simply put, the future is happening faster and faster. It is happening everywhere. Technology is the great leveler, the great unifier, the great creator of new and destroyer of old.

We are only just commencing the greatest shift that society has seen since the dawn of humankind. And as in all other manifest shifts — from the use of fire for shelter and for cooking to the rise of agriculture and the development of sailing vessels, internal-combustion engines and computing — this one will arise from breathtaking advances in technology. This shift, though, is both broader and deeper, and is happening far more quickly than the previous tectonic shift.

This post is based on my and Alex Salkever’s upcoming book, “Driver in the Driverless Car: How Our Technology Choices Will Create the Future,” which will be released this winter.


How Connected Healthcare is Becoming Vital
by Bill McCabe on 08-12-2016 at 7:00 am

There is one word that describes the direction the health care industry is heading: "connectivity". This catch-all term describes using the internet to increase the reach of medicine. This is also known as the Internet of Things (IoT), and it is nothing new. It is, however, relatively new to healthcare.

The goal of connected healthcare is to empower both providers and patients. Using connectivity, a provider can make use of remote patient monitoring and consultations without the need to be face to face. This may seem like a minor point to some, but it would enable doctors to reach patients they have never been able to reach before. Connected healthcare would also allow things like our cell phones and tablets to send real-time medical information to our healthcare providers.

Taking it a step further, the aim is to use medical data in new ways. Rather than your medical file sitting unused in a cabinet somewhere, the aim of connected healthcare is to compile the data in a way that lets your healthcare provider identify areas in which your day-to-day life may need improvement. Using this data, you and your provider would then be able to create novel solutions to the issue.

The question still remains, though: why is connected healthcare becoming vital? We just explained what it is and some of the benefits, but where is the "need"?

It is quite simple: our healthcare network would resemble a spider web if we connected all of the facilities with string. You have your imaging done at the hospital, your bloodwork done at a lab, and your general check-ups done at your doctor's office. Then there are outpatient procedures, specialists and countless pharmacies. In days past, the only things that connected these medical facilities were phone and fax (or you transporting your paperwork), which was in no way ideal. The margin for error was simply too great. What's more, it could take days for the results of tests or procedures to make it where they needed to go.

What connected healthcare allows us to do is use the internet to digitally transmit records, prescriptions, files and test results almost instantaneously. For some this may not seem necessary; the fact is, however, that our providers are dealing with more and more patients every single day. One example is that the workload of a medical secretary has nearly doubled in the last decade, and where more volume is added, the risk of mistakes also increases. Using a digital method of transport will eliminate a lot of the potential for human error within our healthcare network.

That is truly only the start, though. Using connected healthcare, doctors, specialists, surgeons, imaging techs and pharmacists can all have access to the most up-to-date and accurate information about their patients. Undoubtedly this will come to benefit us all in ways we cannot even imagine.

We would like to hear your view of connected healthcare. To schedule a quick call, use the following link.


Pokemon Go’s Roots in Early Human Behavior
by Tom Simon on 08-11-2016 at 4:00 pm

The popularity of Pokemon Go is really no mystery – it has its roots in our hunter-gatherer evolution. Pokemon Go was an app that was just waiting to happen. It's a perfect storm: the scavenger hunt brought into the modern age. But more importantly, it recapitulates what our ancestors had to do to survive. It taps primal and highly evolved programming to seek out significantly and subtly differentiated items in our environment and bring them back as booty.

Humans are collectors. The same instinct plays out with shopping, bird watching, coin collecting – any number of hobbies that have arisen in our culture to replace our very old and ingrained foraging skills. For a fascinating view of this I highly suggest reading Michael Pollan's book The Omnivore's Dilemma. It is the story of three meals and how they are brought to his table. The first is fast food, the second is sustainable, and the third is hunted and foraged – by the author himself. The part of the last meal that fascinated me the most is where he forages mushrooms, with the help of an experienced mushroom hunter.

Those of you who know me are well aware of my interest in mushroom hunting. I have been going into the woods for many years during the winter to search out my own ‘Pokemon’ and cook them for dinner. I used to hate hiking, but once I discovered the pleasures of hunting difficult prey and learning to distinguish poisonous fungi from the delectable (sometimes they are the same thing), I would eagerly wait for the rains to come so I could get out into the field.

I sought out the pleasure of walking through the forest in rain and drizzle, searching under and around vegetation and fallen trees. Much like Pokemon, there is an endless variety of mushrooms. There is a classification system that breaks them down into families that can be fairly easily distinguished. There are many thousands of species. That number is all the more impressive when you consider that most of the people reading this have only eaten two or maybe three species. Cremini, Portobello and white mushrooms are actually the same species. Perhaps you have eaten Oyster mushrooms or Shiitake. In terms of the flavors and textures available in the mushroom world, these are some of the least interesting.

So what exactly is a mushroom? The organism that produces mushrooms lives underground and consists of networks of fine strands of white fibers called mycelium. They have thin cell walls and the ability to transport water and nutrients through small pores that connect the linked cells to each other. In fact, many mushroom organisms live in connection with the roots of trees and exchange nutrients between them to assist each other. In many cases the trees or mushrooms cannot live without each other. This is a very significant piece of information to have when searching for certain varieties. But I am getting ahead of myself.

Other mushroom species live off of decomposing wood or plant fiber. If all mushrooms ceased to exist, fallen trees would not slowly disintegrate into the forest floor. The mushroom that we see pop up out of the ground, or out of a dead or dying tree, is the reproductive organ that the mycelium forms when two distinct individuals connect underground. The mushroom serves to broadcast billions of microscopic spores, a tiny fraction of which might ever start a new mycelium.

Our ancestors learned to distinguish toxic from edible mushrooms, and humans have a long history of foraging them for food. Some also have medicinal properties. When I go hunting I am usually looking for four or five specific varieties that I know well and have extensive experience hunting. Some of my favorites are pictured below. First off is the very choice Chanterelle, followed by the scary looking but delicious Black Trumpet.


Last is an edible that few people eat called the Cocorra.

It’s probably a good thing that everyone playing Pokemon Go is not traipsing through the woods looking for edible mushrooms. It would be pretty bad for the mushroom habitat. Nevertheless, in the case of mushrooms, it’s pretty hard to “catch them all”, but even catching a few can lead to a rewarding meal. Modern humans are not so unlike our ancient ancestors after all.


Keynote: Silicon is the New Steel: Building the World’s First Terascale Network

Keynote: Silicon is the New Steel: Building the World’s First Terascale Network
by bkeppens on 08-11-2016 at 12:00 pm

Prof. Thomas Lee from Stanford University is the keynote speaker at the upcoming 38th EOS/ESD Symposium (September 11-16, Anaheim). The EOS/ESD Symposium focuses on the problems posed by electrostatic discharge in electronic production and assembly, and on their answers.

Abstract:
Steel transformed civilization in the 20th century, shifting from high-tech material to commodity in the process. Silicon’s analogous shift from circuits to systems will similarly transform civilization in this century. This talk will argue that multiple convergent trends are pushing us toward the terascale age, presenting us with both historic opportunities and historic challenges. The latter extend from DC to the millimeter wave, and from design tools to hardening a trillion devices to ESD and other threats. Solving these problems will complete the transition of silicon from today’s ubiquity to tomorrow’s invisibility, the true mark of a successful technology.

38th EOS/ESD Symposium

Speaker bio: Thomas Lee received his degrees from MIT, and an honorary doctorate from the University of Waterloo. His 1989 doctoral thesis described the world’s first CMOS radio. He has been at Stanford University since 1994, having previously worked at Analog Devices, Rambus and other companies. He’s helped design PLLs for several microprocessors (notably AMD’s K6-K7-K8 and DEC’s StrongARM), and has founded or cofounded several companies, including the first 3D memory company, Matrix Semiconductor (acquired by SanDisk), and IoE companies ZeroG Wireless (acquired by Microchip) and Ayla Networks. He serves on the board of Xilinx, is an IEEE and Packard Foundation Fellow, has won “Best Paper” awards at CICC and ISSCC, and was awarded the 2011 Ho-Am Prize in Engineering. He is a past Director of DARPA’s Microsystems Technology Office, and owns between 100 and 200 oscilloscopes, thousands of vacuum tubes, and kilograms of obsolete semiconductors. No one, including himself, quite knows why.

Join us for this interesting keynote and more:
The 2016 EOS/ESD Symposium in Anaheim will address the latest research on EOS and ESD in the rapidly changing world of electronics through tutorials, workshops, technical sessions, invited talks, and through the products and services presented in the industry exhibits.

Download the entire program on the ESDA website or register for the event.

ESD Fundamentals: A six-part series on Electrostatic Discharge (ESD) prepared by the ESD Association

History & Background
To many people, Electrostatic Discharge (ESD) is only experienced as a shock when touching a metal doorknob after walking across a carpeted floor or after sliding across a car seat. However, static electricity and ESD have been a serious industrial problem for centuries. As early as the 1400s, European and Caribbean military forts were using static control procedures and devices to try to prevent inadvertent electrostatic discharge ignition of gunpowder stores. By the 1860s, paper mills throughout the U.S. employed basic grounding, flame ionization techniques, and steam drums to dissipate static electricity from the paper web as it traveled through the drying process. Every imaginable business and industrial process has issues with electrostatic charge and discharge at one time or another. Munitions and explosives, petrochemical, pharmaceutical, agriculture, printing and graphic arts, textiles, painting, and plastics are just some of the industries where control of static electricity has significant importance.

The age of electronics brought with it new problems associated with static electricity and electrostatic discharge. And as electronic devices become faster and their circuitry smaller, their sensitivity to ESD generally increases. This trend may be accelerating. The ESD Association’s “Electrostatic Discharge (ESD) Technology Roadmap”, revised April 2010, includes “With devices becoming more sensitive through 2010-2015 and beyond, it is imperative that companies begin to scrutinize the ESD capabilities of their handling processes”. Today, ESD impacts productivity and product reliability in virtually every aspect of the global electronics environment.

Despite a great deal of effort during the past thirty years, ESD still affects production yields, manufacturing cost, product quality, product reliability, and profitability. The cost of damaged devices themselves ranges from only a few cents for a simple diode to thousands of dollars for complex integrated circuits. When associated costs of repair and rework, shipping, labor, and overhead are included, clearly the opportunities exist for significant improvements. Nearly all of the thousands of companies involved in electronics manufacturing today pay attention to the basic, industry accepted elements of static control. ESD Association industry standards are available today to guide manufacturers in establishing the fundamental static charge mitigation and control techniques (see Part Six – ESD Standards). It is unlikely that any company which ignores static control will be able to successfully manufacture and deliver undamaged electronic parts.


The Higgs Boson and Machine Learning

The Higgs Boson and Machine Learning
by Bernard Murphy on 08-11-2016 at 7:00 am

Technology in and around the LHC can sometimes be a useful exemplar for how technologies may evolve in the more mundane world of IoT devices, clouds and intelligent systems. I wrote recently on how LHC teams manage Big Data; here I want to look at how they use machine learning to study and reduce that data.

The reason high-energy physics needs this kind of help is to manage the signal-to-noise problem. Of O(10^12) events/hour, only ~300 produce Higgs bosons. Real-time pre-filtering significantly reduces this torrent of data to O(10^6) events/hour, but that’s still a very high noise level for a 300-event signal. Despite this, the existence of the Higgs has been confirmed with a significance of 5σ, but the physics doesn’t end there. Now we want to study the properties of the particle (there are actually multiple types), but the signal-to-noise problems appeared so daunting that CERN launched a challenge in 2014 to propose machine-learning methods to further reduce candidate interactions.

The tricky part here is that you don’t want to rush to publish your solution to quantum gravitation or dark matter only to find a systematic error in the machine learning-based data analysis. So standards for accuracy and lack of bias/systematic errors are very high, suggesting that the LHC may also be beating a path for the rest of us in machine learning.

The CERN machine-learning challenge required no understanding of high-energy physics. The winning method, provided by Gabor Melis, used an ensemble of neural nets. There’s a lot of detail to the method but one topic is especially interesting – the careful methods and intensive effort put into avoiding over-fitting data (aka false positives). I recently commented on a potential weakness in neural net methods. If you train to see X, you will have a bias to see X, even in random data. So how do you minimize that bias?
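The bias problem can be made concrete with a toy sketch (my own illustration, not part of the challenge): a model flexible enough to memorize its training set will score perfectly on data with purely random labels, while a held-out set exposes that no real signal was learned. Here a 1-nearest-neighbour classifier stands in for any over-capable model.

```python
import random

random.seed(0)

# Random 2-D points with purely random labels: there is no signal to learn.
def make_data(n):
    return [((random.random(), random.random()), random.choice([0, 1]))
            for _ in range(n)]

train = make_data(200)
holdout = make_data(200)

def predict_1nn(data, x):
    # 1-nearest-neighbour: effectively memorizes the training set.
    best = min(data, key=lambda p: (p[0][0] - x[0])**2 + (p[0][1] - x[1])**2)
    return best[1]

def accuracy(model_data, eval_data):
    hits = sum(predict_1nn(model_data, x) == y for x, y in eval_data)
    return hits / len(eval_data)

train_acc = accuracy(train, train)      # scored on its own training set
holdout_acc = accuracy(train, holdout)  # scored on unseen data

print(train_acc, holdout_acc)  # perfect on train, near chance on holdout
```

The gap between the two numbers is exactly the over-fitting the challenge entrants had to guard against, which is why held-out validation sets and cross-validation featured so heavily in the winning methods.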

The method used both to generate training data and to test the significance of “discoveries” in that data is Monte Carlo simulation, a technique which has been in use for many decades in high-energy physics (my starting point many years ago). The simulation models not only event dynamics but also detector efficiency. Out of this come many-dimensional representations of each event, which form the input to training for each of the challenge participants’ methods. Since the data is simulated, it is easy to inject events of special interest with any desired probability to test metrics for classification.
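The injection idea can be sketched in a few lines of Python. This is a deliberately crude toy (a falling exponential background and a Gaussian resonance, both invented here for illustration, nothing like the real detector simulation), but it shows why simulated data is so useful: every event carries a ground-truth label, and the signal fraction is a free parameter.

```python
import random

random.seed(42)

def simulate_event(signal_fraction=0.01):
    """Toy Monte Carlo: draw one labelled event.

    Background 'mass' follows a falling exponential spectrum; signal is
    a narrow Gaussian resonance. Both choices are illustrative only.
    """
    if random.random() < signal_fraction:
        mass = random.gauss(125.0, 2.0)      # narrow resonance near 125 GeV
        label = 1                            # signal, known by construction
    else:
        mass = random.expovariate(1 / 80.0)  # smooth background spectrum
        label = 0                            # background
    return mass, label

# Because labels are known by construction, this sample can train and
# score any classifier at an arbitrary injected signal fraction.
events = [simulate_event(signal_fraction=0.1) for _ in range(10_000)]
n_signal = sum(label for _, label in events)
print(n_signal)  # roughly 10% of 10,000
```

Dialing `signal_fraction` up or down is what lets experimenters probe how a classifier behaves as the signal gets rarer.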

Deep neural nets and boosted tree algorithms dominated successful entries. The challenge was also important in enabling cross-validation and comparison between techniques. To ensure objectivity between entries, statistical likelihood measures were defined by CERN and used to grade the solutions from each competitor. The competition together with these measures is a large part of how CERN was able to have confidence in minimized bias in the algorithms. But they also commented that the statistical metrics used are still very much a work in progress.

I should also stress that these methods are not yet being used to detect particles. They are only being used to reduce the data set, based on classification, to a set that can be analyzed using more traditional methods. And in practice a wide variety of techniques are being used on Atlas and CMS experiments (two of the detectors at the LHC), including neural nets and boosted decision trees, plus pattern recognition on events, energy and momentum regressions, individual component identification in events and others.

And yet even with all this care, machine learning methods are not out of the woods yet. One of the event types of interest is decay of a Higgs boson to 2 photons – a so-called di-photon event. The existence of Higgs is in no doubt, but recent di-photon events looking in a different mass range found (with 3σ significance) an apparent resonance at 750 GeV, which might have heralded a major new physics discovery.

But subsequent experiments this year reversed the likelihood that a new particle had been detected. Whether the initial false detection points back to weaknesses in the machine learning algorithms or in human error, this should serve as a reminder that when you’re trying to see very weak signals in significant background, eliminating systematic errors is very, very hard. I think it also points to the power of multiple independent viewpoints or, if you like, the power of the crowd. This underpins a core strength of the scientific method: independent and repeatable validation.

You can learn more about the CERN challenge HERE. A more comprehensive discussion of the total solution can be found HERE. And a report on the non-existent 750GeV resonance can be found HERE.

More articles by Bernard…