DAC 2017: How Oracle does Reliability Simulation when designing SPARC
by Daniel Payne on 06-27-2017 at 12:00 pm

Last week at #54DAC there was a talk by Michael Yu from the CAD group of Oracle who discussed how they designed their latest generation of SPARC chips, with an emphasis on the reliability simulations. The three features of the latest SPARC family of chips are:

  • Security in silicon
  • SQL in silicon
  • World’s fastest microprocessor



New Concepts in Semiconductor IP Lifecycle Management
by Daniel Nenni on 06-27-2017 at 7:00 am

Right before #54DAC I participated in a webinar with Methodics on “New Concepts in Semiconductor IP Lifecycle Management” with Simon Butler, CEO of Methodics, Michael Munsey, Vice President of Business Development and Strategic Accounts, and Vishal Moondhra, Vice President of Applications. The webinar introduced Percipient and how it will not only extend IP lifecycle management, but also allow for modeling of the entire design ecosystem. Percipient was then featured in the Methodics booth at #54DAC with demos and presentations.


The premise is that IP lifecycle management and workspace management need to evolve as SoCs become more and more complex. Look at complex system design, such as automotive and aerospace systems: those industries have evolved their ecosystems to keep pace with the complexity of the systems they design. Today’s SoC designs are truly systems, with complexity rivaling the most complex systems in other industries.

Not only does IP lifecycle management need to keep pace with the increasing complexity of system design, but the ability to model the entire ecosystem for SoC design must be accounted for as well. IP must be tracked not only as building blocks within an SoC, but as part of the entire ecosystem. A design team must be “percipient”, that is, one that perceives not only the IP in a design but the entire ecosystem in which it is designing. Engineering systems used by SoC design teams and the infrastructure of those design teams must be considered along with the IP building blocks.

The webinar is now up for replay:

New Concepts in Semiconductor IP Lifecycle Management

Today’s complex SoC design requires a new level of internal and external design traceability and reuse by tightly coupling IP creators with IP consumers. Join us for the introduction of an exciting new platform that allows companies to provide the transparency and control needed to streamline collaboration by providing centralized cataloging, automated notifications to design teams, flexible permissions across projects, and integrated analytics across diverse engineering systems. Come see how companies are realizing substantial cost and time to market savings by adopting IP lifecycle management methodologies.

About Methodics

Methodics delivers state-of-the-art IP Lifecycle Management, Design Data Management, and Storage and Workspace optimization and acceleration tools for analog, digital, SoC, and software development design teams. Methodics’ customers benefit from the products’ ability to enable high-performance collaboration across multi-site and multi-geographic design teams. The company is headquartered in San Francisco, California, and has additional offices and representatives in the U.S., Europe, China, Taiwan, and Korea. For more information, visit http://www.methodics.com


Open-Silicon SerDes TCoE Enables Successful Delivery of ASICs for Next-generation, High-Speed Systems
by Mitch Heins on 06-26-2017 at 12:00 pm

With 5G cellular networks just around the corner, an ever-increasing number of companies are working to bring faster communications chips to market. Data centers are now deploying 100G to handle the increased bandwidth requirements, typically in the form of four 28Gbps channels, and that means ASIC designers are looking to integrate Serializer/Deserializer (SerDes) solutions that can reliably handle these speeds.
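As a rough sanity check on those numbers (my own arithmetic, not from the article): 100 Gigabit Ethernet uses 64b/66b encoding, so spreading the encoded line rate across four lanes gives

$$\frac{100\ \text{Gb/s} \times \frac{66}{64}}{4\ \text{lanes}} = 25.78125\ \text{Gb/s per lane}$$

28Gbps-capable channels therefore leave headroom for framing and FEC overhead (OTU4 transport, for instance, runs at roughly 27.95 Gb/s per lane).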

Few companies have the in-house experience to do this correctly, so most look to outside sources for help with this part of their ASIC design. The hard work you put into your ASIC will be all for naught if you can’t get those high-speed signals on and off the die correctly, through the package and the board.

The choice of package can make or break your ASIC design, both in terms of technical system specifications for the SerDes interface and in terms of cost and reliability trade-offs. These are tough decisions that can largely determine project success, and possibly your career. To mitigate these risks, many customers turn to veteran ASIC design-services companies like Open-Silicon to help them make the package and board trade-offs that ensure final system success.

Open-Silicon has an impressive amount of experience in this area, having already integrated SerDes interfaces into more than 100 ASIC designs for high-speed systems used in the networking, telecom, computing and storage markets. In addition to their own experienced people, Open-Silicon also works with silicon-qualified SerDes IP providers who offer solutions across multiple technology nodes, foundries and communication protocols. Open-Silicon provides design services through their Technology Center of Excellence (TCoE), where they offer ASIC designers services such as:

  • Channel Evaluation: Identifying the right SerDes solution by evaluating the channels and applications intended for the system
  • PCS and Controller Solutions: Evaluating the PCS and Controller/MAC requirements for the interface to the core and optimizing for interoperability of hard and soft macros
  • Physical Integration: Evaluating the metal stack compatibility, special layer and threshold voltage requirements, placement of the SerDes on chip, and the bump plan for physical verification and packaging
  • Package/Board Design: Collaboratively working on packaging and board design, including 3D parasitic extraction, crosstalk and simultaneous switching output noise analysis, and signal/power integrity, along with other system-level considerations
  • Silicon Bring-up: Close coordination with design-for-test and test teams for final design bring-up and quick assessment on automatic test equipment

I recently conversed with H. N. Naveen and Abu Eghan of Open-Silicon, who did a case study integrating high-speed SerDes into an ASIC design. The first thing they pointed out is that interactions between the ASIC and its environment have a tremendous impact on the success of your ASIC within the system. Interactions between die, package, and printed circuit board must be considered and optimized to get a solid, reliable, high-speed interface. Normally these are three very different domains handled by different people, but when it comes to a high-speed SerDes interface, a company must think differently. All these areas must be co-designed to ensure correct performance at these high frequencies.

In the Open-Silicon case study, a 28nm SerDes IP was chosen for a 4-channel, 28Gbps/channel communications ASIC intended to be used in a 100G Ethernet backplane. Naveen and Eghan’s task was to focus on ensuring that the package and board design were tuned to work with the chosen SerDes IP.

The first package choice was a high-performance, low-temperature co-fired ceramic (LTCC) flip-chip substrate. Their analysis included examining the return loss, insertion loss, and package crosstalk of this and other packages. They also looked at the PCB stack to make trade-offs regarding surface roughness, pin assignments, signal escape and routing, edge conditions, and signal loss at the board level.

The final package design was selected and optimized through simulations to meet targets culled from the CEI specs, including pair-to-pair isolation and substrate insertion loss. The main drivers for their selection and analysis process were overall performance of the SerDes through to the board connections, cost effectiveness of the package, the ability to handle high-speed signals over a wide bandwidth, and consistency across fabricated parts to ensure manufacturability.

As part of the analysis they also determined they could meet the system requirements using an alternative High Density Build Up (HDBU) type package that had lower performance but would be somewhat cheaper. In the end, the LTCC version (HiTCE ceramic) was chosen to take advantage of the extra margin in its loss characteristics.

Signal integrity analysis is one of the most important activities to be performed on SerDes signal lines on a printed circuit board. The PCB materials, vias, copper surface roughness, and so on become very critical when signal speeds are very high (>10Gbps). A channel model is created that represents the complete channel, comprising the transmitter and receiver (die), package, PCB, and connectors. The losses and signal quality are analyzed and the various parameters of the channel are fine-tuned to obtain the optimum operating conditions to meet the prescribed standards. Every component of the channel is critical and has to be optimized to ensure successful functioning of the system.
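To make the channel-assembly idea concrete, here is a minimal sketch using the open-source scikit-rf package. This is my own illustration, not Open-Silicon's flow, and the Touchstone file names are hypothetical stand-ins for real package and board extraction results.

```python
# Sketch: cascade package and PCB S-parameter models into one channel
# and read out end-to-end insertion loss. File names are hypothetical.
import skrf as rf

pkg = rf.Network("serdes_package.s2p")   # 2-port package model
pcb = rf.Network("backplane_trace.s2p")  # 2-port trace + connector model

# Cascade the two 2-port networks (assumes matching frequency grids)
channel = pkg ** pcb

# Insertion loss |S21| in dB near 14 GHz, the Nyquist frequency
# of a 28 Gbps NRZ signal
il_db = channel.s_db[:, 1, 0]
for f, il in zip(channel.f, il_db):
    if abs(f - 14e9) < 50e6:
        print(f"Insertion loss at {f/1e9:.2f} GHz: {il:.2f} dB")
```

From the same Network object one can also pull return loss (s_db[:, 0, 0]) or time-domain responses, which is the kind of data that feeds the trade-off decisions described above.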

The end result of the case study was an optimized high-bandwidth package and board design for 28Gbps that met all of the test requirements without any issues. As noted above, these kinds of trade-offs are not easy, and Open-Silicon has demonstrated the expertise and experience to help system designers overcome the challenges of choosing and implementing next-generation high-speed 56Gbps SerDes for ASIC designs targeted at future networking, telecom, computing, and storage applications.

Since completing the case study, Open-Silicon has productized their work into a 28Gbps SerDes evaluation platform for ASIC development, enabling rapid deployment of chips and systems for 100G networks. The Open-Silicon SerDes platform includes a full validation board with a packaged 28nm test chip, software, and characterization data. The chip integrates a 28Gbps SerDes quad macro, using physical layer (PHY) IP from Rambus, and meets the compliance needs of the CEI-28G-VSR, CEI-25G-LR and CEI-28G-SR specifications.

For more details on the case study and the resulting evaluation platform from Open-Silicon, I encourage the reader to follow the links at the end of the article.

See Also:
Open-Silicon Case Study
Open-Silicon SerDes TCoE


The Real Reason Siemens Bought Mentor!
by Daniel Nenni on 06-26-2017 at 7:00 am

The Siemens purchase of Mentor last year for a premium $4.5B was a bit of a shock to me as I have stated before. I had an inkling a Mentor acquisition was coming but Siemens was not on my list of suitors. The reviews have been mixed and the Siemens commitment to the IC EDA market has been questioned so I spent some time on this at #54DAC.

First stop: the keynote from Chuck Grindstaff (Executive Chairman, Siemens PLM Software Inc.). Afterwards we were afforded a Q&A session with Chuck and Wally, and I also had a chat with Chuck in the hallway.

The Age of Digital Transformation

EDA has continually moved to higher levels of abstraction, changing how electronics are designed and created. Now we are seeing the need in the industrial world for further digitalization and virtualization. In his keynote, Chuck Grindstaff, Executive Chairman of Siemens PLM Software, will discuss the global impact of this digital transformation. EDA pioneered this revolution and paved the way for today’s digital industrial revolution that is transforming and disrupting all industries. For system companies, products are evolving into advanced systems of systems. As a result, SoCs and application software are now the core differentiation and enabling technologies. This is spurring growth and opportunity for IC designers in the convergence of semiconductors and systems. Siemens and Mentor together are setting the vision for this new era of digital transformation.


The Q&A was interesting, but it was mostly press people. My question was: who called whom first on the acquisition? Chuck said he made the first call, but Wally added that they had their first acquisition talk nine years ago when Cadence (Mike Fister’s last stand) attempted a hostile takeover. Siemens was on Wally’s list of white knights.

I then asked Chuck point blank if he was going to sell the Calibre unit, since it is not part of the Siemens core business and competition in that market segment is stronger than ever before. Chuck bluntly said no, why would he? My reply was money, and I added that I wasn’t in the market to sell my house but if someone offered a billion dollars for it I would sell. I also said the Mentor Calibre unit is worth more today than it probably ever will be. My bet is that there would be a feeding frenzy between Synopsys, Cadence, and a Chinese holding company. Chuck disagreed but did say maybe for the right price, then reminded me of how big Siemens is, so their right price is probably higher than one might expect. By the way, Siemens is an $89 billion company, while Synopsys is $2.4 billion, Cadence is $1.8 billion, and Mentor is $1.3 billion.

The hallway chat was even more revealing. Chuck and I had a blunt conversation, and my feeling is that Chuck is a true competitor like Mentor management has never seen before. Remember, Chuck has CAD running through his veins from his many years at Unigraphics, which was acquired by Siemens, and he IS from Texas…

Bottom line:
Disruption is good, and EDA was overdue. For those of you who think Mentor will be assimilated into Siemens and the Calibre group will die a slow and silent death, I think you will be seriously disappointed. For those of you who compete with Mentor, watch your back, absolutely.


First Thoughts from #54DAC!
by Daniel Nenni on 06-24-2017 at 7:00 am

This was my 34th DAC, yes 34. It is a shame blogging did not exist back then because I would have liked to have read the thoughts of my eager young mind, or maybe not. The first thing that struck me this year is the great content. Before DAC I review the sessions I want to see, and this year there were many more than I had time for. Thankfully most of them were recorded, so I can go back and see the ones I missed, and there were quite a few.

SemiWiki had five bloggers covering DAC this year so you will be seeing the resulting blogs for weeks to come. Next year we will have more bloggers because #55DAC is in San Francisco and I’m expecting even more great content.

The other thing that stuck out is the age of the DAC attendees. There were many more under 35 than before, and this tracks with the SemiWiki analytics. Currently (so far in 2017) the majority of our traffic is under 35 years old. That is a real positive sign, and the female readership is now in double digits and increasing steadily. I credit the DAC committee with attracting a younger crowd through University programs and such. The poster sessions this year were packed; the free food and drinks during the receptions probably helped attendance, but that still counts.

As for the total crowd, it seemed much lighter than I remember from last year and a lot less than San Francisco the year before, but we will have to wait for the official word from DAC. There were quite a few first-time exhibitors this year, which is a great sign, and you should expect even more next year in San Francisco. I took a look at the Solido meeting room schedule and was surprised to see both of their meeting rooms were booked up. Impressive! Other vendors I asked said the same. Pre-setting meetings is definitely the way to go.

The award for the best booth goes to OneSpin for sure. It was not the busiest booth but it definitely stood out. I would have to say that Cadence had the busiest booth as they usually do.


One of the trending topics at DAC and on SemiWiki is artificial intelligence, and I would expect that to continue for years to come. On SemiWiki we track different application areas such as AI, Automotive, Mobile, IoT, and Security. We can then cross-reference those with geography, vendor, events, etc… The thing about AI is that it seems to touch all of the application areas, and that is great news for semiconductors because AI will consume massive amounts of silicon for processing power and storage. The foundries will benefit because leading-edge silicon will be in great demand, and of course the memory makers will take their fair share of the profits.

Speaking of foundries, Synopsys was kind enough to host foundry events for TSMC, Samsung, GF, and Intel that were packed. TSMC, Samsung, and Intel were breakfasts and GF was dinner. Press was not allowed in the Intel breakfast but I attended the other three. As soon as the videos are posted I will publish my blogs because they are definitely worth viewing. We will do the same with the other bloggers and the recorded sessions they attended. I was told links would be posted in two weeks so stay tuned. Synopsys had a Press/Analyst table right up front and Aart de Geus sat with us both mornings. As I have said before, Aart is one of the most interesting people in EDA so sitting with Aart is a session in itself.

One of the more interesting discussions I heard after the foundry sessions was: why are the foundries releasing so many process versions? Some of the answers made me cringe. The easy answer is that customers are asking for them. In the case of TSMC that is a plausible answer because TSMC is very customer driven. You can call it collaboration, but realistically TSMC builds capacity based on customer demand, which is why TSMC has very high fab utilization rates. I also believe TSMC is using the quick node strategy to protect their customer base. For example, SMIC and UMC are shipping TSMC-compatible 28nm. Now they will have to follow TSMC to 22nm. UMC and SMIC are also working on FinFET processes that will likely be “TSMC like”. Well, they had better be TSMC 12nm “like”.

DAC is a lot of work for everyone associated with it including exhibitors, presenters, panelists, committees, bloggers, etc… Please make sure your gratitude is well placed because without DAC I seriously doubt we would have the semiconductor audience we have today, absolutely.


Ransomware of $1 Million Crushes Company
by Matthew Rosenquist on 06-23-2017 at 12:00 pm

A South Korean web hosting company struggles for survival after agreeing to pay a ransomware extortion of $1 million to hackers.

New Record for Ransomware
Nayana, a South Korean web hosting firm, suffered a ransomware attack that resulted in 153 infected Linux servers. The data encrypted by the malware impacted approximately 3,500 small-business clients. The ransomware targeted files, databases, and video. The compromise shuttered the hosting firm’s services.

The attackers demanded a colossal recovery fee of over $4 million. Negotiations brought that figure down to $1 million, to be paid in several installments. This is a new record payout for ransomware victims. Sadly, it will fuel even greater motivation for cybercriminals to press forward with more brazen attacks.

Failure to Manage Cyber Risks
Ransomware is a well-known problem and one that continues to grow in popularity with cyber-criminals. The malware that infected Nayana was a variant of the Erebus ransomware, specifically designed for Linux. Nayana was behind on proper updates and patching, running vulnerable systems on an outdated Linux kernel compiled in 2008.

Once Erebus was able to gain a foothold, its sophisticated encryption methods began undermining the integrity of files and making them unusable by their owners. Erebus uses the RSA algorithm to encrypt the unique AES keys that lock each file. Without the RSA private keys held by the attackers, decryption is very difficult, likely impossible with current methods. This variant of ransomware can target over 400 different file types, including Microsoft Office documents, databases, and multimedia files, but it is most adept at encrypting web server data.
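To see why recovery without the attackers' cooperation is effectively hopeless, here is a minimal sketch of the hybrid RSA-plus-AES pattern described above, written with Python's cryptography package. This is a generic illustration of the cryptographic scheme, not Erebus code, and all names in it are mine.

```python
# Generic hybrid-encryption sketch: a fresh AES key per file, with the
# AES key itself wrapped under an RSA public key. Illustration only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The attacker keeps the private key; victims only ever see the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def lock_file(data: bytes):
    aes_key = AESGCM.generate_key(bit_length=256)  # unique key per file
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, data, None)
    # Only the holder of the RSA private key can recover aes_key.
    wrapped_key = public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, nonce, ciphertext

wrapped, nonce, ct = lock_file(b"customer database contents")
# Without private_key, recovery means breaking 2048-bit RSA or
# brute-forcing a 256-bit AES key; neither is feasible.
```

Each file gets its own AES key, so recovering one key unlocks only one file; the single RSA private key is the point of leverage, which is exactly what the ransom payment buys.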

Many organizations believe that Linux is more secure than Windows, which creates a false sense of security. Potential victims can be lulled into complacency, letting patches, updates, backups, monitoring, response planning, and security staffing slide. It is only when their delicate house of cards comes crashing down that better security seems like a prudent idea.

In reality, neither Linux nor Windows is impervious to ransomware. Diligence and attention are required to maintain a proper security posture.

A Company Crushed
This may be the end of Nayana, a web hosting company at the mercy of the ransomware hackers. Since June 10th, the company has been struggling to find ways to resolve the issue and ultimately decided to negotiate with the attackers. In a posting to customers (http://www.nayana.com/bbs/set_view.php?b_name=notice&w_no=957), Nayana reported the incident and its attempts to restore data. In a second post on June 14th (http://www.nayana.com/bbs/set_view.php?b_name=notice&w_no=961), the CEO discussed the frustration and challenges of the issue. He even posted communications to the hackers, stating he expects his business will not recover.

The first installments of the ransom have reportedly been paid. File decryption and validation have begun, but it remains to be seen whether customers will stay with Nayana or leave for other service providers.

Who is Next?
Every company reliant on digital services must take cyber and ransomware risks seriously. This level of digital extortion is a new record for ransomware, resulting in a new victim destroyed. It raises the bar, but will soon become the norm.

The trend is unmistakable. Cyber criminals are becoming more technologically savvy and bolder in the targets they choose and the demands they make. Driven by greed, they are recognizing the huge potential heists available in the cyber landscape. Robbing banks, casinos, and armored cars at gunpoint seems antiquated and too risky compared to the safety and anonymity of the Internet. The new digital frontier holds much greater promise with far fewer challenges. Cyber-attacks will only get worse.

Those who protect themselves with vigor and professional security will rise above the pool of easy victims that criminals will target first. Every organization has a choice. Managing cyber risks is a real challenge, but one that should not be ignored. Investments in security must be commensurate with the value of what is being protected.

This incident must be a wake-up call and lesson to other companies. Those organizations who take cybersecurity for granted may be the next fatality.

Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit to hear insights and what is going on in cybersecurity.


Safety EDA
by Bernard Murphy on 06-23-2017 at 7:00 am

It takes courage and perhaps even a little insanity to start a new EDA venture these days – unless you have a decently differentiated value proposition in a hot market. One company that caught my eye, Austemper, seems to measure up to these standards (though I can’t speak to the insanity part). They offer EDA tooling specifically for safety, spanning from safety analysis (FIT and fault metrics), through safety synthesis, to safety verification.


Safety verification through fault injection is offered by bigger players, but even here Austemper may have an angle to differentiate their offering. What intrigues me is that safety could quite likely evolve into a specialized in-house design service, like test or power, where experts may be open to end-to-end flows rather than a collection of in-house and vendor point tools. That would play well to this kind of solution.

The company offers four tools in three functional areas, starting with SafetyScope, which computes the failures-in-time (FIT) and fault metrics for a design. The FIT calculation is based on a rather involved equation from the IEC 62380 model, where inputs can come from IP suppliers and other experts and can be augmented with user input. A safety plan can also be fed into this stage. Apparently, analysis can be “out of context”, in which case it is essentially static, or “in context”, in which case it can take usage data into account. The output of this stage is metrics across the design for FIT rate and for the diagnostic coverage required to reach target ASIL levels. This stage also generates fault-injection points to be used in the verification phase.
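To give a feel for what such metric computation involves, here is a heavily simplified sketch of an ISO 26262-style single-point fault metric (SPFM) computed from per-block FIT rates and diagnostic coverage. The block names and numbers are hypothetical, and SafetyScope's actual IEC 62380-based calculation is considerably more involved.

```python
# Simplified safety-metric sketch (illustration only): residual FIT
# after diagnostic coverage, and a single-point fault metric (SPFM).
# Real flows use much richer reliability models such as IEC 62380.

# (block, raw FIT rate, diagnostic coverage) -- hypothetical numbers
blocks = [
    ("cpu_core",     120.0, 0.99),
    ("sram",          80.0, 0.999),  # ECC-protected
    ("interconnect",  40.0, 0.90),
]

total_fit    = sum(fit for _, fit, _ in blocks)
residual_fit = sum(fit * (1.0 - dc) for _, fit, dc in blocks)
spfm         = 1.0 - residual_fit / total_fit

print(f"Total FIT:    {total_fit:.1f}")
print(f"Residual FIT: {residual_fit:.2f}")
print(f"SPFM:         {spfm:.4f}  (ASIL D requires >= 0.99)")
```

Run on these numbers, the interconnect's modest 90% coverage dominates the residual FIT, which is exactly the kind of insight that tells you where hardening effort should go.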

Safety hardening is handled by Annealer, for big changes like duplication or triplication of blocks, and by Radioscope, which does similar things at a finer grain (e.g., register banks). Here they replicate logic and inject checking logic to implement hardening. In Annealer, selected logic can be automatically duplicated with comparison checks inserted to detect mismatches, or triplicated along with majority voting. In Radioscope, similar automated replication occurs, with parity checks for duplication and ECC for triplication. Radioscope can also add protocol checks to critical FSMs for legal states and legal transitions.
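The duplication/triplication idea is easy to see in miniature. Below is a toy sketch (mine, not Austemper's generated logic) contrasting the two hardening styles: duplicate-and-compare detects a single upset, while triplicate-and-vote masks it outright.

```python
# Toy sketch of the two hardening styles described above:
# duplicate-and-compare (detect) vs. triplicate-and-vote (correct).

def logic(x):
    """Stand-in for an arbitrary replicated block."""
    return (x * 3 + 1) & 0xFF

def dup_compare(x, faulty_copy=None):
    a, b = logic(x), logic(x)
    if faulty_copy == 1:
        b ^= 0x01               # inject a single-bit upset in one copy
    return a, a != b            # result plus a mismatch (error) flag

def tmr_vote(x, faulty_copy=None):
    copies = [logic(x), logic(x), logic(x)]
    if faulty_copy is not None:
        copies[faulty_copy] ^= 0x01
    a, b, c = copies
    return (a & b) | (b & c) | (a & c)  # bitwise majority vote

assert dup_compare(5, faulty_copy=1)[1]        # upset detected, not fixed
assert tmr_vote(5, faulty_copy=1) == logic(5)  # upset masked entirely
```

The hardware cost difference mirrors the real tools: duplication pays 2x area plus a comparator and still needs a recovery strategy, while triplication pays 3x area plus voters but keeps running through a single fault.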

The final tool in the flow is Kaleidoscope, which does fault simulation based on injected faults, as is generally required as part of verification for safety-critical designs. Here they use their own fault simulator to simulate behavior for faults injected into the gate-level design, but with some wrinkles. First, they can take a VCD produced by any simulator as a starting point. They also intelligently bound each fault simulation, in time and in design scope, to limit run-time. And they can run many injected faults in parallel to classify a large number of faults as masked or failed-state in a single run.
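A stripped-down sketch of that classification idea (my own toy, not Kaleidoscope's engine): inject one stuck-at fault per pass into a tiny netlist, replay a bounded stimulus window, and classify each fault by comparing outputs against a golden run.

```python
# Toy fault-classification loop in the spirit of gate-level fault
# simulation: one stuck-at fault per pass, a bounded replay window,
# and comparison against a golden trace. Illustration only.
import itertools

def netlist(a, b, c, stuck):
    n1 = stuck.get("n1", a & b)   # AND gate, fault site "n1"
    n2 = stuck.get("n2", n1 ^ c)  # XOR gate, fault site "n2"
    return n2

# Bounded window: only the first 6 of 8 possible input vectors
stimulus = list(itertools.product([0, 1], repeat=3))[:6]
golden = [netlist(a, b, c, {}) for a, b, c in stimulus]

for site, value in [("n1", 0), ("n1", 1), ("n2", 0), ("n2", 1)]:
    faulty = [netlist(a, b, c, {site: value}) for a, b, c in stimulus]
    status = "detected" if faulty != golden else "masked"
    print(f"stuck-at-{value} on {site}: {status}")

# Note the trade-off bounding creates: n1 stuck-at-0 only propagates
# when a & b == 1, which this window never exercises, so it reports
# as masked. Shorter windows run faster but classify less precisely.
```

The real tool's wrinkles (starting from an arbitrary VCD, scoping each fault's simulation region, and batching many faults per run) are all about doing this same comparison at scale without re-simulating the whole design for every fault.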

On customers, there’s the usual problem of not being able to reveal names, but Sanjay Pillay, the CEO, did tell me that they have been working for a year with a supplier to a tier-1 customer, which has now taped out. They are now working with their second customer. This sounds like an interesting company with some real-world (if not shareable) validation.

Austemper was founded in 2015 and is based in Austin. Sanjay previously led SoC development organizations at a variety of companies, including development for tier-1 companies. He also served as functional safety consultant in some of these roles. You can learn more about the company HERE.


Amazon eating Whole Foods is nothing as entire industries are about to become toast
by Vivek Wadhwa on 06-22-2017 at 12:00 pm

I doubt that Google and Microsoft ever worried about the prospect that a book retailer, Amazon, would come to lead one of their highest-growth markets: cloud services. And I doubt that Apple ever feared that Amazon’s Alexa would eat Apple’s Siri for lunch.

For that matter, the taxi industry couldn’t have imagined that a Silicon Valley startup would be its greatest threat, and AT&T and Verizon surely didn’t imagine that a social media company, Facebook, could become a dominant player in mobile telecommunications.

But this is the new nature of disruption: disruptive competition comes out of nowhere. The incumbents aren’t ready for this, and as a result the vast majority of today’s leading companies will likely become what I call toast—in a decade or less.

Note the march of Amazon. First it was bookstores, publishing and distribution; then cleaning supplies, electronics and assorted home goods. Now Amazon is set to dominate all forms of retail as well as cloud services, electronic gadgetry and small-business lending. And its proposed acquisition of Whole Foods sees Amazon literally breaking the barriers between the digital and physical realms.

This is the type of disruption we will see in almost every industry over the next decade, as technologies advance and converge and turn the incumbents into toast. We have experienced the advances in our computing devices, with smartphones having greater computing power than yesterday’s supercomputers. Now, every technology with a computing base is advancing on an exponential curve — including sensors, artificial intelligence, robotics, synthetic biology and 3-D printing. And when technologies converge, they allow industries to encroach on one another.

Uber became a threat to the transportation industry by taking advantage of the advances in smartphones, GPS sensors, and networks. Airbnb did the same to hotels by using these advancing technologies to connect people with lodging. Netflix’s ability to use internet connectivity put Blockbuster out of business. Facebook’s WhatsApp and Microsoft’s Skype helped decimate the costs of texting and roaming, causing an estimated $386 billion loss to telecommunications companies from 2012 to 2018.

Similarly, having proven the viability of electric vehicles, Tesla is building batteries and solar technologies that could shake up the global energy industry.

Now tech companies are building sensor devices that monitor health. With artificial intelligence, these will be able to provide better analysis of medical data than doctors can. Apple’s ResearchKit is gathering so much clinical-trial data that it could eventually upend the pharmaceutical industry by correlating the effectiveness and side effects of the medications we take.

As well, Google, Facebook, SpaceX, and OneWeb are in a race to provide Wi-Fi internet access everywhere through drones, microsatellites and balloons. At first, they will use the telecom companies to provide their services; then they will turn them into toast. The motivation of the technology industry is, after all, to have everyone online all the time. Their business models are to monetize data rather than to charge cell, data, or access fees. They will also end up disrupting electronic entertainment — and every other industry that deals with information.

The problem for market leaders is that they aren’t ready for this disruption and are often in denial. The disruptions don’t happen within an industry, as business executives have been taught by gurus such as Clayton Christensen, author of the management bible “The Innovator’s Dilemma”; rather, they come from where you would least expect them. Christensen postulated that companies tend to ignore the markets most susceptible to disruptive innovations because these markets usually have very tight profit margins or are too small, leading competitors to start off by providing lower-end products and then scale them up, or to go for niches in a market that the incumbent is ignoring. But the competition no longer comes from the lower end of a market; it comes from other, completely different, industries.

Because they have succeeded in the past, companies believe that they can succeed in the future, that old business models can support new products. Large companies are usually organized into divisions and functional silos, each with its own product development, sales, marketing, customer support and finance functions. Each division acts from self-interest and focuses on its own success; within a fortress that protects its ideas, it has its own leadership and culture. And employees focus on the problems of their own divisions or departments — not on those of the company. Too often, the divisions of a company consider their competitors to be the company’s other divisions; they can’t envisage new industries or see the threat from other industries.

This is why the majority of today’s leading companies are likely to go the way of Blockbuster, Motorola, Sears and Kodak, which were at the top of their game until their markets were disrupted, sending them toward oblivion.

Companies now have to be on a war footing. They need to learn about technology advances and see themselves as a technology startup in Silicon Valley would: as a juicy target for disruption. They have to realize that the threat may arise in any industry, with any new technology. Companies need all hands on board — with all divisions working together employing bold new thinking to find ways to reinvent themselves and defend themselves from the onslaught of new competition.

The choice that leaders face is to disrupt themselves — or to be disrupted.

For more, read my book, The Driver in the Driverless Car, and visit my website: www.wadhwa.com


Electronics upturn boosting semiconductors
by Bill Jewell on 06-21-2017 at 12:00 pm

Production of electronics has been accelerating in the last several months, contributing to strong growth in the semiconductor market. China, the largest producer of electronics, has seen three-month-average change versus a year ago (3/12 change) accelerate from below 10% for most of 2016 to 14.5% in April 2017. China’s April growth rate is the highest in over five years. United States electronics production 3/12 change has been over 5% since December 2016. U.S. electronics 3/12 change had not been over 5% in over ten years, since November 2006.
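For readers who want to reproduce the metric, the 3/12 change is simple to compute from a monthly production index. Here is a minimal pandas sketch with made-up numbers (the real series come from national statistics agencies):

```python
# 3/12 change: three-month moving average of a monthly index,
# compared against the same average twelve months earlier.
import numpy as np
import pandas as pd

# Hypothetical monthly electronics-production index
months = pd.date_range("2015-01-01", periods=28, freq="MS")
prod = pd.Series(100 * 1.008 ** np.arange(28), index=months)

three_month_avg = prod.rolling(3).mean()
change_3_12 = three_month_avg.pct_change(periods=12) * 100  # percent

print(change_3_12.dropna().tail())
```

The three-month average smooths out monthly noise, and the twelve-month comparison removes seasonality, which is why the metric is useful for spotting genuine acceleration or deceleration.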


The European Union (EU) no longer releases monthly data on electronics production. EU total industrial production 3/12 change was 2.4% in April 2017. EU industrial production growth has been in the range of 1% to 3% since November 2013, following 20 months of 3/12 decline in 2012 and 2013. Japan electronics production has been volatile, with 3/12 change over the last seven years ranging from a 27% decline to 8% growth. March 2017 3/12 change was 1.6%, the first positive 3/12 change in the last 18 months.

According to data from World Semiconductor Trade Statistics (WSTS), semiconductor market 3/12 change has accelerated dramatically over the last year, from a 7% decline in May 2016 to 21% growth in April 2017. Accelerating electronics production has driven much of the semiconductor growth. Other driving factors are rising memory prices and inventory restocking.

Although China is the dominant Asian electronics producer, other countries play a significant role. Electronics production 3/12 change over the last year shows a mixed picture. Vietnam and India are significant emerging electronics producers. However, over the last year the 3/12 change for each country has decelerated from over 20% to about 2% in April 2017. Malaysia has shown steady growth in the 6% to 8% range. Thailand has bounced back from declines of 5% to 8% a year ago to double-digit growth in each of the last two months.


South Korea and Taiwan have historically been significant electronics producers. South Korea has shown positive 3/12 change for the last 20 months, after 12 months of decline from late 2014 to late 2015. Taiwan has been weak over the last couple of years, with 3/12 declines since April 2015. Taiwan has been among the countries hardest hit by the shift of electronics manufacturing to China. In many cases, it has been Taiwan-based companies behind the shifts.

The table below shows average hourly manufacturing compensation costs (wages + benefits) for key countries. The location of electronics production depends on several factors, but compensation costs are a major consideration for labor-intensive, low-skilled manufacturing. The U.S., Euro Area, Japan and South Korea are high-cost areas, with compensation costs over $20 per hour. Taiwan is trending toward high cost at about $10 per hour. China, at $3.52 per hour, is low cost compared to the above countries. However, Vietnam and India, at about $1 per hour, have costs less than one third of China’s. Thus India and Vietnam electronics manufacturing should grow faster than China’s over the next several years.


Trends in electronics production bear watching. A slowdown in the growth rate of electronics will lead to a downturn in the semiconductor market. A drop-off in electronics may also lead to falling memory prices and semiconductor inventory reductions – which could drive the semiconductor market negative. As we stated in our semiconductor forecast last month, this downturn could occur as early as 2019.


Can AI be Conscious?
by Bernard Murphy on 06-21-2017 at 7:00 am

A little self-indulgence for the season, to lighten the relentless diet of DAC updates. I found a recent Wired article based on a TED talk on consciousness. The speaker drew the conclusion that consciousness is not something that could ever be captured in a machine, and is a unique capability of living creatures (or at least humans). After reading an article on the TED talk and watching a related talk, I’m not so sure, but I am fairly convinced that whatever we might build in this direction may be quite different from our consciousness, will probably take a long time, and will be plagued with problems.


The TED event (speaker Anil Seth, professor of neuroscience at the University of Sussex in the UK) is not posted yet, but there is a more detailed talk by the same speaker on the same topic, given recently at the Royal Institution, which I used as a reference.

First, my own observations (not drawn from the talk). AI today is task-based, in each case skilled at doing one thing. That thing might be impressive, like playing Go or Jeopardy, providing tax advice or detecting cancerous tissue in mammograms, but in each case what is offered is still skill in one task. A car-assembly robot can’t compose music and even Watson can’t assemble a car. Then why not put together lots of AI modules (and machinery) to perform lots of tasks or even meta-tasks? Doesn’t that eventually surpass human abilities?

I suspect that the whole of human ability might be greater than the sum of the parts. Most of us are probably familiar with task-based workers. If I am such a worker, you tell me what to do, I do it, then I wait to be told what task I should do next, as long as it is a task I can already do. Other workers provide an obvious contrast: they figure out on their own what the next task should be, they develop an understanding of higher-level goals, and they look for ways to improve and optimize to further their careers. This requires more than an accumulation of task or even meta-task skills. It requires adaptation towards goals of reward, a sense of accomplishment or a desire for self-betterment, which I’d assert requires (at a minimum) consciousness.

Which brings me to Anil Seth’s talk. He co-directs a center at the University of Sussex for the scientific study of consciousness; the Royal Institution talk discusses some of their findings. To focus the research, he bounds the scope of study to accounting for various properties of consciousness, ducking the obvious challenges in answering questions around the larger topic.

He narrows the scope further to what he thinks of as the first step in self-awareness, which he calls bodily consciousness: awareness of what we see, feel and so on. His research shows a Bayesian prediction/reasoning aspect to this. Think of our visual awareness. We get input from our eyes, the visual cortex processes it, then our brain constructs a prediction of what we are seeing based on this and other input, and on past experiences (hence the Bayes component), which is then compared against sensory inputs and adapted. In his words, we create a fantasy which we adjust to best match what we sense with prior experience; this we call reality. He calls this a controlled hallucination.
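The Bayesian flavor of that loop can be made concrete with a toy example (my own illustration, not from Seth's work): a prior belief about what is out there combines with a noisy sensory likelihood, the posterior becomes the current "best guess" percept, and that posterior serves as the prior for the next glance.

```python
# Toy predictive-perception loop: posterior ~ likelihood * prior,
# iterated as new (noisy) sensory evidence arrives. Illustration only.
import numpy as np

hypotheses = ["bear", "bush", "rock"]
prior = np.array([0.05, 0.55, 0.40])     # expectations from experience

# P(sensory input | hypothesis) for one ambiguous dark shape
likelihood = np.array([0.70, 0.25, 0.05])

for glance in range(3):
    posterior = likelihood * prior
    posterior /= posterior.sum()         # normalize
    percept = hypotheses[int(np.argmax(posterior))]
    print(f"glance {glance}: belief={np.round(posterior, 3)} -> {percept}")
    prior = posterior                    # today's percept, tomorrow's prior
```

With these numbers the first glance still reads as "bush" (the prior wins), but repeated consistent evidence flips the percept to "bear", the controlled hallucination being corrected by its inputs.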

This reality is not only based on what we sense outside ourselves; it is also based on what we sense inside our bodies. I see a bear and I sense the effects of adrenalin on my system: my heart races, my hair (such as it is) stands on end, and I feel the need to run (perhaps not wise). All of this goes into the Bayesian prediction, which we continue to refine through internal and external sensing. I should add, by the way, that this is not mere philosophizing; all of this is derived from detailed experiment-based studies in the U. Sussex consciousness group.

So just this basic level of consciousness, before we get to volition and a sense of identity built through our experiences and social interaction, is a very complex construct. It depends on sensory input from external sources, certainly, but it also depends on our biology, which has evolved for fight or flight, attraction and other factors. So one takeaway is that reconstructing the same kind of consciousness without the same underlying biology would be difficult.

Anil Seth asserts that it is therefore impossible to create consciousness without biology. That seems to me a bridge too far. What we are doing now in deep learning, in object recognition for example, transcends traditional machine behavior in not being based on traditional algorithms. And if we can reduce aspects of consciousness to mechanized explanations like Bayesian prediction, there is no obvious reason why we should not be able to do the same in a machine. We would probably have the same challenges in explaining the behavior of the machine, but not in creating the machine. This would be a non-biological consciousness (the machine could, however, introspect on its own internals), but not necessarily a lesser consciousness.


There’s an important downside. Just as the brain can have pathological behaviors in this controlled hallucination, with serious consequences not just for the owner of the brain but also for others, the same would be true for machines of this type. But understanding and control are potentially more difficult in the machine case, because the “reality” perceived by the machine may not align with our reality even in non-pathological behavior. We may struggle to find reference points for normal behavior, and struggle even more to understand and correct pathologies. Hence my view that trustable machine consciousness may take a while.

On that note, sleep well. What could possibly go wrong?