This strip is about an old-timer talking with a smart aleck who questions why experience is relevant in today's "fast-paced" technology industry. It has shown up so much on LinkedIn that I thought I should make a separate post, copy my responses into it, and just link to it next time.
Continue reading “Dilbert Flopped – But We Still Laugh”
Integrated Photonics – Getting Light to the PIC by Embedding Polymer Waveguides in the PCB
This week I had the privilege of representing one of my clients at the AIM Photonics roadmap meetings held at MIT in Cambridge, MA. While this was a closed meeting for AIM members, it's no secret that AIM Photonics, which stands for American Institute for Manufacturing Integrated Photonics, is working diligently to bring up a viable ecosystem for integrated photonic design. While it was heartening to see all of the progress being made, it also became very clear to me that the term "integrated photonics" means a lot of things to a lot of different people. When asked by Dan Nenni what we should call this subject area on SemiWiki, I wrongly followed my 30+ years of electronic IC design thinking and replied, "Silicon Photonics".
While Silicon Photonics is the holy grail for fully integrating photonics and electronics, I quickly realized that most photonics wasn't being done in silicon yet. Upon realizing my mistake, I started referring to it as "Integrated Photonics".
This week my eyes were opened yet again to the fact that I am a product of my upbringing. I had foolishly been thinking integrated would mean 'on a chip', whether it be made of silicon, indium phosphide, gallium arsenide or some other material. The first half day of the sessions at MIT proved this to be woefully short-sighted. Photonics has been around for the last 20+ years, and it has slowly but surely been working its way from bulky discrete devices and glass-core fibers towards integration onto the chip. What I failed to remember is that you don't get to the chip unless you go through a printed circuit board (PCB) and some form of package.
Today active optical cables (AOCs), which are optical cables with built-in transceivers on each end, convert optical signals into electrical signals which then run through connectors and PCB traces to the electronic ICs (see figure from an OSA paper by IBM). The move now is to put the “active” optical parts of the transceivers closer to the electronics (i.e. on the board), leaving the “passive” optical parts in the cables. To do this however, you must have a way to move light around the board.
Make no mistake, there are already production photonic ICs (PICs) being put on boards, but until now this has mostly been accomplished using somewhat cumbersome connections called optical pigtails, which carry light directly from the photonic I/O of the PIC to a connector at the edge of the board. Pigtails usually use fiber ribbons that 'fly over' the board. While this works, it is costly for several reasons. First, it usually implies some kind of manual connection being made from the pigtail to the connector on the board. Additionally, routing of the fiber ribbon is problematic, and in many cases the ribbon is not mechanically held down to the board, making for a not-so-robust environment where snagging of the ribbon is an ever-present problem; in some cases the ribbon has even been known to restrict airflow over the PCB, causing heating problems.
Forgoing a lot of details, suffice it to say that significant progress is being made in the area of embedding polymer-based waveguides into chip carriers and the PCBs themselves to guide light around the board, much like what is being done to guide light through waveguides on chip. Additionally, there is focus on best methods to cheaply and accurately transmit the light from fiber into these on-board waveguides and from those waveguides onto their counterparts on the PICs. This includes novel structures for passively flip-chipping the PICS onto the carriers or substrates to make a photonic connection (see second figure from the IBM OSA paper).
More recently a new innovation has come out of PETRA (the Photonics and Electronics Technology Research Association of Japan) called photonic pins, which can transmit light vertically through windows in the package, allowing transmission of light from the PCB to the PIC and back (see figure from PETRA's chip-scale packaging paper). It's not as easy as you may initially think, as there is a considerable difference in size between the fiber and the PCB waveguide, and again between the PCB waveguide and the PIC waveguides. Add to this the push to move from multi-mode photonics, which is inherently easier to manufacture, to single-mode photonics to enable wavelength division multiplexing. Moving to single-mode photonics is challenging on the PCB because the waveguides must be made much smaller and to tighter specifications to confine the light to a single mode. Nonetheless, significant progress was reported, with one vendor showing a PCB with 16 electrical layers and 4 optical layers (albeit still multi-mode) and another vendor showing novel connectors that would allow a PIC to be inserted into a chip carrier with ready-made light channels to make the connections to the board waveguides.
Embedding polymer waveguides into interposers and PCBs promises to remove the need for pigtails, manual connections and flyover ribbons, making for much more cost-effective integration. This leads me back to where I started: we are seeing an unprecedented jump in capabilities across the entire ecosystem to truly enable "integrated photonics" from the system all the way down to the PICs, including connectors, printed circuit boards, chip carriers and packaging.
LETI Day 2016 : Security in Lyon, Sensor at Semicon West on July 12th
It was the very first time I attended the LETI Days, even though I have known the research center for many years. LETI was created in the 1960s as the subsidiary of the CEA (the French agency in charge of atomic energy) responsible for microelectronics. For more than 50 years now, its 2,000 research engineers have been developing technologies, systems and specific ICs and licensing them to companies ranging from small European firms to semiconductor giants like Intel. If you are a loyal SemiWiki reader, you certainly remember the numerous articles written about FD-SOI. We have mentioned STMicroelectronics' great achievement of licensing FD-SOI to Samsung, but we should have said that the technology was initially licensed to ST by… LETI!
Having been asked by LETI to help them prepare LETI Day 2016 (June 23rd in Lyon) by creating dialogues between the presentations, I had the opportunity to look at the presentations in advance, interview the people from LETI and also attend the conference. This year the focus was "security," and we had people from many different industries: avionics (Dassault Aviation), mining (Davey Bickford), smart card secure ICs (Oberthur Technologies), systems (Safran Identity and Security), IoT platforms (Intel) and FD-SOI manufacturing (GlobalFoundries), but also a biotech start-up and an artificial heart manufacturer (Carmat). I will share a few of the many things I learned during this conference.
When talking about security, you probably first think about algorithms, but LETI has also built and licensed complete wireless systems, including specific protocols, with some of its partners; for example, in the mining industry, to replace the wires used to connect as many as 4,000 detonators. LETI's vision is that security will be the enabler of the multiple innovations to be developed to create the IoT market. I share this vision 100%: nobody will buy and use a system that has been identified as a threat to their life…
Let's take the Carmat example. This company has developed an artificial heart and has started to implant it into patients. Can you imagine the wireless system used to monitor this heart being hacked?
We had a very interesting panel involving several biotech players, from start-ups to mature companies:
- Alain Grimme, CARMAT CTO
- Sylvain Rousson, DIABELOOP CTO
- Ali Agah, Associate Director with ILLUMINA
- F. Sauter from CLINATEC
- P. Caillat, Strategic Marketing LETI, in charge of Medical Services
- And Alain Merle, Security guru at LETI
There was no real debate, as all of them agreed on the need for secure wireless transmission of data, but the reasons why their systems need security differed: from life-critical concerns at Carmat and Diabeloop to the confidentiality of DNA data at Illumina. I recommend the presentation from Carmat, illustrated by animated slides showing heart movements!
In fact, Alain Merle from LETI summarized the security needs well: we should think in terms of security levels, implying that different applications may need different levels of security (due to the cost of security). He also noted that the industry should quickly define standards for security, allowing the level of security to be assessed and validated. It seems that LETI has started to work on the standards topic, which is good because the research center is an independent entity… and users don't want security to be captured by a single company (think about the HDMI protocol standard, for example, captured by Silicon Image…).
I have been attending conferences frequently since 1986, and I must confess that my attention was completely held by the presentations, which is not always the case… Probably because when innovation is in the DNA, and the commercial part of the presentations is present but kept low enough, you enjoy listening and learning.
Did I mention that Intel was present (Sameer Sharma, GM IoT New Market and Business Development) and that Sameer shared Intel's vision of IoT? This was probably linked to Intel's licensing deal with LETI, not about FD-SOI but about high-performance computing (HPC)… GlobalFoundries' presentation about FD-SOI was very interesting for better understanding how GloFo is marketing the 22FDX technology, addressing the IoT platform market (and not edge devices).
The LETI Day took place in Europe a couple of weeks ago, but you can meet LETI's top management (CEO Marie N. Semaria, VP Sales & Marketing Jean-Eric Michallet) and its MEMS and sensor experts next week in Silicon Valley during Semicon West.
You can download the agenda for the workshop, to be held during Semicon West, here
Event Information: July 12, 2016 – 5-9 p.m. / W Hotel – 181 3rd St., San Francisco
Focus: “LETI complete portfolio of sensor technologies for automotive, advanced manufacturing, security, bio & health and smart buildings.”
Attend stimulating discussions around LETI's vision for sensors. LETI experts will present their latest achievements and technology roadmap for sensing.
By Eric Esteve of IPNEST
IoT Tutorial: Chapter 6 – IoT at the Edge
IoT and the Edge Computing Paradigm – Why Edge Computing? In previous chapters we illustrated how cloud computing enables today's IoT applications to benefit from its capacity, scalability, elasticity and pay-as-you-go nature. In recent years a number of IoT/cloud deployments have demonstrated the merits of integrating IoT and cloud. Nevertheless, the proliferation of IoT applications, including BigData applications that comprise IoT streams, seems to be driving conventional centralized cloud architectures to their limits.
For example, in large-scale applications, the integration of millions of IoT streams in the cloud leads to very high bandwidth consumption, significant network latency and the need to store a large amount of information with limited (or even zero) business value. This is typically the case when sensor data that does not change frequently (e.g., temperature readings) is streamed to the cloud. Likewise, IoT applications increasingly ask for greater flexibility in handling multiple distributed and heterogeneous devices, in a way that provides scalability and effective handling of data security and privacy, especially in cases where users need to limit access to their private data.
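To make the bandwidth argument concrete, here is a minimal sketch (function and variable names are hypothetical, not from any particular platform) of a deadband filter an edge node could apply to a slowly changing temperature stream, forwarding a reading to the cloud only when it deviates from the last forwarded value by more than a threshold:

```python
def deadband_filter(readings, threshold=0.5):
    """Forward a reading only when it differs from the last
    forwarded value by more than `threshold` (a deadband).
    Returns the list of readings that would be sent upstream."""
    forwarded = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            forwarded.append(value)
            last_sent = value
    return forwarded

# A slowly drifting temperature stream: most samples are redundant.
stream = [20.0, 20.1, 20.05, 20.9, 21.0, 21.02, 19.9]
sent = deadband_filter(stream, threshold=0.5)
# Only 3 of the 7 samples cross the deadband and need to be sent.
```

Even this trivial local filter cuts upstream traffic substantially for slowly varying signals, which is precisely the class of data the paragraph above identifies as having limited business value in raw form.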
Also, data-intensive IoT applications (including BigData applications) ask for cost-effective and resource-efficient handling of data streams, which might involve storage and processing of data closer to end-users and physical systems. Typical examples of such IoT and BigData applications are discussed in a later chapter and can be found in the areas of next-generation manufacturing (Industrie 4.0), smart cities and media.
In order to cope with these requirements, the cloud industry has recently introduced the edge computing paradigm (also called “fog computing”), which extends conventional centralized cloud infrastructures with an additional storage and processing layer that enables execution of application logic close to end-users and/or devices. Edge computing foresees the deployment of edge/fog nodes, which may range from IoT embedded devices featuring limited storage, memory and processing capacity to whole data centers (i.e. “local clouds”) which are deployed close to end-users and physical infrastructures.
Overall, the edge/fog computing paradigm extends the cloud paradigm to the edge of the network. In this way, it is also appropriate for serving mobile users, who typically have local short-distance high-rate connections and hence can benefit from computing, storage and communication resources in their vicinity, instead of interfacing to a centralized back-end cloud. The proximity of resources helps overcome the high latency associated with provisioning cloud services to mobile users.
Taxonomy of IoT Edge Computing Applications
Edge computing has therefore been introduced in order to deal with IoT applications that suffer from the limitations and poor scalability of the centralized cloud paradigm, including:
- Applications sensitive to latency, such as near-real-time IoT applications. Typical examples are applications based on Cyber-Physical Systems in the areas of manufacturing, urban transport and smart energy, where actuation has to be driven in real time based on data processing close to the physical infrastructure. Note that several of these applications also require predictable latency, which is hardly possible when processing data and invoking services from the back-end cloud.
- Geo-distributed applications, including sensor network applications, which have to cope with data processing at the local level prior to streaming data to the cloud. The edge/fog computing paradigm enables these applications to deal with large-scale, geographically and administratively dispersed deployments. Typical examples include environmental monitoring and smart city applications, which are based on the collection and processing of streams from thousands or even millions of distributed sensors. Edge nodes enable the decentralization of these applications, thus facilitating scalable processing and delivering significant bandwidth savings.
- Mobile applications, notably applications involving fast-moving objects (e.g., connected vehicles, autonomous cars, connected rail). As already outlined, these applications require interfacing of moving devices/objects to local resources (computing, storage) residing in their vicinity.
- Large-scale distributed control systems (smart grid, connected rail, smart traffic light systems), which typically combine properties of the geo-distributed and real-time applications outlined above. Edge computing deployments enable them to deal with scalability and latency issues.
- Distributed multi-user applications with privacy implications and need for fine-grained privacy control (such as processing of personal data). These applications can benefit from a decentralization of the storage and management of private data to the various edge servers, thus alleviating the risk of transferring, aggregating and processing all private datasets at the centralized cloud. Furthermore, edge computing deployments enable end-users to have better and isolated control over their private data within the edge servers.
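The taxonomy above can be condensed into a toy placement heuristic. The sketch below is purely illustrative (the function name, parameters and thresholds are all assumptions, not part of any standard): it routes a workload to the edge when the cloud round-trip alone would break the latency bound, when private data should not leave the premises, or when the deployment is widely geo-distributed.

```python
def place_workload(latency_ms_required, privacy_sensitive,
                   geo_distributed, cloud_rtt_ms=100):
    """Toy edge-vs-cloud placement decision mirroring the taxonomy:
    latency-sensitive, privacy-sensitive and geo-distributed
    workloads go to the edge; everything else can stay centralized.
    All thresholds are illustrative."""
    # If the required latency is tighter than a typical round-trip
    # to the back-end cloud, only edge processing can meet it.
    if latency_ms_required < cloud_rtt_ms:
        return "edge"
    # Private data benefits from staying on edge servers; widely
    # dispersed deployments benefit from local pre-processing.
    if privacy_sensitive or geo_distributed:
        return "edge"
    return "cloud"

# A real-time actuation loop with a 20 ms budget must run at the edge.
decision = place_workload(20, privacy_sensitive=False, geo_distributed=False)
```

Real placement engines weigh many more factors (bandwidth cost, node capacity, mobility), but the shape of the decision is the same.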
Mobile Edge Computing
Edge computing deployments that involve mobile and roaming devices are characterized as "Mobile-Edge" Computing deployments. They provide the means for accelerating content, services and applications, while ensuring increased responsiveness from the edge/cloud infrastructure. Moreover, they facilitate the engagement of mobile service operators, which can offer their services taking into account the radio and network conditions in the edge servers' vicinity. "Mobile-Edge" computing deployments are characterized by the following properties:
- On-premise isolation: Mobile-edge deployments can be isolated from the rest of the IoT network, thus enabling M2M applications that require increased security. Indeed, this isolation renders M2M applications less prone to errors at other points of the network.
- Proximity for low-latency processing: Edge servers enable access to mobile devices, while running services (including data intensive services) close to them. This reduces latency and provides a basis for bandwidth savings and optimal user experience (e.g., due to better responsiveness).
- Location and context awareness: Edge servers are typically associated with specific locations and can therefore enable location based services, which is a prominent class of mobile/roaming services. Furthermore, this enables a wave of new business services that utilize network context and locations, such as services associated with users’ context, points of interest and events.
IoT/Edge Computing Convergence Challenges
Edge computing based deployments of IoT applications are still in their infancy, even though we expect them to proliferate due to their clear economic benefits (e.g., bandwidth savings, energy savings) and enhanced functionality when compared to conventional IoT/cloud deployments. Nevertheless, a number of technical challenges exist, which have recently given rise to additional technological developments. One challenge concerns the integration of IoT nodes with the cloud, given that cloud infrastructures are typically based on powerful computing nodes that do not support embedded systems.
In order to address this challenge, edge computing infrastructures integrate containers (e.g., Docker or the Open Container Initiative) as the virtualization technology for application deployment and enactment. Owing to their lightweight and resource-efficient nature, container technologies are appropriate for supporting rapid and flexible deployment of application logic on control nodes. Containers are indeed much smaller in size compared to Virtual Machines (VMs), which facilitates their rapid transport and deployment to edge devices/nodes outside the high-speed networks of the cloud data center. Note, however, that early efforts towards integrating container virtualization into edge computing are tailored to the architectures and infrastructures of a single vendor and are not appropriate for large-scale deployments comprising heterogeneous virtualization infrastructures and containers.
Likewise, there are no tools and techniques providing easy ways for run-time deployment and management of heterogeneous container technologies in the scope of large-scale edge computing infrastructures. Thus, several edge computing deployments still have to deal with conventional high-overhead virtualization infrastructures.
Another challenge relates to the development and deployment of techniques for efficient distributed data analytics, given that non-trivial edge computing applications (such as large-scale applications and/or applications that are subject to Quality-of-Service (QoS) constraints) need to dynamically distribute analytics functions for optimal placement between the edge and the cloud. At the same time, efficient data analytics at the edge of the network is a key prerequisite for supporting a large number of real-time control applications, which exploit high-performance edge processing of data streams in order to drive actuation functionality.
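One common way to split analytics between edge and cloud is pre-aggregation: the edge node reduces raw samples to per-window summaries and ships only the summaries upstream. A minimal sketch (names and window size are illustrative assumptions):

```python
def edge_aggregate(samples, window=5):
    """Reduce raw sensor samples to per-window summaries
    (min / mean / max) so that only the summaries, not the raw
    stream, need to travel from the edge to the cloud."""
    summaries = []
    for i in range(0, len(samples), window):
        w = samples[i:i + window]
        summaries.append({
            "min": min(w),
            "max": max(w),
            "mean": sum(w) / len(w),
        })
    return summaries

# 10 raw samples collapse into 2 summary records: a 5x reduction
# in records sent upstream, at the cost of losing per-sample detail.
summaries = edge_aggregate([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], window=5)
```

The open research problem the paragraph above describes is deciding, at runtime, which functions (filtering, aggregation, model scoring) run at the edge versus in the cloud; this static window-based split is the simplest possible instance of that trade-off.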
Moreover, despite the emergence of IoT analytics platforms, there are still no frameworks that can effectively deal with semantic interoperability across diverse high-velocity data streams, i.e., in a way that alleviates heterogeneity not only in terms of diverse data formats and protocols, but also in terms of the semantics of the data streams. State-of-the-art streaming engines for cloud computing (e.g., Complex Event Processing (CEP) systems) are not appropriate for edge computing, given their poor performance when dealing with networked streams and the associated requirements for very low latency.
Furthermore, state-of-the-art platforms are not able to dynamically allocate data processing to different devices at runtime, nor to dynamically realize the optimal "split" between centralized (i.e., on central cloud infrastructures) and decentralized (i.e., on the edge) processing at runtime. The issue of IoT analytics will be discussed more extensively in coming chapters, notably the chapter dedicated to IoT analytics and IoT/BigData convergence.
IoT deployments based on edge computing also present new security and privacy challenges. Indeed, they involve physical systems, sensing devices, embedded and smart embedded systems, etc., which interact with multiple cloud infrastructures, thus imposing needs for trustworthiness at multiple layers, including secure and privacy-friendly operation of containers and secure information exchange across networks and clouds. Such challenges become more pressing as several IoT applications (e.g., healthcare) involve processing of private data at the edge.
Finally, there is a need for “Edgification” of conventional applications. Significant effort has been recently allocated in the process of migrating conventional distributed services (e.g., SOA services) to the cloud. Nevertheless, the issue of migrating services in an edge computing environment involves the dynamic (re-)distribution of the service across edge and cloud parts and has not been adequately addressed yet.
In coming years we will be witnessing an increased number of edge computing deployments, which will be addressing the above-listed challenges. The proliferating number of edge deployments will be a direct result of the need to save bandwidth and energy costs, deal with scale and the resource constrained nature of IoT devices in a way that still leverages virtualization, as well as the need to increase data privacy for end-users.
Resources for Further Reading
For a general introduction to the fog/edge computing paradigm, please consult the following articles:
- Flavio Bonomi, Rodolfo Milito, Jiang Zhu, Sateesh Addepalli, "Fog computing and its role in the internet of things," Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing (MCC '12), pp. 13-16.
- Flavio Bonomi, Rodolfo Milito, Preethi Natarajan, and Jiang Zhu. 2014. Fog computing: A platform for internet of things and analytics. In Big Data and Internet of Things: A Roadmap for Smart Environments. Springer, 169–186.
- N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: A survey," Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.
- M. Shiraz, A. Gani, R. H. Khokhar, and R. Buyya, "A Review on Distributed Application Processing Frameworks in Smart Mobile Devices for Mobile Cloud Computing," IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 1294–1313, 2013.
Edge Computing Applications and Architectures for specific application areas are discussed in the following publications, papers and books:
- Eva Geisberger / Manfred Broy (Eds.): Living in a Networked World. Integrated Research Agenda Cyber-Physical Systems (agendaCPS) (acatech STUDY), Munich: Herbert Utz Verlag, 2014.
- NGMN white paper, "Audiovisual Media Services and 5G" – https://tech.ebu.ch/docs/public/5G-White-Paper-on-Audiovisual-media-services.pdf
- Industrial Internet Consortium, “Industrial Internet Reference Architecture”, version 1.7, June 2015.
Edge Computing Infrastructures and Applications are discussed in other LinkedIn posts as well, for example:
- https://www.linkedin.com/pulse/mobile-solutions-iot-raise-prospects-edge-computing-dan-bieler?articleId=7319588356554837017#comments-7319588356554837017&trk=sushi_topic_posts
- https://www.linkedin.com/pulse/mobile-edge-computing-market-size-patrick-lopez?articleId=9023473015607893633#comments-9023473015607893633&trk=sushi_topic_posts
- https://www.linkedin.com/pulse/case-edge-computing-iot-learie-hercules?articleId=8491621607555033925#comments-8491621607555033925&trk=sushi_topic_posts
Obama’s greatest legacy may be the global entrepreneurship he sparked
It is rare to go to a government event, especially where political leaders are speaking, in which you can stay awake or be truly inspired. Indeed, I had very low expectations of President Obama’s Global Entrepreneurship Summit (GES), which was held at Stanford University last week. I thought it would be nothing more than a publicity vehicle for the administration. But I left extremely impressed with the dynamism and energy that it generated and the positive impact it had on the entrepreneurs who were there from the United States and the developing world.
This was the seventh annual GES. The first, which was held at the White House in 2010, was announced by Obama in Cairo in 2009 to “deepen ties between business leaders, foundations and social entrepreneurs in the United States and Muslim communities around the world.” Its scope has since been expanded to include entrepreneurs from all communities.
Government efforts to promote entrepreneurship always fail because they focus on building science parks and top-down clusters. Policy makers believe that by erecting fancy buildings and providing subsidies to select industries and venture capitalists, they can create innovation hubs. This is the wrong approach; what needs to be done instead is to remove the obstacles to entrepreneurship and change the culture so that failure is accepted and experimentation is encouraged. And then entrepreneurs need to be educated and provided with mentoring, inspiration and seed funding. This is exactly what the GES is doing — by design or by accident.
The highlight of the Stanford event was the president on stage with Facebook chief executive Mark Zuckerberg. Obama interviewed fledgling entrepreneurs from Egypt, Rwanda and Peru and caught the audience off guard by removing his jacket and joking about his inability to “wear a T-shirt like Mark for at least another six months.” Mariana Costa Checa of Peru was still in shock when she said, “I’m still trying to get over the fact that you just introduced me.” Obama talked about the importance of building networks, changing cultures and having governments remove roadblocks. He also lectured entrepreneurs on how to pitch their start-ups to investors.
In the United States, we have the American Dream; we often put entrepreneurs on a pedestal. To the rest of the world, this is unimaginable, a culture shock.
United Arab Emirates-based investor Prashant (PK) Gulati told me about how rapidly policies changed after the 2012 GES, which was held in Dubai. There were many legal obstacles to e-commerce and Internet start-ups that were not getting resolved. On the sidelines of GES, enabled by the State Department, the emir of Dubai, Sheik Mohammed bin Rashid Maktum, convened a meeting of all stakeholders to remove the barriers and give priority to entrepreneurs. Four years later, Dubai has one of the most vibrant entrepreneurship communities in the Middle East, and one start-up, Souq.com, has achieved the status of a unicorn, with a billion-dollar valuation.
The 2013 summit in Kuala Lumpur led to the creation of the Malaysian Global Innovation & Creativity Center. Asran Dato Gazi, who heads this program, said GES had prompted their prime minister to work toward changing the country’s culture and removing regulatory obstacles to start-ups. The government also created educational and support programs, which have so far taught 15,000 entrepreneurs and incubated 150 companies.
The impact of having a U.S. president hyping entrepreneurship can also be seen in India, where Prime Minister Narendra Modi launched a program called Startup India to reduce regulations and fees, provide education and infrastructure and facilitate seed funding for start-ups. His focus is on lifting up disadvantaged communities and women. During his recent trip to the United States, Modi even persuaded Obama to hold the next GES in India.
U.S. Chief Technology Officer Megan Smith, whom I have known since the days she was working on moonshots at the secretive Google X labs, said that after spending time in Kenya, Uganda, Senegal and Nigeria, she learned the potential of uplifting entrepreneurs. She believes that this is the best way to boost these nations’ economies. Smith said that after she joined government, she also realized the opportunity to lift up American entrepreneurs, particularly those in the country’s rural parts. This is why she has been working on globalizing Silicon Valley’s best practices “to provide them with the resources and networks for funding, talent, partnerships, peers and more so they can grow their ideas, iterate and scale.”
The program Obama launched is timely and important. After all, the strongest weapon to shift geopolitical balances isn’t nukes or missiles any more, it’s technology. And there really is no better way of spreading American ideals and democracy than facilitating entrepreneurship across the globe.
For more, follow me on Twitter: @wadhwa and visit my website: www.wadhwa.com
No Turning Back on Autonomous Driving
Politicians will tell you that Fridays are reserved for announcements (defeats, resignations, indictments) intended to be ignored or lost in the end of week news sink. In that context, the Friday before the U.S. Fourth of July three-day weekend may be regarded as second only to the Friday before Christmas as an ideal opportunity to bury an unpleasant bit of news.
This is why I found it puzzling that BMW, Intel and Mobileye chose this particular Friday to announce a major new strategic cooperation around autonomous driving. You can find the Webcast for the 10 a.m. event being held in Munich here: http://www.live.bmwgroup.com/2016pk/index.html
In light of the first fatal crash involving Tesla’s Autopilot, the timing of this news event now seems oddly prescient. With a little luck, Tesla’s and Mobileye’s most virulent critics will be off on vacation somewhere and unavailable to rain on the autonomous driving parade.
To date, Tesla has proven to be the Teflon-coated auto maker. Teslas have caught fire, been shredded in crashes, and seen in-the-field hardware retrofits and significant over-the-air software updates to fix flaws big and small or to add or enhance features and functions, all while the company has danced between the regulatory raindrops, avoiding high-profile recalls or even a sales cease-and-desist order. (Of course, many U.S. states do not allow the sale of Tesla vehicles, most notably Michigan.)
The latest apparent autopilot failure will surely be parsed and analyzed by no less than the U.S. National Highway Traffic Safety Administration, which has initiated an investigation, and Tesla itself. Tesla has not only been fairly Teflon-coated, it has also been fairly transparent, but this trial will be the ultimate test. A life was lost for the first time.
The sad reality of the situation, aside from the loss of life, is that this latest development is likely to cause some safety advocates in the industry to hit their own personal emergency brake. Rather than seeking a deeper understanding of what went wrong (sensor failure, software failure, sun glare, driver distraction), they will insist that Tesla was in the wrong and that the time has come to shut down all of this self-driving nonsense.
Let’s be clear about one thing. The cat is out of the bag. The horse is out of the barn. The autopilot is on the road. Advanced safety systems have saved and are saving lives, but drivers are still obligated to pay attention and remain in control. Even Teslas require a hand on the steering wheel at least part of the time – meaning the driver is expected to pay attention and participate in the driving task even in autopilot mode.
Regulators and researchers are fond of blaming drivers for 90% of crashes, and yet we all want humans to remain in the driver’s seat paying attention. So maybe it’s time we stopped blaming drivers and started trying to understand how to help them do even better at what they already do well. The reality is that human drivers are the ones teaching the machines how to drive “better” than humans.
This BBC report includes a link to a video recorded by the driver killed in the crash showing how Tesla’s autopilot prevented a collision with a truck: http://www.bbc.com/news/technology-36680043
It is life-saving performance like that which likely contributed to a sense of over-confidence in the system.
Let’s see what BMW, Intel and Mobileye have to say today about their cooperation – and let’s avoid the hysterical reactions that might cause us to turn away from the substantial progress that has been made to-date along the technological path to universal collision avoidance. Tesla now faces its greatest test of transparency. In the process we will all learn a little more about the technology that promises over the long term to steadily reduce the 1.2M deaths suffered every year on roadways around the world. It is a turning point for the industry, but there is no turning back from the pursuit of safer driving and safer cars.
Semiconductors out of step with electronics
The global semiconductor market has been in decline (three-month-average change versus a year ago) since July 2015, according to World Semiconductor Trade Statistics (WSTS). Although numerous factors affect the semiconductor market in the near term (capacity and utilization, prices, inventory levels), long-term growth is driven by the growth rate of electronics. The chart below shows the three-month-average change versus a year ago for electronics production in key regions and for the global semiconductor market.
China is the largest producer of electronics. Its growth rate has decelerated from the 12% – 14% range in 2014 to the 10% – 13% range in 2015 and to the 8% – 9% range in the first four months of 2016. The slowdown in China’s electronics production has been fairly moderate. U.S. electronics production was in decline for most of 2014 through April 2015. In the last year U.S. growth has been positive, reaching a peak of 7% in November 2015. Japan’s electronics production has been extremely weak, with declines almost every month over the last two years. Although Japan’s electronics sector is a drag on the global semiconductor market, Japan accounts for only 9% of that market. Monthly electronics production data is not available for Europe, but overall industrial production there has been positive, in the range of 1% to 3%, over the last two years.
The chart below shows three-month-average change versus a year ago for electronics production in the key countries of Asia. Asia Pacific accounts for about 60% of the global semiconductor market. China is over half of Asia Pacific. Taiwan has seen generally declining production in the last few years as much of its manufacturing shifts to China. South Korea production growth turned positive in September 2015 after a year of negative change. Singapore turned positive in March 2016 after a year of declines.
Malaysian growth has remained positive in the last few years. Two key emerging countries, Vietnam and India, are currently growing at over 20%. Over the last year, the Asia Pacific semiconductor market has slowed from 7% growth in April 2015 to declines in December 2015 and January 2016. After returning to positive growth in February and March of 2016, the Asia Pacific semiconductor market dropped 4.2% in April 2016.
The global semiconductor market’s decline in the range of 5% to 6% over the last five months is out of line with electronics production. The Asia Pacific semiconductor market has been stronger than the global market, but is still out of step with electronics production. Short-term factors – semiconductor pricing (especially in the volatile memory business) and inventory-level adjustments – are likely driving the semiconductor market decline. Assuming electronics production growth remains positive through 2016, the semiconductor market should turn positive in the second half of 2016. We at Semiconductor Intelligence are holding to our May forecast of 1% growth for the semiconductor market in 2016.
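For readers unfamiliar with the WSTS convention, the “three-month-average change versus a year ago” metric used throughout this section can be sketched in a few lines of Python. The monthly billing figures below are hypothetical, chosen only to illustrate the arithmetic, not actual market data:

```python
# Three-month-average change versus a year ago, the smoothing convention
# WSTS uses for its monthly billing data.

def three_month_avg_yoy(monthly, i):
    """Percent change of the trailing 3-month average ending at index i
    versus the same trailing 3-month average 12 months earlier."""
    current = sum(monthly[i - 2:i + 1]) / 3.0
    year_ago = sum(monthly[i - 14:i - 11]) / 3.0
    return 100.0 * (current - year_ago) / year_ago

# 24 months of hypothetical billings ($B): flat at 28, then easing off
monthly = [28.0] * 12 + [28.0 - 0.15 * m for m in range(12)]
print(round(three_month_avg_yoy(monthly, 23), 1))  # prints -5.4
```

The three-month average damps out single-month noise (holiday effects, one-off inventory moves), which is why a sustained negative reading since July 2015 is meaningful.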
21 months lining up OPNFV-on-ARM for telecom
Telecom infrastructure is one area where the x86 architecture hasn’t historically dominated. Infrastructure gear is spread across the MIPS, Power, and SPARC architectures, with some x86, and a relative newcomer, ARM, already claiming a 15% share. That’s a stunning figure considering that only a bit less than 5 years ago Continue reading “21 months lining up OPNFV-on-ARM for telecom”
A Brief History of Platform Design Automation
Two weeks ago I spoke on the phone with Albert Li, founder and CEO of Platform DA, about his EDA company. Prior to founding Platform DA in Beijing, Li worked at Accelicon, which was acquired by Agilent in December 2011. Mr. Li graduated from Tsinghua University and Vanderbilt University, both in Electrical Engineering, and has written over 20 technical papers. His team of engineers are experts in transistor device modeling, cell libraries and creating PDKs (Process Design Kits) for use by foundries, IDMs and circuit designers. The company also has branch offices in Shanghai and Hsinchu, Taiwan. Its web site at www.platform-da.com features both English and Chinese content.
Related – EDA mergers continue: Accelicon acquired by Agilent
Products
There are three main EDA products. The first, MeQLab, is for device modeling and QA, and it can be used for applications like:
- Device modeling for FinFET and planar devices
- Statistical modeling and mismatch
- High voltage device modeling, sub-circuit modeling
- Built-in modeling library and model card QA
- SRAM modeling
- Noise modeling and circuit analysis
- Design or process optimization
PQLab is a tool for automating the QA of PDK libraries, saving engineering time. It can be applied by:
- Foundry PDK developers needing to QA a PDK
- IC designers verifying that a foundry PDK meets their requirements
- IC designers comparing two or more PDKs
For 1/f noise measurements and characterization they have the NC300 system, which can be applied at the wafer, device, or circuit level, or even to sensors.
Services
The three categories of services include:
- Modeling
- PDKs
- Semiconductor IP (standard cells, SRAM compilers)
Engineers at Platform DA can design a test chip for you, perform the device modeling, create a new PDK and even provide you with a standard cell library. Device modeling services cover a wide variety of models:
- BSIM4, BSIM-CMG/IMG, PSP, BSIM6
- BSIMSOI
- Scalable inductor, transformers, baluns
- HiSIM, HiSIM-HV
- Gummel-Poon (GP), Mextram, HiCUM, VBIC
- Diode – Level 1, 2, 3
- RF Models
- Reliability models
- Statistical models
- Noise model
PDKs can be created to serve your exact needs:
- FinFET
- Logic, Mixed Signal, RF CMOS
- High speed, low power SOI
- Scalable High Voltage (LDMOS, BCD)
- PDK enhancements
If your design needs to work in a hardened environment then consider using the RHBD (Rad Hard By Design) service offered by Platform DA, where special-purpose EDA tools are used:
- RadEx – environment-aware model extraction
- SERSim – Chip level SER simulation for combinational logic circuits
The PQLab tool will be used to automate the QA of your new PDK with:
- Validation of CDF parameters and callbacks
- Automatic generation of test patterns for DRC and LVS
- Pre-layout and post-layout simulation QA and comparison
- Scalability validation of the correlation of device behavior with layout parameters
- Circuit validation with a combination of Pcells
- Layout parasitic validation
Summary
Platform DA has successfully served the Taiwanese and Chinese markets for the past four years, and is now expanding into several new geographies: South Korea, Japan, Europe and the USA. Its founders have deep industry experience in device modeling, cell libraries, PDKs and services to tie it all together for foundries, IDMs and circuit designers. Expect to start hearing more news coming from Platform DA and their happy semiconductor design customers.
Brexit and Semiconductors
Interesting news last week with 51.9% of British voters saying yes to Brexit (exiting the European Union). What does it have to do with semiconductors? Plenty! After reading the media’s take on the subject and talking to friends (experts) in China, Taiwan, and Hong Kong, I must say that there is not a consensus to be found and there probably won’t be until the history books document it years from now.
Most compare Brexit to the Donald Trumpism sweeping America (angry but largely misinformed people acting out). To me, though, the real risk is the FUD (fear, uncertainty, and doubt) that devious people will use for their own personal and political advantage. China is certainly one of the countries to watch during Brexit, and Russia as well, given their focused political strength.
According to my friends, the EU is like a three-legged stool, with France, Germany, and the UK as the legs. Of those legs, France and Germany control the EU semiconductor business (STMicroelectronics and Infineon, for example). The EU is also China’s top trading partner. If you take away Britain, China’s dominant trade partner within the EU, France and Germany become a two-legged stool under the weight of a country (China) with the fastest-growing economy and the most FUD-capable government the world has ever seen. China has also made semiconductors a national charter, which is an important piece of this Brexit puzzle (for me anyway).
Take the automotive semiconductor industry, for example, which today is a $28B market. Three of the four top vendors are from the EU (NXP, Infineon, and STMicro). The market is expected to grow over the next five years at a CAGR of 5.8%, and most of this growth is in Asia, specifically China. The mobile (smartphone and wearables) and Internet of Things (IoT) semiconductor markets are further examples of industry revenue growth dominated by China. The smartphone market is still growing at a 6% CAGR, and the IoT chip market is estimated to grow at an 11.5% CAGR. Again, Asia is the dominant growth market for both smartphones and IoT, and China is aggressively pursuing these segments as both a consumer and a supplier.
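For reference, here is what those CAGR figures imply when compounded out. This short Python sketch takes only the $28B automotive figure and the 5.8% rate from the text; the projection itself is illustrative arithmetic, not a forecast:

```python
# Compound annual growth rate (CAGR) arithmetic behind the quoted figures.

def project(value, cagr, years):
    """Compound a starting value forward at a given CAGR
    (expressed as a fraction, e.g. 0.058 for 5.8%)."""
    return value * (1 + cagr) ** years

# Automotive semiconductors: $28B growing at a 5.8% CAGR over five years
print(round(project(28.0, 0.058, 5), 1))  # prints 37.1 ($B)
```

In other words, a 5.8% CAGR turns today’s $28B automotive chip market into roughly a $37B market in five years, which is why the EU vendors dominating it care so much about where that growth lands.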
In fact, according to a recent survey by SEMI.org, more than half of the new semiconductor fab starts in 2016 and 2017 are in China:
From what I understand, all EU trade agreements will have to be renegotiated, and with China under political pressure to continue its record economic growth, you can expect heavy-handed trade negotiations with Britain, the EU, and the rest of the world. So really, Brexit could not have come at a better time to refuel China’s economic growth engine. Taiwan will also benefit as China’s number one semiconductor manufacturing partner.
Meanwhile, the United States, Japan, Korea, and the other economic superpowers are at an all-time low in terms of trade negotiating strength, so the Brexit winners here will be China and Russia, in my opinion. That’s if Brexit actually happens, of course, which is a whole different discussion – one that I have not had yet.