
Semiconductors Future Hinges on a Single Pillar
by Pawan Fangaria on 01-03-2016 at 7:00 am

A unique phenomenon has started manifesting itself under this year’s slew of mergers and acquisitions in the semiconductor landscape. This phenomenon is bound to intensify in the near future and will position itself as a key factor in the industry’s future. The winners and losers in the game will be determined by how successfully they are able to execute on this aspect. In fact, this is one of the key forces pushing companies towards mergers in the first place.

Mergers and acquisitions are part of the usual cycle of consolidation after expansion. Considering the macroeconomic conditions, I had been foreseeing an increase in M&A activity since 2011; of course it has been happening since much earlier than that, but it intensified in 2015. With the recent announcements of mergers, which are expected to complete in 2016, the total deal size of semiconductor mergers in 2015 has gone well beyond $100 B. Earlier peaks were $70.3 B in 2000 and $75.2 B in 2006.

I have been keeping track of key mergers year after year and reflecting on the main themes of some of them in my blogs on the semiconductor landscape. This time there are too many mergers, large and small, so it’s pointless to discuss particular mergers; instead it makes sense to analyze the key themes around which a number of mergers have taken place. A common thread running through all of them is new technology, which will act as the main pillar for the semiconductor industry in the near future.

We have seen how the macro economy has struggled since 2008 and is still not fully out of the woods, although the US Fed has garnered enough confidence to start increasing interest rates. Credit goes to the business leadership of companies that have been able to consolidate with good technology leaders who needed funds to grow. What’s next after such massive business consolidation this year?

In most of the current mergers, one key aspect was visible: technological expansion. It’s consolidation with a view to expanding into emerging technologies; complementary rather than concentrating. In my view, the macro economy and the semiconductor business will continue at their own pace, and IP leadership will continue as usual. What will see a massive change in the near future is technology, which will in turn infuse new life into the semiconductor business going forward.

The future of the semiconductor industry will depend on how new technologies pan out to support emerging businesses. Let’s review some of the common technological themes around which some of the key mergers have taken place, with more likely to come.

  • Internet of Things – IoT has emerged as a general theme from which several verticals have originated from a business standpoint, e.g. industrial, home, automotive, consumer, medical, wearable, and so on. From a technological standpoint all of them converge on a system’s view with hardware and software integration, data analytics, big-data management and control, and real-time communication under different protocols. Clearly, IoT opened up many fronts for mergers of companies in hardware, software, as well as IP.

As an example, big-data management initiated a revamping of data center (cloud server) technology. Good progress was seen in IoT edge and gateway devices, and now the focus is shifting to the cloud, which will see major innovation and development in the near future.

Intel’s Altera acquisition is geared towards adding programmability, customer IP and security into the data center, and also increasing performance and decreasing cost and energy consumption. Also there are innovations happening elsewhere to improve performance and power efficiency of data centers. A big merger of Dell and EMC is on the horizon in the cloud computing space.

Considering the gateway solution, there have been many mergers, for example the Bluegiga and Telegesis acquisitions by Silicon Labs, the Lantiq acquisition by Intel, the Wicentric and Sunrise Micro Devices acquisitions by ARM, and so on.

Also there have been mergers or business arrangements around providing IoT devices in various segments such as health and fitness, wearables, automotive, and so on. Dialog acquired Atmel to strengthen its position in the microcontroller business, which is key to IoT. The NXP and Freescale merger is another example of big consolidation.

  • System Design Automation – With SoCs being infused in many applications, and of course IoT, system’s view in design automation has become the key. In this space, the big EDA and IP companies are doing in-house innovation as well as acquiring key technologies to augment their system automation tools. Emulation and FPGA prototyping technologies are coming to the forefront.

Mentor recently merged Calypto with itself. Earlier, Mentor acquired Flexras Technologies to strengthen its FPGA prototyping and Tanner EDA for AMS and IoT solutions. Mentor’s Veloce interface with ANSYS’ PowerArtist is a big step towards real-time, accurate power analysis of any application running on a system. Cadence is also betting big on virtual emulation with its Palladium technology. Synopsys is strengthening its IP portfolio and software solution for systems; it acquired Atrenta to strengthen its verification platform.

In the IP world, differentiation has become important, and that leads to difficulty in modeling non-standard IP. ARM, the IP giant, acquired Carbon Design Systems to accelerate modeling and verification of its new cores, and testing of those cores in complex SoC designs.

Another area in focus for systems is big data and analytics. ANSYS acquired Gear Design Solutions, and Microsemi acquired Vitesse.

  • Security – With IoT, security is getting prime importance in hardware as well as software. There have been several mergers around innovation in providing secure systems. ARM acquired Offspark and Sansa Security. Synopsys acquired several software companies including Protecode, Codenomicon, Quotium’s Seeker product and R&D team, and Goanna Software to enhance software security, integrity and privacy in various software applications. This space is ripe for innovative technologies to be acquired.

  • Semiconductor Technology – At one time process technology was thought to have reached its limit at 28nm and 22nm, but now 10nm is a reality and we are going to see 5nm. This brings new challenges and hence new areas to innovate. Towards consolidation, GlobalFoundries completed the acquisition of IBM’s foundry business, and now China is eyeing GF as an acquisition target. In the process equipment space, Lam Research is acquiring KLA-Tencor, Cabot Microelectronics acquired NexPlanar, and there are others.

There are several other mergers in different segments of the semiconductor design space as well. Avago acquired Broadcom, Samsung acquired Yesco, Western Digital acquired SanDisk, MediaTek acquired Richtek, Diodes acquired Pericom Semiconductor, ON Semiconductor is buying Fairchild Semiconductor, and Microsemi is buying PMC-Sierra, which recently saw a few other contenders as well. More mergers are to be seen in the near future.

  • New Innovative Devices – These are new devices to automate work in many areas and bring disruption to services traditionally driven by a human workforce. Although this can bring more efficiency and reduce cost, there can be several repercussions; let’s park that for a later discussion. For now we can see these technologies in the making – Freescale acquired CogniVue to get into the autonomous vehicle segment, PTC acquired Qualcomm’s Vuforia to push into augmented reality, and Qualcomm acquired KMel Robotics to strengthen its strategy in the drone and robotics market. Also, Qualcomm invested in 3D Robotics. Even Intel invested in drone maker Yuneec. Earlier, Intel invested in Airware and PrecisionHawk, and also in VR headset maker Avegant.

Autonomous vehicles, robotics, drones, and VR are some of the upcoming technologies which can bring major disruptions to different service and logistics sectors.

Another twist in semiconductor mergers is being driven by geographic traction. China is working hard to build a large presence in the semiconductor business from its soil through in-house development as well as acquisitions in other countries.

A deeper look into this expansion-led consolidation explains very well that the consolidation-expansion cycle will get elongated now. It’s time to watch how these new technologies bring major changes to our lives in the first quarter of this century.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


2015’s Unfinished Automotive Business
by Roger C. Lanctot on 01-02-2016 at 4:00 pm

The farther we come, the farther we have to go. While progress in advancing personal transportation was made in 2015, the year closes with glaring elements of unfinished business threatening to impede further progress toward mitigating highway fatalities and reducing emissions and congestion. These areas of unfinished business ought to serve as priorities for regulators, auto makers and their suppliers and service providers.
Continue reading “2015’s Unfinished Automotive Business”


The “Era of the Photon” is here!
by Tom Dillinger on 01-02-2016 at 12:00 pm

The 50 year anniversary of the publication of Moore’s Law was recently celebrated, highlighting the tremendous advances in the Microelectronics Era of the period in human history known as the Information Age. However, the technical and economic challenges currently faced by the microelectronics industry are bringing into question the pace at which product innovations realized under Moore’s Law can continue. Nevertheless, the sheer data volume and demand for information processing throughput is accelerating.

I recently had the opportunity to meet with Gilles Lamant, Distinguished Engineer at Cadence, and several members of the development teams at Lumerical Solutions, Inc., and PhoeniX Software B.V., who presented a compelling case that the future of the Information Age will be defined in a new manner, namely The Era of the Photon.

The demand for faster signal communication in short-reach applications – e.g., both within and between compute servers in data centers – will rapidly transition from lossy copper to optical fiber connectivity, utilizing photonic integrated circuits (PIC’s).

Photonics is a broad term, which encompasses the use of light (photons) to represent information, spanning photon generation through transmission to detection. Optical signaling technology has been the backbone of the telecommunications industry for decades. Optical image sensors are the heart of numerous consumer and industrial product sectors. The utilization of optical communication for data system applications is relatively new, and is expected to grow rapidly.

Photonic devices for data representation and transmission incorporate functionality familiar from their microelectronic counterparts – e.g., multiplexers, modulators, amplifiers – with the additional requirement for electro-optical conversion and coupling to optical fiber. These PIC functions rely upon the precise dimensionality of waveguides fabricated on the IC, as illustrated in the layout below (from PhoeniX Software).

The design and simulation of the waveguide are extremely intricate steps, as the structure requires an (all-angle) curvature – more on that shortly. An attractive characteristic is that additional modulation can be achieved by changes in the refractive index of the waveguide induced by adjacent electrical or thermal stimulus, adding significantly to the simulation complexity. As a result of these intricacies, the current PIC industry is still very specialized, leveraging the expertise of companies such as Lumerical and PhoeniX to provide tools for design capture, model generation, simulation, and release to fabrication.

The fabrication of (discrete) PIC’s is also currently rather specialized, typically leveraging unique III-V materials suitable for laser generation, waveguide implementations, and photon detection.

The growing demand for photonics is also driving unique implementations, such as the integration of separate (Si and III-V) signal processing and electro-optical conversion parts in a multi-die packaging solution. And, there’s extensive research underway to investigate fully-integrated, monolithic silicon photonics fabrication, extending existing CMOS process technology to include generation and detection features. (The difference in refractive index between Si and SiO2 makes embedded silicon waveguides very feasible.) Indeed, several leading research teams have recently published promising results, in terms of potential aggregate bandwidth and low power dissipation per transmitted bit.
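To make the parenthetical point concrete, here is a quick back-of-the-envelope sketch. The index values are standard textbook numbers near 1550nm, not figures from this article, but they show why the Si/SiO2 contrast confines light so strongly:

```python
import math

# Standard textbook refractive indices near 1550nm (assumed values,
# not from the article): silicon core, silica cladding.
n_si = 3.48
n_sio2 = 1.44

# Critical angle for total internal reflection at the core/cladding
# boundary: light hitting the wall at any angle shallower than this
# (measured from the normal) stays confined in the silicon.
theta_c = math.degrees(math.asin(n_sio2 / n_si))
print(f"critical angle: {theta_c:.1f} degrees")

# Relative index contrast, a common figure of merit; ~0.41 here versus
# well under 0.01 for a conventional telecom fiber, which is why silicon
# waveguides can turn tight corners on-chip.
delta = (n_si**2 - n_sio2**2) / (2 * n_si**2)
print(f"index contrast: {delta:.2f}")
```

The tight confinement implied by this contrast is what makes the small-radius, all-angle waveguide bends discussed below practical on silicon.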

Cadence, Lumerical, and PhoeniX Software recognized that a key enabler to the growth of systems incorporating PIC designs is the availability of a productive and familiar design environment, whether for multi-die or monolithic devices. The three companies recently announced a photonic IC platform design flow, with interfaces between the following market-leading tools:

  • Cadence Virtuoso
  • PhoeniX Software’s OptoDesigner
  • Lumerical’s INTERCONNECT and DEVICE modeling and simulation engines

The figure below highlights the tool integration and inter-operability available in this platform.

Designers will now be able to develop electrical and optical “circuits” in a Virtuoso-based environment, with links to the specialized tools from Lumerical and PhoeniX. This architecture builds upon the platform concept behind Cadence’s Virtuoso Analog Design Environment, with additional design and simulation support for photonic elements. (The availability of this environment will also promote the standardization of a photonics PDK release from foundries.)

As mentioned earlier, there are intricacies to photonic elements that are distinct from the traditional chip design and analysis methodology:

  1. Photonic layout designs utilize the definition of waveguide contours. This differs from the conventional vector representation in IC design, and requires special shapes processing for model generation and discretization for tapeout. OptoDesigner from PhoeniX incorporates unique layout support for photonic cells.
  2. Lumerical provides model generation and (finite-difference time-domain, or FDTD) simulation support for the electro-optical system. The nonlinear effects of electric charge and thermal input modulation to the optical propagation system are supported. Optical simulations import electrical stimulus from a Spice simulation, generated in the Cadence environment.
  3. The Cadence platform provides a familiar environment for the design capture, physical implementation, analysis, and simulation of the electronic circuitry, including system-in-package (SiP) technology options. The cooperative simulation of electronic and photonic circuit elements utilizing Cadence and Lumerical technology offers a vastly improved analysis approach, when compared to using separate tools.
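For a flavor of what FDTD involves, here is a deliberately minimal 1D sketch in normalized units. This is a toy illustration only; production tools such as Lumerical’s solve the full 3D vector problem with dispersive material models and far more sophisticated boundary handling:

```python
import numpy as np

def run_fdtd(n_cells=200, n_steps=300, src_pos=100):
    """Propagate a Gaussian pulse on a 1D Yee grid (normalized units)."""
    ez = np.zeros(n_cells)  # electric field samples
    hy = np.zeros(n_cells)  # magnetic field samples (staggered grid)
    for t in range(n_steps):
        # update H from the spatial difference (curl) of E
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # update E from the spatial difference (curl) of H
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # soft Gaussian source injected mid-grid
        ez[src_pos] += np.exp(-0.5 * ((t - 30.0) / 10.0) ** 2)
    return ez

field = run_fdtd()
print(field.shape)
```

The leapfrog E/H update is the heart of any FDTD engine; everything a real photonics simulator adds (3D geometry, material dispersion, absorbing boundaries, charge and thermal modulation) layers on top of this loop.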

Photonic IC’s will become a much more prevalent part of system design. The investment in R&D of new materials and processes (especially silicon photonics) is growing. This has necessitated focus on a more productive design environment, with specialized tool interfaces. Cadence, Lumerical, and PhoeniX have released an initial design flow, and are committed to expanding the capabilities (e.g., PDK standards, DFM/DFY, reliability analysis, etc.).

We are truly at the cusp of the Era of the Photon. It will be exciting to see how this next phase of the Information Age evolves.

More information on the recent platform design flow for PIC’s is available here.

-chipguy


MediaTek X20 Benchmark Leaks: A Prelude to 2016 Mobile Chipset Wars?
by Majeed Ahmad on 01-02-2016 at 7:00 am

MediaTek is making waves, again. The company’s flagship mobile system-on-chip (SoC)—Helio X20—is up against rivals like Apple, Qualcomm and Samsung, according to leaked Geekbench scores, and is leading in some of the performance benchmarks.
Continue reading “MediaTek X20 Benchmark Leaks: A Prelude to 2016 Mobile Chipset Wars?”


How Not To Be Incoherent
by Bernard Murphy on 01-01-2016 at 7:00 am

The advantage of working with cache memory is the great boost in performance you can get from working with a local high-speed copy of chunks of data from main memory. The downside is that you are messing with a copy; if another processor happens to be working in a similar area, there is a danger you can get out of sync when reading and writing copies of the same main-memory addresses. That’s where cache-coherence protocols come in, to keep those copies in sync where needed. And since ARM is the de-facto supplier of cache-coherent bus fabrics connecting their processors to cache memories and to other IPs, verification IP (VIP) becomes essential to verify correct usage across all the flavors of AMBA your design may contain.
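The stale-copy hazard is easy to see in a toy model. The sketch below is purely illustrative — a write-through cache with a simplistic invalidate-on-write step, not ACE or CHI and nothing like real hardware — but it shows exactly the failure mode the protocols exist to prevent:

```python
class Core:
    """A 'core' with a private cache over shared main memory."""
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}                    # addr -> cached value

    def read(self, addr):
        if addr not in self.cache:         # miss: fill from main memory
            self.cache[addr] = self.memory[addr]
        return self.cache[addr]            # hit: may be stale!

    def write(self, addr, value, peers=()):
        self.cache[addr] = value
        self.memory[addr] = value          # write-through, for simplicity
        for p in peers:                    # coherence step: invalidate
            p.cache.pop(addr, None)        # any peer's cached copy

memory = {0x100: 1}
a, b = Core(memory), Core(memory)
a.read(0x100); b.read(0x100)               # both caches now hold 1

a.write(0x100, 2)                          # no invalidation...
stale = b.read(0x100)                      # ...so b still sees 1

a.write(0x100, 3, peers=(b,))              # invalidate-on-write
fresh = b.read(0x100)                      # b re-fetches and sees 3
print(stale, fresh)                        # prints: 1 3
```

Real protocols like ACE track per-line ownership states and ordering across many agents, which is precisely why exhaustively checking them by hand is impractical.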

The Synopsys VIP for the AMBA4 ACE and AMBA5 CHI protocols is an excellent example of why use of proven VIP is so important in testplans. You have to contend with multiple protocols, coherent and non-coherent agents and interconnect, all the possible varieties of state transitions among the agents, ordering complications and more. Since the ACE specification alone is nearly 200 pages, it would be crazy to try to recreate all the required checks and particularly all the coherency protocol checks associated with these standards. Instead you buy a proven VIP which self-configures (given some guidance) to your design and generates a testbench that will run sequences to perform all the standard-required tests, provide related checks, do coverage analysis and more. Which leaves you free to focus on how your architecture applications perform to your market objectives.

So yeah, VIP is good, etc, etc but isn’t this just another in a long line of VIP? Well not quite. Suppose just for the sake of argument that you had the time, money and interest to build this yourself (you don’t – there are no extra-credit projects in this area). Of course this would take lots of expertise and time to evolve/debug/prove what you had built. But that isn’t really so different from other VIP. What makes this domain especially challenging is the non-determinism in cache-based systems and potentially long latencies between source of errors and the ultimate effect of those errors, all made more complex in systems with coherency management. For all practical purposes, the number of types of software that can run on these systems, multiplied by the number of types of data they must process, multiplied by an unpredictable level of real-time interrupts gives you an infinite number of possible scenarios hitting caches in unpredictable sequences.

What happens if you get this even a little bit wrong? Unlike a small error in a peripheral protocol, an error in cache behavior is an error in the heart of the machine – there really are no bounds to how badly (or, worse yet, subtly) that can affect behavior. You may not even see the impact of a low level bug inside the time-windows you are testing – an error of this type can manifest quickly or can take days to bubble-up to observable misbehavior on silicon. Which means that this is an area where any level of imperfection truly is not an option.

For these reasons, cache coherence VIP must be developed by protocol experts in very close collaboration with the IP provider (ARM). It has to provide comprehensive tests and checks for all possible failure modes, and it has to provide super-streamlined debug support, starting from a protocol view and drilling down to signal and logic root causes, because if it takes you too long to debug problems, you’ll run out of time before you really know the coherency interaction is safe. That’s why this VIP is, in an important sense, more critical than any other VIP.

You can learn more about the Synopsys AMBA VIP HERE.

More articles by Bernard…


mbed OS abstraction battles IoT hyperfragmentation
by Don Dingee on 12-31-2015 at 12:00 pm

In the days of bit banging and single-threaded loops, programming a microcontroller meant grabbing a C compiler (or even before that, an assembler) and some libraries and writing bare metal code. High performance networking and multi-tasking was usually the purview of heavier real-time operating systems (RTOS) or, if an MMU was available, embedded Linux.
Continue reading “mbed OS abstraction battles IoT hyperfragmentation”


PUF the Magic (IoT) Dragon
by Bill Montgomery on 12-31-2015 at 7:00 am

Most people are familiar with Biometrics, the measurement of unique physical characteristics, such as fingerprints, and facial features, for the purpose of verifying human identity with a high level of certainty. The iris and even a person’s electrocardiogram (ECG) can be used as a secure biometric identifier.
Continue reading “PUF the Magic (IoT) Dragon”


2016 – Intelligent Things of the Internet
by Pranay Prakash on 12-30-2015 at 4:00 pm

Change is happening fast with the Internet of Things (IoT). Devices are getting smarter. We all know that and have seen the evolution over the last several years – from smart thermostats to toasters. But smartness is a relative term and as we enter 2016 we will see more devices/machines that are becoming intelligent. And when you’re intelligent you do things better, faster and a lot of times without the help of anyone. That is what’s happening in the intelligent devices world. In some cases humans are controlling and talking to these devices and in others devices are talking to each other.

On one of my international flights in 2015, I grabbed the chance to watch Ex Machina, a wonderfully done movie which attempts to close the gap between humans and machines. If you haven’t watched it, I recommend giving it a shot. The main non-human character in this movie, called ‘Ava’, is a humanoid robot built with Artificial Intelligence (AI) – she is smart but also has emotions. Ava is possibly the closest a robot can get to humans from an emotional standpoint, and yet do a lot more in terms of smartness and efficiency. If I were to make a grand and ambitious prediction, I would say we’re going to get to super intelligent machines like ‘Ava’ connected to blazing-fast wireless internet soon, but realistically, I don’t feel it is a #tech2016 phenomenon 🙂 However, I am certain we are headed in that direction, and the following trends in 2016 will help in getting there:

Wireless Connectivity
Everything is getting connected. We’re seeing technologies like WiFi and Bluetooth come as defaults on many devices for the home and even in commercial applications. Then there are the ZigBee and Z-Wave protocols being used for connectivity in smart lighting etc. It is hard to say how protocol standardization will evolve, but surely in 2016 device vendors will increasingly adopt some form of wireless connectivity. As a consumer, I get disappointed if I don’t see an option for wireless connectivity these days. At the upcoming CES event in January 2016, we can expect to see an explosion of wirelessly connected consumer devices.

Wearing IoT
We will see us move from carrying IT/IoT devices to wearing IoT devices. We’re a little distant from having connectivity chips planted in our bodies 🙂 however, like clothes, wearable devices will become part of our everyday living. The biggest advantage of embracing wearables is that your email, apps and texts are always on you, versus the pain of trying to find your smartphone when you misplace it and need it the most. In the last couple of years, there have been quite a few jewelry-based IoT startups that are trying to connect your ring, necklace etc. in an attempt to connect YOU. This space will be very active in 2016.

Software Driven Everything
Post connectivity, these devices need to be managed. Whether it is a simple upgrade, a security fix or control of connected devices, software is important. Software will manifest in multiple different ways – there is already a plethora of apps out there for simple day-to-day operations, e.g. turning your home temperature up and down remotely. The revolution has already begun and we’re seeing software-driven cars, drones, home appliances and many applications in commercial buildings and other industries.


Self-driving cars will become a ‘practical’ reality over the next few years, but they will need an internet connection and coherent interaction of software, machines and sub-systems. Software is also critical in the shape of IoT platforms to help develop apps, manage devices and more. In 2016, we will see new IoT software developer ecosystems evolve, and there will be more interesting apps and possibly consolidation of IoT platforms as we go forward. Cloud will play a very active role as platforms evolve – Microsoft, IBM, Amazon and others are already pushing and positioning their cloud platforms for IoT.

Analytics and Algorithms
Without data analytics and algorithms, connected devices will still be pretty dumb. In the journey to ‘Ava’, we will see analytics taking center stage. The ability to process data fast, visualize information, draw conclusions, make decisions and act at lightning speed – we will need machines to do all of that if we want to evolve to an automated world. 2016 will be the year when we will see self-learning capabilities added to more machines. This is critical to the successful formation of the humanoid brain, which will be always connected and, unlike human brains, will be made of silicon and software.

I am looking forward to an exciting #BigIdeas2016 year as IoT transforms. Let the ‘Intelligent Things of the Internet’ talk in 2016!

Also on LinkedIn


The Silicon Valley Apocalypse!
by Daniel Nenni on 12-30-2015 at 12:00 pm

Based on the Behavioral Sink Experiments in the 1950s it is hard for me to believe that Silicon Valley will continue with the unicorn fueled hyper expansion we are currently experiencing without some very serious repercussions, both social and financial. First let’s talk about the social issues which to me are the most interesting.

The Behavioral Sink Experiments found that populations of social animals (mice and rats) in space and resource limited environments would explode then implode into extinction as a result of social decay. The hypothesis was that nature has its limits in how social animals can thrive/survive:

Among the males the behavior disturbances ranged from sexual deviation to cannibalism and from frenetic overactivity to a pathological withdrawal from which individuals would emerge to eat, drink and move about only when other members of the community were asleep.

After being part of the Silicon Valley rat race for 30+ years, I have never seen anything like the crowds we have today. We are overbuilding and overpopulating way beyond what our environment can handle. The roads are the clearest sign. Back in the day you could miss traffic by leaving before 7am and coming home before 5pm. Now you have to leave before 5am and start your trip home before 3pm, otherwise a 40 mile commute turns into an 80 to 120 minute stress test. Leaving at 10am and returning home at 7pm no longer works either. Traffic is now a way of life, and with gas prices at record lows it is only going to get worse. Traffic accidents, which are now an everyday thing, also increase social decay.

And is there anybody in Silicon Valley who is working a 40 hour week? No, we are working even more hours per week for the same or less pay than we used to, absolutely. So you have to ask yourself, “how is this all going to end socially?”

Financially speaking we have seen two meltdowns in the past two decades: The Dot Com Bubble and the Subprime Mortgage Crisis. The only remotely positive result socially is that the divorce rate actually went down during these two periods. Partly because couples could not afford two separate households but I digress…

Clearly we did not take those two financial life lessons to heart since we are probably approaching yet another financial meltdown that can be called “The Unicorn Walking Dead” or maybe “The New Age of Unicorpses”.

Those at the top tier of the billion-dollar funding club, companies such as Airbnb Inc. and Uber Technologies Inc., will survive relatively unscathed, said Max Wolff, chief economist at Manhattan Venture Partners, but he expects a “meaningful correction” to ripple through the club. Some of today’s unicorns will keep being fancy pets and some of them will be meat, Wolff said.

By the way, out of the 130+ unicorns not one of them is an ACTUAL semiconductor company, even though without semiconductor innovation none of those unicorns would exist. Something for the unicorn breeding VCs to think about, wouldn’t you say?


Lighting Up The Cloud
by Bernard Murphy on 12-30-2015 at 7:00 am

In our rush to imagine a world populated with IoT devices, tech advances at the top end of this ecosystem (the cloud) don’t seem to get much airtime. But this isn’t because they are limited to modest refinements. As one example, there is active technology development in connectivity around fiber-based communications within the datacenter.

I always like to start with why changes are happening; an Intel shareholder update explains this quite well. These clouds (datacenters) are seeing three big shifts: massive growth in scale, driving massively increased bandwidth demands; the same growth in the scale of the building(s) containing the datacenter, requiring cabling reach up to 2km; and a change in traffic patterns from hierarchical enterprise flows (masters feeding/controlling slaves) to cross-system flows, emerging thanks to network function virtualization (NFV) and software-defined networking (SDN), requiring more flexible connectivity (less hierarchy).

Copper can’t keep pace with these demands, hence for some time fiber has been the preferred medium for connectivity, for all the usual reasons – low attenuation, security, material cost and more. The preferred light source is vertical-cavity surface-emitting lasers (VCSELs) operating at 850nm. But high-speed VCSELs typically have large spectral width, leading to dispersion in the fiber which limits reach to ~100m; they also tend to have higher power requirements than large datacenters demand.

Recent articles have suggested that silicon photonics may provide an answer. VCSELs are built in specialized devices distinct from the compute engine, but there is an intuitive appeal in building the optical link into the same chip (or package) as that engine, potentially improving performance, power and cost, also leveraging the technology and capabilities already developed for semiconductor manufacture. Intel and IBM certainly believe this. The Intel shareholder update I mentioned earlier shows silicon photonics can already meet the 2km reach requirement and is therefore able to bridge the gap between VCSEL reach and full datacenter size (I assume this would be as a result of narrower spectral width from the lasing source though Intel does not elaborate on this point). Better yet, the promise is that silicon photonics can drive 4 optical channels into one fiber, increasing capacity at no additional cost over cheaper single-mode fiber, where high performance VCSEL connections require multi-mode fiber.
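The reach numbers above can be sanity-checked with rough arithmetic. The sketch below uses assumed figures — an OM3-class effective modal bandwidth of 2000 MHz·km and a crude ~0.75x line-rate estimate of required signal bandwidth, neither taken from Intel’s update — but it reproduces the right order of magnitude, about 100m at 25 Gb/s per lane:

```python
# Assumed effective modal bandwidth of an OM3-class multimode fiber.
EMB_MHZ_KM = 2000.0

def max_reach_m(lane_gbps, emb_mhz_km=EMB_MHZ_KM):
    """Crude dispersion-limited reach estimate: EMB divided by required
    signal bandwidth, approximating that bandwidth as ~0.75 * line rate
    for NRZ signaling."""
    signal_bw_mhz = 0.75 * lane_gbps * 1000.0
    return emb_mhz_km / signal_bw_mhz * 1000.0   # km -> m

for rate in (10, 25):
    print(f"{rate} Gb/s lane: ~{max_reach_m(rate):.0f} m")

# The silicon photonics pitch: 4 wavelengths on one single-mode fiber,
# each carrying 25 Gb/s, for 100 Gb/s aggregate without the multimode
# reach penalty.
print("aggregate over 4 channels:", 4 * 25, "Gb/s")
```

The point of the arithmetic is that reach shrinks in proportion to lane rate on multimode fiber, which is exactly the squeeze that motivates the single-mode, multi-wavelength approach.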

So much for the promise – reality seems to be less clear. First, the silicon photonics domain appears to be as susceptible to hype as other domains; unsurprising since some companies have been investing in this area for years without serious (commercial) progress. On a brighter note, you can now buy devices (from Cisco, Mellanox and Infinera, for example) which use the technology, also the compelling needs remain, so this is hardly a failure. But silicon-based lasers still struggle with reliability (unlike VCSELs), which may be why Intel had a false-start release early in 2015, though they are now shipping production silicon. And the VCSEL folks are not standing still; research work has demonstrated the feasibility of VCSELs operating in the same range claimed for silicon photonics.

The net seems to be that VCSELs are proven today in production usage but have reach and power limitations which in principle could be overcome though not yet demonstrated outside the lab (?), whereas silicon photonics has promise but still seems to be teetering on the edge between promise and delivery. At the end of the day, this race may be decided as much by bill-of-materials and manufacturing cost as by technology, and that’s where silicon photonics should have the edge.

An article introducing the promise of silicon photonics can be found HERE. You can read the Intel shareholder update HERE. An article reviewing whether we are past the silicon photonics hype phase is HERE. A nice comparison of VCSELs versus silicon photonics can be found HERE. For an example of research in extending useful operating parameters of VCSELs, see HERE.

More articles by Bernard…