
Fabless and IDMs Training up on Integrated Photonics
by Mitch Heins on 11-28-2016 at 12:00 pm

I had the good fortune to attend a very informative five-day photonic integrated circuit (PIC) training this last week in Santa Clara, CA. The training was organized by Erik Pennings of 7 Pennies Consulting and hosted by Tektronix. Several ecosystem partners from the design automation, photonic foundry, and photonic packaging and test industries presented to a full room of more than 25 trainees from 15 different companies. The mix of companies was intriguing: there was almost an even split between system houses and IC providers, PIC providers outnumbered electrical IC (EIC) providers 2-to-1, and at least one system house was doing both PICs and EICs. Of the companies doing EIC design, there was a roughly even split between IDMs and fabless component makers.

Training content was rich and started with a general tutorial on the different types of passive and active photonic components along with the basic principles behind how those components work. This was followed by an overview of the different photonic material platforms being employed for each. It was quite clear that the III-V and II-VI group materials are here to stay for lasers, optical amplifiers and photonic detectors. There is, however, a definite shift underway to make use of Si and Si-based materials to enable smaller, denser and, in theory, lower cost photonic devices. Methods for integrating light sources and amplification into these silicon-based solutions are still up for grabs, with lots of competing approaches. For detectors, Ge is being grown on the Si to form SiGe-based detectors.

In conjunction with the move to hybrid photonic solutions is the push to move the photonic components closer and closer to the electronics with which they communicate. The biggest impetus for this is the next jump in modulation speed per channel. Most 100G applications use 25Gbps channel modulation with some form of higher-level encoding such as QPSK to increase the effective data rate per channel. As the industry moves to 200G and higher rates there will be a push to move channel speeds up to 50Gbps modulation rates, and when that happens there will be a push to reduce or eliminate the metal RF traces on the boards between the electronics and photonics. Flip chip appears to be the method of choice for shortening these leads, using through-silicon vias (TSVs) and bump technology between the electronic driver chips and the photonics (see picture from Luxtera). This will, however, require some help from the design automation industry to put in place more robust CAD for 2.5D and 3D design and verification methodologies.

The training rounded out with hands-on sessions from design automation vendors VPI Photonics, Lumerical Solutions, PhoeniX Software and Cadence Design Systems, who covered photonic system-level design and verification through PIC design, verification and implementation. Presentations were also given by photonic MPW aggregator JePPIX and Si-photonics foundries CEA-Leti, imec, IHP and VTT, as well as silicon nitride foundry LioniX and InP foundries HHI/Fraunhofer and Smart Photonics. Advanced photonic packaging was covered by Chiral Photonics, and photonic test and measurement was covered by Tektronix and Venista. Lastly, design houses Bright Photonics and VLC Photonics each spoke about their photonic design services offerings.

Other key concepts from the training included:

  • Integrated photonic solutions may at first need to be sold at the system level. Disruptive change doesn’t happen at a single component level; it tends to impact the entire system, which includes software and hardware infrastructure changes that must happen together. Look for these kinds of changes from system suppliers that will use photonics to disrupt the current status quo.

  • The advent of 100G has provided great momentum for PICs, especially with 100GbE (with LR4 and ER4 requiring 4 wavelength channels) and 100Gbps coherent (DP-QPSK). The volumes for these devices will be sufficient to boot up the manufacturing infrastructure to the point that other photonics markets become cost viable. As a result, the market for PICs is now growing at >35% per year.

  • Package design and 2.5D/3D integration of a mixture of EICs and PICs will become crucial to enabling higher speed solutions. Thermal analysis of these modules will be important, as the EICs will generate a considerable amount of heat and designers will need tools to understand and accommodate inadvertent heating of the photonics.

All-in-all this was a very comprehensive training class, wide in breadth and deep in its coverage. I learned a lot and would encourage anyone interested in photonics to look into future classes of this nature.


CEO Interview: Rene Donkers of Fractal Technologies
by Daniel Nenni on 11-28-2016 at 7:00 am

Fractal is another one of those very successful emerging EDA companies that you don’t read a lot about, except on SemiWiki. Rene Donkers is co-founder and CEO of Fractal Technologies, a company addressing IP quality assurance. This is a niche in the SoC tooling market that deserves some justification. Why not use an IP as-is if it comes from a trusted vendor? And if IP needs to be checked – why would you need yet another tool to do so?


What does Fractal do?
We help our customers make sense of the IP they receive before they include it in their design flow. With any component you use to build your SoC, you want to make sure that everything you need is there and is consistent. Regardless of whether it came from an external supplier or an in-house design group, you cannot afford to take quality for granted. Think of trivial issues like missing pins in a Milkyway view or SPICE file, but also of trends in power and timing arcs in the terabytes of .lib files that library IP typically consists of these days. You don’t want to find something’s missing there when you’re running your final design validation. At that stage debugging is both very hard and mission-critical, so catching IP issues before the IP is completely integrated is vital to our customers.

All this doesn’t sound very new, design teams have been running incoming inspection on IP for years?
We have seen all our customers transition from some form of home-grown validation scripts to a dedicated tool like Crossfire. The scripts and maintenance that were sufficient a few process generations ago have started to break in several places. The need to include yet another database or format was often the trigger for engaging with Fractal. The reason is that it isn’t only the parsing and a few extra checks: the checks to be run on, for example, CCS data are far from trivial, and on top of that the amount of data is huge. And then you have to fix all of this in a radical cost-cutting regime that leaves no resources for proprietary tooling development. CAD groups are simply not coping anymore without bringing in dedicated QA tooling.

There’s another aspect of IP data volume I’d like to point out, which is that you have to engineer your tools from the ground up to cope with it. Within Crossfire we have a distributed data-management framework where the different formats are managed by dedicated servers in the computing infrastructure. Once that is established, checks can be run in parallel, requesting different parts of the data when needed without being concerned with data management. This allows an extensive Crossfire check-set to be passed within a couple of hours.
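
Crossfire's internals are of course proprietary, but the general idea of per-format data services feeding independent checks is easy to illustrate. Below is a minimal, purely hypothetical Python sketch (the pin sources and the check are stand-ins, not Fractal's API) of how independent checks can be fanned out in parallel once parsing and data access are centralized:

```python
# Hypothetical sketch, not Fractal's API: fan out independent IP checks in
# parallel once format parsing is hidden behind per-format data services.
from concurrent.futures import ThreadPoolExecutor

def pins_in_liberty(cell):
    # In a real tool a dedicated .lib data server parses once and serves many checks.
    return {"A", "Z", "VDD", "VSS"}            # stand-in data

def pins_in_lef(cell):
    # Likewise for a LEF/abstract data server.
    return {"A", "Z", "VDD", "VSS"}            # stand-in data

def check_pin_presence(cell):
    """Presence check: the .lib and LEF views must expose the same pins."""
    missing = pins_in_liberty(cell) ^ pins_in_lef(cell)
    return cell, sorted(missing)               # empty list means the check passes

cells = ["AND2_X1", "INV_X1", "DFF_X2"]
with ThreadPoolExecutor() as pool:             # checks are independent, so run them concurrently
    for cell, missing in pool.map(check_pin_presence, cells):
        print(f"{cell}: {'OK' if not missing else 'missing pins ' + str(missing)}")
```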

More data sure, but Moore’s law is also giving us faster computers isn’t it?
Moore’s law is not keeping up with the increase in IP data at all. In node transitions during the last 2 years we have observed a 4X increase in IP data – that’s 4 TB landing in your ‘Inbox’ to be inspected these days. Following Moore’s law, 2 years at best only doubles the number of transistors. And that increase is mostly driven by economics; the speed increases of next-generation process nodes are only there in history books.

IP comes in many flavours, does Crossfire support them all? And what do you check?
In fact we do, as that is exactly how the tool has grown and matured. We started out 10 years ago with standard-cell library qualification, then added IO cells, and then moved into analog and digital Hard IP – think of memories, AD converters and physical interfaces like USB cores. Checking synthesizable IP is of course also included.

Obviously not all checks apply to all IP categories. Basics like presence checks or terminal equivalence are pretty universal and provide a good sanity check to start with. If timing models are provided, we make sure all arcs are present so that back-annotation will always work. Timing and power models need extensive trend checks across the hundreds of process corners for which a typical Hard IP is characterized these days. Netlist-related checks are particularly rewarding – like checking bulk connections for the different power domains. Finally, we see double-patterning manufacturing driving all kinds of layout-specific checks that typically affect formats like LEF, layout views and GDS.
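
To make the idea of a trend check concrete, here is a small hypothetical sketch (illustrative data and rule, not Crossfire's actual check implementation): cell delay should grow monotonically with output load in every characterization corner, and any corner that breaks the trend is flagged for inspection.

```python
# Hypothetical trend check over characterization corners; the numbers are made up.
corner_delays = {
    # corner name  : delay (ns) at increasing output loads
    "ss_0p72v_125c": [0.041, 0.055, 0.078, 0.112],
    "tt_0p80v_25c":  [0.030, 0.041, 0.060, 0.085],
    "ff_0p88v_m40c": [0.022, 0.033, 0.029, 0.064],   # dip at the third load -> suspicious
}

def trend_violations(delays_by_corner):
    """Flag corners where delay does not increase monotonically with load."""
    return [corner for corner, delays in delays_by_corner.items()
            if any(b < a for a, b in zip(delays, delays[1:]))]

print("corners failing the delay-vs-load trend check:", trend_violations(corner_delays))
```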

It’s important to note that all these checks and different IP formats were made for and requested by our customers. The good news is that they’re not customer exclusive in any way. By supporting and enhancing Crossfire, Fractal accumulates QA demands from the industry and is able to deliver a more complete solution with every release of Crossfire.

Does Crossfire enforce a one-size-fits-all quality standard?
That’s not how it works: it’s always the user who decides which checks are applied to which IP. This is what we call a ‘standard’: a collection of checks to be applied, for which we have our own format called Transport. In Transport, IP users specify their QA requirements, which they then communicate to their IP providers. While designing the IP, the provider may already use Crossfire to make sure the final shipment fulfills all the Transport requirements.

You can compare it to exchanging a DRC deck between foundry and design team: these are the rules the design team needs to stick to if they want their GDS processed properly. For a DRC deck, designers and foundry can have a conversation about the interpretation of certain rules. Similarly, Transport provides a communication handle between IP designers and IP integrators. Because of the easy-to-read descriptions in Transport and the unambiguous implementation of the rules in Crossfire, it’s now possible for the first time to discuss and improve QA requirements, so parties can jointly develop improved formulations that better serve their needs. Both sides have a vested interest here: IP integrators want properly qualified IP, but overly rigorous checks do not help their suppliers in providing efficient IP releases on time.
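
Transport's actual syntax isn't public, but purely as an illustration of the concept, the ‘standard’ an IP integrator hands to a supplier boils down to a named collection of checks per view, something like this hypothetical structure (the check and view names are invented for the example):

```python
# Hypothetical illustration of a check-set ("standard") an integrator might specify;
# the real Transport format and Crossfire check names are Fractal's, not shown here.
qa_standard = {
    "name": "stdcell_incoming_inspection_v2",
    "views_required": ["liberty", "lef", "gds", "spice", "verilog"],
    "checks": [
        {"check": "pin_presence",         "views": ["liberty", "lef", "spice"]},
        {"check": "terminal_equivalence", "views": ["spice", "verilog"]},
        {"check": "arc_presence",         "views": ["liberty"]},
        {"check": "delay_vs_load_trend",  "views": ["liberty"], "corners": "all"},
        {"check": "bulk_connectivity",    "views": ["spice"], "power_domains": ["VDD", "VDDIO"]},
    ],
    "waivers": [],   # documented exceptions agreed between supplier and integrator
}
```

Because both sides read the same machine-interpretable check list, a failed shipment can be discussed rule by rule rather than argued about in the abstract.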

Getting back to those design-teams that have internal qualification tools, how do you get them on board?
First of all by pointing out that we embrace rather than compete with internal solutions. These internal checks have been developed to serve the specific needs of the design flow and the application area the customer focuses on. What we’re proposing is to integrate those dedicated checks into the Crossfire framework – not by re-writing them, but by simply calling the existing code and making sure the checks show up in our index and that errors can be debugged from within Crossfire. This way the experts working on these checks can dedicate themselves to adding corporate-specific value, rather than also having to spend time on infrastructure-related subjects like format parsing, visualization and parallelized data access – all areas where Crossfire excels, but which can take 70% or more of your time. So by working with Crossfire, the experts become a lot more productive.

This opportunity to focus on adding unique value is the way to engage customers, knowing that they will also benefit from a “QA-subscription” as Crossfire is continuously extended with new QA requirements from the entire industry.

That implies Fractal already has industry-wide adoption; who are your customers?
You find our customers in all categories, we have system companies but also foundries, IDMs, independent IP design houses and fabless companies. All use Crossfire either during their design-flow, to have an end-of-line check before IP shipment or as an incoming inspection tool. I can’t disclose names, but half of the top-20 semiconductor companies already use Crossfire, and we expect the rest to come on board in the near future.

We support them preferably through local AE’s that speak their language and can be on-site quickly to deal with any issues or train new users.

What do you see as the next challenges for Fractal?

Managing growth is of course key. We grow the engineering and AE teams organically so we get the right, qualified people on board and can make a proper investment in training them in our way of working, all without compromising on the timing and quality of our deliverables.

For our customers, Fractal’s position as an independent QA tool provider is of prime importance. After all, who would buy a QA tool from either an IP provider or an EDA company? Absolutely no one: in such a scenario, a lack of test coverage would be the least of the suspicions. In essence, Crossfire would no longer have the authority it now has as an independent QA certification tool. The adoption of Crossfire and the Transport formalism by the industry is allowing us to continue with this strategy.

Also Read:

CEO Interview: Albert Li of Platform DA

CEO Interview: Mike Wishart of efabless

CEO Interview: Chouki Aktouf of Defacto Technologies


Thin film Semiconductor Solutions for an Energy-Efficient Future
by Anuja More on 11-27-2016 at 4:00 pm

For an industry with an estimated revenue potential of $22 billion by 2022, and a CAGR of 14% throughout 2016–2022, the slightest innovation in current modules holds considerable profit potential. The thin film semiconductor market has evolved from the previous generation of semiconductor electronics with environmental benefits in mind. Having said this, it is equally important to put these market drivers in context. It is unnecessary to revisit the importance of semiconductor performance in the control and functional application of a complex system: from reducing fuel consumption in automobiles to supporting the built-in navigation memory in those same vehicles, semiconductor electronic components have covered it all.


Technological superiority over conventional silicon chips has motivated field experts to explore the immense opportunities offered by thin film semiconductors. Next-generation information technology requirements include flexible memory technologies for ultra-fast processing and nanoelectronics that can be accommodated in densely packed systems. The convergence of development routes in materials and applied science along with those in power electronics has opened up novel possibilities for several vertical industries. On the basis of manufacturing technology, the market can be studied under two deposition techniques: chemical vapor deposition (CVD) and physical vapor deposition (PVD).

The stark difference in process stages leads to different virtues for the two types of films. Depending on the working environment intended for the thin film semiconductor, one of the two deposition techniques is adopted. CVD-based thin film semiconductors are used particularly in the photovoltaic industry, electronic circuits, and communication equipment, among others, while PVD-based thin film semiconductors are used mainly in cutting tools and for protection in microelectronic circuits.
According to the latest findings of a research firm, the thin film semiconductor deposition market is led by CVD technology in terms of net annual revenue.

The chemical process involves mixing a source material with a volatile precursor that deposits a thin film of the semiconductor compound on the substrate. The film is deposited from the gas phase within a temperature range of 450°C to 1050°C. Multiple attributes account for the popularity of this technology over physical deposition techniques, the most impactful being its lower cost and energy requirements. Even in terms of administering the process, minimal effort is required to fabricate these films. The solid films generated by this method have a higher purity and performance index.

The CVD technique has been widely employed to complement modern dynamic random access memory (DRAM) technology, solar or photovoltaic modules, advanced sensors in smart devices, organic light-emitting diode (OLED) displays, and active matrix OLED (AMOLED) displays. The vertical industries with high expectations from the progress of this segment include IT & telecom, consumer & commercial electronics, energy & power, automotive, aerospace & defense, healthcare, and industrial production & manufacturing. PVD deposition, on the other hand, is used for producing thin film semiconductors used in cutting tools and for protecting microelectronic circuits.

With per capita energy consumption listed among the growth metrics for a region, nations have been trying to improve their performance accordingly. With regard to the intended increase in energy production through economically viable and ecologically sustainable sources, solar energy has gained much attention. Multiple research groups have dedicated their work to deploying chemical deposition technology to develop optimum-quality photosensitive films for energy conversion. The thin film photovoltaic cells so obtained can be produced at a comparatively lower cost owing to the usage of mixed source material, which is profitable for the manufacturer in the long run.

The technique itself ensures efficient usage of materials and minimal wastage. In addition, the absence of any complicated production procedure or external electrical force minimizes the total energy required during production. This makes it feasible for small-scale enterprises to adopt the technology in their practices and come forward with better products. First Solar, a leading thin-film solar energy solution provider, has set the perfect example for new entrants in the solar energy segment of how to channel their resources to secure their market position. Similar companies can benefit from integrating these innovative thin films and claim an edge over their existing competitors.

With regard to the source compounds, cadmium telluride (CdTe) has so far led the market revenue figures, yet it is expected to suffer a decline in the future due to safety concerns over the toxicity of cadmium. At the same time, CIGS will experience greater growth in overall market share in terms of volume. The quest for ultra-efficient compounds has revived the attention of leading corporate players in the thin film semiconductor market. To exploit the prevailing opportunities, they have restructured their investment agendas, channeling a large portion toward research and development activities. We await the resulting transformation of the market in the near future.

Your questions and comments would be greatly appreciated. Thank you for reading.


Foggy 5G Forecasts Coming into Focus
by Roger C. Lanctot on 11-27-2016 at 12:00 pm

My colleague and automotive safety system guru at Strategy Analytics, Ian Riches, is fond of citing Amara’s Law. Named for Roy Amara (1925-2007), researcher, scientist, forecaster and long-time president of the Institute for the Future, the “Law” states: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

The reverse may be true of 5G wireless technology. A lot of observers appear to be underestimating the short-term impact of 5G and exaggerating the long-term as if to say: “Yeah, 5G is going to be awesome, but it’ll be a decade before it has an impact.”

Some of these 5G deniers are sandbagging the latest evolution of cellular technology because they are heavily invested in current-generation LTE technology or have a horse in the race to connect cars to each other in the form of 802.11p dedicated short-range communication technology. I understand the lack of enthusiasm for 5G among LTE enthusiasts, but I can’t abide the doubting Thomases from the DSRC camp.

Hopefully all involved got a wake-up call last week from Ericsson which decreed that standardized 5G networks will be operational by 2020 with 25% of all North American subscriptions on 5G by 2022. Ericsson has no interest in exaggeration. It isn’t good for business. If Ericsson says 5G is knocking on the door, I’m letting 5G in … to my head, to my pocket and, sooner than most think, to my car.

Still, there is the problem of the short-term. Amara’s Law was Amara’s way of noting that analysts are prone to hyping new technologies while underestimating the amount of time it may take to incubate a new technology to bring it to full maturity. Of course, researchers and analysts have a powerful interest in hyping the short-term prospects: the bigger the hype, the more likely market reports are to be snapped up by willing clients.

But clients aren’t being naive in buying reports with over-hyped forecasts. Companies seeking funding or seeking to be acquired might well want to purchase a report that paints a rosy picture of prospects for a particular market, technology, company or industry. How else are they going to get the attention of potential customers, investors or their boards of directors?

5G wireless technology has become a little like fully automated cars. Nearly every day some new study is forecasting an earlier than previously anticipated arrival of fully autonomous cars, while some respected expert is claiming full automation is decades away.

With 19 automated fleets already plying controlled-use areas I am inclined to buy the early onset assessment of automation vs. the over-the-horizon outlook from skeptics. Even my colleague, Ian, has bumped up his short-term AND long-term outlook for all levels of vehicle automation.

The same holds true for 5G. But there is a separate scenario within the cellular industry, where generational transitions can, indeed, take many years to complete.

Disagreement among wireless experts can influence implementation outcomes. The resulting confusion threatens to impede the adoption of new technologies as car makers, in particular, may cling to more familiar solutions.

In the case of 5G, forecasts of distant vs. near-term adoption have given some auto makers pause in their planning for implementing embedded connections. Why bother with LTE, the thinking goes, if 5G is just around the corner. Conversely, an equally valid thought process might be: “Best to put in LTE now, because 5G adoption is so far off.” It all depends on what you believe.

The crazy reality is that both viewpoints are accurate. 5G cellular technology is much closer to market implementation and adoption than most people believe, but that adoption will be spotty and regional, even if it is rapid. The good news is that 5G is an evolution of existing LTE technology so the transition should be less jarring than previous generational shifts.

SOURCE: Ericsson “Mobility Report” 11-2016
Most interesting of all as far as 5G is concerned is the involvement of the automotive industry in setting and testing the standard. For the first time auto makers and wireless carriers are actually seeking common ground around the creation of the new standard. In fact, the priorities of auto makers are in the forefront as the use cases are particularly suited to safety and smart city applications.

Is 5G technology a decade away? I think not. Ericsson has described the rationale behind its perspective in the just-released “Mobility Report.”

Ericsson “Mobility Report” – http://tinyurl.com/jeet6pg

In fact, Ericsson has a heavyweight partner in bringing this optimistic outlook to fruition: Huawei Technologies. I am attending Huawei’s Mobile Broadband Forum this week. The future looks bright indeed from the 16th floor of the New Otani Makuhari hotel and the automotive industry is an important part of that future – for the first time.


Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


CEO Interview: Albert Li of Platform DA
by Daniel Nenni on 11-27-2016 at 7:00 am

Platform Design Automation, Inc. (PDA) recently closed a US$6 million pre-Series A investment round, and the company has shifted its focus from providing SPICE modeling related software and services to forming a complete AI-driven ecosystem from probing to simulation. Albert Li was the GM of Accelicon, a leading EDA tool and service vendor for device modeling that was acquired by Agilent in 2012. Albert is now founder and CEO of PDA, a four-year-old high-tech company that provides a comprehensive set of products and services covering semiconductor device characterization instruments, device modeling and PDK validation software.


The SPICE modeling market is saturated and shrinking; what new products or technologies has PDA developed to attract a major investment?
Yes, my team was known for providing SPICE modeling software and services to leading semiconductor companies, and it is also true that the market is shrinking due to the consolidation of semiconductor manufacturing and design companies. But as processes continue to scale down, we discovered a strong need for faster device characterization solutions to obtain large samples of measurements that account for the increasing process variations; having sufficient silicon data is really the key to enabling accurate simulations. So we expanded our scope to device characterization solutions, where we have a lot of hands-on experience and know-how. We have also been working on optimization technologies (for model extraction purposes) for more than a decade, so we were among the pioneers in applying machine-learning algorithms to achieve faster simulations, such as statistical simulations, and we now apply these algorithms to achieve faster measurements.

Our first device characterization instrument is the NC300, the world’s fastest 1/f noise characterization solution, providing a 10X speed improvement over other products on the market; for production tests it can achieve a 100X speed improvement thanks to our AI-driven technologies. The NC300 was quickly adopted by leading semiconductor companies, which gave us the confidence to develop new semiconductor parametric test solutions with broader scope. We combined machine learning algorithms, user know-how and a huge amount of previous data and experience for the training, and these have generated very promising results. We will soon release a new product line that achieves much faster parametric testing speed for semiconductor applications.

What are the products that PDA is currently offering?

There are three main EDA products. The first one is for device modeling and QA, called MeQLab, and it can be used for applications like:

  • Device modeling for FinFET and planar devices
  • Statistical modeling and mismatch
  • High voltage device modeling, sub-circuit modeling
  • Built-in modeling library and model card QA
  • SRAM modeling
  • Noise modeling and circuit analysis
  • Design or process optimization

PQLab is a tool for automating the QA of PDK libraries, saving engineering time. It can be used by:

  • Foundry PDK developers needing to QA a PDK
  • IC designers verifying that a foundry PDK meets their requirements
  • IC designers comparing two or more PDKs

For 1/f noise measurements and characterization, there is the NC300 system, which can be applied at the wafer level to devices, circuits or even sensors.

You have been working on device modeling throughout your career, what are the challenges in device modeling posed by the latest process technologies such as FinFET?
Again, the increasing process variations are the key challenge; a lot of layout dependencies, such as stress-related layout dependencies and lithography-induced dependencies, need to be taken into account during device modeling. For FinFET, even though the new process introduced more dependencies – sometimes even the shape of metal2 over a device has a significant impact on device performance – the rigorous design rules allow only a few fixed layout combinations, so the required modeling effort is actually less. To model the process variations accurately, the key is still to obtain sufficient silicon data to produce accurate statistics, and again faster measurements are greatly needed. For designers, the foundry corner models or statistical models are often conservative; in my experience, having some silicon data on the design side to adjust the model or corner is the quickest way to achieve design margin, as the model/design kit itself is a source of margin loss.

What are the future technology trends for SPICE modeling?
Having sufficient data is really the key to the problem: if the data is sufficient, a model can be automatically generated or synthesized. The concept has already been applied to passive device modeling, such as modeling inductors, where EM solvers play the role of providing more “data” or acting as synthesizers that generate models automatically. We’ve been working on the same concept for active devices for quite a while. One way is to enable faster measurements so that a lot more data can be collected; the other is to generate a huge amount of data from limited silicon through machine learning, which requires a deep understanding of device behavior, device modeling knowledge, data for the training and years of training experience. We have already successfully applied these methodologies to our service projects, and tedious tasks such as model re-targeting are now done purely by machines.
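
As a toy illustration of the data-driven idea (not PDA's actual algorithms or models), the sketch below fits the two parameters of a simple square-law MOSFET model to a handful of measured saturation-region I-V points with a generic optimizer; in practice the models, data volumes and learning methods are of course far more sophisticated.

```python
# Toy illustration of data-driven model extraction (not PDA's methodology):
# fit a square-law MOSFET model to measured saturation-region I-V data.
import numpy as np
from scipy.optimize import curve_fit

def id_sat(vgs, k, vth):
    """Simple long-channel saturation model: Id = k/2 * (Vgs - Vth)^2."""
    return 0.5 * k * np.maximum(vgs - vth, 0.0) ** 2

# "Measured" points; in reality these come from fast parametric test over many dice.
vgs_meas = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
id_meas  = np.array([2e-6, 9e-6, 22e-6, 40e-6, 63e-6, 90e-6])

(k_fit, vth_fit), _ = curve_fit(id_sat, vgs_meas, id_meas, p0=[1e-4, 0.4])
print(f"extracted k = {k_fit:.3e} A/V^2, Vth = {vth_fit:.3f} V")
```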

Machine-learning-enabled model retargeting: from tweaking model parameters to just defining the targets and letting the machine finish the job automatically.

What are other areas in semiconductor you see that Machine Learning can help?
We’ve published three papers in the past few years related to machine learning, in which we used machine-learning algorithms to help speed up soft error simulation of logic circuits, automate statistical modeling, and automate RF front-end design, so the areas of machine-learning application are massive. Algorithms, expertise, data and risk are the four key components for assessing machine-learning applications. Take device characterization and modeling as examples: we have been working on machine learning algorithms for over a decade, we are definitely experts in device characterization and modeling, and we also have a huge amount of data and models from previous projects. These have enabled us to train our software and instruments to achieve faster measurements and automatic model generation.

How do you position PDA?

Our products address device characterization, modeling and PDK validation, and all of our products will be driven by machine learning to achieve faster measurement and faster simulation. We have three dedicated Ph.D.s from Tsinghua who have been working on algorithm development for years, and we continue to increase our investment in experimenting with different algorithms and training. So I would position PDA as a solution company with AI as its core competence, and our ultimate goal is to continuously improve speed and efficiency for our customers from probing to simulation.

Also Read:

CEO Interview: Mike Wishart of efabless

CEO Interview: Chouki Aktouf of Defacto Technologies

Executive Interview: Vic Kulkarni of ANSYS


REUSE 2016 is Next Week at the Computer History Museum!
by Daniel Nenni on 11-26-2016 at 7:00 am

The first REUSE Semiconductor IP Tradeshow and Conference is next week at the Computer History Museum in Mountain View, CA. Given the importance of IP I would strongly suggest attending this event. The presentation abstracts are up now and there are a few I want to highlight as they are companies that we work with on SemiWiki:

HBM2 IP Subsystem Solution for High Bandwidth Memory Applications:
The most common memory requirements for emerging applications, such as high performance computing, networking, deep learning, virtual reality, gaming, cloud computing and data centers, are high bandwidth and density based on real-time random operations. High Bandwidth Memory (HBM2) meets these requirements and delivers unprecedented bandwidth, power efficiency and a small form factor.

HBM2 (x1024) offers a maximum bandwidth of up to 256 GBps, compared to 4 GBps with DDR3 (x16), at one-third of the power. HBM2 and 2.5D silicon interposer integration unlock new system architectures, causing HBM2 ASIC SiPs (system-in-package) to gain popularity among OEMs. One of the key IPs used to develop these ASIC SiPs is the HBM IP subsystem, which consists of a controller, PHY and die-to-die I/O. Open-Silicon’s HBM2 IP subsystem fully complies with the HBM2 JEDEC® standard.

The IP translates user requests into HBM command sequences (ACT, Pre-Charge) and handles memory refresh, bank/page management and power management on the interface. The high performance, low latency controller leverages the HBM parallel architecture and protocol efficiency to achieve maximum bandwidth. The IP includes a scalable and optimized PHY and die-to-die I/O needed to drive the interface between the logic-die and the memory die-stack on the 2.5D silicon interposer.

Open-Silicon’s HBM2 IP subsystem addresses the implementation challenges associated with interoperability, 2.5D design, overall SiP design, packaging, test and manufacturing. Multiple built-in test and diagnostic features, such as probe pads and loop-back for issue isolation within the various IP subsystem components, not only address the test and debug challenges, but also help in yield management and yield improvement while ramping HBM2 ASIC designs into volume production. Open-Silicon’s first HBM2 implementation in TSMC 16nm FF+ features a 2Gbps-per-pin data rate at up to 5mm trace length. This enables a full 8-channel connection from a 16nm SoC to a single HBM2 memory stack at 2Gbps, achieving bandwidths up to 256GB/s.
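
The headline numbers follow directly from the interface width and the per-pin rate; a quick back-of-the-envelope check of the figures quoted above (the DDR3 data rate is an assumption for the comparison):

```python
# Back-of-the-envelope check of the quoted HBM2 bandwidth figures.
hbm2_pins = 1024            # HBM2 x1024 interface (8 channels x 128 bits)
hbm2_rate = 2.0             # Gbps per pin, as quoted for the 16FF+ implementation
print(f"HBM2 x1024 @ {hbm2_rate} Gbps/pin -> {hbm2_pins * hbm2_rate / 8:.0f} GB/s")   # 256 GB/s

ddr3_pins = 16              # DDR3 x16 for comparison
ddr3_rate = 2.133           # Gbps per pin (DDR3-2133); assumed for illustration
print(f"DDR3 x16   @ {ddr3_rate} Gbps/pin -> {ddr3_pins * ddr3_rate / 8:.1f} GB/s")   # ~4 GB/s
```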

2:30pm-3:00pm Boole Room: Dhananjay Wagh, Principal IP Architect & Innovation Manager of Open-Silicon

A Vibrant 3rd Party IP Ecosystem is Critical to the Growth of the Semiconductor Industry
The third-party IP ecosystem plays a critical role in the growth of the semiconductor industry. Taher Madraswala, president and CEO of Open-Silicon, will discuss the state of the IP market and how the functional integration of IPs is driving new market applications.

He will discuss the importance of choosing the right IPs in order to achieve first time silicon success, as well as the benefits of leveraging third-party IP compared to internal IP development. Taher will describe case studies of complex SoCs, completed for leading OEMs, that were highly successful through leveraging the third-party IP ecosystem. Designers are finding new ways to produce less expensive SoCs with 2.5D interposer based system-in-package (SiP) designs, which enable a mix and match of chip/IP components at optimum process nodes.

This approach will greatly increase the reuse of IP developed at older process nodes. Additionally, as IP integration costs are increasing due to the rising number of discrete IP blocks in the current generation of SoCs, designers are leveraging IP subsystem-based design methodologies to lower development cost and risk. Continued developments in the third-party IP ecosystem, for new trends like 2.5D SiP and IP subsystems, will enable the semiconductor industry to continue to innovate and evolve.

4:30-5:00pm Hahn Auditorium: Taher Madraswala, President and CEO of Open-Silicon
And don’t forget I will be giving away signed copies of “Mobile Unleashed” at the cocktail reception from 5pm-7pm. Register for REUSE 2016 for free HERE. You can read more about Open-Silicon on SemiWiki HERE.

I hope to see you there! By the way, “Mobile Unleashed” currently has a five star rating on Amazon!

Also read: Bringing the Semiconductor IP Community Together!


Automobility: The End of Infotainment
by Roger C. Lanctot on 11-25-2016 at 4:00 pm

Padmasree Warrior, CEO and chief development officer of NextEV USA, kicked off day two of the LA Auto Show’s Automobility LA with a powerful perspective on the current state of the automotive industry. She rattled off the usual litany of congestion, highway fatalities, emissions and changing usage and ownership models before defining NextEV’s vision of driving, built around the car as a safe, green companion.

Warrior’s voice is an important one: she is the only female CEO in the automotive industry other than Mary Barra at General Motors, and she made a headline-grabbing move from chief technology officer of Cisco to NextEV. Her vision of automotive architecture is a comprehensive reimagining of the automotive hardware and software stack to focus on safety, a connectivity gateway, a firewall and Ethernet-based network technology.

She noted that consumers are losing minutes, hours, weeks and years of their lives stuck in traffic while car makers offer nothing more than buttons, gadgets and displays. She said innovation must be focused on safety and autonomy in order to give back time to consumers to restore the attractiveness of the car as an object of aspiration.

Through her comments, Warrior highlighted the fact that Apple and Alphabet, with their CarPlay and Android Auto integrations, have commoditized in-vehicle infotainment. The unintended consequence is that infotainment is no longer essential or differentiating. What is important is that the car delivers a safe operating environment acting as a companion to the owner/driver.

The car should seamlessly detect and provide for the needs and desires of the occupants, with an emphasis on avoiding harm and distraction. The safety-first message is at odds with the bells-and-whistles obsession of auto makers and their suppliers. The de-emphasis of infotainment was a powerful counter-argument to the news item that started the week of the show early Monday morning – Samsung’s acquisition of Harman International, the number two supplier of automotive infotainment systems by revenue, according to Strategy Analytics estimates.

With her comments at Automobility, Warrior nearly singlehandedly let the air out of the tires of financial handicappers who hailed the Samsung acquisition – with some, including commentator Jim Cramer, asserting that Apple missed an opportunity. For Warrior, as she concluded her presentation, “It’s not about driving. It’s about being.”

It’s a compelling statement that only heightens the anticipation of what NextEV may have in store. It’s challenging to bring a new vision to a 125-year-old industry served by dozens of car makers, but Warrior appears to have done just that.

Delivering on that vision will be Warrior’s next challenge. Fellow automotive EV startup Faraday Future kicked off its own ambitious program with a snazzy supercar announced at CES 2016 along with plans for a billion-dollar factory across town from Tesla Motors’ own gigafactory. We are still waiting for the full Faraday vision to unfold – and now we await NextEV.

But the message was clear and important in the wake of the Samsung-Harman hookup. We are seeing the end of infotainment as we know it. Consumers are looking for a safe, green companion, according to Warrior. Buckle up.


Microsoft Ignite 2016: Stepping On The Enterprise Accelerator
by Patrick Moorhead on 11-25-2016 at 12:00 pm

At the Microsoft Ignite Conference in Atlanta, Microsoft announced a series of major capabilities across Windows, Office, Azure, Dynamics and Cortana. Microsoft’s CEO Satya Nadella highlighted, as he has in so many of his other big-tent speeches, the importance of enabling IT to drive the digital transformation in businesses today. While I’ve heard the digital transformation storyline many times from many vendors, Nadella always finds a way to make it differentiated, if not compelling and interesting. He just gets it.


Microsoft CEO Satya Nadella and “Neon” Deion Sanders ham it up on-stage with NFL chat-bots at Ignite (Photo credit: Patrick Moorhead)

At Ignite, Microsoft highlighted major improvements to security, intelligence, and its cloud offerings under the theme of “empowering” IT pros. Ignite isn’t about departments or end users going around IT; this is about IT. Those were the folks in the audience, so this makes perfect sense. Microsoft already has a good reputation for enabling IT pros, tied to Microsoft’s cornerstone products like Windows, Office, dev tools like Visual Studio, server-side products like Exchange and SharePoint, and Azure. Microsoft’s job at the show was to demonstrate that it can accelerate that into the future while not ignoring its current investments.

The Ignite announcements are all part of Microsoft’s gradual transformation towards more intelligent and secure platforms that utilize machine learning and AI to transform the way we all get stuff done. I wanted to talk about a few security, cloud and AI announcements from the show and share my quick take on them. I’ll be following up in detail on specific areas in the future.

Security and Application Guard
While it’s always a challenge to make security exciting, the most important part of Microsoft’s announcements at Ignite were the company’s improvements to security, driven by the realities businesses attending Ignite face today and will face in the future. When you consider the threat plane widening with mobility and IoT, and threats getting deeper with state-sponsored hacking, security has to be the first thing that gets talked about before any feature or capability enhancements.

I believe the most interesting security announcement was Windows Defender Application Guard. It’s designed to make Microsoft’s Edge browser the most secure browser for the enterprise, powered by virtualization-based security technology and containers. I can still see the classic enterprise using Internet Explorer for older web apps, but using Edge with Application Guard for everything else, including newer enterprise apps. If users run Application Guard and hit a website that isn’t white-listed, the page visually gets flagged; users can still use the website, but with limited functionality. It can’t screen scrape, access the file system, or reach network resources.

Application Guard is one of those “why didn’t this happen before” ideas and gets you wondering, “why didn’t we get this before?” Architecturally, Windows 10 improved the ability to secure everything, adding a system container to place all trusted executions inside. Microsoft made everything more secure by removing it from the classic Kernel-Platform Services-Apps stack and tying security directly to a separate “security stack” that relies on hardware VBS, or “virtualization-based security”. VBS is tied right to ARM Holdings, Advanced Micro Devices and Intel hardware-based virtualization.


The Windows 10 stack separates OS from security (Photo credit: Microsoft)

While I have talked about Application Guard only in the context of the Edge browser, expect to see it across other applications over time.

Additionally, Microsoft’s Windows Defender Advanced Threat Protection (WDATP) and Office 365 ATP now share security intelligence mutually across both services. Office 365 ATP threat protection will also be extended to Word, Excel, PowerPoint, SharePoint Online and OneDrive for Business. Microsoft also talked about their upcoming Secure Productive Enterprise E5, which is designed to offer the most advanced security and productivity across all of Microsoft’s core products. This is being paired with enterprise mobility to enable expanded security to help the move towards the cloud and mobility.

Azure Cloud

Microsoft surprised many with their ascension to top-tier public cloud player status. In fact, they’re the #2 cloud player by market share, second only to Amazon.com’s AWS. Microsoft said at Ignite they had 120K new customer sign-ups monthly. Yes, monthly.


Microsoft offers Azure on-prem and as IaaS, PaaS, and SaaS cloud (Photo credit: Patrick Moorhead)

They have been successful by investing billions into global datacenters, creating a full (IaaS, PaaS, SaaS) cloud stack, supporting what would once have been considered taboo like Linux and Docker containers, and through a good understanding of what enterprises really want related to SLAs. Microsoft gets IT; many others don’t. Just because a company is great in consumer cloud services doesn’t make it good at business services. In fact, history shows that it’s difficult to nearly impossible to do well in commercial markets if your core is consumer, and vice versa. Some of the improved capabilities announced at Ignite include new Azure monitoring abilities combined with updates to their operations management suite.

Microsoft’s cloud approach is “have it your way”, supporting a “hybrid” model that spans from on-prem private cloud at one end of the spectrum to a full SaaS model at the other. Azure Stack is a huge play here, and at Ignite Microsoft announced its three lead partners for the platform: Hewlett Packard Enterprise, Dell EMC, and Lenovo. Microsoft also announced the second technical preview of Azure Stack. Interestingly, only Lenovo doesn’t sell its own cloud stack, as Dell EMC has vCloud Air/Suite and Hewlett Packard Enterprise (HPE) has Helion. I’ll be keeping my eye on the Lenovo-Microsoft Azure cloud play very closely. If Microsoft can be the first company to seamlessly connect on-prem legacy to on-prem cloud to public cloud, they and their partners could find themselves in the cloud driver’s seat. Ironic, right?

Democratizing AI and FPGA “magic blue crystals”
At the end of day one, Microsoft CEO Satya Nadella came back on stage to deliver the message that Microsoft “democratizes AI” by providing AI at every level, be it agents (Cortana), applications, services or infrastructure. Microsoft does, in fact, provide AI and ML capabilities in many different forms and is taking different but related approaches versus Google, Amazon.com, or IBM. Nadella delivered the vision but also showed some cool NFL AI chat-bot examples aligned with their CaaP (conversations as a platform) strategy. Somehow, Microsoft managed to find a fun way to get “Neon” Deion Sanders on stage to ham it up. I thought it worked and wasn’t cheesy.

At the end of Nadella’s AI stage presence, Microsoft intelligently focused on one of the “magic blue crystals” (my description) that distinguish Microsoft: FPGAs. Microsoft got wonderfully geeky, appealing even to my chip-making past, going into AI workload benchmarks. It was great. They described what they called the world’s largest deployment of custom-developed FPGAs, spanning 15 countries and 5 continents. This is potentially huge for the future of Microsoft’s cloud platform, as it signifies Microsoft’s own investment in FPGAs as a way to accelerate machine learning and AI. Senior analyst Karl Freund wrote about Microsoft’s FPGA Ignite disclosures here.


Behold the power of Microsoft FPGA deployed at Azure (Credit: Patrick Moorhead)

I find it immensely ironic that hardware is held up as a differentiator.

Wrapping up

At Ignite, Microsoft needed to show IT that it not only understood them and could support them today, but could also be a reliable and innovative supplier of next-generation technology for tomorrow. I believe Microsoft did both as they sprinkled AI dust on Windows, Office, Dynamics, and security while wowing the crowd with details on their AI and intelligence future, with a hat tip to their hardware differentiation with FPGAs. And in the center of it all is Azure and Azure Stack, the glue that holds everything together. It’s amazing to me how far Microsoft has come in the past two years, and Ignite was a good showcase for Microsoft to show this off to their IT customers.


Is your #IoT cup half empty or half full?
by Diya Soubra on 11-25-2016 at 7:00 am

While the optimist and the pessimist argued about whether the cup was half empty or half full, I drank it!

I had seen the above statement a while ago and it jumped to mind while I was considering the state of #IoT today. That funny statement describes exactly the mindset of the players in the market. While the majority argue back and forth about privacy, security, data formats and alliances, others are busy deploying solutions and capturing revenue.

Yes, there are many items that we would prefer to be in a different stable state but there is nothing really blocking deployment, proof being the variety of success stories out there. Here is my take on the items at the centre of the debate.

Security

We all agree that the system needs to be secure end-to-end. While the argument continues about the required level of security, the cryptography algorithms and the secure protocols to use, others are deploying nodes with today’s state-of-the-art security and learning from field experience what is enough and what isn’t. The key is not just how to secure the solution but how to recover it once it is hacked, since it is by now clear that all known security of today will surely be hacked in the near future. Make sure the node can be updated with new firmware.

Privacy

This is my favourite item. People still want to believe that they have some privacy. Well, guess what: if you carry a smartphone then you have already given up all your privacy. But let us avoid that discussion. What about the thousands of #IoT applications that are not about reporting people-related information? Why not start capturing revenue with those, and once a privacy decree is handed down then go after people-related #IoT markets. Sell the products that you have today.

Data Format

It does not matter what format the data is in when it arrives from the node, since it has to be reformatted anyway to fit the analytics system it is destined for. The conversion will be done in the cloud using web technology, where we have virtually infinite, affordable compute. Why wait for that magic universal database that will have all formats predefined for all types of possible sensors? Conversion is simple and affordable.
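
As a trivial, made-up example of that point (the payload fields and target schema below are invented), normalizing whatever a node happens to send into the record format your analytics expects is a few lines of cloud-side code:

```python
# Made-up example: normalize two different node payload formats into one schema
# before handing the record to the analytics pipeline.
import json

def normalize(raw: bytes) -> dict:
    msg = json.loads(raw)
    if "temp_c" in msg:            # hypothetical format used by vendor A nodes
        return {"device": msg["id"], "temperature_c": msg["temp_c"], "ts": msg["t"]}
    if "temperature" in msg:       # hypothetical format used by vendor B nodes (Fahrenheit)
        return {"device": msg["dev"],
                "temperature_c": round((msg["temperature"] - 32) / 1.8, 1),
                "ts": msg["timestamp"]}
    raise ValueError("unknown payload format")

print(normalize(b'{"id": "a1", "temp_c": 21.5, "t": 1480000000}'))
print(normalize(b'{"dev": "b7", "temperature": 70.7, "timestamp": 1480000001}'))
```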

Radios

What is all the fuss about which radio standard to use? Yes, there are many different radio standards in use today and each is better suited for a specific product market. Eventually, mobile operators will figure out a reasonable business model and NB-IoT will win over everything else, but there is no need to wait since that day may never come. The radio is just the means to transport the data. As technology improves we will see new variations. Go with the flow.

Protocols
IP to the edge, CoAP, mesh, Thread, or any other discovery and transport protocol – the list of acronyms is very long. Does it really matter that everyone should agree with your choice? It would be good if everyone agreed to use the same set, but they will not for a while. Meanwhile, whoever is shipping is slowly becoming the de-facto standard. So do not wait, populate.

Cost

The serious revenue is in the analytics and predictive services. The end points are just the means to generate the data that fuels the analytics engine. There is no reason to waste any effort or resource on a cost reduction exercise for endpoints, since economies of scale will kick in as volume picks up. Besides, whatever the cost of the endpoint may be, it is for sure tiny compared with the revenue generated from the analytics. Would anyone think twice about investing $10 when it will produce $1000 or more?

I prefer to continue down the road while watching for the pot holes instead of staying in my spot waiting for someone to pave the road for me and everyone else. Lead, don’t follow.

That’s what I think. What do you think?