
It’s Better than SUPREM for 3D TCAD

by Daniel Payne on 12-06-2016 at 12:00 pm

Process and device engineers have a tough task modeling and simulating an IC process prior to fabricating silicon; however, this approach is much better than the 1970s alternative of running multiple lots of wafers and then making measurements to see if your node was meeting specifications. Some 30 years ago a process simulator named SUPREM came out of Stanford University, and both industry and researchers adopted it. Nothing lasts forever, though, and that is the case for SUPREM as well, because alternative tools for process simulation are now available.

An ideal process simulator should have many capabilities, like:

  • 1D/2D/3D process simulation
  • Support for adding new materials
  • Support for creating new processes
  • A low learning curve
  • A migration path from SUPREM decks
  • Flexible methods for simulation meshes
  • Layout-driven process simulation

When I think of TCAD tools, Silvaco comes to mind, and they are presenting a webinar on December 8th at 10AM Pacific. More info here.


Simulation of an SRAM cell

Silvaco has both a 1D/2D/3D process simulator (Victory Process) and a 2D process simulator (Athena), and the focus of this webinar is on Victory Process. Here’s what you will learn at the webinar:

 

  • Key differences between Athena and Victory Process
  • How to translate legacy SUPREM-based decks to VP2D using the DeckBuild utility A2VP
  • How to set, control, and refine/unrefine the simulation mesh
  • How to use layout-driven process simulation
  • How to use basic geometrical and physical etching and deposition models
  • How to export 2D structures for visualization and device simulation

The presenter for this webinar is Dr. Misha Temkin, a Senior AE at Silvaco in the TCAD group. Misha began his career some 40 years ago in the USSR and has authored papers and books on TCAD topics. Dr. Temkin has been with Silvaco since 1989 and is an expert in both the Athena and Victory Process tools.


The Future of FPGA Prototyping!

by Daniel Nenni on 12-06-2016 at 7:00 am

This interview originally appeared as the foreword to our book “Prototypical: The Emergence of FPGA-based Prototyping for SoC Design” but I thought it would be worth publishing for those of you who have not downloaded it yet. I also wanted to mention that our friends at S2C are currently offering a 50% discount on the Prodigy Kintex UltraScale Prototyping Solution Package to get you started on your prototyping journey.

Remember, the health of the semiconductor industry revolves around design starts which translate to wafer starts and ultimately product shipments. If all goes well the semiconductor industry continues to grow and we live happily ever after, right?

Unfortunately, all does not always go well with design starts, and nobody knows that better than Mon-Ren Chene, the CTO of S2C. His 30+ year semiconductor career spans EDA, FPGA, emulation, and FPGA prototyping, so he is definitely the right person to ask about a successful design start:

Nearly two decades ago, during our time at Aptix, my S2C co-founder Toshio Nakama and I recognized the power of prototyping. At that time, prototyping was only accessible to large design houses with the budget and means to employ a prototyping architecture. We also recognized that FPGAs had become a popular alternative to the much more expensive and rigid ASICs. It was then that we both decided to team up to develop a prototyping board around an FPGA, and S2C was born. Our commitment to our customers has been to push the limits of what FPGA prototyping can do to make designing easier, faster, and more efficient.

Our goal has always been to close the gap between design and verification, which meant that we needed to provide a complete prototyping platform, including not only the prototyping hardware but also the sophisticated software technology to deal with all aspects of FPGA prototyping. Fast forward to today and you’ll find that FPGAs and FPGA prototyping technology have advanced so much that designers and verification engineers can no longer ignore the value they bring, especially when dealing with the very large and complex designs that we see today. These advances have poised FPGA prototyping to become a dominant part of the design and verification flow.

This book will hopefully give you a sense of how this is achieved. But what’s next for FPGA prototyping? Having dedicated my time to working with our customers on the evolution of FPGA prototyping, I have figured out two things: FPGA prototyping needs to be part of the conversation early in the design process, and FPGA prototyping needs to move to the cloud. What do I mean by these two statements? Well, let’s break it down.

Design for FPGA Prototyping

Making FPGA prototyping part of the design process early means actually thinking about how the design will be prototyped in an FPGA as you design – Design for FPGA Prototyping. Designing for prototyping will significantly speed up the FPGA prototyping process downstream, aiding synthesis, partitioning, and debug. I’ve outlined six ways that this is achieved:

(1) Prototyping-friendly Design Hierarchies
Design architects can make FPGA prototyping much easier for the engineers who implement it by modifying the design hierarchy to work better in a prototyping environment. The engineers who perform implementation or verification usually have very little ability to improve prototyping performance once the design hierarchy is fixed. The need to partition down to the gate level can be removed if the size of each design block can be kept to one FPGA. Furthermore, modifying the design hierarchy early can help avoid pin congestion; a design often becomes very difficult to implement in an FPGA, or becomes very slow, because a central block has tens of thousands of signals that need to go to multiple blocks in different FPGAs. Design architects can also ease prototyping by providing guidance to their FPGA implementation team(s).

(2) Block-based Prototyping

Instead of hoping the entire design will magically work when mapped and downloaded to multiple FPGAs, bringing up subsystems of the entire design, block by block, allows quick identification of both design issues in a sub-block and any issues related to mapping the design to the FPGA(s). Block-based prototyping works especially well with designs that contain many 3rd-party IPs that also need a lot of real-time testing and early software development.

And very often, designers don’t even have the RTL source code for the IP blocks from 3rd parties (for example, ARM processors) and therefore cannot map the IP to the FPGAs themselves. This can be solved by requesting the IP provider to supply the encrypted netlist so that you can synthesize and partition the entire design while treating that IP as a black-box. As long as you specify the correct resources (LUT, registers, I/Os), the prototype compile software should take those resources into account when partitioning to multiple FPGAs. You can then integrate the encrypted netlist during the place and route stage.

I’ve come across customers that want to do an FPGA implementation but are reusing some very old blocks with only the ASIC netlist and without RTL. Implementation becomes very difficult since the details of the design are unknown. These legacy designs are usually only accompanied by a testbench. In this case, the best approach is to convert the ASIC gates to an FPGA and to use a co-simulation environment (such as S2C’s ProtoBridge™) to verify if the functionality of the block is correct before integrating it with the entire design. Unfortunately, this is still a painful process so designers should consider either not using those legacy blocks or re-writing them.

Note that a reconfigurable and scalable prototyping system is needed for a block-based prototyping methodology, as well as a robust partitioning and FPGA prototyping software flow.

(3) Clean and Well-defined Clock Network for Prototyping

Many ASIC designs have tens or even hundreds of clocks and most of them are just for power management/saving. Even with the most complex designs there are usually a few real system clocks plus some peripheral clocks such as PCIe and DDR. Peripheral clocks usually reside in a single FPGA which has the actual external interface pins and therefore are easy to implement. System clocks, however, need to go to every FPGA and therefore should be clean for FPGA implementation.

ASICs use a lot of gated clocks to save power. Today’s FPGA synthesis tools have advanced to take care of most gated clocks, but some may still go undetected and cause design issues. This can easily be avoided by creating two different RTL clock implementations, one for the ASIC and one for the FPGA, using IFDEF.

Internally generated clocks can also be a problem in an FPGA prototyping environment, as they all need to get onto the FPGAs’ global clock lines and synchronize among all the FPGAs. A multi-FPGA prototyping system will have a limit on how many of these global clocks can be supported, so the number of internally generated clocks should be restricted (or, again, use two implementations in the RTL: one for ASIC, and one for FPGA).

(4) Memory Modeling
ASICs support many different types of memories, while FPGAs usually support two: synchronous dual-port memories, or custom memories built from registers and LUTs. The latter consumes large amounts of logic resources and might cause place-and-route congestion. Most ASIC memories can be re-modeled to take advantage of the block memories in the FPGA, but a manual process may be required. Again, instead of leaving memory modeling to the engineers trying to implement the ASIC design in an FPGA, a better approach is to have the architects plan the design with two memory implementations, one for the ASIC and one for the FPGA. The RTL designers then code the two implementations using IFDEF, and FPGA prototyping becomes easy: just instantiate the correct memory implementation.

(5) Register Placement on the Design Block I/Os

FPGAs usually have plenty of register resources available, but most ASIC designs try to use fewer registers to save area and power. Ideally, all block I/Os should be registered for FPGA implementation to achieve the best results. At a minimum, all outputs should be registered so that no feed-through nets (which can cut system performance in half) are created after design partitioning. The result is noticeably higher FPGA prototyping performance.

(6) Avoid Asynchronous or Latch-based Circuits
Asynchronous circuits and latch-based designs are FPGA unfriendly. It is very hard to fine-tune timing in an FPGA, with every FPGA having to be re-placed and re-routed multiple times. These issues become even worse when the asynchronous circuits have to travel across multiple FPGAs.

Moving to the Cloud

We are living in an age where design teams no longer reside in one geographic location. No matter how big or small, companies have multiple design teams in multiple locations. A cloud-based FPGA prototyping system is an ideal way for dispersed teams to manage the prototyping process and resources.

Furthermore, as smaller IoT designs proliferate in the market, FPGA prototyping must become accessible to these designers. Today’s FPGA prototyping, although effective, can be costly for smaller IoT designers to adopt. Board reuse becomes less viable, so costs cannot be amortized over multiple design starts. By moving to the cloud, FPGA prototyping solutions can become a shared resource and thus reduce cost inefficiencies. The future of FPGA prototyping is strong. It has demonstrated, and will continue to demonstrate, that it is one of the most effective solutions for realizing the full potential of any design.


It’s Apple TomTom Time Again

by Roger C. Lanctot on 12-05-2016 at 4:00 pm

It’s December and time for the annual Apple-should-buy-TomTom rant. Of course, we know Apple prefers younger, smaller companies with brighter and clearer long-term prospects, but we also know Apple navigation sucks and if there is one thing TomTom does well it’s navigation… and traffic.

To stir the pot, TomTom released a statement two days ago:

“Amsterdam, 02 December 2016 – Responding to questions in Dutch media today, TomTom (TOM2) announces that within the Consumer business unit 170 roles have been reorganized. Of those roles, 110 will move to other areas within the company and 60 roles are made redundant, of which 24 are in The Netherlands.”

The announcement points to nothing in particular other than disappointing results from the consumer group. Regular followers of TomTom’s quarterly earnings will be well acquainted with the painfully slow evolution of the company’s business away from consumer-focused portable navigation devices and toward automotive, fleet, and service- and software-oriented businesses. TomTom has cleverly launched products for golfers along with fitness devices and its Bandit action cams, but all of these categories have yet to fill the crater created by declining PND sales.

A few things are different this year. First of all, on this date, one year ago, TomTom’s stock was at a nine-year high of $12.10. For most of 2016 the stock has traded in a narrow range after falling from that high.

Second of all, TomTom has built its fleet business to a last-reported total of 671,000 subscribers, making it one of the largest fleet operations in the world, behind LeasePlan, Enterprise Holdings, UPS, FedEx, Verizon and a few others. The achievement ought to command a higher perceived value – especially given the frothy merger and acquisition environment.

The roster of potential acquirers is legion and includes Apple, Alibaba, Baidu, Alphabet or maybe a Tier 1 auto industry supplier (Panasonic, Sony, Continental), a car maker consortium (Toyota, Renault-Nissan, Mazda), or a carrier (Vodafone, AT&T). The list no longer includes Samsung.

The fact that neither Verizon nor Samsung saw fit to grab TomTom suggests that TomTom is either asking too much or is perceived as possessing too much baggage and/or overhead. Somehow, the existing TomTom book of business from Volvo, Ford, Volkswagen, Subaru, Renault-Nissan and others was not enough to catch Samsung’s eye. But, again, TomTom is no doubt looking for an $8B-$10B exit at least.

There is reason to hold out for that kind of pricing given Verizon’s recent Fleetmatics acquisition ($2.4B) or GTCR’s acquisition of video telematics provider Lytx in February 2016 for $500M. A wave of dashcam companies has elbowed its way into the market, including Nauto, Navdy, Netradyne, Carvi and others.

Most of these dashcam wannabes are targeting usage-based insurance, automated driving and advanced driver assist technologies along with thinly veiled intentions to build their own map databases to compete with TomTom and HERE. TomTom already has a map. So why isn’t TomTom getting into the dashcam game? Good question. None of the financial analysts on the last earnings call asked it.

There is one ominous reason TomTom’s stock isn’t on the rise in the manner it was 12 months ago. The latest speculation around Apple is that the company is looking into using drones to enhance its map-making capabilities. The rumor seemed a little zany to me, but what isn’t zany is Apple’s courtship of the OpenStreetMap community and increasing preference for using OSM maps over TomTom maps in 100 overseas markets.

The narrative of an eventual or even possible Apple acquisition of TomTom has steadily unraveled during 2016. So this is my last Apple-should-buy-TomTom rant. TomTom is clearly an automated driving and, increasingly, a fleet play and, as such, remains a prime acquisition target.

TomTom, more than any other company, is in the existential throes of trying to determine whether it is, was or will be primarily a business-to-business or business-to-consumer company. The B2B call is strong, but the company’s DNA has long derived from B2C. It may take TomTom another year, but the time is rapidly approaching when the company must pick one or the other.


How The Snapdragon X50, World’s First 5G Modem, Puts Qualcomm Ahead Of The Curve

by Patrick Moorhead on 12-05-2016 at 12:00 pm


Regardless of what part of technology you come from, the entire tech industry has been talking about 5G. 5G will reshape the way we use mobile devices, deliver self-driving cars and smart cities, and even change the way we get content delivered to our homes. Some companies talk about it just to be part of the conversation, while others lead and set the way. Many wireless companies have been jostling for position in the upcoming 5G wireless world, and Qualcomm has shown incredible leadership in 5G.

The company showed significant strength in 3G and even more in 4G and doesn’t look to be letting up any time soon. Because of Qualcomm’s CDMA heritage and 4G expertise, it only seems natural that the company would also want a leading place in 5G. This is further exemplified by their exhaustive R&D in 5G as well as industry-wide partnerships with operators, infrastructure vendors and device makers. Qualcomm is looking to extend that leadership today at the 4G/5G Summit in Hong Kong by announcing the world’s first 5G modem, the Snapdragon X50.

Snapdragon X50 5G Modem (Credit: Qualcomm)


The first 5G modem, Snapdragon X50

To my knowledge, the Snapdragon X50 5G modem is the first and only 5G modem in existence. Qualcomm says the Snapdragon X50 is capable of download speeds of up to 5 Gbps, which is 5x faster than the fastest 4G modem, which also happens to be a Qualcomm modem, the Snapdragon X16. Qualcomm announced productization of the X16 in hotspots and SoCs, and I wrote about it right here.

The Snapdragon X50 initially supports 28 GHz mmWave spectrum, the very high frequency, short-wavelength kind of radio communication that has been associated with 5G. This 28 GHz band is designed to support both Verizon’s 5GTF and Korea Telecom’s 5G-SIG specifications. That means we will very likely see the Snapdragon X50 in both companies’ deployments of 5G technology, which are due in 2018.
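As a quick aside, a back-of-the-envelope calculation (my own illustrative sketch, not from Qualcomm’s materials) shows why 28 GHz earns the mmWave label: the free-space wavelength works out to just over a centimeter.

```python
# Free-space wavelength of a radio carrier: lambda = c / f
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz: float) -> float:
    """Return the free-space wavelength in millimetres."""
    return C_M_PER_S / freq_hz * 1000

print(f"{wavelength_mm(28e9):.1f} mm")  # ~10.7 mm, i.e. millimetre-wave
```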

To attain speeds of up to 5 Gbps, the X50 must do things like adaptive beam-forming and beam tracking for when the device isn’t in direct line of sight. The Snapdragon X50 does 8X carrier aggregation (CA), combining 8 different 100 MHz blocks of mmWave spectrum. By comparison, the Snapdragon X16 LTE modem can only do 80 MHz of carrier aggregation with 4X CA in 20 MHz blocks.
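The carrier-aggregation arithmetic in the paragraph above can be sketched as follows (the carrier counts and block widths are the figures quoted in this article):

```python
# Aggregated bandwidth = number of aggregated carriers x block width
x50_bw_mhz = 8 * 100  # Snapdragon X50: 8x CA across 100 MHz mmWave blocks
x16_bw_mhz = 4 * 20   # Snapdragon X16: 4x CA across 20 MHz LTE blocks

print(x50_bw_mhz)                # 800 MHz of aggregated 5G spectrum
print(x16_bw_mhz)                # 80 MHz of aggregated LTE spectrum
print(x50_bw_mhz // x16_bw_mhz)  # the X50 aggregates 10x the spectrum
```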

X50 pairs with next-gen Snapdragon 800 with X16 for 4G and 5G
For the Snapdragon X50 to truly work in the mixed world of 5G and 4G, Qualcomm has paired it with a Snapdragon processor with built-in gigabit-class LTE. Qualcomm just announced that the upcoming Snapdragon 800 series processor will have an integrated Snapdragon X16 modem. By pairing a gigabit-class LTE modem with a gigabit-class 5G modem, Qualcomm can provide a complete solution for anyone looking to deploy a 5G wireless product in 2018. This makes Qualcomm’s solution a multi-mode solution, something that has served the company well since the 3G days. I believe that multi-mode solutions, even with two modems, are going to be crucial for implementing 5G and ensuring consistent connectivity, because of mmWave’s inherent weaknesses. Keep in mind that 5G will not encompass only mmWave technology, but right now that is the primary type of spectrum being utilized. Down the road we could see 5G in sub-6 GHz bands of spectrum, not just the mmWave spectrum we are seeing today.

Multi-mode 4G and 5G supported (Credit: Qualcomm)


The 5G road ahead

The reality is that while this is being called a 5G modem, a lot still needs to happen for it to officially be one. There isn’t yet a set global standard for 5G modems, but this modem will very likely adhere to all the standards that are expected to be finalized in 2018 and launch in 2019 and 2020.

Qualcomm is investing especially heavily in accelerating 5G NR, with trials and early deployments around 2018 and 2019, ahead of the expected 2020 date. Qualcomm isn’t alone in 5G development; it has competition from Intel and Samsung Electronics, who are actively developing their own 5G technologies. Most other players in the 5G space, like Ericsson, Huawei Technologies, Nokia and Samsung, are looking to build infrastructure rather than modems.

The 5G landscape is finally starting to become more real with actual chips, which Qualcomm says they’ll be sampling in 2H 2017 with commercial devices in 1H 2018. Qualcomm hasn’t said what process node they are using to manufacture this 5G chip, but I suspect it’s going to be manufactured on one of TSMC’s leading nodes like 10nm or 7nm. The Snapdragon X50 finally delivers us one of the missing pieces of enabling 5G connectivity and offers an insight into what kind of performance we can start to expect. Once again, Qualcomm demonstrates its wireless leadership, this time with the world’s first 5G modem.

You can find more info here and Qualcomm also produced a video:


CEO Interview: Randy Caplan of Silicon Creations

by Daniel Nenni on 12-05-2016 at 7:00 am

For the next installment in our series of semiconductor CEO interviews we meet with Randy Caplan from Silicon Creations. Randy has helped build the company from a small startup to one of the world’s leading providers of interface and clocking IP. Almost every new chip developed these days has a requirement for PLLs and SerDes. Since many products are differentiated by these blocks, this discussion should provide good insight into some key factors affecting chip performance.

Why are PLLs and SerDes so important in chip design?
As technology evolves, the requirements for faster data processing and increased throughput are growing rapidly. At the same time, demands for reduced power consumption, smaller silicon die area, and shortened product development cycles are putting pressure on designers just to keep chip performance the same.

PLLs generate the high speed clocks that are the heart of every digital design. Improving clock quality in digital designs increases timing margin, allowing more information to be processed in a given time. For data converters, higher resolution and faster conversion rates can be achieved. High speed interfaces also benefit from improved PLL performance.

For SerDes, the increased demand for data throughput is becoming a limiting factor in both system performance and power consumption. The characteristics of transmission materials such as FR4 are not changing, so on-chip techniques must advance as data rate requirements grow. When single lane bandwidth limits are reached, parallel channels must be added, making the power and area of the SerDes a larger percentage of the overall SoC. Since data rate and clock quality are typically defined by the interface standard, area and power of the SerDes quickly become a dominant consideration as more channels are added.

Silicon Creations has grown from a small design service company to providing IP to most of the major chip companies in the world. What’s the secret?
There are many factors, but I think one of the biggest is the way we engage with our customers. We consider ourselves part of the customer’s team, as opposed to an outsider just trying to hand off a black-box design. Our primary goal is always the success of the customer’s chip. I think this mindset leads to more open communication, resulting in potential issues being resolved early in the design cycle rather than after the silicon comes back, when it’s costly to make changes.

We’ve been fortunate to have assembled a team of talented engineers who created innovative new designs, with performance often far exceeding industry standards. At the same time, we’re aware of the limitations of each design, and go to great lengths to make sure what we promise is what our customers achieve. Many of our customers are shipping hundreds of millions of chips that rely on our PLL and SerDes to function properly. Therefore, better than six sigma quality is often required when considering the joint probability of failure from all the blocks on a complex SoC. We’ve implemented processes and methods that ensure our designs meet these high statistical reliability standards, and our customers know this.
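The joint-probability point above is worth making concrete. The sketch below uses my own illustrative numbers (the block count and per-block failure rate are hypothetical, not Silicon Creations data) to show how tiny per-block defect rates compound at the chip level, which is why per-block quality must be far better than the chip-level target:

```python
def chip_failure_prob(p_block: float, n_blocks: int) -> float:
    """Chance that at least one of n independent blocks fails:
    P(chip fails) = 1 - (1 - p_block) ** n."""
    return 1 - (1 - p_block) ** n_blocks

# Illustrative: 100 IP blocks, each failing in 10 chips per million (10 ppm)
p_chip = chip_failure_prob(10e-6, 100)
print(f"chip-level failure rate: {p_chip * 1e6:.0f} ppm")  # ~1000 ppm
```

In other words, a chip-level budget of a few hundred ppm forces each individual IP block toward single-digit-ppm (better than six sigma) territory.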

Does your business focus on any particular market segments?
In some ways our business is a proxy for the semiconductor market as a whole, since almost every chip requires a PLL and most require SerDes. Historically, our sales have been distributed across markets in similar proportion to overall chip development.

Over the past several years we’ve seen large growth in the consumer markets, mainly driven by cell phone sales. Our track record of reliability in high volume production has made us a preferred vendor for many of the leading smartphone providers. Home electronics such as flat panel displays and networking have also been big markets for us.

The growth of IoT in the past couple years has influenced us as well. We have several products optimized for ultra-low power applications, including PLLs that consume less than 4µW and still meet very aggressive requirements for clock quality.

To support the new developments in smart cars and automotive electronics, we are partnering with several of the leading foundries to develop and validate a full line of PLL and SerDes products compliant with ISO-9001, ASIL and ISO-26262 Functional Safety standards. Some of our customers are already shipping fully qualified chips for automotive applications.

With so many different semiconductor foundries, geometries, and process flavors, how do you decide process nodes to port your designs to?
In a similar way to how our business mirrors the distribution of various market segments, our IP development scales proportionally to process volume. We’ve worked with nearly every major foundry, in almost all transistor geometries from 350nm down to 7nm, and most with multiple process flavors (low power, high performance, etc.). All in all, we’ve supported well over 100 process variants!

The key for us has been a robust porting flow, based on an internally developed PDK that can be quickly mapped from one process to another. All we need from the foundry is a reasonably accurate simulation model and a DRC deck, and we can deliver a design that works the first time. This has led to ports yielding in processes very early in their life cycle – well prior to their release to ASIC customers.

Our highest production volumes are in 28nm, with over 100 chips using our IP, and millions of wafers shipped. From a product perspective, we have over 300 unique products distributed in approximately equal amounts between 7nm-16nm, 28nm, 40/45nm, 55/65nm, and 80nm-350nm.

So many IP vendors offer PLLs and SerDes. How are you differentiated?
The semiconductor IP market has grown significantly since we started ten years ago, and many companies are now offering quality products for high performance clocking and high speed interfaces. Some may offer a competing product in one case, but could be our partner in another where our products complement each other. We have the highest respect for these companies.

Many times our customers choose our IP because it has the best performance specifications available, because it’s stable in high volume production, or because we can customize a design on an aggressive schedule.

We also offer highly programmable designs that can be digitally configured to work optimally in multiple applications, which adds a lot of value for those developing complex SoCs with many different functions. For example, one customer had a chip with our fractional PLL instantiated 17 times generating all on-chip clocks for applications ranging from ARM core clocking, to PHY reference clocks including Ethernet and PCIe. Another example is our multi-protocol SerDes which is compatible with over 50 different interface standards, with minimal power and area overhead compared to dedicated PHY. Not only is this programmability cost effective by allowing one IP to be used for multiple applications, it allows new applications to be prototyped with existing silicon, simply by reconfiguring the design.

Based on feedback I’ve received from many of our repeat customers, I can also say that our close involvement with customer design teams, our commitment to their success, flexibility to adjust when unexpected changes occur, consistency with meeting schedules, and ability to accurately forecast silicon performance are all key factors in choosing our IP versus other similarly specified products.

What are the challenges facing analog IP providers today?
The pressure to achieve first-time working silicon is immense and growing. Standards for quality (like the ISO26262 Automotive safety standard) deeply constrain what we do, and immense production volumes and short times to market of our products demand designs that yield first time, every time. IP providers don’t get do-overs.

Wafer processes have become much more complicated, and process controls are improving, but not as fast as devices are shrinking. This means that even modest analog designs have become more complicated and arduous to design, and oftentimes digital control and calibration are necessary. Whereas a few years ago a single designer could make a whole chip, teamwork is now necessary for most circuits, and the team often includes customers or even competitors. Designers have to be good communicators and have had to learn to write software to do their jobs. This makes solid training essential.

Silicon Creations is now completing its tenth year. Where do you see the company ten years from now?
I expect we will continue to see increasing growth for the foreseeable future. We’ve built a strong base of repeat customers and are regularly adding new ones. The reputation we’ve earned for delivering quality products and world class support makes us a top choice for those selecting IP for their new projects.

The company is privately held and self-funded, so we have the luxury of defining our own path without outside pressure from investors. We’ve built a culture where our employees are excited about the projects they’re working on and about the company’s future. This culture is part of our company’s core, and will be a solid foundation for future growth.

The opportunities we see ahead seem almost unlimited. We hope in the next ten years to take advantage of as many of these opportunities as possible, while continuing to maintain our high standards for product quality, customer support, and quality of life for our employees.

How will Silicon Creations help to address the requirements for next generation clocking and interface IP?
We continue to exploit the performance advantages of each new process technology, as well as innovating to optimize design architectures for an improved figure of merit (lower power and area for a given data rate). Our SerDes products are currently being used in applications requiring up to 20Gb/s, with power per lane below 5mW/Gbps for short reach and below 8.5mW/Gbps for long reach. We will soon be extending this to 28Gb/s and beyond, with further optimization in power and area.
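Those mW/Gbps efficiency figures translate directly into per-lane power budgets. A small sketch, using the efficiency numbers quoted above at the 20 Gb/s rate mentioned in the answer:

```python
def lane_power_mw(efficiency_mw_per_gbps: float, rate_gbps: float) -> float:
    """Per-lane SerDes power = efficiency (mW/Gbps) x data rate (Gbps)."""
    return efficiency_mw_per_gbps * rate_gbps

print(lane_power_mw(5.0, 20))  # short reach: under 100 mW per 20 Gb/s lane
print(lane_power_mw(8.5, 20))  # long reach: under 170 mW per 20 Gb/s lane
```

This is also why, as noted earlier in the interview, SerDes power becomes a dominant SoC consideration once multiple parallel lanes are needed: lane power scales linearly with both lane count and data rate.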

Our PLL specifications define the state of the art in performance / power / area figure of merit. We continue to extend this through an active research and development program, and expect to extend what’s possible in on-chip clock generation with new PLL architectures as well as continual optimization of our existing ones. It is becoming more and more frequent that our customers replace discrete clocking ICs in their systems with our fully integrated solutions.

We are constantly looking for ways to push the limits of what’s possible in semiconductor technology. Each time we take a step forward in the performance of our IP, our customers are enabled to also push the limits of their technology, which in turn encourages us to push further. I think it is this iterative push towards progress that will enable the next generation of technology.

You can read more about Silicon Creations HERE.

Also Read:

Expert Interview: Rajeev Madhavan

CEO interview: Rene Donkers of Fractal Technologies

CEO Interview: Albert Li of Platform DA


Heads Up! Photonics West is Just Around the Corner

Heads Up! Photonics West is Just Around the Corner
by Mitch Heins on 12-04-2016 at 4:00 pm

As I write about integrated photonics I continue to hear from long-time experts in the field who lament that integrated photonics has been around for decades and, other than telecom/datacom, has never become a mainstream technology. It's hard to argue that this time around it will be different, as those people have lived through some very lean times for photonics and are rightfully pessimistic. They had high hopes ten or fifteen years ago and those hopes didn't come to fruition. It takes a brave soul to risk getting re-energized and passionate about something knowing full well those hopes could be dashed yet again. Then there is another set of people who are newly curious about integrated photonics, and these people inevitably run into those industry experts, and suddenly we have a self-fulfilling prophecy. We convince ourselves that photonics didn't take off before, so it won't take off now.

I am however nothing if not an optimist and a dreamer. I'm an inventor looking for ways to make the world better, I am forever intrigued by the wonders of nature and its physics, and thus I forge on. It's with that mindset that I am excitedly looking forward to attending the upcoming SPIE Photonics West show, to be held at the Moscone Center in San Francisco January 28th – February 2nd. If you have never been to the Photonics West show and are the least bit curious about photonics, integrated or otherwise, you should take the time to visit scenic San Francisco and check out this show. The full program can be found here: www.spie.org/pwprogram. The show boasts two free expositions and three for-pay conferences in one venue, and if you are a gadget person you will simply be amazed at the array of things on display at these exhibitions. The two free expositions are BIOS (the world's largest biomedical optics and biophotonics exhibition) and Photonics West (the premier photonics and laser event of the year).

The BIOS exposition and conference runs Saturday and Sunday, January 28th and 29th, and focuses on topics like biomedical optics, photonic therapeutics and diagnostics, neurophotonics, tissue engineering, translational research, tissue optics, clinical technologies and systems, biomedical spectroscopy, microscopy, imaging, and nano/biophotonics. Free exhibitions from 200 companies in this space are included.

The Photonics West expositions run Tuesday through Thursday, January 31st – February 2nd. Two conferences coincident with the exhibitions, LASE and OPTO, run concurrently for the entire week from January 28th through February 2nd. LASE focuses on topics like laser source engineering, nonlinear optics, laser manufacturing, laser micro-/nano-engineering, and 3D fabrication. OPTO focuses on topics like optoelectronic materials and devices, photonic integration, displays and holography, nanotechnologies in photonics, advanced quantum and optoelectronic applications, semiconductor lasers and LEDs, MOEMS-MEMS, and optical communications from devices to systems. Free exhibitions from over 1,300 companies are available during Photonics West.

The show is expected to draw over 20,000 attendees, with over 4,800 papers and 72 courses and workshops available for those who want to tune up their technical skills. As part of the free exhibitions there are also a couple of forums you might want to attend to better understand the market opportunities that lie ahead. The first of these is the Biophotonics Executive Forum, a half-day session that includes discussion of global optics and biophotonics markets and emerging technologies. The second is the SPIE Photonics Industry Analysis: 2017 Update, which will cover market sizes, geographic distribution, components and applications. Lastly, a number of panel sessions are available as part of the free exhibitions, covering a wide range of topics from virtual reality, 3D printing, silicon photonics and ICs, and solid-state lighting to advice for photonic startups. Check out the SPIE website here for more details on how to register and attend.

If this conference doesn’t light your fire for photonics, your wood is wet. I hope to see you there!


The post election Semicap bubble just burst in one day

The post election Semicap bubble just burst in one day
by Robert Maire on 12-04-2016 at 12:00 pm

Back to a more normal reality… Market gets "De-Fanged"… Where to from here? The "Icarus" Effect… Much of the market, and especially Tech and "FANG" (Facebook, Amazon, Netflix & Google) stocks, gave back almost all of their post-election-day gains in one session. The faster the stocks rose, the faster they gave it all back, with some of the higher fliers such as Lam, Ultratech and FORM off 7% on December 1st.

In general SEMICAP stocks seemed to be off by an average of roughly 5%. The reality is that the stocks went up for no real reason (as nothing substantial had changed over the last month) and came back down for no real reason- no downgrades or major announcements in the space. Nothing much changed either way, up or down.

Nothing new to drive the stocks…
Over the last month the stocks have been rising for no good reason other than a post-election rally. There was no significant news in the semiconductor space and no discernible change in momentum. We were well past earnings, with no surprises or pre-announcements to speak of. Nothing to move the stocks other than the stock market itself…

Business remains good….but we knew that already….
3D NAND, 10NM and 7NM spending remains on track and DRAM spending has picked up, so all has been good, in fact better than expected. But this is not news, as we have known for a while that the second half would be better than expected, even overflowing into Q1 of 2017. Expectations and estimates haven't changed, and most companies are still seeing record or near-record business.

Life in a fourth order derivative market and stocks…
As it has always been, Semicap stocks and Semicap companies remain at the end of the whip and the most levered, both upward and downward, to variations in the economy and markets. The global market drives the US market, which drives tech stocks, which drive semiconductor companies, which ultimately drive semiconductor capital equipment spending and companies. The global market has the sniffles, the US sneezes, tech stocks catch a cold, semis get pneumonia, and semicaps drop dead… That's why semicap stocks were off 5% in one day while tech was off about 1.5%, but it's also why it's fun and exciting to invest in them.

Will there be a “dead cat bounce”?
We didn’t see much of a bounce during the trading session as the stocks fell hard in the morning and continued to drift downward for the remainder of the day. We think that the selling was driven by a bit of an element of fear and greed as some investors wanted to sell to try to lock in recent gains coupled with fears of a bigger correction. Investors we spoke to did not seem to be willing to fight the tape and may stand on the sidelines tomorrow hoping that not much happens on a Friday.

Falling through some support levels…
Some stocks broke below important psychological barriers, which may be a bit more concerning. LRCX fell hard through the $100 mark, dropping 7%. ASML broke just below $100, closing at $99.99, down 3%. AMAT just managed to hold onto a "3" handle. Even though AEIS was off 5.6%, it was still up overall over the past month, so there could be more of a correction coming for stocks like this that haven't given back all their gains yet.

Industry Cyclicality and cheesy horror movies….
Fear of the cyclicality "boogey-man" remains a significant overhang on the stocks. Although company management claims the industry is no longer cyclical, or at the very least that cyclicality has been greatly reduced, most investors don't buy it… It's like telling a child having a nightmare that there's no such thing as the "boogey-man".

Cyclicality is much like Jason in the Friday the 13th series of movies. He gets run over, shot, drowned, burned and dies a hundred deaths yet always seems to rise again to terrorize investors……you just can’t seem to keep him buried…..

So what's an investor to do?…
We don’t think we want to be in the way of potentially falling spears or be a hero trying to swim against what was a very strong tide of selling. It feels a lot like the stocks should settle back into where they were before things got stupid, meaning somewhere around pre-election levels.

There is no significant catalyst to believe otherwise. We don’t think Trump’s election is really going to spur 3D NAND consumption or speed the pace of 10NM and 7NM investments. If anything, Trump may be less of a friend to the Semicap industry that has many technical workers under a visa program he wants to get rid of. He continues to threaten a trade war with China. Also, we live in an industry where much of the actual manufacture of products has been moved overseas over the last ten to twenty years.

This leads us to believe that the stock bubble we experienced was more of an empty, knee jerk reaction rather than calculated analysis of the potential benefits of a new administration to the industry. Investors who realize that the stocks went up for no reason will not likely bid them up again in the near term.

If there were an overreaction on the negative side to below pre-election levels, we might be tempted to step in and selectively buy stocks with good fundamentals or those that got overly beat up for no good reason.


Scalability of Industrial IoT applications

Scalability of Industrial IoT applications
by Akeel Attar on 12-04-2016 at 7:00 am


Industrial IoT applications typically start with a small-scale pilot to demonstrate how a few items of equipment and their related sensors can be connected to the cloud, and to understand what algorithms are required to automatically extract value from the arriving streams of measurement data. Following a successful pilot, the next step is usually scaling the solution up to automatically monitor many more items of equipment, which is likely to require processing data from thousands of devices and hundreds of thousands of measurements.

The XpertRule Rules, Decision & Analytics engine supports large-scale IoT deployments through its unique ability to deploy distributed decision making to any part of the IoT ecosystem – IoT edge/fog, cloud and mobile devices – all from the web/cloud based Decision authoring environment.


Cloud based deployment
The cloud based Decision Engine deployment supports large-scale applications through its modular software structure and its ability to automatically deploy mirrored instances of the same application across multiple servers. The modular structure allows separate definition of libraries of data processing and predictive algorithms, and of rules-based templates for monitoring different types of equipment and devices. This makes adding new equipment straightforward, and changes to monitoring algorithms and templates are automatically propagated to all equipment types using these libraries. As more devices and equipment are monitored, further servers and instances of the XpertRule engine are added to provide increased processing capacity.
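The template-library pattern described above can be illustrated with a generic sketch. This is not XpertRule's actual API; every name and threshold here is hypothetical, chosen only to show how one shared rule definition can serve many equipment types:

```python
# Illustrative sketch of rule-template reuse across equipment types.
# All names and limits are hypothetical, not XpertRule's API.

def overheat_rule(reading, limit):
    """Shared rule logic: flag a reading above its temperature limit."""
    return reading > limit

# One library entry per equipment type, parameterising the shared rule.
TEMPLATES = {
    "pump":    {"rule": overheat_rule, "limit": 80.0},
    "turbine": {"rule": overheat_rule, "limit": 120.0},
}

def monitor(equipment_type, reading):
    """Apply the template registered for this equipment type."""
    t = TEMPLATES[equipment_type]
    return t["rule"](reading, t["limit"])

print(monitor("pump", 85.0))     # True: over the pump's 80 C limit
print(monitor("turbine", 85.0))  # False: within the turbine's 120 C limit
```

Because every equipment type references the same `overheat_rule`, a change to that one function propagates to all of them, which is the maintenance property the modular library structure provides.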


IoT edge hub deployment
Deploying the Rules, Decision and Analytics engine at the IoT edge reduces reliance on a connection to the cloud, provides real-time response to local problems, and minimises the amount and frequency of data sent to the cloud. In addition, data remains local where confidentiality or security requirements restrict sharing with the cloud. The reduction in data traffic between edge devices and the cloud can be substantial, as only significant events or aggregated data need be reported to the cloud. This in turn greatly reduces the requirements for cloud based monitoring algorithms and minimises the need for increased processing capacity as more devices and equipment are added.
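The traffic-reduction idea can be sketched in a few lines: an edge hub keeps raw samples local and forwards only alerts and window averages. The function name, threshold and window size below are illustrative assumptions, not part of any real product:

```python
# Illustrative edge-side filtering: forward only significant events and
# periodic aggregates to the cloud. Threshold and window are assumptions.

def edge_filter(samples, threshold=100.0, window=10):
    """Return the messages an edge hub would send upstream."""
    messages = []
    for i in range(0, len(samples), window):
        window_vals = samples[i:i + window]
        spikes = [v for v in window_vals if v > threshold]
        if spikes:
            # Significant event: report the worst excursion immediately.
            messages.append(("alert", max(spikes)))
        # Aggregate: one average per window instead of every raw sample.
        messages.append(("avg", sum(window_vals) / len(window_vals)))
    return messages

raw = [20.0] * 9 + [150.0] + [20.0] * 10   # 20 raw samples, one spike
msgs = edge_filter(raw)
print(len(raw), "raw samples ->", len(msgs), "cloud messages")
```

Here 20 raw samples collapse to three upstream messages; at industrial scale the same ratio is what keeps cloud-side processing requirements nearly flat as devices are added.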

Mobile device deployment
Deployment of the XpertRule Decision and Analytics engine on mobile devices supports local troubleshooting of equipment and device problems by field personnel. Guided workflows can lead non-expert users through diagnosing and resolving problems, so that the need for visits from centralised specialist resources is minimised. This has the advantage of faster problem resolution, and it prevents the availability of specialists from becoming a bottleneck to large-scale deployment.


Nvidia Drives into New Market with Deep Learning and the Drive PX 2

Nvidia Drives into New Market with Deep Learning and the Drive PX 2
by Tom Simon on 12-03-2016 at 7:00 am

Nvidia has found that video games are the perfect metaphor for autonomous driving. To understand why, you have to realize that self-driving cars see the world through a virtual world created in real time inside the processors used for autonomous driving – very much like a video game. It's a little bit like the Matrix, only it is real. Perhaps this was the realization that caused Nvidia to enter the advanced driver assistance systems (ADAS) market. Regardless, it certainly also has a lot to do with the truly massive computing power they are putting into their latest products.

Nevertheless, the Occupancy Grid – the virtual world that the ADAS system creates to model the real world around the vehicle – is very much like a video game. I learned more about how it works at a recent event hosted by Synopsys and SAE. Shri Sundaram, a Product Manager from Nvidia, spoke about their offerings for ADAS. He framed his discussion by reviewing where our current era of computing fits into the big picture.
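To make the occupancy grid concept concrete, here is a deliberately minimal sketch: a real ADAS stack fuses camera, radar and lidar returns probabilistically, but at its core the grid is just cells around the vehicle marked occupied or free. Grid size, cell size and the sample detections are all made-up illustration values:

```python
# Toy 2D occupancy grid: rasterise point detections into cells around
# the vehicle. A real system fuses multiple sensors probabilistically;
# this sketch only shows the underlying data structure.

GRID = 10    # 10 x 10 cells (assumed)
CELL = 1.0   # 1 metre per cell; vehicle at the grid centre (assumed)

def update_grid(grid, detections):
    """detections: (x, y) points in metres relative to the vehicle."""
    for x, y in detections:
        col = int(x / CELL) + GRID // 2
        row = int(y / CELL) + GRID // 2
        if 0 <= row < GRID and 0 <= col < GRID:
            grid[row][col] = 1   # mark the cell occupied
    return grid

grid = [[0] * GRID for _ in range(GRID)]
update_grid(grid, [(2.0, 3.0), (-4.0, 0.5)])  # e.g. a car and a kerb
print(sum(map(sum, grid)), "occupied cells")
```

The path planner then only has to reason about this compact grid rather than the raw sensor streams, which is what makes the video-game analogy apt.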

In 1995 there were probably one billion PC users, by 2005 there were two and a half billion mobile users. Today we are looking at a world with hundreds of billions of devices – this is the age of AI and intelligent devices. It is fueled by deep learning and GPU’s. This has set the stage for self-driving vehicles. They promise to be safer, will allow us to redesign our transportation infrastructure, and offer a wider range of mobility services.

It will be great for people – driving is not always fun and can be difficult. It is this difficulty that makes automating it a challenge, and not just a technical challenge either. There is consumer acceptance and the regulatory environment to deal with. Yet, focusing just on the technical side, there are many challenges.

Shri lists the technical challenges as: sensors and fusion, massive compute requirements, algorithm development, and assuring functional safety. He spoke first about sensors. Sensors are used for navigation, detection and classification, and avoidance. We see GPS, an inertial motion unit (IMU), multiple cameras, lidar, radar, sonar and ultrasonic sensors. These create huge bandwidth requirements: cameras are each over 2MP running at 30fps, and lidar can provide over 500K samples per second. All of this needs to be fused in real time to create the occupancy grid.
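A back-of-the-envelope calculation shows why those figures translate into "huge bandwidth". Only the pixel count, frame rate and lidar sample rate come from the talk; the camera count, bytes per pixel and bytes per lidar sample are assumptions for the sake of arithmetic:

```python
# Rough sensor bandwidth estimate from the figures in the talk.
# Camera count, bytes/pixel and bytes/sample are assumed values.

CAMERAS = 4                # assumed number of cameras
PIXELS = 2_000_000         # >2 MP per camera (from the talk)
FPS = 30                   # frames per second (from the talk)
BYTES_PER_PIXEL = 2        # assumed raw sensor format

LIDAR_SAMPLES = 500_000    # samples per second (from the talk)
BYTES_PER_SAMPLE = 16      # assumed: 3D position + intensity + timestamp

camera_bw = CAMERAS * PIXELS * FPS * BYTES_PER_PIXEL
lidar_bw = LIDAR_SAMPLES * BYTES_PER_SAMPLE

print(f"cameras: {camera_bw / 1e6:.0f} MB/s")
print(f"lidar:   {lidar_bw / 1e6:.0f} MB/s")
```

Under these assumptions the cameras alone push hundreds of megabytes per second, which is why the fusion step demands the kind of compute that Drive PX 2 provides.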

Localization is the first task – using sensors and map data, the ADAS system needs to determine the location of the vehicle. At the same time, more information is needed to successfully navigate through the environment. The ADAS system will combine all the map data, location data, and perceived environment to plan actions and execute them. As Shri pointed out, there are many exceptions – snow or leaf covered roads, pedestrians, people directing traffic, construction zones, emergency vehicles, etc.

Nvidia has developed an SDK for self-driving cars called Driveworks. It uses deep learning to help perform path planning for self-driving vehicles. Driveworks runs on Nvidia’s Drive PX 2 ADAS hardware. It uses multiple layers of deep learning to separately perform operations like road surface identification, object detection, lane detection, and others and then combine those into a composite model.

Drive PX 2 sports 2 Parker Tegra cores and 2 Pascal GPUs, and consumes about 80W in normal operation. Shri said it has the equivalent compute power of about 150 MacBook Pros. Their next generation will have even more processing power and will require only about 20W.

We have all heard about the recently announced partnership with Tesla – the auto company, not the namesake Nvidia GPU. In addition, Shri spoke about several other self-driving car efforts that are in development and using Nvidia AI. These include Baidu, nuTonomy, Volvo, TomTom, and WEpods.

It’s clear that Nvidia is taking a lead role in the rapidly developing ADAS market. It’s no surprise that they accepted the invitation to present during the SAE and Synopsys seminar. Nvidia uses a number of Synopsys tools in the development of systems like Drive PX 2, and the range of Synopsys products that address ADAS systems is wide, starting at system design and modeling and moving all the way down to IP and physical implementation. For a better understanding of these offerings, be sure to look more closely at the Synopsys Automotive web page.

Read More Articles by Tom Simon


These 2 Markets to Drive IC Market Growth through 2020

These 2 Markets to Drive IC Market Growth through 2020
by Daniel Payne on 12-02-2016 at 8:00 pm

Spotting trends is an essential skill for marketing folks, general managers and C-level executives in our semiconductor industry. You could read hundreds of press releases, attend dozens of conferences, and interview all of the major thought leaders to help spot an emerging trend, or you could subscribe to a service like IC Insights to answer your questions. I just read their latest research bulletin and they conclude that the top two semiconductor market segments are:

  • IoT
  • Automotive

The highest growth rate comes from IoT (Internet of Things), which is projected to grow at a 13.3% CAGR to $12.8B by 2020.
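As a sanity check on that CAGR figure, we can back out the base-year market it implies. The bulletin excerpt doesn't state the start year, so the 2015 base and five-year span below are assumptions:

```python
# Back out the base-year IoT IC market implied by a 13.3% CAGR to
# $12.8B in 2020. The 2015 start and 5-year span are assumptions.

def implied_base(final, cagr, years):
    """Invert the compound-growth formula: final = base * (1+cagr)^years."""
    return final / (1 + cagr) ** years

base = implied_base(12.8, 0.133, 5)
print(f"implied 2015 IoT IC market: ${base:.1f}B")
```

Under those assumptions the projection implies a roughly $6.9B IoT IC market in the base year, growing to nearly double by 2020.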

The number 2 segment is automotive, growing some 12% annually to $22.9B by 2020. Other growth segments, in declining order of growth rate, include:

  • Medical
  • Digital TVs
  • Servers
  • Cellphones
  • Standard PCs
  • Set-top boxes

The two markets projected to decline through 2020 are:

  • Tablets
  • Game Consoles

Our family uses three tablet devices and two game consoles, so it makes me feel a bit sad to see declines in these markets. Readers of SemiWiki see that IoT and Automotive are quite popular topics, and that there are menu links at the top of each page for these two market categories so you can get all the latest news. EDA and semiconductor IP companies are also aligning their product and service offerings in these two growth areas: IoT and Automotive.

Looking at just pure market size, the two stalwart segments of Cellphones and Standard PCs total $128.8B in estimated sales. There was an estimated 5% decline in Standard PC integrated circuit sales in 2016, while game console IC revenues dipped 4% and tablet IC sales fell 10%.

The actual report is a heavyweight tome at 492 pages, so the level of detail should satisfy the most curious marketer. Pricing for the report is $3,690 for individuals and $6,790 for a corporate license.

Read the complete research bulletin here as a PDF document.