
An Steegen ISS Talk and Interview – Patterning Options for Advanced Nodes
by Scotten Jones on 02-28-2017 at 12:00 pm

At the ISS Conference in January, An Steegen, EVP of Semiconductor Technology & Systems at imec, gave a talk entitled “Patterning Options for Advanced Technology Nodes”. I was present for her talk and had the opportunity to do a follow-up interview with An.

Continue reading “An Steegen ISS Talk and Interview – Patterning Options for Advanced Nodes”


Simulation done Faster
by Bernard Murphy on 02-28-2017 at 7:00 am

When it comes to functional verification of large designs, huge progress is being made in emulation and FPGA-based prototyping (about which I’ll have more to say in follow-on blogs), but simulation still dominates verification activity, all the way from IP verification to gate-level signoff. For many, while it is much slower than hardware-assisted options, it’s easier to set up, easier to debug and easier to parcel out to simulation farms where you can run thousands of regression jobs in parallel.

So Cadence made a shrewd move in acquiring Rocketick last year to provide acceleration help to the simulation-bound. But designs continue to get bigger and hairier so, according to Adam Sherer (Group Dir Marketing at Cadence), they’ve turned the crank some more, this time to an approach that has significant potential for scaling (tis the season for scalability roadmaps). The big news here is that this solution (Xcelium) is now able to efficiently split an Incisive simulation across multiple cores based on a careful examination of dependencies, and it does this in such a way that it can deliver significant acceleration over single-core simulation.

This is no small feat. Many attempts have been made at parallel simulation but have struggled to deliver significant improvement because it has been so difficult to decouple activity in one part of a circuit from any other part. Breaking a circuit into pieces might speed up the pieces but you lose most of that gain in heavy inter-piece communication overhead (that’s my technical term for it). According to Adam, it’s the nature of the SoC verification problem that makes multi-core more effective in this case. Subsystem blocks in an SoC commonly operate concurrently, creating a high density of events which would slow down a simulator needing to run in a single core. But RocketSim can figure out intra/inter-block dependencies and partition the circuit across cores so it can handle those higher event densities with greatly reduced impact (see the first table below). A similar argument applies to running multiple concurrent test scenarios (a common task in SoC verification); run-times are found to grow much more slowly with the number of tests on multi-core than on single-core.
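To make the partitioning idea concrete, here is a minimal sketch of dependency-based partitioning. To be clear, this is not Cadence’s algorithm (which, as noted below, they won’t share); it only illustrates the principle that blocks which never exchange signals can simulate on separate cores, with independent clusters balanced across cores by estimated event load.

```python
# Generic dependency-based partitioning sketch -- NOT Cadence's
# (patent-pending) algorithm, just the underlying principle:
# blocks that never exchange signals can simulate on separate cores.

from collections import defaultdict

def partition(blocks, signals, n_cores):
    """blocks: {name: estimated event load}; signals: [(src, dst)]."""
    # Union-find: blocks joined by a signal must share a core
    parent = {b: b for b in blocks}
    def find(b):
        while parent[b] != b:
            parent[b] = parent[parent[b]]   # path compression
            b = parent[b]
        return b
    for src, dst in signals:
        parent[find(src)] = find(dst)

    # Gather the independent clusters
    clusters = defaultdict(list)
    for b in blocks:
        clusters[find(b)].append(b)

    # Greedy load balancing: heaviest cluster onto least-loaded core
    cores, loads = [[] for _ in range(n_cores)], [0] * n_cores
    for cluster in sorted(clusters.values(),
                          key=lambda c: -sum(blocks[b] for b in c)):
        i = loads.index(min(loads))
        cores[i].extend(cluster)
        loads[i] += sum(blocks[b] for b in cluster)
    return cores

# Toy SoC: CPU and L2 exchange signals; GPU and modem are independent
blocks = {"cpu": 50, "l2": 20, "gpu": 60, "modem": 30}
print(partition(blocks, [("cpu", "l2")], 2))
# -> [['cpu', 'l2'], ['gpu', 'modem']]
```

In a real SoC everything is eventually connected through the bus fabric, so a production partitioner must also decide which low-traffic edges to cut, which is presumably where the patent-pending cleverness lives.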


Cadence has done quite a bit to justify a new brand on this creation. First, some applications still work best on single-core, such as IP verification with meaty UVM test-benches. These can’t be partitioned easily, yet Xcelium still offers ~2X speedup for IP sims on that single core over the previous generation. They’ve also added an improved randomization engine which is both faster and delivers better distributions to help constrained-random reach coverage more quickly. Meanwhile SoC sims are running 5-10x faster on multi-core, again over prior generation performance. This should be of interest in many contexts, for example in DFT sims which tend to be very high activity, also for teams wanting to speed up multi-week gate-level sim signoffs.

Another important point is that the solution is purely software and requires no special hardware. While RocketSim started with GPUs, it now runs on standard server platforms. Cadence apparently has recommended configurations for servers, but otherwise the solution is available to everyone who has access to datacenter-class servers. Which means it can help all simulation users, not just a privileged few.

The cool thing about the multi-core part of this solution is that it scales with available cores. I’m sure you’re thinking – “nah, that’s not right – everyone knows that software parallelization improvements tail off as the number of cores increases”. But I think Cadence is onto something. Even as SoC sizes increase, they’re still made from IPs and even the big IPs are made from little IPs. Cadence won’t share the details of the partitioning algorithm (patent pending), so we’re left to speculate. Certainly the speedup they are seeing is significant, so it seems the IP/bus-based nature of SoC design must make the problem tractable if you have the right engine to attack it. Of course, nothing lasts forever; I’m sure performance will tail off at some point, but I think Cadence probably have quite a bit of runway with this solution.
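That tailing-off intuition has a standard formalization in Amdahl’s law: speedup on N cores is 1/(s + (1-s)/N), where s is the serial (non-parallelizable) fraction of the work, such as cross-core synchronization. The serial fractions below are assumed values, purely to illustrate why a well-decoupled SoC keeps scaling where a tightly coupled design stalls.

```python
# Amdahl's law with assumed serial fractions (illustrative only)

def amdahl(s, n_cores):
    """Speedup on n_cores when a fraction s of the work is serial."""
    return 1.0 / (s + (1.0 - s) / n_cores)

for s in (0.30, 0.10, 0.02):
    print(f"s={s:.2f}: " + ", ".join(
        f"{n} cores -> {amdahl(s, n):4.1f}x" for n in (4, 8, 16)))
# s=0.30:  4 cores -> 2.1x,  8 cores -> 2.6x, 16 cores ->  2.9x
# s=0.02:  4 cores -> 3.8x,  8 cores -> 7.0x, 16 cores -> 12.3x
```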

In the interest of full disclosure, Adam told me that they’re still working on bringing up multi-core support for VHDL. But you SystemVerilog and gate-level users are good to go right now.

Naturally all of this isn’t just on Cadence’s say-so. They have endorsements from ARM and ST in their Xcelium press release. And I imagine a lot more simulation customers are going to be jumping on this bandwagon – who wouldn’t want faster simulation? You can learn more about the solution HERE.

More articles by Bernard…


Prototyping: Sooner, Easier, Congruent
by Bernard Murphy on 02-28-2017 at 7:00 am

DVCon 2017 is a big week for Cadence verification announcements. They just released their Xcelium simulation acceleration product (on which I have another blog) and they have also released their latest and greatest prototyping solution in the Protium S1. This is new hardware based on Virtex UltraScale FPGAs on Cadence-designed boards, offering 6x higher capacity and an average 2x increase in performance. You can go from a single board at 25 million gates (MG) to a box at 200MG, and chain these together to get to 600MG. All that is important, but in one sense it’s not the most important aspect of this release. What’s really significant is getting to that power sooner, easier and with more confidence.


I talked about this in an earlier blog on Aspirational Congruence, based on a discussion with Frank Schirrmeister (Sr. Dir of Marketing at Cadence) on the importance of pipelining software development with hardware development and the importance to that goal of closely coupling emulation and prototyping. The fast version of that discussion is this. Embedded software development needs to start much earlier than late design implementation, yet some development and validation needs more accurate modeling than is available in virtual prototypes. FPGA-based prototyping is the best way to get there, but lengthy (months) and complex prototype setup has discouraged starting before the design is locked down, because there’s no time to do over if the design changes. This doesn’t help accelerate software development.

The way to cut this Gordian knot is to make prototype setup as fast and as hands-free as possible, while also closely tracking design verification models so you know that the behavior software developers will see on the prototype exactly mirrors the behavior design verification engineers saw when that snapshot was taken. Frank gave me a prelude a few weeks ago to this concept of congruence between emulation and prototyping, meaning closely linked build, behavior and ease of transition between the two. Of course, this was a setup. He told me last week that Cadence are rolling out the solution this week at DVCon. Part of the solution is the Protium S1 but an equally important part is its close linkage with Palladium Z1 emulation.


Let’s start with compile. The platforms share a common compiler to the point that what you build for the emulator is guaranteed to behave the same way on the prototyper. Which means that you can check behavior in faster-setup emulation before committing to a prototype build, and you can be confident there won’t be surprise mismatches between the two. Even the post-partitioning model can be taken to the emulator for further debug and validation.


Then there’s place and route in each FPGA. Timing closure in FPGAs can be tricky, which is one reason the S1 flow creates clocks locally in each FPGA. The flow automatically generates P&R constraints and guarantees hands-free closure across the design. Of course, you can still break into this flow to hand-tune for even higher performance. But based on stats they have published, out-of-the-box performance is already pretty decent. And note that’s for a ~10x or more decrease in setup time.

Accessories such as speed-bridges can be reused between emulation and prototyping, another factor in congruence; your ICE modeling in emulation carries directly over to prototyping. Similarly transaction interfaces can be reused.


Cadence have also put a lot of work into the debug interface. For hardware, you can view waveforms across the design, force/release signals and set monitor probes, but of course the big focus in debug (given the target users) is for software. The S1 release includes a number of advances which should attract software developers and validators. Through a JTAG connection teams can download and upload/overwrite memory, control clocks, start and stop the design, and write scripts around debug and test, all the features software developers expect in a full-featured debug environment. And naturally they can access the prototype remotely.
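For flavor, here is what “scripts around debug and test” might look like in practice. This is a purely hypothetical sketch: the Protium S1 scripting interface isn’t documented in this post, so every class and method name below is invented.

```python
# Hypothetical prototype-debug scripting sketch. All names are invented
# for illustration; this is NOT the actual Protium S1 API.

class PrototypeSession:
    """Imagined wrapper over a JTAG link to a remote prototype."""
    def connect(self, host): ...
    def load_memory(self, addr, image): ...
    def set_clock_mhz(self, mhz): ...
    def run(self): ...
    def stop(self): ...
    def read_memory(self, addr, nbytes): ...

proto = PrototypeSession()
proto.connect("lab-proto-01")            # remote access to the lab box
boot_image = b"\x00" * 4096              # placeholder firmware image
proto.load_memory(0x8000_0000, boot_image)
proto.set_clock_mhz(50)                  # slow the clock for bring-up
proto.run()
# ... exercise the software under test ...
proto.stop()
dump = proto.read_memory(0x8010_0000, 4096)  # post-mortem inspection
```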

Cadence have a fulsome endorsement from the Xilinx integration and validation team, who validated the value early in product development, and apparently they have other early users in networking, consumer and storage applications.

You can read the press release HERE and read more about the Protium S1 platform HERE.

More articles by Bernard…


EUV is NOT Ready for 7nm!
by Daniel Nenni on 02-27-2017 at 8:00 am

The annual SPIE Advanced Lithography Conference kicked off last night with vendor sponsored networking events and such. SPIE is the international society for optics and photonics but this year SPIE Advanced Lithography is all about the highly anticipated EUV technology. Scotten Jones and I are at SPIE so expect more detailed blogs on the keynotes and sessions this week.

Attend SPIE Advanced Lithography
Come to the world’s premier lithography event. For over 40 years, SPIE has brought together industry leaders to solve the latest challenges in lithography and patterning in the semiconductor industry.
Check out the 2017 News & Photo page and stay on top of what is happening before, during, and after the 2017 SPIE Advanced Lithography meeting in San Jose.

The many BILLION dollar question of course is: When will EUV be ready for high volume manufacturing?

According to Intel EUV Manager Dr. Britt Turkot, at this point in time EUV is not ready for HVM and may not be ready for 7nm. Britt has been with Intel for 20+ years and is a regular presenter at SPIE. In fact, Britt did a similar presentation last year, which was nicely summarized by Scotten Jones in “TSMC and Intel on the Long Road to EUV” (published 02-23-2016). You can get a full list of Scotten’s blogs HERE.

If you look point-for-point, according to Britt, not much has changed. As Scotten pointed out, three years ago the key issues were: photoresist – line width roughness (LWR) and outgassing; tools – source power and availability; and reticle – killer defects and pellicles.

Photoresist technology continues to improve but no breakthroughs have been reported.

The current source power roadmap calls for 250 watts in the 2016-2017 timeframe and >250 watts in the 2018-2019 timeframe. From what I have heard thus far, power in the field is closer to 100 watts than 200, so we still have a ways to go before HVM.
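To see why ~100 watts versus 250 watts matters, note that scanner throughput scales roughly linearly with source power at a fixed resist dose. The reference point below (~125 wafers per hour at 250 watts and a 20mJ/cm2 dose) is ASML’s oft-quoted target, and wafer-handling overhead is ignored, so treat this as an illustration rather than a spec.

```python
# Rough throughput-vs-source-power scaling (illustrative assumptions:
# ~125 WPH at 250 W and 20 mJ/cm^2 dose; handling overhead ignored)

WPH_AT_250W = 125

def wafers_per_hour(source_watts, dose_mj_cm2=20):
    # Linear in power; a higher (less sensitive) resist dose cuts it
    return WPH_AT_250W * (source_watts / 250.0) * (20.0 / dose_mj_cm2)

for watts in (100, 200, 250):
    print(f"{watts:>3} W -> ~{wafers_per_hour(watts):.0f} wafers/hour")
# 100 W -> ~50, 200 W -> ~100, 250 W -> ~125
```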

One of the most interesting points was particles and pellicles. According to Britt, particles are a much bigger problem than ASML has disclosed, so pellicles will be required. I’m sure we will hear more about this during the conference but pellicles are a double-edged sword. They do reduce the number of wafer defects caused by particles but they also absorb source power, which is already a key issue for throughput and machine availability.

EUV photomask inspection was also discussed. Intel has been pushing for an actinic based inspection tool and that push continues. The question of course is: Who is going to pay for it? My guess is that, as with most semiconductor manufacturing roadblocks, there will be an inspection workaround to get EUV into production before R&D dollars are spent on actinic.

As we already know, TSMC has skipped EUV for 7nm but is planning on exercising EUV at 7nm in preparation for EUV at 5nm. At last year’s SPIE, Intel, Samsung, and GLOBALFOUNDRIES still had EUV planned for 7nm, but we have heard some waffling on the subject. It will be interesting to get another EUV update on 7nm and 5nm from the people who are actually using it.

Later today Intel will again keynote SPIE and present “EUV readiness for HVM” and Samsung will again present “Progress in EUV Lithography toward manufacturing”. Scotten will do thorough blogs on the conference as he has in the past. You can read Scotten’s very technical event-related blogs HERE. If you are attending SPIE it would be a pleasure to meet you, absolutely.

Also read: An Steegen ISS Talk and Interview – Patterning Options for Advanced Nodes


CTO Interview: Jeff Galloway of Silicon Creations
by Daniel Nenni on 02-27-2017 at 7:00 am

It is clear that IP companies play an important role in modern semiconductor design; in fact, I would say that they are imperative. Founded in 2006, Silicon Creations is one of those imperative IP companies that provide silicon-proven IP to customers big and small around the world. To follow up on our conversation with Silicon Creations CEO Randy Caplan, CTO Jeff Galloway provided a closer look at the technology behind their success.

What sets Silicon Creations apart technically?
First of all, I believe we’ve architected robust and versatile products. For example, we have a PLL architecture that has scaled from 65nm at company creation in 2006, down to 7nm. Over the last 10 years, it’s been ported all the way back to 180nm. The same robust architecture is silicon-proven from 180nm to 10nm, and 7nm silicon is due very soon.

The architecture has scaled not only across geometries but also along a huge power/performance curve. For example, the same PLL architecture that allows our high speed SerDes to achieve a fantastic power/performance ratio with jitter < 400fs also allows us to be the differentiated PLL provider for 10uW PLLs for application processors and other IoT devices.

Secondly, given a robust architecture, it’s important to have a methodology to port the design from node to node. Typically, an SoC company concentrates its product line on a specific geometry. An IP company like us, on the other hand, must have its IP available in a wide variety of nodes, or be able to port it there quickly. We have a robust design flow methodology in place that allows us to port from foundry to foundry and node to node quickly and efficiently.

We have delivered approximately 300 IP products (from 7nm to 180nm), so we certainly have a very wide range of IP. We have a schematic flow, which allows our core IP to be shared across geometry and foundry, reducing time to market and reducing risk. We also have a layout flow that allows similar flexibility, further reducing risk and providing robust products across a variety of nodes.

Thirdly, we have an automated test lab. We’ve characterized nearly 50 chips at this point (many in the last 5 years) and have generated over 100 test chip reports. This critical ability has allowed us to launch our customers’ 10nm products (and soon 7nm) and multi-protocol SerDes.

Can you tell us a little more about your IoT PLL product and what makes it different and more appealing to design engineers?
Low-power and fast lock times are critical metrics for IoT PLL products. Most “low-power” PLLs are on the order of 1mW. The active power is critical for our customers, especially in leading mobile products. Our architecture achieves less than 10uW while keeping area low. Secondly, we have an innovative architecture that frequency locks in fewer than 20 cycles, or just 3 cycles if calibrated. The PLL also phase locks in fewer than 40 cycles. This capability gives us an advantage in addressing the fast lock-time requirements that are critically important for IoT, where the reference clock is often 32.768KHz and energy shouldn’t be lost waiting on lock.
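It’s worth translating those cycle counts into time: at a 32.768KHz reference, each cycle is ~30.5 microseconds, so lock cycles map directly onto wake-up latency and wasted energy. The 1,000-cycle comparison point for a conventional PLL is my assumption, for illustration only.

```python
# Lock-cycle counts translated to time at a 32.768 kHz reference.
# Cycle counts are from the interview, except the assumed 1000-cycle
# "conventional PLL" comparison point.

T_REF = 1 / 32768                      # reference period: ~30.5 us

for label, cycles in [("frequency lock, calibrated", 3),
                      ("frequency lock", 20),
                      ("phase lock", 40),
                      ("conventional PLL (assumed)", 1000)]:
    print(f"{label:28s}: {cycles:>4} cycles = {cycles * T_REF * 1e3:6.2f} ms")
# phase lock: 40 cycles = 1.22 ms; a 1000-cycle PLL would idle ~30.5 ms
# at full active power on every wake-up before doing useful work.
```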

You mentioned that the PLL architecture in your 10uW IoT PLL is also in your multi-protocol SerDes?
Yes, the same low-power architecture is used. But we size the multi-protocol SerDes PLL at a different point on the power/jitter tradeoff curve, obviously. Nevertheless, we still achieve excellent power efficiency. For example, we have a 28nm product with ~5mW/Gb/s for SerDes operation that achieves < 400fs RMS RJ for 10G-KR. The low jitter for TX and RX along with an optimized front-end equalizer (CTLE+DFE) allows the SerDes to communicate over very long backplanes and other difficult channels.
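A handy unit check on that efficiency figure: mW per Gb/s is dimensionally the same as pJ per bit, so lane power is a one-line estimate. The rates below are the quoted PMA endpoints plus the 10.3125Gb/s 10G-KR line rate.

```python
# mW/(Gb/s) == pJ/bit, so lane power scales directly with line rate

EFFICIENCY_MW_PER_GBPS = 5     # ~5 mW/Gb/s quoted, i.e. ~5 pJ/bit

for rate_gbps in (0.25, 10.3125, 12.7):
    power_mw = EFFICIENCY_MW_PER_GBPS * rate_gbps
    print(f"{rate_gbps:7.4f} Gb/s -> ~{power_mw:5.1f} mW per lane")
# 12.7 Gb/s -> ~63.5 mW per lane; a 4-lane link budgets ~250 mW
```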

We think this SerDes product has the most flexibility on the market. Our lead customer for this was an FPGA company (Microsemi), so they needed the flexibility and functionality. This single PMA can cover a 50:1 range from 250Mb/s to 12.7Gb/s, and cover the following standards:

Your products look differentiated, but have they found commercial success?
We have a large number of production chips in silicon from 7nm (this month) to 180nm.

Is the broad adoption because of performance?
I believe a lot of the early design wins we had were based on performance. For instance, we have a fractional-N product with long-term jitter low enough to clock AFEs or SerDes. That helps us stand out. But the same PLL, programmed to a different power/performance point, could clock digital logic power-efficiently.

Our customers quickly realized that they could replace many different PLLs with one Silicon Creations PLL.

Due to the fractional-N capability, the PLL also can perform many frequency and phase functions such as spread-spectrum generation, DPLL/clock recovery and arbitrary phase shifting.

We put tremendous effort into focusing on customer needs, and we think that greatly differentiates us. This is one of the prominent reasons why our customers keep returning for our PLLs for their next product.

About Silicon Creations
Silicon Creations is focused on providing world-class silicon intellectual property (IP) for precision and general-purpose timing (PLLs), chip-to-chip SerDes and high-speed differential I/Os. Silicon Creations’ IP is proven from 7 to 180-nanometer process technologies. With a complete commitment to customer success, its IP has an excellent record of first silicon to mass production in customer designs. Silicon Creations, founded in 2006, is self-funded and growing. The company has development centers in Atlanta, Ga., and Krakow, Poland, and worldwide sales representation. For more information, visit www.siliconcr.com.

Also Read:

CEO Interview: Srinath Anantharaman of ClioSoft

CEO Interview: Amit Gupta of Solido Design

CEO Interview: David Dutton of Silvaco


Webinar: FPGA Prototyping and ASIC Design
by Bernard Murphy on 02-26-2017 at 4:00 pm

When you think about working with an ASIC service provider like Open-Silicon, you probably think about handling all the architecture, design and verification/validation in your shop, handing over a netlist and some other collateral, then the ASIC services provider takes responsibility for implementation and manufacturing. Plus or minus some options, this is the standard ASIC service model.

REGISTER NOW

Tue, Mar 21, 2017 8:00 AM – 9:00 AM PDT

But this neat division isn’t always ideal. Detailed verification and validation before implementation may require a model of the design that is more accurate than a virtual prototype, close to the detailed design behavior yet much faster than simulation. In semiconductor companies FPGA-based prototyping is already a popular solution to this need for an accurate, close-to-real-time performance model and has become indispensable in early driver development, in checking performance, and in simply getting through high volumes of use-case testing.

These needs apply equally even if you are outsourcing back-end design, with an additional constraint that you may also need a working model to loosen up funding for silicon samples. Yet systems design teams often lack the experience to deal with the arcane details of FPGA prototyping, given the high levels of expertise required in partitioning, setup and making sure your prototype implementation will reasonably match the likely ASIC implementation.

Open-Silicon has the answer to your problem. They offer a service that, based on your design, configures a standard prototyping solution, with an option in which they provide custom FPGA boards. Now system design teams can have access to early prototypes to build demonstration systems, start software development and debug in advance of silicon, and greatly accelerate RTL verification over large regression and compliance suites. I’ll be moderating a discussion with two experts on why and how customers may want to take advantage of this capability.

REGISTER NOW

This joint Open-Silicon and PRO DESIGN Electronic webinar, moderated by Bernard Murphy of SemiWiki, will address the benefits of FPGA-based prototyping in the ASIC design cycle, and the role it plays in significantly reducing the risk and schedules for specification-to-custom SoC (ASIC) development and the volume production ramp. Early software development and real-time system verification, enabled by FPGA prototyping, offer a cost-efficient high-end solution that shortens process cycles, boosts reliability, increases design flexibility, and reduces risk and cost. The panelists will outline best practices to overcome technical design challenges encountered in FPGA prototype development, such as design partitioning, real-time interfaces, debug and design bring-up. They will also discuss the key technical advantages that FPGA-based prototyping offers, such as architectural exploration, IP development, acceleration of RTL verification, pre-silicon firmware and software development, proof of concept and demonstrations, as well as its effect on performance, scalability, flexibility, modularity and connectivity.


Who should attend
This webinar is ideal for hardware system architects, hardware designers, SoC designers, ASIC designers, and SoC firmware and software developers.

About Open-Silicon
Open-Silicon transforms ideas into system-optimized ASIC solutions within the time-to-market parameters desired by customers. The company enhances the value of customers’ products by innovating at every stage of design — architecture, logic, physical, system, software and IP — and then continues to partner to deliver fully tested silicon and platforms. Open-Silicon applies an open business model that enables the company to uniquely choose best-in-industry IP, design methodologies, tools, software, packaging, manufacturing and test capabilities. The company has partnered with over 150 companies ranging from large semiconductor and systems manufacturers to high-profile start-ups, and has successfully completed over 300 designs and shipped over 125 million ASICs to date. Privately held, Open-Silicon employs over 250 people in Silicon Valley and around the world. www.open-silicon.com


Strong pickup in semiconductors in 2017
by Bill Jewell on 02-26-2017 at 12:00 pm

World Semiconductor Trade Statistics (WSTS) is an organization of semiconductor companies created to collect market data. The members of WSTS also meet twice per year to develop forecasts for the semiconductor market. The “forecast by committee” approach of WSTS usually results in conservative forecasts. However, WSTS called it right for 2016. The WSTS forecast released in December 2015 predicted the semiconductor market would grow 1.4% in 2016. The actual growth in 2016 was 1.1%. The chart below shows 2016 forecasts made in the October 2015 to January 2016 period, prior to any 2016 monthly data availability.


After WSTS, the closest was Gartner’s October 2015 prediction of 1.9%. Other forecasts were either higher in the 3.9% to 6.0% range (with our Semiconductor Intelligence forecast the highest) or negative. The 2016 market started out weaker than expected with a 5.3% decline in 1Q 2016 and a weak 0.9% growth in 2Q 2016 – normally a seasonally strong quarter. By mid-2016 basically all the forecasts were negative. Robust 11.6% growth in 3Q 2016 and healthy 5.4% growth in 4Q 2016 turned the year positive.

What is the outlook for the semiconductor market in 2017 and 2018? WSTS in November 2016 projected 3.3% growth in 2017 and 2.3% in 2018. WSTS appears conservative for 2017, with other forecasts ranging from 4.4% to 11%. Our latest forecast from Semiconductor Intelligence is for 10% growth in 2017 and 6% in 2018.


What is driving the uptick in the market in 2017? One factor is memory prices. Gartner’s Ganesh Ramamoorthy said increasing memory demand and prices added $10 billion to their 2017 forecast. IC Insights’ Bill McClean expects the memory market to grow 10% in 2017, double the rate of the overall IC market.

Another driving factor is an uptick in the global economic outlook for 2017 and 2018. The International Monetary Fund (IMF) in January projected global GDP will rise 3.4% in 2017, up 0.3 percentage points from 2016. 2018 is expected to add another 0.2 points to bring GDP growth to 3.6%. The advanced economies contribute only moderately to 2017 and 2018 GDP acceleration. U.S. GDP growth should pick up from 1.6% in 2016 to 2.5% in 2018, but remain below 2015’s 2.6%. The Euro Area and the United Kingdom are projected to show flat to decelerating growth over the next two years after the UK voted to withdraw from the European Union. Japan’s GDP growth will remain stuck below 1%.


The drivers for global GDP will be the emerging and developing economies, with GDP expansion accelerating from 4.1% in 2016 to 4.5% in 2017 and 4.8% in 2018. China’s GDP growth rate continues to slow, but should remain above a healthy 6%. India and the ASEAN-5 (Indonesia, Malaysia, Philippines, Thailand and Vietnam) offset China with accelerating GDP growth. Latin America will contribute by rebounding from a 0.7% GDP decline in 2016 to a 2.1% increase by 2018.

The electronics markets driving semiconductor market growth will shift from the old standbys of computing (PCs and tablets) and mobile phones (including smartphones). According to Gartner, PCs and tablets will improve from a 9.9% decline in units shipped in 2016, but only to 1.4% growth in 2018 and 2019. Mobile phone units are also expected to increase no better than 1.4% through 2019.


IC Insights expects semiconductor sales for the Internet of Things (IoT) to show a 13.3% compound annual growth rate (CAGR) from 2015 to 2020. Automotive semiconductors are another key driver, with a 2015 to 2020 CAGR of 10.3%.
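For scale, compounding those rates over the five-year window shows what they imply for total segment size:

```python
# What the quoted 2015-2020 CAGRs imply for total segment growth

def cagr_multiplier(rate, years):
    return (1 + rate) ** years

print(f"IoT semiconductors, 13.3% CAGR: {cagr_multiplier(0.133, 5):.2f}x")
print(f"Automotive semis,   10.3% CAGR: {cagr_multiplier(0.103, 5):.2f}x")
# 13.3% compounds to 1.87x and 10.3% to 1.63x over five years
```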

Our Semiconductor Intelligence forecast of a 10% increase in the semiconductor market in 2017 is based on:
· quarterly market trends driven by a strong second half of 2016
· moderate improvement in the global economy
· increasing memory prices
· modest improvement in the largest applications – PCs, tablets and mobile phones
· continued proliferation in emerging areas such as IoT and automotive
2018 growth will moderate as memory prices stabilize (or decline) but should be a healthy 6%.


OEMs Lead the Way on Self Driving Tech
by Roger C. Lanctot on 02-25-2017 at 7:00 am

It’s never a good sign when car makers are called before Congress. It’s almost as bad as being asked to visit the President. But last week the meeting didn’t involve allegations or investigations. It was just an occasion for a friendly chat regarding “Self-Driving Cars: Road to Deployment.”

IEEE Spectrum was kind enough to excerpt notable moments from the Q&A which inadvertently highlighted the confusion prevailing among car makers as to the evolution of automated driving. Links to the full video and the IEEE Spectrum report appear below.

The hearing was held before the U.S. House Subcommittee on Digital Commerce and Consumer Protection and included Gill Pratt from Toyota Research Institute, General Motors’ Vice President of Global Strategy Mike Ableson, and Anders Karrberg, Vice President of Government Affairs at Volvo Car Group, along with Lyft’s Vice President of Public Policy Joseph Okpaku, and Nidhi Kalra, Co-Director and Senior Information Scientist at the RAND Center for Decision Making Under Uncertainty.

Most notable among the responses elicited by the Congressional representatives were some key contradictions which conveyed the impression that the car makers are still seriously flummoxed by automated driving – and possibly less clear-headed regarding the regulatory path forward than you might expect. Two issues, in particular, stand out: Level 3 vs. Level 4 development plans and the role of government.

Frank Pallone (D-N.J.) asked: “Volvo has said that it will skip Level 3 automation and go from Level 2 to Level 4. Can you explain that decision?”

Karrberg of Volvo replied: “At Level 3, the car is doing the driving. The car is doing the monitoring. But the driver is the fallback. So, you could end up in situations where the driver has to take back control, and that could happen within seconds. We are concerned about the Level 3 state, and therefore we are targeting Level 4.”

Kalra of RAND agreed: “There is evidence to suggest that Level 3 may show an increase in traffic crashes, and so it is defensible and plausible for automakers to skip Level 3. I don’t think there’s enough evidence to suggest that it be prohibited at this time, but it does pose safety concerns that a lot of automakers are recognizing and trying to avoid.”

The reality is that Volvo has already launched its DriveMe project in Gothenburg, Sweden, with drivers monitoring the automated driving in Level 3 style:

http://www.volvocars.com/intl/about/our-innovation-brands/intellisafe/autonomous-driving/drive-me

Soon GM will be on a similar development path with human drivers monitoring the computer driving. Toyota, too, has adopted this vision. Level 3 automated driving is a critical step in the evolutionary path via which humans will “teach” the machines how to drive.

The greatest strength and weakness of Google/Alphabet/Waymo’s approach to automated driving has been the emphasis on driverlessness – no steering wheel, no pedals. This approach rules out the learning process and appears to be a limiting factor on Waymo’s commercial inroads thus far.

In fact, Pratt of Toyota cast a bit of shade on Waymo in his testimony (without mentioning the company by name) by noting that the data sharing requirements for tracking driver interventions (in California) seem to favor companies taking a particular approach: “It’s the Federal government that we believe should take the leading role. As you may know, in California there’s a requirement, if you’re doing autonomous car development, that you report to the government what your disconnection rate is—every time that your car has a failure of a certain kind. That’s not such a bad idea, but that information then becomes publicly available, and it creates a perverse incentive, and the incentive is for companies to try to make that figure look good, because the public is watching. And that perverse incentive then causes the company to not try to test the difficult cases, but to test the easy cases, to make their score look good.”

But don’t get the idea that Pratt, or any car company, is prepared to fully embrace government intervention in automated driving development. Debbie Dingell (D-Mich) asked: “Are there specific things that Congress should avoid doing that would stifle the development of autonomous vehicles?”

Both GM and Toyota appear to agree that the government role should be limited. Says Ableson of GM: “We wouldn’t want to see [the] government taking steps to specify a specific technology or specific solution. I think as long as we keep in mind that the goal is to prove that the vehicles are safer than drivers today, the NHTSA guidelines published last year are a very good step in that direction, in that they specify what the expectations are before vehicles are deployed in a driverless fashion.”

Pratt of Toyota agreed: “An evidence based approach is really the best one, where the government sets what the criteria are for performance at the federal level, but does not dictate what the ways are to meet that particular level of performance.”

The message from Toyota and GM is clear. Tell us what to do, not how to do it.

These two perspectives are curious in that they conflict with both Toyota’s and GM’s position on the implementation of vehicle-to-vehicle communications, currently facing a potential mandate from the National Highway Traffic Safety Administration. It was no surprise, then, that Gus Bilirakis (R-Fla.) wanted to know where the respondents stood on V2V tech and how it “fit into the overall blueprint of deploying self-driving cars.”

Pratt took the lead, ultimately calling for preservation of the spectrum set aside for V2V applications: “Vehicle to vehicle as well as vehicle to infrastructure communication is of critical importance to autonomous vehicles. Of course, we drive using our own eyes to see other vehicles, but the potential is there for autonomous vehicles to use not only the sensors on the vehicle itself, but also sensors on neighboring vehicles in order to see the world better. And so, for example if you’re going around a corner, and there’s some trees or a building that’s blocking the view, vehicle to vehicle communication can give you the equivalent of x-ray vision, because you’re seeing not only your view, but also the view from other cars as well… We have to give ourselves every possible tool in the tool chest in order to try and solve this problem. So I think … saving spectrum for that use is also very important.”

In essence, Pratt distills a quite contentious and complex V2V debate into an argument for a government mandate for inter-vehicle communications. Such an argument is only consistent in the context of cellular-based alternatives being considered as candidates for connecting cars to each other and to infrastructure.

In the end, what emerges is a picture of ongoing confusion regarding the kind of help the automotive industry desires and the role of Congress or even the U.S. Department of Transportation. It’s worth noting that the Federal Communications Commission is still testing the parameters governing spectrum sharing and an appropriate path forward. Testing of security protocols and standards for V2V is also ongoing.

The lack of a perspective or testimony from either Tesla Motors or Waymo is notable, as is the lack of representation from state and local legislators and regulators and from organizations advocating (both pro and con) on behalf of consumers – to say nothing of representatives of commercial trucking companies, rental car companies and taxi and limousine associations. In fact, the voices of consumer groups and employers are the ones most missed in this hearing.

If legislators could hear the demand side of the conversation more clearly it would clarify the confusion and contradictions currently ruling the space. Until these voices are heard, car makers will be left to their own aimless devices and regulators and legislators may well go awry creating delays and roadblocks or leading the industry down blind alleys.

Full video of the testimony can be found here: https://www.c-span.org/video/?423974-1/auto-industry-executives-testify-selfdriving-cars


Another Live Event at Samsung!
by Daniel Nenni on 02-25-2017 at 7:00 am

Last week Samsung hosted the GSA Silicon Valley “State of the Industry” Meet-up which was well attended by the semiconductor elite, myself included. The agenda started with an update on the semiconductor industry outlook followed by deep dives into Automotive, IoT, Artificial Intelligence, and Cybersecurity all of which are tracked quite closely on SemiWiki.com. It was a very informative session with great food and networking. Thank you GSA and special thanks to Samsung for hosting the event at their new HQ in San Jose.

Next week Samsung is hosting another live event with eSilicon and Rambus; space is limited so register now. I’m not acquainted with two of the speakers, Mohit Gupta and Dr. Kang Sung Lee, but I do know Bill Isaacson and can tell you he is an excellent no-nonsense speaker. Bill spent the first half of his 20-year semiconductor career at LSI Logic and the second half at eSilicon, so Bill knows the ASIC business, absolutely.

Advanced ASICs for the Cloud-Computing Era:
Succeeding with 56G SerDes, HBM2, 2.5D and FinFET

A dramatic increase in network bandwidth and cloud-computing infrastructure is on the way. Fueled by applications such as deep machine learning and massive data volumes from a connected world, the performance demands of ASICs to support these new applications are daunting.

Join eSilicon, Rambus and Samsung Foundry for an overview of the advanced technologies being deployed to address these challenges. We’ll discuss HBM technology and the associated PHY, high-speed SerDes technology, 2.5D integration, high-performance ASIC design, interposer/package design and the manufacturing and packaging technologies available to address this class of FinFET-based designs.
There is no charge for this live event.

Registration closes at noon Pacific time on Friday, March 3, 2017. For security reasons, only those who have pre-registered may attend.

Agenda
3:30 – 4:00 Check in at the South Lobby reception area
4:00 – 4:15 Welcome, overview of HBM/2.5D market and applications
4:15 – 4:45 Enabling IP
4:45 – 5:15 ASIC, interposer and package design
5:15 – 5:45 Fabrication and packaging
5:45 – 6:00 Panel discussion
6:00 – 7:00 Networking reception; drinks and light hors d’oeuvres

Event Details
Wednesday, March 8, 2017
4:00-7:00 PM
Samsung Semiconductor
3655 N First Street
South Lobby
Palace Auditorium
San Jose, CA 95134

Register>>>

SPEAKERS

eSilicon
Bill Isaacson manages all aspects of strategic account relationships for eSilicon. Previously, he managed advanced development for eSilicon’s custom design initiatives. Prior to this position, he ran the customer solutions engineering function at eSilicon for eight years. Bill was previously at LSI Logic, where he was a design center manager. Bill received his BS/EE degree from the University of Illinois at Urbana-Champaign.

Rambus
Mohit Gupta is currently a senior director of product marketing in the Memory and Interfaces Division at Rambus, managing the SerDes IP portfolio. Prior to joining Rambus in 2015, Mohit led multiple IP development, application engineering and pre-sales teams at Open-Silicon, Infineon Technologies and STMicroelectronics. Mohit holds BE, MS and MBA degrees from premier universities in India.

Samsung Foundry

Dr. Kang Sung Lee is currently a principal engineer in Samsung Foundry marketing. He is responsible for defining and promoting Samsung Foundry technology offerings for customer product needs. Dr. Lee joined Samsung in 2007 as a customer engineer and has supported worldwide customers in multiple market segments, from mobile AP, GPU, FPGA and networking to cognitive computing.


PowerTree — a data repository and simulation platform for PCB power distribution networks
by Tom Dillinger on 02-24-2017 at 12:00 pm

The difficulty of managing the power domains on a complex SoC led to the development of a power format file description, to serve as the repository for data needed for functional and electrical analysis (e.g., CPF, UPF). Yet, what about complex printed circuit boards? How can the power domain information be effectively represented (for one or more boards), and used as the repository for subsequent analysis? How can electrical analysis be pulled “into the design phase”, reducing the PCB optimization effort? How can the PCB power format information be derived automatically, and submitted for simulation?

Specifically, the issues with accelerating power integrity analysis for a complex power distribution network (PDN) include:

  • PDN connectivity is difficult to visualize, as it is embedded in the detailed schematics
  • strong version management of the PDN and component model library is required during the design phase, enabling a quick comparison highlighting differences in the PDN design of successive versions
  • initial pre-layout simulation support is needed, to help identify any gross errors before layout
  • simulation set-up is a pain

Cisco recently shared an example of the magnitude of the power domain topologies from a current product (link below) — e.g., ~20 power rails, ~250 components, and ~500 power nets (including nets around filter components). Specifically, the system-level board design includes a wide diversity of component types, with varied constraints on the allowed I*R (DC) supply voltage drop and component power pin impedance profile versus frequency (AC, with optimized decoupling capacitance sizing and placement).
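Behind those constraints sits the standard power-integrity sizing math; this is textbook material, not anything specific to the Sigrity tools, and the rail values below are assumed for illustration: a target impedance follows from the allowed ripple, and that in turn bounds the bulk capacitance needed at low frequency.

```python
# Textbook PDN sizing relations with assumed rail values (0.9 V rail,
# 3% allowed ripple, 10 A transient); not tool-specific.

import math

VDD, RIPPLE, I_TRANSIENT = 0.9, 0.03, 10.0

# Target impedance: keep V_noise = I_transient * Z under the ripple budget
z_target = VDD * RIPPLE / I_TRANSIENT
print(f"Z_target = {z_target * 1e3:.1f} mOhm")        # 2.7 mOhm

# Bulk capacitance so |Z_cap| = 1/(2*pi*f*C) stays under Z_target at
# 100 kHz (below the capacitors' self-resonance)
f_hz = 100e3
c_bulk = 1 / (2 * math.pi * f_hz * z_target)
print(f"C_bulk >= {c_bulk * 1e6:.0f} uF")             # ~590 uF
```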

I recently had the opportunity to chat with Brad Griffin, Product Management Director, Custom IC & PCB Group at Cadence. Brad described how Cadence is helping address the requirement to bring PI analysis forward into the design flow. “With the assistance of customers like Cisco, we have developed an advanced feature in our recent Sigrity Power Integrity and OptimizePI toolset. The PowerTree repository is a unique method to capture and visualize complex board design information and related component model constraints.”

Setup of the PowerTree configuration is straightforward — a screen shot of the “Build Power Tree” dialog is included below.

A screen shot of the PowerTree application representing a complex PDN is included below.


The component bill-of-materials and connectivity netlist are derived from Cadence Allegro. A single view consolidates data across a potentially large number of schematic pages. Component models provide both electrical behavior and verification checking limits. Designers can include additional design constraints and component model data in the PowerTree repository.
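As a mental model, such a repository is essentially a tree of rails, filters and loads annotated with per-component constraints. The sketch below invents its own structure and field names purely for illustration; the actual PowerTree format isn’t described in this post.

```python
# Invented rail/component tree for illustration -- NOT the actual
# PowerTree data format.

from dataclasses import dataclass, field

@dataclass
class PdnNode:
    name: str                       # regulator, filter or load
    max_ir_drop_mv: float = 0.0     # per-component DC constraint
    children: list = field(default_factory=list)

    def walk(self, depth=0):
        print("  " * depth + f"{self.name} (IR drop <= {self.max_ir_drop_mv} mV)")
        for child in self.children:
            child.walk(depth + 1)

rail = PdnNode("VRM_0V9", 0.0, [
    PdnNode("L_filter", 5.0, [PdnNode("ASIC_core", 20.0)]),
    PdnNode("DDR_PHY", 15.0),
])
rail.walk()
```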

Brad highlighted, “From the PowerTree environment, PCB designers can quickly generate and run DC analysis simulations in Sigrity PowerDC, initially from the schematic, and then with detailed layout parasitics. After optimizing the DC profile, the design is then provided to the power integrity expert for decap selection and positioning to optimize the frequency-dependent power impedance profile. The PI expert receives a higher quality design to start their work with Sigrity OptimizePI.”

Brad and I both acknowledged that power integrity experts are a precious resource, and always overworked. 🙁 The opportunity for board designers to quickly derive and view the power topology and component data using the Sigrity PowerTree feature, then simulate to ensure correct DC margins are provided, will be a tremendous aid to the PI analysis activity.

For more info on the new Sigrity PowerDC and PowerTree features, please follow this link.

For additional insight into the collaboration with Cisco and Cadence on PowerTree, a link to a recent joint presentation at CDNLive is here.

-chipguy