

Car Companies Confront Data Sharing
by Roger C. Lanctot on 04-06-2016 at 12:00 pm

Senior OnStar executives have long intoned at industry events that the customer owns his or her data. The problem is that customers are allowed only glimpses of that data; they have no real control over it, in spite of their so-called ownership.

It’s a complex challenge, especially given that car companies have obligations to preserve privacy and ensure security, and that vehicle data might conceivably be used against the interests of the auto maker. Nevertheless, the calls for data sharing are growing, with the loudest voices coming from the automotive aftermarket.

So it was somewhat surprising that an aftermarket association stepped forward last week to help quash legislation before the Rhode Island legislature seeking to empower consumers with full control of, and access to, their vehicle data:

RELATING TO MOTOR AND OTHER VEHICLES - CONSUMER CAR INFORMATION AND CHOICE ACT

The Auto Care Association and the Coalition for Auto Repair Equality (CARE) applauded the Rhode Island House Legislature for considering legislation (HB 7711) requiring car companies to provide car owners with the ability to control where information transmitted by vehicle telematics systems is sent. The two associations then asked the legislature to set aside the bill while the aftermarket task force works cooperatively with auto makers to resolve the data question.

The two organizations testified before the Rhode Island House Committee on Corporations regarding the benefits of telematics to the auto care industry, including the ability for shops to obtain diagnostic data from a vehicle before it arrives at the shop, which could improve service bay efficiency and speed the vehicle repair process. The significance of this testimony derives from the potential influence of individual U.S. states on national policy governing vehicle repairs. It was Massachusetts’ adoption of Right to Repair legislation which helped to open up access to OEM diagnostic data in the first place – paralleling similar initiatives in Europe.

Aftermarket service providers are seeking to extend their repair rights further by enabling wireless or remote access to vehicle diagnostic codes. Consumers and repair shops can gain access to at least some of those codes today via aftermarket OBDII plug-in devices such as those from Automatic, Vin.li, or Verizon Hum. But car makers are working to limit these devices’ access beyond the standard codes used for emissions testing and other purposes.
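For a sense of what that access looks like in practice, here is a minimal sketch using the open-source python-OBD library to read the standardized diagnostic data an OBDII dongle exposes. Port detection and command availability vary by vehicle and adapter, so treat this as illustrative rather than specific to any of the products above.

```python
# Minimal sketch: reading standardized OBD-II data with the open-source
# python-OBD library. Connection details and supported commands vary by
# vehicle and adapter; this is illustrative only.
import obd

connection = obd.OBD()  # auto-detects an ELM327-style adapter

if connection.is_connected():
    # Standardized Mode 01 PIDs -- the data every compliant vehicle exposes
    for cmd in (obd.commands.RPM, obd.commands.SPEED, obd.commands.COOLANT_TEMP):
        response = connection.query(cmd)
        if not response.is_null():
            print(cmd.name, response.value)

    # Mode 03: stored diagnostic trouble codes (DTCs)
    dtcs = connection.query(obd.commands.GET_DTC)
    print("Trouble codes:", dtcs.value)
```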

Car makers are concerned that the OBDII port ultimately represents a source of vehicle vulnerability to hacking. Ford, Subaru, General Motors and some others have begun to provide the means for consumers to share their driving data with their insurance companies for the purposes of obtaining discounts. Ford and GM have also enabled – again, along with a growing list of competitors – consumers to access data on vehicle health and performance. It’s a start.

The groups testified that, “All of the data available from embedded systems currently goes to the vehicle manufacturer, allowing them, and only them, to reap the benefits of this technology. Specifically, armed with the extensive data about a customer’s vehicle, combined with the means to communicate directly with the driver in real time, the vehicle manufacturer has the ability to steer the motorists to the dealership or to a service establishment that may be a strong purchaser of their parts and information.

“While our associations both applaud and support the goal of HB 7711, at this time we cannot support passage.” In the end, the bill was set aside.

In the testimony, the two groups stated that, “While legislation may be necessary in the near future, we strongly believe that a collaborative approach would be faster and more effective, and we are more than willing to work to make that happen.”

CARE and the Auto Care Association explained that both groups are working with other associations as part of the Aftermarket Telematics Taskforce, which has been meeting with the car companies in an attempt to find common ground. “This process is in its early stages, and therefore it is difficult to judge whether we will be successful. Should our attempt to find an agreement not be successful in the near future, we likely will begin pursuing a legislative resolution and would welcome the help of the Rhode Island Legislature in order to resolve this very critical issue.”

No two car companies have taken the same approach to vehicle connectivity or to the sharing of, or access to, vehicle data. There are few standards governing so-called vehicle “gateways” for accessing data, and security and privacy concerns have become increasingly severe barriers to greater sharing of vehicle data.

Organizations such as automobile clubs, AAA in the U.S., have been advocating loudly for more open access to vehicle data, driven by their commercial interest in coveted vehicle repair and insurance business opportunities. It’s reassuring to see more reasonable voices prevailing in this debate.

Auto makers are struggling to come to terms with sharing vehicle data. The last thing they need at this particular moment is a legislative mandate.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk



Silicon Valley Myth Keeps Growing
by Daniel Isenberg on 04-06-2016 at 7:00 am

I thought my ears were deceiving me when a commentator (I believe it was David Brooks) gushed with admiration on NPR a few years ago that it was great that Jeff Bezos of Amazon was injecting Silicon Valley thinking into the Washington Post. What?!?!?! Unless my Google got jammed, Amazon is in Seattle. But Silicon Valley got the tagline. “Jeff Bezos is bringing the Seattle Rainforest to the Washington Post” just doesn’t sound cool.

Silicon Valley is unique and wonderful, yada yada, and so is Amazon’s rainforest, but Silicon Valley gets blind, uncritical credit as the be-all and end-all of entrepreneurship. Having helped raise four kids, I am used to nobody ever listening to me, but even I was surprised that no one noticed when Harvard Business Review published my 2010 exhortation to “Stop Emulating Silicon Valley,” and I am perplexed at the persistence and self-perpetuation of Siliconology (my favorite is Billy-Can Valley). (Hey, why not SiliFran Valley? Anyhow, you heard it here first, now that the former prune capital has grown into the City.)

Here is just today’s latest juicy example:

Kleiner Perkins, Google Ventures and friends (yes, mostly HQ’ed in SiliFran Valley) invested $120 million in “Silicon Valley Startup Juicero.” Is it cool? I Tesla-vate just from the picture. Think Keurig, Segway, SodaStream, Coravin, and other standalone, hard-to-make, complex consumer products. Some succeeded, some flopped (think Segway), and for some the jury is still out.

Will the $700 Juicero succeed? Who knows? All else being equal, I suppose a complex consumer product with big direct competitors (like Tesla) with $120 million of funding has a much better chance at success than one that has less funding. Do I hope it succeeds? Sure, why not?

Is it a SiliFran Valley startup? NOPE. Juicero is one of hundreds or thousands of Silicon Valley transplants. It has its roots in New York, and in a failed juicery, Organic Avenue, started in 2002. One more time: ROOTS IN NEW YORK! Long story, but as a result, co-founder Doug Evans, apparently without even a (gasp!) accelerator or a hackathon, hacked away in his Brooklyn (you read it here: BROOKLYN – OMG!) apartment and came up with a prototype, and so on. Then, after his innovative juicer started to work and the risk started to recede (admittedly, I am reading between the lines and basing this on having seen it happen before), the SV money guys (yep, still mostly guys) said “Come to Silicon Valley.”

Am I a grumpy, snow-bound winter-weary Bostonian? Of course, but I lived, entrepreneur-ed and invested for 22 years in Israelicon Valley, way before it was popular, and I have seen surprising and impressive entrepreneurship almost everywhere I have seen people. That’s what successful entrepreneurs do – they surprise us. And they can do that anywhere in the world.



The Most Important Point You May Have Missed at CDNLive 2016!
by Daniel Nenni on 04-06-2016 at 4:00 am

This was the best keynote lineup I can remember at a user group meeting. All four speakers are visionaries but from very different perspectives. The video of the event will be up later this month but from my first count the word “System(s)” was mentioned 32 times and the underlying message will transform the semiconductor industry and EDA yet again, absolutely.


In case you haven’t noticed, SemiWiki added a navigation bar with our main sources of traffic. The new categories (Mobile, IoT, Automotive, and Security) can also be viewed as systems because in our world that is what they are and that is where our traffic is taking us. Systems companies, or more specifically, Fabless Systems Companies, are now leading the semiconductor industry and EDA is quick to follow. The EDA Consortium even renamed itself Electronic Systems Design Alliance (ESD Alliance).

All good news, right? Not necessarily, especially if you are a small-to-medium “point tool” EDA company…

First up was Cadence CEO Lip-Bu Tan speaking on the expanding system design opportunities. This is one of the best keynotes I have seen from him, very clear and definitely inside his comfort zone. The one thing that really got me excited was when he mentioned the Deep Learning Revolution which I agree with 100%. Devices are getting exponentially smarter and that means we will never have enough compute power and that will continue to drive new process nodes and tools keeping us all employed.

Second was Steve Mollenkopf, CEO of QCOM. Steve started out calling QCOM a systems company, which I can attest to. We did the history of Qualcomm in our latest book, Mobile Unleashed, if you want to know the backstory. According to Steve, QCOM spent the first 30 years connecting people and will spend the next 30 years connecting everything else. The Snapdragon SoC platform is a big part of that, but 5G is the critical link because super processing power is only as good as the link to the cloud. And you can’t talk about 5G without talking about Qualcomm.

Third was Sanjay Jha, CEO of GlobalFoundries. Since my day job is working with the foundries I have a sincere appreciation for what he is trying to accomplish. Sanjay mentioned that power consumption is a critical measurement of semiconductors which is why GF is offering FinFET and FD-SOI technology, both of which have been trending on SemiWiki for the last three years. In fact, “low power” has been one of the top search terms since we started in 2011. Sanjay also talked about 5G and Silicon Photonics which is our newest SemiWiki category and something you will be reading much more about in the near future.

Fourth was Tom Beckley, EVP of Custom IC and PCB at Cadence. Notice that he owns both custom IC and PCB? Systems, systems, systems… Tom talked about systems design enablement, vulnerability, and functional safety. The centerpiece of course was the 25th anniversary of Virtuoso and the new unified simulation environment that was announced this morning. Tom Dillinger covered it for SemiWiki: Analog Design Verification — Traceability is Required

Okay, that concludes my 500 words. To truly appreciate the keynotes you will have to watch them but let me end with this:

FinFET design is complex, double and quadruple patterning are difficult, chip power and performance are critical, system-level design will require fully integrated tools, and the next generation of fabless systems companies have VERY tight product schedules and are much more risk averse than what we are used to.

Bottom line: The rich EDA companies will get richer and the point tool vendors will be forced to innovate or die… Just my opinion of course.



Innovation in Transistor Design with Carbon Nanotubes
by Students@olemiss.edu on 04-05-2016 at 4:00 pm

The New York Times article “IBM Scientists Find New Way to Shrink Transistors” by John Markoff focuses on the semiconductor industry’s goal of creating smaller transistors in order to remain competitive, while emphasizing cutting-edge design strategies using carbon nanotubes. By switching from traditional materials to carbon nanotubes, IBM physicist Wilfried Haensch and his research group highlight improvements in power savings and, ultimately, a path to increasing the speed of IBM’s microprocessors sevenfold.

The transistor industry constantly faces what is known as the “Red Brick Wall”: the inability to shrink the transistor further due to technical limitations. In the mid-1980s, the limitation was breaking the one-micron barrier, for which methods using optical technology proved to be the solution. In the early 2000s, the limitations shifted to include the gate stack, interconnects, and other factors. Still, the transistor industry discovered new ways to adapt to these crises and continue. Today, the technical limitations IBM faces include a variety of factors such as electrical resistance, temperature, and materials.

Electrical resistance is the difficulty of passing electrical current through a conductive material: the more resistance, the less electricity will flow. Electrical resistance thus describes the relationship between voltage and current; in essence, higher resistance limits the electric current able to flow through a wire. Reducing electrical resistance increases processor chip speed even as physical dimensions shrink.
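Stated explicitly, the relationship being paraphrased here is Ohm’s law:

```latex
V = I R \quad\Longrightarrow\quad I = \frac{V}{R}
```

So at a fixed voltage, doubling a wire’s resistance halves the current it can deliver, which is why interconnect resistance throttles a shrinking chip.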

Along with electrical resistance, as transistor temperature increases, so will the collector current. A continual increase in heat will cause thermal runaway to occur, ultimately breaking the transistors. Limiting both electrical resistance and heat becomes a priority in order to maintain transistor stability as well as higher performance gains. All of these technical limitations compound to impact processor chip speed and pose challenging design choices for chip designers.
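A toy numerical sketch of that feedback loop, with arbitrary constants chosen only to make the positive feedback visible (this is not a model of real device physics):

```python
# Toy model of thermal runaway: collector current rises with temperature,
# dissipated power rises with current, and temperature rises with power.
V_CE = 1.0     # collector-emitter voltage (V)
I_C = 1e-3     # initial collector current (A)
T_AMB = 300.0  # ambient temperature (K)
R_TH = 500.0   # thermal resistance to ambient (K/W)

for step in range(30):
    P = V_CE * I_C                   # dissipated power (W)
    T = T_AMB + R_TH * P             # junction temperature (K)
    I_C *= 1.0 + 0.08 * (T - T_AMB)  # current grows with temperature (toy gain)
    if I_C > 1.0:
        print(f"step {step}: thermal runaway, I_C exceeds 1 A")
        break
    if step % 5 == 0:
        print(f"step {step:2d}: T = {T:8.2f} K, I_C = {I_C * 1e3:10.3f} mA")
```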

For a little over a decade, multi-core technology has been used to improve power efficiency rather than raw processing speed, because resistance and heat increase as conventional silicon transistors shrink. In response to these technical barriers, IBM has chosen to switch its transistors from silicon to carbon nanotubes because of their useful properties. Carbon nanotubes are strong, light, and conductive tubular cylinders of carbon atoms that form a matrix one atom thick. Carbon nanotubes also exhibit none of the major physical degradation common to metals, making them more stable, as well as 15 times more conductive than copper with 1,000 times the current-carrying capacity. Dario Gil at IBM recognizes the clear advantages of carbon nanotubes over other materials, stating: “Of all the possible materials, this one is at the top of the list by a long shot.”

Another key innovation involves IBM’s new approach to the placement of the carbon nanotubes in transistors. The new design uses carbon nanotubes in parallel rows to connect ultrathin metal wires together, while focusing on decreasing the size of the contacts to the metal wires. By finding ways to align the carbon nanotubes close together, IBM can shrink the size of each transistor. In this way, the carbon nanotubes perform the electrical switching and the other essential functions of a transistor. IBM proposes to enhance the design further over the next decade by decreasing the contact size from 40 atoms to 28 atoms in width.

Zach Allen & Parun Thamutok


References

J. Markoff, “IBM Scientists Find New Way to Shrink Transistors,” The New York Times, 01-Oct-2015. [Online]. Available at: http://www.nytimes.com/2015/10/02/science/ibm-scientists-find-new-way-to-shrink-transistors.html?_r=1. [Accessed: 10-Mar-2016].
“Nanocomp Technologies | What Are Carbon Nanotubes?,” Nanocomp Technologies | What Are Carbon Nanotubes?, 2014. [Online]. Available at: http://www.nanocomptech.com/what-are-carbon-nanotubes. [Accessed: 10-Mar-2016].
M. LaPedus, “Has The IC Industry Hit A ‘Red Brick Wall’?,” Semiconductor Engineering, 09-Jun-2014. [Online]. Available at: http://semiengineering.com/will-ic-industry-hit-red-brick-wall/. [Accessed: 10-Mar-2016].



Analog Design Verification — Traceability is Required
by Tom Dillinger on 04-05-2016 at 9:45 am

Digital verification engineers have developed robust, thorough metrics for evaluating design coverage. Numerous tools are available to evaluate testbenches against RTL model descriptions — e.g., confirming that simulation regressions exhaustively exercise signal toggles, RTL statement lines, individual statement sub-expressions, individual conditional paths in a case or if/then/else construct, etc. Hardware description language standards have evolved to include (non-functional) statement definitions that allow verification engineers to add specific, complex measurement tests — e.g., asserts, covergroups.

Digital verification teams have also often deployed methods to track coverage progress throughout the design cycle, to identify areas in the overall design where additional (directed or biased-random) testcases need to be written and added to the verification suite. Increasingly, formal model property verification methods are also being applied to augment coverage. Upon evaluation of this coverage “dashboard”, the verification team lead can signoff for tapeout with high confidence.

To date, the verification of analog IP functionality to specification has typically been much less structured, with various ad hoc methods developed to assess the overall quality of the simulation strategy:

  • specification documentation and testplan reviews
  • schematic and layout reviews, with testcase simulation waveforms
  • signoff documentation reviews

Just what every engineer looks forward to — more meetings. 🙁

Realistically, it is extremely difficult to apply metrics that reflect how the design of analog components and parasitics affects circuit behavior. Nevertheless, a more structured methodology for analog IP verification is definitely needed. This necessity is further advanced by the imposition of quality standards by various organizations, which define requirements for product release information — e.g., ISO 26262 for automotive market products.

To help address this issue, Cadence is announcing a major update to their popular Analog Design Environment (ADE) platform.

I had the opportunity to get a preview of the extensive verification features added to the new ADE product family from Steve Lewis, Product Marketing Director, Custom IC and Packaging Group.

Steve began by highlighting, “This is a new platform, consisting of ADE Explorer, Assembler, and Verifier. The focus is to enable advancements in analog verification methodologies. We will still be maintaining the existing ADE L/XL/GXL products — yet, we anticipate designers will eagerly want to adopt the capabilities of these new applications.”

The following figure illustrates the high-level focus of this announcement, using the ISO 26262 automotive quality standard as an example. The key bullet in this figure is “Traceability”.


Briefly, traceability implies that the product release must include documentation recording:

  • what tests were run
  • what environmental conditions were used
  • the link between tests and design specifications
  • what ran the tests, and when
  • demonstrated success that the design specification was met by all related tests

Ad hoc verification methods for capturing and maintaining this information will not scale with the complexity of analog IP currently in development.
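As a rough sketch of what a scalable traceability record has to capture, consider the following hypothetical structure built from the list above (it is not Cadence’s actual data model):

```python
# Hypothetical sketch of a traceability record: the information the product
# release must capture, per the list above. Not Cadence's maestro data model.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TestRun:
    test_name: str         # what test was run
    spec_requirement: str  # link between the test and the design specification
    conditions: dict       # environmental conditions (corner, temperature, supply)
    tool: str              # what ran the test
    timestamp: datetime    # when it ran
    passed: bool           # whether the related specification was met

def signoff_ready(runs, required_specs):
    """Every spec requirement must be covered by at least one passing run."""
    covered = {r.spec_requirement for r in runs if r.passed}
    return set(required_specs) <= covered

runs = [
    TestRun("psrr_sweep", "SPEC-PSRR-01", {"corner": "ss", "temp_C": 125},
            "Spectre", datetime(2016, 4, 1, 9, 30), True),
]
print(signoff_ready(runs, ["SPEC-PSRR-01"]))  # True
```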

Steve described the new ADE products in some detail, using the diagram appended below:


ADE Explorer
Explorer is the day-to-day environment that the design engineer will typically use. Cadence re-allocated some capabilities from the existing ADE family into the base Explorer feature, recognizing that the advanced features of ADE L/XL/GXL are now de rigueur for all designers — e.g., Monte Carlo simulation support, sensitivity analysis.

Significant usability features for faster iterative design closure were added, as well. For example, the figure below highlights the design tuning panel in Explorer, which facilitates rapid updates to schematic and environment parameters dispatched to Spectre or Virtuoso AMS Designer for simulation. Waveform balloons annotated to the schematic show comparative results, to more quickly iterate on design optimizations.


The key is that the simulation test results are maintained by Explorer in a model view used by ADE Assembler and Verifier.

ADE Assembler
Steve associated the Assembler feature with prevalent use by the “block-level” verification team. Complex verification plans can be represented visually, as illustrated in the figure below. Conditional and interdependent testcase relationships (“run plans”) can be identified — for example, one set of test results are to be used in the next testbench set. Using the new ADE Variation feature, the verification team can apply high-sigma statistical, optimized Monte Carlo simulation tests.


ADE Verifier
The verification plan originates in, and comes together within, ADE Verifier, an encompassing application that is used throughout the design cycle. Initially, as illustrated below, the correlation between design specification and testcase plan is established, a key facet of the traceability requirements of the product release standards.


During the design, Verifier provides the overall dashboard data that verification project managers will use, as depicted in the figure above.

As the design approaches signoff, Verifier can also serve as the batch simulation regression test manager.

Configuration Management
I asked Steve, “These ADE features look great, but they are clearly very dependent upon the configuration and version data management (DM) policies used during design. How is that handled?”

He replied, “Seamlessly. These new ADE features work directly with existing DM tools that ADE teams are using. These features utilize a new model view — maestro — that records and maintains all the traceability information required. The OA data model for the traditional Virtuoso views remains unchanged — e.g., schematic, layout, symbol views. The maestro view is added to the design database for the new Explorer/Assembler/Verifier features. We worked closely with the DM software tool providers to optimize the maestro view for performance. Existing ADE L/XL/GXL products are easily imported into the new ADE features, and the maestro view data accumulated.”

More robust analog IP verification methodologies are needed to cope with increased IP complexity and the requirements for standards traceability. Yet, to gain rapid deployment, these approaches must be evolutionary, building upon existing tools and interfaces and providing usability features that are intuitive to analog designers and verification teams. With the new capabilities added to ADE — Explorer, Assembler, and Verifier — Cadence has enabled this rigorous transition to be quickly and easily adopted.

For more information on the new ADE product family, and the new Virtuoso product announcement, please follow this link.

-chipguy



Path FX – the Production Proven Answer to Static Timing Analysis with Variation
by Isadore Katz on 04-05-2016 at 7:00 am

I want to compliment ChipGuy on a very nice write-up of a complex topic – how to model process variation in static timing.

Continue reading “Path FX – the Production Proven Answer to Static Timing Analysis with Variation”



Optimizing memory scheduling at integration-level
by Don Dingee on 04-04-2016 at 4:00 pm

In our previous post on SoC memory resource planning, we shared 4 goals for a solution: optimize utilization and QoS, balance traffic across consumers and channels, eliminate performance loss from ordering dependencies, and analyze and understand tradeoffs. Let’s look at details on how Sonics is achieving this. Continue reading “Optimizing memory scheduling at integration-level”



IoT + Big Data + Cloud + AI Integration Insights from Patents
by Alex G. Lee on 04-04-2016 at 12:00 pm

IoT Big Data Aggregation

US20140297826 illustrates a system for big data aggregation in a sensor network. The most important part of Internet of Things (IoT) big data analytics is collecting data before storing it. The Hadoop big data platform supports collecting data in the Hadoop Distributed File System (HDFS), an open-source technology for storing big data in a distributed, reliable fashion. The big data aggregation system includes a sensor network comprising many sensor nodes connected to each other over a wired/wireless network, configured to transfer the sensor data generated by each node to a big data management unit by setting the destination address in the sensor data to the address of that unit. The big data management unit is configured to distribute and store the sensor data dispersedly, based on the destination address set in the sensor data.
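A toy sketch of the scheme the patent describes, with hypothetical names and plain Python lists standing in for HDFS data nodes:

```python
# Toy sketch of the patent's aggregation scheme: every sensor reading is
# addressed to the big data management unit, which disperses it across
# storage shards. Names and the storage stub are hypothetical.
MGMT_UNIT_ADDR = "10.0.0.1"

def sensor_reading(node_id, value):
    # each node stamps its data with the management unit's address
    return {"dest": MGMT_UNIT_ADDR, "node": node_id, "value": value}

class BigDataManagementUnit:
    def __init__(self, shards):
        self.shards = shards  # stand-ins for distributed HDFS data nodes

    def ingest(self, reading):
        assert reading["dest"] == MGMT_UNIT_ADDR
        # disperse storage by hashing the source node across shards
        self.shards[hash(reading["node"]) % len(self.shards)].append(reading)

shards = [[], [], []]
unit = BigDataManagementUnit(shards)
for i in range(9):
    unit.ingest(sensor_reading(f"node-{i}", 20.0 + i))
print([len(s) for s in shards])  # readings dispersed across the three shards
```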
IoT Big Data Platform

The Hadoop big data platform is based on the MapReduce framework, which is described in US7650331. US20110313973 illustrates a MapReduce framework including a shuffle function that uses the DFS. US20150012502 illustrates a big data central intelligence system for managing, analyzing, and maintaining large-scale, connected information systems such as IoT device networks.
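For readers new to the framework, here is the MapReduce shape in a few lines of plain Python; Hadoop runs the same map/shuffle/reduce steps distributed across a cluster:

```python
# Toy MapReduce pass: map emits key/value pairs, shuffle groups them by key,
# reduce aggregates each group (here, the mean reading per sensor).
from itertools import groupby
from operator import itemgetter

readings = [("sensor-a", 21.0), ("sensor-b", 19.5), ("sensor-a", 22.0)]

# map: emit (key, value) pairs
mapped = [(sensor, temp) for sensor, temp in readings]

# shuffle: sort and group the pairs by key
mapped.sort(key=itemgetter(0))
grouped = {k: [v for _, v in g] for k, g in groupby(mapped, key=itemgetter(0))}

# reduce: aggregate each group
reduced = {k: sum(vs) / len(vs) for k, vs in grouped.items()}
print(reduced)  # {'sensor-a': 21.5, 'sensor-b': 19.5}
```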

IoT Big Data Real Time Processing

US20150134704 illustrates a system for processing large-scale unstructured data in real time. Interconnected IoT sensing devices continuously generate massive information at very high speed, so a technology for effectively processing a huge amount of information in the form of a data stream, in real time, is very important. The real-time big data analysis system includes a receiver for receiving streamed input data from live data sources, a pattern generator for deriving emergent patterns in data subsets, a pattern identifier for identifying a repeating pattern and corresponding data subset within the emergent patterns, a compressor for reducing the identified data subset and identified pattern to a compressed signature, and a repository for storing the streamed input data with the compressed signature and without the identified data subset; the data subset can be rebuilt if necessary using the compressed signature.
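A toy sketch of the pattern-signature idea at the scale of a few integers rather than a live sensor stream (the function names are made up):

```python
# Toy sketch: find the most frequent fixed-width pattern in a stream, keep a
# compressed signature for it, and store the stream with the repeated subset
# replaced by a marker that the signature can rebuild.
from collections import Counter

def find_repeating_pattern(stream, width=3):
    windows = [tuple(stream[i:i + width]) for i in range(len(stream) - width + 1)]
    pattern, count = Counter(windows).most_common(1)[0]
    return pattern, count

def compress(stream, pattern):
    out, i, w = [], 0, len(pattern)
    while i < len(stream):
        if tuple(stream[i:i + w]) == pattern:
            out.append("<SIG>")  # marker; the signature rebuilds this subset
            i += w
        else:
            out.append(stream[i])
            i += 1
    return out

stream = [1, 2, 3, 9, 1, 2, 3, 7, 1, 2, 3]
pattern, count = find_repeating_pattern(stream)
signature = {"pattern": pattern, "count": count}  # the compressed signature
print(signature)                  # {'pattern': (1, 2, 3), 'count': 3}
print(compress(stream, pattern))  # ['<SIG>', 9, '<SIG>', 7, '<SIG>']
```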

IoT Big Data Cloud

US20130227569 illustrates a system that can gather data from thousands of IoT sensors/devices and analyze the data in the cloud without a massive investment in server and big data analytics infrastructure. The cloud-based IoT big data system provides a virtual IoT sensors/devices cloud as an Infrastructure as a Service (IaaS) and a service cloud as a Software as a Service (SaaS), to provide a flexible and scalable system. The IaaS provides flexibility by handling heterogeneous IoT sensors/devices. The SaaS provides scalability by relieving end users of computational overheads and enabling on-demand sharing of IoT sensors/devices data with requesting end users. The SaaS also relieves end users from specifying IoT sensors/devices characteristics, locating physical IoT sensors/devices, and provisioning for the physical IoT sensors/devices. The end user, via a device (e.g., smartphone), requests and receives services provided by the system.

IoT Big Data Analytics

US20150179079 illustrates a system for real-time monitoring of a patient’s cognitive and motor response to a stimulus. The big data analysis of the massive data obtained by IoT healthcare/medical devices can provide many value-added healthcare services. US20150186972 illustrates a big data analytics system for business IoT applications. Business IoT devices can collect a large amount of data regarding products, product attributes, prices, and price attributes. To be understood by a person, this large amount of data and analytic output must be summarized, personalized, and organized in relevant terms. The summarization and personalization of such a large and complex set of data presents challenges in the selection and refinement of information, as well as in the identification of patterns and the arrangement of information in a user interface. The big data analytics system provides a user interface to summarize and personalize a large amount of price and product information, to identify patterns therein, and to generate recommendations in relation to the information.

Artificial Intelligence for IoT

Artificial Intelligence (AI) is essential to providing value-added IoT services by finding the patterns, correlations, and anomalies in user behavior that enable autonomous, context-aware actions by the IoT system surrounding the user. US20150039105 illustrates a smart home intelligence system that exploits AI to fulfill the special needs of each family member. US20140073486 illustrates a heart rate monitoring system in which AI determines the best type of sensor to use at a given time, based on the level of motion (e.g., via an accelerometer) and whether the user is asleep (e.g., based on movement input, skin temperature, and heart rate). US20140108307 illustrates the exploitation of AI in connected car applications: based on profile information and/or contextual information, the AI system provides suggestions to the driver. US20140340236 illustrates an AI application for securing distributed power distribution networks in IoT smart grids.

IoT+ Big Data + Cloud + AI Integration

US20150227118 illustrates an IoT cloud big data AI system for facilitating automatic control of smart home devices based on past device behavior, current device events, sensor data, and server-sourced data. Cloud-based big data analytics, accessible via a server system, analyzes data associated with persons or buildings in a geographic region, such as local news and weather information, as well as data pertaining to appliances within the geographic region (a neighborhood, zip code, and so on). The analyzed data is used to develop rules that control smart home devices automatically.

The automatic control of smart home devices enables various benefits, such as triggering lights to automatically turn on when a user enters a particular room at a particular time; activating a sprinkler system when server-side data indicates that a fire is nearby; automatically turning on a heater in advance of a home owner’s return at a particular time when the home temperature is below a predetermined level; turning off a sound system and lights in various rooms after data indicates that a user is preparing to sleep; turning off lower-priority devices that may conflict with higher-priority devices; and so on.

Cloud-based big data analytics also can be used to make predictions about future device usage, device behavior, and/or user behavior by exploiting AI. These predictions can then be used to generate control rules. A prediction can be derived by comparing collected data with a sample table of data to determine whether a correlation exists between the two; the prediction is then generated based on that correlation. A prediction can also be based on the frequency of occurrence of an instance of data in the collected data (and timing information associated with occurrences of the instances of data), which is used to generate a probability estimate; the probability estimate is then employed to determine the prediction.
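A minimal sketch of that frequency-based probability estimate, with made-up smart home observations:

```python
# Minimal sketch: turn the frequency of an event in a given context into a
# probability estimate that can back an automatic control rule. Data invented.
from collections import Counter

# (context, event) observations collected from smart home devices
observations = [
    ("weekday_18h", "lights_on"), ("weekday_18h", "lights_on"),
    ("weekday_18h", "lights_off"), ("weekend_18h", "lights_on"),
]

def probability(context, event, data):
    in_context = [e for c, e in data if c == context]
    if not in_context:
        return 0.0
    return Counter(in_context)[event] / len(in_context)

p = probability("weekday_18h", "lights_on", observations)
print(f"P(lights_on | weekday_18h) = {p:.2f}")  # 0.67
if p > 0.6:
    print("rule: turn the lights on at 18:00 on weekdays")
```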



PCB Design Requires Both Speed and Accuracy of SI/PI Analysis
by Tom Dillinger on 04-04-2016 at 8:00 am

The prevailing industry trends are clear: (1) PCB and die package designs are becoming more complex, across both mobile and high-performance applications; (2) communication interface performance between chips (and their related protocols) is increasingly demanding to verify; (3) signal integrity and power integrity issues are more intricate (e.g., the impact of power distribution noise on nearby signal integrity); and significantly, (4) the design resources with detailed SI and PI expertise are very limited. Project schedules are often adversely impacted by both the available bandwidth of the SI/PI specialists and the long iterative loop between board design, model extraction, SI simulation, and feedback to the physical designers.

The industry requires an integrated design environment, where SI/PI analysis can be launched easily, run quickly, and provide accurate results back to the designer. Although perhaps obvious, the same fast/accurate requirement applies to design rule checking for manufacturability and EMI/EMC compliance.

I recently had the opportunity to speak with Dave Wiens, Business Development Manager, and Dave Kohlmeier, HyperLynx Product Line Director, at Mentor Graphics, on how the HyperLynx development team is addressing these challenges. Indeed, they were excited to convey features in the latest HyperLynx release that offer a significant productivity boost to designers. Here are some of the highlights of our discussion.

Performance
Signal integrity verification consists of setup, simulation runtime, and post-results analysis. Setup involves generation of models (e.g., S-parameters for non-uniform 3D regions) for SI simulation, and is often a time-consuming step.

For this release, the HyperLynx team incorporated unique features to accelerate tool performance — i.e., advanced pattern matching and cross-section caching — as depicted in the figure below:

The result is that (thousands of) cached structures enable extensive reuse during model build. Performance is further improved through multi-threaded execution.

And, a key feature is the automatic definition of sections of the design model to be directed to specific integrated solvers — e.g., structures that require 3D full-wave analysis are identified and detailed models generated. The judicious application of 2.5D and 3D engines to their respective geometries improves setup performance dramatically.
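A hypothetical sketch of how cross-section caching and solver dispatch fit together; the solver calls are stubs, not HyperLynx’s actual API:

```python
# Hypothetical sketch: key each structure by its cross-section so repeated
# geometry reuses a cached model, and route 3D-critical structures (e.g.,
# differential vias) to a full-wave solver, the rest to a 2.5D engine.
from functools import lru_cache

@lru_cache(maxsize=None)
def build_model(cross_section_key, needs_3d):
    # the expensive field-solver run happens once per unique cross-section
    if needs_3d:
        return f"3D full-wave model[{cross_section_key}]"
    return f"2.5D model[{cross_section_key}]"

structures = [
    ("microstrip_w5_h3", False),
    ("diff_via_pair_a", True),
    ("microstrip_w5_h3", False),  # identical cross-section: cache hit
]
models = [build_model(key, full_wave) for key, full_wave in structures]
print(models)
print(build_model.cache_info())  # hits=1: the repeated section was reused
```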

The tool environment also includes the familiar HyperLynx design wizards. In Dave K.’s words, “The wizards embed domain knowledge to conduct a small interview with the designer to identify model details for analysis, such as the DDRx Wizards. These applications provide both an easy-to-understand design reference, and an overall boost to post-results analysis productivity.”

Accuracy
Performance improvements would be of little value if model simulation accuracy were compromised.

As mentioned above, 3D full wave field solver methods are applied to appropriate structures, such as differential vias. The ports introduced by this model partitioning are managed automatically, when re-constructing the full model. Full-wave solvers utilize excitation sources and reference planes to analyze the structure, for calculation of EM fields. As these sources/planes do not represent the exact field profile at the ports, there is a source of error. Calibration “de-embedding” of port discontinuity errors from this partitioning is crucial to model accuracy, and is also automatic.

Resulting Touchstone S-parameter files — with potentially very many ports — are analyzed for quality (e.g., passivity, causality, reciprocity). The Touchstone Viewer also provides visual feedback of model characteristics, such as insertion loss deviation, return loss, insertion loss to crosstalk ratio, etc.
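Two of those quality checks are easy to state concretely. Here is a minimal numpy sketch for a single frequency point; a real Touchstone file would be loaded with a tool such as scikit-rf, and the matrix below is made up:

```python
# Minimal sketch of two S-parameter quality checks at one frequency point:
# passivity (largest singular value of S <= 1, so the network creates no
# energy) and reciprocity (S equals its transpose). Values are invented.
import numpy as np

def is_passive(S, tol=1e-9):
    return np.linalg.svd(S, compute_uv=False).max() <= 1.0 + tol

def is_reciprocal(S, tol=1e-9):
    return np.allclose(S, S.T, atol=tol)

# a 2-port S-matrix at a single frequency (illustrative values)
S = np.array([[0.1 + 0.05j, 0.8 - 0.10j],
              [0.8 - 0.10j, 0.1 + 0.05j]])
print("passive:   ", is_passive(S))     # True
print("reciprocal:", is_reciprocal(S))  # True
```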

Algorithmic support is included for the most advanced trace surface roughness and frequency-dependent dielectric models.

Optimum geometrical meshing of the structures also ensures the appropriate accuracy/runtime tradeoff.

HyperLynx supports the recent power-aware IBIS model standard, to accurately reflect the impact on signal integrity due to coupling through the power distribution network (PDN) — e.g., simultaneous switching noise on a parallel interface, and via-to-via coupling through the PDN.

HyperLynx performance and accuracy are industry-leading for analysis of DC drop in the PDN, and calculation of the PDN frequency-dependent impedance.

In addition, the latest HyperLynx release environment includes a new capability for advanced SerDes signal integrity analysis.

An emerging approach to validation of channel performance is the Channel Operating Margin, or COM, calculation. (This is the de facto analysis methodology for the 100 Gigabit Ethernet standard.) For SerDes protocols that support the COM method, such as 100GbE, simulation support is provided in HyperLynx to generate the channel pass/fail margin result.

Mentor has been a leading proponent of the definition of COM measurement techniques — for more information on COM and recent recommendations for improving the correlation of COM simulation to BER measurement, please refer to the following article.

In addition to the SI/PI features mentioned above, HyperLynx DRC performs PCB full-board design rules checks, detecting both irregular physical topologies and EMI/EMC structures of concern, such as interrupted signal return paths.

Usability
HyperLynx is an integrated suite of analysis applications, available in a single user environment, with a unified data model.

Both Dave K. and Dave W. emphasized that the HyperLynx team has focused on usability (and a quick learning ramp), while not compromising accuracy. The theme of our discussion was “the fastest time to accurate results”.

I think these guys are on to something. From an overall project perspective, the SI/PI specialists in the organization are inevitably stretched thin, often supporting multiple board and package designs. An integrated environment for physical designers, with performance optimizations to speed closure while maintaining highest accuracy, is definitely needed.

For more general information on HyperLynx, please follow this link.

-chipguy



CMOS Radio Frequency Image Sensor Process
by Students@olemiss.edu on 04-03-2016 at 4:00 pm

Image sensing with radio frequency (RF) in CMOS combines light-sensing chips with wireless communication. We were first drawn to the article “RF Design Issues and Challenges in a CMOS Image Sensor Process” because of the circuit design process required to make a functioning radio frequency transceiver. Radio frequency transceivers are electronic devices that receive and demodulate radio frequency signals, and then modulate and transmit new signals. Using these radio frequency signals is a way to limit local interference and noise. We thought it would be an interesting concept to grasp for our future benefit if we ever decide to design and implement these communication devices in CMOS image sensors, or CIS.

The first type of CMOS pixel circuits created were called passive pixels. They had good fill factors but suffered from very poor signal-to-noise performance. [2] Modern CMOS designs mostly use active pixels, which put an amplifier in each pixel, typically constructed with three transistors. [2] The more transistors added to a CMOS image sensor design, the less noise interference, which is a good thing, especially for a CIS with RF, since the RF will produce noise interference.

A CIS is a relatively simple circuit built around photodetectors, which are very light sensitive: when activated, each one records how much light it is exposed to, and these readings are collected into pixels. Typically, each circuit forms four pixels: two green, one blue, and one red. These individual sensor circuits are replicated until the target pixel resolution is met. CIS is very popular in many devices these days for many reasons; the biggest is perhaps how easy the sensors are to produce, which makes them cheap. What makes CIS so cheap is that many manufacturers produce CMOS chips at very affordable prices. The other option for digital image capture is the CCD, or charge-coupled device.
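The four-pixel layout described above is the familiar Bayer mosaic. A small numpy sketch of how the 2x2 cell tiles out to a sensor’s resolution:

```python
# Small sketch of a Bayer mosaic: each 2x2 cell holds one red, two green,
# and one blue sample, tiled to the sensor's resolution.
import numpy as np

def bayer_mosaic(rows, cols):
    """Return color labels for an RGGB Bayer pattern (rows, cols even)."""
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    return np.tile(tile, (rows // 2, cols // 2))

print(bayer_mosaic(4, 4))
# [['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']
#  ['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']]
```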

A CCD is cheaper to make individually than CMOS, but at large scale CMOS CIS prove much cheaper, thanks to the large number of foundries that produce various CMOS circuits. Furthermore, beyond being very economical, they are capable of being very smart. What gives them this capability is that, since the circuit is almost entirely CMOS, calculations can be made without an intense data path and a processor carrying out corrections on each individual CIS circuit; the image can be corrected at each individual pixel and then passed on, which is beneficial for many reasons, such as low processing consumption, more accurate pixel correction, and quicker image processing.

The article selected makes some notable points relatable to what we have already talked about in class. The circuit being described is 0.18um; in class, the transistors are sized at 0.22um. The 0.22um size used in class, at least for a few examples, was never defined as a required standard. It is impressive that the circuits designed in the article are that small. Furthermore, the circuits in class are strictly transistors, but here inductors were involved in the LVS. This is interesting because lecture gave the impression that the CMOS would be one circuit and components such as the inductors in this article would be a separate portion. That is not how this circuit is designed; instead, the foundry needed to put the inductors in the CMOS portion.

This suggests that if this same circuit had an overheating issue and needed to be monitored constantly, a simple thermistor could be included at the foundry, and a CMOS RF image sensor with self-monitoring temperature would result. Another point connecting the article to class is that, in class, everything is about achieving maximum efficiency from the circuits. An interesting thing here is that they have two goals: wireless communication, and light detection with the photodetectors. If they focused on running the photodetectors as efficiently as possible, the inductors would either not perform or perform poorly, so instead they added a fifth metal layer, 2um thick, at the foundry so that the inductor performs at its absolute best, assuring that the wireless communication is solid. In the process of gaining maximum performance for wireless communication, the transistors end up with a lower dopant level, limiting their performance below what it could have been.

The article extensively covers one of the topics from the CMOS class: power distribution and noise in these circuits. It is always wise to control the amount of power that is consumed rather than carelessly drain it. Hence, you must be able to put the energy and power equations into perspective. To find energy, the first step is to define instantaneous power, which is given by P(t) = I(t)V(t).

Energy is the accumulation of power consumption over time, as defined by equation (1) below. Average power is the energy consumed by a circuit divided by the given period of time; it can be found using equation (2) below.
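Assuming the standard definitions the text is paraphrasing, equations (1) and (2) take the following form, where T is the measurement interval:

```latex
E = \int_{0}^{T} P(t)\,dt \qquad \text{(1)}
\qquad\qquad
P_{\text{avg}} = \frac{E}{T} = \frac{1}{T}\int_{0}^{T} P(t)\,dt \qquad \text{(2)}
```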

The article mentions that the chips must be low power due to the small die size. If the chip already has minimal power and is also receiving radio frequencies at extraordinarily high frequencies, it seems that the level of noise in the chip would become a major issue, perhaps severe enough to hamper its communication capability or the behavior of the transistors. Capacitors are always a good way to reduce noise interference when dealing with AC circuits.

Granted, the diagram shows a capacitor that filters noise when it leads to ground, but this capacitor does not lead directly to ground. These topics bring up the long-running debate of CCD vs. CMOS. Both sensors were created around the same time, but CCD was preferred at the beginning: CMOS required smaller features that could not be attained at the time, and CCD operated at faster speeds. Later, better technology was created that made CMOS sensors practical. It was also realized that CCD sensors have higher bandwidth and more noise.

“Students in The University of Mississippi Electrical Engineering’s Digital CMOS/VLSI Design course researched a contemporary issue and wrote a blog article about their findings for presentation on SemiWiki. Your feedback is greatly appreciated.”

References:
[1] L. Truong, D. Zhang, T. Leitner, and B. Mansoorian, “RF Design Issues and Challenges in a CMOS Image Sensor Process,” Image Sensors. [Online]. Available at: http://www.imagesensors.org/past workshops/2013 workshop/2013 papers/07-09_069-truong_paper.pdf. [Accessed: 11-Mar-2016].

[2]“CMOS Fundamentals,” CMOS Fundamentals. [Online]. Available at: http://www.siliconimaging.com/cmos_fundamentals.htm. [Accessed: 11-Mar-2016].

[3] Matthew Morrison. (2016, March 1). CMOS/VLSI-Lecture 11 [Online]. Available at:
https://www.youtube.com/watch?v=V4pvT8FjXEA [Accessed: 11-Mar-2016].

[4]“Teledyne DALSA – A Teledyne Technologies Company,” CCD vs. CMOS. [Online]. Available at: https://www.teledynedalsa.com/imaging/knowledge-center/appnotes/ccd-vs-cmos/. [Accessed: 11-Mar-2016].

[5]“Theory & Definitions,” Voltage, Current, Power & Energy: Definitions. [Online]. Available at: http://meettechniek.info/measurement/theory-definitions.html. [Accessed: 11-Mar-2016].

[6]Mattias Myrman, “De-aggregating and dispersing dry medicament powder into air,” [Online]. U.S. Patent WO 2003086517 A1, October 23, 2003. [Accessed: 11-Mar-2016].