Electrothermal Analysis of an IC for Automotive Use
by Daniel Payne on 05-16-2017 at 12:00 pm

Automotive ICs have to operate in a very demanding environment in terms of both temperature and voltage range, while also withstanding g-forces and staying sealed from the elements. Not an easy design challenge. On many consumer ICs the output drive currents on the IO pins are measured in mA; in automotive, if you want your IC to drive something like a DC motor, you can expect values in the amp range instead, a big difference. Engineers at an automotive IC design group in Toshiba recently had the challenge of designing a single-channel brushed DC motor driver chip with these specifications (a quick power estimate follows the list):

  • PWM mode: H-bridge driver
  • Output current: 5 A
  • Low Ron: < 0.45 ohms
  • Operating voltage: 4.5 V to 28 V
  • Operating temperature: -40°C to 125°C
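To see why thermal behavior matters at these levels, here is a quick back-of-the-envelope estimate (my own, not from the article) of the conduction loss in a single output switch, using the output-current and Ron figures from the spec list above:

```python
# Rough worst-case conduction loss in one output DMOS switch, ignoring
# switching losses and duty cycle; numbers are the spec limits quoted above.
I_out = 5.0      # A, specified output current
R_on = 0.45      # ohm, specified maximum on-resistance

P_conduction = I_out ** 2 * R_on   # P = I^2 * R
print(f"Worst-case conduction loss per switch: {P_conduction:.2f} W")
# -> 11.25 W, enough localized dissipation that on-die heating cannot be ignored
```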

Here’s a block diagram of the Toshiba TB9051FTG chip:

When these large output driver transistors turn on and start to draw current, the temperature on the chip next to these transistors begins to rise, which in turn affects transistor performance. If your circuit simulator doesn’t take into account the thermal effects of transistors driving large currents, then your simulation results will be overly optimistic and probably won’t meet your tight specifications. Toshiba uses DMOS (Double-diffused Metal Oxide Semiconductor) transistors for the high-drive output pins, and the commercial SPICE simulator that supports fully-coupled electrothermal simulation is Eldo, from EDA vendor Mentor Graphics (a Siemens business).

Having a simulator that can simultaneously account for electrical and thermal properties is really mandatory for the design of this type of automotive chip. Here’s a block diagram showing the EDA tool flow for electrothermal analysis:

Inputs to the SPICE simulator are an extracted layout netlist, a Verilog-A netlist and a thermal netlist. The Eldo simulator then produces a transient analysis showing current values and device temperature as a function of time. On this chip there is a thermal unit which controls the output DMOS transistors, as shown below:

The blue arrows in the block diagram are electrical data flow, while the red arrow is the thermal flow. The Eldo circuit simulator solves the electrical and thermal equations simultaneously, giving you results that are both accurate and fast. Other approaches that use a relaxation technique to couple the electrical and thermal solutions are far less efficient and have much longer run times. So with the concurrent solve in Eldo you get transient analysis results that are accurate and fast for both device temperature and drive current:

Looking at the current drive value in red we see that as the DMOS transistor turns on in the middle of the waveform there is a large increase in current, but then as the transistor heats up this current drive level tapers off to a smaller value than the peak value.
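To make this feedback loop concrete, here is a deliberately simplified Python sketch (my own toy model with assumed parameter values, not Eldo's coupled solver) in which drive current degrades as the junction heats up and junction temperature follows a single thermal RC fed by the I²·Ron dissipation; it reproduces the same taper seen in the waveform above:

```python
# Toy electrothermal loop: current heats the die, heat reduces the current.
# All parameter values below are assumptions for illustration only.
T_amb = 25.0     # deg C, ambient temperature
R_th = 8.0       # K/W, assumed junction-to-ambient thermal resistance
tau_th = 2e-3    # s, assumed thermal time constant
I_cold = 5.0     # A, drive current at ambient temperature
R_on = 0.45      # ohm, on-resistance
alpha = 0.004    # 1/K, assumed temperature coefficient of drive strength

dt, t_end = 1e-5, 20e-3
T = T_amb
for _ in range(int(t_end / dt)):
    I = I_cold / (1.0 + alpha * (T - T_amb))     # current tapers as the die heats up
    P = I * I * R_on                             # instantaneous dissipation
    T += dt * (T_amb + P * R_th - T) / tau_th    # first-order thermal response
print(f"steady state: T = {T:.1f} C, I = {I:.2f} A (down from the {I_cold} A peak)")
```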

Related blog – Mentor DefectSim Seen as Breakthrough for AMS Test

Electrothermal Example

To get a grasp of how an electrothermal simulation works with Eldo let’s look at a simplified example with just three transistors that are thermally coupled:

The DMOS output transistors, XMH3 and XMH4, are shown on the left and are controlled by pulse width modulation (PWM) signals; a current-sense transistor, XM_ISD1, controls the input signal of the power DMOS devices. Simulating this example using an analog solver without any thermal coupling produces a DMOS transistor temperature (blue) that doesn’t change even as the currents toggle (pink and red):


Simulating again with self-heating effects turned ON produces a very different result, where we can see the transistor temperatures moving dynamically (blue and yellow) as the currents toggle (pink and red):

Notice that the current shown in the middle curve (pink) rises to a peak and then tapers off to a lower value, just what we’d expect to see as the higher temperatures begin to reduce the drive current levels.

Simulated Versus Silicon Measurements

The ultimate accuracy test of any SPICE circuit simulator is measuring silicon on the bench and comparing the results against the simulated values. Toshiba engineers did this comparison and found that Eldo produced current results within 1.5% of measurements taken with a Tektronix MSO4054 oscilloscope, over two time windows: 0 to 4 ms (left) and 86 to 90 ms (right).

So now we know that Eldo does accurate electrothermal simulations, but what about the speed of circuit simulation? Mentor offers two versions of its circuit simulator, and on this particular chip, simulating 90 ms of transient time, the electrothermal run times compare as follows (speed-up ratios are worked out after the list):

  • Eldo with 1 core, 38h 48m
  • Eldo Premier with 1 core, 11h 50m
  • Eldo with 8 cores, 16h 38m
  • Eldo Premier with 8 cores, 7h 31m
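For reference, the relative speed-ups implied by those run times work out as follows (simple arithmetic on the numbers quoted above):

```python
# Speed-up ratios computed from the quoted run times.
runs = {
    "Eldo, 1 core":          38 * 60 + 48,   # minutes
    "Eldo Premier, 1 core":  11 * 60 + 50,
    "Eldo, 8 cores":         16 * 60 + 38,
    "Eldo Premier, 8 cores":  7 * 60 + 31,
}
baseline = runs["Eldo, 1 core"]
for name, minutes in runs.items():
    print(f"{name:>22}: {minutes / 60:5.1f} h  ({baseline / minutes:.1f}x vs single-core Eldo)")
# Eldo Premier on 1 core is ~3.3x faster than Eldo on 1 core,
# and Eldo Premier on 8 cores is ~5.2x faster.
```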

Summary

If you need accurate electrothermal simulation results, use Eldo; for faster results try Eldo Premier, and for the fastest run times use Eldo Premier on 8 cores. The chip designers at Toshiba have shown that you can expect simulated results to be within 1.5% of measured silicon, so Eldo has a very accurate electrothermal solver that is ready to go on any circuit with high current drive and localized heating within the IC.

White Paper

There’s a 10-page white paper on this topic online that requires a brief registration form.


High Frequency Trading and EDA
by Bernard Murphy on 05-16-2017 at 7:00 am

Pop quiz – name an event at which an EDA vendor would be unlikely to exhibit. How about The Trading Show in Chicago, later this month? That’s trading as in markets, high-frequency trading, blockchain and all that other trading-centric financial technology. This is another market, like cloud, where performance is everything and returns easily justify investments in hardware design.


Of course, what these people are aiming to build is not smartphones or routers in the very latest semiconductor processes. They’re much more interested in personalized design – specialized applications (in this case high-frequency trading – HFT) built for proprietary advantage and never intended to be offered in the open market. And like their compatriots in the cloud, they’re very attracted to FPGA-based design for its flexibility, the rapidly increasing capability in those platforms and (comparatively) low development cost.

In automated trading, latency in trade information can have a huge impact on money made or lost. Simply aggregating ticker feeds from as many as 200 markets, each supporting its own format, was a task historically handled by software but with some latency in managing and massaging those inputs. Replacing that software with FPGA-based feed management can reduce latency in this stage by 5-10X. As in the cloud, there’s still a balance between the flexibility of software and the performance of hardware; some processing remains on CPUs, while more stable functions which need to be as fast as possible move to FPGAs. Solutions of this type are already in production use.

It’s not just about quickly consolidating ticker data. The trades themselves must be decided much faster than a human trader could respond. Deciding which trades to make requires intelligence, unsurprisingly using machine learning (ML) methods. That could be an opportunity for FPGAs, especially for ML algorithms with high levels of parallelism that benefit more from FPGAs than from multi-core CPUs, but lack the neural-net characteristics which fit so well to GPUs. However, I couldn’t find anything suggesting FPGAs are being used in this way today.

Risk management and order execution are other aspects of automated trading where FPGAs can help. While ML may suggest a trade, risk management monitors these suggestions in the background to ensure they do not fall outside trader-advised windows. Order execution, consolidating planned trades and fanning them out to the relevant markets to execute buys and sells, is yet another area where FPGAs naturally offer advantages in parallelism (sort of the inverse of the ticker aggregation task). In both these cases, FPGA-based solutions are in production today.


So now back to my opening question – why would an EDA vendor be exhibiting at a trading conference? FPGAs are playing a bigger role in automated trading, leveraging multi-billions of dollars in transactions. But these folks are traders. They have lots of mathematicians and lots of programmers, but they don’t have a lot of FPGA designers. Yet their differentiation hinges on how well their hardware performs – and that’s a moving target. Also, suppliers emerging to service these requirements, for smaller traders and for retail companies serving day-traders, have the same needs. Which means it has become very important to be able to design, and often re-design, these systems to squeeze out yet more advantage as competitors advance.

None of this has been lost on Aldec who, as an FPGA design verification and prototyping supplier, seem especially well positioned to take advantage of this opportunity (they apparently already have several customers in this space). They are exhibiting at the Trading Show in Chicago this year, and Louie de Luna (Marketing Director at Aldec) was interviewed by the group that puts on the show. I’ll touch on just a couple of points from that interview.


One that may trigger apoplexy in the ASIC verification community is momentum behind Python/cocotb for building testbenches (see here for background and a demo), rather than UVM. HFT engineers apparently don’t care about our sacred cows. They’re more than happy to switch to any solution that gets them to completed verification faster. Louie cited one example where a UVM approach took 5k lines and 30 days to get to a result that missed some bugs, while the Python/cocotb approach completed with 500 lines of code in 1 day and caught those bugs. I’m sure this won’t always be true, and it would probably not be generally true for large designs, but it is an interesting stat in its own right. No one is suggesting UVM be replaced by this solution for mainstream ASIC design. At the same time, UVM may have already lost the battle in HFT, and that may portend similar shifts in other personalized design flows.
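For readers who haven't seen cocotb, here is a minimal sketch of what such a testbench looks like (my own example with a hypothetical DUT and signal names, using recent cocotb async syntax; it is not the testbench from the case Louie cited):

```python
# Minimal cocotb smoke test: drive one order word into a hypothetical
# pass-through block and check that it appears on the output shortly after.
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge


@cocotb.test()
async def order_passthrough_smoke_test(dut):
    cocotb.start_soon(Clock(dut.clk, 4, units="ns").start())  # 250 MHz clock

    dut.rst.value = 1                      # apply and release reset
    await RisingEdge(dut.clk)
    dut.rst.value = 0

    dut.order_in.value = 0xDEADBEEF        # present one order word
    dut.valid_in.value = 1
    await RisingEdge(dut.clk)
    dut.valid_in.value = 0

    await RisingEdge(dut.clk)              # assumed one-cycle pipeline latency
    assert dut.order_out.value == 0xDEADBEEF, "order word was not passed through"
```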

Louie also talked about the importance of prototyping boards being available to test designs. He stressed that different algorithms often need different resources, perhaps more DSP primitives for floating-point-intensive calculations, or multiple small memories for highly-parallelized transaction processing. It is difficult to fit all these into one solution so Aldec offers a family of boards based on Xilinx Virtex-7 and UltraScale in multiple configurations, along with the software to manage them.

As an editorial sidebar, I should add that I am becoming quite impressed by this company. They’ve been around for over 30 years and most of us have probably dismissed them as a minor player in a space dominated by the EDA giants and others close to the core of “real” EDA (I’m guilty too). But something has changed at Aldec. I have no idea what triggered this but they seem to be getting a lot more aggressive, especially in FPGA-based design and even more in supporting applications where FPGAs themselves are growing rapidly. Relevance to high-frequency trading is just one recent example. Aldec are reinventing themselves and arguably, by complementing mainstream (and some not-so-mainstream) EDA solutions with a range of application-specific prototyping (perhaps even beyond prototyping) boards, they may be redefining the span of EDA.

You can read the interview with Louie de Luna in full HERE and the latest board announcement for HFT applications HERE.


Building Better Digital Content Protection
by Tom Simon on 05-15-2017 at 12:00 pm

Back in college my roommates figured out that the TV cable coax was still connected to our apartment. As a result, I was able to watch the Richard Pryor movie Silver Streak about 30 times without a cable box, although the screen was partially jumbled by the simple content protection used back then; watching it at all was only possible by aggressively adjusting the vertical and horizontal hold knobs. (Some readers may even be asking what those knobs were.) Such easy piracy has gone by the wayside in the intervening years, and that’s probably a good thing. Digital media is a large market worth huge sums of money.

The dilemma today is that consumers want seamless access to media, which absolutely must be protected, on a wide range of devices, from 4K video displays to handheld mobile phones and everything in between.

The most widely used system for encrypting and securing content is the High-bandwidth Digital Content Protection (HDCP) standard, which was developed by Intel. Its use is managed by the Intel subsidiary Digital Content Protection, which licenses the technology to producers of secured devices.

The system relies on a set of forty 56-bit keys that is unique to each secured product. These keys must not be released publicly, and there is a list of compromised devices whose keys have been revoked. The system also relies on random number generation. Each frame is encrypted with a unique key that results from an algorithm using a random subset of the 40 device keys. The source and sink devices stay in sync by sharing Key Selection Vectors (KSVs).
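As a rough illustration of the key-selection idea (a simplified sketch of the HDCP 1.x scheme with made-up keys; not a conforming or secure implementation):

```python
import secrets

# Each device holds forty secret 56-bit keys. A Key Selection Vector (KSV) is a
# 40-bit value with exactly twenty bits set. A shared secret is formed by
# summing, modulo 2^56, the device keys selected by the peer's KSV bits.
DEVICE_KEYS = [secrets.randbits(56) for _ in range(40)]   # stand-in secret keys

def shared_secret(device_keys, peer_ksv):
    assert bin(peer_ksv).count("1") == 20, "a valid KSV has exactly 20 bits set"
    total = 0
    for bit in range(40):
        if (peer_ksv >> bit) & 1:
            total = (total + device_keys[bit]) % (1 << 56)
    return total

example_ksv = int("1010" * 10, 2)   # 40 bits, 20 of them set
print(hex(shared_secret(DEVICE_KEYS, example_ksv)))
```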

HDCP works with DisplayPort, DVI, HDMI, GVIF and UDI. DisplayPort has become a popular interface for transporting video to display devices. Its confluence with Thunderbolt connectors using Mini DisplayPort helped propagate it widely. DisplayPort differs from the familiar HDMI in that it is a more structured protocol and offers more flexibility and interoperability. Indeed, DisplayPort can be converted to HDMI, DVI or a number of other formats at the connection to the monitor. Of course, HDMI is still probably the number one interface used for connecting display devices.

Nevertheless, DisplayPort will play an important role as video interfaces move from HDMI to USB Type-C. The perennial problem with laptops and even desktop computers is the proliferation of cables and connectors. It starts out simply enough – first comes the power cable, after which you connect your HDMI. Then comes a USB device like a mouse or external camera. Finally, often there is external storage for data backup, and so on. Soon there is a Gordian Knot of cables you must contend with.

USB Type-C combines power delivery (up to 100W) with high- and low-speed USB, and has provisions for using some of its high-speed pairs for other data streams. USB Type-C always provisions USB 2.0, but has four additional high-speed lane pairs that can carry other data. This is perfect for video. It means that USB Type-C can provide connections for most everything a laptop needs, plus sophisticated power management and charging. Indeed, my 2015 MacBook has a single lonely headphone jack and one USB Type-C connector – period.


Video over USB Type-C is handled with what is called DisplayPort Alternate Mode. Native DisplayPort is sent over some of the high speed pairs. The USB Type-C specification allows for automatic configuration of direction, data channel usage, etc.

There is a wave of new products that incorporate USB Type-C, including laptops and peripherals. This is bringing about a demand for HDCP Intellectual Property (IP) for integration into SoCs, which requires proper implementation of a laundry list of security features. Synopsys has published an article that looks at what is required. Devices need to do more than implement HDCP; they must do it properly to stay secure. For instance, if the device keys are compromised, the entire production run of an end product can have its HDCP license revoked. The video processing pipeline needs to be protected. There also needs to be a NIST SP800-90C compliant random number generator on board. And that is not all.


The Synopsys article covers present day requirements for content protection implementations, and it addresses how to accommodate potential future developments to make sure that products remain viable.

While we enjoy the convenience of high-quality video almost anywhere we want it, a lot of thought needs to go into developing the products that support this functionality. The Synopsys article does a good job of laying out all the elements. A copy of the article can be found here.


CEO Interview: Vincent Markus of Menta
by Daniel Nenni on 05-15-2017 at 7:00 am

What is Menta all about?
Menta was founded to add hardware-programmability within SoCs. We deliver FPGAs in hard IP form that can be readily embedded within an SoC to make certain hardware functions reconfigurable at-will, post-production. This enables customers to dynamically adapt to evolving standards, perform security updates, or even fix hardware bugs quickly and efficiently. All of this dramatically reduces the likelihood of respins, and extends the lifecycle of SoCs.

What do you think is most unique about Menta’s technology?
Probably the most unique aspect about our technology is that we use 100% ASIC standard cells, while all of our major competitors incorporate full-custom cells to various degrees. The impact of Menta’s early decision to use 100% standard-cells is that Menta eFPGAs not only can integrate seamlessly into any customer’s EDA design flow, but more importantly, can be straightforwardly ported to literally any standard-cell based CMOS technology within just 3-4 months. When using any custom cells, this is not the case.

Using standard-cells exclusively also guarantees maximizing yield, and ensuring speed and power consumption vs. the original specification. That is because we rely upon the tested, validated simulation models of the provider supplying the standard-cell library. In contrast, because all our major competitors use full-custom cells to various extents, they are forced to rely on their own created simulation models for those custom cells, which are not able to be tested/validated to the same extent as those coming from a standard cell library provider. So in summary, what is unique about our technology is how we offer faster portability across foundries/processes, and better yield and performance-reliability vs. the original specification.

Menta has been established for quite a while, with little visibility up to now – what happened during this time?
Menta was originally founded in 2007, but the work on eFPGA started two years earlier, at the LIRMM laboratory in Montpellier, France. Originally, Menta was an EDA company developing an FPGA array compiler, but then we saw more commercial opportunity in becoming a semiconductor IP company. Nevertheless, EDA is still in Menta’s DNA.

Fast-forwarding a few years, Menta then developed two eFPGA architectures that were used within several successful French-state and European-funded projects. For example, we developed an MRAM-based FPGA for the ANR project, and Menta’s eFPGA technology was also chosen by the European Defense Agency (EDA) for a project called EDA-SoC, involving several major European defense companies.

Afterwards, Menta continued to work closely with various customers and partners to optimize our eFPGA architecture in order to meet industrial and commercial requirements. That was the point at which I decided to invest in Menta through the FJ EN private equity firm.

FJ EN’s investment enabled Menta to develop its 3rd-generation technology in mid-2015, and Menta had its first commercial success shortly thereafter. Finally, in 2016, Menta released its latest 4th-generation technology with improved density and performance. The excellent response we received from customers all over the world caused us to significantly accelerate our business development and hiring.

As a final note, I should add that FJ EN most recently invested an additional US$ 5M in Menta.

How is Menta different from competitors?
Until our 3rd-generation “v3” technology was released, I would say we were really not much different from our competitors. Before v3, we talked with multiple prospects, and while everyone found our technology interesting and potentially useful, what they really needed was for our IP to be integratable, testable, and verifiable in the same manner as all the other standard-cell based logic IP they were using.

So we took certain radical engineering decisions. We decided to focus not only on maximizing density and performance, but also on maximizing flexibility, testability, and verifiability of the eFPGA IP. As a result, our IPs now not only use 100% ASIC standard-cells, but they also use flops instead of SRAM, to store the eFPGA configuration. That allows Menta to deliver IP in any technology node with whatever options, based on any requested standard cell library within 3-4 months. We also offer standard scan-chain DFT with 99.8% test coverage. This latter capability required a number of unique engineering breakthroughs which we patented.

We like to say the best eFPGA IP is like a piece of wood, almost useless, without the right RTL-to-bitstream software. Our programming software, Origami Programmer, is an advanced toolsuite with a carefully crafted GUI – although TCL can also be used for command-line eFPGA programming. Origami Programmer includes synthesis, embedding the Verific parser – so there is no need to buy an external tool. With each IP, we deliver dedicated timing files so that exact timing information can be extracted and used for timing-driven place & route. Origami Programmer not only outputs the bitstream file and the information for analysis, but also a simulation model with timing information to run formal verification and system-level simulations. To complement Origami Programmer, we also offer an innovative eFPGA planning/specification tool, Origami Designer. It allows our customers to determine the eFPGA resources they require to implement their RTL designs, not only in terms of LUTs, but also DSPs and embedded SRAM.

Tell us about your IP delivery model and why you decided to provide Hard IP.
The hard IP we provide is actually a soft IP that we harden for customers as a service, based on the technology process they select. The reason Menta wants to do the hardening is that the RTL mapping process on the eFPGA can be extremely timing-critical. To add all the post-routing timing information with the desired accuracy into the Origami Programmer libraries, we found the most reliable approach would be to deliver those timing files with each IP to the customer, rather than have the customer try to generate those themselves. We felt this was the safest and most reliable approach to ensure our customers would be successful.

Are Menta eFPGA IPs silicon proven?
Yes. We have already delivered our IP in STM 65nm and GLOBALFOUNDRIES 14LPP. We have taped out a test chip in TSMC 28HPC+, and will have validation boards ready in the first half of 2017 to ship to prospects and customers. In addition, we are now part of the GLOBALFOUNDRIES 32SOI and 14LPP IP catalogs, and are working with other major foundries to be included in their offerings.

What are the applications for eFPGA?
We have seen the earliest adopters in the Aerospace & Defense industry. However, we now see a lot of interest in automotive, for ADAS chips, and also in various parts of the IoT industry, such as using the eFPGA as a sensor hub. Menta’s eFPGA is well suited to the automotive industry, as using flops instead of SRAM to store the bitstream makes us immune to Single Event Upset (SEU) faults, and using 100% standard cells makes our IP much easier to certify along with the rest of the SoC.

In certain high-performance computing applications, eFPGA is the only solution capable of implementing parallel intensive deterministic computations with the required minimum clock-cycle latency and maximum throughput, or enabling adaptability of the system in the field within the required cost and size.

Hardware acceleration, whether for networking, image processing or HPC, is also an emerging application.

eFPGAs can also be used to reduce time-to-market by implementing evolving algorithms directly in hardware post-production, to cope with evolving standards, or even as a safety measure, allowing bugs to be corrected post-production. Why tolerate the cost and extra board space of adding a small FPGA next to your ASIC when you can have that functionality embedded within the ASIC itself? For high-volume products especially, the ROI is easy to see.

Finally, we are seeing a growing demand for programmability to implement security algorithms post production, to prevent security breaches at the production stage.

Can you share any success story with SemiWiki readers?
Late last year, one of the top three US Aerospace & Defense companies selected Menta to integrate an eFPGA inside their ASIC. We delivered the IP in less than four months, ahead of schedule, in GLOBALFOUNDRIES 14LPP. Everything worked the first time. As a result they decided to adopt our technology going forward, and we are working with them on several new projects.

What comes next for Menta?
First, we are continuing to evolve our eFPGA architecture to accommodate even larger arrays without sacrificing performance. Second, our Origami programming software will soon allow eFPGA speed to be improved in the field by up to 2x through software updates. Third, we are working on several partnerships to extend Menta’s ecosystem and provide our customers with ready-made applications. Finally, Menta is building an eFPGA embedded-DSP catalog to address customer needs for various applications. To accomplish all of this and support our growing customer commitments, we are aggressively hiring engineers and business developers all over the world.

Also Read:

CEO Interview: Alan Rogers of Analog Bits

CTO Interview: Jeff Galloway of Silicon Creations

CEO Interview: Srinath Anantharaman of ClioSoft


AI is the Catalyst of IoT!
by Ahmed Banafa on 05-14-2017 at 7:00 am

Businesses across the world are rapidly leveraging the Internet-of-Things (#IoT) to create new products and services that are opening up new business opportunities and creating new business models. The resulting transformation is ushering in a new era of how companies run their operations and engage with customers. However, tapping into the IoT is only part of the story [6].

For companies to realize the full potential of IoT enablement, they need to combine IoT with rapidly-advancing Artificial Intelligence (#AI) technologies, which enable ‘smart machines’ to simulate intelligent behavior and make well-informed decisions with little or no human intervention [6].

Artificial Intelligence (AI) and the Internet of Things (IoT) are terms that project futuristic, sci-fi imagery; both have been identified as drivers of business disruption in 2017. But what do these terms really mean, and how are they related? Let’s start by defining both terms:

IoT is defined as a system of interrelated Physical Objects, Sensors, Actuators, Virtual Objects, People, Services, Platforms, and Networks [3] that have separate identifiers and an ability to transfer data independently. Practical examples of #IoT application today include precision agriculture, remote patient monitoring, and driverless cars. Simply put, IoT is the network of “things” that collects and exchanges information from the environment [7].

IoT is sometimes referred to by industry insiders as the driver of the fourth Industrial Revolution (Industry 4.0) and has triggered technological changes that span a wide range of fields. Gartner forecasted there would be 20.8 billion connected things in use worldwide by 2020, but more recent predictions put the 2020 figure at over 50 billion devices [4]. Various other reports have predicted huge growth in a variety of industries, such as estimating healthcare IoT to be worth $117 billion by 2020 and forecasting 250 million connected vehicles on the road by the same year. IoT developments bring exciting opportunities to make our personal lives easier as well as improving efficiency, productivity, and safety for many businesses [2].

AI, on the other hand, is the engine or the “brain” that will enable analytics and decision making from the data collected by IoT. In other words, IoT collects the data and AI processes this data in order to make sense of it. You can see these systems working together at a personal level in devices like fitness trackers and Google Home, Amazon’s Alexa, and Apple’s Siri [1].

With more connected devices comes more data that has the potential to provide amazing insights for businesses but presents a new challenge for how to analyze it all. Collecting this data benefits no one unless there is a way to understand it all. This is where AI comes in. Making sense of huge amounts of data is a perfect application for pure AI.

By applying the analytic capabilities of AI to data collected by IoT, companies can identify and understand patterns and make more informed decisions. This leads to a variety of benefits for both consumers and companies such as proactive intervention, intelligent automation, and highly personalized experiences. It also enables us to find ways for connected devices to work better together and make these systems easier to use.

This, in turn, leads to even higher adoption rates. That’s exactly why we need to improve the speed and accuracy of data analysis with AI in order to see IoT live up to its promise. Collecting data is one thing, but sorting, analyzing, and making sense of it is a completely different thing. That’s why it’s essential to develop faster and more accurate AI to keep up with the sheer volume of data being collected as IoT starts to penetrate almost all aspects of our lives.


Examples of IoT data [4]:

  • Data that helps cities predict accidents and crimes
  • Data that gives doctors real-time insight into information from pacemakers or biochips
  • Data that optimizes productivity across industries through predictive maintenance on equipment and machinery
  • Data that creates truly smart homes with connected appliances
  • Data that provides critical communication between self-driving cars

It’s simply impossible for humans to review and understand all of this data with traditional methods; even cutting down the sample size simply takes too much time. The big problem will be finding ways to analyze the deluge of performance data and information that all these devices create. Finding insights in terabytes of machine data is a real challenge – just ask a data scientist.

But in order for us to harvest the full benefits of IoT data, we need to improve:

  • Speed of big data analysis
  • Accuracy of big data analysis

AI and IoT Data Analytics
There are six types of IoT data analysis where AI can help [5] (a toy sketch of steps 3 and 5 follows the list):
1. Data Preparation: defining pools of data and cleaning them, which takes us to concepts like dark data and data lakes.
2. Data Discovery: finding useful data within those defined pools.
3. Visualization of Streaming Data: dealing with streaming data on the fly, defining and discovering it, and visualizing it in smart ways so that decisions can be made without delay.
4. Time Series Accuracy of Data: keeping confidence in the collected data high, with high accuracy and integrity.
5. Predictive and Advanced Analytics: a very important step, where decisions can be made based on the data collected, discovered and analyzed.
6. Real-Time Geospatial and Location (logistical) Data: keeping the flow of data smooth and under control.
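Here is a small, self-contained Python sketch (my own toy example, not from the article or any particular IoT platform) of what steps 3 and 5 look like in miniature: process a sensor stream on the fly and flag readings that drift far from a rolling baseline, so action can be taken without waiting for a batch job:

```python
from collections import deque

def flag_anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings far outside a rolling mean/stddev."""
    recent = deque(maxlen=window)
    for t, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            std = (sum((x - mean) ** 2 for x in recent) / window) ** 0.5 or 1e-9
            if abs(value - mean) > threshold * std:
                yield t, value              # candidate anomaly: act immediately
        recent.append(value)

sensor_stream = [20.0 + 0.1 * (i % 5) for i in range(100)]   # simulated readings
sensor_stream[60] = 35.0                                     # injected fault
print(list(flag_anomalies(sensor_stream)))                   # -> [(60, 35.0)]
```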

AI in IoT Applications [1]:

  • Visual big data, for example, will allow computers to gain a deeper understanding of images on the screen, with new AI applications that understand the context of images.
  • Cognitive systems will create new recipes that appeal to the user’s sense of taste, creating optimized menus for each individual and automatically adapting to local ingredients.
  • Newer sensors will allow computers to “hear,” gathering sonic information about the user’s environment.
  • Connected and Remote Operations: with smart and connected warehouse operations, workers no longer have to roam the warehouse picking goods off the shelves to fulfill an order. Instead, shelves whisk down the aisles, guided by small robotic platforms that deliver the right inventory to the right place, avoiding collisions along the way. Order fulfillment is faster, safer, and more efficient.
  • Preventive/Predictive Maintenance: saving companies millions by predicting the location and time of breakdowns or leaks before they happen.

These are just a few promising applications of Artificial Intelligence in IoT. The potential for highly individualized services is endless, and they will dramatically change the way people live.

Challenges facing AI in IoT

  • Compatibility: IoT is a collection of many parts and systems that are fundamentally different in time and space.
  • Complexity: IoT is a complicated system with many moving parts and a non-stop stream of data, making it a very complex ecosystem.
  • Privacy/Security/Safety (PSS): PSS is always an issue with every new technology or concept. How far can AI help without compromising PSS? One of the new solutions to this problem is using blockchain technology.
  • Ethical and Legal Issues: it’s a new world for many companies with no precedents, untested territory with new laws and cases emerging rapidly.
  • Artificial Stupidity: back to the very simple concept of GIGO (Garbage In, Garbage Out), AI still needs “training” to understand human reactions/emotions so that its decisions make sense.

    Conclusion
    While IoT is quite impressive, it really doesn’t amount to much without a good AI system. Both technologies need to reach the same level of development in order to function as perfectly as we believe they should and would. Scientists are trying to find ways to make more intelligent data analysis software and devices in order to make safe and effective IoT a reality. It may take some time before this happens because AI development is lagging behind IoT, but the possibility is, nevertheless, there.

    Integrating AI into IoT networks is becoming a prerequisite for success in today’s IoT-based digital ecosystems. So businesses must move rapidly to identify how they’ll drive value from combining AI and IoT—or face playing catch-up in years to come.

    The only way to keep up with this IoT-generated data and gain the hidden insights it holds is using AI as the catalyst of IoT.

Ahmed Banafa Named No. 1 Top Voice to Follow in Tech by LinkedIn in 2016

    References:

    1. https://aibusiness.com/ai-brain-iot-body/
    2. http://www.creativevirtual.com/artificial-intelligence-the-internet-of-things-and-business-disruption/
    3. https://www.computer.org/web/sensing-iot/contentg=53926943&type=article&urlTitle=what-are-the-components-of-iot-
    4. https://www.bbvaopenmind.com/en/the-last-mile-of-iot-artificial-intelligence-ai/
    5. http://www.datawatch.com/
    6. https://www.pwc.es/es/publicaciones/digital/pwc-ai-and-iot.pdf
    7. http://www.iamwire.com/2017/01/iot-ai/148265
    Figures Credit: Ahmed Banafa


    Webinar: Next Generation Design Data & Release Management
    by Daniel Nenni on 05-12-2017 at 12:00 pm

    Design Data Management (DDM) is a bit like insurance. It’s something every semiconductor company has to have, and as a result it’s probably something taken for granted. In order to make their products more useful, the DDM vendors have added more functionality to manage more of the lifecycle of design data.

Dassault’s Synchronicity DesignSync is over 15 years old, so it is a mature product, but it has been continuously enhanced and improved and is the most widely used design data management solution in the semiconductor industry.


Consensia, a channel partner of Dassault, is holding a DesignSync webinar to explain how DesignSync is now part of a platform-based approach that Dassault takes to managing the lifecycle of all design data, including internally developed and external IP management.

    WEBINAR REGISTRATION
    Thu, May 18, 2017 8:30 AM – 9:30 AM PDT

In the webinar, Consensia will explain the differences between file-based and change-based DDM, something that they have offered since 2007. They are also going to discuss how the change-based approach (DesignSync Modules), along with caching methods and new administration tools, makes it easier to set up new projects, replicate user workspaces with less data, tag an entire manifest once (instead of tagging hundreds or thousands of individual files) and reduce the storage needed to support design teams.

    As well as undertaking version control, DesignSync manages the release process for design teams developing and releasing an SOC or Bill of IP comprised of multiple data types from different sources – including other DDM repositories.

    Consensia’s view is that DDM solutions have to interoperate across the enterprise to be of greater value to its customers. With all of the recent M&A and consolidation in the semi industry, even some of the ‘smaller’ semi companies have design teams comprised of groups using different flows, and oftentimes, different DDM systems.

For example, an SOC may be comprised of digital content created by designers using a Synopsys flow with SVN for version control, and analog content from a team using Cadence with DesignSync for version control. There may also be a third team using Perforce for version control. The process of pulling all of the design data (including internal and 3rd-party IP) together from these different sources to release the design is likely to be a manual one that can take days or even weeks to complete.

Consensia will show how DesignSync uses hierarchical references, or HREFs, to address this problem. The manifest for a DesignSync module version stores a list of individual file versions along with the associated directory structure and, if a hierarchy is included, references to sub-modules, known as HREFs. HREFs are processed when design data is fetched into a workspace: if a module version that contains an HREF is fetched, the HREF is resolved and the referenced module versions are fetched as well.

This process enables DesignSync to pull in design data from other DDM repositories and generate a release with a complete genealogy of the design, making it possible to resurrect the design as it existed at a previous point in time.
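Conceptually, HREF resolution is just a recursive walk over module manifests. The sketch below is my own illustration of that idea (the data structures and function are hypothetical, not DesignSync's actual model or API):

```python
# Toy module manifests: each (module, version) lists its files and the
# sub-module versions it references (the HREFs).
MANIFESTS = {
    ("soc_top", "1.3"): {
        "files": ["rtl/top.v", "constraints/top.sdc"],
        "hrefs": [("analog_blk", "2.0"), ("ddr_phy", "4.1")],
    },
    ("analog_blk", "2.0"): {"files": ["spice/pll.sp"], "hrefs": []},
    ("ddr_phy", "4.1"): {"files": ["lib/phy.lib"], "hrefs": []},
}

def fetch(module, version, workspace=None):
    """Fetch a module version and recursively resolve its HREFs into the workspace."""
    workspace = workspace if workspace is not None else []
    manifest = MANIFESTS[(module, version)]
    workspace.extend(manifest["files"])
    for sub_module, sub_version in manifest["hrefs"]:
        fetch(sub_module, sub_version, workspace)   # versions pinned by the manifest
    return workspace

print(fetch("soc_top", "1.3"))
# A release tagged against ("soc_top", "1.3") pins every sub-module version,
# which is what makes it possible to resurrect the design as of that point in time.
```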

    WEBINAR REGISTRATION
    Thu, May 18, 2017 8:30 AM – 9:30 AM PDT

    Also Read:

    CEO Interview: Sanjay Keswani of Consensia

    IP Traffic Control

    Synchronizing Collaboration

    Behind the 3DEXPERIENCE for silicon


    Power Checks for Your Libraries
    by Bernard Murphy on 05-12-2017 at 7:00 am

    When your design doesn’t work, who owns that problem? I don’t believe the answer to this question has changed significantly since semiconductor design started, despite distributed sourcing for IP and manufacturing. Some things like yield can (sometimes) be pushed back to the foundry, but mostly the design company owns the problem. Even if you finally figure out the problem was in one of those outsourced technologies, it’s often too late to matter. Which is why incoming inspection is growing in popularity. Better to find a potential problem up-front when you still have time to get the supplier to fix it or you can possibly switch to another option.


    Of course, you could reasonably argue that it’s not your job to QA vendor IP, but that’s not going to be much consolation if a bug gets through – see above. That said, IP suppliers generally do a very good job of comprehensive checking before they ship a release; very good, but not always perfect. This is just as much the case in foundation and custom IP as it is in big digital blocks. Some functions or corners have been better tested than others and some characterization errors will be fixed only as they are detected during the lifecycle of the IP. And, of course, some of that hard IP will be your own; are you sure that Liberty models for that PHY have been fully validated against the Spice netlist?

    Power characterization is one area that may be more prone to this kind of problem than others, simply because aggressive power management and representation of these features is relatively recent. One very obvious source of potential problems here is in power/ground (PG) associations. Design for low-power depends on multiple power and ground options; planning which of these connect to which IP directly affects power architecture, yet the physical PG connections are not evident even as late as gate-level design and cannot be checked carefully until late-stage physical implementation.

The real PG connections in, say, a foundation or analog IP are apparent in the Spice netlist for that IP, and you can be quite confident that a reputable IP supplier will have run LVS between the netlist and the cell layout. You might be slightly less confident that they have thoroughly checked the Liberty power-modeling features for the cell against the Spice netlist, in this case the related_power_pin and related_ground_pin specs in the Liberty model. This is something Crossfire from Fractal Technologies, with its 1100 family of checks, will do for you. These checks cover Liberty delay and power models describing conditions, related power pins and power-down functions, in all cases comparing the Liberty model against the Spice netlist topology.
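To make the idea concrete, here is a tiny sketch of the kind of consistency check being described, comparing the supplies a Liberty model declares for each pin against the supplies actually wired in the Spice netlist (a toy example of my own with made-up cell data; it is not how Crossfire is implemented):

```python
# Liberty view: which power/ground each signal pin claims to be related to.
liberty_pins = {
    "Y": {"related_power_pin": "VDD",  "related_ground_pin": "VSS"},
    "A": {"related_power_pin": "VDDL", "related_ground_pin": "VSS"},  # suspect
}
# Netlist view: supplies actually reached from each pin in the .subckt topology.
spice_supplies = {"Y": ("VDD", "VSS"), "A": ("VDD", "VSS")}

for pin, lib in liberty_pins.items():
    pwr, gnd = spice_supplies[pin]
    if (lib["related_power_pin"], lib["related_ground_pin"]) != (pwr, gnd):
        print(f"PG mismatch on pin {pin}: Liberty says "
              f"{lib['related_power_pin']}/{lib['related_ground_pin']}, "
              f"netlist says {pwr}/{gnd}")
```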


There’s a litany of reasons why design teams don’t do these checks; I heard similar reasons for soft IP when I was at Atrenta. First, “not my problem”. Maybe not, but it’s going to be somebody’s problem if an error slips through. Second, “the design tools will catch it before we get to implementation”. That’s not necessarily so, especially for hard IP. Missing arcs, or even pins or conditions, can slip through. Even in implementation, timing analysis, to take one tool as an example, checks against what the model specifies, not what it should specify.

Some problems, particularly PG problems, will eventually be caught at LVS. But waiting until that stage to catch a PG problem is probably suicidal. A PG issue could require rework of the floorplan and the power distribution network, with a corresponding ripple through all other aspects of implementation, just after you swore on an aged relative’s grave that you were going to tape out tomorrow. All because you felt the IP providers/teams should do their job properly and you shouldn’t have to double-check their work.

    Some brave souls have in the past scripted their own checking, but it’s looking like that approach has become very challenging for the latest technology nodes. And there’s no evidence these kinds of checks are available in other commercial tools or flows. So if you haven’t written and maintained a checker to work with the latest revs of Liberty, you really are gambling.

    Yes, these problems are infrequent but they do happen. And when they do, particularly in hard IP, the downside can be pretty catastrophic, as hopefully I have made clear. You might give some thought to why there are some seriously heavyweight IP consumers using Crossfire today. They are big believers in “trust but verify”, not a bad philosophy when you have so much on the line. You can read the Crossfire white-paper HERE.


    Webinar on TFT and FPD Design
    by Daniel Payne on 05-11-2017 at 12:00 pm

I knew that the acronym TFT means Thin Film Transistor, but I hadn’t heard that FPD stands for Flat Panel Detector. It turns out that FPDs are solid-state sensors used in x-ray applications, similar in operation to the image sensors used for digital photography and video. I’ll be attending and blogging about what I learn at a webinar next week:

    • Accelerating TFT and FPD Design
    • Tuesday, May 16th, 2017 from 10AM to 11AM PDT
    • Hosted by Silvaco

    Webinar Overview
This webinar will provide an overview of different techniques for TFT and FPD design enabled by the Silvaco EDA tool portfolio. With recent advances in display technology, circuits in display designs have enhanced their functionality and grown rapidly in size. Design and simulation of these enormous circuits is a challenge for traditional SPICE simulators; Fast-SPICE-like simulators are far better suited to these applications. In this webinar, we will discuss several key aspects which make these simulators popular in the large-circuit domain and demonstrate how to simulate a TFT-based display panel using Silvaco’s SmartSpicePro, an event-driven multi-rate simulator. Next, we will discuss the new features of Silvaco’s layout editor, Expert, for improving design productivity on FPD layout. And finally, we will discuss early voltage drop analysis techniques on large-scale TFTs. InVar Prime offers an easy solution for early TFT layout analysis.


    Webinar Highlights
    • Overview and demo of new FPD-related features of the Expert layout editor:
    – Improved Equal Resistance Router
    – 3D RC extraction interface
    – Improved compatibility with OpenAccess
    • Key features of TFT-based display panels from the circuit simulation point of view
    • Key aspects of event-driven multi-rate circuit simulation methodology
    • Simulation performance and accuracy trade-off
    • Challenges in TFT-based display panel simulation
    • Simulation of TFT-based display panel examples
    – transient simulation
    – result accuracy comparison with SmartSpice
    – simulation performance and accuracy control

    • Key power integrity problems of layout design for TFT-based display panels
    • Transient nature of TFT displays and requirements for reliability analysis
    • Analysis performance and accuracy trade-off with layout only data
    • Challenges in TFT-based display panel voltage drop analysis
    • Examples of TFT-based display panel voltage drop analysis
    • Transient simulation performance and accuracy control

    Webinar Registration
You need to register online; it’s a simple process with just a few fields to fill out, then check your email inbox for a confirmation message with a link to the webinar. Something that makes Silvaco webinars different from most is that the presenters are actually expert users of the tools, so expect to see live demonstrations followed by a Q&A session at the end.

    About Silvaco, Inc.
    Silvaco, Inc. is a leading EDA provider of software tools used for process and device development and for analog/mixed-signal, power IC and memory design. Silvaco delivers a full TCAD-to-signoff flow for vertical markets including: displays, power electronics, optical devices, radiation and soft error reliability and advanced CMOS process and IP development. For over 30 years, Silvaco has enabled its customers to bring superior products to market with reduced cost and in the shortest time. The company is headquartered in Santa Clara, California and has a global presence with offices located in North America, Europe, Japan and Asia.


    Polishing Parallelism
    by Bernard Murphy on 05-11-2017 at 7:00 am

    The great thing about competition in free markets is that vendors are always pushing their products to find an edge. You the consumer don’t have to do much to take advantage of these advances (other than possibly paying for new options). You just sit back and watch the tool you use get faster and deliver better QoR. You may think that is the least you should expect, but remember in effective monopolies (cable for example?) you don’t even get that. Be happy there is real competition in EDA 😎


For quite a while Synopsys held the pole position in logic simulation with VCS. The top metric by far in this domain is speed, and here Cadence recently upped their game with Xcelium. Around the same time, Synopsys announced their next step in acceleration with fine-grained parallelism (FGP) support in VCS, and it looks to me like we’re back to a real horse race. I have no idea who has the better product today – that’s for the market to decide – but VCS FGP looks to me like a worthy runner.

    VCS FGP gets to faster performance through parallelism – no big surprise there. The obvious question is how well this scales as the number of processors increases. One thing I like about Synopsys is that they let us peek behind the curtain, at least a little bit, which I find helps me better understand the mechanics, not just of the tool but also why emerging design verification problems are increasing the value of this approach.

First, a little on platforms for parallelism. Multi-core processors have been around for a while, so we’re used to hearing about 8-core processors. In what seems to be an Intel-ism (apologies, I’m not a processor guru), when you get above 8 processors the label switches to many-core, reflecting no doubt advances in inter-CPU coordination (even more sophisticated cache coherence, etc.). Most important, the number of cores per processor is growing quickly and is expected to reach ~200 by 2020, providing a lot more scope for massive parallelism within a single processor. Processors at this level are commonly used in exascale computing and are also promoted as more general-purpose accelerators than GPUs (after all, not every modern problem maps to neural net algorithms).

    Which brings me back to simulation. Earlier acceleration efforts at the big EDA vendors were offered on GP-GPUs, but it seems telling that both have switched back to CPU platforms, perhaps because of wider availability in clouds/farms but presumably the GPU advantage for this application can’t be too significant versus many-core acceleration.


    Synopsys give (in a webinar HERE and a white paper HERE) a nice intro to their use of this level of parallelism in simulation. The basic (non-parallel) case runs on one processor. All event modeling and I/O is serial in this case. The first step towards parallelism divides RTL (or gate-level) blocks between cores; within a core, modeling and I/O remains serial but now you have parallelism between cores. This is multi-threading in which each block goes into a thread and there can be as many threads as there are cores. Synopsys calls this coarse-grained parallelism. The next step is to partition at the task level (always block, continuous assign, ..). This fine-grained (hence FGP) distribution should allow for optimum use of parallelism, especially across many more threads/cores, if tasks are optimally distributed.
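The payoff from finer granularity is easier to see with a toy scheduling model (my own illustration of the partitioning idea, not VCS internals): when whole blocks are pinned to threads, the biggest block bounds the speed-up, while load-balancing individual tasks across threads gets much closer to the ideal:

```python
# Compare the busiest-thread load (the critical path) for coarse-grained
# (one job per block) versus fine-grained (one job per task) partitioning.
blocks = {                          # hypothetical per-task evaluation costs (us)
    "cpu_core": [40, 35, 30, 25],
    "gpu":      [20, 20, 20, 20],
    "noc":      [10],
}

def makespan(job_costs, workers):
    """Greedy longest-job-first assignment; returns the busiest worker's load."""
    loads = [0.0] * workers
    for cost in sorted(job_costs, reverse=True):
        loads.sort()
        loads[0] += cost            # give the job to the least-loaded worker
    return max(loads)

coarse_jobs = [sum(costs) for costs in blocks.values()]        # one job per block
fine_jobs = [c for costs in blocks.values() for c in costs]    # one job per task

print(f"serial:         {sum(fine_jobs):.0f} us")                    # 220 us
print(f"coarse-grained: {makespan(coarse_jobs, workers=4):.0f} us")  # 130 us
print(f"fine-grained:   {makespan(fine_jobs, workers=4):.0f} us")    #  60 us
```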

How they do that is something Synopsys chooses not to share, unsurprisingly. But they do share (in the webinar) some guidelines on how to best take advantage of VCS FGP. Designs that will benefit most are those with a high level of activity across the design (a common theme with other solutions). Examples they cite include graphics, low power, multi-core (surprise, surprise) and networking designs, all at RTL. They also cite gate-level PG simulations and scan test simulations. All should have a lot of activity across the chip at any given time – lots of IPs lit up a lot of the time, the way many modern designs operate.

    Procedural and PLI/DPI content are less friendly to this kind of parallelism. Which doesn’t mean your design can’t use these constructs; they’ll just limit potential speed-up. In theory you might expect that high-levels of communication between tasks in different threads would also limit performance gains. This seems to be an area where vendors like Synopsys have found a pragmatic approach which offers significant speed-up for many realistic designs and objectives, even though it can’t provably do so for all possible designs/objectives (reminds me of the traveling salesman problem and practical routing solutions which work well in physical design tools).


    Of course some level of inter-thread communication is unavoidable if the design is to do useful work so the benefit of adding more cores for any given design and workload will saturate somewhere. Synopsys show this in the above curves for an RTL design and a gate-level design. But this kind of limit will apply to all approaches to parallelization.
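For a rough feel for why the curves flatten, textbook Amdahl's law is enough (the fractions below are my own assumptions, not Synopsys data): whatever portion of the simulation cannot be parallelized bounds the achievable gain no matter how many cores are added:

```python
# Amdahl's law: speed-up on N cores when a fraction p of the work parallelizes.
def amdahl_speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

for cores in (1, 4, 8, 16, 32, 64):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.90, cores):4.1f}x (90% parallel), "
          f"{amdahl_speedup(0.99, cores):5.1f}x (99% parallel)")
# With 90% parallel work the gain saturates near 10x; at 99% it keeps scaling longer.
```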


    Synopsys has posted some pretty impressive performance gains across a variety of designs (results shown here are the latest, not seen in the webinar or the white-paper, as of the time I saw it). Settle back and enjoy the gains!


    Circuit Design: Anticipate, Analyze, Exploit Variations – Statistical Methods and Optimization
    by Daniel Nenni on 05-10-2017 at 12:00 pm

    We are happy to publish book reviews, like this one from Dr. Georges Gielen of the KU Leuven in Belgium, for the greater good of the semiconductor ecosystem. So, if you have a semiconductor book you would like to review for fame not fortune let me know.
    Continue reading “Circuit Design: Anticipate, Analyze, Exploit Variations – Statistical Methods and Optimization”