
3 in 1 Hardware Verification

by Bernard Murphy on 11-14-2016 at 12:00 pm

Aldec has offered front-end EDA tools for over 30 years but may not be a familiar name to mainstream IC design engineers. That’s probably because for most of that period they haven’t really targeted IC design. They have been much more focused on PC-based design for FPGAs, particularly where requirements traceability has been important, for example in avionics design, where DO-254 compliance is mandatory.

But there have been important shifts in interesting markets over the past few years which move closer to Aldec’s center of gravity. Fragmenting market needs demand device volumes that are not cost-effective in custom IC implementations, a problem further compounded by rapidly evolving standards, such as communications protocols. This drives a trend to FPGAs at higher unit cost but much lower total cost. Additionally, standards which are either regulatory or de facto regulatory have become more important in rapidly growing markets like automotive (ISO 26262), industrial (IEC 61508) and medical (IEC 60601).

You may be even more surprised to hear that Aldec has a prototyping/emulation product. Why would their customers need such a thing? Because FPGA/multi-FPGA designs are getting to be too big and too software-driven for burn-and-churn debug to be practical. Just as FPGA designers are turning to UVM and formal proving, they’re also turning to hardware help in verification. That’s where the latest HES release, based on Xilinx UltraScale devices, becomes interesting.


One HES7XUS1320BPX board (pictured at the beginning of this piece), containing three XCVU440 devices on a single PCB, has an estimated capacity of 79 million ASIC gates. For larger designs the system can be scaled up with a standards-based backplane that can interconnect up to four boards, providing a capacity of 316 million ASIC gates. What’s more, it can be used as an emulator or as an emulation slave to a master simulation.
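Since capacity scales linearly with board count up to the four-board backplane limit, the headline numbers are easy to reproduce; a trivial sketch, using only the figures quoted above:

```python
# Capacity figures from the article: one HES7XUS1320BPX board (three XCVU440
# devices) is estimated at 79M ASIC gates; the backplane links up to 4 boards.
GATES_PER_BOARD = 79_000_000

def system_capacity(boards: int) -> int:
    """ASIC-gate capacity of a backplane-connected HES system (linear scaling)."""
    if not 1 <= boards <= 4:
        raise ValueError("the backplane interconnects up to four boards")
    return GATES_PER_BOARD * boards

print(system_capacity(4))  # 316000000, the quoted 316-million-gate maximum
```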

Of course hardware alone doesn’t make an emulator. Part of what you need is reasonable setup times. Judging by comments on earlier generations it seems this is why Aldec went with the biggest Xilinx devices – to reduce the need for partitioning for many of the designs they target. But to echo a comment in a reference below, it is less obvious how well this scales if you need to go to 2 or more boards.

The other thing you need is fine-grained debug support. HES offers up to 16 groups of 16 kbits of “static probes”, spread across all FPGAs in the system; these seem to be effectively instrumented into a multi-chip logic analyzer. They also offer dynamic probes, which you can select at runtime, allowing debug access anywhere, though at slower speeds via Xilinx readback. You also get a backdoor interface for reading and writing memory.


Aldec also provides support for using emulation mode in ICE (in-circuit emulation) modeling with speed bridges, as a simulation accelerator, for co-emulation with virtual models, and for software debug. Apparently a pretty comprehensive solution, though to dig deeper you’ll need to talk to your local distributor.

One last thing. It’s always tricky to get information on pricing, but I did find this reference, which suggests that in 2012, HES-7 was under $20k for a single-board solution. That is a very different price range from mainstream emulation and prototyping solutions. I can’t speak to how well HES could address your needs, but the pricing alone should pique your interest. You can read more about the latest Aldec HES capabilities HERE. There’s also a somewhat more detailed report on HES HERE.



CEO Interview: Chouki Aktouf of Defacto Technologies

by Daniel Nenni on 11-14-2016 at 7:00 am

As a 30+ year semiconductor veteran I can tell you with 100% certainty that start-ups are the lifeblood of EDA. The mantra is “Innovate or Die!” and that is exactly what Defacto is doing. After more than 10 years of innovating in Design for Test at RTL, Defacto is now offering a complete EDA solution based on generic EDA tools to cover advanced Design Restructuring, Design Verification, Low Power Design, IP Integration, and RTL Signoff.

The development of Defacto’s technology began at the National Polytechnic Institute of Grenoble (INPG-France) in 1997, under the leadership of Chouki Aktouf, PhD. More than 18 man-years of work were invested in Defacto’s unique DFT technology. Dr. Aktouf and his team did the early market assessment and established proof of concept by working with a large European semiconductor manufacturer to validate the benefits of the company’s technology.

In 2003, Dr. Aktouf, Michel Oger, Philippe Duchene, and James Girand founded Defacto and the company raised Series A to D from two major investors in France, Innovacom and CIC-CM.

What does Defacto do?
Defacto provides RTL design solutions which help users build a unified design flow in which different standards (RTL for design description, UPF for power intent, SDC for timing constraints, LEF/DEF for physical design information) are considered jointly.

What are the challenges facing EDA companies today?
The main challenges are threefold. First, the ongoing mergers between major semiconductor companies.

Second, the new opportunities around design solutions, especially for killer apps in automotive and the IoT (Internet of Things), and the ability to provide compelling solutions for them.

Last but not least, the emerging FPGA-based solutions for complex designs, where EDA offerings are still very limited compared to ASICs.

But why partitioning at RTL?
Partitioning and re-architecting complex SoCs during or after logic synthesis is just unrealistic, given the complexity of today’s chips and the associated runtimes. Partitioning at RTL means analyzing different configurations and different scenarios against several criteria: power, DFT, reliability, timing, physical information, etc. It’s just the way to go.

You spoke about a unified flow. What are the benefits of this kind of platform?
With the Defacto-based unified flow, RTL designers who are not experts in, for example, low-power design or timing will be able to automatically update the related UPF or SDC databases when the RTL changes. Imagine an RTL designer who is able, in minutes, to (1) change complex RTL, then (2) automatically update the UPF and SDC files and release all the changes. This is a great benefit compared to the traditional way of manually updating UPF and SDC databases, with its many interactions between teams to reach a consistent RTL+UPF+SDC database.
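As a purely hypothetical sketch of the idea (this is not Defacto’s actual tooling or API), a sync step might propagate an RTL identifier rename into the UPF and SDC views so all three databases stay consistent:

```python
import re

def rename_across_views(old: str, new: str, views: dict) -> dict:
    """Toy illustration: propagate an RTL identifier rename into every view.

    'views' maps a view name ('rtl', 'upf', 'sdc') to file content. A real
    tool operates on an elaborated design database, not raw text."""
    pattern = re.compile(rf"\b{re.escape(old)}\b")  # whole-identifier match
    return {name: pattern.sub(new, text) for name, text in views.items()}

# Invented file fragments for illustration only.
views = {
    "rtl": "module top (input clk_a, input rst_n);",
    "upf": "create_supply_port vdd\n# isolation strategy clocked by clk_a",
    "sdc": "create_clock -name clk_a -period 10 [get_ports clk_a]",
}
updated = rename_across_views("clk_a", "clk_main", views)
print(updated["sdc"])
```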

What’s new this year?
Several breakthrough technologies are being announced this year. First is the unified flow mentioned earlier; we are ready to demonstrate its value to major semiconductor companies. Also, this November at ITC (the International Test Conference) in Fort Worth, Texas, we will be demonstrating for the first time a platform that helps explore complex DFT architectures at RTL, so that DFT engineers and DFT architects can decide how much DFT logic is needed at different levels. The ultimate goal is to fit an area-overhead and test-time budget for given test coverage criteria. A typical DFT architecture includes test compression, memory BIST, etc.
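The trade-off exploration described can be caricatured in a few lines; the configurations and numbers below are invented for illustration and are not Defacto data:

```python
# Hypothetical DFT configurations: (name, coverage %, area overhead %, test time ms)
CONFIGS = [
    ("no_compression", 99.2, 2.0, 850),
    ("edt_16x",        99.0, 3.5, 120),
    ("edt_64x",        98.6, 4.8,  45),
]

def pick_config(configs, min_coverage, max_area_pct):
    """Cheapest-in-test-time configuration meeting coverage and area budgets."""
    ok = [c for c in configs if c[1] >= min_coverage and c[2] <= max_area_pct]
    return min(ok, key=lambda c: c[3]) if ok else None

best = pick_config(CONFIGS, min_coverage=99.0, max_area_pct=4.0)
print(best[0])  # edt_16x: meets 99% coverage within 4% area at the lowest test time
```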

How will chip companies benefit from Defacto’s STAR design solution?
Defacto tools are Tcl-based and easily customizable to interoperate with existing DFT flows. Defacto doesn’t compete with existing DFT offerings; it augments existing DFT flows.

What are your major challenges?
Demonstrating the benefits of this RTL DFT solution on complex chips in real projects. Maybe the most important challenge is convincing users to start exploring complex RTL design configurations as early as possible. In other words, changing mindsets is one of the challenges we face daily!

Which markets do you feel offer the most and best opportunities for STAR over the next few years and why? Is there a killer app somewhere in these markets?
Several, especially emerging markets. For example, the automotive market now demands reliable, secure and testable chips: higher test coverage, with chips tested while applications are running. This means DFT requirements are higher, and it is one of the reasons to start building and configuring DFT architectures as early as possible.

Also Read:

Executive Interview: Vic Kulkarni of ANSYS

CEO Interview: Taher Madraswala of Open-Silicon

CEO Interview: Simon Butler of Methodics


Hotz Tech Crunched

by Roger C. Lanctot on 11-13-2016 at 4:00 pm

George Hotz, founder of Comma.ai, told the world at TechCrunch that he was going to ship a $999 aftermarket autopilot system, the Comma One. The smartphone-sized device was designed to replace the rearview mirror, enabling an automated driving experience in appropriately equipped cars – initially certain Acura and Honda models.

Last week the U.S. National Highway Traffic Safety Administration sent Hotz a letter seeking answers to questions regarding the functionality of his device. They also noted that a failure to respond to their outreach would lead to $21,000/day fines.

The agency, a division of the U.S. Department of Transportation, made clear that since installation of the Comma.ai device required disabling and removing existing safety technology in the car, it violated various aspects of the Safety Act. Hotz quickly folded his tent, indicating that he had no interest in tangling with lawyers and regulators. He also took issue with NHTSA for what he perceived as a one-dimensional communication that allowed no room for conversation or negotiation regarding an actual market introduction of his device.

In his TechCrunch presentation Hotz more or less anticipated the demise of his efforts even as he was bragging about his achievement of actually delivering a product. He touted “shipability” as the key differentiator between Comma.ai and all other innovators in the space.

He took particular aim at Cruise Automation, acquired earlier this year by General Motors for a rumored $1B, describing the company as a “sellout.” He further ascribed mafia-like characteristics to Mobileye’s domination of the self-driving market.

In fact, he emphasized the value and power of his independence and his ability to control his own destiny since he had access to raw data that he was free to analyze and aggregate as he saw fit without any pre-processing or filtering. We can expect to hear more about this independent approach in the future, but for now Hotz has been sidelined.

NHTSA’s arrival on the aftermarket self-driving car scene raises questions about the ability of innovators like Hotz to bring their systems to market or at least test their ideas in real-world circumstances. Hotz claimed at TechCrunch to have more than 730 beta users in the field.

There are at least five other companies working in the aftermarket space including Pilot Automotive, Pearl Auto, Perrone Robotics, TorcRobotics and Paravan Industry. Presumably these companies have a path to testing and product development that will not run afoul of NHTSA.

Having pulled the plug on one player, though, NHTSA may become more assertive. Kicking Hotz to the curb (or throwing him under the self-driving bus?) is like swatting a fly for NHTSA. Given that Tesla Motors’ automated driving system offers a value proposition comparable to the Comma One’s, should we expect to see NHTSA exerting veto power over future enhanced cruise control or other safety systems from car companies?

It will be a shame if NHTSA’s entry into the automated driving conversation actually slows or delays development of this technology. The emergence of Cruise, Comma.ai and others reflects the reality that the tools for creating built-in and aftermarket self-driving systems are proliferating, bringing with them a corresponding reduction in cost.

NHTSA will soon have its hands full. The best outcome will be for NHTSA to find a way to constructively engage with these innovators and developers rather than simply waiting on the sidelines with its finger on the termination trigger. Lives are truly at stake. With highway fatalities on the rise in the U.S. and around the world, it seems clear that we need to advance this technology as rapidly as possible. Perhaps Hotz is just an overheated outlier. Let’s hope that is the case.


DDoS Attack: A Wake-Up Call for IoT

by Ahmed Banafa on 11-13-2016 at 12:00 pm


Welcome to the world of the Internet of Things, in which a glut of devices connected to the internet emits massive amounts of data. Analysis and use of this data will have a real, positive impact on our lives. But we have many hoops to jump through before we can claim that crown, starting with a huge number of devices lacking a unified platform, and with serious security-standards issues threatening the very progress of #IoT.


The Ramifications of Not Accepting Industry 4.0

by Bill McCabe on 11-13-2016 at 7:00 am

In the last couple of years, Industry 4.0 has significantly affected manufacturing on a global scale. With a heavy focus on the Internet of Things, the use of smart machines and other devices has become a critical part of Industry 4.0. With new networks of intelligence on the horizon, there is no doubt that Industry 4.0 will continue to spread and prove to be a critical part of manufacturing.

While the benefits are explored time and time again, some may wonder what would happen if they don’t embrace this advanced technology. Early on, the biggest nuisance would be higher costs. Older machinery will continue to age and will eventually need to be replaced. Those who have embraced Industry 4.0 will find that their marginal costs decrease while production flows smoothly and effortlessly. This allows a higher output and fewer issues along the way.

There is also a greater risk of running out of product when you need it most. Human error makes it difficult to estimate the amount of raw material you need for a week. When you utilize Industry 4.0 technology, you can keep track of everything in real time. Based on the speed of the machines and the amount of raw materials you have in stock, the system can analyze and predict what the output will be, and how long until you run out of essential items. The system can even be set up to handle the reordering process when stock runs low, ensuring you never run out of product. Those who don’t embrace it forgo this benefit.
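The reorder logic just described is essentially a runway calculation; a toy sketch with invented numbers (not any particular vendor’s system):

```python
def days_until_stockout(stock_units: float, consumption_per_day: float) -> float:
    """How long current raw-material stock lasts at the observed usage rate."""
    return stock_units / consumption_per_day

def should_reorder(stock_units, consumption_per_day, lead_time_days, safety_days=2):
    """Reorder once remaining runway falls within supplier lead time plus margin."""
    runway = days_until_stockout(stock_units, consumption_per_day)
    return runway <= lead_time_days + safety_days

print(should_reorder(400, 50, lead_time_days=7))   # True: 8 days left vs 9-day threshold
print(should_reorder(1000, 50, lead_time_days=7))  # False: 20 days of runway remain
```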

Those who do not embrace it will find that they are unable to remain competitive in the changing market. With the technology, there is less waste of raw materials, better supply chains, and lower operating costs due to improved efficiency. Even product output levels increase, so those who accept and utilize Industry 4.0 are poised to succeed. Those who do not will find themselves operating too slowly and at too high a cost to reach the largest number of customers possible. After all, buyer expectations are changing in today’s world and it is critical that you keep up.

When a problem occurs during the production process, there is also the ability to note any machinery issues that take place. If something breaks down, or if there is a misfire that could damage product, the system can stop at once. It will then alert maintenance to any concerns, so you are down for shorter periods of time and don’t face any surprises along the way.

As you can see, it is incredibly important to embrace Industry 4.0. Take the time to explore how you can best utilize it within your own company and avoid many of the unpleasant surprises that can take place if you push off the conversion process to “save money”.

For more information about IoT and Industry 4.0 visit our new website www.internetofthingsrecruiting.com. For ideas or help with your next IoT search use this link: http://internetofthingsrecruiting.com/schedule-a-conference/


Dolphin Webinar “The proven recipe for uLP SoC”

by Eric Esteve on 11-11-2016 at 12:00 pm

Dolphin will hold a live webinar on November 15 at 9:00 AM PST and again on November 22 at 10:00 AM GMT. The webinar targets SoC designers who want to learn how to quickly implement ultra-low-power (uLP) techniques using proven recipes.


Flexible IoT Wireless

by Bernard Murphy on 11-11-2016 at 7:00 am

There’s been quite a bit of debate about what is the “best” wireless option for the IoT, coming down usually in favor of there being no single best option. Applications are so widely varied that different solutions are needed to ideally fit different requirements. However, IoT economics require we settle on a limited set of options, of which Bluetooth-5 (BT5) and two 802.15.4 options, ZigBee and Thread, seem to be the front-runners. But suppose we didn’t have to compromise, or at least not as much as we think? I talked to Teppo Hemiä, CEO of Wirepas, at ARM TechCon, to understand how the Wirepas solution (Pino – Finnish for Stack) can help.

Teppo makes the argument that cellular support, while long-range, is uneconomical for the IoT, at least as a primary path to edge nodes, and still lacks coverage in some locations critical for IoT (such as basements). BT is economical but lacks scalability and range. He also argues that, to be effective, LPWAN requires building infrastructure which would ultimately rival that for cellular, making it uneconomic as a universal solution. Teppo asserts that a better solution should leverage a combination of cellular and decentralized mesh networks, which is what they aim to provide with Pino.

Pino is software only, running, he said, on top of any radio – certainly on top of the physical layer of BT5 and 802.15.4. Wirepas replaces part of the wireless stack without the need to add hardware or OS support. Most importantly, there is no need to build new infrastructure; any Pino-enabled device added to the network can route and extend a network supporting thousands of devices per gateway, with gateways connecting ultimately to cellular or WiFi networks or through Ethernet.

Communication can be multi-hop across homogeneous devices, with control based on local decision-making. Operating parameters can be tuned to optimize bandwidth, latency, range and power consumption. For each device there can be multiple routing options and multiple eventual gateways for backhaul.

I had a couple of questions, and I’m sure readers will have more. First up was power consumption. If ultra-low-power devices also have to act as routers, won’t that drain batteries faster? Teppo said that standby consumption of a router can be less than 20uA, much lower than ZigBee, allowing for a 5-year battery life, or 10 years with larger batteries. My second question (which occurred to me after the meeting) was around adaptability/customization. Yes, because the product is purely software you can in principle adapt to any IoT niche requirements, but do you ultimately lose all your margin in high-cost customization? That is answered by their business strategy – going after large-scale installations where customization costs can be amortized over licensing fees at high volume.
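The battery-life claim is easy to sanity-check with an idealized back-of-the-envelope model (constant standby draw; ignores self-discharge, duty-cycled radio activity above standby, and temperature effects):

```python
HOURS_PER_YEAR = 24 * 365

def standby_years(battery_mah: float, standby_ua: float) -> float:
    """Idealized lifetime: battery capacity divided by a constant standby draw."""
    hours = battery_mah / (standby_ua / 1000.0)  # mAh / mA -> hours
    return hours / HOURS_PER_YEAR

# A 20 uA standby draw on a (assumed) 1000 mAh cell gives about 5.7 years,
# in line with the 5-year claim; roughly double the capacity gives 10+ years.
print(round(standby_years(1000, 20), 1))
```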

Wirepas have chalked up an impressive win in the Oslo region (Norway) where they are deployed in 700k Aidon electricity meters. Also Nokeval has released a series of environmental sensors using the same wireless networking technology. Both examples are consistent with their business strategy – large-scale installations in applications like metering, sensors, lighting and beacons. Another application Teppo mentioned is as a replacement for RFID. He claims this solution can be cheaper than RFID, and obviously you would no longer need readers (or people to bring readers near the devices) because devices are already connected. He said (but didn’t elaborate) that this application is already in deployment.

The company is quite young – founded in 2010, though they raised their first round of funding only last year and only recently opened their first office in the Bay Area (Palo Alto). Of course their concept is not entirely new. Work on wireless mesh network architectures is very active, with several areas still in research. In fact, a lot of this work came out of, and continues at, the university in Tampere (Finland) where these guys are based. It is encouraging to see a commercial solution emerge and already deployed at city scale. This can only help push further progress. You can learn more about Wirepas HERE.



Ford Seeks Own Path to Car Sharing and IoT

by Roger C. Lanctot on 11-10-2016 at 4:00 pm

It’s hard to be a thought leader around the future of transportation when the entire market seems to be moving in one of three directions simultaneously: ride hailing (Uber, Lyft), car sharing (Zipcar, Car2go) or automated driving (Google, Tesla). If you’re Ford Motor Company and you care about whether you are adding to or mitigating existing congestion with new driving options, it is even harder.

The goal of any new transportation solution ought to be to reduce the traffic load on existing highways. The consensus opinion is that the available infrastructure is finite and any new transportation solution should be intended to reduce the demand rather than increase it.

Car sharing clearly is adding to the vehicle load on existing infrastructure. That load can be expected to grow around the world with car sharing service penetration currently so low and with the wave of car sharing startups still rushing in. Experts have already sounded the alarm that car sharing services – as well as ride sharing suppliers – are drawing consumers away from public transportation.

While experts have suggested that car sharing and ride hailing services will diminish demand for new cars, no less a voice than Boston Consulting Group attributes a diminution of only about 800,000 vehicles five years hence, out of a market of 100M vehicles sold. Clearly, ride hailing and car sharing, if anything, represent a net addition to the number of cars on the road.

What is emerging is a fragmentation of transportation which is akin to the fragmentation of content consumption taking place in the car. Radio broadcasters are concerned that in-car listening has been diminished by the increasing access to streaming services via connected smartphones. The simple reality is that people are still listening to their car radios, but they are divvying up their listening among different sources.

So it goes with transportation. Car sharing options on streets and parking garages create new use cases which are not mutually exclusive. Ford has sought a different path and that path is reflected in its acquisition of Chariot.

Unlike the majority of existing car sharing services offering smaller vehicles like Daimler’s Smart or BMW’s i3, Chariot makes use of Ford’s Transit Connect vans and focuses on pooling passengers largely though not exclusively to and from public transit stops – crowdsourcing its routes based on demand. In reality, many city planners have discovered that this is precisely the application served by Uber and Lyft. The difference is the focus on carpooling – though this is also served by Uber and Lyft.

This positions Ford in direct opposition to Uber and Lyft without posing a threat to the existing taxi fleet. Ford has threaded the transit needle and is poised to take this opening to cities beyond its San Francisco beachhead.

What is even more distinctive about the Chariot play by Ford, though, is that the vehicles being used come from Ford’s fleet division, for which car sharing is a natural extension. Car sharing initiatives belong with fleet applications. Ultimately, shared vehicles will be networked and will share information – setting the stage for the connection of passenger cars in the future.

It is no coincidence that Ford’s year-old Go Drive initiative in London was shut down at the end of October. Ford has likely concluded that just adding car sharing to the streets of London or any other city around the world is only adding to the vehicle load on already stressed streets.

Ford Executive Chairman William Ford has been advocating for four or five years for connected and shared transportation. With the acquisition of Chariot, Ford Smart Mobility is starting to take shape while shaping a leadership position on the future of transportation for Ford.

At the same time, Ford is beginning to make progress on its wider vision of automated driving and a connected transportation world built upon IoT principles. Ford appointed Laura Merling to head autonomous vehicle development within the Smart Mobility Group. Merling brings to Ford an IoT background from her work at SAP and AT&T.

At the same time, Ford announced a long-term tie-up with IoT kingpin Blackberry. Blackberry says it is dedicating a team to work with Ford on expanding the use of Blackberry’s QNX Neutrino operating system, Certicom security technology, QNX hypervisor and QNX audio processing software.

Ford, like a growing roster of other auto makers, is looking to partner with cities to help resolve transportation challenges. Ford’s vision of shared transportation based on crowdsourced routing is clearly intended to reduce the vehicle load within city limits.

Of course, the Chariot-related strategy might also negatively impact vehicle sales, but Ford is clearly making the calculation that either this is not the case or, if so, it is worth the sacrifice. The Ford Focuses deployed in London for the Go Drive trial are gone, suggesting that passenger-vehicle-based car sharing is not in the cards for Ford – at least not at the moment.

The last missing piece of the strategic puzzle is FordPass. What appeared at introduction to be a payment, parking and pedestrian navigation platform has yet to pan out. One of the handful of original apps, FlightCar, is now defunct.

FordPass may yet help to knit Ford’s IoT vision together. My personal recommendation has been to build in navigation and related location resources and services – integrated with Ford’s existing in-vehicle app resources.

Ford has not been shy about grabbing headlines – particularly with its announced plans to be mass producing automated vehicles by 2021. But taking that announcement in the context of the subsequent Chariot acquisition paints a clearer picture of Ford’s goal to reduce the need for private vehicle ownership within cities – with the ultimate goal of automating that transportation and enabling connectivity between transportation assets and infrastructure.


How does the IoT get to 20 Billion?

by David G. Simmons on 11-10-2016 at 12:00 pm

Not long ago I was asked the question “How do we get to 20 billion IoT devices?” (Actually, I’ve been asked this question multiple times over the past 10+ years.) Great question! How, exactly, do we get to 20 billion (or 30 billion, or a trillion) IoT devices? We’re certainly not going to get there with wearable devices and other personal gadgets. Well, we might, but it would be a stretch, and the probability is near zero. Why do I say it’s not going to happen with wearables, etc.? Well, again, let’s do some simple calculations.

There are (roughly) 7 billion people on the planet. For argument’s sake, let’s say every single person on the planet gets fitted out with 3 wearable devices. There, we made our 20 billion number with some to spare. Done. We can all go home now. But not so fast. Only 4.5 billion people have access to working toilets, so I’m going to guess that they might buy a toilet before they buy a FitBit or an Apple Watch. I know I would. You would too. Only 3 billion people are on the internet, so that cuts down the number of possible devices quite a bit too. Suffice it to say that we’re not going to get to 20 billion IoT devices anytime soon if we base it on the number of people on the planet.
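The arithmetic behind that argument, written down:

```python
PEOPLE = 7_000_000_000
INTERNET_USERS = 3_000_000_000
WEARABLES_EACH = 3
TARGET = 20_000_000_000

# 21 billion - clears the target, but only if every human on Earth buys three.
print(PEOPLE * WEARABLES_EACH >= TARGET)
# 9 billion - the ceiling among people actually online falls well short.
print(INTERNET_USERS * WEARABLES_EACH >= TARGET)
```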

The number of people on the planet simply isn’t interesting. It’s a limited market. I’ve been saying this since 2004. I call it the Internet of People (IoP). It’s what gets all the press because let’s face it, it’s fun and sexy and you get to buy cool toys and play with them. But in a real sense, it is uninteresting.

Great, so now we know how we’re not going to get there. Helpful, I guess, but not really in the way you wanted. So let’s take a simple example, and do some more simple math.

Let’s say we want to put strain/crack/breakage sensors on every window of a skyscraper. Let’s make this skyscraper 300 feet tall (about 30 storeys), and put, say, 1,000 windows per floor. That gives us 30,000 windows. (Remember that number, because I’m coming back to it.) Now let’s put 10 of those in each city. We’re up to 300,000 windows. Let’s do that for 100 cities: 30,000,000 windows to put sensors on. And that’s just ONE IoT application on (a few) buildings. I can think of about a dozen more without breaking a sweat, each one requiring about the same number of sensors. Now can you see how we get to 20 billion devices? I sure can, and none of it has anything to do with consumers, wearable devices, or almost any of the other currently trendy “IoT” topics.
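As a quick check of the running totals (assuming roughly 1,000 window panels per floor, which is what makes the 30,000-per-building figure work):

```python
FLOORS = 30
WINDOWS_PER_FLOOR = 1_000   # assumption: ~1,000 panels per floor of curtain wall
BUILDINGS_PER_CITY = 10
CITIES = 100

per_building = FLOORS * WINDOWS_PER_FLOOR                # 30,000 sensors, one tower
total_sensors = per_building * BUILDINGS_PER_CITY * CITIES
print(per_building, total_sensors)  # one application alone reaches 30 million
```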

The number of sensors we can put on actual Things, by contrast, is virtually limitless. If you’re looking for true market potential, this is where the interesting things will happen. This is where the real money is to be made. It’s where the truly difficult problems will be solved. It’s where the really interesting work is.

Now, let’s go back to that number I told you to remember: 30,000 windows on that building. It’s one thing to set about the task of placing sensors on all of those windows. That job alone would take you 6 months or more (look up how long it takes to wash all the windows on a skyscraper). But what if someone had to go back every year and replace 7,500 batteries on those sensors? Or even 1,000. What?! You’d basically have to have a full-time crew of battery-changers working on every one of your buildings. Great for employment worldwide. Not great for the economics of owning the building. Again, the tricky part is going to be removing the battery from the equation. Make that sensor a solar-powered, stick-on sensor and your window-washers can stick one on each window during one cleaning cycle and then … never touch them again.

This is how the IoT gets to 20 billion: by connecting things to the internet, not just connecting people in more ways to the internet. My rule: if your IoT solution is based on the number of people on the planet, it’s a self-limiting solution. If it’s based on the number of things, then it’s nearly limitless.

Also Read: What’s Really Going to Limit the IoT?