
CEO Interview: Marie Semeria of LETI
by Eric Esteve on 10-13-2016 at 12:00 pm

Marie Semeria of LETI

Laboratoire d’électronique des technologies de l’information (LETI) is a French research center affiliated with the CEA (Commissariat à l’Energie Atomique). Since LETI’s creation in 1967, this affiliation has had two consequences: money flowed from the deep pockets of the atomic industry to sustain advanced research at LETI, and secrecy became part of the research center’s DNA. The first point was true until recently, when the French state decided that LETI’s operations should be balanced with external funding from industry covering more than 40% of the budget. As a research center, LETI still places a high value on confidentiality, but it can now speak more openly to promote some of its research.

This decision has generated a wealth of opportunities for LETI to find industry partners, from small and medium-sized enterprises (SMEs) benefiting from LETI’s engineering expertise to develop a product, up to large semiconductor companies like Intel, Qualcomm, STMicroelectronics or GlobalFoundries. Marie-Noelle Semeria has been LETI’s CEO since October 2014, and she has a strong vision of the evolution of the electronics industry and of how LETI can play an active role in this revolution. I have had the opportunity to speak several times with MN Semeria during interviews or open discussions, and I really appreciate her scientific background. She knows the technology and has a global view of electronics, even though her main tasks are to manage a team of 1,300 researchers (1,900 including assignees) and to find new markets for LETI.

According to MN Semeria, several technologies developed by LETI to support emerging electronic systems are currently licensed by top semiconductor companies, like CoolCube (3D chip integration), disruptive High Performance Computing (HPC) architectures or Fully Depleted Silicon on Insulator (FD-SOI), not to mention nanotechnologies for biology and healthcare. If you talk with a project manager involved in advanced chip design, you will probably discuss the challenges of moving from 14nm to 10nm, the ever-increasing development cost you must pay to follow Moore’s law, and so on. Talking with LETI’s CEO opens new doors, as the research center is developing technologies that complement Moore’s law, like FD-SOI, as well as some disruptive enough to enable the creation of a post-Moore silicon-based industry.

FD-SOI is one of the hot topics these days; it was the focus of this interview, and MN Semeria had a lot to share with SemiWiki about the technology.

FD-SOI: Power consumption, Performance and Cost

Better power consumption is one of the clear benefits of building a Field Effect Transistor (FET) on an insulated substrate rather than on bulk silicon. In fact, an SOI wafer is a regular silicon wafer on which a thin silicon oxide layer has been formed. Because the transistors sit on this buried oxide (BOX, see picture) and not directly on the silicon substrate, the drain and source parasitic capacitances are almost eliminated. If you remember the dynamic power formula P = aCV²f, you know why the parasitic power consumption linked with the source/drain-to-substrate capacitance disappears. Many other parasitic capacitances between the metal lines and the substrate also decrease significantly, if they don’t disappear entirely. Transferring a bit of information in an IC consists of charging a metal wire (which can be as long as half the IC perimeter) and the gate of the MOSFET you are addressing; that’s why decreasing these parasitic capacitances has a real effect on power consumption. Taking 10nm FinFET as a reference, the work done in LETI’s labs shows that you can reduce power consumption by 40% with a 12FDX device.
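To make the formula concrete, here is a minimal sketch with invented numbers (not LETI’s measurements) showing how lower switched capacitance and supply voltage combine under P = aCV²f to yield a power reduction in the 40% range:

```python
# Relative dynamic power under P = a * C * V^2 * f. The capacitance and
# voltage values are hypothetical, chosen only to illustrate the scaling.
def dynamic_power(a, c_farads, v_volts, f_hz):
    """Classic CMOS switching-power estimate."""
    return a * c_farads * v_volts ** 2 * f_hz

# Same activity factor and clock; the FD-SOI case assumes 25% less
# switched capacitance and a 10% lower supply voltage.
ref    = dynamic_power(a=0.1, c_farads=1.00e-9, v_volts=0.80, f_hz=1e9)
fd_soi = dynamic_power(a=0.1, c_farads=0.75e-9, v_volts=0.72, f_hz=1e9)

print(f"relative power: {fd_soi / ref:.2f}")  # ~0.61, i.e. roughly 40% lower
```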

In terms of pure performance, MN Semeria agrees that FinFET will always lead the pack (at a similar geometry) and says that for applications like data centers or high-end mobile application processors, chipmakers will continue to stick with FinFET technologies. She is not trying to put the two technologies in direct opposition; rather, she says they are complementary. It could make sense to design low-end mobile application processors on FD-SOI. When you need to boost an FD-SOI device’s performance, you can use a kind of “overdrive”: forward body bias. Because you can access the (deep) silicon substrate (see BP on the picture), you can apply a forward voltage to the body, or forward body bias (FBB), to increase the raw performance of the FET device. This is like getting high performance on demand, and low power the rest of the time. According to MN Semeria, you can get identical performance with 12FDX and 10nm FinFET.


If you consider the technology evolution when strictly following Moore’s law (28nm and 20nm planar; 16nm, 10nm and 7nm FinFET), you quickly realize that each time you go down by one node to benefit from better performance and lower power consumption, you pay the bill in terms of development cost, or non-recurring engineering (NRE) cost. This is true of the design cost, and it is also true of the mask and wafer processing costs. These extra processing costs are linked with double, triple or even quadruple patterning. According to MN Semeria, processing a 12FDX wafer requires only double patterning, allowing 40% cost savings compared with 10nm FinFET. If we expect the semiconductor industry to keep innovating, and to see many chip design starts in the future, some of these will have to target technologies offering lower development cost and still-decent performance.

FD-SOI: Ecosystem, Roadmap and the 80 Tape-outs

If you remember, the origin of SOI technology was linked with the atomic industry, because SOI devices have much better immunity to radiation. This radiation immunity is a benefit for RF and memory designs at the chip level, and for the automotive industry at the system level. When designing for the Internet of Things (IoT) or for wireless mobile, the long-term trend will be to integrate RF into a super-application processor. Integrating RF into a FinFET chip appears to be extremely difficult, because the quantized architecture (one fin, two fins, etc.) does not allow the smooth analog design that planar does.

The automotive industry has become very dynamic, and we see numerous advanced SoC design starts to support the emerging automotive applications (ADAS…). But automotive is also an industry where you have to design for a harsh environment (temperature, vibrations, dust, etc.), and FD-SOI appears to be well suited. Automotive is also very different from the mobile industry, for example, as the design cycle and product lifetime are very long. In other words, the automotive industry needs to rely on a solid roadmap.

We have seen in a recent article that GlobalFoundries has announced the availability of the 12FDX technology for tape-out in 2019, designs are ongoing on 22FDX, and Samsung is thinking about 20nm FD-SOI and is counting tape-outs on its 28nm FD-SOI. LETI is a research center, so it has to explore the next FD-SOI steps, and MN Semeria says they have done so for 10nm and 7nm. LETI has silicon test results at 10nm and has checked 7nm feasibility. In fact, you have to remember that when the FD-SOI technology was licensed by GlobalFoundries, LETI sent a team of 10 engineers to Dresden to support, first, 22FDX development, then 12FDX. This team is complemented by the Grenoble-based characterization lab on the LETI campus. To summarize, the FD-SOI technology roadmap is solid: 28nm (STMicroelectronics and Samsung), 22nm (GloFo and Samsung), 12nm (GloFo with LETI), 7nm (LETI development). For more precise information about the FD-SOI technology nodes, see this blog from Scotten Jones.

I was surprised when MN Semeria mentioned that 80 chips will be taped out on FD-SOI, but after checking the number, it is made up of 50 customers already engaged with GlobalFoundries (as announced by Alain Mutricy at SOI Shanghai this September), 12 tape-outs announced by Samsung for 2016, and the rest split between STMicroelectronics and chips in design at Samsung.

Last but not least, the ecosystem created around FD-SOI technology is real. LETI has designed the foundation IP to support GlobalFoundries customers on 22FDX (we can guess that Samsung’s customers on 28nm have used the IP developed by STMicroelectronics). LETI also supports the GlobalFoundries FDXcelerator Partner Agreement, aimed at offering customers a complete ecosystem of IP and services for designing on FD-SOI. If you are looking for more complex IP, like CPU cores, you should know that ARM supports both 28nm and 22nm, and interface IP (USB, PCI Express, etc.) is supported by many vendors (Synopsys, Cadence, Verisilicon, Sankalp…).


I expect to tell you more about these technologies, in some cases disruptive, that we will have to consider in the future if we want the semiconductor industry to remain the place where innovation happens. Moore’s law is not dead; it will continue through FinFET technology development, but this technology will have to be complemented with other technology options (FD-SOI…), new chip architectures (HPC…), new packaging (3D, TSV, 2.5D…) and more (than Moore!).

From Eric Esteve from IPNEST

Also Read:

CEO Interview: Geoff Tate of Flex Logix

CEO Interview: Xerxes Wania of Sidense

A Candid Conversation with the GlobalFoundries CEO!


Machine Learning – Turning Up the Sizzle in EDA
by Bernard Murphy on 10-13-2016 at 7:00 am

There’s always a lot of activity in EDA to innovate and refine specialized algorithms in functional modeling, implementation, verification and many other aspects of design automation. But when Google, Facebook, Amazon, IBM and Microsoft are pushing AI, deep learning, Big Data and cloud technologies, it can be hard not to see EDA as something of a backwater compared to the more dynamic (and more widely relevant) world of big software. Which makes you wonder whether we too could benefit from some of those methods, and whether that might catalyze new directions in EDA, attracting new (and young) talent and ideas to a domain that seems to have been largely indifferent to the software mainstream.

I now believe this is changing because I see forward-thinkers in EDA working on ways they can leverage hot ideas from mainstream computing. I mentioned in earlier posts Ansys’ big data analytics around power integrity and reliability analysis. In this post I want to talk about some very interesting work Synopsys has been doing around machine learning (ML), particularly in application to enhancing ease of use in formal verification. Manish Pandey (Synopsys) introduced this area in the first tutorial at the Formal Methods for CAD (FMCAD) conference a couple of weeks ago.

I should be clear up-front that the tutorial and this blog are on directions, not products. Don’t bother calling your Synopsys sales rep; there is nothing you can buy, yet, but this isn’t whiteboard stuff either. In my discussion, Manish was understandably cagey about how far they have got but it sounded like they are in active prototyping. So it seems timely to talk about why Manish and Synopsys feel ML is important for formal verification and what it can enable.

Why ML is Interesting for Formal

Formal has become a lot more accessible through pre-packaged apps requiring little specialized understanding to use. And a growing group of verification engineers is becoming more comfortable with some level of custom property and constraint definition, enabling more architecture-specific checks to be defined without the need for a team of formal PhDs. But some of what needs to be checked is getting harder: cache-coherent interconnects, security management, safety management and power management dynamics are some messily interdependent examples. The properties that need to be defined are complex in their own right, but even more challenging can be defining constraints which will reasonably bound run-times yet not mask real problems. ML methods could potentially help both in assisting engineers to debug why properties they expect to pass are failing, and in helping them build more reliable constraint sets.

Machine Learning – a quick review


Given the press we see on neural nets/deep learning, you could be forgiven for thinking that all AI is now encompassed in that topic. Actually it is a subset technique in the broader scope of machine learning. All such methods use training data to effectively self-program recognition of complex scenarios in images or other datasets.

Machine learning architectures also place significant emphasis on methods to access rich and distributed data sources, which leads naturally to a need for Big Data methods; in verification you might want to harvest training data from many different design sites and in different formats: simulation, emulation and formal verification. Hadoop is probably the best-known Big Data platform, but it doesn’t seem to be a hot choice in EDA, primarily thanks to specialized needs in access to EDA datasets. One such need is very fast iterative analysis on data, not a particular strength of Hadoop. Manish mentioned Apache Spark as an alternative headed in the right direction, but reserved judgment on whether it was the best possible platform for Synopsys needs. Whichever way this goes, the solution will require a world-class SW stack providing in-memory analysis and computation on distributed data.
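As a rough illustration of that direction (the file paths and schema here are hypothetical, and this is a generic Spark sketch rather than anything Synopsys has described), pooling failure records from several verification flows into one in-memory dataset might look like this:

```python
# Minimal PySpark sketch: pool verification results from different flows
# into one cached DataFrame for fast iterative analysis. Paths and fields
# are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("verif-harvest").getOrCreate()

# Assume each flow exports failures as JSON records with at least
# (design, property, cycle) fields; real schemas would differ.
sim    = spark.read.json("hdfs:///verif/simulation/*.json").withColumn("source", F.lit("sim"))
emu    = spark.read.json("hdfs:///verif/emulation/*.json").withColumn("source", F.lit("emu"))
formal = spark.read.json("hdfs:///verif/formal/*.json").withColumn("source", F.lit("formal"))

failures = sim.unionByName(emu).unionByName(formal).cache()  # keep in memory for iteration

# Example query: which properties fail across the most designs?
(failures.groupBy("property")
         .agg(F.countDistinct("design").alias("designs_hit"))
         .orderBy(F.desc("designs_hit"))
         .show(10))
```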

ML and Formal Applications

So if you could set up a powerful ML system with efficient access to a rich, distributed set of functional analysis data, what kinds of analysis might that enable? One example is helping with root-cause analysis of a failed assertion. Manish illustrated this through an imagined dialog between a trained ML system and an engineer, where the engineer identifies an unexpected behavior, the ML asks a few clarifying questions, does a root-cause assessment, then suggests a change in constraints that would lead to expected behavior (and launches a re-run with the new constraints).

Another promising area is specification mining – searching through prior verification datasets to find likely reasonable properties and constraints. Like all machine-learning methods, this is a probabilistic exercise – what you find is not guaranteed to be absolutely true in a mathematical sense. But given a sufficiently rich learning dataset, it may be true enough to represent all reasonable use-cases. And what you find as exceptions may be sufficiently revealing to prompt you to add safeguard logic to block those cases, or to add a strongly-worded warning to the documentation, or perhaps to lead you to rethink the property you are checking.
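To make that tangible, here is a toy trace-mining sketch (my illustration, not Synopsys’ method): scan sampled simulation cycles for pairwise signal relations that always held, and propose them as candidate properties, remembering that anything mined this way is only as trustworthy as the traces it came from.

```python
# Toy specification mining: propose "always equal" relations observed
# across all sampled cycles as candidate properties. Signal names and
# trace values are invented for illustration.
from itertools import combinations

traces = [  # hypothetical per-cycle signal dumps
    {"req": 1, "gnt": 1, "busy": 1},
    {"req": 0, "gnt": 0, "busy": 0},
    {"req": 1, "gnt": 1, "busy": 0},
]

candidates = []
for a, b in combinations(traces[0].keys(), 2):
    if all(t[a] == t[b] for t in traces):
        candidates.append(f"assert always ({a} == {b})")

print(candidates)  # ['assert always (req == gnt)'] -- busy diverges in cycle 3
```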


Manish mentioned other areas that he sees as promising though further out, of which I’ll touch on just one here: using ML to aid in theorem-proving. This task is generally required in proving that a design (or subsystem) meets predefined system specifications through a series of proof steps. Since these often span significant amounts of logic (and perhaps time), they must be guided by heuristics to minimize human effort in decomposing the problem into tractable sub-components. Developing these heuristics typically requires a great deal of expertise, but relatively recent work has shown that there is a functional relationship between the conjecture to be proved (plus axioms) and the best heuristics to use for the proof search, which makes ML a very suitable assistant in guiding proofs. That could make ML+formal even more useful in proving things like compliance with safety and security specifications.
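As a minimal sketch of why learning helps here (a crude stand-in for the learned models in real premise-selection work, with invented formulas and a deliberately naive tokenizer), one can rank known lemmas by symbol overlap with the conjecture and try the best-scoring ones first:

```python
# Toy premise selection: score each known axiom/lemma by Jaccard
# similarity of its symbol set to the conjecture's, highest first.
def symbols(formula: str) -> set:
    for ch in "(),":
        formula = formula.replace(ch, " ")
    return set(formula.split())

def rank_premises(conjecture: str, premises: list) -> list:
    goal = symbols(conjecture)
    def score(p):
        s = symbols(p)
        return len(goal & s) / len(goal | s)
    return sorted(premises, key=score, reverse=True)

axioms = ["add(x, zero) = x", "mul(x, one) = x", "add(x, y) = add(y, x)"]
print(rank_premises("add(a, zero) = a", axioms))
# ['add(x, zero) = x', 'add(x, y) = add(y, x)', 'mul(x, one) = x']
```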

You can learn more about the Synopsys presentation HERE and more background information on the theorem-proving topic HERE.

More articles by Bernard…


Case study illustrates 171x speed up using SCE-MI
by Don Dingee on 10-12-2016 at 4:00 pm

As SoC design size and complexity increases, simulation alone falls farther and farther behind, even with massive cloud farms of compute resources. Hardware acceleration of simulation is becoming a must-have for many teams, but means more than just providing emulation…


Do You Know the (Green) Wave in San Jose?
by Roger C. Lanctot on 10-12-2016 at 12:00 pm

No. A green wave isn’t something you do at a New York Jets or a Michigan State Spartans game. A green wave is that thing your dad or obsessive friend or maybe YOU do when you try to synchronize your driving with the changing of sequential traffic lights.

Connected Signals, BMW and Argonne National Lab are kicking off a study in San Jose which will run for six months. The study is intended to help determine the real world safety and fuel-efficiency benefits of connecting traffic light data to vehicles.

Automakers and the Federal government in the U.S. believe that providing signal information can reduce fuel consumption and greenhouse gas emissions by 10% or more. To prove their point these organizations require data confirming these benefits to guide their future decision-making.

To overcome the lack of data, the City of San Jose is supporting the study by providing real-time traffic light information from the city’s traffic management system. Together with predictive information about upcoming light changes developed by Connected Signals, this data should help the 400 participating drivers safely cruise through more lights while they are green, and warn them to slow down approaching lights that will be red.
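As a toy illustration of the arithmetic involved (with invented numbers, not the study’s data), picking a cruising speed that arrives during a predicted green window is straightforward:

```python
# Green-wave toy model: choose the fastest legal speed that reaches the
# next light while it is green. Distances, times and the speed limit
# below are made up for illustration.
def green_speed(distance_m, green_start_s, green_end_s, v_max_ms):
    """Fastest legal speed arriving inside the green window, or None."""
    slowest = distance_m / green_end_s   # any slower and the light is red again
    fastest = distance_m / green_start_s if green_start_s > 0 else float("inf")
    v = min(v_max_ms, fastest)           # don't arrive before the green starts
    return v if v >= slowest else None

# Light 300 m ahead, predicted green from t=20s to t=45s, 50 km/h limit.
v = green_speed(300, 20, 45, 13.9)
print(f"cruise at {v:.1f} m/s" if v else "can't make this green; ease off early")
```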

Connected Signals says that instruments in participating vehicles will let ANL’s experts analyze how providing this information affects safety and fuel consumption. It is hoped that the results of this study, which is supported in part by the US Department of Energy’s Small Business Vouchers Pilot program, will help shape policy, infrastructure, and technology adoption for connected and autonomous vehicles in the coming years.

Connected Signals already provides the Enlighten app for informing existing BMW owners – using the BMW Apps interface – of the signal phase and timing of traffic lights as displayed in the infotainment system for appropriately equipped BMWs. But the app is only useful in those cities where Connected Signals has gained access to the traffic light information.

The test being run by Connected Signals is an excellent example of an effort to prove a claim previously taken for granted. For further details or to sign up for the pilot, check these links:

https://connectedsignals.com/studies/

http://www.testmiles.com/know-when-that-stop-light-going-change/

And drive safely.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Robots could eventually replace soldiers in warfare. Is that a good thing?
by Vivek Wadhwa on 10-12-2016 at 7:00 am

The United States has on its Aegis-class cruisers a defense system that can track and destroy anti-ship missiles and aircraft. Israel has developed a drone, the Harpy, that can detect and automatically destroy radar emitters. South Korea has security-guard robots on its border with North Korea that can kill humans.

All of these can function autonomously — without any human intervention.

Indeed, the early versions of the Terminator are already here. And there are no global conventions limiting their use. They deploy artificial intelligence to identify targets and make split-second decisions on whether to attack.

The technology is still imperfect, but it is becoming increasingly accurate — and lethal. Deep learning has revolutionized image classification and recognition and will soon allow these systems to exceed the capabilities of an average human soldier.

But are we ready for this? Do we want Robocops policing our cities? The consequences, after all, could be very much like we’ve seen in dystopian science fiction. The answer surely is no.

For now, the U.S. military says that it wants to keep a human in the loop on all life-or-death decisions. All of the drones currently deployed overseas fall into this category: They are remotely piloted by a human (or usually multiple humans). But what happens when China, Russia and rogue nations develop their autonomous robots and acquire with them an advantage over our troops? There will surely be a strong incentive for the military to adopt autonomous killing technologies.

The rationale then will be that if we can send a robot instead of a human into war, we are morally obliged to do so, because it will save lives — at least, our soldiers’ lives, and in the short term. And it is likely that robots will be better at applying the most straightforward laws of war than humans have proven to be. You wouldn’t have the My Lai massacre of the Vietnam War if robots could enforce basic rules, such as “don’t shoot women and children.”

And then there will be questions of chain of command. Who is accountable in the event that something goes wrong? If a weapons system has a design or manufacturing issue, the manufacturer can be held accountable. If a system was deployed when it should not have been deployed, all commanders going up the chain are responsible. Ascribing responsibility will still be a challenging task, as it is with conventional weapons, but the more important question is: Should the decision to take a human life be made by a machine?

Lethal autonomous weapons systems would violate human dignity. The decision to take a human life is a moral one, and a machine can only mimic moral decisions, not actually consider the implications of its actions. We can program it, or show it examples, to derive a formula to approximate these decisions, but that is different from making them for itself. This decision goes beyond enforcing the written laws of war, but even that requires using judgment and considering innumerable subtleties.

And the steady seepage of military technologies into civilian life will see these military systems being deployed in our cities.

Artificial systems have the benefit of not experiencing destructive emotions, such as rage. But they also lack critical positive emotions, such as sympathy and compassion. As Maj. Daniel Davis of the U.S. Army points out: “In virtually every war involving the U.S. … the enemy discovered that although GIs could be as ruthless and vicious as any opponent, the same soldier could extend mercy when appropriate.” The point of war is to attain peace on our terms; the human connection is an important part of facilitating it.

The only way to avoid untenable situations is to create and enforce an international ban on lethal autonomous weapons systems. Unilateral disarmament is not viable. As soon as an enemy demonstrates this technology, we will quickly work to catch up: a robotic cold war.

The precedent for this sort of ban is well established. Barbed spears, chemical weapons and blinding lasers are all weapons that society has agreed should never be used. (Unfortunately, nuclear weapons are not specifically banned, though their use may violate other international laws limiting civilian casualties and long-lasting effects; the main factor curtailing their use is the fear of massive retaliation.)

There is hope for such a ban. Efforts are underway by the U.N. Convention on Certain Conventional Weapons (CCW), leading scientists and the Campaign to Stop Killer Robots to have the world’s governments consider a multilateral treaty that would remove the temptation to build a bigger, better swarm of autonomous killer robots and deploy them sooner than the next potential enemy can. But we are collectively responsible for considering these moral questions and deciding whether we want this technology to be used in war.

Robotics and artificial intelligence both offer great potential for helping society — from searching collapsed buildings for survivors, to sifting massive data for new treatments for cancer. It is up to us whether we harness their potential to build peace and enrich our lives or to ensure endless war and cheapen human life.

Coauthored with Aaron Johnson, who is an Assistant Professor of Mechanical Engineering and Robotics at Carnegie Mellon University and writes about the moral implications of technology.

For more, follow me on Twitter: @wadhwa and visit my website: www.wadhwa.com


SOC Design Techniques that Enable Autonomous Vehicles
by Tom Simon on 10-11-2016 at 4:00 pm

Robots – we have all been waiting for them since we were young. We watched Star Wars, or in the case of the slightly longer-lived of us, we watched Forbidden Planet or Lost in Space. We knew that our future robot friends would be able to move around and interact with their environment. What we did not foresee long ago was that instead of moving among us, we would be riding inside of the first widely produced robots – namely autonomous cars.

It’s pretty easy to see that cars are the perfect platform for a machine that autonomously interacts with its environment. They typically traverse a smooth, flat surface and have well-defined interactions: starting, stopping, turning. They have the room, cooling and power for the substantial computing their operation requires. Automating driving will also provide huge benefits to people. Instead of needing to be fully engaged in operating a vehicle, “drivers” will ultimately be able to focus on other activities while in their cars. In the near term, autonomous vehicles will improve traffic safety.

The task of assisting or driving a vehicle requires creating a virtual 3D world inside the driving system that accurately reflects the outside physical world. A vast array of sensors is required to do this. Data from optical, radar, LIDAR, inertial and other sensors needs to be combined in real time to accomplish this. Then of course, the system has to make decisions based on projected future movements of itself and the surrounding objects.
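As a tiny illustration of the fusion step (a generic textbook rule, not any particular vendor’s pipeline), independent noisy estimates of the same quantity can be combined by weighting each with the inverse of its variance:

```python
# Toy sensor fusion: inverse-variance weighting of independent Gaussian
# measurements, the standard optimal combination rule. Sensor values and
# variances are invented for illustration.
def fuse(measurements):
    """measurements: list of (value, variance) -> (fused value, fused variance)."""
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    return fused, 1.0 / sum(weights)

# Radar: long range but noisy; LIDAR: precise. The fusion leans toward LIDAR.
value, var = fuse([(25.3, 4.0),    # radar estimate: 25.3 m, variance 4.0
                   (24.8, 0.25)])  # LIDAR estimate: 24.8 m, variance 0.25
print(f"fused distance: {value:.2f} m (variance {var:.2f})")  # ~24.83 m
```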

Performing these operations in real time will require more than general-purpose processors. Neural networks are already being used for many of the tasks necessary for object recognition. These systems have to handle extremely high bandwidth and do it in real time; low latency is essential. We are seeing commercial subsystems targeting this market. NXP has introduced its “Blue Box”, which contains specialized processor chips: the S32V234 and the LS2085A. These powerful SOC’s are specifically designed for the workloads seen in autonomous driving. They have multiple ARM cores with substantial caches and memory interfaces, plus IO subsystems for communicating with each other and the sensors.

At the same time Nvidia also has its own solution called Drive PX 2, which is built with 2 Tegras each having an integrated Pascal GPU along with quad A57’s. There are also two discrete Pascal GPU’s. During the Linley 2016 Processor Conference at the end of September, on-chip network IP provider Arteris presented on the topic of using cache coherent networking to improve the operation of the kinds of SOC’s found in the processing units aimed at the autonomous driving market.

Going back to basics, we know that ADAS systems require low-latency, high-bandwidth computation. The SOC’s being developed for this application have many processors and additional components such as accelerators, specialized processing units and interfaces to numerous sensors. In the world of CPU’s it is a long-standing practice to add custom hardwired memory caches to reduce time-consuming reads and writes to external RAM. With a handful of processors and long development cycles, it made sense to custom-build memory cache systems for CPU chips.

Things have changed. Processor cores are now frequently used in larger numbers in SOC’s. What’s more, there is a huge benefit in having the other blocks in the SOC share cache coherency with each other and with the processors. The performance and power benefits are immense. It is no longer practical to build custom cache designs for SOC’s; what is needed is a flexible and systematic way to implement cache coherency interfaces for SOC’s, which have increasing complexity and shorter development cycles.

Arteris already has a robust solution for replacing hardwired buses in SOC’s with a configurable and flexible interconnect network. Just as we have moved away from using dedicated printer, keyboard and mouse cables, FlexNoC from Arteris lets designers quickly size and implement an on-chip network to move data with lower power and real-estate requirements. Packets of data are transferred along a network topology of high-speed interconnect between blocks. It has built-in error correction and makes the best use of on-chip resources.

Arteris has used this as a foundation layer to implement its Ncore IP for providing cache coherent memory interfaces within an SOC. With the supercomputer level of performance needed in ADAS systems, a high-performance cache coherency solution is ideal. However, the feature that takes Ncore to the next level is its ability to take blocks that were not designed with cache capability and give them full cache coherency, even providing them with their own local proxy cache.

Ncore allows the addition of Non-Coherent Bridge blocks and Proxy Caches to make IP blocks that had no cache capability into full-fledged members of the on-chip cache scheme. This comes with all the benefits, such as the pre-fetch effect, the write-gathering effect and optimized coherent memory access. Arteris has also added a number of powerful optimizations to Ncore, like multiple snoop filters, to ensure that the cache coherency uses the smallest amount of area and has the lowest possible latency.
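To see what a snoop filter buys, here is a toy directory model (illustrative only, not Arteris’ design): by tracking which agents may hold each line, a write triggers snoops only to actual sharers instead of broadcasting to every agent.

```python
# Toy snoop filter: a directory mapping line addresses to the set of
# agents that may hold a copy, so invalidations are targeted, not broadcast.
from collections import defaultdict

class SnoopFilter:
    def __init__(self):
        self.sharers = defaultdict(set)   # line address -> agent ids

    def on_read(self, agent, addr):
        self.sharers[addr].add(agent)     # agent may now hold a copy

    def on_write(self, agent, addr):
        for t in self.sharers[addr] - {agent}:   # snoop only real sharers
            print(f"snoop-invalidate line {addr:#x} in agent {t}")
        self.sharers[addr] = {agent}      # writer is now the sole holder

sf = SnoopFilter()
sf.on_read("cpu0", 0x1000)
sf.on_read("dsp",  0x1000)   # e.g. an IP block behind a proxy cache
sf.on_write("cpu0", 0x1000)  # only 'dsp' is snooped, not every agent
```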

We can expect to see a number of larger and more powerful SOC’s for neural networks, image processing and autonomous vehicle control. Of course, infotainment will also drive chip complexity. These chips will probably lead the industry for complexity and sheer processing power and speed. Their designers will look to use the most advanced technology to achieve the highest performance within the shortest development cycle. On-chip networking is already a necessity, as is cache coherency, for these designs. For more information on how Arteris is working in this market, look here on their website.


‘Que Legal,’ Uber é Legal
by Roger C. Lanctot on 10-11-2016 at 12:00 pm

Uber went live in Florianopolis on September 30, a week before my wife and I arrived for some down time. But rumors suggested that the service was shuttered almost as soon as it started with a couple of drivers detained and their vehicles impounded. The word was spreading that the service was considered illegal.

As fate would have it, we stumbled into an Uber driver recruitment meeting and discovered that the service was indeed alive and well and legal – with huge demand and an inadequate supply of drivers. Turns out that Uber is legal throughout Brazil based on federal laws, but that local state and municipal authorities have twisted regulations to make life difficult for Uber drivers.

Depending on the city in Brazil, the local taxi concession may be woven into the fabric of local politics. It is not unusual for the mayors of major cities to own hundreds of taxi licenses, giving them a vested interest in making life difficult for Uber and its drivers.

The resistance in Brazil mirrors the resistance to Uber around the world, rooted in the training and certification required of taxi drivers, who must also acquire expensive medallions or certificates to be allowed to pick up and drop off passengers. In Brazil, taxi drivers must pass background checks and receive certification at the state, municipal and federal levels.

Uber drivers can acquire all the certification they need online: a virtually nonexistent background check, training, and a professional-driver endorsement on their driver’s license. The background check amounts to the driver giving his or her word that they have not committed any crimes.

While Uber is legal, the intimidation of drivers in Florianopolis has been effective. One Uber driver told my wife and me that there are only 10 active Uber drivers among a total of 60 certified drivers serving Florianopolis, a population of more than 1 million citizens.

Many of the drivers who have been certified are afraid to start picking up customers for fear of being arrested and their cars being impounded. This fear persists in spite of the fact that Uber representatives told driver candidates at the recruitment meeting that they will handle all legal problems and legal expenses if there is any problem. The overriding message from Uber: “Uber is legal in Brazil.”

Uber’s arrival in Florianopolis is significant as Uber has been unsuccessfully confronted, protested and opposed in Rio and Sao Paulo. As in many other countries and cities around the world, Uber is filling a need for convenient, inexpensive transportation – and existing taxi services resent the intrusion.

Uber has existing app-based Brazilian competition in the form of EZtaxi and Taxi999. Consumers are attracted to Uber because its drivers are universally better educated and more polite than regular taxi drivers – and their vehicles are cleaner and newer.

The allure of Uber is powerful in Brazil. Transportation is a nightmare in most Brazilian cities with impenetrable traffic jams and, in some cases, limited public transportation options. Florianopolis is especially ripe for Uber given the limitations of the public transportation alternatives.

While Brazil was, until recently, one of the fastest growing automotive markets in the world, the economy has plunged into recession driving down vehicle sales. Even before the recession cars were expensive and heavily taxed – making app-based transportation even more enticing.

Alas, as in most markets, the financial equation justifying Uber driver participation is as tenuous in Brazil as it is in most other places in the world. The R$1.10/km that Uber charges is rapidly eaten away by Uber’s 25% take, the cost of fuel, vehicle maintenance, insurance and food for the driver.

The economics of Uber are broken down by this Brazilian blogger:
http://tinyurl.com/ztwwpor (in Portuguese)
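Using the fare and commission figures above, with invented placeholder costs (not the blogger’s actual numbers), the back-of-the-envelope math looks like this:

```python
# Rough driver economics: R$1.10/km fare and 25% commission come from the
# article; the per-km costs and daily paid distance are invented placeholders.
fare_per_km = 1.10    # R$ charged per km
uber_take   = 0.25    # Uber's commission
fuel_per_km = 0.35    # R$/km, hypothetical
wear_per_km = 0.20    # R$/km, hypothetical maintenance + insurance

paid_km = 200         # hypothetical paid kilometres in a long day
gross = paid_km * fare_per_km
net   = gross * (1 - uber_take) - paid_km * (fuel_per_km + wear_per_km)

print(f"gross R${gross:.2f}, net R${net:.2f}")  # gross R$220.00, net R$55.00
```

With numbers anything like these, a full day behind the wheel nets pocket change, which is exactly Igor’s conclusion below.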

Our intrepid Uber driver, Igor, said it took him only a week to realize that he’d better hang on to his job as a building superintendent along with his freelance videography. Even driving non-stop every day for Uber would never produce a livable income.

So, if you are a tourist visiting Brazil there is a good chance you will find Uber available and legal but frowned upon and harassed by local authorities and resentful taxi drivers. The long-term viability of Uber remains in doubt, but the short-term savings are impossible to resist.

The real revelation of Uber, aside from the convenience, which can be matched by local competitors, is the charm of the drivers themselves. But the blogger (noted above) points out that these charming drivers may turn nasty over time as they realize they cannot make a living charming Brazilians and visitors alike for pocket change.

It’s worth noting that the taxi drivers my wife and I met and chatted with in Porto Alegre were, in many cases, as charming and interesting as Igor in Florianopolis. There is something to be said for getting a ride from a properly trained and certified professional who isn’t coping with the sneaking suspicion that he or she is being ripped off. For now, Uber is que legal (“so cool”). Oba!


AI and the black box problem
by Bernard Murphy on 10-11-2016 at 7:00 am

Deep learning based on neural nets and many other types of machine learning have amazed us with their ability to mimic or exceed human abilities in recognizing features in images, speech and text. That leads us to imagine revolutions in how we interact with the electronic and physical worlds in home automation, autonomous driving, medical aid and many more domains.

But there’s one small nagging problem. What do we do when it doesn’t work correctly (or, even more troubling, how do we know when it’s not working correctly)? What do we do when we have to provide assurances, possibly backed up by assumption of liability, that it will work according to some legally acceptable requirement? In many of these methods, most notably the deep learning approaches, the mechanisms for recognition can no longer be traced. Just as in the brain, recognition is a distributed function and “bugs” are not necessarily easy to isolate; these systems are effectively black boxes. But unless we imagine that the systems we build will be incapable of error, we will have to find ways to manage the possibility of bugs.

The brain, on which neural nets are loosely modeled, has the same black-box characteristic and can go wrong subtly or quite spectacularly. Around that possibility has grown a family of disciplines in neuroscience, notably neuropathology and psychiatry to understand and manage unexpected behaviors. Should we be planning similar diagnostic and curative disciplines around AI? Might your autonomous car need a therapist?

A recent article in Nature details some of the implications and work in this area. First, imagine a deep learning system used to diagnose breast cancer. It returns a positive for cancer in a patient, but there’s no easy way to review why it came to that conclusion, short of an experienced doctor repeating the analysis, which undermines the value of the AI. Yet taking the AI conclusion on trust may lead to radical surgery where none was required. At the same time, accumulating confidence in AI versus medical experts in this domain will take time and raises difficult ethical problems. It is difficult to see AI systems getting any easier treatment in FDA trials than is expected for pharmaceuticals and other medical aids. And if, after approval, certain decisions must be defended against class-action charges, how can black-box decisions be judged?

One approach to better understanding has been to start with a pre-trained network in which you tweak individual neurons and observe changes in response, in an attempt to characterize what triggers recognition. This has provided some insight into top-level loci for major feature recognition. However other experiments have shown that trained networks can recognize features in random noise or in abstract patterns. I have mentioned this before – we humans have the same weakness, known as pareidolia, a predisposition to recognize familiar objects where they don’t exist.
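A related probe works from the outside in. The sketch below (a generic occlusion-sensitivity test, not the specific experiments described here) masks patches of an input and watches how the output moves, mapping which regions the model actually relies on:

```python
# Toy occlusion-sensitivity map: zero out one patch at a time and record
# how much the model's score drops. The "model" here is a trivial stand-in.
import numpy as np

def model(image):
    """Placeholder for a trained network: score = mean of the centre region."""
    return image[8:24, 8:24].mean()

def occlusion_map(image, patch=8):
    base = model(image)
    heat = np.zeros((image.shape[0] // patch, image.shape[1] // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            masked = image.copy()
            masked[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
            heat[i, j] = base - model(masked)   # how much this patch mattered
    return heat

img = np.random.rand(32, 32)
print(occlusion_map(img).round(3))  # large values mark regions the model uses
```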

This weakness suggests that, at least in some contexts, AI needs to be able to defend the decisions to which it comes so that human monitors can test for weak spots in the defense. Which shouldn’t really be a surprise. How many of us would be prepared to go along with an important decision made by someone we don’t know, supported only by “Trust me, I know what I’m doing”. To enable confidence building in experts and non-experts, work is already progressing on AI methods which are able to explain their reasoning. Put another way, training cannot be the end of the game for an intelligent system, any more than it is for us; explanation and defense should continue to be available in deployment, at least on an as-needed basis.

This does not imply that deep learning has no place. But it does suggest that it may need to be complemented by other forms of AI, particularly in critical contexts. The article mentions an example of an AI system rejecting an application for a bank loan, since this is already quite likely a candidate for deep learning (remember robot-approved home mortgages). Laws in many countries require that an adequate explanation be given for a rejection. “The AI system rejected you, I don’t know why” will not be considered legally acceptable. Deep learning complemented by a system that can present and defend an argument might be the solution. Meantime perhaps we should be adding psychotherapy training to course requirements for IT specialists, to help them manage the neuroses of all these deep learning systems we are building.

You can read the Nature article HERE.

More articles by Bernard…


Targeting Cat-NB1 instructions delivers power savings
by Don Dingee on 10-10-2016 at 4:00 pm

If one wireless IoT technology fit every possible use case, we would have one specification. Many tradeoffs – battery life, mobility, indoor coverage, licensed versus unlicensed spectrum, and more – have made for many potential solutions. A heated discussion right now is over the future of LPWAN technologies, with LoRa, SIGFOX, Ingenu, and Weightless in the mix, versus the potential for the evolution of cellular-based technologies to handle IoT needs.

3GPP has been working very hard on the latter. Thankfully, the once-passionate “war” of M2M versus IoT has come to an end. With mobile revenue flattening and the use cases starting to overlap, the carrier community has embraced the IoT in hopes of reigniting growth beyond what M2M solutions can provide. That in turn drives a need for an entirely new class of chips, ones able to handle a sophisticated protocol stack at low power consumption.

In their June 2016 update “The Evolution to Narrow Band Internet of Things”, the GSA (Global mobile Suppliers Association) discusses the trends in NB-IoT at length. Acceptance of LTE Cat-1 is global, and almost all major carriers are already deployed or in trials. With most carriers skipping Cat-0, interest accelerated in NB-IoT standardization. 3GPP Release-13 now formalizes definitions for Cat-M1 and Cat-NB1, and operators are already chasing pre-commercial trials and targeting full commercial rollout by mid-2017.


The traditional approach to wireless sensor networks was to grab a microcontroller-class core and add a hardware radio. Increasingly, that approach is becoming uncompetitive, burning more area and power than a more optimized IoT solution. This is especially true for Cat-NB1, where the baseband workload in layer 2 with encryption and compression is substantial.

Looking at the diagram above explains in part why CEVA has been pursuing its new strategy with the CEVA-X framework. The same architecture spans the range of LTE requirements by changing the number of scalar execution units. To get to Cat-NB1, CEVA has moved to one scalar execution unit in the new CEVA-X1, its third CEVA-X family member and smallest so far.

“Notice we don’t say DSP here.” When Emmanuel Gresset said that during our briefing, it was telling. LTE requires both efficient control processing and DSP elements. That goes beyond simply adding a fast multiplier to a microcontroller engine, and reaches more into efficient addressing and pipelining made for both control and signal processing needs.

CEVA has taken its basic CEVA-X ISA combining CPU and DSP processing in a single scalar unit, and added “less than 10 specific instructions” for Cat-NB1 processing. The CEVA-X1 delivers a CoreMark/MHz of 3.3, nearly equal to that of an ARM Cortex-M4 core, while handling the DSP capability and instruction acceleration for a full software modem implementation. Gresset added that unlike bigger LTE SoC implementations that benefit from complete hardware accelerator units, Cat-NB1 performance is improved by targeting particular instructions. In the case of the CEVA-X1, another 30% power savings come from using these dedicated Cat-NB1 instructions.


Cache is optional in the CEVA-X1; Gresset explains that many implementations focus on tightly-coupled memory (TCM) instead. Also, the core interfaces via either AHB or AXI, making it more flexible for integration. The CEVA-X1 can also handle processing for other wireless needs beyond Cat-NB1. For example, it can also perform GNSS processing, which is an interesting use case since unlike mobile phones doing precise mapping while streaming data, a Cat-NB1 device probably wouldn’t need to do positioning and data transmitting simultaneously.

CEVA’s slides indicated RTOS support for this core, and of course that usually means FreeRTOS. Gresset is seeing the same thing I am, however – many requests for Apache Zephyr. That’s a noteworthy trend particularly given the ability to extend the instruction set.

Richard Kingston of CEVA indicates that over 50% of CEVA’s current revenue is coming from China. Both China Mobile and China Unicom are betting heavily on NB-IoT technology. CEVA’s wireless expertise combined with a growing market and what may be some backlash against the ARM acquisition puts them in a good position.

For more on the CEVA-X1, here’s the press release:

CEVA Introduces Lightweight Multi-Purpose Processor for the Massive Internet of Things

The bigger question is whether NB-IoT wins out over the LPWAN solutions. I can see niche cases where those LPWANs, particularly LoRa and Ingenu, may hold up. As a wise engineer once told me, “Never bet against Ethernet”, simply because as new versions of the specification appear, advantages of the alternatives are nullified. I see NB-IoT fitting the same pattern; if all the major carriers support it and get the business model right, evolution will eventually win on the larger playing field.


Climbing the dimensions (part 2)
by Claudio Avi Chami on 10-10-2016 at 12:00 pm

In the first part of this article we tried to present a way to capture the essence of the tesseract. We did that by “climbing” the dimensions from the point (no dimensions), through the segment (1-D), square (2-D), cube (3-D) and finally tesseract (4-D).
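The climb can also be made concrete in code. In the sketch below, an n-cube’s vertices are the n-bit strings, and an edge joins two vertices that differ in exactly one coordinate:

```python
# Generate the vertices and edges of an n-dimensional cube.
from itertools import product

def n_cube(n):
    vertices = list(product((0, 1), repeat=n))
    edges = [(u, v) for i, u in enumerate(vertices)
                    for v in vertices[i+1:]
                    if sum(a != b for a, b in zip(u, v)) == 1]
    return vertices, edges

for n, name in enumerate(["point", "segment", "square", "cube", "tesseract"]):
    v, e = n_cube(n)
    print(f"{n}-D {name}: {len(v)} vertices, {len(e)} edges")
# The 4-D tesseract has 16 vertices, 32 edges and 8 cubic cells.
```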

In the following figures we present other attempts at visualizing what we can barely imagine, a 4-D solid.


Figure 2-1 – Another representation of the tesseract

Figure 2-1 represents the tesseract using segments that are all of the same length. Figure 2-2 shows another representation: in this one, two cubes (one blue, one pink) are shown with their connections in the fourth dimension. The image can be explored to see that all eight cubes forming the tesseract are there. The cubes appear deformed because of the perspective used.


Figure 2-2 – Yet another representation of the tesseract


Unfolding the tesseract

Let’s go back to our 3-D world for a minute. A cube, as we know, has six square sides. Six squares, connected in certain ways, can be folded to form a cube. Most of us have done something like this in elementary school. There are several ways in which six squares can be connected so as to form a cube when folded. One of them is shown in figure 2-3.


Figure 2-3 – An unfolded cube


An unfolded 3-D cube is a 2-D image. Hence, an unfolded 4-D tesseract is a 3-D body. One of the possible ways to unfold the tesseract is shown in figure 2-4. If we could fold the tesseract (in the fourth dimension), the faces marked with the same letters in the figure would come into contact.


Figure 2-4 – An unfolded tesseract

The unfolded tesseract appears in a famous painting by Dalí of the crucifixion of Christ.

According to Wikipedia: “Just as the concept of God exists in a space that is incomprehensible to humans, the hypercube exists in four spatial dimensions, which is equally inaccessible to the mind.”

Figure 2-5 – Tesseract in art – Dali’s “Corpus hypercubus”

To end this two-part series, I invite you to watch a video of the world-famous Carl Sagan speaking about some hypothetical interactions of 3-D beings with 2-D beings:

Also read: Climbing the dimensions (part 1)

My blog: FPGA Site