
ISO 9001:2015 – Not Just for the Big Guys
by Tom Simon on 12-02-2016 at 7:00 am

If you are like me, you remember the banners that large companies put up years ago when they achieved ISO 9001 compliance. It seemed at the time that this was something only for large companies. Since its introduction in 1987, ISO 9001 has evolved as a standard and has become an achievement within reach of more than just large manufacturing companies. The standard was updated in 1994, 2000, 2008 and most recently in 2015. The latest version is approximately 30 pages long, but it differs from the previous version in several important ways.
First off – some of the basics. W. Edwards Deming is famous for saying that if something is not measured, it cannot be improved. The ancillary observation is that once something is measured, it will improve. ISO 9001 aims to let an organization of any size determine what efforts are necessary to improve quality, then undertake and refine those efforts over time to continuously increase quality. Another of Deming’s ideas is Plan-Do-Check-Act (PDCA), a circular way to carry out processes. By embracing PDCA, a mechanism for continuous improvement is built into ISO 9001.
Companies need to define and perform processes to meet this objective. It is worth noting that a process is not a procedure. A procedure is a step-by-step way of doing something; as such, it tends to be a static list of steps for a given task. A process is established by defining the inputs and desired results. The actual method is left open, on the assumption that those performing the process possess the necessary expertise. ISO 9001 relies on processes, not procedures, precisely because processes can evolve and improve.

ISO 9001 starts with the “context of the organization”, which is essentially a thorough way of identifying all the things that affect quality and that are affected by quality. The categories these are drawn from are extensive. There is an external context and an internal context, including suppliers, contractors, regulatory agencies, customers, competitors, management, employees, etc. Context can include social, technological, environmental, ethical, political, legal, and economic factors. The ISO 9001 process entails documenting these and then exploring the relationships with all of them. This can lead to open-ended thinking about how to improve customer satisfaction, increase business, improve products, build marketplace goodwill, or positively affect other tangible or intangible success metrics.

One of the biggest changes in ISO 9001:2015 is the shift to “risk based thinking” from the narrower “preventive action” of earlier revisions. One counterintuitive notion in ISO 9001:2015 is that “risk” can include the possibility of a positive or beneficial outcome. Risk is defined broadly as an effect of uncertainty, and any source of uncertainty, internal or external, can affect the quality of a result. Companies adopting ISO 9001:2015 need to thoroughly think through and document all sources of risk. Various actions can then be taken to manage that risk. With the broader net that risk based thinking encourages, organizations may identify risks proactively, instead of waiting for problems to manifest before taking preventive action.

ISO 9001:2015 puts increased emphasis on engagement with the highest levels of management. If a quality management system (QMS) is an afterthought carried out with no high-level sponsorship, it will not have the aegis or resources to be effective. ISO 9001:2015 requires up front involvement with the organization’s leadership. If this is baked in at the outset, the likelihood of success is dramatically improved.

Companies have wide latitude in how they initially document their quality management system and how they use it in practice. However, certification is becoming increasingly important. ISO does not perform certification itself; it publishes the standard, which is drafted by a technical committee known as TC 176. Accredited third parties perform certification, and an organization seeking certification would engage one of them.


As I noted at the start of this article, it’s not just the big car makers or telecom companies that are taking advantage of ISO 9001:2015; a number of smaller companies are seeing the advantages for themselves and their customers in adopting the standard. In EDA and IP there is an increasing number of these. Nowhere is this more important than for IP that relates to security and storage of vital information. One such company that has adopted ISO 9001:2015 is Sidense, a supplier of IP for one-time programmable non-volatile memory.

It’s easy to see why IP consumers would want assurance that the IP they are using in their designs is designed, tested and supported with processes that focus on continuous improvement. To see more about how an ISO certified supplier operates, take a look at the Sidense website.

More articles by Tom Simon


IoT Security – Part 1 of 3: IoT Security Architecture on the Device and Communication Layers
by Padraig Scully on 12-01-2016 at 4:00 pm

The massive scale of recent DDoS attacks on Dyn’s servers that brought down many popular online services in the US gives us just a glimpse of what is possible when attackers are able to leverage up to 150,000 insecure IoT devices as malicious endpoints.

To address the growing fear and uncertainty surrounding IoT security architecture, our IoT security research practice teamed up with the IoT security company Ardexa to help companies implementing IoT double-check that their solutions are built securely.

Part 1 of this 3-part security-focused blog series presents an introduction to the overall IoT security architecture and highlights six key principles as explained by George Cora, CEO of Ardexa.


Developing secure end-to-end IoT solutions involves multiple levels that fuse together important IoT security architecture features across four different layers: Device, Communications, Cloud, and Lifecycle Management.

A. Secure Device Layer
The device layer refers to the hardware level of the IoT solution, i.e., the physical “thing” or product. ODMs and OEMs (who design and produce devices) are increasingly integrating more security features into both their hardware and the software running on the device to enhance the level of security at the device layer.
Important IoT security architecture features:

  • Some manufacturers are introducing chip security in the form of TPMs (Trusted Platform Modules) that act as a root of trust by protecting sensitive information and credentials (i.e., not releasing encryption keys outside the chip).
  • Secure booting can be used to ensure only verified software will run on the device (a minimal sketch of the underlying signature check follows this list).
  • Even physical security protection (e.g., full metal shield covering all internal circuitry) can be employed to guard against tampering if an intruder gains physical access to the device.
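
As a concrete illustration of the secure-boot bullet above, here is a minimal sketch of the signature check at the heart of the idea, assuming an RSA-signed firmware image and the Python `cryptography` package; the file names and key provisioning are illustrative, not a real boot-ROM API.

```python
# Sketch: verify a firmware image's detached RSA signature against a
# public key that would normally be baked into the device's root of trust.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_firmware(image_path, sig_path, pubkey_path):
    """Return True only if the image matches its detached signature."""
    with open(pubkey_path, "rb") as f:
        pub = serialization.load_pem_public_key(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        sig = f.read()
    try:
        pub.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
        return True               # boot continues only on success
    except InvalidSignature:
        return False              # refuse to run unverified code

if __name__ == "__main__":
    if not verify_firmware("firmware.bin", "firmware.sig", "vendor_pub.pem"):
        raise SystemExit("firmware rejected: signature check failed")
```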

While these “hard identities” or “physical protection barriers” may be valuable in specific situations, it is the proposed data movements and ability of the device to handle complex security tasks that will determine the level of risk. Edge processing and complex security functions within a device are important principles to get right from the start.

IoT Security Architecture Principles on the Device Layer:

1. Device “intelligence” is required for complex security tasks

  • Many devices, appliances, tools, toys or gadgets available today have the ability to ‘talk’ to a service, cloud or server via Ethernet or wi-fi. But many of these ‘devices’ are powered by nothing more than a basic microprocessor. These devices are ill-equipped to handle the complexities of Internet connectivity, and should not be used for front-line duty in IoT applications.
  • Effective and secure connectivity must be powered by a “smart” device able to handle security, encryption, authentication, timestamps, caching, proxies, firewalls, connection loss, etc. Devices must be robust and able to operate in the field with limited support.

2. The security advantage of processing at the edge

  • Having smart devices is about giving your device the power to evolve, making it more powerful/useful/helpful over time. For example, machine learning algorithms can now enable these small devices to process video streams in ways which were not foreseeable (or computationally possible) a few years ago. Edge processing means that these smart devices can process data locally before it is sent to the cloud, eliminating the need to forward huge volumes of video to the cloud.
  • Can this be used for enhanced security? Absolutely. It means that sensitive information (usually in bulk) need not be sent to the cloud. Furthermore, it means that sending processed data, packaged into discrete messages, securely to various entities is now possible (see the sketch after this section). Thoughtful use of the processing power at the device layer helps strengthen the overall network.

Insights from George Cora, CEO at Ardexa
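
To make the edge-processing principle concrete, here is a minimal pure-Python sketch of the pattern described above: sample a high-rate local sensor, distill the window into a compact summary, and only ship that summary upstream. The sensor source and alarm threshold are illustrative stand-ins.

```python
# Sketch: distill a high-rate local sensor stream into a compact summary
# before anything leaves the device (the essence of edge processing).
import json, random, statistics, time

def read_sensor():
    """Illustrative stand-in for a high-rate local sensor."""
    return random.gauss(20.0, 2.0)

def summarize(window_s=10.0, hz=100):
    """Sample locally at `hz` for `window_s`, emit a tiny JSON summary."""
    samples = [read_sensor() for _ in range(int(window_s * hz))]
    return json.dumps({
        "t": time.time(),
        "mean": round(statistics.mean(samples), 3),
        "max": round(max(samples), 3),
        "alarm": max(samples) > 30.0,   # decision made locally, near the data
    })

# A few hundred bytes go to the cloud instead of 1,000 raw data points.
print(summarize())
```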

B. Secure Communications Layer
The communication layer refers to the connectivity networks of the IoT solution, i.e., the mediums over which data is securely transmitted and received. Whether sensitive data is in transit over the physical layer (e.g., WiFi, 802.15.4 or Ethernet), networking layer (e.g., IPv6, Modbus or OPC-UA), or application layer (e.g., MQTT, CoAP or web-sockets), insecure communication channels can be susceptible to intrusions such as man-in-the-middle attacks.
Important IoT security architecture features:

  • Data-centric security solutions ensure data is safely encrypted while in transit (and at rest) so that even if intercepted, it is meaningless except to users (i.e., a person, device, system, or application) who hold the right encryption key (a minimal transit-encryption sketch follows this list).
  • Firewalls and intrusion prevention systems, designed to examine specific traffic flows (e.g., non-IT protocols) terminating at the device, are also increasingly being used to detect unwanted intrusions and prevent malicious activities on the communication layer.
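
As a small illustration of the first bullet, the sketch below opens a TLS connection using only Python’s standard library, so the payload is encrypted in transit and the server’s certificate is verified before anything is sent; the host name and payload are illustrative.

```python
# Sketch: data encrypted in transit with certificate verification,
# using only the Python standard library.
import socket, ssl

ctx = ssl.create_default_context()       # verifies the cert chain + hostname
with socket.create_connection(("iot.example.com", 8883)) as raw:
    with ctx.wrap_socket(raw, server_hostname="iot.example.com") as tls:
        tls.sendall(b'{"temp": 21.5}')   # only ciphertext appears on the wire
```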

IoT Security Architecture Principles on the Communication Layer:

3. Initiate a connection to the cloud, not the reverse

  • The moment a firewall port is opened to a network, you literally and metaphorically open your network up to significant security risks. Opening a firewall port is only really required to allow someone or something to connect to a service. Yet field devices are not likely to be supported to the same degree as hosted applications such as web/email or voice/video servers. They will not have an administrator doing the patching, reconfiguring, testing and monitoring that normally applies to a cloud service.
  • For this reason, it is usually a bad idea to allow a connection from the Internet to the device. The device must initiate the connection to the cloud and must not allow incoming connections. A connection to the cloud can still provide a bi-directional channel, allowing the IoT device to be remotely controlled, and in most cases this is required (see the sketch after this list).
  • Closely related to this principle is the use of Virtual Private Networks (VPNs), to access an IoT device. However, VPNs can be just as dangerous as allowing incoming services, since they allow an individual, or a network, access to resources inside one’s own network. The scale of the security task has now grown significantly, and often beyond reasonable control. Again, VPNs have a role to play but in very specific circumstances.
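
The sketch referenced in the second bullet: a device that dials out to a broker over TLS and receives commands over that same outbound channel, so no inbound port is ever opened. It is written against the paho-mqtt 1.x client API; the broker name and topic are illustrative.

```python
# Sketch: the device initiates the connection; commands arrive over the
# same outbound, bi-directional channel. No listening port on the device.
import paho.mqtt.client as mqtt  # paho-mqtt 1.x API

def on_message(client, userdata, msg):
    # Remote control without ever accepting an inbound connection.
    print("command received:", msg.topic, msg.payload)

client = mqtt.Client()
client.tls_set()                              # verify the broker certificate
client.on_message = on_message
client.connect("broker.example.com", 8883)    # outbound connection only
client.subscribe("devices/dev42/commands")
client.loop_forever()
```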

4. The inherent security of a message

  • Communications with the IoT device (regardless of whether they are to or from the device) should be treated with care. Lightweight message-based protocols have a number of distinct advantages that make them a good choice for IoT devices, including options for double encryption, queuing, filtering and even sharing with third parties.
  • With correct labeling, each message can be handled according to the appropriate security policy. For example, one may restrict access to messages that allow ‘remote control’ functions, allow ‘file transfers’ in only one direction, or (say) double encrypt all messages carrying client data to protect it when it traverses a message switch (a small policy sketch follows below). With such an infrastructure, it becomes possible to control message flow to the desired destination(s). Messaging, with its related access control and security benefits, is a very powerful tool on the communication layer of the IoT.

Insights from George Cora, CEO at Ardexa
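
A minimal sketch of the message-labeling idea above: each message carries a label, and a switch applies a per-label policy (direction restrictions, extra encryption) before forwarding. The labels and policy table are illustrative.

```python
# Sketch: per-label message policy at a message switch.
POLICY = {
    "telemetry":   {"directions": {"up"},          "double_encrypt": False},
    "file_xfer":   {"directions": {"up"},          "double_encrypt": False},
    "remote_ctrl": {"directions": {"down"},        "double_encrypt": True},
    "client_data": {"directions": {"up", "down"},  "double_encrypt": True},
}

def admit(message):
    """Drop any message whose label or direction violates policy."""
    rule = POLICY.get(message.get("label"))
    return rule is not None and message.get("direction") in rule["directions"]

assert admit({"label": "telemetry", "direction": "up"})
assert not admit({"label": "file_xfer", "direction": "down"})  # one-way only
assert not admit({"label": "unknown", "direction": "up"})      # unlabeled: drop
```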
To combat the inherent challenges of securing the IoT, sticking to these key principles at both the Device and Communications layers will help reduce future headaches, particularly the headache of trying to compensate later for poor underlying design fundamentals and an inadequate IoT security architecture.
Stay tuned for Part 2 of our IoT security blog series in December where we take a closer look at Secure Cloud and Secure Lifecycle Management.

Find out more details about the 4 levels of IoT security architecture and how the overall IoT security market is shaping up in our upcoming IoT Security Market Report (to be released in the coming months) – contact us now to gain exclusive access.


Car Connectivity Lost
by Roger C. Lanctot on 12-01-2016 at 12:00 pm

The frustration in the room was palpable yesterday at the annual Vehicle Connectivity Workshop gathering of the Telecommunications Industry Association (TIA). The prospect of dedicated short range communication (DSRC) technology achieving its long-sought mandate to connect cars and infrastructure hung tantalizingly over the crowd like mistletoe. Meanwhile speakers touted the merits of competing and complementary wireless connections – with an emphasis on 5G cellular.

Unspoken for much of the meeting, though, was the major disconnect between the wireless, intelligent transportation (ITS), information technology and automotive industries – to say nothing of disconnected insurance and regulatory interests. All of these parties were well represented at the gathering but, as always, they talked past one another and a chance to achieve true alignment was missed once again.

I’ll state the underlying conflict in simple terms. Wireless carriers don’t understand car companies and car companies can’t stand wireless carriers.

It comes down to motivations. Everyone is motivated by profit. Car companies are primarily interested in extracting revenue from wireless connections in cars. Wireless carriers are primarily interested in extracting revenue from car companies and their customers. These objectives are in conflict, though they do not have to be.

Meanwhile, the ITS community is interested in using wireless technology to improve vehicle throughput at toll plazas and high-occupancy-vehicle lanes. Regulators and insurance companies are interested in reducing rising highway fatality levels. Startups and Silicon Valley types are interested in disruption.

The prospect of enabling connections between cars, regardless of the technology to be used, means car companies will have to cooperate with one another on application development for the first time. It is not at all clear how that will work with DSRC technology. Using cellular technology will introduce wireless carriers as a unifying force – a role carriers will increasingly play as the world of interconnected things continues to emerge.

This new post-IoT world will also elevate the role of IT companies – Oracle, IBM, HP, Microsoft and Amazon. Not all of these voices were represented at the TIA gathering.

I don’t hold TIA responsible for the failure to calm the waters and part the clouds hanging over vehicle connectivity. I blame the car makers.

The onset of 5G connectivity in cars and the reality of LTE connectivity already in cars has made connectivity a responsibility that cuts across carlines. Some car companies treat connectivity as a crash response priority, others treat it as a source of monetizable Wi-Fi connections, and still others want to use vehicle connectivity for promotional and marketing opportunities.

At the forefront of the conversation yesterday was the prospect of 5G connections enabling enhanced security, over-the-air software updates and automated driving. All of these use cases will transform the industry and they suggest that the wireless connection in the car will soon be touching all departments from powertrain to safety to infotainment, marketing, customer relationship management, finance and dealer relations.

But even sorting all of that out leaves vehicle-to-infrastructure and vehicle-to-pedestrian connections unaddressed. The companies building the roads and bridges are at a loss in trying to gain traction with the correct auto industry execs. This holds true for smart city initiatives and mobility as a service strategies. No one knows who to talk to and there does not appear to be a single executive at any car maker with comprehensive responsibility for defining the connectivity vision.

European car makers have made the greatest strides here by nominating executives with such titles as “chief digitalization officer” or the like. Talk with one of these executives and he or she will run down a Trump-sized list of teams and initiatives for which they are responsible. It’s too much for a single man or woman – and profit motives still sadly take precedence over safety.

It all comes down to infrastructure. I participated in the event on a panel with Christopher Bluemle of Crown Castle. Chris’ selection to participate in the event was a clever one, not only because his company is responsible for the tower and fiber optic infrastructure enabling the connections we are so keen to create, but also because his background is finance. Unfortunately, the financing of the backbone of vehicle connectivity is likely to come from all of our wireless bills – which will probably be less than if it came out of our taxes.

There was no consensus in the room regarding DSRC vs. 5G yesterday. This debate will rage on. But the bigger question is whether wireless carriers, car companies, ITS infrastructure companies and IT providers can find sufficient common ground to craft an effective strategy for guiding automated vehicle development.

This is not about taking sides. It’s about assigning responsibility appropriately with the goal of saving lives. The next move is up to the car makers. This was the one constituency insufficiently represented in the presentations and panel discussions. The one dominant auto-maker voice came from Dr. Gary Smyth, executive director Global R&D Laboratories of General Motors.

Smyth is a fierce but friendly advocate of DSRC. He firmly planted the DSRC flag as the focal point of the TIA gathering. The event concluded with word of President-elect Trump’s appointment of a new Department of Transportation Secretary, Elaine Chao, but no word regarding DSRC rule making.

Rule making and regulators won’t solve the various connected vehicle disconnections. Only inter-industry and intra-industry cooperation can solve this problem. Perhaps the best news emerging from the TIA event was the creation of the 5GAA (5G Automotive Association) which is bringing together car makers, carriers, and infrastructure partners. AT&T, Verizon, Ford, Denso and others are expected to soon join Audi, Daimler, BMW, Qualcomm, Ericsson, Huawei, and Vodafone already involved. Perhaps the clouds will yet part for the world of connected cars. Watch this space.


The Failure of IoT Platforms
by Glen Allmendinger on 12-01-2016 at 7:00 am


Creative Evolution and the Post Platform Era
When telephones first came into existence, all calls were routed through switchboards and had to be connected by a live operator. It was long ago forecast that if telephone traffic continued to grow in this way, soon everybody in the world would have to be a switchboard operator. Of course that did not happen because automation was built into the systems to handle common tasks like connecting calls.

We are quickly approaching analogous circumstances in the IoT arena with the proliferation of connected devices. The tools we are working with today to make products “smart” were not designed to handle the diversity of devices, the scope of interactions and the massive volume of data-points generated from devices. Each new device requires too much customization and maintenance just to perform the same basic tasks. These challenges are diluting the ability of organizations to efficiently and effectively manage development.

Today, platforms for the Internet of Things are still a kludgy collection of yesterday’s technology and architectures that do not address the most basic development challenges. Even though many companies are telling fantastic IoT marketing stories about what their solutions can do, you wouldn’t know it from today’s fragmented collection of incomplete platforms, narrow point-solutions, and software incompatibility.


We need better software to empower users and developers to exploit the vast potential of the Internet of Things.

Download Our Storyboard on IoT Frameworks

We’re Having a Crisis of Perception About “Future Computing”
In times of radical change, crises of perception are often the cause of significant failures, particularly in large established companies. Such failures result from the inability to see emergent discontinuities. We believe this is the case with most large developers and suppliers of technology attempting to address the emerging Internet of Things opportunity. Many players’ assumptions about future architecture for Smart Systems are being shaped by the past and are being extrapolated into the future in a linear fashion. Most of the large established IT equipment, software and network players appear to be stuck in this tyranny of replication.

Today the world of smart communicating devices is mostly organized in hierarchies with smart user interface devices at the top and the dumb devices [often analog or serial sensors and actuators] at the bottom. Within this structure, there are typically various types of “middle box” supervisory and gateway devices forming a point of connectivity and control for the sensors and actuators as well as the infrastructure for the network. From our perspective, this description of today’s IoT systems architecture looks very familiar and is largely organized like client-server computer systems – no surprise, given they were designed in the 1990s.

As the Internet of Things opportunity matures, the sensor and actuator devices will all become smart themselves, and the connectivity between them (devices, for the most part, that have never been connected) will become more intelligent and the interactions more complex. As the number of smart devices grows, the existing client-server hierarchy and the related “middle boxes” acting as hubs, gateways, controllers and interfaces will quickly start to blur. In this future state, the need for any kind of traditional client-server architecture will become superfluous. In a future Smart Systems world, the days of hierarchical models are numbered.

We can now begin to imagine an application environment where there will be widely diverse operational technology (OT) computing devices running applications dispersed across sensors, actuators and other intelligent devices sharing and leveraging the compute power of a whole ‘herd’ – a smart building application, for example, where the processor in an occupancy sensor is used to turn the lights on, change the heating or cooling profile or alert security.

In this evolving architecture, the network essentially flattens until the end-point devices are merely peers and a variety of applications reside on one or more [OT] computing devices. In a smart systems application designed to capture, log and analyze large volumes of data from sensors, such as we are describing here, peer computing devices will carry out the process of taking raw data and distilling it into information “locally.” Local processing is required to reduce the otherwise untenable Internet traffic challenges that arise from connecting billions of devices.

This is the move we’ve been waiting for: a move to a truly distributed architecture, because today’s systems will not be able to scale and interact effectively when there are billions of nodes involved. The notion that all these “things” and devices will produce streaming data that has to be processed in some cloud will simply not work. It makes more sense structurally and economically to execute these interactions in a more distributed architecture near the sensors and actuators, where the application context prevails.

Dispersed computing devices will become unified application platforms from which to provide services to devices and users – where the applications run, where the data is turned into information, where storage takes place, and where the browsing of information ultimately takes place too – not in some server farm in a cloud data center. Even the mobile handsets we admire so much today are but a tiny class of user interface and communications devices in an Internet of Things world where there will be 100 times more “things” than humans.

From our view the movement towards peer-to-peer, and the view that many people hold that this is somehow novel, is ironic given that the Internet was originally designed for peer-to-peer interactions. We seem to be heading “back to the future.”

Today’s IoT Platforms Don’t Liberate Data; They Trap It
In the course of the last two decades, the world has become so dependent upon the existing ways computing is organized that most people, inside IT and out, cannot bring themselves to think about it with any critical detachment. Even in sophisticated discussions, today’s key enabling information technologies are usually viewed as utterly inevitable and unquestionable.

The client-server model underlying today’s computing systems greatly compounds the problem. Regardless of data-structure, information in today’s computing systems is machine-centric because its life is tied to the life of a physical machine and can easily become extinct. With today’s IoT platforms information is not free (and that’s free as in “freedom,” not free as in “free of charge”). In fact, thanks to today’s platforms and information architectures, it’s not free to easily merge with other information and enable any kind of systemic intelligence.

All of this adds up to a huge collection of information-islands whether on your servers, your service provider’s servers or anywhere else. Assuming the islands remain in existence reliably, they are still fundamentally incapable of truly interoperating with other information-islands. This is the issue with all of the so-called IoT platforms that have flooded the market – they are really “data traps” and information islands. We can create bridges between them, but islands they remain, because that’s what they were designed to be.

What would truly liberated information be like? It might help to think of the atoms and molecules of the physical world. They have distinct identities, of course, but they are also capable of bonding with other atoms and molecules to create entirely different kinds of matter. Often this bonding requires special circumstances, such as extreme heat or pressure, but not always. In the world of information technology, such bonding is not all that easy.

Smart Systems require that we fundamentally change this paradigm, treating data from things, people, systems and the physical world as “neutral” representations – in other words, treating diverse data types equally. But even this makes too many assumptions about what the Smart Systems phenomenon will be. Encoded information in physical objects is also smart computing – even without intrinsic computing ability, or, for that matter, without being electronic at all. Seen in this way, a printed bar code, a DVD, a tag, a disc, a house key, or even the pages of a book can have the status of an “information device” on a network.

Today’s holdover client server architectures are just making matters worse. With each additional layer of engineering and administration, computing systems come closer and closer to resembling a fantastically jury-rigged Rube Goldberg contraption.

The reason for all of this is simple. Today’s computing systems were not really designed for a world driven by pervasive information flow and are falling far short of enabling adaptable real-time intelligence.

Creative Evolution Will Force a “Post-Platform” World
Machine learning, artificial intelligence and the Internet of Things are all in some way trying to break from today’s computing paradigms to enable intelligent real-world [physical] systems. As these devices and systems become more and more intelligent, the data they produce will become like neurons of the brain, or ants in an anthill, or human beings in a society, as well as information devices connected to each other. The many “nodes” of a network may not be very “smart” in themselves, but if they are networked in a way that allows them to connect effortlessly and interoperate seamlessly, they begin to give rise to complex, system-wide behavior that usually goes by the name “emergence.” That is, an entirely new order of intelligence “emerges” from the system as a whole—an intelligence that could not have been predicted by looking at any of the nodes individually. There’s a distinct magic to emergence, but it happens only if the network’s nodes are free to share information and processing power.

Today’s platforms for Smart Systems and the IoT should be taking on the toughest challenges of interoperability, information architecture and user complexity. But they’re not.

We need to creatively evolve to an entirely new approach that avoids the confinements and limitations of today’s differing platforms. We need to quickly move to a “post platform” world where there is a truly open data and information architecture that can easily integrate diverse machines, data, information systems and people – a world where smarter systems will smoothly interact to create systemic intelligence – a world where there are no artificial barriers between different types of information.

In our years of experience, we have all too often seen the unfortunate scenarios that managers create when uncertainty and complexity force them to rely on selective attention. When this happens, selective attention naturally gravitates toward what’s readily available: past experience, existing tools and uncertain assumptions. Today’s IT and telco infrastructure players are doing just this. By ignoring important trends simply because it’s difficult to perceive an alternative future, these managers are leaving the door open for competition that will lead to their eventual obsolescence – which will make for a very interesting world to live in.

For more information on how to overcome these obstacles in the market email us


OpenCL hits FPGA-based prototyping modules
by Don Dingee on 11-30-2016 at 4:00 pm

OpenCL brings algorithm development into a unified programming model regardless of the core, working across CPUs, GPUs, DSPs, and even FPGAs. Intel has been pushing OpenCL programming for some time, particularly at the high end with “Knights Landing” processors. Where other vendors are focused on straight-up C high-level synthesis for FPGAs, Intel is taking Altera technology deeper into OpenCL.

Using OpenCL, a developer can write an algorithm once, emulate it on a PC, then choose what hardware to run it on – or partition it across several different types of hardware depending on cost and packaging. Intel’s FPGA SDK for OpenCL helps abstract out FPGA complexity for hardware acceleration. Their compiler can perform over 300 optimizations, then synthesize the FPGA in a single step.

Several different hosts are supported, including ARM Cortex-A9 cores typical of SoCs, IBM POWER series processors, and x86 CPUs. The solution can be scaled across multiple FPGAs, which makes it ideal for the FPGA-based prototyping scenario. Instead of taking overt partitioning steps and spreading RTL across several FPGAs, OpenCL code distributes seamlessly across FPGA devices. This is a huge advantage for HPC teams who want to concentrate on software, not hardware, and especially not the nuances of FPGA programming.
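
To show the write-once flavor of the flow, here is a minimal OpenCL vector-add through the pyopencl bindings; the same kernel source runs on whichever device the context selects (CPU, GPU, or, with a vendor toolchain such as Intel’s, an FPGA). This is a generic OpenCL sketch, not the Intel FPGA SDK’s offline-compilation flow.

```python
# Minimal OpenCL vector add via pyopencl: one kernel source, any device.
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()             # picks an available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, KERNEL).build()
prog.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)

c = np.empty_like(a)
cl.enqueue_copy(queue, c, c_buf)
assert np.allclose(c, a + b)               # same result on any device
```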


Screenshot from video: https://www.altera.com/content/dam/altera-www/global/en_US/video/opencl-overview-tutorial.mp4

In fact, OpenCL could be one of the key differentiators between the Intel/Altera combination and the ARM/Xilinx ecosystem. There are OpenCL ports on lots of platforms since it is an open standard, but Intel has gone all in on optimization for OpenCL across the board including its FPGA offering. Combining the benefits of an OpenCL development flow with the power of an Altera Arria 10 FPGA brings a bunch of algorithm acceleration possibilities.

S2C has solved the problem of how to get many FPGAs interconnected in a single prototyping platform. With their new Arria 10 Prodigy FPGA Prototyping Logic Module, users can have anywhere from a single Arria 10 1150GX FPGA to a scaled-up system with 16 FPGAs in the Cloud Cube chassis. As the name implies, a single Arria 10 logic module has 1150K logic elements along with a full suite of programmable I/O including 48 transceivers running at up to 16Gbps, and 576 high performance I/Os. It’s an incredible leap, not only for those interested in working in the Intel/Altera environment, but for those working on OpenCL.

You can read more about the Intel FPGA SDK for OpenCL, and download a copy, here:
Intel FPGA SDK for OpenCL

For more information on the S2C Arria 10 Prodigy FPGA Prototyping Logic Module, here’s the full S2C press release:

S2C Expands Its FPGA Prototyping Library with Arria 10 Solution and Delivers Its Powerful Technology to the High Performance Computing Market

The thing about FPGA-based prototyping is it is becoming less about the FPGA and more about the software running on the platform. While the entire S2C prototyping portfolio including expansion daughter cards, configuration, and debug capability comes to bear, the real news here is how OpenCL speeds up the software development process. The shape of HPC is changing from big, expensive iron to reconfigurable, accelerated computing with FPGAs underneath the hood.


IOT – Think Big – Start Small – Scale Quickly
by Bill McCabe on 11-30-2016 at 12:00 pm

There’s an old philosophy that business coaches often use. It’s the saying that you think big, start small, and then scale quickly. If you follow it closely, you have the potential to make a lasting impression in an industry and achieve actual results in the process.

Let’s look at this in terms of the internet of things to see how it pertains to this industry. The first thing we do is think big. This means thinking about the transformation in the industry and how it will impact not only you, but others. With this, you’ll know what technology you need to be successful, and you’ll have the building blocks in place so that others can come to you when they need your technology in order to operate more effectively.

Now that you understand the big picture, you can start small. Begin to work a process into the latest trends. Identify any weaknesses the competition has, and work to design processes that help combat these weaknesses. Consider adjusting the structure and then release products that address these concerns. You can begin to gain attention as you do this, and others will follow suit. Chances are, other technology companies will be willing to work with you to address their own internal concerns.

It’s at this point that you scale quickly. You begin to roll out solutions quickly, release prototypes, and aggressively work to be the leader in the industry. The goal at this time is to show you are on the cutting edge and to push the process further and harder. As you do this, keep looking at the future, especially since you know the direction trends are taking, and keep building from this. Even though you did start small, you have cornered a section of the market at its head, so people will keep looking to you to determine the future of things.

The thing to remember is that as long as you are innovative and follow through with the process, there is no reason why you cannot succeed. Mobile technology has used this approach for years, and it continues to propel the smartphone industry. With more devices headed toward total connectivity, it will pay to be the company that decides to start small and scale quickly, and unleashes the new popular trends that will propel the internet of things into the future.

Please check out our new website for more information www.internetofthingsrecruiting.com


Car Companies Should Steer Clear of Uber’s Red Ocean
by Roger C. Lanctot on 11-30-2016 at 7:00 am

It is no secret that Uber drivers struggle to make a living driving for Uber. The most popular guidance for Uber drivers is to use the service to supplement existing income, not as a full-time job. But Uber is transforming transportation with billions of dollars of investment, billions of dollars in fares and billions of dollars of self-driving car research. Car companies wanting to participate in this transformation should beware.

Most users of Uber have had the delightful experience of chatting with Uber drivers, discovering in the process that they are usually non-professional drivers who have turned to Uber as a result of mid-career pivots or unanticipated unemployment or underemployment. The average Uber driver I have encountered around the world has usually been on the job for well under a year.

For many of these drivers taking fares from place to place is still a somewhat new and novel experience and likely one that they don’t anticipate making a permanent career choice. Also, in chatting with these drivers, you almost invariably hear the same assessment of Uber as an oppressive master constantly manipulating driver compensation and fares to manage supply and demand.

Of course, Uber’s not-so-invisible hand is often perceived as unfair and driver compensation insufficient. That insufficiency may be less than obvious, though, to an Uber driver who has not figured in the costs associated with maintaining his or her vehicle. One of the most positive aspects of the Uber experience, as a passenger, is the generally nearly-new condition of most Uber cars.

It’s somewhat sad and not unusual to see a nearly-new Toyota or Audi or Tesla racking up mileage at the rate of 25K or more within a 3-4 month period. It’s true that maintenance intervals for new cars are getting longer and longer – stressing out new car dealers that depend on service revenue – but the usage rates of ride hailing drivers using their own cars are a big red flag and usually the final straw that breaks the back of the business model.

This is just one of many reasons behind the high rate of churn of Uber drivers. It is also the reason behind Uber’s widening efforts to put drivers who don’t even own cars into leased or rented vehicles. Of course, this strategy simply substitutes the additional cost of the lease or rental for the burdensome cost of vehicle maintenance – all to overcome the diminishing pool of available or interested drivers.

Of course, in some markets, where the demand is high, there are too many drivers. This, too, contributes to the churn challenge and the inability to make a living.

But my real motivation for writing this brief note is the fact that Uber is using its drivers to help build the infrastructure that will put these drivers out of work entirely. Uber is putting a sizable chunk of its investor capital into developing driverless cars – and the data gathered from its test mules and existing vehicles will be used as part of that development effort.

In effect, Uber is using the independent weavers to build the looms that will put them out of business – to make a slightly tortured analogy to the 19th century Luddite movement in Northern England.

A car maker I was chatting with yesterday described the Uber and Lyft ride-hailing sector as a Red Ocean, as part of explaining why his company was not planning to subsidize Uber or Lyft leases, or invest in either of these companies. A red ocean is a crowded market where everyone is selling a different version of the same thing, versus a blue ocean, which is an uncontested market sector.

I had to chuckle because, unfamiliar with the expression “red ocean,” I simply envisioned a sea of “red ink,” which is in fact what Uber has been producing in its run-up to ad hoc transportation domination. The whole concept of Uber seems intended to undermine both the taxi and rental car industries using financially oppressed drivers offering untenably low fares.

We know how this ends. Existing services are disrupted. Professional people and organizations may be forced out of work or out of business. The insurgent takes over and new forms of discrimination and pricing emerge.

Uber’s reason for being is predicated upon a variety of shortcomings all or some of which may exist in a particular market including: poor taxi availability, high fares, poorly trained or rude drivers, and dirty, old or poorly maintained vehicles. Given the wide disparity in the quality and availability of taxi services around the world, I understand the convenience, economy and efficiency offered by Uber.

At the same time, I depend upon those taxis lined up at airport taxi ranks to be there when I need them. Uber threatens the availability of those taxis at the expense of drivers who have a limited professional stake in delivering me to my destination.

Under these circumstances it is probably best for car companies to steer clear of the Ubers of the world. The auto industry is in transition from a B2B to a B2C business where direct interaction with consumers will increasingly be the rule.

Car companies are naturally allied with the taxi and rental car industries. Collaboration with these incumbents to create more customer friendly offerings makes much more sense to me.

But don’t take it from me. Hail yourself a taxi instead of an Uber today. You may have to download a different app to do it. But chances are you will get a professional driver in a well maintained car who will know your destination without the need of a map and most likely speaks your language.

I truly enjoy chatting with Uber drivers – but I feel the most sympathetic thing I can do for them is to not encourage them. It’s a lousy living and the employer is actively seeking to put them out to pasture.


Expert Interview: Rajeev Madhavan
by Daniel Nenni on 11-29-2016 at 12:00 pm

This blog was originally posted on Paysa.com, but since Rajeev Madhavan is one of our EDA Heroes I thought it was worth a re-post. In case you do not know Rajeev, he started his EDA career at Cadence, then was co-founder and VP of Engineering at LogicVision (acquired by Mentor). Next he was Founder, President, and Chairman of Ambit Design Systems (acquired by Cadence), and finally he was Founder, Chairman and CEO of Magma Design Automation (acquired by Synopsys):

Rajeev Madhavan is a Founder and General Partner at Clear Ventures. With $120 million of capital, Clear is a VC firm that is purpose-built to help startup teams win in business technology and services.

We recently asked Rajeev for his advice to recent graduates in the engineering field on best practices for their career and what the steps to success look like. Here’s what he had to say:

Tell me about your early years and how they contributed to who you are today?
When I was a kid, I absolutely loved reading comic books. I would go through comic books quickly, reading the latest edition and be ready for the next, but my dad was not a fan of comic books. So, I built a lending service on the school bus – I charged a small fee to read my comic books – which enabled me to support my hobby. It was a comic book business with a few friends. We started to run into trouble when it came to collection though, especially with the bigger kids. We solved the problem by getting one of the bigger kids to collect for us, with a deal that he would get to read the books first. However, the school principal caught us and we were suspended from the school bus. It put an end to my business interests for quite a while!

What business lessons did you learn?

This very early business-building experience taught me several business basics. I learned valuable practices like how much money to charge, the delicate art of money collection and, perhaps most importantly, the role of rules and regulations in business. Even the suspension in that story had a critical effect on how I operate in a business setting.

Who has been your biggest influence?

There were three people who contributed to my successes. One was my manager at my first job, Ed Vopni – he challenged me to execute better, work harder and set better goals for myself. Another was Lucio Lanza, who I met when he was an Executive VP at Cadence and a General Partner at USVP. His questions on various ideas that were being proposed to him made me want to think about doing startups. The last person was Jim Solomon, founder of Cadence, who told me on a customer trip to Florida that he should have done the analog business at Cadence (the business group I was in) as a new startup – this made me realize that I was better off doing a startup. Hit with that deep desire, I joined with two others to do my first startup, LogicVision. One year into it, we had a product and our first customer, Apple. During that time, I learned that creating technology is not enough; extracting value is what matters most. I then started Ambit Design Systems, and Ambit was sold to Cadence for $280M. Then I started Magma, which went public on NASDAQ and was eventually acquired in 2012 by Synopsys for $650M.

What motivates you to succeed?

I love to compete. So, the fact that 30 VC firms turned me down when I was looking for funding for Ambit Design was an invitation for me to prove them wrong. Being able to show them I could succeed in spite of this was incredibly rewarding for me.

I also love the opportunity to create a new, exciting product and bring it to life. The mantra has always been that failure is not an option. As an entrepreneur, I believed that I needed to find a way to make it, whether that means needing to pivot, redoing or changing direction.

What career and life advice do you give to new college grads?

Not everyone is built to do a startup. Some may want to work in a large company because it offers stability and less risk. There is nothing wrong with choosing either of these paths. But if you truly want to pursue a startup in the tech space, Silicon Valley is the place to be. Building great relationships is important. Doing a self-assessment of your strengths and weaknesses and building a team that addresses your weaknesses is very important. It takes a village to succeed, and the invaluable, supportive infrastructure here will help pave your way to success if you tap into the right relationships. In the same vein, you should take any opportunity that presents itself to build your network.

How do you define success?

Success means different things for different people.

To me, success meant success for all the people in the village that helped build the company. Success means having happy customers who rely on you to do key pieces of their day to day tasks and ultimately success to me implies changing the status quo.

Ensuring that everyone succeeds with you is something which is important to any good entrepreneur. After all is said and done, ask yourself, “Have I done the right things to make sure that my employees have been rightfully rewarded? Am I helping them to succeed too? Have you built a culture that ensures people can provide their honest, open feedback to improve the company?”

Do you have a mentor and if so who is it?
I have been lucky that in the Silicon Valley Village, I have had great mentors during my startup days. Mark Perry, now a retired General Partner at New Enterprise Associates, has given me several pieces of advice about how to put the right systems in place for my company. He also helped me understand the complex financial and legal sides of business.

Andy Bechtolsheim was also an important mentor for me. I always got a lot of much appreciated advice, which was always great, to the point, and highly relevant.

What do you think of the opportunities out there today for engineers and their salary and career potential?
In the valley, engineers carry tremendous value. It’s important for every engineer to know his or her true worth and that, over time, that worth increases. It’s easy for engineers to move from company to company, and there is a lot of distraction out there. It’s great to have the opportunity, but it’s important to assess all angles of a career move before taking the leap. A bigger project name might be appealing, but the management might not be a culture fit. You should stay focused on what really matters to you in your career. Seeing an opportunity for engineers to understand their true worth prompted us to invest in Paysa.

What is the one piece of advice you’d give to young, entrepreneurial-minded engineers who want to launch a company?
Know your strengths and weaknesses from the get-go, but also know what resources you have at your disposal. Improve and cultivate the appropriate skills you need to succeed. Leverage the experience and knowledge of your team and look for opportunities within your desired startup arena. Building a cohesive unit and having the right co-founders often creates the best outcome.

What are you passionate about outside of your career?
I’m passionate about gardening. I started with 3 or 4 rose bushes at my first house. Now I have 400 varieties of roses and 2,000 bushes at my home. Walking around in the garden has always calmed me and given me peace of mind. So for me, gardening is a pastime that helps balance the immense pressure of the start-up world. That being said, I still haven’t been able to shake my competitive nature, to my wife’s dismay – even in my gardening. I see beautiful places like Filoli Gardens and feel determined to outdo their roses.

What were your best/worst subjects in school?
My best subject was math, and all language classes were a challenge for me.

Why did you choose your profession?
I did not have much of a choice – I had a tiger mom who wanted me to be an engineer or a doctor, and I could not stand blood, so engineer it became. Halfway through, I wanted to become a CPA (called a CA in India). Luckily my dad made me realize that I was too close to finishing to quit. I coasted through to my first job at Bell Northern Research and had the right manager to light the fire. I knew I had an entrepreneurial streak in me though, dating all the way back to my comic book business. I believe that entrepreneurial spirit is inherent to a degree. The first chance I had to pursue something entrepreneurial, I ran with it. I only worked two years at large companies; the rest of my career has been in startups.

What do you think about most of the day?
With my current companies, I think about new technologies I’ve seen that we could be pursuing. Silicon Valley is fantastic at coming up with new technologies that change the status quo – I am lucky to be in the valley and to meet and work with world-class entrepreneurs who have passion and a wealth of knowledge in so many areas of technology.

Also Read:

CEO interview: Rene Donkers of Fractal Technologies

CEO Interview: Albert Li of Platform DA

CEO Interview: Mike Wishart of efabless


How to Secure a SoC while Keeping Area and Power Competitive?
by Eric Esteve on 11-29-2016 at 7:00 am

I attended the LETI conference last June and remember the paper presented by Alain Merle, their security guru. Alain said that smart cards are secure because up to 50% of the silicon area is dedicated to security. When you design a SoC to address applications like smart metering, NFC payment or embedded SIM, you know in advance that these will require more protection, but your challenge is to define a competitive architecture.

The chip area may increase if you implement certain features in hardware instead of software to improve the level of security, but if your architecture is smart enough, the chip area will not necessarily double. Synopsys offers a good illustration of this concept with Trusted Execution Environments (TEE), which allow creating a secure perimeter in the SoC. If TEE is still unclear to you, take a look at the picture below or, even better, listen to the webinar “Balancing Advanced SoC Security Requirements with Constrained Area and Power Budgets”.


There are different ways of implementing a trusted execution environment. A simple way would be to use a physically separate module or, on a SoC, a separate CPU. But a more efficient solution, instead of using a second CPU or even a separate module, is to combine the trusted and the normal computation on a single processor.

The designer can define a secure, isolated area of the processor that guarantees code and data protection for confidentiality and integrity, where trusted software can run securely. The environment thus guarantees confidentiality, integrity, and authenticity of the software running in the trusted execution environment. The designer can then separate the application into parts that are less security-critical and parts that are more security-critical. For the less critical parts, you apply normal software engineering; for the highly security-critical parts, you can do additional security hardening.

Memory is protected by a secure MPU with per-region scrambling, which restricts access based on privilege levels. Accesses to secure peripherals and system resources are restricted by using secure APEX or system bus signaling. In secure mode, the Trusted Execution Environment can’t be accessed from the peripherals.
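
To make the privilege-level idea concrete, here is a toy software model (not Synopsys code) of what an MPU region check enforces: each region carries a minimum privilege, and any access below that level faults. The region map is illustrative.

```python
# Toy model of MPU region protection: regions carry a minimum privilege
# level; accesses below it (or to unmapped addresses) fault.
REGIONS = [
    # (start, end, min_privilege): 0 = normal world, 1 = secure world
    (0x0000_0000, 0x0FFF_FFFF, 0),   # normal code and data
    (0x1000_0000, 0x1FFF_FFFF, 1),   # keys and trusted code
]

def check_access(addr, privilege):
    for start, end, min_priv in REGIONS:
        if start <= addr <= end:
            return privilege >= min_priv
    return False                      # unmapped addresses always fault

assert check_access(0x0000_1000, privilege=0)      # normal access allowed
assert not check_access(0x1000_0000, privilege=0)  # secure region blocked
assert check_access(0x1000_0000, privilege=1)      # trusted code allowed
```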

Synopsys ARC processors are extendable: users can, at design time, add their own instructions through APEX technology (ARC Processor EXtensions). Access to those extensions can be controlled using SecureShield technology, making the extensions very secure. SecureShield enables you to combine trusted and normal applications on a single processor, and resources can be shared.

The ARC EM CPU pipeline has been designed to be tamper resistant, as there is no store in the 3-stage pipeline. The CPU can detect tampering and software attacks thanks to an integrated watchdog timer that detects system failures and enables countermeasures.

In fact, the ARC EM Enhanced Security Package interleaves protected processor pipeline registers with in-line instruction and data encryption to ensure decrypted instructions are never stored or accessible, protecting algorithms from reverse engineering without impacting instruction timing. Sourcing both the processor IP and the security package from the same provider is the key to maximum protection, allowing an optimized implementation that reduces area and power consumption.

The Enhanced Security Package with SecureShield is a part of Synopsys’ comprehensive portfolio of security IP solutions, which also includes the CryptoPack option for ARC EM processors as well as the DesignWare Security IP solutions, which comprise a range of cryptography cores and software, protocol accelerators, root of trust, platform security and content protection IP. On top of these security hardware features, Synopsys provides content protection, platform security and cryptographic cores. The designer will benefit from common crypto algorithms such as AES, 3DES, ECC, SHA-256 or RSA.
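
As a software illustration of one of the listed primitives, the sketch below performs authenticated encryption with AES-GCM using the Python `cryptography` package; hardware cores like those in the DesignWare portfolio implement the same algorithms with far lower power, but the key, nonce and payload handling shown here carry over.

```python
# Sketch: authenticated encryption with AES-GCM (confidentiality + integrity).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                    # a nonce must never repeat per key
aead = AESGCM(key)

ciphertext = aead.encrypt(nonce, b"meter reading: 1234 kWh", b"device-42")
plaintext = aead.decrypt(nonce, ciphertext, b"device-42")  # raises on tamper
assert plaintext == b"meter reading: 1234 kWh"
```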

During this webinar, Ruud Derwig – who started his career at Philips Corporate Research, worked as a Software Technology Competence Manager at NXP Semiconductors, and is now a Software and Systems Architect at Synopsys – will tell you many, many things about security. You will learn about side channel analysis, the non-invasive attacks that use information leaked by an implementation, like simple power analysis (SPA) or differential power analysis (DPA), which can reveal secrets such as cryptographic keys. You will also learn how to use simulation-based power analysis to implement countermeasures against power analysis of data dependencies.
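
To give a feel for why data-dependent power draw is dangerous, here is a self-contained numpy sketch of the correlation idea behind DPA: synthetic “power traces” leak the Hamming weight of data XORed with a secret byte, and ranking key guesses by correlation recovers that byte. Real attacks target a nonlinear S-box output rather than a bare XOR, and measure real silicon; everything here is simulated.

```python
# Sketch of the correlation idea behind differential power analysis (DPA):
# rank key guesses by how well a leakage model matches measured power.
import numpy as np

rng = np.random.default_rng(0)
SECRET = 0x3C                                # the byte the attacker recovers
plaintexts = rng.integers(0, 256, size=2000)

def hamming_weight(x):
    return np.vectorize(lambda v: bin(int(v)).count("1"))(x)

# Synthetic traces: power draw leaks HW(plaintext XOR secret), plus noise.
traces = hamming_weight(plaintexts ^ SECRET) + rng.normal(0, 1.0, 2000)

# Correlate every key guess's predicted leakage against the measurements.
scores = [np.corrcoef(hamming_weight(plaintexts ^ g), traces)[0, 1]
          for g in range(256)]
best = int(np.argmax(scores))
print(f"best guess {best:#04x}, secret {SECRET:#04x}")  # they match
```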

This webinar is essential, as the description of the various threats is very precise, so you clearly understand how security can be built using the different solutions proposed by Synopsys. Instead of providing one-size-fits-all solutions, Synopsys proposes various techniques to implement the right level of security with respect to the application, taking into account the specific power, performance and area requirements.

You can watch the webinar replay here.

From Eric Esteve from IPNEST