
The Year of the eFPGA
by Tom Dillinger on 01-17-2017 at 12:00 pm

EFLX controller example

The start of the new year is typically a time for annual predictions. Prognostications are especially difficult in our industry, due to the increasing difficulty of Moore’s Law technology scaling and growing design complexity. There is one sure prediction, however: this year will see the emergence of embedded FPGA (eFPGA) IP integration into a diverse set of SoC products.

I recently had the opportunity to chat with Geoff Tate, CEO of Flex Logix Technologies, Inc., a start-up IP development company, whose innovative eFPGA IP is helping lead this adoption. Geoff educated me on the markets where eFPGA IP is being integrated, as well as the unique characteristics and customer requirements for eFPGA design enablement.

The market opportunities for eFPGA integration are vast. A data encryption accelerator IP core may utilize different (and evolving) security algorithms, depending upon the target application. Digital signal processing (DSP) algorithms can be implemented in programmable eFPGA hardware, executing more efficiently than software running on a core processor. Data bus communication protocols, both on-chip and off-chip, are also varied and evolving; the availability of a programmable controller would accelerate the release of a new SoC design.


Flex Logix EFLX eFPGA IP as an Advanced Peripheral Bus controller

And, perhaps most importantly, an eFPGA core enables a single SoC part number to reach multiple markets, saving development, qualification, and production costs compared to spinning separate fixed-function SoC implementations.

In many respects, embedded FPGA IP is similar to the standalone, commercial FPGA module. The logic functionality is reconfigurable — parts can be updated at production test, at system test, or in the field. The eFPGA implementation flow utilizes an RTL synthesis algorithm optimized for mapping to the programmable logic capabilities of individual Look-Up Table (LUT) cells — the Flex Logix flow incorporates the familiar Synopsys Synplify synthesis tool. Subsequent placement and routing algorithms define how the existing switch and interconnect segment resources will be assigned. The EFLX compiler is provided for physical implementation and bit file configuration. The compiler provides delay models for subsequent static timing analysis.

Commercial FPGA technology is well-established. Given the market potential described above, why hasn’t eFPGA IP been more readily integrated by now? Why will 2017 finally be the year of the eFPGA?

Geoff shared some insights into the challenges of developing eFPGA technology, and the unique approaches that Flex Logix has pursued. “The customer requirements for reconfigurable functionality are extremely diverse, from high-performance networking to low-power, cost-sensitive applications. Flex Logix has silicon testsite data in process technologies ranging from 40nm (e.g., TSMC’s 40LP/ULP) to 16nm (e.g., TSMC’s 16FF+ and 16FFC). The low-power customers have access to the full sleep-mode power domain implementation available in the eFPGA IP core.”

In addition to the breadth of technology support, Geoff impressed upon me the key characteristics of the Flex Logix eFPGA strategy. The LUT logic resources available in each eFPGA are available in two variants:

 

  • a more conventional programmable function (e.g., two 4-input logic functions plus two available flip-flop inputs in each LUT, expanding to more logic inputs in newer process technologies)
  • a unique “DSP-centric” LUT, with functionality that accelerates multiply/add/accumulate (MAC) computation

These different LUT types can be readily intermixed, when implementing the embedded IP core.
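
To make the LUT concept concrete: a k-input LUT is simply a 2^k-entry truth table, filled by the configuration bitstream and indexed by the logic inputs. Here is a minimal Python sketch (illustrative only, not Flex Logix's actual tooling or cell behavior):

```python
# A k-input LUT is a 2**k-entry truth table: the configuration
# bitstream fills the table, and the logic inputs select one entry.
def make_lut(truth_table):
    """Return a function emulating a LUT configured with `truth_table`."""
    k = len(truth_table).bit_length() - 1  # 16 entries -> 4 inputs
    def lut(*inputs):
        assert len(inputs) == k
        index = 0
        for bit in inputs:  # the inputs form the table index, MSB first
            index = (index << 1) | bit
        return truth_table[index]
    return lut

# Configure a 4-input LUT as a 4-way AND: only entry 0b1111 is 1.
and4 = make_lut([0] * 15 + [1])
print(and4(1, 1, 1, 1))  # 1
print(and4(1, 0, 1, 1))  # 0
```

Loaded with a different table, the same hardware becomes any other 4-input function, which is what makes the fabric reconfigurable.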


EFLX logic and DSP LUT building blocks can be freely intermixed

In addition to the variability in LUT cells, Flex Logix has implemented a unique “hierarchical” switch network.


Illustration of FPGA conventional and hierarchical switch array. Source: Wang, et al., “A 1.1 GOPS/mW FPGA Chip with Hierarchical Interconnect Fabric”, VLSI Technology Symposium 2011.

Commercial FPGA parts utilize large switch transistor crossbar arrays to connect to interconnect resources; often 80% of the transistors on an FPGA are devoted to interconnect. Conversely, Flex Logix has pursued a more modular architecture, where small groups of LUTs share a local switch matrix and local route segments, with additional stages of switches utilized for global routes. The fabricated switches and connections are designed to minimize routing congestion and maximize the percentage utilization of the available LUT logic resources (e.g., 90% utilization).
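
The transistor savings from hierarchy can be illustrated with a back-of-the-envelope switch count; the group size and two-level model below are hypothetical, not Flex Logix's actual architecture:

```python
# Illustrative only: compare switch counts for a full crossbar vs. a
# hypothetical two-level hierarchical network with groups of 8 LUTs.
def crossbar_switches(n):
    """Full crossbar: every input reaches every output directly."""
    return n * n

def hierarchical_switches(n, group=8):
    """Small local crossbar per group, plus a global group-to-group stage."""
    groups = n // group
    local = groups * group * group   # one small crossbar per group
    global_ = groups * groups        # one switch per pair of groups
    return local + global_

for n in (64, 512, 4096):
    print(n, crossbar_switches(n), hierarchical_switches(n))
```

Even this crude model shows the hierarchical network's switch count growing far more slowly than the full crossbar's as the LUT count scales, which is where the interconnect-transistor savings come from.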

A modular building block is developed, composed of sets of (hierarchical) LUTs. Flex Logix offers two blocks, one with 120 LUTs and one with 2500 LUTs.

These characteristics of the Flex Logix eFPGA enable the main feature required to accelerate adoption — namely, the embedded IP is available in a very wide range of LUT capacity, intermixing base and DSP functionality.

In Flex Logix terms, the EFLX building blocks are designed to be “tiled together”, with support for a wide range of vertical and horizontal aspect ratios for floorplanning flexibility. Designers can readily build eFPGA IP cores from as few as 120 LUTs to more than 100K LUTs. (To simplify the eFPGA tiling implementation, a single clock domain per IP core is assumed.) SRAM arrays can be compiled and integrated within the EFLX building block tiles. The number of available building block input and output signals is generous as well, enabling wide datapath functions to be realized.


EFLX building blocks can be tiled in a wide range of sizes and aspect ratios.

The emergence of eFPGA IP on future SoCs requires supporting diverse customer requirements for performance, power, computational complexity, and size/cost of the resulting programmable logic core. The architecture of the Flex Logix EFLX offering addresses these requirements, offering both logic and DSP functionality, and a modular “tiled” approach for optimal core area for the target application (with high logic/interconnect resource utilization).

Look for additional articles in the near future describing additional, unique features of the EFLX IP.

For more information on the Flex Logix EFLX product set, please refer to www.flex-logix.com. There is an excellent introduction to eFPGA technology at http://www.flex-logix.com/fpga-tutorial/.

-chipguy


Why 2017 is the Year of the Bot
by Vivek Wadhwa on 01-17-2017 at 12:00 pm

In the 2013 movie “Her,” Theodore Twombly, a lonely writer, falls in love with a digital assistant designed to meet his every need. She sorts emails, helps get a book published, provides personal advice and ultimately becomes his girlfriend. The assistant, Samantha, is A.I. software capable of learning at an astonishing pace.


Samantha will remain in the realm of science fiction for at least another decade, but less-functional digital assistants, called bots, are already here. These will be the most amazing technology advances we see in our homes in 2017.

Among the bestsellers of the holiday season were Amazon.com’s Echo and Google Home. These bots talk to their users through speakers, and their built-in microphones hear from across a room. When Echo hears the name “Alexa,” its LED ring lights up in the direction of the user to acknowledge that it is listening. It answers questions, plays music, orders Amazon products and tells jokes. Google’s Home can also manage Google accounts, read and write emails, and keep track of calendars and notes.

Google and Amazon have both opened up their devices to third-party developers — who in turn have added the abilities to order pizza, book tickets, turn on lights and make phone calls. We will soon see these bots connected to health and fitness devices so that they can help people devise better exercise regimens and remember to take their medicine. And they will control the dishwasher and the microwave, track what is left in the refrigerator and order an ambulance in case of emergency.

Long ago, our home appliances became electrified. Soon, they will be “cognified”: integrated into artificially intelligent systems that are accessed through voice commands. We will be able to talk to our machines in a way that seems natural. Microsoft has developed a voice-recognition technology that can transcribe speech as well as a human and translate it into multiple languages. Google has demonstrated a voice-synthesis capability that is hard to differentiate from human. Our bots will tell our ovens how we want our food to be cooked and ask us questions on their behalf.

This has become possible because of advances in artificial intelligence, or A.I. In particular, a field called deep learning allows machines to learn through neural networks — in which information is processed in layers and the connections between these layers are strengthened based on experience. In short, they learn much like a human brain. As a child learns to recognize objects such as its parents, toys and animals, neural networks too learn by looking at examples and forming associations. Google’s A.I. software learned to recognize a cat, a furry blob with two eyes and whiskers, after looking at 10 million examples of cats.

It is all about data and example; that is how machines — and humans — learn. This is why the tech industry is rushing to get its bots into the marketplace and pricing them at a meager $150 or less: The more devices that are in use, the more they will learn collectively, and the smarter the technology gets. Every time you search YouTube for a cute cat video and pick one to watch, Google learns what you consider to be cute. Every time you ask Alexa a question and accept the answer, it learns what your interests are and the best way of responding to your questions.

By listening to everything that is happening in your house, as these bots do, they learn how we think, live, work and play. They are gathering massive amounts of data about us. And that raises a dark side of this technology: the privacy risks and possible misuse by technology companies. Neither Amazon nor Google is forthcoming about what it is doing with all of the data it gathers and how it will protect us from hackers who exploit weaknesses in the infrastructure leading to its servers.

Of even greater concern is the dependency we are building on these technologies: We are beginning to depend on them for knowledge and advice and even emotional support.

The relationship between Theodore Twombly and Samantha doesn’t turn out very well. She outgrows him in intelligence and maturity. And she confesses to having relationships with thousands of others before she abandons Twombly for a superior, digital life form.

We surely don’t need to worry yet about our bots becoming smarter than we are. But we already have cause for worry over one-sided relationships. For years, people have been confessing to having feelings for their Roomba vacuum cleaners — which don’t create even an illusion of conversation. A 2007 study documented that some people had formed a bond with their Roombas that “manifested itself through happiness experienced with cleaning, ascriptions of human properties to it, and engagement with it in promotion and protection.” And according to a recent report in New Scientist, hundreds of thousands of people say ‘Good morning’ to Alexa every day, half a million people have professed their love for it, and more than 250,000 have proposed marriage to it.

I expect that we are all going to be suckers for our digital friends. Don’t you feel obliged to thank Siri on your iPhone after it answers your questions? I’ll make a confession: I do, and have done so.

For more, visit my website: www.wadhwa.com and follow me on Twitter: @wadhwa


IoT and Blockchain Convergence
by Ahmed Banafa on 01-17-2017 at 12:00 pm


The Internet of Things (IoT) as a concept is fascinating and exciting, but one of the major challenges of IoT is having a secure ecosystem encompassing all building blocks of the IoT architecture. Understanding the different building blocks of IoT, identifying the areas of vulnerability in each block, and exploring the technologies needed to counter each of the weaknesses are essential in dealing with the security issue of IoT.

Figure 1: IoT Architecture
IoT architecture can be represented by four building blocks:

  • Things: These are defined as uniquely identifiable nodes, primarily sensors that communicate without human interaction using different connectivity methods.
  • Gateways: These act as intermediaries between things and the cloud to provide the needed connectivity, security, and manageability.
  • Network infrastructure: This is comprised of routers, aggregators, gateways, repeaters and other devices that control and secure data flow.
  • Cloud infrastructure: Cloud infrastructure contains large pools of virtualized servers and storage that are networked together with computing and analytical capabilities.

Challenges to secure IoT deployments
Existing security technologies will play a role in mitigating IoT risks but they are not enough. The goal is to get data securely to the right place, at the right time, in the right format. It’s easier said than done for many reasons, and here is a list of some of the challenges:

  • Many IoT systems are poorly designed and implemented, using diverse protocols and technologies that create complex and sometimes conflicting configurations.
  • There is limited guidance for life-cycle maintenance and management of IoT devices.
  • IoT privacy concerns are complex and not always readily evident.
  • There is a lack of standards for authentication and authorization of IoT edge devices.
  • Security standards for platform configurations involving virtualized IoT platforms supporting multi-tenancy are immature.
  • The uses for Internet of Things technology are expanding and changing, often in uncharted waters.

In addition to the above list, new security technologies will be required to protect IoT devices and platforms from both information attacks and physical tampering, to encrypt their communications, and to address new challenges such as impersonated “things”, denial-of-sleep attacks that drain batteries, and denial-of-service (DoS) attacks. But IoT security will be complicated by the fact that many “things” use simple processors and operating systems that may not support sophisticated security approaches.

A prime example of the urgent need for such new security technologies is the recent massive distributed denial of service attack (DDoS) that crippled the servers of popular services like Twitter, Netflix, NYTimes, and PayPal across the U.S. on October 21st, 2016. It was the result of an immense assault that involved millions of internet addresses and malicious software. One source of the traffic for the attacks was devices infected by the Mirai malware. The attack comes amid heightened cybersecurity fears and a rising number of internet security breaches. All indications suggest that countless IoT devices that power everyday technology like closed-circuit cameras and smart-home devices were hijacked by the malware, and used against the servers.

The problem with the current centralized model
Current IoT ecosystems rely on centralized, brokered communication models, otherwise known as the server/client paradigm. All devices are identified, authenticated and connected through cloud servers that sport huge processing and storage capacities. Connections between devices have to go exclusively through the internet, even if they happen to be a few feet apart.

While this model has connected generic computing devices for decades and will continue to support small-scale IoT networks as we see them today, it will not be able to respond to the growing needs of the huge IoT ecosystems of tomorrow.

Existing IoT solutions are expensive because of the high infrastructure and maintenance cost associated with centralized clouds, large server farms, and networking equipment. The sheer amount of communications that will have to be handled when there are tens of billions of IoT devices will increase those costs substantially.

Even if the unprecedented economic and engineering challenges are overcome, cloud servers will remain a bottleneck and point of failure that can disrupt the entire network.

Decentralizing IoT networks
A decentralized approach to IoT networking would solve many of the issues above. Adopting a standardized peer-to-peer communication model to process the hundreds of billions of transactions between devices will significantly reduce the costs associated with installing and maintaining large centralized data centers and will distribute computation and storage needs across the billions of devices that form IoT networks. This will prevent a failure in any single node from bringing the entire network to a halt.

However, establishing peer-to-peer communications will present its own set of challenges, chief among them the issue of security. And as we all know, IoT security is about much more than just protecting sensitive data. The proposed solution will have to maintain privacy and security in huge IoT networks and offer some form of validation and consensus for transactions to prevent spoofing and theft.

To perform the functions of traditional IoT solutions without a centralized control, any decentralized approach must support three foundational functions:

  • Peer-to-peer messaging;
  • Distributed file sharing;
  • Autonomous device coordination.

The Blockchain Approach
Blockchain, the “distributed ledger” technology, has emerged as an object of intense interest in the tech industry and beyond. Blockchain technology offers a way of recording transactions or any digital interaction in a way that is designed to be secure, transparent, highly resistant to outages, auditable, and efficient; as such, it carries the possibility of disrupting industries and enabling new business models. The technology is young and changing very rapidly; widespread commercialization is still a few years off. Nonetheless, to avoid disruptive surprises or missed opportunities, strategists, planners, and decision makers across industries and business functions should pay heed now and begin to investigate applications of the technology.

What is Blockchain?
Blockchain is a database that maintains a continuously growing set of data records. It is distributed in nature, meaning that there is no master computer holding the entire chain. Rather, the participating nodes have a copy of the chain. It’s also ever-growing — data records are only added to the chain.
A blockchain consists of two types of elements:

  • Transactions are the actions created by the participants in the system.
  • Blocks record these transactions and make sure they are in the correct sequence and have not been tampered with.

What are some advantages of blockchain?
The big advantage of blockchain is that it’s public. Everyone participating can see the blocks and the transactions stored in them. This doesn’t mean everyone can see the actual content of your transaction, however; that’s protected by your private key.

A blockchain is decentralized, so there is no single authority that can approve the transactions or set specific rules to have transactions accepted. That means there’s a huge amount of trust involved since all the participants in the network have to reach a consensus to accept transactions.

Most importantly, it’s secure. The database can only be extended and previous records cannot be changed (at least, there’s a very high cost if someone wants to alter previous records).

How does it work?
When someone wants to add a transaction to the chain, all the participants in the network will validate it. They do this by applying an algorithm to the transaction to verify its validity. What exactly is understood by “valid” is defined by the blockchain system and can differ between systems. Then it is up to a majority of the participants to agree that the transaction is valid.
A set of approved transactions is then bundled in a block, which gets sent to all the nodes in the network. They, in turn, validate the new block. Each successive block contains a hash, which is a unique fingerprint, of the previous block.
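
The chaining step described here can be sketched in a few lines of Python using SHA-256. This is a toy model with hypothetical names; a real blockchain adds consensus, signatures, and Merkle trees:

```python
import hashlib
import json

def block_hash(block):
    """Fingerprint a block by hashing its canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that embeds the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

chain = []
add_block(chain, ["Alice pays Bob 5"])
add_block(chain, ["Bob pays Carol 2"])

# Each block embeds its predecessor's fingerprint...
assert chain[1]["prev_hash"] == block_hash(chain[0])

# ...so tampering with an earlier block breaks every later link.
chain[0]["transactions"][0] = "Alice pays Bob 500"
assert chain[1]["prev_hash"] != block_hash(chain[0])
```

Because every block carries the previous block's hash, altering an old transaction invalidates all subsequent links, which is what makes tampering detectable by any participant.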

The blockchain and IoT

Figure 2: Key Benefits of Using Blockchain for IoT

Blockchain technology could be the missing link needed to settle privacy and reliability concerns in the Internet of Things, perhaps even the silver bullet the IoT industry needs. It can be used to track billions of connected devices, enabling the processing of transactions and coordination between devices; this allows for significant savings for IoT industry manufacturers. This decentralized approach would eliminate single points of failure, creating a more resilient ecosystem for devices to run on. The cryptographic algorithms used by blockchains would make consumer data more private.

The ledger is tamper-resistant and hard for malicious actors to manipulate, because it does not exist in any single location, and man-in-the-middle attacks cannot easily be staged, because there is no single thread of communication to intercept. Blockchain makes trustless, peer-to-peer messaging possible and has already proven its worth in the world of financial services through cryptocurrencies such as bitcoin, providing guaranteed peer-to-peer payment services without the need for third-party brokers.

The decentralized, autonomous, and trustless capabilities of the blockchain make it an ideal candidate to become a foundational element of IoT solutions. It is no surprise that enterprise IoT has quickly become one of the early adopters of blockchain technology.

In an IoT network, the blockchain can keep an immutable record of the history of smart devices. This feature enables the autonomous functioning of smart devices without the need for centralized authority. As a result, the blockchain opens the door to a series of IoT scenarios that were remarkably difficult, or even impossible to implement without it.

For example, by leveraging the blockchain, IoT solutions can enable secure, trustless messaging between devices in an IoT network. In this model, the blockchain treats message exchanges between devices like financial transactions in a bitcoin network. To enable message exchanges, devices leverage smart contracts, which model the agreement between the two parties.

One of the most exciting capabilities of the blockchain is the ability to maintain a fully decentralized, trusted ledger of all transactions occurring in a network. This capability is essential to meet the many compliance and regulatory requirements of industrial IoT (IIoT) applications without the need to rely on a centralized model.

What are the challenges?

Figure 3: IoT and Blockchain Challenges
In spite of all its benefits, the blockchain model is not without its flaws and shortcomings:

  • Scalability issues pertaining to the blockchain might lead to centralization, casting a shadow over the future of the cryptocurrency.
  • Processing power and time are required to perform encryption for all the objects involved in a blockchain-based ecosystem. IoT ecosystems are very diverse: in contrast to generic computing networks, IoT networks comprise devices with very different computing capabilities, and not all of them will be capable of running the same encryption algorithms at the desired speed.
  • Storage, too, will be a hurdle. Blockchain eliminates the need for a central server to store transactions and device IDs, but the ledger has to be stored on the nodes themselves, and it will grow in size as time passes. That is beyond the capabilities of a wide range of smart devices, such as sensors, which have very low storage capacity.
  • Lack of skills: few people understand how blockchain technology really works, and when you add IoT to the mix, that number shrinks drastically.
  • Legal and compliance issues: it is new territory in all aspects, without any legal or compliance code to follow, which is a serious problem for manufacturers and service providers. This challenge alone will scare many businesses away from using blockchain technology.

The Optimum Platform
Developing solutions for the Internet of Things requires unprecedented collaboration, coordination, and connectivity for each piece in the ecosystem, and throughout the ecosystem as a whole. All devices must work together and be integrated with all other devices, and all devices must communicate and interact seamlessly with connected systems and infrastructures. It’s possible, but it can be expensive, time-consuming, and difficult.
The optimum platform for IoT can:

  • Acquire and manage data to create a standards-based, scalable, and secure platform.
  • Integrate and secure data to reduce cost and complexity while protecting your investment.
  • Analyze data and act by extracting business value from data, and then acting on it.

Security needs to be built in as the foundation of IoT systems, with rigorous validity checks, authentication, data verification, and encryption of all data. At the application level, software development organizations need to become better at writing code that is stable, resilient, and trustworthy, with better code-development standards, training, threat analysis, and testing. As systems interact with each other, it is essential to have an agreed interoperability standard that is safe and valid. Without a solid bottom-up structure, we will create more threats with every device added to the IoT. What we need is a secure and safe IoT with privacy protected. That is a tough trade-off, but it is not impossible, and blockchain technology is an attractive option if we can overcome its drawbacks.

Ahmed Banafa Named No. 1 Top Voice to Follow in Tech by LinkedIn in 2016
This article was published on IEEE-IoT : http://iot.ieee.org/newsletter/january-2017/iot-and-blockchain-convergence-benefits-and-challenges.html



DesignCon 2017 and Mentor Graphics
by Daniel Nenni on 01-17-2017 at 7:00 am

It’s hard to believe, but this is DesignCon #22, and as a Silicon Valley conference, it’s one I have attended my fair share of. This year it seems like high-speed communications will take the lead, followed by the latest on PCB design tools, power and signal integrity, jitter and crosstalk, test and measurement tools, parallel and memory interface design, ICs, semiconductor components, and more.

About DesignCon
DesignCon is the world’s premier conference for chip, board, and systems design engineers in the high-speed communications and semiconductor communities. DesignCon, created by engineers for engineers, takes place annually in Silicon Valley and remains the largest gathering of chip, board, and systems designers in the country. This three-day technical conference and expo combines technical paper sessions, tutorials, industry panels, product demos and exhibits from the industry’s leading experts and solutions providers. More information is available at: designcon.com. DesignCon is organized by UBM Americas, a part of UBM plc (UBM.L), an Events First marketing and communications services business. For more information, visit ubmamericas.com.

The conference theme this year lands squarely inside Mentor’s wheelhouse so they will be hard to miss. The Mentor HyperLynx product family will be front and center in the Mentor booth. If you are interested in any of the following technologies you will definitely want to stop by booth #1043 for demos and chats with experts:

HyperLynx Signal Integrity
Quickly identify and resolve Signal Integrity issues. Includes advanced tools for optimizing DDRx design, SERDES design projects, FastEye diagram analysis, S-parameter simulation, and BER prediction.

HyperLynx Power Integrity
Accurately model power distribution networks and noise propagation mechanisms throughout the PCB design process.

HyperLynx DRC
Accelerates electrical signoff with built-in comprehensive rule-sets or customized rule checks for issues affecting EMI/EMC, signal integrity, and power integrity.

HyperLynx Full-Wave Solver
A powerful 3D, broadband, full-wave electromagnetic solver providing unprecedented speed and capacity, while preserving gold-standard Maxwell accuracy.

Frontline InStack Design
An automatic stackup design solution to find the best possible stackup for your board, optimizing and balancing between quality, manufacturability and price.

Special live presentations include:

  • Modeling and simulating DDR transactions involving buffer transitions between receive and transmit states
  • Channel Operating Margin (COM) for PAM-4 links with support for Tx non-linearity and time skew
  • Optimization methods for high speed SerDes channels using COM metric

Mentor has a copy of the DesignCon 2016 Best Paper Award winner available HERE. This paper analyzes the computational procedure specified for Channel Operating Margin and compares it to the traditional eye/BER analysis.

In concert, Mentor also has “A Practical, Hands-on Essential Principles of SI Boot Camp” featuring Eric Bogatin on January 31st at their Fremont campus. From what I am told, Eric is an SI guru, so you are not going to want to miss this. In case you do miss it, we will have a SemiWiki blogger there, so stay tuned to SemiWiki.com for complete coverage.

Abstract:
If you are confused about signal integrity and want to get a jump start understanding the most important essential principles in signal integrity, this is the workshop for you. We will explore the principles and best design practices using simulation exercises in HyperLynx.

Using short lectures and demos, we introduce more than 50 important design examples everyone will work through as virtual prototypes.

Eric Bogatin received his BS in physics from MIT and his MS and PhD in physics from the University of Arizona in Tucson. He has held senior engineering and management positions at Bell Labs, Raychem, Sun Microsystems, Ansoft and Interconnect Devices. He has written six technical books in the field and presented classes and lectures on signal integrity worldwide.


Intel Conveys Compute Card Capabilities at CES

Intel Conveys Compute Card Capabilities at CES
by Tom Simon on 01-16-2017 at 12:00 pm

Intel is once again adding a new computing form factor to the mix. At CES, Intel announced its new Intel Compute Card. It combines CPU, GPU, DRAM, storage, WiFi, and communications inside a small modular housing slightly larger than a credit card and about 5mm thick. Intel already offers its Compute Stick, but that device supports only HDMI along with USB and WiFi, making its interface options limited. Unlike the Compute Stick, which was promoted as a highly portable computer that can turn an HDMI monitor into a useful PC, the Compute Card is intended to provide the brains for a range of applications, such as smart TVs, appliances, and IoT devices.

The question is, what does it offer in these applications that ‘hardwired’ processing does not allow?

The Compute Card has a proprietary connector set on its end that allows it to plug into its host. Intel describes the interface as a modified USB-C. This enables it to connect to a wide range of devices and buses, such as HDMI, storage, and PCIe, with headroom for future interfaces. Because standard interfaces are not brought out to connectors, it will not operate as a standalone device.

The Compute Stick was said to have low performance, although it was improved in the second generation. The Compute Card is planned to offer a wide range of CPUs, not just low-end Atom cores. The upper end of the power dissipation is said to be around 6W. One nice difference from its predecessor is that the Compute Card has no cooling fan, which could have been a potential reliability issue. Apparently the dock (or socket) provides some cooling in addition to a ‘locking mechanism’ to prevent removal where security is a factor.

So, is the Compute Card an embedded processor or a portable compute device? In the embedded processor space there is a wide range of options, from both Intel and ARM-based processor providers. It seems that major appliances would be built around a specific processor and chipset. While the Compute Card touts upgradability and future-proofing for its hosts, it’s not clear that upgrading the processor, if it proves practical, will extend the life of appliances. Indeed, smart appliances may not actually outlive their processing units.

Nevertheless, it is conceivable that repairs could be made easier by a plug in compute unit. But this could be offset by connector or thermal issues with the Compute Card packaging/dock.

If the goal is to provide a portable computing resource, it needs to be compared to alternatives offered through virtual machines. A thumb drive or SD card can easily contain a complete virtual machine environment that can run on a wide range of hardware. Why not just use this to offer a portable environment? Alternatively, the cloud has become a given in terms of user environment. With Google Drive or DropBox, you can easily pull up your personal documents or environment just about anywhere on any compute resource.

To be fair, I was skeptical of virtual machines, but as technology improved they became practical. This took a few decades in actuality. And they will never be as efficient as bare metal; rather, they offer flexibility and convenience without a severe penalty. In fact, I know of several websites that solved their server bandwidth issues by reverting to bare metal. But that is another story.

Intel has signed up partners to help develop Compute Card enabled products, including Dell, Lenovo, HP, Sharp, InFocus, Seneca and others. The rest of us will need to wait until June of 2017 for pricing and detailed specifications, with units available for purchase around the middle of the year.

A lot of the utility depends on the actual specifications, price/performance ratio, and the details of the necessary ancillary hardware, such as the dock. Presumably the Compute Card will run Windows and likely Linux too. The OS will also play heavily into potential applications and market acceptance.

It is too early to tell if this will be part of a significant shift in the development of smart products or the Internet of Things. I plan on watching it with guarded expectations. Much of what we have seen recently points to the extremely high utility of products based on custom SoCs and advanced packaging, such as TSMC’s CoWoS and InFO technology. For instance, ARM cores are available for silicon integration through a large number of SoC and virtual ASIC vendors. However, just as with virtual machines, only time will tell if there will be significant market acceptance for the Intel Compute Card.


Fed Panel Asks Today: Why Waymo?

Fed Panel Asks Today: Why Waymo?
by Roger C. Lanctot on 01-16-2017 at 12:00 pm

The U.S. Department of Transportation (USDOT) is holding the first meeting today of a new advisory committee focused, in its own words: “on automation across a number of modes.” The committee, made up of an array of experts from a variety of fields, is “to immediately begin work on some of the most pressing and relevant matters facing transportation today, including the development and deployment of automated vehicles, and determining the needs of the Department as it continues with its relevant research, policy, and regulations.”

The big question facing the panel (members listed below) is: Why? Why are we pursuing automated driving?

It was the U.S. government that got us into the automated driving business with the DARPA (Defense Advanced Research Projects Agency) Challenge in 2004. The intent of the original project was to make one third of ground military forces autonomous by 2015. By the time of the multiple Gulf Wars the objective was to develop robotic vehicles that might help protect military personnel from roadside improvised explosive devices.

DARPA’s efforts were preceded by decades of work around the world to perfect automated driving. DARPA was the first organization to identify a commercially viable use case – though we still do not have an automated battlefield.

Google picked up the mantle years later with its own self-driving car effort – advancing the technology philosophically by suggesting that the steering wheel and brake and accelerator pedals be removed and by testing the vehicles on public roads. Google was the first to propose that self-driving vehicles belonged on public roads.

More provocatively, Google presented the case that nothing less than testing on public roads would be sufficient to enable automated driving. In spite of, or in reaction to, that proposition, California regulators required the inclusion of steering wheels and brake and accelerator pedals and the presence of a driver in autonomous test vehicles. It was at this point and on these terms that the current struggle has unfolded.

The business case for Google was to provide transportation options for economically or physically disadvantaged populations. One branch of the Google team ultimately left to pursue commercial vehicle automated driving opportunities (Otto). More than a dozen other startups have targeted public and private driverless shuttle transportation opportunities.

What remains is defining the objective for automated driving and a reasonable and regulated path to market. Auto makers are building Potemkin villages for the purposes of testing cars in real-world-ish scenarios, while technology companies are building simulators to more rapidly compile and analyze the billions of miles necessary to achieve certifiable results. It is clear, though, that nothing less than testing in real world scenarios will advance the technology to the goal of full automation.

But why is the industry suddenly obsessed with this activity – especially given the reality that self-driving cars likely represent a completely different usage scenario not likely to include actual ownership? The interest of the public revolves around testing on public roads and the ultimate objective of reducing highway fatalities. The interest of car makers and regulators is the evolutionary path to autonomy that is expected to make cars safer to drive … or ride in.

Waymo talks about creating a “better driver” which is too high a bar for short-term commercial opportunities. What is a better driver, anyway?

Actually building an automated driving system capable of surpassing human drivers in all driving circumstances will be a decades-long exercise. It is not a reasonable objective.

Toyota Research Institute CEO Dr. Gil Pratt describes this “better driver” dream as the “chauffeur” mode, where the vehicle is doing all of the driving vs. the guardian angel mode where the vehicle assists in what it perceives as urgently dangerous circumstances or simpler tasks. Regulators have done their best to bring guardian angel technologies to the market such as electronic stability control (ESC) and anti-lock brakes (ABS).

Now the National Highway Traffic Safety Administration (NHTSA) is pushing for the adoption of automatic emergency braking on a voluntary basis. But this voluntary effort points up the limitations of the agency. Were NHTSA to press for a mandate, the process would likely require nearly a decade of regulatory action.

It was DARPA that opened up the autonomous vehicle conversation. It was Google that issued the call to action – forcing a regulatory response. Now it is the auto industry, insurance companies, city/state/Federal regulators that are coming together to find common cause.

The “why” of autonomous vehicles boils down to:


  • Reducing highway fatalities (along with congestion and emissions)
  • Creating transportation opportunities for economically and physically disadvantaged populations
  • Improving safety, productivity and efficiency in the commercial trucking and public transportation industries
  • Creating new networked urban transportation alternatives
  • Extending U.S. leadership in global transportation

    In the words of outgoing Transportation Secretary Anthony Foxx: “This new automation committee will work to advance life-saving innovations while boosting our economy and making our transportation network more fair, reliable, and efficient.”

    For car makers, insurance companies, taxi/bus/truck drivers, automated driving actually represents a threat to business as usual. But Federal, city and state regulators are now in the driver’s seat with everything to gain from a future defined by automated driving. It will be interesting to see how these parties find common ground. The choice of co-chairs, alone, speaks volumes.
    1. Co-Chair:Mary Barra- General Motors, Chairman and CEO
    2. Co-Chair: Eric Garcetti- Mayor of Los Angeles, CA
    3. Vice Chair: Dr. J. Chris Gerdes- Stanford University, Professor of Engineering
    4. Gloria Boyland- FedEx, Corporate Vice President, Operations & Service Support
    5. Robin Chase- Zipcar; Buzzcar; Veniam, Co-founder of Zipcar and Veniam
    6. Douglas Chey- Hyperloop One, Senior Vice President of Systems Development
    7. Henry Claypool- Community Living Policy Center, Policy Director
    8. Mick Cornett- Mayor of Oklahoma City, OK
    9. Mary “Missy” Cummings- Duke University, Director, Humans and Autonomy Lab, Pratt School of Engineering
    10. Dean Garfield- Information Technology Industry Council, President and CEO
    11. Mary Gustanski- Delphi Automotive, Vice President of Engineering & Program Management
    12. Debbie Hersman- National Safety Council, President and CEO
    13. Rachel Holt- Uber, Regional General Manager, United States and Canada
    14. Lisa Jackson- Apple, Vice President of Environment, Policy, and Social Initiatives
    15. Tim Kentley-Klay – Zoox, Co-founder and CEO
    16. John Krafcik- Waymo, CEO
    17. Gerry Murphy- Amazon, Senior Corporate Counsel, Aviation
    18. Robert Reich- University of California, Berkeley, Chancellor’s Professor of Public Policy, Richard and Rhoda Goldman School of Public Policy
    19. Keller Rinaudo- Zipline International, CEO
    20. Chris Spear- American Trucking Association (ATA), President and CEO
    21. Chesley “Sully” Sullenberger- Safety Reliability Methods, Inc., Founder and CEO
    22. Bryant Walker Smith- University of South Carolina, Assistant Professor, School of Law and (by courtesy) School of Engineering
    23. Jack Weekes- State Farm Insurance, Operations Vice President, Innovation Team
    24. Ed Wytkind- President, Transportation Trades Department, AFL-CIO
    25. John Zimmer- Lyft, Co-founder and President

    Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


CEO Interview: Toshio Nakama of S2C

    CEO Interview: Toshio Nakama of S2C
    by Daniel Nenni on 01-16-2017 at 7:00 am

    I haven’t sat down to speak with S2C since we collaborated on the book, PROTOTYPICAL, published just before DAC 2016 and even then, I hadn’t spoken to Toshio Nakama, their CEO. Toshio splits his time between the San Jose headquarters and the Shanghai headquarters so getting time to meet face-to-face has been challenging. I was finally able to sit down with him to discuss the latest in FPGAs and prototyping and what’s next for S2C. Here is a snapshot of our discussion.

    How Has FPGA Prototyping Evolved and What’s Next?
    Toshio: FPGA prototyping has traditionally been thought of as an adjunct to emulation – something that was nice to have but, because of its complexity of deployment, was never fully adopted. Times have changed, however; FPGA prototyping has become much easier to use, and because of increasing design complexity it has become a must-have tool in the design and verification flow. Partitioning the design has always been one of the hurdles to adoption, but FPGA partition tools have advanced to handle even the largest designs, while FPGAs themselves have increased in capacity to reduce the number of FPGAs needed.

    As to the second part of your question, FPGA prototyping is being used more and more by dispersed design and verification teams. No longer are singular teams all sitting together in the same room or the same office for that matter. To accommodate this FPGA prototyping must take to the cloud and allow globalized teams to share resources. In fact, FPGA prototyping solutions must also allow teams working on multiple designs to share resources and not limit the shared resources to a single design or instance.

    What Is Unique About S2C?
    Toshio: We are experts in FPGA prototyping. We are proud to have the largest team dedicated to the technology of FPGA prototyping. Our expertise comes from the development of hundreds of customized prototyping environments that have created a very robust engineering knowledge base for developing reliable, easy-to-implement, and extremely efficient solutions. We have consistently made sure to push the boundaries of what FPGA prototyping can do so that our customers can gain a competitive advantage.

    Through our many interactions with customers, we’ve realized that scalability is a key component. Our customers’ complex designs have driven the need for untethered scalability. Increased I/O counts, significant memory throughput, and large numbers of DSPs have become commonplace.

    How Do Designers Get the Most Out of FPGA Prototyping?
    Toshio: There are many ways for that to happen. One significant solution is to make FPGA prototyping part of your overall design process. I believe, Mon-Ren Chene, our CTO, mentions this in the book that you published. Designers can benefit greatly by designing with prototyping in mind. Doing so will speed up the prototyping process down the road and help with synthesis, partitioning and debug. As we outlined in the book, there are six ways that designing for prototyping can be achieved: Prototyping-friendly design hierarchies, block-based prototyping, a clean and well-defined clock network, memory modeling, register placement, and avoiding asynchronous or latch-based circuits.

    Do You See FPGA Prototyping Expanding Its Role Beyond SoCs?
    Toshio: Advances in FPGAs have opened the door to new areas of adoption. We’re seeing the benefits for areas related to high-performance computing and artificial intelligence. We’re already addressing the HPC market with the upcoming unveiling of our Prodigy Arria 10 Logic Module. We’re excited to see that the benefits of FPGAs can be realized beyond traditional markets. Stay tuned for lots more to come from S2C on these fronts.

    About S2C
    Founded and headquartered in San Jose, California, S2C has been successfully delivering rapid SoC prototyping solutions since 2003.

    With over 200 customers and more than 800 systems installed, S2C’s focus is on SoC/ASIC development to reduce the SoC design cycle. Our highly qualified engineering team and customer-centric sales force understands our users’ SoC development needs. S2C systems have been deployed by leaders in consumer electronics, communications, computing, image processing, data storage, research, defense, education, automotive, medical, design services, and silicon IP. S2C has offices and distributors around the globe, including the UK, Israel, China, Taiwan, Korea, and Japan. For more information, visit www.s2cinc.com.

    Also Read:

    CTO Interview: Mohamed Kassem of efabless

    IEDM 2016 – Marie Semeria LETI Interview

    CEO Interview: Dündar Dumlugöl of Magwel


    California Rules the Road

    California Rules the Road
    by Roger C. Lanctot on 01-16-2017 at 7:00 am

    California’s influence on the global automotive industry remains intact at the start of 2017 in spite of the state’s strict licensing for autonomous vehicle testing on public roads. California managed to chase Uber away with that licensing requirement, but in the process the state has established a benchmark for data collection from autonomous vehicles that has provided a glimpse into the performance of these vehicles on California roads.

    As of Dec. 8, 2016, 20 companies had registered for the program and Alphabet’s Waymo spinoff has proudly touted the steady improvement of the miles-driven-between-intervention statistics for its automated vehicles. California’s greatest impact on the automotive industry, though, has emanated from the California Air Resources Board (CARB) and its emissions testing regime.
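The miles-driven-between-intervention statistic Waymo touts is simple arithmetic over two totals in the DMV disengagement reports; a quick sketch, using illustrative figures in the ballpark of Waymo's published 2016 numbers:

```python
def miles_per_disengagement(autonomous_miles, disengagements):
    # Average autonomous miles driven between interventions; higher is better.
    return autonomous_miles / disengagements

# Illustrative totals: ~636k autonomous miles against 124 disengagements
rate = miles_per_disengagement(635_868, 124)
print(f"{rate:,.0f} miles per disengagement")
```

Tracking how this ratio changes year over year is what lets Waymo claim "steady improvement" from a single pair of numbers per reporting period.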

    We have CARB to thank for the global requirement for an OBDII port in internal combustion and diesel-equipped passenger vehicles. OBDII ports and emissions testing played a prominent role in the industry in 2016. That impact is continuing into 2017.

    But California legislators may extend their influence further with the onset of new vehicle-related laws and regulations that took effect Jan. 1, 2017. The California Highway Patrol was kind enough to reach out to California drivers to bring them up to speed on these new laws:

    https://www.chp.ca.gov/PressReleases/Pages/New-Traffic-Safety-Laws-Take-Effect-in-2017.aspx

    Most prominent of these laws was AB 1785, Use of Wireless Electronic Devices. Given the wide variation and enforcement of anti-texting and anti-phone-use-while-driving laws across the 50 U.S. states, one might hope the rest of the nation and indeed the world might take its cue from California in this case – even if the state is actually following Europe’s lead.

    The new law is very specific in that it forbids motorists from holding a wireless telephone or electronic wireless communications device while driving a motor vehicle. According to the new law, mobile devices must be mounted in the 7-inch square in the lower corner of the windshield farthest removed from the driver or in a 5-inch square in the lower corner of the windshield nearest to the driver.

    Another option is to affix the device to the dashboard in a place that does not obstruct the driver’s clear view of the road and does not interfere with the deployment of an air bag. The law does allow a driver to operate one of these devices with the motion of a single swipe or tap of the finger, but not while holding it.

    I have long advocated a national don’t-touch-your-phone-while-driving law for the U.S. California is now enforcing the nation’s first. And given the fact that it is easier to enforce, I advise caution and compliance when driving in California.

    Other California laws that took effect Jan. 1, 2017 include:

    AB 53: Children less than two years of age must ride rear-facing in an appropriate child passenger safety seat. Children weighing 40 or more pounds, or standing 40 or more inches tall, are exempt. California law continues to require that all children under the age of eight be properly restrained in an appropriate child safety seat in the back seat of a vehicle.

    SB 1046: Requires a DUI offender to install an ignition interlock device on their vehicle for a specified period of time in order to get a restricted driver’s license or to reinstate their license. The law also removes the required suspension time before a person can get a restricted license, provided that the offender installs an IID on their vehicle. The law extends the current four-county pilot program until Jan. 1, 2019, at which time all DUI offenders statewide will be required to install the device to have their license reinstated. Currently Sacramento, Los Angeles, Alameda and Tulare counties are piloting the program.

    AB 51: Lane splitting by a motorcyclist remains legal if done safely. This bill defines lane splitting as driving a motorcycle, which has two wheels in contact with the ground, between rows of stopped or moving vehicles in the same lane. The bill permits the California Highway Patrol to develop lane splitting educational safety guidelines in consultation with other state traffic safety agencies and at least one organization focused on motorcycle safety.

    SB 1072: Requires all school buses, school pupil activity buses, youth buses and child care motor vehicles used to transport school-age children to be equipped with a child safety alert system. Every school is required to have a transportation safety plan with procedures to ensure that a pupil is not left unattended in a vehicle.

    SB 247: All buses manufactured after July 1, 2020, will be required to have emergency lighting fixtures that will turn on in the event of an impact or collision. The law also requires a bus company to ensure the driver of the charter bus provides oral and written, or video instructions to all passengers on safety equipment and emergency exits on the bus before any trip.

    AB 1677: Requires the CHP to develop protocols for entering into a memorandum of understanding with local governments to increase the number of inspections for tour buses operated within their jurisdiction.

    There are other areas where California may take the lead. California is currently piloting road use charging (RUC) hardware and software for mileage-based tolling similar to a system already in place in Portland, Ore. California is also ground zero for car sharing and ride hailing clashes and San Francisco, in particular, is a proving ground for innovative parking systems and solutions.

    California is also home to private toll roads and road use restrictions – as in the case of Market Street in San Francisco. And both Los Angeles and San Francisco have long histories seeking to leverage Waze’s traffic app to their advantage while mitigating the negative impacts of its ad hoc traffic advice.

    California may see its influence on the automotive industry erode, though, if it fails to ease its restrictions on autonomous vehicle testing. California’s mild weather makes for an ideal testbed for autonomous vehicles, but developers appear determined to seek more forgiving venues.

    And, finally, the State of New Jersey’s legislature may steal a bit of California’s thunder should it pass legislation calling for the creation of an Emergency Contact Notification database. The legislature will see introduction of such a measure next Monday, Jan. 23rd. Should it pass, it would set in motion the creation of a nationwide National Law Enforcement Telecommunications System (NLETS) database tied to the vehicle identification number.

    The impetus for the New Jersey legislation derived from the efforts of the family of Sara Dubinin, an unconscious crash victim who passed away before she could be identified and family members notified of her condition. Similar legislation is pending in California, but New Jersey may now take the lead. The impact of this legislation will transform emergency response at crash scenes and will serve as a model for emergency response globally.

    Many auto industry observers have expressed concern at the fragmentation that can result from individual states pursuing different regulatory paths. But sometimes, that fragmentation actually pays dividends when individual states like New Jersey take the opportunity of their local authority to lead and advance the cause of safety.


    Three Interesting Things from TSMC!

    Three Interesting Things from TSMC!
    by Daniel Nenni on 01-13-2017 at 12:00 pm

    First, the TSMC Museum of Innovation is now open and it’s quite impressive. Located right below Fab 12, it is definitely worth an hour of your time. Second, Morris Chang was on the investor call which made it much more interesting, especially his comments on the recent Report to the President on U.S. semiconductor leadership. Third, TSMC will be the first with EUV in production at 7nm.

    The TSMC Museum of Innovation encompasses three exhibition galleries: “A World of Innovation”, “Unleashing Innovation”, and “Dr. Morris Chang, TSMC Founder”. Through interactive technology, digital content, and historical documents we will learn about the pervasiveness of ICs in our daily lives and about their continued advancement. In addition, we will learn how ICs are making our lives more fulfilling and how they are driving technology beyond our imagination. We will also learn how TSMC contributes to global IC innovation and to Taiwan’s economy.

    Unfortunately, I was on a plane during the TSMC investor call but I did listen to the replay and read the transcript. As I predicted in my Double Digit Growth and 10nm for TSMC in 2016! blog, TSMC had a very good 2016 and I will again predict double digit revenue growth for 2017, absolutely.

    In case you have not seen it yet, the REPORT TO THE PRESIDENT, Ensuring Long-Term U.S. Leadership in Semiconductors, was published last week, so of course it came up in the TSMC call Q&A. Morris countered that TSMC has created thousands of jobs by authoring the pure-play foundry business model in 1987 (yes, this is the 30th year of TSMC and the fabless semiconductor industry). Morris also pointed out that this report was to Obama and not to Trump, and shared an interesting anecdote about presidential reports:

    I mean, we have history to guide us. In fact, just tell you an anecdote, in 2006 I met President Bush, then President of the United States, and at that time his, President Bush’s, task force, advisory task force on Iraq, had just submitted a report, basically recommended the U.S. withdrawing from Iraq. And President Bush did not adopt the recommendation. He actually adopted the contrary, which was to increase his troops in Iraq. So I mean that’s just an example that quickly came to my mind, when somebody talks about, ah, report has been written.

    One thing you should know about Morris is that he is a very well read military history enthusiast and has a remarkable memory. While I’m not necessarily equating business to war there is much to be learned in regards to strategy, leadership, and human nature.

    The other interesting nugget on the call is about 7nm and EUV. TSMC now has definitive plans to insert EUV into 7nm:

    Mark Liu
    So we think 7 nanometer is a well adopted node by all the customers and we plan for the subsequent technology to shore up the demand continuously. And we hope to use this technology – I mean the second-year technology to prepare for the EUV production experience for the full fleshed EUV technology on 5. So then our customers can have a very hopefully smooth getting to from our 7 to our 5 nanometer technology. So that is the how we maintain our technology competitiveness.

    Translation: TSMC will be the first to 7nm EUV production, yes?