
IoT Prototyping Workshop in Monterey CA!

by Daniel Nenni on 04-03-2016 at 12:00 pm

With the coming onslaught of IoT designs from big companies and small, the opportunity for IoT FPGA prototyping deserves a closer look. This session will start off with a keynote “The Internet of Trust and a New Frontier For Exploration” and will be followed by a discussion with industry experts Don Dingee, Frank Schirrmeister, Tom De Schutter, and Toshio Namamo. Frank was kind enough to offer his perspective to open the conversation.

The discussion about the Internet of Things (IoT) and its potential almost always makes me smile and think of one of my favorite technology quotes. Science fiction author William Gibson once said that “the future is already here – it’s just not very evenly distributed”. It seems that I am on the early part of the adoption curve, because for me the IoT has already become part of daily life.

When I wake up, my sleep tracker informs me of my sleep score – how long I slept, how many deep-sleep cycles I had, how long it took me to fall asleep and how much snoring occurred last night. It does so by way of a tracker under my sheet (the “thing”), connected to my phone (“the hub”), interacting with the cloud, which compares all this to a set of representative previous nights. Then another tracker, always on my wrist, counts my morning workout calories and my steps during the day. Just like my sleep tracker, it helps me with health aspects. By the time I have dropped my daughter at school and arrived at work, my car has become “the thing,” delivering my movements via my phone (“the hub”) to Google/Waze servers (“the cloud”), which track traffic indirectly and help me find my optimal path through it.

So for me the future of IoT is already here, part of my daily life. At EDPS in Monterey we will talk about prototyping for the age of the IoT. To me the common defining characteristics of the IoT from my examples above include:

  • A connected system of things, hubs, networking and cloud for data processing poses classic system design problems: What are my channel latencies? How much compute power is needed? What bandwidth do my channels have to support?
  • Value is derived across the chain from the “overall system”: take out a component and the chain breaks down, and the value to the end user goes away.
  • “Things” are plentiful and of heavily varying complexity – from wearables through watches and fitness trackers to even cars (although one could argue that the thing for Waze is the phone in the car). The “things” need advanced sensors to pick up all the activities users do, implying needs for analog/mixed signal integration as well as low power given that trackers need to run for days at a time at least.
  • Protocols for communication and networking to and from the hubs in the IoT, as well as the compute, need to be carefully verified, all in the context of software.

Looking at the four characteristics above, prototyping will take a central role in development for IoT applications, of course as part of an overall set of connected engines spanning virtual platforms, RTL simulation, acceleration, emulation, prototyping and even the real silicon. Prototyping allows users to make sure the system of things, hubs, networks and servers interacts correctly and is configured appropriately. It allows teams to demonstrate the overall system value to support business decisions, and to prototype what valuable data end users can derive from IoT applications. Prototyping allows verification of algorithms, protocols and interactions between the digital and analog worlds, and, last but not least, enables software development of varying complexity, from real-time control software in the “things” to middleware and OS validation in hubs and servers, and of course up to the actual applications presented to the end users.
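The sizing questions from the first characteristic above – channel latency, compute power, bandwidth – lend themselves to quick back-of-envelope estimates long before a prototype exists. Here is a minimal sketch of the bandwidth question; every device name, count, sample rate, and payload size below is invented purely for illustration:

```python
# Rough uplink-bandwidth sizing for a set of sensor "things" behind one hub.
# All figures are hypothetical placeholders, not real device specifications.

things = {
    # name: (sensors, samples_per_second, bytes_per_sample)
    "sleep_tracker": (4, 50, 6),
    "fitness_wrist": (6, 100, 6),
    "car_telemetry": (10, 10, 16),
}

PROTOCOL_OVERHEAD = 1.3  # assume ~30% framing/encryption overhead

def uplink_bps(devices):
    """Total hub-to-cloud bandwidth in bits per second."""
    raw_bytes = sum(s * hz * b for s, hz, b in devices.values())
    return raw_bytes * 8 * PROTOCOL_OVERHEAD

print(f"required uplink: {uplink_bps(things) / 1000:.1f} kbit/s")
```

Even a toy model like this makes the channel requirements concrete enough to sanity-check against the radio and network links a prototype will use.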

The IoT is most certainly here but not evenly distributed yet. Prototyping will play a key role for its development. Come join the discussion at the EDPS in Monterey on April 21st. Use the discount code SemiWiki-EDPS2016 for $50 off.

Also Read: IoT Workshop in Beautiful Monterey California!


How my 17-year-old daughter will drive Silicon Photonics into the Mainstream

by Mitch Heins on 04-03-2016 at 7:00 am

I read with interest a recent article in the San Jose Mercury News (Live Video) about how the availability of better-quality cameras on smartphones and the growing appetite for on-demand content on social media now have Facebook and Twitter competing head to head to encourage more people to stream raw footage. Pre-recorded videos on web pages are the norm now, but the growth of live video is exploding. Twitter recently purchased the live-video-streaming app Periscope, which already has over 200 million broadcasts. Add to this the recent release of Snapchat’s Chat 2.0, which emulates face-to-face communication while making it easier to switch between video, texting and calling. Upon reading all of this I realized that it will be people like my 17-year-old daughter, who seems to be forever glued to her smartphone, who will push the use of silicon photonic ICs (PICs) into the mainstream. What, you ask, do 17-year-olds and live video streaming have to do with PICs? Just imagine how much bandwidth will be required within and between our data centers and mobile devices to handle random, bidirectional, live video streaming for Facebook’s 1.6 billion users. It boggles the mind. Layer on top of this the latest ideas about the Internet of Everything, where millions of smart devices will be vying for your attention, and you will quickly realize that the current 10Gbps connections in today’s data centers will be woefully inadequate to handle the amount of traffic that is coming their way.

Data center providers are already responding to the increased traffic needs by building mega data centers (Mega-Datacenters are the Future), but these centers come with their own new challenges. Traditional data centers dealt mainly with independent jobs with only coarse inter-server interactions. Memory and storage were co-located close to the servers, meaning that latency across the data center was not a big concern. This is changing rapidly now with the advent of software-configurable mega data centers that use virtualization of processors, memory, long-term storage and networks to provide unique and customized services to their customers. Virtualization brings with it the need for scalable, lower-latency network communications across the data center, as decisions about how resources will be combined are held to the last moment and change over time. The new mega data centers are now looking at network requirements across the data center that are much more akin to what would be seen in high-performance computing environments using fine-grained inter-process communications. Add to this the sheer size of the mega data centers, stretching to kilometers, and copper-wire networking gets very expensive both in terms of latency (including multiple hops through switches to get across the data center) and in terms of the power consumed to drive those switches and cables.

Enter Silicon Photonics and new network architectures that use modulated laser light over fiber connections to reach across the 2km data center in a fraction of the time required by copper cables and switches, and at a fraction of the power. Silicon Photonics will not only enable faster, lower-power communication; it will also enable more direct communication by integrating the photonics either on-die or in-package with the server’s components, eliminating the need for multiple levels of power-hungry switches. A good example of commercial progress toward this end is the recent product announcement by MACOM of 100G silicon photonics-based communication solutions for use within the data center with standard QSFP28 optical connectors (MACOM).
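To see why eliminating switch hops matters so much, consider a back-of-envelope latency budget for crossing 2km of data center. All the numbers here – velocity factors, hop count, per-switch delay – are rough illustrative assumptions, not measurements or vendor specifications:

```python
# Back-of-envelope latency comparison: multi-hop copper/switch path vs. a
# direct photonic link across a ~2 km data center. Illustrative figures only.

C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_ns(distance_m, velocity_factor):
    """One-way propagation delay (ns) in a medium carrying signals at velocity_factor * c."""
    return distance_m / (velocity_factor * C) * 1e9

distance = 2_000.0  # meters across the mega data center

# Multi-hop electrical path: assume ~0.7c on copper plus 4 store-and-forward
# switch hops at roughly 2 us each (assumed values).
copper_path = propagation_ns(distance, 0.7) + 4 * 2_000
# Direct photonic link: light in fiber travels at only ~0.67c -- per meter it
# is no faster than copper -- but a direct link avoids the switch hops.
photonic_path = propagation_ns(distance, 0.67)

print(f"copper + switches: {copper_path:.0f} ns, direct photonic: {photonic_path:.0f} ns")
```

The interesting wrinkle the sketch exposes is that the per-meter propagation delays are comparable; the latency (and power) win comes from the flatter architecture with fewer switch hops, exactly the point above.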

Nothing drives invention and paradigm shifts like necessity and I can’t think of too many forces of necessity stronger than my 17 year old daughter needing to communicate with her friends, unless it would be my wife needing me to take out the garbage.


In the Valley & thinking about FD-SOI for your next chip design? Epic (and free) symposium 13 April

by Adele Hars on 04-02-2016 at 7:00 am

If you’re in the chip biz in Silicon Valley, check out the SOI Consortium FD-SOI Symposium on April 13th in San Jose. They’ve been running these things since 2009, and I have to say that this one is the most comprehensive to date. Headliners include Cisco, Sony, NXP, Sigma Designs, ARM and Ciena, plus the big FD-SOI foundries, EDA companies, design partners, chipmakers and analysts. There is a special session dedicated to RF and analog design innovation on FD-SOI with STMicroelectronics, Stanford and others. In short, we’re going to get a chance to see the FD-SOI ecosystem in action.

To attend, all you have to do is register in advance – click here to go to the registration page. It’s free and open to everyone who registers.

It’s really a terrific agenda – check it out:

08:00AM – 09:00AM – Registration
08:55AM – 09:00AM – Welcome by Carlos Mazure, SOI Consortium
09:00AM – 09:30AM – Aglaia Kong, Cisco Systems, CTO for Internet of Everything
09:30AM – 10:00AM – Thinh Tran, Sigma Designs, CEO
10:00AM – 10:30AM – Ron Martino, NXP, VP, Application Processors & Advanced Technology Adoption
10:30AM – 10:50AM – Coffee Break
10:50AM – 11:20AM – Subramani Kengeri, GLOBALFOUNDRIES, VP CMOS Business Unit
11:20AM – 11:50AM – Will Abbey, ARM, GM Physical IP
11:50AM – 12:20PM – Kelvin Low, Samsung Semiconductor, Senior Director, Foundry Marketing
12:20PM – 1:40PM Lunch
1:40PM – 2:10PM – Kenichi Nakano, SONY, Sr. Manager, Analog LSI Business Division
2:10PM – 2:40PM – Dan Hutcheson, VLSI Research, CEO
2:40PM – 3:05PM – Mahesh Tirupattur, Analog Bits, EVP
3:05PM – 3:30PM – Mike McAweeney, Synopsys, Sr. Director, IP Division
3:30PM – 4:00PM – Coffee Break
4:00PM – 4:30PM – Naim Ben-Hamida, Ciena, Senior Manager
4:30PM – 4:55PM – Rod Metcalfe, Cadence, Group Director, Product Engineering
4:55PM – 5:20PM – Prof. Boris Murmann, Stanford, on “Mixed-Signal Design Innovations in FD-SOI Technology”
5:20PM – 5:45PM – Frederic Paillardet, STMicroelectronics, Sr. Director, RF R&D
5:45PM – 6:00PM – Ali Erdengiz, CEA-LETI, Silicon Impulse
6:00PM – 6:05PM – Closing remarks by Giorgio Cesana, SOI Consortium

Seriously – this is good. Plus, during the breaks there will be poster sessions with GSS, sureCore, Soitec, SEH and the SOI Consortium.

Please note that if you registered last month when the first announcement went out, the location has changed. The SOI Consortium FD-SOI Symposium will be held on Wednesday, 13 April 2016, from 8am to 6:30pm at the:
Doubletree Hotel San Jose
2050 Gateway Place
San Jose, California 95110, USA

If you can’t make it, not to worry – I’ll be there taking notes for a round-up and follow-up articles. Plus I’ll be doing plenty of tweeting and retweeting (follow me @AdeleHars – look for the hashtag #FDSOI). And of course you’ll want to follow the Twitter feeds of participating companies, and of the SOI Consortium @SOIConsortium.org.

Most of the presentations will also be available on the SOI Consortium website following the event. In the meantime, you can click here to peruse the presentations from previous events.


Managing and Reusing IP in a Build-Borrow-Buy Era

by Don Dingee on 04-01-2016 at 4:00 pm

Make-versus-buy inadequately describes what we do now in electronic systems design. We are on a continuum of design IP acquisition and use decisions, often with a portfolio of active projects and future projects depending on the outcome. Properly managing IP means adopting a build-borrow-buy mindset and tools capable of handling all aspects of the process.

In a binary make-versus-buy decision, teams either bought a complete subsystem (say a single board computer or real-time operating system) or made it from scratch. The potential for the buy option to disenfranchise designers often led to resistance. Overcoming that objection meant enabling design teams to add their value while still buying a base platform – such as expansion connectors for daughter boards and libraries and APIs for operating system extensions.

Subsystems became more modular and reusable, if properly defined with some kind of structure. Abstraction of IP eased integration and fostered reuse. Tradeoffs became much more granular. Rather than managing a design as a whole, functional blocks of software code or hardware descriptions could be brought into play quickly and managed as individual pieces integrated into a greater scheme.

Build-borrow-buy is a better term for two reasons. First, it represents what we really do. Second, it recognizes that reuse is manifested in several ways. Build is straightforward, with an IP block made in its entirety by internal or contracted design teams. Borrow suggests use of a block that was built or bought and proven in a previous design, or one that was lifted from an open source community for this design (and presumably proven in some use there). Buy delivers a block either to our exact specifications or commercial source files that can be tuned quickly to meet the needs.

Objectives in managing software and hardware IP are quite similar. Typical needs begin with version control and bug tracking. Once those are under control other uses appear, such as collaboration and access control, royalty tracking, licensing pass-through, artifacts for patent applications and compliance audits, and M&A due diligence. Automation of IP management provides huge benefits as the complexity of individual projects grows and a portfolio develops.

However, dissimilarities between software and hardware quickly emerge when the acquired IP is to be synthesized and integrated into a working product. A software configuration management tool such as Subversion handles source code and metadata needed for compilation in a high-level language such as C or Java. What these tools lack is any knowledge of a hardware design flow or the ability to handle multiple file formats (likely with different EDA tools) and relationships involved when working with hardware IP. Extending these software-centric tools to integrate with typical EDA tools would be a massive undertaking.

Just as it makes sense to evaluate IP on a build-borrow-buy scale, it also makes sense to consider EDA tools in that same context. While borrowing an open source tool like Subversion sounds attractive, buying tools appropriately designed for hardware IP management can rapidly pay for themselves in complex environments. The number of IP blocks in designs has swelled, with larger designs now containing 100 or more. A typical large design team is now distributed, drawing on expertise from around the globe.


ClioSoft has built a robust, extensible hardware IP design management system in SOS7. Its focus on collaboration and integration with multiple design flows – including digital, mixed-signal, and RF – means that design teams can start managing IP quickly with a minimum learning curve. Instead of stepping outside a familiar design environment to perform additional steps (such as checking files in and out of Subversion), the SOS platform leverages existing EDA workspaces and optimized network storage. SOS handles the nuances of file and metadata organization without burdening the designer with the need to understand where information resides before attempting to use it. The fault-tolerant architecture handles synchronization, backup, and security of the repository.

The bottom line here is hardware teams should be managing the IP and the design, and not spending a lot of effort on improvising with tools not designed for the job. Knowing where IP has been, where it is and what state it is in right now, and where it is going is a competitive advantage that frees design teams from otherwise manual tracking methods. Selection of IP in a build-borrow-buy approach, including how internal teams have modified IP after obtaining it, is a fundamental element of efficient IP reuse.

We’ll be exploring ClioSoft SOS7 capabilities, how it facilitates IP reuse and collaboration, and how IP management fits in the bigger build-borrow-buy picture in future posts.

Also Read

Reinventing Power Management ICs for Mobile

Evaluating the performance of design data management software

The Case for Data Management Amid the Rise of IP in SoCs


Google v. IBM v. Microsoft Artificial Intelligence Strategy Insights from Patents

by Alex G. Lee on 04-01-2016 at 7:00 am

Patents can provide insights into the state of the art of artificial intelligence (AI) technology innovation, and thus into a company’s strategic moves toward AI innovation leadership. To compare the technology innovation strategies of the three leading companies in the AI business – Google, IBM and Microsoft – patent information is exploited for a cross-competitor analysis.

According to Dr. Benjamin Gilad of the Academy of Competitive Intelligence, the goal of cross-competitor analysis is to simplify predictions of competitors’ moves and countermoves when multiple competitors are involved. Through cross-competitor analysis, one can understand: competitors’ behavior when there are several significant competitors; paradigm shifts in industries undergoing rapid change or transition; the entry of new competitors into the competitive landscape; and future directions in strategic moves among industry contenders. The strategic map is a tool used in cross-competitor analysis for visualizing the competitive landscape: a chart of strategic parameter No. 1 (e.g. product/service portfolio, distribution channel, etc.) vs. strategic parameter No. 2 (e.g. product/service price, quality, brand, etc.).

For the AI cross-competitor analysis, US-issued patents of Google, IBM and Microsoft related to AI were reviewed to identify patents that cover the major AI technology innovations: machine learning, neural networks, expert systems, fuzzy logic, genetic algorithms, and AI applications. A total of 1,087 Google, IBM and Microsoft patents were selected for the AI cross-competitor analysis. The key AI patents of Google, IBM and Microsoft account for about 10% of the total key AI patents issued in the US as of 1Q 2016.

The following figure shows the Activity Index vs. technology innovations strategic map for Google, IBM and Microsoft. The size of each circle represents the total number of patents for that technology innovation. The Activity Index is a measure of a company’s relative technology innovation activity in a specific technology field: Activity Index = (share of a specific innovation sub-class within a company) / (share of the company’s patents in total patents), where the share of an innovation sub-class within a company = patents(sub-class) / patents(company), and the share of the company’s patents in total patents = patents(company) / patents(all companies).
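The Activity Index definition above can be made concrete in a few lines of code. The patent counts below are hypothetical placeholders for illustration, not the actual 1,087-patent dataset:

```python
# Activity Index for a cross-competitor patent analysis, computed exactly as
# defined in the text. Patent counts are invented for illustration only.

def activity_index(patents, company, subclass):
    """AI = (share of subclass within company) / (company's share of all patents)."""
    company_total = sum(patents[company].values())
    grand_total = sum(sum(counts.values()) for counts in patents.values())
    share_subclass_in_company = patents[company][subclass] / company_total
    share_company_in_total = company_total / grand_total
    return share_subclass_in_company / share_company_in_total

# Hypothetical per-sub-class patent counts (not the real dataset).
patents = {
    "Google":    {"neural_network": 120, "machine_learning": 60,  "genetic_algorithm": 20},
    "IBM":       {"neural_network": 50,  "machine_learning": 80,  "genetic_algorithm": 170},
    "Microsoft": {"neural_network": 70,  "machine_learning": 200, "genetic_algorithm": 30},
}

for company in patents:
    ai = activity_index(patents, company, "neural_network")
    print(f"{company}: neural-network Activity Index = {ai:.2f}")
```

An index above 1 indicates that a company is more active in that sub-class than its overall patent share would suggest, which is what the circle positions on the strategic map encode.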


The map clearly shows each company’s competitive advantage in the major AI technology innovations. Google’s AI technology innovation is focused on the neural network. In essence, a neural network is an attempt to simulate the human brain. IBM’s AI technology innovation is focused on the genetic algorithm. A genetic algorithm is a search heuristic that mimics the process of natural evolution and is used to generate useful solutions to complex problems. Microsoft holds a comparatively large number of patents in machine learning, which refers to intelligent systems that can learn from data rather than merely follow explicitly programmed instructions.


My morning with Andy Grove

by Tom Mahon on 03-31-2016 at 4:00 pm

I worked briefly at Intel in 1991-2. At the time, the corporate culture was based on the theory of ‘constructive confrontation.’ For most, that meant that in the clash of good ideas, the best one would prevail. For some at Intel, however, constructive confrontation was a blood sport. (I trust things have improved in the past quarter century.)

On my second morning on the job I was walking down a hall and somebody I didn’t know rushed up to me, pointed to a nearby conference room, and said, “You’re supposed to be in that meeting. Get in there right now! Go on, get in there!” (I did as ordered. I was so low on the company’s org chart that, were Intel a ship, I wouldn’t have had assigned lifeboat space.)

So I went in to the meeting, and as soon as I sat down I realized that I was the victim of corporate hazing. It was a meeting of Andy Grove’s executive staff, and just as I was about to get up and get out, Dr. Grove walked in and took the only remaining seat. Right next to me.

Andy Grove was very smart, very observant and very outspoken. But although he looked directly at me as he sat down, he didn’t notice, or didn’t seem to mind, that there was a stranger in his staff meeting. So maybe he wasn’t all that paranoid, after all.

Andy opened the meeting by announcing that tomorrow during the quarterly financial analyst call, he was going to announce that Intel was about to become the second largest advertiser in the US, after Procter & Gamble. The program was to be called “Intel Inside,” and the budget was going to be (as best I remember) about a quarter of a billion dollars.

An unassuming man in a black suit, white shirt, and black tie across the table advised Andy that the better way to make the announcement would be to say, “We plan to increase revenue by X percent but it will require an investment of $Y.” “No,” said Grove, “I want to lead with the dollar figure. It will get their attention.”

“I really don’t think you should do it that way, Andy.”

“Well, I’m going to.”

“All right. But I’ll remember this when I do your performance review this year.” And with that, Dr Gordon Moore got up and left the room. And that was that.

That’s the day I learned that everybody – everybody! – reports to somebody. Grove reported to Moore, and maybe Moore went home to report to his wife.

Later in the morning meeting, a topic came up that resulted in a heated conversation among Grove and his staff. It happened to be an issue I knew something about (though I forget now what it was).

So I contributed my two cents, and immediately wanted to bite my tongue. What the hell was I doing!

But with that, Grove spun in his chair, looked right at me, slammed his palm on the conference table and said to his staff, “He’s right!” Followed immediately by, “Whoareyou?”

And that was my introduction to Andy Grove. I wasn’t fired or promoted. But after that, whenever we passed in the halls, he’d nod and say hello. R.I.P.

© 2016, Tom Mahon


IOT – Job Killer or Job Creator

by Bill McCabe on 03-31-2016 at 12:00 pm

Is the IOT a Terminator or a Transformer? And where should you look to get the most value out of the Internet of Things revolution? The rebooted Terminator movie came out earlier this summer. Its blasted, futuristic landscape of robot killers and gun-toting warrior humans presumably began with enhanced computer technology similar to what we are experiencing today with the Internet of Things. I’ll be in the theater with my popcorn wondering: will all this connectivity ultimately enhance our human experience, or will we end up like the people on-screen, fighting to keep our place in this new world?

Of course, the Terminator movie is science fiction. But let’s look at the connected devices trends that will either displace or generate new opportunities for those of us in the trenches.

Healthcare
In a recent Goldman Sachs report (June 29, 2015), analysts predict that the healthcare arena will experience extremely high levels of change driven by the IOT.

“The service side of health care (hospitals, managed care) stands the most to gain from the adoption of digital health and IoT. Better patient management, streamlining the care continuum, and reducing costly (and in some cases unnecessary) admissions all have the potential to improve the future economics for health care services,” said the Goldman Sachs report. “The first wave of health care IoT technologies that prove successful will be those that drive specific action to improve patient care and correspondingly reduce waste and cost.”

I believe that this scenario provides more creative opportunities for connected Internet of Things developers in the healthcare space. Where can we take wearables in the digital age? We can reduce waste on the primary care side of things while creating opportunity for patients to gain unprecedented control of their health. And our IT geniuses can come up with new apps to connect it all.

Manufacturing
According to a June 30, 2015 article in the Wall Street Journal, Ernst & Young’s acquisition of “the systems consulting arm of manufacturing intelligence firm Entegreat Inc.” is just the latest move in the mergers-and-acquisitions free-for-all in the IOT (more accurately, the Industrial Internet of Things) space. The opportunities for eliminating waste in the industry are almost as plentiful as the thousands of connections that result when every node in a supply chain—from suppliers to customers and back—is integrated. The production floor in an IOT-enabled factory will look quite different—yes, and probably will have fewer humans involved. However, the opportunity for job creation is endless—think about developers working to integrate old-school systems of record like MRP and ERP into new, cloud-based, mobile solutions. What about app developers—shop floor personnel might one day work from home—how can you translate inventory data streams, customer orders and work-in-process data to a tablet or mobile phone? These are the questions that new and emerging IT talents can sink their teeth into.

Everywhere Else
If you want to unlock the job creation potential of the Internet of Things, look no further than the latest McKinsey report. They’ve identified nine areas of growth to reach the $4 trillion to $11 trillion of value inherent in the IOT’s potential. I’m taking liberties here in placing the remaining seven categories (we’ve already covered what McKinsey characterizes as the “human” (healthcare) and “factories” (manufacturing) categories) together in an overarching category of “everything else,” with a few characteristics in common: business model and modality disruptions.

McKinsey talks about business model opportunities where the Internet of Things will create brand new ways of doing business. Its focus on “everything as a service” disrupting the traditional back-and-forth of business transactions is spot-on. However, the most opportunities for job creation (aside from the fact that these new business models might very well need a brand new breed of MBA) lie in what I call “modality disruptions.” These are the “how I will live my life” changes that provide the most value. For developers and IT professionals, this means that their discipline’s value will experience a sea change in the eyes of their leaders. With all of the changes in cities, homes and vehicles, and across all of the categories of emerging value in the IOT, the modality disruption of how we do business will ensure that IT is not only an enabler, and never again a not-so-benign cost center, but a true game changer whose capabilities will guarantee a company’s future—or its demise.

Give us your take on the Job Killer or Job Creator debate – where do you stand, and what do you think the outcome will be?


IoT or Smart Everything?

by Daniel Payne on 03-31-2016 at 12:00 pm

I just attended a keynote presentation at SNUG from Aart de Geus, CEO of Synopsys. This event is well attended, with some 2,500 people learning from 96 presentations on all things Synopsys, semiconductor IP, and foundry trends. There are big-name sponsors like GLOBALFOUNDRIES, Samsung, Socionext, TSMC, Fujitsu, SMIC, TowerJazz and UMC. India SNUG actually had a bigger attendance than Silicon Valley, and this is the 26th year for SNUG. When you count all 13 SNUG events worldwide, it comes out to about 10,000 attendees – that’s a lot of engineers.

The Technical Committee gave its best paper award to GLOBALFOUNDRIES for its paper on FD-SOI at 22nm, known as 22FDX. We’ve blogged quite a bit about FD-SOI here on SemiWiki, and about its potential for lower power and lower cost than planar CMOS technology.

Aart gave another high energy talk and started with the revenue trends for the semiconductor industry showing a 4.4% CAGR. He sees three major industry eras for semiconductors as:

  • Compute – PC, Internet, Networking, Server, Cloud
  • Mobility – Phone, Smart Phone, Tablet, Apps
  • IoT

With IoT there are four possible places to sell products or services:

  • At the edge
  • In the fog, between edge and cloud
  • In the Cloud
  • The Apps

With sensor-rich IoT devices there are enormous amounts of data being collected, leading to the need for management and analysis, thus Big Data. Security is an immediate issue with IoT devices and in general anything that relies on cloud storage.

For software developers there’s an increased awareness to improve quality, security and safety. Is there a way to offer sign-off for software security? Synopsys thinks so, and has acquired companies in this space which produce about $100M in revenues.

Aart prefers to use the phrase “Smart Everything” instead of IoT because it is more descriptive of the general trend that consumers see today in Smart Phones, Smart Cars, Smart Homes, etc.

With IoT there are distinct market segments: wearables, health, home, city, automotive, industrial and finance. In the automotive market there are existing and emerging standards for safety, quality and reliability. Synopsys offers its own IP that has been designed to meet the automotive standards. Even DesignWare is growing to include a subsystem to handle all of the sensors typically used in IoT applications.

A popular slide showed the number of IC design starts using Synopsys tools by process node and time, so it’s exciting to see 10nm and 7nm designs coming to life so quickly.


IC Design Starts by Process Node over Time. Source: Synopsys

Following the industry trends, the keynote focused on improvements to specific Synopsys tools for logic synthesis, test, place & route, DRC, LVS, and static timing analysis. Synopsys continues to collaborate closely with ecosystem partners like ARM for processor IP, the foundries, and leading-edge systems design companies.

One new development was in the area of custom IC design, which has historically been a very manual process. Aart talked about Custom Compiler as a way for transistor-level designers to use a visually assisted automation approach instead of the older schematic-driven layout methods. Internally at Synopsys there are 2,500 IC designers, and some of them have started to use Custom Compiler, which now provides them with time reductions from 1 hour to just 5 minutes using templates and quick iterations. With FinFET transistors, IC designers certainly need help to deal with the increase in design rules, the complexity of transistor fingers, and the increased amount of parasitic RLC elements.

A new technology called Cheetah will speed up the VCS simulator on both RTL and gate-level runs by 5X to 30X by exploiting fine-grained parallelism. You’ll have to wait a while for actual product announcements, so stay tuned.

Synopsys is really trying to stretch the entire process from Silicon to Software by growing into the Software developer space.

In summary, we’re in the third wave now where Smart Everything (aka IoT) will drive new semiconductor designs and revenues.


Guard Vehicles from Cyber Attacks!

Guard Vehicles from Cyber Attacks!
by Roger C. Lanctot on 03-31-2016 at 7:00 am

Law enforcement officers, emergency responders and commercial fleet operators cannot afford to operate vehicles without the assurance of security. A police officer, emergency medical technician or truck driver cannot live with any uncertainty regarding the integrity of their vehicle’s safety systems and powertrain.

That overwhelming and immediate need for security has given rise to an aftermarket for devices intended to secure the OBDII diagnostic port in most cars (made after 1996) and many commercial vehicles. The OBDII port is the same port used by Progressive’s Snapshot usage-based insurance device and Automatic’s diagnostic dongle.

The most prominent examples of these aftermarket security devices – themselves plug-in OBDII devices – come from RunSafe Security and Argus Security. A third company, Autocyb, offers a physical lock and key for the OBDII port.

Autocyb (left) and RunSafe (right)

The urgency of this need was made apparent in multiple conversations at the recent International Communications Data & Digital Forensics seminars put on by the Metropolitan Police at a venue outside London. For attendees at this event, the connected car presents new opportunities while creating new vulnerabilities.

Criminals continue to gain access to and steal cars, a process made easier by the presence of the OBDII port. The new kid on the block is the cybercriminal using virtual or remote access to the car for nefarious purposes.

Once inside a car, access to the OBDII port greatly eases the criminal’s task of disabling or taking the car. But the emergence of automotive cyber attacks has created the need for a means to secure cars from wireless attacks on multiple vehicle networks for the purposes of remote control mischief, ransom or terrorist activities.

Police in the U.S. state of Virginia have been testing the RunSafe device, created by an offshoot of Kaprica Security. According to the company, which opened its doors just last year, RunSafe’s “App and OS Guardian are a preventative security overlay for native code that mitigates widespread return oriented programming (ROP) attacks.

“(The RunSafe applications) increase security by leveraging randomization (binary stirring) or novel control flow integrity (CFI) concepts. The overlay is an example of a defensive technology called run-time application self-protection (RASP). They can “shrink” app or OS attack surfaces by up to 90%.”

Argus says its technology identifies malicious attacks using its patent-pending deep packet inspection algorithms – scanning all traffic in a vehicle’s network, identifying abnormal transmissions and enabling real-time response to threats. Argus’ aftermarket solution is designed to provide a comprehensive overview of cyber attacks and irregularities, allowing car makers to identify unauthorized attempts to tune or change an ECU’s behavior.
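Argus does not publish the details of its deep packet inspection algorithms, but the general idea it describes (scanning all traffic on the vehicle network and flagging abnormal transmissions) can be illustrated with a toy sketch. The class name, tolerance threshold, and frame format below are invented for illustration only; a real in-vehicle intrusion detection system would be far more sophisticated. One simple heuristic is that most CAN messages are broadcast periodically, so a frame with an unknown ID, or one arriving far off its learned schedule, is suspicious:

```python
from collections import defaultdict


class CanAnomalyDetector:
    """Toy detector: flags CAN frames whose arrival timing deviates
    from a per-ID baseline learned on known-good traffic."""

    def __init__(self, tolerance=0.5):
        self.tolerance = tolerance              # allowed fractional timing deviation
        self.baseline = {}                      # CAN ID -> expected inter-arrival gap (s)
        self.last_seen = defaultdict(lambda: None)

    def train(self, frames):
        """Learn the average gap between frames per CAN ID.
        Each frame is a (timestamp_seconds, can_id) tuple from clean traffic."""
        gaps = defaultdict(list)
        last = {}
        for ts, can_id in frames:
            if can_id in last:
                gaps[can_id].append(ts - last[can_id])
            last[can_id] = ts
        self.baseline = {cid: sum(g) / len(g) for cid, g in gaps.items() if g}

    def check(self, ts, can_id):
        """Return True if this frame looks anomalous."""
        if can_id not in self.baseline:
            return True                         # ID never seen in training: suspicious
        anomalous = False
        prev = self.last_seen[can_id]
        if prev is not None:
            expected = self.baseline[can_id]
            # A frame arriving much earlier than its learned period often
            # indicates message injection flooding the bus.
            if abs((ts - prev) - expected) > self.tolerance * expected:
                anomalous = True
        self.last_seen[can_id] = ts
        return anomalous
```

For example, after training on an ID that broadcasts every 10 ms, an injected burst of that same ID arriving every 1 ms would be flagged, as would any ID absent from the training traffic. Real systems also inspect payload ranges and cross-signal consistency, not just timing.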

Unlike the RunSafe plug-in device which must be removed to allow for service diagnostic tools to be connected, Argus has shown a secure OBDII plug-in that provides a port to allow OBDII connection THROUGH its device.

Both Argus and RunSafe offer embedded and cloud-based security solutions for cars. Argus is also offering its technology as an add-on for aftermarket devices from insurance companies and others.

It’s notable, in the wake of the FBI and U.S. Department of Transportation warnings about the security risks of connecting devices to cars, that the OBDII port is seen by both companies as a means toward enhancing vehicle security. One weakness of OBDII-based security emerges on newer cars: as vehicles adopt over-the-air software update technology, aftermarket devices may interpret legitimate updates as malicious code, posing a challenge for aftermarket solutions.

The most important aspect of the emergence of these devices is the “productization” of security. While security as a service is the more accepted and familiar model on desktop and portable computers today – e.g., Norton and McAfee antivirus products – cars present unique security challenges that create demand for unique solutions.

The arrival of these products demonstrates the immediacy of the need for automotive security. Fleet operators of all kinds won’t wait for car makers.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


The time Andy Grove came to Fortune and refused to meet with the editors

The time Andy Grove came to Fortune and refused to meet with the editors
by Rik Kirkland on 03-30-2016 at 4:00 pm

In my nearly thirty-year career at FORTUNE magazine, I got to know a host of larger-than-life characters. But few loom larger in memory than the diminutive dynamo who sadly passed away last night, Andy Grove.

Amid the stream of obits and reminiscences rightly hailing Andy’s extraordinary career as CEO of Intel, his major contributions to management thinking in books such as Only the Paranoid Survive and High Output Management and his moving autobiography Swimming Across, which vividly relates how young Andras Grof escaped war-torn Hungary to reinvent himself in America as Andy Grove, I have two small stories to offer.

Both capture what to me is Andy’s essence, what defined him as a leader and a man – his extraordinary intellectual energy, harnessed to an incessant willingness to challenge bluntly both himself and everyone around him.

In May 1996, when I was deputy editor, Andy wrote a cover story for FORTUNE, “Taking on Prostate Cancer,” in which he clinically examined the choices he had confronted when diagnosed with the disease. One exhibit included a chart he had proudly crafted himself. But in the course of checking the math, a 25-year-old first-year Fortune reporter named Bethany McLean called him out on an error. He exploded in anger – and then quickly backed down once he re-examined the facts and realized she was right. (Bethany would go on to prove she was more than capable of holding her ground and facing down angry older white men when she wrote, in 2001, a month after I became managing editor, the first national story to question Ken Lay and Enron’s then high-flying stock.)

When Andy came to visit our offices in New York shortly after the piece appeared, he had little interest in seeing me or my boss, John Huey, who had commissioned the story. “Never mind you guys,” he roared, “I want to meet this Bethany McLean!” He was quick to challenge but equally ready to admit – and celebrate – if he made a mistake.

Some months earlier, Andy had appeared on stage in San Francisco at the FORTUNE 500 Forum with the other most-prominent CEO of that era, GE’s Jack Welch. In the course of their dialog, Andy suddenly turned to Jack and asked him if he used a computer. Jack admitted he did not. (Yes, kids, 20 years ago it was still possible to run one of the world’s biggest and best companies without personally using either the PC or the Internet!) Andy shook his head and, leaning in towards Welch with a mix of empathy and horror, said in his heavily accented English: “Jack, you really need to get a PC.” Welch must have . . . eventually, because a few years later his company, which had long focused mainly on Six Sigma as its core cross-cutting initiative, announced it was launching a new one — on digital and e-commerce.

Andy Grove: shouter, teacher, mentor, challenger, change-agent—and most important, life-long learner. He broke the mold and set an example for us all.