
The Emerging Battle for Your Car’s Data

by Roger C. Lanctot on 03-13-2016 at 8:00 pm

The Future Networked Car gathering put on by the International Telecommunication Union at the Geneva Motor Show last week highlighted the intensifying debate over automotive data privacy. A representative from FIA, the international federation of car clubs, and Stephan Appt, legal director and attorney at Pinsent Masons, outlined fundamental contradictions facing car makers and consumers.

FIA is in the forefront of a global effort by car clubs to alert consumers to the data collection capabilities of automobiles. FIA has been leading the MyCarMyData campaign to educate consumers regarding the vehicle data collection activities of car makers and the potential consequences.

FIA wants consumers to know their rights to privacy, but the organization is also advocating for consumer choice. FIA believes consumers should have the right to choose their telematics service providers and vehicle repair options.

The FIA position reflects the organization’s perception that connected cars will increasingly be tied to the car maker’s eco-system of service providers. FIA says its consumer surveys show that:

  • 90% believe the car data is owned by the car owner or user
  • 95% want legislation to protect user data
  • 78% want to choose their service providers
  • 76% believe that consent to access data should be for a limited time or per-ride basis

The irony and the reality is that very little car data is being gathered in real-time today, though some data is being gathered periodically. But the onset of connected and autonomous cars is rapidly altering the industry mindset around vastly increased data collection.

FIA’s focus on consumer choice relates to the roadside assistance and insurance services offered by car clubs, which are increasingly introducing aftermarket telematics systems to connect to their customers and compete with car makers.

Car makers are still remarkably conflicted regarding connected cars. Some car makers may themselves be interested in privacy protection, for their customers and for themselves. It was only two years ago that former VW CEO Martin Winterkorn warned that the car was becoming a Datenkrake (data octopus) and that VW was committed to protecting the privacy of its customers.

Winterkorn’s words revealed the profound ambivalence prevailing in the auto industry regarding privacy and data collection, particularly in the wake of two years of record-breaking recall levels. Car makers still aren’t quite sure they want to collect all that vehicle data.

It is clear that vehicle data can not only be used against the driver by law enforcement, marketers or insurance companies, it can also be used against the car companies by regulators or consumers. Additionally, vehicle data has become a battleground as governments such as Russia and China insist that car makers locate their data collection servers within the borders of those countries and as regulators throughout the world specify how long data must be preserved or how quickly it must be destroyed.

The last thing any car maker wants to do today is get into the business of selling its data. Any vehicle or customer data that might escape into the wild, even via a valid commercial agreement, could contain the seeds of a devastating lawsuit or regulatory action.

There are exceptions to this ambivalence. Tesla Motors proudly maintains its lifetime always-on connectivity. By and large car companies are not gathering vast quantities of data. But that is about to change.

Appt of Pinsent Masons doesn’t see how car companies can possibly avoid collecting data on their cars and he pointed out the need for clear customer disclosures and opt-in procedures in advance of vehicle usage data collection. He also noted the requirements associated with event data recorders and the regional limitations placed on dashcam data collection.

For all their ambivalence about collecting data, though, car makers have an obligation and a need to collect data. Vehicle data may turn out to be incriminating, but Appt says car makers are obliged to collect and analyze data since they are answerable for the performance of the vehicle and the safety of the customer.

In the context of security concerns, the need for car companies to collect data has only increased. Car makers are increasingly recognizing they have a need to monitor vehicle systems as much as possible in real-time to ensure the integrity of vehicle performance and to detect and prevent the intrusion of malware.

Appt notes that efforts are underway to rationalize and harmonize privacy laws in Europe and around the world, but these efforts are at the earliest stages. In the meantime, car makers are caught like deer in the headlights. They are answerable for vehicle failures, recalls and security intrusions, but they have a limited set of tools to take on these responsibilities, they confront a fragmented legal framework around privacy, and their customers are increasingly wary of vehicle connections and data collection.

On a separate panel at the Future Networked Car event a moderator asked about the right of consumers to opt out of connectivity and the impact that might have on safety systems based on vehicle-to-vehicle communications. The FIA notes that 91% of the respondents to its survey said they wanted the right to turn their car connections off.

Consumers shutting car connections off may create the peace of mind of an escape from the intrusive data gathering eyes of car makers, marketers and insurers, but it does not let the car maker off the hook for liability regarding the safe and secure operation of the vehicle. It also undermines safety systems designed to use connectivity to avoid collisions.

Finally, it isn’t enough to collect the data. If a car company collects vehicle data there is an implied obligation to thoroughly analyze the data. This is yet another reason why car companies remain ambivalent. They will clearly be held liable for collecting data which might contain evidence of vehicle malfunctions. Yes, the days of plausible deniability are officially past.

As car companies collect and analyze data they will be expected to notify vehicle owners and drivers in a timely manner of imminent vehicle system failures. Existing guidelines for postal notifications of potential malfunctions or flaws will no longer be sufficient. Real-time, in-dashboard warnings and alerts will eventually be implemented by all car makers.

Herr Winterkorn was correct in observing that his industry was confronting a Datenkrake, but his prescription was wrong. The auto industry must embrace connectivity and all of the responsibilities that it entails. Data is neither good nor bad. It is only a resource to be used to better serve and protect the customer.

More articles from Roger…


Apple Protects Its Designs With Custom Silicon And You Can Too

by Bob Frostholm on 03-13-2016 at 4:00 pm

In the February 22-28 issue of Bloomberg Businessweek magazine, Johny Srouji, Apple’s senior vice president for hardware technologies, discusses Apple’s winning strategy of owning its own silicon. It began with the acquisition of a Silicon Valley chip startup called P.A. Semi in April 2008, and since then Apple has never looked back.

The Bloomberg article notes, “When the original iPhone came out in 2007, Steve Jobs was well aware of its flaws. It had no front camera, measly battery life, and a slow 2G connection from AT&T. It was also underpowered. A former Apple engineer who worked on the device said that while the handset was a breakthrough technology, it was limited because it pieced together components from different vendors.”

In an earlier paper, I likened this to the Frankenstein Effect where I wrote:

Mary Shelley’s 1818 novel, Frankenstein, tells the story of a monster created with parts collected from random cadavers. The creature stands eight feet tall due to an inability to integrate all the necessary components into a standard humanoid form factor. Additionally, this haphazard collection of limbs and organs lacks sufficient neural network connections, accounting for its awkward gait and the general stiffness of its arms and shoulders as it walks with forearms extended. This is perhaps the first documented evidence of the problems that can occur when designing a system using ‘point-products’, parts selected for their unique special functions without regard for their interoperability. Clearly, Mary Shelley was a visionary.

The BBW article continues, “‘Steve came to the conclusion that the only way for Apple to really differentiate and deliver something truly unique and truly great, you have to own your own silicon,’ Srouji says. ‘You have to control and own it.’”

Not every company can afford to acquire and successfully maintain a semiconductor development center dedicated to supporting their internal needs. But you can gain many of the same benefits by working with a reputable ASIC (Application Specific Integrated Circuit) semiconductor company to design and produce custom ICs for you.

No one can deny that Apple and a handful of other high-tech companies are anomalies. They have the wherewithal and financial resources to do just about anything they want, including acquiring, at the drop of a hat, capabilities they don’t already have. Good for them. Your company may not be so fortunate. That doesn’t mean you have to throw your hands in the air and give up. There are lower-cost alternatives that can deliver similar technological, cost and size advantages to ignite your sales.

First and foremost is to eliminate any Frankenstein Effects in your design.

In the world of standard product ICs, there are chips that can do just about any function you can imagine…just like the arms, legs, torso and brain Mary Shelley wrote about. The trick is to get them to connect and interoperate in a smooth and efficient manner. If every chip interfaced smoothly with every other chip in a design, engineering would be so easy that even a finance major could do it. Unfortunately that isn’t the case, so engineers spend a disproportionate amount of their time and effort getting part A to communicate with part B and getting part B to communicate with part C, etc., and thus designs can quickly become awkward and cumbersome as additional components are added to bridge these transitions.

Apple’s solution was brilliantly timed but not unique. Companies have grown dramatically for decades by acquiring other companies for their technology; look into Cisco’s history. These big-ticket approaches are reserved for companies with deep pockets. A less costly approach is to develop a relationship with an ASIC chip company to roll your critical design elements into a custom system-on-chip. You don’t even need to go that far: often a subset of a system integrated into a single silicon chip can be the catalyst that jump-starts your product’s success.

ASICs are no longer the purview of the high volume users. Many ASIC companies entertain volumes as low as a few tens of thousands of units per year; some even as low as a couple of thousand. And some offer NRE & Tooling rebate programs that over time make the development costs zero, putting total cost of ownership on par with off the shelf components.
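To see what “total cost of ownership on par with off-the-shelf components” means in practice, here is a minimal sketch of the break-even arithmetic. All of the dollar figures are hypothetical placeholders, not vendor quotes.

```python
# Illustrative break-even estimate for an ASIC vs. off-the-shelf parts.
# All figures below are hypothetical; plug in quotes from your own vendors.

def break_even_volume(nre, asic_unit_cost, std_bom_cost):
    """Units at which ASIC total cost matches the standard-parts BOM."""
    savings_per_unit = std_bom_cost - asic_unit_cost
    if savings_per_unit <= 0:
        raise ValueError("ASIC must be cheaper per unit to ever break even")
    return nre / savings_per_unit

# Hypothetical example: $250k NRE, $1.80/unit ASIC vs. $4.50 multi-chip BOM
volume = break_even_volume(nre=250_000, asic_unit_cost=1.80, std_bom_cost=4.50)
print(f"Break-even at ~{volume:,.0f} units")  # ~92,593 units
```

With an NRE rebate program folded into the unit price, the effective NRE term shrinks toward zero and the break-even volume drops accordingly.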

Whether through acquisition or partnering with an ASIC company, the benefits are nearly identical: Faster Time to Market, Improved Performance, Lower Total System Cost, Reduced Size, IP Protection, Lower Power Dissipation, and Improved Reliability.

No one ever accused Apple of being stupid. Investigate for yourself and see how you can keep Frankenstein out of your designs.


Is Bluetooth Smart the de facto standard for IoT wearables, beacons, fitness and health?

by Eric Esteve on 03-13-2016 at 7:00 am

Synopsys has launched a Bluetooth Low Energy (BTLE) PHY IP, qualified by the Bluetooth Special Interest Group (SIG) and compliant with the Bluetooth® Smart v4.2 specification. The company has built a partnership with Mindtree to provide a complete solution, integrating Synopsys’ Bluetooth Smart PHY IP and Mindtree’s production-proven BlueLitE link layer and software stack IP. The PHY IP has been developed on TSMC 55 nm (and 180 nm) and is currently being ported to TSMC 40 nm. Synopsys’ marketers think that the semiconductor content of IoT edge systems like wearables, beacons, fitness or health devices should be low cost to see strong adoption. By low cost, they mean a few dollars, and definitely less than $5 for the semiconductor content.

But the selected technology node (55 nm) also means relatively low development cost, at least much, much lower than at 28 or even 22 nm. Here we have to remember the low complexity of an IoT edge device: a sensor provides a small amount of data to an MCU, and once processed, the data is transmitted wirelessly via the on-chip BTLE protocol to the gateway. You simply don’t want to target an aggressive technology node and invest a huge amount of money in development cost for such a relatively simple system, comparable to a $2 MCU in terms of complexity.
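The simplicity of that edge-node data path (sensor, MCU processing, BTLE report) can be sketched in a few lines. Every name below is a hypothetical stand-in for the sensor driver, MCU firmware and BTLE stack, just to show how little work such a node actually does.

```python
# Minimal sketch of the edge-node data path described above:
# sensor -> MCU processing -> BTLE transmit to the gateway.
# All names and the radio interface are hypothetical stand-ins.

import random

def read_sensor():
    """Stand-in for an ADC read of, e.g., a temperature sensor."""
    return 20.0 + random.uniform(-0.5, 0.5)

def process(samples):
    """Typical MCU-side work: average a burst of samples."""
    return sum(samples) / len(samples)

def ble_notify(value):
    """Stand-in for a BTLE characteristic notification to the gateway."""
    payload = int(value * 100).to_bytes(2, "little")  # 0.01-unit fixed point
    return payload

samples = [read_sensor() for _ in range(8)]
payload = ble_notify(process(samples))
print(len(payload), "byte payload per report")  # tiny payloads suit BTLE
```

A system this small is exactly why an aggressive node buys you nothing: the compute and the payload are both trivial.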

This Bluetooth PHY IP, designed for battery-powered IoT applications, operates below a one-volt supply to extend battery life, and there is no need to target advanced technology nodes to save power. If you expect the IoT device count to reach analysts’ forecasts, then development cost has to be reasonable enough that a multiplicity of IoT systems can come to market. That’s why I think offering IP targeting 55 nm rather than 28 nm, allowing much cheaper NRE design cost, sounds like clever market positioning… and common sense.

Device cost, power consumption and an efficient wireless connection: these should be the main features of a successful IoT edge device. Synopsys’ Bluetooth Smart PHY IP with Mindtree’s link layer and software stack IP is a complete, production-proven solution. Mindtree brings a software part which consists of all the mandatory and optional features of the Bluetooth Smart core stack and all the adopted profiles. Synopsys’ PHY IP, including an integrated on-chip transceiver matching network and single pin-to-antenna interface, helps reduce BOM cost and simplify board design. By definition, a wired or wireless connectivity IP has to be interoperable. The IP’s conformance to BLE 4.2 has been validated and all tests pass in corner conditions for an operating voltage in the 0.95 V to 1.2 V range and an operating temperature from -40°C to 85°C. How can you make sure that an IP has been qualified? Just take a look at the Bluetooth Qualified Design Listing (QDL) and check for:

To address IoT design requirements, Synopsys proposes a specific IP portfolio and takes special care to support:

  • Connectivity
  • Security
  • Energy efficiency
  • Sensor processing

Synopsys had to make major investments to expand this portfolio, as with this Bluetooth PHY IP coming from the acquisition of Silicon Vision assets, or by re-architecting existing IP for IoT, optimized for low power consumption and for the 55-nm and 40-nm IoT process technologies. The portfolio includes wireless, security (accelerators and security modules), interfaces (USB, MIPI DSI & CSI, Ethernet, ADC…), memories (EPROM, ROM, RAM and NVM), logic libraries, processors (ARC and vision processors), and sensor and control subsystems.

What’s the next step to make IoT edge systems even more cost- and footprint-optimized? When the sensor itself is integrated into the IoT SoC, the edge system’s cost, footprint and power consumption should reach a minimum. This integration path can be valid, but only if the target SoC technology node supports sensor integration. I am not sure that will be the case at 28 nm or below…

From Eric Esteve from IPNEST

More about Bluetooth smart IP from Synopsys:


Explore Google Chromium USB Type-C example designs using GoArks’ USB C-Thru

by Rajaram Regupathy on 03-12-2016 at 12:00 pm


One of the early adopters of USB Type-C and USB Power Delivery is Google, for its Chromium projects. More interestingly, Google has shared the complete designs of its USB Type-C products in the public domain, from schematics to the source code of the solutions. This article explores how to use the USB C-Thru board to study Google’s designs, thereby enabling you to develop a custom USB Type-C design of your own.
This article shows you how to make your own Google USB-PD sniffer, aka “Twinkie”, using USB C-Thru and an STM32 development board for just $65 in three steps.

Hardware requirements:


  • USB C-Thru (www.goarks.com) from https://www.crowdsupply.com/goarks/usb-c-thru
  • STM32F072B development kit from distributor. (http://www.mouser.in/search/ProductDetail.aspx?R=0virtualkey0virtualkeySTM32F072B-DISCO)

    Step – 1: Preparing software and firmware binary for Google’s USB PD Sniffer


    Figure 1: List of USB Type-C design examples from Chromium website

    Step – 2: Setting up Hardware STM32F072B-DISCO with USB C-Thru
    Having setup the necessary software and flashing the board with firmware, let us explore how to setup the hardware to build a Twinkie.


    Figure 2: Schematic of Twinkie indicating CC pins

    • Now connect pins PA1 (grey), PA3 (orange) and ground (black) from the STM32 board to the USB C-Thru (PA1 to CC1, PA3 to CC2) as shown in Figure 3 below:


    Figure 3: Schematic of Twinkie indicating CC pins

    • This setup gives you a part of the Twinkie design that is sufficient to act as a USB PD sniffer.

    Step – 3: Start sniffing USB PD data with homemade sniffer
    With USB C-Thru board and STM32 discovery board we have the sniffer ready to get deployed for debugging. In the example setup I am using a laptop with a USB Type-C charger and USB-C Thru based Sniffer connected to my Ubuntu PC.

    • Using a USB 2.0 mini cable, connect the STM32 development board to the host PC on which Google’s USB PD sniffer software was set up (Step 1). An “lsusb” on the host PC shows the USB PD sniffer, as in Figure 4 below:


    Figure 4: Enumeration of STM32 development board as Twinkie device

    • Plug USB C-Thru to the laptop and the USB Type-C Charger to the receptacle end of the USB C-Thru as shown in Figure-5 below:


    Figure 5: Google’s USB Type-C example design Twinkie implementation using USB C-Thru and STM32
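If you want to script the enumeration check from the step above rather than eyeball the “lsusb” output, a small helper can scan it for the Twinkie’s USB ID. The VID:PID below (18d1:500a) is the ID the Chromium EC Twinkie firmware reports to the best of my knowledge; verify it against your own firmware build before relying on it.

```python
# A small helper to confirm the homemade sniffer enumerated, by scanning
# `lsusb` output for Google's Twinkie ID. The VID:PID (18d1:500a) is taken
# from the Chromium EC sources; verify it against your firmware build.

import subprocess

TWINKIE_ID = "18d1:500a"

def find_twinkie(lsusb_output=None):
    """Return the matching lsusb line, or None if the sniffer is absent."""
    if lsusb_output is None:
        lsusb_output = subprocess.run(
            ["lsusb"], capture_output=True, text=True
        ).stdout
    for line in lsusb_output.splitlines():
        if TWINKIE_ID in line:
            return line.strip()
    return None

# Example against a captured lsusb line:
sample = "Bus 001 Device 009: ID 18d1:500a Google Inc. Twinkie\n"
print(find_twinkie(sample))
```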

    Conclusion

    This article provided a quick walkthrough of how to use the USB C-Thru board with an existing development kit to make your own USB Type-C example design. You can explore other Chromium USB Type-C design examples in a similar manner with a few wire-wrapped connections or a breadboard setup.

    You can order USB C-Thru ( www.goarks.com ) from https://www.crowdsupply.com/goarks/usb-c-thru and write to us for any clarifications at info@goarks.com



    Where is the Money in IoT?
    by Daniel Nenni on 03-12-2016 at 7:00 am

    As we all know, IoT (Internet of Things) is the “next big thing” across many different industries, including the fabless semiconductor ecosystem. The first recorded IoT blog on SemiWiki was in 2014, and currently we have 173 IoT blogs posted that have earned more than 600,000 views and counting. So yes, IoT is the next big thing, absolutely.

    One of the most frequent IoT questions we get is, “Where is the money in IoT?” and that is what we will be discussing in detail next week during the IoT Summit at the Santa Clara Convention Center on March 17-18.

    IoT Summit is a forum to present, highlight and discuss the latest products, applications, development, and business opportunities in IoT. The market for IoT, sensors, wearables, cloud, and related technologies is expanding at a phenomenal rate. The conference brings together researchers, developers, and practitioners from diverse fields, including scientists and engineers, research institutes, and industry. IoT Summit is the 4th event produced by SensorsCon and is sponsored by the International Society for Quality Electronic Design.

    Partial list of the topics explored in this conference:

    • IoT applications: wearables, health and fitness, automotive, energy, smart power grid, environmental monitoring, consumer, security, military, nautical, aeronautical and space, robotics and automation
    • IoT enabling technologies such as sensors, cloud, low power and energy harvesting, sensor networks, machine-type communication, resource-constrained networks, real-time systems, IoT data analytics, in situ processing, and embedded software.
    • IoT architectures such as things-centric, data-centric, service-centric architecture, platforms, cloud-based IoT, system security and manageability.
    • IoT services, applications, standards, and test-beds such as streaming data management and mining platforms, service middleware, open service platform, semantic service management, security and privacy-preserving protocols, design examples of smart services and applications, and IoT application support.

    You can see the conference program HERE. Most notable is the panel discussion on Friday that I am moderating:

    Where is the money in IoT?
    The market for the Internet of Things (IoT) is expanding at astonishing speed, and there is plenty of enthusiasm and euphoria in the industry about its enormous prospects. This panel discussion attempts to explore business opportunities and challenges in IoT. Internet-connected devices are being touted, from door knobs to smoke detectors, cars, baby diapers and health monitors, with new products coming to market in rapid succession. As IoT transforms the business landscape and creates enormous opportunities worldwide, many in Silicon Valley and elsewhere are wondering whether the excitement about IoT will be short-lived, as was witnessed in solar, nano, the smart power grid and elsewhere, or whether it is different, and why. What are the killer apps, challenges, and roadblocks? What does an explosively growing IoT market mean for my career? What and where are the job opportunities? Which area of IoT is the most promising for employment? Which areas are ripe for innovation and the development of products and services? What pitfalls should be avoided, and what opportunities pursued? How can I invest in IoT? The panel will attempt to answer these and other important questions as time allows.

    Panelists
    Craig Harper – CTO, Sysorex
    Jim Aralis – CTO, Microsemi
    Elliott Yama – VP, Apttus
    Steven Woo – VP, Rambus

    Panel Moderator:
    Daniel A. Nenni – Founder, SemiWiki

    I hope to see you there, it would be a pleasure to meet you!


    Design units come to faster Riviera-PRO release

    by Don Dingee on 03-11-2016 at 4:00 pm

    For the latest incremental improvements to its Riviera-PRO functional verification platform, Aldec has turned to streamlining random constraint performance. The new Riviera-PRO 2016.02 release also is now fully supported on Windows 10 and adds a new debugger tool. Continue reading “Design units come to faster Riviera-PRO release”


    Verdi Update and NVIDIA on Verification Compiler

    by Bernard Murphy on 03-11-2016 at 12:00 pm

    Synopsys hosted a lunch session on Thursday of DVCon. Michael Sanie of Synopsys opened the session, with a look back at the last DVCon where he had talked about Verification Compiler (VC) and extending the platform to Verification Continuum, which adds emulation and FPGA-based prototyping (HAPS – there was a very cool HAPS demo in the Expo Hall).

    Verification Compiler and Continuum
    Michael gave a high-level update on Synopsys verification:
    · Great momentum behind VC Formal – including a native integration with simulation and coverage analysis to drive root-cause analysis of problems detected in simulation
    · Lots of native integrations (examples include a unified compile flow for simulation and debug, and integration of the Siloti technology for smaller debug databases)
    · The VC apps library has grown beyond 300 entries (they stopped counting after 300)

    On SpyGlass integration, it sounds like the plan is to infuse key SpyGlass capabilities into VC and key VC capabilities into SpyGlass, which seems a reasonable, low-disruption approach to keep everyone happy. For example, the SpyGlass Verdi plugin combines the Verdi hierarchy tree, HDL navigation and waveform viewer with SpyGlass messages, shell and incremental schematic.

    He wrapped up with a mention of some of the work they are doing for automotive design and their recent acquisition of WinterLogic, which will be key to functional safety verification and ISO 26262 compliance testing.

    Verdi Update
    Vaishnav Gorur (Synopsys verification product marketing) gave an update on Verdi. Their most recent innovation is AMS debug, a pretty cool way to visualize together digital and analog waveforms along with corresponding RTL and Spice netlists. Other very nice features include:
    · Reverse debug – debug backwards in time from whatever point you stopped/paused. You can even reverse step in debug
    · An integrated protocol and memory protocol analyzer
    · A performance analyzer, where you can set thresholds to trigger debug around violating transactions
    · Integrated coverage management, linked to your verification / coverage plan
    · Accelerating power analysis leveraging the Siloti correlation technology (mapping RTL to gate simulations) and PT-PX

    NVIDIA experience with VC Formal and simulation
    Syed Suhaib, who is the formal verification project manager at NVIDIA, presented his experiences in combining the best of formal and simulation to get to verification closure. He started by saying that manual effort still dominates verification closure for reachability testing in cases which seem resistant to coverage.

    Especially at the tail-end of the project when schedules are tight, the last few coverage cases are hard to close, many because they are unreachable. These can be analyzed using formal techniques to assess if cases are truly unreachable (in which case you can define an exclusion) or to get hints on how to reach those cases.

    But setting up formal proofs case-by-case is very tedious, error-prone and impractical. A much better way starts with formal proof natively integrated with simulation. Proofs can be triggered in parallel runs from those last few cases. NVIDIA collaborated with Synopsys on development of this flow and estimated it has saved them 2000 engineering hours on their last project.
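The flow described above, classifying the last uncovered items in parallel formal runs, can be caricatured in a few lines of scripting. The formal engine here is a stub standing in for the real tool invocation, and the coverage-item names are invented.

```python
# Schematic of the flow described above: feed the last uncovered coverage
# items to a formal check, exclude proven-unreachable ones, and keep the
# rest for directed simulation. The formal engine here is a stub.

from concurrent.futures import ThreadPoolExecutor

def formal_check(item):
    """Stand-in for a formal reachability proof; True means reachable."""
    return item["reachable"]  # a real flow would invoke the formal tool

uncovered = [
    {"name": "fifo_full_and_flush", "reachable": False},
    {"name": "retry_after_timeout", "reachable": True},
]

# Proofs for independent coverage items can run in parallel.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(formal_check, uncovered))

exclusions = [i["name"] for i, r in zip(uncovered, results) if not r]
targets = [i["name"] for i, r in zip(uncovered, results) if r]
print("exclude:", exclusions)   # proven unreachable -> coverage exclusions
print("simulate:", targets)     # reachable -> directed tests / formal hints
```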

    To learn more about Verdi and especially Verdi-AMS, click HERE.

    More articles by Bernard…


    Intel Adds ‘Authenticate’ Multi-Factor Security Feature

    by Patrick Moorhead on 03-11-2016 at 7:00 am

    Last summer, Intel launched its 14nm, 6th Generation Core processors, code-named ‘Skylake’, alongside Microsoft’s new Windows 10 operating system. As things usually go in the enterprise world, the commercial 6th Generation Intel Core vPro processors weren’t too far behind, with increased security and manageability features included. We at Moor Insights have already talked extensively about some of the reasons businesses may want to upgrade to Intel’s new Skylake family of processors. We also wrote a new paper that covers new workplace trends and the performance improvements that technologies like Skylake deliver over the roughly five-year-old systems they’re replacing, which we believe are of increasing importance to millennials.

    Tom Garrison, VP and GM of Business Clients presenting at the Intel event

    The new 6th Generation Intel Core vPro processors were announced today in a webcast, and Intel says they are “explicitly designed with the modern workforce and Windows 10 in mind”. I agree with that as it relates to PCs, for sure. Coincidentally (OK, not really), last week Microsoft released a blog detailing how closely they worked with Intel on Skylake to maximize the performance of Windows 10 on Intel’s new platform. Microsoft goes into a lot of detail about business use cases for Windows and even mentions Intel’s business launch of Skylake, which is the 6th Generation vPro family. These new processors can be found in a multitude of very attractive new PCs, including the new Lenovo X1 series, Dell Inspiron 7000 series and HP’s (Hewlett-Packard) EliteBooks. These new notebooks bring a multitude of new features to the table beyond just being more attractive than the previous generations. They are almost all thinner, lighter and more powerful, and in many cases have nicer displays too. When you compare these against the clunkers installed in most businesses, particularly medium and large businesses, the difference is truly black and white.

    There is also one important reason why these new systems with Intel’s new 6th Generation Core vPro processors are so important to businesses, and that’s security. With these new processors and Windows 10, there is much better support for seamless and holistic security that fixes many of the issues out there today. One aspect of this better and more seamless security platform is Intel’s “Authenticate” solution. Intel Authenticate is a hardware-based, multi-factor authentication (MFA) solution that hardens the user’s PC to make it more difficult for unauthorized users to gain access. Intel Authenticate is designed to confirm the identity of a user by using a combination of up to three hardened factors at the same time.

    Intel likes to use three phrases to represent these three different potential factors: something you know, something you have and something you are. This means combining something you might know, like a personal PIN, with something you might have, like a smartphone, and something that you are, like a fingerprint or a facial image. IT administrators can decide how many and which factors are required for a user to authenticate with their PC, which allows companies to move away from the inherently insecure password. Intel Authenticate is not limited to Windows 10; it will also work on Windows 7 and Windows 8 systems, but the fastest and most secure solution is without a doubt Windows 10. Intel is making Authenticate available beyond Windows 10 because it understands that some companies would like the benefits of Skylake but their software simply may not allow them to upgrade to Windows 10 just yet. While I’d like to see enterprises move more quickly to Windows 10, and believe enterprises need to be more nimble, the reality is that some of them may not be able to move as quickly due to homegrown and misbehaved applications.
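The administrator-set factor policy can be modeled in a few lines. This is a toy illustration of the know/have/are concept, not Intel Authenticate’s actual API; all names here are invented.

```python
# Toy model of the multi-factor policy described above: IT decides which
# factor categories ("know", "have", "are") a login must present.
# This is an illustration of the concept, not Intel Authenticate's API.

REQUIRED_FACTORS = {"know", "have"}   # e.g., PIN + registered smartphone

def authenticate(presented: dict) -> bool:
    """presented maps factor category -> whether it verified in hardware."""
    verified = {cat for cat, ok in presented.items() if ok}
    return REQUIRED_FACTORS.issubset(verified)

print(authenticate({"know": True, "have": True}))   # True
print(authenticate({"know": True, "are": True}))    # False: no "have" factor
```

The point of doing this in hardened hardware rather than in a script like this one is that the factor verdicts themselves can’t be spoofed by malware on the host.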

    The company is also introducing updates to its Intel Unite collaboration platform, which it introduced last year. I wrote extensively about Intel Unite and its purpose in simplifying the meeting experience for small, medium and large businesses, and I have personally used it. With the update, Intel is expanding display capabilities thanks to the added graphical capability and performance of the Skylake processors. This means being able to share applications on screen as well as add an extended display. A built-in auto-disconnect feature has been added as well. I haven’t used it yet, but I hope it makes an already pretty good Intel Unite user experience even better. I’m also hoping we will see broader adoption as a way to improve meeting setup and interactions. Intel Unite currently supports collaboration across both Microsoft Lync and Skype for Business video conferencing services, and I would love to see it on something like Google Hangouts.

    Overall, Intel is announcing a pretty strong family of processors that offer a good set of reasons for companies to upgrade their current crop of business PCs. Compared to the five-year-old clunkers in the installed base, the new 6th Generation Intel Core vPro family of processors can enable a better experience for users with easier access and higher performance, while also enabling the highest levels of security the company has ever offered with Intel Authenticate. Many companies are simply unaware of how much added productivity they are missing out on, while also being more vulnerable to the onslaught of hackers that continue to break into companies on a daily basis. These attacks are no longer just about losing sensitive data; they are also costing companies hundreds of millions of dollars and costing certain CIOs and CEOs their jobs. Security on users’ PCs in the enterprise has become a much more serious topic, and I believe Intel is helping its partners address that concern with these new processors and security features.

    Later this month, we will be publishing a paper on security. Stay tuned.


    More from Moor Insights and Strategy


    Intel EUV Photoresist Progress and ASML High NA EUV

    by Scotten Jones on 03-10-2016 at 4:00 pm

    SPIE Days 3 and 4:

    Anna Lio of Intel presented EUV resists: What’s next?

    Intel wants to insert EUV at 7nm, but it has to be ready and economical. Critical Dimension Uniformity (CDU), Line Width Roughness (LWR) and edge placement/stochastics are all stable on 22nm, 14nm and 10nm pilot lines.
    Continue reading “Intel EUV Photoresist Progress and ASML High NA EUV”


    Mentor at DVCon – Visualize This

    by Bernard Murphy on 03-10-2016 at 12:00 pm

    Steve Bailey entertained us during lunch on Tuesday with a talk on debug and visualization in the Mentor platform. Steve is based in Colorado, so he had to spend the first part of his talk gloating about their Super Bowl win, but I guess he deserves that.

    On a more technical note, he showed us a familiar survey completed with the Wilson Group, which found that 37% of verification time, the largest part of the cycle, is spent in debug. So whatever vendors do to improve verification, it had better have a big impact on debug. For Mentor that means four things:
    · A single debug solution to cover the whole flow
    · Get to more data, faster, the first time
    · Root cause failure symptoms faster
    · Strong navigation of design and activity

    On a single solution to cover the whole flow, Visualizer Debug is the common debug interface whether you are using Vista Virtual Prototype, Questa Formal, Questa Simulation, Veloce Emulation or FPGA prototyping, or any combination of these. To get to the right data faster, you can now start with a minimum trace set, to run fast and small, and still be able to reconstruct untraced signals quickly. Mentor reports a 2X improvement in time to debug with Veloce and a 4X improvement in debug performance.
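The "minimum trace set" idea is worth unpacking: if you record only register outputs each cycle, any purely combinational signal can be recomputed on demand from those values plus the design's logic. A toy Python sketch of that principle follows; the signal names and the gate are hypothetical, and this is not Mentor's implementation.

```python
# Illustrative sketch of reconstructing an untraced signal from a minimum trace set.
# Only register outputs 'a' and 'b' were recorded each cycle.
trace = [
    {"a": 0, "b": 1},
    {"a": 1, "b": 1},
    {"a": 1, "b": 0},
]

def sum_out(regs):
    """Hypothetical combinational logic: an XOR gate fed by registers a and b."""
    return regs["a"] ^ regs["b"]

# Rebuild the untraced waveform only when the debugger asks for it,
# instead of paying to capture it during the run.
reconstructed = [sum_out(cycle) for cycle in trace]
print(reconstructed)  # [1, 0, 1]
```

Because combinational values are a pure function of the traced register state, trading capture volume for on-demand recomputation is what lets the run stay fast and small.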

    Mentor does a nice job on root-cause traceback and the standard navigation capabilities (hierarchy, RTL, schematic and activity cross-probing), but I’d like to focus on three features that I found to be leading-edge: testbench debug, CodeLink and post-silicon debug.


    I’m not sure what the stats are on typical time spent debugging a testbench versus time spent debugging the design, but I’m pretty sure testbench debug consumes a sizeable percentage. So you need just as much debug support for the testbench as you do for the design. But UVM and similar standards look a lot more like software than design, so you need the kinds of capabilities you would expect to find in a state-of-the-art software debugger: component hierarchy browsers, class browsers, object browsers and more. Then you want to be able to probe values and objects, view transactions in a waveform or a stripeview, and debug around the problem. Visualizer gives you all of this.


    CodeLink™ takes the next obvious step by allowing you to debug hardware and software together. Now you have a software debugger which can reach all the way down into the underlying hardware, which is important in all sorts of ways. In one talk I heard a user mention the need to cycle through firmware just to get out of reset. Power management controllers straddle the line between hardware and software. Then there’s multi-threading and cache coherency; it has really become impossible to draw a sharp line between hardware and software. Debug platforms like CodeLink have become essential to manage this blurring of domains.


    Finally there’s silicon debug. Certus™ is most commonly used with FPGA designs and FPGA prototypes. I find the prototyping use most interesting, since it completes the range of engines for functionally modeling a design. FPGA prototyping is enormously valuable in providing software developers a platform which can run at speeds high enough to support early software development. But debugging prototype hardware is not easy; there’s a lot less visibility into internal nodes than you would have in simulation or emulation. Certus provides a way to instrument your FPGA prototype so you can capture very deep internal traces, read out through the JTAG port, for review in debug.
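Conceptually, this kind of instrumentation amounts to sampling selected internal signals into a fixed-depth circular buffer that freezes on a trigger and is later read out (over JTAG on real hardware). Below is a toy Python model of that idea; the sample values and trigger condition are hypothetical, and this is not Certus's actual behavior.

```python
from collections import deque

# Toy model of on-chip trace instrumentation: sample a signal into a
# fixed-depth circular buffer, then stop on a trigger so the window of
# samples around the event survives for readout.

DEPTH = 4
buffer = deque(maxlen=DEPTH)  # oldest samples fall out automatically

samples = [3, 1, 4, 1, 5, 9, 2]
TRIGGER = 9  # hypothetical trigger: signal value equals 9

for value in samples:
    buffer.append(value)
    if value == TRIGGER:
        break  # trigger hit: freeze capture; readout would happen over JTAG

print(list(buffer))  # the last DEPTH samples up to and including the trigger
```

The circular buffer is the key trade-off: a small, fixed amount of on-chip memory buys continuous capture, at the cost of only ever keeping the most recent window of history.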

    You can learn more about Visualizer, CodeLink and Certus HERE.

    More articles by Bernard…