

ARM and embedded SIM
by Bernard Murphy on 03-06-2018 at 7:00 am

It seems that a hot ticket at Mobile World Congress this year was embedded SIM announcements. As a reminder of why this space is hot, cellular communication for provisioning and data uploads is a very real option for many IoT devices. In agricultural, smart energy and asset tracking applications for example, near-range options like Wi-Fi and Bluetooth obviously aren’t viable. And since IoT is scaling up fast (GSMA suggests a $1.8T opportunity for operators by 2026) and conventional SIM cards are hopelessly unscalable as a way to manage credentials and keys, embedded SIM solutions, updatable over the air (OTA), look like the way to go.

ARM just announced their Kigen (pronounced Keegan or perhaps Keygen) product in this space, building on their acquisition of Simulity, a provider of operating systems for SIMs along with related server support. They call this iSIM, which they position as more advanced than MFF2 (chip SIM), though others seem to consider eSIM to be more in line with what ARM is promoting. Whatever – in either case the idea is a device soldered into the system (rather than plugged into a SIM socket) which can be provisioned and updated OTA.

This being ARM, the hardware can be built around their MCUs and Cordio radio IP with iSIM providing the SIM function, typically based on a secure enclave through CryptoIsland and the Simulity OS software (Kigen OS+). Also, following the philosophy of their PSA architecture, there is a server component to the solution, which I believe is also from the Simulity acquisition. This is designed to meet the needs of mobile network operators (MNOs), IoT service providers, OEM and module makers and enterprises that will be using these solutions.

From ARM’s perspective, for enterprises their iSIM solution enables in-field flexibility, a longer lifespan of deployment and lower-cost devices. For operators it offers the ability to scale IoT deployments while maintaining the integrity of their networks. And for OEMs and device/system-makers it offers the ability to serve global markets with local provisioning.

ARM was quite clear in their pre-announcement discussion that customers are free to mix and match hardware and software in their total eSIM solutions, probably wise since this is an emerging and therefore evolving space. That said, it is interesting to compare a turnkey ARM solution with other options. One thing ARM does well is to provide for the total system – from cloud to edge device. Adding iSIM brings in network credentialing. And their strong position in edge hardware and software, particularly given their experience in security, is unarguable. So if the shortest path to a complete, widely-supported solution is what you need, it would be difficult to fault ARM’s offering.

Who wouldn’t want that? Many would, but one reason there may be a healthy market for alternatives is security through diversity. The eSIM/iSIM authenticates whoever wants access rights, so it’s a rather critical part of the chain. When it comes to security, we’re starting to realize that there’s no such thing as too many defensive walls. The technical walls are clear – secure zones, encryption and so on – but a couple of methods have nothing to do with technology. Security through obscurity (don’t publicize details of your security design, providers, etc.), for all it is derided, is still widely practiced and, as long as it’s not your only line of defense, rightly so. Why make it easy for hackers to know where to start?

Security through diversity takes the view that you shouldn’t use the same solutions that everyone else uses. If you’re a hacker and you know there are 3 widely-used security solutions in the market, where will you focus your energy? Probably on the most popular solution; who has time to work on three? So another layer of defense is to use one of the less widely-adopted solutions. If they meet all other needs, why not?

All makes for an interesting world and healthy competition. You can learn more about Kigen and iSIM HERE.



ClioSoft and SemiWiki Winning
by Daniel Nenni on 03-05-2018 at 7:00 am

There is a bit of nostalgia here since ClioSoft was one of the first companies that we (SemiWiki) worked with 7 years ago. Back then it was hard for an emerging EDA company to get noticed by the mainstream media, much less collaborate with them. Since then we have published 80 blogs with ClioSoft that have garnered more than 350,000 views. While we cannot take full credit for ClioSoft’s huge success, their class and their ability to collaborate with media, partners, and customers are the reason for it, and we are absolutely proud to be part of that.

As an example, take a look at the SemiWiki ClioSoft landing page and see the top blogs over the last seven years, the company history and the CEO interview. And don’t forget ClioSoft does one of the best DAC parties that I never miss so I hope to see you at #55DAC in San Francisco.

ClioSoft Closes 2017 with Sustained Growth and Profitability for its 18th Year


Introduction of designHUB, the customizable ecosystem collaboration platform contributed to growth along with the SOS7 design management platform

FREMONT, Calif., February 28, 2018— ClioSoft®, Inc., a leader in system-on-chip (SoC) design data and intellectual property (IP) management solutions for the semiconductor design industry, today announced that the company has achieved 18 years of record bookings and revenue, while maintaining profitability. A 25% increase in new bookings in 2017 came in part from the adoption of designHUB®, the next-generation IP reuse ecosystem created primarily to enable enterprises to efficiently utilize their existing design resources. Bookings also rose thanks to an increase in sales of ClioSoft’s SOS7® design-management platform by new and existing customers. With 30 new customer accounts in 2017, ClioSoft continued its steady growth as a market leader in SoC design and IP management software.

In 2017, ClioSoft announced the designHUB ecosystem platform, built on the concept that untapped ideas, design expertise or any intellectual property – including semiconductor IPs – can be shared seamlessly across a company and leveraged to produce remarkable results. Using designHUB, designers can search and compare IPs across geographical or business silos of a company to select the most suitable IP, and then integrate it into their design. Design teams can use the designHUB ecosystem to efficiently collaborate and manage their design projects and easily package their designs as IPs within a secure environment. The designHUB platform works on top of most commonly used data management systems such as SOS7, Perforce, Git, and Subversion.

“We have proven that design reuse can be a reality within a company,” said Srinath Anantharaman, founder and CEO of ClioSoft. “designHUB has been positively received by design teams over the past year. By using the concept of crowdsourcing, designHUB bridges the gap between the IP developer and the IP user, all within a single platform. The adoption of the designHUB ecosystem and the SOS7 platform continues our vision of delivering best-in-class SoC and IP management solutions that enable our customers to reuse designs at an enterprise level and deliver their SoCs on time. SOS7 continues to be the only design management platform for all types of designs with integration to tools from all major EDA tool providers.”

About SOS7 Platform:
ClioSoft’s SOS7 design-management platform empowers single- or multi-site design teams to collaborate efficiently on complex analog, digital, RF and mixed-signal designs from concept to GDSII within a secure design environment. Tight integration with tools from various EDA vendors provides a cohesive design environment for all types of digital, analog, RF and mixed-signal designs. In addition to enabling design engineers to manage design data and tool features from the same cockpit, SOS7 provides integrated revision control, release and derivative management, and an issue-tracking interface to commonly used bug-tracking systems. Using SOS7 facilitates easy design handoffs between design teams and mitigates the possibility of design re-spins.

About designHUB:
The designHUB platform provides a collaborative IP reuse ecosystem for enterprises. With built-in analytics and collaborative tools, designHUB not only improves IP reuse by providing an easy-to-use workflow for designers to leverage their internal resources but it also enables design teams to collaborate efficiently to develop SoCs faster. To enable designers to be more productive, designHUB tracks and collates all activities for design projects an engineer may be working on or has been involved in and displays the notifications and tasks assigned in a dashboard for easy review.

About ClioSoft:
ClioSoft is the pioneer and leading developer of enterprise system-on-chip (SoC) design configuration and enterprise IP management solutions for the semiconductor industry. The company provides two unique platforms that enable IP design management and reuse. The SOS7 platform is the only design management solution for multi-site design collaboration for all types of designs – analog, digital, RF and mixed-signal – and the designHUB platform provides a collaborative IP reuse ecosystem for enterprises. ClioSoft customers include the top 20 semiconductor companies worldwide. The company is headquartered in Fremont, CA with sales offices and distributors in the United States, United Kingdom, Israel, Europe, India, China, Taiwan, South Korea and Japan. For more information visit www.cliosoft.com

Also Read

IoT SoCs Demand Good Data Management and Design Collaboration

ClioSoft’s designHUB Debut Well Received

The Official SemiWiki #54DAC Party Guide!



SPIE Advanced Lithography 2018 – EUV Status
by Scotten Jones on 03-05-2018 at 7:00 am

This year the Advanced Lithography Conference felt very different to me than the last couple of years. I think it was Chris Mack who proclaimed it the year of Stochastics. EUV has dominated the conference for the last several years but in the past the conversation has been mostly centered on the systems, system power and uptime.

I will be writing up more detailed blogs from interviews with ASML and imec and their presentations, but I wanted to first present some overall impressions.

ASML is now delivering systems with acceptable power (at least for initial use) and uptime is improving, although not there yet.

The conversation about EUV has now shifted into the practical details of the process parameters and there are lots of details to work through!

Perhaps the key issues remaining with EUV relate to dose and line edge roughness (LER). Dose is the number of photons delivered multiplied by the photon energy. Because EUV photons are so much more energetic than deep UV (DUV) photons, there are roughly 18 times fewer photons in EUV for the same dose. The small number of photons leads to shot noise at low doses, but this is just the tip of the iceberg.
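To put rough numbers on that, here is a back-of-the-envelope sketch (my own illustration, not conference data). It assumes the ~18x figure compares 13.5nm EUV with 248nm KrF light (193nm ArF would give roughly 14x), and uses an arbitrary 16nm x 16nm pixel to show why the photon count per feature gets uncomfortably small.

```python
# Rough illustration: photons delivered per unit dose for EUV vs. DUV, and the
# photon count on a small feature at a typical dose (all assumptions, see above).
HC_EV_NM = 1239.84          # Planck constant x speed of light, in eV*nm
EV_TO_J = 1.602e-19

def photons_per_cm2(dose_mj_cm2, wavelength_nm):
    photon_energy_j = (HC_EV_NM / wavelength_nm) * EV_TO_J
    return (dose_mj_cm2 * 1e-3) / photon_energy_j

dose = 20.0  # mJ/cm^2
euv = photons_per_cm2(dose, 13.5)
duv = photons_per_cm2(dose, 248.0)
print(f"EUV photons/cm^2: {euv:.2e}, DUV photons/cm^2: {duv:.2e}")
print(f"DUV/EUV photon ratio: {duv / euv:.1f}")   # ~18x

# Shot noise on an illustrative 16nm x 16nm pixel at this dose
pixel_area_cm2 = (16e-7) ** 2
n = euv * pixel_area_cm2
print(f"EUV photons per 16nm pixel: {n:.0f}, relative shot noise ~ {100 / n**0.5:.1f}%")
```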

The random defects from EUV exposure can lead to micro bridges and open lines for line/space pairs and missing or bridged contacts in dense contact arrays. In his Keynote address Yan Borodovsky noted that complex designs can have up to one billion vias and a single bad via can kill the circuit. Under this condition even a five-sigma process will result in zero yield!
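A quick back-of-the-envelope check of that remark (illustrative numbers, not Borodovsky's): with a billion vias, even a one-sided five-sigma per-via failure probability leaves essentially no working chips.

```python
# Yield of a chip with 1e9 vias when each via fails at the one-sided
# five-sigma probability level (a simple independent-failure model).
import math

n_vias = 1_000_000_000
p_fail = 0.5 * math.erfc(5 / math.sqrt(2))     # one-sided 5-sigma tail, ~2.9e-7

# Probability that every via is good; use logs to avoid precision loss
log_yield = n_vias * math.log1p(-p_fail)
print(f"per-via failure probability: {p_fail:.2e}")
print(f"expected failing vias per chip: {n_vias * p_fail:.0f}")        # ~287
print(f"chip yield: exp({log_yield:.0f}) ~ {math.exp(log_yield):.1e}")  # effectively zero
```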

At Litho Vision, held the Sunday before SPIE, John Biafore of KLA-Tencor noted that current EUV photoresists only absorb about 20% of the incident EUV photons. Ideally a photoresist should absorb around 45% of the photons to maximize sensitivity while ensuring even exposure down through the depth of the photoresist. This presents an opportunity to roughly double the sensitivity of the photoresist, doubling the effective dose for a given delivered dose. Of course, if this were easy it would already be done. The photoresist needs to absorb EUV photons and generate secondary electrons with the correct characteristics to trigger photo events.

In Patrick Naulleau’s talk “The implications of shot noise on EUV patterning” he made several interesting observations:

LER goes up exponentially at lower doses, and even at high dose it is never zero, with around 1.5nm as a minimum.

For chemically amplified photoresists there are five reaction variables that determine LER:


  • Photon shot noise (RV1)
  • Photons generate electrons that generate photo acids (RV2)
  • Acids are only generated where you have PAGs (RV3)
  • Reaction diffusion (RV4)
  • Protection groups (RV5)

The relative contributions of the different mechanisms to line width roughness (LWR) for a 16nm feature are:

    • Photon noise = 2nm
    • Acid generation = 1.2nm
    • PAG = 0.6nm
    • Quencher = 1.9nm
    • Protecting groups =0.1nm

The interesting conclusion from this is that material-related effects are more important than simple photon shot noise!
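To see why, here is a hedged sketch that treats the five contributions as independent and adds them in quadrature (a common simplification, not necessarily Naulleau's exact method): the combined material terms come out larger than the photon-noise term alone.

```python
# Combine the quoted LWR contributions in quadrature (assumed independent).
import math

contributions_nm = {
    "photon noise": 2.0,
    "acid generation": 1.2,
    "PAG": 0.6,
    "quencher": 1.9,
    "protecting groups": 0.1,
}

photon = contributions_nm["photon noise"]
material = math.sqrt(sum(v**2 for k, v in contributions_nm.items() if k != "photon noise"))
total = math.sqrt(sum(v**2 for v in contributions_nm.values()))

print(f"photon-noise term:       {photon:.2f} nm")
print(f"combined material terms: {material:.2f} nm")   # ~2.3 nm, larger than photon noise
print(f"total LWR estimate:      {total:.2f} nm")      # ~3.1 nm
```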

There continues to be a lot of work done on post-exposure smoothing of photoresist, and while there are promising results for specific features, broad applicability still needs work.

Self-aligned blocks (SAB) and fully self-aligned vias (FSAV) are another interesting area that could in theory reduce sensitivity to LER/LWR, but again the ability to broadly create all the required features needs work.

In “EUV photolithography: Resist progress and challenges”, presented by JSR and Cornell, it was noted that the current Chemically Amplified Photoresist (CAR) used for DUV is reaching its limits. CARs depend on polymers, and they are big – on the order of the feature sizes. Adding metal sensitizers can make CARs much better. Photoresists can also be made using metal nanoparticles, and there is promise here.

Another problem with photoresists for EUV, or even DUV at the small feature sizes now being printed, is that to prevent pattern collapse resist aspect ratios have to be less than ~2:1, and that yields very thin photoresist layers for small features.

In “Introduction of pre-etch deposition technique in EUV patterning” from GLOBALFOUNDRIES and IBM, the idea of depositing a polymer in etching systems before etch was discussed. Anisotropic dry etches are typically a balance of polymer deposition and etching. In this technique a “top heavy” polymer is deposited prior to etching. The polymer deposits more heavily on the top of the photoresist than it does down in the trenches. A descum is then used to clear out the trench bottoms, and the net photoresist thickness is greater than it was prior to the deposition and descum. There was also some discussion in another paper of sputtering silicon to harden the photoresist.

There were many other papers along similar lines discussing various photolithography and etch optimizations. There is clearly a lot of opportunity for improvement but also a lot of work to be done, and that brings me to my biggest concern about EUV.

    The throughput of an EUV exposure system is given by:

Time per wafer = wafer overhead time + steps per wafer x (exposure time per field + stepping time)

ASML announced a few months ago that they had achieved 125 wafers per hour (wph) at a 20mJ/cm2 dose with a 250-watt source, 96 steps per wafer and no pellicle. At SPIE they announced 140wph at a 20mJ/cm2 dose with a 246-watt source, 96 steps per wafer and no pellicle. Presumably the improvement is from improved wafer overhead time and/or stepping time. ASML continues to work on improving system throughput by speeding up the system and increasing source power.

    These are impressive accomplishments, but:

    • Logic steps are around 110
• It may be possible to expose contacts and vias without a pellicle, but a pellicle will be required for metal layers, and pellicles currently only transmit 83% of the EUV light.
    • Photoresist doses are likely to be higher than 20mJ/cm2 at least initially.

    At SPIE I presented a simple throughput model calculation based on the 125wph ASML results, see figure 1.



    Figure 1. EUV throughput.
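For readers who want to play with the numbers, here is a small model in the spirit of the formula above. The overhead and stepping times are assumptions chosen so the baseline reproduces the quoted 125wph; they are not ASML's numbers or the inputs behind figure 1. It simply shows how more fields, a pellicle at 83% transmission and a higher dose pull throughput down.

```python
# Illustrative EUV throughput model; overhead/stepping values are assumed.
def wafers_per_hour(dose_mj, fields, source_w, pellicle_t=1.0,
                    overhead_s=10.0, step_s=0.10,
                    ref_exposure_s=0.096, ref_dose=20.0, ref_power=250.0):
    # Exposure time per field scales with dose and inversely with delivered power
    exposure_s = ref_exposure_s * (dose_mj / ref_dose) * (ref_power / (source_w * pellicle_t))
    wafer_time_s = overhead_s + fields * (exposure_s + step_s)
    return 3600.0 / wafer_time_s

print(f"baseline (20mJ, 96 fields, no pellicle): {wafers_per_hour(20, 96, 250):.0f} wph")   # ~125
print(f"logic-like (30mJ, 110 fields, pellicle): "
      f"{wafers_per_hour(30, 110, 250, pellicle_t=0.83):.0f} wph")
```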

    Lithographers I have talked to are predicting 30mJ/cm2 as a usable dose for 7nm logic. Greg McIntyre (director of lithography at imec) mentioned to me that he thought initial doses will be 40 or 50mJ/cm2 at 7nm or even higher.

    There is a rule of thumb that cutting feature sizes in half requires 8x the dose.

    As I discussed at ISS (my presentation is written up here) I expect 7nm logic to begin using EUV early next year (2019). 5nm and 6nm logic from Samsung and 5nm logic from TSMC are both due to enter preproduction in 2019 as well.

    I think EUV is close to being ready for 7nm logic and that as we get high-volume manufacturing experience next year we will work though a lot of issues and likely bring doses for 7nm down to around 30mJ/cm2. The key problem in my view is that at 5nm we need doses of around 50mJ/cm2 or lower to achieve acceptable throughput on EUV tools. If the doses are 70mJ/cm2 or higher we won’t have enough EUV capacity to support the needed ramp. There is very little time to solve this problem!



    Analog-to-Digital Converter IP for IoT Designs
    by Tom Dillinger on 03-05-2018 at 6:00 am

    The projected revenue growth rate for IoT electronics remains strong, across a wide range of applications – e.g., visual object identification, voice recognition, machine automation, health and fitness applications, environmental and energy controls. A key component of these designs is the analog-to-digital conversion (ADC) functionality between the associated sensor(s) and the computational logic.

    Synopsys has recently released an online webinar that provides insights into the architectural requirements and technical challenges for selecting ADC IP for IoT designs. Manuel Mota, Product Marketing Manager for Analog and Wireless IP, discusses the engineering tradeoffs for analog sensor data acquisition when developing an IoT solution.

    IoT Electronics, Process Selection, and ADC Requirements

    Manuel described a product differentiation in the IoT space, spanning from endpoint devices to sensor hubs.

    The endpoints impose very strict power dissipation limits, with implications on power management features required of the ADC. The cost and power constraints are driving integration of diverse IP. “55nm is the current sweet spot for IoT electronics. With the introduction of non-volatile memory at 40nm, that will soon emerge as the preferred node.”, Manuel indicated. At the other end of the spectrum, applications requiring high-performance computation are pursuing more aggressive process nodes – yet, there is still a requirement to integrate ADC IP for these sensor hubs.

    ADC Architecture

    There are several ADC architectures widely used – e.g., pipelined, flash comparators, successive approximation register (SAR), sigma-delta converters. “The SAR architecture is the appropriate choice for IoT designs, with the right balance between low power and high bandwidth.”, Manuel highlighted.

    The power dissipation of an SAR ADC scales with the sampling rate. For many IoT applications, a moderate data conversion rate is sufficient, enabling power savings.

A block diagram of an SAR ADC is depicted below. The sampling of the sensor voltage input to the ADC is followed by a succession of n comparisons for an n-bit register output. The control logic sequences through n cycles, where each bit of the SAR register is either set to ‘1’ or ‘0’, from MSB to LSB.


    (From “Understanding SAR ADC’s”, Maxim Integrated Products, Inc.)

The key to the SAR is the internal n-bit digital-to-analog converter (DAC) implementation. For the first cycle, the SAR MSB register bit is set to ‘1’, all bits having previously been cleared. A corresponding voltage (Vref / 2) provided by the DAC is compared against the sampled sensor input voltage. Based on the comparator output, the MSB either remains at ‘1’ (Vin > Vref/2) or is reset to ‘0’ (Vin < Vref/2). The next cycle evaluates the (MSB-1) bit position in a similar manner, repeating the Vin versus DAC output voltage comparison down to the LSB. A simple 4-bit comparison sequence is shown below.
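A minimal sketch of that bisection loop follows (an idealized behavioral model, not the Synopsys IP): each bit from MSB to LSB is tentatively set, the trial DAC voltage is compared against the sampled input, and the bit is kept or cleared accordingly.

```python
# Idealized n-bit SAR conversion: binary search from MSB to LSB.
def sar_convert(vin, vref, n_bits=4):
    code = 0
    for bit in range(n_bits - 1, -1, -1):        # MSB .. LSB
        trial = code | (1 << bit)                # tentatively set this bit
        vdac = vref * trial / (2 ** n_bits)      # ideal internal DAC output
        if vin >= vdac:                          # comparator decision
            code = trial                         # keep the bit at '1'
        # otherwise the bit is left at '0'
    return code

# Example: 4-bit conversion of 0.6*Vref gives code 9 (binary 1001)
print(bin(sar_convert(vin=0.6, vref=1.0, n_bits=4)))
```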

    The critical feature of the SAR is the accuracy of the internal DAC. A simplified illustration is depicted below, illustrating the scaled capacitive array used by the DAC + comparator.

    (From “ADC Architectures”, Analog Devices Tutorial MT-021)

    During sampling, the sensor voltage is connected to all capacitors, charging the total capacitance to Vin. After acquisition, individual capacitors are toggled successively for each bit position, from MSB down to LSB. The scaling of each capacitance value provides a coupling event whose magnitude is a binary fraction of Vref, from 1/2 down to 1/(2**n). The comparator determines whether the coupled transition is greater or less than the original stored charge associated with the Vin sample. After the comparator settling time, the SAR register bit value either remains at ‘1’ or is reset to ‘0’ before evaluating the next bit position.

    There are manufacturing variations associated with fabrication of the DAC capacitor array. The SAR control logic will likely include a calibration mode to provide error compensation. (For a novel implementation of SAR compensation, see reference [1].)

    The SAR ADC IP will also likely integrate a low-dropout (LDO) voltage regulator, to isolate the IP from SoC supply noise.

    Manuel described additional power states implemented in the ADC, consistent with the focus on power dissipation reduction. The figure below illustrates power states ranging from simple control logic gating to a full deep sleep state, with a sample rate illustration including recalibration after power gating recovery.

    IoT ADC Challenges

    Manuel described some of the engineering tradeoffs for interfacing sensor(s) to the ADC.

    • sensor output resistance

    High sensor output resistance results in a longer time constant to charge the ADC input capacitance – either a longer sample time is required, or buffering will need to be inserted between sensor and the ADC input, with impacts on IP area and power.

    • sensor voltage excursion

    Commonly, the Vref of the ADC is programmed to match the Vin voltage range of the sensor. However, a limited swing sensor may necessitate inserting an amplifier to achieve the desired resolution, with impacts to area, power, and Vin noise magnitude.

    • sensor noise voltage

    Speaking of sensor input noise, the IoT design implementation may opt to pursue oversampling to average out the input noise, if the sensor data acquisition rate allows.
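Two of these tradeoffs can be made concrete with a short sketch (all values are illustrative assumptions, not from the webinar): the minimum acquisition time needed for the sampled voltage to settle within 1/2 LSB given the sensor's output resistance, and the roughly sqrt(M) reduction of uncorrelated sensor noise from averaging M samples.

```python
# Illustrative numbers only: RC settling time vs. sensor resistance, and
# noise averaging from oversampling.
import math, random, statistics

def min_sample_time(r_sensor_ohm, c_adc_farad, n_bits):
    tau = r_sensor_ohm * c_adc_farad
    return tau * (n_bits + 1) * math.log(2)        # settle to within 1/2 LSB

c_in = 5e-12                                       # assumed 5 pF ADC sampling capacitance
for r in (1e3, 100e3, 1e6):                        # 1 kohm to 1 Mohm sensor output resistance
    t_ns = min_sample_time(r, c_in, n_bits=12) * 1e9
    print(f"Rsensor = {r:9.0f} ohm -> min sample time ~ {t_ns:8.1f} ns")

random.seed(0)
true_v, noise_rms = 0.500, 0.010                   # 10 mV RMS of white sensor noise
for m in (1, 4, 16, 64):                           # oversampling ratio
    readings = [statistics.mean(true_v + random.gauss(0, noise_rms) for _ in range(m))
                for _ in range(2000)]
    print(f"oversample x{m:2d}: residual noise ~ {statistics.stdev(readings) * 1e3:.2f} mV RMS")
```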

    ADC integration options

    There are challenges associated with both the electrical and architectural integration of the ADC. Electrically, the ADC IP should be isolated from other noise sources on the SoC. An internal LDO provides supply isolation. Manuel also recommended paying attention to ground and substrate noise sources – one option was shifting the sample interval from the IoT SoC system clock, to minimize exposure to the ground bounce from digital switching activity.

    Architecturally, the IoT designer is faced with tradeoffs on SoC package pins, the number of sensors to observe, and (potentially) the need for very high-speed acquisition. Several topologies for sensor multiplexing and dual ADC’s for sample rate interleaving were shown.

    There is an architectural design tradeoff on the ADC output, in terms of the internal bus traffic inside the SoC – the design may need to incorporate a local FIFO to store, then send burst data from the sensor(s).

    Synopsys offers a rich set of (silicon-proven) SAR ADC IP, with the requisite performance and power state support. I would encourage those interested in learning more about IoT sensor + ADC design requirements and challenges to view Manuel’s webinar, available at this link. (A simple registration process is required.)

    -chipguy



    MWC 2018: The Anonymous Car
    by Roger C. Lanctot on 03-04-2018 at 12:00 pm

    European regulators are poised to once again shift European car makers to the back of the queue when it comes to realizing the value of connected cars. While the rest of the world is obsessively pursuing the creation of autonomous vehicles, the European Commission with the help of the GSMA is working toward the creation of the anonymous vehicle (with apologies to my Strategy Analytics colleague, Chris Taylor, who coined the expression in a cab with me in Barcelona this week).

    There are two reasons for this: eCall and GDPR
The first step on the path to the anonymous car arrives next month as the European eCall – for emergency call – mandate takes effect, requiring all new type-approved vehicles to include an embedded telecommunication device that automatically calls the nearest public safety answering point in the case of a crash and airbag deployment. There is no question that this is a good idea, and it may well save as many as 1,200 lives annually by speeding first responders to the scenes of crashes, but the road to ruin is paved with good intentions, as we know.

During the decade-long process of bringing the eCall mandate to the market, car companies and carriers expressed concern at having so many embedded devices pinging the network – and thereby costing car companies pennies/month/car – while generating little or no real revenue. The early visions of the technology suggested that the same device being used for eCall could also deliver non-emergency services and connectivity, satisfying wireless carrier concerns while offering a value-add proposition for car makers.

    At the time, car makers were being dragged kicking and screaming into the business of connecting cars, so the idea of added-value services – with complicated and expensive (for consumers) subscriptions to manage – was more or less a non-starter. Further, car companies didn’t want to pay the pennies/month/car necessary to enable the eCall devices to regularly ping the network – even when not in use.

    So the GSMA, in its infinite wisdom, created the “dormant SIM.” The dormant SIM allowed for the embedding of an eCall device that would essentially lie dormant inside a vehicle until the moment it was needed to alert authorities to a crash.

    The folly of this solution is clear in retrospect. Today, car makers are racing to connect their cars and collect the data being broadcast from their vehicles. Pennies/car/month suddenly seems like a trivial concern and the dormant SIM an anachronism in the context of leveraging vehicle connections to facilitate automated driving and dramatically reduce highway fatalities.

    But the folly lives on and many car makers will no doubt deploy dormant SIMs. The dormant SIM is a wireless device that, in effect, may never do anything while representing an appendix-like added cost to the consumer. Worse yet, since it is dormant neither the consumer nor the car maker or dealer will have any way to determine if it is capable of functioning properly should the unfortunate day arrive when it is needed.

    The second step on the path to the anonymous car is the General Data Protection Regulation which becomes enforceable starting on May 25th, 2018. This regulation, intended to protect consumers from misuse or abuse of their personal data has thrown a curve ball into the connected car business forcing car companies to reconsider their data collection strategies at the precise moment that data collection is becoming an essential task on the evolutionary path to creating autonomous vehicles.

The GDPR arrives just as car makers are commencing the introduction of open APIs for data collection and sharing, and business models are emerging for aggregating and productizing data in ways that are likely to ultimately subsidize the very vehicle connections expected to soon save lives – and time and money and emissions. The industry is just coming to grips with the unintended consequences of GDPR, which include, for instance, speech recognition companies such as Nuance Communications accelerating their shift from cloud-based recognition systems to embedded.

    Cars are, in essence, browsers on wheels. As such, the potential value of location information along with the value of cloud-based speech recognition are equivalent to the billions of dollars in revenue Alphabet extracts from online advertising annually.

GDPR throws a spanner in the works, complicating an already daunting process of data collection and extraction from vehicles – with the agreed participation of the consumer – in the interest of monetizing vehicle data connections. If cloud-based speech recognition, which has the capacity to enable vehicle-based search, is impaired, the process of justifying expensive vehicle connections will be slowed as well.

    The irony is that car companies and carriers had finally made some progress on mitigating the costs of roaming and simplifying the process of reprovisioning cars between carriers, with the help of the GSMA’s eUICC protocol. Just as car connections are becoming more manageable, data collection is becoming more problematic.

    There is hope. BMW is attempting to show the way with its opt-in based vehicle data management platform BMW CarData. It remains to be seen how quickly consumers will embrace this approach – via which BMW acts as a trusted neutral data broker between service providers and BMW owners. BMW may yet fall afoul of GDPR, but the program has clearly been launched in response to GDPR, anticipating its requirements for approval of the consumer and transparency.

    I, for one, feel that car makers should be REQUIRED, not discouraged, to collect vehicle data. My expectation is that my car maker will be obliged to tell me – in a proactive way – when my vehicle is misbehaving. It could very well save my life.

    GDPR gives car makers an excuse, themselves, to opt out of data collection. Such a prospect will be deleterious to the industry and an abrogation of fiduciary responsibility. Car makers ARE obligated to collect vehicle data, no matter how difficult regulators make it.

    The battle continues. Both the European Commission and GSMA are poised to step in with regulations and standards, respectively, governing autonomous driving and cybersecurity. Let’s hope they do better in these two areas or, at least, let’s hope they do less.

    More from Roger on SemiWiki



    Second Line of Defense for Cybersecurity: Blockchain
    by Ahmed Banafa on 03-04-2018 at 7:00 am

In the first part we covered AI as the first line of defense for cybersecurity; the goal was to keep the cyber-criminals at bay. But in case they manage to get in and infiltrate the network, we need to initiate the second line of defense: #Blockchain. Cybercrime and cybersecurity attacks hardly seem to be out of the news these days, and the threat is growing globally. Nobody would appear immune to malicious and offensive acts targeting computer networks, infrastructures and personal computer devices. Firms clearly must invest to stay resilient. Gauging the exact size of cybercrime and putting a precise US dollar value on it is nonetheless tricky. But one thing we can be sure about is that the number is big and probably larger than the statistics reveal.

    Read: First Line of Defense for Cybersecurity: AI

    The global figure for cyber breaches had been put at around $200 billion annually.

    New blockchain platforms are stepping up to address security concerns in the face of recent breaches. Since these platforms are not controlled by a singular entity, they can help ease the concerns created by a spree of recent breach disclosures. Services built on top of #blockchain have the potential to inspire renewed trust due to the transparency built into the technology.

    Developments in blockchain have expanded beyond recordkeeping and cryptocurrencies. The integration of smart contract development in blockchain platforms has ushered in a wider set of applications, including cybersecurity.

    By using blockchain, transaction details are kept both transparent and secure. Blockchain’s decentralized and distributed network also helps businesses to avoid a single point of failure, making it difficult for malicious parties to steal or tamper with business data.

Transactions in the blockchain can be audited and traced. In addition, public blockchains rely on a distributed network to run, thus eliminating a single point of control. For attackers, it is much more difficult to attack a large number of peers distributed globally as opposed to a centralized data center.

    Implementing Blockchain in Cybersecurity
    Since a blockchain system is protected with the help of ledgers and cryptographic keys, attacking and manipulating it becomes extremely difficult. Blockchain decentralizes the systems by distributing ledger data on several systems rather than storing them on one single network. This allows the technology to focus on gathering data rather than worrying about any data being stolen. Thus, decentralization has led to an improved efficiency in blockchain-operated systems.

For a blockchain system to be penetrated, the attacker must intrude into every system on the network to manipulate the data that is stored there. The number of systems on a network can be in the millions. Since editing rights are only given to those who require them, the attacker won’t gain the right to edit and manipulate the data even after hacking a million systems. Such wholesale manipulation of data has never taken place on a blockchain, and it is not an easy task for any attacker.

    While we store our data on a blockchain system, the threat of a possible hack gets eliminated. Every time our data is stored or inserted into blockchain ledgers, a new block is created. This block further stores a key that is cryptographically created. This key becomes the unlocking key for the next record that is to be stored onto the ledger. In this manner, the data is extremely secure.

    Furthermore, the hashing feature of blockchain technology is one of its underlying qualities that makes it such a prominent technology. Using cryptography and the hashing algorithm, blockchain technology converts the data stored in our ledgers. This hash encrypts the data and stores it in such a language that the data can only be decrypted using keys stored in the systems. Other than cybersecurity, blockchain has many applications in several fields that help in maintaining and securing data. The fields where this technology is already showing its ability are finance, supply chain management, and blockchain-enabled smart contracts.
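As a minimal illustration of the hash-chaining idea described above (a toy sketch, not any production blockchain), each block stores the hash of the previous block, so altering any record invalidates every block after it:

```python
# Toy hash chain: tampering with one block breaks validation of the chain.
import hashlib, json

def make_block(data, prev_hash):
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False                          # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                          # link to the previous block is broken
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("payment A->B", chain[-1]["hash"]))
chain.append(make_block("payment B->C", chain[-1]["hash"]))
print(chain_is_valid(chain))                      # True

chain[1]["data"] = "payment A->Mallory"           # attacker edits one record
print(chain_is_valid(chain))                      # False
```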


    Advantages of Using Blockchain in Cybersecurity

    The main advantages of blockchain technology for cyber security are the following:

    Decentralization

    Thanks to the peer-to-peer network, there’s no need for third-party verification, as any user can see network transactions.

    Tracking and tracing
    All transactions in blockchains are digitally signed and time-stamped, so network users can easily trace the history of transactions and track accounts at any historical moment. This feature also allows a company to have valid information about assets or product distribution.

    Confidentiality
    The confidentiality of network members is high due to the public-key cryptography that authenticates users and encrypts their transactions.

    Fraud security
In the event of a hack, it’s easy to identify malicious behavior due to the peer-to-peer connections and distributed consensus. As of today, blockchains are considered technically ‘unhackable’, as attackers can impact a network only by getting control of 51% of the network nodes.

    Sustainability
    Blockchain technology has no single point of failure, which means that even in the case of DDoS attacks, the system will operate as normal thanks to multiple copies of the ledger.

    Integrity
    The distributed ledger ensures the protection of data against modification or destruction. Besides, the technology ensures the authenticity and irreversibility of completed transactions. Encrypted blocks contain immutable data that is resistant to hacking.

    Resilience
    The peer-to-peer nature of the technology ensures that the network will operate round-the-clock even if some nodes are offline or under attack. In the event of an attack, a company can make certain nodes redundant and operate as usual.

    Data quality
    Blockchain technology can’t improve the quality of your data, but it can guarantee the accuracy and quality of data after it’s encrypted in the blockchain.

    Smart contracts
    Software programs that are based on the ledger. These programs ensure the execution of contract terms and verify parties. Blockchain technology can significantly increase the security standards for smart contracts, as it minimizes the risks of cyber-attacks and bugs.

    Availability
    There’s no need to store your sensitive data in one place, as blockchain technology allows you to have multiple copies of your data that are always available to network users.

    Increase customer trust

    Your clients will trust you more if you can ensure a high level of data security. Moreover, blockchain technology allows you to provide your clients with information about your products and services instantly.


    Disadvantages of Using Blockchain in Cybersecurity


    Irreversibility
    There’s a risk that encrypted data may be unrecoverable in case a user loses or forgets the private key necessary to decrypt it.

    Storage limits
Each block can contain no more than 1 MB of data (in Bitcoin’s case), and the blockchain can handle only about 7 transactions per second on average.

    Risk of cyberattacks
    Though the technology greatly reduces the risk of malicious intervention, it’s still not a panacea to all cyber threats. If attackers manage to exploit the majority of your network, you may lose your entire database.

    Adaptability challenges

    Though blockchain technology can be applied to almost any business, companies may face difficulties integrating it. Blockchain applications can also require complete replacement of existing systems, so companies should consider this before implementing the blockchain technology.

    High operation costs
    Running blockchain technology requires substantial computing power, which may lead to high marginal costs in comparison with existing systems.

    Blockchain literacy

    There are still not enough developers with experience in blockchain technology and with deep knowledge of cryptography.

    Conclusion
    Blockchain’s decentralized approach to cybersecurity can be seen as a fresh take on the issues that the industry faces today. The market could only use more solutions to combat the threats of cyberattacks. And, the use of blockchain may yet address the vulnerabilities and limitations of current security approaches and solutions.

    Throwing constant pots of money at the problem and knee-jerk reactions is not the answer. Firms need to sort out their governance, awareness, organizational culture and critically look at the business purpose and processes before they invest in systems to combat cybercrime.

    The roster of these new services provided by Blockchain may be limited for now and of course they face incumbent players in the cybersecurity space. But this only offers further opportunity for other ventures to cover other key areas of cybersecurity. Blockchain also transcends borders and nationalities, which should inspire trust in users. And, with the growth of these new solutions, the industry may yet restore some of the public’s trust they may have lost in the midst of all these issues.

    Overall, blockchain technology is a breakthrough in cyber security, as it can ensure the highest level of data confidentiality, availability, and security. However, the complexity of the technology may cause difficulties with development and real-world use.

Implementation of blockchain applications requires comprehensive, enterprise- and risk-based approaches that capitalize on cybersecurity risk frameworks, best practices, and cybersecurity assurance services to mitigate risks. In addition, cyber intelligence capabilities, such as cognitive security, threat modeling, and artificial intelligence, can help proactively predict cyber threats and create countermeasures, which is why AI is considered the first line of defense while blockchain is the second.

Ahmed Banafa Named No. 1 Top Voice To Follow in Tech by LinkedIn in 2016

    Read more articles at IoT Trends by Ahmed Banafa

    References
    https://www.ibm.com/blogs/insights-on-business/government/convergence-blockchain-cybersecurity/

    https://www.forbes.com/sites/rogeraitken/2017/11/13/new-blockchain-platforms-emerge-to-fight-cybercrime-secure-the-future/#25bdc5468adc

    http://www.technologyrecord.com/Article/cybersecurity-via-blockchain-the-pros-and-cons-62035

    https://www.allerin.com/blog/blockchain-cybersecurity

    All figures: Ahmed Banafa



    Semiconductors could be up 12% in 2018
    by Bill Jewell on 03-02-2018 at 12:00 pm

The global semiconductor market grew 21.6% for the year 2017, according to World Semiconductor Trade Statistics (WSTS). The market was much stronger than anticipated at the beginning of the year. Semiconductor Intelligence tracked publicly available forecasts to determine which was the most accurate. We used forecasts made in late 2016 and early 2017, prior to the availability of the January 2017 WSTS data in March 2017. The winner was Future Horizons with a projected 11% increase in the 2017 semiconductor market. Our Semiconductor Intelligence forecast of 8% was the second closest. Other forecasts ranged from 3.3% from WSTS to 7.2% from Gartner. A booming memory market was the key driver, up about 60% according to WSTS. The semiconductor market excluding memory grew about 9%, closer to expectations at the beginning of 2017.

    What is the outlook for 2018? The perennial optimists at Future Horizons are calling for 21% semiconductor market growth in 2018, about the same rate as in 2017. We at Semiconductor Intelligence are staying with our December 2017 forecast of 12% growth in 2018. Other recent projections are in a narrow range from Mike Cowan’s 5.9% to IC Insights’ 8.0%.

The growth rate of the 2018 semiconductor market is largely dependent on the first quarter. The first quarter is the seasonally weakest, averaging a 3.5% quarter-to-quarter decline over the last five years. Revenue guidance from major semiconductor companies confirms a likely decline in 1Q 2018. Double-digit quarter-to-quarter revenue declines are projected by Intel, Qualcomm, MediaTek and STMicroelectronics. The memory companies expect revenue gains, with Micron guiding +2.9% and Toshiba guiding 3.0%. Samsung and SK Hynix did not provide specific guidance, but both expect strong memory demand to continue in 1Q 2018. A weighted average of the revenue guidance from these companies points to a decline of over 3%. Using the upper end of guidance points to a decline of about 2%.
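For readers unfamiliar with the mechanics, a revenue-weighted average of guidance is computed as below. The company names, revenue weights and guidance figures are placeholders for illustration only, not the actual data behind the estimate above.

```python
# Revenue-weighted average of Q1 guidance; all inputs are hypothetical.
def weighted_guidance(companies):
    total_rev = sum(rev for rev, _ in companies.values())
    return sum(rev * growth for rev, growth in companies.values()) / total_rev

# (prior-quarter revenue in $B, guided Q1 revenue change)
companies = {
    "memory supplier A": (20.0, +0.03),
    "memory supplier B": (7.0, +0.03),
    "logic supplier C": (17.0, -0.12),
    "mobile supplier D": (6.0, -0.11),
}
print(f"weighted Q1 guidance: {weighted_guidance(companies) * 100:.1f}%")
```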

    The key 2018 assumptions behind our December 2017 forecast are unchanged:
    • Steady or improving demand for key electronic equipment
    • Slight improvement in global economic growth
    • Moderating, but continuing strong memory demand
    • Strong quarterly growth set in 2017 drives healthy 2018

The outlook for 2019 is weaker. As shown in the table below, Gartner expects PC and tablet unit shipments to recover from a 3.6% decline in 2017 to flat in 2018 and 2019. Mobile phone units should bounce back from a 2% decline in 2017 to 2.6% growth in 2018 and slow to 1.1% in 2019. The International Monetary Fund (IMF) January 2018 forecast called for a slight acceleration in GDP growth from 3.7% in 2017 to 3.9% in 2018. 2019 growth remains at 3.9%, healthy but with no acceleration from 2018. Our Semiconductor Intelligence forecasting models show the rate of semiconductor market growth is more closely linked to acceleration or deceleration in GDP than to the level of GDP growth.

Annual Growth Forecast | 2017  | 2018 | 2019 | Source
PC & Tablet units      | -3.6% | 0.0% | 0.0% | Gartner, Jan. 2018
Mobile phone units     | -2.0% | 2.6% | 1.1% | Gartner, Jan. 2018
Global GDP             |  3.7% | 3.9% | 3.9% | IMF, Jan. 2018

The strong memory market will probably not continue into 2019. The key question is whether the memory boom will end with a bust (severe declines in demand and prices) or a soft landing (moderation in demand and prices). We currently expect a soft landing in the memory market based on relatively steady demand for electronic end equipment. Our preliminary Semiconductor Intelligence forecast for 2019 is low single-digit growth of about 1% to 4%.



    Processing Power Driving Practicality of Machine Learning
    by Tom Simon on 03-02-2018 at 7:00 am

Despite their recent rise to prominence, the fundamentals of AI, specifically neural networks and deep learning, were established as far back as the late 50’s and early 60’s. The first neural network, the Perceptron, had a single layer and was good at certain types of recognition. However, the Perceptron was unable to learn how to handle XOR operations. What eventually followed were multi-layer neural networks that performed much better at recognition tasks, but required more effort to train. Until the early 2000’s the field was held back by limitations that can be tied back to insufficient computing resources and training data.
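As a small illustration of that XOR limitation (hand-picked weights rather than a trained network): a single linear threshold unit cannot separate XOR, but two layers of them can.

```python
# XOR with two layers of linear threshold units (weights chosen by hand).
def unit(inputs, weights, bias):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0

def xor_two_layer(a, b):
    h_or = unit((a, b), (1, 1), -0.5)            # fires if a OR b
    h_nand = unit((a, b), (-1, -1), 1.5)         # fires unless a AND b
    return unit((h_or, h_nand), (1, 1), -1.5)    # AND of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_two_layer(a, b))   # prints the XOR truth table
```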

    All this changed as chip speeds increased and the internet provided a rich set of images for use in training. ImageNet was one of the first really significant sources of labeled images, the type needed to perform higher quality training. Nevertheless, the theoretical underpinnings were established decades ago. Multilayer networks proved much more effective at recognition tasks, and with them came additional processing requirements. So today we have so called deep learning which boasts many layers of processing.

    While neural networks provide a general-purpose method of solving problems that does not require formal coding, there are still many architectural choices that are needed to provide an optimal network for a given class of problems. Neural networks have relied on general purpose CPU’s, GPU’s or custom ASICs. CPU’s have the advantage of flexibility, but this comes at the cost of lower throughput. Loading and storing of operands and results creates significant overhead. Likewise, GPU’s are often optimized to use local memory and perform floating point operations, which together do not always best serve deep learning requirements.

The ideal neural network implementation is a systolic array, where data moves directly from processing element to processing element. Also, deep learning has become very efficient with low-precision integer operations. So, it seems that perhaps ASIC’s might be the better vehicle. However, as architectures of neural networks themselves evolve, an ASIC might prematurely lock in an architecture and prevent optimization based on real-world experience.
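A hedged sketch of the systolic idea, as a functional model of the dataflow rather than any vendor's implementation: each processing element keeps its weight in place and accumulates a partial sum as input values arrive, so operands are not repeatedly fetched from memory.

```python
# Functional model of a weight-stationary systolic pass for matrix-vector multiply.
def systolic_matvec(weights, x):
    rows, cols = len(weights), len(weights[0])
    acc = [0] * rows                        # partial sums held locally by the output PEs
    for j in range(cols):                   # one beat: input x[j] arrives at column j's PEs
        for i in range(rows):
            acc[i] += weights[i][j] * x[j]  # each PE does one multiply-accumulate in place
    return acc

W = [[1, 2, 3],
     [4, 5, 6]]
print(systolic_matvec(W, [1, 0, -1]))       # [-2, -2]
```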

It turns out that FPGA’s are a nice fit for this problem. In a recent white paper, Achronix points out the advantages that FPGA’s bring to deep learning. The white paper, entitled “The Ideal Solution for AI Applications — Speedcore eFPGAs”, goes further to suggest that embedded FPGA is even more aptly suited to this class of problems. The paper starts out with an easily readable introduction to the history and underpinnings of deep learning, then moves on to the specifics of how processing power has created the revolution we are now witnessing.

    Yet, conventional FPGA devices introduce their own problems. In many cases they are not optimally configured for specific applications. Designers must accept the resource allocation available in commercially available parts. There is also the perennial problem of off chip communication. Conventional FPGA’s require moving the data through IO’s onto board traces and then back onto the other chip. The round trip can be prohibitively expensive from a power and performance perspective.

Achronix now offers embeddable FPGA fabric, which they call eFPGA. Because it is completely configurable, only the necessary LUT’s, memories, DSP, interfaces, etc. need to be included. And, of course, communication with other elements of the system is through direct bus interconnection or an on-chip NoC. This reduces the silicon needed for IO’s on both ends.

    The techniques and architectures used for neural networks are rapidly evolving. Design approaches that provide maximum flexibility require experimentation and evolution. Having the ability to modify the architecture can be crucial. Embedded FPGA’s definitely have a role to play in this rapidly growing and evolving segment. The Achronix white paper is available on their web site for engineers who want to look deeper into this approach.

    Read more about Achronix on SemiWiki.com

    Related Blog



    Robust Reliability Verification – A Critical Addition To Baseline Checks
    by Alex Tan on 03-01-2018 at 12:00 pm

Design process retargeting is a common recurrence, driven by scaling or BOM (Bill-Of-Material) cost improvement needs. This occurs not only with the availability of a foundry process refresh to a more advanced node, but also with any new derivative process node tailored towards matching design complexity, power profile or reliability needs. While many design companies rely on foundry-supplied baseline DRC (Design Rule Checks) and LVS (Layout Versus Schematic) rule decks that correspond to each process roll-out, the shift to new technologies such as FD-SOI (Fully Depleted Silicon On Insulator) and FinFET has injected more complex design verification needs.

During the past five years, continuous process migration has prompted an explosion of DRC rules in both complexity and quantity, due for example to multi-patterning, voltage-aware DRC, or FinFET-specific requirements (e.g. cell alignment, polygon shift). Figure 1 shows the trend. Hence, traditional DRC rule-based verification, which involves running foundry rule-sets, is no longer adequate. Instead, a robust reliability verification environment is necessary to ensure a successful tape-out. In fact, foundry selection increasingly hinges upon its availability.

    Intellectual Property (IP) reuse is an integral part of a design refresh and is taking a significant portion of design remapping efforts in addressing these aspects:

    • IP porting needs (physical footprint, power target, etc.)
    • IP validation in new context and across different IP’s.
• If process scaling is involved, handling special IP design aspects such as the Electro-Static Discharge (ESD) requirement for IO pin protection and its adjoining interconnect.
    • Validation of IP interaction at full-chip context to further complement stand-alone block level checks, which includes performing its reliability verification.

    There are a few key aspects covered in reliability verification as illustrated in Figures 2 and 3:

• Design-level ESD – ESD is widely known and normally causes irreversible circuit damage. Several protection schemes exist to mitigate this, including the common double-diode ESD network. Mentor’s Calibre PERC high-level checks GUI enables the description of these protection circuits in the form of a Calibre rule-check with minimum effort.

• Device-level Electrical Over-Stress (EOS) – EOS can be described as thermally induced damage due to over-voltage or over-current applied to a device. In low-power applications, the presence of high-voltage signals and the use of thin oxides make layouts vulnerable to electrical overstress, which may lead to oxide breakdown. In a multi-voltage-domain design, depending on how nets traverse the design, signals of different voltages may be near each other. This difference in voltage values can create electric fields that influence sensitive areas on the chip and lead to reliability issues, particularly for automotive and other high-power applications. To protect these nets from Time-Dependent Dielectric Breakdown (TDDB), usually caused by nets being too close to each other for their respective voltages, additional spacing rules are developed that specify power-domain spacing based on the voltage delta. The Calibre PERC voltage propagation feature enables designers to perform automated static analysis on large designs efficiently.

• Voltage-aware DRC – Once the netlist is extracted from the layout, Calibre PERC traces voltages throughout a design without the use of SPICE simulations or manual markers. It identifies nets and devices subject to voltage-aware DRC constraints, pinpoints the net voltages of interest and their deltas relative to neighboring nets, then uses them to run DRC net-spacing checks. These checks not only enable robust protection against TDDB, but also enable design teams to save significant design area by applying only the spacing required for each voltage combination (a simple illustrative sketch follows this list).

• Interconnect robustness checks – These check the interconnect linking IP to the ESD protection circuitry at the device level, using Point-to-Point (P2P) resistance or Current Density (CD) analysis to complement chip-level validation. Charged Device Model (CDM) checking is crucial on gates that are directly connected to power/ground due to shrinking gate-oxide thickness.
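As a purely illustrative sketch of the voltage-aware spacing idea mentioned above (this is not Calibre PERC syntax, and the table values are hypothetical): once net voltages have been propagated, the required spacing for a pair of nets can be looked up from a voltage-delta table and compared with the drawn spacing.

```python
# Toy voltage-aware spacing check with a made-up voltage-delta spacing table.
def required_spacing_nm(v_delta):
    table = [(1.0, 50), (2.5, 80), (5.0, 120)]   # (max |dV| in volts, required spacing in nm)
    for max_dv, spacing in table:
        if abs(v_delta) <= max_dv:
            return spacing
    return 200                                   # larger deltas get worst-case spacing

def check_pair(net_a, net_b, v_a, v_b, drawn_spacing_nm):
    need = required_spacing_nm(v_a - v_b)
    ok = drawn_spacing_nm >= need
    return (f"{net_a}/{net_b}: |dV|={abs(v_a - v_b):.1f}V needs {need}nm, "
            f"drawn {drawn_spacing_nm}nm -> {'OK' if ok else 'VIOLATION'}")

print(check_pair("VDD33", "net_core", 3.3, 0.9, 70))   # violation: 80nm required
print(check_pair("net_a", "net_b", 0.9, 0.0, 60))      # OK: 50nm required
```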


Most foundries nowadays provide baseline reliability rule decks and leverage the Calibre PERC reliability platform. TSMC rolled out TSMC9000 for its library and IP quality management program; on supported nodes, all TSMC IPs with a 100% score have been validated by Mentor’s Calibre PERC. Moreover, Calibre PERC was selected as the EDA reliability platform by the RESCAR 2.0 program, which is driven by a consortium of six major car makers and suppliers (Audi, BMW, etc.) and the German government. Their aim is to enhance the reliability and robustness of electronic automotive components, in conformance with the international functional safety standard ISO 26262. The collaboration has also yielded Calibre automotive reliability checks, and TowerJazz is the first commercial foundry to incorporate them into its standard Calibre PERC design kit offering.

In summary, demanding markets such as automotive and IoT dictate rigorous validation of both internal and third-party IPs, which should include reliability verification. A more streamlined and robust set of checks is crucial to complement foundry-provided, rule-based checks. Mentor’s Calibre PERC platform provides such a design kit and accommodates further customization to satisfy these demands. For more info on Calibre, please check Mentor’s white paper here.



    Concluding Inconclusives
    by Bernard Murphy on 03-01-2018 at 7:00 am

Formal methods are a vital complement to other tools in the verification arsenal, but they’re not without challenges. One of the more daunting is the “inconclusive” result – that case where the tool seems to be telling you that it simply gave up trying to figure out if a particular assertion is true or false. Compounding the problem, these inconclusive results aren’t rare events; they can actually be quite common, especially when you’re still on the learning curve. When I was first introduced to formal I thought that this made formal at best a minor tool in verification. If proving assertions was this hit-and-miss, how could it play a major role?

    Turns out I was wrong, but I had to learn a bit more about formal methods to find out why. An inconclusive result doesn’t mean that all hope is lost for that assertion. As in most things, you can try harder or you can try smarter to prove the assertion. You can also change your approach to the proof. Mentor recently released a white paper illustrating some of these methods through a flow and an example. I particularly like the example so I’ll focus on that here.

    This is based on an ECC-wrapped memory, common enough, especially in safety-critical designs. The function reads a (vector) data input and forwards that together with a syndrome value to (in this case) a FIFO. The decoder pulls entries from the FIFO and outputs the data. Through this process, errors in two bits or less can be corrected. So a natural way to approach a formal proof would be to assert that the output data should always be equal to the input data, add a mechanism to inject errors on 0, 1 or 2 bits, then launch the formal prover.

    If you do this, you’ll probably get lots of experience with inconclusives, thanks to the fairly complex logic in the encoder and decoder and long sequences that must be followed through the FIFO. So the first trick is to break the design into pieces; in this case, first bypass the FIFO and prove that the assertion always holds when the output of the encoder is connected directly to the input of the decoder.

    How do you inject errors? The white-paper suggests a common approach with a clever wrinkle. A simple way to error a data bit is to cut that line, which you can accomplish through an external “cutpoint” command. A formal engine will assume a cut line can take any possible value and will test for all of those values, some of which will obviously differ from the (pre-cut) input values.

You want to test that the ECC will recover from errors on two or fewer bits, so you need to add two or fewer of these cuts, but it would be cumbersome to list all of the possibilities, so here comes the wrinkle. The paper suggests adding a random bus with the same width as the data bus, also undriven, so formal will consider all possible values on the bus. Then cutpoints are added to those bits on the data bus where the corresponding bit on the random bus is high. Finally, the proof is constrained to only consider cases where two or fewer bits on the random bus are high. In this way the formal engine does the hard work of iterating over possible combinations of errors during the course of the proof.
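Here is a small Python analogue of that error-injection scheme (the white paper does this with cutpoints and formal constraints, not simulation, and the 5x-repetition code below is just a stand-in for a real ECC): an error mask constrained to at most two set bits is XORed onto the encoded word, and the property is that decoding still recovers the original data.

```python
# Exhaustive check that a toy ECC corrects all errors of two bits or less.
import itertools

DATA_BITS = 4
CODE_BITS = 5 * DATA_BITS                          # toy 5x repetition code

def encode(data):
    word = 0
    for i in range(DATA_BITS):
        bit = (data >> i) & 1
        for k in range(5):
            word |= bit << (5 * i + k)             # repeat each data bit 5 times
    return word

def decode(word):
    data = 0
    for i in range(DATA_BITS):
        ones = sum((word >> (5 * i + k)) & 1 for k in range(5))
        data |= (1 if ones >= 3 else 0) << i       # majority vote survives <=2 flips
    return data

def error_masks_up_to_two_bits():
    yield 0                                        # no error
    for i in range(CODE_BITS):
        yield 1 << i                               # all single-bit errors
    for i, j in itertools.combinations(range(CODE_BITS), 2):
        yield (1 << i) | (1 << j)                  # all double-bit errors

# A formal tool would explore this space symbolically; here we simply enumerate it.
assert all(decode(encode(d) ^ m) == d
           for d in range(2 ** DATA_BITS)
           for m in error_masks_up_to_two_bits())
print("output data equals input data for all errors of two bits or less")
```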

    Finally, you need to prove that the FIFO operates correctly. The good news here is that formal tools generally provide a support library (assertions and possibly constraints) to deal with common components. For example, the Mentor Questa formal tool has a predefined setup to handle FIFOs. Since you are just checking the FIFO, you can cut the data and syndrome inputs to the block, allowing the proof to consider any possible values.

    You might want to do one more thing – add a couple of constraints to avoid potential false errors. If read-enable is issued when the FIFO is empty or write-enable when the FIFO is full, that could be considered out-of-spec usage, or at least beyond the bounds of this proving task. Your choice, depending on what you want to prove. Either way, you can now run a proof using the pre-packaged assertions/constraints and verify the FIFO behaves correctly under all conditions.

    In summary, inconclusives are manageable, in this case by breaking the problem down into pieces and through judicious use of cutpoints, constraints and a pre-existing assertion-model for the FIFO. You just have to approach the problem in the right way. You can read the white-paper HERE.