IBM z13 Helps Avoid Costly Data Breaches
by Alan Radding on 07-07-2016 at 12:00 pm

A global study sponsored by IBM and conducted by the Ponemon Institute found that the average cost of a data breach for the companies surveyed has grown to $4 million, a 29 percent increase since 2013. With cybersecurity incidents continuing to climb (64 percent more security incidents in 2015 than in 2014), the costs are poised to grow further.


z13–world’s most secure system


The z13, at least, is one way to keep security costs down. It comes with a cryptographic processor unit available on every core, enabled as a no-charge feature. It also provides EAL5+ support, a certification for LPARs that verifies the separation of partitions to further improve security, along with a dozen or so other built-in z13 security features. For a full list of z13 security features click here. There also is a Redbook, Ultimate Security with the IBM z13, here. A midsize z, the z13s, brings the benefits of mainframe security and mainframe computing to smaller organizations. You can read about the z13s here on DancingDinosaur this past February.


As security threats become more complex, the researchers noted, the cost to companies continues to rise. For example, the study found that companies lose $158 per compromised record. Breaches in highly regulated industries were even more costly, with healthcare reaching $355 per record – a full $100 more than in 2013. And the number of records involved can run from the thousands to the millions.


Wow, why so costly? The researchers try to answer that too: leveraging an incident response team was the single biggest factor associated with reducing the cost of a data breach – saving companies nearly $400,000 on average (or $16 per record). In fact, response activities like incident forensics, communications, legal expenditures and regulatory mandates account for 59 percent of the cost of a data breach. Part of these high costs may be linked to the fact that 70 percent of U.S. security executives report they don’t even have incident response plans in place.


Responding to a breach is extremely complex and time consuming if not properly planned for. As described by the researchers, the response consists of a minimum of four steps. Among them, a company must:

  • Work with IT or outside security experts to quickly identify the source of the breach and stop any more data leakage
  • Disclose the breach to the appropriate government/regulatory officials, meeting specific deadlines to avoid potential fines
  • Communicate the breach with customers, partners, and stakeholders
  • Set up any necessary hotline support and credit monitoring services for affected customers

And not even included in the researchers’ list are tasks like inventorying and identifying the data records that have been corrupted or destroyed, remediating the damaged data, and validating it against the last known clean backup copy. I’m surprised the costs aren’t even higher. Let’s not even talk about the PR damage or the loss of customer goodwill. Now, aren’t you glad you have a z13?


That’s not even the worst of it. The study also found the longer it takes to detect and contain a data breach, the more costly it becomes to resolve. While breaches that were identified in less than 100 days cost companies an average of $3.23 million, breaches that were found after the 100-day mark cost over $1 million more on average ($4.38 million). The average time to identify a breach in the study was estimated at 201 days and the average time to contain a breach was estimated at 70 days. The cost of a z13 or even the lower cost z13s could justify itself by averting just one data breach.


The researchers also found that companies with predefined Business Continuity Management (BCM) processes in place found and contained breaches more quickly, discovering breaches 52 days earlier and containing them 36 days faster than companies without BCM. Still, the cheapest solution is to avert breaches in the first place.


Not surprisingly, IBM is targeting the incident response business as an up-and-coming profit center. The company increased its investment in the incident response market with the recent acquisition of Resilient Systems, which just came out with an updated version that graphically displays the relationships between Indicators of Compromise (IOCs) and incidents in an organization’s environment. But the z13 is probably a better investment if you want to avoid data breaches in the first place.


Surprisingly, sometimes your blogger is presented as a mainframe guru. Find the latest here.


DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Latest Pinpoint Release Tackles DRC and Trend Lines
by Don Dingee on 07-06-2016 at 4:00 pm

After reading previous SemiWiki coverage on Dassault Systèmes and their ENOVIA Pinpoint solution, one big item seemed missing: how does this thing actually work? With all due respect to our other bloggers who covered when Dassault Systèmes acquired Pinpoint from Tuscany Design Automation, why Qualcomm is using Pinpoint, and what it does for…
Continue reading “Latest Pinpoint Release Tackles DRC and Trend Lines”


IoT Tutorial: Chapter 7 – IoT data and IoT-BigData Convergence
by John Soldatos on 07-06-2016 at 12:00 pm

Introduction to IoT Data and their Characteristics
Most IoT applications to date involve the collection and processing of IoT data, i.e., data stemming from IoT sources such as sensors, wearables and other internet-connected devices. In the majority of cases the business benefits of an IoT application stem from the processing of IoT data. Typical examples include:

  • Security applications involving the processing of information from multiple cameras deployed in urban areas in order to identify security events in a timely manner.
  • Urban mobility applications relying on the processing of data from traffic sensors in order to identify and alleviate traffic congestion.
  • Healthcare applications involving the collection and processing of behavioral information of a subject (based on streams from cameras, accelerometers and wearables) towards identifying lifestyle patterns.
  • Sports and fitness applications processing information from wearables in order to track statistics and training parameters for athletes.
  • Smart city applications entailing collection and processing of information from smart meters towards energy management at various timescales.

As is evident from the above examples, data-intensive IoT applications involve the processing of data from various sensors and devices. In several cases these applications can also combine data from other sources such as open data sources and social media. Furthermore, these IoT applications process IoT data at various timescales, ranging from real-time processing for operational applications (e.g., traffic rerouting in case of congestion) to data processing on a weekly, monthly or yearly basis as part of strategic-level applications (e.g., transport planning).

Apart from applications (such as those listed above) whose business logic is the IoT data processing itself, there are also IoT applications which focus on actuation and real-time control rather than on providing data to their end users. Typical examples include CPS (cyber-physical systems) controlling robots in manufacturing plants or actuators in connected-car applications. Despite their emphasis on control (rather than data provision), these applications are in most cases also driven by IoT data processing, since decisions are usually based on the collection and analysis of IoT data from different data sources.

IoT data feature certain characteristics, which distinguish them radically from other types of data sources and respective applications (e.g., classical transaction applications). These characteristics include their streaming and real-time nature, their spatial and temporal characteristics, as well as their special security and privacy requirements (e.g., in cases where collection and processing of personal data are involved). The special characteristics and related challenges for IoT data processing applications can be listed as follows:

  • Heterogeneity of IoT data streams: IoT data streams tend to be multi-modal and heterogeneous in terms of their formats, semantics and velocities. Hence, IoT analytics applications typically exhibit variety and veracity. BigData technologies provide the means for dealing with this heterogeneity in the scope of operationalized applications.
  • Varying data quality: Several IoT streams are noisy and incomplete, which creates uncertainty in the scope of IoT analytics applications. Statistical and probabilistic approaches must therefore be employed in order to take into account the noisy nature of IoT data streams, especially in cases where they stem from unreliable sensors (a minimal smoothing sketch appears after this list).
  • Real-time nature of IoT datasets: IoT streams feature high velocities and for several applications must be processed nearly in real time. Hence, IoT analytics can greatly benefit from data streaming platforms, which are part of the BigData ecosystem.
  • Time and location dependencies of IoT streams: IoT data come with temporal and spatial information, which is directly associated with their business value in a given application context. Hence, IoT analytics applications must in several cases process data in a timely fashion and from the proper locations. Cloud computing techniques (including edge computing architectures) can greatly facilitate timely processing of information from given locations in the scope of large-scale deployments.
  • Privacy and security sensitivity: IoT data are typically associated with stringent security requirements and privacy sensitivities, especially in the case of IoT applications that involve the collection and processing of personal data.
  • Data bias: As in the majority of data mining problems, IoT datasets can lead to biased processing and hence a thorough understanding and scrutiny of both training and test datasets is required prior to their operationalized deployment. To this end, classical data mining techniques can also be applied in the case of IoT applications.
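Here is the short sketch referenced above. It assumes a simple stream of numeric readings with occasional missing samples (the names, data and parameter values are purely illustrative, not tied to any particular IoT platform or sensor) and applies an exponential moving average, one of the simplest statistical approaches for dampening a noisy, incomplete sensor stream.

# Minimal sketch (illustrative only): smoothing a noisy, incomplete IoT stream
# with an exponential moving average, skipping missing readings.
from typing import Iterable, Optional

def smooth_stream(readings: Iterable[Optional[float]], alpha: float = 0.3):
    """Yield an exponentially smoothed estimate for each incoming reading."""
    estimate = None
    for value in readings:
        if value is None:           # incomplete data: the sensor dropped a sample
            yield estimate          # carry forward the last estimate
            continue
        if estimate is None:
            estimate = value        # initialize with the first valid sample
        else:
            estimate = alpha * value + (1 - alpha) * estimate
        yield estimate

# Example: a temperature stream with a missing sample and a spurious spike
raw = [21.0, 21.4, None, 35.0, 21.2, 21.3]
print(list(smooth_stream(raw)))     # the 35.0 spike is dampened, not eliminated

In practice such pre-processing would typically run inside a streaming platform, often at the edge close to the data source, consistent with the cloud and edge computing remarks above.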

IoT Data-Intensive Applications Lifecycle
The development of IoT applications entails the following activities, which are usually combined towards developing and deploying non-trivial IoT data applications:

  • IoT Data Collection, including interfacing to IoT sources (i.e., internet-connected devices) and enrichment of these data with appropriate contextual metadata, such as location information and timestamps. As already outlined, the collection process typically needs to deal with the heterogeneity of the IoT data sources and their data streams, including heterogeneity of the interfaces to data sources and of data formats.
  • IoT Data Validation, including validation of the format and source of origin of the data. The process also includes the validation of their integrity, accuracy and consistency. (A minimal sketch of the collection and validation activities appears after this list.)
  • IoT Data Semantic Unification and Interoperability, which deals with the unification/homogenization of the semantics of IoT streams stemming from different sources, as a prerequisite for their unified processing.
  • IoT Data Structuring and Storage, which involves the persistence of validated and interoperable data in an appropriate database, such as a streaming database, object database or even graph database.
  • IoT Data Analysis, which deals with the application of data mining and machine learning techniques (e.g., regression, neural networks, decision trees, clustering) towards transforming IoT data streams into actionable knowledge.
  • Deployment of IoT analytics algorithms, which involves the actual deployment and operationalization of machine learning and data mining schemes for data analytics.
  • IoT data visualization, which emphasizes the presentation of IoT data in a graphical format, including browsing across the temporal and spatial dimensions of the IoT datasets.
  • IoT data repurposing and reuse, which entails access to IoT datasets towards reusing them across different applications.
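Here is the collection and validation sketch referenced above: a raw reading is wrapped with contextual metadata (timestamp, location) and then passed through simple format and consistency checks before it would be handed to storage. All field names and value ranges are illustrative assumptions, not part of any specific IoT platform.

# Minimal sketch (illustrative only) of the first two lifecycle activities.
import time

def collect(raw_value: float, sensor_id: str, lat: float, lon: float) -> dict:
    """IoT Data Collection: enrich a raw reading with contextual metadata."""
    return {
        "sensor_id": sensor_id,
        "value": raw_value,
        "timestamp": time.time(),               # temporal context
        "location": {"lat": lat, "lon": lon},   # spatial context
    }

def validate(record: dict) -> bool:
    """IoT Data Validation: check format, required fields and basic consistency."""
    required = {"sensor_id", "value", "timestamp", "location"}
    if not required.issubset(record):
        return False
    if not isinstance(record["value"], (int, float)):
        return False
    # Consistency check: an assumed plausible range for this sensor type
    return -40.0 <= record["value"] <= 85.0

record = collect(22.5, "temp-001", lat=37.98, lon=23.73)
if validate(record):
    print("record accepted:", record)

A real deployment would replace the hard-coded range check with source-specific validation rules and attach richer provenance metadata, but the collect-enrich-validate sequence stays the same before semantic unification, storage and analysis take over.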

IoT and BigData Convergence
The above-listed IoT data processing challenges and activities are very closely related to the wave of BigData technologies. Indeed, IoT data are characterized by the Vs that are commonly associated with BigData technologies. In particular, BigData systems refer to data processing and management systems, which feature one or more of the following characteristics (Vs):

  • Volume: Very high data volumes, beyond those that can be handled by state-of-the-art data management systems.
  • Velocity: Data streams with very high ingestion rates, which cannot be handled by state-of-the-art systems and databases.
  • Variety: Data featuring extreme heterogeneity in terms of velocities, formats and semantics.
  • Veracity: Data that are characterized by uncertainty and unreliability.

IoT analytics applications are typically characterized by:

  • High data volumes, since in several cases they have to collect and process streaming information from thousands of sensors.
  • High velocity, since they usually involve streaming data that are collected and in several cases processed in real time.
  • High variety, since it is usual to interface with and leverage data from heterogeneous sensors and internet-connected devices.
  • High veracity, as sensor data are typically noisy and prone to errors stemming from the unreliability of the devices.

Nevertheless, IoT data have also several differences from conventional BigData analytics, in particular:

  • IoT data collection consumes bandwidth, network, energy and other resources. Furthermore, data collection depends on multiple layers of the network.
  • IoT data analytics should be optimized with respect to the available resources and cross-layer optimisations (i.e., the so-called deep IoT analytics).
  • Contrary to conventional BigData systems, IoT analytics solutions should work across multiple systems and platforms.
  • IoT analytics applications integrate in several cases physical, cyber and social datasets.
  • IoT analytics and IoT processing are in several cases part of real-time control systems, providing actionable information.

Note that IoT analytics systems are commonly deep IoT analytics systems involving multiple platforms (e.g., IoT/cloud platforms, publish/subscribe platforms), networks, IoT data sources, etc., i.e., the whole ecosystem of IoT platforms and technologies. Such systems combine data from multiple sources, (near-) real-time analytics, visualisation and semantic representations towards transforming raw IoT data into insights and actionable knowledge. The development and deployment of deep IoT analytics systems is challenging, given that they integrate and/or transcend multiple networks, clouds, IoT platforms and more, thus requiring optimization across multiple levels.

Beyond the systemic aspects of IoT-based data-intensive applications, the development of IoT analytics applications requires the blending and integration of machine learning schemes and data science with IoT platforms. This is discussed in one of the next chapters of the tutorial.

Resources for Further Reading

View all IoT Tutorial Chapters


STT-MRAM – Coming soon to an SoC near you
by Tom Dillinger on 07-05-2016 at 4:00 pm

An increasing percentage of SoC die area is being allocated to memory arrays, as applications require more data/instruction storage and boot firmware. Indeed, foundries invest considerable R&D resources into optimizing their array technology IP offerings, often with more aggressive device features than used for other IP (e.g., specific SRAM bit cells) and/or with unique process options altogether (e.g., embedded DRAM, non-volatile memory technology). Yet, what are the characteristics of an ideal SoC array? Is there a single technology option that could cover all or most of the application requirements?

The holy grail of an IP memory offering would provide:

  • high density
  • low additional cost, ideally leveraging an existing CMOS process with minimal FEOL changes required and minimal additional masking layers
  • low power (active power, and especially, leakage power)
  • fast read access time, random access/addressability
  • non-volatility (with long retention)
  • low wear-out (very large number of write cycles)
  • thermal stability
  • high yield, high reliability
  • low error rates, low susceptibility to an event upset

An integrated circuit memory technology that has been actively researched for several years is magnetoresistive RAM, or MRAM, for short. As will be discussed briefly below, there is an evolving MRAM technology option that represents many of the preferred characteristics listed above.

At the recent DAC conference in Austin, I had the opportunity to chat briefly with Kelvin Low, Senior Director of Foundry Marketing at Samsung Semiconductor. He was extremely excited (and justifiably proud) to highlight the Samsung exhibit demonstrating a pre-production silicon implementation of a 28nm STT-MRAM array. This offering will be available to Samsung Foundry 28FDSOI customers in 2018.

There are some recent, commercial MRAM memory parts available, but to my knowledge, this is the first IP availability announcement by a major foundry for SoC customers.

STT-MRAM Introduction

Unlike traditional IC array technologies, MRAM does not rely upon the presence/absence of (active/dynamic) electrical charge on a storage node, but rather on the polarity of a local magnetic moment in one or more materials. The operation of the memory is current-based, rather than voltage-based.

A write current is applied to set the magnetic moment orientation at the memory bit location. A (lower) read current senses the magnetic polarity. The magnetic moment orientation modulates the electrical resistance through the material layers – that resistance difference is non-destructively sensed during the read cycle. The technical developments in the disk drive industry – and in disk drive heads, in particular – are being applied to integrated circuit processing. (Or, for us old-timers, think back to magnetic core memory technology.)

There are several MRAM technology options that have been researched. The specific method used to “flip” the orientation at the array bit location is primarily what differentiates the various MRAM technologies.

Samsung has selected the Spin-Transfer Torque method (STT-MRAM), as illustrated above. The STT-MRAM bit cell consists of a sandwich of three materials. The base or fixed layer is magnetically strong. (There are actually multiple material layers deposited and patterned for the fixed layer, which are simplified to a single layer in the figure.) A very thin electrically-insulating material – i.e., a few atomic layers thick – separates the fixed layer from the free material layer, which is magnetically weak. The magnetic polarity of the free layer will define the bit storage value.

As the function of the cell utilizes electron tunneling through the thin intermediate layer, the STT-MRAM cell is also commonly denoted as a magnetic tunneling junction (MTJ).

Electrical connectivity to the cell is provided by a traditional access transistor, leveraging existing CMOS processing. The STT-MRAM materials are added later in the overall process flow, residing above the transistors, minimizing the FEOL process disruption. The unique circuit topology of the STT-MRAM is illustrated below – in addition to the conventional array word line and bit line, the MTJ is connected to a sense line, due to the current-based operation.

The operation of the STT-MRAM cell relies upon the behavior of electron tunneling through a thin dielectric, as mentioned above.

Simplistically (and with apologies to my quantum mechanics professor), the ferromagnetism of a material derives from the presence of unpaired electrons in the atoms, and thus, unpaired electron spin. The localized motion of these unpaired electrons results in a net atomic orbital magnetic vector (magnitude and direction), which applies a torque on adjacent atoms to align.


Write cycle for parallel magnetic orientation (Source: Samsung Foundry)


Write cycle for anti-parallel magnetic orientation (Source: Samsung Foundry)

The STT-MRAM bit cell utilizes this general property of electron spin and magnetic vector angular momentum to establish the magnetic moment in the free layer. Referring to the figures above, application of a write current through the STT-MRAM cell cross-section from free layer (FM2) to fixed layer (FM1) is achieved by electron tunneling through the dielectric from FM1 to FM2. Electrons now present in FM2 with the prevalent spin orientation from FM1 will apply a net torque, with the net result of an overall parallel magnetic orientation between the two layers at the end of the write cycle.

A cell write current in the opposite direction is a little more complex, as it depends upon the spin-dependent transmission and reflection coefficients at the FM1 material interface (within ~1-2 atomic lattice constants) of each of the two electron spin states originating from FM2. The net is that an anti-parallel magnetic moment orientation will be present in the two layers at the end of the write current cycle.

The read cycle current through the cell is significantly less than the write current that is required to set the magnetic orientation in the free layer. The key feature is the difference in the electrical resistance through the cell, depending upon whether the orientation is parallel or anti-parallel. This resistance difference and the read current results in a voltage differential that is sensed to determine the cell stored value.

Tunnel Magnetoresistance (TMR) ratio = (R_anti-parallel − R_parallel) / R_parallel
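To make the ratio concrete, here is a small back-of-the-envelope calculation in Python. The resistance and read-current values are assumed for illustration only (they are not Samsung figures); the point is simply how the parallel/anti-parallel resistance difference translates into a voltage difference that the sense circuitry can detect.

# Illustrative arithmetic only: assumed values, not published Samsung figures.
R_PARALLEL = 5.0e3         # ohms, assumed low-resistance (parallel) state
R_ANTIPARALLEL = 12.5e3    # ohms, assumed high-resistance (anti-parallel) state
I_READ = 10e-6             # amps, assumed small, non-destructive read current

tmr = (R_ANTIPARALLEL - R_PARALLEL) / R_PARALLEL
delta_v = I_READ * (R_ANTIPARALLEL - R_PARALLEL)

print(f"TMR ratio: {tmr:.0%}")                              # 150%
print(f"Read voltage difference: {delta_v * 1e3:.1f} mV")   # 75.0 mV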

Referring again to the ideal memory IP characteristics list above, an STT-MRAM memory array indeed represents many of the desired properties.

  • A DRAM-like single access-transistor cell with storage node for high density – check.
  • Low power – check.
  • Fast (non-destructive) read access time – check.
  • Non-volatility – check.
  • High reliability, with low wear-out (e.g., no charge-pumped, high-voltage write operation required) – check.

As the STT-MRAM IP technology approaches production qualification at Samsung Foundry, look for additional Semiwiki articles with more technical details.

For now, it would be worthwhile to envision how your future products could leverage the unique characteristics of this array offering. Indeed, before long, the MRAM acronym may just as easily signify “Must-have” RAM. 😀

For more details about Samsung Foundry technology, please follow this link.

-chipguy


Semiconductor is Definitely NOT Business as Usual!
by Daniel Nenni on 07-05-2016 at 12:00 pm

Next week is SEMICON West, where more than 26,000 of my compatriots will meet to discuss the future of the all-important semiconductor industry. How important is the semiconductor industry, you ask? Well, your life literally depends on it, absolutely. Scott Jones and I will be covering the event live for SemiWiki, so stay tuned. If you will also be there, it would be a pleasure to meet you; drinks are on me.
Continue reading “Semiconductor is Definitely NOT Business as Usual!”


Smart Buildings are Stupid and Insecure?
by Bill McCabe on 07-05-2016 at 7:00 am

An Internet of Buildings (IoB) that really works and can’t be hacked? The IoT holds great promise for nearly every aspect of society and, of course, is rife with business opportunity as well. One of the most exciting opportunities on both fronts remains the opportunity to create connected buildings.
Continue reading “Smart Buildings are Stupid and Insecure?”


Emerging Disruptions from Blockchain
by Raman Chitkara on 07-04-2016 at 4:00 pm

For several years, Bitcoin has captured headlines not only for becoming the leading digital currency, but also for wild fluctuations in its value. Will Bitcoin succeed? The jury’s still out. But now the underlying technology – an encrypted, distributed digital ledger called blockchain – is riding a wave of adoption for many new use cases.
Continue reading “Emerging Disruptions from Blockchain”


Dilbert Flopped – But We Still Laugh
by H.B. on 07-04-2016 at 12:00 pm

This strip is about an old timer talking with a smart ass who questions why experience is relevant in today’s “fast paced” technology industry. It has shown up so often on LinkedIn that I thought I should make a separate post, copy my responses into it, and just link to it next time.
Continue reading “Dilbert Flopped – But We Still Laugh”