
Positive pointers from Samsung, GF, Renesas, NXP/Freescale, ST, Soitec – so will 2016 be the year of FD-SOI?

by Adele Hars on 02-10-2016 at 4:00 pm

A little over a month into 2016 and we already have a raft of FD-SOI news from Samsung, GlobalFoundries, NXP/Freescale, Renesas and more. Quite a bit of it came out of the recent SOI Consortium forum in Tokyo. Many of the presentations are now available on the SOI Consortium website (click here to see what’s there) – but keep checking back for more.
Continue reading “Positive pointers from Samsung, GF, Renesas, NXP/Freescale, ST, Soitec – so will 2016 be the year of FD-SOI?”


Early Structural Reliability Analysis of a Chip-Package-System design is a must!

by Tom Dillinger on 02-10-2016 at 11:00 am

2015 will be remembered as the year when chip-package-system (CPS) physical co-design and electrical/thermal analysis methodologies took center stage.

Continue reading “Early Structural Reliability Analysis of a Chip-Package-System design is a must!”


Semiconductor, Oil, and GDP – Correlated? What’s Expected?

by Pawan Fangaria on 02-10-2016 at 7:00 am

In the last three decades of the semiconductor market, the largest growth in IC sales was 33%, in 2010. The global recession triggered by the financial crisis had just occurred, and in 2009 oil prices fell more than 30%. It appeared that oil prices were negatively correlated with semiconductor market growth. Today there is another sharp decline in oil prices, yet IC sales growth is not expected to rise significantly, because several other factors can affect the semiconductor market negatively.

From an economic standpoint, it is understandable how commodities (and finished products) produced, exported, and imported by different countries affect the GDP of those countries and of the world in general. The impact of low oil prices differs from country to country. Let’s analyze the GDP data of the countries that dominate electronics and oil production.

A country’s share of world GDP largely determines how much it can influence the direction of the world economy. The USA has the highest share: in 2014 it held 22% of world GDP, followed by China at 12.4% and Japan at 8%; the Eurozone’s combined share was 16%. In 2015 the ranking remained the same: USA, followed by China, Japan, Germany, U.K., France, India, and so on. The oil-rich countries other than the USA (Russia, Nigeria, the Middle East, …) each represented between 0.5% and 3% of world GDP; Russia’s share was the largest at 2.8%.

Clearly the USA, China, Japan, and the Eurozone have a high contribution of electronics in their GDP, while Russia, Nigeria, and the Middle East derive their GDP mainly from oil, with a negligible electronics contribution. That explains how world GDP also increased by 4% in 2010, alongside semiconductor market growth of 33%: semiconductors are a key ingredient of electronics, accounting for about 25% of electronic systems content.

Let’s analyze this chart from an IC Insights report, which represents electronic industry interdependence between 2014 and 2016.


The semiconductor market is expected to grow by 4% in 2016, after a decline of 1% in 2015. It correlates positively with electronic systems, which are also expected to grow, by 2% in 2016 after a decline of 2% in 2015. Against that backdrop, a 1% decline in semiconductor capex and a 4% increase in the semiconductor materials market in 2016 are also explainable.

Why can’t we expect even higher growth in the semiconductor market in 2016? Several factors are at play. World GDP is expected to grow by 2.7%, just above the recession threshold of 2.5%. Why so low? There are other pain points: a recession in metals and mining, with oversupply and weak demand for metals worldwide; construction activity has also slowed, and there is overcapacity in cement. These factors weigh on overall GDP.

China, second in world GDP, produces half of the world’s steel, which is now in surplus. China’s GDP growth is significantly down, the weakest in seven years, pegged at 6.3% for 2016. Since China leads the consumer electronics market (PCs, smartphones, TVs, and so on), its lower GDP growth definitely affects semiconductor growth.

Japan, third in world GDP, is in recession and actually sliding into deflation. The Eurozone doesn’t present a rosy picture either. However, the top-ranked countries in world GDP drive a significant share of the electronics market, so they have a positive impact on semiconductors that partly offsets the negative GDP factors.

The top GDP contributor, the USA, is expected to grow its GDP at 2.5% in 2016. With 22% of worldwide GDP, the USA provides a very positive impact on semiconductors, and that impact is positively correlated with falling oil prices.

Within semiconductors, after the massive consolidation of 2015, new manufacturers are not expected to enter the market in the near future. Capital expenditure will also be lower, barring the flash memory segment, according to the IC Insights report.

Meanwhile, semiconductor R&D spending remains modest, with an overall 1% increase in 2015 to reach $56.4B. Intel spent the most ($12.1B, i.e. 24% of sales), and TSMC, 5th among the top R&D spenders, increased its R&D spend by 10%, the most among the semiconductor leaders. The pure-play foundries will keep up the innovation and keep providing thrust to IC manufacturing.

Weighing the positive factors for semiconductors against all the negative factors affecting world GDP, a 4% increase in the worldwide semiconductor market in 2016 looks very good.

The IC Insights reports on the 2016 IC market and on 2015 semiconductor R&D spending can be found here and here. Also read: 30+ Years of Semiconductors – The base matters!

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Smart Phones and the Chinese Marketplace

by Daniel Payne on 02-10-2016 at 6:00 am

My first mobile phone was from Motorola and it was fondly called the “brick phone” because of its crude shape. That phone helped me be more efficient while living and working in Silicon Valley, because long commute times in the car were common. At one time Motorola was the best-selling mobile phone brand in the world, then Nokia had their turn in the lead, while today we see Samsung as the number one mobile phone provider.

Looking at the largest sized market for consumer electronics we see China with a population of 1.376 billion, so a natural question is, “Who are the leading mobile phone companies in China?” According to analysis from Canalys and Counterpoint Research here are the market standings for smartphones in China:


  • Xiaomi, 15.9%
  • Huawei, 15.7%
  • Apple, 12.2%
  • Vivo, 9.1%
  • Oppo
  • Samsung

    Source: Tech Crunch


    I’m using a Samsung Galaxy Note 4 phone and love the 5.7″ display, stylus, build and features. Here in the USA we don’t hear much about the Xiaomi and Huawei brands, but just wait for their marketing campaigns to arrive. Samsung used to be the number one smartphone supplier in China, but for now four of the top five suppliers there are local Chinese companies. Many of these Chinese smartphones fall in the low to mid-range price tiers, while Samsung mostly sells high-priced smartphones like the Galaxy Note 5 and Galaxy S6.

    So Samsung has fallen from favor in the Chinese market, from number one in 2014 all the way down to number six in 2015, yet it continues its worldwide leadership in spite of increased global competition.

    Both Huawei and Xiaomi are rising stars in China because their smartphone products are quite compelling:

    • Xiaomi Mi 4c – 5 inch display with 1080p, Snapdragon 808 CPU, 13MP camera, about $200
    • Huawei Nexus 6P – 5.7 inch display at 1440×2560 pixels, Snapdragon 810 CPU, 12MP camera, about $350


    Xiaomi 4c


    Huawei Nexus 6P

    My local Best Buy store sells the Huawei Nexus 6P smartphone but none of the Xiaomi products, so it appears Huawei is a bit further ahead in its USA marketing than Xiaomi.

    Summary
    In China we now see that about half of the best-selling smartphones are supplied by Chinese companies, while Samsung has fallen out of the top 5 into the number 6 position. Apple still holds a respectable number 3 position with its high-end smartphone, the iPhone 6S. Lenovo of China now owns the Motorola brand, so in 2016 we may see Lenovo grow into a top 5 position. American distribution of Chinese smartphones is still quite small, but I expect that to change soon.

    Related Blogs


    Quantum Computing – A Quick Review
    by Bernard Murphy on 02-09-2016 at 4:00 pm

    This topic comes up periodically but for me had always been one of those things I’d get around to understanding better someday. A recent blog in SemiWiki got me looking a little harder and determined to write a blog to get this out of my system, if for no other reason than getting rid of excess tabs on my browser. So here’s my quick review. Apologies as always to experts in the field.

    Core Principle
    If you have a little quantum physics background and a decent amount of imagination, this is pretty simple. Conventional computing relies on Boolean logic – bits are either 0 or 1. Quantum computing (QC) relies on bits (called qubits) based on things like electron spins which can be up or down (corresponding to 0 or 1) but can also be in superposition states, where say there is 50% probability a qubit is in one state and 50% probability it is in the other state.

    Now put ‘n’ of these superposition states together and run an algorithm, say searching for all prime numbers representable in that set of qubits (up to 2^n). If you did this with conventional bits, you’d have to check each number serially (maybe a few numbers simultaneously on each pass if multi-processing). But with qubits you can check all 2^n possibilities simultaneously, at least in principle, because each qubit carries both a 0 state and a 1 state.
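    The 2^n parallelism can be made concrete in a few lines. This is a sketch using NumPy state vectors (a classical simulation, not a quantum runtime): an n-qubit register is a vector of 2^n complex amplitudes, and a uniform superposition weights every n-bit basis state equally.

```python
import numpy as np

# An n-qubit register is a vector of 2**n complex amplitudes.
# A uniform superposition gives every basis state (every n-bit number)
# equal amplitude - this is what lets a quantum algorithm "touch"
# all 2**n candidates at once.
n = 3
state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

# Measurement probabilities are squared amplitude magnitudes;
# they must sum to 1, and each of the 2**n outcomes is equally likely.
probs = np.abs(state)**2
print(probs)        # eight entries of 0.125
print(probs.sum())  # 1.0
```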

    Who cares?
    Since QC can reduce some high-complexity problems to very low complexity, it has the security industry concerned. Some popular encryption techniques exchange a public key which is the product of two large prime numbers. Decryption is only possible if you know the two prime numbers and security lies in the extreme difficulty of factoring in these cases.

    But with QC, Shor’s algorithm has been shown to be able to factor large numbers with roughly O((log N)^2) complexity, which would make standard encryption techniques easily crackable at any reasonable key size. (In fact all the standard techniques appear to be vulnerable to QC, not just the product-of-primes methods.)

    How does QC work?
    Not much like Boolean logic; you have bits and gates but there the similarity ends. You’re working with superposition states, so the logic looks quite different. A qubit is modeled as a 2-component vector, and a single-input logic function can be represented as a 2×2 matrix; one example is the Pauli-X gate, the QC equivalent of a Boolean inverter.
    The input vector is transformed by the matrix and the output is a 2-component vector. For a 2-input quantum gate, each input requires a 2-component vector and the logic function grows to a 4×4 matrix; an example is the Controlled-NOT (CNOT) gate.
    Don’t worry about the math or the gate types – what’s important is that the inputs are superposition states (2-component vectors per qubit) and the outputs also have to be superposition states. All QC gates can be reduced to a small set of universal quantum gates, from which you can build whatever logic you want. The math has been figured out, the best gates to build are known; now you just have to build them.
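    The two gates mentioned above can be sketched numerically in NumPy. The matrices below are the standard textbook definitions of Pauli-X and CNOT; again this is classical simulation of the algebra, not quantum hardware:

```python
import numpy as np

# Pauli-X: the quantum analogue of a Boolean inverter (2x2, one qubit).
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# CNOT: flips the target qubit only when the control qubit is |1>
# (4x4, two qubits, basis ordered |control, target>).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)  # |0>
ket1 = np.array([0, 1], dtype=complex)  # |1>

print(X @ ket0)  # becomes |1>, i.e. [0, 1]

# |10> means control=1, target=0; CNOT flips the target, giving |11>
ket10 = np.kron(ket1, ket0)
print(CNOT @ ket10)  # [0, 0, 0, 1], i.e. |11>
```

    Gates act by plain matrix-vector multiplication, and multi-qubit states are built with the Kronecker product, which is why the matrices grow as 2^n × 2^n.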

    Implementation Realities
    Before you get too excited, building a QC is not easy. There are at least 3 main problems: building the qubits themselves, building the logic and managing decoherence.

    Building qubits is somewhat under control, at least in the lab (and D-Wave). Early mechanisms used superconducting Josephson junctions, but these commonly depend on fairly exotic materials like niobium-tin alloys or yttrium-barium-copper-oxide (YBCO) which makes them less than friendly to traditional semiconductor manufacturing flows.

    More recently qubit implementation in quantum dots on silicon substrates has been demonstrated; this is closer to something that might eventually be manufactured in volume.

    Logic gates have made similar progress, but only very recently (late last year) for 2-input gates in silicon, when an Australian group demonstrated that a large number of 2-qubit operations could be completed within the “quantum coherence time”, making scalability look achievable.

    This coherence time is critically important in the QC world. Superposition states are very fragile, so any uncontrolled interaction with the environment (through thermal noise for example) can cause a state to decohere into a single state or some unrelated superposition, destroying any progress made in calculation. This is why QCs run at very low temperatures (tenths of a kelvin or even milli-kelvins) – not to support superconductivity but to reduce thermal contributions to decoherence.

    For silicon-based devices it is also important to use isotopically-purified silicon. ²⁹Si nuclei (one neutron added to the more common ²⁸Si) can spin-couple to qubits, causing decoherence. NIST has been working in this area and has now demonstrated purification to 99.9999%.

    So there are probably no insuperable barriers in the silicon technology direction but the cooling requirement, at least in the mainstream, is substantial: liquid nitrogen to get down to 77K, liquid helium to get from there down to 4K and adiabatic demagnetization to get from there down to sub-K temperatures. And you have to vacuum pump the enclosure before you start cooling. (In fairness, there are mechanical alternatives to the first two steps but these generate significant vibration – also unfriendly to coherence – and are very power-hungry.)

    This is not something you can shrink onto a chip. If you look at a picture of any opened-up QC, you see the kind of structure that sits inside an ultra-low-temperature physics experiment. Most of the computer volume is going to be dedicated to cooling, a good piece of which is simply thermal separation between the part that has to get very cold and the rest of the ambient-temperature world.

    There is some intriguing work being done on electron cooling in quantum wells inside room-temperature semiconductors, though so far this only gets down to about 45K and probably not close to thermal capacities adequate to cool a full QC chip.

    To be complete, there is a concept of quantum error correction which can help sustain coherence longer than might otherwise be possible. This is interesting technology in its own right – you can’t just look at qubits, figure out what’s wrong and correct them as in conventional EDC. Looking at a qubit constitutes measurement, which immediately causes collapse of the superposition into one of the measurable states. There’s some real cleverness in how this is done to avoid collapse. But this is just a way to extend coherence times – it’s not good enough to get rid of the need for cooling.

    Summing up
    QC is not even remotely close to a panacea – it’s good for solving problems which benefit from having bits in superposition states – list searching in general and related problems like factorization are good examples, also modeling certain quantum-mechanical problems beyond the reach of conventional computing.

    But there are other encryption methods for which QC methods are not known to improve significantly over existing methods. A recent proof asserts that the very great majority of classical algorithms would not benefit at all (or only marginally) from QC. So even in principle, this will remain a special-purpose (though quite possibly highly valuable) form of computing.

    The highest published successful factorization I have seen is of 56,153 (not on a D-Wave as far as I can tell) – not yet grounds for concern over security hacks. Of course, who knows what the NSA (or even Google) has not published. D-Wave claims a 512-qubit machine which in principle could do much better, though I would have thought universities would be eager to claim bragging rights for higher numbers, and they’re not constrained by secrecy or manufacturability considerations.

    The technology is advancing, still has a way to go on silicon, but cooling requirements will limit this for quite a while to installations that can support the liquid nitrogen and liquid helium infrastructure (perhaps eventually superior methods, but don’t bet on Peltier coolers – they’re not even close). Probably wise not to expect desktop versions in your lifetime…

    You can learn more about Josephson qubits HERE, Quantum dot qubits in silicon HERE, Logic gates in silicon HERE, electron cooling in quantum wells HERE and the highest published QC factorization I could find HERE.

    More articles by Bernard…


    India: Transcending Transportation Transformation

    by Roger C. Lanctot on 02-09-2016 at 12:00 pm

    Dr. Werner Heisenberg’s “uncertainty principle” states that the more precisely the position of some particle is determined, the less precisely its momentum can be known and vice versa. Transportation is the same way. The more we study the current state of transportation the less precisely we understand its momentum.
    Continue reading “India: Transcending Transportation Transformation”


    Internet of Things : Realizing Business Value through Platforms, Data Analytics & Visualization

    by Pranay Prakash on 02-09-2016 at 7:00 am

    Over the past few years of being immersed in the Internet of Things (IoT), I have found that customers have very specific problems they are trying to solve; e.g. gaining energy efficiency, early fault detection or remote diagnosis and maintenance of equipment. Decisions are driven by the need to reduce Operational Expenditure (OPEX) and save on Capital Expenditure (CAPEX).
    Continue reading “Internet of Things : Realizing Business Value through Platforms, Data Analytics & Visualization”


    Low end LTE UE categories seeing more action

    by Don Dingee on 02-08-2016 at 4:00 pm

    Most of our attention goes toward the higher end of the LTE UE categories – ones designed for moving large amounts of multimedia data from smartphones and tablets concurrently with voice traffic. An equally interesting discussion is taking shape at the low end of the LTE UE categories targeting M2M and IoT devices with power-efficient, optimized silicon. Continue reading “Low end LTE UE categories seeing more action”


    Sleep Monitoring and Aiding Devices Insight from Patents

    by Alex G. Lee on 02-08-2016 at 12:00 pm

    US9192326 illustrates a sleep monitoring system that can be embodied within a wearable device or in a mobile device. The system includes an accelerometer to monitor a user’s movements. The system determines when the user is falling asleep into a sleep session based on the user’s movements. The system also identifies the sleep session as a power nap or a longer sleep based on a current time of day, a time since a last longer sleep, and a location of the user. The system notifies the user to change the user’s location when the user falling asleep would have a negative effect on the user.

    When the user falling asleep does not have a negative effect on the user, the system determines a time to wake up the user based on a combination of the current time of day, the time since a last longer sleep, the location of the user, user preferences, and measured information regarding the sleep session.

    US20150182164 illustrates a sleep monitoring system that can be embodied within a wearable ring (e.g., on a digit of a hand and/or a toe of a foot). The system’s biometric sensors comprise a heart rate sensor, a respiration sensor, a temperature sensor, a skin conductance sensor, a skin conductance response sensor, a galvanic skin response (GSR) sensor, an electromyography (EMG) sensor, an electrodermal activity sensor, and an electrodermal response sensor.

    The biometric sensors generate biometric signals, such as an arousal signal indicative of arousal in the sympathetic nervous system. The system analyzes the biometric signals to classify a user’s sleep state into one of several sleep types. The system notifies the user when the sleep state indicates a negative health status (e.g., inflammation, fatigue, stress), and provides coaching that includes advising or offering suggestions to the user for changing behavior or improving some aspect of the user’s wellbeing.

    US20150351693 illustrates a sleep monitoring system that can be embodied within a bed or a mattress. The system includes pressure sensors that are spatially arranged in a predefined planar geometry. The system extracts biophysical variables from the biophysical signals, obtained by the sensors, of a user resting on the bed or mattress. The person’s sleep state is inferred from the biophysical variables.

    Good quality of life is usually built on good sleep quality. Unfortunately, according to research findings, about 11.7% of Americans (roughly 32 million people) suffer from sleeplessness. US20150217082 illustrates a sleep aiding system for easing the hardship of falling asleep. The system monitors the sleeper’s bio-condition to collect bioinformation, which includes heartbeat, body temperature, blood pressure, skin conductivity, and respiration rate. The system determines a falling-asleep hardship index from the received data to indicate how hard falling asleep is for the user, and provides sleep guidance in visual or audio form to adjust the environment toward an optimum sleep environment.

    US20140316192 illustrates a virtual reality system for promoting sleep. The system includes a virtual reality device and a wearable or mobile sensor device. The sensor device communicates wirelessly with the virtual reality device (e.g., eye wear and headphones). The sensor device detects various different physiological signals and determines the physiological parameters. The sensor device determines a stage of the immersive virtual environment based on the values of the physiological parameters. The virtual reality device presents the stages of the immersive virtual environment that are designed to promote sleep by providing a different arrangement of sensory stimuli.

    More articles from Alex…


    Complexity And Security

    by Bernard Murphy on 02-08-2016 at 7:00 am

    From time to time when talking about security, it is useful to look at the big picture – not to further lament the imminent collapse of the sky (we all know the problem is big and we’re really not on top of it), but to discuss what we can do to reduce the scope of the problem. That has to start with a more scientific approach driving first-principle ideas for improvement. My thanks to @ippisi, who pointed me at langsec.org, a fascinating resource for anyone interested in fundamental work on improving software security (and perhaps ultimately hardware security, since hardware is not so different from software).

    The Growth of Complexity
    Here I’ll just concentrate on one aspect – complexity – and I take a lot of insights from the keynote at langsec.org last year, with a few of my own thoughts thrown in. Complexity can be quantified but to avoid getting mathematical, I’ll rely on an intuitive sense. As systems grow in size, complexity inevitably grows also. Even if the system growth is just replication of simpler systems, those systems have to intercommunicate, which leads to more complexity, especially in the Internet of Things (IoT). Then we add still more complexity to manage power consumption, different modes of communication and, paradoxically, security.

    The level of complexity is important because it limits our ability to fully understand system behavior and therefore our ability to protect against attacks. And that points to a real concern: that the complexity of the systems we are building or planning to build is fast out-stripping our ability to fully understand them, much less protect them.

    Consider first just the classical Internet (forget about the IoT). Dan Geer, the langsec 2015 keynote speaker, found in researching an article that we are having increasing problems bounding or discovering what the Internet actually is. It seems many reachable hosts have no DNS entry, complete reachability testing in network trees became impossible a long time ago (the number of paths in a tree grows exponentially with tree size) and what we consider endpoints in end-to-end views of connectivity has anyway become quite unclear in a world of virtual machines and software-defined networking. So the Internet, pre-IoT, has unknown complexity. Building out the IoT, I assume, would compound this problem.

    OK you say, but at least I fully understand the system I designed. Exceptionally clever people could possibly have made this claim when software and hardware were created from scratch. But now design in both domains is dominated by reuse and that leads to dark content. Not dark in the sense of powered-down from time-to-time, but dark in the sense of never used, or you don’t know it’s there or if you do, you don’t know why, or what it does.

    A non-trivial percentage of software may be dark, especially through legacy code but also through third-party code supporting features you don’t use, and also through code that no-one wants to remove because the person who wrote it left long ago and who knows what might break if you pull it out. Projects to understand and refactor this class of code get very low priority in deadline-driven design, so it stays in.

    This problem applies as much to hardware as to software – lots of legacy logic you only partly understand and unknown code in boatloads of third party IP. Dark code amplifies complexity and indications (mentioned in the langsec keynote) are that it is growing fast. Forget about hidden malware – we don’t even know if innocent but untested (for your intended use) darkware harbors possible entry points for evil-doers.

    Then there’s innate or architectural complexity – what you build when you create a significant function and when you put a lot of large functions together. We try to manage complexity through function hierarchies and defensive coding practices, which say that we should code for graceful handling of unexpected inputs and conditions.

    But there are practical and subjectively-judged limits to how far any designer will take this practice. You defend against misbehaviors you think might be possible, and self-evidently not against behaviors you can’t imagine could happen (or you didn’t have time to imagine). And since it would be impractical to defend everywhere, you defend only at selected perimeters and assume within those perimeters that you can rely on expected behavior. But if any of those defenses are breached, all bets are off. These defenses limit complexity in a well-intended but rather ad-hoc (and therefore incomplete) manner.
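    The perimeter-defense idea above can be sketched in a few lines of Python. The packet format and the `parse_packet` function are hypothetical, invented for illustration: the point is that checks are concentrated at the trust boundary, and everything inside assumes well-formed data.

```python
# Toy perimeter defense: validate at the trust boundary, then trust inside it.

def parse_packet(raw: bytes) -> tuple[int, bytes]:
    # Perimeter checks - reject anything outside expected behavior here.
    if len(raw) < 2:
        raise ValueError("truncated packet")
    length = raw[0]
    if length != len(raw) - 2:
        raise ValueError("length field does not match payload")
    msg_type = raw[1]
    if msg_type not in (0x01, 0x02):
        raise ValueError("unknown message type")
    # Inside the perimeter, code relies on length/type without re-checking;
    # if the checks above are ever breached, all bets are off.
    return msg_type, raw[2:]

print(parse_packet(bytes([3, 0x01, 10, 20, 30])))  # (1, b'\n\x14\x1e')
```

    Note the asymmetry: a handful of explicit checks at one boundary, versus the unbounded set of misbehaviors the author could not imagine or did not have time to cover.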

    The Effect of Complexity on Test
    And then there is the issue of how we test these complex systems. For large systems it would be wildly impractical to test at every possible level of the functional hierarchy, so we test (or presume already well-tested) only at those levels for which we believe we understand expected behavior – the total system and some well-defined sub-functions. Our tests at the sub-function level, even with fuzzing or constrained random, probe only a small part of the possible state-space of those functions.

    And at the system level we are limited to testing representative sets of use-cases, perhaps with a little randomization in highly constrained channels. We effectively abandon any hope of fully exploring the complexity of what we have built. Again this is becoming as much of a problem in hardware as it has been for years in software. Throughout systems, complexity is growing faster than our ability to understand and manage defenses against attacks on weak areas in behavior we don’t even know exist, much less understand.
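    A back-of-the-envelope calculation shows how little of the state space testing explores. The operand width and test count below are made up but generous; even so, the covered fraction is vanishingly small.

```python
# How much of a function's input space does a big random campaign cover?
# "State" here is just a pair of 32-bit operands - 64 bits total.
STATE_BITS = 64
tests_run = 10_000_000  # ten million random tests, a generous campaign

coverage = tests_run / 2**STATE_BITS
print(f"fraction of state space covered: {coverage:.3e}")
# roughly 5e-13: about one part in two trillion
```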

    How We Might Manage Complexity
    So what can we do (at a fundamental level)? Formal is a de-facto answer (for both software and hardware) but is very limited since it explodes very quickly on large problems. Bounded proofs of constrained objectives are sometimes possible but only if multiple assumptions are made to limit the search space, which limits its value as a general solution to managing complexity.

    An alternative is to constrain the grammar you use in design. As a sort of reduced version of Gödel/Turing’s reasoning, if you make a grammar’s expressive powers simple enough, you make it easier to use existing (e.g. formal) or comparable proof methods to fully prove properties (e.g. a statement about security) of a logic function in that language. Preliminary efforts in this direction are reported in the langsec proceedings.

    Another more speculative though potentially less disruptive idea (my contribution, based on capabilities in the BugScope software we sold at Atrenta) is to focus on the observed (tested) behavior of function interfaces during normal behavior. You infer de-facto assertions from observed behavior and accumulate unions of these assertions – this integer interface was always less than 14, that interface was never active when this signal was 0, and so on. Then you embed those in the production software/hardware as triggers for out-of-bounds behavior, where the bounds are these observed/tested limits.

    In use, if an assertion triggers, you don’t know that something bad is going to happen, but you do know the software/hardware is being exercised outside the bounds it was tested. This is effectively a tested-behavior fence – not foolproof by any means, but potentially higher coverage than even user-supplied assertions (which tend to be complex, difficult to create and therefore sparse in possible behaviors). In practice it would be necessary to adjust some of these bounds as continued use “proved” the relative safety of some out-of-bounds excursions, so there has to be a learning aspect to the approach.
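    A minimal sketch of this tested-behavior fence, with class and method names that are mine (not BugScope’s API): record the observed range of an interface value during test runs, then flag production values that fall outside what testing ever exercised.

```python
# Infer de-facto bounds from observed test behavior, then use them as
# out-of-bounds triggers in production.

class ObservedBound:
    def __init__(self):
        self.lo = None
        self.hi = None

    def observe(self, value):
        # Called during test runs: widen the bound to cover what was seen.
        if self.lo is None or value < self.lo:
            self.lo = value
        if self.hi is None or value > self.hi:
            self.hi = value

    def check(self, value):
        # Embedded in production: True means "inside tested behavior".
        return self.lo <= value <= self.hi

bound = ObservedBound()
for v in [3, 7, 13]:  # values seen on this interface under test
    bound.observe(v)

print(bound.check(10))  # True  - inside the tested envelope
print(bound.check(14))  # False - not necessarily bad, just never tested
```

    A triggered check does not prove a bug; it only says the system has left the envelope that testing validated, which is exactly the signal the learning step would then refine.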

    In either approach darkware would either prove to be harmless (does not cause a proof to fail or behavior lies inside acceptable bounds) or will reveal itself through unexpected proof failures or unexpected bounds.

    There are plenty of other methods suggested in the langsec proceedings for managing/restricting complexity (for software). I heartily recommend you read Dan Geer’s keynote HERE and branch from there to the 2015 proceedings HERE. The keynote is full of interesting insights and speculations. For anyone with too much time on their hands, I wrote a blog last year about a way to develop a security metric for hardware based on the complexity of the hardware. You can read that HERE.

    More articles by Bernard…