

Autonomous Vehicles Upending Automotive Design Process
by Tom Simon on 12-28-2017 at 12:00 pm

The automotive industry has a history of bringing about disruptive technological advances. One only needs to look at Henry Ford's invention of the assembly line to understand the origins of this phenomenon. Today we stand on the brink of a massive change in how cars operate and, consequently, how they are built. A number of automotive manufacturers have promised autonomous vehicles by 2021. Progress toward this objective will come as a continuous addition of technology over the next three or four years. At the same time, consumer expectations about usability and reliability will be huge market drivers.

The design challenges associated with these changes will be enormous. Cars are already arguably the most complex consumer products made today. With the addition of the expected sensor, communications, processing and powertrain enhancements, overall complexity will increase. Numerous architectural and design decisions will have to be made. Puneet Sinha, Automotive Manager at Mentor, and John Blyler, System Engineer/Program Manager at JB Systems, have completed a white paper entitled “Key Engineering Challenges for Commercially Viable Autonomous Vehicles” that delves into the specifics of what we can expect to see coming down the road.

They break the topic into six categories. Starting with sensors, they explain how the number of sensors in an automobile is expected to grow well beyond the 60 to 100 found in vehicles today. To fully implement 360-degree ‘vision’, a variety of different sensor types is needed. The diagram below provides an overview of which types of sensors are used for specific tasks. The operating environment for each of these sensors is challenging. The goal of reducing sensor size conflicts directly with thermal management requirements. Automobiles also have to operate in extremely cold and hot environments, often in conjunction with other automotive systems, which exacerbates thermal and reliability issues.

The section I found most interesting discussed sensor fusion. Since the advent of the Internet of Things there has been a push to move sensor fusion to the edge. It made a lot of sense to use smaller processors adjacent to, or integrated with, the sensor to convert raw sensor data into more easily digestible data, which also has the benefit of being smaller in size. However, the Mentor paper points out that future automobiles will literally have hundreds of these smaller MCUs distributed throughout the vehicle. This can lead to reliability issues, as well as potentially creating thermal and power issues. They also point out that edge sensor fusion can lead to an exploding BOM.

Mentor’s approach is to use the high-speed data buses on the vehicle to transport raw data and centralize sensor fusion. Mentor has announced a product called DRS360 that enables this centralized fusion approach. They also point out that raw data can be combined in unique ways if it is processed centrally, enabling higher-quality results. The end result is a 360-degree view of the car and its environment. Their experience implementing this has shown that overall power usage is dramatically reduced.

The third area they discuss is the electronic and electrical architecture. The automotive wiring harness is actually the third-heaviest component in a vehicle. It has grown from the original point-to-point wiring of the first cars to multiple communications networks, each with a specific purpose – Controller Area Network (CAN), Local Interconnect Network (LIN) and Automotive Ethernet. Because there is so much interplay between the electronics connectivity and the physical design of the vehicle, significant planning and interdisciplinary design is required. In many ways, solving this design problem looks a lot like the place and route used on SoCs.

The powertrain is another design domain undergoing rapid change due to the advent of electric vehicles, including both hybrid and fully electric drive systems. It turns out that autonomous vehicles will have different design parameters than human-piloted vehicles. One surprising piece of information from the Mentor white paper is that autonomous vehicles will not need to be designed for the so-called 90th-percentile driver, who can be very rough on the drivetrain. There are other implications in this area from fully autonomous driving.

There will also be dramatic changes in vehicle safety and in-cabin experience. The interior of a fully autonomous vehicle will be quite different from today’s vehicles. Occupants will interact even more with navigation and entertainment systems. Passenger displays will also change quite a bit. Safety requirements will pervasively affect every element of the automotive design process.

Last but not least the white paper addresses vehicle connectivity. The moniker applied to this area is V2V communications. Cars will be communicating with the cloud, each other and possibly the road and other infrastructure elements. There is great opportunity to increase safety and situational awareness in the automotive navigational and safety systems.

Mentor has a comprehensive suite of tools that address these design areas and challenges. The white paper does an excellent job of detailing each of the design areas and also lays out the relevant tools that can be used to deal with the system level integration problems in those areas. I recommend a thorough read of the white paper to fully understand the design challenges and to learn which Mentor tools can address them. Mentor has a long history of working at the system level, and the coming changes in the automotive space are creating a lot of opportunity for them to become a major player.



A Picture is worth a 1,000 words
by Daniel Payne on 12-28-2017 at 7:00 am

Semiconductor IP re-use is a huge part of the productivity gains in SoC design, so instead of starting from a clean slate most chip engineers are re-using cells, blocks, modules and even sub-systems from previous designs in order to meet their schedule and stay competitive in the marketplace. But what happens when you intend to re-use some IP with the notion of adding some new features to it? How in the world do you learn about a previous IP block if you weren’t the person responsible for creating it in the first place? If someone hands you 10,000 lines of VHDL or SystemVerilog code, how would you go about learning how it was created in order to modify it? Sure, you could read the documentation, or start looking at the source code to glean some insight. Is there a better way? Yes: read in your HDL code and automatically create a graphical representation of it using block diagrams and state machines.

Sigasi is an EDA company with a tool that does just that. However, on first use of its BlockDiagram view you may just see a bunch of cell instances connected with wires, which can be too messy to infer much information from:

If you could color instances and use some buses, then the block diagram would be more legible and its operation a whole lot easier to understand:

The Sigasi Studio tool also automatically finds state machines in your HDL code and creates a StateMachine view, which is how most IC designers think of their logic to start out with:

Here’s an updated view of the same state machine, this time with some coloring and re-grouping to emphasize state transitions:

So how do you go from the default diagrams created by Sigasi Studio to the ones with colors and groupings?

You use something called a Graphic Configuration file. Some of the benefits of using a text file for Graphic Configuration are:

  • Easier to manage with your favorite version control system, allowing easy compares and merges.
  • Debug is straightforward, saving you time.
  • Sigasi Studio features like auto-complete, validations and formatting are all built-in to the tool.

With a Graphic Configuration file you can do five important tasks:

  • Group states, instances or wires together.
  • Hide states or blocks.
  • Collapse blocks.
  • Coloring, as shown in the two examples above.
  • Regex matching.

Some Examples
Let’s say that you want to color a specific block green and hide its internals. The Graphic Configuration file syntax is:

block my_block { color green collapse }


    That was pretty simple and compact to write.

Continuing that first example a bit, now we want to access a block within a block, calling for a nested configuration like this:
block my_block {
    color green
    block block_within_name {
        color red
    }
}

Changing how your state machine looks works just like the block diagram configuration we just covered, except that the first keyword is “state” instead of “block”.

    Summary
A picture really is worth a thousand words, and with the automated visualization feature in Sigasi Studio you now have a lot more control over how that picture looks by using a Graphic Configuration file, making your HDL code much easier to understand than by staring at text alone. To read more about this topic there’s a blog article written by Titouan Vervack at the Sigasi site.



    Lipstick on the Digital Pig
    by Bill Montgomery on 12-27-2017 at 12:00 pm

I have a lot of friends in the real estate industry, and two of the most common sales tactics are to create “curb appeal,” and to “stage” the interior of the residence being sold. Curb appeal, of course, refers to making the home look as appealing as possible upon first impression. Update the landscaping. Add flowers. Make sure the lawn is well maintained. Maybe add a coat of fresh paint. You get the idea. And on the inside, remove everything that made the house a home, and bring in a professional interior designer to “stage” the place by painting, enhancing lighting, bringing in rental furniture, etc. – essentially transitioning the abode into a “model home” that is highly pleasing to the eye.

    If the house is in need of better wiring, updated plumbing, new home heating/cooling – Hell, if the foundation is crumbling, it doesn’t matter. The goal is to make the prospective buyer feel good about the property and envision living in this “beautiful” residence.

Putting “lipstick on the real estate pig” works. Professionals will tell you that a home with curb appeal and good staging will sell faster, and for more money, than a comparable house whose agent or seller does not adopt these tactics.

We have a similar situation occurring in the world of cybersecurity, particularly in the emerging world of IoT. We have a “Digital Pig” that is part of our everyday connected existence, and layers of brightly colored lipstick are being slapped on this porker every single day. I’m referring to PKI – a system for the creation, storage and distribution of digital certificates, which are used to verify that a particular public key belongs to a certain entity. To be clear, it’s the elaborate PKI needed to support the certificates that’s the problem. And the sad reality is that if certificate issuance and key exchange are involved, the cybersecurity solution is doomed from the start.

    We’re talking PIG KI.

    Why is that?

It’s because the entire certificate industry has been so badly compromised by fake, flawed, highly vulnerable, self-signed and un-revoked certificates that it is beyond repair. It’s not as if issuing new certificates, adding greater capabilities in certificate management or improving cyber intrusion detection can eliminate the problem. The Digital Pig is inside the connected-world barn, and closing the doors after it’s already pervasively entrenched in our cyber spaces just won’t work. And this isn’t just my opinion. According to a September 2016 article posted by the EU Agency for Network and Information Security (ENISA), “Certificate Authorities are the weak link of Internet Security.”

Symantec knows it. It finally gave up on the certificate issuance game, selling its website security business to DigiCert after Google and Mozilla began a process of distrusting its TLS certificates.

But how bad can things be, really? The situation is well beyond bad – it’s horrifying. According to Netcraft, a whopping 95% of HTTPS servers are vulnerable to man-in-the-middle attacks. How is that possible? Well, for sure, human error and technical incompetence are part of the problem. But that’s never going away. The real problem is the reliance on certificates. Flawed, broken, faked certificates.

    Certificates: Impossible to Kill
    Of the certificates already in use, the Ponemon Institute reports that 54% of organizations do not know how many certificates are in use within their infrastructures, where they are located or how they are used – and they have no idea how many of these unknown assets are self-signed (open source) or signed by a Certificate Authority.

Netcraft’s Mutton writes, “killing bad certs is difficult… it is not unusual to see browser vendors making whole new releases in order to ensure that the compromised – or fraudulent – certs are no longer trusted… it could remain trusted… for months or years.”

I’ll argue that certificates, being the root of the problem, have to be eliminated. The recent discovery of the ROCA and ROBOT attacks highlights the serious vulnerability of existing implementations of the RSA algorithm, and suggests that RSA itself ought to be deprecated.

    What’s the solution? Kill the PIG! Start with the premise that cybercriminals can’t get in the middle of communication protocols that don’t exist.
    #Certificate-less



    Using Sequential Testing to Shorten Monte Carlo Simulations
    by Tom Simon on 12-27-2017 at 7:00 am

When working on an analog design, after the initial design specs have been met, it is useful to determine whether the design meets spec out to 3 or 4 sigma of process variation. This can serve as a useful checkpoint before going any further. It might not be a coincidence that foundries base their Cpk on 3 sigma. To refresh, Cpk is the distance from the mean to the nearer of the upper or lower process specification limits, divided by the 3-sigma deviation – so a Cpk of 1 works out to meeting process specs at 3 sigma. Higher Cpk values point toward meeting spec out to a higher sigma, providing better yields. Still, running Monte Carlo analysis on a design across process variations to validate proper performance out to 3 or 4 sigma can be a daunting task.
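As a quick illustration, here is a minimal sketch of that Cpk calculation in Python. The specification limits and measurement values below are invented for illustration; they are not from the article or any foundry data.

import statistics

def cpk(samples, lsl, usl):
    # Cpk = distance from the mean to the nearer spec limit, over 3 sigma
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)        # sample standard deviation
    return min(usl - mean, mean - lsl) / (3 * sigma)

# A parameter centered at 1.0 with sigma of about 0.02 and limits of +/-0.06
# gives a Cpk of about 1.0, i.e. the spec is met out to roughly 3 sigma.
measurements = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00]
print(cpk(measurements, lsl=0.94, usl=1.06))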

At the MunEDA user group meeting in Munich in November, I had the opportunity to hear a presentation on an interesting technique that can potentially reduce the number of Monte Carlo runs necessary to reject or qualify a design during variation analysis. The technique is called sequential testing. The short version is that it uses the results from a smaller number of samples to determine the likelihood of the final result being above or below thresholds for acceptance or rejection. Let’s break this down a bit.

If you have a jar of 100 randomly mixed black and white marbles and you draw a small number, you will start to get an idea of the composition of the entire jar. Of course, there will be some uncertainty, but if you are willing to accept a range as your answer, you can get a pretty good idea of the percentage of black or white marbles with just a few samples. In essence, we are talking about using a smaller number of samples to get a probability that we meet spec at a specific sigma.

This process works better when the design in question is further away, either better or worse, from the target sigma. Any way you look at it, when you can have confidence that a design is either failing or beating its target sigma, you can save a lot of time running Monte Carlo simulations. So, as you might gather, the key is selecting the right levels for acceptance and rejection. These are known as the acceptance quality limit (AQL) and the rejection quality limit (RQL), respectively. Given that we are chip designers and not statisticians, it’s nice that MunEDA offers some help here: the Dynamic Sampling option in their Monte Carlo simulator helps set the percentages for AQL and RQL automatically.

So how does this translate into time savings on Monte Carlo analysis? Their presentation contained some examples of applying this feature in their tool. If we look at a circuit that has a 3-sigma requirement and run a full Monte Carlo, we expect 5,000 runs. However, if the circuit we are analyzing only has a sigma-to-spec robustness of 2.5 sigma, we can expect to learn this after only 192 simulations when we use the sequential testing feature, an impressive 26x speed-up. Though we won’t be happy to learn the design fails, at least significant Monte Carlo simulation time is saved.

The same effect is observed if the circuit exceeds the target sigma by a margin. If the circuit yields out to 3.5 sigma, this can be predicted with only 318 runs, still far fewer than 5,000. To use the interface, users specify the desired yield and then choose dynamic as the sampling method. Simulations are run until one of the specs is rejected, or until all specs are accepted.
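To make the stopping rule concrete, below is a minimal sketch of sequential pass/fail testing in the spirit of Wald’s sequential probability ratio test. This is not MunEDA’s algorithm; the failure-rate limits, confidence settings and the simulated “circuit” are all invented for illustration.

import math, random

def sequential_test(run_once, p_accept, p_reject, alpha=0.05, beta=0.05, max_runs=5000):
    # Accept if the failure rate looks <= p_accept (AQL-like threshold),
    # reject if it looks >= p_reject (RQL-like threshold), else keep sampling.
    reject_bound = math.log((1 - beta) / alpha)
    accept_bound = math.log(beta / (1 - alpha))
    llr = 0.0                                  # cumulative log-likelihood ratio
    for n in range(1, max_runs + 1):
        failed = run_once()                    # one Monte Carlo sample: True if spec fails
        if failed:
            llr += math.log(p_reject / p_accept)
        else:
            llr += math.log((1 - p_reject) / (1 - p_accept))
        if llr >= reject_bound:
            return "reject", n
        if llr <= accept_bound:
            return "accept", n
    return "undecided", max_runs

# Hypothetical circuit with a true failure rate near 0.6% (roughly 2.5 sigma),
# tested against a 3-sigma acceptance limit (0.135% failures) and a looser reject limit.
random.seed(1)
print(sequential_test(lambda: random.random() < 0.006, p_accept=0.00135, p_reject=0.006))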

MunEDA offers the sequential testing option in both their WiCkeD Monte Carlo and their BigMC tools. In the WiCkeD tool they offer pass/fail and sigma-to-spec sampling. In BigMC they offer sigma-to-spec sequential testing. Both help with automatic determination of RQL and AQL. BigMC is particularly interesting because it can handle very large netlists, around 100MB or 500k devices. Overall, MunEDA’s prowess in statistical analysis shows through quite clearly. During the user group meeting in Munich there were many papers presented on diverse topics – from flip-flop optimization to using their worst-case analysis to model a MEMS design. For more information on this and the other topics, I suggest looking at their website.



    Neural Networks Leverage New Technology and Mimic Ancient Biological Systems
    by Tom Simon on 12-26-2017 at 12:00 pm

Neural networks make it possible to use machine learning for a wide variety of tasks, removing the need to write new code for each new task. They allow computers to use experiential learning instead of explicit programming to make decisions. The basic concepts behind neural networks were first proposed in the 1940s, but sufficient technology to implement them did not become available until decades later. We are now living in an era where they are being applied to a multitude of products, the most notable being autonomous vehicles.

In a presentation from ArterisIP, written by CTO Ty Garibay and Kurt Shuler, they assert that the three key ingredients making machine learning feasible today are big data, powerful hardware, and a plethora of new NN algorithms. Big data provides vast amounts of training data, which the neural networks use to create the weights, or coefficients, for the task at hand. Powerful new hardware makes it possible to perform processing that is optimized for the algorithms used in machine learning; this hardware includes classic CPUs, as well as GPUs, DSPs, specialized math units, and dedicated special-purpose logic. The final ingredient is the set of algorithms used to assemble the NN itself.

The original inspiration for all machine learning is the human brain, which uses large numbers of computing elements (neurons) connected in exceedingly elaborate and ever-changing ways. Looking at the human brain, it is clear that much more real estate is dedicated to the interconnection of processing elements than to the processing elements themselves. See the amazing image below comparing the regions of gray and white matter.
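As a generic illustration of the “weights learned from training data” idea (unrelated to any ArterisIP product), here is a minimal single-neuron sketch trained by gradient descent on a toy dataset:

import math, random

# Toy training data: learn the logical AND function from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]   # the learned weights
b = 0.0                                              # bias term
lr = 0.5                                             # learning rate

def neuron(x1, x2):
    return 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

for epoch in range(5000):
    for (x1, x2), target in data:
        y = neuron(x1, x2)
        grad = (y - target) * y * (1 - y)            # gradient of squared error
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

# After training, the weights encode the task: outputs move toward 0, 0, 0, 1.
print([round(neuron(x1, x2), 2) for (x1, x2), _ in data])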


The ArterisIP presentation offers a dramatic chart showing prominent NN classes as of 2016. Again, we see that the data flow and interconnection between the fundamental processing units, or functions, are the defining characteristics of neural networks. In machine learning systems, performance is bounded by architecture and implementation. On the implementation side we see, as noted above, that hardware accelerators are frequently used. The implementation of the cache and system memory also has a profound effect on performance.


SoCs for machine learning are being built with IP that supports cache coherency, alongside a large number of targeted function accelerators that have no notion of memory coherency. To optimize these SoCs for the highest performance, it helps to add a configurable coherent caching scheme that allows these blocks to communicate efficiently on-chip. ArterisIP, as part of their interconnect IP solution, offers a proxy cache capability that can be custom-configured and added to support non-cache-coherent IP.


ArterisIP points out in their presentation that data integrity protection is also needed in many of the applications where NNs are being used. For instance, in automotive systems, the ISO 26262 standard calls for rigorous attention to ensuring reliable data transfer. ArterisIP addresses this requirement with configurable ECC and parity protection for critical sections of an SoC. Their IP can also duplicate hardware where needed in the interconnect, to dramatically reduce the likelihood of a device failure. ArterisIP has extensive experience providing interconnect IP to the leading innovators developing neural network SoCs. The company recently announced nine customers that are designing machine learning and AI SoCs, targeting application areas that include data centers, automotive, consumer and mobile. Neural networks will continue to become increasingly important for computing systems. As the need to write application-specific code diminishes, the design of the neural network itself will become the new key design challenge, covering both the NN software and the specific hardware implementation of the system. ArterisIP’s interconnect IP can address many of the design issues that arise in the development of these SoCs.

    I found their presentation, which is available on their website, to be very informative, and it provides a unique perspective on the topics relating to efficient and reliable NN systems.



    HLS Rising
    by Bernard Murphy on 12-26-2017 at 7:00 am

No one could accuse Badru Agarwala, GM of the Mentor/Siemens Calypto Division, of being tentative about high-level synthesis (HLS). Then again, he and a few others around the industry have been selling this story for quite a while, apparently to a small and not always attentive audience. But times seem to be changing. I’ve written elsewhere about the expanding use of HLS. Now Badru has written a white paper which gets very aggressive – you’d better get on board with HLS if you want to remain competitive.


Now if that were just Badru, I might put this passion down to his entirely understandable belief that his baby (Catapult) is beautiful. But when some very heavy-hitting design teams agree, I have to sit up and pay attention. Badru cites public quotes from Qualcomm, Google and NVIDIA, backed up by detailed white papers, to make his case. Naturally these applications center around video, camera, display and related applications. But for those who haven’t been paying attention, these areas represent a sizeable percentage of what’s hot in chip design today. The references back that up, as do applications in the cloud, gaming, and real-time recognition in ADAS.

NVIDIA was able to cut the schedule of a JPEG encoder/decoder by 5 months while also upgrading, within 2 months, two 8-bit video decoders to 4K 10-bit color for Netflix and YouTube applications. In their view, these objectives would never have made it to design in an RTL-based flow. An important aspect of getting to the new architecture with high QoR was the ability to quickly and incrementally refine between a high-level functional C model and the synthesizable C++ model, and to run thousands of tests to ensure compatibility between these models – something that simply would not have been possible if it had to run on RTL. In an integration with PowerPro, they were also able to cut power by 40%. NVIDIA say that they are no longer HLS skeptics – they plan to use this Catapult-based flow on future video and imaging designs, whether new or re-targeted.

    Google are big fans of open-sourcing development, as much in hardware as in software. They are supporting a project called WebM for the distribution of compressed media content across the web and want to provide a royalty-free hardware decoder IP for that content, called the VP9 G2 decoder. Again, this must handle up to 4K resolution playback, on smart TVs, tablets, mobile devices and PCs/laptops. In their view, C++ is a more flexible starting point than RTL for an open-source distribution, allowing users to easily target different technologies and performance points. To give you a sense of how big a deal this is, VP9 support is already available in more than 1 billion endpoints (Chrome, Android, FFmpeg and Firefox). So, hardware acceleration for the standard has a big customer base to target. Starting an open-source design in C++ rather than RTL is likely, as they say, to move the needle. Did I mention that Google uses Catapult as their HLS platform?

    Beyond the value of C++ being a more flexible starting point for open-source design, Google observed a number of other design advantages:

    • Total C++ code for the design is about 69k lines. They estimate an RTL-based approach would have required ~300k lines. No matter how you slice it, creation, verification and debug time/effort scales with lines of code. You get to a clean design faster on a smaller line-count.
    • Simulation (in C++) runs about 50X faster than in RTL. They could create, run verification and fix bugs in tight loops all day long rather than spinning their wheels waiting for RTL simulation runs to complete.
    • Using C++ they could use widely available tools and flows to collaborate, share enhancements to the same file and merge when appropriate. You can do this kind of thing in RTL too, but in a system-centric environment there must be a natural pull to using tools and ecosystems used by millions of developers rather than thousands of developers.
• An HLS run on a block (there are 14 of them in the design) took about an hour, which allowed them to quickly explore different architectures through C++ changes or synthesis options. They believe this got them to the final code they wanted in 6 months rather than a year.

Qualcomm has apparently been using HLS and high-level verification (HLV, based on C++/SystemC testbenches) for several years, on a wide range of video and image processing IP, some of which you will find in Snapdragon devices. They apparently started with HLS in the early 2000s, partnering with Calypto, the company that created Catapult and is now part of Mentor/Siemens. A big part of the attraction has been the fast turns that are possible in verifying an architecture-level model; they say they are seeing an even more impressive 100-500X performance improvement over RTL. Qualcomm also emphasizes that they do the bulk of their verification (for these blocks) in the C domain. By the time they get to HLS, they already have a very stable design. Verification at the C level is dramatically faster and proceeds in parallel with design, unlike traditional RTL flows where, no matter how much you shift left, verification always trails design. Verification speed means they can also get to very high coverage much faster. And they can reuse all of that verification infrastructure on the synthesized RTL produced by HLS. The only tweaking they generally have to do on the RTL is at the interface level.


I know we all love our RTL; we have lots of infrastructure and training built up around that standard, and we unconsciously believe that RTL must be axiomatic for all hardware design for the rest of time. But we’re starting to see shifts, in important leading applications and in important leading companies. And when that kind of shift happens, doggedly refusing to change because we’ve always used RTL, or because “everyone knows” that C++-based design is never going anywhere, may not be healthy. You might want to check out the white paper HERE.



    China is right: The world doesn’t need Silicon Valley
    by Vivek Wadhwa on 12-25-2017 at 7:00 am

    Ever since the Chinese Government banned Facebook in 2009, Mark Zuckerberg has been making annual trips there attempting to persuade its leaders to let his company back in. He learned Mandarin and jogged through the smog-filled streets of Beijing to show how much he loved the country. Facebook even created new tools to allow China to do something that goes against Facebook’s founding principles — censor content.

But the Chinese haven’t obliged. They saw no advantage in letting a foreign company dominate their technology industry. China also blocked Google, Twitter, and Netflix and raised enough obstacles to force Uber out. Chinese technology companies are now among the most valuable – and innovative – in the world. Facebook’s Chinese competitor, Tencent, eclipsed it in market capitalization in November, crossing the $500 billion mark. Tencent’s social-media platform, WeChat, enables bill payment, taxi ordering, and hotel booking while chatting with friends; it is so far ahead in innovation that Facebook may be copying its features. Other Chinese companies, such as Alibaba, Baidu, and DJI, are racing ahead in e-commerce, logistics, artificial intelligence, self-driving cars, and drone technologies. These companies are gearing up to challenge Silicon Valley itself.

    The protectionism that economists have long decried, which favors domestic supplies of physical goods and services, limits competition and thereby the incentive to innovate and evolve. It creates monopolies, raises costs, and stifles a country’s competitiveness and productivity. But this is not a problem in the Internet world.

Over the Internet, knowledge and ideas spread instantaneously. Entrepreneurs in one country can easily learn about the innovations and business models of another country and duplicate them. Technologies are advancing on exponential curves and becoming faster and cheaper – so every country can afford them. Any technology company in any country that does not innovate risks going out of business, because local startups are constantly emerging with the ability to challenge it.

    Chinese technology protectionism created a fertile ground for local startups by eliminating the fear of foreign predators. And there was plenty of competition — coming from within China.

    Silicon Valley’s moguls openly tout the need to build monopolies and gain unfair competitive advantage by dumping capital. They take pride in their position in an economy in which money is the ultimate weapon and winners take all. If tech companies cannot copy a technology, they buy the competitor.

    Amazon, for example, has been losing money or earning razor-thin margins for more than two decades. But because it was gaining market share and killing off its brick-and-mortar competition, investors rewarded it with a high stock price. With this inflated capitalization, Amazon raised money at below market interest rates and used it to increase its market share. Uber has used the same strategy to raise billions of dollars to put potential global competitors out of business. It has been unscrupulous and unethical in its business practices.

    Though this may sound strange, copying is good for innovation. This is how Chinese technology companies got started: by adapting Silicon Valley’s technologies for Chinese use and improving on them. It’s how Silicon Valley works too.

    Steve Jobs built the Macintosh by copying the windowing interface from the Palo Alto Research Center. As he admitted in 1994, “Picasso had a saying, ‘Good artists copy, great artists steal’; and we have always been shameless about stealing great ideas.”

    Apple usually lags in innovations so that it can learn from the successes of others. Indeed, almost every Apple product has elements that are copied. The iPod, for example, was invented by British inventor Kane Kramer; iTunes was built on a technology purchased from Soundjam; and the iPhone frequently copies Samsung’s mobile technologies — while Samsung copies Apple’s.

Facebook’s origins also hark back to the ideas that Zuckerberg copied from MySpace and Friendster. And nothing has changed since: Facebook Places is a replica of Foursquare; Messenger video duplicates Skype; Facebook Stories is a clone of Snapchat; and Facebook Live combines the best features of Meerkat and Periscope. Facebook tried mimicking WhatsApp but couldn’t gain market share, so it spent a fortune to buy the company (again acting on the Silicon Valley mantra that if stealing doesn’t work, then buy).

    China opened its doors at first to let Silicon Valley companies bring in their ideas to train its entrepreneurs. And then it abruptly locked those companies out so that local business could thrive. It realized that Silicon Valley had such a monetary advantage that local entrepreneurs could never compete.

    America doesn’t realize how much things have changed and how rapidly it is losing its competitive edge. With the Trump administration’s constant anti-immigrant rants, foreign-born people are getting a clear message: Go home; we don’t want you. This is a gift to the rest of the world’s nations, because the immigrant exodus is boosting their innovation capabilities. And America’s rising protectionist sentiments provide encouragement to other nations to raise their own walls.

    Here is an India-focused version of this article in Hindustan Times.

For more, visit my website: www.wadhwa.com and read my book, The Driver in the Driverless Car: How Our Technology Choices Will Create the Future – a 2017 Financial Times & McKinsey Business Book of the Year and a Nature magazine “Best Science Pick.”



    2017 Semiconductors +20%, 2018 slower
    by Bill Jewell on 12-24-2017 at 7:00 am

The global semiconductor market in 2017 will finish with annual growth of about 20%. Recent forecasts range from 19.6% to 22%. World Semiconductor Trade Statistics (WSTS) data is finalized through October, so the final year results will almost certainly be within this range. We at Semiconductor Intelligence have raised our forecast to 21% from 18.5% in September. 2017 will show the highest annual growth since 32% in 2010. Memory, specifically DRAM and NAND flash, is the major market driver. WSTS projects the memory market will grow 60% in 2017, while the semiconductor market excluding memory will increase 9%. Memory was 23% of the semiconductor market in 2016 but accounts for two-thirds of the $70 billion change in the 2017 semiconductor market versus 2016.
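A quick back-of-the-envelope check shows those figures hang together. The one assumption below not stated in the article is the 2016 WSTS market total of roughly $339 billion:

# Rough consistency check of the growth figures quoted above.
# The ~$339B 2016 market total is an assumption (WSTS), not from the article.
market_2016 = 339.0                               # $B, approximate 2016 total
memory_2016 = market_2016 * 0.23                  # memory was 23% of the 2016 market
memory_growth = memory_2016 * 0.60                # memory projected up 60% in 2017
other_growth = (market_2016 - memory_2016) * 0.09 # ex-memory market up 9%
total_growth = memory_growth + other_growth
print(f"memory adds ~${memory_growth:.0f}B, ex-memory ~${other_growth:.0f}B")
print(f"total ~${total_growth:.0f}B (~{total_growth / market_2016:.0%} growth), "
      f"memory is ~{memory_growth / total_growth:.0%} of the increase")

Running this gives roughly $47B from memory and $23B from the rest of the market, about $70B total and a bit over 20% growth, with memory supplying about two-thirds of the increase, consistent with the figures above.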

    The announced forecasts for 2018 range from Mike Cowan’s 3.2% to Future Horizons’ 15.6%. We at Semiconductor Intelligence raised our 2018 projection to 12% from 10% in September. The assumptions behind our forecast are:

    · Steady or improving demand for key electronic equipment
    The market for PCs and tablets has been weak, with declines in 2016 and 2017. Gartner expects a slight improvement in 2018 to a roughly flat market at 0.2% change. IDC believes smartphone unit growth will accelerate from 1.4% in 2017 to 3.7% in 2018. IC Insights projects ICs for automotive and internet of things (IoT) applications will be key market drivers over the next few years. These two categories should each show robust increases of 16% in 2018.

Annual Growth               2017      2018      Source
PC & Tablet units          -3.2%      0.2%      Gartner, Oct. 2017
Smartphone units            1.4%      3.7%      IDC, Nov. 2017
Automotive IC $              22%       16%      IC Insights, Dec. 2017
Internet of Things IC $      14%       16%      IC Insights, Dec. 2017

· Slight improvement in global economic growth
    The International Monetary Fund (IMF) October 2017 economic outlook called for global GDP to rise 3.6% in 2017, an acceleration of 0.4 percentage points from 3.2% in 2016. 2018 is expected to show a slight acceleration to 3.7% growth. Advanced economies are projected to decelerate from 2.2% GDP change in 2017 to 2.0% in 2018. Among the advanced economies, an acceleration in U.S. GDP growth is more than offset by slower increases in the Euro area, United Kingdom, and Japan. The acceleration in global GDP in 2018 will be driven by emerging and developing economies, moving from a 4.6% change in 2017 to 4.9% in 2018. A deceleration in China from 6.8% in 2017 to 6.5% in 2018 is more than offset by accelerating change in India, steady growth in the ASEAN-5 (Indonesia, Malaysia, Philippines, Thailand and Vietnam), and a continuing recovery in Latin America.

    · Moderating, but continuing strong memory demand
In the last 25 years, there have been four cycles where the memory market has shown at least one year of growth over 40%. These cycles usually ended with a major decline in the memory market, ranging from 13% to 49%. The exception was 2004-2005, when the memory market went from 45% growth to a modest 3% change. The upside of the memory cycles generally lasted two to four years. The exception to this was 2010, when a 55% memory increase was preceded by a 3% decline in 2009 and followed by a 13% decline in 2011. Based on this history, we will most likely see one more year of solid memory growth in 2018 before a decline in 2019.

    · Strong quarterly growth set in 2017 drives healthy 2018
The 2017 semiconductor market has exhibited robust gains in each quarter versus a year ago, starting at 18% in 1Q 2017 and peaking at 24% in 2Q 2017. 3Q 2017 grew 22% from a year ago and 10.2% versus the prior quarter. 3Q 2017 was only the second double-digit quarter-to-quarter increase in the last eight years (after 11.6% in 3Q 2016). This strong quarterly pattern in 2017 will carry over into healthy 2018 annual growth even with only modest quarter-to-quarter changes in each quarter of 2018. The quarterly forecast below supports our 12% annual target for 2018.

    We have not yet finalized our forecast for 2019. The most probable scenario is a low single-digit increase or a slight decrease as memory demand eases. After the 2019 correction, the semiconductor market should recover to moderate growth in 2020.



    IEDM 2017 – Intel Versus GLOBALFOUNDRIES at the Leading Edge
    by Scotten Jones on 12-22-2017 at 9:00 am

    As I have discussed in previous blogs, IEDM is one of the premier conferences to learn about the latest developments in semiconductor technology.

    Continue reading “IEDM 2017 – Intel Versus GLOBALFOUNDRIES at the Leading Edge”


    "The Year of the eFPGA" 2017 Recap

    "The Year of the eFPGA" 2017 Recap
    by Tom Dillinger on 12-22-2017 at 7:00 am

    This past January, I had postulated that 2017 would be the “Year of the Embedded FPGA”, as a compelling IP offering for many SoC designs (link). As the year draws to a close, I thought it would be interesting to see how that prediction turned out.

Appropriate metrics for judging that would include: increasing capital investment; increasing customer adoption; support for a diverse set of applications; and an emerging set of standard product offerings to accelerate adoption. To be sure, qualified test vehicles fabricated on multiple foundry process nodes are also crucial, as is a solid methodology flow for design synthesis and physical personalization.

If you have been following eFPGA technology, you have no doubt seen recent press releases highlighting the growing investment and the customer endorsements. In addition, previous SemiWiki articles have described how eFPGA features address both high-performance and low-power requirements, as well as the ease with which the IP block is connected to the pervasive AMBA bus protocols (link, link). So far, the prediction is looking pretty good. 🙂

The last metric – the introduction of standard product offerings – has perhaps received less attention. To gain a better understanding of the eFPGA product strategy, I recently met up with Aparna Ranachandran, Tony Kozaczuk, and Cheng Wang at Flex Logix. I asked how their technology offerings are evolving as customer interest grows.

    Cheng indicated, “A key requirement is to address the applications where programmable eFPGA functionality also incorporates significant memory storage. Many customers are seeking a product that optimally integrates SRAM within the eFPGA logic tiles. They do not intend to invest a lot of resource in physical implementation – i.e., designing and floorplanning SRAM blocks adjacent to the eFPGA IP. These customers want a flow from their HDL description through synthesis to an off-the-shelf eFPGA product with programmable logic and memory.”

“To that end, we will soon be releasing an integrated design for silicon qualification as a standard product,” Aparna highlighted.

Tony added, “With lots of customer input, we have selected a combination of programmable logic capacity and array storage that will span a wide range of upcoming customer designs. We are leveraging the existing HDL synthesis flow support that provides block RAMs in the output netlist, inferring the array topology from the HDL model. Our EFLX compiler maps each BRAM in the synthesis netlist to a corresponding configuration of SRAM macros integrated in the eFPGA IP.”

The use of block RAMs is the standard representation for synthesizing and implementing arrays in commercial FPGA products, so this flow is a natural extension for eFPGA IP. The initial Flex Logix programmable logic + array offering is illustrated below.

    Aparna is the lead designer, and provided a description of some of the technical features:

    • eFPGA array macros are based on qualified TSMC bit cells. (The initial process node will be 28nm.)
    • MBIST test controller design logic is provided.
    • The array macros are optimally configured between tiles – specific attention is given to the I/O connections from the tiles to the arrays, without adversely impacting the logic signal routing capacity between tiles.
    • The EFLX placement algorithm will automatically assign the BRAM netlist instances to the integrated SRAM macros, leveraging timing-driven optimization calculations. (Unused array macros are tied to inactive levels.)

    The overall flow for realizing the eFPGA logic + memory design is illustrated in the figure below.

The initial front-end EFLX analysis step provides customers with resource estimates for both the programmable logic LUT usage and the array macro utilization. The subsequent steps complete the physical personalization, including the array macro connectivity.

“Our customers are seeking silicon-proven IP products – this offering will expand the application base to designs requiring integrated storage,” Cheng said. (For specific customers who are interested in a unique integrated configuration, the Flex Logix team would assist them with preparation of the flow input descriptions shown as “optional” in the figure above, as well as the IP physical implementation.)

So, it looks like the eFPGA market is indeed expanding to offer customers products that will accelerate adoption, combining complex logic and storage requirements with a well-defined implementation flow. This past year has indeed been the “year of the eFPGA” – it will be interesting to see what 2018 brings.

    For more information on the Flex Logix logic + array offering, please follow this link.

    Have a Happy Holiday season!

    -chipguy