Short History of the Fourth Industrial Revolution
by Bill McCabe on 10-31-2016 at 12:00 pm

In 2016, many companies use Industry 4.0 as a buzzword. That does not mean industry has been revolutionized overnight. On the contrary, Industry 4.0 is an extension of what already exists, with the modern movement originating in Germany around 2010.

While the first public reference to Industry 4.0 would not occur until 2011, the German Federal Ministry of Education and Research had already begun to explore the trends taking place in high technology, looking for developments that could help improve the world and advance manufacturing. The goal was to give those seeking future employment in the industrial sector a simpler work experience while allowing more to be done in a fraction of the time.

By 2012, the Germans had collected a great deal of research, and they used this information to hold the first public presentation on the subject. The presentation centered on the smart factory, showcasing some of its potential so that prospective customers and industry professionals could gain a deeper understanding of what was possible: machines that could sense and react to real-life situations, boosting effectiveness across the plant. The German government was thrilled with the results and increased research funding, hoping it would advance the country and make it a front runner in the fourth industrial revolution.

As the research matured, it became clear that the internet was far more powerful than originally believed, and relaying manufacturing information over the internet helped to further propel the internet of things, which was already gaining significant prominence in other countries at this time. Funding from Germany's manufacturing industry was now at a new high and the process was solidifying. It was at this time that the Plattform Industrie 4.0 initiative was introduced. But it was still a long way from where we find Industry 4.0 today.

In 2014, companies outside of Germany began to step in. Neighboring countries contributed more virtualization and input, so that effective work solutions could be created. Decentralization became a key component of the process, ensuring that digital manufacturing would benefit most from the new approach. This is the point where the internet of things became perfectly aligned with the industrial revolution, and a harmonious union was formed.

Further evolution has followed from the research and development that has taken place during the fourth industrial revolution, including advanced medical technology, cost-saving mechanisms for production plants, and much more. This is an exciting time to be alive and witness the incredible changes taking place.

This is the 1st in a Series – be on the lookout for additional articles on this topic.

For more information about us check out www.internetofthingsrecruiting.com

Also read: Manufacturing Singularity is Coming!


CEO Interview: Taher Madraswala of Open-Silicon
by Daniel Nenni on 10-31-2016 at 7:00 am

Taher Madraswala started his career at Intel designing microprocessors and later overseeing ASIC development before joining Open-Silicon at its inception. During his 25-year semiconductor career, Taher has experienced more than 300 tapeouts across a wide variety of applications.

Today Open-Silicon applies an open business model that enables the company to uniquely choose best-in-industry IP, design methodologies, tools, software, packaging, manufacturing, and test capabilities. The company has partnered with over 150 companies ranging from large semiconductor and systems manufacturers to high-profile start-ups, and has shipped over 120 million ASICs to date.

How do you view the current state of the ASIC market?
We believe we are at a real crossroads in the choices the industry will make on custom silicon. While Networking, Telecom, Storage and Computing (NTSC) applications are pushing the performance envelope with leading-edge process technologies, mixed-signal/IoT applications are leveraging mature process technologies that are optimized for low-power applications. Even though many platform designers will want to create differentiation with custom hardware, the rising cost of masks and wafers may make them think twice. However, ASIC-enabled product differentiation provides a competitive advantage for many applications. Those who run the race of performance, power and product differentiation to distinguish their solutions will continue investing in ASICs.

What do you see as barriers to growth and innovation?

Lack of appetite to fund new architectures in silicon, and a shrinking ecosystem of IP providers. To overcome this barrier, Open-Silicon has joined forces with Silicon Catalyst, an incubator for semiconductor startups that enables them to increase silicon innovation opportunities and pursue big ideas at much lower cost through strategic partners. Reducing upfront costs enables startups to become higher-value investments. Follow-on funding then leads to true innovation and value creation.

What kinds of design/technology innovations do you think are the biggest game changers, and why?
There are two. One is ASIC development platforms. These platforms can speed custom design while retaining the ability to differentiate. Creating ASIC platforms requires thinking like a system company, or even like a startup, and requires the consideration of end use cases.

The other is packaging technology, specifically system in a package (SiP) and 2.5D. These will have a large impact on the future of our industry by creating a new wave of system integration techniques that will exploit the benefits of the footprint compression that these packaging technologies provide.

How is Open-Silicon helping to bring these innovations to fruition?
We are investing in ASIC development platforms for emerging applications. Our Specification-to-Chip IoT ASIC Platform is a perfect example. Open-Silicon's IoT platform includes pre-designed, field-proven Register-Transfer Level (RTL) components along with a support ecosystem of software and services for a variety of protocols, operating systems and analytics. The design is scalable and allows for variations in hardware/software partitioning, as well as the integration of custom IP. With the hardware blocks already designed and the associated software already developed, a project can begin at a point that is months ahead of a full custom design.

We are also aggressively investing in solving the die-to-die and processor-to-memory links with internally developed IP, such as our High Bandwidth Memory (HBM) total solution and interposer technology development to support the SiP and 2.5D technologies.

Open-Silicon provides full turnkey ASIC solutions translating customer ideas into real silicon. Why is this significant?

The industry is transitioning very quickly from innovating at the hardware level to innovating at the application level. By providing expertise that can translate ideas into real silicon, we encourage and help innovators spend more of their time in listening to their customers rather than building and managing infrastructure to implement their ideas. From self-driving cars to virtual reality, the inventors and idea managers should invest their time into defining ground-breaking concepts. We want to help revive innovation by allowing dreamers to think and envision, rather than just manage.

What advancements in technologies, like 2.5D and HBM, is Open-Silicon working on that you would like to share with SemiWiki subscribers?
Open-Silicon made an early investment in 2.5D, which has allowed us to offer an ASIC package with integrated 3D memory stacks using silicon interposer 2.5D technology. The result is higher performance, lower power and a smaller form factor system — a three-way win. 2.5D and 3D stacking create ways to mix and match chip components, meaning products can be divided into multiple dies. Some functions can be at a less expensive process node, or mixed with other functions that require a high frequency and/or low power.

Another significant advancement is Open-Silicon's HBM IP subsystem, which enables 1024-bit wide memory paths to ASICs using a 2.5D SiP solution. ASIC applications in networking, deep learning, virtual reality, gaming, cloud computing and data centers can improve their access to memory by applying this HBM SiP approach along with the necessary IP and JEDEC-compliant HBM memory chips, which come in stacked-die 3D versions.
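For a sense of why that 1024-bit width matters, here is a quick back-of-the-envelope calculation using public first-generation JEDEC HBM figures (these numbers are not from the interview):

```python
# Back-of-the-envelope peak bandwidth for one HBM stack.
# Figures are first-generation JEDEC HBM numbers, not Open-Silicon data.
bus_width_bits = 1024  # interface width per stack, as described above
pin_rate_gbps = 1.0    # per-pin data rate for HBM gen 1 (HBM2 doubles this)

bandwidth_gb_per_s = bus_width_bits * pin_rate_gbps / 8
print(f"Peak bandwidth per stack: {bandwidth_gb_per_s:.0f} GB/s")  # -> 128 GB/s
```

Compared with a conventional 64-bit DDR interface, the 16x wider bus is what lets a single stack deliver on the order of a hundred gigabytes per second.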

What advice would you give to students or to those just entering the field of chip design engineering?
This is one of the most exciting times to be innovating with semiconductors. Never has there been more focus on the ability to interface machines with human users. Mega-trend opportunities in IoT, biotech, wearables, energy, autonomous vehicles and mobile will all have new semiconductor innovation at their core. You are joining a workforce that will continue to profoundly change the lives of humans, and that is both exciting and extremely rewarding.

Also Read:

CEO Interview: Simon Butler of Methodics

CEO Interview: Charlie Janac of Arteris

CEO Interview: Marie Semeria of LETI


3 Steps To Choosing The Right IoT Vendor
by Padraig Scully on 10-30-2016 at 8:00 pm

There are thousands of contrasting IoT vendors in the market today. A strong push from hardware companies, communication providers, independent software vendors, system integrators, startups and IoT cloud platforms (of which there are 360+ competing providers in this market alone) has resulted in a complex and confusing market. As a result, it can be difficult for an OEM to evaluate which IoT vendor is the best fit for their connected solutions. But this is a very important decision that will shape an OEM's IoT journey, as they will likely be reliant on that vendor for years to come.

The process of identifying the right IoT vendor was recently analyzed as part of an industry white paper we published with the title “Guide to IoT solution development”. In the white paper, we discuss the IoT Solution development process across 5 major phases:

  • Business case
  • Build vs. Buy Decision
  • Proof of Concept
  • Piloting
  • Commercial Deployment

    According to the paper, there are three important steps to choosing the right IoT Vendor:

  • Mapping the engineering requirements
  • Deciding on build vs. buy
  • In case of buy: Selecting the actual vendor

    1. Requirements Engineering – Understanding what is needed for your IoT solution.
    Assuming you have nailed the business case (i.e., you have a clear vision for your IoT solution) and have double-checked its basic assumptions (e.g., expected ROI), you will need to formalize your engineering requirements. This is necessary (at least at a high level) so that you can craft the right IoT initiative for your organization, make the Build vs. Buy decision, and consult the right vendors or partners.

    a). Asking the right questions

    Firstly, you should come up with answers to operational questions such as:

    • What end points will provide the data?
    • What data points should be collected?
    • Which analyses will generate strategic insights?
    • Which enterprise systems need to be connected?
    • What services do I need to offer?

    IoT needs to be thought through from end-to-end or device-to-cloud. Keep in mind that the true value of IoT solutions resides in the data generated by your connected products – from which you derive actionable intelligence and feed timely insights back into products, processes, and operations to transform the entire business.

    b). Mapping the requirements by area
    As a second step, you should make a rough draft of your end-to-end solution according to 5 distinct layers: 1. Device, 2. Communication, 3. Cloud Services, 4. Applications, and cross-layer 5. Security. (For more details on the 5 layers see our white paper). For each component ask questions such as: Do we have the technology expertise in-house? Can we keep pace with the technology evolution and future customer requirements?

    For example, it is important to know how much data will be generated, in which form and how fast it will be retrieved. This will determine which kind of database and storage solution is required and whether you will be able to build this on top of your existing data infrastructure or not.
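As a hedged illustration of that sizing exercise, the arithmetic might look like the sketch below; every fleet parameter here is a hypothetical placeholder, not a figure from the white paper.

```python
# Rough data-volume sizing for the storage question above.
# All parameters are hypothetical placeholders -- substitute your own.
devices = 10_000       # connected endpoints
msgs_per_hour = 60     # one reading per minute per device
bytes_per_msg = 200    # payload plus protocol envelope

daily_bytes = devices * msgs_per_hour * 24 * bytes_per_msg
yearly_gb = daily_bytes * 365 / 1e9
print(f"~{daily_bytes / 1e6:,.0f} MB/day, ~{yearly_gb:,.0f} GB/year")
# -> ~2,880 MB/day, ~1,051 GB/year for this example fleet
```

Even this toy estimate shows how quickly device count and sampling rate dominate the database and storage decision.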

    2. The Build vs. Buy decision

    After assessing the engineering requirements, you need to decide which components of the solution you want to build from scratch. In many cases, it is beneficial to work with existing solutions from third-party vendors, i.e., out-of-the-box solutions.

    The paper highlights that recently more and more IoT projects rely on existing out-of-the-box solutions.

    WHY COMPANIES GO WITH “OUT-OF-THE-BOX” SOLUTIONS

    Benefits & Reasoning:

    • Quicker Time To Market — Critical infrastructure in place by default.
    • Access to crucial skills — Readily available partner network with expertise across domains.
    • Secure by design — Secure development lifecycle builds in security from the outset
    • Optimized to work with wider ecosystem — Aligned with industry standards across partner ecosystems e.g., IIC
    • Scale with ease — Modularized and optimized for large scale deployments
    • Enable a more end-to-end offering — Multiple parts work together from one vendor e.g., OS, Cloud, Analytics

    Before deciding to go with an out-of-the-box solution, companies should however evaluate the related costs as well as the threat of becoming “locked-in”. Being “locked-in” with the wrong vendor may strip away certain degrees of freedom in the overall solution or lead to uncontrollable support, maintenance and customization costs in the long run.

    Most vendors offer the ability to perform an initial pilot trial. While companies may initially test some features for free, it should be noted that a certain budget needs to be planned for the pilot phase, as some integration effort and data modelling is always necessary to get the pilot project up and running.

    3. The vendor selection

    There are numerous reasons to choose one IoT solution vendor over another. In an industry survey we asked 144 companies currently building IoT Solutions: Which vendor is primarily in the lead to co-ordinate your IoT solution development?

    Most companies look to IoT Cloud Platforms for solution development:

    The analysis shows that most companies developing IoT solutions see IoT Cloud / Platform companies in the lead (29%), while 21% of respondents see no vendor in the lead and are instead building in-house (see Exhibit). However, finding the most suitable IoT Cloud / Platform vendor is difficult with hundreds of competing providers in the market today.

    One should also note that at this point (Q3/2016) there is no single IoT vendor that can provide a complete end-to-end out-of-the-box solution. However, as our 2016 IoT platforms market report verifies, some companies offer more than others, and together with their partner ecosystems some can provide complete end-to-end IoT solution support.

    Comparing key IoT Solution vendors
    Correctly assessing the capabilities of each possible vendor against your requirements definition is crucial for your selection. While there are hundreds of existing enterprise IoT projects, the use case at hand determines your solution requirements; the vendor selection process largely depends on the components the vendors offer and how they fit into your solution.

    To assist companies in better understanding the offerings of IoT Solution Vendors, we showcase a high-level comparison of 8 major IoT solution providers including Microsoft, Amazon, IBM, Intel, GE, Google, PTC and SAP.

    The complete comparison as well as other best practices for OEMs, ODMs, and device manufacturers on how to transform their companies and build solid IoT Solutions can be found in the “Guide to IoT solution development” which is available for download free of charge.

    More IoT Articles on SemiWiki!


    The IoTrojan Horse – an army of toasters
    by Bill Montgomery on 10-30-2016 at 4:00 pm

    Most everybody is familiar with the term Trojan Horse, drawn from Greek mythology. It’s a tale from the Trojan War where, after a fruitless 10-year attempt to capture the city of Troy, the Greeks constructed a huge wooden horse, left it outside the city walls, and then sailed away, seemingly accepting defeat. The Trojans were elated, celebrated, and pulled the horse into Troy, as a victory trophy. Unbeknownst to them, the massive horse was filled with Greek soldiers.

    During the night, the Greek force crept out of the horse, and opened the gates for the rest of the Greek army, which had sailed back under cover of night. The Greek army entered and destroyed the city of Troy, decisively ending the war.

    Flash forward to 2016.

    In an insightful article (read it here) published this past February by Popular Science, Kelsey D. Atherton wrote, “About two and a half centuries after America declared independence, over 150 years since the end of the Civil War, and 66 years since the Soviet Union became the second country in the world to possess nuclear weapons, the greatest threat the intelligence community sees facing the United States is Wi-Fi-enabled toasters. No really.”

    Atherton’s article has proven to be prescient. Atherton’s toaster is just one of hundreds of millions of like devices – soon to be billions – that are permeating our lives on myriad levels. And last week, routers, DVRs and IP cameras – basically millions of unprotected internet-enabled devices – joined forces at the direction of a bunch of amateur hackers and launched a crippling DDoS attack against Dyn Inc. The IoTrojan Horse attack created overwhelming traffic to a number of high-level domains, such as Twitter, Amazon, Netflix and PayPal, effectively shutting them down. (read about it here).

    I can almost hear the folks in Hollywood noodling over this one. Let’s see. We’ve made Bad Teacher, Bad Grandpa…hmm…how about Bad Toy Story? Or maybe Bad IoToy Story?

    Only this movie wouldn’t be engaging, or uplifting or funny. It would be a tragedy – a tragedy that is on the verge of happening in real life.

    How can this be? There are many reasons but one that is most apparent is the lack of standards within the IoT sector.

    Standards – a necessary evil then. A mission-critical requirement now.
    Standards bodies are typically packed with representatives from governments and enterprises, and their decisions are mostly based on politics and their respective agency or company interests. The process of arriving at standards has always been time-consuming and laborious, but in essence, it worked. Mostly because time was never a consideration in reaching global consensus on things like EDI standards. When they happened, they happened.
    Not today. Today, time is of the essence and procrastination is only going to make matters worse. With no standards to adhere to, companies worldwide are rapidly rushing IoT products to market for fear of losing out on the predicted IoT gold rush. Just check out the list of manufacturers (here) whose devices were conscripted to attack Brian Krebs’s KrebsOnSecurity website. It’s absolutely ridiculous that this has been allowed to occur.
    Things have to change, and fast.

    Cyberwar Measures Act – a radical approach to a dangerous problem
    In 1970, Canada’s Prime Minister, Pierre Elliott Trudeau, invoked the War Measures Act in response to the kidnappings carried out by the FLQ, a terrorist group bent on independence for the Province of Quebec, which murdered one of its hostages, Pierre Laporte, a senior elected official. The Act gave the government sweeping powers, allowing it to arrest and detain anyone believed to be affiliated with the FLQ. While controversial at the time, the desired effect was realized: the second kidnapping victim, a British diplomat, was released, and the Act effectively quashed the FLQ’s efforts to break up the country.

    The US and indeed the entire world is in a similar state of crisis with far more dire consequences, and I feel strongly that it’s time to dispense with the slow, plodding standards-based way we deal with change in our connected world in favour of dramatic actions which will rapidly protect us from future attacks.


    Furthermore, while we are wont to blame North Korea for the Sony hack, Russia for email hacks, or other nations for the attacks on our connected world, the sad reality is that the doors are so wide open that clever kids in their parents’ basements in any part of the world could be launching IoT-driven cyberattacks.

    So, what should we do?

    Invoke a Cyberwar Measures Act approach.

    First, governments everywhere should steadfastly refuse to allow the importation of any connected product that has a hard-coded firmware password that cannot be changed, or that does not enforce strong password selection at installation time.
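As a minimal sketch of the kind of install-time check that proposal implies, consider the validator below; the credential list and strength rules are illustrative assumptions, not taken from any standard.

```python
import re

# Illustrative factory defaults -- a real device would check its own list.
DEFAULT_CREDENTIALS = {"admin", "root", "password", "12345", "default"}

def password_acceptable(candidate: str) -> bool:
    """Reject factory defaults and trivially weak passwords at setup time."""
    if candidate.lower() in DEFAULT_CREDENTIALS:
        return False
    if len(candidate) < 12:
        return False
    # Require lowercase, uppercase, digit, and symbol character classes.
    required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return all(re.search(pattern, candidate) for pattern in required)

assert not password_acceptable("admin")       # factory default: rejected
assert not password_acceptable("hunter2")     # too short, too weak
assert password_acceptable("Tr0ub4dor&3x9!")  # mixed classes, 14 chars
```

A device that refuses to join the network until a check like this passes could not be conscripted by a default-credential scan.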

    Second, every IP address that was used in the Dyn attack should be disabled, and any of the things connected at those IP addresses that cannot be secured as described above should be denied reconnection.


    Third, the remaining IP addresses with known-insecure ‘things’ connected (devices similar to those used in the recent DDoS attacks) should also be disabled.


    Fourth, let’s immediately ban the importation of the devices that Brian Krebs revealed were used in that particular IoT DDoS attack, putting the onus on the manufacturers to prove their devices are sufficiently secure before reinstating them as IoT-safe manufacturers.


    The IoTrojan horse has arrived, but unlike the citizens of the city of Troy, we can still win this battle if we act quickly.


    Governments of the world, are you listening? It’s time to step up and do what you are meant to do…serve and protect the citizens of your respective nations.


    Also Read:
    Top 5 Things to Know About Recent IoT Attacks


    Top 5 Things to Know About Recent IoT Attacks
    by Matthew Rosenquist on 10-30-2016 at 12:00 pm

    Recent internet attacks resulted in popular sites becoming unreachable, such as Twitter, Etsy, Spotify, AirBnB, Github, and the New York Times. These incidents have brought to light a new threat to online services: Internet of Things (IoT) botnets. Distributed Denial of Service (DDoS) attacks have been commonplace for over a decade but have rarely been too troublesome. For the past several years, network providers’ security services have been able to absorb such attacks and keep online properties available. But the game has now changed.

    In essence, when a number of devices can be controlled to simultaneously flood a destination with network requests, the target becomes overloaded and legitimate requests cannot be processed. Traditional network filters are smart enough to recognize a handful of systems attempting this malicious behavior and simply drop all requests from them. But when thousands of different systems mount an attack, the normal filters cannot separate legitimate from malicious traffic, and the availability of the system crumbles.
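To see why the per-source approach breaks down, consider this toy rate limiter; the threshold and traffic numbers are illustrative only.

```python
from collections import Counter

REQUESTS_PER_WINDOW = 100  # per-source threshold (illustrative)
counts = Counter()

def allow(source_ip: str) -> bool:
    """Naive per-source rate limiter: effective against a few attackers."""
    counts[source_ip] += 1
    return counts[source_ip] <= REQUESTS_PER_WINDOW

# A single attacker blasting 10,000 requests is throttled after 100...
assert sum(allow("203.0.113.5") for _ in range(10_000)) == 100

# ...but 10,000 bots sending 100 requests each all stay under the threshold,
# delivering a million requests the filter cannot tell apart from real users.
assert all(allow(f"bot-{i}") for i in range(10_000) for _ in range(100))
```

Each compromised device stays individually unremarkable, which is exactly what makes the aggregate flood so hard to filter.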

    Cybercriminals and hacktivists have found a new weapon in this war: the Internet of Things (IoT). Billions of IoT devices currently exist; they can be as small as a piece of jewelry or larger than a tractor. They all have one thing in common: they connect to the Internet. This has tremendous benefits, as people can monitor their home with cameras from afar, check the contents of their refrigerator while at the store, and do a myriad of other great things with these connected gadgets. We cannot forget, however, that these are just tools. They can be wielded for good or employed for malice. To hackers, each one of these devices is a potential robotic soldier that they might be able to recruit into their bot army.

    The most recent attack, against a major DNS provider, has highlighted this very fact to millions of Internet users. Botnets containing tens or hundreds of thousands of hijacked IoT devices can bring down major pieces of our beloved Internet. There is a lot of hype, fear, and speculation bubbling out of the shadows. We are at a tipping point. IoT devices now represent a new and formidable threat. The next few months will be telling. For now, let us cut through the hype and understand the important aspects of recent IoT DDoS attacks.

    Here are 5 things you should know about the recent IoT attacks:

    1. Insecure IoT devices pose new risks for everyone. Every IoT device that can be hacked is another soldier in a botnet army that could be used to bring down important parts of the Internet. Such attacks can interfere with your favorite sites for streaming, social media, online shopping, banking, etc. If you own such weak or poorly configured devices, then you could be contributing to the problem.

    2. IoT devices are valuable to hackers and they won’t give them up without a fight. Although these attacks, with malware like the Mirai botnets, are simple in nature, they will evolve as quickly as they need to for the attackers to remain in control. IoT devices are hugely valuable to hackers, as they empower them to conduct devastating DDoS attacks with little effort.

    3. DDoS attacks from IoT devices are severe and tough to defend against. Identifying and filtering out attacks from a handful of systems is easy. When faced with tens or hundreds of thousands, it is near impossible. The amount of resources needed to fend off an attack is tremendous and costly. A recent attack that knocked Brian Krebs’s security-reporting site offline prompted Akamai’s vice president of web security to state “If this kind of thing is sustained, we’re definitely talking millions” of dollars in cyber security services to keep the site available. That is powerful. Don’t expect attackers to give up easily. These always-connected devices are perfect for DDoS botnets.

    4. Cybercriminals and hacktivists are driving these attacks. There is speculation and fear that nation-states are behind the latest string of attacks. That is highly unlikely. The authors of Mirai, one of hundreds of botnets, voluntarily released the code to the public, something a professional government offensive team would never do purposefully. However, it is a good bet that after witnessing how powerful IoT botnets are, nation-states are probably working on similar strategies, but with much more advanced capabilities. In the short term, cybercriminals and hacktivists will remain the main culprits of these attacks. Over the next few months, expect criminals to find angles from which they can make a financial profit, such as extortion.

    5. It will get worse before it gets better. Unfortunately, most of the IoT devices deployed to date lack strong security defenses. The ones being hacked now are the easiest, with default passwords that are published for anyone to look up. Hacker software simply connects and logs into the device, unless the owner has gone out of their way to change the default password. Unsurprisingly, most have not taken this important step. Instantly, the attackers have another soldier to do their bidding. For this situation to get better, several aspects must be addressed. Devices must be designed with security in mind, configured properly, and managed to keep security updated. This will take both technical and behavioral changes in the long run to keep pace with evolving hackers.

    Hacking IoT devices is now a problem for everyone. Due to the ease of compromise and massive numbers of IoT devices which are connected to the Internet, cybercriminals and hacktivists have a vast resource to fuel powerful DDoS campaigns. We are just starting to see the attacks and issues around IoT security. It will continue to be a problem until more comprehensive controls and behaviors make us all more secure.

    Interested in more? Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

    Also read: How to Secure the Future of IoT


    The IP Paradox: Sales are growing despite Semi Consolidation
    by Eric Esteve on 10-29-2016 at 7:00 am

    IPnest has been running the “Interface IP Survey” since 2009, and we did it again last September. To build the survey as accurately as possible, I have followed a “divide and conquer” strategy. Interface protocols are varied, ranging from PCI Express, USB, and Ethernet to memory controllers (DDR3, DDR4, LPDDR3, LPDDR4 and more) and the HDMI, DisplayPort, SATA, SAS, and MIPI specifications (CSI, DSI, I3C, M-PHY, D-PHY, C-PHY…).
    Continue reading “The IP Paradox: Sales are growing despite Semi Consolidation”


    Can one flow bring four domains together?
    by Don Dingee on 10-28-2016 at 4:00 pm

    IoT edge device design means four domains – MEMS, analog, digital, and RF – not only work together, but often live on the same die (or substrate in a 2.5D process) and are optimized for power and size. Getting these domains to work together effectively calls for an enhanced flow.

    Historically, these domains have not played together in silicon. Designs were executed at the PWB level, bringing together chips with different design rules and packaging technologies. This was a risk-reduction maneuver; each domain could be debugged more or less independently, and then integration issues such as crosstalk, interference, and signal integrity were solved using macro-level mitigation techniques. Domain experts usually didn’t cross disciplines, except to understand the interfaces between the domains.

    Now, interaction between domains is much more critical to success in constrained IoT edge designs. Mixed signal design is pretty much taken for granted now, but more and more people are having success in on-chip RF and MEMS integration. The bar has been raised by better EDA tools that handle a unified flow from capture to simulation to layout.

    Note I said tools. It’s not necessarily a single tool that matters, but rather an integrated suite operating from the same data repository with the same user interface, because what kills productivity is switching costs. Bringing together these four domains is easier said than done, however. Analog types are used to working in schematics, digital types in RTL, and MEMS and RF designers often work at the metal layers.

    This challenge is really the motivation behind the Mentor Graphics purchase of Tanner EDA a year and a half ago. Integrating those disparate domains and bringing a full suite of EDA tools together in one comprehensive flow is a big job that the Tanner teams have been working on relentlessly since the acquisition. Mentor’s PWB tools such as PADS, created with small teams in mind, also factor into the flow. For IoT edge devices, it goes even deeper – Mentor’s purchase of CodeSourcery was all about optimizing chips for real-time software.

    A new Mentor white paper authored by Jeff Miller has an interesting premise: if we have a better design flow for IoT devices, handling all four domains, we get more optimized parts. (He’s gone as far as to suggest there is “a new breed of designers.” That has a lot to do with how engineers come through school. I’m not sure deep expertise in all four domains is possible in a single person, but I am sure that designers are becoming much more familiar with cross-domain design and collaboration in small teams.)

    The Tanner IoT design flow handles everything from capture through simulation to layout:


    Miller walks through how the tools tie together in this scenario. His discussion on simulation is particularly interesting. For example, in a mixed-signal simulation, T-Spice and ModelSim work together, passing data back and forth whenever signals change at the analog/digital boundary.
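Conceptually, that handshake resembles the toy loop below: two engines advance to the same time points and exchange boundary values at each digital edge. This is a from-scratch Python illustration of the idea, not the actual T-Spice/ModelSim interface.

```python
import math

class DigitalClock:
    """Toy digital engine: a square wave toggling every half period."""
    def __init__(self, period):
        self.half, self.out = period / 2, 0
    def next_event(self, t):
        return (math.floor(t / self.half) + 1) * self.half
    def advance_to(self, t):
        self.out = int(round(t / self.half)) % 2

class AnalogRC:
    """Toy analog engine: an RC node integrating toward its input."""
    def __init__(self, tau):
        self.tau, self.v, self.vin, self.t = tau, 0.0, 0.0, 0.0
    def advance_to(self, t):
        dt = t - self.t
        self.v += (self.vin - self.v) * (1 - math.exp(-dt / self.tau))
        self.t = t

clk, rc, t = DigitalClock(period=2.0), AnalogRC(tau=0.5), 0.0
while t < 6.0:
    t = clk.next_event(t)    # advance both engines to the next digital edge
    rc.advance_to(t)
    clk.advance_to(t)
    rc.vin = float(clk.out)  # pass the new value across the boundary
    print(f"t={t:.1f}  clk={clk.out}  v(rc)={rc.v:.3f}")
```

Real co-simulation must also handle analog-initiated events (threshold crossings) in the other direction, which is where much of the tool integration effort goes.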

    What about the MEMS part of the job? Miller suggests that creating 3D models and then trying to derive a 2D mask is difficult and can lead to errors. He suggests a mask-forward flow, bringing in a 2D mask from Tanner L-Edit and then generating the 3D model. And the RF? Tanner Eldo can be used to analyze RF circuits with several algorithms and help optimize the results for various types of circuits.

    Just reading through this discussion, it is clear the different domains call for different handling, and there has been a lot of progress in Tanner EDA tool integration. The entire white paper is available for download (registration required):

    Driving Intelligence to the IoT Edge Invents a New Breed of Designers

    As I said, I don’t think there is any magic that transforms a single designer into an expert in all four domains just by adding tools. What we are seeing is EDA vendors starting to think about their tools as part of a bigger system design effort with various facets, with a collaborative design team working in one environment with integrated tools.


    Is That PDK Safe to Use Yet?
    by Daniel Payne on 10-28-2016 at 12:00 pm

    In our semiconductor ecosystem we have foundries on one side, supplying all of that amazing silicon technology, and IC designers on the other side, who take their system ideas and implement them in an SoC using a specific foundry. The required interface between foundry and chip designers has been the Process Design Kit (PDK), a collection of files that define how the silicon should work:

    • SPICE models for transistor behavior
    • Layout Parasitic Extraction (LPE) decks that define the physical interconnect in terms of resistors, capacitors and inductors
    • Design Rule Checks (DRC) that define how the physical IC layout should be done in order to yield properly
    • Layout Versus Schematic (LVS) decks that specify how the transistor-level netlist extracted from layout should compare against the logical schematic

    Getting the PDK files right is really important, because at small process nodes we have Layout Dependent Effects (LDE); for example, the Vt of one transistor depends on how close it is physically placed to another transistor, or even to a contact. The same is true of transistor mobility: it depends on physical placement. Parasitic values can now dominate the speed of a transistor, so knowing how to extract them properly impacts the accuracy of timing analysis tools.

    We all know that software is mostly written by hand, so bugs can creep in by accident. Well, the PDK is just a bunch of files that can be manually or automatically generated, and yes, these files may be off a bit, so what to do? If you’re an automation company, you come up with a QA tool for PDK creators and PDK users. This is exactly what the engineers at Platform DA have done: a QA toolset for the foundries that create PDKs and for the circuit designers that use them. They call their tool PQLab, and I just learned more about it.

    Related blog – Are your Transistor Models Good Enough?

    A chip designer has certain questions about the PDK:

    • What just changed when going from PDK v.1 to v.2?
    • How does a PDK change impact my IC project?
    • Which foundry should I use for my next IC design?
    • Is there a way to benchmark different PDKs of two different design flows quickly?
    • How do LDE, statistical variation and parasitics impact my design?

    At the foundry the PDK engineering team has their own set of questions:

    • Are all of my PCell combinations DRC clean?
    • Will all of my PCell combinations be LVS clean?
    • Can I compare the pre-layout versus post-layout circuit simulation results for typical cells?
    • What just changed when going from PDK v.1 to v.2?

    The approach used by PQLab is to help answer these questions through a set of QA features designed just for PDKs:

    Starting with the DRC and LVS side of QA first, the idea is to automatically and randomly place cells from the PCell library next to each other, and then run a popular DRC/LVS tool, like Calibre from Mentor Graphics, to check that all combinations are actually clean and without any errors:
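A hedged sketch of that pairwise-placement loop follows; the cell names, the layout format, and the run_drc() stand-in are all illustrative placeholders, since PQLab's actual mechanism is not public.

```python
import itertools
import random
import subprocess

# Illustrative PCell names -- a real PDK has far more, with parameters.
PCELLS = ["nmos", "pmos", "poly_res", "mim_cap"]

def write_pair_layout(cell_a, cell_b, path):
    """Emit a tiny placement deck: two cells at a random small spacing."""
    spacing_um = random.uniform(0.1, 1.0)
    with open(path, "w") as f:
        f.write(f"PLACE {cell_a} 0.000 0.000\n")
        f.write(f"PLACE {cell_b} {spacing_um:.3f} 0.000\n")

def run_drc(layout_path):
    """Placeholder: a real flow would invoke the sign-off DRC tool
    (e.g., Calibre) and parse its results database for a violation count."""
    subprocess.run(["echo", "drc", layout_path], check=True)  # stand-in
    return 0

failures = []
for a, b in itertools.combinations_with_replacement(PCELLS, 2):
    write_pair_layout(a, b, "pair_test.txt")
    if run_drc("pair_test.txt") > 0:
        failures.append((a, b))

print(f"{len(failures)} PCell pairs need layout fixes")
```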

    If DRC or LVS errors are found with certain cell combinations, then the foundry goes back and fixes those cell layouts and re-runs QA to ensure that each error has been fixed.

    For the simulation QA there are three major tasks (a sketch of the second follows the list):

    • Correlate SPICE models pre-layout versus post-layout
    • Compare the device simulation specs like Vth, Idsat, etc. at pre-layout and post-layout conditions across a range of circuits
    • Compare any differences between design flows with popular circuit simulators (HSPICE, Spectre) and extraction-based netlists from extractors (StarRC from Synopsys, Quantus QRC from Cadence, Calibre xRC from Mentor)
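To make the second task concrete, here is a hedged sketch of a pre- versus post-layout spec comparison; the CSV format and the 5% tolerance are illustrative assumptions, not PQLab's actual interface.

```python
import csv

TOLERANCE = 0.05  # flag any spec that shifts by more than 5% (illustrative)

def load_specs(path):
    """Read rows of device,spec,value into a lookup table."""
    with open(path) as f:
        return {(row["device"], row["spec"]): float(row["value"])
                for row in csv.DictReader(f)}

pre = load_specs("prelayout_specs.csv")    # e.g., a row per device spec
post = load_specs("postlayout_specs.csv")

for key, pre_val in sorted(pre.items()):
    drift = abs(post[key] - pre_val) / abs(pre_val)
    if drift > TOLERANCE:
        device, spec = key
        print(f"{device} {spec}: {drift:.1%} pre- vs post-layout shift")
```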

    The third and final area of QA checking with the PQLab tool is PDK comparisons, where there are five criteria:

    • PDK file comparison
    • PCell property comparison
    • CDF comparison
    • PCell default layout and original position
    • Simulation results comparison

    Summary
    PDK files continue to be the interface between foundries and designers, so we need to be sure that all of the PDK files are consistent and correct. The PQLab tool from Platform DA provides the automation PDK developers need to ensure the highest quality before releasing. IC designers can now quickly determine whether a particular foundry PDK will provide the performance and power being sought, and know what has changed between versions of a PDK. The QA process for a PDK no longer has to take weeks of semi-automated effort; with automation it can complete in hours. Foundries are using the PQLab tool to save time and produce PDK files that are solid.


    Mentor Webinar Series: Integrating the Systems Engineering Flow
    by Bernard Murphy on 10-28-2016 at 7:00 am

    Product lifecycle management is probably not the most gripping topic for most design engineers. You want to get on with architecture, design, verification and implementation. But if you are building products for any safety-sensitive application – in a car, a medical appliance, avionics, railway applications in Europe, to name a few – you know that you now have to demonstrate compliance with quite detailed process standards like ISO 26262.

    Register for the Webinar

    You could do all of this, in principle, by exchanging pieces of paper and emails, but that approach quickly becomes unmanageable. More likely you plan to build (or have partly built) your own system of spreadsheets and shared/revision-tracked documents. You’re going to need some of that anyway. But integrating design flows and best-in-class design tools to provide the compliance and traceability required by standards like ISO 26262 is complicated by differing needs, processes and data views among tools.

    Mentor is offering a series of four webinars (the first has already been posted) on what you have to consider (make sure you understand the full scope of the problem) and a better approach to managing the design flow process:

    • Going beyond PLM solutions to manage design change (Oct 19)
    • Simplifying overwhelming project data by putting it into context (Nov 3)
    • System design management simplifies ARP4754A compliance (Nov 10)
    • System design management for automotive functional safety (Nov 17)

    While this might be a fun project for an internal software team, it can take years to build something this complex and get it right. It takes EDA expertise as well as an understanding of standards like ISO 26262, so don’t let the IT team tell you they can take care of this. You need something built by experts, not enthusiasts 😎, especially when compliance and safety are determined by the effectiveness of these solutions.

    Register Here

    Increasing the productivity of an engineering team and improving first-time quality of their end product requires maintaining coordination and synchronization of the information needed between software environments. However, data integration between software tools intended to be used together in a work flow remains challenging. While individual software providers address some challenges, the problem is compounded when workflows employ best-in-class tools contributed by different suppliers.

    A true Systems Engineering approach to integration manages relationships between tools throughout design disciplines; coordinates changes, dependencies, and impacts; and integrates with a user’s current tools and flows. In this session, we look at an innovative solution to this problem based on OSLC, extended by the Mentor Graphics approach to incorporate a central organizing structure for data tracking, history, and analysis.

    Mentor Graphics systems experts discuss four unique systems integration topics, two of which show a way to verify safety certification efforts, such as ARP4754A and ISO 26262. You may sign up for one, several, or all four seminars. Each 30-minute seminar stands alone.

    What You Will Learn:

    • A new way to track changes in a design flow – without changing any design tools
    • A way to share information where needed in a structured and organized manner (that’s not a PLM environment)
    • A way to manage relationships among all facets of a design as it evolves
    • A way to easily access legacy material
    • A way to meet standards inspections

    Who Should Attend:

    • Systems engineers
    • Requirements engineers
    • Project managers
    • Safety analysts