3 Essential Rules for IoT Businesses
by Rushi Gajjar on 09-28-2015 at 4:00 pm

Opinions from many different sources have made the world obsessed with the IoT. But really, the Internet of Things is far bigger than most people realize. Some say it’s just the term for connected things, and that it’s merely about giving an IPv6 address to any “thing” in the vicinity.

“Total Number of internet connected devices reached 8.7 billion in 2012” – X source

“The growing network of connected objects referred to as the “Internet of Things” is estimated to be in the billions by 2020.” – Y Source

“The Internet of Things (IoT) is one of the fastest growing areas of tech – covering everything from consumer wearable devices to high-tech industrial systems.” – Z Source

“IoT is completely a disruptive technology according to analysis.” – Z’ Source

Be it massive infrastructure, your pillow, your home, or your clothes, it’s not about wiring sensors to a pile of Arduinos and Raspberry Pis and putting them on the internet; it’s about building an ecosystem. It’s not about connecting a relay to the Internet to control appliances, but about an intelligent architecture that truly suits the needs of the users, whether that means artificial neural networks or other algorithms that enhance the user experience. Honchos are building SkyTran, and that is the kind of solution the world really wants. “Air” is built with the same vision and philosophy. We are here to deliver the extra mile!

When developers talk about going the extra mile to enhance the human living experience, they’re still not thinking big enough to meet the needs of the world. It’s not a lack of creativity; it’s a lack of scrutiny. The future is always within our vision, and you don’t need to visualize or build what’s already there.

“A solitary fantasy can transform a million realities” – Maya Angelou


3 essential rules for IoT Businesses

Here is my take! There may be many rules for a successful IoT business, but I would single out these three as essential to building a successful IoT product. It’s about both M2M and IoT (it took me a long time to understand the difference between M2M and IoT; if you don’t know it, here is a good article), so it’s on us to understand where all of this is leading.

Rule 1: Data

It’s not just about delivering the product; it’s about leveraging the data created by millions of end nodes. Obviously, that requires powerful data storage and tremendous remote processing power to make sense of the sheer amount of collected data and to build intelligence into the system for better predictions and control.

Rule 2: Security and Robustness
Who likes it when connectivity becomes unstable? Nobody enjoys interrupted signals and broken codecs. The connection-protocol side of IoT is therefore incredibly critical to giving users that seamless, “always on and interacting” feel. What’s the point of always having technology with you if it isn’t always connected? Consumer devices in particular – from fitness trackers to home appliances – are generating ever more granular information. And when that information is about people or their health, it is all the more sensitive.

Rule 3: User Interface/Experience
Similar to the previous point, nobody enjoys a broken user experience. Be it developers or end users of the product, they need a flawless system they can hack and play around with, with minimal hassle.
“IoT is not the ‘Thing’ that gains the Internet, It’s that the Internet gains from the Thing.” – Patrick Isacson

Accessories
Many developers working on IoT share the same idea: build an ecosystem (SDKs) so that others can develop on their platform. It takes many skills and resources to design and deliver a successful IoT platform that is scalable and extensible enough to be versatile. According to the Reuters “The IoT Platform Companies Database 2015”, there are 250+ platforms available for starting IoT development, offered by 180+ startups, 45+ SMEs, and 25+ MNCs. This is really alarming! Who will connect all these platforms to make the “ONE Internet of Things”? That is exactly what is needed to leverage data mining and provide better analytics. “ONE Internet of Things” can not only make businesses more efficient but also make them ready for the future!

Why do I call it an accessory? Because it is currently “announced” as an add-on to almost every Internet of Things product sold! It is treated as an accessory. Because the Internet of Things is so abstract, developers and designers are busy creating their own hardware and software platforms with different open or proprietary protocols, each shouting that their platform has cool X-Y-Z features; the obsession is too much. This is what will push developers to start talking about the Web of Things. Just as the Web (application layer) is to the Internet (network layer), the Web of Things provides an application layer that simplifies the creation of Internet of Things applications, reusing existing and popular Web standards. But I am still concerned about common, cross-platform solutions over which multiple devices can communicate. It is an open discussion, though!
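To make the Web of Things idea concrete, here is a minimal sketch of a “thing” exposing one of its properties as a plain HTTP/JSON resource, using only Python’s standard library. The endpoint path, payload schema, and the fake sensor are all illustrative assumptions, not any standard’s mandated API.

```python
# A minimal Web-of-Things-style endpoint: a "thing" exposes its state as a
# plain HTTP/JSON resource so any web client can read it without a
# proprietary protocol. Path and payload schema are illustrative only.
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_temperature_c():
    """Stand-in for a real sensor driver."""
    return round(20 + random.random() * 5, 2)

class ThingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/properties/temperature":
            body = json.dumps({"temperature_c": read_temperature_c()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ThingHandler).serve_forever()
```

Any browser or script that speaks HTTP can now read the device, which is exactly the cross-platform property the paragraph above is asking for.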

Security

As everyone turns to the Internet of Things and everybody talks about it, “security in the Internet of Things” has become a topic for every IoT tech geek’s breakfast table. Some devices simply lack enough of a software stack in the tiny microcontrollers used at actuator end-points, while others chase low power with no security engines running at all. As users become more reliant on smart devices and wearables, an increasing amount of sensitive data is being accessed through these devices and transferred among them. Developers must strengthen their defenses by taking cues from smartphone developers and that industry. But it is not as easy as just talking about security; there is much more work needed for low-power, low-memory embedded devices.
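To give a flavor of what security on constrained devices means in practice, here is a minimal sketch of authenticated encryption with AES-CCM, the cipher mode used by BLE-class radio stacks, written with the Python `cryptography` package. The key handling and node identity shown are illustrative assumptions; real devices provision keys at manufacture or pairing time.

```python
# A minimal sketch of authenticated encryption of a sensor reading using
# AES-CCM (the mode used in Bluetooth LE link-layer security).
# Key handling here is illustrative; real devices provision keys securely.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)  # in practice: provisioned, not generated ad hoc
aesccm = AESCCM(key)
nonce = os.urandom(13)  # CCM nonce; must never repeat for a given key
reading = b'{"temp_c": 21.5, "node": "42"}'

# Associated data binds the ciphertext to a (hypothetical) node identity.
ciphertext = aesccm.encrypt(nonce, reading, b"node-42")
assert aesccm.decrypt(nonce, ciphertext, b"node-42") == reading
```

Even this small amount of crypto costs RAM, flash, and cycles, which is precisely the squeeze on low-power, low-memory microcontrollers described above.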

Product development—at least for products that anyone expects to be successful—has always been iterative, incremental, and collaborative.

Now it’s on us, the builders, the innovators, the creators, and the end users, to bring the IoT to a stage where everything works on a unified platform. Creating “ONE Internet of Things” is a big task, but one filled with opportunities for everyone around us to change the world we see today!

Thanks in advance for your Likes and Shares. It would be great to have your added thoughts on this.

Rushi Gajjar


Prototyping the Future of Semiconductors!
by Daniel Nenni on 09-28-2015 at 12:00 pm

With major semiconductor mergers and acquisitions running rampant in 2015 (more than double the M&A activity of 2014), the question is: where do we go from here? There are many ways to slice this, but for this blog let’s talk about the thousands of semiconductor professionals who will be changing jobs as a result of this M&A hyperactivity and the recent reductions in force (QCOM just RIF’d 18%).


I see two things happening here:

  • Semiconductor people will join system houses as they ramp up internal IC development. Apple is the best example of a systems house becoming a major fabless semiconductor player (vertical integration) and now everyone wants to be like Apple, right?
  • Semiconductor entrepreneurs will start new fabless companies. The problem here is capital but of course that has always been a problem and as the saying goes “Where there’s a will, there’s a way.”

One of the things I do during my day job is help emerging technology companies get funding. Some of it comes from angels, banks, or other traditional sources. Sometimes customers or partners make investments, and of course there are always crowdfunding sites (GoFundMe, Kickstarter, etc.). Unfortunately, a slide deck will not always get you funding for a chip project; you really need working silicon, but of course that is a chicken-and-egg kind of thing.

You may have noticed we have been writing about FPGA prototyping quite a bit lately, and of course there is a reason for that. Paul McLellan even did a “Brief History of FPGA Prototyping” blog last week. You can also check out a new video from S2C on Prototyping the Future, or you can get the white paper FPGA Prototyping Primer.

Bottom line: You can easily do architectural exploration, block design, system integration, and embedded software development for a proof-of-concept design to raise money for your project. Using FPGA prototyping you can also get it right the first time, which is critical for emerging technology companies, absolutely.

    FPGA Prototyping: The next best thing to silicon!

    Speaking of FPGA prototyping, S2C just released their new rapid prototyping solution “Quad Kintex UltraScale Prodigy™ FPGA prototyping Logic Module Addresses Designs with Massive Parallel DSP Algorithms.” This is the latest addition to S2C’s Prodigy Logic Module family aimed at large DSP algorithm development and is ideal for applications such as voice processing, graphics imaging, military, instrumentation, disk controllers, and digital mobile.

    “Designers that must deal with a huge number of DSP calculations now have a highly reliable and fast solution that can help them achieve their stringent time-to-market goals,” commented Toshio Nakama, CEO of S2C. “An added benefit is that our Quad KU115 Prodigy Logic Module is thoroughly integrated into our Prodigy Complete Prototyping Platform giving users access to a vast array of prototyping tools and our expansive library of 80+ daughter cards to quickly build their prototyping targets.”

    You can download the Quad KU115 Prodigy™ Logic Module datasheet HERE.

    With over 200 customers, S2C’s focus is on SoC/ASIC development to reduce the SoC design cycle. Our highly qualified engineering team and customer-centric sales force understands our users’ SoC development needs. S2C systems have been deployed by leaders in consumer electronics, communications, computing, image processing, data storage, research, defense, education, automotive, medical, design services, and silicon IP. S2C is headquartered in San Jose, CA with offices and distributors around the globe including the UK, Israel, China, Taiwan, Korea, and Japan. For more information, visit www.s2cinc.com.


    Xilinx Skips 10nm
    by Paul McLellan on 09-28-2015 at 7:00 am

At TSMC’s OIP Symposium recently, Xilinx announced that they would not be building products at the 10nm node. I say “announced” since I was hearing it for the first time, but maybe I just missed it before. Xilinx will go straight from the 16FF+ arrays that they have announced but not yet started shipping to the 7FF process that TSMC currently have scheduled for risk production in Q1 of 2017. TSMC already have yielding SRAM in 7nm and stated that everything is currently on track.

    See also TSMC OIP: What To Do With 20,000 Wafers Per Day although I screwed up the math and it is really over 50,000

I think that there are two reasons for doing this. The first is that TSMC is pumping out nodes very fast. Risk production for 10FF is Q4 of 2015 (which starts next week), so there are only 6 quarters between 10FF and 7FF if all the schedules hold. I think that makes it hard for Xilinx to get two whole families designed, even with some of the design work going on in parallel. It costs about $1B to create a whole family of FPGAs in a node. On the business side of things, 10nm would be a short-lived node: the leading-edge customers would move to 7nm as soon as it was available, so the amount of production business to generate the revenue to pay for it all and make a profit might well be too limited.

I contacted Xilinx to ask, and they pretty much confirmed my guess: “The simple reason is that our development timelines & product cadence lined up better with 7nm introduction. TSMC has a very competitive process technology and world-class foundry services, and their timeline for their 7nm introduction lines up well with our needs and plans.”

There have been rumors that Intel might skip 10nm too, although the recent rumors are that they will tape out a new 10nm Core M processor early next year. I don’t know of anything much that Intel has said about 7nm, either from a technology or a timing point of view.

    See also Intel to Skip 10nm to Stay Ahead of TSMC and Samsung?
    See also Intel 10nm delay confirmed by Tick Tock arrhythmia leak-“The Missing Tick”

    That brings up the second big reason. All processes with the same number are not the same. TSMC’s 16FF process has the same metal stack (BEOL) as their 20nm process. It is their first FinFET process and so presumably they didn’t want to change too many things at once. Interestingly, Intel made the same decision the other way around at 22nm, where they had their first FinFET process (they call it TriGate) but kept the metal pitch at 80nm so it could still be single patterned. The two derivative TSMC 16nm processes, 16FF+ and 16FFC, have the same design rules and so the same 20nm metal. This limits the amount of scaling from 20nm to 16nm. There is a big difference in speed and power but not so much in density.

    See also Scotten Jones’s tables in Who Will Lead at 10nm?

At 10nm Intel has a gate pitch of 55nm and a metal 1 pitch of 38nm (multiplied together gives 2101nm², although I get 2090nm²). TSMC at 10nm has a gate pitch of 70nm and a metal 1 pitch of 46nm, for an area of 3220nm². But perhaps more tellingly, Intel’s 14nm has a gate pitch of 70nm (same as TSMC’s 10nm) and a metal 1 pitch of 52nm, only a little looser than TSMC’s 10nm pitch of 46nm. So another reason Xilinx might skip 10nm is that it would not look good against Altera’s products in 14nm.
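The pitch arithmetic is easy to check; a few lines reproduce the numbers above, using gate pitch times metal-1 pitch as a rough area proxy (a proxy only, not a full density metric):

```python
# Rough density proxy: contacted gate pitch x metal-1 pitch, in nm^2.
nodes = {
    "Intel 10nm": (55, 38),   # 55 * 38 = 2090, vs the 2101 quoted
    "TSMC 10nm":  (70, 46),   # 3220
    "Intel 14nm": (70, 52),   # 3640, close behind TSMC's 10nm
}
for name, (gate_pitch, m1_pitch) in nodes.items():
    print(f"{name}: {gate_pitch * m1_pitch} nm^2")
```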

TSMC say that 10nm is about 50% smaller than their 16nm processes, and that 7FF will be 45% of the area of 10FF. Without much information to go on, it is still clear that Intel’s 7nm will be higher density than TSMC’s; the TSMC 7nm process will probably be close to the Intel 10nm process. This is not necessarily a criticism of anyone. Intel is totally focused on bringing out server microprocessors and can read the riot act to all their designers as to how restrictive their methodology has to be, and the designers have to suck it up. TSMC has to accept a much wider range of designs from a broad group of customers that they do not control in the same way.
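Chaining TSMC’s stated factors gives a feel for the jump Xilinx is making; a quick sketch, assuming “50% smaller” means half the area:

```python
# Cumulative area scaling from TSMC's stated factors.
# Assumption: "50% smaller" means 0.5x area at 10nm vs 16nm.
scale_16_to_10 = 0.50
scale_10_to_7 = 0.45   # 7FF stated as 45% of the area of 10FF
scale_16_to_7 = scale_16_to_10 * scale_10_to_7
print(f"7nm area vs 16nm: {scale_16_to_7:.3f}x")  # ~0.225x, roughly a 4.4x density gain
```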

Intel: you will do designs this way
Intel designers: but…
Intel: you will
Intel designers: OK

TSMC: you will do designs this way
Apple engineers: no we won’t
TSMC: OK

One wrinkle in all of this is the Intel acquisition of Altera, Xilinx’s primary competitor. Altera seem to have been struggling to tape out their designs in Intel’s 14nm process. If Intel is serious about using FPGAs in the datacenter, especially if they want to put the arrays on the same substrate as the processor, then they will need to get Altera’s fabric into 10nm and then 7nm hot on the heels of the server processors themselves. Xilinx’s worst nightmare would be producing a family of arrays in TSMC 10nm (only slightly better than Intel 14nm) while Altera gets a family out in Intel’s 7nm, a generation ahead.

    So, Xilinx skipping 10nm and Altera being acquired by Intel with an opaque roadmap makes for an interesting spectator sport.


    Nine Cost Considerations to Keep IP Relevant
    by Pawan Fangaria on 09-27-2015 at 12:00 pm

It’s been about 15 years since the concept of IP development and reuse took hold. In the recent past the semiconductor industry witnessed the launch of a large number of IP companies across the globe. However, according to Gary Smith’s presentation before the start of the 52nd DAC, the IP business is expected to remain stagnant for the next 5 years. There are reasons to believe Gary’s thesis. A bird’s-eye view shows an IP sitting at the heart of an SoC or subsystem. This is a significant reason for a system company to fully assess an IP before using it, and also to assess the IP provider’s quality and other business practices.

At the tip of the iceberg it appears very simple to buy an IP and use it in your SoC design as required. However, there are significant implications of using an IP from both a business and a technical perspective; not all system companies have bought into the idea of using 3rd-party IP, barring some standard and common IP blocks from reputed suppliers. The standardization of IP blocks that go into most SoCs reduces cost for the overall SoC value chain; however, it can commoditize things to the point where it starts eroding the differentiated value of SoCs. Moreover, there are serious technical implications that need to be considered before using IP. There has been a significant change in the modern SoC ecosystem, with system companies experiencing an increasing need to customize IP before use in their SoCs.


Considering it from a macroeconomic angle in a consolidating semiconductor industry, the IP-based model of SoC design is a good proposition, provided differentiated value is added to the SoC. However, it’s essential that the hidden costs of the specific tasks that make this model successful are well understood. Often certain tasks are not performed adequately because of a lack of understanding, and also because the associated costs were never considered. This can leave an IP in a poor state, inside or outside of an SoC. The success or failure of an IP in a system depends on how well these tasks are understood, invested in, and performed by the IP provider as well as the system integrator. These tasks carry specific costs that stand apart from the usual cost incurred in the normal course of IP development. These types of costs are discussed below, along with what incurs them and their rationale.

Cost of Differentiation – Differentiation in an IP has to be built in from the design level. System companies expect differentiation in IP that fits into their designs, so that they don’t have to design the same IP themselves. A common form of differentiation comes from IP vendors providing extended solutions, such as interconnect along with the cores. It’s true that such differentiation can again become common across different SoC vendors; however, it moves the IP up a level. IP providers such as ARM, Synopsys, Cadence and some others are providing subsystem-level IP solutions. On the other side of the coin, an IP provider can work in joint collaboration with an SoC vendor to design a completely differentiated IP. In this case the cost can be very high, and in-sourcing the complete IP team may be preferable for the SoC vendor. In other words, an in-house IP team at the SoC vendor can work at the subsystem level, which allows the team to add enough differentiation, do trial layouts and optimize, and thus reduce risk and time-to-market.

Power is becoming a prime criterion for differentiation, especially in the mobile and IoT markets. An IP characterized for certain power parameters in a particular technology needs to be re-designed in most cases for a newer technology; moreover, the dynamic power profile may change significantly with the use cases.

    Considering PPA (Power, Performance, and Area), an IP can be designed to have flexibility to scale between different factors such as power and performance according to the technology used.

A new concept of IP abstraction is emerging, where an IP is delivered at a higher level of abstraction and goes through high-level synthesis (HLS) at the SoC end. This provides scope for differentiating the IP and the SoC in power consumption. Qualcomm and Google have used this approach with the Calypto (now Mentor) HLS solution, where the IP is delivered as ‘C’ code that can be further optimized while being integrated into the SoC at system level. A start-up, Adapt IP, offers delivery of IP at a high level of abstraction. In this scenario, the cost for the IP vendor can decrease, but it gets added to the SoC, for the SoC vendor to account for in differentiation and implementation. Moreover, this brings in a newer methodology for SoC architecture exploration and integration at the system level, and asks for fresh investment and learning. I will talk more about it in later sections.

Cost of Customization – One may think of customization as part of differentiation, but they are actually distinct. Customization is a process that takes place during integration of an IP into an SoC. There is a separate section on integration in this article; in this section I am talking about the provisions that need to be made in the IP itself to make it customizable for different environments. Interconnect IP is a good example: Arteris FlexNoC can be configured and customized for on-chip interconnections to provide the best latency, the least congestion, and other optimizations. Similarly, power management is another area where configuration can be added, for example for power harvesting. So the question is: how configurable is your IP, so that it can be customized for different environments? Configurability adds provisions for your IP to be customized across environments, which increases its value by letting it operate in a wider range of possibilities. More often than not, an SoC vendor may need to ask an IP vendor to add specific customization to the IP so that it fits exactly into the scheme of the SoC. This situation can be extrapolated to the point where the IP gets transformed into a subsystem; a proper evaluation of the cost of such customization must be done.
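As a toy illustration of what “provisions for customization” can look like, here is a sketch of a configuration-driven IP wrapper. Every parameter name is hypothetical, invented for illustration, and does not correspond to FlexNoC or any other vendor’s actual options.

```python
# Toy sketch: a configurable interconnect IP described by a validated
# parameter set, so one code base can be customized per SoC environment.
# All parameter names are hypothetical, not any vendor's real options.
from dataclasses import dataclass

@dataclass(frozen=True)
class InterconnectConfig:
    num_masters: int = 4
    num_slaves: int = 8
    data_width_bits: int = 64          # e.g. 32/64/128 depending on the SoC
    max_outstanding: int = 16          # buffering vs. area trade-off
    low_power_retention: bool = False  # market-segment-specific option

    def validate(self):
        if self.data_width_bits not in (32, 64, 128):
            raise ValueError("unsupported data width")
        if self.num_masters < 1 or self.num_slaves < 1:
            raise ValueError("need at least one master and one slave")
        return self

# An "automotive" variant of the same IP, customized without forking it:
auto_cfg = InterconnectConfig(data_width_bits=32, low_power_retention=True).validate()
print(auto_cfg)
```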

Another kind of customization can be for different market segments, such as automotive, which needs a wide range of operating temperatures and other environmental parameters.

In certain market segments like IoT, where standards can vary widely, SoC vendors prefer adding custom IP in-house rather than buying from outside.

Cost of Characterization – This is a big area where IP needs investment, specifically at advanced technology nodes, where the process can vary significantly between different foundries at the same node. That demands characterization of the IP for every process variant. It depends how much pre-characterization can be done at the IP level; the SoC vendor might ask for special characterization at the specific node chosen for the SoC. A level of prudence helps here. An IP for GPU or mobile processing may need advanced nodes like 14nm FinFET, and hence characterization for the process variants at those nodes will be needed. However, an IP for other applications that can stay at older nodes may not need as many characterizations. But there may be other complications for specific applications. For example, an IP for automotive applications can stay at older nodes (although moving down from 150nm and 90nm) such as 55nm, 40nm, or even 28nm in specific cases; however, it will need characterization across a wide range of temperature and other PVT conditions.
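To see why characterization cost grows so quickly, here is a small sketch that enumerates PVT corners; the corner values are illustrative, not any foundry’s actual list:

```python
# Characterization effort scales with the PVT corner count:
# corners = processes x voltages x temperatures (values illustrative).
from itertools import product

processes = ["ss", "tt", "ff", "sf", "fs"]         # process corners
voltages = [0.9, 1.0, 1.1]                         # nominal supply +/- 10%
temps_consumer = [-40, 25, 125]                    # deg C
temps_automotive = [-40, 25, 85, 125, 150]         # wider automotive-style range

for label, temps in [("consumer", temps_consumer), ("automotive", temps_automotive)]:
    corners = list(product(processes, voltages, temps))
    print(f"{label}: {len(corners)} corners per macro")
# Multiply again by every foundry/process variant the IP must support.
```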

Within an IP, characterization can be done at the fundamental unit level, such as the bit cell, and at the macro level. The fundamental-unit-level characterization may not change frequently, but macro-level characterization may change with the design, so that kind of characterization needs to be planned accordingly.

Cost of Acquisition – The cost of acquiring IP is a very important aspect for SoC vendors. Large system companies have specific processes laid out for IP selection and procurement, covering items such as the quality of the IP, ease of integration, the IP vendor’s track record and ranking, vendor support throughout the SoC lifecycle, cost and RoI analysis for single or multiple uses of the IP, and so on. It’s prudent to state explicitly, especially for single use, what kinds of modifications and support elements, such as error-code revisions, defect fixes, and configuration modifications, are permitted. Also, the fees applicable for reusing the IP and making variants of it must be explicitly stated.

The evaluation of an IP and its integration into an SoC is coordinated with the associated EDA vendors, development partners, and design service providers along with the IP provider. It’s a costly affair, and hence the list of selected and qualified vendors needs to be kept short. The emergence of eSilicon as an IP service provider is a step in the right direction for IP evaluation before acquisition.

An important aspect comes into the picture when the IP needs some customization. In this event it’s important for both the IP provider and the SoC vendor to determine how the customized code will be maintained in the future, and whether the changes are generic enough to be merged into the main code branch. If not, then special support for that custom IP branch will be needed, requiring extra support resources borne by either the IP provider or the SoC vendor. So here the question arises: how much support is involved? Is it scalable and profitable for the IP provider to take it into the mainline? If not, is it justified for the SoC vendor to acquire the commercial IP, customize it, and maintain it, or to develop their own IP instead? If the SoC vendor customizes on top of the commercial IP, then the ownership rights must be clarified at the time of acquisition.

The cost of acquisition can also be structured on a long-run production basis, where the IP provider is paid royalty fees. This can be finalized on the basis of specific terms such as actual sales or shipments. For the IP provider as well as the SoC vendor, a typical challenge appears when the fab does not see enough RoI in creating a slot for a particular IP; this needs the right level of negotiation before embarking on the journey.

I will park this article at this stage, as it has grown long. Stay tuned for part 2, where I will cover the rest of the costs involved in using IP in the modern SoC ecosystem. For IP to keep adding differentiated value to SoCs, these costs must be well understood and accounted for in order to create a win-win situation between the IP provider and the SoC vendor. This is required to keep the IP industry healthy and growing from here.

    Pawan Kumar Fangaria
    Founder & President at www.fangarias.com


    How to Build an IoT Endpoint in Three Months
    by Tom Simon on 09-27-2015 at 7:00 am

It is often said that things go in big cycles. One example of this is the design and manufacturing of products. People long ago used to build their own things. Think of villagers or settlers hundreds of years ago: if they needed something, they would craft it themselves. Then came the industrial revolution, and two things happened. One is that if you wanted something like furniture or tools, you were better off buying it. The other was a loss of skills; people ‘forgot’ how to make things. This meant that the ability to create was concentrated in the hands of a few, and individuals had less control over what was available to them.

The maker movement has changed all that. The ability to design and build things has come full circle. Now, with 3D printers and Arduino boards, you can design a range of things, from simple everyday items to sophisticated appliances. In many ways the Internet of Things was started along this same pathway: people took low-cost development systems and tools and added sensors, wireless, and often servos to make a wide variety of useful things.

    Semiconductor design has followed an analogous path. Early on design teams were small and they built chips that became the components of that era’s products. I remember calling on chip design companies in the late 90’s where it was literally three guys with a Sun workstation running layout software.

That era has ended, and recently it has seemed that the only feasible way to design chips was at places like Nvidia, Intel, Freescale, Marvell, etc. They can apply design teams of hundreds of people to build their products. If you had an idea for a design and did not have the manpower, your idea went unbuilt.

However, things are changing again. The same market and technology forces that drove the maker movement, and pushed for standardization of building blocks, have spilled over into the internals of chip design. With the need for increased sophistication, the tools for building integrated platforms for IoT have been growing and maturing. We all know the formula by now: MCU, on-board NVM, one or more radios, ultra-low power, security, interfaces to sensors, and a SW development environment to build user applications.

Differentiation is the key to success; product developers know they need to optimize their platform for their specific needs. ARM recently embarked on a project to test the real-world feasibility of having a small team build a custom IoT end-point device in a fleeting 3 months. ARM used the TSMC Open Innovation Platform Forum in September to present their results.

ARM Engineering Director Tim Whitfield gave a comprehensive presentation on their experience. The challenge was to go from RTL to GDS in 3 months with 3 engineers. Additionally, there were hard analog RF blocks that needed to be integrated. They went with ARM mbed OS to make prototyping easy, and included standard interfaces like SPI and I2C for easy integration of external sensors.

ARM used their arsenal of building blocks, which includes the Cortex-M3, Artisan physical IP, mbed OS, Cordio BT4.2, ancillary security hardware, and some TSMC IP as well. The radio was the most interesting part of the talk. A lot of things have to be done right to put a radio on the same die as digital logic. The Cordio radio is partitioned into a hard macro containing all the mixed-signal and RF circuitry. The hard IP also contains real-time embedded firmware and an integrated power management unit (PMU), critical for effective low-power operation. It comes with a Verilog loop-back model for verification. The soft IP for the radio is AMBA-3 32-bit AHB compliant. It is interrupt driven and can operate in master and slave mode with fully asynchronous transmit and receive.

When adding the radio to the design, designers are given guidelines to avoid supply coupling in the bond wires. This is addressed by adding 100pF of decoupling per supply, using CMOS-process-friendly MOM caps. They also received guidance from the radio team on how to prevent substrate coupling, using a substrate guard ring with well ties. Tim suggested that the guard ring could possibly be delivered as a macro in the future.

They discovered that without a cache, 80% of their power would be used reading the flash and only 20% running application code, so they reduced the power overhead by adding a cache. Tim sees opportunity to further improve power with additional cache enhancements.
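The 80/20 split suggests a simple first-order model in which only the flash-read share shrinks, in proportion to the cache hit rate; a sketch, assuming the hit rate is the only variable:

```python
# First-order power model from ARM's observation: with no cache, 80% of
# power goes to flash reads and 20% to executing code. A cache removes
# flash reads on hits, so (assumption) only the 80% share scales down.
FLASH_SHARE, CORE_SHARE = 0.80, 0.20

def relative_power(hit_rate):
    return FLASH_SHARE * (1 - hit_rate) + CORE_SHARE

for hit_rate in (0.0, 0.5, 0.9, 0.95):
    print(f"hit rate {hit_rate:.0%}: {relative_power(hit_rate):.2f}x of uncached power")
# e.g. a 90% hit rate cuts total power to ~0.28x in this toy model.
```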

They taped out in August and are now waiting for silicon from TSMC in October. That, of course, will be the real test. Whatever lessons are learned will be applied to improve the process for customers down the road.

    This is certainly just a “little bit more” impressive than a maker getting their Arduino project working. Nonetheless, it is definitely a branch of the same tree. Enabling this kind of integration and customization democratizes product development and will in turn create new opportunities. I look forward to hearing how the first silicon performs.


    5 uses of Bluetooth Smart Technology that you didn’t know!
    by Daniel Nenni on 09-26-2015 at 7:00 am

Ever wondered why they named the most universal wireless technology Bluetooth? Apparently it was named after a 10th century Danish King named Harald Blåtand whose nickname was Bluetooth because one of his teeth was blue. King Bluetooth’s claim to fame is that he helped peacefully unite Norway, Sweden, and Denmark. Since Bluetooth was created by Ericsson (a Swedish company) and it allows you to share voice, data, music, photos, and other warring factions amongst our mobile devices I get the naming convention, absolutely.

Bluetooth is a wireless technology standard for exchanging data over short distances (using short-wavelength UHF radio waves in the ISM band from 2.4 to 2.485 GHz) between fixed and mobile devices, and for building personal area networks (PANs). Invented by telecom vendor Ericsson in 1994, it was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization (Bluetooth 101).

    The latest version is called Bluetooth Smart which is an extension of the original Bluetooth brand focused on low power implementations. Think IoT and wearables where you want to go days, months, or even years on a single charge of a tiny battery.

    Bluetooth Smart provides a very low power, low MIPs & low gate count platform for applications requiring Single Mode Bluetooth Low Energy (BLE), e.g. smartwatches, hearing aids, wearable sensors for medical /sports (heart rate, glucose, temperature), remote controls, toys, environment sensors, location beacons and many other machine–machine communications (CEVA RivieraWaves Bluetooth Platforms).

As you can imagine, Bluetooth Smart applications are exploding, as are the presentations on Bluetooth. The latest and greatest one I have seen is from Dialog Semiconductor (they just acquired Atmel), presented on September 16th, 2015 at Capital Markets Day in London. The presentation title is Personal Portable Connected and in 22 PDF pages it covers Bluetooth Smart: market update, product update, and key takeaways.

    *Spoiler alert* Here is the Bluetooth® Smart market size:

    And here are the 5 uses of Bluetooth Smart Technology that you probably didn’t know compliments of CEVA:

1.) Nest Thermostat Gen 3 and Nest Protect Gen 2: the latest-generation Thermostat and Protect devices from Nest both include Bluetooth Smart connectivity to simplify maintenance and control of these devices from your smartphone:

2.) Salt Card, a keyless entry method for smartphones. With the SALT card, which is the shape of a credit card, users no longer need to enter their PIN or pass codes while within 10 ft. of the card, as the device and card wirelessly sync thanks to Bluetooth Smart technology:

3.) Li-Ning Smart Trainers: Xiaomi’s smart trainers, designed by sportswear brand Li-Ning, are available in China now. Bluetooth Smart chips in the sole of the trainer send information to Xiaomi’s existing Mi Fit app on your smartphone, measuring steps taken and calories burned:

4.) Eizo ‘Foris’ Gaming Monitor: with Bluetooth connectivity built into the monitor, gamers are able to use their mobile devices to adjust the colour, brightness, gamma and other settings, and a notification is posted in the corner of the screen when a call or message arrives on their smartphone:

5.) Olympus Air A01 Smartphone Lens: turns your smartphone into a mirrorless camera, using Bluetooth to connect and control the lens directly from your iPhone or Android smartphone:


    Phablet Impact on PC Sales
    by Daniel Payne on 09-25-2015 at 4:00 pm

Apple iPhone 6 and 6s users are recent converts to the latest growth trend in smartphones: large screens, 5.5″ in size and aiming even higher each year. I’ve owned a 5.5″ smartphone from Samsung for some 3 years now, and have immensely enjoyed the larger screen size for getting my daily work done: web browsing, LinkedIn reading, Google+ browsing, tweeting, email, Facebook, messaging, writing notes, taking photos, sharing docs on Google Drive, etc. I’ve also owned two generations of the Apple iPad, a device that lets me attend an event like DAC and take notes all day long without having to charge the battery, typing away on the Logitech keyboard. I haven’t owned a desktop computer for about 10 years now, instead using a MacBook Pro laptop because of its portability and generous 17″ display.


    Samsung Galaxy Note 4

The research company IC Insights just published an info-packed bulletin titled “Large-Screen Smartphones Erode Total Personal Computing Unit Growth”. Starting in 2010 we saw tablet devices enter the market and drive new growth, and by 2013 the volume of tablets shipped was larger than that of notebooks. Something unexpected happened in 2014: tablet growth slowed way down as larger smartphones became more popular. Take a look at the info-graphic for total personal computing unit growth from 2000 through 2018:

The trend in 2015 is that both PC and tablet IC sales are declining, with PC IC sales looking like a $57.7B market (-3%), down from a $59.4B market in 2014 (+5%). IC sales for tablets could go down 5% this year to $16.6B. One segment seeing continued growth is ICs for Internet and cloud-computing laptops like the Chromebook, with an increase of 38% to $931M. Here’s the table showing the IC market for personal computing systems:

These numbers from IC Insights are confirmed by companies like Apple, which has seen decreases in sales of the iPad and iPad Mini, so one way to counter that trend is to introduce new products like the larger iPad Pro with a 12.9″ display, available in November this year.


    Apple iPad Pro

Apple was smart to add a keyboard and optional stylus for this device, as the keyboard makes it a replacement for some notebooks, and the stylus is great for graphic artists and anyone who loves handwriting or painting.

    For the complete report, visit IC Insights.


    Together At Last—Combining Netlist and Layout Data for Power-Aware Verification
    by Beth Martin on 09-25-2015 at 12:00 pm

The market demanded that the gadgets it loves become ever more conscious of their power consumption, and chip designers responded with an array of clever techniques to cut IC power use. Unsurprisingly, these new techniques added to the complexity of IC verification. When you’re verifying a design that has 100+ separate power domains, plus tightly packed digital and analog parts in the same substrate, proximity effects like noise, latchup, and parasitics require more than basic on-off verification, because the market also demands that these low-power devices actually work when you turn them on.

    In most cases, traditional LVS methodology is simply not good enough to ensure circuit performance and reliability in these designs, because some design rule checks can’t be performed without adding layout features, and some errors are nearly impossible to debug.

I had a chance to talk to some of Mentor’s engineers who are developing ways to check these new design rules, such as deep n-well biasing and well implant, and to extract parasitic effects for mixed-signal SoCs with multiple power domains, using a new approach and new algorithms. Sridhar Srinivasan is the technical lead for Calibre® PERC™, Frank Feng is a methodologist, and Yi-ting Lee is a foundry technical liaison. They presented a paper at the China Semiconductor Technology International Conference (CSTIC) about reliability verification, which you can read for yourself here.

    Not surprisingly, power is a hard design problem to generalize. Even the “simplest” device, say, one with four power domains, has thousands of failure points. When you start increasing the number of domains, adding in use cases, varying voltages, etc., and trying to analyze how each power state affects the functionality of other parts that may be turning on or off themselves, it’s pretty easy to understand how power-aware verification gets really difficult really fast.

    All three pointed out the need for verification tools that can combine and analyze the netlist and layout simultaneously both to understand power intent and identify a variety of power-related problems. In their work, the trio focuses on several proximity-related issues that can be power-domain-dependent:

    • Verifying parasitic junction diodes
    • Accuracy of deep n-well biasing voltages
    • Identification of devices with latchup risks
    • Situations leading to leakage current due to domain crossings

    The rule checks that involve these proximity effects combine electrical rule checks and geometry-based DRC rule checks. To solve the verification challenge, they used a netlist-based infrastructure with a programmatic interface to the geometry database that was developed at Mentor Graphics. This technology is available as part of the Calibre® PERC™ product from Mentor Graphics. The inputs to the flow are a schematic netlist, and/or a GDSII/OASIS/LEFDEF layout, and a user-described rule set that includes power, ground, and IO setup. The setup includes the specification of the various power signals present in the system, specification of the known internal supplies, programmatic specification of derived internal supplies based on the circuit structure (like charge pump, level shifter, etc.), and design-dependent property propagation “stop” rules.

    First, they said, the tool reads in the netlist representation of the design to construct the graph. Then the user-defined signal specifications are processed—the explicit signal definitions with net names are processed first, followed by the programmatic, structure-dependent specifications. All signal specifications are saved as properties and are propagated through the graph. Srinivasan says they included hierarchical APIs to control the propagation of these properties based on the design device types (PMOS, NMOS, RESISTANCE, etc.), including custom device types, and then let the user specify blocking conditions. After property propagation, the nets in the graph have the propagated data, and the user can then inspect the propagated properties and defined properties through a handy introspection API at each net.
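The propagation scheme they describe maps naturally onto a graph traversal. Here is a conceptual sketch, emphatically not the Calibre PERC API, of propagating a supply property through a toy netlist graph with user-defined stop conditions:

```python
# Conceptual sketch of supply-property propagation with "stop" rules, as
# described above. NOT the Calibre PERC API, just the idea: breadth-first
# propagation through a net graph, blocked at user-specified device types.
from collections import deque

# net -> list of (device_type, neighbor_net); toy netlist for illustration
netlist = {
    "VDD_IO": [("RESISTANCE", "n1")],
    "n1": [("NMOS", "n2"), ("RESISTANCE", "n3")],
    "n2": [],
    "n3": [],
}

def propagate(start_net, prop, stop_device_types):
    props = {start_net: {prop}}
    queue = deque([start_net])
    while queue:
        net = queue.popleft()
        for device_type, neighbor in netlist.get(net, []):
            if device_type in stop_device_types:
                continue  # blocking condition: do not propagate through
            if prop not in props.setdefault(neighbor, set()):
                props[neighbor].add(prop)
                queue.append(neighbor)
    return props

# Propagate the IO supply property, stopping at MOS devices:
print(propagate("VDD_IO", "VDD_IO_domain", stop_device_types={"NMOS", "PMOS"}))
```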

    With large and complex designs, runtime is an issue in every step of the flow, including verification. To reduce runtime, the properties can be collapsed. For example, instead of assigning unique properties to each power supply, you can group the power supplies by domains and voltage ranges and assign properties to each group. Srinivasan said that keeping the total number of properties below 64 provides a major performance advantage, as the properties can be encoded without the special data structure needed to create complex bit sets.
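The below-64 threshold makes sense if each property occupies one bit of a single machine word, so that property-set union becomes a bitwise OR rather than an operation on a heavier set structure; a sketch of that encoding, with invented property names:

```python
# Why <64 properties is fast: each property can be one bit of a single
# 64-bit word, so merging properties during propagation is a bitwise OR
# instead of a heavier set data structure. (Property names invented.)
PROPERTIES = ["VDD_core", "VDD_io", "VDD_analog"]  # up to 64 of these
BIT = {name: 1 << i for i, name in enumerate(PROPERTIES)}

net_props = 0                   # one integer per net
net_props |= BIT["VDD_core"]    # tag the net with a supply property
net_props |= BIT["VDD_analog"]

def has(props, name):
    return bool(props & BIT[name])

print([p for p in PROPERTIES if has(net_props, p)])  # ['VDD_core', 'VDD_analog']
```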

The trio performed reliability checks on real-world multiple-power-domain and mixed-signal designs using Calibre PERC in tandem with traditional LVS and DRC. If you’d like to learn more of the details, download the paper (free, but registration required).

    As for the next phase? In the future, they said, the Calibre PERC tool will be able to handle device reduction, netlist transformation, and voltage transitions automatically with minimal user input, further improving usability. When it comes to the details, I’m not sure exactly what all that will mean, but I do know it signifies good things for designers struggling with complex power verification at advanced nodes.


    A Brief History of FPGA Prototyping
    by Paul McLellan on 09-25-2015 at 7:00 am

    Verifying chip designs has always suffered from a two-pronged problem. The first problem is that actually building silicon is too expensive and too slow to use as a verification tool (when it happens, it is not a good thing and is called a “re-spin”). The second problem is that simulation is, and has always been, too slow.

When Xilinx and Altera produced field-programmable gate arrays (FPGAs), which were reprogrammable, it didn’t take long for ASIC designers to realize that these could be a third prong to solve their verification problem: much cheaper than building silicon but much faster than simulation.

There were no commercial solutions for FPGA prototyping at first. Everyone who wanted to do it had to buy FPGAs (or an FPGA-based board) and then cobble together a flow that worked. The biggest issue was probably that the types of designs for which this was an interesting approach had more gates than the largest FPGAs, so the designs had to be partitioned. But partitioning a typical design meant that more signals needed to go between the various partitions than were actually available as pins on the FPGAs, so the pins needed to be multiplexed. This is a problem that continually gets worse since, to a first approximation, the number of gates grows quadratically and the number of pins linearly, meaning that there are thousands of gates per pin. So it was clearly not a straightforward approach; it required a lot of knowledge about the design to get it ready, and a lot of knowledge about FPGAs and FPGA tool flows to actually get anything to work.
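The gates-versus-pins squeeze is easy to quantify; a sketch of the time-division multiplexing (TDM) ratio a partitioned design would need, with signal and pin counts chosen purely for illustration:

```python
# Why partition pins must be multiplexed: nets crossing between FPGA
# partitions routinely exceed the physical I/O pins, so several logical
# signals share one pin by time-division multiplexing (TDM).
# Counts below are illustrative, not from any specific device.
import math

cut_signals = 4800      # nets crossing between two partitions
physical_pins = 600     # I/O pins available on the inter-FPGA connection

tdm_ratio = math.ceil(cut_signals / physical_pins)
print(f"TDM ratio: {tdm_ratio}:1")  # 8 logical signals per pin
# Each multiplexed hop divides the usable prototype clock rate, which is
# why partitioning quality matters so much.
```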

    Like most things where lots of customers are making something difficult independently, some people saw an opportunity to create a commercial product to serve the whole market. I expect there were other companies lost in the mists of pre-internet days but one company survives from that era to the present day.

In 1987 Hardi Electronics was formed in Sweden. In 2000, they created an FPGA prototyping system called HAPS. As described by Hardi: “HAPS is a modular, high-performance and high-capacity FPGA-based system for ASIC prototyping. HAPS comprises multi-FPGA motherboards and standard or custom-made daughter boards which can be combined in a wide variety of ways in order to quickly assemble ASIC prototyping systems. Rapid assembly is facilitated by the availability of many standard daughter boards including video processing, memory and interfaces to Ethernet, USB, PCI Express and ARM core modules. Customers prefer the time-to-market advantage of using an off-the-shelf prototyping solution, which can save months in the critical verification phase.”

In 2007 Synplicity acquired Hardi Electronics for $24M. At the time, Synplicity was one of two companies competing in the merchant FPGA synthesis market against the free (or nearly free) solutions provided by the FPGA vendors, Mentor being the other. But Synplicity did not remain independent for long; Synopsys acquired them in 2008 for $227M. HAPS has been through several generations, the most recent of which, HAPS-80, was announced just last week.

In the meantime, Cadence also decided to create an FPGA prototyping solution. The first generation, imaginatively named the Rapid Prototyping Platform, came out in 2011. By the second generation it had a real name: in 2014 Cadence announced the second generation of the product, called Protium.

In 2003, another company, S2C, was founded in Silicon Valley to address the FPGA prototyping market. They have grown and now have several hundred systems installed. Development and manufacturing are done in China and Taiwan, respectively.

    So that is the scene today. About half the market is still people putting together their own prototyping systems and there are three suppliers with off-the-shelf product portfolios: Synopsys, Cadence and S2C. All the solutions consist of two parts. There is a hardware component, which consists of the range of boards and connectors. And there is a software component which takes the design, partitions it, handles the multiplexed signal connectivity between the arrays, creates the bitstreams to program them, and provides access and visibility to debug the design.

    FPGA prototyping can be used by chip designers to give them a way to run huge amounts of verification vectors, often including booting an operating system and bringing up drivers. It can also be used by software designers who need something on which to run their software before silicon is available. Even when silicon is available, the FPGA prototyping system often provides a better environment for debugging the code.


    Samsung to cut Semi Capex 20% due to over capacity
    by Robert Maire on 09-24-2015 at 4:00 pm

    Article confirms market fears…
An article in the Korea Times cites sources saying Samsung will cut semiconductor capex by 20% due to current oversupply and weak pricing. This is obviously a huge negative, as capex for 2016 will certainly be down significantly from 2015 given these cuts, which follow cuts by Intel and others. We can only assume that Micron and SK Hynix will also cut DRAM spending, as they too are acting more rationally than they have in the past.

    This will force further capitulation…
Semiconductor bulls have been saying that 2016 capex is in good shape and that Samsung will keep capex flat. Those analysts who have remained too bullish for too long are going to have to backpedal, capitulate, and start taking numbers down.

    How long a drop in spend???
We would expect that Samsung’s spending cut will remain in effect throughout 2016, as it will take a while to sop up the excess capacity in the market. It is also clear that 3D NAND and foundry will not make up for the drop in DRAM spend, as DRAM spending has been running at a much higher than normal rate for a while now.

We have been very clear that this wouldn’t last forever, and that sooner or later things would revert to the industry’s normal cyclical behavior. We continue to maintain that while the cyclicality is not as severe, it is nonetheless still a cyclical industry, though many in the industry have developed amnesia given the length of the upturn.

    Micron & SK Hynix likely to follow…
With everybody in DRAM behaving better, it’s logical to assume that both SK Hynix and Micron will also cut back their spend. We would assume that Micron, which is naturally a cheapskate when it comes to capex, will easily slow spending. We do expect an uptick in spending related to XPoint memory, but it is not likely to make up for a drop in DRAM. SK Hynix is not in the financial shape of Samsung and is thus less able to tilt against the winds of declining DRAM pricing and oversupply.

    Stocks will see further downside…
We had suggested $60 downside for LRCX, which has fallen off the proverbial cliff in the last week. Given the momentum it may push through $60, and all bets would then be off. This is a far cry from the $80+ the stock had reached, but you live by the sword of memory, you die by the sword of memory.

A small-cap stock with high Samsung exposure is Mattson, which could easily face another near-death experience if Samsung cuts spending significantly, as Mattson is a marginal supplier rather than a core supplier and thus on the edge. We had suggested $2 downside, and again, we could break through that as well.

    KLAC is the least impacted…

Of the large semi equipment companies, KLAC will be the least impacted, as they have the lowest exposure to memory and higher exposure to logic and foundry, which seems to be at or near a bottom. KLAC recently said that business seemed to be at the high end of very lowered expectations.

    We remain underweight the group…

We have been very clear that we were not at the bottom for the stocks, and we are likely still not there yet, but we are getting closer. Q3 reports could be one of the last nails in the coffin that puts the stocks at the bottom of their cyclical valuation range. We don’t see a lot of support for the group or near-term catalysts to turn things around. News flow will remain negative in the near term.

    Robert Maire
    Semiconductor Advisors LLC