
The Emerging Importance of Parallel SPICE
by admin on 05-15-2016 at 7:00 am

SPICE simulation is the workhorse tool for custom circuit timing validation and electrical analysis. As the complexity of blocks and macros has increased in advanced process nodes — especially with post-layout extraction parasitic elements annotated to the circuit netlist — the model size and simulation throughput of traditional SPICE engines became problematic, even intractable. A class of Fast SPICE products emerged that incorporated modified approaches for device modeling, network solver algorithms, and transient simulation timestep management. These tools provide significantly improved throughput and capacity, and are typically applied to simulation of networks such as large memory arrays.

Nevertheless, circuit simulation requirements have become more diverse and demanding than ever, such as:

  • validation of large, mixed-signal IP blocks (“big A/little D” designs)
  • evaluation of models incorporating Verilog-A semantics and S-parameter system elements
  • low power validation (with requirements for accurate sub-threshold leakage currents, and including detailed power rail models)

With the advent of multi-core, large memory footprint compute servers, parallel SPICE products have been introduced, to address the capacity/throughput issues and to provide the requisite accuracy of “traditional” SPICE for more demanding applications.

I recently had the opportunity to chat with Bruce McGaughy, CTO of ProPlus Design Solutions, Inc. He provided an enlightening perspective on the latest in the parallel SPICE methodology, as represented by the ProPlus NanoSpice-Giga toolset.

First, a little background…

Traditional SPICE execution flow
The figure below illustrates the typical SPICE execution flow, for a transient simulation.


(From: Chen X., Wang, Y., and Yang, H., 2012 IEEE 26th International Parallel and Distributed Processing Symposium)

The principal step is the convergence of Kirchhoff’s Current Law (KCL) and Voltage Law (KVL) for the circuit network matrix at each successive (adaptive) timestep, as depicted in the Newton-Raphson iterations loop in the flow diagram. The KCL network solution is derived, and KCL and KVL are verified to a suitable absolute and relative error accuracy tolerance, as defined by ABSTOL and RELTOL settings. In the figure, the inner Iterations part is expanded to show the corresponding substeps:

  • model evaluation
  • factoring of the (sparse) network matrix A into Upper and Lower matrices
  • solving A·x = b for the solution vector ‘x’ (node voltages and branch currents) at each timestep, given the applied stimulus vector ‘b’, then confirming KCL and KVL
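To make the Newton-Raphson loop above concrete, here is a minimal, hypothetical Python sketch: a single node fed by a 5 V source through a 1 kΩ resistor into a diode. The component values and tolerances are illustrative only, not taken from any real tool, but the convergence check mirrors the ABSTOL/RELTOL criterion described above.

```python
import math

# Hypothetical single-node circuit: 5 V source, 1 kOhm resistor, diode to ground.
VS, R = 5.0, 1e3
IS, VT = 1e-14, 0.025      # illustrative diode saturation current, thermal voltage
ABSTOL, RELTOL = 1e-12, 1e-6

def f(v):
    """KCL residual at the node: resistor current in minus diode current out."""
    return (VS - v) / R - IS * (math.exp(v / VT) - 1.0)

def dfdv(v):
    """Derivative of the residual (the 1x1 Jacobian for this toy network)."""
    return -1.0 / R - (IS / VT) * math.exp(v / VT)

v = 0.6                    # initial guess near the diode's forward voltage
for _ in range(100):       # Newton-Raphson iterations
    dv = -f(v) / dfdv(v)
    v += dv
    if abs(dv) < ABSTOL + RELTOL * abs(v):   # SPICE-style convergence test
        break

print(round(v, 3))         # converged node voltage, about 0.67 V
```

A full engine performs this same iteration on the entire network matrix at every adaptive timestep; this scalar version only shows the shape of the loop and its tolerance-based exit condition.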

The key differentiation of traditional SPICE from the Fast SPICE approach (to be discussed next) is that the full network matrix is solved (“converged”) at the end of the Newton-Raphson loop, for the selected timestep increment, to the accuracy defined by the error tolerances.
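The LU factor/solve substeps can be sketched in a few lines. The 3×3 matrix below is invented for illustration, and the factorization is dense and unpivoted; a production SPICE solver works on large, sparse, pivoted matrices, so treat this purely as a sketch of the A = LU, Ly = b, Ux = y sequence.

```python
# Hypothetical 3x3 nodal matrix; a real SPICE matrix is sparse and pivoted.
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [1.0, 2.0, 3.0]
n = len(A)

# Doolittle LU factorization: L has a unit diagonal.
L = [[0.0] * n for _ in range(n)]
U = [[0.0] * n for _ in range(n)]
for i in range(n):
    L[i][i] = 1.0
    for j in range(i, n):       # row i of U
        U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
    for j in range(i + 1, n):   # column i of L
        L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]

# Forward substitution: solve L y = b
y = [0.0] * n
for i in range(n):
    y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))

# Back substitution: solve U x = y
x = [0.0] * n
for i in reversed(range(n)):
    x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
```

Separating the factor step from the solve step, as the flow diagram does, lets an engine reuse L and U across multiple right-hand sides when the matrix itself has not changed.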

Fast SPICE approach

Fast SPICE products utilize a number of “speed-up” methods, to achieve greater capacity and throughput over traditional SPICE. These methods include:

  • pre-characterized, multidimensional table-lookup device models (accelerating the model evaluation step in the figure above)
  • aggressive RC reduction of annotated parasitics
  • multi-rate, event-driven circuit analysis (and timestep advance) for network partitions

These approaches are ideally-suited for designs such as large memory arrays, with regular hierarchy (enabling aggressive model partitioning), and with limited inter-partition circuit activity.
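The first of these speed-ups can be illustrated with a hypothetical table model for a diode: sample the analytic equation once on a voltage grid during pre-characterization, then linearly interpolate at simulation time. The grid spacing and device parameters here are invented for illustration.

```python
import bisect
import math

IS, VT = 1e-14, 0.025   # illustrative diode parameters

def exact_current(v):
    """Full analytic model: relatively expensive to evaluate."""
    return IS * (math.exp(v / VT) - 1.0)

# Pre-characterize: sample the model once on a fixed voltage grid.
GRID = [i * 0.001 for i in range(801)]       # 0 V .. 0.8 V in 1 mV steps
TABLE = [exact_current(v) for v in GRID]

def table_current(v):
    """Fast path: linear interpolation between pre-computed samples."""
    i = bisect.bisect_right(GRID, v) - 1
    i = min(max(i, 0), len(GRID) - 2)
    frac = (v - GRID[i]) / (GRID[i + 1] - GRID[i])
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

# Interpolation avoids the exponential at runtime, but is only approximate.
v = 0.6543
err = abs(table_current(v) - exact_current(v)) / exact_current(v)
```

The interpolation error that remains, which grows with grid spacing and model nonlinearity, is the accuracy trade-off inherent in table models.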

Bruce noted that there are classes of simulation problems where Fast SPICE is not the ideal approach. Specifically, the event-driven method improves performance by evaluating active partitions, and not the entire network in total — in other words, full network convergence is not required at each timestep. As a result, there is a potential loss of accuracy in critical circuit electrical measures.

Parallel SPICE


The figure above from ProPlus provides a comparison between parallel SPICE and Fast SPICE approaches.

Their NanoSpice-Giga product is in the class of parallel SPICE methods. The target hardware execution platform is a multiprocessor (e.g., 32 cores), shared memory server, running ~8-16 threads.

The model input and user settings are the same as traditional SPICE, including: Verilog-A, device aging support, etc.

No table lookup models are used. Bruce highlighted, “Given the increasing model nonlinearities, and the diversity of device operating domains, table models are less accurate for circuit electrical analysis.”

Referring again to the SPICE flow diagram above, the model evaluation step is relatively easy to parallelize. The key technical advancement in this class of SPICE products is the parallelization of the LU factoring and full network solver algorithms. There are additional memory utilization optimizations applied in NanoSpice-Giga, as well, to support netlists exceeding 50M devices and 500M parasitics.
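Of the substeps in the flow diagram, model evaluation parallelizes naturally because each device's currents depend only on its own terminal voltages. Below is a toy Python sketch of the idea, using a thread pool purely for illustration (CPython's GIL prevents a real speed-up here; a production engine uses native threads over shared memory):

```python
import math
from concurrent.futures import ThreadPoolExecutor

IS, VT = 1e-14, 0.025   # illustrative diode parameters

def evaluate_device(v):
    """Per-device model evaluation: independent of every other device."""
    return IS * (math.exp(v / VT) - 1.0)

# Hypothetical terminal voltages for a large batch of devices.
voltages = [0.3 + 0.4 * (i % 100) / 100.0 for i in range(10_000)]

# Serial reference evaluation.
serial = [evaluate_device(v) for v in voltages]

# Parallel sketch: the pool distributes devices across worker threads.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(evaluate_device, voltages))

assert parallel == serial   # same answers, independent of thread count
```

Parallelizing the LU factor and solve is far harder, because the factorization has data dependencies across the matrix; that is why the solver parallelization, rather than model evaluation, is described above as the key technical advancement.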

NanoSpice-Giga has the added benefit of its relationship to other ProPlus solutions. The flagship product, BSIMProPlus, is used by all major foundries for developing PDK device model parameters, whether the process is bulk, FinFET, FD-SOI, etc.

The NanoSpice engine is integrated in BSIMProPlus (for model fitting), and is highly tuned for execution runtime. As illustrated in the figure below, this same optimized kernel is incorporated into the parallel NanoSpice-Giga product, as well.


Bruce provided a couple of customer examples, illustrating the application domains where NanoSpice-Giga is being used. Silicon Creations is developing high-speed SerDes IP (in 28nm, 16nm, and 10nm nodes) with this parallel SPICE simulation methodology. eSilicon Corp is characterizing memory arrays and custom IP in advanced nodes using parallel SPICE, as well.

SPICE-based circuit simulation will remain a key component in the circuit designer’s toolbox. Yet, designers need to ensure that the algorithmic approach used — i.e., traditional, fast SPICE, parallel SPICE — is best-suited to the model analysis requirements, with the right tradeoffs on accuracy, throughput, compute resources, license cost, and usability. An increasing set of applications will require parallel SPICE methods, such as NanoSpice-Giga from ProPlus Design Solutions.

For more information on NanoSpice-Giga, please follow this link.

-chipguy


ARM Server Update (NFV)
by Patrick Moorhead on 05-14-2016 at 7:00 am

There has been a lot of buzz in the industry over the last several years about ARM Holdings’ and its partners’ plans to go after the datacenter market with a particular focus on servers. Companies like Advanced Micro Devices, Applied Micro Circuits Corp. (AppliedMicro), Cavium, and Qualcomm, are now in various stages of product development with ARM-based server SoCs. AppliedMicro was the first to ship ARM server chips.

It has taken many years, but a number of high profile customers in the web services, enterprise, and high performance computing markets are in the midst of ARM server technology evaluations and proof of concept deployments. While progress has been made to drive customer interest by the ARM community, ARM server players still face a significant uphill battle as they look to compete with Intel’s current dominance in the server space at 98% market share.

The trend toward software-defined infrastructure in the storage and networking markets opens the door for commodity datacenter hardware technology to be married with specialized software stacks to improve efficiencies and lower costs. The market shift to software-defined means that hardware requirements between storage, servers, and networking are converging in some areas which could allow infrastructure hardware providers to expand their reach to service tangential markets and to go after other slices of the datacenter pie.

AppliedMicro (Applied Micro Circuits Corporation), the current time-to-market leader amongst the ARM Holdings server pack, made a recent announcement at Mobile World Congress with embedded system provider Kontron to develop a platform designed for the Network Function Virtualization (NFV) space. This move by AppliedMicro is a great proof point that demonstrates the potential for the ARM server community to think beyond just servers and to target other markets within the datacenter.

NFV offers many benefits to mobile networks by moving from purpose-built monolithic network appliances to software-based network functions using commodity cloud-based hardware with centralized control. Carriers can experience greater efficiency with NFV by taking many of the primary network functions out of the hardware and putting them into easily manipulated software for fast service updates and changes.

NFV encompasses the set of functions—like load balancing, firewalls, intrusion detection, or WAN acceleration—that live along the border of the network and help the carrier maintain their service and connection to the customer. This critical tier of network infrastructure makes NFV a huge part of telco carriers’ plans to attack 4.5G (LTE Advanced Pro), 5G, and Industrial IoT by providing them with a flexible, low-cost approach to infrastructure deployment. The flexibility that carriers get with NFV allows them to customize their Quality of Service (QoS) and security offerings in a way that is profitable and efficient for the business.


Kontron showing off both Intel Xeon and AppliedMicro X-Gene sleds
(Photo credit: Patrick Moorhead)

While NFV has been primarily driven by the carriers, enterprises also can benefit from the market shift to NFV. Historically, some of the challenges enterprise IT has faced when trying to roll out new services quickly were based on their carriers’ limitations to deliver the required networking capability to support these services. By virtualizing networking functions through NFV, a carrier can deliver networking faster, make changes quickly, and drive better overall service levels for enterprise IT. Additionally, because these new virtualized functions are software-based and tend to rely on open standards, it is easier for enterprise IT to plug into the carrier’s development frameworks and provide value-added services on top of the carriers’ service offerings.

Many new players are joining traditional network vendors to pave the way forward for NFV, offering new solutions to carriers and enterprise IT organizations looking to take the first steps forward. Because NFV is a greenfield opportunity without an established set of standards and incumbent players, this market is a great entry point for an up and coming architecture like 64-bit ARM datacenter-class SoCs. As I stated above, AppliedMicro announced at Mobile World Congress that the company is collaborating with German embedded systems vendor Kontron to develop the Symkloud 10GbE modular server targeted at the NFV space. I saw a fully-functional Symkloud server in Kontron’s booth at MWC last week, and was impressed with the system’s robustness and density. I think this platform is a great first step for AppliedMicro to go after the NFV space. Also, I believe AppliedMicro’s rich history in networking with their traditional OEM business makes them well-suited to understand and address the changing needs of telco and service provider customers.


AppliedMicro NFV compute sled
(Photo credit: Patrick Moorhead)

AppliedMicro has always highlighted three market segments: compute/networking, storage, and HPC. SDN and NFV are further proof points for the networking segment and the opportunity for X-Gene. Like SDN/NFV, software-defined storage opens the door for commodity hardware to lower costs and improve efficiency over traditional monolithic storage platforms. Vendors looking to bring software-defined storage platforms to market require efficient, low-cost hardware technology to marry with their differentiated software stacks. At the ARM Holdings TechCon event last fall, EMC acknowledged using X-Gene in its storage arrays, and Hewlett Packard Enterprise claimed to be evaluating X-Gene SoCs for use in enterprise storage systems.

I believe that using X-Gene to target tangential markets like storage and networking is a smart move by Applied Micro and could help accelerate time-to-revenue with X-Gene. NFV and software defined storage are new markets that provide a significant opportunity for alternative technology approaches to be successful which makes them a good opportunity for up and coming architectures like 64-bit ARM SoCs. Revenue diversification on a large investment like X-Gene is important to ensure this big bet pays off. If AppliedMicro is successful with gaining traction for X-Gene in networking and storage and at the same time generates some meaningful server production deployments, the company can build a near-term revenue stream to keep their R&D pipeline funded in order to continue to differentiate over the long-term.

More from Moor Insights and Strategy


AMD Forms China X86 Server Chip Joint Venture
by Patrick Moorhead on 05-13-2016 at 4:00 pm

We have written a lot of research and notes about the China server market and their unique needs as it relates to security and intellectual property and the ways western server OEMs and chipmakers like Intel, Advanced Micro Devices, ARM Holdings, Qualcomm and IBM’s OpenPOWER are addressing the challenge.

Basically, China wants their “own” hardware and software for government-funded institutions. This was driven by the Snowden revelations and the cooling of the China economy. For software, it means submitting source code to inspect for things like back doors, but also having Chinese institutions buy software from Chinese software vendors. It’s a bit more complex on the hardware side. Net-net, on the server side it involves Chinese companies integrating core IP from western countries, adding their own “special sauce” like security and accelerators and selling to Chinese server OEMs. This is exactly what happened with Advanced Micro Devices announcement today of its China JV.

Advanced Micro Devices (AMD) has created a JV with THATIC (Tianjin Haiguang Advanced Technology Investment Co., Ltd.) to develop SoCs for the Chinese server market. Essentially, AMD will license its x86 processor technology and the IP (intellectual property) related to designing an SoC (system on chip), so I expect this involves IP like memory controllers, input/output and caching, but not GPU technology.
Neither THATIC nor AMD is commenting on which CPU cores these SoCs will use, or on any further technical details, but I have to expect it is Zen, as AMD hasn’t had much success in servers with its current CPU cores. This lack of detail doesn’t strike me as indecision; I think it’s more about secrecy from their competitors. For the most success, this needs to be Zen. If not Zen, the JV will most likely focus on appliance servers for storage and networking.


Photo credit: AMD

The deal also gives AMD a minimum of $293M in cash, provided AMD meets its IP delivery dates. AMD’s Q1 earnings results include $52 million in net cash received from the IP licensing agreement, and another $25M will follow in Q2. According to AMD, once JV parts start shipping, the deal will also bring per-shipment royalties on top of the licensing dollars. Cash is good for AMD at this point, but leaving the analysis there wouldn’t do this deal justice.

Years back, Advanced Micro Devices disclosed that it would attempt to monetize its IP in areas that aren’t being monetized, which AMD called “key areas”. And AMD has good IP. There are only two “big” GPU producers and AMD is one of them. I could also argue there are four, maybe five people with the expertise to do “big” CPU cores, and AMD is one of them. This deal is at least one victory lap on monetizing its IP. I expect more to come in the future, especially with GPUs.
I like this deal for AMD for many reasons:

  • Delivers on the promise of expanded IP monetization
  • Increases AMD’s cash balance
  • Has a royalty back-end after the license fees run out
  • Extends AMD’s reach with technology that, for the most part, has already been created. AMD won’t have to spend hundreds of millions of dollars to deliver on this.
  • Doesn’t block AMD from having their own branded X86-based processor in China

In terms of uniqueness, based on publicly available information, this is the only X86-based server SoC developed with a Chinese company. Intel cut a deal in January in China for an MCP (multi-chip package) which could become an SoC, but for now, it’s an MCP. The end result is a custom Chinese solution in both cases; an SoC just has a further level of integration.

I will be writing more on how this compares to Intel, IBM’s OpenPOWER and efforts from the ARM Holdings consortium, especially Qualcomm and AppliedMicro, as more information becomes available.

We are many years away from a real product, so many years that the JV isn’t providing a date for productization, but if the IP blocks are ready, I would expect we are two to three years away. All-in-all, this is a good direction for Advanced Micro Devices and I hope they can do even more of these arrangements, particularly in graphics.

More from Moor Insights and Strategy


IOT Trends In Manufacturing
by Bill McCabe on 05-13-2016 at 12:00 pm

Trends That Will Shape the Internet of Things in 2016
In a relatively short time, the Internet of Things (IoT) has grown from a niche technology into a widely embraced global phenomenon. Rapid advancements in IP technologies, as well as in IoT devices and the industries they’re used in, mean that devices can now be integrated in more ways than ever before. One sector that has strongly embraced IoT adoption is the manufacturing industry.

Offering a range of benefits, IoT will be a major force in shaping manufacturing throughout 2016 and beyond.

Manufacturers Will Become Increasingly Software Centric
Manufacturing hardware, production processes, and even business operations will become more reliant on software. Whether referring to the embedded apps and software within devices, or the server-side software that controls machines and automation, manufacturers that adopt IoT as part of their strategy will need to focus investment and knowledge building around software. Not only will this affect the depth and complexity of their IoT integration, it will also mean that these manufacturers need to procure new talent or upskill existing staff with IoT-specific IT skillsets.

Costs will Decrease, Increasing Adoption
Cost has been a significant factor for manufacturers who have been hesitant to adopt widespread IoT systems in manufacturing. As IoT technologies continue to mature, implementation costs will decrease. Because IoT provides significant benefits in operational efficiency, price shrinkages will influence manufacturers who were previously undecided on the financial benefits of IoT.

RFID Will Be a Major Technology in Manufacturing
Research firm MarketsandMarkets has projected that RFID will be widely adopted in the manufacturing sector. A number of factors contribute to this, including the ability to use passive RFID chips in manufacturing at little additional cost. NFC is expected to experience the highest level of growth. Manufacturers will be able to benefit from RFID tracking not only on the production floor, but also in packaging and distribution.

In case studies, such as the use of RFID to track luggage at Hong Kong International Airport, RFID tags have been shown to provide read rates of up to 97%, compared to 80% for optically read barcode tags.

North America will Lead IoT use in Manufacturing
Although China and the United States have often swapped positions at the top spot of total manufacturing output, it is the U.S. that will lead IoT implementation in manufacturing for 2016. This is mostly due to high automation, frequent technological advancements, and a history of early-adoption of new technologies. This contrasts greatly with China, where output is high, but production methods differ, favoring low-cost labor in place of high levels of automation.

This increased trend in IoT adoption is expected to benefit other areas of North American industry, such as the R&D and software sectors. Cisco Systems, Microsoft, Intel, IBM, and General Electric are all U.S. based multinationals that lead in IoT sensor and software development. German companies SAP SE, Siemens, and Bosch, are also IoT leaders that will benefit from increased demand for IoT solutions in manufacturing.

Bottom Line – IoT Shows no Signs of Slowing Down

Regardless of initial reluctance to adopt, and growing security concerns surrounding IoT devices, the industry as a whole shows no signs of slowing down. Research firms like Gartner have predicted that there will be almost 7 billion sensors in use by the end of 2016, and that enterprise-level software spend will total over $860bn globally.

Manufacturers will realize more efficient operations that stretch from administration to production floors and even distribution. The Internet of Things doesn’t represent a flawless group of technologies, but it is set to be a significant aspect of the future of high-tech manufacturing, no matter which way you look at it.

For more information on IOT Recruiting please check out our new website www.internetofthingsrecruiting.com
Bill McCabe


Si Photonics in a 300mm Fab – This is Getting Serious!
by Mitch Heins on 05-13-2016 at 7:00 am

Greater demand for data exchange within data centers is being driven by mobile computing and the Internet-of-Everything. In 2011, it was estimated that over 1 zettabyte (ZB) of data was pushed through the internet. That’s 1×10²¹ bytes of data. And that amount has been doubling every 3 years since. At that rate, it is estimated that by 2017 we will be looking at 7ZB of data being exchanged over the internet per year. The semiconductor industry has responded with 100Gbps transceivers, and work is active on 400Gbps versions. With 400Gbps come two additional pushes. The first is a move to longer-reach fiber-based connections to overcome copper’s current limitation of 100 meters. Fiber/optical connections promise up to 2km reach at these speeds and higher. The second is the push toward using fiber/optics for ultra-short-reach board-to-board and even chip-to-chip connections, where rates of tens of terabytes per second are desired (see figure).

We have heard a lot about integrated photonics and how it will be deployed to address bandwidth challenges, yet we have not seen adoption of photonic processes by the major fabs. Then I read chapter 10 of a recently published book entitled Silicon Photonics III, and I was stunned. The chapter was contributed by a team from STMicroelectronics, who published no less than 35 pages detailing their efforts to integrate a photonics process into one of their 300mm SOI fabs. How did I miss this? Until reading this, everything else I had read about photonics detailed achievements in laboratories and universities, where photonic engineers were investigating new materials or defining better ways to modulate or control light. Chapter 10 put things into a totally different light (pun intended). This was a group of serious semiconductor professionals doing what they do best: integrating a process in preparation for large-scale production.

You can tell things are getting serious when you start reading phrases such as these:
– Qualification runs
– Reliability test chips
– Optical & Electrical wafer sorting
– Production test methodologies and equipment
– Photonic process integration and process control
– Process monitoring / characterization
– Silicon-based model extraction
– Optical SPICE modeling
– Silicon-to-SPICE model correlation (see figure)
– Stress and temperature dependent modeling

Add to this, seeing wafer maps showing cross-wafer parameter variance and whisker charts showing lot-to-lot performance of various metrics, and you know that you are no longer looking at an experiment in a lab. The ST team was detailing what they were doing to bring about a production-worthy optical platform. Then consider that all of this work is wrapped around the words “300mm” and you also realize that a major investment is being made even as we speak. One shudders to think of how many more resources are being deployed working on products to fill this line once it is fully up and running.

The good news for ST is that they claim this effort uses the same fab equipment as their regular SOI lines, at least on the fabrication side. That means new capital investments were minimal. That is not to say, however, that this photonic process is the same as their existing SOI logic processes. In fact, the book chapter is littered with new three- and four-letter acronyms that would imply otherwise. Things like DSOI, which stands for double-SOI, where ST is modifying the base wafer to improve the surface coupling efficiency of gratings used to bring light on and off of the IC. They did this by adding a polysilicon layer buried in the BOX to create the equivalent of a Bragg reflector that reduces the amount of light lost through the silicon. Additionally, they added the ability to do multiple partial etch steps, enabling them to etch to different depths in different areas of the die. This enables ST to optimize the photonic waveguides for lower loss and tighter turns, which means smaller-footprint die.

There has also been a considerable amount of work done to quantify the effects of stress and temperature on the various photonic components. This is because ST knows that they will be flip-chip assembling electronic die on top of photonic die and interposers, and the photonics must work even when stressed by Cu pillars and TSVs (through-silicon vias) and by the heat generated from the electrical die.

While there is still much work to be done, this article to me was a watershed event as it represents a real shift of photonics out of the labs and into the fabs. It doesn’t yet represent a fabless photonic eco-system but moving silicon photonics into a 300mm fab is a serious step forward for photonics and that’s good for anyone working in this space.


Getting Low Power Design Right in Mixed Signal Designs
by Bernard Murphy on 05-12-2016 at 4:00 pm

Mixed-signal design creates all sorts of interesting problems for implementation and verification flows, particularly when it comes to design for low power. We tend to think of mixed-signal as a few blocks like PLLs, ADCs and PHYs on the periphery of the design. Constrain and verify the digital power requirements up to analog boundaries, let the analog guys do their thing, check (probably manually) very carefully at the interfaces and you should be good, right?

Unfortunately, not so much these days. It’s much more common to find nested analog and digital (some call these sandwiches of analog and digital) for digitally tuning analog performance, using a DSP embedded in an RF section to drive programmable beam-forming, for managing self-test features and other tricks. This greatly complicates defining and checking constraints and verifying through these nested objects.

You no longer have one digital domain touching islands of analog on the periphery. You now have islands of digital floating in seas of analog, floating in seas of digital, … But those digital islands still need to be optimized for power (and for timing, layout and everything else). Manually island-hopping constraints/intent/verification between these digital pieces no longer looks practical.

But whether you are building IoT devices or full-featured mobile platforms, these designs have sensors, they have radios and they have a lot of digital logic, even in the IoT edge nodes. That’s a lot of hungry transistors to feed on a very limited energy budget, so you still have to squeeze every drop of power out of the design. Manual steps have to be automated out, and an automated flow has to start with an infrastructure supporting mixed analog/digital; at Cadence the OpenAccess database is already set up for this.


The infrastructure is in place for passing constraints back and forth between islands through macro-models for the analog pieces. Macro models abstract the analog functionality and enable implementation and verification in the context of the power intent for the design. There is support for timing constraints (of course) and for CPF power intent. In the Virtuoso schematic editor, analog designers can build CPF macro-models which then allow digital designers to stitch these together with the digital components in the context of the power intent. UPF flows also work up to the boundaries of digital circuitry, and since the infrastructure is already in place for CPF macro-model definition and support, Cadence doesn’t anticipate significant development effort to extend this to UPF.

Conformal Low Power can now handle mixed-signal designs as you progress through the design flow. One of the big challenges for everyone in these cases is to minimize false violations. Macro-models and correct netlisting of the design are central components of this solution; Cadence has created a seamless interface between Virtuoso and Conformal to ease this flow.

Additionally, power-aware dynamic verification before implementation can be challenging due to problems in interpretation and communication of intent at the interfaces between the analog and digital domains. Cadence has done a lot of work in this area to simplify the AMS low power verification flow; this is available for both the CPF and UPF flows.

Power estimation and analysis for mixed-signal designs continues to grow in importance. Analog blocks can be powered down just like digital blocks, and getting to an optimal power solution requires careful and accurate analysis. Cadence’s recently launched Joules product, supporting both power intent standards, claims to accurately estimate power at the RTL stage and to be within 15% of power seen at signoff.

Handling low power design in modern mixed-signal designs is getting complex. Cadence seems to have most of the tools in place to help. To learn more about the Cadence low power flow for AMS, click HERE.

More articles by Bernard…


Channel Operating Margin (COM) — A Standard for SI Analysis
by Tom Dillinger on 05-12-2016 at 12:00 pm

There’s an old adage, attributed to renowned computer scientist Andrew Tanenbaum, that perhaps only engineers find amusing: “The nice thing about standards is that you have so many to choose from.” Nevertheless, IEEE standards arise from customer requirements in the electronics industry. Many relate to the definition of complex communication protocols, such as the emerging 100 Gigabit Ethernet interface (100GbE). A recent version of this protocol standard — IEEE 802.3bj-2014 — added a 4-lane × 25Gbps physical specification for backplanes, connectors, and twinax copper cables.
Continue reading “Channel Operating Margin (COM) — A Standard for SI Analysis”


CEO Insight: Transformation of Vayavya Labs into System Design Automation
by Pawan Fangaria on 05-12-2016 at 7:00 am

With the advent of SoCs, design abstraction and verification have moved up to the system level. It’s imperative that EDA move up the value chain and start design automation at the system level. System Design Automation will be the new face of EDA in the coming years.


Facebook and the Internet-Of-Things

by Sudeep Kanjilal on 05-11-2016 at 4:00 pm

Something very important happened recently at Facebook’s annual developer conference (F8): Facebook firmly staked its claim on IoT. Facebook events (like the Google annual developer events) are always interesting, as they give a tantalizing view of what is coming next. Yes, they lack the panache of Apple events. However, just because Facebook and Google events are deeply technical does not mean they are not momentous, or exciting.


Several interesting data points were shared, and several important consumer feature announcements were made. I will not repeat them here, except perhaps to point out one important thing: F8 was perhaps the pivot where Facebook finally shed its web legacy and became a native mobile ecosystem player.

Russian Nested Dolls – Stack within Stack within Stack

What caught my attention, however, was something that was not consumer facing: something regarding Parse, a mobile app backend / platform-as-a-service firm that Facebook acquired three years ago and then seemed to have forgotten about.

At the level of the consumer internet, it’s been clear for some time that Apple and Google won the platform war. That leaves other consumer-facing/consumer-service firms in an interesting predicament: how do they survive, and thrive, on someone else’s platform? How far can they capture the attention and intent of consumers, what other interaction models will emerge, and so on.

The smart answer is: build your own stack inside someone else’s stack/platform.

That’s what Facebook and Amazon have been up to, for the past couple of years – moving up and down the stack simultaneously.

Moving up, these firms are building a new runtime overlaying the runtime embedded in iOS and Android. Zuckerberg pointed out at F8 that five years ago, most content on Facebook was text. Now it’s photos, soon it will be video, and eventually it will be immersive content like virtual reality and augmented reality. If the content sharing of the future requires a headset, Facebook needed one, so it acquired Oculus.

Moving down, Facebook acquired Parse.

Facebook wants to manage all interactions – between people, between people and things, and between things

We’re rushing headfirst into this era of “Internet of Things” — a time of connected coffee makers, connected fridges, connected light switches. There’s been very little done, however, in the way of standardizing how these things work (and work with each other) behind the scenes.

And that’s where the Parse-related announcement comes into play. Parse launched SDKs that act as the backend brains for IoT projects. The SDK is compatible with Arduino first, with other platforms on the way.

So Facebook, which was basically an app (the top consumer layer) on the iOS- and Android-based mobile ecosystem/stack, expanded that into a full stack of its own: third-party consumer apps as the top layer, Oculus as the runtime/second layer, Facebook itself, along with Messenger, as the service/third layer (which will include payments!), and Parse as the infrastructure/fourth layer.

What it means in layman’s terms is that Facebook could very well become the central control for the Smart Home, Smart Auto, Smart Health, etc. Basically, on the day it completed the pivot from web to mobile, it also took the first step towards the next ecosystem: IoT!


Paranoia, Porsche, Paul Newman & Self-Driving Cars

by Roger C. Lanctot on 05-11-2016 at 12:00 pm

I was watching a documentary on the racing life of Paul Newman yesterday and I couldn’t get over the disconnect between Paul Newman’s near-obsession with auto racing and the general public’s understanding of the man. Most of us know Newman for salad dressing and iconic movie roles, but it appears, based on the testimony of friends and family members, that racing was his real passion.

He wasn’t much good at it when he got started. But, like his acting, he worked hard at it and ultimately became a winning driver and team owner. In interviews in the documentary he is pretty much at a loss to explain his passion, other than to say that, unlike the Academy Awards, you don’t vote for the winner of a race: either you cross the finish line first or you don’t.

But it’s Newman’s passion for driving which fascinates me. For me, driving is an obligation, maybe a privilege, and occasionally fun. Driving can be empowering and it can be dangerous. I clearly lack Newman’s passion for this activity.

I was thinking of this because I had just visited Porsche in Stuttgart last week. I had a nice chat with the executive I was visiting and as I turned to leave I asked who at Porsche was working on self-driving car technology – the current industry rage. My host looked at me incredulously. “Oh,” I said. “Sorry. Porsche. That’s right. No self-driving cars.”

A Web search turned up February 2016 reports of Porsche CEO Oliver Blume’s comments that Porsche would never create a self-driving car.

http://tinyurl.com/hbm6rrf – “Porsche CEO: Don’t Expect to See a Self-Driving Porsche Any Time Soon” – RoadandTrack.com

Blume’s position seems a little severe given that nearly every other car company on the planet is working on this technology. Even governments are getting into the act, setting rules and, in some cases, providing funding.

Where Porsche leaves off in objecting to self-driving cars, the paranoid step in. The website EricPetersAutos.com paints a gloomy picture of sheeple financially shackled to remote-controlled drone cars under the command of car makers, the government, or both.

http://tinyurl.com/zdyro73 – “Why the Hard Sell for the Self-Driving Car?”

Setting hysteria and driving tradition aside, it’s important to understand the spectrum of self-driving and what it means for everyday driving. The industry’s and government’s efforts to master automated driving are having the collateral outcome of accelerating progress toward reducing car crashes and saving millions of lives.

The mania further reflects the fact that, outside of Paul Newman and millions of other driving enthusiasts, driving and owning cars is a headache. The effort to master automated driving may ultimately raise questions regarding vehicle ownership (thereby undermining the assumptions behind the paranoid screed linked above), but it will also open up new employment opportunities and may help to reduce congestion, emissions and, most definitely, collisions.

As we assess the self-driving car opportunity, it is best to recall that we may only want or need self-driving in particular circumstances: while on the highway, in the midst of a medical emergency, when we are tired or disabled, or when we are stuck in rush hour traffic. Some of these capabilities already exist and some do not. But 1.25M annual highway fatalities is an intolerable toll, and every car company, and even governments, have a responsibility to explore any technology capable of mitigating it.

It is best not to view self-driving cars in absolutist terms. We know there are governmental organizations that have tried or expressed interest in remote control of cars and Google’s proposed removal of the steering wheel and pedals is troubling to enthusiasts. If some consumers would rather be driven than drive, then the market should decide. Just remember, it will be impossible to look as cool as Paul Newman if you are in a self-driving car.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk