
Intel – "Inside" good markets, outside bad- Huge potential in Memory & Photonics

by Robert Maire on 09-11-2015 at 2:00 am

Intel’s lack of exposure to handsets/China is now a positive rather than a negative

Intel’s strength in “cloud” & servers avoids much consumer weakness in China & developing Markets

Crosspoint has huge upside- Coupled with Photonics it fortifies Intel server dominance

The shoe is on the other foot….
Intel has long been beaten up for its lack of exposure and success in the handset market. Analysts, ourselves included, have been negative on the company as it has missed much of the upside in China and other developing markets where handsets and tablets are the consumer's device of choice for accessing the internet and staying connected.

All of a sudden, exposure to those same markets is a bad thing as consumers, particularly in those markets, are now pulling back their spending sharply, and handset sales are down. In a perverse turn, Intel now looks good because it doesn't have much to lose in those markets compared to many others who rode the rocket up but can't eject as it returns to earth.

PC market not likely changed very much…
We don't think the PC market, both laptop and desktop, is impacted all that much by the current slowdown in consumer spending, as the trend was negative anyway and already built into numbers and expectations. It's the handsets and tablets that are seeing the brunt of the negative turn. Again, this tends to work in Intel's favor in terms of performance relative to expectations.

The cloud biz will see near zero impact…

Although there is certainly some linkage, we don't expect nearly as much weakness in the server market; in fact it likely has continued upside given the new processors being rolled out with better power/performance specs. While consumer spending, especially in developing markets, is more fickle and obviously more negative near term, business spending on servers should be steadier. Again, this is a net positive for Intel.

Monster potential in Crosspoint memory…

It's been a really long time since a new memory technology has been rolled out. NAND, the last "new" memory technology, can be credited with both creating a new class of products, from the iPod to the iPhone and tablet, and accelerating their development by enabling applications. Without NAND we would still be walking around with clunky iPods with miniature hard disk drives, sucking power with slow performance. NAND made all the difference in the world.

In our view, Crosspoint has even more potential to both create and accelerate new applications in all areas of computing, from handsets to servers, and could have even more of an impact than NAND has had. Crosspoint combines almost the speed of DRAM with the non-volatility of NAND into a hybrid with the best of both worlds. We will likely see many new data-driven applications, as well as Crosspoint substituting for existing NAND applications, assuming it can hit the price point, which we think is doable.

Photonics accelerates the data center…
As data centers get larger and larger with thousands of servers spread across acres of floor space communication quickly becomes the bottleneck. Photonics has the opportunity to break that bottleneck with much higher speeds and cleaner network architecture. The potential for improvement is obviously large. Computing power is usually not the problem in data centers. Memory and communications are. When you combine the speed of Crosspoint with the speed of Photonics you get a huge uptick in the overall performance of the data center and thus the cloud.

Intel locks up the data center…
We think the triple threat of new processors, Crosspoint memory and Photonics really fortifies Intel's position in the data center and could make it very difficult, if not near impossible, for competitors to crack. This is great news for Intel as this market segment is Intel's "wheelhouse" and the core of their current strength as well as their future in the near and medium term.

With PCs continuing to swoon and handsets & tablets now slowing in developing markets, having the cloud as your core business, and doing a good job of protecting it is key to success & longevity.

Daydreaming – If we were BK for a day….
The very first thing we would do is call Tim Cook and cut a deal to put Crosspoint memory into the iPhone 7 next year to replace the NAND. First, a phone with Crosspoint would kick serious butt in performance versus NAND, and both Tim and Intel could crow about it and leave Samsung further behind in the dust. A huge win for both companies and a negative for Samsung (who we think will lose the A10 business to TSMC anyway).

Second, we would take that billion-dollar, hole-in-the-ground echo chamber of an empty Fab 42 in Arizona and turn it into a fab dedicated to Crosspoint memory, so Intel could supply enough of the market with Crosspoint to enable new applications and make it a serious NAND replacement (much as NAND is replacing HDDs, so could Crosspoint replace NAND).

Micron is the master of reusing existing leftovers, and Fab 42 is clearly a leftover. Given that Crosspoint can be built using older semiconductor equipment, as it's not bleeding-edge geometry, Intel could probably scrounge up enough extra tools to lower the cost. Intel has serious upside potential here.

The stock…

We think Intel's stock is at an attractive point. It's been on a long slide but has firmed up a bit lately. We think it will look increasingly attractive to investors as they move their money out of harm's way, away from companies with China, handset, tablet, and developing-market consumer exposure.

Intel's current exposure and markets seem much more resistant to the worries currently driving stocks in the market. With strength in the cloud and large upside in new technologies, Intel is suddenly looking much more attractive (especially as the others look much uglier…).

We would likely be buyers here as we see less downside and longer-term upside. We would warn investors that all these things will take time, and in the meantime the macro news will lower almost all boats, but we would use that opportunity to look closer at Intel.

Robert Maire
Semiconductor Advisors LLC





What’s the Difference between Emulation and Prototyping?

by Tom Dillinger on 09-10-2015 at 12:00 pm

Increasing system complexity requires constant focus on the optimal verification methodology. Verification environments incorporate a mix of: transaction-based stimulus and response monitors, (pseudo-)random testcase generation, and ultimately, system firmware and software. RTL statement and assertion coverage measurement is crucial. Power domain verification is required, with functionality described in both RTL and power format file descriptions. Power dissipation estimates help guide both system architects and implementers. In-circuit emulation (ICE) features are required to exercise the model with external peripherals.

These requirements all underscore the need for faster simulation throughput. It’s evident why the EDA market for simulation acceleration technology is growing rapidly.

On that topic, Frank Schirrmeister was gracious enough to share his insights. Frank is Senior Group Director, Product Management, for the Cadence System and Verification Group. Cadence is uniquely positioned in both simulation acceleration domains – processor-based emulation and FPGA-based prototyping.

Initially, Frank addressed a question that has been puzzling (to me, at least): “What is the difference between emulation and prototyping?”

Frank provided a unique perspective. “It comes down to customer expectations. If the RTL design contains (synthesizable) constructs appropriate for the emulation platform, a system model should automatically compile on Cadence’s Palladium-XP. If it doesn’t, that’s a bug – call us. Conversely, developing a prototype model on an FPGA platform typically requires significant manual intervention to optimize the implementation. That effort may include partitioning the model across FPGA’s, managing clock domains, or completing FPGA physical design and timing closure.”

He then described an alternative approach. “Cadence provides the FPGA-based Protium platform with emulation-like automation, to reduce the time and resource typically associated with prototyping.”

The figure illustrates a recent customer’s experience across the gamut of accelerated simulation technologies, with performance data from:

  • a C-based model
  • the PalladiumXP emulation platform, and
  • a highly-optimized FPGA prototype board the customer developed internally

(Reference: SRISA, CDNLive-EMEA, 4/27-29/2015).

The Cadence Protium FPGA-based platform was then introduced, applying the “Palladium adjacent” flow to automate model compilation and physical implementation. Although performance was less than the highly-optimized prototype board, the fast model bring-up on Protium was a huge win for the customer, in both schedule and resource.

Frank indicated that additional model performance improvements on Protium are certainly supportable, with the manual optimizations described above. Yet, the Palladium-adjacent flow offers customers the benefits of emulation, followed by the performance improvement of Protium.

More from Frank shortly, but a little background might be helpful…

There are two main technical approaches to hardware-assisted verification, augmenting software simulators:

(1) processor-based emulation

An array of custom-designed processors provides functional emulation of a system model. Conceptually, these processors evaluate a large concurrent set of Boolean operations – i.e., each processor comprises thousands of general-purpose logic units. The sequence of instructions to these units is defined and synchronized as part of model compilation. System state and memory array values are stored based upon the "reference clock" (which is not the actual hardware clock frequency for compiled model execution on the processors).

The platform provides high visibility and trace recording, across moving time windows.

A workstation is attached for job control, and for execution of testbench functionality not compiled into the accelerator.

Physical interfacing is supported by a rate-adapting card between the emulator and a peripheral, to compensate for bandwidth differences – a number of “SpeedBridges” are offered by Cadence for standard interfaces.

(2) FPGA-based prototyping

A prototyping system consists of a number of FPGA modules (and memory) programmed to implement the model. SpeedBridges may be connected, as well.

The compilation process is more involved, as mentioned above. Internal model visibility for tracing is more limited, often requiring signals to be identified during model compilation.

The prototyping platform model capacity can be unpredictable, as it depends on:

  • (datapath/arithmetic/control) logic functionality and the FPGA slices available
  • clock domain definitions and the internal FPGA clock resources
  • conversion of model register files to internal FPGA storage, and
  • the target timing and the FPGA local/global signal buffering resources available

Conversely, internal FPGA slice utilization can be high, yet the FPGA pin count is a limiting factor. In these cases, time-multiplexing of I/O communication between modules is used.

The prototyping platform is not targeted for a transaction-based verification environment, which would adversely impact throughput. The performance of the platform is best suited for a full system model — “a DUT in a box” — to exercise system firmware/software.

Frank offered some details on the Protium “FPGA-based emulation” methodology. A common front-end model compilation flow (“HDL-ICE”) is used, to ensure compatibility. The next flow step splits, either mapping to PalladiumXP or to Protium (using Cadence partitioning and Xilinx Vivado implementation tools). The key is that Cadence guarantees Protium model functionality with the same support as PalladiumXP.

In summary, when an RTL model is less stable, the unique advantages of emulation are leveraged:

  • speedup over software simulation, with high trace visibility
  • CPF/UPF power domain checks synthesized into the model
  • Dynamic Power Analysis (DPA) is integrated with Cadence Joules, with toggle counts now available from testbenches only feasible to run on an accelerator
  • emulation coverage data can be merged with that from software simulation, in the Cadence “big data” Metric Center

As the RTL model stabilizes, the transition to “FPGA emulation” with the DUT-in-a-box model provides an efficient path to achieving performance suitable for firmware/software development and debug.

Cadence's product portfolio for simulation acceleration, combined with the "Palladium adjacent" flow, offers a unique methodology. From a cost perspective, it strikes me that this allows managing concurrent verification projects effectively. Rather than requiring capacity for two projects in a single platform, verification would transition from emulation to prototyping in a pipelined manner, offering the best of both worlds. Having both platforms available introduces some interesting post-silicon debug model verification options, as well.

-chipguy


Xilinx is a Software Company

by Paul McLellan on 09-10-2015 at 7:00 am

If you think of Xilinx, the word that immediately comes to mind is FPGA. After all, they were one of the pioneers of the space. FPGAs are a means of implementing hardware, and the main implementation methodology is RTL-based. This contrasts with writing software and compiling it for a microprocessor, which is the main software implementation methodology. However, there is a sea change going on in how systems are designed. They are written in software programming languages, but instead of being compiled simply into code for a processor they can be processed into a hybrid system, with much of the system implemented in software but the most performance- or power-critical portions implemented in the programmable fabric.

For example, a high-end Zynq has a quad-core ARM Cortex-A53 processor, a dual-core ARM Cortex-R5 processor, an ARM Mali-400MP GPU and an H.265 video codec. This is more like a blade server on a chip than a traditional FPGA. To accelerate a particular procedure (such as a video or DSP algorithm), it is only necessary to mark it for speedup. Under the hood, the C code will be synthesized into the FPGA fabric and all the plumbing constructed to link it to the C source code. It will run just like the C code on the ARM processor, only faster (and probably with lower power).


Xilinx is at the forefront of this transformation towards software defined systems. It is focused on six markets where this approach is especially well suited:

  • 5G wireless
  • Software defined networks (SDN) and network functional virtualization (NFV)
  • Video/vision
  • Advanced driver assistance systems (ADAS) for automotive
  • Industrial IoT
  • Cloud computing

One of the key technologies for doing this is high-level synthesis, which allows engineers with negligible hardware design experience to implement high-performance systems. Xilinx calls this SDx, for software-defined systems (OK, systems doesn't begin with "x" but it is more like the automotive term x-by-wire). There are already three special environments: SDNet (for networking), SDAccel (for datacenter accelerators to sit alongside the main server processor), and SDSoC (for more general systems-on-chip). This is not purely a Xilinx trend: I attended a presentation at DAC last year by engineers from Google, experts in video standards, about how they had implemented their own proprietary algorithms with high-level synthesis and no hardware knowledge.

The reason these new software-focused environments are so important is that the experts in the algorithms required for successful systems all work at a very high level and are not (usually) RTL-literate. Besides, taking an algorithm for something like vision processing and implementing it in RTL is just too slow, and each improvement to the algorithm results in too much rework. These are not algorithms that are stable; they are areas where companies attempt to differentiate themselves and continuously improve their solutions. Even where there are standards, they are largely still in development (see 5G wireless for a great example).


Next generation systems like this have a new set of requirements:

  • These applications require systems to be ‘connected’ (like ‘connected cars’, connected machines, not just chip to chip connectivity)
  • If they are connected, they must be much more secure…and safe
  • They are ever more ‘software defined’, with support of virtualization,
  • Much of their intelligence comes from analytics (of ‘big’ data, video pixels, etc.)
  • Typically, they process or transport images & video, which is now ‘everywhere’
  • And they all must be supported with some form of heterogeneous multi-processing, with scalable approaches to accelerate key functionality

Xilinx has redesigned their website to reflect this ongoing transformation of their business. Of course most Xilinx revenue comes from production orders for designs that were done several years ago, that is the nature of the business. But new design starts, especially at the high end, are increasingly these new types of systems.

The most recent issue of Xcell Journal covers the end-markets above here. And coming soon, a new quarterly publication, the Xcell Software Journal, specifically for software engineers. Marc Andreessen is famous for his statement that "software is eating the world." Xilinx is doing their bit to make the prophecy true.

Xilinx’s new website is here.


Apple: Watch, iPad, tv, iPhone

by Paul McLellan on 09-09-2015 at 3:35 pm

If you didn’t watch the Apple event from the Bill Graham Civic Auditorium this morning, you didn’t miss a whole lot. The only truly different thing announced was the new remote control for the new Apple TV. Everything else was pretty much what you might expect (bigger screen, faster). The whole show seemed remarkably wooden compared to a Steve Jobs love-a-thon, and the crowd was pretty subdued. And at the end, when I was hoping for a dramatic “one more thing” we got One Republic instead.

The show started with the Apple watch. They didn’t really announce anything new from the point of view of the electronics, just new finishes, new straps, a relationship with Hermès.

Next up, the new iPad, called the iPad Pro. It has a 12.9″ screen, the size picked so that the width is the same as the height of the previous iPad and the aspect ratio is the same. It is 2732×2048 pixels (5.6M if you can't multiply those in your head). Inside is the A9X, which is their third-generation 64-bit tablet processor. It has twice the memory bandwidth, twice the flash storage performance and 1.8X the overall processor performance of the previous generation ("desktop class"), which makes it 22X faster than the original iPad (with 360X the graphics performance). 10-hour battery life, 4-speaker audio, a tiny bit thicker than the iPad Air (6.9mm vs 6.1mm). It is the same weight as the original iPad. 8MP iSight camera. 802.11ac (WiFi) with MIMO (multiple antennas). Optional 150Mbps LTE. Pricing from $799 for 32GB to $1076 for 128GB with LTE.

There is also a new physical keyboard (a bit like the Logitech one I have for my iPad except it communicates through a magnetic 3 terminal connection rather than bluetooth). There is also an (optional) stylus called Apple Pencil. It can sense position, force and tilt and as a result can do some neat calligraphy and watercolor effects. I doubt a casual user like me would get one, but for professional designers I think it would be a no-brainer. $169 for the keyboard, $99 for the pencil.

Next up, Apple TV (or tv, as it looks like we have to write it). The new remote is totally different from the previous Apple TV remote. It has a touch surface, Siri is built in for voice control, and it has motion sensors (like the Nintendo Wii) for games; unlike the current remote, it can also control the TV itself, turning it on/off and changing the volume. It has a new operating system called tvOS which works with Xcode and other iOS technologies (so I assume that tvOS is another variation on OS X, like iOS in the iPads and iPhones). It contains an A8 chip with Bluetooth 4.0, HDMI (TV connector) and Ethernet (connector to your cable modem or whatever you have). The remote communicates over Bluetooth so you don't need to point it at the tv box like with the current remote. The remote is rechargeable (through a Lightning connector) and lasts 3 months on a charge. The 32GB Apple tv will be $148, and 64GB will be $199 (and the existing Apple tv will still be available at $69).

The moment everyone was really waiting for: iPhone. Year-on-year growth has been 35%, with growth in China at 75% (versus growth for the rest of the Chinese suppliers at -4%). As widely expected, they announced two models, the iPhone 6S and the iPhone 6S+, with 4.7″ and 5.5″ screens respectively. As also widely expected, it has what they are calling 3D Touch, which can detect light touches differently from heavy touches. This has been integrated into the operating system and applications so that a light touch typically gives a sort of peek at something, which goes back when you release, and a strong touch takes you there more permanently.

Inside there is a 3rd-generation A9 64-bit application processor with "a new transistor architecture", which we all know just means FinFET. It is 70% faster than the A8, with 90% faster graphics ("console class"). It also contains a motion coprocessor, the M9, that is permanently on and is used for health and fitness apps (pedometer etc). They didn't say so explicitly, so they may have been trying to be deliberately misleading, but they implied that the M9 being permanently on means you can say "hey Siri" and get voice recognition to fire up even if the phone is off. New fingerprint sensor. New 12MP camera. I never expected to hear the words "deep trench isolation" in an Apple keynote, but that is what is used in the camera image sensor to keep the pixels from interfering with their neighbors. 4K video (including the ability to edit it on the phone). For the frontside camera, 5MP, the display backlight can be used as a flash (at 3X the normal brightness used for backlighting). LTE modem with 300Mbps capability (not sure if any carriers actually can support that) on 23 bands, so great for international. 866Mbps WiFi.

There is a sneaky App for Android that gathers up all your data and migrates it to the iPhone, making migration for Android users easy.

Pricing is $199-399 for the iPhone 6S and $299-499 for the iPhone 6S+ with a typical carrier contract. These are the same as the old 6 and 6+ prices (which are now reduced to $99 and $199). You can preorder on 9/12 for delivery on 9/25, with iOS 9 being available on 9/16 (for older phones).

More details all over the web I’m sure. And Apple’s website is, of course, here. If you want to watch it, the keynote can be replayed here.


A Brief History of Apple Mobile and SoCs

by Daniel Nenni on 09-09-2015 at 2:30 pm

The big Apple iProduct announcement was today, so I thought it would be a good time to premiere a draft of the Apple chapter in our upcoming book. Try as I might, I was unable to get one of the 7,000 tickets to the live event (it was like getting a Willy Wonka golden ticket!) so I live-streamed it on my iPhone like millions of other people. It ran over two hours but is definitely worth your time, especially if you like the band OneRepublic.

The Apple chapter starts with Steve Jobs 2.0 (his return to Apple) and chronicles the rise of the iProducts from start to finish. This is a no-holds-barred account of how Apple redefined mobile and became the most powerful fabless semiconductor company in the world, absolutely.

SemiWiki Book Download: A Brief History of Apple Mobile Devices and SoCs

The Apple chapter is in wiki form so please excuse the formatting. This advanced look is exclusive to SemiWiki members. If you are not currently a member please join as my guest: https://www.legacy.semiwiki.com/forum/register.php

If you have questions, comments, and/or corrections post them in the discussion section of the wiki. The following is the prologue of the book. Enjoy!

MOBILE, UNLEASHED
The Origin and Evolution of ARM Processors in Our Devices

How does a company go from a crazy idea a couple of engineers had for designing a processor from scratch to power a “business computer,” to being the maker of the family of processor cores at the heart of roughly 95% of the world’s mobile phones today?

At the dawn of the ARM architecture, the project was a tightly kept secret in a few technologists' hands at Acorn Computer Group. It was so secret that Olivetti, a firm at that time in the process of shifting its fortunes from typewriters to computers, was not aware of the existence of the chip design or its development team until after an investment stake in Acorn became final.

What Acorn had was a processor quite unlike any other of the period – but that was far from all. They established a reduced instruction set for machine-level programming most users never see, software development tools for using it, and the concept of a customized processor core for independent fabrication.

Challenging the Mainstream
Given that breakthrough, one would think Acorn could have taken the world by storm right out of the gate. However, in the mid to late 1980s, the scene was far from ready for an alternative to the mainstream chips.

Intel was building its empire on the processors that throbbed inside nearly every personal computer. Semiconductors came from Silicon Valley, designed in big, expensive buildings – not in rustic barns near Cambridge, UK. Parts were typically complex, huge, costly, and hot. Being the fastest gun in town, and staying that way, was priority number one under Moore’s Law.

The source of popular software was Redmond, Washington, and anything incompatible with Microsoft was unable to survive for long. A thriving flock of personal computer companies found out the hard way that PC compatibility was the only thing people cared about, or asked for. If a processor could not run MS-DOS, Lotus 1-2-3, Word Perfect, and Turbo Pascal, what good was it?

Those forces left even the now-mighty Apple dangerously near bankruptcy at one point after initial success with the Apple II. The comeback was underway; their latest innovation was the Macintosh, built on a Motorola processor and a graphical user interface (GUI) that introduced the mouse to millions of people. It was just different enough to hang on, pitted against a wide row of function keys and never-ending combinations of ALT, CTRL, and SHIFT codes on the other side.

Coincidentally, those two companies – Apple and Motorola – running on separate tracks in the early 1990s paved the way for ARM to rise from the relative obscurity and limited volumes of Acorn.

Research on computing alternatives had been underway at Apple for some time, spawned in part by a leadership change from Steve Jobs rev 1.0 to John Sculley. The objective: break off from the desktop and into handheld platforms then known by the clunky category name of “personal digital assistant.” The first Apple PDA project was Newton, and along the way, they reached out to Acorn for an ARM core.

Meanwhile, Motorola led the way to the height of the analog cellular telephone sensation. As phones evolved from analog to digital, one key to reducing size and bill of materials cost became digital signal processing (DSP). In a twist of fate, seeking to diversify business and not compete with customers for capacity, Motorola did not leverage their own semiconductor parts. Instead, as the digital handset revolution took shape they opted mostly for popular DSP chips from Texas Instruments – as had Ericsson, Nokia, and others.

More Than a Cool Idea
Those Apple and Motorola tracks may seem completely separate, but they collided head on in digital mobile devices. Microprocessors were too big and power hungry, and microcontrollers were too slow. Code such as multitasking operating systems, wireless communication stacks, and handwriting recognition – once thought to be the killer application for PDAs – sucked the life out of most CPU architectures.

A dire need was developing for a more optimized but fast processor core, and better integration with lower power DSP capability for handling wireless signals.

With a complex instruction set, or CISC, changing anything to improve performance risked mangling instructions, and breaking software. Motorola would enjoy early success in PDAs with their scaled-down part, the MC68328 DragonBall, winning designs such as the original Palm Pilot. Intel and their X86-compatible ecosystem had initial device wins at IBM, Research In Motion, and Nokia. Both CISC processors found themselves displaced as alternatives emerged.

Long before Steve Jobs rev 2.0 returned and eventually defined the post-PC era, Apple saw greater potential for the ARM architecture. With the Acorn development team and their fab partner VLSI Technology, Apple helped form a joint venture in 1990: Advanced RISC Machines, Ltd. One brand was born, and another remade, ushering in sweeping change in mobile device leadership.

If ARM had been just another company with a cool idea for an embedded processor core, there would not be much more to add to the history – one that others have visited numerous times. Covering ARM from its stealthy origins as a few determined, creative people inside Acorn to today with over 50 billion processor cores shipped and counting is inspiring, but not the whole story.

In this book, we explore the origin story of ARM from an industry perspective, and the evolution of its processor technology that unleashed mobile devices.

Once the Acorn and Apple teams joined forces, sharing their early parallel experience that we open this book with, the bond between mobile devices and the ARM architecture formed. That bond is now nearly unassailable despite massive investments from competitors, mostly because its basis is more than a semiconductor company designing and selling parts to a segment of customers.

An entire ecosystem, in many ways more diverse and more powerful than the impressive PC community, has developed around the combination of mobile and ARM. It reaches across the entire supply chain: EDA firms, foundries, semiconductor companies, software companies, mobile device manufacturers, carriers, application developers, and makers, who have rather recently joined in.

We will look at not just the processors and phones and tablets and other devices, but the business of mobile as it evolved along the lines drawn by each successive generation of ARM technology. We will have insights on the obvious names – Apple, Google, Qualcomm, and Samsung among them – and some not so prominent ones that played important roles building up to today. Perhaps most fun, we will wrap up with an analysis on where we see all this going in the future.

A journey of billions of processors and beyond begins with a single step. Now, we go back to the beginning of ARM, which on the surface had nothing to do with mobile but everything to do with creating a better processor core.


On The Beauty Of Turkey Vultures

by Bernard Murphy on 09-09-2015 at 12:00 pm

Now and again I like to switch from technical topics and write about something good for the soul. I’m involved with a wildlife rescue organization; we take orphaned and injured birds (generally found by members of the public), nurse them back to health and release them back into the wild. We have permits all the way up to the federal level, we get regular training, we’re a 501(c)(3) charity – it’s quite an operation.

Among the more interesting birds we’ve had recently were three turkey vultures. I’m willing to bet if you think about vultures at all, you think “ugly, eat dead stuff, yuck”. But there’s a lot more to them than that. The first of the three was an adult with fractures in the collarbone (coracoid) area, possibly flew into a car or maybe he hit a tree. This one was spotted out in the wilds by a couple of horseback riders. Then there was a juvenile, in generally good health but skinny and wandering around by the side of the road – called in by the Sheriff. Finally, we got a baby, found under a homeowner’s deck. Both the baby and the juvenile were dehydrated and were probably straying from home looking for a source of water (that darned drought).

You probably haven’t seen a baby turkey vulture before. They’re pretty cute – all white fluff with a black face.

They instinctively know they should threaten when approached but the execution is a little less than terrifying. This kid would stamp its feet and charge at us but – like any toddler learning to walk – would immediately fall flat on its face.

Once they had a plentiful supply of food and water both the baby and the juvenile recovered quickly. The adult also recovered but more slowly as broken bones knitted and he regained strength in flight muscles. Here are the three of them in a flight cage, the juvenile up top, the adult lower down and the growing baby at the bottom.

The adult was not happy that he wasn’t strong enough to fly to the top perch; this put him lower in the hierarchy than the juvenile and he was not about to be subservient to a teenager. As soon as he could make it to the top, they got into it. In a lot of species, dominance battles can be quite violent, but these guys were more civilized. They started with chest-butting; when that wasn’t going anywhere the adult switched tactics. Vultures indicate submissiveness by not making eye-contact with the leader, typically by hanging their heads when the boss is around. The adult realized if the pretender wouldn’t voluntarily look down, he could force the issue by standing on his head. Since vultures are fairly hefty birds the juvenile lost to simple mechanics. After a few rounds of this, he evidently concluded that a demotion was preferable to a vulture hat.

Vultures are very social birds. You commonly see large groups roosted in trees or power pylons, or circling an area looking for good eats. They’re not so keen on us though. An adult’s favored response to a perceived threat (a human approaching for example) is to vomit, which is a disgusting but very effective way of discouraging any would-be attacker, although the principal purpose seems to be to shed weight so they can take off quickly. Whenever we had to handle these three we were always very careful to wait until they had fully digested their previous meal. Not that they don’t still try to gross us out, like this guy.

After they all recovered we released the birds at a nearby park. Releases are the best part of what we do, so we get plenty of help, in this case from volunteers and a couple of park rangers. The adult flew straight to a tree where maybe 30 other vultures were already roosting, but the baby (now grown) and the juvenile took off and soared over the reservoir, just enjoying the thrill of flying free and catching thermals. Seeing that made it all worthwhile.

If you’ve read this far, I hope you feel you learned a little about vultures. Maybe these amazing birds can also offer something you can take back to your regular job. A group of vultures gathered together in a roost is often called a committee. When they’re feeding together, they’re called a wake. So next time you’re in a committee meeting and you feel you’re surrounded by vultures, you’ll know why. When lunch is brought in, you can share how much you’re enjoying the wake (I remember some working lunches that felt that way). And while I wouldn’t recommend vomiting as a debate tactic, when a committee member gets a little too aggressive, you can always try standing on their head.

More articles by Bernard…


Explore Your Interconnect the ICScape Way

Explore Your Interconnect the ICScape Way
by Paul McLellan on 09-09-2015 at 7:00 am

One of the surprises at DAC for ICScape was to be listed on Gary Smith’s list of companies to see. A surprise, since ICScape had never presented its products to him. They were listed under design debug. They don’t have a single product that really falls under that description, but rather a family of tools, ICExplorer, which includes ClockExplorer.

Part of the family is a tool for interconnect analysis called RCExplorer. Although, like any tool, it can be used for many things (who hasn’t driven in a screw with a hammer?), it is targeted at three main functions:

  • fast resistance/capacitance analysis which works with popular layout editing tools
  • post-extraction interconnect analysis
  • interconnect comparison of different versions of a design

RCExplorer works with popular layout editing tool environments, and is also tightly integrated into ICScape’s own chip finishing environment, Skipper. It gives a way to do a quick analysis of interconnect during design, looking for high resistance paths that might lead to ESD problems, or for differential paths that should have the same resistance, and more. This does not require the user to run a time-consuming LVS and full extraction; RCExplorer does what is necessary under the hood, starting from the layout itself. At its most basic, the user can select two points on the layout and RCExplorer will display the resistance between them, broken down by segment/layer/location. When used within the Skipper platform, which has fast net-tracing capability, RCExplorer can provide flexible point-to-point analysis for large nets, such as power and clock nets, completely or partially, for interactive usage.

The way it works is to extract the segment resistances of the whole net using patterns, add virtual current sources at the two specified points, compute the voltages at those points using a high-performance matrix solver, and then derive the equivalent resistance between the two points from the voltage/current values. It only performs extraction once per net, so it is fast to do resistance analysis for multiple points on the same net.
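The two-point resistance method just described can be sketched in a few lines of Python. This is a toy nodal-analysis version under stated assumptions: a plain list of resistor segments and a dense Gaussian-elimination solve stand in for RCExplorer's pattern-based extraction and high-performance matrix solver, and the `point_to_point_resistance` helper and node names are invented for illustration, not part of the actual tool.

```python
def point_to_point_resistance(segments, a, b):
    """segments: list of (node1, node2, ohms); returns equivalent R between a and b.

    Builds the nodal conductance matrix, injects a +1A virtual current source
    at node a (with the return path through grounded node b), solves for the
    node voltages, and reads R = V[a] / 1A.
    """
    nodes = sorted({n for n1, n2, _ in segments for n in (n1, n2)})
    nodes.remove(b)                       # ground node b (V[b] = 0)
    idx = {n: i for i, n in enumerate(nodes)}
    n = len(nodes)
    G = [[0.0] * n for _ in range(n)]     # nodal conductance matrix
    for n1, n2, r in segments:
        g = 1.0 / r
        for p, q in ((n1, n2), (n2, n1)):
            if p in idx:
                G[idx[p]][idx[p]] += g
                if q in idx:
                    G[idx[p]][idx[q]] -= g
    I = [0.0] * n
    I[idx[a]] = 1.0                       # +1A into a, out through grounded b
    # Gaussian elimination with partial pivoting on the augmented system [G | I]
    for col in range(n):
        piv = max(range(col, n), key=lambda r_: abs(G[r_][col]))
        G[col], G[piv] = G[piv], G[col]
        I[col], I[piv] = I[piv], I[col]
        for row in range(col + 1, n):
            f = G[row][col] / G[col][col]
            for k in range(col, n):
                G[row][k] -= f * G[col][k]
            I[row] -= f * I[col]
    V = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(G[row][k] * V[k] for k in range(row + 1, n))
        V[row] = (I[row] - s) / G[row][row]
    return V[idx[a]]                      # R = V[a] / 1A
```

A quick sanity check: two segments of 2Ω and 3Ω in series give 5Ω end to end, and two 2Ω segments in parallel give 1Ω, which is why extracting once per net and re-solving for many point pairs is cheap.
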
The next use model is post-extraction analysis. Instead of doing its own layout analysis, RCExplorer reads in the DSPF parasitic file directly. It then reports:

  • pin-to-pin resistance
  • net capacitance, both total and coupling
  • pin-to-pin delay on the flattened design
  • it can run analysis on either selected nets (using filters) or on all nets
  • it can run analysis on a selected pin pair or on all pin pairs
  • it can handle the largest nets: power, ground, clocks, etc.

If the DSPF file contains layout information for the nodes and RC devices, RCExplorer can also cross-reference between the layout editing tool and the analysis results, which is very useful during post-layout debugging.
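
The capacitance-reporting part of this use model can be illustrated with a deliberately simplified sketch: pull the C elements for one net out of a DSPF fragment and split total capacitance into grounded and coupling contributions. Real DSPF has considerably more syntax (`*|NET` headers, instance sections, continuation lines), and the `net:index` node-naming convention here is an illustrative assumption, not RCExplorer's actual parser.

```python
def net_capacitance(dspf_text, net):
    """Return (total, coupling) capacitance for `net` from a toy DSPF fragment.

    Assumes SPICE-style capacitor lines of the form 'Cname node1 node2 value',
    with '0' as the ground node and node names prefixed 'net:index'.
    """
    total = coupling = 0.0
    for line in dspf_text.splitlines():
        parts = line.split()
        if len(parts) == 4 and parts[0].startswith("C"):
            _, n1, n2, value = parts
            # Keep only capacitors with at least one terminal on this net.
            if n1.split(":")[0] != net and n2.split(":")[0] != net:
                continue
            c = float(value)
            total += c
            if n1 != "0" and n2 != "0":   # neither terminal grounded: coupling cap
                coupling += c
    return total, coupling
```

On a three-capacitor fragment where `clk` has 1.0fF to ground and 0.5fF coupling to a neighbor, this reports a 1.5fF total and 0.5fF coupling for `clk`, mirroring the total-versus-coupling split in the report list above.
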

The final use model is to compare two versions of the same design at the DSPF level. The normal way to do this is to set some filter, and get RCExplorer to identify, for example, all nets that have changed by more than 5%. The same analysis can be used to compare two different extractions with different tools, looking for nets where the two extractors are off by more than a trivial amount. The error distribution can be displayed either graphically or in tabular format.
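
The filtering step of this comparison use model amounts to a relative-difference threshold per net. A minimal sketch, assuming each extraction has been reduced to a per-net value (capacitance or resistance) keyed by net name; the `changed_nets` helper and the net names are invented for illustration:

```python
def changed_nets(old, new, threshold=0.05):
    """Flag nets whose value moved by more than `threshold` (default 5%).

    old, new: dicts mapping net name -> extracted value (e.g. total cap).
    Returns {net: relative_change} for nets present in both extractions.
    """
    flagged = {}
    for net in old.keys() & new.keys():   # compare only nets in both versions
        ref = old[net]
        delta = abs(new[net] - ref) / ref if ref else float("inf")
        if delta > threshold:
            flagged[net] = delta
    return flagged
```

The same function compares two extractors on one design as easily as two versions of one design; feeding the flagged deltas into a histogram gives the graphical or tabular error distribution mentioned above.
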

A tool like RCExplorer is only useful if it is accurate. Comparing its calculations to a “golden” HSPICE simulation over thousands of pin-pairs in a design shows an accuracy of 0.005% (a relative error of 0.00005, which has to be in the noise).

The analysis is also fast, thanks to advanced matrix solver technology and the reuse of analysis models. Analyzing a 343-net design (60K pin-pairs) with a 1GB DSPF took just 1 minute 38 seconds. Analyzing a design with 265K nets (21M pin-pairs) took 1h 33m from a reduced DSPF and 6h 12m from a non-reduced DSPF.

RCExplorer is in full-release. It is in use at several customers including a major flash-memory company. They haven’t given permission to use their name so you will have to make your own guess as to which one.

See also How Good Are Your Clocks?
See also What is Skipper?

The RCExplorer product page is here.


Breakthrough Battery Life for Mobile Devices

Breakthrough Battery Life for Mobile Devices
by Daniel Payne on 09-08-2015 at 6:02 pm

Battery life is a never-ending battle for me with all of my mobile devices: Laptop, Tablet, Smart phone, bike computer, Kindle Reader, Bluetooth headset, etc. It seems like I’m constantly having to charge up my battery at the most inconvenient times. When I think about the history of batteries for mobile devices I can recall all of the technology generations for mobile phones:

  • Nickel Cadmium (aka NiCad) – used in the 1980s and 90s; had a memory effect, so you had to let it fully discharge before recharging.
  • Nickel Metal Hydride (aka NiMH) – no memory effect, smaller, faster charging.
  • Lithium-Ion (aka LIB) – no memory effect, smallest, fastest charging, but can catch fire or explode.

There’s an emerging battery replacement technology based on Hydrogen Fuel Cells that could be making its way to mobile devices soon, because astute followers of Apple have noted a recently awarded patent, “Fuel Cell System to Power a Portable Computing Device“. The benefit of using a fuel cell is that it would power your notebook or smart phone for days or possibly weeks between charging. Here’s a figure from the Apple patent showing all of the components:


US Patent #20150239280

In this particular patent they show a MagSafe connector, something only used on the MacBook series of notebooks from Apple, although a British company, Intelligent Energy, has already developed a hydrogen fuel cell prototype for the iPhone 6. Since a hydrogen fuel cell emits water vapor, they had to place vents on the rear of the iPhone 6.

The fuel cell system also has some electronics built in to control the DC voltage and communicate with the mobile device, shown in Figure 1B below:

Some folks speculate that Intelligent Energy may be working with Apple on commercializing this technology for MacBook notebooks or future generations of iPhones. Whether it comes to market through Apple or Intelligent Energy, it now appears that hydrogen fuel cells are coming to our mobile devices, with the immediate benefit of stretching the time between charges to days or weeks. I look forward to this improvement because it will simplify my busy life, plus I love trying new technologies.

Related reading – Fuel Cell Technology Total Game Changer


Congratulations Dr. Walden C. Rhines!

Congratulations Dr. Walden C. Rhines!
by Daniel Nenni on 09-08-2015 at 1:00 pm

A funny thing happened at the Design Automation Conference last June in San Francisco. I was browsing the Kaufman award winner mug shots in the EDAC booth and noticed that Wally Rhines was NOT a winner. You can see them HERE. Immediately in disbelief I said to myself: Self, how can this be? Joe Costello, Aart de Geus, and some other guys I have never heard of are there but no Wally? I then said the same thing to Bob Smith, the new EDAC Executive Director. In fact everyone I mentioned it to was either shocked and/or in disbelief. How could EDAC overlook a man with this pedigree:

WALDEN C. RHINES is Chairman and Chief Executive Officer of Mentor Graphics, a leader in worldwide electronic design automation with revenue of $1.24 billion in 2014. During his tenure at Mentor Graphics, revenue has nearly quadrupled and Mentor has grown the industry’s number one market share solutions in three of the ten largest product segments of the EDA industry.

Prior to joining Mentor Graphics, Rhines was Executive Vice President of Texas Instruments’ Semiconductor Group, sharing responsibility for TI’s Components Sector, and having direct responsibility for the entire semiconductor business with more than $5 billion of revenue and over 30,000 people.

During his 21 years at TI, Rhines managed TI’s thrust into digital signal processing and supervised that business from its inception with the TMS320 family of DSPs through its growth to become the cornerstone of TI’s semiconductor technology. He also supervised the development of the first TI speech synthesis devices (used in “Speak & Spell”) and is co-inventor of the GaN blue-violet light emitting diode (now important for DVD players and low energy lighting). He was President of TI’s Data Systems Group and held numerous other semiconductor executive management positions.

Rhines has served five terms as Chairman of the Electronic Design Automation Consortium and is currently serving as a director. He is also a board member of the Semiconductor Research Corporation and First Growth Family & Children Charities. He has previously served as chairman of the Semiconductor Technical Advisory Committee of the Department of Commerce and as a board member of the Computer and Business Equipment Manufacturers’ Association (CBEMA), SEMI-Sematech/SISA, Electronic Design Automation Consortium (EDAC), University of Michigan National Advisory Council, Lewis and Clark College and SEMATECH.

Dr. Rhines holds a Bachelor of Science degree in metallurgical engineering from the University of Michigan, a Master of Science and Ph.D. in materials science and engineering from Stanford University, a master of business administration from Southern Methodist University and an Honorary Doctor of Technology degree from Nottingham Trent University.

When I asked Wally about it, he mentioned that he had been nominated before, so there was a nomination form on file, but maybe he was considered more of a semiconductor veteran than an EDA veteran. Suffice it to say he is definitely an EDA veteran now, absolutely!

The other thing you will notice is that there are no female Kaufman winners. My bet is that next year that will no longer be the case.

About the Phil Kaufman Award
Presented by the Electronic Design Automation Consortium and the IEEE Council on Electronic Design Automation, this award honors an individual who has had a demonstrable impact on electronic design through contributions in the field of Electronic Design Automation (EDA).

  • Business Impact
  • Industry Direction and Promotion Impact
  • Technology and Engineering Impact
  • Educational and Mentoring Impact

The 2015 Phil Kaufman Award will be presented on Thursday, November 12th, 2015 at the 4th Street Summit Center in San Jose. [Additional Information]


Also Read: Wally’s Fireside Chat at #52DAC!


How MunEDA Helps Solve the Difficulties of AMS/RF IP Reuse

How MunEDA Helps Solve the Difficulties of AMS/RF IP Reuse
by Tom Simon on 09-08-2015 at 12:00 pm

Reusing design IP is crucial for competitiveness. The need for reuse occurs with new designs on the same process node as the original design, new designs at the same node but using a different PDK or foundry, or designs on a different process node – usually smaller. However, achieving effective IP reuse has always been a challenge.

Digital designs are often expressed in an HDL such as Verilog or VHDL. Because of the design flow used for HDL based designs, recreating gate level representations targeted at different libraries and process nodes is manageable. However, when it comes to AMS, full custom or RF designs, porting to new technologies becomes a much more difficult proposition.

These designs are most often represented with schematics. When switching to a new PDK, designers will need to convert the schematic so that it uses the cells available in the new library. Not only can the cell/symbol name change but there will be changes in the number and names of the pins, and more fundamentally there will be changes in design parameters that determine circuit performance. Probably no design task is more tedious and risky than manually porting a schematic.

I’ve written previously about MunEDA’s circuit optimization software, but they also have expertise in schematic porting. According to MunEDA, the challenges of schematic migration are:

  • Different device parameters (vth, etc.) require adjustment of biasing and small signal parameters
  • Required W/L shrinking is not as simple as in digital
  • Some devices (mimcaps, inductors, etc.) may or may not be available, or may be of a different type
  • Circuit topology may need modification
  • Layout shrinking in integrated technologies is insufficient

MunEDA suggests a three-step process, starting with updating the schematic with the new library symbols. As previously mentioned, this is complicated by new symbol names, a potentially different number of pins, and changes in parameter names. MunEDA offers their Schematic Porting Tool (SPT), which can significantly automate this process.

Once the symbol mapping information is entered for the two libraries, MunEDA SPT can replace hundreds or thousands of symbols in seconds. MunEDA’s SPT is fully integrated into Cadence Virtuoso based custom and analog design flows. This includes properly handling SKILL context files, wrapper scripts and configuration scripts. The error-prone manual alternative would take orders of magnitude longer. Here are some examples of the operations that SPT can perform.
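
To make the symbol-replacement step concrete, here is an illustrative sketch of what such a mapping-driven rewrite does: given a table relating old-library symbols to new ones (symbol name, pin names, parameter names), each schematic instance is translated mechanically. The mapping entries, device names and instance records below are invented for illustration; SPT itself operates on Virtuoso schematic databases via SKILL, not on Python dicts.

```python
# Hypothetical mapping table from an old PDK's symbol to a new PDK's symbol:
# new symbol name, pin-name renames, and parameter-name renames.
MAPPING = {
    "nmos_1v8": {
        "new_symbol": "nch_lvt",
        "pins":   {"D": "d", "G": "g", "S": "s", "B": "b"},
        "params": {"W": "w", "L": "l"},
    },
}

def port_instance(inst):
    """Rewrite one schematic instance according to the mapping table."""
    rule = MAPPING[inst["symbol"]]
    return {
        "symbol": rule["new_symbol"],
        # Keep each net connection, but attach it to the renamed pin.
        "pins":   {rule["pins"][p]: net for p, net in inst["pins"].items()},
        # Carry parameter values over under their new names (values untouched;
        # resizing happens in the later optimization step).
        "params": {rule["params"][k]: v for k, v in inst["params"].items()},
    }
```

The point of the sketch is that the rewrite is pure bookkeeping once the mapping exists, which is why a tool can do thousands of instances in seconds while a human doing the same renames by hand is slow and error-prone.
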

The second step of the MunEDA flow involves assessment and topology changes in the schematic to accommodate the new PDK. Any desired topology changes can be made at this point to accommodate the new library. Once this is done, the design is ready for automated sizing and optimization.

The last and most important step is modification of the design parameters so that the circuit operates properly. MunEDA SPT lets users create their own sizing strategy. It also supports multi-objective optimization, including power and noise minimization. Sizing can also be done over multiple process corners.

The MunEDA website offers several papers on use of SPT by their customers, including examples submitted by the University of Dresden, STARC, Evatronix and others. Look here for the specifics.

Despite the usual difficulty of automating custom, analog and RF design steps, it’s good to know there are options for dramatically improving the process of porting schematics to new PDKs. MunEDA is able to do this by building on their clearly established expertise in custom circuit design optimization.