MAS370 MH370 DO254 and Cell Data
by Luke Miller on 03-15-2014 at 10:00 pm

For the connected, instant-knowledge information world we live in, the missing Malaysia Airlines Flight 370 is most humbling. Let us be reminded, as we look for details and theorize… that someone's Father, Mother, Brother, Sister, Son, Daughter, or Friend is missing. Just terrible, but the Millers continue to pray and hope for that miracle.

What have we learned from a technology standpoint? Being a RADAR/EW fella, it amazes me how many holes we have in the airspace as we get farther out from US soil. Check out NEADS (Northeast Air Defense Sector), which is right down the road from me. Simply amazing, but it is not possible for all places to have such resources. We have learned about transponders and other fabulous technologies, engines that report a pulse back to their designers, and the list goes on. It is obvious the plane was not destroyed mid-flight, as we know hardware does not keep pinging without power.

If one looks at the DO-254 standard, Design Assurance Guidance for Airborne Electronic Hardware, designing hardware into an aircraft is anything but trivial, and it is not for the faint of heart. Most designers prove that their requirements work; the DO-254 standard, in a nutshell, requires one to also prove what is not supposed to work. Being an FPGA guru, I know this is possible, but it takes stellar IP and a dedicated team. Check out logicircuit.com for an excellent overview of these standards and IP. This basically means every design and bit permutation is tracked and covered to prove that the design cannot cause a safety-critical issue.

It does look more and more like, unfortunately, we do not have the world's best people. Trust is a funny thing; every day we place our lives in the hands of others. I experienced that this week, first with my wife driving me to the airport and then with my six flights. You know what I was thinking about? I wondered about the character of these pilots, and of my co-passengers. I looked them over; I couldn't help it.

With the simple flick of a switch, the 5th largest passenger jet in the world, the Boeing 777, went into stealth mode… Humbling.

Technology is revealed in the search for the airplane: the data analysis, the tracking algorithms, the possibilities. I believe it is time to pull the cell phone data from all the pilots and passengers to see if their phones were not in flight mode, and to correlate that data with what we know. As you know, you can get coverage in spots, and there is a record of pings when a cell phone tries to get service… Perhaps some texts got through. It would be great if some of the phones were on and this data led to a safe recovery of the people. That would be fantastic. Search the cell data. The answer may be in there.

As you know, after this is over we can expect the overcorrection, the new rules, the new technology, etc. Technology can track us, heal us, and help us, but it will never give us integrity, trust, respect, virtue, and the like. Electronics and laws can hold us accountable, but the rest is up to each one of us. Still pulling for the miracle.


Getting 3D TV from 2D Content
by Daniel Payne on 03-14-2014 at 7:28 pm

3D TV has been all the rage over the past few years because of the added realism it offers the viewer, but there's really not that much content that you can stream or play on a Blu-ray device. Wouldn't it be cool if there were a box that could create 3D on the fly from a 2D stream or Blu-ray? This week I discovered that such a box does exist, and I got to see it myself courtesy of a company called VEFXi and their converter box called 3D-Bee.

Manny Muro is the VP of Engineering at VEFXi, and he invited me over to their place in North Plains, Oregon, about a 40-minute drive from my home. The 3D-Bee product has an FPGA inside, along with some off-the-shelf video chips. Manny and his team are hard at work on the next-generation chip that will have higher performance and be implemented as an ASIC, where the ASIC vendor will take the RTL code and do the physical implementation. Their design process starts with an architect writing algorithms, which then get manually entered as Verilog, followed by logic synthesis with Synopsys. For virtual prototyping they are using a Xilinx Spartan-6 development board from Avnet. With an HDMI daughter card they can look at real video results to verify their design.

I asked Manny if they were using any of the High-Level Synthesis (HLS) capability in the Vivado software from Xilinx, but surprisingly he said that they didn't because they needed a finer level of control over the implementation. I remember that Luke Miller blogged about HLS in Vivado, and I thought that video conversion would be a perfect fit for it at VEFXi.

For a demo, they used a Blu-ray player with 2D content and a 3D TV that required glasses, and we watched different action movie clips where the 3D effect made the movie even more compelling. I learned that 3D TVs have two different technologies:

  • Side by side
  • Frame sequential

I also met Craig Peterson, the founder and CEO of VEFXi. Craig is also on the board for the International 3D Society (I3DS). For the grand finale, Craig showed me a demo of some of their upcoming 3D technology that provides dramatically more depth during viewing compared to any other technology out there.

The holy grail of 3D is to do away with the funny glasses and view 3D unencumbered, naturally. Stay tuned for upcoming product announcements in this area from VEFXi. In the industry they use the phrase auto-stereoscopic 3D, which means glasses-free 3D.

The 3D-Bee product family has been on the market since 2011 and you can buy it online directly at www.3d-bee.com/store for just $349. VEFXi is also looking to hire a couple of ASIC design engineers.



Jasper at DVCon and EJUG
by Paul McLellan on 03-13-2014 at 7:05 pm

The Jasper European User Group meeting (EJUG) is coming up in a couple of weeks. It will be held in the Munich Hilton (where I have stayed many times; the S-Bahn from the airport pretty much stops in the basement) on April 2nd.

The schedule for the day is:
9:00 AM – Registration and continental breakfast
9:30 AM – Jasper Overview
9:45 AM – Customer Success Stories
10:00 AM – ARM Presentation
10:45 AM – Break
11:15 AM – Port-Based Generic Mechanism for Connectivity by Ericsson
11:45 AM – Efficient IP Bring-up with Jasper
12:30 PM – Lunch
2:00 PM – Advanced Functional Verification for Quality and Productivity Increase by ST
2:45 PM – Low Power Formal Verification with Jasper
3:15 PM – Exhaustive Security Verification with Jasper
3:45 PM – Break
4:15 PM – High-Performance Sequential Equivalence Checking with Jasper
4:45 PM – Coverage Verification with Jasper
5:15 PM – Panel: Efficient Verification Problem Solving with Jasper
-Yann Oddos, Intel
-ARM
-Yousaf Gulzar, Ericsson
-Guy Dupenloup, ST
6:00 PM – Closing remarks and cocktail reception

Registration for EJUG is here.

At DVCon some of Jasper's customers presented on various aspects of using formal techniques. Prosenjit Chatterjee, Scott Fields and Syed Suhaib from Nvidia presented A Formal Verification App Towards Efficient, Chip-Wide Clock Gating Verification. Clock gating is an important part of SoC design for controlling power, but with multiple clock domains the verification can be complex. Nvidia presented a fully automated approach built on top of Jasper's sequential equivalence checking (SEC) app. The SEC app performs various optimizations automatically to achieve deeper proof bounds, or in many cases even full proofs, taking advantage of the symmetry of the setup. Nvidia applied this methodology across the chip to illustrate its usefulness. They found multiple clock gating bugs across many projects using this approach, and over half of these were found after supposedly high simulation coverage of the design. If you want to find obscure corner cases that are not handled correctly, then formal techniques once again outperform simulation.
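To make the idea concrete, here is a minimal, hypothetical sketch (not taken from the Nvidia paper; module and signal names are invented) of the kind of design pair a sequential equivalence checker is asked to compare: a reference register with a simple enable, and its clock-gated counterpart.

```systemverilog
// Reference design: enable implemented purely on the data path.
module pipe_ref (
  input  logic        clk, rst_n, en,
  input  logic [31:0] d,
  output logic [31:0] q
);
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)  q <= '0;
    else if (en) q <= d;               // flop updates only when enabled
endmodule

// Power-optimized design: the same enable now gates the clock.
module pipe_gated (
  input  logic        clk, rst_n, en,
  input  logic [31:0] d,
  output logic [31:0] q
);
  logic en_lat, gclk;
  always_latch if (!clk) en_lat = en;  // behavioral model of an ICG cell
  assign gclk = clk & en_lat;

  always_ff @(posedge gclk or negedge rst_n)
    if (!rst_n) q <= '0;
    else        q <= d;
endmodule

// An SEC tool is asked to prove that q in pipe_gated matches q in pipe_ref
// cycle by cycle for every input sequence, i.e. that the gating logic never
// drops or duplicates a register update.
```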

Shuqing Zhao, Shan Yan and Yafeng Fang from Broadcom presented Practical Approach Using a Formal App to Detect X-Optimism-Related RTL Bugs. X-optimism is a problem during RTL simulation, and with SoCs the size they are, gate-level simulation is simply not practical for eliminating the issues. The JasperGold X-propagation verification app reads in the RTL, analyzes the design, and then automatically implements assertions to check for all X occurrences on targets such as clocks, resets, control signals and output ports. If the formally proven X occurrences are determined by the user to be unexpected, it usually implies they were masked in RTL simulation due to X-optimism. Broadcom discussed the results of this approach using two case studies, a power management controller module and an audio processing module, both of which had design bugs masked by X-optimism.
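As a rough illustration only (this is not the code the JasperGold app actually generates, and the signal names are hypothetical), the checks it creates amount to SystemVerilog assertions of this general form on clocks, resets, control signals and output ports:

```systemverilog
module xprop_checks (
  input logic       clk,
  input logic       rst_n,
  input logic       fsm_ctrl,   // example control signal
  input logic [7:0] dout        // example output port
);
  // Once out of reset, the control signal must be a known 0/1 value.
  assert property (@(posedge clk) disable iff (!rst_n) !$isunknown(fsm_ctrl))
    else $error("X detected on fsm_ctrl");

  // The output port must never carry an unknown (X/Z) value either.
  assert property (@(posedge clk) disable iff (!rst_n) !$isunknown(dout))
    else $error("X detected on dout");
endmodule
```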

And save the date: DVCon next year is March 2-5, 2015, and DVCon Europe is October 14-15 this year (also in Munich, like EJUG).

The DVCon website is here.




Cadence and ARM BFF
by Paul McLellan on 03-13-2014 at 6:38 pm

The biggest market for semiconductors is mobile and an ARM processor is the center of the axle around which it revolves. So everyone in the mobile ecosystem needs to work closely with ARM. At CDNLive earlier this week Cadence and ARM announced that they are deepening their partnership. Most of what they announced makes it a lot easier to use Cadence’s products with ARM’s without the user having to put it all together themselves. The announcement is a three-legged stool.

The first leg is that Cadence has a new adaptable interconnect performance characterization test suite in the Cadence Interconnect Workbench, along with AMBA Designer integration, that delivers a significant speed-up of performance analysis and verification for systems based on CoreLink CCI-400 system IP and the NIC-400 design tool.

The second leg of the stool is that Cadence now provides ARM Fast Models combined with the Palladium XP II platform to support ARMv8-based system embedded OS verification. What this means in practice is that it is much easier to use Cadence’s hybrid virtual platform technology using ARM Fast Models for the processor (and perhaps some peripherals) and Palladium emulation to model the parts of the design that only exist at RTL. In particular, operating system (OS) bringup should be straightforward since everything is coming from a single supplier, Cadence.

Thirdly, verification IP (VIP) supporting the ARM AMBA5 CHI protocol for advanced networking, storage and server systems is now available for simulation and the Palladium XP II platform.

Together these three capabilities make bringing up ARM Cortex-based systems easier. The lead customer for this is Nvidia. As Kevin Kranzusch, vice president of System Software, says: "The Cadence Palladium solution for embedded software development enabled by ARM-based Fast Models helps us reduce the system software validation cycle and ensures a smoother post-silicon bring-up."

I worked for several years in the virtual platform market, as did Frank Schirrmeister, whom I met to discuss the announcement. The big problem with virtual platform technology was never the processor modeling, which was amazing technology, but modeling the boring peripherals. No matter how good the modeling technology, it took time and money to create and maintain the models. Since we were selling the ability to start software development earlier, taking time reduced the value proposition, and taking money is always worse than not needing to. What looks really important is that emulation has transformed from something esoteric that the occasional development group had for SoC design into something mainstream that is part of every verification strategy. The result is that hybrid virtual platforms, with processors modeled using JIT compiler technology and peripherals modeled using RTL on an emulator, are the "killer app" for virtual platforms.




Designing for Wearables!
by Daniel Nenni on 03-13-2014 at 5:30 pm

Wearables are going to be a real game changer for the fabless semiconductor ecosystem, absolutely. What other high-volume semiconductor market segment has such a low barrier to entry? Speaking of low barriers to entry, the first stop on my Southern California trip last week was Monrovia, the home of Tanner EDA. Tanner is already a player in wearable design enablement due to their low cost and low learning curve.

Tanner EDA tools for analog and mixed-signal ICs and MEMS design offer designers a seamless, efficient path from design capture through verification. The powerful, robust tool suite is ideal for applications including Power Management, Life Sciences / Biomedical, Displays, Image Sensors, Automotive, Aerospace, RF, Photovoltaics, Consumer Electronics, MEMS and Wearables! Try it for free!

Generally speaking, 80% of the silicon is shipped by 20% of the companies (the tried and true 80/20 rule). This may also be the case with wearables from Google, Apple, Samsung, Intel, LG, Sony, Microsoft, etc., but the remaining 20% represents billions of units over the next 10 years, and there will be thousands of semiconductor entrepreneurs vying for that 20%, so you had better get started!

I guess the smartwatch would be considered the first wearable. My first smartwatch was a Pulsar calculator watch, remember those? I was tossed out of a math exam for wearing one. Not for using it, just for having it on my wrist! Even today wearable technology scares people. The Google “Glasshole” stories are pretty funny.

The history of the wrist watch is pretty interesting. Wearable time pieces were generally kept in a pocket until the military started strapping them to their wrists for more efficient time telling. My Grandfather wore his first wrist watch in WWI but went back to the traditional pocket watch afterwards until his death at age 102. We are creatures of habit for sure. Wrist watches continued to grow in popularity in the 1920s and pretty much everyone wore a watch until the mobile phone became ubiquitous.

I stopped wearing a watch when I started carrying a Blackberry and I have not worn one since. Even though I’m a fitness fanatic I have yet to buy a Fitbit or Fuelband. What’s it going to take to get something back on my wrist? From what I can tell Apple is on the right track by combining a superset of functions from the iPhone and a health wrist band. What I really want to know is how much longer I have left to finish my bucket list? By profiling my many health and movement indicators, a smartwatch should be able to predict a catastrophic health event such as a seizure, stroke, or heart attack. And that capability is coming, believe it. We will change the world once again.




Galileo, not a barber, but an Intel maker module
by Don Dingee on 03-13-2014 at 3:00 pm

Words often have much deeper meaning than first meets the ear. The story behind a lyric, or a name, reveals origins, philosophical themes, and ideas beyond the obvious. A new effort from Intel conjures up just such an example – a deep reference to makers everywhere.

In a familiar refrain from Queen's "Bohemian Rhapsody," we hear two choirs sparring over the fate of a youngster who has taken the life of another, and is now considering an even greater offense. On the surface, the words are completely in keeping with the theme of the album, "A Night at the Opera":

“(Galileo) Galileo
(Galileo) Galileo
Galileo figaro, magnifico!”

In her book “The Real Story of Freddie Mercury” (blogger’s note: adult themes, parental discretion advised), Mariam Akhundova suggests the actual operative word is not Figaro, the Barber of Seville, but the Latin term figuro. The resulting interpretation: “Magnify the Galilean’s image,” making this an elegant reference to Jesus of Galilee. Freddie was way deeper than headbanging in an AMC Pacer, for sure, and this meaning fits better for me than a commentary on Galileo Galilei’s barbering skills.

An homage to one of the earliest scientific makers, whose name is in turn a tribute to the Christian “maker of all things”, is an interesting play on words, indeed. The project bearing this name – Intel Galileo – is a creative celebration, and carries with it deeper meaning as a company strives to reinvent itself on the Internet of Things.

Makers are now powering development of the IoT and wearables, enabled by inexpensive modules ready to run open source software for just a few dollars. With ideas backed by crowdfunding and creative communities, makers are reaching far beyond “learning to code” into rapid prototyping of concepts, and depending on the module and situation, even production.

Intel is taking its Quark processor straight to makers, trying to capture hearts and minds with its powerful brand and broad software support. The first Intel maker module with Quark onboard, Galileo draws on the Arduino (Italian for “brave friend”) open source hardware/software project. The board footprint is 4.2” x 2.8”, with the connectors projecting slightly over the edges.

photo courtesy Arduino Blog

Strictly speaking, Intel Galileo with its 400 MHz Quark SoC X1000 is not an Arduino board; official Arduino hardware is based on Atmel megaAVR microcontrollers. Galileo does accept Arduino shields, compliant to the Arduino 1.0 pinout devised for the Arduino Uno R3, for hardware expansion. Also on board are a mini-PCI Express slot (an easy way to add a Wi-Fi module), a 100Mb Ethernet port, a microSD slot, host and client USB ports, and an RS-232 port. Power comes from a 5V DC barrel jack and an external AC-to-DC adapter.

Galileo also honors the Arduino software framework, fully emulating Arduino on Linux. This is a very interesting area: some programmers like Python scripting, some want the abstraction and portability Arduino libraries provide, and some just want to go after embedded C/C++ code. Intel has a variety of Galileo software downloads – with and without Arduino – to get makers started quickly.

Like many maker modules, Intel Galileo is after lower cost; a quick web survey currently shows pricing from $55.79 to $79.99 depending on outlet and volume. We should keep in mind a couple things: the Quark SoC X1000 in 32nm is relatively new and hasn’t come down the learning curve yet, and Intel can always move pricing by subsidies – they claim to have fielded Galileo units in over 400 universities so far.

Will Galileo make an impression with makers? In my experience, ARM loyalists are ARM loyalists, and Intel devotees are Intel devotees: comparing the two is somewhat academic, because crossover is limited. A maker can get a lot of pop for a little price on ARM, but there is something to be said for Intel, their brand, and their marketing muscle. In a word, Intel is reseeding, and we may not see the harvest for a while.

The fact of the matter is that until now, there wasn't any really small, inexpensive module with Intel Inside; the smallest one could get approximating an x86 environment was VIA Technologies and their Nano processor on a Pico-ITX board, or matching competitive offerings with Intel Atom, and none of those are maker-cheap. The ARM maker modules have a lengthy head start, and Intel is definitely stealing a page from the ARM playbook here.

Arduino figuro, magnifico. Galileo is definitely an interesting play from Intel. One unique problem with microcontrollers is they are so inexpensive, and have so many varied features, it has been difficult to drive a standard form factor (something like PC/104, EBX, or EPIC) down into this space with affordable boards. Arduino is perhaps the closest thing to a de facto standard we have for maker modules right now.

The next round of Intel maker module – Edison, and a newer 22nm version of Quark – puts a much smaller form factor in play, drafting on the popularity of Electric Imp. More on that next time.



Mark your Date for Semiconductor Design Vision
by Pawan Fangaria on 03-13-2014 at 4:30 am

A very popular acronym is 'WYSIWYG' – What You See Is What You Get! It is very true that being able to visualize things is important for improving them in various aspects such as aesthetics, compactness, organization, structure, and ease of understanding for correction; most important of all, in the case of semiconductor design, is being able to identify issues and resolve them to get the best PPA-optimized design.

No matter how complex a design is, designers need to decompose it down to the last bit and view the details in order to debug it. At times, they need to shut off other parts of the design, keep only the portion of interest, and inspect it to correct things. Considering today's billion-gate SoC designs, one can imagine how difficult it would be to visualize and correct them. What if we had automated tools at various levels of the design process that could help designers visualize things in a matter of seconds, at the level of detail they need, and then analyze and correct?

Concept Engineering has such tools at the transistor, gate, RTL and mixed-signal, mixed-language levels that provide extremely easy visualization, analysis and debugging capability to designers, thus increasing their productivity. Besides, the company provides several other support utilities for designers, as well as software components for EDA tool developers to improve the designer experience; NlView™ widgets can be used for automatic schematic generation at the transistor, gate, RTL, block and system level, optimized by algorithms and flexible enough to be controlled manually.

SpiceVision is a complete tool that reads SPICE and works at the transistor and circuit-component level. It has numerous capabilities for viewing, analyzing, debugging, optimizing and documenting complete circuits, or parts of them, at the transistor level, thus speeding the overall circuit design. Parasitic-level debugging, which is considered tough, can be made much easier by using SpiceVision.

GateVision is an ultra-fast gate-level netlist viewer, analyzer and debugger that can handle the largest SoCs, process the largest Verilog, LEF/DEF and EDIF netlists, and display waveforms of simulation results with signal tracing back to the source level. Full-featured design navigation, logic-cone extraction, interactive viewing, an intuitive GUI and more make the debugging activity fun for designers.

RTLvision provides fast viewing, debugging and optimization capability for RTL code written in VHDL, Verilog or SystemVerilog. It has several capabilities, such as clock tree extraction and interactive code navigation, that make designers' work easier.

StarVision is the ultimate tool for providing quick debugging capabilities to designers of mixed-signal, mixed-language designs, thus easing the job of integrating IP from various sources into complex SoCs. To make analyzing and debugging complex SoCs (whose parts can be at different levels of abstraction) easy, StarVision works as an integrated cockpit that can be used to debug a design at the transistor, gate, RTL, or even source-code level. Various parts of the design can be analyzed separately through various means.

Above is just a high-level summary of these tools. To get what you want, you should go see them in person. Concept Engineering is setting up booth #11 at DATE'14 (Design, Automation & Test in Europe), to be held in Dresden, Germany on 24-28 March 2014. Detailed presentations and demos of these capabilities will be provided, along with the latest tools and features.

There is another event in Silicon Valley on 24th March, the SNUG Silicon Valley Designer Community Expo at the Santa Clara Convention Center, CA, at which Concept Engineering will showcase its products.

EDA Direct is an authorized distributor of Concept Engineering products. You may want to set up a time with them for a specific, focused discussion or review of the products by writing to sales@edadirect.com.

Meet the people who make the inside of electronic circuits visible to you!!




A Tool Conceived With Designers’ Input and Developed from Scratch
by Pawan Fangaria on 03-12-2014 at 10:15 am

If we look at the past, most EDA tools in the semiconductor design space have originated from designers' need to do things faster, whether in design exploration, manual design, simulation, verification, optimization (power, performance, area – PPA), or many other steps in the overall design flow. What matters most is how fast the overall design turnaround is, with all kinds of design closures such as functionality, timing, area and power. Many tools are lost amid the market dynamics of mergers and acquisitions. However, my close observation tells me that the tools which are conceptualized with expert designers' participation and driven in collaboration with design houses are built to last. They cannot fall victim as long as they serve the larger purpose of the overall design flow.

Often we ignore designers when they ask for something audacious, looking at it as if they wanted a 'push-button' solution. But, hey, just wait: if you do not provide it today, it will become a reality tomorrow, and you may be left behind. If I look at RTL sign-off at the top level of the design flow, it was in its infancy before the millennium; today, however, it appears in the mainstream design flow. Inspired by this thought, I met Siddharth Guha, Sr. Manager at Atrenta's Noida office. Siddharth is an expert in power solutions for the SpyGlass RTL sign-off platform and has worked on SpyGlass Power since the very early years of this millennium. It was a nice opportunity for me to learn the intricate details about this product, what went into its creation, and how best it serves the industry. Here is the conversation –

Q: Siddharth, I guess SpyGlass Power is a well-known product in the semiconductor design community. So, instead of talking generally about the product, tell me something about how it was conceptualized and how it started.

A: Power used to be an aspect looked at toward the end of the design flow. However, as design sizes and complexities started increasing and technology nodes continued shrinking, power analysis and optimization gained importance. Our initial customers were looking for a tool which could estimate and optimize the power of the chip in advance, managing budgets accordingly. So estimating and optimizing power at the RTL, at the beginning of the design flow, was conceived with our customers' inputs. This product was developed from the ground up, with intense customer participation at all stages of development, from conceptualization to development and validation. Today it is being used effectively in production by our customers.

Q: What are the typical challenges that need to be handled by SpyGlass Power at the RTL?

A: Power closure with silicon is always a challenge. We have to make sure that the power stays within limits at every stage: RTL design, logic synthesis, pre-CTS, post-CTS and so on. At 28nm and below, other issues that were not talked about earlier have become much more pronounced. For instance, for internal power, accurate slew rates have to be taken into account, and wire load models are weak. Additionally, leakage power is very significant at smaller geometries. All these issues require the tool to estimate and reduce power as early as the RTL.

Q: So, how does SpyGlass Power tackle these issues, given that many of them may show up late in the design cycle?

A: SpyGlass is a platform for the complete design flow. SpyGlass Power estimates power at the RTL as well as at the gate level, which allows the user to track the correlation of power throughout the flow. Accuracy is achieved through calibration against reference gate-level data, or directly with the technology data found in SpyGlass Physical.

Q: That's right, but won't looping back from the physical (gate) level take longer?

A: Yes; however, this looping is much less costly than traditional looping. What has to be looked at is how fast the designer gets the data to make the right decisions. Since much of the design process starts early at the RTL, things keep getting structured as we go down to the layout level, and refinements keep taking place at every stage so that we have no major surprises at the end of the design cycle. In the SpyGlass RTL Signoff platform, most of the tools are connected together, which helps in faster convergence of the design.

Q: What are the major differentiating factors in SpyGlass Power?

A: SpyGlass Power provides a consistent and comprehensive solution for power estimation, reduction and verification. Our customers tell us that our best-in-class estimation engine provides faster results and convenient calibration against reference gate-level netlists. After power estimation, our users perform power profiling to check the quality of the simulation data and the power efficiency of the current design. The profiling provides a complete activity report with power computations. The power reduction step offers guidance to designers on possible power reductions through various means, such as clock gating optimization or more efficient memory data operation. With the designers' permission, SpyGlass Power can also fix the RTL for power reduction, ensuring correct functionality with our SEC (Sequential Equivalence Checker). The power reduction engine also leverages SpyGlass CDC to ensure that the modified design is CDC safe. For power verification, SpyGlass Power checks the complete design against the power intent for the various domains, level shifters' states, isolation logic, etc. As our customers tell us, SpyGlass Power provides a complete solution for power, from architecture to estimation, reduction, automatic power fixes, and verification at all levels.
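To make the clock-gating guidance concrete, here is a small hypothetical RTL fragment (my own illustration, not tool output; module and signal names are invented) of the pattern such a power-reduction step typically flags: an enabled register bank that, synthesized naively, still clocks every flop on every cycle.

```systemverilog
module sample_bank (
  input  logic        clk, rst_n, capture,
  input  logic [63:0] sample_d,
  output logic [63:0] sample_q
);
  // New data is captured only when 'capture' is high. Without clock gating,
  // all 64 flops still receive a clock edge every cycle; a power tool can
  // suggest (or, with the designer's permission, insert) a clock gate driven
  // by 'capture', then prove the gated version equivalent with sequential
  // equivalence checking.
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)       sample_q <= '0;
    else if (capture) sample_q <= sample_d;
endmodule
```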

Q: So, how do you see customer response to SpyGlass Power?

A: Our customers are very happily using this tool. We work in a very collaborative manner and take a pro-active approach in solving the issues designers face. We have seen our customers using this tool in unique ways, which we had not envisioned while developing the tool. For example, while using this tool for chip power optimization, they also use the activity report to improve their software and optimize transactions on the design.

Q: That's quite heartening. What more are designers looking forward to from this tool for their large SoC designs?

A: Interesting question. Although SpyGlass Power supports UPF, one of our customers recently requested that it leverage a set of specific constructs that would allow the tool to model the effect of automatic switching and voltage scaling of power domains. Looking at the huge number of these domains in today's SoCs, this is going to be a fun challenge to achieve.

It was a great session with Siddharth, and I learned a lot about what goes into the making of SpyGlass Power. It definitely looks to be an effective tool that provides large returns on designers' time. Since power has become more important, not only for mobile and hand-held devices but also for other consumer and home appliances, I can see rising demand for such a tool in the semiconductor design community.




Now even I can spot bad UVM
by Don Dingee on 03-11-2014 at 8:30 pm

Most programmers can read a code snippet and spot errors, given enough hours in the day, sufficient caffeine, and the right lens prescription. As lines of code run rampant, with more unfamiliar third-party code in the mix, interprocedural and data flow issues become more important – and harder to spot.

Verification IP particularly resembles that third-party code remark: vendors supplying UVM for test is now a widely accepted practice. Debugging a large testbench environment with a lot of third-party UVM can become a rather hilarious effort if the only view available is a text error message, likely launched from somewhere in the middle of potentially foreign code not being fed something it expects. Tracking down dependencies in the midst of the model from just the source code view is possible, but slow going.


The latest release of Aldec Riviera-PRO, 2014.02, brings a powerful new feature to testbench debugging: UVM Graph. Integrated directly into the tool on a new tab, UVM Graph lets users switch from their UVM source code view into a top-down visualization showing the components and objects, and the transaction-level modeling (TLM) connections between them.

Clicking a component rectangle expands its contents, showing the encapsulated objects with ports and interfaces and quickly revealing the exact details of the testbench model. A right click takes you to a cross-probe window for a closer look at objects or classes, or back to the source itself. Icons next to object names show their types.
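To picture what those rectangles and connections correspond to in source code, here is a minimal, hypothetical UVM fragment (class and instance names are invented, not Aldec's) with the sort of structure UVM Graph draws: an env containing a monitor and a scoreboard joined by an analysis-port connection.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_item extends uvm_sequence_item;
  rand bit [7:0] data;
  `uvm_object_utils(my_item)
  function new(string name = "my_item"); super.new(name); endfunction
endclass

class my_monitor extends uvm_component;
  `uvm_component_utils(my_monitor)
  uvm_analysis_port #(my_item) ap;                 // drawn as an outgoing TLM port
  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction
endclass

class my_scoreboard extends uvm_component;
  `uvm_component_utils(my_scoreboard)
  uvm_analysis_imp #(my_item, my_scoreboard) imp;  // drawn as the receiving end
  function new(string name, uvm_component parent);
    super.new(name, parent);
    imp = new("imp", this);
  endfunction
  function void write(my_item t);
    `uvm_info("SCB", $sformatf("got item data=%0h", t.data), UVM_MEDIUM)
  endfunction
endclass

class my_env extends uvm_env;
  `uvm_component_utils(my_env)
  my_monitor    mon;
  my_scoreboard scb;
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
  function void build_phase(uvm_phase phase);
    mon = my_monitor   ::type_id::create("mon", this);
    scb = my_scoreboard::type_id::create("scb", this);
  endfunction
  function void connect_phase(uvm_phase phase);
    mon.ap.connect(scb.imp);   // the TLM edge UVM Graph would display
  endfunction
endclass
```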


For another perspective on the UVM Graph capability, an Aldec guest blog post from Srinivasan Venkataramanan of CVC shares his first look at the tool:

Visualizing UVM Environments: Debug Features Deliver a Clearer View

The new Riviera-PRO 2014.02 release doesn’t stop there. Another new feature is a Finite State Machine (FSM) window, with a color-coded graph showing state transitions. The same data can also be presented in a tabular format, handy for highlighting transition counts.
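As a simple illustration (hypothetical code, not taken from the Riviera-PRO documentation), the kind of RTL the FSM window extracts looks like this; the tool would render IDLE, LOAD and SEND as nodes with the transition conditions on the edges, and the tabular view would add transition counts from simulation.

```systemverilog
module tx_fsm (
  input  logic clk, rst_n,
  input  logic start, data_ready, done,
  output logic busy
);
  typedef enum logic [1:0] {IDLE, LOAD, SEND} state_t;
  state_t state, next_state;

  // State register
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) state <= IDLE;
    else        state <= next_state;

  // Next-state logic: the three transitions shown as edges in the graph
  always_comb begin
    next_state = state;
    case (state)
      IDLE: if (start)      next_state = LOAD;
      LOAD: if (data_ready) next_state = SEND;
      SEND: if (done)       next_state = IDLE;
    endcase
  end

  assign busy = (state != IDLE);
endmodule
```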


We explored Riviera-PRO’s plotting capability in a previous release about a year ago, and this latest release includes a significant enhancement. Plots are great at viewing data quickly, but in a large data set even a plot can be overwhelming. Setting limits gives users control over what is seen and can improve readability for many situations.

Per normal, Aldec continues to make incremental improvements in Riviera-PRO, speeding up SystemVerilog simulation and GUI viewing performance at each release. Riviera-PRO uses Flexera Software FlexNet Publisher for licensing, with an update in this release that preserves existing licenses but uses the latest licensing daemon. For an overview of all the improvements, download the updated Riviera-PRO What’s New presentation.

Testbench software productivity today means bringing new in-house and third-party code into the mix quickly, and avoiding the need to read through code manually. The ability to visualize code and the relationships between code modules is a huge debugging aid, and the addition of UVM Graph to Aldec Riviera-PRO should be a welcome improvement for those working in UVM on a daily basis, or those new to the arena.



DSP running 10 times faster at ultra-low voltage?
by Eric Esteve on 03-11-2014 at 12:30 pm

Leti and STMicroelectronics have demonstrated a DSP that can hit 500 MHz while pulling just 460 mV – that's ten times better than anything the industry has seen so far. Implemented in a 28nm UTBB FD-SOI technology with forward body biasing (FBB) capability (used to decrease Vth), this DSP can also be exercised at a higher voltage when required by the application, hitting 2.6 GHz at Vdd = 1.3 V, equivalent to a similar device implemented in a 22nm Tri-gate technology (2.5 GHz at 1.1 V). But for any mobile application, delivering 500 Mops at 460 mV is a big achievement: according to Fabien Clermidy, head of Digital Design and Architecture at Leti, this could mean extending your battery life by about another 30% for typical usage. Leti and ST showed the FD-SOI DSP at ISSCC – the IEEE's International Solid-State Circuits Conference (February 2014), which is widely considered the premier forum for presenting advances in solid-state circuits and SoCs.
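As a back-of-the-envelope check on why such a low supply voltage matters (an illustrative sketch only, ignoring leakage and assuming the switched capacitance stays roughly constant), dynamic power and energy per operation follow the usual first-order relation:

```latex
% First-order dynamic power, and the energy-per-operation ratio between
% the 460 mV and 1.3 V operating points (illustrative, leakage ignored)
P_{dyn} \approx \alpha \, C \, V_{dd}^{2} \, f
\qquad\Rightarrow\qquad
\frac{E_{op}(0.46\,\mathrm{V})}{E_{op}(1.3\,\mathrm{V})}
\approx \left(\frac{0.46}{1.3}\right)^{2} \approx 0.13
```

So each operation executed at the 460 mV / 500 MHz point costs roughly an eighth of the dynamic energy of the same operation at 1.3 V; how much of that translates into the quoted ~30% battery-life gain depends on how much of a real workload can actually run at the low-voltage point, plus leakage and the rest of the system.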

As you can see in the above picture, the forward body bias capability is the key enabler of such performance at ultra-low voltage: even if we have to zoom into the 400-500 mV region of the abscissa, it's the 2000 mV FBB boost that allows reaching the 500 MHz bar; with no FBB boost, the DSP would run in the low 100s of MHz. Such a result is already a great achievement, but we can point to another advantage of using FD-SOI technology: to reach the maximum DSP performance, 2.6 GHz in this work, a chip maker would otherwise have to target a more expensive technology, 22nm Tri-gate.

In this table, we see that the maximum frequency is reached in works (1) and (4): 2.6 GHz at 1.3V Vdd on 28nm UTBB FD-SOI and 2.5 GHz at 1.1V Vdd on 22nm Tri-gate. From the previous articles about FD-SOI, you know that the extra cost of SOI wafers is more than compensated by the exploding fabrication cost increase paid when moving one technology node further, plus the extra cost of a Tri-gate implementation compared with the planar transistors used for FD-SOI in this work. That is, the DSP described in the Leti/ST paper presented at ISSCC exhibits TWO advantages when compared with 22nm Tri-gate:

  • The end user can benefit from a 30% reduction in power consumption when the DSP runs at ultra-low voltage (460 mV), still delivering 500 MHz performance
  • The same device can deliver the same performance as 22nm Tri-gate (2.5 or 2.6 GHz), at a much lower cost

The latter is interesting too, as the semiconductor industry is facing a BIG economic issue: Moore's law was empirically defined by Gordon Moore as an economic law (backed up by simple math, not by physics), saying that the cost per gate is divided by two every 18 months. What we see, starting with the 20nm technology node, is now a cost increase node after node. The reasons why this cost per gate is increasing have to do with the laws of physics: don't forget that 20nm is far below the smallest visible wavelength, for example! Nevertheless, there will be applications where integrating more IP (CPU, GPU, DSP, SRAM, etc.) into an SoC, designing the highest-performing CPU, or building an ultra-low-power device to stay competitive will justify paying a price premium, not only a higher price per gate but also a huge increase in development cost. But how many fabless chip makers will be able to invest so much, or, if you prefer, how many market segments will economically justify making such an investment? It's good to know that technologies like FD-SOI will allow Moore's law to continue, and I will come back with more explanations and material in the very near future to illustrate this assumption…

Eric Esteve, IPNEST

