Galileo, not a barber, but an Intel maker module

by Don Dingee on 03-13-2014 at 3:00 pm

Words often have much deeper meaning than first meets the ear. The story behind a lyric, or a name, reveals origins, philosophical themes, and ideas beyond the obvious. A new effort from Intel conjures up just such an example – a deep reference to makers everywhere.

In a familiar refrain from Queen “Bohemian Rhapsody,” we hear two choirs sparring over the fate of a youngster who has taken the life of another, and is now considering an even greater offense. On the surface, the words are completely in keeping with the theme of the album, “A Night at the Opera”:

“(Galileo) Galileo
(Galileo) Galileo
Galileo figaro, magnifico!”

In her book “The Real Story of Freddie Mercury” (blogger’s note: adult themes, parental discretion advised), Mariam Akhundova suggests the actual operative word is not Figaro, the Barber of Seville, but the Latin term figuro. The resulting interpretation: “Magnify the Galilean’s image,” making this an elegant reference to Jesus of Galilee. Freddie was way deeper than headbanging in an AMC Pacer, for sure, and this meaning fits better for me than a commentary on Galileo Galilei’s barbering skills.

An homage to one of the earliest scientific makers, whose name is in turn a tribute to the Christian “maker of all things”, is an interesting play on words, indeed. The project bearing this name – Intel Galileo – is a creative celebration, and carries with it deeper meaning as a company strives to reinvent itself on the Internet of Things.

Makers are now powering development of the IoT and wearables, enabled by inexpensive modules ready to run open source software for just a few dollars. With ideas backed by crowdfunding and creative communities, makers are reaching far beyond “learning to code” into rapid prototyping of concepts, and depending on the module and situation, even production.

Intel is taking its Quark processor straight to makers, trying to capture hearts and minds with its powerful brand and broad software support. The first Intel maker module with Quark onboard, Galileo draws on the Arduino (Italian for “brave friend”) open source hardware/software project. The board footprint is 4.2” x 2.8”, with the connectors projecting slightly over the edges.

photo courtesy Arduino Blog

Strictly speaking, Intel Galileo with its 400 MHz Quark SoC X1000 is not an Arduino board; official Arduino hardware is based on Atmel megaAVR microcontrollers. Galileo does accept Arduino shields, compliant to the Arduino 1.0 pinout devised for the Arduino Uno R3, for hardware expansion. Also on board are a mini-PCI Express slot (an easy way to add a Wi-Fi module), a 100Mb Ethernet port, a microSD slot, host and client USB ports, and an RS-232 port. Power comes from a 5V DC barrel jack and an external AC-to-DC adapter.

Galileo also honors the Arduino software framework, fully emulating Arduino on Linux. This is a very interesting area: some programmers like Python scripting, some want the abstraction and portability Arduino libraries provide, and some just want to go after embedded C/C++ code. Intel has a variety of Galileo software downloads – with and without Arduino – to get makers started quickly.

Like many maker modules, Intel Galileo is after lower cost; a quick web survey currently shows pricing from $55.79 to $79.99 depending on outlet and volume. We should keep in mind a couple of things: the Quark SoC X1000, in 32nm, is relatively new and hasn't come down the learning curve yet, and Intel can always move pricing via subsidies – they claim to have fielded Galileo units in over 400 universities so far.

Will Galileo make an impression with makers? In my experience, ARM loyalists are ARM loyalists, and Intel devotees are Intel devotees: comparing the two is somewhat academic, because crossover is limited. A maker can get a lot of pop for a little price on ARM, but there is something to be said for Intel, their brand, and their marketing muscle. In a word, Intel is reseeding, and we may not see the harvest for a while.

The fact of the matter is that, until now, there wasn't any really small, inexpensive module with Intel Inside; the smallest one could get approximating an x86 environment was VIA Technologies' Nano processor on a Pico-ITX board, or matching competitive offerings with Intel Atom, and none of those are maker-cheap. The ARM maker modules have a lengthy head start, and Intel is definitely stealing a page from the ARM playbook here.

Arduino figuro, magnifico. Galileo is definitely an interesting play from Intel. One unique problem with microcontrollers is they are so inexpensive, and have so many varied features, it has been difficult to drive a standard form factor (something like PC/104, EBX, or EPIC) down into this space with affordable boards. Arduino is perhaps the closest thing to a de facto standard we have for maker modules right now.

The next round of Intel maker module – Edison, and a newer 22nm version of Quark – puts a much smaller form factor in play, drafting on the popularity of Electric Imp. More on that next time.

lang: en_US


Mark your Date for Semiconductor Design Vision

by Pawan Fangaria on 03-13-2014 at 4:30 am

A very popular acronym is ‘WYSIWYG’ – What You See Is What You Get! Being able to visualize things is important for improving them in many respects, such as aesthetics, compactness, organization, structure, intelligibility for correction and so on. Most important of all, in the case of semiconductor design, is being able to identify issues and resolve them to get the best PPA-optimized design.

No matter how complex a design is, designers need to decompose it down to the last bit and view the details in order to debug it. At times, they need to shut off other parts of the design, simplify only the portion of interest, and inspect it to correct things. Considering today's billion-gate SoC designs, one can imagine how difficult it is to visualize and correct them. What if we had automated tools at various levels of the design process that could help designers visualize things in a matter of seconds, at the level of detail they need, and then analyze and correct?

Concept Engineering has such tools at the transistor, gate, RTL and mixed-signal, mixed-language levels, providing extremely easy visualization, analysis and debugging capabilities to designers, thus increasing their productivity. Besides these, the company provides several other support utilities for designers, as well as software components for EDA tool developers to improve the designer experience; NlView™ Widgets can be used for automatic schematic generation at the transistor, gate, RTL, block and system levels – optimized by algorithms, yet flexible enough to be controlled manually.

SpiceVision is a complete tool that reads SPICE and works at the transistor and circuit-component level. It has numerous capabilities for viewing, analyzing, debugging, optimizing and documenting complete circuits, or parts of them, at the transistor level, thus speeding the overall circuit design. Parasitic-level debugging, which is considered tough, can be made extremely easy by using SpiceVision.

GateVision is an ultra-fast gate-level netlist viewer, analyzer and debugger that can handle the largest SoCs, process the largest Verilog, LEF/DEF and EDIF netlists, and display waveforms of simulation results with signal tracing up to the source level. Full-featured design navigation, logic cone extraction, interactive viewing, an intuitive GUI and more make the debugging activity fun for designers.

RTLvision provides fast viewing, debugging and optimization capabilities for RTL code written in VHDL, Verilog or SystemVerilog. Capabilities such as clock tree extraction and interactive code navigation, among others, make designers' work easy.

StarVision is the ultimate tool for providing quick debugging capabilities for mixed-signal, mixed-language designs, thus easing the job of integrating IPs from various sources into complex SoCs. To make analyzing and debugging complex SoCs (whose parts can sit at different levels of abstraction) easy, StarVision works as an integrated cockpit that can be used to debug a design at the transistor, gate, RTL, or even source-code level. Various parts of the design can be analyzed separately through various means.

The above is just a high-level summary of these tools; to appreciate them fully, you should see them in person. Concept Engineering is setting up booth #11 at DATE 14 (Design, Automation & Test in Europe), to be held in Dresden, Germany on 24-28 March 2014. Detailed presentations and demos of these capabilities will be provided, along with the latest tools and features.

There is another event in Silicon Valley on 24th March, the SNUG Silicon Valley Designer Community Expo at the Santa Clara Convention Center, CA, at which Concept Engineering will also showcase its products.

EDA Direct is an authorized distributor of Concept Engineering products. You may like to set up a time with them for a specific, focused discussion or product review by writing to sales@edadirect.com

Meet the people who make the inside of electronic circuits visible to you!!

More Articles by Pawan Fangaria…..

lang: en_US


A Tool Conceived With Designers’ Input and Developed from Scratch

by Pawan Fangaria on 03-12-2014 at 10:15 am

If we look at the past, most EDA tools in the semiconductor design space have originated from a designer's need to do things faster – whether in design exploration, manual design, simulation, verification, optimization (Power, Performance, Area – PPA) or any of the many other steps in the overall design flow. What matters most is how fast the overall design turnaround is, with all kinds of design closure: functionality, timing, area and power. Many tools are lost amid the market dynamics of mergers and acquisitions. However, my close observation tells me that tools conceptualized with expert designers' participation and driven in collaboration with design houses are built to last. They cannot fall victim to those dynamics as long as they serve the larger purpose of the overall design flow.

Often we dismiss designers when they ask for something audacious, treating it as if they wanted a ‘push-button’ solution. But, hey, just wait: if you do not provide it today, it will become a reality tomorrow, and you may be left behind. If I look at RTL sign-off at the top of the design flow, it was in its infancy pre-millennium; today, it is part of the mainstream design flow. Inspired by this thought, I met Siddharth Guha, Sr. Manager at Atrenta's Noida office. Siddharth is an expert in power solutions for the SpyGlass RTL sign-off platform and has worked on SpyGlass Power since the very early years of this millennium. It was a nice opportunity for me to learn the intricate details about this product, what went into its creation, and how best it serves the industry. Here is the conversation –

Q: Siddharth, I guess SpyGlass Power is a well known product in the semiconductor design community. So, instead of talking generally about the product, tell me something about how it was conceptualized and how it got started?

A: Power used to be an aspect looked at toward the end of the design flow. However, as design sizes and complexities started increasing and technology nodes continued shrinking, power analysis and optimization gained importance. Our initial customers were looking for a tool which could estimate and optimize power of the chip in advance, managing budgets accordingly. So, estimating and optimizing power at the RTL, in the beginning of the design flow, was conceived with our customers’ inputs. This product was developed from the ground up, with intense customer participation at all stages of development; from conceptualization, to development, and including validation. Today it is effectively being used in production by our customers.

Q: What are the typical challenges that need to be handled by SpyGlass Power at the RTL?

A: Power closure with silicon is always a challenge. We have to make sure that the power stays within limits at every stage: RTL design, logic synthesis, pre-CTS, post-CTS and so on. Around 28nm and below, other issues that were not talked about earlier have become much more pronounced. For instance, for internal power, the accurate slew rate has to be taken into account, and wire load models are weak. Additionally, leakage power is very significant at smaller geometries. All these issues require the tool to estimate and reduce power as early as the RTL.
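(For readers less familiar with the terms above, here is a minimal first-order sketch of what a power estimator has to compute per design: switching power summed over signal nodes, plus static leakage. The model and all numbers are my own illustration, not SpyGlass's actual algorithm.)

```python
# Illustrative first-order power model (invented numbers, not SpyGlass's
# actual algorithm): dynamic power = activity * C * Vdd^2 * f, plus leakage.

def estimate_power(nodes, vdd, freq_hz, leakage_per_gate_w, gate_count):
    """Sum switching power over signal nodes and add static leakage.

    nodes: list of (activity_factor, capacitance_farads) tuples.
    """
    dynamic = sum(alpha * cap * vdd**2 * freq_hz for alpha, cap in nodes)
    leakage = leakage_per_gate_w * gate_count
    return dynamic + leakage

# Toy design: 1,000 nodes of 2 fF each, 10% average activity, 500 MHz, 1.0 V.
nodes = [(0.10, 2e-15)] * 1000
total = estimate_power(nodes, vdd=1.0, freq_hz=500e6,
                       leakage_per_gate_w=5e-9, gate_count=1000)
print(f"{total * 1e3:.3f} mW")  # → 0.105 mW
```

At the RTL, the hard part is that neither the per-node capacitance nor the activity is known precisely – which is why calibration against gate-level reference data matters so much.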

Q: So, how does SpyGlass Power tackle these issues because many of these may show up late in the design cycle?

A: SpyGlass is a platform for the complete design flow. SpyGlass Power estimates power at the RTL as well as at the gate level which allows the user to track correlation of the power throughout the flow. Accuracy is achieved through calibration of reference gate level data or directly with technology found in SpyGlass Physical.

Q: That’s right, but looping back from physical (gate) level will take longer?

A: Yes; however, this looping is much less costly than traditional loops. What has to be looked at is how fast the designer gets the data to make the right decisions. Since much of the design process starts early, at the RTL, things keep getting more structured as we go down to the layout level, and refinements take place at every stage so that there are no major surprises at the end of the design cycle. In the SpyGlass RTL Signoff platform, most of the tools are connected together, which helps in faster convergence of the design.

Q: What are the major differentiating factors in SpyGlass Power?

A: SpyGlass Power provides a consistent and comprehensive solution for power estimation, reduction and verification. Our customers tell us that our best-in-class estimation engine provides faster results and convenient calibration against reference gate-level netlists. After power estimation, our users perform power profiling to check the quality of the simulation data and the power efficiency of the current design; the profiling provides a complete activity report with power computations. The power reduction step offers guidance to designers on possible power reductions through various means, such as clock-gating optimization or more efficient memory data operation. With the designer's permission, SpyGlass Power can also fix the RTL for power reduction, ensuring correct functionality with our SEC (Sequential Equivalence Checker). The power reduction engine also leverages SpyGlass CDC to ensure that the modified design is CDC-safe. For power verification, SpyGlass Power checks the complete design against the power intent: the various domains, level shifter states, isolation logic, etc. As our customers tell us, SpyGlass Power provides a complete power solution, from architecture to estimation, reduction, automatic power fixes, and verification at all levels.

Q: So, how do you see customer response to SpyGlass Power?

A: Our customers are very happily using this tool. We work in a very collaborative manner and take a pro-active approach in solving the issues designers face. We have seen our customers using this tool in unique ways, which we had not envisioned while developing the tool. For example, while using this tool for chip power optimization, they also use the activity report to improve their software and optimize transactions on the design.

Q: That’s quite heartening. What more are designers looking forward to, from this tool, for their large SoC designs?

A: Interesting question. Although SpyGlass Power supports UPF, one of our customers recently requested that it leverage a set of specific constructs that would allow the tool to model the effect of automatic switching and voltage scaling of power domains. Looking at the huge number of these domains in today's SoCs, this is going to be a fun challenge to achieve.

It was a great session with Siddharth, and a chance for me to learn what goes into the making of SpyGlass Power. It definitely looks to be an effective tool that provides large returns on designers' time. Since power has become more important, not only for mobile and handheld devices but also for other consumer and home appliances, I can see rising demand for such a tool in the semiconductor design community.

More Articles by Pawan Fangaria…..

lang: en_US


Now even I can spot bad UVM

by Don Dingee on 03-11-2014 at 8:30 pm

Most programmers can read a code snippet and spot errors, given enough hours in the day, sufficient caffeine, and the right lens prescription. As lines of code run rampant, with more unfamiliar third-party code in the mix, interprocedural and data flow issues become more important – and harder to spot.

Verification IP in particular fits that third-party code remark: vendors supplying UVM verification IP for test is now widely accepted practice. Debugging a large testbench environment with a lot of third-party UVM can become a rather hilarious effort if the only view available is a text error message, likely launched somewhere in the middle of potentially foreign code not being fed something it expects. Tracking down dependencies in the midst of the model from the source code view alone is possible, but slow going.


The latest release of Aldec Riviera-PRO 2014.02 brings a powerful new feature to testbench debugging: UVM Graph. Integrated directly into the tool on a new tab, UVM Graph lets users switch from their UVM source code view into a top-down visualization showing the components and objects and transaction-level modeling (TLM) connections between them.

Clicking a component rectangle expands its contents showing the encapsulated objects with ports and interfaces, quickly revealing exact details of the testbench model. A right click takes you to a cross-probe window for a closer look at objects or classes, or back to the source itself. Icons next to object names show their types.


For another perspective on the UVM Graph capability, an Aldec guest blog post from Srinivasan Venkataramanan of CVC shares his first look at the tool:

Visualizing UVM Environments: Debug Features Deliver a Clearer View

The new Riviera-PRO 2014.02 release doesn’t stop there. Another new feature is a Finite State Machine (FSM) window, with a color-coded graph showing state transitions. The same data can also be presented in a tabular format, handy for highlighting transition counts.


We explored Riviera-PRO’s plotting capability in a previous release about a year ago, and this latest release includes a significant enhancement. Plots are great at viewing data quickly, but in a large data set even a plot can be overwhelming. Setting limits gives users control over what is seen and can improve readability for many situations.

As usual, Aldec continues to make incremental improvements in Riviera-PRO, speeding up SystemVerilog simulation and GUI viewing performance with each release. Riviera-PRO uses Flexera Software FlexNet Publisher for licensing, with an update in this release that preserves existing licenses but uses the latest licensing daemon. For an overview of all the improvements, download the updated Riviera-PRO What’s New presentation.

Testbench software productivity today means bringing new in-house and third-party code into the mix quickly, and avoiding the need to read through code manually. The ability to visualize code and the relationships between code modules is a huge debugging aid, and the addition of UVM Graph to Aldec Riviera-PRO should be a welcome improvement for those working in UVM on a daily basis, or those new to the arena.

lang: en_US


DSP running 10 times faster at ultra-low voltage?

by Eric Esteve on 03-11-2014 at 12:30 pm

Leti and STMicroelectronics have demonstrated a DSP that can hit 500 MHz while running at just 460 mV – ten times better than anything the industry has seen so far. Implemented in a 28nm FD-SOI technology with ultra-thin body and BOX (UTBB) forward body biasing (FBB) capability (used to decrease Vth), this DSP can also be run at higher voltage when the application requires it, hitting 2.6 GHz at Vdd = 1.3 V – equivalent to a similar device implemented in a 22nm Tri-gate technology (2.5 GHz at 1.1 V). But for any mobile application, delivering 500 Mops at 460 mV is a big achievement: according to Fabien Clermidy, head of Digital Design and Architecture at Leti, this could mean extending battery life by about another 30% for typical usage. Leti and ST showed the FD-SOI DSP at ISSCC – the IEEE’s International Solid-State Circuits Conference (February 2014), widely considered the premier forum for presenting advances in solid-state circuits and SoCs.

As you can see in the picture above, the forward body bias capability is the key enabler of such performance at ultra-low voltage: even if we have to zoom into the 400-500 mV region of the x-axis, it is the 2 V FBB boost that allows reaching the 500 MHz bar; with no FBB boost, the DSP would run in the low hundreds of MHz. Such a result is already a great achievement, but we can analyze another advantage of using FD-SOI technology: to reach the maximum DSP performance – 2.6 GHz in this work – a chip maker would otherwise have to target a more expensive technology, 22nm Tri-gate.
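To put a rough number on the low-voltage benefit, here is a back-of-envelope calculation (my own, not from the Leti/ST paper) using the classic CV² scaling of dynamic energy per operation:

```python
# Dynamic energy per operation scales with Vdd^2 for a fixed switched
# capacitance, so running slower at lower voltage saves energy per op.

def energy_ratio(v_low, v_high):
    """Energy per operation at v_low relative to v_high (CV^2 scaling)."""
    return (v_low / v_high) ** 2

# The two operating points reported for the FD-SOI DSP:
r = energy_ratio(0.46, 1.3)
print(f"Energy per op at 460 mV is {r:.0%} of that at 1.3 V")  # ~13%
```

Of course a real chip also spends leakage and the workload matters, which is why Leti quotes a more conservative ~30% battery-life gain for typical usage; but the V² term is the heart of the argument.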

In this table, we see that the maximum frequency is reached with works (1) and (4): 2.6 GHz at 1.3 V Vdd in 28nm UTBB FD-SOI, and 2.5 GHz at 1.1 V Vdd in 22nm Tri-gate. From the previous articles about FD-SOI, you know that the extra cost of SOI wafers is more than compensated by the exploding fabrication cost of going one technology node further, plus the extra cost of a Tri-gate implementation compared with the planar transistors used for FD-SOI in this work. That is, the DSP described in the Leti/ST paper presented at ISSCC exhibits TWO advantages compared with 22nm Tri-gate:

  • The end user can benefit from a 30% reduction in power consumption when the DSP runs at ultra-low voltage (460 mV), while still delivering 500 MHz performance
  • The same device can deliver the same performance as 22nm Tri-gate (2.5 or 2.6 GHz) at a much lower cost

The latter is interesting too, as the semiconductor industry is facing a BIG economic issue. Moore’s law was defined empirically by Gordon Moore as an economic law (backed by simple math, not by physics), saying that the cost per gate is divided by two every 18 months. What we see, starting with the 20nm technology node, is instead a cost increase node after node. The reasons this cost per gate is increasing have to do with the laws of physics: don’t forget that 20 nm is half of the smallest visible wavelength, for example! Nevertheless, there will be applications where integrating more IP (CPU, GPU, DSP, SRAM, etc.) into an SoC, or designing the highest-performing CPU or an ultra-low-power device to stay competitive, will justify paying a price premium – not only a higher price per gate, but also a huge increase in development cost. But how many fabless chip makers will be able to invest so much, or, if you prefer, how many market segments will economically justify making such an investment? It’s good to know that technologies like FD-SOI will allow Moore’s law to continue, and I will come back with more explanations and material in the very near future to illustrate this assumption…

From Eric Esteve from IPNEST

More Articles by Eric Esteve…..

lang: en_US


The Infamous Intel FPGA Slide!

by Daniel Nenni on 03-11-2014 at 10:30 am

As I have mentioned before, I’m part of the Coleman Research Group so you can rent me by the hour to better understand the semiconductor industry. Most of the conversations are by phone but sometimes I do travel to the East Coast, Taiwan, Hong Kong, and China for face-to-face meetings. Generally the calls are the result of an event that needs further explanation or just a quarterly update. Again, as an active semiconductor professional I share my experiences, observations, and opinions so rarely will I agree with the analysts or journalists who rely on Google for information.

In 2003, Kevin Coleman founded Coleman Research to give investors a better way to access industry knowledge. Coleman helps thousands of clients get answers to their most critical questions, without leaving their desks. Rather than spending hours reading research reports, or traveling to meet people at conferences, we connect clients directly with industry experts, to hear immediate, relevant insights.


The Intel analyst meeting last November was full of surprises and resulted in a series of phone consultations. The Intel 14nm superior density claim slides were the most talked about and were absolutely crushed by TSMC, which I wrote about in “TSMC Responds to Intel 14nm Density Claims”. The other slide that caused a flurry of calls is the one above comparing Altera and Xilinx planar to FinFET. After talking to dozens of people (including current and former Altera, Intel, and Xilinx employees) I have concluded that this slide is an absolute fabrication. Get it? Fabrication? Hahahahaaaa….


I did a comparison of the Altera and Xilinx analyst meetings and found the slide above which supports my point. Clearly silicon does not lie so when the competing FPGA FinFET versions are released we will know for sure, but my bet is that Altera/Intel will lose this one. It also goes to my point that the transistor is not everything in modern semiconductor design and Intel’s claims of process superiority are a paper tiger when it comes to finished products.


There are thousands of FPGA and semiconductor process professionals reading SemiWiki so I’m hoping for a meaningful discussion in the comments section. If any of you would like to post a rebuttal blog I’m open to that as well. SemiWiki is an open forum for the greater good of the fabless semiconductor ecosystem, absolutely.

The most recent event that caused a flurry of calls was the JP Morgan Report: Meetings at MWC – Intel Mobile Effort Largely a Side Show, but Some Problems in Foundry a Concern. The press really had a field day with this one:

Some issues popping up with foundry business – we are concerned. Our checks indicate there have been some problems with Intel’s foundry efforts centered on design rules and service levels. It appears Intel is being inflexible on design rules and having trouble adapting to a service (foundry) model. Our J.P. Morgan Foundry analyst, Gokul Hariharan, wrote today that Altera has re-engaged TSMC.

This resulted in a handful of tabloid worthy articles taking the JP Morgan report completely out of context:

Altera to switch 14nm chip orders back to TSMC, says paper Commercial Times, March 4; Steve Shen, DIGITIMES [Wednesday 5 March 2014]

While I appreciate the consulting business this generated I really do question the motives of Steve Shen. The first “Altera leaving Intel” rumor started HERE and I’m sure this won’t be the last but I’m still not buying it and neither should you.

More Articles by Daniel Nenni…..


Effective Verification Coverage through UVM & MDV

by Pawan Fangaria on 03-10-2014 at 5:00 pm

In the current semiconductor design landscape, the size and complexity of SoCs has grown to a large extent, with stable tools and technologies that can take care of integrating several IPs together. With that mammoth growth in designs, verification flows are evolving continuously to tackle the verification challenges at various levels. Today, verification is not a single continuous flow; it is done from several different angles, including formal verification; h/w, s/w and h/w-s/w co-simulation; acceleration; emulation; assertion-based verification and so on. VIPs (Verification IPs) for standard components in SoCs have come to the fore to ease the pressure on verification teams.

In such a scenario, it’s evident that SoC verification in every organization must be a continuous improvement and coverage-building process: coverage from various verification processes can be added up and accumulated; testcases, testbenches and verification plans maintained and re-used within and across projects to the largest extent possible; interoperability maintained between different verification engines; and the best quality obtained with optimum utilization of resources.

Cadence has established a very novel and effective methodology for verification called MDV (Metric-Driven Verification) that uses the well-known standard UVM (Universal Verification Methodology), which is based on OVM (Open Verification Methodology), itself an evolution of eRM (e Reuse Methodology). UVM supports both e and SystemVerilog, and thus enjoys widespread use in the semiconductor industry.

The MDV methodology advocates a step-by-step, planned approach starting from test coverage and code coverage, through advanced verification, up to planned verification closure of design features. The Cadence Incisive Verification Kit includes verification planning codified within the tool (test structures against specific features can be specified in an MS Excel spreadsheet). The vPlanner feature of Incisive vManager can also be used to identify abstract features, and hierarchies of features that closely resemble the hierarchy of the specification. A vPlan can be hierarchical, integrating the vPlans of other features. It becomes executable when it is able to re-direct verification activities, and it is dynamically updated, with the highest-priority items executed first. Coverage is accumulated progressively as the various verification engines execute.

Code coverage is RTL-centric and can take the form of block, expression, toggle or finite-state-machine coverage. On top of that, there is assertion-based functional testing. Constrained-random testing, although incapable of keeping track of what is tested, is very effective at finding bugs in a random manner. Tracking and visualizing what has been tested is done with a coverage-driven approach; however, that can generate huge amounts of data, limiting usability and scalability. In the generic plan-based MDV approach, the overall verification can be organized in a verification plan which can have milestones set, feature-wise or design-hierarchy-wise, and can capture what is tested through various means; feature hierarchies are organized by the executable vPlan, which contains many-to-many relationships between features and tests, thus also helping traceability against the specification.
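The many-to-many relationship between features and tests is just a mapping that can be traversed in both directions. Here is a hypothetical miniature of such a vPlan-style mapping (the feature and test names are invented for illustration):

```python
# Hypothetical miniature of a vPlan-style many-to-many mapping between
# features and tests; names are invented, not from any real vPlan.
from collections import defaultdict

feature_to_tests = {
    "uart.tx":      ["smoke_tx", "rand_traffic"],
    "uart.rx":      ["smoke_rx", "rand_traffic"],
    "apb.register": ["reg_walk", "rand_traffic"],
}

# Invert the map to trace each test back to the features it exercises.
test_to_features = defaultdict(set)
for feature, tests in feature_to_tests.items():
    for t in tests:
        test_to_features[t].add(feature)

print(sorted(test_to_features["rand_traffic"]))
# → ['apb.register', 'uart.rx', 'uart.tx']
```

The forward direction answers "which tests must pass to close this feature?"; the inverted one answers "which features does this failing test put at risk?" – which is exactly the traceability against specification described above.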

Incisive Verification Kit includes several real world testbench examples that can enable design verification engineers to plan and embrace this new verification methodology and scale on productivity through re-use of verification plans, testbenches and several other components. Above is a testbench architecture that shows how UVCs (Universal Verification Components) are hooked up together to a UART DUT. Then there is a cluster-level testbench for the kit APB system with major re-use of serial interface and APB UVCs. This modular and layered approach creates a user-friendly plug-and-play environment where hardware and software verification components can be easily re-used from block to cluster to chip to system between multiple projects and platforms. Each of these components has its own pre-defined executable vPlan which can be plugged into the master vPlan of the SoC. Cadence has a rich set of commercial VIPs for standard interfaces (e.g. USB, PCI Express, AXI etc.) in its portfolio.

The intensity of re-use in this verification platform plays a vital role in accelerating testbench development and scaling verification for large SoCs through a manageable, repeatable, closed-loop process. Not only are the verification components re-used, but also vPlans, sequence libraries, and register definitions.
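The block-to-cluster re-use idea can be sketched in a few lines of Python (a rough analogy only; the class names are hypothetical and this is not the UVM or Incisive API). A block-level component is dropped unchanged into a larger environment, which is what makes the plug-and-play layering cheap:

```python
# Rough analogy (hypothetical names, not the UVM/Incisive API) of
# layered testbench re-use: block-level verification components are
# composed, unmodified, into a cluster-level environment.

class SerialUVC:
    """Block-level serial-interface verification component."""
    def drive(self):
        return "serial traffic"

class ApbUVC:
    """Block-level APB bus verification component."""
    def drive(self):
        return "APB transactions"

class ClusterEnv:
    """Cluster testbench built by plugging in the same block-level UVCs."""
    def __init__(self):
        self.components = [SerialUVC(), ApbUVC()]

    def run(self):
        # Each re-used component drives its own interface of the cluster DUT.
        return [c.drive() for c in self.components]

print(ClusterEnv().run())  # ['serial traffic', 'APB transactions']
```

In the real methodology the same pattern repeats upward: the cluster environment is itself re-used at chip and system level, and each component carries its executable vPlan along with it.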

MDV provides analysis of the coverage contribution of each regression run against a specific feature. The random seeds and tests that contribute most to overall coverage can be identified and prioritized in subsequent regressions, making simulation time more effective.
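One common way to rank runs by contribution is a greedy pass that repeatedly picks whichever run adds the most not-yet-hit coverage bins. This is a generic illustration of the idea, not Cadence's actual algorithm:

```python
# Hypothetical sketch of ranking regression runs (test + random seed)
# by incremental coverage contribution: greedily pick the run that
# adds the most not-yet-covered bins, so the highest-value runs can
# be scheduled first in the next regression.

def rank_by_contribution(runs):
    """runs: dict mapping run name -> set of coverage bins it hits.
    Returns run names ordered by marginal coverage contribution;
    runs that add nothing new are dropped."""
    covered, order = set(), []
    remaining = dict(runs)
    while remaining:
        best = max(remaining, key=lambda r: len(remaining[r] - covered))
        if not remaining[best] - covered:   # nothing left adds new coverage
            break
        covered |= remaining.pop(best)
        order.append(best)
    return order

runs = {
    "test_a_seed1": {1, 2, 3},
    "test_a_seed2": {2, 3},          # fully subsumed, never selected
    "test_b_seed7": {3, 4, 5, 6},
}
print(rank_by_contribution(runs))  # ['test_b_seed7', 'test_a_seed1']
```

Notice that `test_a_seed2` is dropped entirely: everything it covers is already hit by the first two runs, which is exactly the kind of redundant simulation this analysis is meant to eliminate.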

It’s interesting to note the results from a real Cadence customer project, where the project timeframe was reduced from 12 months to 5 months and more bugs were found with fewer resources.

The Incisive Verification Kit provides comprehensive hands-on workshops (covering techniques from both planning and execution perspectives) for verification engineers getting up to speed with the MDV platform. It includes material for several design paradigms such as low power and mixed signal. A subset of the workshops is available through Cadence Online Support as Cadence Rapid Adoption Kits. Read the Cadence whitepaper for a more detailed description of this powerful and effective SoC verification methodology.

More Articles by Pawan Fangaria…



Intelligent Sensors

Intelligent Sensors
by Paul McLellan on 03-10-2014 at 3:48 pm

Wearables are clearly one of the hot areas of the Internet of Things (IoT). A big part of that market is sensors of one sort or another. Andes low-power microprocessors are a good fit for this market, which requires both 32-bit performance and ultra-low power. Performance is needed since IoT by definition involves internet access in some way (perhaps piggybacking on your phone or wireless hub), which requires enough performance to run a network stack. Low power is required since many IoT applications need to last a very long time and, perhaps, never have their battery replaced.

Intelligent sensors are the fusion of a microprocessor, sensor, and communications interface into a single physical unit. Fueled by the automotive, wearables, medical equipment, and industrial automation markets, innovative new intelligent sensors are expected to grow at a CAGR of 9.8% from 2012 to 2018, with sales reaching $6.9 billion by 2018.

On March 22nd at 10am Pacific, Andes Technology is presenting a webinar entitled Intelligent Sensors. It will be presented by Dr. Emerson Hsiao, Director of Field Application Engineering.

Andes, one of EE Times’ Silicon 60: Hot Startups to Watch, is a leading provider of embedded microprocessor IP for intelligent sensor vendors and SoC developers. In this webinar, they will discuss the processing, power consumption, security, and cost requirements for integrating an embedded microprocessor core into an intelligent sensor. Andes provides a portfolio of cores ranging from a low-end 8051 upgrade 32-bit MCU to a high-end 1 GHz 32-bit microprocessor to address all the needs of the intelligent sensor design community. Attendees will learn how to use the complete Andes embedded microprocessor design solution including a graphical SW development tool (IDE), rich operating system and application software stack, easy-to-use debugging tools, and convenient in-circuit evaluation and simulation platforms.

Attendees Will Learn:

  • The breadth of processing requirements for intelligent sensors in product applications ranging from wearables to automotive to medical equipment to industrial automation.
  • The tradeoffs of power and performance in selecting a microprocessor core.
  • How to implement security features in an embedded microprocessor core.
  • The complete embedded microprocessor design solution from Andes.

Dr. Hsiao has an extensive background in the ASIC and IP industry. Prior to Andes, he worked at Kilopass Technology as VP of Marketing. Dr. Hsiao previously held the General Manager position at Faraday Technology USA, where he spent several years in field application in various locations including Taiwan, Japan, and the USA. Before Faraday, Dr. Hsiao worked at UC Santa Barbara as a visiting scholar. He received his Ph.D. in Electrical Engineering from National Taiwan University.

Register for the webinar here.


More articles by Paul McLellan…


EDAC Update: Elections, Kaufman and More

EDAC Update: Elections, Kaufman and More
by Paul McLellan on 03-10-2014 at 3:24 pm

I wrote recently about the EDAC mixer in Mountain View. Due to college basketball there won’t be one in March, the next one will be in April. Details later in the month.

The EDA Consortium (EDAC) is seeking nominations for the Board of Directors for the two-year term beginning May 29, 2014. Voting member companies are entitled to nominate their CEO, president or COO to serve on the consortium’s Board of Directors. Nine of the nominees will be elected to serve a two-year term on the EDA Consortium Board. The deadline for nominations is Monday, March 31st at 5pm Pacific.

DATE is in Dresden in two weeks, from March 24-28th. You are cutting it fine but EDAC members get a discount on exhibit booth space at DATE (and at DAC which is in San Francisco the first week of June).

Meanwhile, here is some news from some of the groups that work on various issues of general interest to EDA companies.

The Emerging Companies Group promotes the interests of companies whose annual revenue is less than $5M. The Emerging Companies Committee is planning another exciting year of events with information especially useful for start-ups. Upcoming events include Marketing Best Practices, and more installments of the popular EDAC – Jim Hogan series, including Investments Fueled by the Upcoming Energy Boom in the spring and Alternative Sources of Funding for Emerging Companies scheduled for the fall. Previous installments of the EDAC – Jim Hogan series are available in the EDAC media library.

The Export and Government Relations Group supports the interests of EDA in matters pertaining to product export. In September 2013, the Export Committee, in cooperation with the Emerging Companies Committee, held an informative seminar on export regulations at the new EDAC offices in San Jose. Many smaller companies do not have the resources to track these complex and ever changing sets of export regulations, but ignorance of the laws is not an excuse. Non-compliance can be costly, with penalties ranging from significant fines to imprisonment. This valuable seminar was recorded, and is available to members on the EDAC web site. A publicly available preview is available; EDAC members can view the full presentation. Members can also visit the Export Compliance Update, which reviews key export information for EDA companies.

The License Management & Anti-Piracy Group provides a forum for members to identify and solve software licensing and piracy problems common to EDA vendors and their customers. The committee met with Flexera in October 2013 to discuss our thoughts on external license reclamation. (External License Reclamation is a tool or procedure that attempts to force an application program to give up its licenses by applying an external action like process suspension or stoppage.) After some discussion, we concluded this is a very risky practice, and in general is not supported by EDA vendors. This document details the issues surrounding external license reclamation, and why the LMA committee does not support the practice.

There is more at EDAC. Interoperability. Market Statistics Service. The Kaufman Award. All at the EDAC website here. EDAC members have full access to all the supporting materials, videos, presentations etc.

More articles by Paul McLellan…


WordPress and EDA Software, How Do They Compare?

WordPress and EDA Software, How Do They Compare?
by Daniel Payne on 03-09-2014 at 8:34 pm

I first started using WordPress in 2008, after having written my own Content Management System (CMS) to build and manage web sites. WordPress is the number one CMS in the world, is just 10 years old, and is used by over 70 million users. What got me thinking about WordPress and EDA software companies was a recent book by Scott Berkun, The Year Without Pants: WordPress.com and the Future of Work. In the book Scott talks about his experience working at Automattic (the company that owns WordPress) as one of its first-ever engineering leads and contrasts it with working at Microsoft, a traditional software company.

Let’s start off with a quick comparison between WordPress and most EDA software companies:

| | WordPress | Most EDA Companies |
|---|---|---|
| Pricing | Freemium | Expensive |
| Licensing | Open source | Proprietary, leased software |
| Users | 70+ million | About 200,000 |
| Release Cycle | Every 2 weeks | Maybe twice per year |
| Cloud-based | Yes | Limited |
| Schedules | None, really | Bureaucratic, elaborate |
| Sales & Marketing | Word of mouth | About 30% of total revenue |
| Web Volume | #19 in the world | #61,421 in the world (mentor.com) |
| Adding Features | 29,827 plugins | Scripts: Tcl, Skill, C, API, etc. |
| Customer Support | Team Happiness | Thankless job |
| Management Decisions | Bottom up | Top down |
| Employee Locations | Remote | Centralized offices |
| Communication | IRC, Skype, blogs | Email |
| Formal Meetings | None | Often, unproductive |
| Rating, Ranking | Not used | Annual spectacle |

OK, I know that WordPress is not as technically sophisticated as a formal analysis tool in EDA; however, it does contain 248,090 lines of code, and installing it involves some 1,100 files. Did you notice how often they release a new version of WordPress? Every two weeks! Now that’s what I call being responsive to customers.

When I worked at Mentor Graphics in 2003, our product team worked on a FastSPICE circuit simulator and released a new version every month, 12 times per year. I think that many EDA start-up companies release frequently because they are adding new features at a rapid rate, and customers are thrilled with the added automation. What tends to happen at large EDA companies is that products get stuck in dependencies on other products or a framework, so that all products must be released at the same time instead of being autonomous.

If you need a feature not included in WordPress, you quickly search a centralized repository of plugins. In the EDA world, however, we really don’t have a place to share our scripts, mostly because management doesn’t want to share any automation with a potential competitor. I figure that most large EDA users have a way to share their scripts internally, but not with the world at any price.

At Automattic, all new employees spend their first month in Team Happiness, a customer support role, in order to find out how real users are using WordPress, finding bugs, or just having trouble learning how to get their work done. In the EDA world, I recall how Model Technology (acquired by Mentor Graphics) assigned its developers to answer the customer support phones each week in order to stay close to the customer and better understand how ModelSim was actually being used and what users really wanted in their functional simulator.

What shocked me most about WordPress was that the founder Matt Mullenweg sets general directions about what should be done, but doesn’t order any person or team to go out and do it. Teams were formed, and they simply divided up the work and got it done, all without using formal scheduling or deadlines. I’m not sure how the chaos style used at WordPress would work in an EDA world, however I bet that most EDA startups follow a similar approach of informal project management.

Transparency was another huge part of the success at WordPress, where they use internal blogs to document what is happening within each team, so that the entire company is privy to your development progress and requests for help. The culture at Automattic is certainly unique and maybe the stodgy folks in EDA could loosen up a bit and get back to their start-up roots and become more productive at the same time.

Summary

I recommend Scott Berkun’s book about the inner workings of WordPress because it answered many of my questions about what the culture at WordPress was, how they got started, what they believe in, and how working at a software company can be fun and rewarding. The only negative point I could think of about Scott Berkun is that he used to work at Microsoft on the team that created Internet Explorer, the least standards-compliant web browser on the planet and one loathed by web developers.
