
Xilinx UltraFast Design Methodology Guide will save you time and money

by Luke Miller on 10-27-2014 at 7:00 pm

Well today, I’m easing my way back in from vacation. Took a camper, 6 kids, 1 wife with a bun in the oven and saw the great USA: 17 states, roughly 5,500 miles. It was great fun and tiring at the same time. The Grand Canyon was a blessing, but I really enjoyed ‘The Four Corners’, where UT, CO, NM and AZ all meet. I had each kid lay down and made sure they got the chance to be in four states at the same time. Below is my pic. Now the X formation is of course a tribute to the Mother Ship Xilinx and not to the fact that I was eXhausted.

While I was away, Xilinx published a very valuable document named the “UltraFast Embedded Design Methodology Guide”. As you know, FPGAs have radically changed over the years: Xilinx has added ARM processors to them, and these FPGAs are now SoCs, named Zynq. One can very rarely just sit down and start banging out code for any serious design. A Zynq design takes planning and process/methodologies like any other design, but with a few twists.

In the past, FPGAs were scary and only used and programmed by hardware engineers; this has changed as well. Who is this guide for? The table below sums it up very well: it is for anyone who is using a Zynq SoC in their design.


The guide is broken down into seven chapters:

  • Chapter 1: Introduction
  • Chapter 2: System Level Considerations
  • Chapter 3: Hardware Design Considerations
  • Chapter 4: Software Design Considerations
  • Chapter 5: Hardware Design Flow
  • Chapter 6: Software Design Flow Overview
  • Chapter 7: Debug

The most important chapter, in my opinion, is Chapter 2, System Level Considerations. This is where the trade space and bake-offs begin, to decide where functions/algorithms need to live. For example, suppose you have a million-point FFT that needs to be solved. Latency, power and resource usage will determine whether this FFT is going to reside in the programmable logic or within the ARM processor system.

The other important factor is data movement: where is the data going? Can it get there fast enough? As these exercises begin, many dominoes will fall. This guide pays much attention to the issue of memory and how it impacts the system, and as you know, memory is all important.

The bottom line is that a Xilinx Zynq SoC design is a team effort, and all must be willing to overlap and work together. Want to have a great low-power Xilinx Zynq design? The figure below really shows that everyone involved has an impact on power, whether you are a software engineer, hardware engineer or system architect.

The Xilinx Zynq series will be the best-selling FPGA of all time. Download the guide and get started with Zynq today!


Designing Hardware with C++ and its Advantages

by Pawan Fangaria on 10-27-2014 at 10:00 am

Very recently, I was seeing intense discussions on the need for agile hardware development, just like agile software, and ideas were being sought from experts as well as individuals. While in the software world agile has already evolved, the hardware world is yet to see this paradigm shift. My point is that the end goal of agile hardware development must be to accomplish the maximum of the hardware job at the highest level of abstraction at the fastest rate, leaving only essential jobs (which may be done in parallel) to be done at lower levels until physical implementation; that must improve the turnaround time by a large extent. Also, I am a believer in the future standing on the shoulders of the past, and hence we must look at what has been done so far for hardware realization. These thoughts recurred in my mind when I came across a webinar which dealt with writing C++ code for High Level Synthesis (HLS). Such code can be much simpler to write and easier to test and debug than RTL code, and is obviously orders of magnitude (1000x – 10000x) faster in simulation compared to RTL; of course, actual timing and cycle accuracy need to be looked at later.

Although the webinar does not say anything about agile, my intuition is that this methodology of hardware realization through software written in C++ (a most versatile programming language) must be looked at in the spirit of agile development for hardware; of course, further exploration of actual hardware realization and how these concepts can be used to be truly agile is a matter of research. Let’s look at some details about this methodology, courtesy of Calypto.

The key aspect here is to use an algorithmic model of the hardware without any timing, which can later become the reference model for the hardware. The algorithmic model uses AC (Algorithmic C) data types (which can be used with SystemC as well); these are bit-accurate and provide arbitrary-length integer, fixed-point and floating-point types, enabling very fast simulation as well as fast compile and link times. The model can be refined by converting it to a fixed-point model. Standard C++ debuggers and test coverage tools can be used, along with assertions.

Above is an example code fragment of an ALU implementation with ‘assert’ and ‘cover’ point statements, to realize coverage-driven verification of the implementation. This ALU function can be easily synthesized by Catapult, Calypto’s HLS tool. The algorithmic model can be wrapped with SystemC/TLM, embedded in a SystemVerilog/UVM environment and dumped onto an ESL platform for SoC verification.

Hardware architecture refinements such as block partitioning (through function calls), memory architecture (synthesized directly, without any re-ordering, from the source code) and processing order depending on the data movement must be defined in the source code. Synthesis constraints such as clock period, technology, resource sharing, throughput, etc. are considered at RTL, not in the algorithmic model.

Above is an example of a JPEG engine created by partitioning the algorithmic model; each box represents a function which is mapped to hierarchy during synthesis.

The synthesizable C++ models thus created can be used for design exploration with ‘what if’ analysis, SystemVerilog testbench development with SystemC wrappers, sequential logic equivalence checking, formal property checking and TLM synthesis.

What lies at the core of HLS is the mapping of abstract transactions to a pin-accurate, cycle-accurate protocol, and the optimization of throughput and latency in the target technology.

As shown above, by developing an executable design in C++ (or SystemC), the overall flow is condensed considerably compared to the traditional flow for hardware design. One can simulate the functionality in the code very fast and then apply constraints to optimize the design through HLS; any architectural change can again be quickly implemented in the source code. Also, any derivative design (such as cell phone to tablet) with similar function but different constraints can be done quickly. Power analysis and optimization can also be done with shorter loops between RTL and C++.

SLEC (Sequential Logic Equivalence Checker) is a unique tool that can formally verify C++ against optimized C++, C++ against RTL, and also check C++ properties: it can formally prove whether assertions always hold, something simulation may miss. Thus SLEC, along with Catapult, unlocks the full potential of ESL by providing optimized and verified RTL from C++/SystemC. The RTL block can be verified in a UVM environment against the reference ESL model.

A well-executed methodology in this way can reduce RTL design and verification time by more than 50%, re-use HLS source code in the TLM SoC model and leverage the verification environment across abstractions. More detailed information can be obtained by registering for and attending the webinar at Calypto’s website.

Coming back to my thought (it might appear radical) about agile hardware development – can a pure agile system be conceived where RTL-level stuff can also be done at the functional level in the algorithmic model? Can the concepts used in SLEC, Catapult and maybe other HLS tools be applied at more micro as well as macro levels to make the algorithmic level cycle-accurate? By the way, I learnt that SLEC also handles timing differences in internal computation and at interfaces. Comments welcome!

More Articles by Pawan Fangaria…


The IoT and the Forbidden Fruit

by Peter Gasperini on 10-26-2014 at 4:00 pm

A tremendous froth of press and promotion has arisen in the last year concerning the Internet of Things. Nearly every High Tech firm on the globe has begun to advertise their offerings as an integral part of it, positioning their products and services as both essential to the IoT and as a vital component of its future. As the smartphone and tablet markets saturate and roll over, everyone wants to jump on the IoT bandwagon, hoping that it will be the Next Big Thing upon which High Tech fortunes are made and new businesses are built. Yet what is often lost amidst the hyperbole is what the IoT really is and what its effect on our lives will be.

The mind is its own place, and in itself can make a heav’n of hell, a hell of heav’n. – Milton, “Paradise Lost”


Source: Wikipedia

Enthusiasts envision a world where everyone can work, live and play out their lives thru the internet – sharing data & files, conducting business meetings over video, shopping, watching TV, cooking, managing the lights and the thermostat at home, driving, etc. This will be made possible by having every electrical appliance, instrument and object connected to the internet with its own unique address, rendered possible by IPv6.

Networking and Storage systems houses such as Juniper, Brocade, EMC and Cisco are salivating at the prospect of such a tsunami of streaming data. Every company building a widget with even the most inane wireless capability is touting their product as part of the IoT. Even companies as sober and conservative as IBM have been caught up in the media frenzy, anticipating a new Golden Age of High Tech.

Also read: Processor for Internet-of-Things (IoT)

Yet it remains impossible to name a truly successful IoT product introduced over the last five years. Samsung’s smart watch line has flopped – a shocker, since Samsung has performed so brilliantly in High Tech and, in particular, in Consumer Electronics over the last two decades. Google Glass has actually damaged the Google brand. Even the Apple Watch and Sony’s SmartEyeglass have been received by the market with a barely stifled yawn.

One thing is clear: no company has found the magic formula for inventing the successor to the smartphone. In fact, normally savvy and brilliant consumer electronics firms seem to have forgotten the lessons learned from products such as Sony’s Walkman and the Apple iPod:


  • The new technology should use the current infrastructure
  • It should be built on a unique combination of existing technologies and capabilities
  • The product should provoke a visceral, instinctive attraction to the target audience
  • It should be simple to use – even familiar
  • The product should not be engineered as a fashion statement, but should be neutral and unobtrusive
  • It should be marketed not as a piece of technology, but as a tool that will enhance the user’s life

    Though the above may seem very obvious and just basic common sense, every single one of the major IoT products released over the last several years violates at least half of those principles.

    Nonetheless, research-oriented technophiles have been anticipating the expansion of the internet into every facet of life and have been looking at ways to support it thru the infrastructure. There are three peer-to-peer wireless standards of significant interest contending for mastery of this space – Wi-Fi Direct, Bluetooth 4.0 and Wireless USB. All are based on existing, widely used and very robust protocols, with years of work and decades of expertise captured in their specifications. Nevertheless, as the following table demonstrates, none of them has a combination of performance, power, bandwidth and other features that makes it a clearly superior choice over the others:

    Yet just as Adam and Eve changed everything when they ate fruit from the Tree of Knowledge, the many far-reaching effects of the IoT will include their share of decidedly negative consequences. For instance: if a person has all of their home appliances connected to the internet and controlled thru their e-wallet-equipped smartphone, a third party – whether criminal or official – will be able to track everything this person buys, does and communicates on a daily basis, should they be able to hack into the personal network of that individual. And hack it they will – even purportedly secure and well protected systems for Wall Street firms, retailers and government agencies worldwide have been penetrated and their databases compromised by organized criminal enterprises. Governments the world over have also demonstrated how they can be a potential threat to their citizenry thru monitoring their communications and activities, the most recent scandal highlighting the LAPD and its disturbing pronouncement that all drivers in Los Angeles are being tracked.

    I’m blogging about all this and more at http://vigilfuturi.blogspot.com. As the IoT turns into a juggernaut, we need to be prepared to deal with how it will affect our lives – embracing some parts of it while altering or resisting other aspects.


    Cliff Hou at TSMC OIP

    by Paul McLellan on 10-26-2014 at 7:00 am

    I attended Cliff Hou’s keynote at TSMC OIP Forum earlier this month. OIP is a huge undertaking. It currently has over 100 ecosystem partners, 10 technology generations, 7600+ IPs, 60+ EDA tools, 7000+ tech files and 150+ PDKs.

    Most of Cliff’s presentation gave details on where TSMC are with the various processes. Of course 20nm and above is all in full production, and we know it is shipping in high volume to both Apple and Qualcomm, among others, since they have said so. In fact there are 12 products that are already function proven on first silicon.


    16FF completed full qualification in Q4 2013 and entered production. Over 55 products are planned for tapeout in 2014/5 in mobile, networking, CPU, GPU, FPGA and more. They achieved first-silicon success on a network processor for HiSilicon Technologies. It is actually a combination of several chips using TSMC’s CoWoS (chip-on-wafer-on-substrate) 3D technology. The logic chip is built on the 16FF process and contains 32 Cortex-A57 cores; the second chip is a 28nm I/O chip.

    16FF+ (the “+” is important, it is a different process) is currently in qualification, which is on track. They released V1.0 in August 2014 so designs can start. 16FF+ yield is ahead of plan.


    The 16FF+ IP ecosystem is already showing silicon results, with various interface and memory IP having already completed silicon qualification.

    Cliff talked about 10nm. He said that it has industry-leading density for the smallest die size. Compared to 16FF+, it has a speed improvement of 25%, a power reduction of 45%, a density improvement of 2.2X for logic, and SRAM cell area scaled to 0.45X. The key upcoming milestones are:

    • V0.1 available for design starts Q4 2014
    • V0.5 available, Q2 2015
    • Risk production November 2015


    Going off the bleeding edge, Cliff talked about TSMC’s ultra-low-power technology, especially targeted at internet of things (IoT) applications:

    • 0.18eLL and 90uLL in production
    • 55ULP, 40ULP and 28ULP will have risk production in 2015
    • RF and embedded flash features for IoT SoC integration
    • The ULP processes have lower Vdd to reduce active and standby power. Tailored eHVT device enables over 70% reduction in standby power. Think battery life.

    Cliff’s last slide summarized TSMC’s process introduction roadmap:

    • 16FF+ is mature and ecosystem-ready with multiple solutions. The first product is silicon-proven, with 50+ tape-outs scheduled for 2014 and 2015
    • 10nm offers 2.2X gate density, 25% better speed or 45% power reduction with risk production in Q4 2015. Ecosystem solutions have been developed, certified and in use on test chips
    • The Ultra Low Power technology platform, covering from 0.18µm to 28ULP, can support various IoT applications. The existing ecosystem can be leveraged for fast time-to-market

    3 reasons to focus on hardware dependent software

    by Don Dingee on 10-25-2014 at 4:00 pm

    Why is software for modern SoCs so blasted expensive to develop? One reason is that more software is being developed at the kernel layer – hardware-dependent software, or HdS. Application software often assumes the underlying hardware, operating system, communication stacks, and device drivers are stable. For HdS, this flawed assumption of stability can eat a project alive. Continue reading “3 reasons to focus on hardware dependent software”


    Sensing, Processing and Connecting: IoT Fundamentals

    by Eric Esteve on 10-25-2014 at 7:00 am

    Internet of Things, or Internet of Everything, is certainly the buzzword of the year. Does IoT describe one product family? Not really, as the acronym describes a family of concepts, and each of these concepts could effectively be turned into a family of products, if that concept reaches the market or fulfills a market need. Nevertheless, we can unify the IoT definition by saying that the IoT fundamentals at the hardware level are: sensing, processing and connecting. These three actions serve as an IoT product’s foundation, with the finished product also including software encapsulation and security. An IP vendor is not selling a finished system, but some of the hardware “lego” pieces that you need to integrate to build the system. CEVA is typically one of these IP vendors you may want to talk with when defining an IoT product, as the company proposes two of these fundamental pieces: processing (thanks to its DSP IP core family) and connecting (thanks to the RivieraWaves acquisition). If you consider that CEVA has developed its DSP IP portfolio by supporting the mobile phone semiconductor industry, you realize that the company should be well positioned to address the various needs of IoT systems by offering low-power solutions. Low power is the mobile industry’s DNA; it’s also CEVA’s DNA, because CEVA’s IP portfolio development is tightly coupled with battery-powered, mobile systems.

    If you search for low-cost, low-power sensors, you will certainly find several products filling your requirements. However, low cost/power generally means noisier sensors. That’s why DSPs fit in sensing: you need a lot of signal cleaning (filtering, smoothing, calibrating) in order to extract meaningful data. Moreover, with the introduction of mics and image sensors, no processor does a better job than a digital signal processor: DSPs were invented to process algorithms, and voice or image processing relies on algorithms.

    Thus, DSP-based sensing is not only the best (algorithm) processing option, it’s also the best approach for power saving, as we can see in the above MP3-decode comparison. The main CPU is clearly the worst option (which is normal, as a CPU is meant to be a general-purpose processor), and the power consumption of an M4 MCU/DSP is still 2x that of an optimized DSP core like the CEVA-TL4.

    The next question is whether you can support multiple sensing technologies with a single DSP, or whether you have to use a specific DSP for each sensing technology. In fact, the answer is in the picture below: the same DSP core can support sensor fusion (contextual awareness + motion gesture + indoor navigation), voice trigger, ultrasonic gesture, face trigger and Bluetooth Low Energy beacons. CEVA claims that such a DSP solution, implemented on 28HPM and supporting always-on operation (sensor fusion + voice trigger + face trigger + BLE), consumes less than 150 µW…


    CEVA proposes using the CEVA-TL421 as an audio analyzer, supporting mobile, wearable, smart home and robot applications, all of which require support for:

    • Voice recognition and speaker identification
    • Speaker separation through beamforming
    • Environment sensing (e.g., in a train or cinema)
    • Emotion detection

    To support video analytics, the CEVA-MM3101 Computer Vision Engine would be suited for the mobile, wearable, smart home, smart cities and security & surveillance markets. For such markets the DSP needs to support:

    • Human & object recognition
    • Face recognition, gesture recognition
    • Tracking based on feature and pattern matching
    • Emotion detection
    • Image and video enhancement, or video stabilization

    If your goal is to develop an IoT platform, to serve as a demonstrator as well as to support software development, you can put together the various CEVA DSP cores (CEVA-TL410, CEVA-TL421 and CEVA-MM3101) with the CEVA connectivity platform supporting ZigBee, Bluetooth and WiFi. But you will probably need to integrate a low-power CPU/MCU (it need not be a powerful one, as the DSP cores run the high-performance processing), shared memory and various interfaces like I2S, I2C, LCD drivers, or some of the MIPI specifications like CSI or SoundWire.

    Eric Esteve from IPNEST


    A Brief History of ASTC and VLAB Works

    by Paul McLellan on 10-24-2014 at 4:00 pm

    When I worked for VaST, our engineering was in Sydney, Australia. To my surprise, there was another, entirely independent group working on virtual platform modeling and tools in another place in Australia: Adelaide. Is there something in the Fosters? They had originally been part of the Motorola Corporate R&D and Software Group, servicing the many segments of its semiconductor arm, SPS, but they incorporated as the Australian Semiconductor Technology Company (ASTC) when SPS was spun out of Motorola as Freescale in 2005. Then in 2011 they also created the VLAB Works business unit, essentially a complete virtual prototyping laboratory for accelerating embedded system design, hw/sw co-design, and embedded software development.

    They have a mixture of EDA engineers, semiconductor designers, IP, embedded software and more. They have a lot of really innovative internal technology, such as their premier virtual prototyping laboratory VLAB and its suite of tools and toolboxes, but typically they do business as turnkey projects to create, deploy and support complete virtual platforms that accelerate system design and embedded software development. They have close to 100 engineers and have completed over 200 projects. Everyone is an engineer, including management, with 50% of management and 25% of staff having PhDs, so it is a very deep and very experienced team, everyone with over 10 years of experience in these domains.


    One area of ASTC focus, where they seem to be ahead of anyone else, is automotive-grade MCU and ECU virtual platforms: very fast, accurate, and integrated into automotive flow solutions. They support all the leading automotive microprocessors (not the usual ones used in, say, cellphones); the bus communication standards such as FlexRay, CAN (which I thought stood for “car area network” for years, but the C is a much more boring “controller”), LIN, SPI, Ethernet and all the others; and the right system simulators and analysis tools used in the industry, such as MathWorks’ MATLAB and Simulink, dSPACE HILS/VEOS, and Vector CANoe/CANalyzer.

    For example, they can handle an entire closed-loop ECU virtual system, including MCUs, ASICs, interface electronics, and car motor and other plant models, running the AUTOSAR operating system, running a mixture of tasks on multiple simulated CPUs, with a virtual console connected for control and observation, at real-time speeds or much faster than real time.


    Another focus area is platforms for mobile. Cell-phones have very short development cycles and if you wait until the hardware is available before you start to do the embedded software development you will be late. Reference boards used to be one solution but they are getting too slow. I once asked an engineering manager in Japan what they did while they were waiting for reference boards: “we pretend to program” he said.

    Mobile requires an almost completely different set of models and interfaces from automotive (outside of automotive infotainment, which is much closer to smartphone technology and does not have safety-critical issues). They have undertaken complex operating system porting for mobile/multimedia and have operating systems ready to run on new architectures (including new DSPs). The operating system will be ready to install as soon as the silicon is available.


    ASTC was born in Australia but has grown into a global company by attracting similarly experienced teams from around the world. For example, in 2007 they acquired the Motorola phone virtual prototyping team in Urbana-Champaign, IL, US, and later on added the key experts from the modem simulation organisation in Toulouse, France. They have offices across the world in the US, Japan and Europe. And, yes, still in Australia too. The yellow stars are their offices; the red dots are their customer locations.

    As a manager from a major European automotive Tier 1 business unit (outside Stuttgart, I’m guessing) said, deploying a VLAB MCU virtual prototype “enabled us to prepare our ECU software up to 6 months earlier and be ready for the winter season tests on time, rather than miss a whole year cycle of time to market!”

    Even by the comparatively glacial speed of automotive development, one year of time to market is huge.

    VLAB Works’ website is here. Parent ASTC’s website is here.


    More articles by Paul McLellan…


    ANSYS Electronics Simulation Expo – A View from Industry

    by Pawan Fangaria on 10-24-2014 at 7:30 am

    As we see more and more automation in most of our activities, not only through software but also smart electronics (at cutting-edge technologies) equipped with processors, micro-controllers, sensors and so on, which make a whole system an integrated entity on a small piece of semiconductor, intertwined with other systems to accomplish various tasks, there is a growing need to consider and optimize (power, performance, area, …) the whole system together and make it robust and reliable, to avoid any silicon failure over its lifetime. Since ANSYS specializes in offering solutions for power, noise and reliability management in electronic and semiconductor systems, it was a nice gesture from ANSYS to organize an Electronics Simulation Expo (AESE) in Bangalore, where they invited the electronic and semiconductor community from this part of the world (though the world today is globally connected) to present their views on improving electronic system design through the sharing of design and simulation best practices.

    I had a great conversation with Jai Pollayil, Director of Applications Engineering at ANSYS, and his team at Bangalore, and was very positively impressed with the success of AESE. It was impressive to learn that there were 300+ registrations, with more than 50% of those attending this first, controlled-attendance instance of AESE; that confirms the pace of electronics and semiconductor design development in India and the world over. Some glimpses of this Expo are here –


    [Sudhir Sharma, Global Industry Director at ANSYS, addressing the audience]

    Sudhir Sharma, Global Industry Director at ANSYS, welcomed the audience and provided a brief introduction to the day-long program.


    [Jagan Ayyaswami, Director Engineering at Qualcomm, presenting the keynote address on IoT]

    Jagan Ayyaswami, Director of Engineering at Qualcomm, presented the Customer Keynote and spoke at length on the trends, opportunities, challenges and future of the Internet of Things (IoT), which is driven by electronics and the internet.

    Internet and mobile (which drive the electronics business) are the most rapidly growing areas in the world, connecting other devices together. Between 2003 and 2010, while the world population increased from 6.3 billion to 6.8 billion, the number of connected devices increased from 500 million to 12.5 billion. And it is expected that the number of connected devices will grow to 50 billion by 2020, against an expected population of 7.6 billion by then; that means ~7 devices per person! What does that foretell? Everything in the world will be smart, with smart appliances, smart homes, smart meters, smart automotive, smart cities and smart personal electronics including health, fitness and medicine. How will it be accomplished? With IoT: smart communication between electronic devices through the internet, entering every domain of our life including industrial, personal, education, business, communication, management, entertainment and so on. Of course there are challenges, such as security, privacy, energy sourcing and efficiency, governance, and common standards, interoperability and protocols, which need to be addressed, but there is huge opportunity going forward. And the number of internet users in India is growing rapidly, making the electronics and semiconductor design business in India much more attractive.

    The ANSYS Technical Keynote was presented by Aveek Sarkar, VP of Technology Evangelism at ANSYS, in which he, very rightly, emphasized the need to analyze an electronic system as a whole to ensure a robust, optimized and reliable design, and explained how ANSYS solutions can help meet these objectives.

    With the whole electronic system as the focus, the technical paper presentations were well organized into two parallel tracks – i) Package/PCB and ii) IC/Semiconductor. There were great presentations from key players in the industry including Intel, Qualcomm, Dell, Broadcom, AMD and others. While I will specifically talk about a few of them later, the key messages coming out from the expert designers who presented were: i) power has become a key criterion and needs to be looked at from a whole-system perspective, ii) reliability is at stake at advanced nodes and needs careful handling, iii) design sizes are growing ultra-large and need high-capacity, high-performance solutions, and iv) there is a need for faster design (chip + package) convergence with wider coverage and accuracy. It will be interesting to learn how these problems are being solved in these design houses using ANSYS tools & technologies. Stay tuned!

    More Articles by Pawan Fangaria…


    eSilicon Creates One-Click Access to MPW and GDSII Quoting Portals

    by Paul McLellan on 10-23-2014 at 7:00 am

    I have written before about eSilicon taking their internal quoting tool and making it user-accessible. This first started just with MPW shuttles for half-a-dozen foundries, and was then extended to cover production runs at TSMC. And it is getting heavily used: 315 people from 43 different countries have registered to use it, creating nearly 700 quotes.


    Today, they have rolled out a new and improved version. Firstly, the quoting portals are a lot easier to find, since they are clearly linked from the eSilicon home page. Previously there was actually a link from the home page, but it was by no means obvious, and people wasted time getting started on their quotes.


    There is a webinar on experience using the portals called Semiconductors Go Online and Worldwide. It will take place at 9am Pacific Time on November 12th. The webinar will feature three customers of eSilicon who will talk about their experiences: Ronald Jew, wireless infrastructure manager at Integrated Device Technology; Bill Brennan, president and CEO of Credo Semiconductors; and Mahesh Tirupattur, EVP of Analog Bits. Mike Gianfagna, VP of marketing at eSilicon, will moderate.

    The panel will explore the impact of this new online technology from both a fabless semiconductor and an IP provider point of view. Discussion will include the impact of online quoting for semiconductors, users and use cases, and what else is needed. They will also take questions from the audience.

    As Ronald says in today’s press release: “My team really likes the ‘what if’ aspect of the tools. For example, with the GDSII quoting tool, you can see how changing one variable, such as packaging or metal stack, affects your production unit price three years out. This is a very powerful budgetary and design planning tool.”

    Anyone can experiment and use the portal. There is no charge for doing so.

    Register for the webinar here.


    More articles by Paul McLellan…

    About eSilicon
    eSilicon, a leading independent semiconductor design and manufacturing solutions provider, delivers custom ICs and custom IP to OEMs, independent device manufacturers (IDMs), fabless semiconductor companies (FSCs) and wafer foundries through a fast, flexible, lower-risk, automated path to volume production. eSilicon serves a wide variety of markets including the communications, computer, consumer, industrial and medical segments. www.esilicon.com.


    Virtual Platform Powers AUTOSAR Software Development

    Virtual Platform Powers AUTOSAR Software Development
    by Pawan Fangaria on 10-23-2014 at 12:00 am

    Since a significant part of our lives is spent travelling, it’s natural that the automotive sector continues to gain traction, with a significant push towards electronics and automated solutions that provide safety, comfort and entertainment in automobiles such as cars. These solutions are delivered by complete systems that operate on electrical signals, sensors, actuators and control units. A car can contain multiple systems controlling operations of different natures – mechanical, thermal, fluidic and so on. These systems must work with very high accuracy and performance, and hence need a dedicated development environment that can also take their surroundings into account. It’s not always possible to have the real hardware environment available to develop and test them. As I have long been an advocate of virtual methods for complex tasks that would be very costly and time consuming in a real environment, AUTOSAR (Automotive Open System Architecture) software development has again caught my attention: it shows how powerful a Virtual Platform can be for developing automotive systems.

    I recently attended a webinar, presented very nicely by Lance Brooks of Mentor Graphics, in which he gave a good introduction to VSI (a virtual development environment for AUTOSAR software systems), explained its core features for system modeling through the example of a Virtual ECU (Electronic Control Unit), and showed how to analyze trace results (collected during execution) to solve problems. He gave good demonstrations of each part of the presentation with real-life examples.

    VSI includes a powerful set of tools that provide a virtual environment for AUTOSAR-aware editing, building, integrating, analyzing and debugging of AUTOSAR software systems. The builders automate configuration, code generation and compilation, and a powerful debugger lets users interact with software execution in the context of AUTOSAR – setting breakpoints, stepping, inspecting variables and so on. Execution results can be traced non-intrusively with no limit on trace depth. This environment allows you to connect a Virtual ECU to real verification, calibration and other tools, such as vehicle network testers, without needing the real hardware. Your actual AUTOSAR functions execute against real AUTOSAR BSW (basic software), including the OS, RTE (Runtime Environment), MCAL (Microcontroller Abstraction Layer), memory and diagnostic services, I/O drivers and other basic software services.

    The above image shows the integrated development environment (IDE) of VSI, which is used to deterministically control the Virtual ECU hardware model in order to execute software. Since the Virtual ECU is a model of an actual ECU, including the embedded processor, it can be simulated and the performance of your final ECU software optimized, solving problems that can be very difficult to tackle in an actual hardware environment. Verification is powered by unlimited execution-trace depth and complex trace conditions, and supported by hardware and software fault injection to measure and analyze the performance of your software under adverse conditions. Additional tools and technologies are provided so you can quickly integrate your application software and connect the Virtual ECU to your vehicle models and verification suites. A brief demo in the webinar (based on the Getting Started tutorial that ships with VSI) explains all these basic features in a very nice way.

    Mentor’s underlying simulation technology accounts for embedded processor and peripheral behavior during software execution, which lets the Virtual Platform stand in for real hardware; the actual target processor compiler and accurate models of the processor core, bus architecture, memory, and surrounding hardware peripherals are used while simulating instructions. Because it is accurate, it can be used in place of real hardware, and because it is fast, the complete application software and AUTOSAR BSW can be executed on the platform.

    Above is an example of a Virtual ECU hardware model, with a bus model connecting the processor model, memory and peripherals. The conneXion instruments are used to connect external components such as virtual sensors, actuators, and other vehicle hardware.

    VSI includes the actual target version of Volcano™ VSTAR BSW, which has an AUTOSAR-compliant OS, RTE, Services, MCAL and ECU abstraction (I/O) firmware. Cross-target C/C++ software development tools are used to compile code to execute on it. The advantages of virtual hardware are that it is always available, enabling early validation; it provides complete ECU visibility for both hardware and software; it supports full non-intrusive trace without affecting the behaviour of the system; and it gives accurate timing.

    In the webinar, there is a great demonstration of a full seat heater control system of a car which executes on Mentor’s Vista Virtual Prototype. The software executes on top of VSTAR BSW that is ported to the Virtual ECU. The Virtual ECU is connected to a model of the seat heater hardware through VSI’s conneXion technology which is also used to connect automated tests.

    A unique and powerful feature of VSI is the ability to trace complete results collected during execution, visualize them, analyze and solve problems.

    VSI’s Sourcery Analyzer real-time analysis technology (which is non-intrusive) includes agents that crunch trace results to compute answers and help visualize vehicle software in an AUTOSAR-aware manner. It provides simultaneous real-time hardware and software trace for system-wide analysis, with unlimited trace depth and complex conditioning. The webinar includes another interesting demo of this real-time analysis on the seat heater control system, where parameters such as the temperature rise time and the average temperature are computed.

    VSI enables software developers to start early, without dependencies on a complex, shared, hardware-based development environment, and to complete system integration and function validation much sooner. It is interoperable with popular test and validation tools for wider applicability. To learn more and see the demos, attend the webinar here.

    Read more on Vista Virtual Prototyping and Sourcery Analyzer –
    Develop A Complete System Prototype Using Vista VP
    Debugging Complex Embedded System – How Easy?

    More Articles by Pawan Fangaria…