
ISCUG – Excellent Indian Conference, needs to grow

by Pawan Fangaria on 04-21-2013 at 8:05 pm

Promoted by Accellera, SystemC user groups are active worldwide: NASCUG in North America, ESCUG in Europe and ISCUG in India. While I was shuffling between my day-to-day work and strategy management course exams, I received an invitation from my long-time colleague Mr. Umesh Sisodia, President and CEO of Circuitsutra Technologies, to attend the ISCUG conference on 14th and 15th April. I was delighted, as I found the conference quite interesting; it enhanced my knowledge and introduced me to some good work being done in this part of the world. There were also important messages about the SystemC initiative, related technologies, and the progress of the semiconductor and electronics industry in this developing country.

Umesh Sisodia
The first day was filled with tutorials on SystemC, TLM, HLS, SystemC-based verification, advanced modelling and the like. I could not attend all of them, but tried to gain as much insight as I could by switching between sessions; I may write about a few of them when my schedule allows. An important meeting of the ISCUG steering committee was scheduled in the evening, to which Umesh again invited me; my pleasure! I will relay the message from that meeting at the end of this article.

Dennis Brophy
Coming into the 2nd day, in the morning we had keynote speeches from industry leaders which were eye-openers. Right after the introduction and background about ISCUG by Saurabh Tiwari of the Technical Review Committee, Dennis Brophy, Vice Chairman of Accellera Systems Initiative and Director of Strategic Business Development at Mentor, gave a very nice presentation about the various activities Accellera supports and promotes: from the formation of SystemC user groups in different geographies to the various working groups, such as the Language WG, Synthesis WG, Verification WG, TLM WG, SystemRDL WG, CCI (Configuration, Control and Inspection) WG, the newest being the SystemC AMS WG and Multi-language WG, and various subcommittees. There is an open invitation for experts to join these working groups and strengthen the standards around SystemC for greater productivity, performance and interoperability of the systems built with them. It was a very informative speech.

Jaswinder Ahuja
Jaswinder Ahuja, founder member of IESA (formerly the India Semiconductor Association) and Corporate VP & Managing Director at Cadence, presented a detailed update about the rapid developments in the semiconductor space, and electronics in general, in India. A key eye-popping message was that the overall electronics market in India will grow to $400B by 2020. That’s amazing news! Another message concerned the great demographic dividend India has in terms of its young population. Here I am a little disappointed, and I have purposely omitted ‘working’ from population. Employment has been a major objective since India’s 2nd five-year plan after independence and has been repeated in most of the subsequent plans; still, major portions of the Indian population do not have meaningful employment. In my opinion, the demographic dividend can be realized only if the young population is educated and vocationally trained and more than 90% are employed. Government, business institutions and people need to work together to achieve this.

Sri Chandra
Sri Chandra, Standards Manager at IEEE, provided an update on IEEE activities in India to promote technical education among professionals and students. The community is growing, with local chapters in cities like Delhi, Bangalore and Hyderabad. IEEE standards and publications cover most technical areas, which greatly helps in growing knowledge and skills.

Mike Meredith
Mike Meredith, past President of OSCI (Open SystemC Initiative) and now Vice Chairman of the Accellera Synthesis WG and Technical Marketing VP at Forte, talked at length about SystemC: its origin, its various developments, how it helps the SoC design community, and its future outlook. It was interesting to note that SystemC came into existence in 1999-2000 and has been evolving ever since into multiple applications, as is evident from the various working groups around it. I plan to discuss it in detail in a separate article.

At the end of the keynote lectures, Umesh Sisodia, Organizing Committee Chairman, talked about the future roadmap of ISCUG. This is where a summary of what was discussed the previous evening in the steering committee meeting was disclosed. Although the conference is attended by world-class people (130 attended this year) and gives specialized focus to SystemC, as it should considering SystemC’s versatility as well as exclusiveness, there was a felt need to increase participation in order to make the conference more far-reaching and economically viable. More sponsorship is needed, and a non-profit organization needs to be formed to support it. So, how can it be done? More ideas are welcome, but a few of them are: open it up to wider areas, or even associate it with IEEE, the Embedded Systems Conference and so on. Another idea was to start DVCon India, which could include all Accellera standards and not just SystemC. In that case ISCUG should be a co-located event alongside DVCon India, so that it retains its exclusivity, as it is attended and used by involved and dedicated experts.

So what do you think? Considering the growth potential in India as elucidated in Jaswinder’s presentation, shouldn’t the ISCUG conference expand and build its brand?


FPGA + MATLAB = FATLAB

by Luke Miller on 04-21-2013 at 7:00 pm

Now Michael Bloomberg probably wouldn’t want FATLAB, but let’s face it, to think like him you need a lot of education, a lot. He may be banning 14nm because it will increase FPGA densities, and thus the consumer, as well. Stay tuned. After some comments from my dear readers, one of whom said to watch it with respect to my harshness about AccelDSP, I needed to address this issue immediately. I actually woke up with a Virtex-II in my bed missing some pins. Yes, this is serious.


The idea behind AccelDSP was good. As a system developer, like the rest of the world, I use MATLAB or Octave. (Now don’t tell anyone about the Octave, as I’m frugal and wise, but not cheap!) MATLAB is the best systems modeling tool, period. Rhapsody, SysML, UML and the like, used to model some grandiose system, are never going to replace MATLAB, and frankly they are another thing to do on the ever-growing design checklist. The disconnect, as you know, is how to get that great system you have modeled in MATLAB into silicon, in particular an FPGA. Well, AccelDSP tried doing that but had limitations. For me the big limitation was that it needed to work. You know, click that magic button and bingo, VHDL. True, it would work for a FIR filter, but for larger, more nested loop structures, no worky. No floating point either, and I like floating point; who doesn’t? I also enjoyed the tool hanging up for hours on end, then just going away. It had a way of making me feel productive, and of course I would just sit and stare at the screen until it finished. The other issue was cost; it was like a billion dollars. ($100k+)

My earlier article about HLS being real holds true, but there is still a disconnect from the modeling realm into the silicon. MATLAB does have MATLAB Coder, which takes your MATLAB and converts it to C, and then Vivado HLS can work on that. I have used that path and it works, but it is not ideal, for obvious reasons. The major one is the penalty for abstracting a level higher. So can I make a plea to the FPGA companies? Would you consider partnering with MathWorks (besides Simulink, I know all about that, and that is not the solution either) or maybe buying MATLAB?
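To make the disconnect concrete: the C that an HLS tool synthesizes well looks quite different from the C a generic MATLAB-to-C path tends to emit. Here is a minimal hand-cast sketch (entirely my own, with illustrative names, not output from any tool) of a FIR step written HLS-style, with fixed bounds, static storage and no dynamic allocation:

```c
#include <stddef.h>

#define NUM_TAPS 4

/* Streaming FIR step: y[n] = sum over k of coeff[k] * x[n-k].
 * Fixed-size loops over a static shift register are what HLS tools
 * can unroll and pipeline into hardware; unbounded loops and
 * heap allocation (common in raw generated C) are what trip them up. */
float fir_step(float sample, const float coeff[NUM_TAPS])
{
    static float shift_reg[NUM_TAPS]; /* delay line, zero-initialized */
    float acc = 0.0f;

    /* Shift in the new sample. */
    for (int k = NUM_TAPS - 1; k > 0; k--)
        shift_reg[k] = shift_reg[k - 1];
    shift_reg[0] = sample;

    /* Multiply-accumulate across the taps. */
    for (int k = 0; k < NUM_TAPS; k++)
        acc += coeff[k] * shift_reg[k];

    return acc;
}
```

Feeding an impulse through a 4-tap moving-average filter simply replays the coefficients, which is an easy sanity check before pointing any synthesis tool at it.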

Every DSP house, medical, RADAR, automotive, Wall Street, etc. USES MATLAB. I’ll share a secret with you: I cast my C from MATLAB by hand. I know, you’re saying how inefficient, how error prone, Luke! What are you thinking? I know, it’s terrible and I am ashamed. If we think about it, system engineers model the design and flow down / partition the requirements to hardware (FPGAs) and software (CPUs). The modeling tool is MATLAB (have I said that enough?). It would be orders of magnitude more efficient to flow the model down to the FPGA guys to work in the ‘FATLAB’ tool instead of writing the VHDL by hand. One thing is true: the idea that we are going to hand code a 6.8-billion-transistor FPGA is totally absurd. Try it and your competition is going to blow your doors off using HLS.

But FPGAs have inherent frustrations around the programming language. CPUs have C; FPGAs now have C, C++, SystemC, Verilog, VHDL, (Zynq) EDKs, etc. And now I want a MATLAB HLS, again? Perhaps the group is smaller than I think, but the programming of FPGAs cannot be bird shot, and MATLAB is where most start; why not leverage that correctly to synthesize, again? OK, time to dust off AccelDSP and create FATLAB? If it existed, most people I know would use it and would probably pay for it; I know I would.



A bird told me the EDPS Monterey Conference was a great success

by Camille Kokozaki on 04-20-2013 at 8:10 pm

The 20th annual Electronic Design Process Symposium (EDPS), held April 18-19 at the Monterey Beach Hotel in Monterey, California, was an unqualified success. I know this because a bird (a seagull?) sitting on the window sill of the conference room was so captivated by the fascinating insight provided by a number of luminaries that it joined in the conversation and could not wait to tweet about it. OK, you know I am making up stories here, but the truth is that it was a great conference full of technical content, lively debate, deep dives, great perspectives, geeky humor, clever one-liners and wonderful settings. I was there and I must admit I enjoyed the event a great deal.

The event program consisted of five modules as follows:

  • ESL & Platform
  • Design Collaboration
  • 3D–IC System Design
  • FinFET: Design Challenges
  • FinFET: Foundry Design Enablement Challenges

The last two modules occurred on the 19th, a day designated as Foundry Day: FinFET Design Enablement. The first day’s opening keynote was given by Ivo Bolsens, Xilinx CTO, and was entitled “The All Programmable SoC – at the Heart of Next-Generation Embedded Systems”. Gary Smith of Gary Smith EDA delivered a dinner keynote entitled “Silicon Platforms + Virtual Platforms = An Explosion in SoC Design”, and SemiWiki’s Daniel Nenni gave a keynote on Foundry Day entitled “The FinFET Value Proposition”. Three panels were held over those two days as well.

The sheer amount of content calls for a multi-part series of posts to follow this one, describing the insight and interaction of the sessions. In this post, I will only summarize Ivo Bolsens’ presentation and its key takeaways. All the event presentations are posted here.

Ivo made the case for the new FPGA age of the All Programmable SoC. He outlined the unstoppable march down the technology curve, with a pipeline of ever-shrinking technology nodes from 32/28nm to 5nm, where novel materials and esoteric technologies will kick in at the end of the pipeline. This led to highlighting the ever-increasing cost of implementing a chip design as process nodes shrink, reaching more than $170M at 28nm. He showed an interesting infographic slide entitled “What Happens in an Internet Minute?” depicting the staggering data growth.

An interesting factoid was that Google refreshes its data centers every 2 years; Facebook does so every 8 months. The data center is integral to business success now and is no longer viewed as a cost center. Security is more important than ever, and people these days talk more about latency than server capacity.

We have reached the age of All Programmable in One: Zynq’s programmable I/Os, logic and CPUs, with different granularities, let you tailor the device to your application. FPGAs and CPUs have grown much closer. The FPGA was once a slave to the CPU; the new FPGA is more of a peer. An FPGA can now implement a function right on the CPU bus, not further away.

Ivo added that we are moving from dumb pipes to smart networks: the wired infrastructure is evolving to software-defined networks, and the data center infrastructure is moving to the cloud. Ivo also discussed the Vivado IP Integrator, and Zynq in the wireless digital front-end.

Ivo’s takeaways were that the FPGA is entering the era of the All Programmable SoC:

  • Modern FPGA is an All Programmable SoC
  • Software Centric Design Flow
  • Unmatched Performance/Watt
  • Towards Heterogeneous Multi-Core
  • Targeted Teaching Platform

I could not agree more. In fact, I see us soon talking about the All Programmable SoS (System of Systems): software defined, hardware realized, with programmable everything: CPU, GPU, FPGA, memory, I/Os. If the SoS term takes off, you heard it here first. If not, you heard it here last.

Fair disclosure: All transcription errors are mine, I was part of the EDPS organizing committee, and I was a presenter.

Please post your EDPS 2013 Monterey questions, comments, and trip reports HERE!



Cadence ♥ TSMC

by Daniel Nenni on 04-19-2013 at 6:00 pm

TSMC has been investing in the fabless semiconductor ecosystem for 25+ years, and that is why they are the #1 foundry and lead this industry (my opinion). I’m a big fan of joint webinars. Not only is the collaboration open to the masses, it is a close collaboration between the two sponsoring companies. Having worked on the TSMC AMS reference flows for the past four years, I can tell you that these webinars are definitely worth your time.

Interested in advanced node designs?
Enhance your expertise with two new webinars from TSMC and Cadence!

Addressing Layout-Dependent Effects: At 9am and 6:30pm PDT on April 25, Manoj Chacko and Bala Kasthuri of Cadence and Jason Chen of TSMC will present “Variation-Aware Design: Detecting and Fixing Layout-Dependent Effects Using the Cadence® Virtuoso® Platform, Part II”, a sequel to “Variation-Aware Design, Part I”. You’ll learn about:

  • The solutions jointly developed by Cadence and TSMC, to provide a complete layout-dependent effect (LDE) flow for circuit and layout designers working at 28nm and below
  • When, why, and how you should incorporate TSMC’s LDE-API with Cadence Virtuoso tools into an analog, custom, or mixed-signal design flow to achieve the most efficient design cycle time

Register Now: https://www.secure-register.net/cadence/TSMC_Q2_2013

Managing Design Complexity at 20nm: At 9am and 6:30pm PDT on May 23, Rahul Deokar and John Stabenow of Cadence and Jason Chen from TSMC will present, “20nm Design Methodology: A Completely Validated Solution for Designing to the TSMC 20nm Process Using Cadence Encounter®, Virtuoso, and Signoff tools.” You’ll learn about:

  • The TSMC-Cadence solutions in the TSMC 20nm Reference Flow, tools certification, and Cadence tools and methodology to enable 20nm design with double patterning technology (DPT)-aware capabilities, to reduce design complexities and deliver required accuracy
  • How in-design DPT and design rule checking (DRC) can improve your productivity
  • How both colored and colorless methodologies are supported, and data is efficiently managed in front-to-back design flows
  • How local interconnect layers, SAMEMASK rules, and automated odd-cycle loop prevention are supported
  • How mask-shift modeling with multi-value SPEF is supported for extraction, power, and timing signoff

Register today: https://www.secure-register.net/cadence/TSMC_Q2_2013

Cadence enables global electronic design innovation and plays an essential role in the creation of today’s integrated circuits and electronics. Customers use Cadence software, hardware, IP, and services to design and verify advanced semiconductors, consumer electronics, networking and telecommunications equipment, and computer systems. The company is headquartered in San Jose, Calif., with sales offices, design centers, and research facilities around the world to serve the global electronics industry. More information about the company, its products, and services is available at www.cadence.com.



Andes, ARM, Imagination, MIPS

by Paul McLellan on 04-19-2013 at 12:44 pm

The last session of the day at Linley Mobile was about processors that go into smartphones. One surprise was a core that nobody seems to have heard of, since until now it has only really been used in Taiwan, yet it appears in several Mediatek chips.

The most “glamorous” processor in a smartphone is the one in the application processor chip (or the one exposed to the apps in an integrated AP+BB chip). However, there may be as many as 15 more processors in a smartphone, inside things like the GPS, WiFi and power management. These processors are not automatically ARM, since their code is purely internal to the chip and is not exposed to the user. It is hard for anyone to win an AP processor socket from ARM (although Intel is trying), since even on Android, where apps are written in Java and are supposed to be portable, in reality many apps, especially games, contain ARM assembly.

The new processor that nobody had heard of is the Andes core. I was sitting next to a strategic marketing guy from Qualcomm and he had not heard of it either. Andes was actually announcing a new ultra-low-power core, the Hummingbird N705, at the conference. They claim that its performance is 30% better than the ARM Cortex-M0 as measured by Dhrystone MIPS/mW (although I thought the Dhrystone benchmark was regarded as obsolete these days compared to others more focused on browsing, etc.). They also said they have over 60 licensees, and that their development environment has over 5000 installations.

Next up was ARM. They emphasized that when it comes to the processor, one size does not fit all. For the high end of the market (iPhone, Galaxy, etc.) ARM’s roadmap is dual- and quad-core Cortex-A57/53. But 20nm and below is not a cost reduction from 28nm, so it is only viable for premium products; to reduce area and power at the low end, and so more or less stay on the Moore’s Law trajectory, requires micro-architectural innovation.

The Cortex-A7 has a very efficient power architecture, with an in-order 8-stage partial dual-issue pipeline and an integrated, improved L2 cache subsystem. It consumes less than 100mW at 1GHz (I don’t know on what process; I’m assuming 28nm).


ARM’s view on quad-core is that although four cores scale well on threaded benchmarks, these don’t correlate with user experience. Since the 3rd and 4th cores handle background and OS threads, they do not need to be big. The current big.LITTLE architecture doesn’t allow this: in any pair, either the big core is running or the little one, but not both. However, that will change soon, and it will be possible to use both cores of any pair. ARM believes this will be the most efficient way to build a quad-core (or six or eight cores if required for the high end), delivering energy savings of as much as 75% for the same peak performance. If a Mali GPU is added, it offloads so much of the performance requirement that the high-performance graphics drivers can all run on just a little processor for power efficiency (or higher FPS for the same power).
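The pairing rule can be stated in a few lines of C. This is purely my illustration of the constraint as described, not anything from ARM’s scheduler:

```c
#include <stdbool.h>

/* One big.LITTLE pair: a big core and a little core sharing a slot. */
typedef struct {
    bool big_on;
    bool little_on;
} pair_t;

/* Is a given on/off combination legal for one pair?
 * Classic big.LITTLE: at most one core of the pair may run.
 * The announced change: any combination, including both at once. */
bool pair_state_legal(pair_t p, bool both_allowed)
{
    if (both_allowed)
        return true;                    /* new mode: no restriction */
    return !(p.big_on && p.little_on);  /* classic: never both together */
}
```

The point of lifting the restriction is exactly the quad-core case above: the two little cores of two pairs can keep running background threads while the big cores handle the foreground.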

Finally, closing out the day, was Imagination, who, of course, have just recently acquired MIPS. Their view is that the GPU is the heart of a smartphone. Although most of the software runs on the regular CPU (almost always an ARM), over half the die area is taken up by the GPU, and it is the GPU that provides the ‘wow’ factor. Performance is moving towards 1 TFLOP on-chip for mobile.

Although there are lots of standard APIs, especially for graphics, the inherent architectural efficiencies remain important. Frequency, power and area all impact the user experience (or the price, which I suppose is part of the experience).

Of course, Imagination would love to use MIPS to displace all those ARM application processors, but that is clearly not going to happen. Both ARM (who have Mali) and Imagination (who have MIPS) know that they have to work with each other. The iPhone doesn’t use ARM’s graphics processor, nor MIPS for the control processor, so it is definitely coopetition: they compete and cooperate at the same time.


Linley Mobile

by Paul McLellan on 04-19-2013 at 12:11 pm

I was at the Linley Mobile Microprocessor conference earlier in the week. Well, just the first day since the second day overlapped with the GSA Silicon Summit. The first surprise was seeing Mike Demler in a suit. It turns out that he has joined the Linley Group as a full-time analyst in the mobile space.


Linley Gwennap started the day with his overview of the whole space. The smartphone forecast is increasing, but a lot of the growth is in low-cost (sub-$100 BOM) smartphones. By 2017 these will even be cannibalizing basic phones, with about half of basic phones switched over. So the smartphone market has a sort of “dumbbell”-shaped distribution, with the high-end (iPhone, Galaxy) part of the market already approaching saturation and a lot of future growth at the low end.


Vendor consolidation is continuing as the standalone application processor (AP) model collapses. Although high-end designs like the iPhone have a separate application processor, Apple designs it themselves, so they are not a customer for merchant AP chips. The merchant market is almost entirely integrated baseband (BB) and perhaps more (wireless, GPS, etc.).

TI and Freescale exited the smartphone and tablet markets. ST-Ericsson finally collapsed, unable to survive the loss of Nokia to Qualcomm. Mediatek shipped more than 100 million AP+BB chips. Spreadtrum released their first integrated AP+BB. In a break with the trend, Intel got some smartphone AP wins (without integrated BB). They have not yet announced any roadmap for AP+BB (which they can do using the Infineon wireless technology they acquired a few years ago).

The two big winners at present seem to be Qualcomm at the high end, replacing TI (and ST) at Nokia and Marvell at RIM. And at the low end, Mediatek is gaining market share fast.

While the top-selling high-end phones like the iPhone and Galaxy use a best-of-breed approach, picking and choosing vendors for the various components, most others use a single-vendor reference design. Obviously this reduces R&D cost for lower-volume models, but only vendors with a complete portfolio can offer a complete reference design.

| Vendor   | AP | WCDMA | LTE      | WiFi | GPS | NFC      |
|----------|----|-------|----------|------|-----|----------|
| Qualcomm | Y  | Y     | Y        | Y    | Y   | Sampling |
| Marvell  | Y  | Y     | In Qual  | Y    | Y   | Y        |
| Broadcom | Y  | Y     | Sampling | Y    | Y   | Y        |
| Mediatek | Y  | Y     | Licensed | Y    | Y   | N        |
| nVidia   | Y  | Y     | AT&T     | N    | N   | N        |
| Intel    | Y  | Y     | Sampling | N    | N   | N        |


Moore, or More Than Moore?

by Paul McLellan on 04-19-2013 at 12:05 pm

Yesterday was the 2013 GSA Silicon Summit, which largely focused on contrasting which advances in delivering systems will depend on marching down the ladder of process nodes, and which will depend on innovations in packaging technology. So, essentially, contrasting Moore’s Law with what has come to be known as More Than Moore: 2.5D interposer-based designs using TSVs and other innovations.

The first panel was focused on communication. The second on Internet of Things (IoT). Finally, the third was on integration challenges.

The industry is preparing for a 1000X increase in traffic (for example, data volumes doubled between 2010 and 2011). But there are major challenges. Moore’s Law is slowing down, and cost reduction (per transistor) with each process node is dubious or not happening. On air interfaces we are now close to the Shannon limit of information/Hz. Integrating RF onto CMOS (especially in FinFET) is both an opportunity and a challenge. Advances in packaging technology, especially those that allow a mix of die in the same package, offer alternative ways of assembling systems if the cost can be brought under control.

There seemed to be general acceptance that 20nm is not going to be significantly cheaper than 28nm which has a few implications:

  • 28nm will be a very long lived node
  • there may be opportunities for innovation at 28nm such as fully-depleted options to get many of the advantages (especially low power) of moving to 20nm without the additional cost (due to double patterning in particular)
  • cost-sensitive designs will not go to 20nm unless the volumes are enormous but premium products that can take advantage of the extra gates and lower power will eat the cost
  • old nodes such as 0.13um and 0.18um will continue to be important, and in fact these are both currently growth nodes
  • the cost of moving to 20nm is not just the wafer cost but the development cost, so it needs $Bs of revenue to justify
  • IP availability may be as important as process availability when moving to a new node; SoC design groups cannot afford to design all their own SerDes, PHYs, etc.

There seemed to be a general feeling that true 3D won’t happen any time soon, but 2.5D interposer-based designs will play a big role in the next few years. One big challenge is that silicon interposers are expensive, so organic interposers (what some people call 2.1D) may be better. The other big problem is thermal: getting the heat out of the design.

However, another driver for the More Than Moore packaging approach may be that “split chips” become important because analog turns out to be too difficult in FinFET, which has quantized transistor sizes. So analog (and RF) may need to be left outside the main SoC as we move down the process nodes.

Another packaging driver in the mobile market appears to be that RF front ends won’t have integrated filters (on-chip) so they will need to be integrated into the package to get close to the IC.

So both Moore and More Than Moore are going to be required in the coming years.


Semiconductor PLM – Needs to be smart for techies

by Pawan Fangaria on 04-18-2013 at 8:15 pm

During my long career in semiconductor and EDA, I have heard, believed and experienced that this is a knowledge industry swamped with rapid innovation and technology drivers; typical manufacturing product development processes, Gantt charts and the like do not apply here. The fallout is that most of the time estimations are ad hoc, based on gut feel or expert opinion. And not only schedules: most processes are run by individual preference; in other words, the whole process is more people driven than process driven. Naturally, we see missed targets, re-spins, cost overruns, lost market opportunities and so on. It is said that the success rate to first silicon is near 0%! And we attribute these product lifecycle management (PLM) issues to the high complexity of designs at the nanometer scale, high density, mixed analog and digital signals and so on.

After I read the Kalypso white paper on semiconductor lifecycle management on the Dassault website, “Semiconductor Product Lifecycle Management – Industry Adoption, Benefits and The Road Ahead”, my perspective changed. Yes, I believe that if we take PLM as a strategic direction towards improving the product lifecycle in the semiconductor space, many of the issues related to the short window of opportunity, time-to-market, design cost, profitability, etc. can see significant improvement.

So where is the problem? Why is PLM adoption slow in the semiconductor space? As I have experienced myself, there is no common, standard, packaged PLM tool in this space, a gap amplified by lack of awareness and limited understanding of PLM’s value proposition. It is important that a product be looked at not only from the technical angle, but also from the commercial angle.

[Components of a comprehensive PLM solution for Semiconductor industry]

The white paper describes in great detail the analysis by Kalypso researchers (who interviewed semiconductor industry executives) and proposes a comprehensive PLM method specifically for the semiconductor industry, covering the complete value chain including design, software, data management, supply chain and so on. A key message here is “Think big, start small, move fast and build incrementally”. By following this principle the PLM strategy can be implemented successfully. Unless it is fully implemented, we will not see major results; often we try to do it all at once and end up dissatisfied. If implemented gradually and completely, I am sure we will start seeing results project after project.

With a strategic PLM program in place, companies are seeing time-to-market and time-to-profit condensed by 5 to 30%, meaning more revenue due to improved market penetration. It also frees up R&D resources to work on other new product development early in the cycle. PLM also helps in commercializing and launching products globally and concurrently. A summary of PLM benefits is concisely listed as –

PLM provides a single comprehensive repository of complete data about any project, covering the overall value chain. Once implemented, this makes re-use, process improvement and the execution of subsequent projects much easier.

Some early adopters have already seen the benefits of PLM in the semiconductor space. A long-term strategy needs to be built, starting with a high-impact business problem and then building on it to realize the full potential of the system for the overall business.


Using Android Multimedia Framework drastically reduces power consumption in mobile

by Eric Esteve on 04-18-2013 at 8:05 pm

The multiplication of chips capable of running multimedia processing (sound, image or video) in a mobile device, smartphone or media tablet, whether Application Processor (AP), Baseband (BB), codec or companion chip, each embedding one or more processors, can be seen as a good opportunity to simplify the device architecture. However, running a multimedia task on a processor that is not optimized for it results in higher power and limits the device’s processing capacity, for example:

  • Running audio/voice tasks on CPU, instead of the DSP
  • Running imaging and vision tasks on GPU, instead of the imaging platform

This is especially true in the presence of an operating system that is unaware of the multimedia DSPs available in the system. CEVA has developed the Android Multimedia Framework (AMF) to solve this system integration problem.

AMF is a system-level software solution that allows offloading of multimedia tasks from the CPU/GPU to the most efficient application-specific DSP platforms. When running the Android OS, you either need to develop such a solution yourself, or you can benefit from a ready-to-use framework that lets you use deeply embedded programmable hardware engines and the software modules optimized for them. Thanks to its OS-agnostic standard API, CEVA’s AMF should comply with any Android-endorsed mechanism for multimedia offloading (e.g. KLP).

In the picture above, we see how to use the standard OpenMAX API (AMF complies with the current Android 4.x versions) and how to access the DSP, hardware drivers and Power Scaling Unit resources through a Host Link Driver located in the CPU subsystem. The benefits and functions enabled by AMF include:

  • Multimedia tasks are abstracted from the CPU and are physically running on the DSP. Furthermore, tasks can be combined (“tunneled”) onto the DSP – saving data transfer, memory bandwidth and cycles overhead on the CPU
  • Scalability – developers can utilize multiple DSPs in the system through AMF, e.g. multiple CEVA-TeakLite-4 DSPs, or CEVA-TeakLite-4 for audio/voice and CEVA-MM3101 for imaging/vision tasks
  • Utilization of the PSU (Power Scaling Unit) available for CEVA DSPs to significantly lower power consumption further, when running multimedia tasks
  • Easy activation of CEVA-CV computer vision (CV) software library for the development of vision-enabled applications targeting mobile, home, PC and automotive
  • Support for future standards such as OpenVX, a hardware acceleration API for computer vision
  • Automatic tile management for multimedia tasks which includes managing memory transfers and organization into DSP memory for efficient processing
  • An optional Real-Time Operating System (RTOS) is offered for the DSP

The benefit of using AMF to run the API on the CPU and the processing software on the DSP, instead of running both on the CPU, is crystal clear: the latter approach drains CPU resources and, worse, is absolutely not power efficient. Power inefficiency is not only a design issue, or a fault; it’s a real crime when the system has been designed for battery-powered mobile applications! Just think about Intel, almost unable to sell application processors based on their own CPU, simply because these are too power hungry for the emerging and exploding smartphone and media tablet markets. On the right side of the picture above we can see an AMF-based architecture where the API runs on the CPU, within the Stagefright framework, and the software runs on the DSP, freeing the CPU for other tasks and drastically reducing power consumption.
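As a rough illustration of the offload pattern described above (the type and function names here are hypothetical, my own, and not CEVA’s actual interface), the framework’s core job boils down to routing each multimedia task to the engine that runs it most efficiently, so the application only ever sees one API on the CPU:

```c
/* Hypothetical sketch of CPU-to-DSP task routing, in the spirit of AMF.
 * The framework, not the application, decides where a task runs, so
 * application code is unchanged when DSPs are added to the system. */

typedef enum { ENGINE_CPU, ENGINE_AUDIO_DSP, ENGINE_VISION_DSP } engine_t;
typedef enum { TASK_AUDIO_DECODE, TASK_IMAGE_FILTER, TASK_CONTROL } task_t;

/* Pick the most power-efficient engine for a task. */
engine_t route_task(task_t task)
{
    switch (task) {
    case TASK_AUDIO_DECODE:
        return ENGINE_AUDIO_DSP;   /* e.g. an audio/voice DSP such as CEVA-TeakLite-4 */
    case TASK_IMAGE_FILTER:
        return ENGINE_VISION_DSP;  /* e.g. an imaging/vision DSP such as CEVA-MM3101 */
    default:
        return ENGINE_CPU;         /* control and glue logic stay on the CPU */
    }
}
```

In a real system the routing would of course go through the Host Link Driver and OpenMAX components rather than a switch statement, but the principle is the same: the CPU keeps the API, the DSPs do the heavy lifting.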

Eric Esteve from IPNEST
