
A Vision for FPGA Prototyping Realized
by Daniel Nenni on 04-27-2015 at 7:00 am

FPGA prototyping is moving to the forefront of design and verification. More and more companies are turning to the technology not only for in-circuit testing and earlier software development but also for refining, validating, and implementing chip architecture. Growth in design size and complexity, along with the globalization of design teams, is driving the need for greater use of FPGA prototyping. This is exactly the vision that the founders of S2C, Toshio Nakama and Mon-Ren Chene, had for the technology when they started the company back in 2003.

Back in the late ’90s and early 2000s, Nakama and Chene both worked for Aptix, a leader in reconfigurable prototyping systems. Although the prototyping technology of the time worked well, it was very expensive and thus cost-prohibitive for many design houses. That’s when the two decided to team up and create a prototyping technology that was cost-effective but still delivered high performance.

FPGAs were widely used at the time and were much less expensive and more flexible than ASICs. It made sense to build a prototyping board around an FPGA. The entrepreneurs had their first product idea. Rather than seeking outside investment, the two pooled their own money to kick-start S2C, Inc. and then quickly launched their first FPGA-based prototyping board.

They recognized a couple of things early on. First, it was important to constantly listen to and learn from their customers in order to build products that provided the best value; the company still operates on that foundation today. It was during these in-depth discussions with customers that they recognized something else: FPGA prototyping had the potential to do much more than the aforementioned in-circuit testing. Through significant research and development, S2C could let its customers apply the inexpensive, high-performance nature of FPGA prototyping to other application areas and tackle even the largest designs.

What they envisioned was a complete prototyping platform that not only included the hardware components but also software to ease the strains of partitioning and debug. That vision extended into the ability to link the simulation environment with the prototyping environment to enable early algorithm/architecture exploration, accelerate design verification and increase test coverage.

Even after these breakthroughs, FPGA prototyping was still thought of as useful only for small to medium-size designs. Nakama and Chene had a plan to change that too. It’s true that advances in FPGAs in recent years have increased gate capacity and reduced the partitioning problem, but for companies building large designs, FPGA prototyping still had hurdles to overcome. Managing multiple FPGA prototyping boards across disparate design teams is a daunting task that FPGA prototyping hadn’t been able to address. S2C tackled this issue by giving those companies the ability to connect up to 32 FPGA prototyping boards in one chassis, with the added benefit of being able to manage the boards and design teams remotely via the cloud. Now companies could truly take advantage of FPGA prototyping technology.

A recent visit by a large electronics company validated the approach taken by S2C. That company had invested millions of dollars and thousands of man-hours to create an in-house solution managed by a team of 20 engineers. The result: a solution very similar to what S2C has developed. Had S2C’s FPGA prototyping solutions been available to this company, imagine the money that could have been saved and the innovations that could have been achieved by reallocating those precious engineering resources.

Today, S2C has grown to be a world leader in FPGA prototyping solutions. The Company boasts an impressive 200+ customers and 800+ system installations in leading consumer electronics, communications, computing, image processing, data storage, research, defense, education, automotive, medical, design services, and silicon IP organizations.

The success of the Company has been in large part due to the Complete Prototyping Platform vision for FPGA prototyping at any design stage, for any design size, and across multiple geographies. This vision has been realized due to sheer determination and the guiding principle at S2C to provide customers value so that they can innovate and become the geniuses of their own designs.

White Paper: FPGA Prototyping of System-on-Chip Designs


Semiwiki Blogger at DAC: MIPI Beyond Mobile, Myth or Reality?
by Eric Esteve on 04-27-2015 at 4:00 am

Some of the various MIPI specifications are now massively used in mobile (smartphones and tablets), especially the multimedia-related specs: Camera Serial Interface (CSI-2), Display Serial Interface (DSI) and SoundWire (even though that spec was only released in December 2014, adoption has been very fast, and there is no doubt it will be widely used in the future). A CSI-2 or DSI controller must be implemented together with a serial physical interface, D-PHY, specified at up to 2 Gbps. In fact, we have just named the top three MIPI specifications by adoption rate: DSI, CSI-2 and D-PHY.

Catch an image with a camera (CSI-2), transmit it (D-PHY) to the application processor (AP), process it, and transmit it (D-PHY) to a display device (DSI). Something is missing… how do I store this data? Universal Flash Storage (UFS) was introduced a couple of years ago, backed by the NAND flash manufacturers (Samsung et al.), and UFS is now the dominant storage spec for (high-end) smartphones. Supporting UFS requires implementing UniPro (an extra controller) and M-PHY, specified at up to 5.7 Gbps. M-PHY, UniPro and UFS form the second group of most-used interface specifications.

To define the last group, just remember that the MIPI Alliance was initially created to support wireless phones; this group includes the radio-related specifications, DigRF and RFFE (RF Front End). We have extracted the graphic below from the latest MIPI Ecosystem Survey, made by IPnest in 2015 (some of the miscellaneous interfaces have been removed for easier comprehension):

Thanks to this in-depth review of the 263 members of the MIPI Alliance (as of February 2015), we know which interface specifications are preferred in mobile (smartphone and tablet). But the electronics industry is moving, searching for new growth segments: some emerging, like IoT and wearables, and others, like automotive, evolving strongly to include ever more functionality, in particular infotainment. MIPI technology is production-proven, and a chip maker wanting to integrate a MIPI specification can benefit from all the necessary support functions: design and verification IP vendors as well as test equipment vendors. Moreover, MIPI technology has been defined from the very beginning for low power and, even if it looks obvious, for interoperability between chips designed by different suppliers.

Why not use MIPI specifications to support wearable devices, where low power is crucial (just think about the poor iWatch battery autonomy) and you need to communicate through one of the various wireless protocols (WiFi, Zigbee, BLE, etc.)? This is “MIPI Beyond Mobile”…

One of the very interesting findings of the MIPI Ecosystem 2015 survey was that MIPI attracted new members in 2013 and 2014 at double the rate of the previous three years’ average (about 25 per year in 2010, 2011 and 2012). Another finding is that a strong proportion of these new members are “young companies” and that a significant number of them are targeting wearables and IoT.

We will present our assumptions about which specifications will be adopted to support emerging IoT and wearable devices (and the related IP sales to be generated) during the 52nd DAC in San Francisco. It will be on Tuesday, June 9th, in the “IP Strategies and Management” track starting at 1:30 pm.
Sorry dear reader but I have to stop here to preserve the key findings for the DAC audience…

By Eric Esteve from IPNEST


Heterogeneous Processing: Power Reduction and Performance Advances
by admin on 04-26-2015 at 7:00 pm

Heterogeneous processors have demonstrated reduced power consumption and improved performance in mobile processors, as in ARM’s big.LITTLE technology, where low-power, relatively slower cores are coupled with more powerful ARM cores. According to ARM Holdings, big.LITTLE can save 75% of CPU energy in low-to-moderate performance scenarios and can increase performance by 40% in high-performance scenarios. Due to these power-efficient characteristics, many high-end mobile processors follow the big.LITTLE approach.

Currently, many desktop and mobile processors implement heterogeneous computing with the CPU and GPU on the same integrated circuit. Present-day AMD APUs and Intel laptop processors put a CPU and GPU on a single die; AMD’s are built around the Heterogeneous System Architecture (HSA). A step toward heterogeneous-ISA designs is AMD’s Project Sky Bridge, which attempts to integrate circuitry from two systems-on-chip (SoCs). Such an architecture carries two separate instruction sets on one chip; problems arise when integrating the two into one design, mainly in communicating between each instruction set and its chipset circuitry. This should reduce hardware and software costs without cluttering the computer. “Different motherboards are currently required for x86 and ARM chips, and it’s expensive for developers and users alike to support disparate architectures,” said Lisa Su, general manager of AMD’s global business units, during a press event that was webcast (Computerworld article).

AMD recently introduced its first ARM server processor, the Opteron A1100, a step toward Project Sky Bridge; in 2015, a lower-power ARM part is expected to be pin-compatible with an x86 counterpart. AMD is also developing its own ARM processing core, called K12, scheduled for release in 2016.

Multiple instruction set architectures (ISAs) in chip-level multiprocessing need sufficient diversity to be useful for heterogeneous processing. Modern ISAs (e.g., ARM, x86/x86-64, MIPS) have enough diversity among themselves to provide an acceptable increase in efficiency and decrease in power consumption to justify research into heterogeneous processing. There are four key elements to heterogeneous processing: code density, dynamic instruction count, register pressure, and floating-point & SIMD support.

Code density refers to the efficiency of cache and memory use, and the power consumed, when interchanging between instruction set architectures that use different instruction lengths. Dynamic instruction count refers to the number of instructions a multi-ISA multiprocessor must encode and decode; because of the differences inherent in multiple ISAs, more or fewer instructions are needed to carry out parallel processing on the same code. Register pressure refers to the different ways each ISA uses its registers. Because some ISAs use registers for specific purposes, different implementations must be used to correctly allocate instructions passed through the registers; software emulation of registers and direct memory addressing are among the methods used to reduce register pressure. The final key element of heterogeneous computing is floating-point and SIMD support: lightweight ISAs sometimes forgo precise floating-point and single-instruction-multiple-data support in favor of lower power consumption, producing a convergence of ISA activity where (in an ideal situation) the ISAs should diverge.

Core switching can be used to advantage: certain code may be best for one processor type but not for another, so with different cores on a chip we can switch among cores according to their efficiencies. Moreover, in heterogeneous computing we can shut the other core down instead of leaving it idle; there is then no static leakage or dynamic switching power, making the system more power-efficient. Since core switching costs time, however, it also acts as a disadvantage. A minimal sketch of explicit core switching appears below.
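As a concrete illustration, explicit core switching can be done from user space on Linux by changing a thread’s CPU affinity. The sketch below is a minimal example assuming a big.LITTLE-style SoC where CPUs 0-3 are LITTLE cores and CPUs 4-7 are big cores (the numbering is hypothetical and varies by chip); it is not ARM’s in-kernel switcher, just a way to see the mechanism at work.

    /* Minimal sketch: migrate the calling thread between core clusters.
       Assumes Linux and a hypothetical topology: CPUs 0-3 = LITTLE, 4-7 = big. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    static int pin_to_cores(int first, int last)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int cpu = first; cpu <= last; cpu++)
            CPU_SET(cpu, &set);
        return sched_setaffinity(0, sizeof(set), &set);  /* 0 = calling thread */
    }

    int main(void)
    {
        if (pin_to_cores(0, 3) == 0)       /* LITTLE cluster: low power */
            printf("light phase on LITTLE cores\n");
        /* ... latency-tolerant background work ... */

        if (pin_to_cores(4, 7) == 0)       /* big cluster: high performance */
            printf("heavy phase on big cores\n");
        /* ... compute-intensive burst ... */
        return 0;
    }

In practice the operating system scheduler makes this decision automatically, precisely because, as noted above, each migration costs time.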

Designing the scheduler around the heterogeneous chip can also improve efficiency, and the way processor cores are arranged affects performance. ARM’s big.LITTLE currently supports three scheduling arrangements (cluster migration, CPU migration, and global task scheduling), each with different advantages. Similarly, different arrangements can be a positive performance factor in other heterogeneous processors, utilizing each core type to its maximum. A toy cluster-selection policy is sketched below.
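To make the scheduling idea tangible, here is a toy cluster-selection policy in C. It is only a sketch of the general load-plus-hysteresis approach; the thresholds and the load trace are invented for illustration and do not come from ARM’s actual schedulers.

    /* Toy load-driven cluster selection with hysteresis, so tasks do not
       ping-pong between clusters (migration itself costs time and energy).
       Thresholds 0.80/0.30 are made-up illustration values. */
    #include <stdio.h>

    enum cluster { LITTLE, BIG };

    static enum cluster pick_cluster(enum cluster cur, double load)
    {
        const double UP = 0.80, DOWN = 0.30;
        if (cur == LITTLE && load > UP)   return BIG;
        if (cur == BIG    && load < DOWN) return LITTLE;
        return cur;                        /* stay put inside the band */
    }

    int main(void)
    {
        double trace[] = {0.10, 0.50, 0.90, 0.95, 0.60, 0.20};
        enum cluster c = LITTLE;
        for (int i = 0; i < 6; i++) {
            c = pick_cluster(c, trace[i]);
            printf("load %.2f -> %s\n", trace[i], c == BIG ? "big" : "LITTLE");
        }
        return 0;
    }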

Future applications of heterogeneous computing can include multi-ISA processors. Different architectures are good and bad at different things: while multi-ISA heterogeneous computing can make a processor more efficient, a single-ISA processor can still outperform it. For example, ARM processors are known for their low power requirements and x86 for faster processing; a multi-ISA design combining an ARM core and an x86 core could prove better overall. Some big companies are currently working on this, but there are many challenges. Heterogeneous computing faces application-level challenges with the binary interface, which includes endianness, calling conventions, and memory layout, all of which depend on both the architecture and the compiler being used.

Heterogeneous designs, if properly implemented, can maximize efficiency, though there are many design challenges, such as the compatibility of software with the specific compiler that bridges each instruction set architecture. If these problems can be overcome, future processors can deliver real advances in computational architecture.

By Suprabh Singh and Thomas Pitts

The University of Mississippi Electrical Engineering Department introduced a Digital CMOS/VLSI Design course this semester. As part of this course, students researched a contemporary issue and wrote a blog article about their findings for presentation on SemiWiki. Your feedback is greatly appreciated.

References:
Rakesh Kumar, Keith I. Farkas, Norman P. Jouppi, Parthasarathy Ranganathan, Dean M. Tullsen (2003). Single-ISA Heterogeneous Multi-Core Architectures: The Potential for Processor Power Reduction. 36th International Symposium on Microarchitecture, IEEE.
Link: http://www.microarch.org/micro36/html/pdf/kumar-SingleISAHeterogen.pdf

Antonio Barbalace, Alastair Murray, Rob Lyerly, Binoy Ravindran (2014). Towards Operating System Support for Heterogeneous-ISA Platforms. The 4th Workshop on Systems for Future Multicore Architectures, University of Washington.
Link: http://sfma14.cs.washington.edu/wp-content/uploads/2013/03/sfma2014_submission_2.pdf

Ashish Venkat, Dean M. Tullsen (2014). Harnessing ISA Diversity: Design of a Heterogeneous-ISA Chip Multiprocessor. Proceedings of the 41st Annual International Symposium on Computer Architecture.

big.LITTLE Technology, ARM.
Link: http://www.arm.com/products/processors/technologies/biglittleprocessing.php

Agam Shah (May 5, 2014). AMD Unites x86 and ARM in Project Skybridge. Computerworld.
Link: http://www.computerworld.com/article/2489007/computer-processors/amd-unites-x86-and-arm-in-project-skybridge.html


Growing Innovation in Modern PCB Design Tools
by Pawan Fangaria on 04-26-2015 at 7:00 am

In the last 30+ years, the electronic design industry has seen more rapid change than almost any other industry. That change spans the whole electronics ecosystem: semiconductor technology, transistor design, IC/SoC design, PCB, and system design. Today a PCB can be very complex, connecting several heterogeneous components operating at different voltages and clocks and thus imposing complex design constraints. Component selection itself can be a very tedious task, as there can be numerous suppliers for a component and a designer may need to browse the catalogues of several of them.

It feels great to see a product stay in the market for 30 years while being continuously upgraded to address newer design requirements, easing designers’ work, increasing their productivity, and improving design quality. I remember when I was at Cadence: it acquired OrCAD in 1999, by then a public company, though the firm had started much earlier, in 1985. In fact, the PSpice circuit simulation technology came to Cadence through OrCAD’s acquisition of MicroSim in 1997. OrCAD, a complete suite for schematic capture and PCB layout design with circuit simulation capability, also adopted Cadence’s Allegro PCB technology and several other enhancements to cater to modern PCB requirements. OrCAD’s 30th-anniversary release, 16.6-2015, brings key enhancements and new featured products that make high-speed PCB design a qualitative, productive, and fascinating experience for engineers.

Today, high-speed components such as memories (DDR), interfaces (USB), and others with complex constraints introduce new challenges in PCB design. For example, a high-speed memory can carry constraints such as ‘relative propagation delay’ for address and control bits, data byte lanes, and so on. OrCAD PCB Designer Professional supports various kinds of complex constraints, including relative propagation delay, static phase control for differential pairs, impedance, and a high-speed constraint heads-up display, among others (a sketch of what such a check does appears below). For quick manual-assisted routing, it provides features such as scribble routing, contour routing, group routing, and via arrays that improve designers’ productivity to a large extent.
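To give a feel for what such a constraint means, the sketch below shows the check a relative-propagation-delay rule performs: every net in a group must arrive within a tolerance window of a reference delay. This is a generic illustration; the net names, delays, and tolerance are invented and say nothing about OrCAD’s internal implementation.

    /* Generic sketch of a relative-propagation-delay (RPD) check:
       each net in a byte lane must match a reference delay within a
       tolerance. All numbers and names below are hypothetical. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { const char *net; double delay_ps; } route_t;

    static int check_rpd(const route_t *r, int n, double ref_ps, double tol_ps)
    {
        int violations = 0;
        for (int i = 0; i < n; i++) {
            if (fabs(r[i].delay_ps - ref_ps) > tol_ps) {
                printf("RPD violation: %s is %.1f ps (ref %.1f +/- %.1f)\n",
                       r[i].net, r[i].delay_ps, ref_ps, tol_ps);
                violations++;
            }
        }
        return violations;
    }

    int main(void)
    {
        route_t lane[] = { {"DQ0", 612.0}, {"DQ1", 618.5}, {"DQ2", 640.2} };
        check_rpd(lane, 3, 615.0, 10.0);   /* DQ2 will be flagged */
        return 0;
    }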

OrCAD Capture has been integrated with a new Signal Explorer, with direct data flow between the two, allowing interactive exploration, analysis, and editing of design entities and constraints for high-speed signals. Complete exploration and signal integrity analysis can be done at the pre-layout stage to improve design quality and robustness.

The other new products added in the OrCAD 16.6-2015 release are –

OrCAD DFM Checker – It performs manufacturing- and fabrication-centric checks that help correct such issues before manufacturing, thus eliminating re-work and cost overruns.

OrCAD Component Information Portal – It provides an interactive, web-based interface with a rich GUI for accessing a component database shared between different vendors. Designers can research different parametric components from a common environment without any additional setup or maintenance. This significantly improves designers’ productivity and simplifies CIS maintenance.

OrCAD Panel Editor – This is an intelligent documentation environment that significantly simplifies panel creation and documentation.

OrCAD Sigrity ERC – This provides an important capability to check the signal quality in the PCB design, thus improving the quality and robustness of the design.

A comprehensive set of electrical rule checks (ERCs) is performed that goes beyond the usual geometry-based DRCs to validate signal quality. This helps designers identify and correct first-order signal quality issues before more exhaustive analysis is performed.

After going through the Cadence press release on the OrCAD 16.6 release for its 30th anniversary, I had an opportunity to talk with Josh Moore, Project Management Director for OrCAD at Cadence. Josh explained how these capabilities expand the OrCAD solution for greater productivity and efficiency in creating high-speed PCB designs for today’s complex applications such as IoT and automotive. Interestingly, the product’s popularity is reflected in the several worldwide Yahoo groups its users have created around OrCAD for sharing experiences, how-tos, libraries, and so on.

This is a product built to last in the semiconductor industry. Long live OrCAD!

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


GlobalFoundries: 14 in 15 in 8
by Paul McLellan on 04-26-2015 at 1:00 am

How is that for a cryptic title? It is GlobalFoundries’ rallying cry: their big focus is to bring up 14nm during 2015 in Fab 8 in Malta, NY.

Last week I chatted to Shubhankar Basu. He is the senior product line manager for leading edge technologies. Not surprisingly, that means he is currently leading the 14nm FinFET business. He told me that they are well on track to achieving “14 in 15”.

It is almost exactly a year since GlobalFoundries announced that it was licensing Samsung’s 14nm process and would implement it in fab 8. The same process also runs in Samsung’s fab in Austin as well as in Korea. The two companies are running identical processes with a single PDK. It is possible to tapeout 14nm designs to either or both suppliers.

There are actually two 14nm processes, LPE (E is for early) and LPP. GF plans on this being a long-lasting node since they are seeing lots of demand. In fact they have kicked part of their technology development center out of the fab to create more space to enable a bigger and faster ramp. In the future there will also be additional 14nm derivatives based on the same underlying process.

GF is running just a single part in LPE. LPE completed qualification in January of this year and has reached the yield levels for a volume ramp of this product starting this month.

GF could not say who the customer for this single product is, but there are rumors all over the net that it is Nvidia. Nvidia isn’t admitting to it either: when asked about this during a conference call with investors and financial analysts in February, Jen-Hsun Huang, CEO of Nvidia, neither confirmed nor denied the plan.

The second process, LPP, is also on track. After less than three months it has already reached SRAM yields as good as LPE’s. Qualification should come early in Q3 this year, with volume ramp in Q1 2016. They have been running (and will continue to run) LPP shuttles, almost all of which have been over-subscribed.

There is a core of common IP. GF have added some additional capabilities to their 14nm LPP offering:

  • additional IP blocks
  • 2.5D and 3D TSV-based packaging
  • advanced DFM
  • advanced EDA reference flows

There are many designs taping out this year in LPP. Note that these are “first source” designs, where GF is the only fab (or is the first source). These designs are in mobility and for some large-die, high-performance computing customers.

I asked Shubhankar about 10nm. He said that they were currently doing internal development and he couldn’t comment about any possible co-operation with Samsung at that node.

So I switched topic to IBM. Since the deal to “buy” IBM’s semiconductor business hasn’t officially closed, there is a limit to what he could say (in fact, having been through mergers myself, I know there is even a limit to how much the two companies can even work together). He would say that since GF is IBM’s exclusive foundry for the next 10 years then obviously 14LPP will be part of the offering. GF is also inheriting IBM’s ASIC business, some of which over time should migrate to LPP as a next generation process.

So the catchphrase is “14 in 15”, with 14nm LPE ramping to volume, and 14nm LPP qualified and ready to ramp, all in 2015.

GlobalFoundries’ website is here. The schedule for the GlobalShuttle MPW program, including 14nm, is here. The next one is June 30th.


2015 Semiconductor Capex led by Memory & Foundry
by Bill Jewell on 04-25-2015 at 7:00 am

Semiconductor industry capital expenditures (capex) are expected to be $69 billion in 2015, up 6% from $65 billion in 2014, according to IC Insights. We at Semiconductor Intelligence have compiled the 2015 capex outlook by company. The major memory companies account for 38% of 2015 capex and the major foundries for 27%.

Memory and foundry companies combined account for almost two-thirds of 2015 capex. The three largest spenders (Samsung, TSMC and Intel) add up to half of the total. The table below shows capex for 2014 and projections for 2015. The projections are from the companies, Digitimes, and Semiconductor Intelligence (SC IQ). Digitimes forecasts Samsung Semiconductor will spend $15.1 billion in 2015, up 13% from 2014. The midpoint of TSMC’s April guidance for 2015 is $10.8 billion, up 13%; this is down from TSMC’s January guidance of $11.5 billion to $12.0 billion, which was up 23% at the midpoint. Intel also reduced its 2015 capex guidance, from $11 billion in January (up 9%) to $8.7 billion in April (down 14%).

Overall, the listed companies are expected to total $55.7 billion in 2015 capex, up 12% from 2014. Based on IC Insights’ forecast of $69.0 billion for the total semiconductor industry, this leaves $13.3 billion for other companies, down 14%. Eight of the nine biggest spenders are either memory companies or foundries. Flash Ventures is a set of manufacturing joint ventures between Toshiba and SanDisk which accounts for most of their memory supply. Four of the companies on the list (Infineon Technologies, STMicroelectronics, Texas Instruments and NXP Semiconductors) are top-15 semiconductor suppliers which once depended primarily on internal wafer fabs; they are increasing their reliance on foundries, so their capex is now rather small relative to their sales. The “others” category largely consists of small to medium-size companies producing analog and discrete semiconductors.

Do current industry conditions justify the strong increase in capex by the foundry companies? TSMC, UMC and SMIC combined are expected to increase their capex 17% in 2015. GlobalFoundries expects $9 billion to $10 billion in capex across 2014 and 2015 combined, but did not indicate the split by year. The capacity utilization trends of TSMC, SMIC and UMC show high utilization rates since the second quarter of 2014, after dips in late 2013. TSMC does not report a utilization rate; utilization calculated by dividing wafers shipped by wafer capacity yields unrealistic rates above 100%, but the calculated number still indicates the general trend in TSMC’s utilization.

Bookings and billings for semiconductor manufacturing equipment show a relatively healthy market, based on data from SEMI and SEAJ. Although billings have been relatively flat for the last four quarters, the book-to-bill ratio has been above 1.0 for the last two quarters, indicating near-term growth (a ratio above 1.0 means more equipment is being ordered than shipped).

2014 semiconductor manufacturing equipment shipments were $37.5 billion, up 18% from 2013. Shipments were still well below the $43.5 billion in 2011 and the $42.8 billion in 2007, before the last major semiconductor downturn. The chart below shows shipments by region for 2007 and 2014. Shipments in 2014 were $5.3 billion lower than 2007. The difference is primarily in Japan, which was $5.1 billion lower. Shipments were higher in 2014 in North America (up 25%) and China (up 50%). Shipments were lower in South Korea, Taiwan, and the rest of the world (ROW).

2015 should be a good year for semiconductor capital expenditures and semiconductor manufacturing equipment. However it is dependent on continuing healthy demand for semiconductors. Our latest forecast at Semiconductor Intelligence is for 8% semiconductor market growth in 2015, enough to support the current capex outlook.


Managing Design Flows in RF Modules
by Majeed Ahmad on 04-24-2015 at 7:00 pm

The semiconductor industry is expected to grow at a reasonable pace in 2015 and beyond, with the biggest market being compute applications, followed by wireless and consumer applications. The highest growth, however, is expected to be in application-specific products for devices such as smartphones, wearables, memories, and SSDs. In addition, the industrial electronics segment is also expected to grow significantly, with the Internet of Things (IoT) dominating the market.

From a design standpoint, a common factor is the increased use of RF design modules. The need for faster connections and greater network capacity for wireless technologies like LTE, Wi-Fi and IoT is driving the demand for more complex radio circuit designs. In fact, IoT is predicted to grow at a phenomenal pace, with over 30 billion devices expected to be connected to the Internet by 2018.


Number of devices connected to the Internet
(Source: BI Intelligence)

Needless to say, with the increased usage of RF modules, RF designers are being sought after by an increasing number of companies. RF design teams that typically have been used to working in isolation are now being thrust into the limelight and must collaborate efficiently with different design groups, such as the digital and analog teams. The RF aspect of a design adds a more complex set of challenges as it is essential that the integrity of the communication path be maintained. So the RF teams now need to work more closely with the physical implementation and other teams to ensure that, for example, the noise due to discrete logic is taken care of properly.

From a design management standpoint, the increase in RF and mixed-signal designs makes it important for all engineers to follow essential design methodologies such as revision control to streamline complex flows and design schedules. More importantly, ensuring that everyone adheres to revision control enables designers to revert, as needed, when design mistakes are made, as well as to manage and tag the various handoff releases made to the different teams.

For example, to ensure proper integration of the RF and analog IPs in the SoC, it is important to shield them from the digital logic and to resolve all noise issues. Since there are several ways to resolve these issues, such as proper frequency allocation, having a revision control system in place enables reverting to previously saved versions if an attempt to reduce noise does not work as anticipated.

Whether a team comprises multiple designers under one roof or globally dispersed members, it is useful for design companies to use only one tool for design configuration management. That tool should manage, among other things, revision control and release management for all types of designs: digital, analog, RF and mixed-signal.

From a design manager’s viewpoint, it is easier to track all changes made during the project, and any open issues against a release, when all design engineers manage all the design data through one design configuration system. As the design ecosystem changes to allow greater interoperability between the different tools used by analog, RF, and digital designers, a common design management platform supporting all types of designs helps avoid unnecessary problems and potential slips in the design schedule.

ClioSoft’s SOS design management platform is the only such platform that provides a cohesive design environment for RF, analog, digital and mixed-signal design. Its tight integration with tools from EDA vendors such as Cadence Design Systems®, Keysight Technologies, Mentor Graphics and Synopsys® makes it easy for designers to adopt and use SOS for managing design data.

ClioSoft recently held a webinar to show RF designers using the Advanced Design System (ADS) tool from Keysight Technologies how to use the SOS platform integration within ADS. ClioSoft’s Director of Application Engineering, Karim Khalfan, walked the attendees through the SOS interface for the ADS design environment.

Karim explained how to set up the SOS tool as an add-on for Keysight ADS

Karim began by explaining how to set up the SOS data and IP management tool as an add-on for ADS. He also elaborated on the key features of SOS within the ADS environment, such as revision control, data recovery, side-by-side comparisons and more. You can view a 30-minute recording of the webinar by clicking here.

There are several challenges in integrating analog and RF modules with the digital portions of an SoC: noise, verification, modeling, process variations, etc. In addition, from a design management standpoint, there are several challenges in managing the design handoffs for complex design flows, tracking open issues, and managing the different revisions of designs being used in the SoC. Having an underlying design data management tool to handle the intricacies of complex design flows among design engineers at all sites reduces inefficiencies and mitigates risk considerably, enabling design teams to be more productive.


Agile IC: All You Gotta Do To Join Is…
by Paul McLellan on 04-24-2015 at 7:00 am

Back on October 1st last year came the announcement of the Agile IC Methodology. As I said then: “Today Sonics has launched the Agile IC Methodology along with several collaborators. The initial phase is to create a LinkedIn group to start the discussion.”

See also Agile IC Development

At that point there was just an idea and a LinkedIn group. The group now has well over 300 members from about 60 companies. In addition, Neil Johnson of XtremeEDA created a similar group called AgileSoC, which has over 500 members. So there is a lot of interest. The Design Automation Conference (you knew that is in San Francisco from June 7-11th, didn’t you?) has added a meeting on the subject, the Agile IC Methodology Forum. This will be in room 306 on Tuesday from 10.30am until noon. If you are interested, just show up; there is no need to register, and any DAC badge (including exhibitor badges) will let you in.

The roots of Agile IC development are in Agile software development, where it would not be incorrect to call it a movement. The IC world is not there yet, although the needs and motivations are almost identical:

  • individuals and interactions over processes and tools
  • working software over comprehensive documentation
  • customer collaboration over contract negotiation
  • responding to change over following a plan

The hottest buzzword in semiconductors right now is the internet of things (IoT). What is most notable about IoT is that nobody really knows what it is. Furthermore, in many areas, nobody has much idea of precisely what product features make for a winner, which means that development has to start before there is a spec. The old waterfall model, where marketing writes a spec and then engineering builds all the hardware and software, is simply unworkable and too inflexible. Any spec in IoT will be obsolete before it has even been completed. And it is not just IoT, it is pretty much any system. The next iPhone, for example, has to be ready a year after the last one to make the holiday gift-giving season.

So the methodology has to change, has to become more agile. It is not just ICs either. Almost any IC today also contains a huge software stack. Qualcomm, for example, is the largest fabless semiconductor company but it employs more software engineers than hardware designers, and I wouldn’t be the least bit surprised if that same statistic applies to Intel.

The speakers at the DAC event are:

  • Randy Smith of Sonics, who told me he will give a rallying call for Agile IC Development and detail what it is and how to contribute (the top-down view)
  • Neil Johnson of XtremeEDA, who told me he will approach it from the other end and look at the bottom-up view: what individual designers or small teams can do, inside an organization that is anything but agile, to create small initial successes
  • A third speaker, probably from the FPGA world

There will be plenty of time for discussion.

As Arlo Guthrie sang (well, spoke) in Alice’s Restaurant: “They may think it’s a movement. And, friends, that’s what it is. And all you gotta do to join is to sing it the next time it comes around on the guitar.”

Well, that’s not how you join the Agile IC movement, you should:

  • Enrol in the Agile IC LinkedIn group if you have not already done so. The group page is here
  • Show up during DAC in room 306 on Tuesday June 9th from 10.30am to 12pm
  • Watch the presentation Solving the System-Level Design Riddle from Design World

Shift-West of Semicon Power Centers
by Pawan Fangaria on 04-23-2015 at 5:00 pm

It’s true that Japan was once the center of the semiconductor business, and we carried that perception until recently. In 1990, six of the top 10 semiconductor companies (excluding pure-play foundries) were Japanese, and 59% of the worldwide semiconductor market was concentrated in the top 10 companies. The semiconductor business was East-dominated. But that domination has faded over time. The changing equations of the semiconductor business have brought new players into the top ranks year over year and pushed the Japanese companies out of the top 10 list. In 2014, only two Japanese companies were in the top 10, and in 2015 only one of them will remain. Let’s look at the table below from an IC Insights report.

In 1990, the top three ranks were occupied by Japanese companies: NEC, Toshiba and Hitachi. The two well-known US names in this industry, Intel and Texas Instruments, were ranked 4th and 8th respectively.

In 1995, Intel became number one and has maintained that coveted position to date. Samsung made a dramatic entry at number 6, rapidly improved to number 2, and has held that position just behind Intel. Motorola (now Freescale) was in the top 10 list until dropping out in 2014. Philips (now NXP) was also on and off the top 10 until dropping out in 2014. But now NXP and Freescale have merged, and the combined entity will show up in the top 10 list for 2015.

What is interesting in this top 10 list is the continuous elimination of Japanese companies: from six in 1990 to four in 1995, three until 2006, two in 2014, and now the only one remaining in 2015 will be Toshiba.

The other dramatic entries in recent times are the fabless giants, Qualcomm and Broadcom. The fabless companies significantly changed the semiconductor business model. The changed equation in 2015 shows five US companies, two European companies (NXP+Freescale is considered European), two South Korean companies and one Japanese company in the top 10 list.

This is a significant shift of semiconductor business domination to the Western countries. About 25 years ago, in the 1990s, the direction was just the opposite: Japan, in the Far East, dominated the semiconductor business. The top 10 list matters because these companies hold the major portion of worldwide sales. In 1990 they accounted for 59% of total semiconductor sales, and in 2015 they are again expected to account for ~53%. There were a few dips during economic crises: in 2000 they represented 49%, and in 2006 the lowest of all, 45%, of total worldwide semiconductor sales. This lion’s share for the top 10 will continue, because the semiconductor industry needs large capital to sustain the economies of scale required to compete and remain at the leading edge of the market.

The fabless model has given rise to the semiconductor IP industry, which allows smaller companies to enter the market with small capital requirements. However, they ultimately serve the larger SoC players. And even in the IP market, ARM holds more than 35% of the total share; combine the top five IP companies and the share exceeds 70%. Granted, the IP business is much smaller than the total semiconductor IC market, but it illustrates how capital-strong companies hold a large part of the market share in the semiconductor business.

Looking at this top 10 list made me curious how it would look if we included pure-play foundries. So I went back to one of my earlier blogs, “Look who is Leading the World Semiconductor Business”. In that top 20 list, TSMC was the only pure-play foundry, at number 3 among the top 10 companies of 2014. So if we include foundries in the top 10 list of 2015, the only change is that TSMC comes in and ST drops out. Still, the USA with five companies and Europe with one will dominate over the East.

Also read: US is the Ultimate Leader in Semiconductor Business

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Xtensa Fusion DSP Targets IoT including Wireless and Security
by Eric Esteve on 04-23-2015 at 8:30 am

The Internet of Things (IoT) can be seen as a fashionable buzzword covering so many distinct applications that it is sometimes nicknamed the “Internet of Everything”, or it can be perceived as the next revolution in electronic systems, generating more revenue by 2020 than the smartphone and computer markets together. But the industry consensus about IoT is that guaranteeing security will be absolutely crucial. And obviously, an IoT device is by definition a wirelessly connected system…

The Fusion DSP core from Tensilica/Cadence, based on the proven Xtensa® customizable processor, addresses the three basic functions of any IoT system: sense, compute and communicate. Computing the information generated by sensors is far more efficient on a DSP than on a general-purpose processor (GPP). IoT applications require sensors, and only low-cost sensors will allow wide adoption of IoT systems everywhere it makes sense; low-cost sensors in turn imply higher-performance DSP computation, while keeping power consumption as low as possible. Integrating a configurable DSP core is the right option, far more power-effective than a GPP or even a standard DSP chip, and it supports always-on applications.

Xtensa Fusion DSP integrates a floating-point unit to support sensor fusion, as well as a VLIW processor and configurable MACs. The designer may select a single 32×32, dual 32×16, dual 24×24 or dual 16×16 MAC (a generic sketch of the dual-MAC arithmetic appears after the list below). These configuration options allow the core to be tailored exactly to the targeted application: no silicon waste, maximum performance efficiency and, even more important, optimized power consumption. The Audio/Voice/Speech (AVS) block has been derived from the HiFi 3 Audio DSP, guaranteeing software compatibility with HiFi 3 as well as access to 140+ audio/voice software packages. As mentioned earlier, IoT implies wireless communication; the MAC and PHY from Cadence plus the baseband bit operations allow supporting:

  • Bluetooth Low energy (BLE)
  • WiFi
  • Zigbee
  • Smartgrid
  • LTE
  • GNSS
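
To illustrate what the dual-MAC option referenced above buys, here is a generic C sketch of the arithmetic a dual 16×16 MAC performs: two 16-bit multiplies accumulated per step, halving the inner-loop count of a dot product. This is plain C for illustration only, not Tensilica configuration code.

    /* Generic sketch of dual 16x16 multiply-accumulate: two lanes
       accumulate in parallel, as a dual-MAC DSP would do per cycle. */
    #include <stdint.h>

    typedef struct { int64_t acc0, acc1; } dual_acc_t;

    static inline void dual_mac16(dual_acc_t *a, int16_t x0, int16_t y0,
                                                 int16_t x1, int16_t y1)
    {
        a->acc0 += (int32_t)x0 * y0;   /* lane 0 */
        a->acc1 += (int32_t)x1 * y1;   /* lane 1 */
    }

    /* Dot product consuming two sample pairs per iteration. */
    int64_t dot16(const int16_t *x, const int16_t *y, int n)
    {
        dual_acc_t a = {0, 0};
        for (int i = 0; i + 1 < n; i += 2)
            dual_mac16(&a, x[i], y[i], x[i + 1], y[i + 1]);
        if (n & 1)                     /* odd tail element */
            a.acc0 += (int32_t)x[n - 1] * y[n - 1];
        return a.acc0 + a.acc1;
    }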

To add security to this wireless communication (BLE and WiFi), AES-128 encryption acceleration is integrated into the Xtensa Fusion DSP.
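The announcement does not show how that acceleration is invoked, so as a generic software illustration of AES-128 payload encryption, here is a minimal C sketch using OpenSSL’s EVP API; the key, IV, and message are made-up demo values, and a real device would use the DSP’s hardware engine instead.

    /* Minimal AES-128-CTR encryption sketch with OpenSSL (compile with
       -lcrypto). Demo only: never use a fixed all-zero key/IV in practice. */
    #include <openssl/evp.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned char key[16] = {0};               /* demo key */
        unsigned char iv[16]  = {0};               /* demo IV  */
        unsigned char msg[]   = "sensor reading: 23.5C";
        unsigned char out[64];
        int len = 0, total = 0;

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out, &len, msg, sizeof(msg));
        total += len;
        EVP_EncryptFinal_ex(ctx, out + total, &len);
        total += len;
        EVP_CIPHER_CTX_free(ctx);

        printf("%d ciphertext bytes\n", total);
        return 0;
    }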

Xtensa Fusion DSP can be designed into systems-on-chip (SoCs) for wearable activity monitoring, indoor navigation, context-aware sensor fusion, secure local wireless connectivity, face trigger, voice trigger and voice recognition. The features needed to support voice activation (quad 16 MAC), sensor fusion (FPU), Audio/Voice/Speech (AVS) and wireless communication, where both AES encryption and baseband bit operations are required, are listed in the table above. Tensilica Fusion is the only DSP core offering the complete feature set to design IoT end products for activity monitoring, healthcare or the smart home. An SoC targeting IoT applications has to be low cost, and integrating a computing core is the right path compared to a design based on a standard GPP or RISC chip. Using low-cost sensors may demand high computing power; the Tensilica Fusion DSP offers multiple configurable MACs as well as an FPU. IoT requires supporting one of the wireless communication standards among BLE, Zigbee, WiFi, Smartgrid, LTE or GNSS, which the Fusion DSP guarantees for the MAC, together with Cadence PHY IP. Moreover, the AES-128 encryption integrated with the DSP core secures wireless transactions. Finally, and probably most important for a mobile IoT application, the configurable DSP offers unmatched power efficiency, especially compared with any of the RISC CPU competitors!

Did you know that Tensilica Xtensa cores have shipped in the billions, 2 billion in 2014 alone? That Cadence Tensilica is #2 overall in royalty-bearing licensable processor shipments? I am not talking only of DSP IP, where Cadence has been #1 in licensing revenue since 2012; Cadence is simply #2 in licensable processor cores, DSP or CPU. And Cadence proudly claims that the number of mask-set re-spins caused by bugs in Xtensa processors is, in its entire history since 1998, zero.

No doubt this new Xtensa Fusion DSP is specifically designed for the vast IoT market and its multitude of applications. Here is what Martin Lund, senior vice president of the IP Group at Cadence, stated: “As we were designing Tensilica Fusion DSP, we saw that our customer requirements varied greatly. We took those requirements into consideration while designing this DSP to extend beyond anything in the market today. The Xtensa Customizable Processor allows customers to further optimize the processor to create distinctive, highly efficient processors and DSPs that will help set them ahead of their competition.”

By Eric Esteve from IPNEST