
A Brief History of Atmel

by Paul McLellan on 02-11-2014 at 4:12 pm

Atmel was founded in 1984. The name stands for “advanced technology for memory and logic,” although initially the focus was on memory. Founder George Perlegos had worked in the memory group at Intel back when Intel was a memory company and not a microprocessor company, although that didn’t stop Intel from suing Atmel a couple of years later for patent infringement.

Initially Atmel was fabless, but the company raised venture capital to purchase Honeywell’s old fab in Colorado Springs. It expanded the fab after acquiring FPGA manufacturer Concurrent Logic in 1991, the year Atmel went public.

Atmel then developed its first microcontroller, based on the Intel 8051 but with much higher performance. The 8051 is still on the product list today, both as a standalone microcontroller and as a core that can be embedded in more customized devices. In the mid-1990s Atmel licensed the ARM architecture, and today it has a broad portfolio of ARM-based devices.


They also developed the AVR line of microcontrollers, the first to use flash memory. The name comes from the initials of the Norwegian developers of the original core. Today the open-source Arduino project is built around AVR, targeted not at computer professionals but at hobbyists, designers and makers.

Over the years Atmel made acquisitions that ended up giving it additional fabs: Matra Harris Semiconductor (MHS) with a fab in Nantes, plus a fab in Germany and one in the UK acquired from Siemens (this was before Infineon was spun out separately). As for other semiconductor companies, having a number of small fabs went from being an advantage to a liability as process development advanced and the size of an economical fab increased. Steven Laub became CEO and moved Atmel to an increasingly fab-lite model, divesting most of the acquired fabs, although the company still has the Colorado Springs fab.


They also made acquisitions to move into the touchscreen market, producing both touchscreen controllers and flexible touchscreens called Xsense that look much like old overhead-projector transparencies, a market with great promise going forward.

Revenue for the fourth quarter of 2013 was $353.2 million, a 1% decrease compared to $356.3 million for the third quarter of 2013, and 2% higher compared to $345.1 million for the fourth quarter of 2012. For the full year 2013, revenue of $1.39 billion decreased 3% compared to $1.43 billion for 2012.

More than half of Atmel’s revenue comes from microcontrollers, with the rest spread across various lower-volume products in markets such as automotive, memory (the original product area), LED drivers, wireless for low-cost consumer products, and security/encryption. Atmel has very strong positions in certain fast-growing end markets; for example, over 90% of the controllers in the 3D-printing market are Atmel’s. The breadth of the product line, especially the range of lower-end microcontrollers, low-cost wireless and security, positions the company well for the Internet of Things (IoT).


More articles by Paul McLellan…


ASTC and the new midrange ARM Mali-T720 GPU

by Don Dingee on 02-11-2014 at 3:00 pm

When we last visited texture compression technology for OpenGL ES on mobile GPUs, we mentioned Squish image quality results in passing, but weren’t able to explore a key technology at the top of the results. With today’s introduction of the ARM Mali-T720 GPU IP, let’s look at the texture compression technology inside: Adaptive Scalable Texture Compression, or ASTC.

In contrast to some other proprietary texture compression implementations, ASTC is backed by the Khronos Group – and royalty free. This multi-vendor backing by the likes of AMD, ARM, NVIDIA, Qualcomm, and even Imagination Technologies bodes well, with adoption gaining momentum. For example, CES 2014 saw NVIDIA’s Tegra K1 and Imagination’s PowerVR Series6XT GPU core both add ASTC support to the mix, and the latest homegrown Adreno 420 GPU core from Qualcomm showcased in the Snapdragon 805 also features ASTC.

Until these recent developments, many have equated ASTC technology with ARM, probably due to their leadership in the Khronos specification activity, and an IP onslaught designed to challenge Imagination head on. ARM was able to stake out a claim to ASTC support beginning with the Mali-T624 and subsequent versions of their GPU IP family.

First released in 2012, ASTC had the advantage of learning from actual use of other popular texture compression formats and targeting the opportunity for improvement. The problem, as the original Nystad presentation on ASTC from HPG 2012 opens with, is the wide field of use cases with varying requirements for color components, dynamic range, 2D versus 3D, and quality. Most of the existing formats do well in a particular use case, but not in others, and mobile designers have generally opted for more decode efficiency at the expense of some image quality.

ASTC is a lossy, block-based scheme and, like other schemes, is designed to decode textures in constant time with one memory access, but with a big difference: a variable texel footprint packed into a fixed 128-bit block, resulting in a finely scalable bit rate. ASTC also supports both 2D and 3D textures, LDR and HDR pixel formats, and an orthogonal choice of base format (L, LA, RGB, RGBA) so it can adapt to almost any situation, leaving the bit-rate encoding unaffected.

All this control provides a way to scale image quality more precisely, without changing compression schemes for different situations, but at first glance one might think it would blow up storage space requirements with all those settings running around. The math magic behind ASTC is fairly complex, depending on a strategy called bounded integer sequence encoding (BISE).

We know power-of-two encoding can be dreadfully inefficient, but BISE takes a seemingly impossible approach: effectively, fractional bits per pixel. Long story short (follow the above BISE link for a more complete ARM explanation), BISE uses base-2, base-3, and base-5 encoding to efficiently pack the needed range of values into a fixed 128-bit block. That magic explains the above bit-rate table and its rather non-intuitive texel sizes.
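The bit-rate arithmetic behind that table is simple enough to sketch. A minimal example, assuming a representative subset of the 2D block footprints from the ASTC specification (treat the exact list as illustrative):

```python
# ASTC stores every compressed block in exactly 128 bits, so the bit rate
# per texel is simply 128 / (block_width * block_height). Varying the block
# footprint is what yields the finely scalable, fractional bits-per-texel rates.

BLOCK_BITS = 128

# A representative subset of 2D block footprints (width, height)
footprints = [(4, 4), (5, 5), (6, 6), (8, 5), (8, 8), (10, 10), (12, 12)]

for w, h in footprints:
    bpt = BLOCK_BITS / (w * h)
    print(f"{w:2d}x{h:<2d} block -> {bpt:.2f} bits/texel")
```

Note how an 8x8 footprint lands on a tidy 2 bits per texel while 6x6 gives the decidedly non-intuitive 3.56.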

There are a few further insights to ASTC that make it pretty amazing:

  • As with other texture compression schemes, the heavy lifting is done during encoding, a software tool running on a host – all the mobile GPU has to worry about is fast texture decompression.

  • With all these options in play, the only thing fixed in the implementation is the 128-bit block footprint – every setting can vary block-to-block, meaning image encoding can change across an image dynamically based on the needs. (In theory, at least. I’m not sure ARM’s encoder tool actually does this, and in most comparisons, a group of settings applies to the entire image.)

  • The end result of more efficient and finer grained encoding is better signal-to-noise ratios – those better Squish results we mentioned earlier, with ARM indicating differences of 0.25dB can be detected by some human eyes.

  • ARM’s ASTC implementation is synthesizable RTL (plus ARM POP IP technology for hardening), allowing it to find homes with customers choosing Mali GPU IP or customers implementing their own GPUs like the ones listed above – and the absence of per-unit royalties is attractive for many potential users.

Now, ARM breaks back into the midrange with the cost-optimized Mali-T720 GPU core targeting Android devices, fully supporting ASTC along with other enhancements including a scalable 1-to-8-core engine and partial rendering. As a companion to the optimized ARM Cortex-A17 core, the Mali-T720 continues to shrink die area, targeting a 28nm process, while improving energy efficiency and graphics performance at 695 MHz.

ASTC may be in the early stages of taking over for mobile GPU texture compression, especially on Android implementations where platform variety is larger. The adoption by Qualcomm is especially significant, and I’ll be excited to see Jon Peddie’s 2014 mobile GPU data soon to see what kind of impact the availability of more ARM Mali GPU IP is having. Stay tuned.

More Articles by Don Dingee…..




ARM Announces A17

by Paul McLellan on 02-11-2014 at 12:36 pm

It is microprocessors all the time right now, with the Linley conference last week. Today ARM announced the next-generation Cortex-A17 core. It is a development built on the Cortex-A12 core, itself built on the A7 (which is the current volume leader). ARM says it is 60% faster than the A7 core, although I’m sure a lot of that gain comes from a process-node change and not just architecture, but I could be wrong (the A7 is single-issue, I think, and the A12 is dual-issue). The timing isn’t coincidental: Mobile World Congress is coming up in Barcelona in a couple of weeks.

The mobile market for processors is fragmenting since different price, performance and area targets are needed for different markets:

  • the very high end: companies like Apple and Qualcomm license the ARM architecture and build their own processors. For example, Apple had the first 64-bit ARM processor in production (before any of the licensees of ARM’s own implementation)
  • the high end: Cortex-A57 and Cortex-A53 big.LITTLE implementations with a good mix of high performance and low power
  • the mid-range, which seems to be where the A17 is positioned
  • the low end: I’m not sure if the A7 is obsolete or still a good solution, especially in something like TSMC’s 28LP process. It can also do big.LITTLE with the A15.


The lead customer for the A17 is MediaTek. They announced an 8-core big.LITTLE A17/A7 application processor with an on-chip 150Mbps LTE modem, expected to be in volume production in the second half of the year. I believe this is also the first big.LITTLE chip that allows all 8 cores to be used at the same time.

There is a process war (at least of words) going on between TSMC and Intel, along with a processor war between ARM and Intel.

Intel very publicly had graphs showing that Intel’s wafer price continues to come down linearly while TSMC’s is taking a pause. TSMC, most unusually for them and for Taiwanese companies in general, addressed and denied this in their latest conference call. Cynics have suggested that the reason Intel shows a linear decline (there are no numbers on the graphs) is that their wafer cost is so high that merely getting it a bit closer to TSMC’s makes the graphs look good. The numbers I have heard are that Intel’s wafer cost is as much as 30% higher than TSMC’s.

Intel’s high-end server cores have unparalleled single-thread performance, perfect for some sorts of datacenters. But as I said last week, for other datacenters different aspects matter more: throughput, power, cost. For mobile, the high end is mature and in a low-growth phase; Apple, Qualcomm and Samsung already own the bulk of the market. The low and mid end is where all the smartphone growth will come from, and it is all about cost. The goal is an unsubsidized smartphone at $200 or less next year. If Intel’s wafer cost really is 30% higher than TSMC’s then it won’t be cost-competitive.

Another Intel weakness is the lack of an on-chip LTE modem. Its LTE modem is a mixture of technology acquired from Infineon and Fujitsu, and it is manufactured by TSMC, while Intel’s mobile application processors are all on Intel’s own processes. Qualcomm, MediaTek, Broadcom and perhaps others all have integrated modems already. Intel’s plan is to have an integrated modem for the low end in the second half of this year, but still manufactured by (according to their investor-day presentations) an “external foundry,” which I assume means TSMC, although they’d bite their tongue rather than say so explicitly. At the high end they will have a two-chip solution with their modem and a 14nm Atom called Moorefield. But I’m not sure who would buy it, maybe some tablet vendors (most tablets are sold without modems anyway, at least today).

The jury remains out on whether there is an enterprise “Surface” tablet market where Intel-architecture compatibility is important. I have always felt there wasn’t, since I don’t feel the need to run big spreadsheets or create new PowerPoint decks on my iPad. If there is, then Intel could be successful. In any case, it depends on what enterprise IT managers think, not what I do.

ARM didn’t just announce the A17; they also announced two new GPU cores targeted at the same mid-range markets: the Mali-T720 GPU and the Mali-V500 video processor, which are scalable, energy-efficient, compact multimedia solutions and are complemented by the existing Mali-DP500 display processor.

ARM press release is here.


More articles by Paul McLellan…


If you still think that FDSOI is for low performance IC only…

by Eric Esteve on 02-11-2014 at 11:02 am

…then you should read about this benchmark result showing how digital power varies with process corners for a high-speed data networking chip, not exactly the type of IC targeting mid-performance mobile applications. Before discussing the benchmark results, we need some background on this kind of ASIC. Such a chip is really at the edge in terms of performance (running frequency), which is why just-processed wafers are tested and binning is exercised. Intel uses binning to categorize the maximum frequency a CPU can reach: the higher the frequency, the higher the price the chip is sold for. In this case, the goal is to extract as many ICs as possible from each wafer, in order to keep the chip price (reasonably) low. When binning ranks chips in the “slow” category, these chips are not trashed but will be corrected in the field by using adaptive supply voltage (ASV). At this point, you may suspect that applying a higher VDD to such a chip will have a negative impact on power consumption (according to the VDD² law). Binning thus allows you to correct for process variations and keep a chip running at the desired high frequency by applying a higher VDD, at the cost of higher dynamic power consumption.
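The VDD² law is easy to illustrate. A minimal sketch, with hypothetical corner voltages chosen only to show the shape of the penalty:

```python
# Dynamic power scales as P = alpha * C * f * Vdd^2, so raising Vdd with
# adaptive supply voltage (ASV) to rescue a slow-corner die carries a
# quadratic power penalty. The corner voltages below are illustrative only.

def dynamic_power(vdd, alpha_c_f=1.0):
    """Relative dynamic power, with switching activity, capacitance and
    frequency folded into one normalization constant."""
    return alpha_c_f * vdd ** 2

nominal_vdd = 0.90   # typical-corner supply (hypothetical)
slow_vdd = 1.00      # slow corner bumped up by ASV (hypothetical)

penalty = dynamic_power(slow_vdd) / dynamic_power(nominal_vdd)
print(f"ASV power penalty on the slow corner: {penalty:.2f}x")
```

Even a modest 0.1V bump costs over 20% in dynamic power, which is why keeping Vdd flat (as FBB does on FDSOI) matters so much.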

Those who still think that power consumption is only an issue for mobile applications should imagine dozens (if not hundreds) of high-performance chips closely packed in racks. The cost of ownership of such a system rises with chip power consumption: you need to guarantee excellent power dissipation at the chip level (a more expensive package, thermal drain, etc.), and at the system level you may have to deploy an expensive cooling strategy (from Wikipedia we learn that for every 100 watts dissipated in a server, you have to spend another 50 watts to cool it!). At the end of the day, somebody pays the electricity bill. Add to these pure dollar expenses the degradation of the company’s image in the eyes of eco-concerned customers, and you finally realize that lowering power, or increasing power efficiency, should be the semiconductor industry’s next concern, and not only for mobile applications…

The main conditions for this benchmark are listed below; for the complete picture, I suggest you read the full article from Ian Dedic, Chief Engineer at Fujitsu Semiconductor Europe, posted in the LinkedIn “FD-SOI design community” group here:

  • the benchmark uses extracted parasitics with a typical clock rate and an optimized library mix (different Vth and gate lengths)
  • fanout and tracking load taken from a high-speed data networking chip
  • high gate activity and 100% duty cycle
  • maximum Tj, because this is the maximum-power condition needed for system design
  • supply voltage adjusted for each case (ASV) to normalize the critical-path delay (clock speed) to the same value as the slow-corner 28nm case
  • FDSOI forward body biasing (FBB), used to decrease Vth, adjusted to get minimum power across process corners

The first table, showing how digital voltage (Vdd) must vary with process conditions to keep the critical-path delay constant, already suggests that enabling FBB keeps Vdd almost flat. Such an effect can only occur on FDSOI technology, whether with a planar transistor architecture or with FinFETs. If we then look at the results in terms of maximum power consumption (dynamic + leakage), the impact of forward body bias is very impressive: a 31% difference for the same technology node (14nm FD-SOI) under slow conditions… and more than twice the power consumption for the device in 28HPM with ASV.

Unfortunately, there is no benchmark with 14nm bulk FinFET, but the author assumes that 14FDSOI (planar transistor architecture) with ASV only would be very similar to 14FDSOI with FinFET. What we can say for sure is that you can’t exercise the FBB effect on 14nm bulk FinFET technology. Thus the great improvement in maximum power consumption on FDSOI is clearly due to the forward-body-bias effect, and such an improvement is a great benefit for high-performance chips. Eliminating slow-process-corner devices would be possible, but extremely costly, as a chip maker pays for the complete wafer. Using adaptive supply voltage is a way to keep performance at the same level for every chip, even one from a slow process corner, but at the expense of higher maximum power consumption. Finally, FDSOI is the only way to keep device cost at a minimum (thanks to ASV) and minimize power consumption (thanks to FBB).

As a reminder, or if you did not read one of the previous blogs about FDSOI, you can visualize the forward-body-bias effect in the above picture. Applying a forward bias to the “body,” or the substrate of the wafer, is only possible on silicon-on-insulator (SOI) technology, as the buried oxide plays a role similar to a gate in a standard architecture, except that it only changes the threshold voltage (Vth). If the threshold becomes lower than nominal, it becomes possible to lower Vdd (or not increase it) and get the same performance as at a higher Vdd. Because dynamic power consumption is a function of the square of Vdd, the FBB impact on power consumption is terrific…

From Eric Esteve from IPNEST


More Articles by Eric Esteve…..


Data Outgrowing Datacenter Performance

by Paul McLellan on 02-10-2014 at 1:13 pm

Last week I attended the Linley Datacenter Conference. This is not the conference on mobile, which is not until April. However, a lot of the growth in the datacenter is driven by mobile, with the increasing dominance of the model where data is accessed by smartphones but a lot of the backend computing and data storage is in the cloud.

From 2012 to 2017 smartphones have grown 20% and tablets 46%. There are now more than 10 billion internet-connected devices. It takes about 400 smartphones to drive one server in a datacenter. During that same 5 year period, traffic per mobile device has increased between 4 and 8 times, with video being a big driver (growing fast and with the highest bandwidth requirements). This has driven a cloud computing growth rate of 23.5% from 2013 to 2017. As a specific example, Amazon Web Services (AWS) adds enough servers daily to support a $7B company.
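Those figures lend themselves to a quick back-of-the-envelope sketch. The installed-base number below is hypothetical; the 400-phones-per-server ratio and the 23.5% growth rate are the ones quoted above (the growth rate is assumed to be annual):

```python
# Rough sizing sketch using the figures quoted above: roughly 400 smartphones
# per datacenter server, and cloud computing growing at 23.5% per year over
# 2013-2017. The smartphone count below is purely illustrative.

smartphones = 1_000_000_000        # hypothetical installed base
phones_per_server = 400
servers_needed = smartphones // phones_per_server
print(f"servers to support {smartphones:,} phones: {servers_needed:,}")

# Compound the 23.5% annual cloud growth rate over the four years 2013-2017
cagr = 0.235
growth = (1 + cagr) ** 4
print(f"cloud workload multiple, 2013 -> 2017: {growth:.2f}x")
```

A billion phones already implies millions of servers, and the workload behind them more than doubles over the forecast window, which is the mismatch the rest of the conference kept returning to.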


One theme running through the first part of the conference is that the growth in the amount of data is overwhelming the compute power. The standard way to build a datacenter is with a two-level network: a top-of-stack (ToS) router at the top of each stack of servers, and then another level to link the stacks and connect to the outside world. The ToS may not use Ethernet; RapidIO and PCIe are options for intra-stack communication. The problem is that data rates are now so high that running the network stack takes more and more of the compute power on each server, leaving less and less available to do actual work.

More of the cost and more of the power dissipation is in the networking, and as a result there are solutions from companies like Cavium, Netronome and Tilera that can be used to offload the servers and free up more compute power for actual processing. Specialized memory architectures such as the Micron Hybrid Memory Cube are also targeted at improving power and performance.

Another area that still seems to be coming soon rather than already here is scaling out rather than up, using (mostly) ARM-based servers that have a much lower cost of ownership. Intel/AMD cores have very high single-thread performance, but this comes at a significant cost in power dissipation. Highly integrated ARM-based chips are much lower power but have less single-thread performance. However, aggregate performance can be much higher for the same budget in dollars, watts or size. Solutions are shipping, but the big guys like Facebook don’t yet seem to be building ARM-based datacenters. Still, there are products shipping:

  • Broadcom/Netlogic with up to 8 cores
  • Cavium Octeon II family with up to 32 cores
  • Freescale QorIQ P4 with up to 8 cores
  • LSI Axxia ACP3448 with 4 cores
  • Tilera Tile-GX with up to 72 cores


More articles by Paul McLellan…


Update on AMS Verification at DVcon

by Daniel Payne on 02-09-2014 at 7:35 pm

Digital verification of SoCs is a well-understood topic and there’s a complete methodology to support it, along with many EDA vendor tools. On the AMS (Analog Mixed-Signal) side of the design world life is not so easy, mostly because there are no clear standards to follow.

To gain some clarity into AMS verification I spoke today with Hélène Thibiéroz. She has more than 17 years of combined experience in engineering, product development and marketing for semiconductor and EDA companies. After completing doctoral studies in EE, she worked for 5 years at Motorola as a device and SPICE characterization engineer within their advanced process and research center. She then moved to the EDA industry and has been focusing for more than 12 years on the AMS domain, from environment to analog, RF and mixed-signal simulation. She currently works as a senior marketing manager for Synopsys AMS products.

Q: Hélène, you have organized a panel discussion at DVcon to discuss what’s next for AMS verification. What was the motivation behind it?

In just a decade, the landscape of mixed-signal design has drastically changed: we went from simple co-simulations between a digital and an analog solver to a more complex mixed-signal verification environment. For example, more and more SoC designs rely on successfully assembling IP rather than on a full-chip design approach. As such, there is a clear need today for advanced debugging features and technologies to fully test those new design-assembly approaches.

While the future and unification of mixed-signal verification is unclear due to the large diversity of use models and needs in the industry, new technologies and trends are emerging. I therefore wanted to invite several experts to discuss those emerging techniques that would enable the digitally-centric mixed-signal community to reach their next level of verification.

Q: Can you tell us more about the panel? What will the format and topics be?

The four panelists are from different industry segments with diverse requirements and opinions, but each with a deep AMS background. Each panelist will first present their flow and design challenges. We will then have specific topics opened for discussion, followed by an audience Q&A. I selected some topics ahead of time that in my opinion will generate an interesting discussion. Some of those topics will cover:

• New behavioral modeling needs and standards
• Digital verification methodologies applied to mixed signal
• Debugging/regression environment

Thanks Hélène for organizing this event. You’ll have to register online here to ensure a spot at this luncheon panel on Monday, March 3, from 12:30pm to 1:30pm in the Pine/Cedar Ballroom.



TI – The Initial Innovator of Semiconductor ICs

by Pawan Fangaria on 02-09-2014 at 9:00 am


[TI’s China Foundry acquired from SMIC]

During my engineering studies, in electronic design courses and mini-projects, the ICs I came across were the SN 7400 series from Texas Instruments, covering a large range of devices from basic gates and flip-flops to counters, registers, memories, ALUs, system controllers, and so on. There is also the ‘54’ series, with military specifications and a wider temperature range. Today we have a much wider variety of ICs, ASICs, SoCs and the like, with various functions on chip. Initially, 7400-series ICs used bipolar TTL (Transistor-Transistor Logic) technology, which slowly transitioned through various refinements; now we see CMOS, BiCMOS, HCMOS and the like. While the underlying technologies changed, the part number of a particular type of IC remained almost the same, standardizing the IC number to relate to the particular logic inside. While TTL provided higher speed, it also dissipated more power; CMOS dissipated lower power at the cost of slower speed. The technology inside ICs has come a long way, with many variants (and new entrants over the years offering newer, differentiated ICs) trading off power against performance. And obviously, area minimization with newer technologies and lower nodes played an important role in what we have today. In the near future, it would be my pleasure to talk more about TI’s technology and product portfolio with focused attention on particular segments.


At this moment, it’s too inviting for me not to briefly ponder the history of this holistic, innovative, most ethical, one of the oldest, built-to-last and ever-shining semiconductor design and manufacturing company (an IDM). Jack S. Kilby, the great inventive and creative mind at TI and Nobel Laureate, has the honour of inventing the first commercial semiconductor IC in 1958 (the first patent on miniaturized semiconductor integrated circuits was filed in May 1959), probably the most versatile invention of the 20th century, one that transformed the world of electronics. Holding more than 60 patents, Kilby was presented with the prestigious Nobel Prize in Physics in December 2000 for his lifelong exemplary work. In 1967, TI invented the first electronic handheld calculator; again a patent was filed in the name of Kilby along with two of his colleagues. With more than 41,000 patents in its portfolio, TI is probably among the first movers to start earning revenue from its patents. More on that, and on other leading technologists on the TI board, later.

So, how did TI start? That’s again an interesting and exciting transformation story. Although TI was founded in 1951, its origins date to well before the Second World War. Eugene McDermott and John Clarence Karcher started a company in 1930 to provide seismography services to oil-exploration companies; it was later incorporated as Geophysical Service Inc. (GSI) in 1938. In 1941, Eugene McDermott, along with Cecil Green, Eric Jonsson and H. B. Peacock, bought GSI. During the later years of the war, the U.S. Navy, under the leadership of Lieutenant Patrick Haggerty, entrusted GSI with a contract to develop devices to detect submarines from low-flying aircraft above the sea. The company, with its expertise in locating oil, was very successful in accomplishing this task, which involved detecting magnetic disturbances caused by a submarine’s movement. With such assignments catering to defense systems, GSI transformed itself into an electronics company. As the electronics business grew, GSI changed its name in 1951 to Texas Instruments Inc., a name much closer to the electronics business.

Patrick Haggerty was a great visionary. After leading the contract with GSI as a Navy lieutenant, he joined GSI in 1945 as general manager with responsibility for diversifying the electronics business. His keen mind had observed that the last 20 years of electronics development had been driven by components and their connections through circuitry, and that the future belonged to how dense those connections could become through the evolution of new materials and technologies. In 1951, when TI was formed, Haggerty was executive vice president, and he initiated the purchase of a license from Western Electric to manufacture transistors, with an eye to entering the consumer electronics business, which had great hidden growth potential. TI then manufactured the first transistor radio, with power consumption and a size small enough to fit into a large pocket. In 1958, Haggerty became president of TI, and that is when Jack Kilby, then a researcher at TI, invented the IC. This became a great union of a visionary and an inventor.

It was an opportune time for TI: in 1958 it was the prime supplier of electronics to the U.S. military, and the U.S. Air Force needed a major renovation of its ballistic-missile guidance system. TI received major funding from the U.S. Air Force to develop the ICs used in that guidance system. The first IC-based computer for the U.S. Air Force followed in 1961. Stay tuned for more…

Today, TI (headquartered in Dallas, Texas, U.S.A.) has a presence in more than 35 countries, with more than 100,000 customers, the largest sales and support staff worldwide, and more than 100,000 analog ICs, embedded processors, tools and software products.

More Articles by Pawan Fangaria…..



Has LinkedIn Jumped the Shark?

by Daniel Nenni on 02-08-2014 at 11:00 am

LinkedIn is without a doubt the number-one social network for semiconductor professionals. Based on my experience, the big LinkedIn boom came with the massive unemployment of the Great Recession of 2009. By my estimate, unemployment was 12%+ at its high point in Silicon Valley, and resumes clogged the internet, with LinkedIn crowned the best job-search tool.

This was the start of the blogging boom within the fabless semiconductor industry and it’s also when I started blogging. At that time there were more than 200 bloggers covering our industry but as employment slowly returned bloggers began to disappear, resumes were harder to find, and now the press is suggesting LinkedIn has jumped the shark.

“Jumping the shark” is a TV term used when a particular show starts to decline in popularity. It was coined after the series Happy Days aired an episode in which Fonzie jumps a shark on water skis wearing a leather jacket, and yes, I watched Happy Days growing up.

According to the charts, LinkedIn revenue beat expectations and new subscribers beat expectations, but the all-important unique visitors and page views declined. LinkedIn stock took a huge hit as a result. If you chart the decreasing unemployment rate you will absolutely see a correlation, as people are spending their time working rather than looking for work on LinkedIn.

That brings us to a new challenge for the fabless semiconductor ecosystem, finding qualified people for our expanding industry. The solution of course is to change the rules of engagement as SemiWiki did with blogging. Why blogging you ask?

A friend of mine and I debated over dinner on Fisherman’s Wharf whether the famed Escape from Alcatraz convicts made the swim to San Francisco. I argued they did make it, but my argument was not convincing since I had not attempted the swim myself. Not long after that debate I jumped off a boat on the east side of Alcatraz at the break of dawn and swam for my life towards the Bay Bridge. Forty-five minutes later I landed at the marina just inside the Golden Gate Bridge. The outgoing tide was very strong, the water was freezing cold, and even though I was wearing a wetsuit I was hypothermic. In fact, my legs froze up, so the last few yards were all arms.

Now when I argue that the convicts did NOT make the swim to San Francisco I can do so convincingly by sharing my experience, my observation, my opinion, and that is what blogging is all about. The bloggers on SemiWiki are semiconductor professionals who enjoy writing, and that is why our unique visitor and page view numbers continue to grow.

As you may have read, we published our first book “Fabless: The Transformation of the Semiconductor Industry” and there will be more to come. This year we created a jobs forum and will blog about job openings to help our subscribing companies grow. SemiWiki bloggers will also go deep this year on technologies from the top fabless semiconductor companies. SemiWiki will not be jumping the shark anytime soon, believe it.




Who Won the DesignVision Awards at DesignCon this year?

Who Won the DesignVision Awards at DesignCon this year?
by Daniel Payne on 02-07-2014 at 7:37 pm

The Seattle Seahawks had an awesome victory over the Denver Broncos in the Super Bowl, so folks living here in the Pacific Northwest are feeling proud and optimistic. The recent DesignCon conference and exhibit ended 10 days ago, and victors were also announced for the annual DesignVision Awards, which are judged on three criteria:




What does a 52% increase in DSP IP core licensing mean?

What does a 52% increase in DSP IP core licensing mean?
by Eric Esteve on 02-07-2014 at 11:18 am

The future market performance of an IP vendor licensing IP under an upfront-fee-plus-royalties model can be evaluated easily and safely by looking at the first part of the revenue: the upfront fee. Even if the royalty portion is declining, a 52% increase in upfront licensing fees (Q4 2013 over Q4 2012) is a promise that future revenues will also climb. It may take 12 to 24 months before an IC including the DSP IP core goes into full production, and then at least another quarter for the production figures to be consolidated and the royalties to be paid, but if the production volumes are high, the royalties will be high too. You may object that some IC projects will not be successful, and that is true, but when an IP vendor reaches licensing revenue in the $7.3M range you can rely on a statistical effect. We can extrapolate this level of licensing revenue to roughly 15 (+/- 5!) licenses. Even if a couple of design starts fail to reach high-volume production, most of them will generate royalties. On the other side of the Gaussian curve, the IP vendor may be surprised by higher-than-expected production volumes. In short, such a high level of licensing revenue from design-ins in Q4 2013 will certainly generate a strong royalty flow in 2015, 2016 and beyond.
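The licensing-to-royalty mechanics described above can be sketched as a simple expected-value calculation. This is purely illustrative: the license count, success rate, unit volumes and per-unit royalty below are assumptions for the sake of the example, not CEVA figures.

```python
# Toy model of the upfront-fee-plus-royalties business described above:
# N licenses are signed, a fraction reach volume production after the
# 12-24 month ramp, and each surviving design then pays a per-unit royalty.
# All parameter values are hypothetical.

def expected_royalties(n_licenses, success_rate, avg_units_per_design,
                       royalty_per_unit):
    """Expected annual royalty revenue once designs reach volume production."""
    successful_designs = n_licenses * success_rate
    return successful_designs * avg_units_per_design * royalty_per_unit

# Assumed parameters: 15 licenses, 80% reach volume production,
# 10M units/year per successful design, $0.05 royalty per unit.
royalties = expected_royalties(15, 0.8, 10_000_000, 0.05)
print(f"${royalties / 1e6:.1f}M per year")  # 15 * 0.8 * 10M * 0.05 = $6.0M
```

The point of the statistical argument is visible in the model: a couple of failed design starts only trims the expected value slightly, while one design that over-ships its assumed volume can more than make up for it.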

If we take a look at CEVA’s licensee list (above picture) we can see that most of them are large semiconductor companies, the type of customer that stays loyal as long as the product is competitive, and also because of the installed software base, reusable project after project. These are also the type of customers able to invest in SoC development, the most flexible and efficient approach to implementing digital signal processing algorithms, as we showed in this previous article. While DSP as an ASSP product is in bad shape, DSP as an IP core is booming!

To be more specific, there are precise reasons why the future is bright for CEVA:

  • LTE: CEVA is now benefiting from increased momentum behind Long Term Evolution (LTE). You may want to read the CEVA white paper, created jointly with ARM, on their LTE solution, which uses an ARM Cortex-R7 to handle the higher levels of the stack (2 and 3) with CEVA DSPs handling level 1, where all the heavy lifting is done.
  • We have seen that wireless phone market growth is now coming from the developing world, which demands low-end smartphones and low-cost basic phones. Thanks to the flexibility of DSP IP cores, CEVA is also entrenched in this very promising market. Just think of the volume production levels, and of CEVA’s business model based on upfront fees and royalties.

CEVA is also expanding beyond wireless phone market and baseband:

  • CEVA-XC product line has been tailored for the wireless infrastructure market
  • We have posted numerous blogs about video/imaging solutions from CEVA, built on the MM3100 product line. This blog explains how to use the CEVA solution for super resolution; this one covers computer vision and imaging.

  • CEVA is also diversifying into the very promising voice/audio markets for the Internet of Things and wearable systems. Will these markets develop as expected and reach smartphone-like production levels? Nobody knows for sure today, but it is certainly better to offer a specifically tailored (low gate count, low power), highly customizable solution: CEVA TeakLite4.

A company like CEVA, with more than 200 licensees and 300 licensing agreements signed to date, and a comprehensive customer base including most of the world’s leading semiconductor and consumer electronics companies, certainly has a bright future. Moreover, the systems developed today tend to integrate large SoCs including DSP IP cores rather than DSP ASSPs, and this trend is reaching all market segments after starting mostly in wireless phones. CEVA was present in the early days of the wireless segment, and there is no doubt that the company will continue to expand!
If you want the full picture of CEVA’s portfolio, just take a look at CEVA-powered products.

Eric Esteve from IPNEST

