Has LinkedIn Jumped the Shark?
by Daniel Nenni on 02-08-2014 at 11:00 am

LinkedIn is without a doubt the number one social network for semiconductor professionals. Based on my experience, the big LinkedIn boom came with the massive unemployment during the Great Recession of 2009. By my estimate, unemployment topped 12% at its high point in Silicon Valley; resumes clogged the internet, and LinkedIn was crowned the best job search tool.

This was the start of the blogging boom within the fabless semiconductor industry, and it's also when I started blogging. At that time there were more than 200 bloggers covering our industry, but as employment slowly returned bloggers began to disappear, resumes were harder to find, and now the press is suggesting LinkedIn has jumped the shark.

Jumping the shark is a TV term used when a particular show starts to decline in popularity. It was coined after the series Happy Days aired an episode in which Fonzie, wearing a leather jacket, jumped a shark on water skis. And yes, I watched Happy Days growing up.

According to the charts, LinkedIn revenue beat expectations and new subscribers beat expectations, but the all-important unique visitors and page views declined. LinkedIn stock took a huge hit as a result. If you chart the falling unemployment rate you will absolutely see a correlation, as people are spending their time working rather than looking for work on LinkedIn.

That brings us to a new challenge for the fabless semiconductor ecosystem: finding qualified people for our expanding industry. The solution of course is to change the rules of engagement, as SemiWiki did with blogging. Why blogging, you ask?

A friend of mine and I debated over dinner on Fisherman's Wharf whether the famed Escape from Alcatraz convicts made the swim to San Francisco. I argued they did make it, but my argument was not convincing since I had not attempted the swim myself. Not long after that debate I jumped off a boat on the east side of Alcatraz at the break of dawn and swam for my life toward the Bay Bridge. Forty-five minutes later I landed at the marina just inside the Golden Gate Bridge. The outgoing tide was very strong, the water was freezing cold, and even though I was wearing a wetsuit I was hypothermic. In fact, my legs froze up, so the last few yards were all arms.

Now when I argue that the convicts did NOT make the swim to San Francisco I can do so convincingly by sharing my experience, my observation, my opinion, and that is what blogging is all about. The bloggers on SemiWiki are semiconductor professionals who enjoy writing, and that is why our unique visitor and page view numbers continue to grow.

As you may have read, we published our first book “Fabless: The Transformation of the Semiconductor Industry” and there will be more to come. This year we created a jobs forum and will blog about job openings to help our subscribing companies grow. SemiWiki bloggers will also go deep this year on technologies from the top fabless semiconductor companies. SemiWiki will not be jumping the shark anytime soon, believe it.



Who Won the DesignVision Awards at DesignCon this year?
by Daniel Payne on 02-07-2014 at 7:37 pm

The Seattle Seahawks had an awesome victory over the Denver Broncos in the Super Bowl, so folks living here in the Pacific Northwest are feeling proud and optimistic. The recent DesignCon conference and exhibit ended 10 days ago, and victors were also announced in the form of the annual DesignVision awards, which have three criteria:


Continue reading “Who Won the DesignVision Awards at DesignCon this year?”


What does a 52% increase in DSP IP core licensing mean?
by Eric Esteve on 02-07-2014 at 11:18 am

The future market performance of an IP vendor licensing IP on an upfront-fee-plus-royalties model can be evaluated easily and safely by looking at the first part of the revenue: the upfront fee. Even if the royalty part is declining, exhibiting a 52% increase in upfront licensing fees (Q4 2013 over Q4 2012) is a promise that future revenues will also climb. It may take 12 to 24 months before the ICs integrating the DSP IP core go into full production, and you may have to add at least a quarter for the production figures to be consolidated and the royalties to be paid, but if the production volumes are high, the royalties will be high too. You may object that some IC projects will not be successful, and that is a matter of fact, but when an IP vendor reaches licensing revenue in the $7.3 million range, you can rely on a statistical effect. We can extrapolate this level of licensing revenue to be linked with roughly 15 (+/- 5!) licenses. Even if a couple of design starts fail to reach high-volume production, most of them will generate royalties. On the other side of the Gaussian curve, the IP vendor may be surprised by higher-than-expected production volumes. That is, such a high level of licensing revenue generated by design-ins in Q4 2013 will certainly generate a strong royalty flow in 2015, 2016, and even further out…
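To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The success rate, per-design volumes, and per-unit royalty are hypothetical placeholders (not CEVA figures); the point is only to show how a batch of design-ins signed in one quarter turns into a royalty run-rate a year or two later.

```python
# Hypothetical back-of-the-envelope royalty projection from one quarter of design-ins.
# All figures below are illustrative assumptions, not CEVA data.

licenses_signed = 15                   # design-ins in the quarter (the article estimates 15 +/- 5)
success_rate = 0.8                     # assumed share of designs that reach volume production
ramp_delay_quarters = 6                # ~12-24 months to production, plus a quarter for reporting
units_per_design_per_qtr = 5_000_000   # assumed shipment volume per design once in production
royalty_per_unit = 0.05                # assumed royalty in dollars per unit

producing_designs = licenses_signed * success_rate
royalty_per_quarter = producing_designs * units_per_design_per_qtr * royalty_per_unit

print(f"Designs expected to reach volume production: {producing_designs:.0f}")
print(f"Royalty run-rate once ramped (~{ramp_delay_quarters} quarters out): "
      f"${royalty_per_quarter:,.0f} per quarter")
```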

If we take a look at CEVA's licensee list (above picture) we can see that most of them are large semiconductor companies, the type of customer that stays loyal as long as the product is competitive, and also because of the installed software base, reusable project after project. These are also the type of customers able to invest in SoC development, the most flexible and efficient approach to implementing digital signal processing algorithms, as we have shown in this previous article. If DSP as an ASSP product is in bad shape, DSP as an IP core is booming!

To be more specific, there are precise reasons why the future is bright for CEVA:

  • LTE: CEVA is now benefiting from increased momentum behind Long Term Evolution (LTE). You may want to read the CEVA white paper, created jointly with ARM, on their LTE solution, which pairs an ARM Cortex-R7 handling the higher layers of the stack (2 and 3) with CEVA DSPs handling layer 1, where all the heavy lifting is done
  • We have seen that wireless phone market growth is now coming from the developing world, which is asking for low-end smartphones and low-cost basic phones. Thanks to DSP IP core flexibility, CEVA is also entrenched in this very promising market. Just think of the production volumes involved, and of CEVA's business model, based on upfront fees and royalties.

CEVA is also expanding beyond the wireless phone market and baseband:

  • The CEVA-XC product line has been tailored for the wireless infrastructure market
  • We have posted numerous blogs about video/imaging solutions from CEVA and the MM3100 product line: this blog explains how to use the CEVA solution for super resolution, and this one covers computer vision and imaging.

  • CEVA is also diversifying into the very promising voice/audio markets for Internet of Things and wearable systems. Will these markets develop as expected and reach smartphone-like production levels? Nobody knows for sure today, but it is certainly better to propose a specifically tailored (low gate count, low power), highly customizable solution: CEVA TeakLite4.

A company like CEVA, with more than 200 licensees, 300 licensing agreements signed to date, and a comprehensive customer base including most of the world's leading semiconductor and consumer electronics companies, certainly has a bright future. Moreover, the systems developed today tend to integrate large SoCs with DSP IP cores rather than DSP ASSPs, and this trend is reaching all market segments after starting mostly in wireless phones. CEVA was present in the early days of the wireless segment; no doubt the company will continue to expand!
If you want to get the full picture of CEVA's portfolio, just take a look at CEVA-powered products.

Eric Esteve from IPNEST



Verification Execution: When will we get it right?
by Daniel Payne on 02-06-2014 at 7:50 pm

Verification technologist Hemendra Talesara attended a conference in Austin and asked me to post this article on verification execution for him as a blog. I first met Hemendra when he worked at XtremeEDA; he now works at Synapse Design Automation, a design services company.
“In theory there is no difference between theory and practice, but in practice there is”
– Harry Foster

At a recent conference in Austin, I heard one of my favorite verification philosopher-scientists, Harry Foster, give a talk on how, in spite of advances in verification technology, methodology, and processes, we are barely keeping up with Verification. Two thirds of projects are always behind schedule. This has been constant for the last five-plus years for which they have tracked the data. The good news is that this is true in spite of increasing Verification Complexity (Harry cited different ways folks have looked at verification complexity, although the gold standard for Complexity is still illusory; maybe Accellera will focus on it next) and shrinking schedules. The bad news is that in spite of all the advances and maturation of tools, technology, and processes, Verification Execution remains the number one headache for management.


Let's look at it from another perspective. At least one third of projects are either on time or ahead of schedule.

A schedule slip just indicates the gap between planning and execution. Where is the gap?

Let's acknowledge that indeed, there are a few difficulties in planning a Verification Effort. In earlier days Verification was not very distinct from design tasks and was well understood by design managers. But as design complexity has grown, Verification has become a lot more complex. Tools and technology have also become more sophisticated. The skillset required to execute is becoming more diverse. Scoping, planning, and leading a Verification Effort have also become a lot more complex. Verification is consuming more resources. Harry extrapolates that, at the rate resources are being thrown at Verification, in twenty-five years the only thing designers will be doing is Verification.

Indeed, part of the problem is that planning itself requires an understanding of what it takes to deploy these new technologies and the demands they place on skillset, resources, effort, and so on. A gauge of the complexity of the tasks at hand and a good understanding of new verification technologies are required. There are standard project management practices that can be employed for initial planning, scoping, scheduling, and so forth. The problem starts when plans reflect not reality, but market constraints or the wishful thinking of management.
“Realism is at the heart of execution, but organizations today are full of people who like to avoid or shade reality”
– Larry Bossidy and Ram Charan

Even if the plan did reflect reality, reality is constantly shifting. The impact of specification changes and the overall scope of the tasks that need to be dealt with are not always reflected in the plan. Perhaps Verification today demands more agility than a typical design task of earlier days.
Yes, there are difficulties, but even then at least one third of projects finish on time.

Finally, if we look at this slide from Harry, it appears that on average almost 36% of verification time is spent on debugging. Now, if one must shade reality to please upper management, what is most likely to be cut short in the planning process? The not-so-predictable “debugging,” which mostly falls in the final phase of verification. This is the verification execution reality. Actual mileage for each project will vary, but does your typical plan actually reflect a similar distribution?

I invite you to share your experience and insight to help us understand where we fall short on the plan. Where do we miss on execution? Or, when we ship on time, what is it that we do right? Are current project management practices adequate to deal with Verification issues?

I am looking forward to your thoughts and comments.

Hemendra Talesara


SoC Verification Closure Pushes New Paradigms
by Pawan Fangaria on 02-06-2014 at 10:00 am

In the current decade of SoCs, semiconductor design size and complexity have grown at an unprecedented scale in terms of gate density, number of IPs, memory blocks, analog and digital content, and so on, and are expected to increase many fold further. Given that level of design, the SoC verification challenge has grown even faster (more than two thirds of design time is spent in verification), yet verification resources have seen only incremental and scattered additions. Beyond logic and timing simulation, multiple other verification methods have evolved, such as formal, hardware-assisted, embedded software, and virtual platforms. Considering that the whole SoC has to be verified together, accurately and in a reasonable time, we must look at it from the perspective of complete and fast verification closure rather than only the performance of certain steps in the flow, such as simulation speed. Of course, performance is very important, but it needs to be put to productive use with intelligent assignment (or tweaking) to reach final verification closure faster.

Last week, when I reviewed Cadence's new release of the Incisive 13.2 platform, I felt it is a major leap in the right direction toward what SoC verification needs today. Adam Sherer, Product Marketing Director at Cadence, has described the top 10 ways (using the Incisive 13.2 platform) to automate verification for the highest levels of performance and productivity in his whitepaper posted on the Cadence website. Although most of these methods provide multi-fold productivity improvements in verification, I was particularly impressed with those that work at a higher level of abstraction to complete the job an order of magnitude faster, yet with the same level of accuracy as gate level. Notable among them are:

a) X-Propagation Flow – The Incisive Enterprise Verifier formal app creates assertions (to resolve issues due to the X-optimism of RTL languages such as Verilog, which can often assign a ‘0’ or ‘1’ when evaluating a state that should actually be an ‘X’ per the gate-level logic; refer here for details, and see the conceptual sketch after this list) that can be applied to X-propagation RTL simulation to monitor the generated X values. These X values, created by logic, X-propagation, or low-power simulation, can then be identified in the SimVision debugger.

b) UVM Debug Enhancements – With SystemVerilog support (in addition to ‘e’) in the Incisive Debug Analyzer, its integration with Incisive Enterprise Manager, and several enhancements in the SimVision unified graphical debugging environment, bugs can be found in minutes instead of hours, and debugging can also be done in regression mode.

c) Digital Mixed Signal – SystemVerilog (IEEE 1800-2012) real number modeling support in the ‘Digital Mixed Signal Option’ of Incisive Enterprise Simulator enables users to model wave superposition at digital simulation speed, a powerful capability that allows complete mixed-signal simulation without co-simulation with an analog solver.

d) Register Map Validation – For a design with tens to hundreds of thousands of control registers, it's impossible to manually write tests and verify them all. The Incisive platform has a Register Map Validation app that completes the job in a few hours, compared to weeks of simulation effort.
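As a conceptual aside on item (a): the snippet below is a minimal Python sketch, not RTL and not the Cadence flow, contrasting an “optimistic” if-else evaluation, which silently resolves an unknown select to one branch, with pessimistic X-propagation, which keeps the unknown visible downstream.

```python
# Conceptual illustration of RTL X-optimism vs. gate-accurate X-propagation.
# Plain Python modeling the idea only; signal values are 0, 1, or 'X' (unknown).

def mux_optimistic(sel, a, b):
    # RTL-style "if (sel)": an unknown select is silently treated as false,
    # so simulation picks b and the X disappears (X-optimism).
    return a if sel == 1 else b

def mux_x_propagating(sel, a, b):
    # Gate-accurate view: if the select is unknown and the inputs differ,
    # the output is unknown too, so the X stays visible for debug.
    if sel == 'X':
        return a if a == b else 'X'
    return a if sel == 1 else b

print(mux_optimistic('X', 1, 0))      # prints 0  -> the unknown select is masked
print(mux_x_propagating('X', 1, 0))   # prints X  -> the unknown is propagated
```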

Further, I found a nice article on this topic written by Richard Goering, which provided a link to a webinar on Register Map Validation; going through the webinar, I was delighted to learn how exhaustive, efficient, and easy Register Map Validation is. Thanks to Jose Barandiaran, Senior Member of Consulting Staff, and Pete Hardee, Director of Product Management at Cadence, for presenting this nice webinar.

Using a simulation approach for register map validation can be very inefficient in terms of coverage and test time, whereas the Register Map Validation app automatically tests the registers with all meaningful activities; above is an example of an RW access-policy test.

Users provide specific information, such as the protocol, which a ‘Merge Utility’ merges with the register description in IP-XACT format; the extended IP-XACT description is then passed to the Register Map Validation app, which automatically generates a bus functional model (BFM) and other checks that are verified through Incisive Enterprise Verifier.

Both front-door checking (writing to the register) between the IP and the BFM and back-door (read) checking are done. Usually, at the sub-system level, multiple IPs are interfaced through a bridge. Verilog, VHDL, or mixed-language designs can be used. All types of checking are covered: read-only, RW, write-one-to-set, write-one-to-clear, read-to-set, read-to-clear, register value after reset, and write-only (in the back-door case). The app also provides control over activity regions to check for any corruption. Particular registers or fields can be specifically selected for test.
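To illustrate the kind of access-policy checks being automated, here is a minimal Python sketch of a register model with a few policies (RW, read-only, write-one-to-clear). It is only a conceptual illustration of what such an app verifies, not the Cadence flow or its IP-XACT input; the register names and policies are hypothetical.

```python
# Conceptual sketch of register access-policy checking (not the Cadence app).
# Policies modeled: RW, RO (read-only), W1C (write-one-to-clear).

class Register:
    def __init__(self, name, policy, reset=0):
        self.name, self.policy, self.value = name, policy, reset

    def write(self, data):
        if self.policy == "RW":
            self.value = data
        elif self.policy == "W1C":
            self.value &= ~data          # writing a 1 clears the corresponding bits
        # "RO": writes are silently ignored

    def read(self):
        return self.value

def check_policy(reg):
    """Drive the register with a write/read pattern and flag policy violations."""
    before = reg.read()
    reg.write(0xFFFFFFFF)
    after = reg.read()
    if reg.policy == "RW" and after != 0xFFFFFFFF:
        return f"{reg.name}: RW register did not capture the write"
    if reg.policy == "RO" and after != before:
        return f"{reg.name}: read-only register changed on write"
    if reg.policy == "W1C" and after != 0:
        return f"{reg.name}: write-one-to-clear did not clear all bits"
    return f"{reg.name}: OK"

regs = [Register("CTRL", "RW"),
        Register("STATUS", "RO", reset=0x5),
        Register("IRQ", "W1C", reset=0xF)]
for r in regs:
    print(check_policy(r))
```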

An easy-to-use SimVision GUI is provided for customized debugging, identifying the particular bits in error, the phase of testing (front door or back door), the register address, the reset sequence and value, and so on. There is also an interesting demo during the webinar showing live debugging and validation of registers. It's well worth going through!

The Register Map Validation app enables designers to do exhaustive testing of register access policies. It significantly reduces verification time and enhances designers' productivity with easy debugging. Incisive Enterprise Verifier uses a mixed approach of simulation and formal: formal analysis is done statically, in a breadth-first manner covering the complete space, while assertion-driven simulation is dynamic and linear, requiring no testbench. Cadence is seeing a rising customer base using the Incisive Verification Apps.



Cadence Acquires Forte
by Paul McLellan on 02-05-2014 at 4:46 pm


Cadence today announced that it is acquiring Forte Design Systems. Forte was the earliest of the high-level synthesis (HLS) companies. There were earlier products: Synopsys had Behavioral Compiler and Cadence had a product whose name I forget (Visual Architect?), but both were too early and were canceled. Cadence internally developed its own high-level synthesis product called C-to-Silicon Compiler. The market is now ready to adopt high-level synthesis. Xilinx purchased AutoESL a couple of years ago and is aggressively pushing designers to a higher level, mainly so that software engineers who have never even heard of Verilog can use Xilinx products to create hardware accelerators for parts of their software when pure code is either too slow or too power hungry. Calypto is selling Catapult, which used to be Mentor's offering in the space. Synopsys purchased Synfora, although they don't seem to be pushing into the space aggressively.

As Cadence said: “Driven by increasing IP complexity and the need for rapid retargeting of IP to derivative architectures, the high-level synthesis market segment has grown beyond early adopters toward mainstream adoption, as design teams migrate from hand-coded RTL design to SystemC-based design and verification. The addition of Forte’s synthesis and IP products to the Cadence C-to-Silicon Compiler offering will enable Cadence to further drive a SystemC standard flow for design and multi-language verification.”

It is actually a bit of an illusion that all HLS products are pretty much the same. HLS in general has been good for DSP algorithms, where there is a fair bit of flexibility in trading off performance, power, throughput, latency, and so on. But some approaches have focused on automatically taking very high-level algorithms (such as recognizing moving objects in video), while others have focused on cleanly interfacing to complicated memory restrictions. Others focus mainly on improving productivity, requiring fewer lines of code to get to the eventual implementation. The reality is that there is often less flexibility in the implementation than meets the eye, but for sure HLS is at a higher level: fewer lines of code, written in C or C++, familiar to software engineers (and often also SystemC, which typically is more of a hardware engineer's world).

Cadence again, on how they see Forte as complementary to their existing C-to-Silicon offering: “Forte brings high quality of results (QoR) for datapath-centric designs, world-class arithmetic IP, valuable SystemC IP and IP development tools. Forte’s Cynthesizer HLS product features strong support for memory scheduling, especially for highly parallel or pipelined designs. These strengths complement the high QoR for transaction-level modeling, under-the-hood RTL synthesis and incremental ECO support featured by Cadence C-to-Silicon Compiler.”

I don't know enough about the nitty-gritty under-the-hood details of either product, although it is clear that the emphasis of their development has been different. Just how complementary they are remains to be seen (would a customer buy both of them?), and I assume that in time the two technologies will be integrated into a single HLS product; they don't seem different enough to keep as two separate products indefinitely. A lot of the IP that Forte has will presumably play in C-to-Silicon almost immediately.

The terms were not disclosed, but Cadence says they expect the deal to be slightly accretive this year and accretive going forward, which I guess means the price was not extremely high. Accretive means that after the merger accounting they will make more profit per dollar of Forte product revenue than Cadence makes currently across its entire product line. Strictly speaking, accretive means increasing EPS, but it is almost the same thing.

Cadence press release is here.




Intel Quark awakening from stasis on a yet-to-be-named planet
by Don Dingee on 02-05-2014 at 3:00 pm

We know the science fiction plot device from its numerous uses: in order to survive a journey of bazillions of miles across galaxies into the unknown future, astronauts are placed into cryogenic stasis. The idea is that, literally frozen in time, they exit a lengthy suspension without aging, ready to go to work immediately on revival at their destination.

In our latest real life semiconductor drama, a tiny traveler is getting ready to awake after a journey from the pinnacle of past PC prestige into a very uncertain future on a planet observers haven’t even agreed on a name for. Some say this new terrene should be named Post-PC World, others say Wearables, more say the Internet of Things or the Internet of Everything, a few say Connected Autotopia, and still others are staking a claim with names like Makerland.

Pictured above is the platform ostensibly built to celebrate the old world: Intel Fab 42, on the West Side of Chandler AZ (“East Side, Don D in da house”). When asked recently about the decision to hold up on installing tooling in the massive facility, Will Strauss of Forward Concepts simply said:

Intel didn’t make a mistake, they just bet wrong.

If you observe Fab 42 from the point where our traveler embarked, that is somewhat true. The salad days of personal computing growth appear to be over, giving way not to a precipitous drop but to a long, slightly downhill decline. I don't think the bet was wrong, however.

Intel certainly considered the possibility of a PC downturn, but went ahead with Fab 42 for several reasons. First, they had plenty of capital to support the investment. Second, one cannot simply build a $5B fab overnight; there are years of planning, environmental analysis, municipal infrastructure support, material logistics and construction techniques, and other details well before installing tooling. Third, Intel's aging fabs, such as Fab 17 in Hudson, MA, and Fab 11 in Rio Rancho, NM, are entering retirement or slated to stay at mature process nodes, a reflection of the reality that not every physical facility can be rehabbed to support advanced nodes cost effectively.

In a world nearly ready for 14nm but not exactly sure what will be economically viable to build with it, Intel decided to prep our traveler for a new mission, just in case. Perhaps not the actual parts, but certainly the idea behind them is being held in cryogenic stasis inside Fab 42, ready to awaken when some form of this future everyone says is coming actually appears in volume.

Our traveler is named Quark, a play on sub-Atomic particles that make up matter. Our story has a slight twist, because our traveler emerges from stasis not just well preserved but different, better than when it entered. Intel recognizes their legacy and future is Intel Architecture, not ARM or anything else – the blowback from one or more wearables shown at CES 2014 with “third party cores” affirms that. To be credible on landing in the new world, Intel has to build and ship Intel Architecture chips.

So, how do you take a still relatively hoggy implementation into a world currently dominated by ARM, where most devices run on really small batteries? The answer is a creative one from Intel's Ireland design team. Grab the star of the PC era, the Pentium P54C at 600nm, reengineer its microarchitecture into a synthesizable core, lose the northbridge-southbridge paradigm, surround it with DDR3, PCIe, and an AMBA interconnect attaching modern peripheral IP, and shrink it to 32nm for starters. The first result: the Intel Quark SoC X1000, clocked at 400 MHz and consuming 2.2W max.

When CEO Brian Krzanich previewed the Intel Edison module at CES 2014, he teased us with a 22nm version of Quark, which back-of-the-envelope math says should be 1.5W or less, and threatened a roadmap to 14nm (but with darned few details yet) which should get them to the near-or-wee-bit-below-1W neighborhood. Yes, after years of denial, Intel may have finally figured out what “low power” actually means. A synthesizable core and AMBA interconnect means they can scale process and spin new variants much more quickly, so Quark is likely to become a family of travelers in short order.

Quark is a throwback to the lower average selling prices (ASPs) of the microcontroller franchise Intel created with the 8051 et al, but left behind for the allure of the higher ASPs and margins available with complex microprocessors. As I said recently in a post on IoT semiconductor volumes, Intel has to be very careful here not to trade (relatively speaking) one $230 Core i7 for one $23 Quark in their advanced fabs – that would be an absolute disaster. To win this game, they need to drive demand for Quark to 10x or 20x what their microprocessor business has been over the long term.

That's a huge order, literally; a whole lot of selling has to happen between here and there. In the meantime, Intel has the capacity, patience, and marketing might to make a push no microcontroller company can match – with Fab 42 waiting to bring Quark out of stasis like a wrecking ball when orders appear, be it from makers, wearables, the IoT, automotive, or the conglomerate known as post-PC. There are also some details Intel may want to reacquaint themselves with, like mixed-signal integration, perhaps a low-end GPU, and maybe even a revisit of Stellarton with programmable logic, for success in taking on the space between the traditional MCU and the now-mainstream mobile SoC.

Before getting too excited about planetary naming, gigantic projections, and whether Fab 42 ever really comes online, the basic questions are: does Intel have the right idea(s), and can they win developer hearts and minds back from the ARM movement? If the technological equivalent of antidisestablishmentarianism is to succeed with Quark as the hero, it will likely start with makers and crowdfunding, smaller wins with viral potential and community buzz.

A journey of a bazillion miles starts with one step. In our next Quark installment, we’ll look at the maker angle and explore the latest on the chip architecture and Galileo and Edison modules in more detail.



Cliosoft Grows Again!
by Daniel Nenni on 02-05-2014 at 10:25 am

Cliosoft was one of the first companies to work with SemiWiki, so they are an integral part of our amazing growth and we are part of theirs. I remember talking to Srinath Anantharaman (Cliosoft CEO) for the first time and discussing the goals of working together. It was simple really: there was disinformation in the market about Cliosoft and they wanted to set the record straight. The best way to do that, of course, was to talk to customers directly, which is what we did with more than a dozen interviews.

It has been three years now and both Cliosoft and SemiWiki continue to grow. I caught up with Srinath after the New Year for an update and here it is:

(1) How was business for your company in 2013?
2013 was an excellent year for us. We increased the breadth of our support for EDA flows. Now engineers using Agilent's ADS and Mentor's Pyxis can get the benefit of integrated SOS design management. We saw over 40% growth in bookings over 2012 and added almost 30 new customers. Several IP vendors adopted our DM solution to manage their IP development, including a leading EDA & IP vendor.

(2) Where do you see growth for your company in 2014?

We have a large customer base within a wide variety of product areas and they are constantly providing product feedback and suggestions. We plan to introduce some exciting new improvements to our existing design data management solutions and also new products to improve team collaboration and IP reuse in 2014. We believe both of these will help us grow, both with IP vendors and SoC design teams.

(3) What challenges do you face in hiring?
EDA is a stable and mature industry and does not have the lure of social media startups and blockbuster IPOs. With so many startups in the Bay Area, and marquee companies like Google and Facebook, it is always a challenge for EDA and semiconductor companies to attract the best talent.

(4) What position(s) are you looking to fill today?
We are looking for a hands-on marketing manager/director and a web software developer.

(5) Why should a candidate choose your company?
We are a small and yet very stable and fun company with very competitive compensation and benefits. Employees have a lot of authority and can make significant contributions to the products and company and are not just cogs in a large wheel. We believe in long-term partnerships with our customers, employees and partners. In fact, we have had no turnover at all since the company was founded in 1997.

About ClioSoft:

ClioSoft is the premier developer of hardware configuration management (HCM) solutions. The company’s SOS Design Collaboration platform is built from the ground up to handle the requirements of hardware design flows. The SOS platform provides a sophisticated multi-site development environment that enables global team collaboration, design and IP reuse, and efficient management of design data from concept through tape-out. Custom engineered adaptors seamlessly integrate SOS with leading design flows – Agilent’s Advanced Design System (ADS), Cadence’s Virtuoso® Custom IC, Mentor’s Pyxis Custom IC Design, Synopsys’ Galaxy Custom Designer and Laker™ Custom Design. The Visual Design Diff (VDD) engine enables designers to easily identify changes between two versions of a schematic or layout or the entire design hierarchy below by graphically highlighting the differences directly in the editors.

Also Read

High Quality PHY IPs Require Careful Management of Design Data and Processes

ClioSoft at Arasan

Data Management in Russia


DesignCon 2014 AMS Panel Report
by Daniel Nenni on 02-05-2014 at 10:15 am

DesignCon 2014 was very crowded! I have not seen the attendance numbers but as the first conference of the New Year it was very encouraging. The strength of the fabless semiconductor ecosystem is collaboration and face-to-face interactions are the most valuable, absolutely.

The session I moderated was on Mixed Signal Design which is of growing importance as mobile and wearable devices continue to drive the semiconductor industry. The panelists included an emerging fabless company CEO, a leading IP company VP, and an EDA product manager. The moderator is the opening act so it is my job to keep it interesting and entertaining, which I did.

As a moderator, I always ask for biographies and a list of questions the speakers would be comfortable answering following their presentation. I never actually ask these questions of course because that would be boring. I generally come up with candid questions that are a bit edgy but focused on content. I also try and get a ringer or two in the audience to keep the presenters on their toes.

For this session my ringer was a good friend and fellow consultant Herb Reiter. Herb started his career as an analog circuit designer and earned a dozen circuit design patents in Germany. He worked for VLSI Technology, ViewLogic, Synopsys, Mephisto Design Automation, and Barcelona Design amongst many other consulting clients. Herb knows AMS design, absolutely.

Zhimin Ding is the CEO of Anitoa Systems, which I wrote about HERE. The device he described was not really a wearable or a swallowable device; it is an ultra-low-light CMOS molecular imager enabling portable medical and scientific instruments. This is the future, people: immediate results for medical tests, definitely. Swallowables are next! And if I'm going to swallow a medical diagnostic device it had better be 28nm or smaller, that's for sure.

Mahesh Tirupattur, SVP of Analog Bits, was the most entertaining. Mahesh and I have been on panels together before and I have not yet been able to unnerve him. Yes, he is that good. Mahesh can go from the IP business model down to the transistor and field any question in between. Analog Bits uses the Tanner EDA tools to develop high-speed IP on bleeding-edge process nodes. They have already taped out FinFETs and are currently collaborating on 10nm with the foundries.

Jeff Miller, PM of Tanner EDA, was my Q&A focus, as he interacts with customers and knows where the bodies are buried, so to speak. Jeff spent his first 5 years with Tanner Research doing design work for medical, military, and aerospace applications. He then switched to the Tanner EDA group and has been helping customers with AMS designs ever since.

I tried to unnerve Jeff by asking him directly where the bodies are buried. I asked him for some customer crash and burn stories. I did get him to pause and give me a funny look but he came through with tales of process variation and mixed signal design that were well worth the price of admission, without mentioning specific customer names of course. Jeff earned his wings here, a true professional through and through.

Bottom line: If you want an interesting panel, have me moderate. If you want to attend an interesting panel look for the ones I moderate. If you are on a panel that I’m moderating don’t bother submitting questions in advance.
