
Apple and Google Turn Towards Enterprise
by Ed McKernan on 03-27-2013 at 9:00 am

As a calm settles over the mobile market following the overhyped Samsung Galaxy S4 launch, many analysts are at a loss to describe a way forward for Apple that is understandable and positive. The dozens of reports that focus on the summer launches of the iPhone 5S and a cheap iPhone miss the broad side of the barn on the true strategy being put in place. Apple is at the center of a solar system with Samsung, Intel, Google, Microsoft, Qualcomm, TSMC and others orbiting as partners and former partners turned competitors. The term co-opetition does not survive in a market heading to 5BU while offering seemingly endless profits. For its next encore, they say, Apple must innovate on the product side of the house. I see a realignment in their partnerships that could lead to a successful march through the unserved developing countries and, more importantly, the less competitive enterprise world.

It was Samsung who provided the low cost processors, DRAM, NAND and LCDs that vaulted the iPhone into volume and profitability as a US consumer play. And it is Samsung, with the help of Google, who has crushed the non-Amazon Android suppliers and seeks to commoditize Apple in an expensive CapEx end game. Apple has migrated all but the CPU to various suppliers, with an emphasis on a Japan Inc. benefiting from a depreciating yen. Soon TSMC will free Apple, but not at a cost or performance level that enables the outflanking of Samsung. A bulked up 14nm monster called Intel, sans Paul Otellini, is now ready to accept sub-40% margins in order to shrink the playing field by the time 10nm rolls around. Samsung is their #1 mortal enemy. Apple would concur, and thus the reason to partner.

Apple’s plan to roll out a low cost iPhone is a play on the middle 40% of the mobile market that contains the last 30% of the profits, currently registering with Samsung. A squeeze play between Apple and the China smartphone vendors is in the cards later this year. With Apple about to open up China Mobile, India and Japan’s DoCoMo, Samsung will lose their sole refuge in the other 3BU TAM. Every mobile vendor is dialing in 50% growth for 2013 in a market that is more like 30%. If history holds true, the bottom half of the market will be lucky to break even as the rising tide creates a serious undertow driven by excess capacity.

Apple’s plan in the developing world would seem to point to growth, but not too much growth. I expect a fortress wall to be built around what is initially not much more than 40% of the market, or until the excesses are wrung out. In the near term, greater riches are to be had in the corporate PC market, which is 90% Wintel and has so far hesitated to make a dramatic move until Windows 8 entered the fray. Legions of Microsoft and Intel sales folks call on CIOs regularly to sell the benefits of the latest O/S and x86 processors. Although the Apple Mac line uses both Intel silicon and Microsoft Office, it has not been attractive enough to crack the market. Legacy is a long tail in the corporate world, but the timing is right for Apple to adjust its partnerships in a complex divide and conquer strategy.

Since 2011, I have believed that Apple will end up in Intel’s fabs for two reasons. One is based on the advanced, ultra low power Tri-Gate process technology combined with the massive capacity build out. Intel’s $13B capex in 2013 is a bet that either Apple or Qualcomm is coming. The second reason is the belief that Apple would be far better off promoting its products into corporate accounts with Intel at its side rather than as a roadblock dishing out FUD. Analysts and the press have assumed Intel will never build an ARM based processor. This is foolish. Intel may never develop anything but x86; however, the 5BU mobile market demands that fabs be filled or risk losing the long war. ARM be damned, fill the fabs.

If Apple builds its A7 at Intel, it can be guaranteed the highest performing, lowest power and lowest cost processor for its smartphones and tablets. Apple will leverage this into higher end iPads and iPads in MacBook Air form factors that run a multitasking iOS. Then the pitch will be made that Apple offers the broadest product line that corporations can buy without the >$100 Microsoft O/S tax. Intel will play a two handed strategy of supporting Wintel tablets as well as Apple iPads based on A7 silicon coming off the 14nm lines. The strategy offers the opportunity to crush the non-Apple ARM market before it gains speed. Perhaps Qualcomm will feel it necessary to partner with the Intel foundry.

In Intel’s long-term game plan, TSMC and Samsung need to be cut down to size. Google can be a potential partner, but only if it centers its future O/S, application and cloud ecosystem around x86 processors. Google’s announcement of Andy Rubin stepping down as head of Android appears to be a sign that the company has lost control of the Android mobile market to Samsung, with little in the way of compensation. In comparison, iOS is a profit center, though at risk of being displaced. Look for a refocusing on the enterprise space as a better bet for growth.

In the near term, analysts will be concerned that an Intel margin model that includes servicing Apple’s ARM chips is a threat to long-term viability. In reality, however, Intel is trying to build a parallel vertical silicon supply model that is more extensive and profitable than what Samsung can offer today, because it will include not only cheap ARM mobile processors and baseband chips but also the legacy x86 PC processors, all the way up to $4,000 Xeon server chips for the cloud build out that Apple, Google, Amazon, Microsoft and others will rely on to proliferate their ecosystems. A new metric of server chips per mobile device will be devised to monitor Intel’s health.

In two to three years we may look back at today and understand that the mobile market endured a small blip that in reality was a quick transition from feature phones to smartphones, with only a very small percentage actually leveraging the cloud. The $50 white box internet phone and the cheap tablets widely available in China are being driven by an excess of chips built on n-1 processes and older versions of Android, leading analysts to focus on the commoditization downside driven by Moore’s Law and not on the productivity enhancement upsides that will soon be driven by the cloud. And this is why an enterprise push makes the most sense.

Full Disclosure: I am Long AAPL, INTC, ALTR, QCOM


Will 14nm Yield?
by Daniel Nenni on 03-26-2013 at 9:00 pm

If I had a nickel for every time I heard the term “FinFET” at the 2013 SNUG (Synopsys User Group) Conference I could buy a dozen Venti Caramel Frappuccinos at Starbucks (my daughter’s favorite treat). In the keynote, Aart de Geus said FinFET 14 times and posed the question: Will FinFETs Yield at 14nm? So that was my mission: ask everybody I saw whether FinFETs will yield at 14nm.

Established in 1990, SNUG represents a global design community focused on accelerating innovation. In 2012, SNUG brought together nearly 9,000 users across North America, Europe, Asia and Japan, and featured nearly 250 customer papers. As the electronics industry’s largest user conference, SNUG delivers a robust technical program developed by users for users and includes strong ecosystem partner participation through Design Community Expos in many regions.

Given that Synopsys owns the TCAD market, Aart certainly knows what is yielding and where. Ever the businessman, Aart then segued into a pitch for the Synopsys Yield Explorer product, which is a staple at 20nm. At the press event following the keynote Aart mentioned lithography challenges and doping. In the hallway, Co-CEO Chi-Foon Chan mentioned parasitic extraction challenges, and process variation was mentioned multiple times by some of the 3,000 or so attendees. No real consensus could be had because, in the words of one conference attendee, “We just don’t know what we don’t know.” Hard to argue with that. An Intel person also told me over lunch that yielding 3D transistors at 14nm will be a new challenge, even after successfully yielding at 22nm.

Aart mentioned “The Exponential Age” during his keynote and talked a bit about Moore’s Law and how semiconductor manufacturing improves exponentially with time. Afterwards, a hallway discussion turned to Wright’s Law of Experience, or “we learn by doing,” which seems much more applicable to semiconductor yield. A quick search on my pocket supercomputer (iPhone 5) brought me to a paper published in 2010 out of the Santa Fe Institute, “Statistical Basis for Predicting Technology Progress,” which tests the hypotheses of Moore’s, Wright’s, and four other laws across 62 different technologies, including transistors. Spoiler alert: Wright’s Law wins, but Moore’s Law is a close second!
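For anyone who wants to see the difference concretely, here is a minimal sketch in Python, using made-up cost numbers rather than the paper’s dataset: Moore’s Law regresses log cost against calendar time, while Wright’s Law regresses log cost against log cumulative production.

```python
import numpy as np

# Hypothetical cost-per-transistor history (illustrative only, not real data)
years = np.array([2001, 2003, 2005, 2007, 2009, 2011, 2013])
cum_units = np.array([1.0, 2.5, 6.0, 15.0, 40.0, 100.0, 260.0])  # cumulative production, arbitrary units
cost = np.array([100.0, 55.0, 30.0, 16.0, 9.0, 5.0, 2.7])        # cost per transistor, arbitrary units

# Moore's Law: cost ~ exp(-a * t)  -> straight line in (year, log cost)
moore_slope, moore_icpt = np.polyfit(years, np.log(cost), 1)

# Wright's Law: cost ~ cum_units^(-b) -> straight line in (log cum_units, log cost)
wright_slope, wright_icpt = np.polyfit(np.log(cum_units), np.log(cost), 1)

print(f"Moore fit:  cost halves every {np.log(2)/-moore_slope:.1f} years")
print(f"Wright fit: {1 - 2**wright_slope:.1%} cost drop per doubling of cumulative output")
```

Which regression fits better on real data is exactly what the paper measures; the point is that Wright’s Law ties progress to how much has actually been built, which is why “we learn by doing” maps so naturally onto yield ramps.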

Circling back around to the original question “Will 14nm Yield?” you really have to take a close look at 20nm, since 14nm is essentially a 20nm process with FinFET transistors. According to an FPGA company and two mobile SoC companies that I know and love, 20nm is currently yielding “as expected”. So I would have to say yes, 14nm FinFETs will have a reasonably quick yield ramp since most of the problems will be vetted at 20nm. The next question of course is: who will be first with 14nm production silicon? This is probably one of the most interesting races in the history of the fabless semiconductor ecosystem. If you allow handicapping and buy into Wright’s Law, it will be TSMC, absolutely.

Don’t forget to register for FinFET day at the Electronic Design Process Symposium on April 19th. There will be technical presentations from TSMC, GLOBALFOUNDRIES, ARM, Synopsys IP, and Oracle plus an opening keynote from me on the Planar versus FinFET value proposition. I hope to see you there!


In compliance we trust, for integration we verify
by Don Dingee on 03-26-2013 at 8:10 pm

So, you dropped that piece of complex IP you just licensed into an SoC design, and now it is time to fire up the simulator. How do you verify that it actually works in your design? If you didn’t get verification IP (VIP) with the functional IP, it might be a really long day.

Compliance checking something like a PCIe interface block is a stringent process that has to explore every sequence of a protocol. A diligent IP vendor will have performed exhaustive testing on a block to assure the protocol execution conforms within interoperability guidelines. For most third-party blocks that have been released, it is a fairly safe assumption that the block executes the protocol under controlled conditions.

An SoC design, with multiple cores operating in a complex application generating traffic asynchronously, usually doesn’t qualify for “controlled conditions”. The job of integration testing is to expose known-good IP to stimuli within the complete environment, subjecting it to just enough chaos to increase confidence in the overall design.

A recent Synopsys webinar titled “Accelerate PCIe Integration Testing with Next-Generation Discovery VIP” walks through the thought process behind verifying a piece of complex interface IP. Paul Graykowski, Senior Corporate Application Engineer (CAE) for Synopsys, outlines four areas that should be explored. My CliffsNotes version, with a rough sketch of the flow after the list:

Link testing – first, a PCIe link must be trained. The VIP allows setting the number of lanes and the link speed, verifies that L0 is reached, and performs enumeration. From L0, link speeds may need to be renegotiated, from 8 Gbps for Gen3 down to 5 Gbps and 2.5 Gbps. Finally, lane reversal needs to be checked and verified.

Traffic testing – first, the PCIe block is set up as requester, and the VIP block as completer. A set of reads and writes with different completion and payload sizes is done, looking for proper multiple completions with random data sets. Then the roles are reversed, and the VIP is the requester. Config writes and reads to random registers and random addresses with randomized payload sizes are performed. Finally, a series of writes and reads is performed with other AXI traffic in the system.

Interface testing – supported PCIe interfaces include SERIAL, PARALLEL, and PIPE; all must be verified. Also important is verifying lane error handling, forcing negotiation to a lower number of lanes, and looking at common errors such as disparity and bit-flipping. Gen3 equalization should also be checked, seeing that coefficients are requested and rejected. Finally, the behavior under power management scenarios, including things like hot plug and clock gating, should be verified.

Performance testing – with things working, performance can then be verified. With background AXI traffic not targeting the PCIe block running, concurrent traffic is generated with direct I/O for PCIe, and latency and throughput are verified.
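As a rough illustration of how the first two phases might be strung together, here is a minimal self-contained sketch in Python. The MockPcieVip class and its methods are hypothetical stand-ins invented for this example, not the Synopsys VIP API; a real flow would be SystemVerilog sequences driving the VIP, but the ordering and the checks are the point.

```python
from dataclasses import dataclass
import random

@dataclass
class LinkState:
    lanes: int
    speed_gbps: float
    state: str          # "L0" once training completes

class MockPcieVip:
    """Hypothetical stand-in for a PCIe VIP; real VIP APIs differ."""

    def train_link(self, lanes: int, speed_gbps: float) -> LinkState:
        # A real VIP would drive the LTSSM through training; here we just report success.
        return LinkState(lanes=lanes, speed_gbps=speed_gbps, state="L0")

    def change_speed(self, link: LinkState, speed_gbps: float) -> LinkState:
        # Model a speed renegotiation from L0.
        link.speed_gbps = speed_gbps
        return link

    def memory_write_read(self, addr: int, payload: bytes) -> bytes:
        # Loopback model: a completer that returns exactly what was written.
        return payload

def run_integration_smoke_test() -> None:
    vip = MockPcieVip()

    # Link testing: train to L0 at the Gen3 rate, then renegotiate down to Gen2 and Gen1.
    link = vip.train_link(lanes=8, speed_gbps=8.0)
    assert link.state == "L0"
    for speed in (5.0, 2.5):
        link = vip.change_speed(link, speed)
        assert link.speed_gbps == speed

    # Traffic testing: randomized writes/reads with varying payload sizes.
    for _ in range(16):
        addr = random.randrange(0, 1 << 32, 4)
        payload = bytes(random.randrange(256) for _ in range(random.choice((4, 64, 256))))
        assert vip.memory_write_read(addr, payload) == payload

    print("integration smoke test passed")

if __name__ == "__main__":
    run_integration_smoke_test()
```

Interface, error-injection and performance phases would follow the same pattern, with background AXI traffic running concurrently.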

Sounds easy, right? Even if you’re a PCIe expert, generating the cases is non-trivial, and that is assuming everything works. When is enough coverage enough? And how do you debug something subtle when it goes wrong?

The architecture includes coverage models, so you can track things like tests for all the layers of the protocol – transaction, data link, and PHY – and combinations of traffic class and virtual channels, including cross combinations. It allows you to track DLLPs, looking at ack/nak, power management, flow control, and injected errors. It also allows you to track TLPs, looking at transaction types, header values, sequence numbers, error injections, and completion status.
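To make “cross combinations” concrete, here is a toy Python sketch of that kind of coverage bookkeeping. It only illustrates the data structure, not how the Discovery VIP’s coverage models are implemented, and the two-VC mapping is an assumption made up for the example.

```python
from collections import defaultdict
from itertools import product

TRAFFIC_CLASSES = range(8)      # PCIe defines TC0..TC7
VIRTUAL_CHANNELS = range(2)     # assume the design maps TCs onto two VCs

class CrossCoverage:
    """Counts hits for every (traffic class, virtual channel) combination."""

    def __init__(self):
        self.hits = defaultdict(int)

    def sample(self, tc: int, vc: int) -> None:
        self.hits[(tc, vc)] += 1

    def report(self) -> float:
        total_bins = len(TRAFFIC_CLASSES) * len(VIRTUAL_CHANNELS)
        covered = sum(1 for bin_ in product(TRAFFIC_CLASSES, VIRTUAL_CHANNELS)
                      if self.hits[bin_] > 0)
        return covered / total_bins

cov = CrossCoverage()
cov.sample(tc=0, vc=0)
cov.sample(tc=3, vc=1)
print(f"cross coverage: {cov.report():.0%}")   # 2 of 16 bins hit
```

In a real flow this job is done by SystemVerilog covergroups with cross bins, and the VIP’s built-in coverage models supply the protocol-aware bins so the team does not have to enumerate them by hand.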

For protocol-aware debug, the Protocol Analyzer understands TLPs, DLLPs, credits, ordered sets, and LTSSM state – all presented in a graphical way – and also understands waveforms. You can select a particular TLP and see address, data, byte enables, and descendants. You can also put more than one protocol side-by-side, say AXI in, PCIe out, to explore deeper issues.

There’s more in this webinar, and if you are new to verification, or having a hard time justifying the outlay of buying into commercial verification IP, or just curious what the verification process looks like beyond the mechanics of writing SystemVerilog, it is definitely worth a look. Synopsys has a full range of verification IP for a variety of interfaces and protocols, and the site has more information on things like error injection, coverage tracking, and protocol-aware debug.


Moore Push Versus Market Pull
by Paul McLellan on 03-25-2013 at 5:55 pm

I was at SNUG earlier today at both Aart’s keynote that opened the conference and at his “meet the press” Q&A just before lunch. The keynote was entitled Bridges to the Gigascale Decade. And the presentation certainly contained lots of photos of bridges! Anyway, I’m going to focus on just one thing, namely how the dynamics of the industry change depending on the cost-per-transistor as we go down to 9nm.

One thing that Aart talked about at both sessions was this trend as we go down through the next few process nodes. It is clear that FinFETs bring great value, especially much lower leakage current, whereas 20nm planar doesn’t bring much advantage: all the extra costs and hassles of 20nm without the benefits of FinFETs. No wonder everyone is rushing to 14nm (sometimes called 16nm), which is actually 14nm FinFETs with a 20nm interconnect fabric.

Aart had a graph from Intel showing the cost per transistor coming down almost linearly, with an extra kicker if and when we have 450mm wafers. Of course there is another saving with EUV but, as you probably know, I’m a bit of a skeptic about that. I hope this is true, but I’ve also seen other graphs showing the cost being flat. At the Common Platform meeting a few weeks ago, Gary Patton of IBM said that there was a cost saving but it is much less than we have been used to. The old economics was a 50% increase in die per wafer and a 15% increase in cost per wafer, leaving a 35% saving. Who knows what the new rules are?
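As a quick back-of-the-envelope check, taking the 50% and 15% figures as given (the 35% is the rough subtraction; the exact ratio works out a bit less generous):

```python
# Back-of-the-envelope check on the "old economics" quoted above
die_per_wafer_gain = 1.50    # 50% more die per wafer at the new node
wafer_cost_increase = 1.15   # 15% higher cost per processed wafer

cost_per_die_ratio = wafer_cost_increase / die_per_wafer_gain
print(f"new cost per die = {cost_per_die_ratio:.2f}x the old cost "
      f"(a {1 - cost_per_die_ratio:.0%} saving)")
# -> new cost per die = 0.77x the old cost (a 23% saving)
```

That 23% (or 35%, depending on how you count) is the old-rules saving; Patton’s point is that the new nodes deliver noticeably less than that, and nobody yet knows the exact number.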

But Aart feels it doesn’t really matter. We are leaving the era where the push of Moore’s law drove the semiconductor industry and entering the era when market pull will drive the semiconductor business even if transistor costs do not come down. That is certainly true for some markets. The cost of the application processor chip in your cell-phone isn’t that critical since it is a $500 product. But in Africa there is a market for cell-phones with a $50 BOM and every $ is important.

So I’m a little unconvinced that the economics of Moore’s law are irrelevant compared to the exponential demand for greater and greater functionality.

At the Q&A we all discussed this. We talked about how everyone would like IBM’s Watson in our pocket, and semiconductor technology over the next decade will be able to deliver this. If the cost per transistor comes down, then I believe this. But if it doesn’t, and if Watson has, say, ten times as many transistors as the current chip in your smartphone, that means the chip will cost ten times as much or more. Yes, wonderful functionality, low power, but maybe at a price point that doesn’t work even in the US. And to make it worse, just waiting, which in the past has always been a way of getting electronics cheaper than buying the first version of something, won’t make it any cheaper.

A lot of electronics has been driven over the years by the exponential decrease in cost over many process generations. Not a 15% saving, but a reduction in cost of 1000X over 20-30 years. That is how we have more computing power in our pockets than million-dollar flight simulators had in the 1980s. I suspect that costs will come down as we get more learning about yield, but there are genuinely unavoidable extra costs like double patterning and the complex construction of the 3D FinFET structure.


New ways for High Frequency Analysis of IC Layouts
by Pawan Fangaria on 03-25-2013 at 5:30 pm

Amidst frequently changing requirements, time pressure and demand for high accuracy, it is imperative that EDA and design companies look at time consuming processes in the overall design flow and find alternatives that do not sacrifice accuracy. High frequency analysis of IC designs is one such process: it is traditionally based on models developed with TCAD tools, which often means approximating the actual design. That method is not suitable for architectural exploration, which is a prime need today. Moreover, the actual interactions among devices and interconnects need to be taken into account.

Mentor Graphics has developed a novel and practical approach that addresses these issues realistically, in live designs, on the fly. It can extract a full chip with automatic detection of high frequency (HF) devices and characterize them accurately, taking into account their interaction with neighbouring devices and interconnects. Naturally, the approach is quite suitable for design exploration as well. The HFA engine is seamlessly integrated into the various design flows of the Calibre platform, making it scalable, user-friendly and production worthy, and giving a performance and capacity boost of 10X compared to the traditional TCAD based approach.

It’s worth having a look at Mentor’s white paper, located at http://www.mentor.com/resources/techpubs/upload/mentorpaper_74888.pdf, which describes the new modelling and characterization approach in detail. An inductor is taken as an example of automatic recognition and characterization. The approach uses a port characterization method in which the device is described by a matrix of S-parameters, and the effective inductance (L) and quality factor (Q) can then be calculated from those S-parameters.
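For the simplest one-port case (the white paper’s multi-port setup is more general, and the port configuration below is an assumption made for illustration), the conversion from an S-parameter to L and Q is a short calculation: map S11 to an input impedance, take the imaginary part over the angular frequency for L, and the imaginary-to-real ratio for Q. A minimal sketch:

```python
import math

def inductance_and_q(s11: complex, freq_hz: float, z0: float = 50.0):
    """One-port conversion: S11 -> input impedance -> effective L and Q.

    Textbook single-port formula with a 50-ohm reference impedance; the
    multi-port characterization described in the white paper is more general.
    """
    z_in = z0 * (1 + s11) / (1 - s11)   # reflection coefficient to impedance
    omega = 2 * math.pi * freq_hz
    l_eff = z_in.imag / omega           # effective inductance in henries
    q = z_in.imag / z_in.real           # quality factor
    return l_eff, q

# Example with a made-up S11 value at 2 GHz (illustrative, not from the paper)
l, q = inductance_and_q(s11=complex(0.2, 0.75), freq_hz=2e9)
print(f"L = {l * 1e9:.2f} nH, Q = {q:.1f}")   # roughly 4.96 nH, Q about 3.8
```

The derived L and Q are exactly the quantities against which the accuracy numbers below are quoted.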


[Example – Inductor layout with assigned ports and a shield (in blue) placed below the inductor to reduce substrate loss]

With the help of an SVRF (Standard Verification Rule Format) rule file, which contains specific information about process layers, port assignments, frequency range and so on, Calibre automatically detects the precise inductor geometry from the layout data. Electromagnetic (EM) modelling is then used: the Calibre EM engine evaluates the integral form of Maxwell’s equations in the frequency domain, using numerical calculations based on the boundary-element method. Special techniques are employed to control memory consumption and computation time in solving the linear system resulting from the boundary-element method.

The S-parameter results from Mentor’s new modelling approach are within 2% of those from TCAD.


[Mentor (red) and Reference (green) results for port S-parameters]

The L and Q values derived from Mentor S-parameters are within 5% and 10% of TCAD based L and Q values respectively.


[L and Q values results from Mentor (red) and Reference (green)]

Mentor’s HFA solution serves the practical needs of design and exploration with quick turnaround time and high accuracy. Calibre provides an integrated platform with excellent usability, scaling and an on-the-fly solution. Scalability can be increased further by using multiple CPUs in a multi-threaded, multi-core, or clustered configuration.


Moore and Beyond: Global Semiconductor Primer
by Daniel Nenni on 03-24-2013 at 8:10 pm

There are a couple of new analyst reports out that are interesting enough to blog about. I talk to the financial types on the phone every week or so explaining the semiconductor business and answering questions about the foundries and the top fabless companies. We don’t always agree on the outlook but there is always something to be learned.

“Integrated circuits will lead to such wonders as home computers or at least terminals connected to a central computer, automatic controls for automobiles, and personal portable communications equipment. The electronic wristwatch needs only a display to be feasible today” (Gordon Moore, 1965).

The first report is a 160 page Moore and Beyond: Global Semiconductor Primer from Merrill Lynch / BofA. It’s a very good overview with some excellent market data but comes up short when talking about the foundries and FinFETs. Intel still has a big influence on Wall Street, certainly much more influence than Intel has on the semiconductor ecosystem as it stands today. Commenting on the Intel versus ARM mobile battle without first knowing the next Intel CEO seems to be a waste of time, but they do it anyway. If Intel promotes from within rather than grabbing a mobile oriented CEO, Intel will go the way of the PC (my opinion). ARM has already made the CEO move, which is talked about HERE.

For more information on this report contact: Merrill Lynch Research

Even more interesting is a Foundry Tracker Report from Catamount Strategic Advisors: Competition increasingly to define the semiconductor manufacturing business:

METHODOLOGY

  • Interviewed semi foundry professionals in recent trips to Taiwan, Seoul (S. Korea) and Silicon Valley (CA) to gauge industry trends.
  • Identified momentum shifts between manufacturing processes and measured impact for companies throughout the food chain.

COMPANIES MENTIONED:

  • TSMC (TSM)
  • UMC (UMC)
  • GLOBALFOUNDRIES
  • Samsung
  • IBM (IBM)
  • INTEL (INTC)
  • Apple (AAPL)
  • Altera (ALTR)
  • Xilinx (XLNX)
  • Marvell (MRVL)

KEY TAKES

  • Via relentless execution, TSMC has established itself as the 800 lb gorilla of the semiconductor manufacturing industry (>50% market share). Our research indicates this will change in the next few years, and we highlight our research below:

For more information on this report contact: Catamount.

And yes, I’m one of the industry professionals they interviewed for this report. Catamount is a boutique research firm here in Northern California so they are easy to reach:

“Tech Equity Research the way it should be: Actionable, Insightful, and Timely”

For more detailed information on the value proposition of FinFETs please join me at the Electronic Design Process Symposium in Monterey, CA on April 19th. You can find more information HERE. I hope to see you there!


Can Samsung Deliver As Promised?
by Daniel Nenni on 03-24-2013 at 8:01 pm

Samsung’s aggressive marketing is starting to hit the fabless semiconductor ecosystem so we had all better be prepared. Samsung’s 2012 marketing budget exceeded $400M which probably beats the marketing budget of the entire fabless ecosystem! The question is, will the fabless semiconductor industry be easily swayed by Samsung’s clever marketing? Let’s take a look at their recent mailer and see:

Samsung is fueling the innovations that are changing the way we connect, interact, and benefit from technology. Our low-power, high-performance components and advanced process manufacturing are the foundation for your next leap in mobile products and the infrastructure that connects them. Connect with Samsung and stay abreast of our latest solutions.

This email talked about 14nm and getting ready for FinFET design. According to what I hear in Silicon Valley, Samsung is claiming to be far ahead of the competition and that is confirmed here:

GET READY TO DESIGN 14nm PROCESS IS HERE!

The foundry business is all about setting customer expectations and meeting them. Let’s take a look back at 28nm. According to an EETimes article dated 6/6/2011:

South Korea’s Samsung Electronics Co. Ltd. said Monday (June 6th) that its foundry business, Samsung Foundry, has qualified its 28-nm low-power (LP) process with high-k metal gate technology and is ready for production.

Where is the qualified 28nm silicon they spoke of almost two years ago? Why does my iPhone 5 have a Samsung 32nm based SoC? Why does TSMC have 99.99% 28nm market share? If you want someone to blame for the 28nm shortage, how about Samsung? Clearly they did not deliver. Barely a month later (7/12/2011) EETimes reports:

Samsung Electronics Co. Ltd. has said that its foundry chip making business has taped out an ARM processor test-chip intended for manufacture in a 20-nm process with high-k metal-gate technology.

Where is the Samsung 20nm silicon they speak of? I’m waiting for a teardown of the new Galaxy S4, but knowing that the S3 is powered by a 32nm Samsung SoC, I highly doubt we will see a Samsung 20nm SoC anytime soon.


Speaking of marketing smoke and mirrors, I was at the Samsung keynote at CES where the Exynos 5 Octa SoC was launched, claiming to be the first eight-core SoC. There was a more technical presentation at ISSCC a month later. In reality, the Exynos 5 Octa is four off-the-shelf ARM A15 cores plus four off-the-shelf ARM A7 cores in a big.LITTLE configuration where only four cores can be used at a time. What a mess; they should have named it Octomom!

Meanwhile, back on planet Earth, Apple and Qualcomm license the ARM architecture and create highly integrated custom SoCs tuned for the associated operating system, which is why they have the best performance/battery life ratio on the market today. How long until AAPL and QCOM have the ARM 64-bit architecture rolled out? Closer than you think, believe it.

Time will tell, but based on previous Samsung marketing materials for 28nm and 20nm, Samsung 14nm may not be as close as it appears. Samsung 28nm test chips were announced in June of 2010, and hopefully we will see 28nm production silicon three years later. Samsung 20nm test chips were announced in July 2011, with silicon expected in 2014? Samsung 14nm test chips were announced in December of 2012, so you do the math. Maybe Q1 2016? If so, they are behind the competition, certainly.

Of course that will not stop the Samsung propaganda machine. As they say, the pen is mightier than the sword and $400M buys a whole lot of ink!


Unlocking the Full Potential of Soft IP
by Daniel Payne on 03-22-2013 at 11:32 am

EDA vendors, IP suppliers and foundries provide an ecosystem that SoC designers use to get their new electronic products to market quicker and at a lower cost. An example of this ecosystem is three companies (TSMC, Atrenta, Sonics) that teamed up to produce a webinar earlier in March called: Unlocking the Full Potential of Soft IP.




A Brief History of Forte Design Systems
by Daniel Nenni on 03-21-2013 at 8:10 pm

When Semiwiki readers see the name Forte Design Systems, they may think of the live bagpipers’ performance that closes the yearly Design Automation Conference. Forte has been the sponsor of this moving end to DAC since 2001. Step with me behind the plaid kilts for a good look at this remarkable company headquartered in San Jose, Calif., enabling remarkable designs.

Forte finished 2012 with 22% growth, and boasted that 2012 was the seventh consecutive year of revenue growth. Better yet, it was named the #1 provider of electronic system-level (ESL) synthesis software by Gary Smith EDA. Forte prefers high-level synthesis (HLS) –– a way for hardware engineers to work at a higher level of design abstraction –– to the ESL acronym and, in fact, is credited with defining the market. Sean Dart, CEO, and Brett Cline, vice president of marketing and sales, cite a strong commitment to high-value software and top-notch support for moving into the top slot.

Forte has been around since 2001, formed not to sponsor the bagpipers but through the merger of CynApps and Chronology. The name was chosen appropriately enough because it’s a noun that means “strength” or “something one is good at.” Long-time industry observers will remember that Dr. John Sanguinetti started CynApps in 1998, after Chronologic and VCS, the popular Verilog Compiled Simulator, were acquired by Viewlogic in 1994. (Synopsys acquired Viewlogic in 1997 and continues to sell VCS today.)

Dr. Sanguinetti is Forte’s CTO and knew back in the early 1990s that logic verification and logic synthesis were two big EDA problem areas. He took on logic verification at Chronologic, then turned his attention to synthesis, knowing that the move up in abstraction from gates to register transfer level (RTL), enabled by logic synthesis, had improved design and verification efficiency. In 1998, he and two other engineers founded CynApps to create a higher-level design environment, along with a synthesis product that would produce RTL code from higher-level designs.

Then came the 2001 merger of CynApps and Chronology with synthesis and verification offerings. Chronology was a verification provider, but not as well known or successful as Verisity (now part of Cadence). With a visible presence and good reputation in the synthesis space, CynApps could attract funding. The decision was made to continue to focus on high-level synthesis and use the verification tools to build a high-level synthesis environment with verification at its core. The move to a high-level synthesis business model was soon validated –– Fujitsu, Ricoh and Sony were among the first Forte customers that year.

Asia’s consumer electronics companies adopted a high-level synthesis methodology ahead of other regions. Japan’s consumer device designs were a natural fit for this kind of software because HLS can turn image-manipulation algorithms into hardware.

Cynthesizer, Forte’s SystemC high-level synthesis product, is selected by development teams who want to ease time-to-market pressures by designing at a higher level of abstraction and who require substantial improvements in circuit size and power. In many cases, teams create designs that would be impossible using RTL. The United States, Korea and Japan, in particular, are design centers where Cynthesizer is in use for custom processors and for wired and wireless communication devices.

In 2009, Forte acquired Arithmatica to complement its product offerings with a portfolio of intellectual property (IP) and datapath synthesis technology that has been integrated directly into Cynthesizer.

Moving design to a higher level of abstraction has been a tough challenge that Forte solved, and its production-quality tools are going mainstream to prove it. Of course, Forte will be at the 50th DAC in Austin, Texas, and is already planning for the return of the bagpipers this year.