A Brief History of Mentor Graphics
by Beth Martin on 08-20-2012 at 11:00 pm

In 1981, Pac-Man was sweeping the nation, the first space shuttle launched, and a small group of engineers in Oregon started not only a new company (Mentor Graphics), but an entirely new industry, electronic design automation (EDA).


Mentor founders Tom Bruggere, Gerry Langeler, and Dave Moffenbeier left Tektronix with a great idea and solid VC funding. To choose a company name, the three gathered at Bruggere’s home, the former “world headquarters” of the fledgling enterprise. Moffenbeier’s choices, ‘Enormous Enterprises’ and ‘Follies Bruggere,’ while witty, did not seem calculated to inspire confidence in either potential investors or customers. Langeler, however, had always wanted to own a company called ‘Mentor.’ Later, ‘Graphics’ was added when it was discovered that the lone word ‘Mentor’ was already trademarked by another company. Mentor, now based in Wilsonville, Oregon, became one of the first commercial EDA companies, along with Daisy Systems[SUP]1[/SUP] and Valid Logic Systems[SUP]2[/SUP].

The Mentor Graphics team decided what kind of product it would create by surveying engineers across the country about their most significant needs, which led them to computer-aided engineering (CAE), and the idea of linking their software to a commercial workstation. Unlike the other EDA startups, who used proprietary computers to run their software, the Mentor founders chose Apollo workstations as the hardware platform for their first software products. Creating their software from scratch to meet the specific requirements of their customers, and not building their own hardware, proved to be key advantages over their competitors in the early years. One wrinkle—at the time they settled on the Apollo, it was still only a specification. However, the Mentor founders knew the Apollo founders, and trusted that they would produce a computer that combined the time-sharing capabilities of a mainframe with the processing power of a dedicated minicomputer.

The Apollo computers were delivered in the fall of 1981, and the Mentor engineers began developing their software. The goal was to demonstrate their first interactive simulation product, IDEA 1000, at DAC in Las Vegas the following summer. Rather than being lost in the crowd in a booth, they rented a hotel suite and invited participants to private demonstrations. That is, invitations were slipped under hotel room doors at Caesar’s Palace, but because they didn’t know which rooms DAC attendees were staying in, the invitations were passed out indiscriminately to vacationers and conference-goers alike. The demos were very well received (by conference-goers, anyway), and by the end of DAC, one-third to half of the 1200 attendees had visited the Mentor hotel suite (No record of whether any vacationers showed up). The IDEA 1000 was a hit.

By 1983, Mentor had its second product, MSPICE, for interactive analog simulation. It also began opening offices across the US, and in Europe and Asia. By 1984, Mentor reported its first profit and went public. Longtime Mentor employee Marti Brown said it was an exciting time to work for Mentor. The executives worked well together, complementing each other’s strengths, and Bruggere in particular was dedicated to creating a very worker-friendly environment, including building a day care center on the main campus. Throughout the 80s, Mentor grew aggressively (you can see all the companies Mentor acquired on the EDA Mergers and Acquisitions Wiki). As an aside, Tektronix, where the Mentor founders worked previously, also entered the market with the acquisition of a company called CAE Systems. The technology was uncompetitive, though, and Tektronix eventually sold the assets of CAE Systems to Mentor Graphics in 1988.[SUP]3[/SUP]
Times got tough for Mentor from 1990 to 1993, as the company faced new competition and a changing EDA landscape. While Mentor had always sold complete packages of software and workstations, competitors were beginning to provide standalone software capable of running on multiple hardware platforms. In addition, Mentor fell far behind schedule on the so-called 8.0 release, which was a re-write of the entire design automation software suite. This combination of factors led some of their earliest and most loyal customers to look elsewhere, and forced Mentor to declare its first loss as a public company. However, learning from the experience, Mentor began to redesign its products and redefine itself as a software company, expanding its product offerings into a wide range of design and verification technology. As it made the transition, the company began to recapture market share and attract new customers.


In 1992, founder and president Gerry Langeler left Mentor and joined the VC world.
In 1993, Dave Moffenbeier left to become a serial entrepreneur. That same year, Wally Rhines came on as president and CEO, replacing CEO Tom Bruggere who also retired as chairman in early 1994.[SUP]4[/SUP] Under the leadership of Rhines, Mentor has continued to grow and thrive, introducing new products and entering new markets (such as thermal analysis and wire harness design), cementing its position as the IC verification reference tool set for major foundries, and reporting consistent profits. In recent years, Mentor has been a target for acquisition (by Cadence, 2008) and an activist investment (by famed corporate pirate Carl Icahn, 2010). Despite those events, Mentor continued to grow its revenue and profitability to record levels by introducing products for totally new markets. Throughout its 31 years, Mentor has been a solid anchor of the EDA industry that it helped to create.

Very Interesting References and Asides

  • Daisy Systems merged with Cadnetix, which was acquired by Intergraph, which spun out the EDA business as VeriBest, which was bought by Mentor in 1999.
  • Valid was acquired by Cadence in 1991.
  • Thanks to industry gadfly Simon Favre for this information.
  • Another interesting aside about Bruggere: his English manor-style home, which is for sale, was used last year as a filming location for the TV series Grimm, as the site of poor Mavis’ murder (Season 1, episode 20).

    The Business Case for Algorithmic Memories
    by Adam Kablanian on 08-20-2012 at 11:00 am

    Economic considerations are a primary driver in determining which technology solutions will be selected, and how they will be implemented in a company’s design environment. In the process of developing Memoir’s Algorithmic Memory technology and our Renaissance product line, we have held fast to two basic premises: our technology and products have to work as promised, and we have to reduce the risk and total cost of development for our customers. The reality is that the entire semiconductor ecosystem needs to be approached in a new way. Gone are the days when ROI was a second or even third tier concern. Gone, also, are the days when multiple iterations of a product were not only tolerated but actually accepted as the norm.

    One of the most expensive and risky parts of chip design is silicon validation. From the beginning, Memoir has focused on developing its technology using exhaustive formal verification that eliminates the need for further silicon validation by the customer. It may sound like this approach should be a given in today’s economically challenging product development environment. However, implementing this philosophy as part of our product portfolio takes a deep understanding of the underpinnings of embedded memory technology. We have invested a substantial amount of time and energy in developing the exhaustive formal verification process that is used to test and certify our Algorithmic Memory before shipping it to customers. This is unique for an IP company, and it is the cornerstone of our risk reduction strategy, which also significantly reduces cost for the customer.

    For the past 40 years, the semiconductor industry has blindly continued to use the 8T bit cell to build dual port SRAM memories. Today, successfully incorporating 8T bit cell dual port memories into SoC designs is not as simple as it used to be. The current word in the industry is that the 8T bit cell is problematic in terms of design margins and VDD min, which is paramount for low power designs. Yields are also a concern. So, rather than just coming up with a different way to implement the 8T bit cell in synchronous dual port memories, we have chosen a different path. With Algorithmic Memory technology, customers can use single port memory built from the 6-transistor (6T) bit cell to create new dual port and multi-port memories for synchronous chips, matching the performance of an 8T bit cell-based design methodology. By eliminating the need for the 8T cell, testing is also simplified, since only a single type of memory using only the 6T bit cell needs to be tested. This helps to reduce overall design and test complexity, which translates into faster time-to-market, better yields, and cost savings.
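    To make the single-port-to-dual-port idea concrete, here is a toy software model (a conceptual sketch only; Memoir’s actual algorithms are proprietary, and the class name here is invented). One classic way to present 1R1W dual-port behavior on top of a single-port array is to double-pump it: the array runs at twice the port clock, serving the read in one phase and the write in the other:

```python
class DoublePumpedDualPort:
    """Toy model of a 1R1W 'dual-port' RAM built on one single-port
    array: the array runs at twice the port clock, serving the read
    in phase 0 and the write in phase 1 of each external cycle."""

    def __init__(self, depth):
        self.mem = [0] * depth  # the underlying single-port (6T) array

    def cycle(self, raddr, waddr=None, wdata=0):
        rdata = self.mem[raddr]          # phase 0: the one port reads
        if waddr is not None:
            self.mem[waddr] = wdata      # phase 1: the same port writes
        return rdata
```

    In hardware, double-pumping pays with a faster internal clock rather than the extra transistors of an 8T cell; other published algorithmic schemes instead trade a little extra 6T capacity and control logic.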

    Algorithmic Memory brings an innovative design methodology that results in a reduction in overall product development risk, design time, and implementation costs. While it is difficult to translate these savings into specific dollar amounts, what we have learned is that by focusing on all the levels of the embedded memory development ecosystem, we can reduce the number of physical memory compilers that our customers have to develop by half. There are substantial cost savings because fewer physical memory compilers have to be developed, maintained, and silicon validated again and again every time there is a technology change. Still, the greatest savings come from our exhaustive formal verification process, which eliminates the need for further silicon validation. For synchronous designs this is a major advancement in design and product development methodologies. It represents an industry sea-change in how SoC IP technology is developed and deployed.

    In the past, there have been pockets of innovation in the embedded memory space. However, with Algorithmic Memory, for the first time, there is now a third-party IP offering that can have a significant, industry-level impact to help advance the semiconductor ecosystem as a whole.


    MemCon 2012: Cadence and Denali
    by Eric Esteve on 08-20-2012 at 7:00 am

    I was very happy to see that Cadence has decided to hold MemCon again in 2012, in Santa Clara on September 18[SUP]th[/SUP]. The session will start with “New Memory Technologies and Disruptions in the Ecosystem” from Martin Lund.

    Martin is the recently appointed (March of this year) Senior VP for the SoC Realization Group at Cadence: he is managing the group in charge of IP, including the Memory Controller product line (DDRn, LPDDRn, or WideIO) and the PCI Express IP that Cadence inherited from the Denali acquisition. With these products, Cadence is competing head-on with Synopsys and, even if the revenue generated by DDRn IP licenses is kept confidential by Cadence, my guess is that both companies are very close in terms of market share.

    Martin Lund’s charter is crystal clear: capitalize on the Denali acquisition and the related IP product lines, and leverage the know-how (SerDes development, Ethernet controllers, and more) that Cadence acquired doing design services for its demanding customers, to build a real IP business unit capable of competing head to head with Synopsys. I have no doubt that Cadence has the right designers, marketers, and the IP product “backbone” to turn this strategy into a success. Then it will be a question of execution, as usual, and maybe this strategy should be bolstered by some clever acquisitions to grow the business faster. We will see…

    If you want to register, just go here.

    If you prefer to have a look at the conference agenda first, you can click here… or read this blog, where I will tell you why I think going to MemCon 2012 is a good idea!

    The first time I attended MemCon was in 2005. At that time I was representing PLDA, and I came with a Xilinx-based board with our x8 PCI Express IP core integrated (this was the first x8 PCIe IP running on an FPGA worldwide, and yes, thanks, we sold a lot of boards, as well as a lot of PCIe IP to our ASIC customers). I must say I was very impressed by MemCon, as I had the chance to listen to a few presentations.

    All these presentations, whether about PCI Express or more specifically about memories, had in common that they were technically deep and very informative. It was not pure marketing; the audience would really learn about the topic (I remember a presentation about the PCI Express protocol given by Rambus; I was PCIe Product Marketing Director at the time, and I learned more than during the long discussions I had with our designers).

    The second reason I was impressed was realizing that Denali could stage such a high-quality event. At that time, in 2005, Denali’s revenue was probably in the $30M to $40M range, or less; they never shared it. That is a good size when you run an IP and VIP business, but you have to compare it with the companies presenting at MemCon: Rambus was the smallest, the others being Micron, Samsung, and the like. Denali was bought in 2010 for $315M by Cadence (or seven times its 2009 revenue!), and this was not by chance. Denali’s greatest strength was its marketing presence. Everybody knows about the Denali party during DAC, and about MemCon. So everybody knows about Denali in the semiconductor industry. Can you think of many companies of that size able to create such a level of awareness? Denali was really the benchmark in terms of marketing in the CAE, IP, and VIP industry! Now you better understand why they could be sold for 7X their yearly revenue…

    To come back to the conference, here is a short list of the presentations (you will find more here):

    • Navigating the Post-PC World, from Samsung
    • Simplifying System Design with MRAM—the Fastest, Non-Volatile Memory, by Everspin
    • Paradigm Shifts Offer New Techniques for Analysis, Validation, and Debug of High Speed DDR Memory, from Agilent
    • LPDDR3 and Wide-IO DRAM: Interface Changes that Give PC-Like Memory Performance to Mobile Devices, by Marc Greenberg from Cadence

    Just a word about the last one, from Marc Greenberg: I saw his presentation in Munich during CDN Live in May, and I can tell you that this guy knows the topic very well. Don’t hesitate to ask him questions (like I did); you will get answers, and you could even start a longer and more informative discussion after the presentation (like I did too!).

    I don’t know if I can make it to MemCon (Santa Clara is a bit far from Marseille), but you should go, and tell me if I was wrong to send you there.

    By Eric Esteve from IPNEST


    A Brief History of SoCs
    by Daniel Nenni on 08-19-2012 at 10:00 am

    Interesting to note: our cell phones today have more computing power than NASA had for the first landing on the moon. The insides of these mobile devices that we can’t live without are not like personal computers or even laptops with a traditional CPU (central processing unit) and a dozen other support chips. The brain, heart, and soul of today’s cell phone is a single chip called an SoC, or System on Chip, which is a literal definition.


    Sources: Device Sales: Gartner, IDC; Chip Sales: ARM, Wired Research

    The demands on cell phones are daunting. What were once simple tasks (talk, text, email) now include photos, music, streaming video, GPS, and artificial intelligence (Apple Siri / Android Robin), all done simultaneously.

    I worked my way through college as a field engineer for Data General minicomputers. CPUs were dozens of chips on multiple printed circuit boards, memory was on multiple boards, and I/O was a board or two. Repairing computers back then was a game of board swap based on which little red lights blinked or stopped blinking on the front panel. My first personal computer was a bit more compact. It had a motherboard with multiple chips and slots to plug in other boards for video, disk, modem, and other interfaces to the outside world. Those boards are now chips on a single motherboard, which is what you will see inside your laptop.

    Today, this entire system is on one chip. Per Wikipedia:

    A system on a chip or system on chip (SoC or SOC) is an integrated circuit (IC) that integrates all components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio-frequency functions—all on a single chip substrate.

    Let’s look at the first iPhone tear down which can be found HERE. The original iPhone was released June 29, 2007 and featured:

    • 480×320 display
    • 16GB storage
    • 620MHz single core CPU
    • 103MHz GPU
    • 128MB DRAM
    • 2M pixel camera

    Compare this to the current iPhone 4s tear down which can be found HERE. The iPhone 4s was released October 4, 2011 and features:

    • 960×640 display
    • 64GB storage
    • 1GHz dual core CPU
    • 200MHz GPU
    • 512MB DRAM
    • 8M pixel camera

    There is a nice series of Smart Mobile articles on SemiWiki that covers the current SoCs driving our phones and tablets.

    It will be interesting to see what the iPhone 5 brings us, but you can bet it will be an even higher level of SoC integration: a quad core processor, a 2048×1536 display, and a 12M pixel camera, yet in a slimmer package.

    The technological benefits of SoCs are self-evident: everything required to run a mobile device is on a single chip that can be manufactured at high volumes for a few dollars each. The industry implications of SoCs are also self-evident: as more functions are consolidated into one SoC, semiconductor companies will also be consolidated.

    The other trend is the transformation from traditional semiconductor companies (IDMs and fabless) to semiconductor intellectual property companies such as ARM, CEVA, and Tensilica. This is partly due to the lack of venture funding made available to semiconductor start-ups (it costs $100M+ to get a leading edge SoC into production), but also due to the mobile market, which demands SoCs that are highly integrated and power efficient with a very short product life. As a result, hundreds of semiconductor IP companies are emerging, hoping to ride the SoC tidal wave and leave traditional semiconductor companies in their wake.

    A Brief History of Semiconductors

    A Brief History of ASICs
    A Brief History of Programmable Devices
    A Brief History of the Fabless Semiconductor Industry
    A Brief History of TSMC
    A Brief History of EDA
    A Brief History of Semiconductor IP
    A Brief History of SoCs


    Ex ante: disclose IP before, not after standardization
    by Don Dingee on 08-17-2012 at 3:46 pm

    Many in the audience here are involved in standards bodies and specification development, so the news from the Apple v. Samsung trial on the invocation of ex ante in today’s testimony is useful.

    I worked with VITA, the folks behind the VME family of board-level embedded technology, on their ex ante policy several years ago, and can share that insight. I’m not a lawyer, nor do I play one on TV, so this is the highly simplified, non-legalese version of the rules. Consult your legal department with any questions.

    • If you’re working on a specification with a standards body, and it looks like your company has IP, in the form of a patent or a pending patent, that applies, you must disclose it. You’re not yielding your IP rights by doing so, and in fact you’re protecting them for later.
    • If the standards body and its membership decide that the technology is appropriate for use in the specification, it’ll proceed through the normal channels of approval with the accompanying IP disclosures so balloters are aware of the possible implications.
    • The standards body and its membership might decide to re-engineer the specification to avoid impinging on the IP in question.
    • Should the standard be approved with the IP in question, there will be a discussion of FRAND – fair, reasonable, and non-discriminatory licensing for use of the IP inside.

    What this prevents is unwitting or unvigilant members of a standards body picking up a duly approved specification, implementing it, and then finding themselves the target of an IP claim from the company that got its IP engineered in.


    ETSI, the European telecom folks behind 3GPP, LTE, and other specifications, just whacked Samsung over the head with its ex ante policy in testimony today. Three articles for more reading:

    CNET: Former ETSI board chief: Samsung flubbed disclosures
    EETimes: Apple Claims Samsung Views Patent Disclosures As ‘Stupid’
    AllThingsD: Apple: Samsung Didn’t Live Up to Its Standards Obligations

    Ex ante has been vetted through the US Dept. of Justice and forms legal precedent, so whether you agree with it or not isn’t the issue. It can and will come back to the surface if the standards body backs its members.

    Well played, Apple. We’ll see where this goes.


    I/O Bandwidth with Tensilica Cores
    by Paul McLellan on 08-17-2012 at 3:00 pm

    It is a truism that somewhere in an SoC, something limits a further increase in performance. One area where this is especially noticeable is when a Tensilica core is used to create a highly optimized processor for some purpose. The core performance may be boosted by a factor of 10 or even as much as 100. Once the core itself is no longer the limiting factor, I/O bandwidth to get data to and from the core often comes to the head of the line. Traditional bus-centric design just cannot handle the resulting increase in data traffic.


    A long time ago processors had a single bus for everything. Modern processors separate that so that they have one or more local buses to access ROM and RAM and perhaps other memories, leaving a common bus to access peripherals. But that shared bus to access the peripherals becomes the bottleneck if the processor performance is high.

    Tensilica’s Xtensa processors can have direct port I/O and FIFO queue interfaces to offload overused buses. There can be up to 1024 ports and each can have up to 1024 signals, boosting I/O bandwidth by thousands of times relative to a few conventional 32 or 64 bit buses.
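    For a back-of-the-envelope sense of that claim, here is a sketch using the maximum numbers quoted above, assuming one transfer per clock on every interface (an illustrative upper bound only; real designs use far fewer ports, and sustained rates depend on the surrounding logic):

```python
# Raw interface width of the port I/O vs. a conventional bus,
# at one transfer per clock each (illustrative upper bound only).
ports = 1024              # maximum number of ports
signals_per_port = 1024   # maximum signals per port
bus_width = 64            # one conventional 64-bit bus

ratio = (ports * signals_per_port) // bus_width
print(ratio)  # 16384, i.e. "thousands of times" more raw width
```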


    But wait, there’s more. Tensilica’s flexible length instruction extension (FLIX) allows designers to add separate parallel execution units to handle concurrent computational tasks. Each user-defined execution unit can have its own direct I/O without affecting the bandwidth available to other parts of the processor.


    While plain I/O ports are ideal for fast transfer of control and status information, Xtensa also allows designers to add FIFO-like queues. This allows the transfer of data between the processor and other parts of the system that may be producing or consuming data at different speeds. To the programmer these look just like traditional processor registers but without the bandwidth limitations of shared memory buses. Queues can sustain data rates as high as one transfer per clock cycle or 350Gb/s for each queue. Custom instructions can perform multiple queue operations per cycle so even this is not the cap on overall bandwidth from the processor core. This allows Xtensa processors to be used not just for computationally intensive tasks but for applications with extreme data rates.
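    To make the producer/consumer decoupling concrete, here is a toy software model of such a queue (an illustration only; the class and method names are invented and are not Tensilica’s actual API). The FIFO lets the two sides run at different rates: a full queue stalls the producer, and an empty queue stalls the consumer:

```python
from collections import deque

class QueueInterface:
    """Toy model of a FIFO queue between a processor and another part
    of the system that may produce or consume data at a different rate."""

    def __init__(self, depth):
        self.fifo = deque()
        self.depth = depth

    def push(self, word):
        """Producer side: returns False (stall) when the queue is full."""
        if len(self.fifo) >= self.depth:
            return False
        self.fifo.append(word)
        return True

    def pop(self):
        """Consumer side: returns None (stall) when the queue is empty."""
        if not self.fifo:
            return None
        return self.fifo.popleft()
```

    In the real hardware the stall is handled by the processor pipeline rather than a return value, which is why, to the programmer, a queue looks like an ordinary register that is always ready.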

    It is no good adding powerful capabilities if they are too hard to use. I/O ports are declared with simple one-line declarations (or a check-box configuration option). A check-box configuration is also used to define a basic queue interface although a handful of commands can be used to create a special function queue.

    Ports and queues are automatically added to the processor and, of course, are completely modeled by the Xtensa processor generator, reflected in the custom software development tools, instruction set simulator (ISS), bus functional model and EDA scripts.

    A white paper with more details is here.



    What’s Next For Emerging Memories
    by Ed McKernan on 08-17-2012 at 11:00 am

    In doing some digging in preparation for the start of www.ReRAM-Forum.com, Christie Marrian asks whether ReRAM/CBRAM technology is approaching a ‘tipping point’ relative to NAND flash. You can read more of his analysis over at the blog he moderates (ReRAM-Forum.com). Also, a note to readers: the blog is interested in collecting new posts from engineers and developers working with today’s memory and emerging memory technologies. Drop Christie a note with your analysis; if you have written a paper on emerging memories, the site welcomes original research work.


    2012 semiconductor market decline likely
    by Bill Jewell on 08-16-2012 at 9:00 pm

    The worldwide semiconductor market in 2Q 2012 was $73.1 billion, according to WSTS data released by the SIA. 2Q 2012 was up 4.7% from 1Q 2012 but down 2.0% from 2Q 2011. Major semiconductor companies are generally expecting slower revenue growth in 3Q 2012 versus 2Q 2012. The table below shows revenue estimates for calendar 3Q 2012 for the largest semiconductor suppliers which provided guidance. TSMC, the largest wafer foundry company, is included since its business is a key indicator of the outlook for many fabless companies.

    TSMC, Texas Instruments, Qualcomm, STMicroelectronics, and AMD all predicted revenue declines at the low end of their 3Q 2012 guidance. The midpoints of guidance ranged from -1% to +5.9%. The high end of guidance was over 9% for Intel and Broadcom, but below 6% for the other companies. Renesas was an exception, forecasting 17.6% growth in 3Q 2012 after an 11% decline in 2Q 2012.

    The major memory suppliers – Samsung, SK Hynix and Micron Technology – did not provide specific revenue guidance for 3Q 2012 but expressed similar outlooks: a weak DRAM market and a steady to improving flash memory market. Given the lackluster guidance by major semiconductor companies, the 3Q 2012 semiconductor market will likely show slower growth than the 4.7% in 2Q 2012. This slow growth will likely continue into 4Q 2012. TSMC indicated it expects a decline in revenue in 4Q 2012 from 3Q, which could be as severe as double digits.

    With semiconductor market growth sluggish in the second half of 2012, it appears the full year 2012 will show a decline from 2011. We at Semiconductor Intelligence believe our February 2012 forecast of a 1% decline for 2012 was the first forecast from an analyst firm to predict a decline. We revised the forecast up to 2% in May, based on signs at the time of improvement in both the worldwide economy and in electronics markets. We have returned to the 1% decline in our latest forecast.

    Most analyst firms expect 2012 semiconductor market growth in the 4% to 7% range. WSTS’s May forecast was for only 0.4% growth. The Carnegie Group in July forecast a flat market. The Information Network in August predicted a decline in 2012, but did not state a specific number. Mike Cowan’s forecast model based on historic WSTS data is updated each month. Cowan’s 2012 forecast first went negative in March, turned slightly positive in June and July, and went negative again in August at -0.9%.

    The semiconductor market in the last twelve years has shown years of growth over 30% and declines as high as 32%. From this perspective, the difference between a low single-digit decline and a low single-digit increase in 2012 does not appear meaningful. However it is important from a psychological standpoint. The semiconductor industry does not want to see a decline in 2012, especially after growth of only 0.4% in 2011. Semiconductor companies would like to show positive revenue growth to their shareholders in 2012, even if very slight, rather than a decline. Unfortunately a decline is becoming more likely. The major economic concerns – the European debt crisis and weak U.S. recovery – are not likely to be resolved before the end of 2012. Two key drivers of the semiconductor market are showing no growth. IDC estimates PC shipments in 2Q 2012 were flat versus a year ago. IDC also said mobile phone shipments were up only 1% from a year ago in 2Q 2012 after a 1.5% decline in 1Q 2012.

    Semiconductor Intelligence


    The Generational Legacy of Steve Jobs
    by Ed McKernan on 08-16-2012 at 12:00 pm

    Truly great leaders are recognized by the impact they leave several generations down the road. Roosevelt and Churchill are two historical figures who together saved Western civilization, thus leaving a tremendous legacy even now, two generations later. In the semiconductor world we mark our generations in the two-year cadence of Moore’s Law. When Steve Jobs passed away, it was noted in Walter Isaacson’s book that he left Apple a 4-year product development pipeline. Surely this is significant with regard to Apple’s future viability, but I am beginning to believe that he also put in place an IP and branding strategy whose legacy will last a generation, which is atypical for technology companies. Perhaps only IBM can claim that. The reasoning behind this post is the current courtroom battle between Apple and Samsung. Steve Jobs used the word “thermonuclear” to describe how he would destroy Android, and I am beginning to believe his intention went beyond an IP fight to a public humiliation of the Android cloners. Samsung is being forced to go through what I would call a “Branding Perp Walk.”

    Steve Jobs commented that at the time he left Apple in 1985, the company had a 10-year technology lead. Given that he had spent 10 years building Apple, one could conclude that one of his work years was equivalent to two work years developing a PC at IBM or an O/S at Microsoft. History, though, has proven him right, as Microsoft was not able to match the 1984 Macintosh until Win95 was launched. There is no doubt that the Mac was way ahead of its time, as the software overwhelmed the processor, graphics, and DRAM hardware. It would take several Moore’s Law generations for silicon performance to improve and costs to drop to a reasonable range to support the mass market with GUI-based PCs. When that happened, however, the cloners, led by Dell, feasted off the higher-margin IBM, Compaq, and Apple products, reducing those brands to differentiation by price.

    Fast-forward a dozen years to 2007, and the introduction of the iPhone brings the most revolutionary computing platform since the 1984 Macintosh. Steve Jobs knows it will be copied, and unlike in the John Sculley era, Apple will need to vigorously defend its IP and its brand or fall into the Dell trap. Eric Schmidt, taking the role of Bill Gates, begins executing the software commoditization strategy with the free Android O/S. The cloners are set in motion to the point that, in the case of Samsung, everything down to the product packaging is replicated. Apple needs to put a halt to the rapidly expanding Android ecosystem and destroy not only Google but the cloners. If the whole world can see that Apple’s competitors are nothing more than fake knock-offs, then their brands can be severely damaged, which breaks a business’s operating model. How many individuals will want to show off their new Android smartphone to friends at a dinner party after the supplier has been slapped down in the world of customer opinion?

    To convince a future judge, jury, and the world at large that Apple plays fair, Steve Jobs laid a honey trap that Samsung fell into. Per court testimony, we find that Apple was willing to license its patents to Samsung at the rate of $30 per smartphone and $40 per tablet. Had Samsung taken the license, the impact on profits would have been so great that it could not have competed with Apple. By declining the license and not negotiating in good faith, Samsung took the risk of being shown to have no respect for an innovator’s property, and it is now being held up as an example of improper business practices. It is unclear at this time what the settlement will be in terms of royalty payments or penalties. The more relevant point is that Apple will walk away with an enhanced brand while Samsung and the other smartphone cloners will be left with brands that are significantly diminished. This is Apple’s way of preserving its brand and profit streams as it pulls customers ever tighter into the iCloud ecosystem. In a few years, I think that dominance will make it too difficult for users to leave the Apple cloud or for a new competitor to assault the castle walls.

    Soon the trial in California will end and a settlement will come forth, but then the next set of trials will begin in both the US and Europe. Do Apple’s competitors using Android want to continue fighting this “branding death march” against a company with the deepest pockets in the industry, or will they migrate to Microsoft’s O/S? In the end, it was not just the 4-year product pipeline that is the legacy of Steve Jobs, but also the IP and branding strategy that will extend Apple’s dominance out beyond a generation, a time span that marks true greatness.

    Full disclosure: I am long INTC,QCOM,AAPL,ALTR


    SystemVerilog from Nevada?

    SystemVerilog from Nevada?
    by Daniel Payne on 08-16-2012 at 10:58 am

    When I think of EDA companies, the first geography that comes to mind is Silicon Valley because of its rich history of semiconductor design and fabrication; being close to your customers always makes sense. In the information era, however, it shouldn’t matter so much where you develop EDA tools, so there has been a gradual shift to a wider geography. Aldec is one of those early EDA companies, started in 1984, just three years after Mentor opened its doors, yet Aldec is headquartered in Nevada instead of Silicon Valley. I wanted to learn more about Aldec tools and decided to watch their recorded webinar on SystemVerilog.

    The first time that I used Aldec tools was back in 2007, when Lattice Semiconductor replaced Mentor’s ModelSim with the Aldec Active-HDL simulator. I updated a Verilog training class and used Active-HDL for my lecture and labs delivered to a group of AEs at Lattice in Oregon. Having used ModelSim before, it was actually quite easy for me to learn and use Active-HDL. For larger designs you would use the Aldec tool called Riviera-PRO.

    Webinar

    Jerry Kaczynski, a research engineer at Aldec who has been with the company since 1995, presented the webinar. His background includes working on simulator standards. With 53 slides in just 65 minutes, the pace of the webinar is brisk and filled with technical examples; no marketing fluff here.


    SystemVerilog came about because Verilog ran out of steam on the verification side. Accellera sponsored SystemVerilog, and the first standard extending Verilog arrived in 2005; by 2009 the Verilog and SystemVerilog standards were merged. SystemVerilog has various audiences:

    • SystemVerilog for Design (SVD) – for hardware designers
    • SystemVerilog Assertions (SVA) – both design and verification
    • SystemVerilog Testbench (SVTB) – mostly verification
    • SystemVerilog Application Programming Interface (SV-API) – CAD integrators

    SVD
    Verilog designers get new features in SystemVerilog like:

    • Rich literals: a = '1; small_array = '{1,2,3,42};
    • User-defined data types
    • Enumeration types (useful in state machines)
    • Logic types (can replace wire and reg)
    • Two-value types (bit, int) – simulates faster than 4 state
    • New operators (+=, -=, *=, /=, %=, &=, |=, <<=, >>=)
    • Hardware blocks (always_comb, always_latch, always_ff)
    • Implicit .name connections for modules, also implicit .* connections in port list
    • Module time (timeprecision, timeunit)
    • Conditional statements (unique case, priority keyword – replaces parallel case and full case pragmas)
    • New do/while Loop statement
    • New break and continue controls

    • Simpler syntax for Tasks and Functions
    • New procedural block called final
    • Aggregate Data Types (Structures, Unions, Arrays – Packed, Unpacked)
    • Structures added (like the record in VHDL or C struct)
    • Unions added
    • Array syntax simplified

    • Special unpacked arrays (Dynamic, Associative, Queues) – not synthesizable
    • Packages – organize your code better using import
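
    Several of these SVD features can be seen working together in a short sketch. This is my own illustrative fragment, not one of the webinar’s slides, and the module and signal names are hypothetical:

    ```systemverilog
    // A small state machine using enum types, logic, always_ff/always_comb,
    // and unique case (which replaces the old parallel_case pragma).
    module traffic_ctrl (
      input  logic clk,
      input  logic rst_n,
      output logic go
    );
      // Enumeration type: readable state names instead of raw encodings
      typedef enum logic [1:0] {IDLE, RUN, DONE} state_t;
      state_t state, next;

      // always_ff declares design intent: sequential logic only
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) state <= IDLE;
        else        state <= next;

      // always_comb declares design intent: combinational logic only
      always_comb begin
        next = state;
        unique case (state)
          IDLE: next = RUN;
          RUN:  next = DONE;
          DONE: next = IDLE;
        endcase
      end

      assign go = (state == RUN);
    endmodule
    ```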

    SVA
    Assertions are used in property-based design and verification; they look at the design from a functionality viewpoint.

    • Look for illegal behavior
    • Assumptions on inputs
    • Good behavior, coverage goals

    • HW designers add assertions in code to document and verify desired behavior
    • System level designers can add protocol checkers at top level
    • Verification engineers can add verification modules bound to an object to monitor behavior
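
    As a sketch of what those three uses look like in SVA, here is a hypothetical request/grant handshake checker of my own (the signal names and timing window are illustrative, not from the webinar):

    ```systemverilog
    // Concurrent assertions covering the three uses listed above:
    // illegal behavior, desired behavior, and a coverage goal.
    module bus_checker (input logic clk, req, gnt);
      // Illegal behavior: grant must never appear without a request
      assert property (@(posedge clk) gnt |-> req)
        else $error("gnt asserted without req");

      // Desired behavior: every request is granted within 1 to 3 cycles
      assert property (@(posedge clk) req |-> ##[1:3] gnt);

      // Coverage goal: confirm a back-to-back request actually occurs
      cover property (@(posedge clk) req ##1 req);
    endmodule
    ```

    A verification engineer could bind such a checker to a design module so the RTL source stays untouched.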

    SV Interfaces
    For communicating between modules, SV Interfaces bring new abilities and less typing.
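
    The idea is that one named bundle replaces the same port list being repeated in every module. A minimal sketch (the interface and module names are my own, hypothetical examples):

    ```systemverilog
    // One interface declaration replaces duplicated port lists.
    interface simple_bus (input logic clk);
      logic       valid;
      logic [7:0] data;
      // modports define the directions as seen by each side
      modport master (output valid, output data, input clk);
      modport slave  (input  valid, input  data, input clk);
    endinterface

    // Each module takes the whole bundle as a single port.
    module producer (simple_bus.master bus);
      // drives bus.valid and bus.data
    endmodule

    module consumer (simple_bus.slave bus);
      // samples bus.valid and bus.data
    endmodule
    ```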

    SV Testbench

    • Class is used for OOP
    • Inheritance – reuse previous classes
    • Polymorphism – same name do different things depending on class
    • Abstract classes – higher level
    • Constrained random testing (CRT)
    • Spawn threads

    • Mailbox (type of Class) – FIFO for message queue
    • Functional Coverage – coverage analysis (covergroups, coverpoints, bins)
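
    To give a feel for how the testbench pieces fit together, here is a small sketch of my own combining a class, a constraint, and an embedded covergroup (the transaction fields and bins are hypothetical):

    ```systemverilog
    // Transaction class: OOP, constrained random, and functional coverage.
    class packet;
      rand bit [7:0] addr;
      rand bit [7:0] data;

      // Constrained random: keep addresses out of a reserved range
      constraint legal_addr { addr inside {[8'h10:8'hEF]}; }

      // Functional coverage: covergroup with a coverpoint and bins
      covergroup cg;
        coverpoint addr {
          bins low  = {[8'h10:8'h7F]};
          bins high = {[8'h80:8'hEF]};
        }
      endgroup

      function new();
        cg = new();  // embedded covergroups are instantiated in new()
      endfunction
    endclass

    module tb;
      initial begin
        packet p = new();
        repeat (10) begin
          void'(p.randomize());  // solver picks values satisfying constraints
          p.cg.sample();         // accumulate functional coverage
        end
      end
    endmodule
    ```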

    Verification Methodologies

    • Verification Methodology Manual (VMM) – created by Synopsys, both testbench and design as SystemVerilog
    • Open Verification Methodology (OVM) – created by Mentor and Cadence, has SV and SystemC testbench with design files in any language
    • Universal Verification Methodology (UVM) – created by Accellera to unify VMM and OVM
    • Teal/Truss – by Trusster as Open Source HW verification utility and framework in C++ and SV

    Q&A
    Q: What tools support SVA?
    A: SVA is included in Riviera-PRO simulator.

    Q: How could I use SystemVerilog in my VHDL testbench?
    A: You could bind SystemVerilog as checkers, then connect them to entities or components in VHDL.

    Q: What is difference between logic and reg?
    A: Logic does more than reg; it can also be used where wire was used.

    Q: Can I connect VHDL inside of SystemVerilog?
    A: That’s not controlled by a standards body, so it’s tool specific.

    Q: Can I synthesize a queue?
    A: No, not really today.

    Q: How are modports related to assertions?
    A: Not directly related, modports used to define directions of interconnect.

    Q: Can we execute random in modules?
    A: Random is used for classes.

    Q: Will associative arrays handle multi-dimensions?
    A: Not yet.

    Q: Good SV books?
    A: Depends on if you do design or verification. Many good choices. Design subset – Sutherland’s book. Browse Amazon.com.

    Q: Constrained random test generation details?
    A: Just an overview today, sorry.

    Summary
    SystemVerilog gives the designer richer ways to express hardware than Verilog, more clearly defined intent, better verification with assertions, and fewer lines of code. It’s about time to upgrade from classic Verilog to SystemVerilog in order to reap the benefits. VHDL designers may also benefit from using SV for verification.