
SoC Implementation, Sometimes You Need a Plan B
by Daniel Payne on 02-13-2013 at 11:14 am

I read two blogs this week that got me thinking about contingencies in SoC implementation. By contingency I mean this: you use the leading vendor's EDA tool flow for logic synthesis, then after a few weeks of concerted effort you discover that you cannot route the design without expanding the die size, and you have to come up with a Plan B. The blogs were from two people at Oasys Design Systems:

An AE named Ganesh Venkataramani talks about visiting a semiconductor company in the mobile market that had a chip in tape-out that was growing too large because of congestion. They couldn't even fit the whole chip into their logic synthesis tool at one time, which made it tough to figure out whether there were any top-level timing issues.

Large SoC design teams have separate front-end and back-end engineers who perform very different tasks with different logical and physical IC design tools. This particular design team decided to give the Oasys tools a shot at resolving their routing congestion dilemma; I mean, what did they have to lose?

With the help of the AE they were able to get their design loaded into the Oasys tools in a day. The physical synthesis tool is called RealTime Explorer, and it could synthesize the entire SoC in a couple of hours. Next, they visualized the results by cross-probing between the routed IC layout and the RTL source code to identify the source of congestion. Plan B worked out quite well for this design team because they had now pinpointed their routing congestion back to the RTL, where wide muxes had been used. Pretty impressive for just one day's work, compared to previous efforts that fell short after weeks with multiple engineers.
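
To make the wide-mux point concrete, here is a hypothetical sketch (my own illustration, not the customer's actual RTL) of the kind of flat, wide mux that tends to show up as a congestion hotspot, since it forces thousands of nets to converge on one small region of the die:

```systemverilog
// Hypothetical congestion-prone RTL: a flat 64:1 mux over 128-bit buses
// pulls 64 x 128 = 8192 input nets into a single selection structure.
module wide_mux #(
  parameter int WIDTH  = 128,
  parameter int INPUTS = 64
) (
  input  logic [WIDTH-1:0]          data_in [INPUTS],
  input  logic [$clog2(INPUTS)-1:0] sel,
  output logic [WIDTH-1:0]          data_out
);
  // A single flat select: synthesis builds one very wide mux tree here,
  // and the router has to bring every input bus to the same neighborhood.
  assign data_out = data_in[sel];
endmodule
```

Once cross-probing points at a structure like this, the usual remedies are to re-encode or pipeline the selection, split the mux into distributed stages, or rethink the architecture so fewer wide buses converge on one spot.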

The second blog was written by Dan Ganousis, an EDA veteran who started out as a design engineer just one year before me, back in the 1970s. His blog also focuses on the implementation issue of routing congestion.

So what happens when the front-end RTL team assembles all their IP, writes new code, verifies the logic, does some logic synthesis, then throws the design over the wall to the back-end IC implementation team? Often the result is an IC design deemed “Unroutable”. The good news is that using an EDA tool like RealTime Explorer can let you cross-probe between the physical and logical worlds:

The screenshot above shows the physical routing congestion on the right, where red marks the areas of highest routing congestion. The cross-probing lets you quickly identify the RTL source code responsible for this congestion, so you know what to modify in the RTL to improve your physical layout.

Here’s the big picture view of analysis features that you’ll find in RealTime Explorer:

Summary
If your present EDA tool flow isn't hitting the chip size that you want, or getting you to tape-out quickly enough, then consider calling Ganesh as your Plan B.


An Affordable AMS Tool Flow gets Integrated
by Daniel Payne on 02-13-2013 at 11:10 am

EDA tools come in all sizes and price ranges, so I was pleased to read that Tanner EDA has completed an integration with Incentia. A few months ago Tanner announced their integration with Aldec for digital simulation, and today's announcement extends their tool suite to include digital synthesis and static timing analysis. Here's the entire integrated AMS tool flow:



Extending Wintel collaboration to Nokia?
by gauravjalan on 02-12-2013 at 10:00 pm

Any article or blog on mobile phones talks about the ongoing battle between Apple and Samsung, or the never-ending struggle of Nokia and Blackberry. These reports are primarily based on the US market, which hosts close to 322 million mobile subscriptions according to a December 2012 report by mobiThinking. With 81% (256 million) of the US population on 3G/4G, i.e. smartphones, it makes sense for the articles to revolve around them. While the US leads the market in terms of product launch and adoption, 30 percent of the world's mobile users live in India and China. China leads in terms of subscriptions with over 1 billion subscribers, of which 212 million are 3G/4G users. India is ranked second with 906.6 million total (699 million active) subscribers and only 70.6 million 3G/4G users. This means it will take quite some time for smartphones to overtake feature phones there, and the driver will likely be falling price points rather than consumer demand, unless a killer application cuts in. It also means that with Nokia still leading the feature phone market, it has some room left to pick up.

This belief of mine grew stronger when I visited a few stores in search of a phone recently. My wife's touch-screen CDMA phone from a leading brand lost its touch sensitivity again. It has already demanded service twice in the last 5 months. In fact, it was the 3rd CDMA phone bought in 2 years while our GSM one still continues to operate smoothly. With limited CDMA phone options available, it was always a compromise on the handset. The reason for sticking to CDMA was the unlimited talk-time plan offered by one of the carriers. With the capital cost balancing the savings on operating cost, we decided to transition from CDMA to GSM, which has a full range of handsets. With a tablet available for internet browsing, the phone's usage is limited to calls. Taking a rational approach, we decided to look for a sub-$200 phone to meet this need (I'm still guessing whether the result was an outcome of my convincing capability or my wife's ability to understand the logic). Note: in terms of carriers providing handsets with a 1- or 2-year plan, India has limited options. Phones are mostly bought directly from stores hosting a range of handsets from a variety of sources.

On our visit to the store, there was a wide range of options within the $200 price tag and Nokia was clearly leading the range. The price points for the features offered were amazing. Samsung failed to impress in this category and there was not a single handset from LG. There were a lot of Chinese phones, but the packaging is still poor, and no matter that they provide double the features for the same price, if the phone hangs, God help you! In the $200 category, one of the impressive handsets was the Xolo from Lava, powered by an Intel Atom. It had the largest, clearest display and the most features in this price category. Finally, Nokia won the choice by being closest to the requirements in all respects.

After reaching back home, I checked the latest reports on the Indian market to validate our buy. According to CMR's India Mobile Handsets Market Review, 2Q 2012 (September 2012 release), during 1H 2012 (January-June 2012) total India shipments of mobile handsets were recorded at 102.43 million units. During the same period, total India shipments of smartphones were 5.50 million units. Interestingly, Nokia still leads the market here.

Even in the smartphone market in India, Nokia still has a strong presence, followed by RIM. No Apple yet!

The cell phone market in India is unconventional in terms of handsets, services and usage, but carries strong potential. Considering the developments in this domain, what if Nokia joined the Wintel partnership and pointed it at the cell phone market? Intel recently announced that it is looking to attack the lower price segment to increase its footprint in mobile. Nokia has already tied up with Microsoft for its OS. Why not extend this partnership further and make a 'NokWinTel' or 'WinTelKia'? The latter could be interesting, as "Kia" in Hindi means "did", giving the partnership a regional flavour. All 3 are recognized as top brands in the local market. They need to hit the right price points with a variety of products, along with the right look & feel. If this trio (Microsoft, Intel & Nokia) is able to collaborate and attack this LONG FAT TAIL, all 3 would be able to claim a bigger share for their respective products.


Synopsys Magma Acquisition Stock Trading Under Investigation?
by Daniel Nenni on 02-12-2013 at 7:00 pm

An attorney from the Division of Enforcement at the U.S. Securities & Exchange Commission contacted me regarding activity on SemiWiki. Not a great way to start a Monday! Given the parameters of the discussion and the type of questions, it undoubtedly (in my humble opinion) concerns the Synopsys acquisition of Magma.

This conversation and the resulting heart palpitations give ample support to John Cooley’s decision to keep DeepChip.com a “dumb” HTML site. SemiWiki, on the other hand, is an “intelligent” site built on a relational database with a content management system enabling full social media capabilities, data mining, and analytics.

The capability in question here is SemiWiki's private email, where registered members can talk amongst themselves under the cover of their screen names. SemiWiki respects the privacy of our members and will never willingly disclose member information. You can shine bright lights in our eyes and waterboard us; we are not talking. A subpoena from the Federal Government, however, is a completely different story.

Why would people commit felonies on SemiWiki or any other social media site under the guise of anonymity? The same reason why politicians tweet their private parts, why people post their crimes on Facebook, and why movie stars and famous athletes send incriminating emails and texts. Because they think they are clever, but they are not.

The most recent example that comes to mind is former four-star General and CIA Director David Petraeus, who communicated with his mistress via draft emails on a Gmail account where no emails were actually sent. Clever, right? Apparently not, as the deleted draft emails were "discovered" during the investigation. Bottom line: digital activity leaves tracks that cannot be covered no matter how clever you think you are, believe it.

The other interesting aspect of this story is Frank Quattrone. From a previous blog, "Synopsys Eats Magma…":

Investment banker Frank Quattrone, formerly of Credit Suisse First Boston (CSFB), took dozens of technology companies public including Netscape, Cisco, Amazon.com, and coincidentally Magma Design Automation. Unfortunately CSFB got on the wrong side of the SEC by using supposedly neutral CSFB equity research analysts to promote technology stocks in concert with the CSFB Technology Group headed by Frank Quattrone. Frank was also prosecuted personally for interfering with a government probe.

To make a long story short: Frank Quattrone went to trial twice: the first trial ended in a hung jury in 2003 and the second resulted in a conviction for obstruction of justice and witness tampering in 2004. Frank was sentenced to 1.5 years in prison before an appeals court reversed the conviction, and prosecutors agreed to drop the complaint a year later. Frank didn't pay a fine, serve time in prison, or admit wrongdoing! Talk about a clean getaway! Quattrone is now head of merchant banking firm Qatalyst Partners, which, coincidentally, handled the Synopsys acquisition of Magma on behalf of Magma.

A conspiracy theorist might think that the Federal Government is keeping an extra close eye on Frank hoping to rid themselves of the black eye he gave them last time around. I’m not a conspiracy theorist so I really wouldn’t know.

I’m including this disclaimer again since it apparently made the legal folks from the SEC chuckle:

Disclosure: This blog is opinion, conjecture, rumor, and legally non-binding nonsense from an internationally recognized industry blogger who does not know any better. To be clear, this blog is for entertainment purposes only.


Video? Tensilica Has You Covered
by Paul McLellan on 02-12-2013 at 2:01 am

Video is a huge and growing area, and advanced imaging applications are becoming ubiquitous. By "advanced" I mean more than just things like the cameras in your smartphone. There is lots more coming: high-dynamic-range (HDR) photography, gesture recognition, more and more intelligent video in cars to keep us safe, face recognition and so on. And not at the resolutions that we are used to; things are going ultra-high definition (UHD) with 4Kx2K pixels and 18-megapixel cameras (in our phones). The result: video processing requirements are doubling every 18 months, whereas Moore's Law is delivering a doubling more like every 3 years these days.

So what's a designer to do? The application processor chip in a smartphone already has some cores, can't we use them? It probably has a dual- or quad-core ARM Cortex of some sort. It probably has a dual- or quad-core GPU, maybe from Imagination. It probably already has some specialized video circuitry such as H.264 encode and decode. Isn't that enough? Even ignoring power, this is not enough processing performance for many of these new applications. With a realistic power limit it is not even close. Video didn't just kill the radio star, it is killing the power budget too.

First, the ARM. High-resolution video processing requires at least 4 cores, but at 1.5GHz that is 3W, which is about 10 times the power budget. At least it is easy to program, though. If we try to use the GPU for the image processing, it is not a very good match, since the GPU is focused on floating point and 3D graphics. Plus, GPUs are notoriously hard to program for anything other than the graphics pipeline for games and the like. Using dedicated hardware can work well for things like codecs, where the standards are fixed (for a decade or so), but the new application areas are in flux, with the algorithms changing all the time. And while adding a couple of specialized blocks can be attractive compared to using that ARM, the tradeoff doesn't look so good once you get up to a dozen or more blocks. They take up a lot of area and consume a lot of power.
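
Unpacking that arithmetic (my reading of what the numbers above imply, not figures Tensilica quotes directly): 3W across 4 cores suggests roughly 0.75W per core at 1.5GHz, and "about 10 times the power budget" puts the video power budget in the neighborhood of 0.3W:

\[
P_{ARM} \approx 4 \times 0.75\,\mathrm{W} = 3\,\mathrm{W} \approx 10 \times P_{budget} \;\Rightarrow\; P_{budget} \approx 0.3\,\mathrm{W}
\]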


Today Tensilica is announcing their Imaging Video Processor (IVP) family and the first member of that family. Of course, under the hood this is built on top of the Xtensa VLIW processor generation technology and the associated software technology (vectorizing compilers, modeling, etc.) that goes with it. So while the video aspects are new, the fundamental technology is mature.

Of course the focus is on low-power handheld devices since these have the most severe constraints: low power budgets, consumer price points and product cycles. The IVP hits the sweet spot between ease of programming and energy efficiency.


The core is licensable, but that's not all there is: RTL, EDA scripts, an instruction-set simulator, an Eclipse-based IDE, a C compiler, a debugger, image processing applications, operating systems, and documentation. And software partners with deep video expertise who work on the bleeding edge of video algorithms.

In the crucial measure of 16b pixel operations per second per watt, IVP is about 10 to 20X as efficient as using the ARM Cortex, and about 3 to 6X as efficient as using the GPU. All with the programming flexibility of a software implementation, a big advantage over direct implementation in RTL in an area where tweaks to algorithms and even entirely new algorithms are commonplace.

Oh, and there is an FPGA development platform too, so that you can hook up an IVP to cameras, TVs, screens or whatever.

Tensilica will be showing the new product in Barcelona at Mobile World Congress at the end of the month. They are in Hall 6 at booth D101. If you are lucky enough to be in Barcelona, one of my favorite European cities, then go by and see them. And don’t miss all the amazing Gaudi architecture. Or the Miro museum. And tapas on la Rambla.

More details on Tensilica’s website here.



Assertion Synthesis: Atrenta, Cadence and AMD Tell All
by Paul McLellan on 02-11-2013 at 6:22 pm

Assertion Synthesis is a new tool for verification and design engineers that can be used with simulation or emulation. At DVCon, Yuan Lu of Atrenta is presenting a tutorial on Atrenta's BugScope, along with John Henri Jr of Cadence explaining how it helps emulation and Baosheng Wang of AMD discussing AMD's experience with the product.

Creating an adequate number of high quality assertions and coverage properties is a challenge in any verification plan. Assertion Synthesis takes as input the RTL description of the design and its test environment and automatically generates high quality whitebox assertions and coverage properties in standard language formats such as SVA, PSL and Verilog. Assertion Synthesis enables an automated assertion-based verification methodology that improves design quality and reduces verification overhead.

Here's the 5,000-foot version of what Assertion Synthesis does. BugScope watches the simulation (or emulation, which I think of as a special sort of simulation) of the design and observes its behavior. Based on what it sees, BugScope automatically generates syntactically correct assertions about the design: behaviors that it believes are always true based on the simulation.

The designer and verification engineers can use these assertions in three different ways:

1. They agree with BugScope that the assertion should always be true. There is now a new assertion, ready for use in subsequent verification runs. Constructing syntactically correct assertions by hand can take hours, so this is a real time saver. Of course, once included in the verification run, the assertion will trigger any time the condition is violated, making it easy to track down a newly introduced problem.
2. The assertion should not always be true, but there are not enough simulation vectors to show BugScope any situation in which it is actually false. This is a real coverage hole and more vectors are required, which is obviously also very useful information.
3. The assertion describes a behavior that should, in fact, never happen. BugScope has identified a real design issue: something it observed happening that the designer knows should not.

All three of these alternatives result in an improved verification process: more assertions added very cheaply, a coverage hole identified, or a real error found in the design.
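
As a concrete (and purely hypothetical) illustration, here is the kind of whitebox SVA property such a flow might propose for a small FIFO controller, with the three outcomes mapped onto it. The module, signal names and property text are my own sketch, not actual BugScope output:

```systemverilog
// Hypothetical design snippet: a FIFO controller with a write/full handshake.
module fifo_ctrl (
  input  logic clk,
  input  logic rst_n,
  input  logic wr_en,
  output logic full
  // ... remaining ports and implementation elided ...
);
endmodule

// The kind of whitebox property an assertion-synthesis run might propose,
// based only on behavior it observed in simulation: "a write is never
// issued while the FIFO is full."
module fifo_ctrl_props (input logic clk, rst_n, wr_en, full);
  property no_write_when_full;
    @(posedge clk) disable iff (!rst_n)
      full |-> !wr_en;
  endproperty

  // Outcome 1: the designer agrees, so keep it as an assertion for future runs.
  assert_no_write_when_full: assert property (no_write_when_full);

  // Outcome 2: the designer expects writes under back-pressure to occur;
  // the fact that simulation never showed one points to a coverage hole.
  cover_write_when_full: cover property (@(posedge clk) full && wr_en);

  // Outcome 3: if the observed behavior itself contradicts the spec,
  // the proposed property has exposed a real design bug to fix.
endmodule

// Bind the property module onto the design instance.
bind fifo_ctrl fifo_ctrl_props u_props (.*);
```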

The DVCon tutorial, which is officially titled "Achieving Visibility into the Functional Verification Process using Assertion Synthesis," is on Thursday, February 28th from 1:30pm to 5pm in the Donner Ballroom. More details are here.


Magic? No! It's Computational Lithography
by Beth Martin on 02-11-2013 at 7:00 am

The industry plans to use 193nm light at the 20nm, 14nm, and 10nm nodes. Amazing, no? There is no magic wand; scientists have been hard at work developing computational lithography techniques that can pull one more rabbit out of the optical lithography hat.

Tortured metaphors aside, the goal for the post-tapeout flow is the same as always: to compensate for image errors on the wafer due to diffraction or process effects. Resolution enhancement technology (RET) and optical proximity correction (OPC) do this by moving edges and adding sub-resolution shapes to the pattern that will be written to the photomask.

Now, if you are prepared to have your mind blown by new computational lithography techniques, you can sign up for the SPIE Advanced Lithography conference. There is so much going on with computational lithography that I couldn't begin to cover it all here. I'll introduce two presentations on my list: "Inverse lithography technique (ILT) for advanced CMOS nodes," by scientists from ST Microelectronics and Mentor Graphics, and "Effective model-based SRAF placement for full-chip 2D layouts" by Mentor Graphics R&D. In both of them, the authors address the problem of lithographic hotspots that can remain after full-chip OPC. Without innovations like those described in these papers, such as the litho quality benefits of free-form SRAF placement, the solution would be to tune your OPC recipe to address each hotspot and maybe, eventually, fix them all. This is not a very good solution.

A better way is to use a more aggressive correction approach to solve just the hotspots that remain after traditional OPC/SRAF insertion, and then stitch the result back into the OPC-ed full-chip layout. Inverse lithography technology is so called because rather than moving the fragmented edges of the starting (target) shapes to produce the desired wafer image, it uses a rigorous mathematical approach to solve an inverse problem, thus generating the "ideal output mask (OPC + SRAF) shapes" that will result in the desired image on the wafer. ILT solves an optimization problem, and as such is computationally expensive if applied full-chip. At SPIE, the authors present it as a tool for localized printability enhancement (LPE).
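
In schematic form (my simplified statement of the general ILT idea, not the specific formulation used in the paper), the forward lithography model maps a mask to a predicted wafer image, and ILT inverts it by optimization:

\[
m^{*} = \arg\min_{m} \big\| I(m) - T \big\|^{2}, \qquad I(m) \approx \mathrm{resist}\!\left( m * h \right)
\]

where T is the target layout, h is the optical kernel of the imaging system, * denotes convolution, and resist(·) is a threshold-like resist model. Because the free variables are essentially every pixel or edge of the mask, solving this full-chip is expensive, which is why the flow described here applies it only to the hotspots left after conventional OPC.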

This chart shows their iterative localized printability enhancement flow. You first perform OPC and SRAF insertion. Verification then gives you a list of hotspots and you apply ILT only to those hotspots. The research presented uses Mentor's ILT engine (called pxOPC) plus some automation to cut and stitch the repaired areas back into the full layout.

Author Alex Tritchkov of Mentor Graphics told me that with their inverse lithography technology, the OPC and the SRAFs have greater degrees of freedom and can employ non-intuitive but manufacturable shapes. This allows significant process window improvements for certain critical patterns, improvements that are very hard to achieve with conventional OPC/SRAF insertion.

Tritchkov sees ILT as useful both during R&D, when design rules are not established, the OPC/RET recipes are immature, and the test chips are pushing the resolution limits, and in high-volume production, to eliminate rework and reduce cost and risk. Papers 8683-14 and 8683-17 will be presented on Tuesday, 26 February at 3:50pm and 4:50pm, respectively.

Register for SPIE today.


Want 10nm Wafers? That'll Cost You
by Paul McLellan on 02-10-2013 at 9:01 pm

As you know, I've been a bit of a bear about what is happening to wafer costs at 20nm and below. At the Common Platform Technology Forum last week there were a number of people talking about this in presentations and at Handel Jones's "fireside chat".

At the press lunch I asked about this. There are obviously lots of technical issues to address to get to 10nm and below, and I don't want to underestimate them, but at some point an important question is "at what cost?" Historically, with each process generation we have had something like a 50% area reduction due to scaling along with a 15% increase in wafer fabrication cost, resulting in an overall 35% reduction in cost per transistor.
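
As a back-of-the-envelope check on those historical numbers (my arithmetic, under the idealized assumption that halving the area yields a full 2x transistor density gain):

\[
\frac{\text{cost per transistor, new node}}{\text{cost per transistor, old node}} \approx \frac{1.15}{2} \approx 0.58
\]

so the per-node saving has historically landed in the roughly 35-45% range, depending on how much of that ideal 2x density gain a real design captures. The worry at 20nm and below is that the wafer-cost increase is far larger than 15%, which pushes this ratio toward 1.0 and erodes the automatic saving.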

Mike Noonen of Global Foundries answered the question. But like the dog in the Sherlock Holmes story that didn't bark, Mike didn't say the costs are going down. He talked about the value of moving to more advanced process nodes: higher performance, lower power. This is all true, of course, and for some markets the value is enormous (smartphones and data center servers most obviously). However, we are clearly entering a new era where we don't get everything for free anymore. Nobody knows what will happen to Moore's Law in practice when we no longer get more transistors, faster designs, and lower power designs that are all cheaper too. Instead, the question becomes "how much extra would you pay for lower power?".

Later, Gary Patton of IBM was more explicit. "Yes," he said, "there is a cost benefit but it is much smaller than we have been used to."


The problem is lithography (see the picture, litho is on the right). Even the purple bar on the right, which is EUV, doesn't really get those costs completely under control.

In Handel Jones's fireside chat with Brian Fuller, he said that they have done significant work to show a reduction in cost. He said that the big issue is parametric yield, mostly related to leakage. Some yield is driven by defects in processing and, while there is always some learning to be done, that is under control. But at these advanced nodes, with high variability, yield needs to be optimized at the design stage or parametric yield loss will render the design financially non-viable.

Handel reckoned that design closure costs doubled from 45nm to 28nm and have doubled again from 28nm to 20nm. Leakage is the killer and the designs need to be optimized for it.

The other challenge at these advanced nodes is the volume required to make a design economic. Handel's numbers are a cost of $100M per design at 20nm, meaning a market of $1B or more is needed for that design. Only 4 or 5 companies (Apple, Samsung, Intel…) can cope with this, but the volumes will be very high.

Even Intel is not immune. At the press lunch someone pointed out that Intel's costs had gone up 30% from 32nm to 22nm and it had taken them 24 months to get them back down and under control. And of course, they play in a market with very high margins and where performance is valued very highly. That won't work for more cost-sensitive markets.

If you read articles about electronics in the non-specialist press (newspapers, Time, the Economist, etc.), everyone assumes that electronics is going to continue getting cheaper. But we may be in a different regime. Yes, your smartphone will have longer battery life and do amazing voice recognition and so on. But it won't get much cheaper than it is today (today meaning that most of the expensive components are manufactured on 28nm). But wait, it gets worse. If all that voice recognition requires twice as many transistors for all those processor cores, then the cost of that chip may be close to twice as much as the old one. This is not what we are used to.


Cadence Sigrity, Together At Last
by Paul McLellan on 02-10-2013 at 9:00 pm

In July Cadence acquired Sigrity, one of the leaders in PCB and IC packaging analysis. Until a decade ago, signal integrity and power analysis were things that only IC designers needed to worry about. For all except the highest-performance boards, relatively simple tools were sufficient: provided you hooked up the pins on all the packages correctly, the design would work. That is no longer the case, and for this reason Sigrity created their product line and Cadence subsequently acquired them.

When I was at Cadence we acquired Cadmos, the leader in IC signal integrity at the time. One of the challenges with acquisitions like Cadmos and Sigrity is that there are really two things involved: a running business and a core technology. Going forward, two things need to be done: ramp up the running business, and integrate the core technology into other existing products. The reason it is such a challenge is that there is only one team in place that understands the product well enough to do these things, and they are stretched thin doing both. Although it is still work in progress, Cadence has come a long way toward getting the Sigrity products integrated into Allegro.


By November, Cadence had the Sigrity tools (Power Aware SI, Serial Link SI, Power Integrity, and Package Extraction) updated with Cadence look and feel and available through standard Cadence contracts. Now, in January, Cadence has Sigrity integrated into their high-end Allegro board design suite so that Sigrity products can be used directly from within the Allegro environment.

Capabilities to address something like signal integrity typically show up first as verification tools that check a design has no problems, allowing the designer to manually fix up the handful of issues that get identified. The trouble is that design constraints never get any easier, so verification on its own is not enough. When it is no longer a handful of problems that get identified but hundreds or thousands, the technology needs to be integrated into the core algorithms to enable constraint-based design. It is no longer good enough to identify problems; it is necessary to avoid creating them in the first place.


The most critical and highest-speed signals on a board these days are typically high-speed serial links. These run at multi-gigabit/s speeds, fast enough to require special analysis that almost goes back to Maxwell's equations: a full-wave 3D field solver. The board, packages and chips cannot be analyzed separately; the entire system, including the board, multiple packages, multiple chips, and maybe other connectors, must be looked at holistically.


So today Cadence has Allegro Sigrity SI. It is built on top of Allegro PCB/ICP/SiP without requiring any manual translation. It has become an integral piece of the front-to-back, constraint-driven PCB/package design flow, accelerating time to volume manufacturing. Special features allow integrated analysis of high-speed memory interfaces, and very high speed (multi-Gb/s) serial link analysis including algorithmic transceiver model support, an integrated full-wave 3D field solver, and a high-capacity simulation engine that accurately predicts bit error rate. The full Power Integrity Suite is also available for PI signoff (and will be integrated directly into Allegro in the future).


Apple's Ma Bell Moment
by Ed McKernan on 02-10-2013 at 12:17 am

The wreckage that is Apple's stock is a surprise to many, including yours truly, but it appears to mark the beginning of a transition period that will free the company from the demands of Wall St. as it appeals to the broader population of mainstream America. I call it the Ma Bell Strategy. Unlike Microsoft or Intel, Apple sustained itself for the first 25 years of its life by serving its small cadre of loyal fans who could be counted on to say: "I am with the band". The hyper-growth stage that commenced with the iPod enticed Wall St. to enter the game, and thus the manic-depressive nature of the hedge funds infected the outlook of the company. Out of this comes, no doubt, introspection and an alternative approach that can stabilize the company as it appeals to the generous spirit of America that is not tied to the money changers. Apple has the opportunity to be the successor to the AT&T that existed before the 1980s breakup. With an expected announcement of a larger dividend outlay, Apple will cement an intergenerational loyalty from Grandma to the grandkids for decades to come. What company wouldn't want that as a long-term way to stabilize its business?

During the past 100 years there was only one AT&T. It was a monopoly to be sure, but it delivered great service and unbreakable phones. Underneath it all was a technical giant that hired the best and brightest engineers not only to advance communications technology but to invent the future, with developments such as the transistor and the UNIX operating system. On the outside, though, it was as American as apple pie (no pun intended).

Above the waterline, AT&T's monopoly status as a provider of a daily necessity meant it was within reach of nearly all, and if not, then Congress would pass laws requiring that telephone lines be strung to the remotest Great Plains town. To recreate AT&T in the middle of the 20th century would have been out of the question given the cost of building out the wired-line infrastructure. To the average American, AT&T projected consistency and safety in its products and services. This was enhanced when the company followed through with years of strong dividends that grew in time to the point that they became a major income supplement for retirees. At times the dividend payments reached as high as 7%, often exceeding the yield of government Treasuries. Therefore, the tragedy of the AT&T breakup was felt in the homes of the retired who expected it to be around forever.

Apple is perceived to be in a tremendous battle with a host of mobile players who are looking to recreate the "Wintel" model that blunted the growth of the Mac in the 1990s. In reality, Apple and Samsung are taking home nearly 100% of the profits in an industry that has years of growth ahead of it. Within a year, both Apple and Samsung will complete filling out their smartphone and tablet product lines with LCDs sized from 4" to 11". Then they ride down the component cost curve and ramp volume, much as Dell did with PCs in the 1990s. Earnings should continue to expand nicely, especially if Apple is able to garner an edge in the corporate space. Assuming they upgrade iOS with multitasking capabilities, there will be additional revenue from an "iPad Pro" line that mirrors the way Microsoft sold its higher-priced corporate OS and Intel sold its high-end processors. Value migration is still in play.

However, with all this being said, there is still the aspect of brand loyalty, which is underrated but still determines the long-term viability of a company and ensures stability in times of economic storms and product delays. This is ultimately where Tim Cook should aim to get to if he wants to spend more time building a company and less time responding to Wall St. demands and distractions. If he sets Apple on a course of a strong dividend program, then he will end up recruiting an army of well-wishers, including Grandmas, across the entire spectrum of Red and Blue America that can counter the influence of the hedge funds. It is well within his reach and it will be a good thing for the high-tech industry.

Full Disclosure: I am Long AAPL, INTC, QCOM, ALTR