Analog Characterization Environment (ACE)

by Daniel Nenni on 09-12-2013 at 10:00 am

I’m looking forward to the 2013 TSMC Open Innovation Platform Ecosystem Forum, to be held October 1st in San Jose. One paper in particular has my attention: “An Efficient and Accurate Sign-Off Simulation Methodology for High-Performance CMOS Image Sensors,” by Berkeley Design Automation and Forza Silicon. It is not every day that we get a chance to learn how design teams are tackling the tough verification challenges in complex high-performance applications such as image sensors.


CMOS Image Sensor

The paper will discuss how many image sensor performance-limiting factors appear only when all of the active and passive devices in the array are modeled, including random device noise and layout parasitics. Coupled with the highly sensitive nature of image sensors, where tens of microvolts of noise can create noticeable image artifacts, these characteristics create an enormous challenge for analog simulation tools, pushing accuracy and capacity limits simultaneously.

The presentation will highlight image sensor design and verification and include a description of Forza’s verification methodology, which uses a hierarchy of models for the image sensor blocks. At higher levels of the hierarchy, the complexity of the model is reduced, but the accuracy of the global interactions between blocks is maintained as much as possible.

CMOS Image Sensor Block Diagram

Forza’s verification flow relies on the Berkeley Design Automation (BDA) Analog FastSPICE (AFS) Platform. AFS is qualified on the latest TSMC Custom Design Reference Flow and, according to Forza, has significantly improved their verification flow.

Results will highlight how the AFS Full-Spectrum Device Noise capability, included in the latest TSMC Custom Design Reference Flow, validates that the sensitive ADCs and readout chain will withstand the impact of device noise and parasitics. For top-level sign-off, AFS AMS enables Forza to speed up verification by using Verilog to model non-accuracy-critical circuits while maintaining nanometer SPICE accuracy on blocks that were independently verified in other tools. AFS Mega provides the capacity, speed, and accuracy Forza needs to verify over 700 signal chains at the transistor level, including extracted parasitics.


ACE Visual Distribution Analyzer – 1000 Iterations

In terms of characterization, Forza relied on BDA’s Analog Characterization Environment (ACE) to improve characterization coverage and efficiency. Results include Monte Carlo-based analysis to predict image sensor nonuniformity due to device mismatch. Additionally, AFS Circuit-Specific Corners, included in the latest TSMC Custom Design Reference Flow, eliminates the limitations of traditional digital process corners by generating circuit-specific corners, suitable for analog designs, for each measurement.
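To make the idea of Monte Carlo mismatch analysis concrete, here is a minimal, self-contained sketch of the general technique, not BDA’s ACE flow or Forza’s actual data: it draws random per-pixel threshold-voltage offsets from an assumed mismatch distribution, propagates them through a simplified source-follower readout, and reports the resulting fixed-pattern nonuniformity. Every parameter (pixel count, sigma, gain) is an illustrative assumption.

```python
import numpy as np

# Illustrative Monte Carlo sketch of pixel-to-pixel nonuniformity from
# device mismatch. All parameters are made-up placeholders, not values
# from the Forza/BDA paper.
rng = np.random.default_rng(seed=1)

n_pixels   = 100_000   # number of pixels sampled
sigma_vth  = 2e-3      # assumed 1-sigma threshold-voltage mismatch [V]
sf_gain    = 0.85      # assumed source-follower gain [V/V]
full_scale = 1.0       # assumed full-scale signal at the ADC input [V]

# Each pixel's offset at the readout node is its Vth mismatch times the
# source-follower gain (a deliberately simplified signal chain).
offset = rng.normal(0.0, sigma_vth, n_pixels) * sf_gain

# Fixed-pattern-noise style metrics: RMS offset, offset relative to
# full scale, and the worst-case pixel.
print(f"RMS pixel offset : {offset.std() * 1e6:8.1f} uV")
print(f"RMS / full scale : {offset.std() / full_scale * 100:8.4f} %")
print(f"Worst-case pixel : {abs(offset).max() * 1e6:8.1f} uV")
```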


Also read: BDA Introduces High-Productivity Analog Characterization Environment (ACE)



TSMC OIP: Mentor’s 5 Presentations

by Paul McLellan on 09-09-2013 at 6:30 pm

At TSMC’s OIP on October 1st, Mentor Graphics has five different presentations. Collect the whole set!

11am, EDA track. Design Reliability with Calibre Smartfill and PERC. Muni Mohan of Broadcom and Jeff Wilson of Mentor. New methodologies were developed at 28nm for smart fill to meet DFM requirements (and at 20nm we may have to double pattern some layers of fill). Also covered: Calibre PERC for checking subtle electrical reliability rules. Both were successfully deployed on a large 28nm tapeout at Broadcom.

2pm, EDA track. EDA-based Design for Test for 3D-IC Applications. Etienne Racine of Mentor. Lots about how to test 3D and 2.5D ICs such as TSMC’s CoWoS. 3D requires bare die testing (to address the known-good-die problem) via JTAG, which can also be used for contactless leakage test. Also: how to test memory on logic and other test techniques for the More than Moore world of 3D-ICs.

3pm, EDA/IP/Services track. Synopsys Laker Custom Layout and Calibre Interfaces: Putting Calibre Confidence in Your Custom Design Flow. Joseph Davis of Mentor. Synopsys’s Laker layout environment can run Calibre “on the fly” during design to speed creation of DRC-correct layouts. Especially at nodes below 28nm, where the rules are incomprehensible to mere mortals, this is almost essential for developing layout in a timely manner.

4:30pm, EDA track. Advanced Chip Assembly and Design Closure Flow Using Olympus SoC. Karthik Sundaram of nVidia and Sudhakar Jilla of Mentor. Chip assembly and design closure has become a highly iterative manual process with a huge impact on both schedule and design quality. Mentor and nVidia talk about a closure solution for TSMC processes that includes concurrent multi-mode, multi-corner optimization.

Identifying Potential N20 Design for Reliability Issues Using Calibre PERC. MH Song of TSMC and Frank Feng of Mentor. Four key reliability issues are electromigration, stress-induced voiding, time-dependent dielectric breakdown of the intermetal dielectric, and charged device model (CDM) ESD. Calibre PERC can be used to verify compliance with these reliability rules in the TSMC N20 process.

Full details of OIP including registration are here.


TSMC OIP: Soft Error Rate Analysis

by Paul McLellan on 09-09-2013 at 1:34 pm

Increasingly, end users in some markets are requiring soft error rate (SER) data. This is a measure of how resistant a design (library, chip, system) is to single event effects (SEE), which manifest themselves as single-event upsets (SEU), single-event transients (SET), single-event latch-up (SEL), and single-event functional interrupts (SEFI).

There are two main sources that cause these SEE:

  • natural atmospheric neutrons
  • alpha particles

Natural neutrons (cosmic rays) have a spectrum of energies, which affects how easily they can upset an integrated circuit. Alpha particles are stopped by almost anything, so they can only affect a chip when they originate from contaminants in the packaging materials, solder bumps, etc.


Increasingly, more and more partners need to get involved in this reliability assessment process. Historically it has started only when end users (e.g. telecom companies) become unhappy. This means that equipment vendors (routers, base stations) need to do testing and qualification, and end up with requirements on the process, on cell design (especially flops and memories), and at the ASIC/SoC level.

Large IDMs such as Intel and IBM have traditionally done a lot of the work in this area internally, but the fabless ecosystem relies on specialist companies such as iROCtech. They increasingly work with all the partners in the design chain since it is a bit like a real chain. You can’t measure the strength of a real chain without measuring the strength of all the links.

So currently there are multi-partner SER efforts:

  • foundries: support SER analysis through technology-specific SER data such as TFIT databases
  • IP and IC suppliers: provide SER data and recommendations
  • SER solution providers: provide SEE tools and services for improving design reliability, plus accelerated testing

Reliability has gone through several eras:

  • “reactive”: end users encounter issues and have to deal with them (product recalls, software hot-fixes, etc.)
  • “awareness”: system integrators pre-emptively acknowledge the issue (system and component testing, reliability specifications)
  • “exchanges”: requirements and targets are propagated up and down the supply chain, with SER targets for components, IP, etc.
  • “proactive”: objective requirements drive the design and manufacturing flow towards SER management and optimization

One reason that many design groups have been able to ignore these reliability issues is that the issues depend critically on the end market. The big sexy application processors for smartphones in the latest process do not have major reliability issues: they will crash from time to time due to software bugs, you just reboot, and your life is not threatened. And your phone only needs to last a couple of years before you throw it out and upgrade.

At the other end of the scale are automotive and implantable medical devices (such as pacemakers). They are safety critical and they are expected to last for 20 years without degrading.


iRocTech has been working with TSMC for many years and has even presented several joint papers on various aspects of SER analysis and reliability assessment.

iRocTech will be presenting at the upcoming TSMC OIP Symposium on October 1st. To register go here. To learn more about TFIT and SOCFIT, iRocTech’s tools for analyzing reliability of cells and blocks/chips respectively, go here.


Xilinx At 28nm: Keeping Power Down

by Paul McLellan on 09-08-2013 at 2:26 pm

Almost without exception these days, semiconductor products face strict power and thermal budgets. There are, of course, many issues with dynamic power, but one area that has become increasingly problematic is static power. For various technical reasons we can no longer reduce the supply voltage as much as we would like from one process generation to the next, which means that transistors never turn off completely. This is an especially serious problem for FPGAs since they contain a huge number of transistors, many of which are not actually active in the design at all, simply due to the nature of how an FPGA is programmed.

Power dissipation is tied directly to thermal behavior, because the heat that must be removed rises with power. Temperature, in turn, is related to reliability: as a rule of thumb, every 10°C increase in junction temperature roughly doubles the failure rate. There are all sorts of complex and costly static power management schemes, but ideally the FPGA would simply have lower static power. TSMC, which manufactures Xilinx FPGAs, has two 28nm processes, 28HP and 28HPL. The 28HPL process has some big advantages over 28HP:

  • wider range of operating voltages (not possible with 28HP)
  • high-performance mode with 1V operation leading to 28HP-equivalent performance at lower static power
  • low-power mode at 0.9V operation, with 65% lower static power than 28HP. Dynamic power reduced by 20%.

The 28HPL process is used to manufacture the Xilinx 7 series FPGAs. The net result is that competing FPGAs built in the 28HP process have no performance advantage over the 7 series, and some of the competing products come with a severe penalty of over twice the static power (along with the associated thermal and reliability issues). In fact, Xilinx’s primary competitor (I think we all know who that is) has had to raise its core voltage specification, resulting in a 20% increase in static power, and its power estimator has gradually crept the numbers up even more. Xilinx has invested resources to ensure that the power specifications published when a product is first announced do not need to be revised later, so end-users can plan their board designs without worrying about inaccurate power estimates.


The low power and associated thermal benefits mean that Xilinx FPGAs can operate at higher ambient temperatures without the FPGA itself getting too hot. The graph below shows junction temperature against ambient temperature for 7 series standard and low-power devices compared to the competitor’s equivalent array. This is very important since the ability to operate at 50-60°C ambient while keeping the junction temperature at or below 100°C is essential in many applications, such as wired communications designs in rack-style environments like datacenters. It is no secret that routers and base stations are among the largest end markets for FPGAs.
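To see why a few watts of static power matter here, below is a rough back-of-the-envelope sketch using my own hypothetical numbers, not Xilinx or competitor specifications: it combines the simple thermal model Tj = Ta + theta_JA × P with the 10°C failure-rate doubling rule of thumb quoted earlier.

```python
# Hypothetical back-of-the-envelope thermal comparison. theta_ja and the
# power numbers are invented for illustration; they are not Xilinx or
# competitor specifications.
def junction_temp(ambient_c, power_w, theta_ja=5.0):
    """Tj = Ta + theta_ja * P, with theta_ja in degC per watt."""
    return ambient_c + theta_ja * power_w

def relative_failure_rate(tj_c, tj_ref_c=100.0):
    """Rule of thumb: failure rate roughly doubles every 10 degC."""
    return 2 ** ((tj_c - tj_ref_c) / 10.0)

ambient = 55.0  # degC, a rack-style environment
for label, power in [("low-power FPGA", 8.0), ("2x static power FPGA", 14.0)]:
    tj = junction_temp(ambient, power)
    print(f"{label:22s} Tj = {tj:5.1f} degC, "
          f"relative failure rate = {relative_failure_rate(tj):.2f}x")
```

With these made-up numbers the lower-power part stays under the 100°C junction limit at 55°C ambient, while the hotter part overshoots it and pays a several-fold failure-rate penalty, which is the argument the graph is making.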


But wait, there’s more, as the old infomercials said.

Not specific to the 7 series, the Vivado Design Suite performs detailed power estimation at all stages of the design and has an innovative power optimization engine that identifies excessively power-hungry paths and reduces their power.

Reducing dynamic power depends on reducing voltage (which 28HPL allows), reducing frequency (usually not an option, since that is set by the required system performance), or reducing capacitance. By optimizing the arrays for dense area, meaning shorter wires with lower capacitance, power is further reduced compared to Xilinx’s competitors.
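The scaling behind this argument is the classic switching-power relation P_dyn ≈ α·C·V²·f. The short sketch below, with arbitrary placeholder values of my own, shows how dropping the supply from 1.0V to 0.9V alone gives roughly the 20% dynamic power reduction quoted above, and how lower interconnect capacitance compounds the saving.

```python
# Dynamic power scaling sketch: P_dyn ~ alpha * C * V^2 * f.
# Baseline values are arbitrary placeholders chosen only to show the
# relative effect of voltage and capacitance reduction.
def dynamic_power(alpha, cap_f, vdd, freq_hz):
    return alpha * cap_f * vdd**2 * freq_hz

base = dynamic_power(alpha=0.15, cap_f=1.0e-9, vdd=1.0, freq_hz=500e6)

# 0.9 V operation alone: (0.9/1.0)^2 = 0.81, i.e. about 19% lower.
low_v = dynamic_power(alpha=0.15, cap_f=1.0e-9, vdd=0.9, freq_hz=500e6)

# 0.9 V plus, say, 10% less interconnect capacitance from denser layout.
low_v_c = dynamic_power(alpha=0.15, cap_f=0.9e-9, vdd=0.9, freq_hz=500e6)

print(f"0.9 V only      : {(1 - low_v / base) * 100:4.1f}% dynamic power reduction")
print(f"0.9 V + lower C : {(1 - low_v_c / base) * 100:4.1f}% dynamic power reduction")
```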

And there is even more than that. But to find out you need to download the Xilinx white paper Leveraging Power Leadership at 28nm with Xilinx 7 Series FPGAs available here.


A Brief History of TSMC OIP

by Paul McLellan on 09-01-2013 at 9:00 pm

The history of TSMC and its Open Innovation Platform (OIP) is, like almost everything in semiconductors, driven by the economics of semiconductor manufacturing. Of course, ICs started 50 years ago at Fairchild (very close to where Google is headquartered today; these things go in circles). The planar process, whereby a wafer (just 1” originally) went through each process step as a whole, enabled mass production. Other companies such as Intel, National, Texas Instruments and AMD soon followed, starting the era of the Integrated Device Manufacturer (although we didn’t call them that back then, we just called them semiconductor companies).

The next step was the invention of the ASIC, with LSI Logic and VLSI Technology as the pioneers. This was the first step in separating design from manufacturing. Although the physical design was still done by the semiconductor company, the concept work was done by the system company. Perhaps the most important aspect of this change was not that part of the design was done at the system company, but that the idea for the design, and the responsibility for using it to build a successful business, rested with the system company, whereas IDMs still had the “if we build it they will come” approach, with a catalog of standard parts.

In 1987, TSMC was founded and the separation between manufacturing and design was complete. One missing piece of the puzzle was good physical design tools; Cadence was created in 1988 from the merger of SDA and ECAD (and, soon after, Tangent). It was now possible for a system company to buy design tools, design its own chip, and have TSMC manufacture it. The system company was completely responsible for the concept, the design, and selling the end product (either the chip itself or a system containing it). TSMC was completely responsible for the manufacturing (usually including test, packaging, and logistics too).

This also created a new industry, the fabless semiconductor company, set up in many ways to be like an IDM except for using TSMC as its manufacturer. A fabless semiconductor company could be much smaller since it didn’t have a whole fab to fill; often the company would be funded to build a single product. Since this was also the era of explosive growth in the PC, many chips were built for various segments of that market.

At this time, the interface between the foundry and the design group was fairly simple. The foundry would produce design rules and SPICE parameters, and the design would be submitted as GDSII plus a test program. Basic standard cells were required, and these were available on the open market from companies like Artisan, or some groups would design their own. Eventually TSMC would supply standard cells, either designed in-house or sourced from Artisan and other library vendors (under an underlying royalty model transparent to end users). However, as manufacturing complexity grew, the gap between manufacturing and design grew too. This caused a big problem for TSMC: there was a lag between when TSMC wanted to get designs into high-volume manufacturing and when the design groups were ready to tape out. Since a huge part of the cost of a fab is depreciation on the building and the equipment, which is largely fixed, this was a problem that needed to be addressed.


At 65nm TSMC started the OIP program. It began at a relatively small scale, but from 65nm to 40nm to 28nm the amount of manpower involved went up by a factor of 7, and by 16nm FinFET half of the effort is IP qualification and physical design. OIP actively collaborated with EDA and IP vendors early in the life-cycle of each process to ensure that design flows and critical IP were ready early. In this way, designs would tape out just as the fab was starting to ramp, so that the demand for wafers was well matched with the supply.

In some ways the industry has come full circle, with the foundry and the design ecosystem together operating as a virtual IDM.

To be continued in part 2


The TSMC OIP Technical Paper Abstracts are up!

by Daniel Nenni on 08-25-2013 at 8:10 pm

The TSMC Open Innovation Platform® (OIP) Ecosystem Forum brings TSMC’s design ecosystem member companies together to share real-world solutions to customers’ design challenges and success stories of best practices within TSMC’s design ecosystem.

More than 90% of the attendees last year said “this forum helped them better understand the components of TSMC’s Open Innovation Platform” and “they found it effective to hear directly from TSMC OIP member companies.”

This year, the forum will feature a day-long conference starting with executive keynotes from TSMC in the morning plenary session to outline future design challenges and roadmaps and to discuss a recent collaboration announcement; 30 selected technical papers from TSMC’s EDA, IP, Design Center Alliance and Value Chain Aggregator member companies; and an Ecosystem Pavilion featuring up to 80 member companies showcasing their products and services.

Date: Tuesday, October 1st, 2013

Place: San Jose Convention Center

Attendees will learn about:

  • Design challenges in 16nm FinFET, 20nm, and 28nm
  • Successful, real-life applications of design technologies and IP
  • Ecosystem specific implementations in TSMC reference flows
  • New innovations for next generation product designs

In addition, attendees will hear design ecosystem member companies talk exclusively about design solutions using TSMC technologies, and will enjoy valuable opportunities for peer networking with nearly 1,000 industry experts and end users.

The TSMC Open Innovation Platform Ecosystem Forum is an invitation-only event: please register in order to attend. We look forward to seeing you at the 2013 Open Innovation Platform Ecosystem Forum.

Registration: Join the TSMC 2013 Open Innovation Platform® (OIP) Ecosystem Forum to be held on Tuesday, October 1st at the San Jose (CA) Convention Center.

Established in 1987, TSMC is the world’s first dedicated semiconductor foundry. As the founder and a leader of the Dedicated IC Foundry segment, TSMC has built its reputation by offering advanced and “More-than-Moore” wafer production processes and unparalleled manufacturing efficiency. From its inception, TSMC has consistently offered the foundry segment’s leading technologies and TSMC COMPATIBLE® design services.

TSMC has consistently experienced strong growth by building solid partnerships with its customers, large and small. IC suppliers from around the world trust TSMC with their manufacturing needs, thanks to its unique integration of cutting-edge process technologies, pioneering design services, manufacturing productivity and product quality.

The company’s total managed capacity reached 15.1 million eight-inch equivalent wafers in 2012. TSMC operates three advanced 12-inch wafer fabs, four eight-inch wafer fabs, and one six-inch wafer fab in Taiwan. TSMC also manages two eight-inch fabs at wholly owned subsidiaries: WaferTech in the United States and TSMC China Company Limited. TSMC also obtains eight-inch wafer capacity from other companies in which the Company has an equity interest.



20nm IC production needs more than a ready Foundry

by Pawan Fangaria on 08-23-2013 at 11:00 am

I think by now all of us know, or have at least heard, about the 20nm process node, its PPA (Power, Performance, Area) advantages, and its challenges (large design size and density, heterogeneity, variability, stress, lithography complexity, LDEs, and so on). I’m not going to get into the details of these challenges, but will instead look at the flows and methods that can overcome them and that can be made generally available to the larger design community for mass production of ICs at 20nm, based of course on the rules and regulations laid down by foundries. Anyone who wants the details of these challenges can refer to an earlier paper published by Cadence here.

Sometime in June/July this year, TSMC reported that its risk production of 20nm chips has already started and that volume production will start by December this year or early next year. It is known that Apple (for its A8 processor), TSMC’s first 20nm customer, is already lined up; more may join the queue. It should also be noted that in the last quarter of 2012 TSMC announced support for double patterning technology and multi-die integration, with corresponding reference flows, for the 20nm process node.

To take this technology into mass production by leveraging the sea of design houses, EDA vendors must provide complete, holistic solutions to these challenges rather than point tools. At 20nm that need becomes more pronounced because the node changes the paradigm: double patterning complexity, variability, and the interdependence between design phases and manufacturing mean designers can no longer wait until layout sign-off to fix problems; everything has to be handled in parallel at each stage.


[Challenges and requirements for 20nm IC design]

As we can see, tackling these issues in the design is not enough; design closure needs to happen in time, and with the desired PPA, to hit the window of opportunity in the market. Having worked at Cadence earlier, I can firmly say that this is one company that provides a complete end-to-end solution for the overall design flow, with a whole spectrum of EDA tools for all types of designs: custom, digital, mixed-signal, and so on. The company has the right expertise, built over its long tenure in the semiconductor EDA domain, to address designers’ needs at all levels. For example, analog design needs a more customized approach, whereas digital design has a very high level of automation.


[Cadence GigaFlex technology – A flexible modelling approach to manage large designs]

Cadence proposes rapid prototyping and rapid verification methodologies to save a significant amount of design time. It uses flexible modelling to support the required level of abstraction at each stage; for example, the model used at the design exploration or planning stage does not need the detail of the one used at block implementation. Further, it uses an innovative “Prevent, Analyze and Optimize” approach that drives both custom and digital platforms to enable faster design convergence at advanced nodes. In-design sign-off is done at each stage, such as placement, routing, lithography analysis, timing, and signal integrity, by utilizing sign-off-quality tool engines. A correct-by-construction approach is also used at design time through smart capabilities such as constraint-driven design, LDE-aware placement, color-aware P&R, and in-design verification.


[Clock Concurrent Optimization combines timing-driven CTS with physical optimization]

The Clock Concurrent Design Flow is a paradigm shift that makes Clock Tree Synthesis (CTS) timing-window-driven rather than skew-driven and merges it with physical optimization. This provides significant PPA improvement: roughly 30% savings in power and area and a 100MHz performance improvement for a GHz-class design with ARM processors.

To conclude, there are several challenges in realizing the benefits of 20nm technology, but with the right tools, methodologies, and collaboration across the semiconductor ecosystem they can be overcome. There is a detailed whitepaper from Cadence on the methodologies to be used for 20nm designs, “A Call to Action: How 20nm will Change IC Design”. It’s worth looking at; I enjoyed reading it and jotted down a summary of it in this article. The paper also has other references on 20nm technology. Enjoy reading!


Why Adopt Hierarchical Test for SoC Designs

by Daniel Payne on 08-15-2013 at 4:37 pm

IC designers have been using hierarchy for years to better manage large design sizes; for the test world, however, the concept of hierarchy and its emerging standards is a bit newer. TSMC and Synopsys jointly created a webinar that addresses hierarchical test, so I attended it this week and have summarized my findings here.

Adam Cron, Synopsys


450mm Wafers are Coming!

by Daniel Nenni on 08-14-2013 at 8:05 pm

The presentations from the 450mm sessions at SEMICON West are up now. After talking to equipment manufacturers and the foundries, I’m fairly confident 450mm wafers will be under our Christmas trees in 2016, absolutely. TSMC just increased CAPEX again and you can be sure 450mm is part of it. SEMI has a 450mm Central landing page HERE. The SEMICON West 450mm Transition presentations are HERE. The Global 450mm Consortium is HERE. Everything you ever wanted to know about 450mm wafers is just a click away; you’re welcome.

Intel, Samsung, and TSMC have invested heavily in 450mm and will have fabs built and operational in 2015 (my opinion). Given the pricing pressures and increasing capacity demands of the mobile semiconductor market, 450mm wafers will be mandatory to maintain healthy margins. Based on the data from SEMICON West and follow-up discussions, here is my quick rundown of why moving from a 12” wafer (300mm) to an 18” wafer (450mm) is the next technical innovation we will see this decade.

First and foremost is timing. 14nm wafers will begin production in 2014, with 10nm slated for 2016. Ramping already-production-worthy 14nm wafers in a new 450mm fab reduces risk, and the semiconductor industry is all about reducing risk. Second is wafer margins. As I mentioned before, there will be a glut of 14nm wafers, with no fewer than six companies (Intel, Samsung, TSMC, GLOBALFOUNDRIES, UMC, and SMIC) manufacturing them 24/7. The semiconductor industry has never seen this kind of total capacity increase for a given node. Add in that the mobile electronics market (phones and tablets) has reached commodity status, and wafer margins will be under more pressure than ever before. Just like the top criteria for investing in real estate (location, location, location), wafer purchasing criteria at 20nm and below will be: price, price, price.


According to Intel, a 450mm fab will cost twice as much as a 300mm fab, with equipment accounting for the majority of the delta. The wafer handling equipment is a good example: the additional size and weight of 450mm wafers will require complete retooling. If you have never been in a fab, let me tell you it is something to see. The wafers zip around on ceiling-mounted shuttles like something out of a Star Wars movie. As much as I would like to change our dinner plates at home from 12” to 18” to accommodate my increasing appetite, I certainly don’t want to buy a new dishwasher and cabinets to store them.

The ROI of 450mm wafers, however, is compelling. A 450mm fab with the same wafer capacity as a 300mm fab can produce roughly twice as many die. If you roughly calculate die cost, a 14nm die from a 450mm wafer will cost about 23% less than the same die from a 300mm wafer. This number is an average of figures shared with me by friends who work for an IDM, a foundry, a large fabless company, and an equipment manufacturer. Sound reasonable?
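As a rough sanity check on those figures (my own approximation, not the numbers shared by those contacts), the sketch below uses the standard die-per-wafer edge-loss approximation for an assumed 100mm² die, plus an assumed 1.8x wafer cost premium for 450mm; with those assumptions the per-die saving comes out close to the 23% quoted.

```python
import math

def die_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common die-per-wafer approximation accounting for edge loss."""
    r = wafer_diameter_mm / 2.0
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

die_area = 100.0                      # assumed 14nm die, 10 mm x 10 mm
dpw_300 = die_per_wafer(300, die_area)
dpw_450 = die_per_wafer(450, die_area)
print(f"Die per 300mm wafer: {dpw_300:6.0f}")
print(f"Die per 450mm wafer: {dpw_450:6.0f}  ({dpw_450 / dpw_300:.2f}x)")

# Assume (illustratively) that a processed 450mm wafer costs 1.8x a
# 300mm wafer; the per-die cost then changes as follows.
wafer_cost_ratio = 1.8
die_cost_ratio = wafer_cost_ratio / (dpw_450 / dpw_300)
print(f"Per-die cost change: {(1 - die_cost_ratio) * 100:4.1f}% cheaper on 450mm")
```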



TSMC is a more profitable semiconductor company than Intel

by Daniel Nenni on 08-07-2013 at 9:00 pm

There is an interesting article on Seeking Alpha, “A More Profitable Semiconductor Company Than Intel”, and for a change the author does not PRETEND to know semiconductor technology. Refreshing! Personally I think the stock market is a racket where insiders profit at the expense of the masses. But if you are going to gamble you should do as much research as possible so you don’t end up on the wrong end of a pump and dump.

INTC was highly successful in capitalizing on the PC revolution, showering investors with outsized returns. INTC teamed up with Microsoft (MSFT) to form the famed Wintel combo that basically owned the PC market, much to shareholders’ delight. Alas, this is no longer 1998, and a new wave of competitors has emerged, knocking INTC off its once mighty perch. The article below will detail why Taiwan Semiconductor (TSM) is a far better play in the semiconductor space.

I certainly like how this article starts. Intel is in serious trouble and very few financial people seem to really understand it. Unfortunately, comparing Intel and TSMC is like comparing an apple to a grape, since TSMC’s customers (AMD, QCOM, NVDA, etc…) compete with Intel, not TSMC. I suggested the author do a similar comparison between Intel and Samsung, since Samsung has made it very clear that it will be the #1 semiconductor company in the very near future. Considering what Samsung has done to Apple in the mobile space, my bet is on Samsung.

Without a doubt, TSMC created what is today’s semiconductor foundry business model. While at Texas Instruments, Morris Chang pioneered the then-controversial idea of pricing semiconductors ahead of the cost curve, sacrificing early profits to gain market share and achieve the manufacturing yields that would result in greater long-term profits. This pricing model is still the foundation of the fabless semiconductor business model, and nobody does it better than TSMC.

Today the fabless semiconductor ecosystem is a force of nature. According to IC Insights’ August Update to the 2013 McClean Report, the top-20 semiconductor ranking is now dominated by foundries, fabless, and fab-lite companies. Intel is down 4% while Qualcomm, MediaTek, and TSMC each posted more than 20% year-over-year growth. It’s all about mobile devices. The writing is on the wall, yet the Intel fan club is still calling for $30 per share. My bet is that INTC and TSM will both be $20 stocks after FY2013 numbers are announced. But then again, I think the stock market is a racket.
