iDRM for Complex Layout Searches and IP Protection!
by Daniel Nenni on 02-05-2014 at 8:00 am

iDRM (Integrated Design Rule Manager) from Sage-DA is the world’s first and only design rule compiler. As such, it is used to develop and capture design rules graphically, and can be used by non-programmers to quickly capture very complex, shape-dependent design rules and immediately generate a check for them. The tool can also be used for layout profiling: it detects every instance of a design rule or pattern, measures all its relevant distances, and provides complete information on all such instances in the design.

In this paper we want to describe a slightly different application, or use-case, for iDRM: searching for specific layout configurations. Using the iDRM GUI, users can quickly draw and capture specific layout structures or configurations they are interested in. These can be quite complex, involving many layers and polygons, and can also include connectivity information.

Let’s take a look at a simple example. Say you want to search for the following layout configuration:

a Z-shaped blue-layer polygon that has on each side two parallel blue lines crossing diagonally situated green polygons, where the two inner parts of the green layer are electrically connected.


Fig 1: drawing a specific layout configuration to search for

There are no specific dimensions here, so pattern matching tools cannot be used in this case, but for iDRM this is a very easy and quick task: you simply draw the above configuration in the iDRM GUI, exactly as it is drawn here, and click the FIND button.

iDRM will search your layout database and find every layout instance that matches these criteria. Furthermore, if you also add measurement variables to your drawing, e.g. the spaces and widths defined by A, B, C, D and E (see figure below), iDRM will measure them for you and display the results for every found instance. iDRM can then create tables or histograms summarizing all these results, and you can view each such instance using the iDRM layout viewer.

Once the dimensions are found, the user can choose to limit the search by adding specific measurement values or ranges of values as qualifiers to the search.


Fig 2: adding measurements to the layout configuration
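
Conceptually, the qualifier step is just a filter over the matched instances and their measured values. Here is a rough Python sketch of that idea (iDRM itself is GUI-driven; the names and API below are hypothetical illustrations, not Sage-DA’s):

    # Hypothetical sketch: qualify search hits by measurement ranges.
    # iDRM is GUI-driven; this only illustrates the filtering concept.
    from dataclasses import dataclass

    @dataclass
    class Match:
        location: tuple   # (x, y) of the matched instance in the layout
        a: float          # measured value of variable A, e.g. a space in nm
        b: float          # measured value of variable B

    def qualify(matches, a_range=None, b_range=None):
        """Keep only matches whose measurements fall inside the given ranges."""
        def ok(val, rng):
            return rng is None or rng[0] <= val <= rng[1]
        return [m for m in matches if ok(m.a, a_range) and ok(m.b, b_range)]

    hits = [Match((120, 340), a=48.0, b=62.5), Match((900, 210), a=55.0, b=40.0)]
    print(qualify(hits, a_range=(45, 50)))   # only instances with 45 <= A <= 50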

Use cases: Circuit, layout, yield, reliability and … IP protection
This functionality is useful for circuit and layout designers who are looking for specifically laid-out circuits, or for yield engineers who suspect certain configurations are sensitive to yield or reliability issues. A slightly different application is searching for use of protected IP or patented configurations: specific layout or circuit techniques may be protected by patent, and the user wants to find out whether such configurations are used in a design database.

More Articles by Daniel Nenni…..



CMOS Biosensor Breakthrough Enables Portable Diagnostics Solution
by Daniel Nenni on 02-04-2014 at 8:30 am

The panel I moderated at DesignCon last week was both entertaining and enlightening. One of the panelists, Zhimin Ding, is the CEO of an emerging fabless semiconductor company, and here is their story:

In the past 5 to 10 years we have seen vast advances in medical diagnostics technology. Doctors can now use DNA or antibody analysis to get very precise answers about the type of virus, bacteria or cancerous cells causing our illness. This is great news, as precise diagnosis leads to effective drugs and treatments with minimal side effects.

Unfortunately, much of the world’s population still does not have access to this technology due to the cost and bulkiness of the equipment involved. Anitoa Systems, a startup in Palo Alto, CA, is working to meet this challenge. They are creating a low-cost, field-portable nucleic-acid-test system built upon proprietary CMOS molecular sensor technology.

Ultra low-light CMOS imager for molecular sensing

The majority of molecular diagnostic systems today use optical methods to detect molecular events, based on the principles of fluorescence and chemiluminescence signaling. To meet the sensitivity requirement, engineers have had to resort to bulky and expensive devices such as photomultiplier tubes (PMTs) or cooled CCDs.


Anitoa’s ULS24 ultra-low light CMOS imager chip

Recent innovations in CMOS image sensors have made it possible to achieve much better sensitivity than was possible before, but further improvements in process, circuit, logic and software are still needed to compete with PMTs and CCDs for molecular sensing. For example, engineers at Anitoa needed to reduce the noise of the CMOS image sensor to provide a high signal-to-noise ratio (SNR). The residual noise that cannot be eliminated in the chip, due to the limits of physics, is then filtered out by software algorithms.
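
Anitoa has not published its algorithms, but frame averaging and dark-frame subtraction are two standard software techniques for pulling a faint optical signal out of sensor noise. A minimal, purely illustrative sketch:

    # Illustrative only: not Anitoa's actual algorithm.
    # Averaging N frames reduces random (shot/read) noise by roughly sqrt(N);
    # subtracting the mean dark frame removes fixed-pattern offsets.
    import numpy as np

    def denoise(light_frames, dark_frames):
        """Average repeated exposures and subtract the sensor's dark signature."""
        signal = light_frames.mean(axis=0)
        dark = dark_frames.mean(axis=0)
        return np.clip(signal - dark, 0.0, None)

    rng = np.random.default_rng(0)
    offset = rng.normal(100, 2, size=(32, 32))                # per-pixel fixed-pattern noise
    dark = offset + rng.normal(0, 5, size=(16, 32, 32))       # shutter-closed reads
    light = offset + 3 + rng.normal(0, 5, size=(16, 32, 32))  # faint 3-count signal
    print(denoise(light, dark).mean())                        # ~3.0: signal recovered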

With this approach, Anitoa has fabricated a CMOS image sensor built on 0.18um CIS technology from a world-leading specialty foundry. This chip has been shown to achieve 3e-6 lux detection sensitivity, capable of detecting just a few molecules labeled with fluorescent reporter probes. Anitoa is now creating a miniaturized qPCR (quantitative polymerase chain reaction) system using its CMOS imager. The imager is paired with LEDs as the optical excitation source to achieve fluorescence-based molecular sensing in a very compact platform.

qPCR for infectious disease diagnostics
When it comes to detecting very small amounts of pathogenic molecules, such as DNA molecules released from viruses or cancerous cells, it is important that the method is not only sensitive but also specific. This is because the target molecules are immersed in a much larger number of surrounding DNA molecules from normal human blood cells.


DNA amplification and detection with qPCR

qPCR achieves both sensitivity and specificity through combined amplification and detection. Through amplification, qPCR causes target DNA strands to be selectively replicated millions of times, with the help of a special enzyme called polymerase. As the target DNA strands are replicated, they bind with specially designed molecular probes that are labeled with fluorescent materials.
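
The power of the amplification step is easy to quantify: in the ideal case each thermal cycle doubles the number of target strands, so N = N0 * 2^n after n cycles (real reactions fall somewhat short of perfect doubling):

    # Textbook upper bound for PCR amplification: N = N0 * 2**n.
    n0 = 10                            # starting copies of target DNA
    for cycles in (10, 20, 30):
        print(cycles, n0 * 2**cycles)  # ~10 thousand, ~10 million, ~10 billion copies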

The high sensitivity and SNR of Anitoa’s CMOS imager mean the instrument is able to work with small reaction volumes confined in a microfluidic structure. Smaller reaction volumes mean faster reactions and faster time to results.

Future trends
Today, molecular diagnostics are performed in centralized labs located in big cities. Patient samples are collected at hospitals, sealed in ice boxes and loaded onto trucks that deliver the samples to these labs. Transportation, material handling and batching mean that sample-to-result times run to days or weeks. For many critical infectious diseases, such as H1N9, the optimal symptom-to-treatment window is less than ten hours.

Anitoa envisions that in the near future small, portable molecular diagnostics devices will be deployed at the point of care, enabling rapid on-site diagnosis of infectious disease, so that doctors can respond quickly with life-saving drugs and treatments. These devices will be internet-enabled, and diagnostic results will be transmitted to a central database in the cloud, allowing doctors, drug companies and policy makers to make strategic decisions on global epidemic control.

It is worth noting that electrochemical molecular sensors, a competing approach, have shown promise but require sophisticated surface chemistry and suffer from stability and specificity problems.

More Articles by Daniel Nenni…..



The Great Wall of TSMC
by Paul McLellan on 02-03-2014 at 5:27 pm

TSMC doesn’t just sell wafers, it sells trust. It’s the Colgate Ring of Confidence for fabless customers. This focus on trust started at the very beginning, when Morris Chang founded TSMC over 25 years ago, and trust remains an essential part of their business today.

When TSMC started, the big thing it brought was that it was a pure-play foundry: it had no product lines of its own. Foundry had existed before, but it was semiconductor companies selling excess capacity to each other. That meant the buyer of wafers was always vulnerable: if the selling company’s own business took off and it needed the capacity, the buyer would get thrown out. And that was without even considering that companies might be buying wafers from a competitor, sending it the masks of their crown jewels and trusting that nobody would try to reverse-engineer anything.

So when TSMC started, it brought the confidence that it wasn’t going to suddenly stop supplying wafers because it needed the capacity for itself, and that it wasn’t competing with its customers in the same end markets. That is not to say that there have never been capacity issues: TSMC cannot afford to build excess capacity “just in case” any more than anyone else can, so when business takes off better than forecast or some other event happens, wafers can end up on allocation, just as has always been the case in semiconductor, an inherently cyclical business.

Not competing with its customers remains the case today (as, to be fair, it does for GlobalFoundries, SMIC, Jazz and other pure-play foundries). But it is not the case for Samsung, which is in the slightly bizarre situation of having Apple as its largest foundry customer while competing with it as the volume leader in the mobile market (and never mind the lawsuits). Samsung is a large, diversified conglomerate, in effect a lot of different companies all using the Samsung brand name. Samsung makes all the retina displays for the iPhone too, and doesn’t even use them itself. It is also a huge memory supplier. Apple is rumored to be moving from Samsung to TSMC for its next application processor (presumably to be called the A8).

Intel has made a lot of noise about entering the foundry business, but the only significant company that has been announced is Altera, and there are even rumors that they are thinking of going back to TSMC. A company like Altera using Intel for its high-end FPGA products might need 1,000 wafers a month, when a fab has a capacity of 50-100K wafers a month. It won’t “fill the fab”; for that Intel needs to get an Apple or a Qualcomm or an nVidia. But at least Altera can be confident that no matter how successful Intel’s other businesses are, at those volumes it is unlikely to be squeezed out: the amount of capacity it needs is in the noise.

The other area in which foundries have had to invest is creating an ecosystem around themselves of manufacturing equipment and materials suppliers, IP and EDA companies. This grand alliance has made a huge investment in R&D; in aggregate, it has invested more than any single IDM. As a result, the Grand Alliance has produced more innovation in higher performance, lower power and lower cost than any single IDM.

At a modern process node, deep cooperation is required. It is not possible for everything to be done serially: get the process ready, get the tools working on a stable process, use the tools to build the IP, start customer designs using the IP and the tools, ramp to volume. Everything has to happen almost simultaneously. This requires an even greater sense of trust among everyone involved, and the fact that changing PDKs means changing IP, which means redoing designs, inevitably means increased investment too.

So TSMC has a competitive edge, the great wall of TSMC to keep out the barbarian hordes:

  • it sells confidence and trust, not just wafers
  • it does not compete with its customers
  • it has orchestrated a grand alliance to create an ecosystem around its factories that has made a bigger R&D investment than any single IDM.


More articles by Paul McLellan…


Dual Advantage of Intelligent Power Integrity Analysis
by Pawan Fangaria on 02-03-2014 at 9:30 am

It is often considered safer to be pessimistic when estimating IR-drop to maintain the power integrity of semiconductor designs; however, that leads to the use of extra buffering and routing resources which may not be necessary. In modern high-speed, high-density SoCs, with multiple blocks, memories, analog IPs with different functionalities and IO cells integrated together on the same chip, space is costly real estate and must be used carefully. While keeping electromagnetic interference within acceptable limits, it is important that power and signal integrity be addressed with accuracy for the best possible performance and reliability.

So, how do we estimate the actual IR-drop in order to design a right-sized power delivery network (PDN) and address these concerns? I was delighted to see this research paper presented jointly by STMicroelectronics and Apache at the last PATMOS (International Workshop on Power and Timing Modeling, Optimization and Simulation). ST evaluated the impact of the substrate on IR-drop reduction by extending standard cell electrical characterization with the substrate characterization and noise analysis capabilities of Apache’s tools RedHawk and CSE (Chip Substrate Extension), and implemented this methodology in the design of their complex microcontrollers for automotive applications, which have severe area constraints. The methodology for accounting for substrate parasitics in the design of the PDN has been successfully implemented in ST’s digital design flow.

The main sources of current injection into the substrate by the digital core are simultaneous switching noise (i.e., power supply fluctuation and ground bounce) and capacitive coupling from transistor sources and drains. To model the noise a cell injects into the substrate, the transistor bulk terminals are separated from the P/G network, and a resistance extracted from the substrate technology parameters, representing the path between the well P/G contacts and the transistor bulk, is inserted so that the noise injected into the substrate network by each transistor can be probed. The cell netlist has a dedicated pin to bias the transistor bulk, which can be used to probe the substrate currents. Well resistance is modeled from the transistor body to the well contacts, and a series resistor is inserted between the probing pin and the substrate biasing contact of the standard cell.

The extended cell macro model includes well resistances and two additional current sources, Ipwell and Inwell, that represent the substrate current injection, as shown in the figure. The substrate’s passive network parasitics are represented by a lumped RC mesh.

The IR-drop simulation based on this model is performed by characterizing the library of standard cells and IPs to obtain their current profiles, estimating the power consumption, extracting the RC mesh for the top-level PDN and the substrate, and performing power integrity analysis.
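
To make the mechanics concrete, here is a minimal nodal-analysis sketch in Python (a toy model, not RedHawk/CSE): a one-dimensional power rail fed from a single pad, with a current sink at every node, solved as G*v = i. Adding a hypothetical parallel substrate path lowers the effective grid resistance, mirroring the paper’s observation that accounting for the substrate reduces the predicted drop.

    # Toy static IR-drop solver, not RedHawk/CSE: a 1-D VDD rail fed from one
    # pad, a current sink at every node, solved by nodal analysis G*v = i,
    # where v is each node's voltage drop relative to the pad.
    import numpy as np

    def far_end_drop(n, r_seg, i_cell, r_sub=None):
        """IR drop (V) at the node farthest from the pad."""
        g = np.zeros((n, n))
        for k in range(n):              # segment from node k-1 (pad if k == 0) to node k
            g[k, k] += 1.0 / r_seg
            if k > 0:
                g[k - 1, k - 1] += 1.0 / r_seg
                g[k, k - 1] -= 1.0 / r_seg
                g[k - 1, k] -= 1.0 / r_seg
        if r_sub is not None:           # hypothetical parallel substrate path, pad -> last node
            g[n - 1, n - 1] += 1.0 / r_sub
        drops = np.linalg.solve(g, np.full(n, i_cell))
        return drops[-1]

    print(far_end_drop(10, 0.5, 1e-3))              # rail only: 27.5 mV
    print(far_end_drop(10, 0.5, 1e-3, r_sub=10.0))  # with substrate path: smaller drop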

The power integrity analysis done with this approach on ST’s leading-edge industrial microcontroller, STXX, with embedded Non-Volatile Memory (eNVM) and a digital core, analog blocks and IO cells integrated on the same die, shows interesting results. The static IR-drop analysis shows a reduction in voltage drop when the substrate contribution is taken into account.

In the case of dynamic IR-drop analysis, capacitive coupling between the VDD and GND networks due to the decoupling capacitances (intrinsic, extrinsic and substrate parasitic capacitances) is also taken into account. The reduction in dynamic voltage drop (DVD) due to the substrate is more significant than in the static case, because the substrate contributes significantly to the overall on-chip decoupling capacitance.

Hence, by taking the substrate into account, a more accurate and less pessimistic IR-drop analysis is performed, which guides designers away from adding unnecessary extra routing resources and extrinsic decaps on-chip while still guaranteeing the power integrity targets. This method, through the use of RedHawk and CSE, provides more accurate estimations of power integrity as well as saving costly area on the chip. The icing on the cake is that this flow for substrate noise analysis can also be used to explore different technologies, such as highly doped vs. lightly doped, with or without deep n-well, to improve power integrity.

This was an interesting paper to study: the substrate can be a blessing in disguise, as opposed to its usual role of degrading the noise integrity of analog and IO cells. However, in order to make the best use of it, careful modeling of substrate parasitics must be done at the top-level PDN. Interested designers can go through the actual paper for the many details and references covering the physics behind these models and the technology.


More Articles by Pawan Fangaria…..



Jasper Goes to DVCon
by Paul McLellan on 02-02-2014 at 6:00 pm

As usual, since they are firmly in the verification space, Jasper will have a number of things going on at DVCon 2014, which is March 3-6th at the Doubletree in San Jose. In the exhibition hall they are at booth #402.

Jasper will be happy to talk to you about anything, I’m sure, but the focus this year is on the JasperGold Security Path Verification (SPV) App. This uses formal techniques (no surprise there) to verify that there are no leaks in the hardware. The nice thing about this is that you really want to prove, not just feel fairly certain, that your security is solid and that, for example, it is not possible to take over the CPU and send all the encryption keys over the network. I wrote about it recently here. This is the summary paragraph: the Jasper Security Path Verification (SPV) App is used to prove properties about the paths to secured data. For example:

  • Data in a secure area must not be visible to the CPU if it is not in secure mode
  • A secure register must not be written by a non-secure agent

Data propagation requirements can usually be translated into one of these questions:

  • Can data in secure location A propagate to non-secure location B?
  • Can data in non-secure location X propagate to secure location Y?

In both cases we want the answer to be no, but it can be very hard to perform good verification without proper tools. Structural analysis can be very ad hoc. Simulation depends on how good the verification engineer is at breaking the security. And standard formal verification is not a good fit, since it is hard to describe the requirements as SVA/PSL assertions.
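
For intuition only, here is a toy structural reachability check over a made-up connectivity graph (all signal names below are invented). This is exactly the kind of ad hoc structural analysis the paragraph mentions; SPV’s formal analysis goes further, proving whether data can actually propagate along such paths under the design’s real logic.

    # Toy structural check, not JasperGold SPV: can data at a secure source
    # structurally reach a non-secure sink? All names here are hypothetical.
    from collections import deque

    fanout = {                               # invented netlist connectivity
        "secure_key_reg": ["aes_core"],
        "aes_core": ["cipher_out"],
        "debug_mux": ["jtag_out"],
    }

    def can_propagate(src, dst):
        """Breadth-first search over the fanout graph."""
        seen, work = set(), deque([src])
        while work:
            node = work.popleft()
            if node == dst:
                return True
            if node not in seen:
                seen.add(node)
                work.extend(fanout.get(node, []))
        return False

    print(can_propagate("secure_key_reg", "jtag_out"))    # False: no structural path
    print(can_propagate("secure_key_reg", "cipher_out"))  # True: the intended path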

Jasper is presenting the SPV App in a tutorial on Thursday March 6th from 8.30am to 12pm in the Carmel room. The tutorial — complete with customer case studies — covers the analysis and verification of the design path leakage that opens a design to hardware hacking. More details on the tutorial, Formally Verifying Security Aspects of SoC Designs, are here.

Earlier in the week, two JasperGold App users, NVIDIA and Broadcom, will be giving presentations. Both presentations are in the Fir ballroom. At 10am NVIDIA will present on chip-wide clock-gating verification, followed by Broadcom presenting on detecting X-optimism-related bugs. More details on both are here.

Full details on DVCon (including a link to register, sponsored by Jasper, yeah) are here.


More articles by Paul McLellan…


Who needs DDR4 PHY running at 2667 Mbps?
by Eric Esteve on 02-02-2014 at 11:15 am

As of today, DDR4 is targeting server, networking and consumer applications, and it will take another year before we use DDR4-equipped PCs at home. In fact, the majority of consumers will buy a smartphone or tablet rather than a PC; most of these devices come with LPDDR2, and only a few high-end tablets are equipped with LPDDR3 memory. The low-power LPDDR4 specification for tablets and smartphones is still under development, and it may take years before it reaches mobile devices. That’s why Amjad Qureshi, senior group director of the Cadence IP Group, thinks that DDR4 may also be used for the ultrathin and high-end tablet markets. We may wonder whether it was a good decision to develop and launch a DDR4 memory controller IP, including a PHY running up to 2667 Mbps…

“DDR4 for servers, laptops and mobile devices will be around for a long time as no successor is under development,” said Mike Howard, principal analyst at IHS iSuppli. Howard also makes a very important point: “It will be the last DDR iteration.” I believe this assertion, because the need for higher and higher memory bandwidth is becoming more and more difficult to support with a parallel protocol like DDRn. Increasing the DDRn bus width causes implementation issues at the board level, while increasing the DDRn PHY frequency generates multiple design issues and has been proven far less power efficient (in joules per transferred bit) than a SerDes-based protocol… But DDR4 is here and expected to stay for a while. The memory controller (including PHY) IP market was probably worth at least $100 million in 2013 (we are still waiting for results to be released by the IP vendors). If we consider that DDR4 memory controller IP will start selling in 2014, it may apply to only 5 to 10% of design starts, with customers willing to pay a premium for the first IP available, so the average IP selling price (ASP) will be in the high range. Then, when DDR4 reaches enterprise, micro-server and mainstream PC in 2015 and 2016, DDR4 memory controller IP sales should generate large revenues.
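
To put the 2667 Mbps headline number in perspective, standard DDR arithmetic (my own back-of-the-envelope, not from Cadence) gives the peak bandwidth of one channel:

    # Peak bandwidth of a DDR4-2667 interface: per-pin transfer rate times bus width.
    rate_mtps = 2667                              # mega-transfers/s per pin (DDR4-2667)
    bus_bits = 64                                 # a typical DIMM data bus width
    peak_gb_s = rate_mtps * bus_bits / 8 / 1000   # bytes per transfer, then GB/s
    print(f"{peak_gb_s:.1f} GB/s")                # ~21.3 GB/s per 64-bit channel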

We can predict these dollar sales to be higher than for DDR3, for two reasons: PHY IP ASP gets higher with PHY frequency or transfer rate, and chipmakers tend to outsource more as the IP gets more complex, or in this case as the transfer rate gets higher. These are very good reasons for a customer to decide to outsource, and Cadence has caught on to this very well: “As DDR4 high-speed memories now become readily available, designers are looking for IP that can support 2667 Mbps,” said Amjad Qureshi, senior group director, Cadence IP Group. “By providing both the DDR4 controller and the DDR4 PHY at such high speeds, Cadence gives designers the confidence and assurance that they can build next-generation systems which are faster, lower power and have more capacity.” Obviously some technology will eventually replace DDR, but we can predict that DDR4 will be used for a long time, especially if it’s the last protocol iteration, and the cumulative DDR4 memory controller IP sales, from 2014 to… maybe as late as 2020, could be in excess of several hundred million dollars.

DDR4 PHY IP is available now and for more information on DDR IP, please visit: http://ip.cadence.com/ipportfolio/memory-ip/ddr-lpddr#ddr-controllers

“Technologies like phase-change memory, RRAM (resistive RAM) and MRAM (magnetoresistive RAM) are under development, and one of these technologies will eventually replace DDR. Meanwhile, 3D memory chip stacking may bridge the gap until a successor emerges,” Howard of IHS iSuppli said. As far as I am concerned, I don’t see how the memory controller protocol, whichever memory technology wins, could avoid moving to a high-speed serial, SerDes-based protocol. The Hybrid Memory Cube already relies on a 12.5 Gbps SerDes today (25 Gbps tomorrow), but this is an expensive technology targeting high-end server and networking applications. What will be the next protocol to support data transfer from memory, once again whichever technology is used, offering higher data transfer bandwidth than DDR4 and affordable enough to be used for mainstream applications? I honestly don’t know, even if I would bet on a SerDes-based protocol. But I am sure that, in the meantime, the DDR4 memory controller IP business will have generated a huge ROI, as it is expected to be the last DDRn-type protocol (not counting LPDDR4) to be used.

By Eric Esteve from IPNEST

More Articles by Eric Esteve…..



SemiWiki Job Forum
by Rich Goldstein on 02-02-2014 at 11:05 am


As Dan has mentioned, SemiWiki has added a Job Forum in an effort to help match qualified people to jobs around the fabless semiconductor ecosystem. A quick survey of companies working with SemiWiki revealed over 1,000 job openings planned for 2014, and finding the right people for those positions is something we can help with.

Dan and I have been friends for more than 20 years, and I have the utmost respect for what SemiWiki has accomplished in such a short amount of time. I have been in EDA and semiconductor IP recruiting since the mid ’80s, when the fabless semiconductor industry truly came to be. I’ve experienced the evolution of the companies, the technology and the people, and am well connected in the semiconductor industry. DAC Search Inc., a premier search firm specializing in these markets, was founded by me in the mid ’80s, and I have remained dedicated to this field.

Following the in-house recruiting trend, I joined Magma Design Automation in 2008, where I was responsible for all North American sales recruiting. I even finished my tenure there with an assignment as an inside salesperson promoting SPICE tools to smaller accounts. From there I held contract recruiting positions at Kilopass Technology, Xilinx, and PMC-Sierra. Most recently I spent more than a year at AMD as the lead recruiter for GFX design and verification across North America.

My goal here is to provide the industry with an expert recruiting experience for in-house recruiters and hiring managers alike. We will transform the SemiWiki Job Forum into the place where Semiconductor, EDA, and IP companies can reach out to our members and viewers in order to attract the top talent to their respective career pages. As Dan says, “For the greater good of the fabless semiconductor ecosystem”.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online on January 1st, 2011, more than 800,000 unique visitors have been recorded at www.SemiWiki.com, viewing more than 6M pages of blogs, wikis, and forum posts.



PreEDAC Mixer
by Paul McLellan on 02-02-2014 at 11:00 am

Get together with your fellow industry peers and insiders at the monthly EDAC Mixer, to the benefit of local charities. You don’t need to donate anything; you just show up and pay for your own drinks. A portion of the proceeds goes to local charities, this month to the Mountain View Educational Foundation (MVEF), a volunteer-driven non-profit that provides funding for enrichment programs and educational materials to enhance the solid academic curriculum and maintain the high quality of education in the Mountain View Whisman School District. To learn more about MVEF, visit their web site here.

When is it? Thursday February 27th at the Savvy Cellar Wine Bar, 750 Evelyn Avenue, Mountain View, from 6pm to 8pm. That is right next to (actually in) the Mountain View train depot and across the rails from the light rail station.

Although I doubt you will get thrown out if you just attend, EDAC would like you to register (it’s free) so they have some idea of numbers. Register here.

While on the subject of EDAC, nominations for the Kaufman award are now open until February 28th. 2013’s recipient was Chenming Hu, inventor of the FinFET. Nomination forms are here. I’m assuming that it will be presented, as in 2013, at DAC (which is June 1-5th in San Francisco, in case you have been hiding under a stone).

So just in case it isn’t clear, here is what you do:

  • Decide to go
  • Register here
  • Show up at Savvy Cellar, 750 Evelyn Avenue, at 6pm on the 27th
  • Mingle with your industry colleagues
  • Pay for any food and drink you consume
  • Savvy Cellar will donate a percentage to MVEF

More articles by Paul McLellan…


Grid Vision 2050 – Unified & Open Across The Globe
by Pawan Fangaria on 02-02-2014 at 10:30 am

Whenever there is good momentum in a particular technology, IEEE takes a major initiative to standardize the procedures, formats, methods, measurements etc. involved in that technology, to proliferate it for the advantage of the wider community. That succeeds through the active participation and collaboration of both producers and consumers; otherwise it remains in silos. At times the monopoly of a strong organization keeps it from opening up to standards; however, that is restrictive leadership and doesn’t last long. Positive, healthy, true leadership means being open, promoting standards, involving the broader community and delivering products adhering to those standards; it’s a win-win which can pay higher dividends to all. I admire IEEE’s unrelenting service to the global community, fostering technological innovation in various ways (research initiatives, publishing research papers, holding technical conferences, evolving and promoting standards for universal adoption, and so on) through broad collaboration across different industries, for the last 51 years; to be precise, it started on 1st Jan 1963, and so I should say “Belated 51st Happy Birthday” to IEEE!

A few months ago I attended a live webinar presented by Bill Ash and Srikanth Chandrasekaran from the IEEE Standards Association (IEEE-SA) that talked about the evolution of the Smart Grid across the globe. Saving energy is a major focus, whether in small semiconductor chips (to which I am accustomed), households or industrial applications. Power generation and its efficient distribution is also a major need, especially for underdeveloped countries and the rural areas of developing countries. While India (with 17% of the world’s population, 2.6% of global GDP and a 6-9% share of global energy demand) is struggling by all means to provide uninterrupted power supply to 100% of its population, China is looking towards ultra-high-voltage transmission, and countries such as Japan, Germany and those of North Africa are seriously considering renewable sources of energy such as sunlight, wind, water, biomass etc.

IEEE is actively involved in Smart Grid technology initiatives such as electric vehicles, wireless power transfer, power magnetics and electronics in distributed resources, DC in the home, utility forums and data analytics. The aim is to conserve energy and power through clean technology, without pollution and hazards.

Since different countries have different priorities, from both the sourcing and the distribution perspective, to satisfy their local needs by exploiting available resources and considering political, social, environmental and economic situations, the challenge of arriving at a common standard is compounded. However, a common standard is a must for companies to serve larger markets with ease of interoperability and collaboration, thereby exploiting the full potential of any technology. By doing so they will be able to produce products at lower cost and provide them to consumers at lower prices. The IEEE-SA approach is therefore to foster global economic growth by meeting local needs through OpenStand, a global community that stands together to support common open standards and to develop, deploy and embrace technologies for the larger benefit of global society. A properly balanced process is followed to maintain broad consensus and transparency, creating greater value for society through competitive products and services.

While the paradigm of global open standards is relevant for any industry, the focus of this conference was on the “Global Smart Grid”, which augments regional facilities for electricity generation, distribution, delivery and consumption with a two-way, end-to-end network for communications and control. IEEE’s vision for the Smart Grid by 2050, spanning communications, power, IT, control systems and vehicular technologies, is a two-way make<->move<->use cycle of power, as opposed to today’s unidirectional make->move->use process. This allows greater sharing of resources, local utilization, conservation and re-use.

IEEE-SA invites open membership, participation and governance from individuals and organizations who can contribute to advancing technology for the benefit of humanity, by volunteering in activities such as pre-standard roadmap development through use cases, application scenarios for the Smart Grid and its enabling technologies, standards development and standards implementation.

The IEEE-SA Smart Grid Portal is available at http://smartgrid.ieee.org. Here one can find all the resources associated with the Smart Grid – conferences, publications, standards, and the activities being performed by various working groups in different countries. Other sites for more information include http://open-stand.org and http://standards.ieee.org.

There is an interesting video at http://www.youtube.com/watch?v=_4qQ4qA9xeE&feature=youtu.be

More Articles by Pawan Fangaria…..



Why Intel 14nm is NOT a Game Changer!
by Daniel Nenni on 02-02-2014 at 10:00 am

On one hand the Motley Fool is saying “Intel 14nm could change the game” and on the other hand the Wall Street Cheat Sheet is saying “Intel should shut down mobile”. SemiWiki says Intel missed mobile and should look to the future and focus on wearables, and in this blog I will argue why.

Let’s look back to 2009 when Intel and TSMC signed an agreement to “collaborate on addressing technology platform, intellectual property (IP) infrastructure, and System-on-Chip (SoC) solutions.” Intel and TSMC ported the Atom core to 40nm and offered it to more than 1,000 of TSMC’s customers:

“We believe this effort will make it easier for customers with significant design expertise to take advantage of benefits of the Intel Architecture in a manner that allows them to customize the implementation precisely to their needs,” said Paul Otellini, Intel president and CEO. “The combination of the compelling benefits of our Atom processor combined with the experience and technology of TSMC is another step in our long-term strategic relationship.”

Unfortunately this venture was a complete failure, for business and technical reasons, and was put on hold a year later. I was a frequent visitor to Taiwan at the time, so I had a front row seat for this one. The excuse was that you can’t just flip a switch and be successful in the mobile market, meaning that Intel’s Atom effort would require patience and perseverance. Fast forward to 2012:

“We are moving Intel® Atom™ processors to our leading-edge manufacturing technologies at twice our normal cadence. We shipped 32nm versions in 2012, and we expect to launch the 22nm generation in 2013, and 14nm versions in 2014. With each new generation of technology, we can boost performance while reducing costs and power consumption—great attributes for any market, but particularly for mobile computing.” (From “Our Mobile Edge” by Paul Otellini, Intel 2012 Annual Report.)

Clearly that did not happen at 22nm, with Intel literally GIVING AWAY 40 million 22nm SoCs to get “traction” in the mobile market. And Intel 14nm SoCs are delayed until 2015, which will be in lockstep with the next generation of 14nm ARM-based processors from QCOM, Apple, Samsung, and a handful of other fabless SoC companies.

As a stopgap measure to fill its new 14nm fabs, Intel dipped its toe into the shark-infested waters of the foundry business. Unfortunately the only taker was Altera, whose 14nm wafer demand is 3+ years out, with volume that is a fraction of what is needed to keep a fab open. Intel is lucky to have only lost a toe here, as it also risked exposing the secret manufacturing sauce it is famous for. Intel then shuttered Fab 42, which could have been filled by foundry customers.

Let us not forget the other multi-billion dollar Intel forays away from its core competency: McAfee? Intel TV? Can someone help me complete this list in the comment section please? There are just too many for me to remember.

That brings us to where we are today: Intel still does not have a competitive SoC offering, and time is running out. I strongly suggest that Intel take note of Google’s recent move out of the smartphone business, selling Motorola Mobility to Lenovo:

“The smartphone market is super competitive, and to thrive it helps to be all-in when it comes to making mobile devices.” – Larry Page, Google CEO

If Intel is going to go all-in, I strongly suggest it focus on Quark and the wearable (embedded) market. Mobile has hit commodity status and is moving way too fast for a semiconductor giant to keep up (TI already gave up its mobile SoC business). Intel has historically had a strong position in the embedded market, and it is time for them to get back to a business they truly believe in, absolutely.

More Articles by Daniel Nenni…..
