

What’s Driving Real Medical Tech
by Bernard Murphy on 12-17-2015 at 12:00 pm

I just watched a webinar on non-invasive bio-imaging as a way to detect and track disease, which gave me a sense of how tech progresses in the medical field and makes a positive counterpoint to my views on medical IoT, at least as envisioned in much of our industry. The webinar, on new approaches to in-vivo imaging, was hosted by Science magazine and sponsored by Perkin Elmer (remember those guys?). The presenters were Christopher Contag, Professor in Pediatrics, Radiology and much more at Stanford, and Anna Moore, Professor in Radiology and much more at Harvard Medical School. Much of the focus was on detection and treatment of cancers, so I’ll stick to what I learned there, though application to diseases like diabetes was also mentioned.

Cancer is still a very challenging disease, both in detection and in therapy. As we live longer, avoiding what might once have killed us for other reasons, cancer becomes more prominent as a cause of death. Detection is hampered by the fact that current methods find possible tumors at a quite late stage (grown to as many as 1 billion cells), and remedial action such as excision always leaves the possibility of residual cancer cells around the periphery of the surgery, which then go on to metastasize. A sobering fact mentioned in the webinar is that 90% of cancer-related deaths are due to metastasis, not to the original cancer.

For imaging, one goal is much earlier detection, when a tumor has grown to as few as 1,000 cells. This gives a better chance of micro-targeting the tumor, not just in location but also in cell biology. Suggested detection methods range from photoluminescence (uptake of luminescent compounds in cancerous cells, which can then be detected from outside the body) to optical imaging (either from outside or through endoscopy) in short-wave IR (~1.5um) using carbon nanotubes. Imaging helps not only in detection but also in tracking progress in response to therapies. Wearables could also help here, by counting tumor cells circulating through the vascular system, which can contribute to metastasis.

Another practical and possibly near term advance in imaging is in use of complementary imaging techniques to confirm diagnoses. A known problem with mammography is the rate of false positives, leading in some cases to unnecessary surgery, since X-rays cannot easily distinguish between tumors and fibrous tissue. One method that has been shown to be very complementary is optical imaging of hemoglobin concentration in the breast, combined with X-ray data. Fibrous tissue showing as a potential tumor in an X-ray does not show in the optical view and can be ruled out as cancerous (because blood concentrates around a growing tumor but not around fibrous tissue).

Finally, remember that point about surgery leaving a residue of cancerous cells too small for the surgeon to detect? Surgical tools for excision could be supported by cancer-detecting microscopes with resolution down to 1um, helping surgeons be much more accurate in eliminating margins of tumors around the main excision. Advanced laser-based surgical tools could micro-target these margins, based on this microscopy.

So where does this leave semiconductor and system design? First, any development would need to be in partnership with experts in the field, like GE or Perkin Elmer. Given that, support for imaging at specialized wavelengths, new and more portable methods for tomography combining X-ray or other sources with optical images, wearables counting circulating tumor cells, creative combinations of microscopy and laser surgery – these are all possibilities and, once proven, many will be in high demand. Where successful, they will certainly have more lasting value than counting how many times you stood up today.

The literature in this domain that I have found is heavily medical, but if you’re willing to try, there are a few references HERE, HERE and HERE.

More articles by Bernard…



Challenges in IP Qualification with Rising Physical Data
by Pawan Fangaria on 12-17-2015 at 7:00 am

With every new technology node there are new physical effects that need to be taken into account, and every new physical effect brings with it several new formats to model it. Often a format is also associated with several derivatives, sometimes a standard reincarnation of a proprietary format, further evolved by a standards body. For example, we have SPF from Cadence, and then SPEF, first proposed by OVI (Open Verilog Initiative) and later standardized by the IEEE. We also have RSPF (Reduced Standard Parasitic Format), DSPF (Detailed Standard Parasitic Format), and SBPF (Synopsys Binary Parasitic Format).

Why so many different formats for a particular physical representation? It has to do with accuracy, different methods of modeling, efficiency and size of data, optimization, and so on. A certain type of format suits a particular trade-off, e.g. modeling preference, tool affiliation, data size optimization, and so on. One thing is common: the volume of data needed to represent an electronic circuit on a piece of silicon and characterize it under all physical conditions is increasing exponentially with every emerging technology node.

The situation has become more complicated at lower nanometer technology nodes, where manufacturing variation becomes prominent. This variation can significantly affect what you design, so you have to estimate it before manufacturing and make appropriate provision for it in your design.


Above is an SEM image of contact-holes that illustrates photon shot-noise, a result of quantum effects at nanometer dimensions. As the contact-hole dimension shrinks, the number of photons required to produce the needed response from the photo-reactive compound in the resist on the wafer decreases; however, the variability remains the same. As a result, the difference in the number of photons seen by each contact-hole (i.e. photon shot-noise) has a visible impact. There are specific formats to model manufacturing variability as well.

Learning, understanding the pros and cons of, and making use of all these formats in designing, verifying, and testing semiconductor IP and SoCs is a chaotic business. An IP must be fully qualified, with all the data it carries, before its integration into an SoC. And the volume of data in silicon IP has grown many-fold.


The above chart shows the typical amount of characterization data necessary to describe the silicon IP needed to design and verify an SoC at different process nodes: about 1 TB at 14nm, expected to grow to 4 TB at 10nm. Today all factors, including timing, power, noise, reliability, and variability, have to be taken into account.

At 14nm with FinFETs, power characterization requires a format like Liberty CCSP (Composite Current Source Power), in which the current an output can drive into the connected RC network is accurately modeled in the characterization file. It takes into account leakage as well as dynamic current. Advanced modeling of gate leakage, asynchronous operation, and voltage and temperature scaling is done to capture all these effects.

As the physical effects modeled in CCSP depend strongly on the process corner, there may be different CCSP models for different states, adding up to hundreds of CCSP files per process corner for full characterization.

Interestingly, adding further to the data, extensions to CCSP have already started appearing for electromigration (EM) and on-chip variability (OCV) effects. Going to 7nm and below, the characterization data is bound to increase further.

This exponential growth in the volume and complexity of data per IP makes it impossible for either the IP provider or the SoC integrator to continue checking IP with the same home-grown scripts. Even simple checks applied to huge datasets can become difficult and time consuming. What is needed are smart automated tools that can do much more than just sanity checks: for example, trend checks on a particular parameter, feedback and correction tips, waiver reports, and so on.
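To make this concrete, here is a minimal sketch (in Python) of the kind of automated trend check such a tool might run. The CSV layout and column names (corner, temp_C, leakage_nW) are invented for illustration and do not correspond to any real characterization flow:

```python
# Minimal sketch of an automated "trend check": verify that a cell's leakage
# increases monotonically with temperature across corner data. The CSV layout
# (corner, temp_C, leakage_nW) is an invented stand-in for real data.
import csv

def check_leakage_trend(csv_path):
    with open(csv_path, newline="") as f:
        rows = sorted(csv.DictReader(f), key=lambda r: float(r["temp_C"]))
    violations = []
    prev_leak, prev_corner = None, None
    for row in rows:
        leak = float(row["leakage_nW"])
        if prev_leak is not None and leak < prev_leak:
            violations.append(
                f"{row['corner']}: leakage {leak} nW is below "
                f"{prev_corner} at {prev_leak} nW despite higher temperature")
        prev_leak, prev_corner = leak, row["corner"]
    return violations

if __name__ == "__main__":
    for v in check_leakage_trend("cell_corners.csv"):  # hypothetical file
        print("TREND VIOLATION:", v)
```

A real qualification tool does this across thousands of files and parameters at once, which is exactly why home-grown scripts run out of steam.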

Fractal Technologies’ Crossfire is the right tool to provide an efficient and productive solution for quick IP qualification before integration into an SoC. Crossfire provides detailed graphical as well as textual reports on completeness of data, presence of all parts of a component, failed components and constructs, waived violations along with their justifications, and much more. It can quickly check large datasets by using separate processes on dedicated machines, thus parallelizing the various tasks.

Covering most design, verification, and test formats, design databases, and documentation formats, Crossfire is a tool of choice for IP providers checking the compliance of their offerings and for SoC vendors qualifying IP for acceptance before using it in an SoC. Crossfire keeps adding support for new formats as well as popular vendor-specific models.

Read Fractal’s new whitepaper HERE.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com



Why Did Apple Buy a Fab?
by Scotten Jones on 12-16-2015 at 4:00 pm

It was announced today that Apple has purchased a 200mm fab located in San Jose from Maxim Integrated Products for $18.2 million. My initial reaction to the announcement was shock, but then I started thinking through what Apple might use the fab for and concluded the move is less significant and surprising than it may appear at first look.

The first point to make is that this is a relatively old and small 200mm fab, not suitable for production of Apple’s applications processors. According to the SEMI World Fab Watch database the fab was built in 1987, which is really old by fab standards. The same database indicates that the capacity of the fab fully ramped is 10,000 wafers per month, which is also low by fab standards. Apple’s latest applications processors are made using 16nm/14nm FinFET processes, with 10nm in development. 200mm equipment has only been pushed down to around 45nm and is simply not available for smaller nodes. Even if it were possible to make applications processors in this fab, low-volume 200mm production wouldn’t be economical.

So if they aren’t going to make applications processors in the fab, what are they going to do with it?

There are two scenarios that I can envision:


  • Apple may use the fab to develop sensors or displays using MEMS technology. This would likely be a good fab for MEMS development. Potentially it could be used for analog applications as well, but I think that is less likely. Apple has historically outsourced all of its production, and some kind of R&D usage fits its corporate strategy.
  • Convert the fab to a data center. Over the last several years the practice of buying older fabs and converting them to data centers has emerged. Fabs have large power feeds and large air conditioning systems both of which are needed for data centers. Fabs also typically have raised floors that are useful for connecting all of the servers together.

In summary, although at first look Apple buying a fab may be surprising, a deeper look into the specifics of the fab they bought suggests it is less significant than it first appears. The purchase is most likely for small-scale R&D, probably into some kind of MEMS technology. Alternatively, the site may not even continue as a fab and may instead be converted to a data center. It certainly doesn’t present any threat to TSMC and Samsung as Apple’s foundry providers.



    IEDM Blogs – Part 2 – Memory Short Course
    by Scotten Jones on 12-16-2015 at 12:00 pm

Each year the Sunday before IEDM two short courses are offered. This year I attended Memory Technologies for Future Systems held on Sunday, December 6th. I have been to several of these short courses over the years and they are a great way to keep up to date on the latest technology.
    Continue reading “IEDM Blogs – Part 2 – Memory Short Course”



    Tis the Season to be kWh Wasteful!
    by Alex Lidow on 12-16-2015 at 7:00 am

As the world gears up for the upcoming holiday shopping season, the technology needed by online retailers to meet demand will bring with it many unintended negative byproducts: increased inefficiency, waste and pollution, to name a few. Online sales are expected to grow by 12 percent this holiday season, on top of an already unprecedented, some might say alarming, demand for online information.

Why alarming? In 2014, data centers in the United States consumed approximately 100 billion kilowatt-hours (kWh) of energy. According to Sudeep Pasricha, an associate professor in the Department of Electrical and Computer Engineering at Colorado State University, “that’s almost twice the electricity needed to power the whole state of Colorado for a year.” Further, this growing and insatiable desire for digital content is actually polluting the environment: the massive data centers that house all this digital content on servers are now responsible for an astounding 2 percent of global greenhouse gas emissions, a similar share to today’s aviation industry.



    Inefficient grid
    To add insult to injury, the power needed to support this rapidly growing demand comes from an electrical grid that is wildly inefficient and is based on infrastructure that was created, in large part, more than a century ago. To put it simply, electricity goes through several conversion stages: first, from its origination at the power plant, then on to transmission through power stations before finally feeding the remaining energy through semiconductor chips to provide computer power to servers. And due to aging equipment, a significant amount of power is lost as it travels from the power plant to the computer chip that does all the actual computing work.

Just how significant is this waste? It turns out that the power grid must supply 150W to meet the demands of a digital chip that may need only 100W. Moreover, the amount of wasted energy is even greater, because every watt lost in power conversion is turned into heat, and that heat must be removed from the server farm by expensive and energy-intensive air conditioning. It takes about 1W of air conditioning to remove 1W of power losses, effectively doubling the inefficiency of the power conversion process. Not to mention the carbon dioxide emissions attributable to running all that air conditioning.
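The arithmetic above, spelled out as a quick sanity check using the article’s own figures:

```python
# Per-server waste arithmetic, using the figures quoted in the text.
chip_power = 100.0                        # W the digital chip actually needs
grid_power = 150.0                        # W the grid supplies to deliver it
conversion_loss = grid_power - chip_power       # 50 W turned into heat
cooling_power = 1.0 * conversion_loss           # ~1 W of A/C per W of loss

total_power = grid_power + cooling_power        # 200 W drawn overall
print(f"end-to-end efficiency: {chip_power / total_power:.0%}")  # -> 50%
```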

    In aggregate, the combined waste across the United States due to data center power conversion is enough to power over half of the state of Colorado.

    Limits of silicon

And if the inefficiencies and waste in the power grid aren’t enough, the power conversion process has been built around post-World War II silicon-based semiconductors, which have reached their theoretical power conversion performance limits. Consequently, these chips are responsible for additional power inefficiencies, with great financial and environmental costs.

However, new materials have emerged that can convert electricity more efficiently and at lower cost. In short, the superior crystal properties of these materials enable the elimination of the most wasteful final stages of conversion. It’s a dynamic similar to the evolution of air travel in the post-WWII era. Initially, air travel across the country required at least one stop for refueling. When jet-powered flight became commercially available, the increased fuel efficiency resulted in not only non-stop coast-to-coast travel but also a significantly reduced cost of the journey.

By eliminating the inefficiencies in this final stage of the server farm power architecture, we can realize a direct saving of 7 billion kWh per year. This is doubled when air conditioning energy costs are added, bringing the total to about 14 percent of the total energy consumed by servers in the US alone. The cost savings are also significant: at the average rate of $0.12 per kWh, that’s a saving of $1.7 billion annually, which does not include the additional savings in system cost from fewer power converters and air conditioners.
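The dollar figure follows directly from the kWh numbers quoted:

```python
# Aggregate savings, using the figures cited in the text.
direct_kwh = 7e9                 # kWh/yr saved by eliminating the final stage
total_kwh = 2 * direct_kwh       # doubled once air conditioning is included
rate = 0.12                      # USD per kWh, average US price
print(f"annual savings: ${total_kwh * rate / 1e9:.2f} billion")  # ~$1.68B
```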

    While the need for computing power is only likely to increase in the upcoming holiday shopping season and beyond, technologies are appearing that will help reduce waste and drive subsequent environmental and financial savings that benefit future generations of information gluttons the world over.

    Now that’s a holiday gift that I believe Santa and I can both agree on!



    Why Connect to the Cloud with Atmel SMART SAM W25?
    by Eric Esteve on 12-15-2015 at 4:00 pm

The Atmel SMART SAM W25 is in fact a module; Atmel calls it a “SmartConnect Module”. As far as I am concerned, I like the SmartConnect designation and think it could be used to describe any IoT edge device. The device is “smart” because it includes a processing unit, in this case the ARM Cortex-M0+ based SAMD21G, and “connect” reminds us of the Internet part of the IoT definition. The ATWINC1500 SoC supports Wi-Fi 802.11 b/g/n, allowing seamless connection to the cloud. What should we expect from an IoT edge device?

It should be characterized by both low cost and low power! Such an IoT system will probably be deployed many times over, whether in a factory (industrial) or in a house (home automation), so the cost should be as low as possible to enable wide dissemination. I don’t know the SAMD21G ASP, but I notice that it’s based on the smallest MCU core of the ARM Cortex-M family, so the cost should be minimal (my guess). Atmel claims the W25 module is a “Fully-integrated single-source MCU + IEEE 802.11 b/g/n Wi-Fi solution providing battery powered endpoints lasting years”… sounds like ultra-low power, doesn’t it?

The “Thing” in IoT is not necessarily tiny. As the above example from the industrial world shows, interconnected things can be as large as these wind turbines (courtesy of General Electric). To maximize efficiency in power generation and distribution, the company has connected these edge devices to the cloud, where software analytics allow wind farm operators to optimize the performance of the turbines based on environmental conditions. According to GE, “raising the turbines’ efficiency can increase the wind farm’s annual energy output by up to 5%, which translates into a 20% increase in profitability”. Wind turbines are good for the planet as they avoid burning fossil fuels. Implementing IoT devices allows wind farm operators to increase their profitability and build a sustainable business. In the end, thanks to the Industrial Internet of Things (IIoT), we all benefit from less air pollution and more affordable power!

The ATWINC1500 is a low-power system-on-chip (SoC) bringing Wi-Fi connectivity to any embedded design. In the above example this SoC is part of a certified module, the ATSAMW25, Atmel’s Wi-Fi solution for embedded designers seeking to integrate Wi-Fi connectivity into their systems. Looking at the key features list:

    • IEEE 802.11 b/g/n (1×1) for up to 72 Mbps
    • Integrated PA and T/R switch
    • Superior sensitivity and range via advanced PHY signal processing
    • Wi-Fi Direct, station mode and Soft-AP support
    • Supports IEEE 802.11 WEP, WPA
    • On-chip memory management engine to reduce host load
    • 4 Mbit internal Flash memory with OTA firmware upgrade
    • SPI, UART and I2C as host interfaces
    • TCP/IP protocol stack (client/server) sockets applications
    • Network protocols (DHCP/DNS), including secure TLS stack
    • WSC (wireless simple configuration WPS)
    • Can operate completely host-less in most applications

Notice that the host interfaces allow direct connection to device I/Os and sensors through SPI, UART, I2C and ADC interfaces, and that the module can operate completely host-less. A costly device is then removed from the BOM, which can make an IoT or IIoT edge device economically feasible.

The Atmel® SmartConnect SAM W25 is a low-power Wi-Fi certified module currently used in industrial systems supporting applications such as transportation, aviation, health care, energy and lighting, as well as in IoT segments like home appliances and consumer electronics. For all these applications certification is a must-have, but low cost and ultra-low power are the economic and technical enablers.

    From Eric Esteve from IPNEST

    More articles from Eric…



    Freescale Semiconductor: The End of a Long Journey
    by Majeed Ahmad on 12-15-2015 at 7:00 am

    “You don’t argue with success,” said Paul Galvin back in 1949 at the creation of a new venture that would eventually become known as Motorola Semiconductor Products Sector. He was referring to how Daniel E. Noble, one of Motorola’s top managers, had persuaded him to set up a small electronics research facility in Phoenix, Arizona geared toward solid-state electronics.


    Motorola Semiconductor Products Sector facility in Mesa, Arizona

    Conservatives within Motorola had opposed the idea, calling it “Noble’s Resort” and arguing that whatever Noble wanted to do in Phoenix could be done at the headquarters in Chicago. Noble had Phoenix in mind for a number of reasons, including its reputation as a clean city and prospects of hiring qualified engineers and scientists.

    Noble was a pioneer in his own right before he joined Motorola as director of research in 1940 after taking a year’s leave of absence from the University of Connecticut. He had developed the first FM mobile communications system for the specialized needs of the Connecticut State Police.

The timing of this move was impeccable. In the coming years, solid-state electronics would unleash the power of semiconductors, thrusting Motorola’s semiconductor division into a key position in the rapidly evolving chip industry. The first major milestone in Motorola’s semiconductor journey came in 1952, when it licensed the design of the transistor from Bell Laboratories.


    Daniel Noble in front of the Motorola Research facility in Phoenix

For a start, Motorola’s semiconductor division began toying with the transistor as a replacement for bulky and expensive radio power supplies. Then, in 1955, Motorola launched its first mass-produced semiconductor product: a high-power germanium transistor for car radios. Motorola was a leading manufacturer of two-way mobile radios, and initially this business facilitated the company’s entry into the embryonic semiconductor industry.

By the late 1950s, Motorola’s semiconductor division had become a major player in the transistor and diode business. Apart from radio and communications, Motorola’s semiconductor business made a major impact in automobiles, where car makers like Ford used its electronic components to build alternators that replaced generators during the 1960s.


    Motorola’s first chip: a germanium-based high-power transistor

Motorola’s major break in the semiconductor business came with the launch of NASA’s celebrated Apollo 11 mission to the moon, for which the company supplied components for on-board tracking and communications equipment.

    The radio transponder that relayed the first words from the moon to earth in July 1969 was based on a Motorola-supplied module that transmitted telemetry, tracking, voice communications and television signals between Earth and the moon. Motorola Semiconductor Products Sector was now a leading player in the nascent semiconductor industry.

This is the first part of a three-part series of blogs about Freescale’s long journey. Stay tuned for more about how Motorola Semiconductor Products Sector became a formidable player in the chip industry and what led to its spin-off from the parent company in 2004.



    Semiconductor capital spending consolidating
    by Bill Jewell on 12-14-2015 at 10:00 pm

    Shipments of semiconductor wafer fab equipment are expected to grow 2.5% in 2016 after 0.5% growth in 2015 according to the December forecast from Semiconductor Equipment and Materials International (SEMI). Gartner is more pessimistic, with its October forecast calling for fab equipment to decline 0.5% in 2015 and drop 2.5% in 2016. Gartner projects the market will return to growth in 2017 through 2019.

    A drop in fab equipment shipments in 2016 appears almost certain based on the combined October data from SEMI and Semiconductor Equipment Association of Japan (SEAJ). Three-month-average semiconductor manufacturing equipment bookings dropped to $2.01 billion in October, the lowest level since February 2013. The book-to-bill ratio was 0.87, the lowest since 0.84 in November 2012. The semiconductor equipment market declined 13% in 2012 and dropped 16% in 2013. We at Semiconductor Intelligence believe the current downturn will not be as severe, with semiconductor equipment down 5% to 10% in 2016 and resuming growth in 2017.
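For readers new to the metric, book-to-bill is simply average bookings divided by average billings over the same three-month window; a small illustration using October’s figures (the billings value here is implied by the reported ratio, not quoted directly):

```python
# Book-to-bill from the October SEMI/SEAJ figures cited above.
bookings = 2.01e9                    # USD, 3-month average bookings
book_to_bill = 0.87
billings = bookings / book_to_bill   # ~$2.31B implied shipments
print(f"implied billings: ${billings / 1e9:.2f}B")
# A ratio below 1.0 means new orders are running behind shipments,
# pointing to lower equipment revenue ahead.
```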

    Semiconductor capital expenditures (CapEx) for 2015 are projected to be $63.9 billion, down 1% from $64.6 billion in 2014, according to Gartner’s October forecast. Gartner expects CapEx will drop 3.3% in 2016 before picking up to 5% to 6% growth in 2017 through 2019. Semiconductor capital spending can be grouped into four major segments:
    1. Foundry companies such as TSMC, GlobalFoundries and UMC.
2. Memory companies including Samsung, SK Hynix, Micron and IM Flash (Intel/Micron joint venture).
    3. Intel, the largest semiconductor company and dominant microprocessor supplier.
    4. Other – all other semiconductor companies (numbering in the hundreds).

    The chart below shows SC CapEx from 2010 to 2015. Total CapEx is based on data from IC Insights (2010 to 2014) and Gartner (2015 estimate). CapEx for each segment is based on company data and estimates for 2015.

    CapEx was $54 billion in 2010, more than double the $26 billion in 2009 during the semiconductor downturn. Capital spending increased 25% to $67 billion in 2011. CapEx dropped in 2012 and 2013 before resuming growth in 2014. The four segments show varying trends. The foundry segment has generally been on an upturn, from 21% of the total in 2010 to 25% in 2015. Memory increased from 31% in 2010 to 44% in 2015. The Memory segment should decline in 2016 as both Samsung and SK Hynix plan to cut CapEx. Intel’s portion increased from 10% in 2010 to 19% in 2012 and 2013 before dropping to 11% in 2015. The most significant trend is the Other segment dropping from 39% in 2010 to 19% in 2015. In dollars, Other CapEx dropped from $21 billion in 2010 to an estimated $10 billion in 2015.

    What is behind the downward trend in Other CapEx? One major factor is almost all new semiconductor companies adopt a fabless strategy – focusing on design, marketing and sales and leaving manufacturing to outside foundries. Another key factor is many existing semiconductor companies are relying less on their own fabs and more on outside foundries. As shown in the table below, the top three spenders grew CapEx from 2010 to 2014, ranging from 22% for Samsung to 94% for Intel. In contrast, three major semiconductor companies with fabs have significantly reduced CapEx. Texas Instruments, STMicroelectronics and Renesas Electronics all spent over a billion on CapEx in either 2010 or 2011. Each company has been cutting CapEx significantly since, with 2014 versus 2010 down 60% for ST and Renesas and down 68% for TI.

CapEx, US$ Billion       2010   2011   2012   2013   2014   2014 vs 2010
Top 3
  Samsung                11.0   12.1   12.3   11.8   13.3   +22%
  Intel                   5.2   10.8   11.0   10.7   10.1   +94%
  TSMC                    5.9    7.3    8.3    9.7    9.5   +61%
Others
  Texas Instruments       1.20   0.82   0.55   0.41   0.39  -68%
  STMicroelectronics      1.24   1.28   1.11   0.53   0.50  -60%
  Renesas Electronics *   0.90   1.05   0.56   0.37   0.36  -60%

* Renesas data for fiscal year ending March of the following year

Mergers and acquisitions are also leading to decreased CapEx in the Other segment. Combining semiconductor companies that own fabs leads to a consolidation of manufacturing: the combined company will spend less on CapEx than the individual companies would have. In 2010, Renesas Technology and NEC Electronics merged to form Renesas Electronics. Renesas was originally formed by the merger of the semiconductor businesses of Hitachi and Mitsubishi Electric in 2003. Texas Instruments acquired National Semiconductor in 2011.

    Recent M&A activity contributing to manufacturing consolidation and lower CapEx includes:
    · NXP Semiconductors completed its acquisition of Freescale Semiconductor last week.
    · ON Semiconductor agreed to acquire Fairchild Semiconductor last month (see our November post for details).
    · Infineon Technologies acquired International Rectifier in January.
    Other recent M&A activity does not affect CapEx since the acquired companies are fabless. This includes Avago’s pending acquisition of Broadcom and Intel’s proposed acquisition of Altera.

    The trend is inevitable – new fabs will be built by companies with the economy of scale to justify an investment of $5 billion to $10 billion per fab. Thus memory companies, foundry companies and Intel will dominate CapEx. Other companies will continue to upgrade their existing fabs, but few (if any) will build new fabs.



    Latest Crop of Energy Harvesting Chips Powers IoT Sensor Nodes
    by Tom Simon on 12-14-2015 at 12:00 pm

Like death and taxes, changing batteries in remote sensor nodes and wireless IoT devices is often inevitable. Huge effort has been expended on reducing power consumption in battery-operated devices, but the day always comes when the battery goes dead. Taking care of this can be as simple as popping open a battery cover and swapping batteries, or as dreadful as driving or hiking to countless remote locations to do the same. Furthermore, batteries are expensive and wasteful. Energy harvesting promises to reduce or eliminate this problem by using, among other things, light, heat and motion to power electronics.

Huge gains in rechargeable battery life and capacity have enabled enormous progress in wireless IoT devices, but to eliminate the need for wired recharging or battery replacement, other energy sources need to be harnessed. The most common sources for harvesting are solar, thermal and vibration. Because these sources are inconsistent and deliver varying voltage and current profiles, the energy from them needs to be stored in a reservoir such as a capacitor or battery. To capture all the energy possible, low voltages must be up-converted so that the power is useful. Lithium-polymer (LiPo) batteries need to be charged up to 4.2V and require specialized charging profiles to avoid overcharging or reduced battery life.
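As a rough feel for the budget involved, here is a back-of-the-envelope check, with every number an assumption chosen purely for illustration, of whether a given average harvested power can keep a duty-cycled sensor node alive:

```python
# Back-of-the-envelope harvest budget; every number here is an assumption.
harvest_mw = 5.0          # average harvested power over the day
active_mw = 60.0          # power while sensing + transmitting
active_s = 2.0            # seconds of activity per cycle
sleep_mw = 0.02           # sleep-mode power
period_s = 60.0           # one measurement cycle per minute

avg_load_mw = (active_mw * active_s + sleep_mw * (period_s - active_s)) / period_s
verdict = "sustainable" if avg_load_mw < harvest_mw else "battery will drain"
print(f"average load {avg_load_mw:.2f} mW vs harvest {harvest_mw} mW: {verdict}")
```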

ICs that run off 5V and charge LiPo batteries have been around for quite a while, but efficient energy harvesting demands the ability to operate with inputs well below 1V. Cypress offers several chips for converting power from solar, vibration and thermal sources, each designed for somewhat different applications.

The MB39C831 works for charging LiPo batteries with its built-in charge controller. It will also charge supercapacitors, which can replace batteries in many low-power devices. It can drive circuit loads from lower-voltage inputs, such as when 5V is needed from a 3.7V nominal LiPo battery, or from even lower input voltages. The important feature of this chip for working off solar is Maximum Power Point Tracking (MPPT), which ensures that the load the MB39C831 presents to a solar panel optimizes power transfer. Solar panels provide their highest voltage under no load; if a high-current load, such as a charging battery, is applied, the current will spike but the voltage coming from the panel will collapse. This can cause large fluctuations and inefficient power transfer. MPPT seeks the optimal I/V point for the connected panel.
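Cypress doesn’t publish the MB39C831’s algorithm at this level of detail, but the textbook way to implement MPPT is a “perturb and observe” loop; here is a minimal sketch, where read_v, read_i and set_duty are hypothetical hooks into the converter hardware, not a real driver API:

```python
# Perturb-and-observe MPPT sketch: nudge the converter duty cycle, watch
# panel power, and keep moving in whichever direction power improves.
def mppt_step(state, read_v, read_i, set_duty, step=0.01):
    power = read_v() * read_i()
    if power < state["last_power"]:
        state["direction"] *= -1          # last perturbation hurt: reverse
    state["duty"] = min(1.0, max(0.0, state["duty"] + step * state["direction"]))
    state["last_power"] = power
    set_duty(state["duty"])               # perturb; observe on the next call
    return state

# Typical use: initialize once, then call mppt_step periodically.
state = {"duty": 0.5, "direction": 1, "last_power": 0.0}
```

Real controllers refine this basic loop with variable step sizes and start-up handling, but the feedback structure is the same.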

The MB39C811 is suited to driving circuits rather than charging batteries. Despite the absence of charge-control circuitry, it is still useful because it can handle power sources that deliver AC currents: it has a low-loss rectifier bridge for AC sources like piezoelectric generators. The other advantage of this chip for piezoelectric sources is a high input voltage rating of up to 24V, with over-voltage protection that keeps the unit operating and allows 100mA operation.

Both of these chips have extremely low quiescent current (1.5uA and 41uA respectively). It would not do to have the voltage converter draw down what limited power is available for the application circuit. The MB39C831 can start operating at voltages as low as 0.35V, which helps squeeze every bit of power out of a solar panel on a cloudy day; remote sensor nodes must be able to tolerate periodic lack of sun.

Spansion (now merged with Cypress) posted an interesting summary of the needs of energy harvesting on its web site. It also discusses several evaluation boards offered to help with system prototyping. These sensor node boards come with the above PMICs and an FM3 MCU based on an ARM Cortex-M3. An RF module is included to provide communication, and the board also comes with an LCD display, temperature and light sensors, and I2C for attaching other peripherals.



    Is That My Car on Fire?
    by Daniel Payne on 12-14-2015 at 7:00 am

I was kind of shocked when the service manager at our local VW dealership told me that one of the wires in the ignition system of my wife’s New Beetle had started to overheat, melting the insulation and becoming a safety hazard. Why didn’t a fuse just blow, protecting the wiring from overheating? We decided to quickly sell that car and made a mental note to double-check with Consumer Reports before buying another vehicle with electrical issues. Most of us drive a car and often take it for granted that the electrical wiring is OK, expecting that some engineer has tested the complete system before production units were built.

Automotive companies can use a methodology of selecting wires that are larger than needed for the actual load, then building prototypes in order to measure currents and finally determine how to protect wires with a fuse. A more modern approach is to model and simulate the wiring for a car, although it’s a challenge when each make and model of car has a variety of trim packages and electrical options. Analyzing your automotive wiring during the design phase requires that you design differently by:

    • Entering accurate load information for all devices that draw current (motor size, Volts, Amps)
    • Knowing the current flow through Engine Control Units (ECU’s)
    • Modeling the entire electrical architecture of the vehicle with component locations, relative temperatures, and the harness interconnect points
    • Having accurate models for the battery, fuses, wires and basic discrete devices

    Engineers at Mentor Graphics have created automation tools to help automotive designers in this area, and a full analysis can be run that produces a table showing the recommended wire size (CSA max):

Knowing the proper wire sizes before production is much smarter than building a prototype and measuring, because it saves both time and money. Recalls in the automotive industry can be quite costly and embarrassing to the brand.
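To show the shape of the calculation such tools automate, here is a toy wire-sizing routine; the ampacity table, the resistivity-based voltage-drop model, and the limits are illustrative assumptions, not Mentor’s actual data:

```python
# Pick the smallest standard cross-sectional area (CSA) that satisfies an
# ampacity limit and a maximum voltage drop. Table values are illustrative.
RHO_CU = 1.72e-8                                   # ohm*m, copper resistivity
WIRE_TABLE = [(0.5, 11), (0.75, 15), (1.0, 19), (1.5, 24), (2.5, 32)]  # (mm^2, A)

def pick_wire(load_amps, length_m, max_drop_v=0.5):
    for csa_mm2, ampacity in WIRE_TABLE:
        resistance = RHO_CU * length_m / (csa_mm2 * 1e-6)  # conductor resistance
        if load_amps <= ampacity and load_amps * resistance <= max_drop_v:
            return csa_mm2
    raise ValueError("no wire in table meets both constraints")

print(pick_wire(load_amps=10, length_m=3))  # -> 1.5 (mm^2) with these numbers
```

The real tools add temperature derating, bundling effects, and harness routing on top of this basic trade-off.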

    Fuses should protect against accidental shorting, and there are three types of short circuit test cases:


If you build a physical prototype and run this type of short testing (aka Wire Smoke Testing) for all electrical loads in a car, it takes a lot of time and engineering effort. Then you have to check for fumes caused by wires that are overloaded, heating up and burning their insulation.

    Knowing the characteristics of wire fumes (blue line) and the fuse blow time (orange line), we can start to get a bit more scientific about wire sizing and fuses.

The Y-axis is Amps and the X-axis is Seconds. At point 1 the fuse blows first, at about 20 amps (orange curve), which is desirable. However, at about 8 amps, shown as point 2, the wire will be fuming before the fuse blows, and that’s a big problem.
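A toy model of those two curves makes the failure band easy to see. The constants below are invented purely to reproduce the qualitative picture described above (fuse wins at ~20 amps, wire fumes first at ~8 amps), not real fuse or wire data:

```python
# Fuse blow time vs. wire fume time as functions of current (toy model).
def fuse_blow_time(amps, rating=10.0, c=200.0):
    if amps <= rating:
        return float("inf")           # below its rating the fuse never blows
    return c / (amps - rating) ** 2

def wire_fume_time(amps, safe=6.0, k=2000.0):
    if amps <= safe:
        return float("inf")           # wire stays cool below this current
    return k / (amps - safe) ** 2

for amps in (8, 12, 20):
    ok = fuse_blow_time(amps) < wire_fume_time(amps)
    print(f"{amps:>2} A: {'fuse protects' if ok else 'wire fumes first -> problem'}")
```

Sweeping current like this across every load is exactly the analysis that is painful to do with physical prototypes and cheap to do in simulation.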

    Summary
    Automotive wire harnesses must be designed for safety, and now there’s an automated approach that can be used during the design process to ensure that the current loads of each wire do not damage the insulation, and that the proper fuse values are selected, giving us electrical protection. Mentor Graphics has been involved with cabling design systems for many years now, and Mike Stamper has written a White Paper on this topic:

    Related blogs: