
A Real Engineering Challenge – Artificial Red Blood Cells

by Bernard Murphy on 02-17-2016 at 7:00 am

When you’re thinking about “what can we do next”, you can think big or you can think small – very, very small. Robert Freitas at the Institute for Molecular Manufacturing (IMM) has such an idea – artificial red blood cells (RBCs). These would be nano-machines which could augment the oxygen and carbon dioxide carrying capacity of natural RBCs.

Oxygen is fundamental to life – mitochondria use oxidation to generate power for the cell, without which the cell will quickly die. The hemoglobin in RBCs is essential in transporting oxygen from the lungs to tissues and carrying the carbon dioxide generated in the consumption of food back to the lungs.

Any loss of effectiveness in this task due to anemia, respiratory problems or a host of other conditions can be life-threatening. Some methods to address insufficient oxygenation, such as hemoglobin formulations (hemoglobin separated from RBCs), have lower carrying capacity than RBCs and a very short lifetime in the vascular system. Alternatives with higher carrying capacity and significantly longer lifetime would be a major step forward.


Dr. Freitas’ proposal (this has not yet been built as far as I know) is to construct a spherical nano-machine, ~1μm in diameter, which can easily pass through capillaries. The machine acts as a pressure vessel, pumping O₂ in and CO₂ out when in the lungs and pumping O₂ out and CO₂ in when in other tissue. Pumps are driven by rotors which can separate O₂ and CO₂ molecules from blood plasma.

Rotors are arranged around the spherical chamber, some to pump in and some to pump out, along with sensors to detect O₂ and CO₂ concentrations and a glucose-powered engine to drive the rotors (glucose is readily available in the bloodstream). O₂ and CO₂ are stored in tanks around the surface of the sphere; the center of the sphere is reserved for a water ballast chamber and the computer which will monitor sensors, control rotors and perform other functions.

The ballast chamber is provided to control buoyancy. This feature is not thought to be required during normal operation of the artificial cells in the body but would be useful for extracting these cells after they have completed their therapeutic purpose. Blood can be circulated from the body to a centrifuge where, with suitably adjusted buoyancy control, the artificial cells can be encouraged to separate out.

The computer has interesting constraints. The author expects that adequate computing capacity could be contained inside a sphere 58nm in diameter, consuming ~10⁻¹⁴ W, which would be a small percentage of the output of the glucose engine. It would be interesting to hear what ARM users think about this.

So far, what is described is challenging but maybe not inconceivable. The real challenge comes in manufacturing at the volumes required to make this useful. The author estimates that a full load required to replicate RBC carrying capacity for one patient is ~5×10¹² devices. If you could make a million devices for a penny, a single transfusion would cost ~$50k. Scaling down the load doesn’t really help – dropping to a 10% load reduces the value of the solution correspondingly. So we have to get to much better than a million devices per penny. Geometrically that doesn’t seem impossible, but I’m sure this will take a high order of MEMS device physics (and chemistry).
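For perspective, the cost math is simple enough to check. Here is a quick sketch using only the dose size and cost-per-device estimates quoted above:

```python
# Back-of-envelope cost of one full artificial-RBC "transfusion",
# using the estimates quoted above.
full_dose_devices = 5e12       # ~5x10^12 devices to replicate RBC carrying capacity
devices_per_penny = 1e6        # assumed manufacturing cost: one million devices per $0.01

cost_dollars = full_dose_devices / devices_per_penny * 0.01
print(f"Cost per full dose: ${cost_dollars:,.0f}")                    # -> $50,000

# To reach, say, a $500 dose at the same load, manufacturing must improve 100x:
target_cost = 500
required_devices_per_penny = full_dose_devices * 0.01 / target_cost
print(f"Devices per penny needed for ${target_cost}: {required_devices_per_penny:.0e}")  # -> 1e+08
```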

Final devices will of course be separated (each wrapped in a diamond-like shell to avoid degradation of the device and toxicity to the blood). This will truly be silicon dust (or more exactly diamond dust). There’s a great description in the paper of how a therapeutic dose is prepared starting with this powder. A cold glucose solution is added to the powder, along with any necessary salts, proteins, etc. Sensors on the cells detect the glucose, which they start pumping into their tanks. They then fill the oxygen tanks and finally the ballast tanks, at which point they sink in the solution. Any powder left floating on top indicates defective cells, which are skimmed off. A command is sent to the cells to expel enough water to reach neutral buoyancy; from there they are ready to be injected into the patient through an IV drip.

Probably this isn’t something we’re going to see very soon, but it does define a great stretch goal for where we could get if we really try hard. You can get more detail from Dr. Freitas’ paper HERE.

More articles by Bernard…


The (not so) Easy Life of an SOC Design Integrator

by Tom Simon on 02-16-2016 at 3:00 pm

How can large SOC projects effectively integrate sub-blocks and IP into a stable version for release or internal development? The person responsible for integrating SOC sub-blocks into a validated configuration for release has a difficult task. Usually there are many sub-blocks, each undergoing its own development. There needs to be a stable configuration for the entire project to work from, so there is a huge penalty for releasing sub-blocks that are not ready.

The integrator, who may not necessarily be part of the design team, has to know when blocks in the design have available updates. They also need to systematically integrate one or more updates while ensuring that each change to the configuration meets quality levels. If indeed one of the blocks fails validation, the integrator will need to quickly and easily back out the changes before they finalize the updated release for the project.

There are other participants in the operation. There is the contributor who owns the IP or sub-block that is being updated. This person will want to be able to flag a set of file versions as a new release for consideration by the integrator. The contributor will also want to be able to test their new release for the sub-block by running verification. Of course they will also need a way to inform the integrator that a new release is ready.

Because most designs are hierarchical, once the integrator has created a new release of their block, or chip, the consumer will need to be notified that a fully tested release is available for them. The consumer might be responsible for the next phase of the design. For example, if the design we are talking about is Verilog, once it has been released as synthesizable and meeting initial timing, the back-end group will want to take this deliverable and work on the physical implementation.

There is actually a lot more to this than described above. Fortunately, Methodics has written a white paper that goes into more detail on the requirements and on their implementation of a system that addresses the design needs in this workflow for very large and multi-site projects. Methodics’ ProjectIC system uses ‘Moving Aliases’ to identify releases of IP or subsystems. A given alias points to a set of versions of the contents and makes it easy to update a workspace to a particular release level. Aliases can have meaningful names like “SOC_READY” or “GOLD”.

ProjectIC can use pre-release or post-release hooks to invoke verification steps automatically when a workspace is updated to a particular alias. This makes the process more automated and helps ensure better quality. It’s also easy for a user to look at a workspace and see what its status is. For example, this includes information about what release is present for each sub-block, and whether the files are in a container or are instead being referenced. Most importantly, the workspace status can tell if a newer version is available.
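To make the moving alias idea concrete, here is a toy model in Python. It is not ProjectIC’s actual command set or API (the white paper shows the real commands); it simply illustrates an alias that points at a set of sub-block versions and can be promoted or rolled back as a unit:

```python
# Toy model of a "moving alias": a named pointer to a set of IP/sub-block
# versions that can be advanced (or rolled back) as one atomic step.
# Illustrative only -- this is NOT ProjectIC's API.

class MovingAlias:
    def __init__(self, name, versions):
        self.name = name                    # e.g. "SOC_READY" or "GOLD"
        self.history = [dict(versions)]     # keep prior releases for rollback

    @property
    def versions(self):
        return self.history[-1]

    def promote(self, updates):
        """Create a new release level with some sub-blocks updated."""
        new = dict(self.versions)
        new.update(updates)
        self.history.append(new)

    def rollback(self):
        """Back out the latest release if validation fails."""
        if len(self.history) > 1:
            self.history.pop()

soc_ready = MovingAlias("SOC_READY", {"usb_phy": "1.3", "ddr_ctrl": "2.0"})
soc_ready.promote({"ddr_ctrl": "2.1"})   # integrator pulls in a new sub-block release
soc_ready.rollback()                     # ...and backs it out after a validation failure
print(soc_ready.versions)                # {'usb_phy': '1.3', 'ddr_ctrl': '2.0'}
```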

The Methodics whitepaper goes into more detail and provides examples of ProjectIC commands and their results. You can find the paper here on their website.

More articles from Tom…


IoT implementation and Challenges!

by Ahmed Banafa on 02-16-2016 at 12:00 pm

The Internet of Things (IoT) is the network of physical objects—devices, vehicles, buildings and other items which are embedded with electronics, software, sensors, and network connectivity, enabling these objects to collect and exchange data. Implementing this concept is not an easy task by any measure, for many reasons, including the complex nature of the different components of the IoT ecosystem. To understand the gravity of this task, we will explain the five components of an IoT implementation.
Continue reading “IoT implementation and Challenges!”


DDR4 is a complex interface to verify — assistance needed!

by Tom Dillinger on 02-16-2016 at 7:00 am

The design of parallel interfaces is supposed to be (comparatively) easy — e.g., follow a few printed circuit board routing guidelines; pay attention to data/clock/strobe signal lengths and shielding; ensure good current return paths (avoid discontinuities); match the terminating resistances to the PCB trace impedance; and provide good decoupling capacitance, especially on the signal termination reference voltage. The definition of the DDR4 interface standard by JEDEC incorporates many new features beyond DDR3, and requires that designers place additional focus on the “data eye” analysis that has primarily been the domain of SerDes serial interface implementations.

At DDR4 data rates, the validation of data “DQ” signaling is transitioning from setup/hold timing checks to a true eye margin analysis. The figure below illustrates a rather startling trend in the (LP)DDRn standard, in terms of the available eye opening.


This article highlights but a few of the new features of the DDR4 standard and the corresponding analysis and verification challenges.

I recently had the opportunity to chat with Nitin Bhagwath, technical marketing engineer at Mentor Graphics, who provided me with a great overview of DDR4. [ Nitin: A belated “Thanks!” ]

The motivations for the DDR4 standard are much broader than simply increasing memory bandwidth, although that’s certainly still the “prime directive”. The VDDQ supply has been lowered from 1.5V for DDR3 to 1.2V, to save power. Specific features have also been included to enable additional power and simultaneous switching noise optimizations.

Vref ≠ VDDQ/2

The biggest change from DDR3 is the adoption of a new on-die termination (ODT) approach, which results in a significant change in the DQ switching threshold voltage.

As illustrated below, DDR3 used a receiver termination impedance to VDDQ/2. Conversely, DDR4 uses a VDDQ pullup termination — aka “pseudo-open drain” or “POD1.2”.

As a result, the average of the ‘1’ and ‘0’ voltage levels is VDDQ/2 for DDR3 — the reference voltage used as the data threshold is available with a straightforward voltage divider, using two equal (low-tolerance) resistors.


For DDR4, however, the average of the ‘1’ and ‘0’ voltage levels is above VDDQ/2, based upon the driver impedance and receiver termination resistances — the center of the data eye is shifted upwards from VDDQ/2.
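A simplified DC calculation shows why the eye center moves up. The sketch below ignores transmission-line effects, and the resistor values are illustrative choices rather than figures from the JEDEC spec:

```python
# Simplified DC view of a DDR4 pseudo-open-drain (POD) DQ link.
# Driving '1': no current flows, so the line sits at VDDQ.
# Driving '0': VDDQ divides across the receiver termination (Rtt) and the
# driver pulldown (Ron), so VOL = VDDQ * Ron / (Ron + Rtt).
VDDQ = 1.2      # volts
Ron  = 34.0     # ohms, illustrative driver pulldown impedance
Rtt  = 48.0     # ohms, illustrative receiver termination to VDDQ

VOH = VDDQ
VOL = VDDQ * Ron / (Ron + Rtt)
eye_center = (VOH + VOL) / 2

print(f"VOL = {VOL:.3f} V, eye center = {eye_center:.3f} V "
      f"(vs. VDDQ/2 = {VDDQ/2:.3f} V for DDR3-style termination)")
# -> VOL ~= 0.498 V, eye center ~= 0.849 V, well above 0.6 V
```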

As there will be pin-to-pin variation in the on-die receiver termination and driver resistances at both the MEMCTRL and the DRAMs, the expected voltage range between ‘1’ and ‘0’ will have pin-to-pin tolerance as well. Therefore, a key DDR4 design implementation requirement is the “training” of a programmable, internally generated switching reference. This reference voltage is determined during an initialization sequence performed by the MEMCTRL, typically using one reference per data lane at the controller and one reference for the DRAM.


(Note: The termination to VDD for DDR4 DQ signals is similar to the GDDR5 interface. Also, the DDR4 address/command signals continue to use the termination resistor connected to VDDQ/2, and an external VREF.)

Power Optimizations in DDR4

Why change the DDR4 termination from DDR3? In DDR3, current flows between the MEMCTRL and DRAM for either a driven ‘1’ or ‘0’ DQ signal, whereas in DDR4, current is only present for a logical ‘0’ — the VDDQ termination results in no current for a logical ‘1’. Significant power savings are achievable if the DQ data bit values transferred between the MEMCTRL and DRAM can be biased toward a higher percentage of 1’s. The aggregate simultaneous switching noise (SSN) of successive data bursts is reduced as well.

DDR4 adds a Data Bus Inversion (DBI) signal option. A pre-calculation of the data is made, and either the true or inverted values are sent across the DDR interface to maximize the number of 1’s. The DBI signal accompanying the data indicates whether the receiver should invert the incoming values when capturing. There’s a tradeoff, to be sure — designers need a tool to help assess whether the additional overhead of the DBI signaling yields a significant power/SSN benefit.
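A rough sketch of the idea, in Python: invert a byte when it contains more ‘0’s than ‘1’s. This illustrates the concept only; consult the JEDEC spec for the exact DBI encoding rules and bit ordering.

```python
# Sketch of per-byte Data Bus Inversion (DBI): invert a byte when doing so
# increases the number of '1' bits, since a POD-terminated '1' draws no
# termination current. Illustration only -- not the exact JEDEC encoding.

def dbi_encode(byte):
    """Return (byte_to_send, dbi_flag); invert when that yields more '1' bits."""
    ones = bin(byte).count("1")
    if ones < 4:                        # more '0's than '1's: invert the byte
        return (~byte) & 0xFF, True
    return byte, False

def dbi_decode(byte, dbi):
    return (~byte) & 0xFF if dbi else byte

data = 0x03                             # only two '1' bits, so worth inverting
tx, dbi = dbi_encode(data)
assert dbi_decode(tx, dbi) == data
print(f"sent 0x{tx:02X}, DBI asserted: {dbi}")   # sent 0xFC, DBI asserted: True
```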

Signal Integrity Optimizations in DDR4

The use of multiple ranks of DRAM on the DDR4 interface requires additional options for selecting the on-die terminating resistance for individual ranks. (A memory “rank” is defined as a set of DRAM parts sharing the same select/enable signal, which operate together in response to a command from the MEMCTRL. The overall memory architecture may divide the total capacity into multiple ranks; the additional signaling complexity is offset by opportunities for faster, interleaved read/write accesses.)

A DDR4 DRAM offers a range of terminating resistance values. The specific DQ pin receiver resistance presented to the interface is selected by a combination of the initial chip configuration and the DRAM operating command, if dynamic on-die termination is enabled. For example, during a DRAM write operation, a transition to the RTT_WR terminating resistance can be enabled. During other operations, two additional settings are available (RTT_NOM and RTT_PARK) to allow optimum signal integrity response among different ranks. The figure below illustrates an example of a write operation where different termination is used across three ranks — RTT_WR (active rank), RTT_NOM = high-Z, and RTT_PARK.
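Conceptually, the termination a given rank presents becomes a lookup on its role during the current operation. The sketch below illustrates only that selection step; the resistance values are placeholder examples, not settings recommended by the spec:

```python
# Conceptual dynamic-ODT selection for the three-rank write example above.
# Ohm values are illustrative placeholders, not JEDEC-recommended settings.
ODT_DURING_WRITE = {
    "target":   ("RTT_WR",   120),   # rank being written to
    "adjacent": ("RTT_NOM", None),   # Hi-Z in the example above
    "idle":     ("RTT_PARK",  48),   # un-addressed rank
}

def termination_for(rank_role):
    setting, ohms = ODT_DURING_WRITE[rank_role]
    value = "Hi-Z" if ohms is None else f"{ohms} ohm"
    return f"{setting} = {value}"

for role in ("target", "adjacent", "idle"):
    print(f"{role:>8}: {termination_for(role)}")
```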


As DDR4 is approaching the complexity of a SerDes interface, I asked Nitin about the statistical bit-error rate (BER) associated with eye diagram analysis. The JEDEC spec for DDR4 defines the data eye window as requiring a BER of 1E-16 from signal integrity simulation. Nitin added that the DDR4 standard includes additional features to enhance reliability (at the expense of additional latency):

  • a full cyclic-redundancy check on the DQ interface for a DRAM write operation
  • parity checking on the address and command signals

If enabled, the CRC is sent by the MEMCTRL after the write data, and verified by the DRAM. There are still Error Correcting Code (ECC) and non-ECC DDR4 DIMMs — the CRC implementation goes beyond the scope of the ECC offering, as illustrated below:
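A generic bit-serial CRC-8 shows the flavor of the check. The polynomial below (x⁸ + x² + x + 1) is widely reported as the one DDR4 uses for the write CRC, but treat that detail, and the bit ordering, as assumptions to be verified against the JEDEC spec:

```python
# Generic bit-serial CRC-8 over a write burst. The polynomial and bit ordering
# below are assumptions for illustration; check the JEDEC DDR4 spec for exact rules.

def crc8(data_bytes, poly=0x07, init=0x00):
    crc = init
    for byte in data_bytes:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

burst = bytes(range(8))                 # one 8-beat burst of example data
checksum = crc8(burst)
print(f"CRC-8 = 0x{checksum:02X}")
# The MEMCTRL appends this after the write data; the DRAM recomputes and compares.
```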


The validation of a DDR4 design clearly requires a detailed SerDes-like extraction and simulation to confirm the data eye margins. Unlike a SerDes serial interface with clock-data recovery, SI analysis for DDR4 requires detailed extraction of the differential clock and data strobe signals, as well. Address and command signal validation still uses traditional setup/hold timing verification.

Nitin highlighted how the Mentor Hyperlynx v9.2 development team has enhanced the “DDR Wizard” application to support the complex configuration options available in DDR4, and to guide users in applying those options when selecting IBIS model parameters, running simulations, and analyzing results. Additionally, DDR Wizard features ensure that simulations will include the requisite JEDEC checks.

This guidance is absolutely required, to ensure this complex interface is thoroughly verified, while offering insight into the tradeoffs of the new power and SI optimization features available in the DDR4 standard. More information on the Hyperlynx DDR Wizard is available here; there’s an exceptional YouTube video demonstration here.

-chipguy


NHTSA and Google’s War on Drivers

by Roger C. Lanctot on 02-15-2016 at 4:00 pm

Google and the National Highway Traffic Safety Administration (NHTSA) have recently joined forces in a battle against drivers. It is an unusual alliance and one with significant implications for the future of automotive safety in the U.S. and globally.

That alliance was manifest this week in a letter sent by NHTSA to Chris Urmson (Letter: http://tinyurl.com/zadphfj), director of Google’s self-driving car project, indicating that Google’s SDS (self-driving system) will qualify as a driver for regulatory purposes. This follows recent statements by Transportation Secretary Anthony Foxx that it’s clear that “unsafe behaviors and human choices … contribute to increasing traffic deaths on a national scale.”

Secretary Foxx was referring to NHTSA’s report of a 9.3% increase in estimated highway fatalities for the first nine months of 2015. The agency has not officially determined the cause for the increase. Foxx quoted NHTSA research as showing that human factors contribute to 94% of crashes.

This 94% figure refers to a NHTSA report published nearly a year ago called: “Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey.” The report was based on a survey of a weighted sampling of 5,470 crashes over a 2.5-year period.


The study indeed identified drivers as the “critical reason” for 94% of crashes. The report then notes: “However, in none of these cases was the assignment intended to blame the driver for causing the crash.”

The report then provides additional details regarding the causality for crashes in which the driver was the “critical reason.”


So Secretary Foxx says drivers are at fault. NHTSA’s own research appears to blame drivers. The conclusion appears to be obvious that drivers are the problem – which is great news for auto makers.

If a machine can be a driver and human drivers drive poorly, the long-term prospects for human driving of any kind are not good. But it must be reassuring for automakers to learn that all those horrible crashes that are killing 100 people in the U.S. every day – and injuring more – are no fault of the auto makers. It’s all those crappy drivers and their poor decision making.

The automotive industry has a long history of blaming drivers for collisions and crashes and the resulting injuries and fatalities. That history is detailed in Michael Lemov’s “Car Safety Wars” and goes all the way back to the 1950s, when industry resistance to the use of airbags and seatbelts was ingrained and widespread.

The battles over airbags and seatbelts were eventually won by the regulators, who demonstrated that these technologies, if required on all cars, could save lives in the event of a crash. Car makers blamed the drivers for the crashes, but safety systems were shown to save lives anyway – especially once metal dashboards, among other things, were replaced.

Somehow a step forward, blessing Google’s driverless car technology, looks and sounds eerily like a throwback mentality that is diverting the NHTSA from the kind of research it ought to be doing on making cars even safer than they already are. Advanced sensor technologies including radar, LiDAR and cameras offer the prospect of life-saving solutions, none of which are being subjected to agency review as part of potential rule-making.

The last major mandate pursued by the agency was the back-up camera mandate, which is expected to save 200-300 lives annually. Meanwhile, highway fatalities are spiking upward as vehicle sales hit records and gas prices plummet.

The NHTSA’s long-term vision, as reflected in the recent Smart Cities Challenge application criteria, is skewed toward both autonomous and automated driving, two very different scenarios, the latter of which will require expensive infrastructure. Hundreds of thousands of U.S. drivers and passengers are likely to die long before this vision is realized.

It’s time for the NHTSA to get real and get back to fundamental research to identify the next wave of mandated technologies that will be capable of saving lives. The NHTSA has been accused of being captive to the automotive industry, with multiple former General Motors executives holding senior agency positions. Now it appears, with Google’s hiring of former assistant administrator Ron Medford, that NHTSA is falling under the sway of Google.

It would be nice if NHTSA could actually commence doing the bidding of the American people rather than letting itself be pushed around by lobbyists with vested interests. A good start might be mandating automatic emergency braking or cross-traffic alert or pedestrian airbags. Let’s get serious about saving lives rather than pointlessly blaming drivers for crashes.

Addendum: To the list of other mandate-able safety technologies can be added: blind spot detection, lane departure warning, curve over-speed warning, and collision avoidance.

More articles from Roger…


Automotive Augmented Reality Applications Insights from Patents

by Alex G. Lee on 02-15-2016 at 12:00 pm

US20150202962 illustrates a system for controlling vehicle features via an augmented reality vehicle user interface. The system includes an image capturing device for capturing an image of the vehicle. The system identifies the points of interest that correspond to the vehicle features within portions of the image of the vehicle captured by the image capturing device. The system presents an augmented reality user interface of the vehicle that includes the vehicle feature settings menu icons overlaid upon the points of interest using the driver’s smartphone.

US20150226568 illustrates a navigation system for providing road guidance using an augmented reality head-up display (HUD). The system presents the route information as a virtual preceding vehicle on the front glass of a vehicle using HUD technology. The virtual preceding vehicle represents various direction and traffic information. The virtual preceding vehicle is driven in front of the vehicle at a reference distance that is adjusted based on the speed of the vehicle. The virtual preceding vehicle displays a turn signal lamp at different flickering periods based on the remaining distance from the current position to left and right turn points. The virtual preceding vehicle guides the driver’s vehicle up to the preset destination in connection with the position of the vehicle. Thus, the system allows the driver to arrive at a preset destination while viewing the movement of a virtual preceding vehicle.
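As a toy illustration of the logic described in US20150226568, the sketch below adjusts the virtual vehicle’s reference distance with speed and its turn-signal flicker rate with remaining distance. Every numeric threshold here is invented for illustration and does not come from the patent:

```python
# Toy sketch of the virtual-preceding-vehicle behavior described above.
# All numeric thresholds are invented for illustration, not from the patent.

def reference_distance_m(speed_kph):
    """Place the virtual preceding vehicle further ahead as speed increases."""
    return max(10.0, 0.5 * speed_kph)      # ~0.5 m per km/h, 10 m minimum

def turn_signal_flicker_hz(distance_to_turn_m):
    """Flicker the virtual turn signal faster as the turn point approaches."""
    if distance_to_turn_m > 300:
        return 1.0
    if distance_to_turn_m > 100:
        return 2.0
    return 4.0

print(reference_distance_m(80))            # 40.0 m ahead at 80 km/h
print(turn_signal_flicker_hz(150))         # 2.0 Hz
```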

*CNET Augmented Reality HUD System Introduction Video
*Continental Augmented Reality HUD System Demo Video
*Hyundai Augmented Reality HUD System Demo Video

US20150206431 illustrates an augmented reality HUD system for providing safe driving information (e.g., anti-collision warnings) in front of a vehicle in a form that can be easily recognized by the driver. The system collects and processes driving environment information, driving situation information of the vehicle, obstacle information, and information on the driver’s line of sight. The system determines a provision form and provision timing based on the processed information, and then provides the safe driving information according to that form and timing.

US20150291160 illustrates a cruise control system using the augmented reality HUD. The system prevents the preceding-vehicle-following cruise control function from being released or stopped when the preceding vehicle is temporarily not detected because the driving road changes from a straight road into a slope or a curve during the following cruise. The system continuously displays the preceding vehicle using augmented reality if the preceding vehicle temporarily disappears as it enters a slope section or a curve section. The system receives road information on the road ahead of the vehicle during a preceding-vehicle-following cruise. The system determines whether the road ahead is a slope or a curve based on the road information. Then, the system displays a virtual preceding vehicle in the information display region through augmented reality when it is determined that the preceding vehicle is not detected.

US20150154802 illustrates an augmented reality lane change assistant system that increases convenience and safety for a driver. The system provides an augmented reality overlay image on an outside mirror that visualizes the driving information of an objective vehicle in the side-rear area of the driver’s vehicle.

More articles from Alex…


Does the Internet of Things need new Artificial Intelligence?

by Akeel Attar on 02-14-2016 at 12:00 pm

Judging by the number of confusing posts, blogs and articles on this topic, anyone exploring the potential of what the IOT can deliver to their business/organisation can be forgiven for thinking that the IOT will need a new set of AI technologies to work correctly. Throw into the mix the hype that the IOT will need big data analytics & platforms to work and we have a very confusing IOT landscape to navigate and understand. Having been involved in delivering intelligent software solutions for over 30 years using ‘traditional’ AI, I feel well positioned to assess the need for new AI for the IOT.

Traditional AI has evolved continuously over the last 35 years and is now routinely embedded within many classes of software applications. There are two main manifestations of traditional AI:

  • Rules automation/expert system technologies. These systems are designed to capture human expertise (decision making, risk/situation assessment, diagnostics/trouble-shooting, advising on products/services, asset performance monitoring etc.) and to automate policy rules and regulations.
  • Machine learning. These systems can learn new patterns/rules from historic data. The learning can be either algorithmic (black box models such as neural networks) or symbolic rules/trees that are understandable to humans.

People often quote Apple Siri, IBM Watson or the Amazon recommendation engine as examples of the brave new AI world! The reality is that these new AI technologies complement rather than replace traditional AI. The main limitation of traditional AI is that it operates on structured data (numeric values such as price, age, voltage, etc., or a pre-defined set of discrete symbols (labels) such as colours, occupations, etc.). The new AI technologies are focussed on interpreting and learning from unstructured data such as free-format text, speech, images and videos. The concepts, patterns, features and attributes generated by the new AI from free-format text, speech, images and videos can then be used as structured data to drive traditional AI.

Having clarified the distinction between traditional and new AI, I will further argue that the IOT does not need new AI to work correctly, but what it needs most is distributed traditional AI (intelligence) as outlined below:

  • The growth of the IOT is being driven mainly by the availability of low-cost, low-power, small sensors attached to objects and things ranging from street lamps to farm animals to home appliances to industrial plants to elderly patients. By definition these sensors generate structured numeric data which is easily processed by traditional AI (with the exception of data from microphones and digital CCTV cameras, which need new AI to pre-process into patterns and features).
  • Most IOT ecosystems involve a model of centralised intelligence whereby data from sensors (things) are uploaded to a central private/public cloud where it is processed by a cloud-based AI engine before actions/alerts are notified back to devices/people at the edge of the IOT. Such a centralised model will not work, for two reasons: firstly, it is critically dependent on internet connectivity and will lose all intelligence if the network is down; secondly, with over 30 billion things forecast to be connected to the IOT over the next 5 years, the amount of data being uploaded to the cloud will overwhelm the bandwidth of most internet networks. The solution is distributed intelligence, with a traditional AI/rules engine running everywhere on the IOT ecosystem (IOT edge hubs/devices, cloud and mobile devices).

In summary, the IOT needs not new AI but distributed traditional AI, with engines that are scalable in terms of performance and footprint so that they can run anywhere from a Raspberry Pi at the IOT edge to a massive Azure Service Fabric cloud server to a smartphone. Distributed intelligence is the key to a resilient IOT with real-time intelligence.
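A minimal sketch of what distributed traditional AI at the edge can look like: a tiny rules engine evaluating structured sensor readings locally, so decisions keep being made even when cloud connectivity is lost. The sensor names and thresholds are invented for illustration.

```python
# Minimal edge rules engine: evaluate structured sensor readings locally so the
# device keeps making decisions even when cloud connectivity is lost.
# Sensor names and thresholds are invented for illustration.

RULES = [
    # (condition over one structured reading, action to raise)
    (lambda r: r["temperature_c"] > 85.0, "shutdown_pump"),
    (lambda r: r["vibration_g"] > 2.5,    "schedule_maintenance"),
    (lambda r: r["pressure_bar"] < 1.0,   "raise_low_pressure_alarm"),
]

def evaluate(reading):
    """Return the actions triggered by one sensor reading, decided locally."""
    return [action for condition, action in RULES if condition(reading)]

reading = {"temperature_c": 91.0, "vibration_g": 1.1, "pressure_bar": 3.2}
print(evaluate(reading))   # ['shutdown_pump'], with no round trip to the cloud
```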


Reconfigurable redefined with embedded FPGA core IP

by Don Dingee on 02-12-2016 at 7:00 am

On November 1, 1985, before anyone had heard the phrase field programmable gate array, Xilinx introduced what they called a “new class of ASIC” – the XC2064, with a whopping 1200 gates. Reconfigurable computing was born and thrived around the RAM-based FPGA, whose logic and input/output pins could be architected into a variety of applications and modified very quickly. Continue reading “Reconfigurable redefined with embedded FPGA core IP”


ARM POPs Another One!

by Daniel Nenni on 02-11-2016 at 4:00 pm

ARM announced a new POP deal with UMC 28nm last week. POP stands for Processor Optimization Pack, meaning physical IP libraries (logic and memory) are customized for ARM processor cores and mainstream EDA tools, creating a platform for optimized chip design. POP is a much bigger deal than most people realize so let’s get into a little more detail.

Having spent a significant amount of my 30+ year career in Semiconductor IP I may have a different view than most people. While EDA got the fabless semiconductor ecosystem and my illustrious career started, it was commercial semiconductor IP that got us to where we are today, absolutely.

Unfortunately, when we talk about IP it is usually Design IP versus Physical IP (PIP), the building blocks of modern semiconductor design (think Legos). These Lego blocks are used repeatedly throughout a design so the power and performance of the blocks are critical to the success of your chip.

Back in 1998 Artisan Components turned the IP business model upside down by offering free logic and memory libraries to design houses (the foundries paid Artisan wafer royalties). As one of many competing IP vendors I was horrified as we were making hundreds of thousands of dollars by delivering a single library and then suddenly they are free?!?!?!

Artisan did this as a result of competitive pressures of course but now, 18 years later, you can see what a brilliant move it was. Not only did it enable the $900M price ARM paid for Artisan in 2004, it was also the key enabler to thousands of design starts including our cherished mobile devices (Apple’s iProducts). We covered the semiconductor history of Apple in our latest book “Mobile Unleashed: The origin and evolution of ARM processors in our devices” but we did not mention how important ARM POP was.

In a perfect world there would be logic and memory libraries customized for each and every design. Unfortunately this requires very large and experienced IP groups with millions of budget dollars for tools and test silicon. Clearly the top semiconductor companies do this but what about the rest of the world?

Artisan created “free” foundry specific libraries and made them available to the masses for rapid adoption. ARM took that one step further by providing foundry specific POP libraries optimized for designing with ARM cores to better meet timing and power requirements. This allows emerging and fast following semiconductor companies an opportunity to design chips that will better compete in the global market.

If you learn one thing from our book Mobile Unleashed it is that ARM is by nature a collaborative company. They started out working closely with customers, design partners, and foundries and that is still the key to their success today. POP is a clear example of that as ARM CPU, GPU, and PIP groups work closely together with customers, design partners, and foundries to continually improve this platform.

Right now I would bet that China is the biggest market for ARM POP since they have new design budgets but not the IP expertise. I would also bet that ARM POP will start focusing on specific vertical markets such as Automotive, Data Centers, and IoT in the very near future. And did I mention our second book “Mobile Unleashed: The origin and evolution of ARM processors in our devices” is out now?

“ARM and UMC Extend 28nm IP Partnership to Target Cost-Effective Mobile and Consumer Applications”

About POP IP
POP IP technology comprises three key elements necessary for optimized ARM processor implementation. These include Artisan physical IP logic libraries and memory instances tuned for a given ARM core and process technology, comprehensive benchmarking reports pinpointing the conditions and results ARM achieved for core implementation, and detailed POP implementation knowledge and methodologies that enable end customers to achieve successful implementation quickly with minimized risk. POP IP products are available from 40nm to 28nm with a roadmap down to FinFET process technologies for a wide range of Cortex-A series CPU and Mali™ GPU products.

More articles from Daniel Nenni


Fastest SoC time-to-success: emulators, or FPGA-based prototypes?

by Don Dingee on 02-11-2016 at 12:00 pm

Hardware emulators and FPGA-based prototyping systems are descendants of the same ancestor. The Quickturn Systems Rapid Prototype Machine (RPM) introduced in May 1988 brought an array of Xilinx XC3090 FPGAs to emulate designs with hundreds of thousands of gates. From there, hardware emulators and FPGA-based prototyping diverged – but why, considering they do approximately the same thing? Continue reading “Fastest SoC time-to-success: emulators, or FPGA-based prototypes?”