
A Little More Quantum Computing

by Bernard Murphy on 02-25-2016 at 7:00 am

There’s another domain in Quantum Computing (QC) which periodically attracts headlines – Quantum Key Distribution (QKD). I thought this worth covering because it does not depend on the ability to do parallel computation on superposition states, so may not be as much at the mercy of limited coherence times. And ultimately it may also find much wider application, at least in cryptography.

The Core Principle
This starts with entanglement, which a lot of writers delight in presenting as mysterious, but which isn't so difficult to grasp given even a cursory understanding of quantum physics. Entanglement is another way to describe states (such as spin or polarization) of two or more particles that are related. The electrons in an atom are entangled simply because they can't take on arbitrary independent states. For example, in a helium atom in the ground state the total electron spin must be zero – the spin states are coupled in a combined state.

Entangled states don’t have to be constrained within an atom. Basic laws of conservation require that when free particles are created from a single interaction, energy, momentum and angular momentum (AM) must be conserved. If two particles are created from a state with zero AM, the AM of the created pair must be zero. If you measure the AM of one particle, the other must measure with opposite AM, no matter how far apart they get. (Some implications of doing this at a distance seem mysterious, but that’s another topic.)

There are multiple ways this basic idea can be used for QKD – I want this to be a quick read so I’ll stick to a simple method. Put a device which generates entangled pairs (typically photons, so we’re measuring polarization rather than spin) in-between two parties A and B who want to communicate. The device shoots out a sequence of pairs with each member of a pair travelling in opposite directions, so A gets one set, B gets the other set.

Both measure polarization of the photons they receive. Per pair, what they measure will be randomly distributed, but they are guaranteed, by entanglement, to measure opposite values. They synchronize, then each measures a pre-agreed fixed number of photons, giving them each a random string which they can use as an encryption key since they each know their key is the inverse of the other’s key.
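The pair-measurement protocol above can be sketched in a toy classical simulation. To be clear, this is an illustration of the anti-correlation only: real QKD also involves random measurement bases and Bell-inequality checks, which this sketch deliberately omits.

```python
import secrets

def entangled_pair():
    """Toy stand-in for measuring one entangled photon pair:
    each pair's outcome is random, but the two members always
    measure opposite values."""
    a = secrets.randbits(1)
    return a, a ^ 1

def distribute_key(n_bits):
    """A and B each collect one bit per pair; A's string is random,
    and B's string is guaranteed to be its bitwise inverse."""
    alice, bob = [], []
    for _ in range(n_bits):
        a, b = entangled_pair()
        alice.append(a)
        bob.append(b)
    return alice, bob

alice_key, bob_key = distribute_key(128)
# B inverts his bits to recover exactly the key A holds.
shared = [b ^ 1 for b in bob_key]
assert shared == alice_key
```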

The Value of This Approach to Secrecy
This use of entanglement benefits secrecy in several ways. Most obviously, it provides a physically-based random number generator for encryption keys. Also, A and B can change keys frequently, making the approach similar to one-time pad encryption, which is provably secure when keys are truly random, kept secret and never reused.
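The one-time pad itself is simple to state: XOR the message with a random key of at least equal length, and never reuse the key. A minimal sketch:

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the corresponding key byte; with a truly
    random, secret, never-reused key this is the one-time pad."""
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))  # fresh random key per message
ciphertext = otp(message, key)
# XOR is its own inverse, so applying the same key decrypts.
assert otp(ciphertext, key) == message
```

This is exactly why a physically random, frequently refreshed key stream (such as one derived from entangled pairs) is so valuable: the cipher's security rests entirely on the key's randomness and single use.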

Most importantly, quantum physics ensures that if anyone attempts to eavesdrop on key generation, they will disrupt the entangled states, because any measurement will destroy the entanglement. You can't even get around this limitation by making a copy of the state and measuring the copy – the no-cloning theorem ensures that copying will also destroy the state. So any attempt at eavesdropping can be detected by A and B. The mechanics of checking for this involve something called Bell inequality tests, which I won't try to explain here.

Implementation
An early method for producing entangled photons was (and maybe still is) spontaneous parametric down-conversion, where certain types of crystal will produce entangled pairs when pumped with a laser beam. More recently, advances in GaAs semiconductors with InAs quantum dots are now producing entangled pairs with much higher efficiency (although these methods require ultra-low temperatures).

The photons can be transmitted through fiber or free-space. There is attenuation with distance in each case through progressive loss of entanglement, more so in fiber than in free-space. There are various methods to correct for and improve on this, including quantum repeaters and the very cool-sounding but actually rather mundane quantum teleportation.

Using these methods, effective key distribution has already been demonstrated over more than 300km of optical fiber and over orbital distances in free space. At least some of these approaches do not seem to require low temperatures, so they lack some of the more obvious drawbacks associated with quantum computing.

Summary
While there's always a danger in making absolute statements about security, it feels fairly safe to assert that any method of cracking this kind of encryption is inconceivable, in the following sense: one-time key usage and frequent key changes make statistical attacks impossible, and any attempt by an eavesdropper to read the key, directly or indirectly, will be detectable because it will break the entanglement, unless 100 years of heavily-tested quantum physics is wrong. Of course this will just shift hacking to attacking cleartext data at source or destination, but hey, one step at a time.

There are already several commercial operations in this space – ID Quantique, QuintessenceLabs and SeQureNet – and several of the big semiconductor companies have research programs. The NSA is actively working in this area, as is the Chinese government, which plans to launch a satellite to support QKD in the near future.

QKD is still pretty expensive – a tool for government encryption rather than personal use, but at least so far it doesn’t seem there are fundamental barriers to optimizing the technology and costs. This may ultimately be a much more practical application of quantum technologies in electronic systems than the much-hyped ultra-fast quantum computers.

You can read more about key distribution over fiber HERE, the Chinese government’s plans HERE, US government efforts HERE and an introduction to quantum teleportation HERE.

More articles by Bernard…


tinyAVR in 8 and 14 pin SOIC now self-programming

by Don Dingee on 02-24-2016 at 4:00 pm

At this week's Embedded World 2016, Atmel is heading back to 8-bit old school with its news, aimed straight at the low-pin-count end of its MCU portfolio with a significant upgrade to the tinyAVR family.

According to Atmel’s briefing package, development of the ATtiny102 and ATtiny104 has been in progress for some time. Continue reading “tinyAVR in 8 and 14 pin SOIC now self-programming”


Apple Google FaceBook and Person of Interest!

by Daniel Nenni on 02-24-2016 at 12:00 pm

Apple, Google, and FaceBook are making significant investments in artificial intelligence (deep learning) and other advertising (snooping) enabling technology to better serve (exploit) their customers (us) and make trillions of dollars for their offshore accounts. This reminds me (in a very creepy way) of the TV series “Person of Interest” that I’m currently binge streaming on Netflix.

For those of you who have not seen it, the premise of Person of Interest is a black box computer system that identifies security threats (terrorists) using artificial intelligence and the data that we make available on the internet every day. Security cameras and IoT of course but also everything we do online. In addition to identifying suspected terrorists for the US Government, there is a backdoor that identifies people who are in danger and spits out their social security number. The billionaire developer of the system has recruited an ex-assassin (who likes to shoot bad guys in the knee) to help rescue these people. The series started in 2011 so today the technology they describe exists. The irony here is that while I’m binge watching POI, Netflix is collecting data on me of course.

“You are being watched. The government has a secret system: a machine that spies on you every hour of every day. I know, because I built it. I designed the machine to detect acts of terror, but it sees everything. Violent crimes involving ordinary people; people like you. Crimes the government considered ‘irrelevant’. They wouldn’t act, so I decided I would. But I needed a partner, someone with the skills to intervene. Hunted by the authorities, we work in secret. You’ll never find us, but victim or perpetrator, if your number’s up… we’ll find you”.

Since I am an Apple fan let’s take a look at a recent acquisition that should make us all stop and think, “Person of Interest”. Last month Apple acquired Emotient, a VC-funded startup commercializing emotion recognition technology based on 20 years of research by the six founding scientists. Using video from any camera, the deep learning-based technology detects facial expressions and demographics. They call it “a neuromarketing wave that is driving a quantum leap in customer understanding”.

And let’s not forget what demographics means:

Quantifiable characteristics of a given population. Demographic analysis can cover whole societies, or groups defined by criteria such as education, nationality, religion and ethnicity.

I also had to Wikipedia Neuromarketing:

A field of marketing research that studies consumers’ sensorimotor, cognitive, and affective response to marketing stimuli. Researchers use technologies such as functional magnetic resonance imaging (fMRI) to measure changes in activity in parts of the brain, electroencephalography (EEG) and Steady state topography (SST) to measure activity in specific regional spectra of the brain response, or sensors to measure changes in one’s physiological state, also known as biometrics, including heart rate and respiratory rate, galvanic skin response to learn why consumers make the decisions they do, and which brain areas are responsible. Certain companies, particularly those with large-scale ambitions to predict consumer behaviour, have invested in their own laboratories, science personnel or partnerships with academia.

Did you read the article about the husband who learned of his wife's pregnancy from her Fitbit data? Now think about how much your smartphone knows about you, and tell me: are you a Person of Interest?

Next I will write about the 8 robotics companies Google has purchased in the last 6 months……..

More articles from Daniel Nenni


Neural Networks Ready for Embedded Platforms

by Tom Simon on 02-24-2016 at 7:00 am

If you are not yet familiar with the term Convolutional Neural Network, or CNN for short, you are bound to become so in the year ahead. Using artificial intelligence in the form of CNNs is on the verge of replacing a large number of computing tasks, especially those involving recognizing things such as sounds, shapes, objects and other patterns in data. Its applications are so widespread it would be hard to list them. However, one application stands out among all the rest – vision.

The demand for processing image data, in terms of the amount of data, is larger than any other and is growing at an accelerating rate. Using traditional methods, creating software for this task required describing what to look for and then creating custom code for that one application. The most obvious prerequisite for the traditional approach is the ability to describe or define the thing(s) to recognize. Once recognition code has been developed then comes the job of testing and then fixing all the corner cases.

No matter how good the recognition code, there is always the limitation that it is not possible to write code for an intangible quality, such as a fake smile versus a real smile. Our brains can do this, but writing code to differentiate between them is almost impossible. The other thing neural networks are good at is dealing with noisy input. Image data that might throw off a classical vision algorithm can work just fine when put through a neural network.

The approach in using CNN is to create a general purpose neural network and train it by providing input that matches the ‘target’. A neural network divides the source image into very small regions and performs a large number of parallel math operations on the pixels. The results are handed off to another layer which is not as complex but is still suitable for parallel processing. This is repeated many times, passing the data in a sequential manner through many processing layers until a result is achieved. At each stage during training there is back propagation to correct for errors. The result is a large number of coefficients that drive the neural network.
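The layered, small-region processing described above can be illustrated with a minimal NumPy sketch. This is a toy forward pass only, with made-up random weights and no training or back-propagation step:

```python
import numpy as np

def conv2d(image, kernel):
    """One CNN layer in miniature: slide a small kernel over the
    image; each output value is a multiply-accumulate (MAC) over a
    small region (stride 1, no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)  # simple non-linearity between layers

rng = np.random.default_rng(0)
image = rng.random((28, 28))
# Two stacked layers, each handing its result on to the next.
layer1 = relu(conv2d(image, rng.standard_normal((3, 3))))
layer2 = relu(conv2d(layer1, rng.standard_normal((3, 3))))
print(layer2.shape)  # (24, 24)
```

Real networks stack many such layers (plus pooling and fully-connected stages), and training adjusts the kernel coefficients by back propagation; the sketch only shows where the enormous MAC counts come from.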

The same network implementation can be retrained to recognize a different target; all that is needed is a new set of coefficients. By now it should be clear that training is a compute- and memory-intensive activity: something ideally suited to the cloud or large servers. Training can require from 10²² to 10²⁴ operations, whereas inference – the real-time component of CNN used to recognize things – requires far fewer: 10⁸ to 10¹² multiply-accumulate (MAC) operations.

Furthermore, optimizations such as reducing the matrix size and coefficient precision are an active area of research, and if done properly have minimal impact on the quality of results. For instance, in one example going from floating-point data to 8-bit integer coefficients reduces accuracy only from 99.54% to 99.41%. Most humans are not nearly that accurate when it comes to spotting a familiar face in a crowd. This means that with more computation in the training phase, the data set and number of operations for inference can be dramatically reduced.
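The float-to-8-bit reduction mentioned above can be sketched as simple linear quantization. This is a generic symmetric scheme for illustration, not necessarily the exact method used in the cited example:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization: map float weights onto integer
    levels in [-127, 127] using a single per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
weights = rng.standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(weights)
recovered = q.astype(np.float32) * scale

# Worst-case rounding error is half a quantization step.
max_err = np.abs(weights - recovered).max()
assert max_err <= scale / 2 + 1e-6
```

Each weight now occupies one byte instead of four, and the MACs can run on cheap integer hardware, which is exactly why this trade is attractive for embedded inference.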

The reason things are getting even more interesting now is that CNN is no longer just for supercomputers. Many of the applications for this technology are mobile – cars, phones, IoT devices. Overall, since 1989 we have seen a 10⁵ increase in hardware performance, according to Nvidia's Chief Scientist Bill Dally. This has made CNN practical. More interestingly, we have recently moved into a technology space where embedded CNN is feasible.

On February 9th Cadence hosted a summit on Embedded Neural Networks which drew an overflow crowd at its in-house auditorium. Not only are embedded neural networks starting to become practical, they are also generating significant interest.

The first speaker was Jeff Bier, president and founder of the Embedded Vision Alliance. One of the first things he did was show a video for a product that uses CNN on a tablet to interpret sign language. Jeff talked about how Neural Networks are ideal for otherwise difficult recognition problems. CNN really shines when the input varies because of lighting angles, orientation changes, and other noise in the input.

Next up was Bill Dally, Chief Scientist and SVP of Research at Nvidia. He drilled into the parallel nature of the solutions for CNN. He has spent a lot of time also looking at optimizations to reduce the footprint for the inference portion of CNN. Mobile applications are constrained by power and storage budgets. Using the cloud for CNN inference is not practical for the same basic reasons: too much data would need to be transferred to the cloud to do real time inference.

The energy cost of inference is tightly tied to how close the storage is to the compute resource. Much of Bill’s work has been to look at making it so that the data can reside as close to the processor(s) as possible. Here is a chart that shows the relative cost of different data locations.

Shrinking the neural network and reducing the data size for the neural network training coefficients can be done in clever ways to achieve stunning improvements. CNN has an inherent robustness that allows for large portions (up to 90%) of the neural network to be removed, then retrained with minimal loss in accuracy. So now it is possible to take neural networks that required over 1GB and fit them into 20-30MB – a size easily handled by a mobile device.
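The pruning idea, dropping the smallest-magnitude weights and keeping only the survivors, can be sketched as follows. This is illustrative only; as noted above, real flows retrain the network after pruning to recover accuracy:

```python
import numpy as np

def prune(weights, fraction=0.9):
    """Zero out the smallest-magnitude weights; the survivors can
    then be stored in a sparse format, shrinking the model
    footprint dramatically."""
    threshold = np.quantile(np.abs(weights), fraction)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(2)
w = rng.standard_normal(100_000)
pruned = prune(w, 0.9)
kept = np.count_nonzero(pruned) / w.size
print(f"{kept:.0%} of weights remain")
```

Storing only the ~10% surviving weights (plus their indices) is what turns a model of over 1GB into one of tens of megabytes.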

Google is even promoting its own open source CNN software and hopes that embedded platform developers and software developers will use it. Pete Warden, Staff Engineer at Google, came to the Cadence event to talk about his role in proliferating the use of their open source TensorFlow software, the technology they use internally. Google is making a long term commitment to its development and use. Their goal is to allow it to operate in SRAM and always stay on. They are working towards reducing the power requirements so that this is possible on mobile devices such as Android using the CPU or potentially GPU.

Cadence's IP Group is very experienced with vision processing, and sees big growth and changes in this area. Neural networks have the flexibility to be implemented on CPUs, GPUs, DSPs and dedicated hardware, and each of these alternatives offers its own benefits and drawbacks. The hardware implementation side is moving as fast as the software for neural networks. Chris Rowen, CTO of the Cadence IP Group and one of the founders of Tensilica, spoke at the Cadence event, emphasizing that neural networks are definitely heading to embedded applications.

Right now there is a vigorous debate as to whether CPUs, GPUs or DSPs will be the best vehicle for embedded inference applications. Tensilica offers a family of DSPs focused on vision applications. Chris said that over 1 billion chips have been manufactured with an Xtensa vision, imaging or video DSP on board. These cores are tuned for embedded use and optimized for their specific applications.

Due to the fundamental shift from hard-coded recognition software to neural networks, the main task is now architecting the underlying network hardware rather than writing code to recognize specific things. This makes selecting the optimal processing architecture paramount.

In the years ahead we can expect to see remarkable progress in neural networks, and specifically with inference in embedded applications. Given the turnout at the Cadence event it is clear that many companies see this as an important area with big business potential.

More information about Cadence solutions for Neural Networks is available here.

More articles from Tom…


S2C opens up FPGA prototyping for PCIe fabrics

by Don Dingee on 02-23-2016 at 4:00 pm

Reconfigurable computing began with FPGA cards dropped into expansion slots in workstations. FPGA-based prototyping vendors tended away from that model as interconnect speeds rose and cabling complexity between modules increased. Much faster PCIe interfacing and bigger FPGAs mean revisiting the concept. Continue reading “S2C opens up FPGA prototyping for PCIe fabrics”


Synopsys at DVCon 2016

by Bernard Murphy on 02-23-2016 at 12:00 pm

It's that time of year again – DVCon starts on Monday, Feb 29th, and as always should be a packed event. Synopsys plans a big showing, in the exhibit hall, in a sponsored lunch, at tutorials and in papers. Time to get your conference shoes on and go check them out – I plan to be there all week.

One of the most obvious things you will notice is Synopsys’ presence in the exhibit hall – they take up a complete side of the hall with stations running the gamut of functional verification: integrated verification solutions, simulation, static analysis, debug, verification IP, emulation and prototyping.

One of the really cool demos is a HAPS integrated FPGA-based prototyping system running an ARC-based processor doing real-time speed sign recognition. If you’re up on advances in ADAS (advanced driver assistance systems), you’ll know that automated recognition systems are a hot topic these days. Come see one working real-time.

The Verification Compiler station will be showing an overview of the functional verification flow including verification engines (such as simulation and formal verification), natively integrated with common verification coverage and with Verdi’s unified debug environment.

On February 23, Synopsys announced an extension to its Verdi debug environment – Verdi Advanced AMS Debug. This is a much-needed development that I’ll certainly be checking out. Mixed-signal analysis gets a lot of press around simulation engines, not so much around debug which is really the heart of getting AMS interfaces right. One of the most commonly cited reasons for respins is problems at these interfaces – improved debug can only help.

The verification IP station will be highlighting Verdi Protocol Analyzer, the protocol- and memory-aware debug visualization solution, and will preview the new Protocol Performance Analyzer, offering interactive, transaction-level and statistical analysis of protocol utilization. You can also check out an overview of the complete library of UVM-based verification IP for bus, interface and memory models.

And of course the SpyGlass station showcases an overview of the SpyGlass RTL signoff solution, including next-generation Lint, Clock / Reset Domain Crossing Analysis, early Power Analysis and optimization.

To help keep you focused (and happy) on this tour through Synopsys verification land – the cocktails featured at the top of this blog (appropriately named “Verification Continuum” cocktails) will be served throughout exhibit hours.

You may also want to stick around for Thursday; if you have limited time, consider attending on Thursday only. Synopsys will be featuring a tutorial on developing verification/debug methodologies using VC Apps. This is going to be a can't-miss tutorial for any verification expert needing the latest and greatest tips and tricks to better handle an ever-expanding verification burden.

Afterwards you can break for a relaxing (and free) lunch in the Donner/Siskiyou ballroom and learn how others in your industry are tackling verification complexity and planning for future verification challenges. If you look at DVCon as a way to stay current with verification best practices (and you should), then knowing what other users are doing is an essential part of that picture.

See you there!

More articles by Bernard…


Should terrorists prefer iPhone (thanks to privacy)?

by Eric Esteve on 02-23-2016 at 10:00 am

The case between Apple and the FBI may not be as clear-cut as it could be. If you ask me whether Apple, or any US- or Europe-based supplier of high-tech systems, should help the FBI (or any similar organization) and provide the technical support needed to extract information belonging to a terrorist, my answer would definitely be YES.

I don't know any of the people who were killed in San Bernardino, but I just feel sad about the human beings they were. Another reason reinforcing my position is what happened in Paris, at the Bataclan, a couple of months ago. I am not living in Paris anymore, but I used to go to the Bataclan to dance or listen to a band many times. About 100 people were killed and more than 300 injured within a couple of hours. Just imagine how horrible it was to be systematically massacred with a Kalashnikov... not on a battlefield, but in a place where you had just come to enjoy music with friends.

If you were responsible for the post-attack investigation, don't you think you would do anything you could to find out who was behind it?

Now we have to look more closely at the whole Apple vs FBI story as it unfolds. Here is a summary from BI:

It seems that San Bernardino county officials, or the FBI (or both), made a mistake by resetting the terrorist's iPhone password, giving Apple a "good reason" not to satisfy the FBI's request. The request was that Apple create a kind of back door, allowing the FBI to access the data the terrorist stored on the phone and in iCloud. We can expect $500-per-hour lawyers to enjoy a long fight before anything is decided one way or another. Once again, the mistake made by resetting the password doesn't help.

But the real question is:

Should we grant terrorists the right to store or exchange data with absolute privacy?

Should we consider it "Liberté d'expression" (freedom of expression) when terrorists post videos depicting assassinations?

As far as I am concerned, I deny privacy and freedom of expression to people whose main goal is to kill me, or my friends, or even people I don't know who just want to enjoy life. Probably because I have learned this sentence from the French Revolution: "No freedom for the enemies of freedom!"

But the real reason Apple wants to protect privacy at any cost may simply be linked to the image of the company. I am sure you remember when Apple was the "nice guy" fighting the bad guys: IBM, Intel or Microsoft. If we look at the problem from this angle, Apple's problem is a business-model problem. Protecting this image is a way to protect their business model (we are the nice guys and we protect customer privacy)... at any cost?

From Eric Esteve

More articles from Eric…


SoC power management a study in transition latency

by Don Dingee on 02-22-2016 at 4:00 pm

Apple’s recent bout with ‘Batterygate’ highlighted just how important dynamic power management can be. Our last Sonics update looked at using their NoC to manage power islands; this time, we look at their research progress on architectural measures for power management. Continue reading “SoC power management a study in transition latency”


A Brief History of Open-Silicon

by Tom Simon on 02-22-2016 at 12:00 pm

In 2003, when Open-Silicon was founded, there was a growing need for flexible and innovative ways of getting chip designs manufactured. Semiconductor companies, given the alternatives of COT (customer-owned tooling) or traditional ASIC, were often looking for more flexibility without the huge investment and risk of going COT. Let's look at how Open-Silicon addressed this need and, in doing so, grew a successful business that offers even more today.

Open-Silicon’s Innovative OpenModel for developing and executing fabless semiconductor designs was conceived by the founding management team, who came from Intel and the Synopsys Professional Services Group. With this concept they were able to raise $8.4M from Norwest Venture Partners and Sequoia Capital. This was followed by a B round in 2004 adding InterWest Partners, bringing the total investment to $19.5M.

In 2004 Open-Silicon was selected as a finalist in the Red Herring Top 100 Innovators Award. This was a result of their unique OpenModel business model and their innovative ASIC technology. One of the main advantages of the OpenModel is the level of choice that customers have to optimize the implementation and manufacturing steps to ensure the best return on investment.

In 2005 they officially joined TSMC's Design Center Alliance. Even though they had been working closely with TSMC since their founding, this added deeper links with a leading foundry.

Open-Silicon's business model gained traction in their early years. Just four years in, they booked their 100th design win in 2007; that design taped out less than a year later. In this period they worked on designs ranging from 250/130nm down to the then most advanced node of 45nm. These tape-outs spanned wireless, mobile, consumer, digital entertainment, computing, networking and telecommunications.

In 2008 Open-Silicon was bestowed the honor of being named the GSA's Most Respected Private Company. Previously this award had been given to companies like nVidia, Marvell and Atheros. Then in 2009 Open-Silicon joined nVidia and Cavium as the only companies ever to win this award more than once. The GSA looks at many factors and only nominates companies that its committee of industry experts selects on merit. The final selection is made by the semiconductor community, including semiconductor financial and industry analysts and suppliers. Another big accomplishment in 2008 was achieving ISO 9001 certification.

One of the big advantages of using fabless ASIC is the range of choices available in each step of the design and implementation process. One of the most crucial is IP. So in addition to being able to select IP from almost any source, Open-Silicon customers can decide to use IP that Open-Silicon has developed or curated. Open-Silicon has been aggressive about providing state of the art high performance interface, communications, storage and other types of IP.

Open-Silicon is a founding member of the Interlaken Alliance and has developed and enhanced their own implementation of a controller core. In 2011 they announced significant enhancements to this IP block, adopting fully configurable SerDes logical to physical lane mapping to address the need for higher data rates.

Also in 2011 Open-Silicon signed a strategic agreement with ARM to establish multi year licensing of ARM technology. This included ARM Cortex processors and associated ARM Processor Optimization Packs (POPs), ARM Mali™ Graphics Processing Units (GPUs) and ARM system IP. This created one-stop-shopping where customers can get ARM technology and have it implemented in their products with the added benefit of Open-Silicon’s MAX technologies for optimization of design elements. Later in the same year Open-Silicon established its ARM Center of Excellence to deepen its commitment to developing ARM based products for its customers. Today that continues with Open-Silicon being selected as an implementation partner for the aggressive Cortex-M licensing model that ARM established to enable IoT designs for early stage semiconductor companies with limited resources.

Keeping in line with emphasizing IP, Open-Silicon announced the industry’s first Hybrid Memory Cube (HMC) Controller IP in 2012. This makes it easier for their customers to incorporate the novel advantages of HMC’s higher density, speed and lower power in their projects. They followed up on this IP with a HMC 2.0 memory controller IP in 2014.

Also High Bandwidth Memory (HBM) IP is available from Open-Silicon. They are offering a solution for the complete subsystem, including HBM Controller, PHY and 2.5D interposer IO. This IP is typically used in graphics and networking, and is also suited for high performance computing and networking where lower power and smaller form factors are needed.

Open-Silicon also has the analog IP required for many applications, at the core of which is SerDes. High-performance SerDes is often make-or-break for high-speed designs, and it has become much more complex, making it difficult to implement well without deep design-team experience. To address this need, Open-Silicon established a High-Speed SerDes Technology Center of Excellence in 2014. This expertise can shorten design cycles and ensure the delivery of successful silicon.

Moving to higher-value solutions, Open-Silicon focused on ensuring that their customers can build highly optimized products. One way to tackle this is early virtual prototyping. As part of its collaboration with ARM, in 2014 they began offering virtual prototyping services to their customers. This lets customers evaluate different architectures and design trade-offs early in the development process. There are several dimensions in which to make implementation choices; good choices reduce design effort and risk, and can mean a more competitive final product.

In keeping with their tag line “Your idea. Delivered”, over the last few years Open-Silicon has focused on enabling companies with product ideas but lacking front end design capabilities. For instance, IoT product developers may have an idea and want to focus on creating their software. However, they need an underlying hardware platform. Open-Silicon now offers Spec2Chip to make it possible for companies to focus on their value-add without compromising on the silicon implementation.

Another recent innovation is web access to quotes for projects. Instead of lengthy discussions, potential customers can describe their requirements and even start to make preliminary trade off decisions on-line in order to get a streamlined quote in under 2 business days.

Last year Open-Silicon hit another significant milestone: over 100 million ASICs shipped. These are chips fabricated at the leading four or five foundries, using internal and external IP chosen to fit customer needs. In many cases Open-Silicon has managed the entire production chain from start to finish. Along the way, their customers have been able to view project status using state-of-the-art project management software, providing complete transparency during the project.

Today Open-Silicon has engineering and support centers around the world, with 80% of their staff in engineering roles. They are continuing to invest in developing unique internal IP, tracking leading process nodes, and offering complete and flexible front to back capabilities.

More articles from Tom…