
A Complete Timing Constraints Solution – Creation to Signoff

by Pawan Fangaria on 09-28-2014 at 10:00 pm

With the unprecedented increase in semiconductor design size and complexity, design teams must accommodate multiple design constraints: multiple power domains for low-power design, multiple modes of operation, many running clocks, and third-party IPs, each with its own SDC. As a result, timing closure has become extremely complex and tricky. While false paths consume all your attention during timing closure, some of the real timing issues may be missed, and certain subtle issues such as incorrect exceptions may show up later in the chips or force a re-spin. Timing constraints are by nature incomplete or inconsistent, and they evolve through the design process. A constraint that is valid at block level may become invalid at chip level, thus impacting timing closure and the overall design cycle. That gets further complicated when you have to promote validated constraints from IP to SoC, or push them down from SoC to IP during IP integration, and then sign off at the SoC level taking into account all modes of operation, detailed validation and post-layout repair. So, it's high time we had an automated, comprehensive timing solution and constraints signoff flow to accelerate timing closure and reduce design risk.

Earlier this month I had the opportunity to attend an interesting webinar on the SpyGlass Timing Constraints Signoff Flow, presented by Mark Baker, Director of Product Marketing at Atrenta. For me it was an extra pleasure to hear Mark, whom I know from my Cadence days. What I found was a truly comprehensive flow in which every aspect of timing constraints is taken care of, starting from creation and validation, including exception verification, all the while providing management of constraints through signoff.

An SDC can be created from scratch or added incrementally to an existing set of design constraints. The creation process works at the RTL level by identifying constraints for primary clocks, generated clocks, and primary I/O; clock uncertainty and latency, of course, have to be added by the user. Identification of all clock crossings is automatic, setting false paths (FPs) between asynchronous clock domain crossings (CDCs). Architectural exceptions, i.e. false-path constraints, are generated to avoid false timing violations between exclusive clocks. Adding these exceptions can lead to faster timing closure.
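A constraint set of this kind might look like the following SDC fragment. All names and values here are hypothetical, chosen only to illustrate the categories described above; the tool's actual output will differ:

```tcl
# Primary clock on the clk port: 100 MHz
create_clock -name sys_clk -period 10.0 [get_ports clk]

# Divide-by-2 generated clock at the output of a divider flop
create_generated_clock -name sys_clk_div2 \
    -source [get_ports clk] -divide_by 2 [get_pins u_div/q_reg/Q]

# Uncertainty and latency: supplied by the user, not inferred
set_clock_uncertainty 0.25 [get_clocks sys_clk]
set_clock_latency 1.2 [get_clocks sys_clk]

# Primary I/O constrained relative to the system clock
set_input_delay  2.0 -clock sys_clk [get_ports din]
set_output_delay 2.0 -clock sys_clk [get_ports dout]

# Asynchronous CDC: cut timing in both directions
set_false_path -from [get_clocks sys_clk] -to [get_clocks async_clk]
set_false_path -from [get_clocks async_clk] -to [get_clocks sys_clk]
```

Note that the false paths for an asynchronous crossing must be cut in both directions; a one-way cut is a classic source of surprise violations late in the flow.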

Constraint validation is done at both RTL and netlist level, checking completeness, consistency and correctness within a robust debugging environment. There are over 300 rules covering clocks, I/O delays, structural exceptions and methodology-based rules, with full support of the SDC standard. The solution supports commonly used non-SDC constructs as well. All clock constraints are taken into consideration for consistency of clock intent. Any constant-propagation conflicts, including forward and backward propagation, are flagged.

Formal waveform verification is done to avoid mismatches between design and timing intent, which can give a false sense of timing closure but actually lead to chip failure or a re-spin. Complete clock domain analysis, i.e. relationship reporting between all clocks and generated clocks, is done by extracting clocks, false paths and uncertainties. The setup for CDC, power and exception verification is done automatically, while pointing out any conflict that can lead to incorrect timing or CDC issues.

Timing exception verification is a major step to ensure adequate and correct timing exceptions are applied to the design. Accurate timing exceptions lead to faster timing closure without overdesign, thus enabling better power and area optimization. Asynchronous FPs are detected using the CDC solution, quasi-static FPs through simulation, and synchronous FPs using functional verification. Multi-cycle path (MCP) verification is done using patented formal techniques, providing a fast solution with a high rate of completion. The overall idea is to identify exceptions meaningful to design implementation, accelerating back-end timing closure through industry-standard STA flows. Closure of exception verification is achieved faster by intelligently monitoring any changes to the exceptions throughout implementation, using assertions and incremental verification, and ensuring that every path is either timed or verified for synchronization.
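To make the exception types concrete, here is a hypothetical SDC fragment (all instance names invented for illustration) showing the kind of multi-cycle and quasi-static exceptions such a flow would have to verify:

```tcl
# Multi-cycle path: data between these registers is captured every
# second cycle, so relax setup by one cycle...
set_multicycle_path 2 -setup -from [get_cells u_ctrl/state_reg*] \
    -to [get_cells u_dp/acc_reg*]
# ...and restore the hold check to the launch edge
set_multicycle_path 1 -hold  -from [get_cells u_ctrl/state_reg*] \
    -to [get_cells u_dp/acc_reg*]

# Quasi-static signal: a configuration register written once at
# boot and stable thereafter never needs to be timed
set_false_path -from [get_cells u_cfg/mode_reg*]
```

The paired setup/hold adjustment matters: relaxing setup without restoring the hold requirement is exactly the kind of incorrect exception that formal MCP verification is meant to catch.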

The constraints are managed gracefully throughout the design by detecting and incrementally adding missing constraints at any stage, and by checking their equivalence at every stage under different scenarios, a patented capability in the SpyGlass Timing Constraint Signoff Flow. When two SDCs exist for a single design, the equivalence between them can easily be checked. Similarly, SDCs at block and chip level can be checked for equivalence, as can the SDCs for two different flavours of a design.
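As a simple illustration of what SDC equivalence means (my own example, not taken from the webinar), these two fragments express the same timing intent in different ways, so an equivalence check between them should pass:

```tcl
# Version A: explicit false paths in both directions
set_false_path -from [get_clocks clk_a] -to [get_clocks clk_b]
set_false_path -from [get_clocks clk_b] -to [get_clocks clk_a]

# Version B: the same intent expressed as asynchronous clock groups
set_clock_groups -asynchronous -group [get_clocks clk_a] \
    -group [get_clocks clk_b]
```

Textual comparison would flag these files as different, which is why equivalence must be checked on the constrained timing intent rather than on the SDC text.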

A design can have multiple functional and test modes, and for each mode of operation there is a separate SDC file. To simplify the job and save runtime for implementation tools such as synthesis, P&R and STA, the SDC files of multiple modes can be merged into a single file representing a virtual mode that carries the timing constraints of all the individual modes. To ensure that the merged mode covers all timing aspects of the individual modes (with pessimism in the merged constraints), the SDC equivalence between an individual mode and the merged mode can also be checked.
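For example, a functional clock and a slower scan clock defined on the same port in separate mode SDCs can coexist in a single merged-mode file using SDC's -add option. This is an illustrative sketch with invented names and periods, not the tool's actual merge output:

```tcl
# func.sdc (functional mode): 100 MHz clock on port clk
create_clock -name func_clk -period 10.0 [get_ports clk]

# scan.sdc (test mode): 25 MHz clock on the same port
create_clock -name scan_clk -period 40.0 [get_ports clk]

# merged.sdc (virtual merged mode): both clocks coexist on the port,
# so one STA run covers both modes, with added pessimism
create_clock -name func_clk -period 10.0 [get_ports clk]
create_clock -name scan_clk -period 40.0 -add [get_ports clk]

# The two modes are mutually exclusive, so cut paths between them
set_false_path -from [get_clocks func_clk] -to [get_clocks scan_clk]
set_false_path -from [get_clocks scan_clk] -to [get_clocks func_clk]
```

Without the exclusions at the end, the merged mode would time impossible func-to-scan paths, which is the kind of spurious pessimism the mode-to-merged equivalence check is meant to expose.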

The overall health of timing analysis can be measured by a nice mode-wise coverage summary that points out any aspect of timing that is not covered, such as missing clock constraints, unconstrained registers or ports, and so on. The timing coverage analysis report acts as a good indicator of signoff readiness in all modes.

In conclusion, this flow covers all aspects of timing constraints for a robust, optimized design: it ensures correct, consistent and complete constraints, with all clock relationships properly defined, constraints checked against design intent, timing exceptions verified, constraint equivalence ensured at each stage of the design, modes merged for faster implementation, and an exception coverage report generated for timing signoff.

It’s a nice webinar to attend and learn about timing constraint issues in today’s designs and how they can be addressed using a comprehensive timing constraints signoff solution.

Also read –
Expert Constraint Management Leads to Productivity & Faster Convergence

SpyGlass CDC: A Comprehensive solution for addressing CDC issues

An Approach to Clock Domain Crossing for SoC Designs

Smart Clock Gating for Meaningful Power Saving

More Articles by Pawan Fangaria…..


ARM TrustZone and Zynq

by Paul McLellan on 09-28-2014 at 10:00 am

Security of embedded devices is becoming more and more important. The requirement for good protection increases as devices become more interconnected: wearable medical devices that connect to the cloud, mobile base stations that are no longer up poles but in much less physically secure areas, cars that communicate among themselves. A programmable device is especially vulnerable since not only can the software running on the SoC potentially be compromised, so can the very hardware on which it runs, if the programming bitstream itself is replaced. Your base-station router no longer just processes the packets; it also sends a copy to the Chinese military, the NSA or Google. Pick your bogeyman.

To further compound the problem, many devices are open platforms on which additional software such as apps can be run. Your smartphone probably runs a mixture of things: some you don't care much about, like games; some you probably have concerns about, such as your WhatsApp chat history; and some you certainly care a lot about, like access to your bank. There are compromises involved in security: very high security may be too complex for the average user to install and maintain, and it may be too expensive (in terms of power dissipation or FPGA fabric use). Minimal security may stop the clueless, but it is a waste of time against anyone knowledgeable.

One solution is ARM TrustZone. This is widely used because of the near ubiquity, or at least widespread use, of ARM processors in embedded and other systems. This is a combined hardware/software solution to security that builds up in layers.

The Zynq-7000 AP SoC architecture integrates a dual-core ARM Cortex-A9 along with Xilinx FPGA programmable fabric into a single device built on TSMC's 28nm HPL (low power) process. As in traditional SoCs, the processor-centric approach allows the processor to boot first and then bring up the rest of the device. This approach also allows control and partial reconfiguration of the programmable logic by software running on the processor. In turn, this enables the user to optimize system performance and power management to meet varying operating environments.


The ARM TrustZone architecture makes trusted computing within the embedded world possible by establishing a trusted platform, a hardware architecture that extends the security infrastructure throughout the system design. Instead of protecting all assets in a single dedicated hardware block, the TrustZone architecture runs specific subsections of the system either in a “normal world” or a “secure world.” Such an approach, when combined with software designed to leverage its advantages, enables creation of an end-to-end security solution that includes functional units as well as debug infrastructure.

In the Zynq-7000 AP SoC, a normal world might be defined as a hardware subset consisting of memory regions, caches and specific devices. Non-trusted software can be limited to an environment that prevents access to, or even knowledge of, the additional hardware dedicated to supporting the TrustZone architecture in the secure world. Trusted applications run on a TrustZone-based system that implements a trusted execution environment. On the Zynq SoC, further system-wide security is provided by integrating the TrustZone framework into the processor interconnects and system peripherals.


A key part of ARM's TrustZone approach is that all AXI interfaces contain an additional bit known as the Non-Secure (NS) bit. During a transaction, all masters assign an appropriate value to this bit and all slaves must interpret it to ensure that security separation is not violated; in particular, a non-secure master cannot access a secure slave.

It is beyond the scope of an introductory blog entry to go into all the low-level operating details of how a complex SoC design is configured. But luckily Xilinx has a detailed white paper on the subject, TrustZone Technology Support in Zynq-7000 All Programmable SoCs. You can download it here.


More articles by Paul McLellan…


ARM ♥ Xilinx!

by Daniel Nenni on 09-28-2014 at 7:00 am

The good news is that as a part of SemiWiki we get free media passes to all of the cool conferences. The bad news is that our inboxes get flooded with announcements. ARM TechCon is next week and my delete button is on overtime but it is interesting to see who is active in conferences and who is not. In this case Xilinx is very active and Altera not so much which is surprising since Altera has ARM inside, right?

Also Read: Pigs Fly. Altera Goes with ARM on Intel 14nm

My first FPGA experience was with a start-up called GateField which was acquired by Actel in 2000 then Actel was acquired by Microsemi in 2010. Some of the GateField people are still at Microsemi, others are at QuickLogic and Lattice. For me, the GateField experience of competing against Altera and Xilinx in the trenches was enough for a lifetime, absolutely.

As a Strategic Foundry Relationship Consultant I enjoyed working with Altera down to 20nm. 40nm was a lot of fun since Xilinx and their foundry partner UMC missed a step. Xilinx then moved to TSMC at 28nm and has dominated ever since, just my opinion of course. In my experience both Xilinx and Altera have great technology but Xilinx executes at a higher level and has a much stronger sales and marketing channel. ARM TechCon is a clear example:

Register today for ARM TechCon 2014 to learn how Xilinx® and ARM® are delivering smarter solutions with the ARM processor-based Zynq® All Programmable SoC.

Xilinx In-booth Customer and Ecosystem Partner Demonstrations Will Feature:

  • System on Module presented by NI
  • Integrated Media Processing Platform presented by Cloudium
  • Medical Application Development Platform presented by Topic
  • IC CAM for Personal Identity Recognition by Cornerstone
  • Real-Time Object Recognition and Reconstruction by VanGogh Imaging

Xilinx Product Teardown Presentation

  • October 2nd at 11:30 a.m. in the ARM TechCon Theater: Moderated by Steve Leibson, Editor of Xcell Daily, the product teardown will feature the NI VirtualBench and the Cloudium Integrated Media Processing Platform.

Xilinx Technical Presentation

  • October 2nd at 11:30 a.m. in the Santa Clara Convention Center: Join Carl Cao, Wireless Systems Architect at Xilinx, to learn about “Integrated All Programmable HW and SW Platforms for Wireless Applications”.

To learn more about how Xilinx & ARM are delivering All Programmable Solutions for Smarter Systems, please visit Smarter Systems

I did not get a mailer from Altera, but I did reach out to them and was told that an Altera wireless expert would present on optimizing wireless radio heads using ARM-based SoC FPGAs:

Wireless DPD (Digital Pre-Distortion) Optimization and Profiling in Altera SoC Devices

In addition, Altera is planning some ARM-related news at the show but, for obvious reasons, I was not offered an advance briefing. Ever since Altera and TSMC divorced, Altera and I don't speak much. In fact, I was at TSMC Fab 12 when the Altera/Intel relationship was announced, and it really did feel like a divorce after a very long marriage.

If you are at ARM TechCon on Wednesday or Thursday look me up. It would be a pleasure to meet you!

You can read more about Xilinx on SemiWiki HERE.

More Articles by Daniel Nenni…..

ARM TechCon 2014 delivers a comprehensive, at-the-forefront forum created to ignite the development and optimization of future ARM-based embedded products. By offering three full days of technical tracks, demonstrations, and industry insight from broad and deep levels of industry-leading companies and innovative start-ups, ARM TechCon remains more than a tradeshow; it is a comprehensive learning environment for the entire embedded community, uniting the software and hardware communities.


Mentor at TSMC OIP, 16nm, and 10nm

by Beth Martin on 09-26-2014 at 4:46 pm

On Tuesday, September 30, TSMC hosts another Open Innovation Platform Ecosystem forum at the San Jose Convention Center. Have you registered? This year includes 30 technical sessions from TSMC’s ecosystem partners, divided into three separate tracks. I’ll be hanging out in the EDA track, listening to various takes on 16nm FinFET process design issues and marveling at the prospect of 10nm.

Mentor has three sessions:

  • “Design and Verification of 2.5D/3D IC Architectures Using TSMC 16nm FinFET Technology,” Mentor Graphics
  • “Four Ways an ECO Fill Reference Flow Can Benefit Your Bottom Line,” Mentor Graphics and TSMC
  • “Maintaining Hierarchy and Accuracy for Post-Layout Simulation: Grey/Black Box Flows in LVS->PEX->Simulation,” Oracle and Mentor Graphics.

Your friends at Mentor have made a video about working with TSMC on 10nm, and also have a press release out today about it.

It's interesting to see engineering work turn towards 10nm when I'm not yet used to the idea of 16nm, but such is the never-ending march of technology, right? I look forward to learning more at the forum.


Synopsys Verification Continuum

by Paul McLellan on 09-26-2014 at 4:00 pm

Verification spans a number of different technologies: virtual platforms, RTL simulation, formal techniques, emulation and FPGA prototyping. Going back a few years, most of these technologies came from separate companies, and one effect of this was that moving a design from one verification environment to another required completely different scripts, changes to the RTL, and a lot of time, sometimes measured in months. Getting a large design up and running in emulation or an FPGA prototyping system, in particular, was a major challenge.


Over the last couple of years, Synopsys has assembled a broad portfolio of leading-edge technologies. They have rewritten their static and formal engines, acquired three virtual platform companies, acquired EVE's emulation technology and SpringSoft's Verdi debug environment. But these tools largely still showed their roots as separate product lines, with different scripts and different requirements on inputs. It was still too hard to get a design that was running cleanly in, say, VCS into an emulator. It is important for verification to be able to move up and down the chain of engines easily, so that the best tool for the job can be used as the design proceeds through the development process.

Earlier this week, Synopsys announced their Verification Continuum Platform. This is a major rewrite of the front ends of all of the various verification engines so that they have a common input, common scripts and so on. This gives seamless transitions between engines and a consistent interface for setup, runtime and debug. Earlier in the year, Synopsys announced Verification Compiler which pulled together formal and simulation into one environment. With this week’s announcement, that has now been broadened to pull in virtual platforms, emulation and FPGA prototyping too.


The VCS front-end is used for all compilation, analysis, elaboration, debug preparation, optimization, code-generation, synthesis and mapping. One immediate effect is that compile times for emulation are up to 3 times as fast as before.


There is also unified debug with Verdi across all the technologies from virtualizer, through simulation, formal techniques, Zebu emulation and FPGA prototyping with HAPS.

The result is a "shift left" of the bug discovery process, enabling bugs to be found earlier and software development to start earlier. Obviously this has the potential to accelerate product development schedules significantly and, by making the technology easier to use, increase the use of higher-performance technologies such as emulation, FPGA prototyping and virtual prototyping.


More articles by Paul McLellan…


Dominating FPGA clock domains and CDCs

by Don Dingee on 09-26-2014 at 7:00 am

Multiple clock domains in FPGAs have simplified some aspects of designs, allowing effective partitioning of logic. As FPGA architectures get more flexible in how clock domains, regions, or networks are available, the probability of signals crossing clock domains has gone way up. Continue reading “Dominating FPGA clock domains and CDCs”


Coverage Driven Verification for Analog?

by Pawan Fangaria on 09-26-2014 at 1:00 am

We know there is a big divide between analog and digital design methodologies in their level of automation and in their validation and verification processes, yet the two cannot exist without each other, because any complete system on a chip (SoC) requires them together. Therefore, different methodologies are on the floor to combine analog and digital designs and to simulate and validate them together. In the case of verification, digital design verification is highly automated, based on assertions applied through readily available languages, whereas analog design verification is mostly done manually on an ad-hoc basis. Although considerable effort has gone into automating validation of analog/mixed-signal (AMS) designs through a single testbench, which may utilize UVM methodology, dedicated automation for analog verification planning is still essential to cover today's increased design complexity, the large variation in device characteristics, and specifications that must be met across all process corners, in order to ascertain the coverage and quality of verification.

Mentor Graphics has developed a novel methodology for analog design verification planning which uses Coverage Driven Verification (CDV) in a similar manner to the digital world. It utilizes a requirement-tracking system that links the design specification to CDV and tracks the status of verification through several stages.

Each design specification is linked to one or more items in the test plan and vice versa. All items in the test plan are linked to simulation results. Analog design simulation is primarily Spice-based and uses multiple analyses, such as transient, AC, RF and Monte Carlo, at various PVT corners. The .EXTRACT and .SETSOA constructs of Spice, with boundary conditions, can be used to ascertain whether the item being tested passes, fails, or is within the boundary limits, thus providing a way to decide on the testbench setup and stimuli to test the cover points. The methodology uses UCIS (the Accellera Unified Coverage Interoperability Standard) APIs and database to represent the coverage data in a standard way that is accessible through different tools from different vendors. The verification status, along with coverage data, dependencies and any change impact, can be reported at various levels, such as Executive Summary, Project Status, or at the granularity of cover points, as required for different stakeholders.

Most of the analog design characteristics such as transient and frequency domain characteristics presented in an architecture or specification document can have their corresponding cover points captured in a test plan which can be in the form of an Excel spreadsheet. The requirement tracking and verification planning can be done with existing tools with some enhancements required to accommodate analog design characteristics. Mentor uses Reqtracer for requirement tracking, Eldo for Spice simulation and Questa for viewing, merging, analysis and reporting of the UCIS database.

Mentor's verification team used an OPAMP design as a proof of concept for this verification methodology. The picture above illustrates how cover points are implemented using the EXTRACT and SETSOA constructs in a Spice netlist. In this example, nine different tests covering PVT and parameter sweeps, along with transient, frequency-domain and quiescent analyses, are set up in a regression environment. Simulation is run for each test and corner, results are post-processed using a custom script, and the UCIS API is used to write the data into the UCIS database.

The picture above shows how specifications are extracted from a MAS (microarchitecture specification) document and linked to the test plan, and how tests are linked to simulation results. There can be one-to-one, one-to-many or many-to-one relationships. This provides the actual status of verification against the defined objectives, from which different views of reports representing the overall status of verification can be obtained. Upstream and downstream dependency graphs for each verification objective in the test plan can also be drawn for analysis.

The Questa coverage viewer is used to view and analyze the merged simulation results from the UCIS database against the goals set for each objective in the test plan. This analysis provides real-time information about the overall coverage, along with failing tests (e.g. slew rate in this case) and any coverage holes for which cover points are not implemented (e.g. the quiescent-current specs in this case). The UCIS data can be analyzed in multiple ways, as per individual needs, using the command-line interface. The bottom part of the example above shows failing cover points in red along with their failure counts. By adding user attributes in UCIS, these can be linked to generate debug information. As the design progresses, effort is put into removing the coverage holes, converting failing tests to passing ones, and improving coverage for design closure. A trend-analysis graph of coverage against goal can also be plotted to assess the maturity of the design at regular intervals.

The Questa SIM – UCIS framework enables the CDV methodology for analog designs (with a few enhancements to existing digital verification flows), thus unifying verification methodology for digital and analog designs. Read this white paper, written by Atul Pandey, Guido Clemens and Marius Sida at Mentor Graphics, for more details.

More Articles by Pawan Fangaria…..


Electro-Thermal Simulation of Power Transistors

by Daniel Payne on 09-25-2014 at 4:00 pm

Power transistors are commonly used in applications such as hybrid and electric vehicles, automotive systems, home appliances, LED lighting, TVs, and power and energy. In the old days an engineering team could build their device with power transistors, then after production run some thermal testing to see if they had guessed the proper junction temperature, which in turn affects performance parameters like current drive and Rdson. That build-first, measure-later approach leads to product iterations costing time and money. There is a better way today: simulating the electro-thermal behavior prior to tape-out, while still in the design phase, when you have engineering options to improve thermal performance.

Related: EDA for Power Management ICs at DAC

One of the challenges for IC designs in general is that as you turn the power supply on, current begins to flow in the transistors, and this current heats up the transistors, which in turn changes the electrical performance of the device. Hot spots on an IC will decrease the current flow in transistors in that local region. It is no longer accurate to assume that the entire die is at a constant temperature across all dynamic input conditions.


Temperature in Top-down, cross-cut view

I followed up with Dundar Dumlugol of Magwel by phone to get more details about their approach to modeling heat flow across chip and package for power devices. They have been offering an EDA tool called PTM-ET (Power Transistor Modeler, Electro-Thermal) for the past two years that answers engineering questions like:

  • What is the temperature across my chip?
  • How does heat flow as a function of time?
  • Where should I place thermal sensors in my IC layout?
  • Will my device have thermal runaway?
  • What are the IR drops in my layout?
  • How does changing my package affect temperature?

How Electro-Thermal Simulation Works
The PTM-ET tool first reads in your IC layout as a GDSII or OpenAccess file, along with a technology file provided by the foundry. Using a 3D solver, the simulator calculates the dynamic currents and Joule self-heating in metal and active areas. Active devices are modeled from SPICE results and saved internally as table models for faster simulation. All of the electrical and thermal equations are solved self-consistently using non-linear, iterative techniques.

With this approach you can model heat conduction through metal interconnect, the substrate, lead frame, clips, bond wires, package and PCB.

Related: Ensuring ESD Integrity

Accuracy
With any type of simulation it is natural to ask how accurate the predicted results are compared with measurements. Results presented at the Therminic 2013 conference show good correlation between simulated and measured temperature as a function of time:

These simulated results are within a few percent of measured data, which is accurate enough to make engineering decisions like: changing the IC layout, choosing a different package, making new pad placements, adding contacts, or moving the thermal sensors.

Results
A customer designed a display driver circuit, and the PTM-ET tool simulated it using a mesh of 250K nodes. Performing 0.5ms of transient simulation required just 10 minutes of CPU time, and the simulated results were within 5% of measured values.


Temperature in cross-section Metal and substrate

Another customer design was a power-management IC with 23 heat sources, including the package thermal model. Input for a dynamic electro-thermal (ET) simulation came from SPICE, and the simulation required under 1 hour of CPU time to cover 2ms of time. A static ET simulation took just 10 minutes of CPU time, using about 1M mesh nodes.


Chip-level ET simulation results

Summary
IC engineers designing circuits with power transistors now have a methodology to simulate and analyze electro-thermal behavior prior to tape-out. Such an approach reduces the number of silicon spins required to meet specifications, and confirms that you have selected the right package and IC layout to mitigate thermal issues.


TSMC Delivers First FinFET ARM Based SoC!

by Daniel Nenni on 09-25-2014 at 9:00 am

Right on cue, TSMC announces 16nm FinFET production silicon. I believe this is the original version of 16nm FinFET, versus 16FF+ which is due out in 1H 2015. I will confirm this next week at the TSMC OIP event in San Jose, absolutely. Either way this is excellent news for the fabless semiconductor ecosystem, and I look forward to the first teardown of a TSMC FinFET SoC in comparison to an Intel FinFET SoC. TSMC 20nm compared quite favorably against Intel 14nm in terms of density, and 16FF will do even better.

Let's not forget that The Chairman (TSMC's Dr. Morris Chang) speculated that TSMC would not win a majority FinFET market share in 2015. To me this was a head fake to rally the troops. Morris has done this before on conference calls; he is a very clever man. As I mentioned previously, I have never seen TSMC more energized than during my last Taiwan trip. Hsinchu as a whole was really buzzing with activity, and it was all about FinFETs no matter where I went.

HSINCHU, Taiwan, R.O.C., Sept. 25, 2014 /PRNewswire/ — TSMC (TWSE: 2330, NYSE: TSM) today announced that its collaboration with HiSilicon Technologies Co, Ltd. has successfully produced the foundry segment’s first fully functional ARM-based networking processor with FinFET technology. This milestone is a strong testimonial to deep collaboration between the two companies and TSMC’s commitment to providing industry-leading technology to meet the increasing customer demand for the next generation of high-performance, energy-efficient devices.

For those of you who don’t know, HiSilicon is the ASIC design division of communications giant Huawei. I first encountered HiSilicon in 2008 during an IP licensing negotiation involving SMIC. More recently I visited the new HiSilicon design center in Taiwan. You will be hard pressed to find a leading SoC company without a design center near Hsinchu so they can seamlessly integrate with TSMC. HiSilicon has 100+ people there now and I’m told they are still hiring.

“Our FinFET R&D goes back over a decade and we are pleased to see the tremendous efforts resulted in this achievement,” said TSMC President and Co-CEO, Dr. Mark Liu. “We are confident in our abilities to maximize the technology’s capabilities and bring results that match our long track record of foundry leadership in advanced technology nodes.”

The other interesting thing about this design is that it uses 3D IC packaging combining 28nm mixed-signal and 16nm logic chips. TSMC calls this CoWoS (Chip-on-Wafer-on-Substrate), which allows you to integrate multiple chips into a single device. We have written about CoWoS many times before, and this is an excellent example. To save time and minimize cost you can integrate 28nm blocks with leading-edge CPUs for your SoC.

“We are delighted to see TSMC’s FinFET technology and CoWoS® solution successfully bringing our innovative designs to working silicon,” said HiSilicon President Teresa He. “This industry’s first 32-core ARM Cortex-A57 processor we developed for next-generation wireless communications and routers is based on the ARMv8 architecture with processing speeds of up to 2.6GHz. This networking processor’s performance increases threefold compared with its previous generation. Such a highly competitive product can support virtualization, SDN and NFV applications for next-generation base stations, routers and other networking equipment, and meet our time-to-market goals.”

Congratulations to TSMC, HiSilicon, and the entire fabless semiconductor ecosystem for this incredible achievement. And for those who predicted that fabless FinFET chips would “Happen in 2016 at the earliest or never at all”… There are no words left for you.

Also Read: Intel’s 35% Density Advantage Claim Explored

More Articles by Daniel Nenni…..