
Photonics at DAC – Integrated Electronic/Photonic Design Flow to be Presented at Cadence Theater

by Mitch Heins on 06-09-2017 at 12:00 pm


I recently wrote an article on SemiWiki about the integrated Electronic/Photonic Design Automation (EPDA) flow being developed by Cadence Design Systems, Lumerical Solutions and PhoeniX Software, and how that flow is now expanding to the system level through SiP (system-in-package) techniques.

Until recently, demos were all done using a theoretical PDK, but that changed last week (May 24th, 2017) when Cadence, Lumerical and PhoeniX presented a demonstration of the EPDA flow using a real foundry PDK. The PDK was from AIM Photonics and the demonstration was given at the AIM Proposers Meeting in Rochester, NY. This is a key milestone for Cadence, Lumerical and PhoeniX Software, as it is the first public demo of the EPDA tool flow with a real foundry PDK.

There are other production PDKs also in the works for the flow; however, it's too early to drop names just yet. Suffice it to say that momentum continues to grow. Cadence's partnership with Lumerical for circuit-level photonic simulation and with PhoeniX Software for photonic physical design gives it a significant jump start toward bringing PDK support online for the full EPDA flow, as both Lumerical and PhoeniX Software have extensive PDK support from existing photonics foundries. With only a modest amount of effort, these existing PDKs are now being synced up and used to populate PDKs for the entire EPDA flow.


If you haven’t seen the EPDA flow yet, it will be presented at this year’s Design Automation Conference in a presentation entitled, “Capture the Light. An Integrated Photonics Design Solution from Cadence, Lumerical and PhoeniX Software”. The presentation will be given in the DAC Cadence Theater at 10:00 am on Tuesday, June 20th.

Momentum for the new EPDA flow continues to grow as the three companies will also be engaging with more engineers at a five-day class entitled ‘Fundamentals of Integrated Photonics Principles, Practice and Applications’. This class is being put on by the AIM Photonics Academy and will be taking place the last week of July at the MIT campus in Cambridge, MA.

There are also multiple customer engagements underway for the EPDA flow. Again, it’s too early yet to release those customers’ names. It is, however, these same customers that are now pushing the trio to work on the advanced system-level flow that was alluded to in my last SemiWiki article (see link below).

As part of this effort, Cadence, Lumerical and PhoeniX Software are also planning to host a second photonics summit in the early September time frame. Like the 2016 photonic summit, this will be a two-day event hosted at the Cadence campus in San Jose. The first day will focus on technical presentations discussing challenges and progress towards implementing integrated photonics systems. The second day, like last year, will again be a hands-on session that will highlight progress made towards extending the existing EPDA flow for integrating the full system (electronics, lasers, and photonics) into a common package. Watch for more details on how to register for this summit in the upcoming weeks.

It’s still early days for integrated photonics but capabilities are rapidly being put into place. If it’s time for you to come up to speed on integrated photonics I would encourage you to attend one or more of these upcoming opportunities to learn.
See Also:


Samsung Details Foundry Roadmap

by Scotten Jones on 06-09-2017 at 10:00 am

Samsung recently held a meeting where they laid out a detailed roadmap for their foundry business. On Tuesday June 1st, Daniel Nenni and I interviewed Kelvin Low, senior director of foundry marketing and business development, to discuss the details of Samsung’s plans.
Continue reading “Samsung Details Foundry Roadmap”


Simplifying Requirements Tracing

by Bernard Murphy on 06-09-2017 at 7:00 am

Requirements traceability is a necessary part of any top-down system specification and design when safety or criticality expectations depend on tightly-defined requirements for subsystems. Traceability in this context means being able to trace from initial documented requirements down through specification and datasheet documents to the design implementation and to the testplan. Standards such as ISO 26262, DO-178C and IEC 61508 demand that critical requirements be verified by demonstrating traceability through these documents and design materials.


This is not so easy. The path to be traced contains in part documents requiring human interpretation, validation and cross-checking and in part design data which lends itself to automating interpretation, validation and cross-checking. The human-dependent part of this tracing is a significant contributor to the cost overhead and incompleteness of requirements tracing efforts. Which raises the obvious question – isn’t there a better way? You can’t get humans out of the loop completely and, for that reason, you can’t get documentation out of the loop completely. But can dependence on human review and verification be reduced in some meaningful way?

Getting there is obviously more difficult if there aren’t machine-readable links between specification, design and documentation, a state of affairs that is still common today in many design shops. But it is possible to have quite good links between these components in SoC/subsystem design given an integrated methodology. Magillem calls this ISDD – integrated specification design and documentation. Unfortunately you can’t get there by just adding a tool to whatever unstructured spec/design/doc flow you already have. You must switch to a more structured SoC/subsystem design process incorporating links to specification and documentation.

Which in this world means IP-XACT. I don’t believe anyone will contradict me when I say that Magillem has gone further than any other EDA vendor in delivering commercial products around IP-XACT. I competed with them for several years so I know a little about this area and what they can do. Moving to IP-XACT may require a big switch in methodology which can seem daunting, but I understand tools have evolved significantly to make this much simpler. So let’s assume you decide to make that transition – what can Magillem do for you in requirements traceability?

The mechanism they provide is called Magillem Link Tracer (MLT). This connects interdependencies in specifications, documents and the design through dynamic typed links (I assume connecting to vendor extensions in the IP-XACT schema). Their objective is not simply to push data out to documentation from the IP-XACT database but instead to provide a check and synchronization mechanism between all these views, such that a change in one can be followed through to dependent views, in each of which you can choose to accept or reject a change.
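To make the idea of typed links and change propagation concrete, here is a toy sketch; the class and item names are invented for illustration and do not reflect Magillem's actual implementation or its IP-XACT schema:

```python
from collections import defaultdict

# Toy model of typed links between requirements, specs, docs and design
# objects: changing a source item flags every dependent item for review.
# Purely illustrative; all names here are invented.

class LinkTracer:
    def __init__(self):
        self.links = defaultdict(list)   # item -> [(link_type, dependent)]
        self.dirty = set()               # items awaiting accept/reject

    def add_link(self, src, link_type, dst):
        self.links[src].append((link_type, dst))

    def change(self, item):
        """Mark item changed and flag everything reachable through links."""
        stack = [item]
        while stack:
            cur = stack.pop()
            for _, dep in self.links[cur]:
                if dep not in self.dirty:
                    self.dirty.add(dep)
                    stack.append(dep)

    def accept(self, item):
        # User reviewed the implied change in this view and accepted it
        self.dirty.discard(item)

t = LinkTracer()
t.add_link("REQ-42", "implements", "spec:regmap")
t.add_link("spec:regmap", "documents", "doc:datasheet")
t.change("REQ-42")   # flags both spec:regmap and doc:datasheet for review
```

The key property, as in the tool described above, is that nothing is silently overwritten: dependents are only flagged, and each view is updated when the user accepts the change.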


The tool displays links between dependent documents and design components. Note that here links aren’t just back to the design database; there can be links between documents also, allowing for smart reuse of content between different document views.


When you change a requirement, impacted resources are flagged and you can drill down to accept/reject changes. For specifications and documentation this can update appropriate fields, under your control since you want to see where changes are implied before you accept them. Of course a requirement change won’t make design changes automatically – there you will need to go into Magillem design tools to make the appropriate changes to match the new requirements. Changes don’t have to start from the requirements; you might choose to make a change in the IP-XACT design representation, in a register or address map for example, then use the same dependency computation to see where and how that will ripple up.

This kind of analysis can be a valuable contribution to supporting automated requirements traceability. Of course the scope will be bounded to those parameters understood within IP-XACT and the Magillem tools. You will still need to manage requirements tracing for AC and DC characteristics, among others, through other means. And unlinked text in documents must be checked manually. But disconnects in items you will link (bitfield maps for example) can often be where disconnects between requirements, spec, design and test are most likely to happen. Automating the management and traceability of this data should be a big step forward in traceability support.

As a sidebar, some readers may note that there are other tools to connect individual parts of the design process to spreadsheet specifications and documentation. Indeed there are. But a meaningful contribution to requirements traceability needs more than a bundle of disconnected mechanisms, each supporting a limited set of individual requirements. The level of contribution needed for safety standards certification is better served by coverage of a significant subset of requirements through an auditable / verifiable standard representation. That’s what Magillem aims to offer through their MLT solution linked to their extensive range of IP-XACT-based design tools. You can read more HERE.


System Implementation Connectivity Verification and Analysis, Including Advanced Package Designs

by Tom Dillinger on 06-08-2017 at 4:00 pm

Regular SemiWiki readers are aware of the rapid emergence of various (multi-die) advanced package technologies, such as FOWLP (e.g., Amkor’s SWIFT, TSMC’s InFO); 2D die placement on a rigid substrate (e.g., TSMC’s CoWoS); and 2.5D “stacked die” with vertical vias (e.g., any of the High Bandwidth Memory, or HBM, implementations).

Typically, one or more SoCs are under development concurrently with the advanced package design. A vexing issue often arises where the design platforms differ for chip and package, with different representations of system connectivity and circuit library models. As a result, there is no direct method for the SoC designer to build a correct connectivity model and simulate circuit paths between die (which potentially use different process PDKs). Circuit validation throughout the multi-die package involves (error-prone) manual netlist creation, pulling data from the package environment into the circuit designer’s cockpit.

Cadence has recently addressed this flow deficiency, providing the necessary bridges between their leading Virtuoso and Allegro tool platforms.

I had the opportunity to chat with John Park, Product Management Director for IC Packaging and Cross-Platform Solutions at Cadence, about their new product, Virtuoso System Design Platform. “We have enhanced and extended Virtuoso. A product design team is able to incorporate a full-system hierarchical schematic model resident in Virtuoso SDP. We have developed a bi-directional bridge between the IC, package, and PCB design environments. There are two corresponding flows enabled by the Virtuoso SDP model – a new implementation flow and an analysis flow,” John described.

The figure below provides an overall Cadence product architecture view, highlighting how Virtuoso System Design Platform provides the SoC designer with access to system connectivity and board/package parasitic data.

Virtuoso SDP Implementation Flow
As illustrated in the figure below, the implementation flow is invoked to automatically generate the die model data from Virtuoso SDP for use in Allegro SiP for package design – e.g., the schematic symbol, die physical footprint. In this example, three active SoC designs are underway – all the package (passive) components are represented in SDP, as well. Virtuoso SDP eliminates the issues associated with system connectivity data residing in different platforms and formats.

The SDP implementation flow generates the model exchange data between the Virtuoso and Allegro platforms, potentially on different operating systems – the “generate AllegroDB” operation imports the library and connectivity model into Allegro. The Virtuoso SDP cockpit is also used to maintain the techfile data used by Allegro – e.g., layer stackups, specific net implementation constraints. The SDP implementation flow provides the connectivity to verify the package/board LVS in Allegro (and for the connectivity use when building simulation models for analysis, discussed next).

Signal Integrity and Power Integrity Analysis
The analysis flow developed with the Virtuoso SDP offering provides several key validation features. The Sigrity family of signal integrity and power integrity tools is integrated into Virtuoso SDP.

Circuit Simulation
Additionally, SoC designers can use Sigrity to extract/connect complex multi-port Touchstone (S-parameter) model parasitics into a detailed die-package-die simulation in Cadence ADE (Spectre) invoked from the SDP cockpit GUI. The figure below illustrates the extraction of a model by Sigrity (a selected set of nets to generate a multi-port model), followed by the creation of a Virtuoso SDP instance that can then be used in die-package-die simulation.
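For readers unfamiliar with the Touchstone format mentioned above, here is a minimal reader sketch. It handles only the common RI (real/imaginary) and MA (magnitude/angle) data formats and assumes the standard option-line token order; a production flow would use a full parser such as scikit-rf rather than this toy:

```python
import cmath

# Minimal Touchstone (.sNp) reader sketch: parses the option line
# ("# <freq unit> <parameter> <format> R <impedance>") and data rows
# into frequencies and lists of complex S-parameters per row.

def read_touchstone(lines):
    fmt, freqs, sparams = "MA", [], []
    for line in lines:
        line = line.split("!")[0].strip()      # strip '!' comments
        if not line:
            continue
        if line.startswith("#"):
            fmt = line.upper().split()[3]      # e.g. "RI" or "MA"
            continue
        vals = [float(v) for v in line.split()]
        freqs.append(vals[0])                  # first column is frequency
        pairs = zip(vals[1::2], vals[2::2])    # remaining columns come in pairs
        if fmt == "RI":
            row = [complex(a, b) for a, b in pairs]
        else:                                  # MA: magnitude / angle in degrees
            row = [cmath.rect(a, b * cmath.pi / 180) for a, b in pairs]
        sparams.append(row)
    return freqs, sparams
```

A single row of a 2-port (.s2p) file in RI format carries frequency plus four complex values (S11, S21, S12, S22), which this sketch returns as one four-element row.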

Cadence has extended their Virtuoso platform to provide a unified cockpit for design teams to capture a system connectivity model. Additionally, implementation flow features are provided to generate the library, netlist, and design constraint data for advanced package and PCB design, to enable LVS connectivity verification in Allegro. The Sigrity model extraction features are integrated as well, for designers to run circuit analysis simulations. These features eliminate the tedious and error-prone tasks of constructing end-to-end circuit path models from disparate environments.

For additional information on the Cadence Virtuoso System Design Platform, please follow this link.

-chipguy


CCIX Protocol Pushes PCI Express 4.0 up to 25G

by Eric Esteve on 06-08-2017 at 12:00 pm

The CCIX consortium has developed the Cache Coherent Interconnect for Accelerators (CCIX) protocol. The goal is to support cache coherency, allowing faster and more efficient sharing of memory between processors and accelerators, while using PCIe 4.0 as the transport layer. Along with Ethernet, PCI Express is certainly the most popular protocol in existing server ecosystems, in-memory database processing and networking, which pushed the consortium to select PCIe 4.0 as the transport layer for CCIX.

But PCIe 4.0 is defined by the PCI-SIG to run at up to 16Gbps only, so the CCIX consortium has defined extended speed modes up to 25Gbps (2.5Gbps, 8Gbps, 16Gbps, 25Gbps). The goal is to allow multiple processor architectures with different instruction sets to seamlessly share data in a cache-coherent manner over existing interconnects, boosted up to 25Gbps to fulfill the bandwidth needs of tomorrow’s applications, like big data analytics, search, machine learning, network functions virtualization (NFV), video analytics, wireless 4G/5G, and more.

How do you implement cache coherency in an existing protocol like PCIe? By inserting two new packet-based transaction layers, the CCIX Protocol Layer and the CCIX Link Layer (in green in the above picture). These two layers process a set of commands/responses implementing the coherency protocol (think MESI: Modified, Exclusive, Shared, Invalid, and the like). Notably, these layers are user defined, with Synopsys providing the PCIe 4.0 controller able to support up to 16 lanes running at 25Gbps. The PCI Express set of commands/responses then carries the coherency protocol commands/responses, acting as a transport layer.
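To make the MESI reference concrete, here is a toy state machine for a single cache line. This is purely illustrative of MESI-style coherency states, not the actual CCIX protocol, whose command set is defined by the consortium's specification:

```python
# Toy MESI cache-line state machine, illustrating the kind of coherency
# transitions a CCIX-style protocol layer must communicate between agents.
# Illustrative sketch only, not the CCIX command set.

MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class CacheLine:
    def __init__(self):
        self.state = INVALID

    def local_read(self, other_sharers):
        if self.state == INVALID:
            # Read miss: fetch the line; Exclusive if no other agent holds it
            self.state = SHARED if other_sharers else EXCLUSIVE
        return self.state

    def local_write(self):
        # Writing requires ownership: remote copies are invalidated first
        self.state = MODIFIED
        return self.state

    def remote_read(self):
        # Another agent reads our line: demote to Shared (write back if Modified)
        if self.state in (MODIFIED, EXCLUSIVE):
            self.state = SHARED
        return self.state

    def remote_write(self):
        # Another agent writes the line: our copy becomes stale
        self.state = INVALID
        return self.state

line = CacheLine()
line.local_read(other_sharers=False)   # I -> E
line.local_write()                     # E -> M
line.remote_read()                     # M -> S, with writeback
```

Each of these transitions corresponds to a request/response exchange between agents, which is exactly the traffic the CCIX Protocol and Link Layers are inserted to carry over PCIe.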

The internal SoC logic is expected to implement its portion of the coherency, so the coherency protocol can be tightly tied to the CPU, offering opportunities for innovation and differentiation. Synopsys considers that their customers are likely to separate the data path for CCIX traffic from “normal” PCIe traffic; the PCI Express protocol offers Virtual Channels (VC), which can be used by CCIX.

The PHY associated with the CCIX protocol will have to support the classical PCIe 4.0 rates up to 16GT/s (2.5GT/s, 5GT/s, 8GT/s, 16GT/s) and also Extended Speed Modes (ESM), allowing Extended Data Rate (EDR) support: ESM Data Rate0, defined for 8.0GT/s or 16.0GT/s, and ESM Data Rate1, defined for 20.0GT/s or 25.0GT/s.
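To put these rates in perspective, here is a rough per-direction bandwidth estimate for an x16 link. It assumes PCIe 4.0's 128b/130b line coding also applies at the ESM rates, which is an assumption made for illustration, and it ignores packet/protocol overhead:

```python
# Back-of-the-envelope per-direction link bandwidth: one bit per transfer
# per lane, scaled by 128b/130b coding efficiency, converted to GB/s.
def link_bandwidth_GBps(gt_per_s, lanes, coding=128 / 130):
    return gt_per_s * coding * lanes / 8

for rate in (16.0, 20.0, 25.0):     # PCIe 4.0 rate plus the two ESM Data Rate1 options
    bw = link_bandwidth_GBps(rate, lanes=16)
    print(f"x16 @ {rate} GT/s -> {bw:.1f} GB/s per direction")
```

Going from 16GT/s to 25GT/s on an x16 link raises the raw per-direction bandwidth from roughly 31.5 GB/s to roughly 49 GB/s, which is the motivation for the extended speed modes.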

ESM bring-up involves four operations:
1. PCIe-compliant phase: the Physical Layer is fully compliant with the PCIe spec
2. Software discovery: system software probes configuration space to find the CCIX transport DVSEC capability
3. Calibration: components optionally use this state to calibrate PHY logic for the upcoming ESM data rate
4. Speed change: components execute Speed Change & Equalization according to the same rules as the PCIe specification

The CCIX controller proposed by Synopsys inherits all features of the PCIe controller, supporting all transfer speeds from 2.5G to 16G and ESM up to 25G. The digital controller is highly configurable, supporting CCIX r2.0, PCIe 4.0 and Single Root I/O Virtualization (SR-IOV), and is backward compatible with PCIe 3.1, 2.1 and 1.1. The controller supports End Point (EP), Root Port (RP), Dual Mode (EP and RP) and Switch configurations, with x1 to x16 lanes.

On the application side, the customer can select a Native or AMBA interface, plus dedicated CCIX Transmit and CCIX Receive application interfaces. The interface between the controller and the PHY is PIPE 4.4.1 compliant, with CCIX extensions for ESM-capable PHYs. To support 25G, Synopsys proposes its multi-protocol PHY IP in 16nm and 7nm FinFET, compliant with Ethernet, PCI Express, SATA and the new CCIX, and supporting chip-to-chip, port-side and backplane configurations.

Because the CCIX/PCIe 4.0 solution targets key applications in the high-end storage, data server and networking segments, reliability is extremely important. The IP solution offers various features to deliver high reliability, availability and serviceability (RAS), increasing data protection, system availability and diagnosis, including memory ECC, error detection and statistics.

I have been familiar with the PCI Express protocol since 2005, when I was marketing director for an IP vendor selling a PCIe controller (at that time PCIe 1.0 at 2.5Gbps), and Synopsys was already the leader in the PCIe IP segment. Twelve years later, Synopsys claims 1,500 PCIe IP design-ins!

If we restrict the count to the PCIe 4.0 specification, Synopsys announces 30 design-ins across various applications: 20 in enterprise (cloud computing/networking/server), 3 in digital home/digital office, 5 in storage and 2 in automotive. Nobody would have forecast PCIe’s penetration into automotive 15 years ago, but it shouldn’t be surprising that, after mobile (with Mobile Express) and storage (NVM Express), the protocol has been selected to support the cache coherent interconnect for accelerators specification.

By Eric Esteve from IPnest

More about DesignWare CCIX: DesignWare CCIX IP Solutions


Active Voice

by Bernard Murphy on 06-08-2017 at 7:00 am

Voice activated control, search, entertainment and other capabilities are building momentum rapidly. This seems inevitable – short of Elon Musk’s direct brain links, the fastest path to communicate intent to a machine is through methods natural to us humans: speech and gestures. And since for most of us speech is a richer channel, there has been significant progress on voice recognition, witness Amazon Echo, Google Home and voice-based commands in your car.


Of course there’s a lot of significant technology behind that capability, and CEVA is an important player in making that happen. As one example, when you say “OK Google” into your Galaxy S7, a CEVA DSP core inside a DSPG chip provides the platform to listen for and process that command. According to CEVA, the reason Samsung chose that solution over an implementation in the Snapdragon 820’s ultra-low-power island was that the CEVA/DSPG implementation is even lower power, allowing for always-on listening even when the screen is off.

Always-on listening is one of several important factors in making voice-control ubiquitous. CEVA recently hosted a webinar, jointly with Alango Technologies, to provide insight into their solutions in this important space. In (acoustic) near-field applications such as in a smartphone, or even in smart microphones, ultra-low power is obviously important. CEVA promotes the CEVA-TL410 for ultra-low power in always-on near-field applications, such as the voice-sensing application used in the Galaxy S7.


The primary focus of this webinar was on high-performance applications such as smart speakers and assistants. Here, far-field performance becomes very important, contending with long distances (10 meters in one example), ambient noise, echoes, reverberation and potentially multiple voices. For these applications CEVA discussed the CEVA-X2. According to Eran Belaish (Product Marketing for audio, voice and sensing), building the speaker part of such a device is relatively straightforward. Complexity comes in building the smart part, where there is a need for sophisticated processing for acoustic echo cancellation and noise reduction, and for beamforming from an array of microphones to support intelligent voice processing.


Eran broke down the structure of the audio and sensing part of this solution first into voice activity detection (VAD), like the ultra-low power solution mentioned above. This is followed by PDM to PCM conversion, also in hardware, and then the real smarts for far-field support in a range of audio/voice functions running on the DSP. With a CEVA-based VAD, you still start with ultra-low standby power, which you’ll see later is an important advantage.


The company has an impressive slate of ecosystem partners to provide this functionality (together with their own software of course). Alango presented their software solution for acoustic echo cancellation, beamforming and noise reduction, in their voice enhancement package (VEP) running on the CEVA-X2 platform. All far-field solutions today use multiple microphones for 360° coverage, from as few as two to as many as eight (and more can be supported), so this is where high-quality voice processing must start. VEP manages echo cancellation for each microphone, then beamforms to produce as many beams as required from the set of microphones (perhaps eight beams from four microphones) and then optionally performs noise suppression on each beam. These beams are then passed on to the automatic speech recognition (ASR) or keyword recognition (KWR) software.
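As a rough illustration of the beamforming step (not Alango's proprietary VEP algorithm), a minimal delay-and-sum beamformer for a linear microphone array can be sketched as follows; the array geometry and sample rate are invented for the example:

```python
import numpy as np

# Minimal delay-and-sum beamformer sketch: delay each microphone channel
# so a plane wave from the chosen direction lines up, then average.
# Illustrative only; production packages add echo cancellation, adaptive
# noise suppression, fractional-sample delays, and much more.

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mics, positions, angle_deg, fs):
    """mics: (n_mics, n_samples) array; positions: mic x-coords in meters;
    angle_deg: arrival angle (90 = broadside); fs: sample rate in Hz."""
    angle = np.deg2rad(angle_deg)
    # Per-mic arrival-time offsets for a plane wave from angle_deg
    delays = positions * np.cos(angle) / SPEED_OF_SOUND
    delays -= delays.min()                    # make all delays non-negative
    out = np.zeros(mics.shape[1])
    for ch, d in zip(mics, delays):
        shift = int(round(d * fs))            # nearest integer-sample delay
        if shift:
            out[shift:] += ch[:-shift]
        else:
            out += ch
    return out / len(mics)                    # average over microphones
```

Signals arriving from the steered direction add coherently while off-axis sound partially cancels, which is why accuracy improves as microphones are added, as the Alango results below show.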

Alango presented impressive results of experiments they ran to show improvements in voice trigger recognition rates (as detected by Sensory voice trigger technology for a trigger like “OK Google”) at varying distances and in the presence of noise as the number of microphones increased. Clearly adding more microphones, together with VEP, greatly improves detection accuracy at distance in noisy environments. That’s why the latest revs of Amazon Echo have 7 microphones.

But there’s a problem for existing implementations. Eran talked in the Q&A about the Amazon assistants. Many of these devices are wired – they must connect to a power outlet. This supports always-listening mode but isn’t friendly to portability. Amazon introduced the Tap to offer portability, but portable means battery powered, requiring low power when standing by, which is why you must tap the device to turn it on before it will start listening. Still, the battery would last a few months in this usage. But tapping isn’t very convenient either, so Amazon released a software update which eliminated the need for a tap – the device was always listening. Unfortunately, battery life dropped to 8 hours!

DSPG (whose ultra-low power solution is based on CEVA, see above) demonstrated, together with microphone partner Vesper, that they could replace the tap detector with the always-on KWR solution described above, running of course off the same device battery. Battery life shot back up to 3 months. This is impressive; in effect, always-on KWR using this technology consumes negligible power compared to power consumption during active use.

There’s a lot more you can learn about in the webinar. CEVA have a demo unit, and there was discussion on voice isolation (differentiating between different speakers speaking at the same time but at different locations), voice biometrics / voice-printing, and many other topics requiring more AI, natural language recognition, and perhaps more sensor fusion to combine vision recognition with voice/speech recognition to refine inferences. Eran noted that advances here are being worked on, at CEVA and elsewhere, but aren’t commercially available today. Still, all of this points very much to Eran’s opening position about the future of voice in electronics – it’s very bright!

You can watch the webinar HERE, learn more on CEVA’s voice/speech solutions HERE and more on their sensor fusion capabilities HERE.


EDA Powered by Machine Learning panel, 1-on-1 demos, and more!

by Daniel Nenni on 06-07-2017 at 12:00 pm

DAC is upon us again! The Design Automation Conference holds special meaning to me as it was the first technical conference I attended as a semiconductor professional, or professional anything for that matter. That was 33 years ago and I have not missed one since. This year my wife and I both will be walking the DAC floor and it would be a pleasure to meet you so be sure and say hi if you see us.


My good friends at Solido will again be giving out SemiWiki.com pens so if you were to catch me in a booth that would be the first place to look. Solido has a lot going on at DAC so be sure and get them on your schedule:

Join Solido at the 54th Design Automation Conference (DAC) (Booth #1113) in Austin, TX at the Austin Convention Center from June 19-21, 2017. DAC is the premier conference for the design and automation of electronic systems, and Solido will again be offering a panel discussion, networking opportunities, and 1-on-1 product demos at this year’s event.

Be sure to register to attend Solido’s panel discussion, “EDA Powered by Machine Learning” on Monday, June 19, 10:30am, Room 10AB, Austin Convention Center, Austin, TX.

Machine Learning is leaving its mark in a variety of fields, even EDA. This panel will focus on opportunities and examples of how this disruptive technology is addressing various challenges in semiconductor design. Discover how machine learning technologies are already providing disruptive runtime, resource, and productivity benefits in variation-aware design and characterization, and much more, with panelists Ting Ku (NVIDIA), Sorin Dobre (Qualcomm), Eric Hall (Broadcom) and Jeff Dyck (Solido), moderated by Amit Gupta (Solido).

Registration for the “EDA Powered by Machine Learning” panel can be found here: https://www.solidodesign.com/dac-2017-panel-registration/

At DAC, Solido will also be hosting 1-on-1 demonstrations showcasing Variation Designer and the newly-launched ML Characterization Suite from Monday June 19 through Wednesday June 21. Variation Designer is the world’s leading technology in variation-aware design for standard cells, memory, and analog/RF, providing full design coverage in orders-of-magnitude fewer simulations, but with the accuracy of brute-force techniques. Solido’s new ML Characterization Suite uses machine learning technologies to accelerate characterization of standard cells, memory, and I/O, reducing library characterization time without compromising accuracy.

Available demos include Variation Designer for Memory, Analog/RF, or Standard Cell Design and ML Characterization Suite Predictor or Statistical Characterizer. Registration for a 1-on-1 Solido Demo can be found here: https://www.solidodesign.com/dac-registration/

After Monday morning’s panel and exploring everything DAC has to offer, set aside time to join Solido at their first DAC Rooftop Party at Terrace59 @ Speakeasy (412 Congress Ave D. Austin, TX) on Monday, June 19, 7pm. Appetizers and host bar provided. Space is limited so make sure to RSVP here: http://www.solidodesign.com/dac-2017-rooftop-party/


Webinar: How RTL Design Restructuring Helps Meet PPA

by Bernard Murphy on 06-07-2017 at 7:00 am

To paraphrase an Austen line, it is a truth universally acknowledged that implementation, power intent and design hierarchy don’t always align very well. Hierarchy is an artifact of legacy structure, reuse and division of labor, perhaps well-structured piecewise for other designs but not necessarily for the design you now face, which has different power objectives and different physical constraints. Power and implementation want to be at least partly flat, which doesn’t blend well with a rigid hierarchy.


REGISTER NOW for this webinar on Tuesday, Jun 13, 2017 10:00 AM – 11:00 AM PDT

You can see this in several examples. It often makes sense to merge common power islands to optimize power switching and PG routing, but the RTL hierarchy gets in the way. You could manually restructure the hierarchy, but that can be a lot of work, not just in making the changes but also in verifying you didn’t break anything. As another example, a classic trick in P&R is to run feedthrus through a block to optimize timing for long runs. This could be handled nicely purely within physical design before complex power strategies became common. Now if the blocks involved sit in different power domains, these changes must also be reflected in netlists for power verification. Or think about a timing critical I/O interface in a legacy design, now repurposed to a derivative. That interface perhaps sat deep in a hierarchy in the original design but must be moved to a different hierarchy to better suit floorplanning objectives in the derivative. But all connections to the rest of the logic must be preserved.

In this webinar, DeFacto will present their solution for RTL design restructuring, within their STAR platform, to automate this complex task. This appears easy enough that you might well consider restructuring as a new aid to further optimize power management and area in your design.

REGISTER NOW


AI Being Used from Probing to Simulation

by Daniel Payne on 06-06-2017 at 12:00 pm

The 54th annual DAC event is fast approaching, so I hope to see many of you in Austin on June 18-21. The phrases Machine Learning and AI are growing in all areas of software, so I’m glad to see them appearing in more EDA tool offerings over the past year or so. One company that I plan to visit at DAC is Platform Design Automation because they offer both hardware and software tools to engineers who need to characterize silicon and then create device models, PDKs (Process Design Kits) and FDKs (Foundry Design Kits). Here’s an overview of what to see from Platform DA at booth 1929:

  • A portable die prober for small dies
  • A fast semiconductor parameter (IV/CV) analyzer
  • The NC300 series 1/f noise characterization system
  • Device modeling that is AI-driven
  • An automatic PDK QA and signoff tool

Related blog – Noise, The Need for Speed and Machine Learning


Two new things that you will see at DAC include:

  • Advanced Semiconductor Education Kit
  • New Generation of Low Frequency Noise Modules

Related blog – Something New for Semiconductor Parametric Testing

The specifications for the low frequency noise modules look impressive with a 200V bias and 1A current range.

PDA has their headquarters in Beijing and branch offices in both Shanghai and Hsinchu, so if you’re from North America then it’s a much shorter flight to Austin, Texas to speak with these folks in person to better understand how they can help you in the process-design integration area.

Related blog – SPICE Model Generation using Machine Learning

If you need some services to quickly understand how to use the test instruments, device modeling, FDK and PDK as they apply to your specific manufacturing node, then set up a DAC meeting and start the relationship. The benefits to your company are higher design quality, improved IC product yields and better IC reliability. Engineers at PDA have many years of experience in this specialized realm and have already worked with many design houses and foundries.

Related blog – Is That PDK Safe to Use Yet?

At DAC you can also see their latest presentation, “Low-Cost, High-Accuracy Variation Characterization for Nanoscale IC Technologies via Novel Learning-based Techniques.”

When you visit PDA at DAC be sure to ask for either Albert Li or Riko Radojcic.

Albert is President at PDA and has 15 years of experience. He founded the company and previously worked at Accelicon, which Agilent Technologies acquired in 2012. On the education side, Mr. Li earned a BS in EECS at Tsinghua University plus an MSEE from Vanderbilt University.

Riko Radojcic recently joined PDA and has been in the semiconductor industry for 30 years in a variety of engineering, management and consulting roles. He has worked at companies including Qualcomm, PDF Solutions, Cadence and Unisys.

Here’s where booth 1929 is located for PDA at DAC this year.