
ClioSoft & DAC : Booth 613 – Collaborative Design, Design Data & IP Management and Design Reuse
by Mitch Heins on 06-06-2017 at 7:00 am


It’s time again to gather for the next Design Automation Conference (DAC). This will be the 54th such meeting, and this year it runs from June 19th–21st in the Live Music Capital of the World, Austin, Texas. Put on your best duds, boots and cowboy hat and make your way to Texas.

While you are there, make sure to stop by the ClioSoft booth, #613, and learn about groundbreaking advancements in system-on-chip (SoC) design and intellectual property (IP) management. ClioSoft will be giving demos and leading discussions on the challenges of SoC design and how their design data management software, SOS, is helping companies like Analog Devices, Google and TSMC to manage their projects.

State-of-the-art SoCs can now comprise multiple billions of transistors, with a multitude of different IPs (both third-party and internally developed) being used from all over the world. Design teams continue to get bigger and include engineers from multiple disciplines including hardware, software, design verification, packaging, manufacturing test and yield, to name a few. All these disciplines have their part to play, and companies are being challenged to bring these disparate groups together to form virtual teams to make successful products. ClioSoft’s products are used to enable this collaboration even when teams are dispersed across thousands of miles and multiple time zones.

In addition to demonstrating their SOS and Visual Design Diff products (the latter a slick capability that enables chip designers to visualize and track changes between different versions of schematics and layouts), ClioSoft will also be showcasing their new designHUB software. The designHUB platform is a unique technology which helps companies take design reuse to a whole new dimension. It provides an IP reuse ecosystem encompassing a knowledge base for both internal and third-party IPs, helping designers leverage the past experience of the designers who came before them.

In addition, by providing a dashboard for designers and projects alike, it brings another dimension to collaboration within a company. The designHUB platform enables the creation and sharing of IP meta-data that can be used by teams to search, view, qualify and select IP for their designs. It also tracks IP usage, including third-party IPs, and mitigates unauthorized usage of IPs within a company. When visiting, be sure to have ClioSoft explain their use of crowdsourcing within designHUB to enable knowledge transfer between IP developers and the design teams that use the IP.
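
As a rough illustration of the kind of meta-data involved (a hypothetical record of my own, not ClioSoft’s actual schema), an IP catalog entry might carry fields like these:

```python
# Hypothetical IP catalog entry; all field names and values are illustrative only.
ip_record = {
    "name": "usb3_phy",
    "version": "2.1",
    "origin": "third-party",                    # or "internal"
    "license": "per-project, expires 2018-12-31",
    "process_nodes": ["28nm", "16nm"],
    "maturity": "silicon-proven",
    "authorized_projects": ["soc_alpha", "soc_beta"],
}

def usage_allowed(record: dict, project: str) -> bool:
    """Toy check of the sort a dashboard could run to flag unauthorized IP usage."""
    return project in record["authorized_projects"]

print(usage_allowed(ip_record, "soc_gamma"))    # False -> usage would be flagged
```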

While you are at DAC you may also want to check out the DAC panel discussion titled ‘Have Third Party IPs Killed Internal IP Development?’ This panel will discuss the pros and cons of using third-party IPs and their impact on internal IP development. Ranjit Adhikary, ClioSoft VP of Marketing, will be on this panel along with Rich Wawrzyniak of Semico Research Corp, Philippe Quinio of STMicroelectronics, Daniel Cooley of Silicon Labs and Andy Hawkins of Cypress Semiconductor. The panel will be held Wednesday, June 21, from 3:30–5:00 pm in Ballroom G.

And what would DAC be without a little fun and relaxation? The ClioSoft team will be hosting a DAC party Tuesday evening, June 20th, starting at 7:00 pm. The party will be held at Micheladas in downtown Austin, one of Austin’s many venues made famous by the South by Southwest events that happen each year. There’s no admission fee, and ClioSoft will be sponsoring complimentary beers, margaritas, wine and hors d’oeuvres. It’s a great opportunity to relax after a long day at the DAC show and network with friends and colleagues. Inquire with ClioSoft now, as the party is by invitation only and invitations are limited.




Webinar: Achieving Very High Bandwidth Chip-to-Chip Communication with the Interlaken Interface Protocol
by Eric Esteve on 06-05-2017 at 12:00 pm

Open Silicon will hold this webinar on June 13th at 8 am PDT (5 pm CET) to describe their Interlaken IP core and how to achieve very high bandwidth chip-to-chip (C2C) communication in various networking applications. To be more specific, the Interlaken protocol can be used to support Packet Processing/NPU, Traffic Management, Switch Fabric, Switch Fabric Interface, Framer/Mapper, TCAMs or Serial Memory (INLK-LA). Open Silicon is marketing the Interlaken IP core for ASIC, but the networking industry also loves FPGA technology, which offers fast Time-to-Market (TTM) and, even more important, the well-known advantage of flexibility, allowing protocol evolution to be supported in the field. The Interlaken protocol also supports FPGA implementation.

There are significant demands for performance and bandwidth in high-speed communications, and pressure to step up the pace of technological advancement. The panelists will outline the challenges that designers of advanced communication applications encounter with things like controller specification, latency, and various SerDes architectures and implementations. They will outline use cases and discuss the key technical advantages that the Interlaken IP core offers, such as 1.2 Tbps high-bandwidth performance and up to 56 Gbps SerDes rates with Forward Error Correction (FEC), as well as its multiple user-data interface options. They will also discuss the architectural advantages of the core, such as its flexibility, configurability and scalability.
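
As a back-of-the-envelope sketch (my own arithmetic, not figures from the webinar, and ignoring encoding and protocol overhead), the number of SerDes lanes needed for a target aggregate bandwidth simply follows from the lane rate:

```python
import math

def lanes_needed(target_gbps: float, lane_rate_gbps: float) -> int:
    """Estimate SerDes lane count for a target raw bandwidth (overhead ignored)."""
    return math.ceil(target_gbps / lane_rate_gbps)

# Hypothetical configurations for a 1.2 Tbps aggregate link.
for rate in (25.0, 56.0):
    print(f"{rate} Gbps lanes -> {lanes_needed(1200, rate)} lanes")
```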

I am very honored to have been asked by Open Silicon to moderate this webinar. But such an honor has to be earned with some homework to be well prepared, so I will share with SemiWiki readers some bits of information about Interlaken, so you (the reader) will also be well prepared!

Meeting these requirements from the Interlaken Alliance will ensure interoperability for different implementations (don’t forget that Interlaken is a chip-to-chip communication protocol, so interoperability is key):

Supports multiple parallel lanes for data transfer at the physical level
Packet-based user interface, with each packet consisting of multiple bursts
Simple control words to delineate packets and bursts
Protocol independence from the number of SerDes lanes and SerDes rates
Ability to communicate per-channel backpressure
Performance scales with the number of lanes

I think that most of the points in this list are clear enough, with the probable exception of per-channel backpressure. If you are (like me) not aware of this concept, you will have to dig to understand the meaning and implications of per-channel backpressure. Don’t worry, I did it, and found this definition:
In queueing theory, a discipline within the mathematical theory of probability, the backpressure routing algorithm is a method for directing traffic around a queueing network that achieves maximum network throughput, which is established using concepts of Lyapunov drift. Backpressure routing considers the situation where each job can visit multiple service nodes in the network. It is an extension of max-weight scheduling, in which each job visits only a single service node.

To make it simple:

  • Max-weight routing: each job –> a single service node
  • Per-channel backpressure: each job –> multiple service nodes

    Using the second allows the network to achieve maximum throughput.
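
    In the Interlaken context, per-channel backpressure is in practice per-logical-channel flow control: a congested channel can be throttled without blocking traffic on the others. Here is a minimal conceptual sketch of that idea (my own illustration in Python, not code from Open Silicon or the webinar; channel depths and burst names are hypothetical):

```python
from collections import deque

class Channel:
    """One logical channel with a bounded receive queue."""
    def __init__(self, depth: int):
        self.queue = deque()
        self.depth = depth

    @property
    def xon(self) -> bool:
        # Receiver signals XON while it has room, XOFF once the queue is full.
        return len(self.queue) < self.depth

def transmit(channels, bursts):
    """Send bursts only on channels currently signalling XON.

    Backpressure on one channel does not block the others, which is the
    point of per-channel (rather than link-level) flow control.
    """
    sent, held = [], []
    for ch_id, burst in bursts:
        if channels[ch_id].xon:
            channels[ch_id].queue.append(burst)
            sent.append((ch_id, burst))
        else:
            held.append((ch_id, burst))   # retried later; other channels unaffected
    return sent, held

# Hypothetical situation: channel 0 is full (backpressured), channel 1 keeps flowing.
channels = {0: Channel(depth=2), 1: Channel(depth=2)}
channels[0].queue.extend(["b0", "b1"])
print(transmit(channels, [(0, "b2"), (1, "b3")]))   # b3 sent, b2 held
```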

    To continue with the definitions, when looking at the features supported by the Open Silicon Interlaken IP core, I found this one: “Supports Interlaken Look Aside protocol”. Look Aside?

    We can find the meaning of this look-aside function, extensively used in packet-based processing, with some examples of look-aside devices:

    • Search engines, which receive small portions of a packet header
    • Policing engines, which receive small portions of a packet header, or a simple command set
    • Value-add memories, which may perform mathematical operations or linked-list traversals in addition to reads and writes
    • Queuing and scheduling engines which dictate the packet transmission order to a packet buffer device

    The basic idea is to process only a small portion of a packet, aside from the data path. Looking at this chart, you immediately understand the benefit of this look-aside protocol: the smaller the message size (on the abscissa), the higher the message rate.
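
    A trivial back-of-the-envelope version of that relationship (my own arithmetic, not the webinar’s chart): for a fixed link bandwidth, the achievable message rate is inversely proportional to the message size.

```python
def message_rate_mps(link_gbps: float, message_bytes: int) -> float:
    """Messages per second on a raw link of link_gbps, ignoring protocol overhead."""
    return link_gbps * 1e9 / 8 / message_bytes

# Hypothetical look-aside message sizes on a 100 Gbps link.
for size in (64, 128, 256):
    print(f"{size} B messages -> {message_rate_mps(100, size) / 1e6:.0f} Mmsg/s")
```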

    I am sure that you will learn a lot more about Interlaken if you attend this webinar from Open Silicon. Even if the Interlaken protocol is based on some complex concepts, don’t forget that the elegance of Interlaken is its simplicity and high flexibility, as the controller can interface with any SerDes at rates between 3.125 Gbps and 56 Gbps to support very high bandwidth chip-to-chip communication.

    To register for the webinar, click here

    Eric Esteve
    from IPnest


    Tools for Advanced Packaging Design Follow Moore’s Law, Too!
    by Tom Dillinger on 06-05-2017 at 9:00 am

    There is an emerging set of advanced packaging technologies that enables unique product designs, with the capability to integrate multiple die from potentially heterogeneous technologies. These “system-in-package” (SiP) offerings provide architects with the opportunity to optimize product performance, power, cost, and area/volume, with the capabilities to: merge processing with local memory (e.g., HBM); consolidate multiple die in rigid-substrate 2D (e.g., CoWoS) or 2.5D configurations (e.g., utilizing vertical vias through an interposer between layers); or mold (multiple) die in a high pin count, low-cost module (e.g., FOWLP, with redistribution layers to package bumps).

    In many ways, the complexities of die and advanced package technology selection are analogous to the challenges faced when targeting an integrated design to the optimum process node with appropriate PPA (and available, qualified IP).

    As Moore’s Law has been consistent throughout the process generations, chip physical design methodologies have similarly evolved to help manage the increasing circuit capacity and diversity. Specifically, chip physical implementation tools transitioned from a “flat” approach to a methodology incorporating hierarchical floorplanning, interface pin constraint management, and detailed APR algorithms. Design team specializations developed, with architects working on floorplan prototypes and PD experts addressing the intricacies of routing design rules and DFM/DFY requirements.

    Advanced package design shares a similar evolution – architects developing prototypes, with implementation experts completing the (bond wire or bump) die attach topologies, the signal/power routes, and the unique manufacturing patterns. SiP development tools traditionally provided a single cockpit, targeting the physical design expert – Mentor has just announced an evolution to their Xpedition product family, to address the “Moore’s Law of advanced package design flow” requirements.

    I recently spoke with Keith Felton, Product Marketing Manager with the Advanced IC Packaging Solutions group at Mentor, a Siemens Business. “New tools and flows are required to support the challenges of high-density advanced packaging, or HDAP. Mentor is introducing two new products – Xpedition Substrate Integrator and Xpedition Package Designer,” Keith highlighted.


    Figure 1. Xpedition Substrate Integrator and Xpedition Package Designer product positioning


    Figure 2. Design domains and data flow for xSI and xPD

    “xSI is specifically targeted to support package design prototyping. The definition of the overall design connectivity is used to develop the substrate, die placement and stacking data, and preliminary bond/ball physical assignments. Architects can integrate their design concept with the PCB (using Xpedition PCB). The optimized prototype is then forwarded to the xPD flow for final development,” Keith explained. The figure below illustrates how a bump assignment in xSI is co-designed with the PCB, spanning the board and package domains.


    Figure 3. Data exchange between xSI and Xpedition PCB

    The xSI environment integrates with HyperLynx DRC, for rule checking appropriate for initial definition (for all but the most intricate manufacturing checks). As with the hierarchical chip design flow, the architect using xSI can pass rules/constraints to detailed design – e.g., routed net shielding requirements.

    Xpedition Package Designer, or xPD, is the corresponding detailed implementation tool in the new HDAP methodology. Physical design experts finalize the attach data (bond or bump) and the power/signal routes. xPD provides designers with both a 2D and a (highly illustrative) 3D design view, as illustrated below.


    Figure 4. 2D and 3D design views in xPD

    If the prototype definition requires modification, xPD can send updates back to the xSI architect – see the double-sided flow arrow between the two tools in the flow diagram above.

    The detailed design will incorporate the data appropriate for package manufacture – e.g., complex metal route and mesh fill patterns required to accommodate degassing (absorbed moisture removal at high temperature during deposition) and to minimize mechanical stress gradients in the final package. The verification of the design data will link to Mentor’s Calibre 3DSTACK, with robust algorithms/checks for the non-Manhattan geometry used in package design.

    Mentor has addressed the Windows versus Linux operating system environment differences between xPD and Calibre 3DSTACK in a way that is seamless to designers – the correlation between Calibre and xPD data is illustrated in the figure below; full DRC and LVS checking support is provided.


    Figure 5. Xpedition Package Designer and Calibre 3DSTACK data correlation

    The xPD environment also links to Mentor’s HyperLynx Fast3D algorithm for package parasitic extraction and model generation for signal integrity analysis. Mentor’s FloTHERM interfaces to xPD, as well, enabling a detailed thermal analysis of the package. Given the varying switching activity among the die and the disparate materials used (with thermal expansion coefficient differences) within the package, a thermo-mechanical stress reliability analysis is mandatory.

    Chip design methodologies have evolved to support the complexities afforded by Moore’s Law, from hierarchical floorplanning to detailed DFM/DFY checks. High-density advanced packaging technology also now requires a set of tools and flows that addresses both the early design optimization space and the manufacturability/reliability requirements, leveraging the expertise of different team members. Mentor’s new Xpedition Substrate Integrator and Xpedition Package Designer directly address this new package design methodology.

    For more information on the new Xpedition Substrate Integrator and Xpedition Package Designer, please follow this link.

    -chipguy


    An InFormal Chat
    by Bernard Murphy on 06-05-2017 at 7:00 am

    Any sufficiently advanced technology is indistinguishable from magic, as the saying goes. Which is all very well when the purpose is entertainment or serving the arcane skills of a select priesthood, but it’s not a good way to grow a market. Then you want to dispel the magic aura, make the basic mechanics more accessible to a wider audience and push usage/applications rather than the mystical spells of the inner circle. After all, few of us have a deep understanding of how our smartphones work but now they’re used by virtually everyone.


    Some of what this takes is usability – we engineers never met a problem we believed couldn’t be solved by yet more engineering, in this case through better user experiences, more attuned to the way we think and even the way we communicate (touch, gestures, speech, …). But in some cases, widespread adoption also depends heavily on socializing the domain. Or, back on the magic analogy, showing how the trick is done – not professional magician to professional magician in magician-speak, but simply explained to us non-experts who just need to make the trick work to get our jobs done, along with a basic understanding of what happened behind the curtain (or inside the hat).

    Formal verification fits this description all too well. We know it does incredible things, providing complete proofs inaccessible to dynamic verification, but much of what is written about the domain today is expert to expert, full of math and strange terms like witness and bounded model-checking. Usage in some areas has been simplified but we still wonder how those kinds of verification fit into our overall test and coverage objectives. And other areas still look inaccessible to anyone but PhD experts who must understand bounded proofs, BDD versus ATPG versus SAT and how to mutter all the right incantations to sufficiently constrain (but not over-constrain) their proofs.

    All of which makes it very timely that Synopsys is launching a blog today called InFormal Chat, written by verification engineers for verification engineers. I’ve read some of the initial blogs. They’re informal and short, each a quick read to pull back the curtain on some aspect of formal verification. They don’t worry much about polished delivery – this is engineers talking to engineers with little marketing interference.

    Synopsys is clearly proud of the expertise they have built up in the VC Formal team, and in the product, and want to get the message out that they are a leading contender in this area. They’ll certainly talk about tool capabilities, but they also want to help users and potential users better understand the magic. Some of the discussion will be on the mechanics of getting through formal analysis, like how to handle incomplete proofs and where to watch out for pitfalls. They’ll talk sometimes about advanced topics, such as how to build proofs for cache-coherency. And they are committed to providing thought leadership on emerging problem domains and suggested solution approaches.

    This is a worthy direction to socialize formal verification and to convert more of us into at least passable formal magicians. I’m told new posts should appear every couple of weeks. Together with a good search mechanism, this should be a valuable resource for all of us formal wannabes. You can find the link HERE.


    Margin Call
    by Bernard Murphy on 06-04-2017 at 7:00 am

    A year ago, I wrote about Ansys’ intro of Big Data methods into the world of power integrity analysis. The motivation behind this advance was introduced in another blog, questioning how far margin-based approaches to complex multi-dimensional analyses could go. An accurate analysis of power integrity in a complex chip should look at multiple dimensions: a realistic range of use-case simulations, timing, implementation, temperature, noise and many other factors. But that would demand an impossibly complex simulation; instead we pick a primary topic of concern, say IR drop in power rails, simulate a narrow window of activity and represent all other factors by repeating analysis at a necessarily limited set of corners of margins on the other factors.


    That approach ignores potential correlations between these factors; it worked well in simpler designs built in simpler technologies but is seriously flawed for multi-billion-gate designs targeted at advanced technologies. Ignoring correlations requires you to design to worst-case margins, increasing area and cost, blocking routing paths, delaying timing closure and still leaving you exposed, because without impossibly over-safe margins you’re still gambling that worse cases don’t lurk in hidden correlations between the corners you analyzed.
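
    A generic toy example of why stacked worst-case margins over-design (my own illustration, not Ansys’s method, with made-up numbers): when two contributors are negatively correlated they rarely hit their individual worst cases together, so summing independent worst-case corners overstates the margin actually needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical delay contributions (ps) from IR drop and temperature,
# modeled as negatively correlated in this toy example.
cov = [[25.0, -15.0],
       [-15.0, 25.0]]
ir_delay, temp_delay = rng.multivariate_normal([50.0, 40.0], cov, size=100_000).T

stacked_corner = ir_delay.max() + temp_delay.max()   # independent worst cases added
joint_worst = (ir_delay + temp_delay).max()          # worst case of the joint behavior

print(f"stacked-corner margin: {stacked_corner:.1f} ps")
print(f"observed joint worst:  {joint_worst:.1f} ps")  # noticeably smaller
```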


    Ansys big data technology (called SeaScape) aims to resolve this problem by getting closer to a true multi-dimensional analysis, tapping existing distributed data reserves of simulation, timing, power, physical and integrity data through distributed processing. This breadth of analysis should provide a more realistic view across multiple domains, providing both efficiency and safety; you don’t overdesign for “unknown unknowns” and you don’t under-design because you see a much broader range of data. Ansys have had a year since my first blog on the topic, so it seems reasonable to call this – did they pull it off?

    It’s always difficult to get direct customer quotes on EDA technology, so I must talk here in general terms, but I believe there will be some joint presentation activity at DAC, so look out for that. The technology first appears in RedHawk-SC and has been proven in production with at least two of the top 10 design companies that I know of, building the biggest and most advanced designs around today. I was told that 16 of those designs are already in silicon and around twice that many have taped-out.

    Off-the-record customer views on the value-add are pretty clear. The most immediately obvious advantage is in run-times. Since much of the processing is distributed, they can get results on a block within an hour and a (huge) full-chip overnight. It becomes practical to re-run integrity analysis on every P&R update. They can run four corners simultaneously for static IR, EM and DVD transients. They can profile huge RTL FSDBs and parallel-solve for multiple modes to find the best vectors with the best activities for EM and IR stressing. And that provides the confidence to be more aggressive in reducing over-design, which in turn accelerates closure (fewer blockages). This customer also commented on the elasticity of this approach to analysis. Previously, running faster was capped by the capabilities of the biggest systems they could use for analysis. Now, since analysis is distributed, they found it much easier to scale up by simply adding access to more systems.
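
    As a generic sketch of the distributed pattern (my own illustration, not SeaScape’s implementation; corner names and numbers are hypothetical), independent corner analyses map naturally onto a pool of workers, and throughput scales by simply adding workers:

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical per-corner results a real tool would compute from the design database.
WORST_DROP_MV = {"ss_0.72V_125C": 48.0, "ss_0.72V_m40C": 44.0,
                 "tt_0.80V_25C": 36.0, "ff_0.88V_m40C": 31.0}

def analyze_corner(corner: str):
    """Stand-in for one independent integrity analysis (static IR, EM, ...)."""
    return corner, WORST_DROP_MV[corner]

if __name__ == "__main__":
    corners = list(WORST_DROP_MV)
    # Elasticity: raise max_workers (or add machines) and the same job runs faster.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for corner, drop in pool.map(analyze_corner, corners):
            print(f"{corner}: worst IR drop {drop} mV")
```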

    Faster is always good, but what about the impact on the final design? One very compelling customer example looked at die-size reduction. In that case they removed M2 over standard cell rows, then added it back only where this more refined analysis showed it was needed to meet power integrity margins. They found that they could reduce the overall die size by ~5% by freeing up more resources for signal routing, which resulted in a 10% reduction in P&R block size. That’s an easily understood and significant advantage, enabled by big data analytics.

    All this is great for teams building multi-billion gate chips in 16 or 7nm, but I was interested to hear that both customers saw significant value for analyzing and optimizing blocks, between 1M and 8M gates, in around 50 minutes, which they found helped them close faster and more completely on physical units than was possible before. So the technology should also have value for less challenging designs.

    Given this, my call is that Ansys delivered on the promise. But don’t take my word for it. Check out what they will be presenting at DAC. You can learn more about SeaScape HERE.


    Memory drives semiconductor boom in 2017
    by Bill Jewell on 06-03-2017 at 7:00 am

    The semiconductor market was down 0.4% in first quarter 2017 from 4Q 2016 and up 18.1% from a year ago, according to World Semiconductor Trade Statistics (WSTS). The 0.4% decline in 1Q 2017 versus 4Q 2016 is strong compared to an average 4% decline from 4Q to 1Q over the previous five years. The relative strength in 1Q 2017 was driven by a strong memory market. The three largest memory companies – Samsung, SK Hynix and Micron Technology – grew their revenues a combined 10% in 1Q 2017 versus 4Q 2016. Excluding these three companies the semiconductor market declined 3.7%, in line with recent seasonal trends.

    Memory will help drive solid 2Q 2017 growth over 1Q 2017. Micron Technology expects 16% growth in its fiscal quarter ending this month versus the prior quarter. Samsung and SK Hynix did not provide 2Q 2017 guidance, but both companies cited strong demand and healthy price trends for both DRAM and flash memory. With the exception of Intel – which is expecting a 2.7% decline – the top non-memory semiconductor companies have guided for healthy 2Q 2017 revenue growth. The midpoint of the guidance from these companies ranges from 2.3% from MediaTek to 5.0% from STMicroelectronics. The high-end guidance ranges from 4.5% from MediaTek to 11.6% from Qualcomm. Qualcomm cut $500 million (about 10 percentage points of growth) from its initial guidance due to a royalty dispute with Apple. NXP Semiconductors did not provide guidance since its acquisition by Qualcomm is pending. Toshiba’s reporting has been delayed by financial problems and it is in the process of selling off its memory business. Intel’s projected revenue decline and Samsung’s strong memory growth are expected to result in Samsung passing Intel as the world’s largest semiconductor company in 2Q 2017, according to IC Insights.

    The outlook for full year 2017 semiconductor market growth has improved following the robust start to the year. Recent forecasts range from 11% from IC Insights to 16% from us at Semiconductor Intelligence. These forecasts are about 5 to 6 percentage points higher than the forecasts made by the same companies in the January to February time frame. Forecasts for 2018 include 3.5% growth from Mike Cowan and 7.0% from our Semiconductor Intelligence. Our outlook for 2018 is based on moderating memory demand and stable economic trends and electronic equipment markets.

    The global economic outlook for 2017 and 2018 is solid, according to the latest forecast from the International Monetary Fund (IMF). The table below shows IMF’s April 2017 forecast for annual GDP percent change and the percentage point change in GDP growth rate (acceleration or deceleration). The IMF expects global economic GDP growth to pick up from 3.1% in 2016 to 3.5% in 2017 and 3.6% in 2018. The advanced economies should see modest growth of 2.0% in 2017 and 2018. Among the key countries in this category, improvement in the U.S. is offset by flat or decelerating growth in the Euro area, United Kingdom and Japan. The global GDP growth acceleration is driven by emerging and developing economies. Within this category, lower growth rates in China are offset by accelerating growth in India and the ASEAN-5 (Indonesia, Malaysia, Philippines, Thailand and Vietnam). Also Russia and Latin America should show growth in 2017 and 2018 after GDP declines in 2016.

    Growth of 16% in the semiconductor market in 2017 does not seem like much of an upturn compared to prior peak growth years (32% in 2010, 28% in 2004 and 37% in 2000). However, it will be the first double-digit growth in seven years and follows a flat 2015 (-0.2%) and a weak 2016 (1.1%). But all good things must come to an end. Memory booms are always followed by memory busts, usually dragging the overall semiconductor market negative. This could happen as early as 2019.


    Is ARC HS4xD Family More a CPU or DSP IP Core?
    by Eric Esteve on 06-02-2017 at 4:00 pm

    When I had to define the various IP categories (processor, analog & mixed-signal, wired interfaces, etc.) to build the Design IP Report, I scratched my head for a while about the processor main category: how to define the sub-categories? Not that long ago, it was easy to identify a CPU IP core and a DSP IP core. Today, while a DSP is clearly dedicated to processing digital signals, a CPU IP may also support this type of task on top of the main processing/control function it was initially designed for. Synopsys’ new DesignWare ARC HS4xD family is a perfect example of a RISC CPU IP core offering 5.0 CoreMark/MHz (so we should rank it in the CPU IP category) while also being capable of high-performance pure DSP processing (but can we rank it in the DSP IP category?).

    Let’s make it clear from the beginning: HS44, HS46 and HS48 execute RISC-only operations, while HS45D and HS47D execute both RISC and DSP operations (through ARCv2DSP). When combining RISC and DSP capabilities in a processor, the key is the software tools and library support, allowing seamless C/C++ programming and debug.

    All the cores support dual-issue, increasing utilization of the functional units with a limited amount of extra hardware. What is dual-issue? The capability to issue up to two instructions per clock, with in-order execution and the same software view as single-issue. Dual-issue increases both RISC and DSP performance, while the area and power penalty is modest, at only a 15% increase. The instruction set has been improved to increase instructions per clock, allowing multiple instructions to execute in parallel and take advantage of the dual-issue pipeline.
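
    As a rough conceptual sketch (my own toy model, not Synopsys’s microarchitecture), dual-issue means an in-order machine may issue a second instruction in the same cycle when it does not depend on the first:

```python
from dataclasses import dataclass

@dataclass
class Instr:
    dest: str
    srcs: tuple

def cycles_in_order(program, width):
    """Cycles for an in-order machine issuing up to `width` instructions per cycle;
    a later instruction only pairs if it does not read a result produced in the
    same cycle (a deliberately simplified dependency rule)."""
    cycles, i = 0, 0
    while i < len(program):
        issued = [program[i]]
        i += 1
        while len(issued) < width and i < len(program):
            if any(prev.dest in program[i].srcs for prev in issued):
                break                     # dependency: wait for the next cycle
            issued.append(program[i])
            i += 1
        cycles += 1
    return cycles

# Hypothetical instruction stream with one dependent pair.
prog = [Instr("r1", ("a", "b")), Instr("r2", ("r1", "c")),
        Instr("r3", ("d", "e")), Instr("r4", ("r3", "f"))]
print("single-issue cycles:", cycles_in_order(prog, 1))   # 4
print("dual-issue cycles:  ", cycles_in_order(prog, 2))   # 3
```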

    While all the cores support instruction and data closely coupled memories (CCM) from 512 B to 16 MB, the designer will have to select the HS46, HS47D or HS48 to benefit from instruction and data caches of up to 64 KB, supporting cache coherency. The L2 cache (from 2 MB to 8 MB) is available as an option for the HS46 and HS47D, as is the MMU, but it is supported by default by the HS48 core.

    Such a core family can support a very wide range of applications, thanks to the high level of configurability. For example, all the cores support multi-core implementation, with single, dual or quad instances. Moreover, Synopsys proposes various licensable options, like FPU, MPU, MMU, real-time trace (RTT), L2 cache, FastMath Pack, cluster DMA or CoreSight Interface.
    The HS4x RISC (only) family can address enterprise SSD processing needs, home networking, automotive control, wireless control or home automation.
    With the HS4xD family, it’s possible to support mobile baseband, Voice/speech applications and multi-channel home audio or human machine interface.

    The HS4x(D) family has been tailored for embedded applications, and every core is able to manage power budgets that are fixed, at best, or dropping. For every core, the power domain policy has been enhanced, offering user control over power management.
    Every CPU IP vendor will claim to offer the best solution, which is why it can be wise to look at verified facts when comparing with the competition. Let’s talk about performance efficiency rather than raw performance, as most applications need tight control of the power budget. Synopsys claims to offer best-in-class performance efficiency versus the competition, with the same or better features.
    Some facts:

    • 45% higher performance than Cortex-A9 at ½ the power consumption
    • 2x higher performance than MIPS InterAptiv or Cortex-A7 at 20% lower power consumption
    • 2.5x higher performance than Cadence Tensilica processors
    • HS4x cores can be clocked at over 2.5 GHz in 16ff (typical), and this is faster than any core in this class
    • HS48x2 delivers higher performance than Cortex-A17… at lower power than Cortex-A9
    • The HS family supports up to 8 contexts, while ARM and Cadence only support 1

    So, “should we rank this DesignWare HS4xD IP core family in the CPU or DSP category?” is probably not the most crucial question. The real point to highlight is which competitor will challenge this HS4xD family, and when!

    By Eric Esteve from IPnest


    AIM Photonics Catching Its Stride as They Move into 2nd Year
    by Mitch Heins on 06-02-2017 at 7:00 am

    AIM Photonics held its 2017 Proposers Meetings on May 24th in Rochester, NY. The meetings included a review of AIM’s progress and strategic direction by their TRB (technical review board) and a session targeted at PIC (photonic integrated circuit) design for multi-project wafer (MPW) runs. While these discussions were covered under non-disclosure agreements, it’s easy to see from public postings in the news and on the AIM website that significant progress has been made by the institution whose mission it is to “advance integrated photonic circuit manufacturing technology development while simultaneously providing access to state-of-the-art fabrication, packaging and testing capabilities for small-to-medium enterprises, academia and the government”. I’ve pulled together a summary of some AIM PIC design-related highlights based upon data publicly available on the AIM website.

    The PIC Design for MPW session was chaired by Brett Attaway, who is the AIM Photonics EPDA (Electronic/Photonic Design Automation) Director. From a posted interview of this session, Brett pointed out that the goal of AIM’s EPDA work is to enable the design community with MPW and eventually TAP (test and packaging) services for PIC designs. This includes the development of AIM Photonics PDKs (process design kits) as well as electronic/photonic design flows and methodologies. The first AIM PDK was released in June of 2016 (v0.3). A second release was made in September (v0.5) of the same year and a third release was made in early January of 2017 (v1.0). Plans are to make major releases of the PDKs twice per year, with v1.5 currently targeted for August of 2017 and then v2.0 and v2.5 being released in January and July of 2018, respectively.

    PDK releases include three variants: one for passive devices, one for active devices and one for a photonic interposer. The interposer enables the integration of electrical and photonic ICs as well as lasers into the same package. Per the AIM website, the passives portion of the PDK includes components such as silicon and silicon nitride versions of waveguides, edge couplers, vertical couplers, 3 dB 4-port couplers, Y-junctions, directional couplers, crossings and an interesting device known as an escalator coupler. The escalator coupler enables designers to move light from layer to layer, sort of like a photonic via. The actives portion of the PDK includes components such as digital and analog versions of germanium photo-detectors and Mach-Zehnder modulators. Also included are thermo-optic phase shifters and switches as well as tunable filters and micro-disk switches and modulators. AIM plans to have five MPW runs in 2017: 2 full-flow runs with actives and passives, 2 passives-only runs and 1 interposer run for integration work. MOSIS acts as the AIM MPW aggregator and distributor of AIM PDKs.

    AIM PDKs include support for documentation and CAD views enabling schematic capture, simulation, layout and design rule checking for a variety of flows including:

    • Cadence Virtuoso + Lumerical Solutions INTERCONNECT + PhoeniX Software OptoDesigner for mixed electrical-photonic design.
    • Mentor Graphics Pyxis + Lumerical Solutions INTERCONNECT + PhoeniX Software OptoDesigner for mixed electrical-photonic design.
    • Mentor Graphics Calibre for sign-off design rule checking including design-for-manufacturing and simulation of advanced lithographic effects.
    • Synopsys OptSim Circuit + PhoeniX Software OptoDesigner for PIC design. Synopsys component level simulation tools can also be used in conjunction with the AIM processes.
    • Lumerical component level photonic simulation tools + INTERCONNECT for PIC design.
    • PhoeniX Software photonic layout and component level simulation tools + ASPIC for PIC design.
    • There is also an interface between Lumerical Solutions and PhoeniX Software for PIC design.


    The AIM Proposers meetings are meant to solicit inputs for next year’s funded AIM projects. Per the video with Brett Attaway, one of the key items that AIM is pursuing is to continue a project started in 2016 to have photonic reference designs that can be duplicated across the supported EPDA design flows. Per a presentation made by Brett at the Optical Fiber Conference in March of this year, the current reference design is focused on an integrated transceiver with PIC and CMOS designs as well as some efforts to try to collaborate on ways to ease PDK creation. Brett mentioned that he would like to see the current project expanded in 2018 to put more focus on the efficient system-level design of photonic systems that would include interface modeling between the ICs (electronic and photonic) and AIM’s interposer technology.

    Additional projects were being discussed behind closed doors, but it’s a sure bet that the rest of the proposed projects will have something to do with one of the four KTMAs (Key Technology Manufacturing Areas):

    • Telecom/Datacom,
    • RF Analog Applications
    • PIC Sensors
    • PIC Array Technologies

    or one of the four MCEs (Manufacturing Innovation Centers of Excellence):

    • EPDA: Electronic Photonic Design Automation
    • MPWA: Multi Project Wafer / Assembly
    • ICT: Inline Control & Test
    • TAP: Test Assembly and Packaging

    AIM is pushing hard to enable the ecosystem, and there is much activity in the marketplace as both members and non-members take advantage of the MPW services being offered now. It looks like AIM is hitting its stride, which is good, because not only do they need to enable the ecosystem, but they also must be self-funding by the time their five-year funding from the government expires sometime in 2020.

    Time flies when you’re having fun and right now time seems to be flying at the speed of light for AIM Photonics.



    Getting to IP Functional Signoff
    by Bernard Murphy on 06-01-2017 at 7:00 am

    In the early days of IP reuse and platform-based design there was a widely-shared vision of in-house IP development teams churning out libraries of reusable IP, which could then be leveraged in many different SoC applications. This vision was enthusiastically pursued for a while; this is what drove reusability standards and cost-metrics, among other initiatives. But shifts in markets and fierce competition disrupted the in-house ideal. IP and EDA vendors offered extensive and growing libraries for standard IP, proven over many more designs and in many more processes than most in-house design teams could match. And for chip-vendors, the cycle time and cost to make existing IP truly reusable became increasingly difficult to justify in the face of tougher competition and squeezed schedules.


    This became apparent in a retrenchment to adapting internal assets as needed, design by design, rather than investing much in forward-looking reuse objectives; when you’re fighting to stay in the game, tactical priorities tend to overrule long-term strategies. Now it seems the outlook for many semiconductor suppliers has become more stable, and EDA vendors like Cadence see a return to separate IP development teams and a resurgence in demand for reusability. This is motivating a greater expectation of RTL signoff for IP; after all, reuse is meaningless if an IP must be reworked and re-verified for every design.

    Pete Hardee (product management director at Cadence) told me that chip verification teams are now demanding a higher level of functional quality from IP teams than they had expected in the past, because they no longer have time to debug IP problems. Naturally this requires IP development teams to make a bigger investment in dynamic verification and it also requires starting to make an investment in formal verification; when you don’t know in advance how an IP is going to be used, the more complete checking offered by formal methods becomes important. But there’s a challenge – IP teams can’t afford to staff for formal experts; they must be self-sufficient, so investment in this area must require minimal formal expertise.

    In support of this need, Cadence, in their JasperGold product, was probably the first to provide a range of autoprove apps requiring little to no expertise in formal, and has recently announced significant customer validation for two of these: Superlint and clock domain crossing (CDC) analysis. The Superlint app includes the standard HAL checks, along with checks requiring formal such as overflow and underflow (no, it’s not just a width check), controllability and observability (for testability analysis) and FSM livelock and deadlock checks. CDC analysis includes structural checks (with support for multiple synchronization styles) along with reconvergence analysis and a range of functional checks, such as correct gray-coding on FIFO pointers.
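
    As a small conceptual illustration (not the JasperGold check itself), “correct gray-coding” on a FIFO pointer means successive values differ in exactly one bit, which is what makes the pointer safe to sample in another clock domain. The property is easy to state, and to screen for on a recorded value sequence:

```python
def is_gray_coded(values):
    """True if every consecutive pair of pointer values differs in exactly one bit."""
    return all(bin(a ^ b).count("1") == 1 for a, b in zip(values, values[1:]))

def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

# Hypothetical 3-bit FIFO write pointer, first binary then gray-coded, including wrap.
binary_ptr = list(range(8)) + [0]               # wrap 7 -> 0 flips three bits
gray_ptr = [binary_to_gray(n) for n in binary_ptr]
print(is_gray_coded(binary_ptr))   # False
print(is_gray_coded(gray_ptr))     # True (the wrap also flips only one bit)
```

    A formal app proves the equivalent property over all reachable states rather than checking a recorded sequence.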

    A very nice feature they have added is formal-supported waiver management. CDC analysis can be very noisy, producing many potentially false violations, not because the analysis is inaccurate but because a lot of what determines correct design for CDCs depends on design intent. A good example (and a source of a lot of false violations) comes from quasi-static signals.

    These signals, often used for configuration control, in theory could switch at any time but in practice commonly (though not always) switch only during power-up or reset or other phases where synchronization concerns may be minimal. Since there can be a lot of these, avoiding synchronizers where possible can save useful area – but note the caveats in the previous sentence. Not every such case is a safe candidate to drop a synchronizer – some reconfiguration may be possible during active design use. So how do you figure out which of these violations are potential quasi-statics and which are safe to ignore?

    JasperGold CDC will generate and auto-prove assertions to determine if violations result from quasi-statics. These will drill back to root-causes, often catching a lot more potential quasi-statics in the process. Of course, you’re going to have to make the final decision on whether the root-cause indicates those cases are indeed safe. But with minimal involvement, no formal expertise but still with high confidence you can waive lots of violations and get more quickly to clean CDC signoff. The app also supports dumping assertions for additional checking in Xcelium simulation.
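
    To make the quasi-static idea concrete, here is a generic, simulation-flavored sketch of the classification (my own illustration; JasperGold proves the property formally rather than sampling a trace): a signal is a quasi-static candidate if all of its recorded toggles fall inside the reset/configuration window.

```python
def quasi_static_candidates(toggles, config_window_end):
    """Signals whose toggle times all fall within the configuration/reset window.

    toggles: dict mapping signal name -> list of toggle timestamps
    config_window_end: time up to which configuration activity is expected
    """
    return [sig for sig, times in toggles.items()
            if all(t <= config_window_end for t in times)]

# Hypothetical toggle activity extracted from a simulation trace.
activity = {
    "cfg_mode":   [5, 12],      # toggles only during configuration
    "dma_enable": [8, 950],     # also toggles during normal operation
}
print(quasi_static_candidates(activity, config_window_end=100))   # ['cfg_mode']
```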

    Cadence has endorsements from ARM and ST for these technologies. ARM, being ARM, did a detailed analysis (reported at the last Jasper User Group meeting) of how using Superlint accelerated bug hunting during RTL development and pulled in RTL signoff, reducing the need for late-stage RTL changes by as much as 80%. ST commented on how the CDC app increased quality of design and chopped up to 4 weeks off design and verification time for each IP.

    This is important – as much for how formal is becoming important in IP RTL development as for the apps themselves. The whole point of reuse is to reduce overall design time and increase design quality by sharing proven IP. Improving IP quality through better RTL signoff is an important way to get there. You can learn more HERE.