Three things you should know about designHUB!
by Daniel Nenni on 03-13-2019 at 7:00 am

One of the key growth areas for the semiconductor ecosystem is IP, which of course includes IP-related EDA software. In May of 2017 design management/collaboration expert (one of my personal favorite EDA companies) ClioSoft announced designHUB® for IP management and reuse. Using designHUB, semiconductor companies can easily investigate, evaluate, and integrate internally and externally created IP across geographic and organizational lines:

“There is a paradigm shift occurring within the IP ecosystem regarding IP usage, development and data sharing,” said Srinath Anantharaman, founder and CEO of ClioSoft. “Current products in the market today address IP reuse by web-based cataloging. By using the concept of crowdsourcing, designHUB bridges the gap between the IP developer and the IP user all within a single platform and extends the definition of IP to include SoC sub-systems, documents, ideas, scripts, flows etc. Design related information from various sources is seamlessly integrated into designHUB along with a rich set of reports to provide the relevant information to the designer and management community. Design reuse can now be a reality within a company.” (May 2, 2017)


Approaching its two-year anniversary, designHUB has become a driving force with ClioSoft customers, and the company just announced record new contracts for 2018:

“There is not just one reuse solution that fits all companies,” said Srinath Anantharaman, founder and CEO of ClioSoft. “Every company has unique requirements, and each company looks for enterprise-class solutions that are configurable for their needs, easy to use, have a low maintenance overhead and a rich set of features, can scale to meet their future needs, and can be easily adopted within the company. The designHUB ecosystem addresses these needs and continues to be positively received by design teams year after year.”

designHUB is based on crowdsourcing, leveraging the intelligence of employees within an enterprise. This means you can easily create interest/discussion groups within the enterprise where you can pose a problem and others can provide either a solution or an alternative. All discussions are saved as part of a knowledge base and can be leveraged by others in the future. Here are three other things you should know about designHUB:

  • Traceability: designHUB broadens the notion of IP to include semiconductor IPs, documents, flows, scripts, etc. It tracks all types of IPs along with their variants and provides detailed reports on where each IP has been used and what open issues exist against it. It also sends notifications as needed for any IPs you follow.
  • Project Dashboard: designHUB enables design teams to set up their own dashboards and upload all project collateral to ensure all designers are on the same page. Project leaders can define their schedules and send broadcast messages or message individual team members.
  • Independent of Data Management: designHUB comes equipped with a well-defined API that enables it to be integrated with any data management system; currently it supports SOS, Git, Subversion and Perforce. As a result, it can easily serve as a conduit for IPs, giving design teams the flexibility to download an IP from one data management system and use it in another, along the lines of the sketch below.
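
The sketch below illustrates what such an adapter-style API could look like. To be clear, this is a minimal illustration of the pattern, not ClioSoft’s actual API; the class and method names (DMAdapter, GitAdapter, checkout, publish) are hypothetical.

```python
# Illustrative adapter pattern for bridging IPs across data management
# systems. All names are hypothetical -- this is NOT ClioSoft's API.
from abc import ABC, abstractmethod
import subprocess


class DMAdapter(ABC):
    """Interface a reuse platform could define per DM back end."""

    @abstractmethod
    def checkout(self, ip_name: str, version: str, dest: str) -> None: ...

    @abstractmethod
    def publish(self, ip_name: str, version: str, src: str) -> None: ...


class GitAdapter(DMAdapter):
    def __init__(self, remote_url: str):
        self.remote_url = remote_url

    def checkout(self, ip_name: str, version: str, dest: str) -> None:
        # Clone the tagged release of the IP into a fresh workspace.
        subprocess.run(
            ["git", "clone", "--branch", version, "--depth", "1",
             f"{self.remote_url}/{ip_name}.git", dest],
            check=True)

    def publish(self, ip_name: str, version: str, src: str) -> None:
        # Tag and push the workspace back to the Git remote.
        subprocess.run(["git", "-C", src, "tag", version], check=True)
        subprocess.run(["git", "-C", src, "push", "origin", version],
                       check=True)


def transfer_ip(ip: str, version: str, source: DMAdapter,
                target: DMAdapter, workdir: str) -> None:
    """Download an IP from one DM system and republish it in another."""
    source.checkout(ip, version, workdir)
    target.publish(ip, version, workdir)
```

With one adapter per back end (SOS, Git, Subversion, Perforce), the platform above it never needs to know which system actually stores the IP.
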
About ClioSoft
ClioSoft is the pioneer and leading developer of enterprise system-on-chip (SoC) design configuration and enterprise IP management solutions for the semiconductor industry. The company provides two unique platforms that enable SoC design management and IP reuse. The SOS7 platform is the only design management solution for multi-site design collaboration across all types of designs – analog, digital, RF and mixed-signal – and the designHUB platform provides a collaborative IP reuse ecosystem for enterprises. ClioSoft customers include the top 20 semiconductor companies worldwide. The company is headquartered in Fremont, CA with sales offices and distributors in the United States, United Kingdom, Europe, Israel, India, China, Taiwan, Korea and Japan.

Also Read

Data Management Challenges in Physical Design

Webinar: Tanner and ClioSoft Integration

The Changing Face of IP Management


Synopsys Tackles Debug for Giga-Runs on Giga-Designs
by Bernard Murphy on 03-12-2019 at 12:00 pm

I think Synopsys would agree that they were not an early entrant to the emulation game, but once they really got moving, they’ve been working hard to catch up and even overtake in some areas. A recent webinar highlighted work they have been doing to overcome a common challenge in this area. Being able to boot a billion-gate design, bring up a hypervisor, guest OS, system services and finally applications, running potentially through billions of cycles, is an accomplishment in itself. But then how do you debug such a monster? Synopsys call this exascale debug because running billions of gates through billions of cycles combines to exascale (10¹⁸) complexity. Finding and root-causing bugs at that scale takes a lot more help than you are likely used to in IP-based debug.

The webinar presenter (Ribhu Mittal, Director of Emulation Applications Engineering) listed three main challenges in debug at this scale. First, the sequential distance between a bug and the root cause of that bug may be significant – perhaps billions of cycles. This might seem improbable at first, but consider how a cache error early in bring-up might result in storing an incorrect configuration value, a value not accessed until much later when an app runs. Cache errors notoriously lead to this kind of long-range problem. The exascale issue here is how you are going to trace back potentially billions of cycles to whatever in the design might have caused the problem. How do you intelligently decide what to dump, yet converge quickly through a minimum of iterations?

A second problem is indeterminacy. You want emulation to run as fast as possible with no pauses. At the same time, you have multiple asynchronous testbench drivers interacting with the emulation, say USB and PCI VIPs. When a bug is caught (by an assertion for example), if it was timing-sensitive there is no guarantee that it will appear again when you rerun the test. Changes in system load and environment may make the bug intermittent, which isn’t a great place to start when you want to isolate the root cause.

The third problem Ribhu mentioned is efficiently handling the volume of debug data. Streaming out even a selected set of signals at full speed, then expanding them offline and reading them into a debugger such as Verdi could take hours when you’re working at this scale. That’s before you start to work on where and why you have a bug.

Synopsys have refined a methodology and have put work into optimizing this flow for ZeBu; they cite three customers who have used these flows effectively. They recommend starting with a breadth-first search using the checker and monitor collateral you probably already get from IP and subsystem verification teams. In ZeBu you can call out key signals (such as IP interfaces) from this collateral, before compile, to track system-level behavior; you can monitor these through the complete run at full emulation speed. Use the checkers for coarse-grain isolation of problems and the monitors to further narrow the window.

The customer use-cases are particularly interesting. What I got from these is that when you are debugging at the system level, you maybe shouldn’t expect to get down to a root-cause in one pass. At this stage, you’re not finding elementary bugs; you’re more likely finding problems manifesting deep in the software stack. That means you likely want to progressively refine down to a window for much more detailed debug where you might do a full-signal dump. The customer examples shown got there in 2-3 passes, which seems fair given the complexity of the problems they isolated.
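
As a thought experiment, here is a minimal sketch of that progressive refinement, assuming a rerun helper that reports the first checker or monitor flag inside a window. It is my own illustration of the coarse-to-fine idea, not Synopsys tooling; real flows converge in fewer passes by exploiting the checker/monitor hierarchy rather than blind bisection.

```python
# Sketch: narrow a multi-billion-cycle run down to a window small enough
# for a full-signal dump. `rerun_with_monitors(a, b)` reruns cycles [a, b)
# with checkers/monitors enabled and returns the first flagged cycle, or
# None if nothing fires in that window. (Hypothetical helper.)
def refine_window(start, end, budget_cycles, rerun_with_monitors):
    while end - start > budget_cycles:
        mid = (start + end) // 2
        first_flag = rerun_with_monitors(start, mid)
        if first_flag is not None:
            end = first_flag + 1   # anomaly already visible in first half
        else:
            start = mid            # problem emerges in the second half
    return start, end              # now cheap enough to dump every signal
```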

For handling indeterminacy, ZeBu offers exact signal record and replay, so on a rerun you can disconnect from the testbench and replay exactly what you ran when you saw the bug. Add to this full save-and-restart with saves at periodic checkpoints; this works with signal replay, so you can replay from the closest checkpoint before you saw the bug. Together these provide a deterministic and time-efficient way to isolate and debug those annoying intermittent bugs.
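
A back-of-envelope sketch of the checkpoint arithmetic (my own illustration; the real mechanics live inside ZeBu):

```python
import bisect

def replay_point(checkpoints, bug_cycle):
    """Latest saved checkpoint at or before the failing cycle.

    `checkpoints` is a sorted sequence of cycle numbers where full state
    was saved; replaying the recorded testbench traffic from that point
    reproduces the bug deterministically without re-running from reset.
    """
    i = bisect.bisect_right(checkpoints, bug_cycle) - 1
    return checkpoints[i] if i >= 0 else 0

# Checkpoints every 100M cycles, bug observed at cycle 1,234,567,890:
print(replay_point(range(0, 2_000_000_001, 100_000_000), 1_234_567_890))
# -> 1200000000: only ~35M cycles to replay instead of ~1.2B
```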

Finally, they’ve sped up digesting those vast quantities of streamed data for offline debug. They have fast streaming (2TB/s) from ZeBu; you can take this into parallelized expansion, or you can use high-performance interactive/selective expansion to pick out just the signals you want to check. Synopsys have also added a native ZeBu database to Verdi to speed up load times. Together, they say, this decreases waveform expansion and load times by 10X.

This is a quick overview of how to drive efficient debug for emulation on big designs with big use-cases. You can watch the webinar HERE.


Webinar: Addressing Multiphysics Challenges in 7nm FinFET Designs
by Daniel Nenni on 03-12-2019 at 7:00 am

EDA is big on growth through acquisition; having been acquired many times throughout my career, I know this from experience. In fact, we have a wiki that tracks EDA Mergers and Acquisitions, and it is the most viewed wiki on SemiWiki.com with 101,918 views thus far.

In March of 2017 ANSYS acquired CLK Design Automation, which developed timing variation analysis and the FX technology for transistor-level modeling and simulation. At the time I worked for Solido Design, which had some overlap with CLK, and we actually looked at acquiring them before ANSYS did. The jewels in CLK’s crown were its technologists, and one of those jewels is Dr. Joao Geada, absolutely.

Bio: Dr. Joao Geada is a chief technologist at ANSYS with over 20 years of EDA experience. He leads the development of the semiconductor business unit’s FX timing and timing variation products. He is the author of numerous papers and patents on static timing analysis and statistical timing. Before ANSYS, Dr. Geada was CTO and co-founder of CLK Design Automation and one of the lead architects in the verification and simulation group at Synopsys. Before Synopsys, he was a senior researcher at Cadence Design Systems, having started his career at the IBM TJ Watson Research Center. Dr. Geada holds a Ph.D. and a bachelor’s degree in engineering from the University of Newcastle upon Tyne (UK).

Dr. Geada is the speaker for the upcoming webinar, Addressing Multiphysics Challenges in 7nm FinFET Designs. Even if you can’t make the live webinar, sign up and you will be notified when the replay is up:

Date: March 28, 2019
Time: 9 a.m. PST
Presenter: Dr. Joao Geada, Chief Technologist, ANSYS

Webinar Link: http://bit.ly/2C8pF3B

Abstract:
Variability has become the new enemy in 7nm FinFET designs. You can’t fix what you can’t find, and variability takes many forms. For instance, there is variability in process due to smaller geometries, variability in voltage drop due to varying workloads, and variability in temperature across the chip due to increased self-heating and Joule heating effects. All of these directly impact silicon performance. Increased cross-coupling of various multiphysics effects such as timing, power and thermal in 7nm designs poses significant challenges for design closure. Power grid design and the profound impact of grid weakness issues on timing-critical paths have become limiting factors for achieving the desired performance and area targets. Power grids consume a significant amount of metallization resources, and with routability becoming a big constraint at advanced nodes, power and timing closure have become a designer’s nightmare.

Traditional margin-based methodologies that have served well in the past are becoming ineffective. These methodologies helped in confining the problem space by decoupling design methodologies to manage complexity and limitations in electronic design automation (EDA) tools that are not architected to solve multiphysics challenges. At 7nm process nodes, however, these siloed methodologies are increasingly failing to achieve the highest performance in silicon. Margins work well only as long as the results are predictable. With the margin-based approach, increased variability makes it hard to predict true silicon behavior and impacts both time-to-result (TTR) and time-to-market (TTM) goals in complex design projects.
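
To make the margin problem concrete, here is a small illustration (my own numbers, not from the webinar) of why stacked worst-case margins become increasingly pessimistic as variability grows:

```python
import math

# Hypothetical delay derates for three independent sources of variability.
derates = {"process": 0.10, "voltage": 0.08, "temperature": 0.05}

# Siloed, margin-based flow: assume every effect hits its worst case at once.
stacked = 1.0
for d in derates.values():
    stacked *= (1.0 + d)

# Multiphysics/statistical view: independent effects rarely align, so a
# root-sum-square combination is a common first approximation.
rss = 1.0 + math.sqrt(sum(d * d for d in derates.values()))

print(f"stacked worst case: {stacked:.3f}x delay")  # ~1.247x
print(f"RSS combination:    {rss:.3f}x delay")      # ~1.137x
# The gap between the two is performance left on the table by margins --
# and once variability stops being predictable, even the stacked number
# may no longer bound true silicon behavior.
```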

Attend this webinar to learn how ANSYS multiphysics simulations can be leveraged for better understanding the true limits of built-in margins and accurately predicting post-silicon behavior. Multiphysics simulations will enable you to achieve the target maximum frequency on silicon, while drastically improving the functional yield of your chips.

About ANSYS, Inc.
If you’ve ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge, or put on wearable technology, chances are you’ve used a product where ANSYS software played a critical role in its creation. ANSYS is the global leader in Pervasive Engineering Simulation. We help the world’s most innovative companies deliver radically better products to their customers. By offering the best and broadest portfolio of engineering simulation software, we help them solve the most complex design challenges and create products limited only by imagination. Founded in 1970, ANSYS employs thousands of professionals, many of whom are expert M.S. and Ph.D.-level engineers in finite element analysis, computational fluid dynamics, electronics, semiconductors, embedded software and design optimization. Headquartered south of Pittsburgh, Pennsylvania, U.S.A., ANSYS has more than 75 strategic sales locations throughout the world with a network of channel partners in 40+ countries. Visit www.ansys.com for more information.


Accelerating SOC Development for Automobile Applications
by Tom Simon on 03-11-2019 at 12:00 pm

No area of electronics is moving faster than automotive semiconductors. Everyone has been talking about the increasing electronics content of automobiles for decades. With Advanced Driver Assistance Systems (ADAS) and autonomous driving becoming a reality, the pace has picked up even more. These new designs combine just about every advanced subsystem used in SoC designs. Prior to the giant leap in mobile device technology, people talked about a coming ‘convergence’ that would integrate communications, networking, graphics, processing, etc. That did indeed happen, with the result being the current generation of cell phones. However, a new convergence is coming.

With ADAS and autonomous driving, we are essentially talking about putting advanced supercomputing along with state of the art sensor fusion, multiple modes of wired and wireless networking, high-speed memory, and advanced algorithms into each car. The rigid power, security, and reliability constraints on these new systems make this a more daunting task. Exciting new companies like FABU Technology have come along to address the growing market for AI SoCs with innovative designs targeted at ADAS and autonomous driving.

FABU has set out to rapidly build SoCs for ADAS and autonomous driving that can collect sensor data from gyro, accelerometer, compass, vision, lidar and radar systems, then combine it with maps, real-time traffic and road-condition data to create an accurate view of the vehicle’s environment. Surrounding vehicles, traffic signs, and pedestrians must be identified. Additionally, the ADAS and autonomous driving systems need to monitor the driver to detect attentiveness, distraction or drowsiness.

The market is moving too fast for a company like FABU to build the required automotive IP from scratch. At the same time, much of the IP needed is specialized and has to be built to automotive standards, including ISO 26262 and AEC-Q100. In order to focus on its core competency, FABU chose to license a broad swath of the necessary IP from Synopsys, whose automotive-grade DesignWare IP offerings are ideally suited to FABU’s needs. Going this route allows FABU to focus on where it adds value – implementing highly optimized algorithms – while leveraging IP that meets its functional needs as well as all automotive reliability and security requirements.

I had a chance to speak recently with Ron DiGiuseppe, senior marketing manager for automotive IP products at Synopsys, about FABU’s choice for IP. They worked closely with FABU to help them select the optimal interface, security, processors, and foundation IP solutions. Interface IP is being used to collect data from sensors, such as MIPI for image and video.

FABU will be using the Synopsys Safety Island for its ARC processors with dual lockstep cores, which have an independent safety monitor to check for faults and failures. In the event of a safety exception, there is an escalation to the host processor, where it can be independently processed.
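
Conceptually, lockstep checking with escalation looks something like the sketch below. This illustrates the general safety pattern only; it is not Synopsys’s Safety Island implementation, and the object names are hypothetical.

```python
# Generic dual-core lockstep pattern: two cores execute the same work and
# an independent monitor compares results, escalating any divergence.
def lockstep_step(core_a, core_b, safety_monitor, host):
    out_a = core_a.step()
    out_b = core_b.step()
    if out_a != out_b:
        # The monitor sits outside the (possibly faulty) compute domain;
        # diagnosis is escalated to the host processor, as described above.
        fault = safety_monitor.record_mismatch(out_a, out_b)
        host.escalate(fault)
        return None
    return out_a
```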

Ron also talked about the need for security. The last thing you want is security intrusions. Synopsys IP offers hardware Root of Trust with encryption and a trusted execution environment (TEE) to prevent tampering and other malicious activities.

By licensing the broad portfolio of IP from Synopsys, FABU will benefit from consistency in the deliverables, especially with respect to documentation for ISO 26262 and IP integration. Ron pointed out that Synopsys will work with FABU to ensure selection of the ideal process node to meet their PPA and safety requirements. Synopsys has built relationships with foundries for automotive processes as part of their commitment to this market. FABU can use the licensed IP as a foundation for creating SoCs that offer breakthrough functionality and performance. The announcement contains detailed information on each of the categories of IP that they licensed and information about how each of them meets the requirements for automotive applications.

Synopsys automotive-grade DesignWare IP comes with documentation for ISO 26262 compliance, and their automotive IP is ASIL B or ASIL D Ready. For reliability, Synopsys works with foundries to ensure AEC-Q100 compliance. This involves producing GDS layout that meets the more stringent automotive reliability requirements, including design rules for Grade 1 and Grade 2 temperatures. Another area where Synopsys adds value for the automotive market is their test and repair tools, which further improve quality and reliability.


eSilicon Bucking the Trend at OFC with 7nm SerDes
by Daniel Nenni on 03-11-2019 at 8:00 am

A recent press release from eSilicon caught my eye. The company has been touting their 7nm SerDes quite a bit lately – reach, power, flexibility, things like that. While those capabilities are important, any high-performance chip needs to work in the context of the system, which usually contains technology from multiple sources. So interoperability does matter, and eSilicon’s press release announcing the addition of an interoperability demo with a mainstream FPGA at a major show is relevant. The release also talked about working with another ecosystem partner, Precise-ITC, to validate that their forward error correction (FEC) IP works with the eSilicon SerDes as well.

Interoperability demo at OFC: eSilicon 56G SerDes and Precise-ITC 400G FEC

“Our current SerDes demonstration showcases the robustness, low power and flexibility of our 7nm device,” said Hugh Durdan, vice president of strategy and products at eSilicon. “It is also important to demonstrate interoperability with other popular hardware. I am delighted we can showcase this additional aspect of our SerDes capabilities at OFC.”

“Precise-ITC is a leading provider of Ethernet and optical transport (OTN) intellectual property products for ASIC and FPGA,” said Silas Li, Director of Engineering at Precise-ITC. “OFC2019 is a showcase event for the partnerships we have with FPGA vendors, ASIC developers, like eSilicon, and test equipment developers. Together, we’re enabling rapid deployment of 400GbE.”

Digging a bit more, the release announced additions to the demo complement eSilicon will showcase at OFC. The Optical Fiber Communication Conference and Exposition (OFC) is a huge technical conference and trade show that is over 40 years old. According to their website: “OFC is the largest global conference and exhibition for optical communications and networking professionals.” There are over 700 exhibitors in 350,000 square feet of exhibit space. The show takes up the entire San Diego Convention Center, which is also where Comic-Con is held. This is a huge show, absolutely.

OFC show floor

Digging further, you can find more interesting news in the press release. In addition to the interoperability demo, eSilicon is demonstrating a complete HBM2 memory subsystem using eSilicon’s latest 7nm HBM2 PHY, Northwest Logic’s memory controller and an HBM DRAM stack from a leading memory supplier. And they’re demonstrating the performance, flexibility and extremely low power consumption of their 7nm SerDes using a five-meter ExaMAX backplane cable assembly from Samtec.

eSilicon booth
Five-meter cable demo

eSilicon is demonstrating high-speed communications over a five-meter copper cable at the biggest optical networking show in the world. I would say that takes a lot of confidence. I had some spies at the show, and they reported quite a bit of interest in eSilicon’s copper cable demo. They appear to be driving the longest electrical cable at the show. Getting high speed and low power with a proven, simpler technology such as copper is certainly appealing. I’ll be watching to see what eSilicon announces next.


Ultra low-power Analog Design using a Multi-Project Wafer approach
by Daniel Payne on 03-10-2019 at 1:00 pm

On SemiWiki we often talk about bleeding-edge technology like 7nm, 5nm or even 3nm, but for analog IC designs there’s a low-cost alternative for getting your ideas validated and prototyped without taking out a multi-million-dollar loan: Multi-Project Wafers (MPW). Starting with a mature process node like 180nm still produces adequate silicon for low-power applications like IoT, where analog sensors and converters are the main part of the chip functionality along with some digital control logic – think big-A, little-D applications.

My industry contact Wladek Grabinski shared information with me this week about a company in France called CMP (Circuits Multi-Projets, or Multi-Project Circuits in English) that has been offering MPW foundry services since 1981, keeping costs down for IC designers at universities, research laboratories and industrial companies that want to prototype their analog ideas economically.

For an MPW project you likely want dozens to thousands of pieces manufactured, either packaged or as bare die, ready for testing. In total, some 7,900 projects have been prototyped through 1,043 MPW runs at CMP over the years, helping 614 customers turn their analog ideas into silicon. CMP certainly has its act together and provides a much-needed service for companies needing quick prototypes of big-A, little-D designs.

The Swiss company EM Microelectronic (EM) has an ultra-low-power IP library and foundry process ready to use with the MPW services provided by CMP. Here’s what EM has to offer you:

  • Mature 180nm node for ultra-low-power analog design (APL018)
  • NVM (EE or Flash)
  • EKV models accurate for near- and sub-Vth operation
  • Analog and digital IP libraries characterized for low voltage (down to 0.4V) and low current (nA bias)
  • I/O pads, low-leakage ESD protection
  • Design kit for Cadence
  • Digital flow for Synopsys

EM really knows IC design, as they’ve been in business since 1975, and their ultra-low-power silicon is used in six major application areas:

  • Energy – harvesting, power management, storage
  • Interfaces – displays, tactile surfaces, computer peripherals, motion sensing, sound production
  • Sensing – interfaces, sensors
  • Communications – RF technologies, RF long range communication, RFID, beacons
  • Smart Processing – wearables, cryptography and security
  • Time – watches, fobs

Even though the EM headquarters are in Marin, Switzerland, you can also find their facilities around the globe in:

  • Colorado Springs, USA
  • Prague, Czech Republic
  • Bangkok, Thailand

If you’ve ever shopped for a watch you have likely seen the iconic Swatch brand in retail stores and online; EM is the semiconductor company of the Swatch Group.

Looking at the most recent press releases from EM, I conclude that this company is well suited for IC designs that require Bluetooth, IoT, RF, or anything that is battery-powered and requires ultra-low power consumption.

CMP invited EM to present at a seminar last month, so check out the slides here.


Lyft IPO Paints Perilous Profitless Picture
by Roger C. Lanctot on 03-10-2019 at 8:00 am

Lyft’s S1 filing for its IPO is a sobering read, as such documents often are, requiring, as they do, the full disclosure of current financial circumstances and, everyone’s favorite: risk factors. Lyft identifies 18 risk factors (below) which could interfere with the long-term success of the operation. I think there are more. Continue reading “Lyft IPO Paints Perilous Profitless Picture”


Data Centers and AI Chips Benefit from Embedded In-Chip Monitoring
by Daniel Payne on 03-08-2019 at 12:00 pm

Webinars are a quick way to come up to speed with emerging trends in our semiconductor world, so I just finished watching an interesting one from Moortec about the benefits of embedded in-chip monitoring for data center and AI chip design. My first exposure to a data center was back in the 1960s, during an elementary school class where they wheeled in a Teletype machine connected to a telephone line. At the other end was a centralized computer system in some air-conditioned room running a Civil War game that had us students choosing how to run a campaign with our resources and then predicting the outcome of the battle. In the 1970s at the University of Minnesota our data center was powered by machines from Control Data Corporation, and then at my first job with Intel in 1978 the data center was powered by IBM mainframes in a remote location that we accessed from Oregon.

Living in Oregon, we know something about data centers because of the low cost of electricity from our plentiful hydro power generators, moderate climate, and generous tax breaks for companies like Google to locate here. In 2018 the data centers in the US consumed some 90 billion kilowatt-hours of electricity, while globally that consumption was 416 terawatt-hours, about 3% of total electrical output. This growing power consumption causes heat-induced reliability issues for each of the semiconductor components mounted on boards stuffed into racks of equipment.

Source: Google Data Center

A lot of new VC money poured into AI chip startups in 2018, so let’s summarize both the data center and the AI chip design challenges:

Data Center
· Reliability and long MTBF (Mean Time Between Failures)
· Low service interruption
· Big die sizes at advanced nodes
· High volume with high manufacturing yield required
· Fine-grained DVFS (Dynamic Voltage and Frequency Scaling) control
· Chip supply voltage noise

AI
· High data throughput
· Intense and bursty computations
· Constrained power
· Variable CPU core usage, or utilisation
· Continual optimisation of algorithms for data analysis and manipulation

One method to deal with all of these chip design challenges is to place PVT (Process, Voltage, Temperature) monitors in your AI or data center chips, allowing you to measure in real time what’s happening deep within each chip, then use that information to make decisions about changing Vdd values or local clock speeds to ensure chip reliability and meet MTBF goals. Take the example of a typical AI chip, which may have CPU clusters with thousands of cores, as shown below where 16 cores form each cluster and PVT sensor blocks (colored blocks) are placed around each cluster:

CPU Clusters with PVT monitors

The temperature monitors will let you know if junction temperatures are within specification, for example 110C (a simple throttling loop using such a monitor is sketched after this list). Thermal monitors can be used to:

· Avoid Electrical Over Stress (EOS)
· Mitigate Electromigration effects
· Limit hot carrier aging
· Prevent thermal runaway
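
Here is the promised sketch of a thermal throttling loop. It is a minimal illustration of how monitor accuracy translates into guard band; the names and thresholds are mine, not Moortec’s interface.

```python
# Minimal thermal-throttling sketch: the sensor's accuracy sets how close
# you dare run to the junction limit. Illustrative names/values only.
TJ_MAX_C = 110.0                  # junction temperature spec

def throttle_threshold(sensor_accuracy_c, design_margin_c=5.0):
    # A +/-2C sensor can sit ~3C closer to Tj(max) than a +/-5C sensor;
    # that difference is where the throughput and power savings come from.
    return TJ_MAX_C - sensor_accuracy_c - design_margin_c

def control_step(read_tj_c, set_clock_mhz, f_nominal, f_throttled, accuracy_c):
    # Back off the clock before EOS/electromigration limits are reached.
    if read_tj_c() >= throttle_threshold(accuracy_c):
        set_clock_mhz(f_throttled)
    else:
        set_clock_mhz(f_nominal)
```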

Semiconductor processes are not uniform, so you cannot expect silicon to be centered on the TT corner; instead you can expect:
· Process variability across each die
· Variation caused by lithography
· Reliability effects like aging
· FinFET variations

IC designers start out with an ideal power supply concept, like a Vdd value of 1.1V, but then you have to deal with non-ideal physical realities of on-chip voltages, like:
· Interconnect resistance causing dynamic IR drops along Vdd paths
· Dynamic versus static power
· Electromigration effects on Power, clock and interconnect

Static Timing Analysis (STA) tools are run on chips before tapeout to ensure that your design meets speed criteria across all PVT corners, but with actual physical local variations on advanced nodes it’s conceivable that one die region has a temperature of 50C, Vdd of 0.8V and SS corner, while another region has a slightly different temperature of 65C, Vdd of 0.9V and TT corner. Your STA tool needs to handle these on-chip variations (OCV) while calculating path delays.

Not all thermal monitors are created equal, so if Moortec provides a thermal monitor with +/- 2C accuracy, and another vendor has a +/- 5C accuracy thermal monitor, go with the 2C monitor in order to provide tighter control to your thermal throttling system, which in turn provides greater power savings and allows for the highest data throughput.

Consider the power savings for a data center with 100,000 servers (Facebook has ~400,000, for example), where you could save 2W per chip by using a Moortec PVT approach versus a less accurate monitor that requires 6C more thermal guard-banding. The webinar provided a case study with calculations showing that if this saving per chip were scaled upward, a data center could save around $2M per year in electricity costs.
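
The webinar’s full calculation isn’t reproduced here, but a back-of-envelope version shows the claim is the right order of magnitude. The 2W-per-chip saving is from the webinar; the chips-per-server count, PUE and electricity price below are my own illustrative assumptions:

```python
# Back-of-envelope version of the savings claim. Per-chip 2W is from the
# webinar; chips_per_server, pue and usd_per_kwh are my assumptions.
servers          = 100_000
chips_per_server = 8        # assumption: several monitored SoCs per server
watts_saved      = 2.0      # per chip, from tighter thermal guard-banding
pue              = 1.5      # assumption: cooling-overhead multiplier
usd_per_kwh      = 0.08     # assumption

kw_saved = servers * chips_per_server * watts_saved / 1000 * pue
usd_year = kw_saved * 24 * 365 * usd_per_kwh
print(f"{usd_year:,.0f} USD/year")  # ~1.7M: same order as the ~$2M cited
```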

Just as tighter thermal guard-banding benefits data center chips and systems, the same can be said for voltage guard-banding: highly accurate 1% voltage values from Moortec mean fewer watts wasted compared with less accurate voltage monitors. An example system using 0.8V for Vdd and a 20W target shows a worst-case value of 20.4W with Moortec voltage monitors, while a less accurate voltage monitor has a worst-case value of 22.1W, more than 10% above the target. Again, Moortec outlined material cost savings for data center operators.
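
Those two numbers fall out of dynamic power’s roughly quadratic dependence on supply voltage (P ≈ C·V²·f): guard-banding Vdd up by the monitor’s accuracy grows power by the square of that bump. The 1% figure is Moortec’s; assuming the less accurate monitor is around ±5% reproduces the article’s numbers:

```python
P_TARGET = 20.0   # watts at the nominal Vdd of 0.8V

def worst_case_power(voltage_accuracy):
    # Guard-band Vdd up by the monitor's accuracy so real silicon never
    # dips below nominal; dynamic power grows with the square of Vdd.
    return P_TARGET * (1.0 + voltage_accuracy) ** 2

print(worst_case_power(0.01))  # 1% monitor -> ~20.4 W
print(worst_case_power(0.05))  # assumed 5% monitor -> 22.05 W, close to
                               # the article's 22.1W figure
```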

SoCs that use Adaptive Voltage Scaling (AVS) in closed loop benefit from using embedded Process or Voltage Monitors that tell the PMIC (Power Management IC) what the actual silicon values are.


Voltage scaling optimization

Summary
There’s only one IP vendor dedicated 100% to PVT monitoring for ICs, and that’s Moortec. They started in the UK back in 2005 and now have customers around the globe using the most popular nodes from the major foundries. You can take the next step and contact the office nearest your timezone: UK, USA, China, Taiwan, Israel, Europe, South Korea, Russia, Japan.

Watch the entire 35-minute webinar recording online after a brief registration process.


Arm Deliver Their Next Step in Infrastructure
by Bernard Murphy on 03-08-2019 at 7:00 am

Arm announced their Neoverse plans not long ago at TechCon 2018. Neoverse is a brand, launched by Arm, to provide the foundations for cloud-to-edge infrastructure in support of their vision of a trillion edge devices. To a cynic this might sound like marketing hype. Sure, Arm cores are widely used in communications infrastructure and certainly in edge devices, but they never really cracked the datacenter, or so conventional wisdom held. Arm put that concern to rest not long after TechCon, when AWS announced immediate availability of EC2 A1 instances in their services. These are built on Arm-based Graviton processors, developed by AWS Annapurna Labs.

Continue reading “Arm Deliver Their Next Step in Infrastructure”


Newer cryptocurrencies highlight need for agile mining strategies
by Tom Simon on 03-07-2019 at 12:00 pm

Cryptocurrencies represent a radical departure from traditional forms of money. Currencies like Bitcoin, Ethereum and Monero offer many unique advantages over traditional currencies and are changing how money is created and used. Bitcoin, the pioneer of cryptocurrencies, relies on pure computational power for so-called mining, the process by which transactions are verified and providers of this service are rewarded with newly minted bitcoins. Starting with CPUs, then GPUs, this led to an inexorable spiral towards more powerful and dedicated mining hardware. The mining activity moved to FPGAs and then to dedicated ASICs; at the same time, it moved to very specific geographies with low electricity costs. And the democratization of cryptocurrency gave way to a smaller group of niche players.

Fortunately, this trend has been challenged by newer cryptocurrencies that have imposed new requirements on mining to make it more democratic. For instance, newer currencies such as Monero regularly perform forks, which change the mining algorithm and render dedicated ASICs obsolete. Another strategy is requiring random memory access across a large address space. Both of these features make it more challenging to develop silicon specifically targeted at gaining an advantage in mining.
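
To see why random access over a large address space frustrates dedicated silicon, consider this toy memory-hard digest, written in the spirit of scrypt- or RandomX-style algorithms. It is purely illustrative and is not any real coin’s mining function:

```python
# Toy memory-hard workload: illustrative only, NOT a real mining function.
import hashlib

def memory_hard_digest(seed: bytes, mem_mb: int = 16, passes: int = 2) -> bytes:
    words = mem_mb * 1024 * 1024 // 32
    # 1. Fill a large buffer sequentially from the seed...
    h = hashlib.sha256(seed).digest()
    buf = []
    for _ in range(words):
        h = hashlib.sha256(h).digest()
        buf.append(h)
    # 2. ...then mix with data-dependent random accesses, so a competitive
    # implementation must keep the whole buffer in fast memory.
    for _ in range(passes * words):
        idx = int.from_bytes(h[:8], "little") % words
        h = hashlib.sha256(h + buf[idx]).digest()
        buf[idx] = h
    return h
```

Because each access depends on the previous hash, the access pattern cannot be predicted or pipelined away, which erases most of the economics of a small fixed-function ASIC.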

Interestingly, Achronix has developed a radical departure from traditional FPGAs in the form of an embedded FPGA (eFPGA) fabric that, coincidentally, offers some compelling advantages for mining these newer cryptocurrencies. Achronix has written a white paper outlining how their Speedcore eFPGA is well suited to the task of mining. Their treatise on how well suited the eFPGA is to mining also speaks indirectly to how eFPGA can be used to solve a wide variety of challenges that either traditional ASICs or FPGAs may struggle with.

Achronix’s Speedcore eFPGA is highly configurable, and at the same time it does not drag a lot of unnecessary blocks into the finished design. In an amusing section of the white paper, Achronix notes how some writers refer to standard FPGAs as programmable piles of parts. In all seriousness, standard FPGA parts are often mismatched to the task at hand. Nowhere is this truer than in cryptocurrency mining. Blocks like Ethernet, PCIe, MACs and SerDes are not needed and just take up valuable real estate for no actual benefit. Also, a multitude of small memories does not suffice for the memory needs associated with mining.

When a precisely configured eFPGA core is married to custom memory instances, it leads to big performance, power and area advantages. The white paper includes a case study comparing eFPGA in an ASIC to GPU- and standard-FPGA-based alternatives. A traditional ASIC-based alternative was ruled out because it lacks the reprogrammability to deal with forks that require new mining algorithms.

While some readers of the white paper may be compelled to embark on designing a new mining chip – the white paper certainly makes the case that it would be a wise choice – the bigger takeaway is that Speedcore eFPGA offers numerous advantages for a wide range of problems currently being addressed with CPUs, GPUs, ASICs or standard FPGAs. It is also an interesting read on where cryptocurrencies are headed. If you want to learn more, the white paper is available on the Achronix website and makes for good reading.