
Lam beats reduced guide as 2019 is done and 2020 is just a hope!

by Robert Maire on 08-02-2019 at 4:00 am

Nice house in a neighborhood in decline

Lam posted EPS and revenues ahead of reduced expectations, but guided the current quarter below current street estimates.

Is a “beat” really a “beat” if it’s against greatly reduced numbers? We would remind investors that we are looking at EPS cut more or less in half from a year ago.

Execution has been great: Lam management has done an excellent job of cutting opex and doing everything within its control, but that doesn’t make up for a market continuing to decline quarter after quarter. $1.1B in stock buybacks this quarter, added to previous buybacks, has taken 15% of Lam’s shares off the street, also propping up EPS.

Management did not “call a bottom,” as there is no bottom in sight, but was hopeful for a better 2020. Right now 2020 is nothing more than a hope that the down cycle will be over and memory will recover, but it’s a “hope” with no hard evidence.

Memory is still two thirds of Lam’s business – and in decline

While memory is no longer 85% of business, it is still the vast majority, and Lam remains the most exposed to memory. As we pointed out in our Semicon West report, and Lam echoed, foundry is recovering based on early 5G production. While this clearly helped ASML and will also help KLAC, Lam has less exposure than average to the foundry segment; even Applied has more foundry exposure. While Lam has some unique applications that are not memory specific, it’s not enough to offset the weak memory market. At the end of the day Lam is still a memory-driven company.

Memory buys are technology not capacity driven

We have said for many years that there are two cycles that underlie buying: technology buying cycles and capacity buying cycles. Technology buys obviously follow Moore’s law and the progress in memory/logic technology. Capacity buys, which are higher in overall volume, are based on market demand, such as the switch to SSDs. Right now capacity buys are zero, as equipment is still being idled to artificially reduce supply, while technology buys to increase NAND layer count or migrate from 1Z to 1A continue.

Lam needs capacity-related buying to come back before it can recover, but that will take longer, as the idled capacity will come back on line well before memory makers need to buy new equipment for additional capacity.

Still searching for a bottom….

Setting the “Limbo Bar” lower….

As we continue in a slow downward spiral, estimates keep being lowered, resetting the bar ever lower so that the company can “beat” the new, lower number and say it beat expectations. Meanwhile, the stock is up 50% on the year in a declining memory sector with EPS cut in half; go figure. It might be reasonable if there were a clear, or even murky, recovery coming together, but right now most analysts continue to kick the can of recovery down the road another quarter or six months, or talk about an “ethereal” recovery in 2020. We would remind investors that the vast majority of so-called analysts also thought the industry was no longer cyclical, or that the downturn was a one-quarter “air pocket,” and are now calling for a 2020 recovery after previously calling for a 2019 recovery.

The problem is that no one really knows…..

Niche technology is nice but not impactful

Lam talked about some new areas of business outside of the core wheelhouse of advanced etch and dep.  Given the huge number of steps in chip manufacturing and many types of process there is a lot of fertile ground for new business which Lam is doing a good job of rounding up.  This too helps cushion but not offset the downturn in mainstream memory tools.  We think some of these applications could potentially be larger in a recovery scenario.

One that we find interesting, though not publicly mentioned, is “cryo” etch (or a “cold” etch) for buttery soft materials used in new memory types such as Intel’s Optane. We remain a fan of the upside of these type of memory devices.

The stocks

If we put a 15X market multiple on Lam’s current outlook for the year, we get a $210 price target, which is where we are today. The problem is that the EPS outlook continues to come down, and we think Lam should trade at a discount to the market, as its business is in contraction mode, not expansion mode. If we take a haircut to the EPS outlook and discount the multiple, we get a target well below the current stock price.
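The arithmetic can be sketched in a few lines (a toy model: the roughly $14 full-year EPS is simply implied by $210 / 15X, and the 20% EPS haircut and 12X discounted multiple are hypothetical illustration values, not formal estimates):

```python
# Toy price-target arithmetic. The ~$14 EPS is implied by $210 / 15X;
# the 20% haircut and 12X discounted multiple are hypothetical values
# chosen only to illustrate the downside math.
def price_target(eps, multiple):
    return eps * multiple

eps = 210 / 15                        # implied full-year EPS (~$14)
base = price_target(eps, 15)          # market multiple: $210 target
bear = price_target(eps * 0.8, 12)    # haircut EPS, discounted multiple
print(base, round(bear, 2))           # 210.0 134.4
```

Even modest haircuts compound: trimming EPS by 20% and the multiple to 12X lands the target roughly a third below $210.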

All this is beside the point that the stock is up 50% on the year in a declining business. We still have a lot of time to buy the stock prior to an upcycle, because that upcycle is still far off in the future. Add to all this the uncertainty of China still out there.

We would consider taking money off the table as the downside beta is clearly higher than the upside at this point.

Other Stocks

ASML was driven by logic/foundry and an earlier recovery of litho tools.  KLAC has always been a logic/foundry driven company and not a memory company like Lam.  Applied has strong ties to the foundry market.

However, we need to be clear that the memory industry has grown so large, so fast, that no tool company can remain immune to its weakness or escape the gravitational pull of the weak memory sector. We would also caution investors that a “stabilizing” memory market does not mean that memory companies will rush out and start ordering tools in bulk again. It could be a long, slow climb of using up the idled capacity before new buying starts again. Even if memory stabilizes in H2 2019, there is no guarantee of increased equipment purchases in 2020.


GPU-Powered SPICE – Understanding the Cost

by Daniel Nenni on 08-01-2019 at 10:00 am

To deploy a GPU-based SPICE solution, you need to understand the costs involved. To get your hands on a new report analyzing this specific issue, all you need to do is attend Empyrean’s upcoming webinar, “GPU-Powered SPICE: The Way Forward for Analog Simulation,” which will be held on Thursday, August 8, 2019, at 10:00 am (PDT). This is the first webinar in the SemiWiki Webinar series. Click here to sign up using your work email information.

SPICE (Simulation Program with Integrated Circuit Emphasis) was initially developed at the Electronics Research Laboratory of the University of California, Berkeley by Laurence Nagel in the early 1970s. SPICE1, the version of SPICE first presented at a conference in 1973, was largely an effort to have a circuit simulator without ties to the Department of Defense, essentially keeping the electrical engineering department in step with the rest of the anti-war movement at UC Berkeley. SPICE1 was coded in FORTRAN. I believe it ran on the IBM mainframe computer available to the department.

In 1989, SPICE3 was released, the first version of SPICE written in C. As the code has long been available under the standard BSD license, SPICE has been available via open-source, and it was ported to many CPU-based systems as commercial versions started popping up from the emerging EDA industry during the 1990s. There are still many versions of SPICE out there, commercial, academic, and proprietary.

Unfortunately, SPICE being ported to newer computers was still insufficient to keep up with the increasingly large circuits engineers wanted to simulate. Simulation times kept getting longer. Less accurate versions of SPICE were then invented, generally referred to as fast-SPICE. They traded off some accuracy for improved speed of analysis. This has continued to be the state of the market until just recently.

For the last 15 years or so, companies have been trying to find ways of harnessing the incredible computational power of GPUs (Graphics Processing Units) as an alternative to CPUs. CPU-based computers typically have one to eight processors at their disposal. GPUs have hundreds of processors, though they cannot do general-purpose computing well. The idea when programming a GPU is to give it a long sequence of calculations with little branching (e.g., avoid IF statements). GPUs are data-throughput engines. Think of them as the dragsters of processing units: they go extremely fast but do not turn very well. The longer you can simply feed the GPU data and let it calculate, the faster it runs; when you ask a GPU to branch, it slows down. It takes experienced GPU programming skills to rewrite code written for a general-purpose processor (CPU) and make it perform well on the GPU. Not all algorithms can be ported to a GPU and run faster, but matrix solving, which is critical in SPICE, is one of them. I have seen this before in photolithography simulations, and it makes sense that it can be done with SPICE as well.
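The dragster point can be made concrete with a small sketch; NumPy’s whole-array operations stand in here for GPU-style data parallelism (an analogy only, not actual GPU code):

```python
import numpy as np

# Branchy, scalar style: one IF per element -- the "turning" a GPU hates.
def relu_branchy(xs):
    out = []
    for x in xs:
        if x > 0:
            out.append(x)
        else:
            out.append(0.0)
    return out

# Branch-free, data-parallel style: the IF becomes a single whole-array
# operation -- the pattern that keeps all of a GPU's lanes busy.
def relu_vectorized(xs):
    return np.maximum(xs, 0.0)

data = [-2.0, -0.5, 1.0, 3.0]
print(relu_branchy(data))               # [0.0, 0.0, 1.0, 3.0]
print(relu_vectorized(np.array(data)))  # [0. 0. 1. 3.]
```

Both compute the same result, but the second expresses the IF as one masked arithmetic operation over the entire array, which is the shape of computation that maps well onto hundreds of GPU lanes.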

Empyrean is working on a paper comparing the cost differences between its GPU-accelerated ALPS-GT™ SPICE simulator and CPU-based simulators. Keep in mind that Empyrean already boasts the fastest CPU-based SPICE simulator, Empyrean ALPS™, which was voted “Best of DAC 2018” for being the fastest SPICE simulator with True SPICE accuracy. Empyrean ALPS™ has displayed 3X to 8X faster performance than the next fastest SPICE simulator on the market. Empyrean claims ALPS-GT is an order of magnitude faster still, with the same True SPICE accuracy. There are so-called fast-SPICE simulators that sacrifice some accuracy to achieve faster throughput; that is not what Empyrean’s tools are. They provide True SPICE accuracy.

About Empyrean Software
Empyrean Software provides electronic design automation (EDA) software, design IPs and design services, including analog and mixed-signal IC design solutions, SoC design optimization solutions, flat panel display (FPD) design solutions, and customized consulting services.

Empyrean Software is the largest EDA software provider in China, and its research and development team has more than 30 years of experience in technology and product development. Our company has comprehensive cooperation with many corporations, universities and research laboratories.

Empyrean’s core values are dedication, collaboration, innovation, and professionalism. Our company is committed to working with customers and partners through win-win collaboration to achieve our common goals.


Arm Gets More Creative with Licensing

by Bernard Murphy on 08-01-2019 at 6:00 am

Arm flexible access

Without a doubt, RISC-V is generating a lot of buzz and, I’m sure, a lot of new designs, especially in spaces that are super cost-competitive or demand added differentiation in the processor. I doubt this is having a meaningful impact on Arm’s business in dollars rather than press. It takes a long time to replace an ecosystem of that size and the confidence markets have in Arm products. It’s not even clear it would make sense to displace Arm in the foreseeable future, any more than it would make sense for Arm to displace Intel in servers. In both cases there are subset markets that can be better served by an alternative, but there’s no apparent (to me) reason to switch most applications.

Still, I’m sure Arm is feeling some heat. I’m guessing they are also under pressure from customers needing to respond to highly fluid demand, such as system builders moving into SoC design, where what IPs they need and how many they need may not be very clear until relatively late in the development cycle. Perhaps some of those design teams might also wonder if life would be a lot easier if they could instead work with other platforms.

After all, the Arm business model for development wasn’t very flexible; you had to decide up-front which IPs you wanted to license and pay for them. Many RISC-V implementations are open-source and free to use as a starting point, or available under attractive terms compared to Arm options. And, no doubt, there is some appeal to the thought that you might be able to get processor IP for free, with no up-front payment and perhaps no royalties, even if you have to do more design work yourself.

Arm now offers Flexible Access, a new engagement model intermediate between DesignStart ($0 for access to Cortex-M0, M1 and M3, a 90-day software trial, royalties when you go to production) and the standard licensing model, where fees vary with the IP you want to use, single- or multi-use access, levels of access and so on.

In Flexible Access you get access to a much wider range of core and support IP (e.g., system and security IP) and can choose between one tapeout per year or multiple tapeouts per year at (per the current website details) $75k or $200k annually. Again, you pay royalties on production volumes. Arm already has several partners active in this engagement model, including AlphaICs, Invecas and Nordic Semi.

IPs included under the plan include most Cortex-M, -A and -R processors, TrustZone and CryptoCell IP, a number of Mali GPUs, system IP such as the AMBA fabric generators and other tools and models for design and software development. Global support and training are also included.

OK, so it’s not free, but it definitely is more flexible. A lot of customers don’t know exactly up-front which IPs they are going to need or how many. The big systems houses – hyperscalers, communications equipment makers and similar – won’t particularly care about cost, but they do need flexibility. Smaller and more cost-sensitive ventures needing to react quickly to updated spec demands from their customers should definitely appreciate this new model. And I would imagine that for all of these customers, easier access to this range of Arm IP has to be a more attractive and safer option than launching a RISC-V adventure.

Not everyone has the stomach for the inevitable risk in embracing open hardware standards or the need to differentiate on the processor. WD knows exactly what it wants from RISC-V and has years of experience and large teams building similar designs around Arm cores; their work with RISC-V must feel like a relatively incremental step. But IMHO (and I’m not alone) this step will be a big and unnecessary unknown and risk for many. That said, more heat on Arm (or any near-monopoly) is never a bad thing. They’ll work harder and we’ll all benefit.


56th DAC – In Depth Look at Analog IP Migration from MunEDA

by Tom Simon on 07-31-2019 at 10:00 am

Every year at DAC, in addition to the hubbub of the exhibit floor and the relatively short technical sessions, there are a number of tutorials that dive in depth into interesting topics. At the 56th DAC in Las Vegas this year, MunEDA offered an interesting tutorial on analog IP migration and optimization. This is a key issue for large and small companies alike. Digital IP migration is a fairly well-bounded problem, making digital IP reuse a common activity. Though no less important, analog IP has been more difficult to adapt to new processes and new foundries. The four-hour MunEDA tutorial was rich with technical content and real-life case studies from Fraunhofer, inPlay Technologies, Rohm and STMicroelectronics. As always, it is far more interesting to hear about design tool experiences from customers.

Michael Pronath from MunEDA started off the tutorial by discussing tools and methodologies for analog IP migration. MunEDA addresses issues in full-custom design, which includes memories, custom cells, RF, and of course, analog. There are many challenges in this domain, including difficult design trade-offs and design for yield and aging. Their migration flow includes SPT, for schematic porting, which handles many of the tedious and error-prone steps involved in moving a design. The ported schematic can then be sized and tuned for the new technology with MunEDA’s WiCkeD sizing and tuning tools; WiCkeD stands for their Worst Case Distance optimization and analysis techniques. The final step is applying the WiCkeD-based analysis and verification tools to ensure proper operation and performance of the finished design. This can include accounting for process parameters, global and mismatch variation, reliability and more.

I found the portion of the tutorial given by Rohm’s Hidekazu Kojima particularly interesting. Rohm is a company that has been innovating semiconductor products since the late 1950s. Their IP group uses MunEDA products to tailor their IP to individual product group needs, relying heavily on the MunEDA WiCkeD tools for this. Hidekazu compared the flow they use to a traditional optimization flow. The main problem he highlighted with other flows is that there can be many iterations, where each small change requires reverification of all the performance specifications. MunEDA’s WiCkeD Deterministic Nominal Optimizer (DNO) pretty much handles the whole process and required only a few automatic iterations to reach closure.

Hidekazu then talked about finding the worst-case operating condition and corner. The WiCkeD tools can detect a worst-case operating condition between a min and max range. He also mentioned the easy-to-understand output graphs produced by the tools. The next part of his presentation was a discussion of several case studies, including an AMP circuit, a memory circuit, a comparator, and a logic circuit. For the AMP circuit, optimization time went from 160h to ~6h. Area was also reduced by 60% compared to the original circuit, with improved operating characteristics.

He closed with an overview of what he felt were the most useful features. Naturally the schematic porting features were included. He said it made it easy to replace devices with the new ones from the target PDK. It also automates any necessary rewiring. The Worst Case Analysis (WCA) algorithm significantly reduced the number of simulations needed for high sigma verification. This is useful for designs intended for automotive applications. WCA was also very useful for helping improve robustness for process variation and mismatch, with higher accuracy in fewer simulations.  Lastly, they were easily able to produce corner models based on PCM data from typical models. They came to within 1% of their target in only ~15 minutes.
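The worst-case idea behind these tools can be illustrated with a toy sketch (hypothetical Python, not MunEDA’s actual algorithm): ask how many sigmas of parameter variation it takes before a performance spec fails, searching outward from the nominal point. The gain model, spec limit and sigma below are invented for illustration.

```python
import math

# Toy worst-case-distance search: step outward from the nominal
# parameter value until the performance spec first fails, and report
# that distance in sigmas. All numbers here are hypothetical.
def worst_case_distance(perf, spec_min, nominal, sigma, step=0.01, max_sigma=10):
    k = 0.0
    while k <= max_sigma:
        for p in (nominal - k * sigma, nominal + k * sigma):
            if perf(p) < spec_min:
                return k  # sigmas to the nearest failing point
        k += step
    return math.inf  # spec never fails within the search range

# Hypothetical amplifier gain (dB) vs. a process parameter p (nominal 1.0)
gain = lambda p: 40.0 - 25.0 * (p - 1.0) ** 2

wcd = worst_case_distance(gain, spec_min=30.0, nominal=1.0, sigma=0.2)
print(round(wcd, 2))  # sigmas of margin before gain drops below 30 dB
```

A larger worst-case distance means more robustness margin; the real tools solve this in many dimensions with far fewer simulations than a brute-force sweep.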

Over time, companies develop an array of design IP, which comes to represent significant value. The ability to easily and predictably migrate this IP means that this value can be effectively leveraged for future projects. MunEDA tools make this a reality. The tutorial was filled with a rich variety of applications for MunEDA WiCkeD tools. If you were not able to attend the tutorial, the comprehensive slides are available on the MunEDA website. I expect that, just as I did, you will find the contents very informative.


IP Provider Vidatronic Embraces the ClioSoft Design Management Platform

by Randy Smith on 07-31-2019 at 6:00 am

Having worked at several semiconductor intellectual property (SIP) companies, I know how important it is to have a strong design data management platform for tracking the development and distribution of SIP products. Everyone doing semiconductor design should care about design data management. But for an IP company, it is imperative. Life gets complex quickly when you start giving your customers different versions of the same IP. So, it got my attention when Vidatronic, a provider of energy-efficient analog and power management unit (PMU) IP, said they were willing to talk about their use of ClioSoft’s SOS Design Platform to develop their IPs.

First, a bit about Vidatronic. The company has been around since 2010 and has a couple of interesting outside board members from some of the biggest names in semiconductors, including Hector Ruiz, Ph.D. (former CEO of AMD) and Mike Bartlett, M.S.E.E. (former Texas Instruments VP). Recently, Vidatronic announced that they will provide PMU and analog IP cores to ARM for use in their solutions and have also teamed up with Samsung Foundry to provide analog IP core designs for licensing through SAFE™, the Samsung Advanced Foundry Ecosystem. They sport two primary engineering locations, one in Austin, Texas and one in Egypt. Their analog SoC IP portfolio includes power-management solutions such as LDO linear voltage regulators, DC-DC switching converters, bandgap voltage references, and other support circuitry. They also provide radio-frequency solutions, including CMOS transmitters.

Based on this basic description of Vidatronic, we can see that they need to support many SIPs across a large number of design process nodes, where they also need to develop customized versions for certain customer/process node combinations. But when diving in deeper with Vidatronic, we find even stronger reasons to deploy ClioSoft’s SOS Design Platform:

  1. Reduced complexity and improved efficiency while supporting multiple sites
    1. Used in Texas and Egypt
    2. Supports real-time sharing of design data between the sites
    3. Performance needs for auto-synchronization and secure, efficient data transfer
    4. Easy to control/restrict design access
    5. Optimized disk usage using SOS smart caching (with links to cache work areas to optimize network storage)
    6. Read-only local copy work areas with exclusive or concurrent locking
  2. Support for Cadence Virtuoso platform
    1. Ability to manage complex hierarchical cell views
    2. Integrates well with Cadence Virtuoso
  3. Critical features for tracking multiple versions of each IP
    1. Easy to take and label design snapshots, which helps in efficient collaboration between teams
    2. “Revert back” feature for recovering to a stable version of the design data, if necessary
    3. Design teams can record important milestones, plus review and track open issues
    4. Use “visual design diff” to identify differences between two revisions of the schematic or layout of an IP

That is a long list of strong reasons for Vidatronic to utilize ClioSoft’s SOS Design Platform.

When I asked Moises Robinson, President and Co-Founder of Vidatronic about his company’s experience with ClioSoft, he told me “We selected ClioSoft’s SOS Design and IP Management Platform for our design needs about four years ago and we have been extremely happy with it ever since. The number one reason we chose ClioSoft was for its design collaboration features. Operating on a global scale is not without its challenges, but ClioSoft allows our engineers across the world to work seamlessly together on the same projects while maintaining tight control over revision histories so we never lose any of our work. Effective management of the different versions of our IP and efficient collaboration among our designers is integral to our success, and ClioSoft plays an important role in us ultimately delivering the highest quality IP to our customers.”

That is an impressive endorsement. Indeed, ClioSoft’s SOS Design Platform seems like a perfect tool for companies developing IPs over multiple sites worldwide.

Also Read

56th DAC ClioSoft Excitement

A Brief History of IP Management

Three things you should know about designHUB!


Semicon West 2019 – Day 3 – Global Foundries

by Scotten Jones on 07-30-2019 at 10:00 am

On Wednesday, July 10th I got to sit down with Gary Patton, CTO and SVP of worldwide research and development of Global Foundries and get an update on how the company is doing.

We started with a discussion of Global Foundries’ (GF) general business health. Revenue for the year is expected to be around $6 billion. They are focused on profitability and will generate over $600 million in free cash flow after $700 million of capital expenditure and $600 million of R&D spending, not including any transactions. In the past GF was cash-flow negative, so becoming self-funding is a huge accomplishment for the company.

The first key decision in achieving this was pivoting away from 7nm. 7nm is fine for TSMC and Samsung but is messy with EUV, etc., and the R&D and IP investments are very high. According to a graphic they showed me from Gartner, 7nm and smaller nodes are expected to represent only about 20% of the total available market in 2023, so GF is not missing out on a lot of opportunity.

A second key decision has been rationalizing their fabs. GF had three 200mm fabs, with large fabs in Burlington and Singapore and a small fab doing MEMS (Singapore Fab 3E). They have now sold Fab 3E. They also had four 300mm fabs, with large fabs in Malta, Dresden and Singapore and a small fab in Fishkill. They have now sold the Fishkill fab to ON Semiconductor. ON’s products have far fewer masks than GF’s, making Fishkill a more appropriately scaled fab for them. Fishkill will transition to ON over three years, with 45RFSOI and silicon photonics transitioning to Malta and 130nm RFSOI going to Singapore; Dresden has 22FDX and 40nm RF SOI. Even after the fab sales, GF still has plenty of space available for growth. Dresden is only about 50% full, Fab 7 in Singapore has some space, and after moving 7nm out of Malta there is about 40% available space there. GF can grow revenue by 40% with its current cleanroom space.

GF has also sold their ASIC business to Marvell making them a clean provider of foundry services. They are focusing on being a manufacturing service provider, not a product provider.

GF wants to focus on key market segments that are growing: mobile, automotive and IoT (smart devices). In mobile, the BOM is shifting toward more FEM (front-end module) content, where GF is strong (I have previously written about GF’s broad portfolio of RF solutions here), and they are the only foundry with turn-key RF.

Some examples of applications for GF technologies are shown in the following slides.

Figure 1 illustrates GF technologies in wireless base stations.

Figure 1. Wireless infrastructure applications.

Figure 2 illustrates GF technologies in smart phones.

Figure 2. Smartphone applications.

Figure 3 illustrates GF technologies in automotive.

Figure 3. Automotive applications.

22FDX will have double the design wins this year, and GF has restarted work on 12FDX. 12FDX is being developed in Malta on a slow ramp; they aren’t being pressured by customers for 12FDX yet. They have won two new $1 billion opportunities in the last year for 45RFSOI in Dresden. They think their embedded MRAM solution is more flexible than other suppliers’ and have started to get design wins on MRAM and also mmWave.

After years of questions about GF’s long-term survival they appear to be carving out a sustainable position in some key markets.


Mentor Highlights HLS Customer Use in Automotive Applications

by Bernard Murphy on 07-30-2019 at 6:00 am

Catapult HLS

I’ve talked before about Mentor’s work in high-level synthesis (HLS) and machine learning (ML). An important advantage of HLS in these applications is its ability to very quickly adapt and optimize an architecture and verify an implementation against an objective in a highly dynamic domain. Designs for automotive applications – for example in an intelligent imaging pipeline such as you might find for object detection in a forward-facing sensor – present all of these challenges.

Evolving Demands in Automotive Design

Certainly these designs can be on very tight deadlines; one example mentioned below required a team to develop three designs in a year. But two other constraints are even more challenging. First, verification suites are naturally based on images, often at 4K resolution, with 8-12-bit color depth, at 30 frames per second. On top of this, ML inference test suites using images of this complexity can be huge, since correct detection in these applications needs to be near-foolproof.

Second, the ecosystem from SoC developer to module maker to auto OEM has become much more tightly coupled, especially to meet the tighter requirements of ISO 26262 Part 2 and now also SOTIF (safety of the intended function, another emerging ISO standard). Part 2 and SOTIF demands have placed more burden on the value chain as a whole, from IP suppliers through SoC integrators to Tier 1s and the automotive OEMs, to ensure that the final product can meet safety requirements. For example, Part 2 now requires a confirmation review to “provide sufficient and convincing evidence … to the achievement of functional safety”. This is a matter of judgment, not just meeting metrics; a Tier 1 or a chip maker can require additional support from lower levels to meet that objective, which means that design specs can continue to iterate until quite late in the design schedule.

Under these constraints, RTL-based design flows would be impossibly challenging; there simply wouldn’t be enough time to experiment with enough architecture variations, verify over huge reference image databases, and respond to, re-characterize and re-verify late-stage changes from Tier 1s or OEMs.

This is where HLS shines. You can develop code in C/C++ and experiment with architectures at least an order of magnitude more efficiently than you can at RTL, since these are algorithmic problems most easily represented in that format (or in MATLAB or the common ML frameworks, to which the Mentor HLS solutions can connect). You can also run verification of those giant datasets at this level, multiple orders of magnitude faster than RTL-based verification. (I believe this should even be faster than emulation, since C modeling is close to virtual prototyping, which runs at near-real-time performance.) And in response to late changes, you can incorporate them at the C level and re-verify and re-synthesize pretty much hands-free, limiting the impact on your schedule.
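As a toy illustration of working at the algorithmic level (a hypothetical sketch, in Python for brevity, whereas real HLS flows such as Catapult take C/C++): the pipeline stage is written as plain untimed code that can be verified directly against reference images, while architectural choices such as unrolling and pipelining are left to the synthesis step.

```python
# Hypothetical untimed model of one imaging-pipeline stage: a 3x3 box
# blur over integer pixels. At this level the design is verified against
# reference images directly; an HLS tool would later explore hardware
# architectures for the equivalent C/C++ without changing the source.
def box_blur_3x3(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = cnt = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        cnt += 1
            out[y][x] = acc // cnt  # integer division, as in hardware
    return out

frame = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
print(box_blur_3x3(frame))  # the centre pixel spreads into its neighbours
```

Because the source stays at this level, a late-stage spec change becomes a small edit followed by automated re-verification and re-synthesis, rather than a hand rework of timed RTL.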

Case Studies

Mentor recently released a white paper (see below) on outcomes for three of its customers using the Catapult flow for designs in the automotive imaging pipeline. Bosch, a well-known mobility Tier 1, finds it valuable to enhance its differentiation by building its own IPs and ICs for image recognition. This was the example where a design team had to produce three designs in a year. Using the Mentor flow, they were able to pull this off and deliver a 30% power reduction because they could easily experiment with and refine the architecture for power. They also commented that it will be much easier to migrate the C-based model to new designs and evolving standards than it would have been with an RTL model.

Chips and Media, a company providing hardware IP for video codecs, image processing and computer vision (CV), also used the Mentor flow to develop a new CV IP. This was their first time using an HLS flow, and they ran an interesting experiment with two teams, one developing with HLS, the other with hand-coded Verilog. The Verilog team took 5 months to complete their work, with little experimentation on architecture, whereas the HLS team took 2.5 months. This was from a cold start – they had to train on the tools first, then develop the C code, synthesize, and so on. Apparently they were also able to experiment quite a bit in this period.

Finally, ST are well-known for their image signal processing (ISP) products, commonly used in automotive sensors. They have seen comparable improvements in throughput for such designs, delivering (and this is pretty awe-inspiring) more than 50 different ISP designs in two years, ranging in size from 10K gates to 2 million gates. Try doing that with an RTL-based flow!

You can learn more about these user examples and more detail on the Catapult HLS flow HERE.


Virtuoso Adapts to Address Cyber Physical Systems

by Tom Simon on 07-29-2019 at 2:00 pm

LIDAR is a controversial topic, with even Elon Musk weighing in on whether it will ever be feasible for use in self-driving cars. His contention is that the sensors will remain too expensive and potentially unreliable because of their mechanical complexity. However, each of the sensors available for autonomous driving has its strengths and weaknesses. LIDAR offers many of the advantages of camera-based sensors, plus it can work in the dark. The other advantage it has over cameras is that it can provide object speed detection.

At DAC I had a chance to talk to Ian Dennison, Senior Group Director at Cadence, about innovations occurring in sensor technology and their integration into cyber physical systems. For instance, there are potential developments in LIDAR technology that could eliminate the need for mechanical elements, replacing them with a transmitter optical phased array. According to Ian, a major roadblock for this kind of development was the difficulty of combining optical and electronic design and analysis into a single integrated platform.

Beyond the elimination of mechanical elements, electro-optical design can help expand the application areas of a given technology. It is well understood that LIDAR is not suitable for close range sensing. For automotive applications an accuracy of 10 cm is fine. However, for industrial robotics this will not suffice. Ian believes that with more accurate laser modulation this resolution could be improved. One method could be adding additional electro-optical elements to create a PLL.

Cadence has been working hard on developing a photonics solution that extends design capabilities to solve problems that are holding back what Ian describes as a gold rush in cost reductions for sensor systems. Cadence has established partnerships with Lumerical, Coventor and MathWorks to develop Virtuoso integrations that can accelerate the design and integration of these systems.

Cadence has developed features and products to facilitate these integrations. An excellent example of this is their CurvyCore for creating curvilinear structures in Virtuoso. They have a SKILL API that allows symbolic curvilinear layout and discretization. It enables waveguide creation and model property calculation. Other useful tools can be added through SKILL IPC calls.

LIDAR is not the only application that is benefiting from Cadence's enhanced Virtuoso solutions. In the RF space Cadence has announced Spectre X, which offers up to a 10X speed improvement coupled with up to 5X capacity while maintaining Spectre's golden accuracy. At high frequencies, such as 122 GHz, it is possible to include the antenna on the chip with the LNA and PA. Designs such as this need EM simulation to model transmission effects accurately. Cadence has recently announced new EM solver solutions that can address all elements of a 122 GHz FMCW radar system.

From my conversation with Ian, it is pretty clear that Cadence is attacking design issues in sensor design across the board. Indeed, there was a lot to digest. Nevertheless, as designers pick up and use these new features, it is sure to change the landscape in autonomous vehicles, robotics, etc. While I am often a big fan of Elon Musk, I would not bet on LIDAR remaining unfeasible for automotive applications. History is full of examples of unforeseen advances due to improving technology. If anything, the rate of change is accelerating. A full description of the many developments in the Cadence solutions for sensor system development is available on their website.


IP Lifecycle Management and Permissions

IP Lifecycle Management and Permissions
by Daniel Payne on 07-29-2019 at 10:00 am

Percipient IPLM

My first professional experience with computers and file permissions was at Intel in the late 1970s, where we used big iron IBM mainframes located far away in another state, and each user could edit their own files along with browse shared files from co-workers in the same department. I saw this same file permission concept when using computers from DEC, Wang, Apollo, Sun, Solbourne, HP and others. Even my MacBook Pro computer has an OS based upon Mach OS, derived from BSD UNIX, so it’s very familiar to me when using the command line. SoC designers today are using Linux and UNIX-based computers either on their desktop, network, private cloud or public clouds, and they all have file permissions to help organize how teams share files while the IT group can administer policies.

For an IP-based SoC we need something to help us manage access and track usage of all of those IP blocks, thus the concept of IP Lifecycle Management (IPLM) tools arose, served by enterprise solutions vendors like Methodics. Using an IPLM approach means that there is one centralized repository for an SoC design, so that users can get a Bill of Materials (BOM) and know where each IP block is being used. Just like files have permissions, each IP block has permissions in IPLM, a way to bring order and allow the IT group to assign roles like Read or Write access to trusted engineers on a team.

An ideal IPLM system should be a single source of truth, managing IP, related databases, corporate PLMs, requirement managers and even bug trackers. It turns out that Methodics does have an IPLM tool called Percipient that aims to fulfill these ideals.  Let’s take a quick look at how the Percipient IPLM approach connects to low-level files, requirements manager, Data Management (DM) systems and PLM tools:

Percipient

Just like UNIX allows you to set individual file permissions of Read, Write and Execute for the file's owner and group, Percipient lets you assign Read, Write or Owner permissions to users or groups of users for each IP block within the company. In UNIX you've already defined the concepts of Users and Groups, so that info can be re-used within Percipient to enable permissions for each IP block.

Percipient also has the concept of hierarchy, meaning that one IP block may itself contain one or many lower-level IP blocks, so you get to define the IP permissions per user and group. If your team has contractors, then it makes sense that you restrict their access to any sensitive IP block details. An admin using Percipient can also grant permissions to all IP hierarchy using a single command, so managing IP access can be quickly updated as your project dynamically changes.

IP permissions are set by knowing who is working on a project, and also which IPs are being used on a project. Engineers working on a project will be part of the same UNIX group, so Percipient syncs with your LDAP/AD system to know which engineers belong to each group. To add a new engineer to your project, or remove one, just update their UNIX group.

Once Percipient knows which UNIX groups are being used, you can define which IP blocks are assigned Read, Write or Owner permission. The IP hierarchy permissions are also defined by an admin, either all at once or by hierarchy level. You define group membership in UNIX and it gets synced within Percipient, so it's always up to date.

An IP block with multiple users in different project hierarchies can carry different permissions for different project groups, so it's quite flexible to meet your unique project needs.

With Percipient there’s a convenient, centralized place to view both project and file permissions. Percipient consistently applies permissions to the underlying DM system, whether that is Perforce or another DM. Engineers can only see and modify the specific IP blocks for which permission has been granted.

IP blocks that are changed or re-used in different contexts have their file permissions always in-sync with the DM tool.

Permission management for bug-tracking tools like Jira, or a wiki manager such as Confluence, can also be performed by Percipient, extending the utility of a centralized approach.

Let’s say that you wanted to find out the project BOM along with all permissions attached to each IP contained in the BOM. With the Percipient tool there’s a RESTful public API, and here’s an example using the command line, along with the output results:

Results of using the RESTful API

The results tell us that the users of group “proj_yosemite” have Read permission to the IP, and that user “sasha” has Read, Write and Owner permissions to the IP.  Using this API makes it straightforward for CAD engineers to integrate Percipient with other software tools that use permissions.

Summary

Both operating systems and IPLM systems have come a long way over the years, making the life of SoC engineers a bit easier by using automation to help manage hierarchy in IP blocks, along with syncing up with DM, project requirements and bug-tracking tools. Your BOM can now be maintained in a single tool, along with all of the permissions to each IP. For more details there’s a 10-page White Paper available on the Methodics website.

Related Blogs


Real Men Have Fabs Jerry Sanders, TJ Rodgers, and AMD

Real Men Have Fabs Jerry Sanders, TJ Rodgers, and AMD
by John East on 07-29-2019 at 6:00 am

In 1977 I made a job change:  I took a job at Raytheon Semiconductor.  Raytheon was on Ellis Street next door to the Fairchild “Rust Bucket”.  In the early days, they shared the same parking lot so my commute didn’t change much, but my outlook on life changed a bunch.  I had mostly enjoyed my days at Fairchild, but I hated every single day I spent at Raytheon.

Then, in 1979,  I got a break!  Gene Conner (a great boss and AMD’s first product engineer)  offered me a job as product manager of AMD’s Interface product line.  I jumped on it!!!  Wow.  It was like dying and going to heaven.  Within a few days Gene taught me the most important thing that you had to understand if you were going to be a manager at AMD.

People first.  Products and profits will follow.

Jerry Sanders was definitely a flamboyant guy. Some of the stories you may have heard are probably overstated,  but he was flamboyant!  He was also very sensitive to the needs and feelings of the people who worked there.

Jerry hated the idea of layoffs.  Layoffs are very different from firings.  Someone gets fired if they don’t do their job well.  It seems harsh, but sometimes that has to happen.  With layoffs,  though, people who are doing their job well get let go.  We all hate that.  Jerry particularly hated it. Layoffs were a common part of the Silicon Valley culture at the time (See my week #7.  Layoffs Ala Fairchild). Jerry didn’t want AMD to be like that.  He instituted a no-layoff policy at AMD.  At first it was an informal policy.  Later, he had it written in the company’s policy manual.  For 17 years he stuck to it.  If things weren’t going well temporarily, Jerry’s view was – hold on to the people and let the earnings suffer.  Not the other way around. That was unheard of in Silicon Valley semiconductor companies.  It made people want to work at AMD.

The great recession of 1984 came.  We dropped into a loss position.  Our spending was too high. Our sales too low.  The cash balance wasn’t strong.  At an executive staff meeting we were hashing out what we could do about it.  The subject of a layoff came up.  Several execs were pushing for a layoff.  Jerry went apoplectic. He banged on the table yelling,  “I’m not going to preside over the dismantling of my life’s work.”  Jerry was always a good “quote machine”,  but that one in particular will stick with me forever.

(Unfortunately, by the time 1986 rolled around we were still in a loss position and the cash balance was running dangerously low.  We were forced to abandon the policy.)

In 1980 we had a very good year. Jerry wanted to spread the wealth.  He decided to hold a raffle.  The winner of the raffle was to get a house!   Yes.  The title to a real house here in Silicon Valley! Even back in 1980, production workers generally couldn’t afford their own houses.  The raffle was held, as I recall, on a Saturday night.  Early Sunday morning Jerry, accompanied by a Channel 7 TV crew, went to the home of the winner (a fab worker named Jocelyn Lleno who didn’t have any idea that she had won) and knocked on the door.  When she answered the door wearing her bathrobe,  he told her,  “Hi.  I’m Jerry Sanders.  I came here to tell you that you won the raffle.  You’ve won a house here in Silicon Valley.”  She was blown away!!!  (Actually, the prize was $1000/month for 25 years.  Hard to believe, but in those days that was enough to buy a very nice house)

Once at a black tie dinner event for AMD executives and their wives,  I was assigned to sit next to Jerry at dinner.  My wife Pam sat directly to his right.  Jerry knew that Pam owned a dance studio (she still does).  He asked her how the studio was going.  It happened that Pam was about to take a contingent of dancers to Russia, Poland, and the Ukraine for three weeks as part of an exchange program – a cadre of Russian dancers had just visited Silicon Valley.  It was expensive to take all those dancers to Russia and nobody had figured out how they were going to pay for it.  So Pam  — extrovert that she is – responded with something like, “Well.  I’ve got a problem.  I don’t know how I’m going to pay for this Russian exchange.  Can you help?”  As I crawled out from under the table, I saw Jerry reach into his jacket pocket.  He pulled out a checkbook and wrote out a personal check for $1000.

I first met TJ Rodgers in 1982 when he worked at AMD.  Shortly after that,  he left AMD to found Cypress Semiconductor.  In 1992 plus or minus a year or two Valerie Rice, a writer for the San Jose Mercury News, was interviewing TJ.  The fabless concept hadn’t yet taken over the world,  but it was making inroads.  Valerie asked TJ what he thought about the fabless model.  I love TJ Rodgers!  He was one of the old guard CEOs (As I was).  He believed in Fabs,  device physics, and transistor level circuit design (Things have changed.  See my upcoming week #15.  The Decade that changed the industry.)  Valerie tried to help by summarizing what he had said.  “So, you’re essentially saying that real men have fabs,  right?”  That was a play on the title of a book that was very popular back in the day.  Real Men Don’t Eat Quiche.  TJ jumped on it.  “Exactly!!!”  Jerry Sanders read that line and loved it!  Later that year he was the lunch speaker at the Instat Conference (Jack Beedle’s annual semiconductor conference that was attended by virtually all the big brass in the business).  The high point of his talk?  In his very strongest “take charge of the room and lay down the law” style:  “Now hear me and hear me well.  Real Men Have Fabs!!!!”  Most of the speakers that afternoon were fabless company CEOs.  I was one of them.  Jerry’s talk sent us all scurrying back to our PowerPoints to make the necessary changes.  The Instat Conference was always fun, but that was the best one ever!!

There was something about the AMD environment that spawned CEOs.  Was it the collegial environment?  In total,  83 former AMDers have gone on to become CEOs of other tech companies.  The two who impress me the most, though, are two CEOs who were just starting their careers at AMD during the days when I was a VP there.  Jayshree Ullal and Jensen Huang.  Jayshree  (the CEO at Arista Networks)  took Arista from a fledgling company to one now valued at twenty billion dollars!  There’s a great article about her in Forbes Magazine.   Jensen (the CEO of Nvidia) has built a juggernaut, but I think of him as the best public speaker I have ever listened to.  (Actually – he’s tied with Jerry Sanders who is the greatest orator in the history of High Tech!!!).   At the typical dinner event, most of us can’t wait until the keynote speaker shuts up so that we can eat.  In the case of Jensen, though, you don’t want him to stop.  He’s just plain fun to listen to.

There was a terrific amount of camaraderie and love for the company in the early days of AMD.  A terrific spirit!  It seemed to me that it waned a bit, though, when Jerry left.  This May 8th I was invited to attend the AMD 50th birthday celebration in their new offices in Santa Clara.  It was a really well planned event.  I talked briefly with Lisa Su (The new CEO) and with a dozen or so of the present-day rank and file employees.  My takeaway?  Lisa Su is great and the spirit is back.

Jerry Sanders was CEO of AMD for 33 years.  TJ Rodgers was CEO of Cypress for 33 years.  The industry lost a lot when they retired!  I miss them!!!

Next week:  The IBM PC

See the entire John East series HERE.

Pictured:  Jerry Sanders