Cadence Adds “Always On” to vManager Verification Management with Distributed and Cloud Access
by Mike Gianfagna on 06-17-2020 at 10:00 am


Cadence vManager™ Verification Management provides what the company describes as metric-driven signoff. Anyone who has been through the tapeout process for a complex SoC knows the perils of verification sign-off. How much of the chip has been verified? What's left to do? Will it all be ready when the tapeout deadline arrives? In a prior life, I mused that chip verification was done when time ran out. Today, one can do a lot better than that with the technology available from companies like Cadence.

I recently had the opportunity to speak with Matt Graham at Cadence about the vManager platform and the recent enhancements that have been added. Matt is a product engineering director for vManager. He’s been with Cadence for over 15 years and spent time at Nortel, AMCC and Verisity before it was acquired by Cadence. Matt knows a lot about chip verification and how to automate it.

He started by taking me on a tour of the vManager platform. With the goal of helping customers verify smarter, this platform aims to enhance predictability, productivity and quality for chip verification. Think of it as a way to collate and manage the massive data from all the verification tasks applied to a typical SoC. Formal analysis, software simulation, hardware emulation and FPGA prototyping all generate a lot of information.

The vManager platform analyzes and abstracts this information to create a clear picture for the design team regarding where they are in the verification process. Verification requirements are coordinated with design intent to assess functional and code coverage. This allows informed decisions to be made, which results in superior quality and tighter schedule performance. The figure below provides an overview of the data sources and analysis regimes of the vManager platform.

The current architecture of the platform manages the significant data volumes with a client/server SQL database in a centralized configuration. This architecture works great for an IP-level team working at a single site. As the need to scale to larger, more distributed teams increases, the capacity limits of a single server (roughly 30 projects and 100 users) and its locality constraint start to become a concern. The lack of fault tolerance also makes the system a potential single point of failure – a difficult situation when tapeout is fast approaching.

Cadence has addressed these issues with the addition of a high availability proxy server that provides load balancing and resilient routing. Proxy servers and server agents work together, resulting in a scalable and high-availability environment that delivers fault tolerance. Both public and private cloud environments are supported. The figure below provides an overview of the new architecture.
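To make the load-balancing idea concrete, here's a minimal Python sketch of the kind of resilient routing a high-availability proxy performs – round-robin distribution that skips unhealthy servers. The server names and health-check logic are illustrative assumptions on my part, not the actual vManager implementation.

```python
import itertools

class ResilientProxy:
    """Toy round-robin proxy: route each request to the next healthy server.

    Illustrative only -- server names and health checks are assumptions,
    not the actual vManager proxy implementation.
    """

    def __init__(self, servers):
        self.servers = list(servers)
        self._ring = itertools.cycle(self.servers)

    def is_healthy(self, server):
        # Stand-in for a real health check (heartbeat, TCP probe, etc.).
        return server.get("up", True)

    def route(self, request):
        # Try each server at most once; skip any that fail the health check.
        for _ in range(len(self.servers)):
            server = next(self._ring)
            if self.is_healthy(server):
                return f"{request} -> {server['name']}"
        raise RuntimeError("no healthy servers available")

proxy = ResilientProxy([
    {"name": "vmgr-us-west", "up": True},
    {"name": "vmgr-eu",      "up": False},  # failed node is routed around
    {"name": "vmgr-asia",    "up": True},
])
print(proxy.route("regression-query"))  # regression-query -> vmgr-us-west
print(proxy.route("regression-query"))  # regression-query -> vmgr-asia
```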

This architecture supports multi-region capability, allowing design teams to manage regressions across regions and into the cloud from a single logical server. Nearly every project today has a complex geographic footprint, so this is a key feature. The data at each location is linked to provide unified oversight.

Matt summarized our conversation by pointing out that this new architecture provides greater predictability, productivity and quality with better scalability, improved reliability, tighter collaboration for dispersed design teams and lower maintenance. This enhancement provides a lot of benefits.

Cadence supports a comprehensive upgrade program for these new features, so if you're a vManager customer, contact your Cadence salesperson to find out how to unlock all these new benefits.


Where’s the Value in Next-Gen Cars?
by Bernard Murphy on 06-17-2020 at 6:00 am


Value chains can be very robust and seemingly unbreakable – until they're not. One we've taken for granted for many years is the chain for electronics systems in cars. The auto OEM, e.g. Toyota, gets electronics modules from a Tier-1 supplier such as Denso. The Tier-1, in turn, builds its modules using chips from a semiconductor maker such as Renesas, who produces those chips using pre-packaged functions from IP providers like Arm. Toyota could do the whole thing itself, but it's very expensive to set up and maintain all of that infrastructure. Specialization makes it all more practical. Everyone makes money by doing their bit well and cost-effectively and by being able to sell to multiple customers (Toyota, GM, BMW, etc.).

However, that cash flow can be upended when disruptive innovations are thrown into the supply chain – in this case, a lot more intelligence and autonomy. I talked to Kurt Shuler (VP Marketing at Arteris IP) to get his view. As an IP supplier, Kurt has a unique vantage point because he works with semis, Tier-1s and OEMs, on standard designs as well as newer AI-based designs. He's also an active member of the ISO 26262 committee.

Value Chain Shifts

Kurt's view is that there's a lot of shifting ground in the supply chain, especially between the Tier-1s and car manufacturers. More and more, the value in the car depends on intelligent electronics, which changes the stakes. For those who want to grab a bigger share, growth beckons; for those who don't, margins erode and they may become an automotive Foxconn. Everyone is staking out territory, whether a piece of a neighbor's turf or, like Tesla, the whole thing. Tesla used to depend on NVIDIA and Mobileye; now it does everything itself.

On the semiconductor side, you have an NXP or TI who gives you a board support package, maybe a reference board, and a software development kit. With these ingredients, a company can create its own solutions. Grabbing for more are companies like Mobileye, who take responsibility for a full system – chip, boards and software – though they customize these based on Tier-1 requirements.

The Value of Data

There’s also a soft value – the data gathered by all those intelligent systems. That data has immense value in helping refine training for (SAE) level 3 guidance and up, and for improving the overall user experience. Tesla owns all the data its cars generate, including images of bystanders, a point which has raised some privacy concerns (though we consumers generally seem to have a meh reaction to privacy debates). Mobileye similarly owns all the data its systems generate, though it shares that data with the Tier-1 and OEM. One reason the Tier-1s and OEMs are so eager to get into owning chip architectures is so they can have exclusive rights to that data.

This crowd-sourced learning has immense potential value. One of the biggest challenges in getting higher levels of autonomy is getting enough training across enough conditions. Even data captured in non-self-driving mode has value. No wonder Musk doesn’t want to let go of this asset – building on that training will push closer and closer to a self-driving reality. Whoever gets there first may own the future of mobility. Tesla wants to lap everyone else on the track, to be so far ahead on the data that no-one else can catch up.

However, Tesla still has a small customer base. Mobileye has a similar vision but is more integrated into the automotive value chain. They're working with the incumbents and have a good lead because they were first in class in deploying these systems. Kurt sees a couple of big wild cards: GM Cruise and Bosch. Cruise is trying to do a Tesla – starting behind, certainly, but with volume manufacturing and maintenance expertise on its side, areas where Tesla will be far behind. Bosch has design teams in Germany and Sophia Antipolis (a tech hub in the south of France known for TI OMAP, Apple, Arm and other top-notch chip architects, aside from being spectacularly beautiful and, you know, in the south of France).

All in all, several big players are acting very much like this is a winner-take-all market. Some of that is about differentiation in chips. But a lot more is about the data – controlling your data. Tier-1s and car companies are throwing a lot of money at this, as if a few hundred million dollars were no big deal, to assemble their fortresses.

Still, they're not quite as flush with cash as the hyperscalers (Google, Apple, Amazon, Microsoft, etc.). Kurt speculates that at some point this may settle down to some kind of mashup between those guys and the automotive value chain. Meantime, we'll no doubt be trying to figure out if we even want personal cars anymore and, if not, what service or combination of services is going to provide us a similar level of mobility. Stay tuned.

Also Read:

Design in the Time of COVID

AI, Safety and Low Power, Compounding Complexity

That Last Level Cache is Pretty Important


Fractal CEO Update 2020
by Daniel Nenni on 06-16-2020 at 10:00 am


Rene Donkers, the company's co-founder and CEO, started his EDA career at Sagantec, where he became responsible for worldwide customer support and operations management. Ten years ago, Rene and a handful of colleagues noticed a need in the design community for a standardized (portable) IP validation approach to replace internal solutions, and thus Fractal was founded with Rene as CEO.

What do you think is the biggest achievement in the past 10 years?
As a privately owned company we decided to grow organically. With a focus on hard IP validation, we managed to become the standard for IP validation, with 25+ customers in the US, Europe and the Far East.

Internal solutions at customers have been replaced by our Crossfire solution. The staff hired for both R&D and customer support are all still with the company, so we have accumulated years of experience in IP validation. With this solid base of customers and staff, we are ready for the years to come.

Can you tell how Crossfire is used by your customers?
“Crossfire can be used in three ways: for quality sign-off, design development, or assessment of external IPs.” As a sign-off tool, Crossfire enables IC developers to check for undesirable variations in IP blocks and design formats, thereby ensuring that an IP block meets their specifications. At the same time, Crossfire can also be used in design development to assess design formats and the quality checks employed to prevent potential misalignments. Fractal has also incorporated specialized APIs into Crossfire to aid IC integrators in analyzing IP blocks developed by external vendors and guarantee 100 percent alignment with their specific design format.

What’s new in 2020?
The introduction of our new IPdelta™ tool. The objective of IPdelta is to inventory all the ways in which one IP revision may differ from the next. Every database and file format supplied is compared, and deltas are reported for every relevant category of design data. This includes basic elements like cells and terminals but extends to delay, power and noise arcs, their conditions, and the associated characterization data. Physical layout (LEF, OASIS, OA, MilkyWay) is also covered, as are schematics, netlists, synthesis properties and functional models.

With the new IPdelta product our customers will be able to compare two versions of the same IP as part of the IP QA flow. This will strengthen our position as market leader in IP validation.
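To make the delta idea concrete, here's a minimal Python sketch of per-category revision diffing. The categories and data layout are my own illustrative assumptions, not Fractal's actual formats or comparison engine.

```python
def ip_delta(rev_a, rev_b):
    """Report per-category differences between two IP revisions.

    Toy illustration of the IPdelta concept -- the categories and data
    layout are assumptions, not Fractal's actual file formats.
    """
    report = {}
    for category in sorted(set(rev_a) | set(rev_b)):
        a = rev_a.get(category, {})
        b = rev_b.get(category, {})
        added   = sorted(set(b) - set(a))
        removed = sorted(set(a) - set(b))
        changed = sorted(k for k in set(a) & set(b) if a[k] != b[k])
        if added or removed or changed:
            report[category] = {"added": added, "removed": removed,
                                "changed": changed}
    return report

rev1 = {"cells": {"INVX1": "rev3"}, "timing_arcs": {"A->Y": "0.12ns"}}
rev2 = {"cells": {"INVX1": "rev4", "INVX2": "rev1"},
        "timing_arcs": {"A->Y": "0.11ns"}}
print(ip_delta(rev1, rev2))
# {'cells': {'added': ['INVX2'], 'removed': [], 'changed': ['INVX1']},
#  'timing_arcs': {'added': [], 'removed': [], 'changed': ['A->Y']}}
```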

What is the key to Fractal’s success?
The Fractal Technologies team is of the opinion that it is vital to work closely with customers on what they need and expect, and to deliver what you promise. The company also believes that communication is key, as is, unquestionably, having best-in-class staff to develop and support the resulting solutions.

Fractal Tech understands that customer feedback is important, and it knows that if customers have a question or an urgent issue, communication is the first part of solving the problem. According to Rene, the same holds within any company. He says, "Everybody wants to be involved in how the company is doing and what the role and added value of each individual is in the success of a company. Never assume your customer or staff is happy unless you ask and get confirmation."

About Fractal
Fractal is a privately held company with offices in San Jose, California; Austin, Texas; Weert, the Netherlands; Grenoble, France; and Yokohama, Japan. The company was founded by a small group of highly recognized EDA professionals. Fractal is dedicated to providing high-quality solutions and support that enable its customers to validate the quality of internal and external IPs and libraries. Through its validation solutions, Fractal maximizes value for its customers at the sign-off stage, for incoming inspection, or on a daily basis within the design flow.

Also Read:

CEO Interview: Johnny Shen of Alchip

Tortuga Logic CEO Update 2020

CEO Interview: Robert Blake of Achronix


Webinar: Optimize SoC Glitch Power with Accurate Analysis from RTL to Signoff
by Mike Gianfagna on 06-16-2020 at 6:00 am


I had the opportunity to preview an upcoming webinar from Synopsys on SoC Glitch Power – what it is and how to reduce it. There is some eye-opening information in this webinar. Glitch power is a bigger problem than you may think and Synopsys has some excellent strategies to help reduce the problem. The webinar is available via replay HERE.

The webinar is presented by Patrick Sheridan, PrimePower Product Marketing at Synopsys. I've known Pat for quite a while, dating back to the Cadence days in the 1990s. Pat has substantial depth on the products and problems being discussed, as he's been at Synopsys for over 10 years. He's also great at explaining things. Ashwin Sudhakaramenon, PrimePower Application Engineer at Synopsys, handles the Q&A. Ashwin has been working as a designer and applications engineer on timing closure for multi-million-gate designs for about 10 years, eight of them at Synopsys. Ashwin brings substantial technical depth to the table. These two gentlemen do a great job.

The topics covered in the webinar include:

  • An introduction to the challenges posed by glitches on power
  • Accurate glitch power analysis and optimization from RTL to signoff
  • PrimePower case study examples
  • Summary/Q&A

That’s a lot to cover in about 40 minutes. If you haven’t registered for the webinar yet, I’ll provide a bit of color on these topics to help you decide.

So, what is the glitch power problem? As designs become more complex and technology advances, the chance of switching activity due to a glitch increases. The problem shows up in all kinds of mainstream designs, including mobile and AI applications. Pat reports that the glitch power component for certain blocks in a design can range from 10% to 50%. That's not a misprint. Half the power could come from glitches.

These issues can also cause reliability problems through electromagnetic and voltage-drop effects. To make it more challenging, designers need accurate vectors to predict where glitches may occur. It is best to fix these problems early in the design process, but detailed vectors are typically not available until late. Early on there is an RTL verification suite, but that data cannot yet account for the detailed wiring delays that actually cause glitches. More challenges.
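As a back-of-the-envelope illustration of why glitch activity matters, here's a short Python sketch that splits dynamic power into functional and glitch components using the standard CMOS dynamic-power formula. The toggle counts and electrical values are made-up assumptions, not data from the webinar.

```python
# Dynamic power per net: P = 0.5 * C * V^2 * f_toggle
# Glitch toggles burn the same energy as functional toggles, so a net
# whose activity is dominated by glitches wastes most of its power.
# All numbers below are illustrative assumptions, not webinar data.

V = 0.75          # supply voltage (volts)
C = 2e-15         # net capacitance (farads)

nets = [
    # (functional toggles/s, glitch toggles/s)
    (1.0e9, 0.2e9),
    (0.5e9, 0.5e9),   # glitchy net: half its activity is spurious
    (2.0e9, 0.1e9),
]

def dyn_power(toggle_rate):
    return 0.5 * C * V * V * toggle_rate

func = sum(dyn_power(f) for f, g in nets)
glitch = sum(dyn_power(g) for f, g in nets)
print(f"glitch share of dynamic power: {glitch / (func + glitch):.0%}")
```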

Once I heard all this, I had to watch the rest of the webinar and you will, too.

What follows is a close look at the technologies available to predict and manage glitch power from RTL all the way to signoff. There are three cornerstone capabilities required here:

  • A signoff accurate power analysis engine (RTL to gate-level)
  • Timing-aware activity delay shifting technology
  • Generation of glitch-aware collateral

Pat goes into quite a bit of detail on each of these points, explaining why it’s important, what the impact on accuracy is and illustrating the results of the analysis. I don’t want to spoil the story – Pat is much better at explaining all this than I am and you really need to hear it directly from him.

The next section of the webinar provides details of four case studies using the Synopsys flow and in particular PrimePower RTL. The four case studies covered are:

  • RTL glitch power source identification
  • Gate-level glitch power from RTL fast signal data base (FSDB)
  • Glitch-aware switching activity interchange format (SAIF) for power recovery
  • Glitch-aware peak power for IR profiling

This section uses real customer design data at advanced process nodes, so the information is quite relevant. A very useful Q&A follows, which Ashwin handles quite well. That's the summary of the webinar. By now, you must want to see the replay HERE.

Also Read:

The Problem with Reset Domain Crossings

What’s New in CDC Analysis?

SpyGlass Gets its VC


What’s At the Center of Your SoC Design Process?
by Daniel Payne on 06-15-2020 at 10:00 am


I love starting a new project from scratch, because there's that optimistic feeling of having no constraints and being able to creatively express myself and get the job done right this time. For SoC designs today there are teams of engineers, and maybe a program manager plus a marketing person, who define the features, budget and, most importantly, the schedule. Because of time-to-market demands, competition and support of legacy features, we no longer have the luxury of literally starting from scratch, so semiconductor IP re-use is a firm reality. Back in the 1980s your SoC may have had a handful of re-usable IP blocks, while today the typical SoC is filled mostly with IP blocks, so times have changed dramatically.

If you’re using hundreds or even thousands of IP blocks, how do you manage all of that during the design process as new versions are released, bugs are fixed, new features added, or even when the specifications change?

My contacts at Methodics have built their entire company around addressing this very issue, and their answer is a software tool called Percipient, an IP Lifecycle Management (IPLM) platform. Their IP-centric approach uses hierarchy to represent your complex electronic design, and with Percipient they introduce the concept of an IP object that goes beyond just semiconductor IP.


Through the use of hierarchy, all dependencies of a project are tracked in one place, by design. Each of these items is modeled as an IP inside Percipient, and every IP has its dependencies defined in a hierarchical tree. An SoC can then be viewed with the version of every IP and software block it uses – a quick way to create a Bill of Materials (BOM).

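Here's a minimal Python sketch of the same idea – a versioned, hierarchical IP object flattened into a BOM. The class names and fields are my own illustrative assumptions, not Percipient's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class IP:
    """Toy IP object: name, version, metadata, and versioned dependencies.

    Illustrative assumptions only -- not Percipient's actual data model.
    """
    name: str
    version: str
    meta: dict = field(default_factory=dict)   # permissions, specs, etc.
    deps: list = field(default_factory=list)

    def bom(self, depth=0):
        """Walk the dependency tree and yield an indented Bill of Materials."""
        yield f"{'  ' * depth}{self.name}@{self.version}"
        for dep in self.deps:
            yield from dep.bom(depth + 1)

soc = IP("my_soc", "2.1", deps=[
    IP("cpu_cluster", "1.4", deps=[IP("l2_cache", "3.0")]),
    IP("pcie_ctrl", "5.2"),
    IP("boot_firmware", "0.9", meta={"type": "software"}),
])
print("\n".join(soc.bom()))
```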

Delving a bit deeper into the electronic design process, the data for an IC is a mixture of binary and text files, depending on which EDA tool is being used – so yes, lots of different file formats and databases. Some engineers on your geographically distributed teams may use Git or Mercurial as Data Management (DM) tools, since source code development requires close collaboration. For binary files there are other DM tools like Perforce, which can handle big file sizes and be replicated across multiple sites. EDA vendor tools also tend to have their own binary file systems. The Methodics solution was to create a DM layer that works with any tool:

Just keep using your favorite DM tools, without having to learn something new: ClearCase, Perforce, DesignSync, Git, Subversion. Percipient will work with each of these native DM systems behind the scenes, building the trees and project workspaces.

In addition to working with the most popular DM tools, Percipient provides meta-data for each IP, which can include things like:

  • Who may view or modify an IP
  • IP dependencies
  • Technical specifications of an IP
  • Business properties of an IP
  • Conflict resolution across the hierarchy
  • Workspace building rules

There are several benefits to keeping this meta-data separate from the IP data for each IP object.

With Percipient each user builds their own workspace, and each workspace is tracked, so Percipient knows which user has built each workspace, and which IP objects are in all workspaces. Team members use the familiar and native DM commands to build a workspace, so all IP stays in its original data format, along with the hierarchy.

Percipient can also track each IP in the workspace as a link into a managed IP cache, called PiCache. With this approach, each IP version is stored once and shared by any workspace through soft links.

A big advantage of the IP cache is that a workspace can be built in literally seconds, and it uses very little disk space because the contents are links into the cache. An engineer can make an IP object "local" by checking it out with the DM tool, or keep it as a link: make an IP local when modifying it, otherwise leave it as a soft link.
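Here's a rough Python sketch of the symlink idea: a workspace is populated with links into a shared cache, and an IP is only copied ("made local") when it will be modified. The paths and function names are hypothetical, not PiCache's actual interface.

```python
import os
import shutil

CACHE = "/proj/ip_cache"   # hypothetical shared cache location

def build_workspace(workspace, ips):
    """Populate a workspace with symlinks into the shared IP cache.

    Linking is near-instant and uses almost no disk, which is why a
    workspace can be built in seconds. Hypothetical paths, not PiCache.
    """
    os.makedirs(workspace, exist_ok=True)
    for name, version in ips:
        target = os.path.join(CACHE, name, version)
        link = os.path.join(workspace, name)
        if not os.path.lexists(link):
            os.symlink(target, link)

def make_local(workspace, name):
    """Replace the symlink with a real copy so the IP can be modified."""
    link = os.path.join(workspace, name)
    target = os.path.realpath(link)
    os.remove(link)
    shutil.copytree(target, link)

# Demo guarded so the sketch is a no-op unless the cache actually exists.
if os.path.isdir(CACHE):
    build_workspace("/work/alice", [("cpu_cluster", "1.4"), ("pcie_ctrl", "5.2")])
    make_local("/work/alice", "cpu_cluster")   # check out for editing
```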

Summary

The vast complexity of a modern SoC calls for fresh thinking about an efficient design methodology, and with the proliferation of semiconductor IP re-use it makes sense for that methodology to be IP-object centric. Methodics has done foundational work in this area of IP Lifecycle Management, and its Percipient tool integrates with the most popular Data Management tools in the industry, so you don't have to ask the CAD department to make your tool flow work with custom programming.

Read the complete 10-page white paper online now.

For chip design teams in Taiwan and China there's some good news: Methodics has recently added a new sales representative in each country. It's great to live in the same time zone as your customers.

Related Blogs


Synopsys Introduces Industry’s First Complete USB4 IP Solution
by Mike Gianfagna on 06-15-2020 at 6:00 am

USB4 connector (source: Intel)

Synopsys announced an addition to its popular DesignWare IP portfolio recently that has some significant ramifications. The company announced the industry’s first complete USB4 IP solution. Before we get into the details of the announcement, let’s take a quick look at the USB standard and why it’s important.

Standards have varying degrees of impact and stickiness. Some last, others don't. I can recall, many years ago while working at a hardware accelerator company called Zycad, that we had a design services organization. One of the leaders in that group jumped on a very early spec of USB 1.0 and began aggressively building a team and infrastructure to support it for some very large companies. Many of us wondered why all the commotion about yet another communication standard. As history has proven, we were wrong and the design manager in question was right. The USB standard has been the centerpiece for wired data and power for over 20 years.

The 3.x versions of the standard did muddy the waters a bit with multiple cable types. USB4 promises to fix all that with one connector and a converged connectivity standard. About a year ago, Intel announced it would contribute its Thunderbolt 3 specification to the USB Promoter Group, and that formed the foundation for USB4. The standard physical connection going forward will look like a USB-C connector; anyone with a late-model Apple laptop will know what that looks like. The standard supports double the speed of USB 3.x (40Gbps vs. 20Gbps) with backward compatibility, and over 50 companies are now supporting adoption, so things will be picking up fast.

Back to the Synopsys announcement. Leading edge IP announcements often detail a first set of deliverables with more to come, sometimes many more to come. This is not what Synopsys announced. Rather, the company is providing a complete IP solution for the new standard, including IP for the controller, router, PHY, and verification. Multiple high-speed interface protocols, including USB4, DisplayPort 1.4a TX, PCI Express, and Thunderbolt 3 are supported, so SoC designers have options. There is already a test chip tapeout of the IP in an advanced 5nm FinFET process, so robustness across process, voltage, and temperature variations has been demonstrated.

The new IP is designed to meet the functionality, power, performance, and area requirements of applications such as storage, PC, and tablet SoC designs as well as software development debug and easy deployment of artificial intelligence (AI) applications at the edge. The completeness of this offering should lower risk and enhance adoption for the new standard.

“As an active member of the USB Implementers Forum (USB-IF) for more than 20 years, Synopsys has helped to advance USB specifications while developing IP products that ease the integration and adoption of the latest USB technologies,” said Jeff Ravencraft, president and COO of USB-IF. “Initial USB4 products are expected to appear in late 2020 and the early availability of integration-ready USB4 IP is critical to helping designers incorporate the USB4 interface into their SoCs. Synopsys continues to support the industry by helping designers ensure interoperability and connectivity with billions of USB-enabled devices worldwide.”

“Synopsys has been at the forefront of providing high-quality, complete IP solutions through every generation of widely used interface standards such as USB,” said John Koeter, senior vice president of marketing and strategy for IP at Synopsys. “By providing a complete USB4 IP solution, backed by billions of SoCs shipped with DesignWare USB IP and our long track record of technical expertise, Synopsys enables designers to accelerate the integration of high-performance USB4 functionality into their SoCs with significantly less risk.”

If you want to learn more, there are several additional resources provided by Synopsys. The Synopsys press release can be found here.  A white paper entitled USB4: User Expectations Drive Design Complexity is available as well. This piece outlines the capabilities of USB4 hosts, hubs, docks, and devices with an emphasis on how end-user expectations drive the complexity of USB4 products. And a video interview with Jeff Ravencraft, President & COO of the USB Implementers Forum is also available.

Also Read:

Synopsys – Turbocharging the TCAM Portfolio with eSilicon

Synopsys is Changing the Game with Next Generation 64-Bit Embedded Processor IP

Security in I/O Interconnects


COVID-19: The Fate of the Fearless
by Roger C. Lanctot on 06-14-2020 at 7:00 am


As I listened last Friday to Automotive News Publisher Jason Stein interview Scott Corwin, managing partner and "Future of Mobility" practice leader at Deloitte Consulting, about potential COVID-19 recovery scenarios for the mobility industry, I realized that the vast analytical powers of Deloitte had met their match. Corwin had more qualifications on his insights than actual data and knowledge regarding what the future might hold.

The Deloitte scenarios – described as “passing storm,” “good company,” “sunrise in the East,” and “lone wolves” – are based on taking into account the potential depth and severity of the pandemic and the resulting government response. What the scenarios reflect is Deloitte’s ability to “tell a story.” What they lack is a reflection and quantification of events unfolding in real-time and their implications for consumer behavior and economic outcomes.

Automotive News: Where Does Mobility Go from Here? – https://www.autonews.com/weekend-drive-podcast/daily-drive-podcast-june-5-2020-scenario-planning-where-does-mobility-go-here

In its attempt to capture the “big picture,” Deloitte is missing the obvious reality confronting consumers and business owners every day for the past three months. To survive and compete today we must rely on our own wits and our own resources. We cannot count on the government – any government – and we can’t even rely on traditional customer behavior.

There are four intractable variables we have all been forced to confront: the pandemic itself, the nature of mobility demand, the behavior of auto makers, and the priorities of governments and regulators. Students of viral pandemics will know that COVID-19 will be with us indefinitely. There will be no end to the pandemic, so it is best to proceed on that assumption and take all appropriate measures.

For mobility operators, consumer decision-making will be unpredictable so it is probably best to prepare for the worst while hoping for the best. Service providers and employers should seek the lowest common denominator as to acceptable protocols for returning to a new business as usual.

For auto makers this will mean real-time daily factory worker testing to achieve reliable and scalable vehicle production. For dealers it means online vehicle sales with touchless dealer-delivery must be accelerated. It will take time to ramp up testing. It will take time to ramp up touchless vehicle sales and delivery – but this is the new normal.

As for government assistance, the best that can be hoped for is financial assistance. By now, it is clear, at least in the U.S., that the government will not step in to lead testing and tracing activities, nor will it provide specific worker protection guidelines with appropriate enforcement. It is every man, woman, and company for him or her or itself.

Strategy Analytics reached out to consumers in the U.S. and the U.K. to ask them about their car buying, transport usage, and ride hailing plans post-COVID-19. The survey was conducted in May.

The survey reflected some key shifts in transportation preferences with long- and short-term implications – some of which may appear obvious:

  • Usage of all mobility services will likely fall in COVID-19 recovery.
  • Owned car usage is likely to remain unchanged or increase.
  • For ride-hailing UX, air/surface cleaning and plastic partitions are of modest importance to most riders.
  • Driverless technology does not add great value to ride-hailing UX, even during a pandemic.
  • With regard to car shopping, most but not all remote activities and features are of interest.
  • Though remote inventory browsing and sales completion are broadly appealing, some advanced tech-driven walkthroughs are not of great interest.
  • Smart or clean surfaces are of greater value than removable or customizable cockpit barriers.

For me, the most important takeaway from the study was the importance of partitions in taxi and ride-hailing settings. While interest in partitions was expressed by exactly half of respondents, that level of interest ought to tip this COVID-19 compensation measure into the must-do category. Dealers should also note the survey's flagging of remote service as a high customer priority.

As we come to grips with what COVID-19 has done to our lives and our livelihoods we must be as honest and open as possible. Experts have made clear that COVID-19 is not a single thing. COVID-19 is a mutating coronavirus that even before its mutation was impacting different people and different communities in different ways.

Even as I write these words, researchers are working diligently to understand precisely what COVID-19 is and how it affects human organisms. They are doing this research with the knowledge that COVID-19 itself is constantly evolving – as is our reaction to it.

Most notable of all, though, is the emotional reaction of us human beings. No two people have the same understanding or the same response to COVID-19. This was made personally clear to me as I pondered visiting my mother on the occasion of her 94th birthday next Monday.

I am not alone in seeking to visit my mother and the logistics involved in this decision are complex. My mother has four children – including myself – with nine grandchildren, two great grandchildren, two daughters-in-law, one spouse of a grand-daughter, and four unmarried partners of grandchildren.

My mother’s birthday is on Monday, June 15th; she lives at an assisted living facility in Connecticut where the next phase of re-opening (including restaurants and retirement homes) occurs June 17th; my preferred lodging provider, Marriott, re-opens to the general public June 20th. One of my siblings does not believe the coronavirus is dangerous at all. One of my siblings thinks it’s a bad idea to fly to Connecticut and then proceed to visit with my mother. The feelings of grandchildren, spouses, boyfriends, and girlfriends are most likely mixed.

My family is not unlike every other family in America – torn and tortured by COVID-19 decisions and societal and economic impacts. Mobility service providers, car makers, and new car dealers will be best served by assuming: A) COVID-19 will never go away; B) every customer has deep COVID-19 concerns; C) governments are incapable of solving this crisis.

Under these circumstances it will be best to establish world class policies on cleaning, testing, and distancing in the hopes of returning to an operating environment – in vehicles, in factories, and in showrooms – that is acceptable to even the most fearful and worried among us. Only the lowest common denominator can get us through – which is why most of my family members will be joining my mother on her 94th birthday via Zoom. My mother has already become something of a medical miracle – but like all of us, she won’t outlast COVID-19. With a little care she can learn to live with it – like the rest of us.


How Blockchain Is Revolutionizing Crowdfunding
by Ahmed Banafa on 06-13-2020 at 10:00 am


According to experts, there are five key benefits of crowdfunding platforms: efficiency, reach, easier presentation, built-in PR and marketing, and near-immediate validation of concept. This explains why crowdfunding has become an extremely useful alternative to venture capital (VC), and why it has allowed non-traditional projects, such as those started by in-need families or hopeful creatives, to pitch their cause to a new audience. To date, $34 billion has been raised through crowdfunding initiatives, adding roughly $65 billion to the global economy, in line with projections that show a possible $90 billion valuation for all crowdfunding sources, surpassing venture capital funding in the process. [2]

Limitations of Current Crowdfunding Platforms [1]

1.    High fees: Crowdfunding platforms take a fee for every project listed. Sometimes this is a flat fee; others take a percentage of the total proceeds raised from contributors. This cuts into the available funds and strains the fundraising process when start-ups are counting on every single dollar.

2.    Fine print rules and regulations: Not all platforms accept services as a possible project; many demand real, tangible products. Such a mindset cripples innovation and narrows the horizon for new products and services.

3.    DIY marketing and advertising: With few exceptions, platforms will not help spread the word about new startups, which means startups must pay for marketing and advertising – yet another strain on their limited funds, and a distraction from innovation and creativity.

4.    Scam startups: In some cases, startups turn out to be scams and produce nothing, leaving investors empty-handed with no way to get their money back.

5.    Intellectual property risk: In some cases startups have no protection for their IP, leaving them exposed to experienced investors who can take the idea and enter the market early with all the resources they have.

With all the above limitations of current crowdfunding platforms, blockchain technology, among all its benefits, can be best put to use by providing provable milestones as contingencies for giving, with smart contracts releasing funds only once milestones establish that the money is being used the way that it is said to be. By providing greater oversight into individual campaigns and reducing the amount of trust required to donate in good conscience, crowdfunding can become an even more legitimate means of funding a vast spectrum of projects and causes. [2]

How Blockchain helps Crowdfunding

1.    The Magic of Decentralization: Startups will no longer need to rely on any platform or combination of platforms to raise funds, and will no longer be beholden to the rules, regulations, and whims of the most popular crowdfunding platforms on the internet. Literally any project has a chance of getting visibility and getting funded. Decentralization also eases the problem of fees: while blockchain upkeep does cost a bit of money, it cuts back drastically on transaction fees, making crowdfunding less expensive for creators and investors. [1]

2.    Tokenization: Instead of using crowdfunding to enable preorders of upcoming tangible products, blockchain could rely on asset tokenization to provide investors with equity or a similar form of ownership, for example through an Initial Coin Offering (ICO). That way, investors see returns proportional to the eventual success of the company. This could open whole new worlds of investment opportunity. Startups could save money on hiring by compensating employees partially in fractional ownership of the business, converting it into an employee-owned enterprise. Asset tokens become their own form of currency in this model, enabling organizations to do more, such as hiring professionals like marketers and advertisers. [1]

3.    High availability and immediate provision: Any project using a blockchain-based crowdfunding model can potentially get funded, and any person with an internet connection can contribute. Blockchain-based crowdfunders wouldn't have to worry about the fraud that has plagued modern-day crowdfunding projects; instead, contributors immediately receive fractional enterprise or product ownership. [1]

4.    Smart Contracts to Enforce Funding Terms: There are several ways in which blockchain-enabled smart contracts could provide greater accountability in crowdfunding. Primarily, these contracts would provide built-in milestones that prevent funds from being released without proof of a project or campaign's legitimacy. This would prevent large sums of money from being squandered by those who are either ill-intended or not qualified to be running a crowdfunding campaign in the first place. [2] A minimal sketch of this escrow logic appears below.
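To make the milestone idea concrete, here's a small Python model of the escrow logic such a smart contract would enforce. A real implementation would live on-chain (e.g., in a contract language like Solidity); treat this as an illustrative sketch, not a deployable contract.

```python
class MilestoneEscrow:
    """Toy model of milestone-gated crowdfunding escrow.

    A real smart contract would run on-chain; this Python sketch just
    illustrates the release-on-verified-milestone logic.
    """

    def __init__(self, goal, milestones):
        self.goal = goal
        self.milestones = list(milestones)   # fraction of funds per milestone
        self.balance = 0.0
        self.released = 0

    def contribute(self, amount):
        self.balance += amount

    def release_next(self, milestone_verified):
        """Release the next tranche only if the milestone proof checks out."""
        if not milestone_verified:
            raise ValueError("milestone not verified; funds stay in escrow")
        share = self.milestones[self.released]
        payout = self.goal * share
        self.balance -= payout
        self.released += 1
        return payout

escrow = MilestoneEscrow(goal=100_000, milestones=[0.3, 0.3, 0.4])
escrow.contribute(100_000)
print(escrow.release_next(milestone_verified=True))   # 30000.0 released
```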

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Read more articles at: https://medium.com/@banafa

References

[1] https://due.com/blog/a-new-era-of-crowdfunding-blockchain/

[2] https://www.disruptordaily.com/blockchain-use-cases-crowdfunding/


Silicon Catalyst Announces a New Startup Ecosystem for MEMS Led by Industry Veteran Paul Pickering and supported by STMicroelectronics
by Mike Gianfagna on 06-12-2020 at 10:00 am


A little over a month ago, I wrote about the substantial support that Silicon Catalyst and Arm were providing for chip startups. There have been many incubators for technology companies over the years. These organizations typically provide office space, some basic infrastructure, advisory help and sometimes access to seed capital. Silicon Catalyst is also an incubator, but there are some important differences in its model that make Silicon Catalyst a potent force to cultivate semiconductor innovation. See my prior post for more details on that.

The big news from Silicon Catalyst is that the incubator now offers a MEMS startup ecosystem alongside its chip startup ecosystem. STMicroelectronics has joined the Silicon Catalyst ecosystem, though its interests in innovation and participation aren't limited to MEMS. Nor does this announcement mark the beginning of Silicon Catalyst's MEMS support – the organization already has quite an array of technology available. Physical design is supported by Mentor's Tanner L-Edit; design analysis is covered by ANSYS; modeling and simulation by Soft-MEMS and Coventor; test by EAG Laboratories; design rule checking by Mentor; and design services by AMFitzgerald. And now the world-class MEMS capabilities of ST become part of this new and growing startup ecosystem. As was the case with Arm, ST is joining Silicon Catalyst as a Strategic Partner and an In-Kind Partner.

There's more news on the MEMS front from Silicon Catalyst. Paul Pickering, a recognized expert in MEMS technology, has joined the growing team of full-time operational leaders at Silicon Catalyst as Managing Partner of the MEMS startup ecosystem. Prior to joining Silicon Catalyst, Paul was Chief Revenue Officer of Micralyne Inc., based in Edmonton, Alberta, Canada. Micralyne is a world-renowned manufacturer of microfabricated and MEMS products for the communications, energy, life sciences and transportation markets. In 2019, Micralyne became part of Teledyne Technologies. Paul also served as EVP of sales and marketing for both Exar Corporation and Xpedion Design Systems, a venture-backed EDA company that was acquired by Agilent in 2006. Paul co-founded two start-up companies and has consulted with numerous other small and large technology companies. He has been associated with Silicon Catalyst since 2015.

I had a chance to speak with Paul recently. He explained that a MEMS, or micro-electromechanical system, is essentially a device with mechanical and electronic components miniaturized to dimensions similar to those used to build integrated circuits. This technology allows the integration of sensors (e.g., pressure, temperature, air flow) with the microelectronics that process the information those sensors gather.

Paul estimated there are 30 – 100 MEMS devices in a typical cell phone and over 100 MEMS devices in a late model automobile. The edge devices that comprise IoT systems have made MEMS truly ubiquitous. MEMS do leverage silicon processing technology but also require some unique fabrication and assembly enhancements. Consider that testing an integrated circuit requires digital stimulus but testing a MEMS device requires precise physical input.

This adjacency makes the design and manufacturing of MEMS devices a natural extension for semiconductor design and manufacturing. Paul explained that there is a lot of university research on MEMS structures and processing methods. This work has spawned substantial startup activity in the MEMS area and many of those startups are involved with Silicon Catalyst.

After my conversation with Paul, the extension of Silicon Catalyst’s chip ecosystem to MEMS started to make a lot of sense. Paul went on to say, “MEMS technology has become an important enabler for advanced chip designs across many markets. The research and startup activity associated with MEMS is quite robust, and I’m delighted to be leading the efforts to nurture these startups at Silicon Catalyst.”

Pete Rodriguez, CEO of Silicon Catalyst also commented, “I am pleased to bring a well-known industry expert like Paul Pickering on board to lead our new MEMS startup incubator in addition to welcoming STMicroelectronics as a new Strategic and In-Kind Partner. These developments help enable our mission of accelerating business growth for startups in the semiconductor market. The addition of ST’s market leading MEMS capabilities and Paul’s technology and management skills will expand our reach into rapidly evolving innovations in the sensor and actuator markets.”

STMicroelectronics also commented. According to the press release, "Innovation through silicon is driving advancements in technology. Hardware development is challenging, which is why Silicon Catalyst plays a key role in enabling silicon start-ups to develop their technology and fueling the new cycle of semiconductor innovation," said Kirk Ouellette, Vice President Strategic Marketing and Strategy Development, STMicroelectronics. "ST has a strong collaborative R&D and industrialization culture, which makes a perfect fit with Silicon Catalyst. As both a Strategic and In-Kind Partner, ST looks forward to providing guidance and resources for start-up partners as well as gaining access to cutting-edge silicon innovation."

If you are building a startup in the MEMS or chip area, Silicon Catalyst will be conducting a Fall screening review of all applicants. The deadline for submissions is July 6, 2020 and you can start the Silicon Catalyst application process here.

Also Read:

Starting a Chip Company? Silicon Catalyst and Arm Are Ready to Help

Silicon Catalyst Fuels Worldwide Semiconductor Innovation

Webinar: Investing in Semiconductor Startups


Webinar on Methods for Monte Carlo and High Sigma Analysis
by Tom Simon on 06-12-2020 at 6:00 am


There is an old saying popularized by Mark Twain: "There are three kinds of lies: lies, damned lies, and statistics." It turns out that no one can say who originated the phrase, yet however you might feel about statistics, they play an important role in verifying analog designs. The truth is that there are large numbers of process parameters that can vary between chips and within a single chip. As much as foundries try to maintain consistency, there are variations that can affect chip performance and yield. It is absolutely necessary for project teams to understand the effects of these variations so they can determine product behavior and yield. Thus, statistical analysis becomes extremely important in the design process.

MunEDA is offering a webinar that reviews variation analysis methods and dives deeper into how they can be used efficiently to give designers what they need to ensure proper design performance. Michael Pronath, MunEDA Vice President of Products and Solutions, provides a cogent introduction to and summary of variation analysis methods. He starts with an overview that includes PVT corner analysis, Monte Carlo (MC) sigma-to-spec/Cpk, MC pass/fail yield estimation, Worst-Case Analysis (WCA) for yield optimization, design centering, high-sigma analysis and hierarchical verification.
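For readers unfamiliar with the sigma-to-spec and Cpk figures mentioned above, here's a short Python example of the standard computation on made-up Monte Carlo samples. It illustrates the metrics themselves, not MunEDA's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up MC samples of a performance metric (e.g., delay in ns).
samples = rng.normal(loc=1.00, scale=0.02, size=10_000)

LSL, USL = 0.90, 1.08            # spec limits (illustrative)
mu, sigma = samples.mean(), samples.std(ddof=1)

# Sigma-to-spec: distance from the mean to the nearest spec limit,
# measured in standard deviations. Cpk is that distance divided by 3.
sigma_to_spec = min(USL - mu, mu - LSL) / sigma
cpk = sigma_to_spec / 3.0

print(f"mean={mu:.3f}  sigma={sigma:.4f}")
print(f"sigma-to-spec={sigma_to_spec:.1f}  Cpk={cpk:.2f}")
```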

MunEDA's WiCkeD Monte Carlo Analysis (MCA) comes with many features that make understanding the results much easier. The results viewer shows the MC results and sigma levels against an ideal Gaussian fit. Complete statistical information is available for each design parameter in an easy-to-digest interface. MunEDA's quantile plot shows how well the results fit various parametric estimates; from it, it is easy to see whether the tails are long or symmetrical. For each performance value, such as slew, users can look at the parameter influence analysis to see how sensitive it is to the process parameters. To help identify the source of sensitivities, a view of hierarchical MC sensitivities goes from block level down to device or parameter level.

The real substance of the webinar is in the descriptions of the advanced Monte Carlo methods offered in MunEDA's WiCkeD. We all know the killing issue with MC: run brute force, it needs huge numbers of simulations to produce meaningful results. For high-sigma designs the number of runs required becomes astronomical. Over the last few decades tremendous progress has been made in devising methods to get meaningful results with much less time and fewer resources.
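To see why brute-force MC explodes at high sigma, consider a rough rule of thumb (my own illustration, not from the webinar): estimating a failure probability p with about 10% relative error takes on the order of 100/p samples, because the relative error of the MC estimator scales as 1/sqrt(N·p).

```python
from scipy.stats import norm

# Rule of thumb: estimating failure probability p with ~10% relative
# error needs roughly N = 100 / p brute-force MC samples.
for n_sigma in (3, 4, 5, 6):
    p_fail = norm.sf(n_sigma)          # one-sided Gaussian tail probability
    n_runs = 100 / p_fail
    print(f"{n_sigma} sigma: p_fail={p_fail:.2e}, ~{n_runs:.1e} simulations")
```

At 6 sigma the tail probability is about 1e-9, so brute force needs on the order of 1e11 simulations – hence the value of the smarter sampling methods the webinar covers.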

Michael has a deep understanding of the methods and technology, so hearing him discuss the various approaches available in the MunEDA product line is fascinating and very intelligible. He goes through each of the following: quasi-random MC sampling, sequential sampling, scaled sampling, and combining PVT corners with MC sampling. I will not venture to repeat what he covers here. However, it is worth pointing out that these techniques are extremely effective at saving analysis time by dramatically reducing the number of simulation runs needed.

Methods such as worst case analysis can even provide better results than what can be achieved with pure sampling methods. The methods also scale well for larger designs. Of course, MC was first used on memory cells, which were used many times in a single design, and were easy to simulate on their own. Now, much larger analog blocks and designs must be analyzed for yield and performance out to high sigma values because of high production volumes and high reliability applications.

This webinar really does a nice job of covering the available methods for statistical analysis – maybe even well enough to soothe those who have apprehensions about statistics. The webinar is scheduled for June 30th at 9AM Pacific Time. Be sure to register if you have an interest in this topic.

Also Read:

Webinar on Tools and Solutions for Analog IP Migration

56th DAC – In Depth Look at Analog IP Migration from MunEDA

Free Webinar: Analog Verification with Monte Carlo, PVT Corners and Worst-Case Analysis