Ford in DC Refining Autonomous
by Roger C. Lanctot on 11-11-2018 at 7:00 am

When cities put on a press event to announce they are welcoming a company to town to test autonomous vehicles within the city limits, the news is greeted with polite interest and some trepidation – as it was yesterday in Washington, D.C. There is an “oo-ah awesome” high-tech buzz immediately tempered by a “Why?” buzzkill.

In the case of Ford’s announcement in Washington, the spinmeisters got directly to the point. The introduction of autonomous Ford vehicles – 5-10 at first early next year, in advance of a full-on rideshare service fleet in 2021 – is intended to create jobs and re-training opportunities throughout the District’s eight Wards.

The press event included substantial representation and participation from the DC Infrastructure Academy, which provides employment opportunities and job training for infrastructure-related jobs. DC Infrastructure Academy will help Ford’s effort by training vehicle operators and technicians. Additionally, Ford will open an autonomous vehicle terminal in Ward 5, and the company says it will work to train residents for auto technician careers that could involve self-driving vehicles in the future.

According to Ford, the training will be through courses developed by Excel Automotive in Ward 7 and Ford’s Automotive Career Exploration program with support from local dealers Chesapeake Ford Truck, DARCARS and Sheehy Ford of Marlow Heights. The involvement of dealers was an especially nice touch by Ford.

By emphasizing job creation, Ford and the DC leadership short-circuited the knee-jerk job-killing conversation associated with robo-cars. Better still for Ford, it diverted attention from the fact that the announcement will do nothing in the short-term to ease the traffic congestion in the city.

In essence, Ford is announcing that it is commencing its data gathering activities to prepare for autonomous operation in the city. The Ford vehicles will be nothing more than surveyors/mappers of the city – an operation already begun by Ford’s Argo team. It means that in the short-term Ford vehicles will be adding to the general glut of DC traffic.

Ford arrives in the wake of a report published by the National Capital Region Transportation Planning Board – “Visualize 2045” – which anticipates a 46% increase in congestion in the Washington, DC area by 2045 and offers a $291B plan to mitigate the impact of that demand.

Quicker solutions are being sought by the DC Council, according to reporting by WAMU. The Council is considering:

  • Banning vehicle right turns on red at more than 100 intersections in the downtown business district and near school zones and cycle tracks within the next 18 months;
  • Eliminating areas where two lanes of traffic can turn left at the same time. The city has already eliminated 15 of those intersections and plans four more by the end of 2018;
  • Doubling protected bike lanes from 10 to 20 miles and accelerating the construction of a dozen of those projects in the next three years;
  • Adding “hardened” medians to slow vehicles turning left, especially at intersections with a large number of vehicles and pedestrians;
  • Expanding the District Department of Transportation’s pick-up and drop-off zones for ride-hailing vehicles and delivery to help reduce the amount of stopping in bike lanes and crosswalks. Five new zones will be added in places like the Wharf and 14th Street;
  • Reducing speed limits in the city from 25 miles per hour to 20.

Robo cars from Ford (with safety drivers) on DC streets will join a panoply of transportation options which includes scooter and bike share operators (Bird, Lime, Skip, Jump, Spin and, the latest entrant, Lyft), along with car share companies: Maven, Car2Go, ReachNow and ZipCar. (Enterprise RideShare departed DC earlier this year.) DC can already boast several transportation-related firsts, not all good.

  • DC claims to be the first city to offer Starship delivery bots.
  • DC claims to have had the first shared scooter fatality in the U.S. – resulting from a crash with an SUV.
  • DC claims to be the first city to get Lyft’s shared scooter offering. (Lyft acquired Capital Bikeshare earlier this year.)

DC is the second city to get autonomous Fords, following Miami. Ford’s autonomous vehicles are also operating in Detroit and Pittsburgh.

Ford and DC are taking advantage of the lack of autonomous vehicle regulations in either the District or the country. Washington, D.C., essentially stole a march on neighboring states Maryland and Virginia, both of which are angling for autonomous testers, but DC is first in the area to put such vehicles on public roads.

One would have thought the lack of autonomous vehicle regulations might have stimulated some safety-advocate outrage at the open-air press conference held on the Wharf in Southwest DC. The Insurance Institute for Highway Safety is based just across the river in Arlington, Va., and the headquarters of the U.S. Department of Transportation, along with the offices of a host of lobbyists, were within walking distance of the event. Resistance to driverless cars was not represented. Perhaps resistance is futile when city representatives are seeking any and all solutions to a monumental traffic congestion problem increasingly framed by rising fatalities.

DC traffic is unique thanks to the architect of its streetscape, Pierre Charles L’Enfant, who gave the city 22 traffic circles, creating some unusual traffic management challenges. Of late, traffic fatalities involving pedestrians, bicyclists and buses, in particular, have been on the rise.

In sum, kudos to the Ford team for dodging the job-killer robocar angle and sidestepping protests over dangerous driverless cars. Treating the onset of robocars as a job creation and retraining opportunity is a novel and admirable approach – and one likely to be replicated elsewhere. Echoes of Uber’s fatal crash in Tempe, Arizona, earlier this year were faint on the Wharf in Washington.


Webinar: NVIDIA Talks High Quality Metrics in Power Integrity Signoff
by Bernard Murphy on 11-09-2018 at 12:00 pm

There’s a familiar saying that you can’t improve what you can’t measure. Taking that one step further, the more improvement you want, the more accurately you have to measure. This becomes pretty important when you’re building huge designs in advanced technologies. Margins are a lot tighter all round and use-cases are massively more complex, potentially hiding all kinds of dangerous corners. In such cases, you really need to do a very comprehensive analysis across multiple variables to find the right bounding conditions and to avoid massive overdesign by managing corrections as surgically as possible. Join this webinar to learn how NVIDIA does just that using ANSYS RedHawk-SC’s elastic compute scalability and big data analytics.

REGISTER HERE for this webinar on November 28th, 2018 at 9AM PST

Summary
The availability of ubiquitous data and compute power to solve seemingly unsolvable problems is driving the artificial intelligence (AI) revolution in high tech today. Semiconductor chips for next-generation automotive, mobile and high-performance computing applications — powered by AI and machine learning algorithms — require the use of advanced 16/7nm systems-on-chips (SoCs), which are bigger, faster and more complex. For these SoCs, the margins are smaller, schedules are tighter and costs are higher. Faster convergence with exhaustive coverage is therefore imperative for first-time silicon success. A big data-enabled simulation platform that offers elastic scalability is required for enabling rapid design iterations to create a robust power grid design. Multivariable analytics and machine learning technologies are key for gaining valuable insights from the vast amount of simulation data to accelerate design closure.

In this webinar, leading semiconductor company Nvidia will discuss the limitations of traditional voltage drop analysis methodologies and share how ANSYS RedHawk-SC’s elastic compute scalability and powerful data analytics can be leveraged to accelerate next-generation SoC power integrity and reliability signoff. A new workflow using multivariable analytics, which considers grid criticality, timing criticality and simultaneous switching noise, is used for predicting the worst, local dynamic voltage drop (DvD) hotspots without running any transient simulation. This enables early detection of hotspots and offers feedback to the physical design team, making it possible to address design issues without impacting the tapeout schedule. The issues identified by this new flow were found to correlate well with vector-based dynamic voltage drop analysis with much faster turnaround time.
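To make the idea of multivariable hotspot prediction more concrete, here is a minimal sketch in Python of ranking instances by several criticality metrics instead of running a full transient simulation. The metric names, thresholds and equal weighting are illustrative assumptions on my part, not the actual RedHawk-SC workflow or API.

```python
# Hypothetical illustration of multivariable DvD hotspot ranking (not the RedHawk-SC API).
# Instances scoring high on grid criticality, timing criticality and switching activity
# at the same time are flagged as likely dynamic voltage drop hotspots for review.

from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    grid_resistance_ohm: float   # effective PDN resistance seen by the instance (grid criticality)
    timing_slack_ps: float       # setup slack; small slack means timing critical
    switching_current_ma: float  # peak current from simultaneously switching aggressors

def hotspot_score(inst, r_max=5.0, slack_max=100.0, i_max=10.0):
    """Combine normalized metrics into a single 0..1 criticality score (illustrative weighting)."""
    r = min(inst.grid_resistance_ohm / r_max, 1.0)
    t = 1.0 - min(max(inst.timing_slack_ps, 0.0) / slack_max, 1.0)
    i = min(inst.switching_current_ma / i_max, 1.0)
    return (r + t + i) / 3.0

instances = [
    Instance("u_alu/reg_42",  3.8, 12.0, 8.5),
    Instance("u_cache/ram_7", 1.2, 80.0, 2.0),
    Instance("u_serdes/ff_3", 4.5,  5.0, 9.1),
]

# Rank and report the most likely hotspots for a targeted follow-up analysis.
for inst in sorted(instances, key=hotspot_score, reverse=True)[:2]:
    print(f"{inst.name}: score = {hotspot_score(inst):.2f}")
```

In a production flow the score would come from analytics over far richer per-instance data, and the shortlist would then feed targeted vector-based verification, as the abstract describes.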

Speakers:
Kritika Garg, Nvidia
Currently working on IR drop signoff flow/methodology at Nvidia Corporation in Santa Clara, Kritika is an alumna of the University of Southern California with an M.S. degree in electrical engineering focused on digital VLSI system design and CAD. She has five years of experience in the semiconductor industry, previously worked as a block implementation design engineer with RTL-to-GDSII responsibilities at NXP Semiconductors (formerly Freescale Semiconductor) in India, and interned in CAD methodology with the Silicon Engineering Group at Apple in Cupertino, California.

Sooyong Kim, ANSYS
Sooyong is a senior area technical manager with responsibilities for the new big data platform ANSYS RedHawk-SC and worldwide customer engagement. After joining ANSYS in 2008 as part of Apache Design, he has held various positions in field operations. Previously, he worked at Cadence Design Systems. He received a B.S.E.E. and an M.S.E.E. from Rensselaer Polytechnic Institute, Troy, New York.

About ANSYS
If you’ve ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge, or put on wearable technology, chances are you’ve used a product where ANSYS software played a critical role in its creation. ANSYS is the global leader in engineering simulation. We help the world’s most innovative companies deliver radically better products to their customers. By offering the best and broadest portfolio of engineering simulation software, we help them solve the most complex design challenges and engineer products limited only by imagination.


Coupled Electro-thermal Analysis Essential for PowerMOS Design
by Tom Simon on 11-08-2018 at 12:00 pm

Power device designers know that when they see a deceptively simple pair of PowerMOS device symbols in the output stage of a power converter circuit schematic, they are actually looking at a massively complex network of silicon and metal interconnect. The corresponding physical devices can have a total device width (W) on the order of meters, making it impossible to treat them as a single device. Instead, PowerMOS devices have to be analyzed as hundreds or perhaps thousands of smaller devices, connected by a complex web of metallization. The first and most significant effect of this is non-uniform switching, with gate voltage varying across the device during turn-on. This in turn leads to Ids concentrating in some areas and not others.

Transient electrical analysis is capable of showing detailed gate voltages and current densities during the transitions, when devices typically experience their highest power draw. However, there is a second dimension to the problem that influences the electrical analysis – intrinsic device behavior is temperature dependent. As a result, device current rises as temperature rises and, conversely, temperature rises as more current flows. In the worst case, this vicious circle may lead to temperature-related device failure when metal melts and shorts out the junction.
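To see why the coupling matters, here is a toy lumped electro-thermal loop in Python. The coefficients are invented for illustration and this is in no way Magwel's solver; it simply shows how a positive temperature coefficient of current can drive either convergence or runaway.

```python
# Toy lumped electro-thermal loop with made-up coefficients (not Magwel's PTM-ET solver).
# It shows the positive feedback: more current -> more power -> higher temperature ->
# (for this assumed device model) more current, until either convergence or runaway.

T_AMB   = 25.0    # ambient temperature, C
R_TH    = 40.0    # junction-to-ambient thermal resistance, C/W (assumed)
V_DS    = 2.0     # drain-source voltage during the transition, V (assumed)
I_0     = 1.0     # channel current at ambient, A (assumed)
ALPHA   = 0.02    # fractional current increase per degree C (assumed, device dependent)
AL_MELT = 660.0   # melting point of aluminum metallization, C

T = T_AMB
for step in range(200):
    current = I_0 * (1.0 + ALPHA * (T - T_AMB))   # temperature-dependent current
    power = V_DS * current                         # dissipated power in this region
    T_new = T_AMB + R_TH * power                   # steady-state temperature for that power
    if T_new > AL_MELT:
        print(f"step {step}: {T_new:.0f} C -> thermal runaway (metal melt)")
        break
    if abs(T_new - T) < 0.01:
        print(f"converged at {T_new:.1f} C, {current:.2f} A")
        break
    T = T_new
```

Running an electrical-only analysis at a fixed temperature would miss this loop entirely, which is the point of solving the two domains concurrently.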

Thermal dynamics depend on the properties of the die, the surrounding package and even the board. Uncoupled electrical and thermal analysis will have difficulty converging on an accurate solution at each step during circuit operation. To help shed light on this phenomenon, Magwel has a write-up of a test case that illustrates how concurrent electro-thermal analysis of PowerMOS devices can predict thermal runaway. The interesting point in their write-up is how the package, specifically the shape of the Cu clip, affects where the damaging thermal problems may occur.

Magwel’s PTM-ET (Power Transistor Modeler – Electro-Thermal) uses thermal properties, thermal boundary conditions, solver based metal extraction and foundry supplied intrinsic device models to drive its concurrent electro-thermal solver to report and visualize voltage, current density and temperature across a PowerMOS device given initial conditions and stimulus.

The Magwel article is informative because it shows a concrete example where temperature rise induces increased current. On a time scale of a few hundred milliseconds after gate voltage is applied, the simulation shows temperatures reaching past the melting point of aluminum. The PTM-ET Field View offers easy-to-interpret output for each simulated time step. The write-up is available on the Magwel website.

About Magwel
Magwel® offers 3D field solver and simulation based analysis and design solutions for digital, analog/mixed-signal, power management, automotive, and RF semiconductors. Magwel® software products address power device design with Rdson extraction and electro-migration analysis, ESD protection network simulation/analysis, latch-up analysis and power distribution network integrity with EMIR and thermal analysis. Leading semiconductor vendors use Magwel’s tools to improve productivity, avoid redesign, respins and field failures. Magwel is privately held and is headquartered in Leuven, Belgium. Further information on Magwel can be found at www.magwel.com


Emulation from In Circuit to In Virtual
by Bernard Murphy on 11-08-2018 at 7:00 am

At a superficial level, emulation in the hardware design world is just a way to run a simulation faster. The design to be tested runs on the emulator, connected to whatever test mechanisms you desire, and the whole setup can run many orders of magnitude faster than it could if the design was running inside a software simulator. And this is indeed how emulators are often used—to speed up big simulations—whether you are putting the whole design in the emulator or using the emulator to speed up some part of the design, while the rest continues to run in the software simulator (generally known as simulation acceleration). Continue reading “Emulation from In Circuit to In Virtual”


Restoring Digital Trust – Can China Lead the Way?
by Bill Montgomery on 11-07-2018 at 12:00 pm


I read with interest the US Chamber of Commerce’s assessment of the Made in China (MIC) 2025 plan to transform the world’s most populous nation into an Advanced Manufacturing leader. MIC 2025 covers 10 strategic industries that China identifies as critical to economic growth in the 21st century, including next-gen information technology, aviation, rail, new energy vehicles and agricultural machinery.

The Chamber criticizes the MIC 2025 plan stating that it “leverages the power of the state to alter competitive dynamics in global markets in industries core to economic competitiveness.” The US Government report concludes that “China’s emerging legal and regulatory framework governing information technology pose serious challenges for global connectivity. Cloud computing and other digital technologies that require a seamless flow of data are already changing the nature of numerous industries, including manufacturing.” Relevant points all, but one has to wonder whether China’s motivation is solely about leveraging competitive advantage on what many consider an already unlevelled playing field, or is there something else going on here? Something far more important in the total scheme of things.

Is it possible that what’s really driving China – or at least its secondary goal – is to abandon products that leave their nation vulnerable to foreign digital surveillance due to reliance on technology and protocols (like PKI) that were “not invented here” and that have proven to be highly vulnerable to outside threats?

Because, let’s face it: everything digital is broken and every nation seems to be hacking and spying on its trade competitors, its enemies, and even its allies. From the Snowden revelations citing American digital misconduct, to Russians hacking John Podesta’s email and influencing the 2016 US election, to the US encouraging the world to ban Chinese manufacturer Huawei’s technology for fear of backdoors…it’s like we’re living inside a great big video game.

Something has to change, and maybe China is – deliberately or accidentally – leading the way.

Consider the following. As noted in the Chamber of Commerce document, China is pursuing standards that diverge from existing international ones, and is investing heavily in manufacturing its own semiconductor chips. Ask yourself, why? My bet is that the Government of China wants new standards because it can’t trust the ones that are pervasive today. Let’s be honest. PKI is an open book that isn’t protecting any government or business or person that relies on it for security. Chips are vulnerable to side-channel attacks like Spectre and Meltdown, TLS isn’t secure any more – maybe it never was – and the prevailing view within the cryptographic community is that the prime numbers which are the very foundation of RSA will soon be discovered.

To quote Scotland’s Napier University Professor of Cryptography, Bill Buchanan, “One day, and I think it might be soon, we will wake up and RSA will be cracked. Either it will be super computers cracking the prime numbers, or it will be quantum computers, but when it happens there will be no proper identity on the Web and all the tunnels will be broken.”
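For readers who want to see why "discovering the primes" is fatal, here is a toy example in Python with a deliberately tiny textbook modulus. With real 2048-bit keys the trial-division step is infeasible today, which is exactly the assumption a factoring breakthrough (classical or quantum) would remove.

```python
# Toy illustration (tiny numbers only): once the primes behind an RSA modulus are known,
# the private exponent follows directly, which is why a practical factoring break would
# void RSA-based identities and TLS tunnels overnight.

n, e = 3233, 17            # public key: n = 61 * 53, e = 17 (textbook example)

# "Crack" the modulus by trial division -- trivial here, infeasible for 2048-bit n today.
p = next(d for d in range(2, n) if n % d == 0)
q = n // p

phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)    # encrypt with the public key
print(pow(cipher, d, n))   # decrypt with the recovered private key -> prints 65
```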

At the risk of being repetitive, something has to change and quickly.

In MIC 2025 the Chinese government states that it needs to deploy infrastructure that is:

  • Secure and Controllable
  • Secure and Trustworthy
  • Secure and Reliable

    China is betting on the adoption of the standardized SM9 cryptographic scheme to help achieve its goals. SM9 is certificate-less technology that is, for all intents and purposes, Identity-Based Encryption (IBE). And while IBE has long been used to successfully secure email (and not much else), something has changed in the IBE world, and that change is reflected in a patent granted by the US patent office in April 2014 and by the China patent office in September 2018. New, improved IBE (branded VIBE) now authenticates, meaning it verifies and validates the sender of every message, be it from a person or thing. And though this enhancement to the SM9 standard is not yet certified for use in China, interest in the technology is growing rapidly as Asia-based entities are gaining an understanding as to how VIBE can be deployed to deliver exactly what the People’s Republic of China is seeking – Controllable, Trustworthy, Reliable Security.

    Widespread deployment of VIBE-inside Hardware Security Modules, VIBE-inside TLS, VIBE-inside chips and VIBE-inside SIMs would allow China to create networked Digital Trust Centres that would make it impossible for any other nation to digitally invade or spy on Chinese communication. Only people and devices registered within China Trust Centres could communicate with one another. Email phishing would be impossible, man-in-the-middle attacks would disappear, and the nation would have a digital barrier in place that would be impenetrable to outside threats, including surveillance. Graphically, it might look something like this.


    And if China can restore domestic Digital Trust, why can’t other countries do the same thing? I envision a world where each nation has its own “closed” digital infrastructure where the only communication possible is from authenticated sources – defined as entities (people or things) registered in each country’s Trust Centre(s). Be mindful that merely authenticating email could eliminate over 90% of cyberattacks, so we have to wonder why we’re still waiting on this advancement.
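As a purely hypothetical sketch (not the VIBE or SM9 protocol itself), the "closed, authenticated-sources-only" policy described above can be expressed in a few lines: a national Trust Centre acts as the registry, and anything from an unregistered identity is rejected outright.

```python
# Hypothetical policy sketch of the Trust Centre idea -- not VIBE or SM9 cryptography.
# Only identities registered with the national Trust Centre may exchange messages;
# everything else is dropped, which is what shuts down spoofing and phishing.

class TrustCentre:
    def __init__(self, country):
        self.country = country
        self.registered = set()

    def register(self, identity):            # e.g. an email address or a device serial
        self.registered.add(identity)

    def is_trusted(self, identity):
        return identity in self.registered

def deliver(trust_centre, sender, recipient, body):
    """Deliver a message only if both endpoints are registered (authenticated)."""
    if not trust_centre.is_trusted(sender):
        return f"REJECTED: unknown sender {sender}"
    if not trust_centre.is_trusted(recipient):
        return f"REJECTED: unknown recipient {recipient}"
    return f"DELIVERED to {recipient}: {body}"

tc = TrustCentre("CN")
tc.register("factory-gw-001")
tc.register("ops@utility.example")

print(deliver(tc, "factory-gw-001", "ops@utility.example", "telemetry ok"))
print(deliver(tc, "attacker@evil.example", "ops@utility.example", "click this link"))
```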

    Permission-based communication among nations could be granted, and in cases where the need for digital surveillance becomes a national security matter, nations could grant such access through legal or other arrangements common among allies and sometimes, even available from rival nations.

    Deployments of VIBE SM9-enabled infrastructure and applications are now being tested (piloted), with most of the activity happening in China-friendly Singapore. And if the VIBE pilots in the works deliver on their promise, it’s highly conceivable that a large Asia-based company will help China create a digital bubble that is impervious to outside threats and will satisfy its requirements for Controllable, Trustworthy, Reliable security.

    While apparently not by design, China appears to be on the verge of restoring national digital trust. Nations globally need to take note, and if they are smart, take steps to secure digital trust in their own countries.


    Open-Silicon Embraces the Latest ISO 9001 Specification with Certification by SGS
    by Tom Simon on 11-07-2018 at 7:00 am

    A quality standard that stays static and is not itself targeted for continuous improvement is a standard that breaks one of the first tenets of quality. This is why the ISO 9001 specification has been updated several times since its introduction in 1987. The first version was fairly modest. The most recent version was released in September of 2015. It represents a significant change from the two prior versions, ISO 9001:2000 and ISO 9001:2008, which were largely similar except for the level of detail in the specification itself. Companies that are committed to quality are adopting the latest version. Open-Silicon, a SiFive company, has announced their ISO 9001:2015 certification, which means they have implemented the most up-to-date quality processes.

    It’s worth noting that the Geneva-based ISO does not perform certification itself; that is handled by a number of accredited certification bodies. Open-Silicon was certified by SGS, an ANAB-accredited inspection, verification, testing and certification body. ISO 9001 affects every aspect of a company’s operations. One of the interesting things about the most recent 2015 version is that it moves responsibility for the quality processes from a designated management representative and places it with the entire leadership of the organization.

    Another key new development in the specification is the inclusion of risk-based thinking, by adding formal risk analysis, which has supplanted the preventative measures section of the prior version. The new version has adopted the PDCA (Plan, Do, Check, Act) cycle and this is apparent in the sections that define the overall process.

    Open-Silicon has a business model that covers a wide range of activities, from design planning and specification all the way through managing manufacturing. By achieving certification, they have shown that they have reviewed and modified all their internal processes with the goal of maintaining the highest level of quality. It’s important to understand that the ISO 9001 certification also applies to their sales process, customer support, finance and every aspect of their business. This can only improve their ability to meet customer expectations over the near and long term.

    Of course, Open-Silicon was already certified under the prior ISO 9001:2008 specification, so this updated certification represents an evolution. As is often the case, improving quality through this process is not an add-on, but rather a structural change that is undertaken throughout the entire organization. For every process in the company, ISO 9001:2015 looks at the inputs and outputs, and also examines the sources and receivers. At each point, an effort is made to look for ways to monitor and improve the process.

    The semiconductor business is one that requires precision and accuracy. Consumers take the high performance and high reliability of electronics products for granted. This has happened despite the massive growth in complexity and the tighter tolerances on design and manufacture. It’s a testament to the effectiveness of ISO 9001 that the industry has been able to achieve such remarkable results. Open-Silicon is at the forefront of this effort, with their top down approach to improving quality. There is more information about their ISO 9001:2015 certification on their website.


    Home Automation IoT Company Cuts ASIC Testing Costs
    by Daniel Nenni on 11-06-2018 at 11:00 am

    Customer Case Study
    digitalSTROM develops smart home automation solutions providing users with superior comfort and a whole new style of living. Based on a proprietary ASIC and software, digitalSTROM’s solutions connect electrical household appliances through existing power lines and enable an intelligent home via light switches, free speaking using Amazon Echo, and other apps.

    Challenges
    At the heart of digitalSTROM’s home automation solutions is a controller chip that is added to each electricity switch. The high-voltage chip manages the communication over the power line and connects directly to the mains. Manufacturing the custom chip and delivering an end-to-end quality solution to the market presented digitalSTROM with several challenges:

    • Manufacturing costs. Delivering a consumer-market solution required minimizing manufacturing costs – including the costs of testing the high-voltage chip. According to Nuno Pinto, Head of Production, cost reduction is a top priority for digitalSTROM, particularly at the early stages of a new mass market product rollout.
    • Time to market. digitalSTROM could not afford any manufacturing delays due to its commitment to distributors and resellers and the need for timely availability of products. A key objective was risk reduction and ensuring that delivery times would be met, even if problems were detected during the manufacturing process.
    • Quality. The reliability and high quality of the chip could not be compromised, due to its critical role in managing the home automation network.
    • Unpredictable volumes. As a small company operating at the forefront of a new market, digitalSTROM could not forecast production quantities with high certainty. It needed a flexible manufacturing solution that would enable handling peak demand on short notice, without the need to invest in high stock volumes.

    Solution
    DELTA’s expert ASIC manufacturing services enabled digitalSTROM to address all challenges, while keeping manufacturing costs down and guaranteeing on-time delivery.

    Design for testability. Starting from the early chip design stages, DELTA’s test engineers collaborated closely with digitalSTROM’s ASIC design team to ensure that the chip design would support efficient and cost-effective testing.

    Cost reduction via multisite testing. DELTA’s in-house microelectronics test facility runs multisite testing with multiple chips validated in parallel, thereby speeding up test time and cutting costs. Both wafer tests and final tests (after packaging) are run, validating various chip functions such as wireless, CPU, memory and power.
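The cost lever here is simple arithmetic: testing several dies in parallel divides tester-seconds per die by roughly the site count, minus some multisite overhead. The figures below are illustrative assumptions, not DELTA's or digitalSTROM's actual numbers.

```python
# Illustrative numbers only: why multisite testing cuts cost. Tester time is billed by
# the hour, so seconds-per-die is the cost driver.

test_time_s = 12.0          # single-site test time per die, seconds (assumed)
sites = 4                   # dies tested in parallel (assumed)
multisite_overhead = 0.15   # extra time from shared tester resources (assumed 15%)
tester_cost_per_hr = 90.0   # fully loaded tester cost, $/hour (assumed)

time_per_die_single = test_time_s
time_per_die_multi = test_time_s * (1 + multisite_overhead) / sites

for label, t in [("single-site", time_per_die_single), (f"{sites}-site", time_per_die_multi)]:
    cost = tester_cost_per_hr * t / 3600.0
    print(f"{label}: {t:.2f} s/die, ${cost:.4f}/die")
```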

    Failure analysis. The DELTA microelectronics failure analysis lab runs in-depth tests on failed ICs using optical microscopes and X-Ray equipment to quickly identify the root cause of problems. DELTA’s materials experts provide corrective action back to suppliers for improved yield and reduced defect rates.

    On-time manufacturing and delivery. By combining on-premise equipment, in-house expert teams, and tight business relations with fabs, DELTA has helped digitalSTROM eliminate manufacturing delays. For example, when a problem was discovered during wafer testing, the root cause was traced to contamination and a quick fix was implemented by changing the top mask.

    Flexibility and rapid support. DELTA’s flexible support and rapid response time enables digitalSTROM to quickly ramp up production, focus on improving yield and cut costs. According to Nuno Pinto, Head of Production at digitalSTROM, “when there’s a problem in the supply chain, DELTA has a good sense of urgency and can overcome front-end issues to avoid delays. Their teams can close the loop quickly and recover from potential bugs. Using their good connections with the assembly house, they have provided us with additional wafers to keep production running and getting everything done on time.”

    “DELTA’s advantage is that everything is under the same roof – wafer testing, ASIC qualification and failure analysis. This saves precious time compared to working with multiple vendors.” – Nuno Pinto, Head Production and Supply Chain, digitalSTROM

    Learn how to manage an end-to-end ASIC supply chain

    About DELTA Microelectronics
    With 40 years of experience, DELTA Microelectronics is a European leader in ASIC services for the semiconductor industry. DELTA’s comprehensive services include ASIC design, layout, test development, wafer supply, production testing, package development and assembly, components supply, logistics and supply chain management. DELTA’s development and production facilities are based in Denmark and the UK, with service partners in Europe and Asia. For more information, visit asic.madebydelta.com.


    Why Did Ambiq Micro Select HiFi-5 DSP IP for Next Generation MCU?
    by Eric Esteve on 11-06-2018 at 6:00 am

    Ambiq Micro has built a family of voice-processing MCUs dedicated to battery-powered, energy-sensitive systems, supporting mobile applications like wearables. The company is facing two strong challenges: supporting computationally intensive processing (NN-based far-field) and speech recognition algorithms, while offering “ultra-low-power” devices. When Ambiq claims to build ultra-low-power devices, it really is the case: the company has developed a unique and proprietary technology, the Subthreshold Power Optimized Technology (SPOT™) platform (the SPOT architecture uses transistors biased in the subthreshold region of operation).

    These two challenges are clearly in contradiction – intensive processing built into energy-sensitive devices – and that sounds like the perfect definition of energy efficiency! According to Aaron Grassian, VP of marketing, Ambiq Micro, “Porting the HiFi 5 DSP to Ambiq Micro’s SPOT platform enables product designers, ODMs and OEMs to take the most advantage of technology from audio software leaders like DSP Concepts and Sensory by adding voice assistant integration, command and control, and conversational UIs to portable, mobile products without sacrificing quality or battery life.”

    The Tensilica HiFi 5 DSP core is the new generation of voice-dedicated DSP, so it is interesting to look at the main changes from the previous HiFi 4 DSP core. MAC capability has been doubled, leading to 2X audio (pre- and post-) processing. For NN processing, the HiFi 5 offers 4X MAC capability versus HiFi 4, including 32 16×8 or 16×4 MACs per cycle. Moreover, the new HiFi NN library offers a highly optimized set of library functions commonly used in NN processing (especially speech). And software backward compatibility with the complete HiFi product line is guaranteed, totaling over 300 HiFi-optimized audio and voice codecs and audio enhancement software packages.
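As a back-of-the-envelope check on what those per-cycle figures mean, the sketch below converts MACs per cycle into GMAC/s at an assumed clock frequency; the clock is an assumption for illustration, not an Ambiq or Cadence specification.

```python
# Back-of-the-envelope only: the clock frequency is an assumed figure for illustration.
macs_per_cycle_hifi5 = 32                          # 16x8 (or 16x4) MACs per cycle, per the article
macs_per_cycle_hifi4 = macs_per_cycle_hifi5 // 4   # derived from the article's "4X versus HiFi 4" claim
clock_hz = 200e6                                   # assumed edge-device clock

for name, mpc in [("HiFi 4 (derived)", macs_per_cycle_hifi4), ("HiFi 5", macs_per_cycle_hifi5)]:
    print(f"{name}: {mpc * clock_hz / 1e9:.1f} GMAC/s at {clock_hz / 1e6:.0f} MHz")
```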

    Such a voice-dedicated DSP has to support voice pre-processing functions like beamforming (or spatial filtering), noise reduction and acoustic echo cancellation (AEC), as well as speech recognition: feature extraction, NN processing layers and language decoding. As a side note, the Cadence Tensilica HiFi DSP must be a pretty good IP core, as the company claims 95 HiFi DSP licensees worldwide and ships 1 billion cores annually (probably somewhat fewer ICs, since several cores can be integrated in the same chip).

    Clearly, there is a dramatic rise in popularity of digital home assistants (Alexa and the like) that feature voice UI experiences, leading to a new wave of innovation in far-field processing algorithms and in neural network-based speech recognition. It’s now clear that the processing power has to be in the edge device and not in the cloud, and there are good reasons to support this architecture. Consumers demand lower latency, increased privacy and more natural voice UI interactions, and the processing workload on the device has to increase rapidly to keep the end user happy.

    For OEMs, too, voice-controlled user interfaces are becoming more important, even if many of today’s in-car voice UI infotainment platforms end up training the driver (as opposed to the other way around). Consumer adoption of voice assistant technology in the home is encouraging car manufacturers to embrace voice. Moreover, automotive voice assistants require local voice recognition, pushing again for more processing power in the edge device. In fact, the cloud is not always available, and, again, latency is a concern for the consumer experience.

    If we agree that speech recognition should be done locally, how do we enable this trend? First, you need more advanced NN algorithm techniques and high-performance DSP cores available at the edge. But you also need lower-precision NN memory weights to reduce the memory size and bandwidth requirements, to build an economically viable and energy-efficient edge device. If you can meet these conditions, you can address privacy concerns and low-latency demands and enable on-device speech recognition.

    For example, to meet power and memory bandwidth efficiency targets, the HiFi 5 offers native support for lower-precision weights (8-, 4-, 2- and even 1-bit), Viterbi decode support, and 8-bit SIMD element support for sorting, searching and string processing.
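The memory argument is easy to quantify. The sketch below shows the footprint of a hypothetical two-million-parameter speech model at the weight precisions the HiFi 5 supports natively; the model size is a made-up example.

```python
# Sketch of the memory saving from lower-precision NN weights. The parameter count is a
# made-up example; the low bit widths (8/4/2/1) are the ones the article says HiFi 5
# supports natively.

weights = 2_000_000  # parameters in a hypothetical keyword-spotting / speech model

for bits in (32, 16, 8, 4, 2, 1):
    megabytes = weights * bits / 8 / 1e6
    print(f"{bits:>2}-bit weights: {megabytes:6.2f} MB")
```

Smaller weights shrink not only the footprint but also the memory bandwidth per inference, which is where much of the energy goes in an always-listening device.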

    I don’t know if your kitchen looks like the one pictured above; if it does, you will probably count several Tensilica HiFi 5 DSPs at home, all located at the edge!

    By Eric Esteve from IPnest


    The Changing Face of IP Management
    by Alex Tan on 11-05-2018 at 11:00 am


    Aristotle once said “The whole is greater than the sum of its parts”. The notion of synergism echoes the importance of leveraging design IPs to the maximum extent with the rest of the system under development, in order to ensure a successful SoC design outcome in a shorter development cycle.

    SoC design cost and entry point
    For over a decade design IPs have increased in complexity and have steadily grown to become the starting point of most SoC designs today. Foundries such as TSMC have regularly disclosed data showing a steady increase in silicon-proven IP offerings to keep pace with frequent shifts in process technologies and design applications. As shown in figure 1, the number of itemized IPs has doubled to more than 16,000 over the years, accompanied by numerous process technology collaterals (200+ PDKs; 9,000+ technology files).

    On the other hand, many fabless companies, including top-performing startups, attempt to establish value differentiation in their products through innovations, leading to internal IP development efforts that eventually become candidates for patented technologies. Other design teams may bridge the gap between shorter design cycles and having marketable products by reducing the overall design risk and cost through a mix of externally proven interface IP blocks (such as SerDes, PCIe, etc.) and internally developed IPs containing their core technology.

    Based on recent Semico Research data, SoC design cost – including IP integration cost – is surging, prompting designers to exercise prudence in their design selections, such as choosing the right IPs, target technology nodes, the correct version of the PDK and the type of design implementation (FPGA vs. ASIC).

    IP stakeholders, design IP management and designHUB
    Designers often wear two hats: one as IP developers while designing new blocks, and the other as IP users or integrators. An ideal IP management solution should be capable of serving both usage types and bridging the gap between IP developers and IP users within a company. Most IP users want to be able to trace an IP to see its usage and the process nodes available, have access to the IP developers to get queries answered, compare IPs, and have access to open issues as well as the available documentation for the IP.

    Yet the IP developers – who build, check and populate the design building blocks – are often concerned about the support they have to provide for the IP. Managing and answering the constant queries on the IPs, after all, takes time. Other challenges for the IP developer include keeping the IP secure, and keeping track of the different versions of the IP and the data associated with each revision, the process nodes and the PDKs used. Moreover, with the IP information fragmented across multiple applications such as issue trackers, meeting minutes, documents, emails, etc., it becomes challenging to collate all the information in one location for easy access by the IP user.

    Given the rise of IPs and IP subsystems being used, it becomes more important to have a flexible IP ecosystem that can facilitate the growing needs and use models of semiconductor design companies – in building system-level designs as well as in the frequent auditing and updates driven by product refreshes.

    More importantly, every company has a large number of internal IPs which could be leveraged to build new SoCs. Unfortunately, a lack of confidence in internally developed IPs compels design teams within a company to look at alternatives, which can include either developing their own IP or buying another IP. Such a choice tends to affect the bottom line of the company.

    The correct solution would be to bridge the gap between the IP developer and user by providing an ecosystem wherein the IP user can qualify the IP easily and browse through the available documentation collated from different applications such as document control systems, emails, issue tracking systems, etc. Moreover, to reduce the support burden for the IP developer, it becomes necessary to have a knowledge base around every IP which users can consult to have their queries answered.

     

    designHUB is ClioSoft’s answer to the problems faced by design teams today. It is an enterprise IP management solution which addresses the needs of IP developers and users alike. As an IP ecosystem, designHUB bridges the gap between IP user and IP developer for internal IPs by enabling users to leverage a growing knowledge base of the IPs to resolve any issue they may have. It is easily configurable to meet the IP reuse requirements of most design companies and manages a complex matrix of IP attributes such as process nodes, nodlets, foundries, IP functionality, IP usage, etc. without intimidating the user. As a result, any user can provide a wide array of attributes to find and compare an IP. designHUB is also capable of tracking the variations of an IP through its various stages of evolution and has built-in analytics to report IP usage and its various nuances.
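As a rough illustration of the kind of attribute-based search such an ecosystem enables (this is a hypothetical sketch, not the designHUB API), an internal catalog might let a designer shortlist IPs by foundry, process node and silicon-proven status before digging into the documentation.

```python
# Hypothetical sketch of attribute-based IP search in an internal catalog (not ClioSoft's API).
# The records and attribute names are invented for illustration.

from dataclasses import dataclass

@dataclass
class IPRecord:
    name: str
    function: str
    foundry: str
    node: str
    version: str
    silicon_proven: bool

catalog = [
    IPRecord("serdes_pcie4_x4", "PCIe 4.0 PHY",   "TSMC", "7nm",  "2.1", True),
    IPRecord("serdes_pcie4_x4", "PCIe 4.0 PHY",   "TSMC", "16nm", "1.4", True),
    IPRecord("adc_12b_sar",     "12-bit SAR ADC", "GF",   "22nm", "0.9", False),
]

def find(catalog, **criteria):
    """Return IPs whose attributes match every supplied criterion."""
    return [ip for ip in catalog
            if all(getattr(ip, key) == value for key, value in criteria.items())]

for ip in find(catalog, foundry="TSMC", node="7nm", silicon_proven=True):
    print(ip.name, ip.version)
```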

    IP reuse, security and ecosystem
    Design reuse provides an opportunity for design teams to plan ahead and target parts of their incepted designs to be used in future product developments. Having access to all the information regarding an IP enables designers to make more qualified decisions on which IPs to use for their SoC. But it becomes necessary to choose a platform which can match the growing needs of your enterprise as well as the new evolving technologies.

    designHUB IP management promotes design reuse within a company by providing an easy-to-use dashboard to manage the process of creating and publishing IPs as well as their derivatives. Most importantly, it is DM agnostic. Most companies have either no DM, one DM or many DMs (such as SOS7, Perforce, Git, Subversion). It is important to have an IP management system which works with any type of DM, or with no DM at all, as no company wants to reinvent the wheel for internally developed IPs. If there is an IP that can be reused within the company, the idea is to leverage it to the maximum extent possible. As a flexible IP reuse ecosystem, designHUB can help designers map reuse requirements and select the most suitable IPs for their SoCs.

    To recap, selecting the right entry point (technology, IPs) and a robust IP management solution is key in reducing potential risks of over-budgeting and failure in SoC design. For more details on ClioSoft’s designHUB check HERE.

    Also Read

    Data Management for SoCs – Not Optional Anymore

    Managing Your Ballooning Network Storage

    HCM Is More Than Data Management