Audits. The mere mention of the word keeps project managers up at night and sends most designers running. However, in the case of FPGA designs seeking DO-254 compliance, the product doesn’t ship until the audit is complete – there is no avoiding it, or skating around it. Continue reading “Updated tool cuts through DO-254 V&V chaos”
Parking Obsessed in 2016
There’s a $20B problem facing drivers in U.S. cities – in fact, it affects drivers in cities all over the world. It is the challenge of locating an available and legal parking space.
Expanding 3D EM Simulation Access to All
James Clerk Maxwell’s eponymous equations are the basis for simulating electromagnetic wave propagation. In school, EE majors tended to fall into two camps: (a) those who thoroughly enjoyed their fields and waves classes, who liked doing surface integrals, and who were adept at demonstrating the “right hand rule”, and (b) those who took the required courses but quickly focused on other disciplines within the expansive breadth of the electrical engineering field.
At the recent DesignCon 2016 conference, Brad Brim, Product Engineering Architect at Cadence’s Sigrity group, gave an enlightening presentation that focused on bridging the gap between these two groups, entitled: Access to 3D EM Simulation — for those who care and those who couldn’t care less.
Brad began with a brief history of 3D EM simulation technology — from the solver algorithms used, to the simplifying material and model assumptions of Maxwell’s wave propagation theory that enable the speedups of static and quasi-static modes. The figure below summarizes this chronology.
Specifically for the DesignCon audience, Brad highlighted how EM simulation has evolved from the domain of RF antenna design to become a fundamental tool in the analysis of package and PCB designs. A complete “hybrid” system analysis approach combining EM and circuit simulation models was emphasized, whether for voltage/timing margin “eye diagram” analysis of signal integrity (SI) or comprehensive power distribution network analysis for power integrity (PI).
Yet, how does an engineering organization bridge the knowledge gap highlighted above, between EM experts and the (much larger) design teams that need the EM simulation results?
Brad presented his vision for the methodology to enable design teams to have access to the simulation technology required for analysis of a complex system model. The approach has two fundamental tenets:
- The EM experts and the package/PCB designers need to work from a common physical layout database.
Although this may seem obvious, it is not uncommon for an EM expert to prepare a model independently, in a separate tool environment. Material stack-ups, via arrays, representative trace topology, power distribution planes, etc., are uniquely drawn and analyzed, and offered to the design community at large as design constraints (to verify in a constraint checker) and/or as recommended design library cells.
- The approach to generating detailed EM simulation S-parameter results initially utilizes the EM expert to define the geometric “cut points” in the design. Then, with this initial setup, the design team can directly launch EM simulation, and thus, more efficiently iterate on model analysis, minimizing the resource demand on the EM expert.
These cuts define the “ports” for EM simulation and S-parameter model generation. Individual models are then stitched together as part of the system model. The figures below illustrate how a cut-and-stitch model definition is created, and an example of how EM simulation would be invoked directly from within the Cadence Allegro Sigrity environment.
Is the “cut-and-stitch” method for model generation sufficiently accurate?
Brad presented several examples demonstrating the accuracy of the overall system results and the speedup relative to attempting a single EM model analysis (which would be infeasible on large designs, regardless). The figure below shows an example of the S(1,n) set of model S-parameters for the cut-and-stitch approach, compared to an analysis of a full model.
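The mechanics of stitching are easiest to see for the simple case of cascaded two-port blocks. The sketch below is a minimal illustration of the idea only, not Cadence’s implementation: each block’s S-parameters are converted to chain (T) parameters, the chain matrices are multiplied in order, and the product is converted back to S-parameters. The line-segment example and all numeric values are hypothetical.

```python
import numpy as np

def s_to_t(S):
    """Convert a 2-port S-matrix to chain (T) parameters.

    Convention: [a1; b1] = T [b2; a2], so cascaded blocks
    multiply left-to-right as T_total = T1 @ T2."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    det = S11 * S22 - S12 * S21
    return (1.0 / S21) * np.array([[1.0, -S22],
                                   [S11, -det]])

def t_to_s(T):
    """Convert chain (T) parameters back to a 2-port S-matrix."""
    T11, T12, T21, T22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    det = T11 * T22 - T12 * T21
    return np.array([[T21 / T11, det / T11],
                     [1.0 / T11, -T12 / T11]])

def stitch(blocks):
    """Cascade ("stitch") an ordered list of 2-port S-parameter blocks."""
    T = np.eye(2, dtype=complex)
    for S in blocks:
        T = T @ s_to_t(S)
    return t_to_s(T)

# Example: two ideal matched line sections, each with phase delay theta.
theta = 0.3
seg = np.array([[0.0, np.exp(-1j * theta)],
                [np.exp(-1j * theta), 0.0]])
full = stitch([seg, seg])
# The stitched model matches a single line with phase delay 2*theta.
```

In a real package/PCB flow the blocks would be frequency-dependent n-port models generated at the EM expert’s cut points, but the cascading principle is the same at each frequency point.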
EM simulation technology must become more accessible to a wider cross-section of the design community. The insights of EM experts are used to assist with model and simulation setup, yet the analysis results and optimization decisions are most efficiently led by the design teams. A common physical database accessible to all is a prerequisite.
The Cadence Sigrity team is enabling this trend, with a defined methodology for model partitioning and simulation, all within their tool platform. As Brad succinctly summarized the Sigrity approach toward enabling EM simulation for design teams, “It’s time to put the A back in EDA.”😀
More information on the Cadence Sigrity “cut-and-stitch” EM simulation methodology is available HERE.
-chipguy
Submerging the Data Center
One of NetSpeed’s customers is a Tier-1 semiconductor company that develops some of the industry’s best-performing and most complex systems on chip (SoCs) for the data center and cloud computing markets. To keep its leadership in the data center market, the company needs to produce best-in-class SoC solutions year after year. Today, NetSpeed’s Network-on-Chip (NoC) is at the heart of these super-SoCs.
The main challenge for the data center market can be summed up in a few words: offering 60% more bandwidth every year while decreasing the latency. Adding more servers can’t be the only solution because that would severely impact latency. Even if existing designs achieve best-in-class latency, future SoC generations will require even lower latency. Moreover, with in-memory computing replacing previously used storage solutions like hard disk drives, traditional latencies have to be greatly reduced to enable real-time, data-driven decision making.
This customer used to hand-tune its interconnect designs, which severely impacted the design and verification schedule and forced chip architecture iterations: the methodology for deadlock discovery, analysis, and resolution added an extra six months to the development schedule. A solution guaranteed to be deadlock-free could therefore save up to six months of development time and generate a huge time-to-market advantage.
Not all interconnects (or NoCs) are created equal. NetSpeed provided an interconnect synthesis engine, an innovative solution that optimizes the interconnect architecture based on workload models. Implementation of NetSpeed’s NoC led to a new generation SoC that delivers 25% lower latency and 29% higher maximum frequency than previous ICs. Because NetSpeed synthesizes a pre-verified interconnect design within minutes, the direct impact on design schedule is to shrink six months of analysis down to a few hours.
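NetSpeed’s synthesis engine is proprietary, but the kind of by-construction deadlock-freedom guarantee referred to above can be illustrated with a classic textbook technique: dimension-order (XY) routing on a 2D mesh, which forbids the Y-then-X turns that could close a cycle of channel dependencies. The sketch below is illustrative only and does not represent NetSpeed’s algorithm.

```python
def xy_route(src, dst):
    """Dimension-order (XY) routing on a 2D mesh: always traverse
    the X dimension first, then Y. Because no packet ever makes a
    Y-then-X turn, no cycle of channel dependencies can form, so
    the routing function is deadlock-free by construction."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:                      # hop along X toward the target column
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then hop along Y toward the target row
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# Route from tile (0,0) to tile (2,3) on the mesh.
hops = xy_route((0, 0), (2, 3))
```

An interconnect synthesis tool works at a far higher level (topology, buffering, QoS, workload models), but the principle of proving deadlock freedom from routing restrictions rather than discovering deadlocks in simulation is the same.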
Data-center-dedicated SoCs are known to be among the best-performing ICs on the market. But the SoC optimization effort to reduce both area and power consumption has to be pushed to the maximum in order to create a power-conscious solution with an economically viable chip size.
Both power consumption and area are directly impacted by the number of wires and buffers in the SoC interconnect. NetSpeed’s interconnects can be optimized to reduce the number of wires and buffers. This created an SoC design that offers higher performance than previous generations while reducing area by 40%, wire count by 26%, and buffer count by 46%.
Using NetSpeed’s NoC solution is probably not enough to magically solve the data center power consumption issue forever. Still, offering 40% lower power than the previous generation is already a great achievement, and saving six months on the SoC design schedule frees creative, experienced people like SoC architects and designers for other tasks. For example, they could rework and optimize the architecture of the data center itself, creating future storage and processing units so power-friendly that you don’t need to submerge the data center…
This blog is extracted from NetSpeed’s “Data Center” success story. You can read more about this story, as well as the Mobile AP, Automotive SoC, Networking, Digital Home SoC, and Data Center Storage stories, here.
From Eric Esteve of IPNEST
Keeping an ‘Open’ Mind with Technology
Software and hardware vendors are developing proprietary products and technologies to tap into the massive business opportunity in the Internet of Things (IoT). While most of the noise is around consumer-driven IoT, commercial IoT applications are making a huge financial impact in many verticals. Buildings alone account for 40% of the world’s energy consumption – extrapolate this to the number of disparate devices at a smart-city level and the opportunity becomes gigantic.
A smart city project might include connectivity for homes, soda machines, parking lots, commercial buildings, security cameras, traffic systems, etc. It is impractical to think that the infrastructure and devices across a city will be homogeneous. This really is a multi-vendor, multi-protocol, big-data play, and it will require software and hardware platform technologies that are ‘open’ and able to integrate disparate devices and deliver analytics and control over remote devices.
As IoT is gaining momentum there are startups and established companies entering the IoT arena with new platform technologies. While having more options for products and services can be good, it can also be confusing and can make it very difficult to select the right technology needed to build a strong IoT solution. If you know your IoT goals, selecting the right foundational platform technology is very important. You will need to keep an ‘open’ mind and look for some of the following tenets:
Open Technology
Open IoT platform technologies can help you normalize data from legacy proprietary and new edge devices, build applications, and integrate with third-party systems as and when you need to, without having to replace the platform or infrastructure. APIs play a critical role here – look for published open APIs for your developers. Even Microsoft has announced support for open technologies, such as the Linux operating system and the Cloud Foundry Platform as a Service (PaaS), on its Azure platform.
Stable Technology
If you have the choice, besides evaluating pros and cons of existing vs new platforms in your labs, evaluate established “real” IoT operational case studies. See how long these systems have been running and how customers have benefited over multiple years. IoT systems should be designed for prolonged and sustained benefits.
Robust Eco-System
With Android and iOS, we all know the power of an application ecosystem. You want to be able to have choice. Select a platform that has a developer community around the technology. If you have the developer mindshare, your customers will be able to access cool applications for their operational use.
Scalable Technology
Although scalability depends on your business needs, I recommend selecting a platform that can scale from the edge to the cloud. Learning, managing and developing applications on multiple platforms is hard and cost-prohibitive. If your business serves a large and complex IoT infrastructure, you should plan for the millions of devices that are going to get connected to the web over the next several years.
I hope the above is useful in your IoT journey. I would love to hear how you are using or considering open technologies for IoT or otherwise. If you are not, I am sure there is a reason and it will be great for readers to learn why.
Smartphone-based Connected Health Insights from Patents
US20150124067 illustrates an improved technique for monitoring human vital signs without contact, using physiological signals extracted from video images captured by a smartphone’s video camera. One advantage of contact-less vitals monitoring is that it avoids contact measurement, which can be a problem for infants and the elderly who need monitoring over long periods of time.
The contact-less vitals monitoring technique uses the photoplethysmographic (PPG) method. PPG uses optical signals (related to the cardiac and respiratory signals) transmitted through or reflected by a person’s blood, e.g., arterial blood or perfused tissue, to monitor a physiological parameter of the person. The smartphone can adjust the video camera’s illuminator in intensity, or spectrally, spatially, and/or temporally, to improve the accuracy of the measurement. The smartphone processes the captured video images to extract a time-series signal, extracts the physiological signals from that time series, and can transmit the results to remote healthcare practitioners for further analysis and assistance.
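As a rough illustration of the PPG signal chain described above (not the patent’s actual algorithm), the sketch below recovers a heart rate from a synthetic per-frame intensity trace by locating the dominant spectral peak in the cardiac band. The frame rate, band limits, and signal model are all assumptions made for the example.

```python
import numpy as np

fs = 30.0                       # assumed camera frame rate, Hz
t = np.arange(0, 20, 1 / fs)    # 20 seconds of video
heart_rate_hz = 72 / 60.0       # simulated 72 bpm pulse

# Synthetic stand-in for the per-frame mean pixel intensity:
# a weak cardiac oscillation buried in camera noise.
rng = np.random.default_rng(0)
signal = (0.02 * np.sin(2 * np.pi * heart_rate_hz * t)
          + 0.005 * rng.standard_normal(t.size))

# Remove the DC level, then find the dominant frequency in a
# plausible cardiac band (0.7-3.0 Hz, i.e. 42-180 bpm).
signal = signal - signal.mean()
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
band = (freqs >= 0.7) & (freqs <= 3.0)
peak_hz = freqs[band][np.argmax(spectrum[band])]
bpm = peak_hz * 60.0
```

A real implementation would add motion rejection, region-of-interest tracking, and the illuminator adjustments the patent describes, but the core extraction step is a band-limited spectral estimate like this one.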
US20150351698 illustrates a system that uses a smartphone to analyze physiological and health data (e.g., activity data) retrieved from wearable monitors and to identify emergencies or medically significant events in real time. The system retrieves the data in real time from the physiological and health monitors associated with a user: the physiological parameter data includes values of a physiological parameter of the user measured in real time, and the health parameter data includes values of a health parameter measured in real time.
The user’s smartphone analyzes the received physiological parameter data in real time to identify a medical event associated with the user: it determines that the data includes values outside the normal range for that parameter, and that the medical event corresponds to the period of time in which the values remain outside the normal range. The smartphone also analyzes the received health parameter data over a specified time period and generates a health level for the user based on the health data over a pre-determined period. The system generates medical notifications corresponding to the identified medical event and the generated health level in real time; when the identified medical event is an emergency, the smartphone transmits the generated health level to an emergency response system.
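The “values remain outside the normal range for a period of time” criterion can be sketched as follows. The normal range, minimum duration, and function names here are hypothetical, chosen only to illustrate the detection logic, not taken from the patent.

```python
# Hypothetical normal range for a heart-rate-like parameter, in bpm,
# and a minimum duration before an excursion counts as an event.
NORMAL = (50.0, 120.0)
MIN_DURATION_S = 10.0

def detect_events(samples, sample_period_s=1.0, normal=NORMAL):
    """Return (start_index, end_index) spans where the value stays
    outside the normal range for at least MIN_DURATION_S."""
    events, start = [], None
    for i, v in enumerate(samples):
        outside = v < normal[0] or v > normal[1]
        if outside and start is None:
            start = i                      # excursion begins
        elif not outside and start is not None:
            if (i - start) * sample_period_s >= MIN_DURATION_S:
                events.append((start, i))  # long enough to report
            start = None
    # Handle an excursion still in progress at the end of the data.
    if start is not None and (len(samples) - start) * sample_period_s >= MIN_DURATION_S:
        events.append((start, len(samples)))
    return events
```

Brief excursions shorter than the minimum duration are ignored, which is one simple way to avoid alarming on sensor glitches.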
US20150335272 illustrates a system for reliably testing, monitoring, and predicting blood sugar (e.g., glucose) concentration for a user. The system includes a user-wearable device with storage for glucose test strips, a spring-loaded lancet, a strip reader, a display, and an activity sensor (e.g., an accelerometer). The device collects and stores the blood glucose level from a test strip reading, along with the user’s activity level, and transmits both to the user’s smartphone, which calculates a predicted blood glucose level. The smartphone can then show the user how well he or she has managed blood sugar concentration during a prior period of time, based on the predicted blood glucose level.
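The patent does not disclose its prediction model, but a minimal sketch of the idea of predicting the next glucose reading from the previous reading and the activity level might use ordinary least squares on a short history. The history values, units, and helper names below are entirely hypothetical.

```python
import numpy as np

# Hypothetical history rows: (previous glucose mg/dL, activity steps/hr,
# glucose measured at the next reading).
history = np.array([
    [110.0,  200.0, 118.0],
    [140.0,  800.0, 121.0],
    [ 95.0,  100.0, 104.0],
    [150.0, 1200.0, 120.0],
    [125.0,  400.0, 126.0],
])

# Fit next_glucose ~ intercept + b1*prev_glucose + b2*activity.
X = np.c_[np.ones(len(history)), history[:, 0], history[:, 1]]
y = history[:, 2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(prev_glucose, activity):
    """Predicted next glucose reading from the fitted linear model."""
    return coef @ np.array([1.0, prev_glucose, activity])
```

A deployed system would use a far richer model (meal timing, insulin, per-user calibration), but this shows how the wearable’s paired strip readings and activity data could feed a predictor on the phone.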
HSPICE – 35 and looking good!
A mature tool. A legacy tool. A tool that’s a little long in the tooth. We have all used these terms to refer to an EDA product that has not been able to keep up with technical challenges of model complexity, performance, or new features required by current SoC and system design requirements.
Continue reading “HSPICE – 35 and looking good!”
Where There’s Apple, There’s a Way
With hundreds of billions of dollars overseas and ridiculous profits domestically it is safe to say that Apple can have its way with whatever industry, market or project it sets its mind to. The only sad thing is that money alone can’t cure cancer or bring Middle East peace. Money can, however, help bring a new car company into being, which is precisely the prospect being debated in dueling reports out of Cupertino this week.
One report suggests that Daimler CEO Dieter Zetsche was impressed during a recent visit to Silicon Valley at the progress made by Google and Apple in the work on developing their own cars:
“Daimler CEO says Apple, Google making progress on car” – Welt am Sonntag
A subsequent report, attributed to the Wall Street Journal, noted the rumored imminent departure of so-called ‘Apple Car’ “lead” Steve Zadesky and speculated on the meaning and impact of that departure for Apple’s car building plans – plans which have never been acknowledged by the company.
“‘Apple Car’ Lead and 16-year Apple Vet, Steve Zadesky, Leaving Company”
I am inclined to attribute greater importance to the Zetsche comments than to the Zadesky departure. It’s clear that Apple can do as it pleases and with such vast resources at its disposal the only question is Apple’s level of motivation.
With millennials showing little interest in cars, maybe cars aren’t quite as exciting an opportunity as they once were – in spite of record 2015 vehicle sales in the U.S. Skeptics repeatedly point to Apple’s profit margin comfort level being misaligned with the leaner margins of the auto industry, but believers expect Apple to overcome rather than accept that state of affairs.
The two stories do raise the question as to what an Apple car will be. What will an Apple car look like? Who or what is it for? Is it a shared vehicle or a service delivery platform? Is it an aspirational sports car suitable only for one percenters? Is it super fast or super safe or super efficient – an EV, of course. It’s a messy question, along the lines of what do you want to be when you grow up. Steve Jobs said:
“Being the richest man in the cemetery doesn’t matter to me … Going to bed at night saying we’ve done something wonderful … that’s what matters to me.”
What would make an Apple car wonderful? Zero emissions and zero fatalities? Nissan, Volvo and others are already well along the way toward addressing those twin value propositions.
Google has multiple points of entry into the auto industry including maps, self-driving algorithms, operating system software and applications. Apple has distinguished itself mainly as a hardware and design company while redefining mobile device interfaces. Apple could make a car or Apple could hire an ODM (like Magna Steyr) to make a car. Or Apple could buy BMW. Seriously.
The question that Apple is no-doubt facing on a daily basis, though, is why.
Google’s vision, in contrast, is clear: driverless shared transportation – made by Google or licensed to existing auto makers. Auto makers may not be eager to license Google technology, preferring to solve the driverless challenge on their own, and what an Apple-oriented vision of driving would even look like – let alone whether it could be licensed – remains unclear.
To create a wonderful car suggests some kind of breakthrough in design, battery storage capacity, business model/ownership, drivability or self-drivability, content delivery or consumption, or overall user experience. Could the wild speculation be true that Faraday Future is an Apple stealth project? Not likely.
Apple can make cars, buses or airplanes if it so chooses. With transportation caught in a vortex of generational disruption it’s just possible that the way forward is too foggy even for Apple. It’s also possible that an environmentally conscious Tim Cook perceives cars as precisely the wrong path forward to connect with an increasingly car-averse target market.
In the end, it doesn’t much matter. Even if Apple has downshifted in its plans to build a car it could still target regional markets outside the U.S. where enthusiasm for cars is still on the rise – places like China, India, and Brazil.
Zetsche most likely has it right: Apple and Google are further along than we all think or thought. Both organizations have siphoned off enough engineering and marketing talent to create a new industry on their own. And Apple, at least, has demonstrated repeatedly its ability to convince consumers to line up for whatever it might have on offer – with the possible exception of smartwatches.
And, speaking of smartwatches, Tim Cook has expressed an interest in controlling cars remotely with his smartwatch. Is that motivation enough to create an Apple car? Time will tell.
DesignCon 2016 — signal integrity must be power-aware!
DesignCon is a unique conference — its tagline is “Where the Chip meets the Board”. Held each January in Santa Clara, the conference showcases a wealth of new technologies for advanced packaging, printed circuit board fabrication, connectors, cables, and related analysis equipment (e.g., BERT, VNA, scopes). Of specific interest are the presentations and EXPO floor demonstrations from the EDA vendors focused on design and analysis tools for very complex packages and boards.
Continue reading “DesignCon 2016 — signal integrity must be power-aware!”
Crypto Key Exchange …like taking candy from a digital baby
For those among you who have read my previous SemiWiki articles, you will no doubt see a theme: the security of our connected world is badly broken, and for the bad guys, violating our online lives – both business and personal – is as easy as taking candy from the proverbial baby.
Continue reading “Crypto Key Exchange …like taking candy from a digital baby”

