Simulation: Expert Insights into Modeling Microcontrollers @ Renesas DevCon
by Holly Stump on 10-25-2012 at 9:03 pm

“Simulation: Expert Insights into Modeling Microcontrollers” was the recent panel hot topic at Renesas DevCon 2012, featuring Paolo Giusto of GM, Mark Ramseyer of Renesas, Marc Serughetti of Synopsys, Jay Yantchev of ASTC / VWorks, and Simon Davidmann of Imperas.



Panel chair Martin Baker of Renesas started us off with a zinger: How accurate are today’s processor models? Instruction accuracy? Cycle accuracy? How do you measure it?

Panelists universally lobbed this one back: “Hey, wrong question!” Laughter… First principle: be clear on the problem you are trying to solve, then choose the accuracy you need when selecting MCU models and simulation solutions. Many target problems do not require cycle accuracy. Paolo commented that the “value” of accuracy depends on when the model is used, and what it is used for. Jay said, “Models are simply tools to do a job. So it’s important to specify the problem. All models are ‘bad’ but some are very, very useful!”

Familiar ground was covered: For hardware debug, you may want a very detailed model, and be willing to suffer the run time. For embedded software development, simulation performance is key, and you want a lean, fast model at a high level of abstraction.

Jay clarified the definition of “processor” in this context: not just the CPU core, but the complete MCU SoC. Today, you can obtain and run a very timing-accurate model of the complete MCU: processor, peripherals, memory, timers, etc. The timers will fire at the right time, bus transactions will take the right number of cycles, and peripheral IP will take the right amount of time to process and respond. Pipeline effects on timing are generally not modeled if you want hundreds of MIPS of simulation speed, but most timing effects are modeled at a very workable level of accuracy. Processors and peripherals can even be at different levels of accuracy and will play ensemble. Even for graphics processors, which are challenging to model, if the use case is to run software, an abstract-level model can run 7 or 8 frames per second, so you can interact with graphics and get drivers working without modeling the internals of the GPU, which are not relevant for software debug. Simon drew a picture of the complexity of today’s processors and the need to work closely with processor vendors on model verification.
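For the software-development use case the panel describes, most virtual platform tools build on SystemC/TLM-2.0 loosely timed modeling. Below is a minimal sketch, with hypothetical module and register names of my own (not from any panelist’s product), of how a peripheral timer can “fire at the right time” in simulated time without any pipeline or bus-protocol detail; it assumes the Accellera SystemC library is installed.

    // Minimal sketch (hypothetical names) of a loosely timed timer peripheral
    // in SystemC/TLM-2.0. Simulated time advances by annotation, so the timer
    // "fires at the right time" with no pipeline or bus-protocol detail.
    #include <cstdint>
    #include <iostream>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    // Timer peripheral: one memory-mapped RELOAD register; "fires" after
    // 'reload' simulated microseconds.
    struct SimpleTimer : sc_core::sc_module {
        tlm_utils::simple_target_socket<SimpleTimer> tsock;
        uint32_t reload = 0;

        SC_CTOR(SimpleTimer) : tsock("tsock") {
            tsock.register_b_transport(this, &SimpleTimer::b_transport);
            SC_THREAD(run);
        }

        // Memory-mapped access; the only cost of a bus transaction is a
        // time annotation added to 'delay'.
        void b_transport(tlm::tlm_generic_payload& tr, sc_core::sc_time& delay) {
            uint32_t* p = reinterpret_cast<uint32_t*>(tr.get_data_ptr());
            if (tr.is_write()) reload = *p; else *p = reload;
            delay += sc_core::sc_time(10, sc_core::SC_NS);  // annotated bus latency
            tr.set_response_status(tlm::TLM_OK_RESPONSE);
        }

        // Counts down in simulated time and fires exactly when it should.
        void run() {
            wait(sc_core::sc_time(1, sc_core::SC_US));       // let software program us
            wait(sc_core::sc_time(reload, sc_core::SC_US));  // the programmed interval
            std::cout << "timer IRQ at " << sc_core::sc_time_stamp() << std::endl;
        }
    };

    // Stand-in for the CPU model: programs the timer with a blocking transaction.
    struct FakeCpu : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<FakeCpu> isock;

        SC_CTOR(FakeCpu) : isock("isock") { SC_THREAD(boot); }

        void boot() {
            uint32_t ticks = 500;  // fire 500 simulated microseconds from now
            tlm::tlm_generic_payload tr;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            tr.set_command(tlm::TLM_WRITE_COMMAND);
            tr.set_address(0x0);   // hypothetical RELOAD register offset
            tr.set_data_ptr(reinterpret_cast<unsigned char*>(&ticks));
            tr.set_data_length(sizeof(ticks));
            tr.set_streaming_width(sizeof(ticks));
            isock->b_transport(tr, delay);
            wait(delay);           // consume the annotated bus latency
        }
    };

    int sc_main(int, char**) {
        FakeCpu cpu("cpu");
        SimpleTimer timer("timer");
        cpu.isock.bind(timer.tsock);
        sc_core::sc_start(sc_core::sc_time(1, sc_core::SC_MS));
        return 0;
    }

In a commercial virtual platform the CPU stand-in would be a real instruction-accurate core model, but the time-annotation mechanism that keeps timers and bus transactions “on time” is the same.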

Use cases for processor models, existing and emergent?
Well-known use cases include an early development start before silicon is available, plus software analysis, debug, and test. With the complexity of today’s designs, the debug capability of virtual platforms can be critical! Mark mentioned that even when you have hardware, ESL delivers better observability and controllability than you can ever get with hardware. Panelists cited typical results of software up and running just hours or a few days after silicon is available. Architectural exploration and performance and power characterization are other important driving use cases.

Jay commented that the value of early model availability extends beyond first silicon to the gap before prototype hardware is available. In some industries, even when silicon is out, it takes many months to obtain prototype hardware boards, which are often delayed to get the RF, analog, and other elements right.

Specific to automotive, Paolo and Jay both mentioned the value of running AUTOSAR on a virtual platform; for example, testing drivers and the AUTOSAR stack to see if watchdogs are working. CAN standards compliance, functional safety per ISO 26262 with fault injection, and similar scenarios are emerging use cases. Simon referred to an Audi tire-pressure-sensor team with months of road-test data who maintain software quality by using simulation to run thousands of regression tests each night. Marc mentioned calibration: the ease of setting variables such as temperature in software.
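To make the fault-injection point concrete, here is a minimal plain-C++ sketch (all names are hypothetical, not taken from any panelist’s product) of why a virtual platform suits this kind of ISO 26262-style testing: the test harness can corrupt modeled state deterministically and then verify that the safety mechanism reacts, something that is awkward to do on real silicon.

    // Minimal fault-injection sketch (hypothetical names). A virtual platform
    // lets a test corrupt modeled state deterministically -- here, a simulated
    // single-event upset in a parity-protected sensor register -- and then
    // verify the safety response.
    #include <cstdint>
    #include <iostream>

    // Stand-in for a modeled sensor peripheral with a parity-protected register.
    struct SensorModel {
        uint32_t value = 0;
        bool stored_parity = false;

        void write(uint32_t v) {                       // normal software write path
            value = v;
            stored_parity = __builtin_parity(v) != 0;  // GCC/Clang parity builtin
        }
        bool parity_ok() const {
            return (__builtin_parity(value) != 0) == stored_parity;
        }
    };

    // Stand-in for the safety monitor the software stack would run.
    bool safety_monitor_detects(const SensorModel& s) { return !s.parity_ok(); }

    int main() {
        SensorModel sensor;
        sensor.write(0xBEEF);

        // Fault injection: flip one data bit behind the software's back,
        // simulating a single-event upset. On silicon this would require
        // dedicated test hardware.
        sensor.value ^= 1u << 7;

        std::cout << (safety_monitor_detects(sensor)
                          ? "fault detected: safety path exercised\n"
                          : "fault missed: the safety test fails\n");
        return 0;
    }

Real virtual platform tools expose this kind of state manipulation through scripting and debugger APIs rather than direct struct access, but the controllability argument is the same.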

Is automotive lagging in adoption of MCU modeling and methodologies?
Note that processor models are not an emerging technology; ESL models have been used for ten years in some domains. But panelists agreed that automotive has not yet deployed them widely, or to their full potential. Paolo said that in Tier 1 companies the technology is mature and in use; for OEMs, it is still emerging, with great potential for embedded software development and applications across the supply chain.

Jay brought up that while some aspects of the technology, e.g. standards for model protocol interoperability, are still emerging, the real next step is the deployment of this technology in industry: a creation and supply chain for virtual prototype models that parallels the design and supply chains in the target industries. Most Tier 1s develop low-level software; some OEMs develop ECUs in-house. It is now common to see teams of embedded software developers each with a virtual platform seat, for test, regression, and safety issues. The goal: make the technology mainstream, with benefits spanning the supply chain; industrialize the technology; integrate it into the business.

OK, what are the business issues for an OEM or Tier 1 to adopt processor models? Price? Ecosystem?
Paolo identified one main issue as cost: “Since EDA-type prices are high, it’s a chicken-and-egg situation for broader deployment.” But both ASTC and Imperas commented that for their solutions, tool price is not an issue affecting adoption: pricing follows the software world, not a traditional EDA model. Simon stated, “$300/month accesses our tools.” Jay commented that the cost of tools can be quite low for broad deployment, and that ROI is the key factor.

Paolo said that virtual prototyping and model development cross the lines of IP, semiconductor, and EDA, and in this ecosystem new partnerships need to be created among vendors, identifying who develops, integrates, tests, and certifies models, peripherals, and platforms. The business ecosystem may even need to consolidate, or at least realign. Marc stated that model development will involve many players along the way as we move up in system design. The market is emerging and may need aggregation, but for now it starts with semiconductor vendors and virtual platform technology providers. This is the nucleus, and we will move higher in the supply chain over time…

Renesas RH850: The next frontier!
Today, vendor panelists offer strong support for multiple Renesas V850 platforms, as well as peripherals. Attendees showed great interest in early “virtual” access to the RH850, as well as the CAN FD64 standard. And, most excitingly, Renesas announced plans to make processor models for the new RH850 available months ahead of first silicon! ASTC and Synopsys both stated they are working with Renesas to develop RH850 models and integrate them into tool solutions.

Takeaway: Not surprisingly, the panel was unanimous in the belief that virtual prototyping is a critical technology for embedded systems development, and mature enough to adopt today. Mark said the challenge is educating hardware platform users about software virtual platforms. Paolo added that there is growing momentum and urgency for the adoption of simulation, driven by system complexity. Jay predicts a virtual platform as part of a design’s BOM, expanding in scope as the development project moves from semiconductor vendor to Tier 1s, downstream supply chain partners, and OEMs. And Simon summed it up by asking, “What does it cost not to get started using simulation?”
