Tanner EDA Helps Customer Productivity Engineering Increase Efficiency and Lower Cost with No Compromise in Performance
by Daniel Nenni on 04-29-2014 at 10:00 am

Tanner EDA is making waves at its customers' sites, as the mixed-signal design suite from Tanner EDA, Incentia Design Systems, Inc. and Aldec, Inc. helps an ASIC design house lower cost and increase efficiency with no compromise in performance. In today's 'always on', Internet of Things-connected world, the demand for high-performance, mixed-signal ASICs continues to grow. Many thousands of mixed-signal devices are produced on mainstream process nodes in foundries worldwide. Designers working on these mainstream devices do not always need, nor can they afford, the EDA tools from the vendors focused on serving the most advanced process nodes. Tanner EDA, through its partnerships with Incentia and Aldec, provides a fully-integrated, complete analog/mixed-signal tool suite to address the mainstream with no compromise in performance.

Dresden-based ASIC design services company Productivity Engineering (PE), a Tanner EDA customer since 2006, has relied on the full Tanner EDA tool suite, including front-end design, layout, and verification. Stefan Schubert, PE's Vice President of IC Design Services, remains impressed with the Tanner EDA tools, finding them intuitive, easy to use, and fully capable of meeting PE's design needs.

Early last year, PE added three new tools to its design flow, provided by Tanner EDA:

  • Logic Synthesis by Incentia DesignCraft™
  • Mixed-signal simulation by Aldec Riviera-PRO™ TE
  • Physical Verification by Tanner EDA’s HiPer Verify™

Until then, PE had been relying on a number of legacy tools from Mentor Graphics®, primarily for the digital portions of its designs. But early last year, PE adopted the DesignCraft digital synthesis tool, as integrated into the Tanner EDA HiPer Silicon A/MS solution. "Our existing tool lacks features for optimizing designs: features that are mainstream in most comparable tools today," Schubert commented.

DesignCraft is quoted as synthesizing 5 million gates in 2 hours, and PE found the tool to perform at least as well as more expensive equivalents, in terms of both speed and features. "And now we can optimize our designs for power, performance and area," Schubert added. "The testability features are a real bonus. It is the first time we have had DFT support in a tool. It is something we have always had to do separately before."

Digital and digital-heavy mixed-signal simulation was another task for which PE had been relying on older software. Now PE has invested in the Riviera-PRO TE digital simulator and functional verification tool. Tanner EDA integrated the tool into its HiPer Silicon A/MS tool suite, which also includes the Verilog®-AMS mixed-signal analysis tool for co-simulation and T-Spice for analog simulation.

"The difficulty we had before," Schubert explained, "is that we always had to leave the Tanner EDA design environment to use the old synthesis tool, and then import the data back." In addition, the Calibre® tool is expensive to maintain. "The introduction of the Riviera-PRO TE simulator is very welcome," Schubert said, "primarily because it is fully integrated into the Tanner EDA tool flow, making the design task much easier." The Tanner-supplied AMS simulation suite was also considerably lower in cost than offerings from the mainstream rivals.

PE has also invested in Tanner EDA's new HiPer Verify tool suite. PE has adopted a hybrid tool solution for its verification, maintaining a mix of Calibre and HiPer Verify software licenses. "The critical factor with HiPer Verify is that it runs Calibre rules files," Schubert confirmed. He pointed out that HiPer Verify exploits hierarchical and repetitive features, with an error navigator to locate violations and a one-time correction facility, in order to improve productivity and ensure optimal performance. "The mixed tool portfolio has allowed us to better align tool capability with the requirements of the design while lowering our overall cost," said Schubert.

Tanner EDA tools not only represent a lower initial outlay than many tools of comparable performance, but they also cost considerably less to maintain. "This gives us a competitive advantage as a design services company," Schubert commented, "and we make no sacrifices in terms of design performance."

    To learn more, read the entire Productivity Engineering case study from Tanner EDA.



    Xilinx Quarterly Results: 20nm Prototypes
    by Paul McLellan on 04-29-2014 at 5:07 am

Xilinx announced their quarterly results last week. Because their financial year is not aligned with the calendar year, this is actually the 4th quarter of their 2014 financial year; New Year's Eve 2015 comes early for Xilinx. The results were very good. As Moshe Gavrielov, the CEO, said on the conference call: "Xilinx delivered both record revenue and record gross margin in fiscal year 2014. 10% growth in annual sales was driven by our 28-nanometer sales. These exceeded $380 million for the year, completely surpassing both our initial forecast of $250 million and our recently revised target of $350 million. Similarly, our sequential growth in the March quarter was driven by exceptionally strong sales for our 28-nanometer product generation. This increased by more than 40% sequentially, exceeding $140 million."

So Xilinx is clearly making the transition, with their 28nm products really ramping and exceeding everyone's forecasts. The current run-rate is nearly $600M a year and it is still ramping, so the actual revenue for next year should well exceed that number. They are targeting over $700M, which might turn out to be conservative. Xilinx now has 70% market share in the programmable logic device segment.

They anticipate repeating the successful 28nm ramp at 20nm. They shipped functional samples of Kintex UltraScale, the industry's first 20-nanometer product, in November. They reckon this is at least six months ahead of the competition. Altera is using TSMC at 20nm (just like Xilinx) and then switches to Intel for 14/16nm, so for 20nm the process wars don't come into it. The basic product trajectory is trivial revenue in the first year (design-in), some in the second, and a real takeoff in the 3rd year. This is the 3rd year for 28nm. Xilinx expects 20nm to have a similar ramp, or perhaps a slightly faster one. They expect 16nm and 20nm together to roughly equal 28nm, since the processes are arriving so much closer together than originally forecast.

Recently Xilinx also received, and demonstrated in-house, functioning devices of the first Virtex UltraScale product. This is the industry's only high-end family offering based on 20nm. There are many more 20nm tapeouts coming in the second half of the year now that they have Kintex and Virtex 20nm products well on the way to sampling to customers, although there seems to be a bit of a lull in Q2. Taping out a chip, making masks and building silicon is now so expensive that it shows up clearly in the operating expense line for the overall company.

One of the big drivers of Xilinx's 28nm product is the LTE buildout in China. Everything concerned with China Mobile is on a huge scale: its subscriber base is about twice the US population, and by the end of 2014 they will have deployed 500,000 base stations. China Telecom will also deploy 100,000 to 200,000 base stations. Base stations are a sweet spot for programmable devices; the power constraints are less extreme than in handsets, but reprogrammability as standards evolve is really important. Xilinx is in the universal radio cards, baseband and backhaul.

Another interesting tidbit is that Xilinx's 28nm inventory is up, and they were asked about this. Some of it is just inherent in the increased business (if revenue goes up then inventory tends to rise too) and some is precautionary. Xilinx sees 28nm capacity tightening and wants to make sure it gets the wafers so that it can take advantage of opportunities when they arise. China's base-station buildout in particular seems to be fairly volatile and hard to predict, and it is too large a business to simply assume they will have enough product for without planning explicitly.

Altera announced on the same day as the Xilinx call that they had demonstrated their FPGA technology on the Intel 14nm Tri-gate process, which means they have successfully built a test-chip. Xilinx were asked about the timing of 16FF products but Moshe ducked the question: "We are continuing to make progress on 16 and there is no need for us to say more; at this point in time, we will deliver that technology as we've predicted and we expect to tape out this year."

The SeekingAlpha transcript of the conference call is here.


    More articles by Paul McLellan…


    Carey Robertson: Reliability Checks in Advanced Nodes
    by Daniel Nenni on 04-28-2014 at 8:30 pm

Last week I had the pleasure of presenting at the Electronic Design Process Symposium (EDPS) workshop. This was my first time attending and I was very impressed. There were good presentations, but I learned as much from the Q&A and the side conversations before, after, and over breakfast and lunch. I would highly encourage anyone in the design or EDA community to attend if they have the opportunity.

    My presentation slides can be found here:

    http://www.eda.org/edps/Papers/5-2%20Carey%20Roberston.pdf

The second day of the Symposium was dedicated to IP, and I presented in the IP Verification and Reliability track. As everyone knows, reliability is not specific to a particular node, nor is it confined to a particular application area. The overall need or requirement may differ from 130nm to 16/14nm, or for an automotive application vs. a smartphone application, but every designer has certain constraints and methodologies that are utilized to prevent electrical failure. To meet this need, we developed Calibre PERC (Programmable Electrical Rule Checking), along with a number of solutions that allow designers to verify that their designs are robust against Electrostatic Discharge (ESD) events, Electrical Overstress (EOS) conditions, and latch-up, and to perform power-domain verification and voltage-aware DRC checks. These are common reliability concerns for circuit designers that, without PERC, forced designers to over-design or employ manual methods for verification. PERC automates this verification, ensuring circuit robustness.
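For readers unfamiliar with the idea, a programmable electrical rule check is essentially a topological query over a netlist. The sketch below is a toy Python illustration of that concept only; it is not Calibre PERC syntax, and the netlist format, device types and rule are all invented for the example.

```python
# Toy illustration of a programmable electrical rule check (NOT Calibre
# PERC syntax): flag any I/O pad with no ESD clamp to a supply rail.

# Hypothetical netlist format: (instance, device_type, {pin: net}).
netlist = [
    ("D1", "esd_clamp", {"a": "PAD_IN", "b": "VSS"}),
    ("M1", "nmos",      {"g": "PAD_IN", "d": "OUT", "s": "VSS"}),
    ("M2", "pmos",      {"g": "PAD_OUT", "d": "OUT", "s": "VDD"}),
]
pads = ["PAD_IN", "PAD_OUT"]

def check_esd_clamps(netlist, pads, rails=("VDD", "VSS")):
    """Return a violation message for each pad net lacking an ESD clamp."""
    violations = []
    for pad in pads:
        protected = any(
            dev_type == "esd_clamp"
            and pad in pins.values()
            and any(rail in pins.values() for rail in rails)
            for _, dev_type, pins in netlist
        )
        if not protected:
            violations.append(f"{pad}: no ESD clamp to a supply rail")
    return violations

for v in check_esd_clamps(netlist, pads):
    print("VIOLATION:", v)   # -> PAD_OUT: no ESD clamp to a supply rail
```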

These checks and methodologies have been available to the full-chip designer, and recently we have partnered with foundries to make them available to IP designers as well. Recently, we announced a collaboration with TSMC (http://www.mentor.com/company/news/mentor-tsmc-calibre-perc) in which PERC checks have been integrated into the TSMC 9000 program, so that IP vendors can leverage the infrastructure developed by Mentor Graphics and TSMC and run this verification on IP cells/blocks before delivery to the end customer. These are the same checks run at full-chip, but categorized so that the IP provider utilizes the specific reliability checks that are appropriate for that circuit.

The "kit" where these checks reside is called the ESD/Latch-up kit, and the verification is focused on that particular electrical problem. Several questions asked what checks will come next. As I mentioned, with PERC we have offerings for EOS, electromigration, etc. What comes next will be largely determined by designers and what they determine is the next problem we should tackle. There have been a lot of articles published about electromigration needs, especially at advanced nodes. EOS is also a hot topic. I would like to pose the question here and see what you think is the most pressing need for circuit reliability…



    FinFET & Multi-patterning Need Special P&R Handling
    by Pawan Fangaria on 04-28-2014 at 1:00 pm

I think by now a lot has been said about the necessity of multi-patterning at advanced technology nodes with extremely small feature sizes, such as 20nm, where lithography with 193nm-wavelength light makes printing and manufacturing semiconductor designs very difficult. Multi-patterning is a semiconductor manufacturing technique that realizes a layout by decomposing it into two or more parts, exposing each part with a separate mask (at double or greater pitch for extra resolution), and recombining the exposures to reproduce the original layout. FinFETs, on the other hand, are new 3D transistors (with the gate wrapped around the channel) which can provide substantial power-performance-area advantages to high-density, high-performance, multi-function SoCs. While these technologies are proven in processing and manufacturing, EDA P&R tools need significant revamping to accommodate them for use in designs at mass scale. So how do P&R tools make provisions for multi-patterning and FinFETs during layout design and verification?

The layout is decomposed using a concept called 'coloring', where adjacent shapes in a layer are assigned different colors such that shapes of the same color are never within the violating distance. It's a complex problem to solve; intelligent algorithms are needed for a dense layout that can have millions of shapes in complex forms. Consider the cycles in the graph formed between layout shapes and their spacing conflicts: an odd cycle is a coloring violation, because two adjacent nodes are forced into the same color, while an even cycle is clean. Often, sensitive design structures such as memories, clocks, and analog matched pairs need pre-coloring (anchoring), which constrains the coloring of other shapes and makes the layout more prone to unresolvable coloring violations. There are also techniques used at the foundry level to remove errors such as self-conflicting layouts, rounding, and misalignment, which need corresponding provisions in P&R tools as well.
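To make the odd/even-cycle argument concrete, here is a minimal Python sketch of the underlying graph problem: double patterning succeeds exactly when the conflict graph, whose edges join shapes closer than the same-mask spacing, is 2-colorable, and a breadth-first coloring exposes any odd cycle. This illustrates the principle only; a production router works on far richer data.

```python
from collections import deque

def two_color(conflict_graph, anchors=None):
    """Try to 2-color a double-patterning conflict graph.

    conflict_graph: {shape: set_of_neighbors}; an edge joins two shapes
    closer than the same-mask spacing rule. anchors: optional pre-colored
    shapes (e.g. analog matched pairs). Each reported violation edge
    corresponds to an odd cycle or an anchor conflict.
    """
    color = dict(anchors or {})
    violations = set()
    visited = set()
    # Propagate from anchored shapes first, then pick up the rest.
    for start in list(color) + list(conflict_graph):
        if start in visited:
            continue
        color.setdefault(start, 0)
        visited.add(start)
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in conflict_graph[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # alternate the masks
                    visited.add(v)
                    queue.append(v)
                elif color[v] == color[u]:    # same mask, too close
                    violations.add(tuple(sorted((u, v))))
    return color, sorted(violations)

# Odd cycle: three mutually close shapes cannot split across two masks.
g = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
print(two_color(g)[1])   # non-empty -> coloring violation
```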

Clearly, multi-patterning rules significantly increase the complexity of, and computation required by, routers that are already burdened with ever-increasing DRC rules; run-time and QoR are at stake. The router needs to natively understand colors, and to detect, fix and verify any multi-patterning violation, which can have local or global impact (fixing one violation can create new violations elsewhere), unlike DRC fixes, which are local in nature.

Similarly, FinFETs impose another set of challenges on placement, floorplanning and optimization engines, such as voltage-threshold-aware spacing, implant layer rules, fin grid alignment, and source-drain abutment rules. An interesting point to note is that FinFETs are implemented at process nodes below 20nm, at which multi-patterning becomes a must, thus increasing the challenge for P&R tools of handling a large set of complex rules. The interactions between FinFET restrictions and multi-patterning rules in the layout need to be considered at all stages of the back-end flow, including placement, routing and optimization.

I'm impressed with Mentor's Olympus-SoC P&R system, which provides a comprehensive platform for effectively dealing with all these challenges at advanced process nodes with its unique database capable of handling multiple masks. The system supports complete DRC and multi-patterning rules for leading foundries and has tight integration with Calibre to ensure sign-off-clean physical verification with minimal design interaction. The router is smart enough to color the layout natively, efficiently resolve conflicts, and fix violations at local and global level for fast multi-patterning closure.

An effort to prevent multi-patterning conflicts takes place very early in the flow, at the time of coloring and decomposition. The router automatically recognizes several patterns, such as end-of-line shapes in dense configurations forming small odd cycles and in sparse configurations leading to long odd cycles, and tries to avoid these and other such configurations in order to converge faster.

    The prevention of conflicts leads to a reduced set of violations to be fixed. It’s essential to clean up as many DRC violations as possible before proceeding with multi-pattern fixing. The majority of the multi-patterning violations are fixed in small, local search and repair windows in a global context. The multi-pattern fixing handles both nested and interdependent cycles, and minimizes design perturbation. The DFM optimization is multi-patterning aware and does not introduce any new cycle.

The placement engine in Olympus-SoC is FinFET and multi-patterning aware. By default, it supports the fin grid for both standard cells and macros, making the placement grid a multiple of the fin grid. The detailed placer considers implant spacing requirements for minimum width and area, and supports continuous-diffusion and poly-over-diffusion edge abutment rules. Intelligent filler insertion ensures implant and OD shape matching. The different combinations of standard cells are taken into account, and cells are either spaced apart or flipped in orientation to ensure that no violations are created.
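As a tiny illustration of why the placement grid must be a multiple of the fin grid, the sketch below snaps a cell origin to a legal placement row; the fin pitch and row height are made-up numbers, not taken from any real PDK.

```python
# Illustrative only: assume a 48 nm fin pitch and a cell row height that
# is an integer multiple of it (here, 4 fin pitches).
FIN_PITCH_NM = 48
ROW_HEIGHT_NM = 4 * FIN_PITCH_NM

def snap_to_fin_grid(y_nm: float) -> int:
    """Snap a cell's y origin to the nearest legal row so that the fins
    inside the cell land exactly on the fin grid."""
    return round(y_nm / ROW_HEIGHT_NM) * ROW_HEIGHT_NM

print(snap_to_fin_grid(205))   # -> 192, the nearest multiple of 192 nm
```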

The built-in color-aware parasitic extraction engine accurately models the 3D parasitics of FinFETs and also handles the capacitance variation due to mask shift, which can impact timing and design performance. The extractor can correctly model the effects of mask shift and write out triplet SPEF (nominal, best, and worst case values), which can be read by the MCMM timing engine to correctly time the design.
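For readers who have not seen one, SPEF allows each parasitic value to carry three corner values separated by colons. The minimal sketch below parses such a token; the best/nominal/worst ordering shown is an assumption for illustration, so check which convention your extractor actually writes.

```python
def parse_triplet(token: str) -> dict:
    """'0.82:1.00:1.31' -> {'best': 0.82, 'nominal': 1.0, 'worst': 1.31}.

    Corner ordering is assumed here for illustration; SPEF itself only
    defines three colon-separated values per entry.
    """
    best, nominal, worst = (float(x) for x in token.split(":"))
    return {"best": best, "nominal": nominal, "worst": worst}

# A capacitance value whose spread reflects mask-shift variation:
print(parse_triplet("0.82:1.00:1.31"))
```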

Calibre InRoute addresses the advanced requirements of intelligent color- and timing-aware metal fill, and true sign-off analysis against all DRC, DFM and multi-patterning violations. The Olympus-SoC platform provides native SVRF support, including metal fill and double-patterning decks. Uniformity of metal fill density across different masks is maintained. By using Calibre InRoute, faster time-to-market can be realized thanks to a significant speed-up in the manufacturing sign-off process, along with high QoR.

The Olympus-SoC P&R system offers a flexible and powerful architecture to address multi-patterning and FinFET requirements, and can concurrently analyze and optimize the various design metrics, including timing, power and signal integrity, across all modes and corners (MCMM). The optimization engine performs dynamic density recovery, white space optimization and area management throughout the flow for best design utilization. A more detailed description, along with reference literature on multi-patterning, Olympus-SoC and Calibre InRoute, can be found in a whitepaper posted on the Mentor website.

More Articles by Pawan Fangaria…



    Dr. Bernard Murphy: My presentation at EDPS 2014
    by Daniel Nenni on 04-28-2014 at 8:00 am

First, I wish there were more conferences/workshops like this. This is much more about sharing ideas and brainstorming than the stark commercialism of DAC. I presented Atrenta's role in enabling 3rd-party IP qualification for the TSMC soft IP library.

    My presentation slides are located here:

    http://www.eda.org/edps/Papers/5-3%20Bernard%20Murphy.pdf

Our collaboration with TSMC was officially launched almost 3 years ago, with the goal of ensuring a standard set of quality measures for any soft IP certified into that library. Certification is based on the Atrenta IP Kit, a packaged delivery of standard SpyGlass® functionality and rules, spanning:

• Lint
• Advanced lint (checks for FSM deadlock states, for example; a toy version of that check is sketched after this list)
• Clock domain crossing (synchronization) analysis
• Early testability analysis (both stuck-at and at-speed)
• SDC constraints best practices (including formal validation of timing exceptions)
• Power profiling and validation of power intent
• Pre-layout physical analysis (early timing, area and congestion estimation)
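To give a flavor of what an 'advanced lint' check does, here is a toy FSM deadlock check in Python. Real tools recover the state transition graph from the RTL; here the transition relation is simply written out by hand for illustration.

```python
# Toy FSM deadlock check: flag states that, once entered, can never be
# left. The FSM here is a hand-written {state: set_of_next_states} dict.
fsm = {
    "IDLE": {"IDLE", "RUN"},
    "RUN":  {"RUN", "DONE"},
    "DONE": {"DONE"},          # no way out: a deadlock state
}

def deadlock_states(fsm):
    """Return states whose only successor is themselves."""
    return [s for s, nxt in fsm.items() if nxt <= {s}]

print(deadlock_states(fsm))   # -> ['DONE']
```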

This is the most comprehensive suite of (non-functional) quality checks in the industry, and we are understandably proud to have been selected by TSMC to enable this soft-IP quality gate. We worked closely with them to fine-tune the checks and our dashboard pass/fail metrics to ensure a simple, no-argument assessment of whether an IP clears the hurdle or not. To their credit, around 20 IP vendors (including some of the biggest) have signed up and are qualifying their products through this flow, and the list continues to grow.

We now see the next shoe starting to drop: some leading IP consumers are implementing similar in-house IP usage qualification. Why isn't this redundant? Because most IP these days is configurable. The supplier will certify a few common configurations, but quite likely not the configuration you plan to use. A push-button qualification will re-validate that configuration for most of the above tests. This is, if you like, the application-specific complement to the IP-supplier signoff. And it doesn't let the supplier off the hook: push-button qualification of a specific configuration is only possible if the supplier ships the files necessary to re-qualify, which they do when they certify into the soft-IP library.

You'll notice I didn't mention anything about functional quality. What is a useful metric and handoff to an IP consumer in this area? Clearly it would be impractical to hand over test suites with an IP. Coverage metrics are an obvious answer, but you want some method of independent measurement to include in an objective signoff. Atrenta is working with TSMC and the IP vendors to incorporate a BugScope™ progressive coverage signoff as this metric. This will have the added benefit of a set of assertions that can be shipped with the IP, which may be valuable in debugging use-case problems.



    Calling all makers for new #8bitideas
    by Don Dingee on 04-27-2014 at 7:00 pm

The maker community and the learn-to-code movement are growing, with many ideas built on small, power-efficient, easy-to-use 8-bit microcontrollers. If you want to be one of the next famous makers and maybe win some cash in the process, Atmel has a contest open until September 30, 2014 – here are tips on getting your #8bitideas in the game.


    FD-SOI Better Than FinFET?
    by Paul McLellan on 04-27-2014 at 9:16 am

    As I said earlier in the month, I was going to be talking about FD-SOI at the Electronic Design Process Symposium (EDPS) in Monterey. I am not especially an expert on FD-SOI but I know enough to be dangerous and given that we were already talking about FinFET and 3D/2.5D chips, it fitted in nicely.

The 10,000-foot view is that FD-SOI has some things for which it seems to be superior to FinFET. But ST Microelectronics is the only company that is committed to it (Global Foundries announced last year that they would be second-sourcing it, but then lost enthusiasm; and that was before the recent Samsung announcement). Wally Rhines' dinner keynote pointed to the other problem: having only ST involved means that the learning will be slower for FD-SOI than for FinFET, so even though it might have a cost advantage and some other advantages, those could be overwhelmed by wafer volume. Despite Cooley's snide remarks about this in his latest newsletter, I've talked about this before and I talked about it again at Monterey.


Much of what I used as base material for my talk came from ST's various presentations on the technology over the last six months or so. But another key source was a recent report by Handel Jones of IBS that I talked about here, called Why Migration to FD-SOI is a Better Approach Than Bulk CMOS and FinFETs at 20nm and 14/16nm for Price-Sensitive Markets. Handel had put together cost models for both FinFETs and FD-SOI and came to the conclusion that FD-SOI is marginally cheaper than bulk planar (the more expensive starting material is offset by fewer processing steps). His analysis comparing FD-SOI to FinFET concludes that: "At 14nm/16nm, the FD-SOI die cost for a 100mm² die is 28.2% lower than the bulk FinFET die cost and has higher yield. The leakage of FD-SOI devices is projected to be comparable to that of FinFET devices."
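IBS's cost model itself is not public, but the flavor of the arithmetic is easy to sketch: cost per good die is wafer cost divided by gross dies per wafer times yield. The Python below uses a simple Poisson yield model with entirely hypothetical wafer costs and defect densities, purely to show how a dearer starting wafer with fewer processing steps and better yield can still produce cheaper good die.

```python
import math

def cost_per_good_die(wafer_cost, die_area_mm2, defects_per_cm2,
                      wafer_diameter_mm=300):
    """Toy die-cost model: wafer cost / (gross dies * Poisson yield).
    Gross die count ignores edge loss; all inputs are illustrative."""
    wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = wafer_area_mm2 / die_area_mm2
    die_yield = math.exp(-defects_per_cm2 * die_area_mm2 / 100)
    return wafer_cost / (gross_dies * die_yield)

# Hypothetical numbers only, NOT from the IBS report: both runs use the
# same 100 mm^2 die; the second has a dearer processed wafer and worse
# defect density, standing in for a bulk FinFET flow.
print(round(cost_per_good_die(6500, 100, 0.25), 2))   # FD-SOI-like
print(round(cost_per_good_die(8500, 100, 0.40), 2))   # FinFET-like
```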

I have said many times before that I think people underestimate the economic end of Moore's Law. Yes, transistors will be faster, and yes, they will be lower power. But you will not get more transistors per dollar, and this means that the strong economic driver behind the whole electronics industry, especially when looked at over decadal timescales, is at best weakening a lot and at worst going away. Never again will we have another transformation like the one that put multimillion-dollar flight simulators in your pocket for a few hundred dollars, reducing the cost by a factor of maybe 50,000 in the last 30 years. FD-SOI is clearly one opportunity to get that back on track, at least for a couple more process generations. Carpe Diem, or Carpe Die in FD-SOI anyway.


Funnily enough, the week before I was at the GSA Silicon Symposium, where on one panel people said they had seen these graphs and that they were not true. Then after that I went to the Mentor user group meeting U2U and heard that the Samsung keynote had said that 20/14nm would not reduce costs, and later in the day I attended a TSMC presentation saying 16nm would not be a cost-reduction node. So I guess we will just have to wait and see. If FD-SOI designs really are nearly 30% cheaper than FinFET, then that is a big gap to close through yield learning; it is approximately half a process node (and an old cost-reduction one at that).


    More articles by Paul McLellan…


    TSMC Will Own the Internet of Things!
    by Daniel Nenni on 04-27-2014 at 8:00 am

In my quest to uncover the future of the semiconductor industry, I was quite impressed by the executive presentations at the TSMC Symposium last week. Rick Cassidy opened the 20th Annual TSMC Technology Symposium, followed by Dr. Mark Liu, Dr. Jack Sun, Dr. Cliff Hou, J.K. Wang, Dr. V.J. Wu, and Suk Lee. A variety of topics were covered, but I had IoT on my mind, so that is what I will talk about here.

Internet of Things (IoT): the network of physical objects that contain embedded technology to communicate and sense or interact with the objects' internal state or the external environment.

My interest in IoT started with chapter 8 of Fabless: The Transformation of the Semiconductor Industry, where I asked 30 industry luminaries, "What's next for the semiconductor industry?" In the 300-word responses IoT was a common thread, so that is where I have been spending my time. My goal is to navigate through the hype and figure out just how the fabless semiconductor ecosystem (EDA, IP, foundries) can monetize this emerging market. The semiconductor industry is all about design starts, and to me that is what IoT is all about. From what I can tell, the majority of IoT designs today are implemented in mature nodes, with 65nm considered bleeding-edge technology.

    The basic building blocks of an IoT chip include:

    • MCU (ARM is the default here)
    • Sensors (temperature, vibration, gyroscope, humidity, pressure, altitude)
    • Power Management (solar, energy harvesting, short burst battery usage)
    • Embedded Memory (flash, NVM, SRAM)
    • Connectivity (GSM, GPRS, LTE, Zigbee, WiFi, Mesh Network)

    Let me know if I’m missing a block.

According to J.K. Wang, Vice President of Operations for 300mm fabs, TSMC ships more than 1.3M 28nm wafers annually, and that will increase by 20% this year. The transition to FinFETs is expected to start in 2015 with 900K wafers shipped, followed by 1.3M wafers in 2016, which will free up an amazing amount of low-cost 28nm capacity. 28nm also has the strongest design ecosystem, with more than 100 partners, including 39 vendors offering more than 6,000 pieces of IP. This has the makings of a perfect IoT storm:

    Low Cost + Large Capacity + Low Power + Design Enabled = Low Barrier to Entry!

    The next of many IoT seminars I will attend is sponsored by the World Affairs Council:

    The Internet of Things: Global Implications of Merging the Physical and Digital Worlds

    More than nine billion devices around the world are currently connected to the Internet, including computers and smartphones. That number is expected to increase dramatically within the next decade, with estimates ranging from quintupling to 50 billion devices to reaching one trillion. Please join us for a discussion of how the Internet of Things will impact the way we live, the way business is done and how resources are consumed. Important to the discussion will be the challenges ahead when merging the physical and digital worlds and the implications for privacy and security around the world.

    SPEAKERS:

    MODERATOR:

    • Aleecia McDonald, Director of Privacy, Center for Internet and Society, Stanford Law School

    WHEN:

    Wednesday, May 7, 2014

    Reception: 6:00 PM – 7:00 PM
    Event: 7:00 PM – 8:00 PM

    WHERE:

    Cadence Design Systems, Inc.

    2655 Seely Avenue, San Jose, CA 95134

    I hope to see you there!

More Articles by Daniel Nenni…



    System Design: Turtles All the Way Down
    by Paul McLellan on 04-27-2014 at 7:34 am

According to Stephen Hawking, Bertrand Russell once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun orbits around our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: "What you have told us is rubbish. The world is really a flat plate supported on the back of a giant turtle." The scientist gave a superior smile before replying, "What is the turtle standing on?" "You're very clever, young man," said the old lady. "But it's turtles all the way down!"

    Electronic systems are a bit like that. What is a system depends on who you talk to, and a system to one person is built out of components that are themselves systems to someone else. Pierre Paulin neatly defined system-level as “one level above whatever level you are working at.”

    In the EDA and semiconductor world, we are used to talking about systems-on-chip or SoCs. But the reality is that almost no consumer product consists only of a chip. The closest are probably those remote sensing transport fare-cards like Clipper. They are self-contained and don’t even need a battery (they are powered by induction).

Most SoCs require power supplies, antennas and a circuit board, plus a human interface of some sort (screen, buttons, microphones, speakers, USB…) to make an end-user product. Nonetheless, a large part of the intelligence and complexity of a consumer product is distilled into the primary SoC inside, so it is not a misnomer to call them systems. There is a reason Apple builds its Ax chips but not the other chips in the iPhone and iPad.


However, when we talk about system level in the context of chip design, we need to be humble and realize that the chip goes into something larger that some other person considers to be the system. Importantly, from a business perspective, the people at the higher level have very little interest in how the lower-level components are designed; that detail is technically hard to take advantage of in any case. The RTL designer doesn't care much about how the library was characterized; the software engineer doesn't care much about the language used for the RTL; and so on.

    At each level, some model of the system is required. It seems to be a rule of modeling that it is very difficult to improve (automatically) the performance of a model by much more than a factor of 10 or 20 by throwing out detail. Obviously, you can’t do software development on an RTL model of the microprocessor, too slow by far. Less obviously, you can’t create a model on which you can develop software simply by taking the RTL model and reducing its detail and speeding it up. At the next level down, the RTL model itself is not something that can be created simply by crunching the gate-level netlist, which in turn is very different from the SPICE netlist.

    When I was at Virtutech, Ericsson was a customer and they used (and maybe still do) Virtutech’s products to model 3G base stations, which is what the engineers we interfaced with considered a system. A 3G base station is a cabinet-sized box that can contain anything from a dozen up to 60 or so large circuit boards, in total perhaps 800 processors all running their own code. Each base station is actually a unique configuration of boards so each had to be modeled to make sure that that collection of boards operated correctly, which was easiest to do with simulation. Finding all the right boards and cables would take at least a couple of weeks.

    And most chips are built using an IP-based methodology, some of which is complex enough to call a system in its own right. So it’s pretty much “turtles all the way down”.


    More articles by Paul McLellan…


    Secret of TI’s Success in Analog & Embedded Space
    by Pawan Fangaria on 04-27-2014 at 7:30 am

Since I started looking at the ways Texas Instruments works through its strategies, my belief has firmed up that this is one company which can always sail through rough waters during a downturn and reap rich benefits during an upturn. They regularly review their strategies and can predict ahead of time when the water is about to turn red. That's when they change gears to move towards blue water, generate good free cash flow, and handsomely reward their shareholders. TI is seeing improved margin and free cash flow after coming out of the OMAP business right in time. Today, a price war (a red-ocean phenomenon) has already started in the smartphone business, sparing only Apple, which is on its way to trading in a niche segment if it does not revise its pricing strategy to remain in the mass market. Recently we were talking about wearable technology and fitness trackers, and to my surprise, I hear about Nike already closing its Fuelband division.

I was patiently reading the long transcript of the interview of Rich Templeton, Chairman, President and CEO of TI, with Credit Suisse. There he clearly points to the ~10-year life cycle of a product or design in the automotive and industrial segments, against the ~12- to 18-month life cycle of a product in personal electronics (read: smartphones). And the amount of electronics in the industrial and automotive segments is increasing, with no deceleration in sight. A dollar invested in the automotive and industrial segments pays a larger and much longer-lasting return. In these segments, TI has been adding ~30 basis points of market share annually (for the past 5 to 6 years) in its analog and embedded business. In TI's Q1 2014 revenue, ~84% came from the analog and embedded business, which also improved the overall gross margin to 53.9%.

So, what's the secret in the analog and embedded business? Why can't the competition copy it? How long will this competitive advantage be sustained? One obvious answer is that analog and embedded technology has remained TI's core competency; they are able to use this strength effectively in many complex products and solutions. But there is more to it. In segments like industrial, automotive, medical, and home, traditional electromechanical systems are being replaced by semiconductor-based systems. These systems have integrated semiconductor designs; you may call them SoCs, which have sensors, processors, controllers, intelligent software, firmware, connectivity (along with internet access to enable IoT!), power management, automation to troubleshoot any defective part, and so on. TI provides solutions to applications in these areas, and they cannot be copied easily. Moreover, TI has a very broad and robust portfolio of products in these segments, e.g. analog, embedded processors, controllers, power management systems, embedded analytics, DSP libraries, an advanced driver assistance system (ADAS) including a Vision SDK (Software Development Kit) and libraries for the Embedded Vision Engine (EVE) and DSP, and many others. This big portfolio, which keeps improving and widening, will continue to provide a sustainable competitive advantage to TI. Another key competitive advantage of TI over its analog competitors is its 300mm wafer fab.

Out of curiosity about some of the intelligent solutions TI provides, I looked at its Embedded Analytics solutions, which combine embedded processors with sensor-driven systems to get that extra edge of intelligence in real-world applications in security and surveillance, industrial automation, automotive vision and other diverse areas. The solution uses diverse algorithms and provides fast processing with predictable latency.

The industrial solution has a diverse application area, including complete factory automation, robotics, traffic management, control systems, automatic inspection and many others that cannot be classified elsewhere. The application of sensors to detect sound, motion, temperature, pressure, etc. is the key to the industrial solution, along with embedded software, processors and intelligent algorithms. One example is vision systems used in industry: smart cameras and vision processing systems that employ TI's TMS320C66x multicore low-power DSPs, which provide great programmability. Details can be seen on the TI website.

Similarly, I looked at TI's smart automotive solution; the TMS320C6000™ DSP platform enables various vision and audio processing sub-systems that form a particular vehicle's embedded analytics system. These vision processing sub-systems monitor both inward and outward spaces. Image sensors monitor the spaces around the car for its own protection from outside objects, as well as for the protection of life outside, through night vision, pedestrian detection, lane departure warning, etc. The system can even detect driver drowsiness inside the car.

Amazing systems; there are similar intelligent systems in medical, healthcare and other areas. These will become more intelligent, and remotely accessed and controlled, with the advent of IoT, and I guess that's where TI is sharpening its focus in the long run: integrated systems that provide high revenue and high margin for a longer duration, with a clear competitive advantage.

More Articles by Pawan Fangaria…
