
Power Management Gets Tricky in IP Driven World

by Pawan Fangaria on 07-08-2015 at 7:00 pm

Today, an SoC can have multiple instances of an IP and also instances of many different IPs from different vendors. Every instance of an IP can work in a separate mode and requires a dedicated power arrangement which may only be formalized at the implementation stage. The power intent, if specified earlier, will need to be re-generated according to the target technology. Now imagine defining the power intent for a large number of IPs, sourced from multiple vendors, at the implementation stage; it’s nearly impossible and can be a nightmare for verification and debugging. The power intent needs to be specified in a top-down manner and refined from abstract at the top level to detail at the implementation level.

A technical paper presented by ARM and Mentor Graphics at DVCon this year illustrates an effective and efficient methodology for specifying power intent incrementally, refining it successively from the abstract level to the implementation level.

The successive refinement methodology recognizes three stages in the life of an IP. The power intent for the IP at each stage is specified in a UPF (Unified Power Format) file at an appropriate level of abstraction, according to a strategy for successive refinement of the power intent.

The first stage is the IP Creation Stage, when the most abstract view of the power intent is created by the IP provider. This abstract view is represented in a Constraint UPF file that contains the constraints on the power intent of the design (RTL). Power domains are defined at the 'atomic' level, which cannot be further partitioned by the IP consumer. If the user intends to use retention in the power management scheme, then the 'retention constraints' must be specified in terms of the state elements to be retained.

Similarly, the Constraint UPF file should specify the 'isolation constraints' in terms of the isolation clamp values that must be used if the user decides to shut down portions of the system as part of the power management scheme. Also, the fundamental power states of an IP block and its component domains should be defined in a technology-independent manner, i.e. without any reference to voltage levels. The 'power states' should be defined without imposing any particular power management approach on the IP consumer. The Constraint UPF file should not be replaced or changed by the IP consumer.

The second stage is the IP Configuration Stage when the IP licensee or end user describes application-specific configuration of the UPF constraints in a Configuration UPF file. The Configuration UPF file contains details of the power management scheme for the system including design ports that a design may use to control power management logic, isolation and retention strategies on power domains along with their control logic, logic expressions on power domain states to reflect control inputs, and so on. The logical control signals may include signals for isolation and retention cells, and signals that will eventually control power switches when they are defined as part of implementation. The Configuration UPF file is required for simulation.

The third stage is the IP Implementation Stage, when the end user describes the technology-specific implementation of the UPF configuration in an Implementation UPF file. The Implementation UPF file contains details of the implementation such as power switches and voltage rails (i.e. the supply network). It defines which supply nets specified by the implementation engineer are connected to the supply sets defined for each power domain. Technology references such as voltage values and cell references are part of the Implementation UPF.

This scheme is well structured: power intent defined in the Constraint and Configuration UPF files can be applied to different implementations that differ in technology details. The Configuration UPF loads in the Constraint UPF for each IP so that the tools can check that the constraints are met in the configuration. After that, the Implementation UPF is added to specify implementation details and technology mapping. The complete UPF then drives the implementation process.
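As an illustrative sketch of the three stages, the fragments below use IEEE 1801 (UPF) commands in abridged form; all instance names, signal names, and file names are invented for this example, and a real flow would carry many more details:

```tcl
## constraint.upf -- IP creation stage (written by the IP provider)
create_power_domain PD_CPU -atomic -elements {u_cpu}
# State elements that must be retained if the integrator uses retention
set_retention_elements CPU_RET_LIST -elements {u_cpu/u_regfile}
# Clamp value required on outputs if the integrator powers the block down
set_port_attributes -elements {u_cpu} -applies_to outputs -clamp_value 0
# Fundamental power states, with no reference to voltage levels
add_power_state PD_CPU -state {RUN -logic_expr {cpu_on}} \
                       -state {OFF -logic_expr {!cpu_on}}

## configuration.upf -- IP configuration stage (written by the integrator)
load_upf constraint.upf -scope u_cpu
set_isolation CPU_ISO -domain PD_CPU -clamp_value 0 \
    -isolation_signal iso_en -isolation_sense high
set_retention CPU_RET -domain PD_CPU \
    -save_signal {ret_save posedge} -restore_signal {ret_restore negedge}

## implementation.upf -- IP implementation stage (technology specific)
create_supply_net VDD_CPU
create_power_switch CPU_SW -domain PD_CPU \
    -control_port {ctrl pwr_en} -on_state {on_s VDD_CPU {ctrl}}
```

The point of the split is visible even in this sketch: nothing in the first two files names a voltage or a library cell, so they can survive a retarget to a new process node unchanged.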

The Configuration UPF file for a system together with the Constraint UPF files for IP components and the RTL for the system and its components can be verified in simulation. Once verified, these files can be considered as ‘Golden Reference’ for the design cycle.

This approach enables the verification equity of the design to be preserved and relied upon through the implementation stages, thus shortening the design cycle. The full value of the approach can be realized when all elements are available in the entire tool chain.

Recently there was a press release from Mentor announcing IEEE 1801 UPF 2.1 support in Questa Power Aware Simulation, which fully supports this successive refinement methodology and accelerates the design and verification of power management architectures. The methodology of partitioning power intent into constraints, configuration and detailed implementation also simplifies debugging of power management issues. For tools that do not yet support UPF 2.1, Questa Power Aware can generate functionally equivalent UPF 1.0 from UPF 2.1 so that they can still support UPF-based flows.

A more detailed description of the methodology, along with an example processor design, is given in the DVCon paper jointly presented by ARM and Mentor.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


3 Key Frontiers for Samsung’s Next Mobile SoC

by Majeed Ahmad on 07-08-2015 at 4:00 pm

Samsung’s Exynos 7420 system-on-chip (SoC) is now at the top of the world when it comes to performance and power efficiency benchmarks. It’s also won accolades as the first mobile chipset manufactured using the 14nm FinFET fabrication process.

However, the mobile chipsets landscape is hypercompetitive, and there are other giants like Apple, Intel, MediaTek and Qualcomm relentlessly working to create powerful mobile SoCs. And that means Samsung isn’t sitting on its laurels either.


Exynos 7420: A major breakthrough in power efficiency

So what’s next for Samsung’s mobile SoC juggernaut? The blog identifies three major areas where the Korean chipmaker might be focusing its efforts right now.

The Baseband Puzzle

So far, Samsung hasn't been able to compete with Qualcomm's LTE baseband products, and that severely limits Samsung's ability to deliver SoCs with integrated mobile connectivity. According to media reports, some Galaxy S6 handsets have incorporated the in-house baseband chip called Shannon while other S6 units use Qualcomm's baseband IC.

That clearly shows two things. First, Samsung is fully aware of the strategic importance of integrating the baseband into the mobile application processor. Second, the Korean electronics giant is testing the waters before taking Qualcomm out of its mobile BOM altogether.

Baseband is strategic to Samsung’s SoC ambitions

There is hardly any doubt about Samsung's seriousness about stitching the baseband building block into its mobile SoCs. In 2014, Samsung unveiled the quad-core Exynos ModAP chipset, which integrated a baseband with the application processor and supported multimode LTE devices. Next, it announced the Exynos 300 modem chip that would also support LTE-A devices. Both LTE modems were based on CEVA's DSP cores.

New CPU Design?

Next, Samsung seems to be at the mercy of other players for real improvements to its chips; its reliance on off-the-shelf ARM CPU cores has become a roadblock to quickly improving the Exynos family of SoCs. So there is a lot of speculation in the trade media about Samsung following Apple in designing its own CPU based on an architecture license from ARM.

An in-house CPU allows quicker SoC improvements

The media reports also provide a few details about Samsung's custom CPU cores, touted as "Mongoose", which are going to be based on the ARMv8 instruction set and clocked at 2.3 GHz. A custom CPU design seems to be a logical next step in the evolution of Exynos mobile SoCs.

However, on the GPU side, Samsung has recently announced a long-term graphics technology agreement with ARM. The license covers the latest Mali graphics processing units for a more immersive visual experience. The scope of the deal suggests that the Korean chipmaker will continue using ARM's Mali graphics cores in its future mobile SoCs.

IoT: The Next Logical Step

Artik is another key building block of Samsung's future SoC roadmap. It's a new chip family that powers Internet of Things (IoT) devices. Artik comes in the form of three modules that bundle CPUs, GPUs, memory and storage along with wireless networking, sensors, video decoding and other components.

Artik modules come in three sizes for addressing a variety of IoT apps

Artik is aimed at makers of robots, drones, and other IoT devices. The Artik SoC lineup is going to be different from Exynos because it’s targeted at a range of hardware developers, both large and small, not just large smartphone OEMs.

Samsung is betting that its Artik SoC will be more attractive to hardware developers because buying off-the-shelf SoCs from Samsung is far easier than bringing together and optimizing four or five different components, like a Bluetooth chip, a sensor pack, memory and an encryption component, onto a PCB.

Majeed Ahmad is the author of Nokia’s Smartphone Problem: The End of an Icon?


Circuit Simulation Update from #52DAC

by Daniel Payne on 07-08-2015 at 12:00 pm

Actual users of circuit simulators told their design and simulation stories at DAC during a luncheon sponsored by Synopsys on June 8th. I always prefer to hear from a design engineer rather than a marketing person about what tool they use for circuit simulation and how it helps them meet their design goals. This year there were engineers from TSMC, Altera, Xilinx and STMicroelectronics.

TSMC
Up first was Shaojie Xu from the Memory Design Products group; his challenge was the design and characterization of a 2-port register file, where they needed to trade off simulation runtime against accuracy and get detailed parasitic extraction values. For timing and power analysis simulations they saw runtimes of 1-2 days when using 8 to 12 CPUs.

As they moved from the 28 nm to the 10 nm process node, parasitic extraction file sizes increased, with the SPF files growing by a factor of 6X. Only the critical paths are cut out of the complete memory netlist in order to fit into the circuit simulator. They used CustomSim for SRAM characterization to get the best runtime-versus-accuracy trade-off; at 10 nm, simulating across 80 PVT corners took about 3 days to get complete results using 8 threads.

Altera
Ethan Howe talked about how they used the FineSim circuit simulator to validate their advanced-node designs for FPGAs, ARM-based SoCs and CPLDs. TSMC and Intel Foundry are their two foundry partners. Circuit simulation trends include:

  • Increased run times with smaller nodes like FinFET, compared to planar CMOS
  • More parasitics used in FinFET designs, and the models get more complex
  • Larger number of PVT corners required to center the design
  • Lots of Monte Carlo simulations to account for process variations
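That last point is easy to picture with a toy model. The Python sketch below (all numbers are invented for illustration, not from any real PDK) samples a per-trial threshold-voltage shift and estimates the resulting spread of a gate delay; this is the kind of statistic a real Monte Carlo SPICE run produces, at far greater cost per trial:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Assumed, illustrative parameters:
NOMINAL_DELAY_PS = 10.0       # nominal gate delay
SENSITIVITY_PS_PER_MV = 0.05  # delay shift per mV of Vt variation
SIGMA_VT_MV = 15.0            # assumed 1-sigma threshold-voltage variation

def trial_delay():
    """One Monte Carlo trial: sample a Vt shift, return the resulting delay."""
    dvt = random.gauss(0.0, SIGMA_VT_MV)
    return NOMINAL_DELAY_PS + SENSITIVITY_PS_PER_MV * dvt

delays = [trial_delay() for _ in range(10_000)]
mean = sum(delays) / len(delays)
var = sum((d - mean) ** 2 for d in delays) / (len(delays) - 1)
print(f"mean delay {mean:.2f} ps, sigma {var ** 0.5:.2f} ps")
```

In a production flow each trial is a full SPICE simulation that can take minutes to hours, which is why simulator speed and multi-core scaling dominate these discussions.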

While evaluating which circuit simulator to use, they found that FineSim was 2.5X faster than anything else at the same accuracy level. The fast simulation speed was attributed to good multi-core support and scalability on LSF. By co-simulating FineSim with Verilog (VCS), they saw an ADC case speed up from 9 days to just 1.5 days.

Xilinx
AMS simulation was the focus of Patrick Lunch, who talked about the design of their digitally-assisted ADC for on-chip voltage and temperature sensing in the Virtex series from 28 nm down to 16 nm. His group also designed a 3D IC comprised of two 28 nm FPGA slices, while a DAC and ADC were implemented in 65 nm.

Digital verification was simulated with VCS, while AMS verification was done with the VCS AMS tool. Engineers used real number modeling and could swap out SPICE netlists with models. Requirements for AMS verification include:

  • Support a UVM flow
  • UVM + real numbers
  • UVM + SPICE
  • Self-checking of analog and digital
  • Predictor models and assertions

There was a learning curve for UVM, but it was worth the effort because most AMS simulations completed within a day. They saw about a 3X run-time improvement with VCS AMS versus their previous approach. Areas for improvement were:

  • Writing predictor models in less time
  • Want real ports in co-simulation flows
  • More AMS assertions, checkers and source generators

STMicroelectronics
The final speaker was Pierluigi Daglio, someone I met at DAC in 2010 when hosting a panel discussion on SPICE. At ST they have a BCD (Bipolar-CMOS-DMOS) process and use the CustomSim simulator. On their SmartPower chips they can simulate the entire chip with CustomSim.

For system-level verification they use VCS AMS and can have either analog on top, or digital on top, depending on the methodology used for each design team. They are seeing simulation results about 2X faster now with CustomSim versus their previous simulator, and VCS AMS is up to 5X faster than before. By simulating with up to 16 cores they now see 3 day runs complete in just 1 day.

A PLL design in 28 nm FDSOI used to take 4 days for a Monte Carlo simulation using 1 core; now with CustomSim they can use 8 cores and finish in under 1 day.

They write assertions to uncover any differences between the scoreboard and dynamic simulations to find failures. The asserts can be placed on either the digital or analog portions.

Summary
The cool thing with events like this DAC luncheon is that you can approach the speakers after their talks for further discussion, asking questions and finding out more details about how and why they used each of these different circuit simulation approaches.


Xilinx Datacenter on a Chip

by Paul McLellan on 07-08-2015 at 7:00 am

I talked recently about the Intel acquisition of Altera which seems to be all about using FPGA technology to build custom accelerators for the datacenter. Some algorithms, especially in search, vision, video and so on map much better onto a hardware fabric than being implemented in code on a regular microprocessor.

So if the heart of the future datacenter is a high-performance processor coupled with a programmable fabric then Xilinx just taped out what I think of as a datacenter on a chip, although they call it the industry’s first All Programmable Multi-Processor SoC (MPSoC). It is on TSMC’s 16FF+ process.

Of course this is the initial tapeout. Xilinx is already shipping parts in TSMC’s 28nm and 20nm nodes.

Under the hood are seven processor cores:

  • A quad-core 64-bit ARM Cortex-A53 processor
  • A dual-core 32-bit ARM Cortex-R5 real-time processing unit
  • An ARM Mali-400 GPU

There is also a suite of integrated peripherals, security features and advanced power management. The All Programmable Zynq UltraScale+ MPSoC enables the development of flexible, standards-based platforms by providing 5X system-level performance/watt and any-to-any connectivity with the security and safety required for next-generation systems.


With processors and programmable fabric, the parts can obviously be used for a wide range of applications. But Xilinx calls out three areas of focus: embedded vision systems, especially for advanced driver assistance systems (ADAS) and autonomous vehicles; the industrial Internet of Things, adding a lot of local processing power so decisions can be made accurately and fast; and 5G mobile base stations, with multiple antennas, high data rates and aggressive power limits.

Automotive
The Zynq UltraScale+ MPSoC is tailored for next-generation embedded vision systems, including industrial machine vision, surveillance, and automotive ADAS systems. For ADAS, the Zynq MPSoC tightly couples highly parallelized hardware image processing and analytics acceleration with software-based algorithm configuration and control. With the addition of expanded memory with UltraRAM for video buffering, throughput is maximized and latency is reduced; a critical attribute for ADAS. Finally, to enable real-time safety-critical countermeasure decisions and initiate actuator commands, the Zynq MPSoC's dual-core ARM Cortex-R5 engines can be utilized in lockstep mode along with cross-monitoring and diagnostic-protected voting in the programmable fabric. The Zynq MPSoC was designed with automotive ISO 26262 functional safety requirements in mind, while still offering a scalable and highly customizable programmable platform that will future-proof customer designs in the quickly changing ADAS space.
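The lockstep idea in that last point can be sketched in a few lines. In this toy Python model (names invented; real lockstep compares bus-level signals in hardware, cycle by cycle) two redundant "cores" run the same computation and a checker blocks any diverging result before it could reach an actuator:

```python
# Toy model of dual-core lockstep with a cross-monitor.
# Purely illustrative -- not how the Cortex-R5 implements it in silicon.

class LockstepFault(Exception):
    """Raised when the two redundant cores disagree."""

def lockstep_step(core_a, core_b, inputs):
    """Run both cores on the same inputs; pass the result only on agreement."""
    out_a = core_a(inputs)
    out_b = core_b(inputs)
    if out_a != out_b:
        raise LockstepFault(f"divergence: {out_a!r} != {out_b!r}")
    return out_a

healthy = lambda x: x * 2
flipped = lambda x: (x * 2) ^ 1   # model a single-bit upset in one core

print(lockstep_step(healthy, healthy, 21))   # agreement: result passes through
try:
    lockstep_step(healthy, flipped, 21)
except LockstepFault as e:
    print("fault detected:", e)
```

The safety win is that a transient fault in either core turns into a detected error rather than a silently wrong actuator command.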

Industrial Internet of Things

For the Industrial Internet of Things, the Zynq UltraScale+ MPSoC family is ideally suited to integrate data acquisition, perform real-time diagnostics, and enable local decision-making for intelligent connected control systems. The combination of the MPSoC processing subsystem, the UltraScale programmable logic fabric, and the new UltraRAM on-chip memory technology create the ideal platform to process the vast quantities of data for analytics and manage real-time machine-to-machine (M2M) communication. With a dedicated security processing unit and dual core Cortex-R5 engines that can be configured for lockstep, the Zynq MPSoC also supports SIL3 functional safety and security requirements.

Next Generation Mobile

Zynq UltraScale+ MPSoC devices support the increased radio and baseband processing requirements of next generation 5G systems. This includes the support of new ‘massive MIMO’ and adaptive beamforming architectures, CloudRAN layer 1 baseband acceleration and associated Fronthaul applications, with flexible support of multiple standards and multiple bands at significantly lower power. The Zynq UltraScale+ MPSoC, with its quad core ARM Cortex-A53 processing subsystem, leverages an integrated fine grain power management system to implement lower power optimal hardware-software design for digital pre-distortion, beamforming control functions, and system management tasks.


Why Automotive IP Portfolio is not just IP

by Eric Esteve on 07-07-2015 at 7:00 pm

Synopsys is launching a broad IP portfolio to support SoC development dedicated to emerging complex automotive functions, like driver assistance (ADAS), driver information, vehicle networks or infotainment. I was never involved in IC design for automotive, but I have designed ASICs for avionics (CFM56 engine control) and for railways (TGV bogie instability), and I can guarantee that the required level of quality is high, very high. The goal is to guarantee a safety level that you don't expect for an application processor or a set-top-box SoC. Indeed, the required safety level is similar for automotive as for avionics or railways, with the difference that emerging automotive applications are now consumer oriented: TTM plays an important role and OEMs want to differentiate by bringing ever more features. We could say: Automotive = Avionics + Consumer!

Automotive-grade requirements for IP are multiple and well documented. To reduce risk and accelerate qualification of automotive SoCs, Synopsys' IP must comply with these requirements:

  • Automotive Safety Integrity Level (ASIL) B Ready: accelerates ISO 26262 functional safety assessments to help ensure designers reach their ASIL targets
  • AEC-Q100 Testing: reduces risk and development time for AEC-Q100 qualification of SoCs
  • TS 16949 Quality Management: meets the quality levels required for automotive applications

These quality requirements apply to the complete IP portfolio listed above. ASIL B Ready IP, including complete safety package documentation, includes: ARC EM SEP, Ethernet AVB, LPDDR4 and embedded memories. ASIL B compliance is in progress for HDMI, MIPI CSI-2/DSI, PCIe, mobile storage, data converters, logic libraries, NVM, the Sensor & Control Subsystem and EV vision processors.

Synopsys has introduced the ARC EM Safety Enhancement Package (SEP) to support ISO 26262 targeted solutions. This means that the ARC processor core integrates hardware safety features like parity support, ECC support, a user-programmable watchdog timer, state export (lockstep), a lockstep monitoring system and a memory protection unit. Moreover, the ASIL D Ready MetaWare compiler enables development of ISO 26262-compliant software. When the IP vendor invests upfront in a high quality level, the SoC design team benefits from a TTM advantage, as the software is automatically ISO 26262 compliant.
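Parity is the simplest of those hardware safety features, and a toy software model shows the principle. In this illustrative Python sketch (in ARC EM SEP the check is done in hardware on memories and registers; this is only a sketch of the idea), one parity bit stored alongside a word detects any single-bit upset on readback:

```python
# Minimal even-parity protection, as used on safety-critical memories.
# One extra bit per word detects (but cannot correct) any single-bit flip;
# correcting it would require ECC, e.g. a Hamming code.

def parity_bit(word: int) -> int:
    """Even parity: 1 if the word has an odd number of set bits."""
    return bin(word).count("1") & 1

def store(word):
    """Keep the data word together with its parity bit."""
    return (word, parity_bit(word))

def load(stored):
    """Recompute parity on readback; a mismatch means a bit flipped."""
    word, p = stored
    if parity_bit(word) != p:
        raise ValueError("parity error: single-bit upset detected")
    return word

mem = store(0b1011_0010)
assert load(mem) == 0b1011_0010          # clean readback passes
corrupted = (mem[0] ^ 0b0000_1000, mem[1])   # flip one stored bit
try:
    load(corrupted)
except ValueError as e:
    print(e)
```

ECC extends the same idea with enough redundant bits to locate, and therefore correct, the flipped bit instead of only flagging it.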

Synopsys also offers a 10/100/1G Ethernet QoS controller safety package. This Ethernet controller may be used for ADAS module-to-module network traffic and supports the IEEE Audio Video Bridging (AVB) specifications. The controller is optimized for synchronized automotive multimedia, supporting safety-critical operation features as well as data prioritization, bandwidth reservation, traffic shaping and universal timing. This UNH-tested Ethernet controller enables predictable and reliable networks.
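The bandwidth reservation and traffic shaping mentioned above are done in AVB with a credit-based shaper (IEEE 802.1Qav). The toy Python model below (units, slopes and frame sizes are invented) captures the core rule: a reserved-class queue may start a frame only while its credit is non-negative, earning credit at idle_slope while it waits and spending it at send_slope while it transmits:

```python
# Toy credit-based shaper in the spirit of IEEE 802.1Qav.
# Time advances in abstract "ticks"; a real shaper also resets positive
# credit when the queue empties, which this sketch omits.

def shape(frames_bits, idle_slope, send_slope, tick_bits):
    """Return the start tick of each frame under credit-based shaping."""
    credit, t, schedule = 0.0, 0, []
    queue = list(frames_bits)
    while queue:
        if credit >= 0:
            frame = queue.pop(0)
            schedule.append(t)
            ticks = frame // tick_bits      # ticks spent transmitting
            credit += send_slope * ticks    # send_slope is negative
            t += ticks
        else:
            credit += idle_slope            # wait one tick, earn credit
            t += 1
    return schedule

# Three 1500-bit frames with 25% of the link reserved for this class:
print(shape([1500, 1500, 1500], idle_slope=0.25, send_slope=-0.75,
            tick_bits=100))   # -> [0, 60, 120]
```

With 25% of the link reserved, three back-to-back 15-tick frames get spaced 60 ticks apart; the class is held to its reserved bandwidth even when its queue is full, which is what makes the network timing predictable.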

The AEC-Q100 industry standard specification outlines the stress tests and reference test conditions for the qualification of automotive-grade SoCs. Synopsys is investing in providing IP that meets AEC-Q100 requirements, enabling designers to reduce design risk and development time for SoC-level AEC-Q100 qualification. We can appreciate the high level of investment when looking at the above table. The specific tests range from fault simulation, accelerated lifetime simulation and die fabrication reliability tests to ESD, latch-up and characterization, which impact IP (or foundry) quality, while accelerated environment stress tests, EMC, SC and soft error rate (SER) apply to the SoC only.

All automotive SoCs have to comply with the AEC-Q100 standard; offering such an IP portfolio to automotive design teams shortens the qualification effort and reduces Time-To-Market… remember that these emerging automotive applications like ADAS or infotainment mix avionics-like requirements with consumer-like TTM!

You will find an exhaustive list of Synopsys Automotive Grade IP HERE.

From Eric Esteve from IPnest


Gary Smith Passed Away Last Friday

by Paul McLellan on 07-07-2015 at 1:02 pm

I expect most of you have already heard the sad news through other channels: Gary Smith died last Friday, July 3rd, from pneumonia in Flagstaff, Arizona.

I must have first met Gary back in Dataquest days when I was at VLSI Technology. Gartner then acquired Dataquest and eventually shut down the EDA practice and laid Gary off. He then started his own company, GarySmithEDA. It only took him a couple of quarters and he told me that he was already bringing in more than they had been at Gartner.

Gary was a sort of gate-keeper to the EDA industry and its customers. Nothing happened in EDA without Gary knowing about it. He always wanted to meet a company, no matter how small, although in April and May coming up to DAC his calendar would get insanely full.

In fact for a small EDA company there was a ritual of going to see Gary in spring to make sure that he understood the product. If the planets aligned then you would get on his list of the top 25 companies to see at DAC which would often make a big difference to a startup with a very limited marketing budget, struggling for potential customers to even know they existed. Gary was scrupulously fair about these lists. It didn’t matter if you were a consulting client, or a big company, or a tiny one. If you had interesting and appropriate technology then Gary would send people to check you out.

Also, the company and its product would appear in the right place on the taxonomies of the industry that Gary and his team produced annually.

The overture to DAC for years was Gary’s presentation summarizing the state of the EDA industry on Sunday evening. Then a repeat performance to open the show the following morning. If Gary wasn’t wearing his trademark white suit then he would be in his backup trademark orange jacket. Gary was always optimistic about the EDA industry and accurate about where it was going. But sometimes he was too optimistic, predicting the takeoff of some technology such as system design a long time before it actually happened.

Lori Kate Smith, his widow, has asked John Cooley to put together a memories page this week before the funeral so that their 9-year-old son can see how Gary was seen by the industry he worked in. So if you have memories or photos of Gary then email them to John.

A memorial service is in planning for the late morning of Sunday, July 12th in San Jose. In order to allow them to reserve the appropriate facility, would you kindly indicate below the number of people in your party who may attend? Children are welcome. To do this, go here.

Full instructions for exactly what to mail and what subject lines to use for memories and photos are on John’s website here.

GarySmithEDA’s website is here.

UPDATE: the memorial service is 11am at the San Jose Doubletree. Wear orange like me. More details in the comments.


SEMICON West Preview

by Scotten Jones on 07-07-2015 at 7:00 am

SEMICON West 2015, in its 45th year since its founding in 1971, is coming to the Moscone Center in San Francisco from Tuesday, July 14th to Thursday, July 16th. SEMICON is the premier show for equipment and materials companies supporting the semiconductor, MEMS and solar industries.

There are two main ways to get value from the SEMICON show:

  • You can visit the floors of the exhibit halls, stopping in at booths and meeting with people.
  • There is a wide variety of keynote addresses and Tech spots where presentations are given on technology and markets.


Tuesday

Tuesday begins with a keynote on “Scaling the walls of sub 14nm manufacturing”. There are also Tech spots that look interesting to me covering the future of MEMS and emerging memory. For people with other interests there will also be sessions on packaging and contamination control.

Wednesday

Wednesday begins with a keynote on the Internet of Things. In the Tech spots there will be coverage of 450mm, the secondary equipment market for 200mm, packaging and more.

Thursday

Thursday has Tech spots on CMP, monetizing the Internet of Things and disruptive technologies.

All of the items discussed above are free with your entry badge. There are also paid programs running concurrently with these events each day, and of course the nightly receptions if you know the right people at the equipment and materials companies.

I plan to attend the Tuesday press briefing, keynote and the MEMS and emerging memory Tech spots. Wednesday I plan to attend the keynote and 200mm secondary market Tech spot. Thursday I am attending an Axcelis briefing and plan to go to the CMP Tech spot. After the show I will blog about what I saw.

SEMICON West is the flagship annual event for the global microelectronics industry. It is the premier event for the display of new products and technologies for microelectronics design and manufacturing, featuring technologies from across the microelectronics supply chain, from electronic design automation, to device fabrication (wafer processing), to final manufacturing (assembly, packaging, and test). More than semiconductors, SEMICON West is also a showcase for emerging markets and technologies born from the microelectronics industry, including micro-electromechanical systems (MEMS), photovoltaics (PV), flexible electronics and displays, nano-electronics, solid state lighting (LEDs), and related technologies.


Apple Watch – A Great New Design, Needs More

by Pawan Fangaria on 07-06-2015 at 7:00 pm

During the 52nd DAC, there was a special session where a brand new Apple Watch was opened and each of its components was shown with a brief description. I found this teardown session a great, innovative idea from the DAC organizers; actually two buzzing products, the Apple Watch and DJI's Phantom drone, were opened and their internals shown in this session. Earlier I had also reviewed some of the analysis done by Chipworks about the Apple Watch. Since there is so much buzz about the Apple Watch, it's natural to be curious to know more about this smartwatch. This could be a turning point in the watch industry.

The design seemed to be very compact and unique; a package of size 26 mm x 28 mm contained more than 30 components. A common motherboard with all of its components is over-moulded with a packaging compound. Then there are the sensors, touch controller and others in the package. Every component has been sourced from a prominent, reputed vendor in its space.

There is a 3D digital accelerometer and a 3D digital gyroscope in an LGA (Land Grid Array) package from STMicroelectronics. This 6-axis sensor covers acceleration plus roll, pitch and yaw, which is a shift from the iPhone 6, where Apple uses an InvenSense 6-axis sensor and a Bosch 3-axis accelerometer.

There is a capacitive touch screen controller from ADI. The operational amplifier, the OPA2376, which has excellent precision, low noise and low quiescent current, is from Texas Instruments. TI also supplied some battery management components. There is an Apple-designed processor, manufactured by Samsung at the 28nm LP process node. The power management unit is from Dialog. The codec and audio amplifier are from Maxim; the watch has a microphone and call facility. STMicroelectronics also supplied the ST32 MCU and an optical emitter/sensor encoder die for the Apple Watch.

One great aspect of the Apple Watch is that it is Apple Pay enabled and one can make calls through the watch. For this there is a Broadcom WiFi SoC, the BCM4334, and an NFC controller from NXP. There is also an NFC signal booster, the AS3923 from AMS. A few analog components, including the WiFi LNA (Low Noise Amplifier) and power amplifier, are from a fast-emerging company, Skyworks.

All of these components are in a single package; some of the components, such as the processor and DRAM, contain multiple die. This, along with other individual components in the watch such as the Taptic Engine, the Digital Crown and others, makes it a very compact piece. The watch has an innovative pressure-sensitive display, called Force Touch, which senses how hard users are pressing on the screen. Depending on the strength of the taps on the screen, different functions are activated; that's an innovative way to enable a number of functions on that small screen. Summing everything up, the Apple Watch has a great architecture and is a great start to rejuvenate the smartwatch market. However, more needs to be done by Apple for this product market.

Although the Apple Watch launch was heard across the world, the actual sales numbers are yet to be seen. One major concern is the dependence of the Apple Watch on the iPhone. Its utility is significantly reduced without an iPhone paired with it. So, the question of need arises again: if one has an iPhone, then why does one need an Apple Watch? The Apple Watch has to have an independent identity of its own. It needs to work without an iPhone in all scenarios, satisfying people's needs on its own terms and not merely by comparison with the other watches at Baselworld.

Apple is already considering reducing the Apple Watch's dependency on the iPhone. However, we need to see how independent the second version of the Apple Watch will be. We know Apple's perseverance in improving its products version after version. With that in mind, many people across the world are waiting for the second and third versions of the Apple Watch to appear before jumping in to buy one.

    The journey for smartwatches to conquer the traditional watch industry has begun. I believe Apple will gradually improve its smartwatch to the point the market desires, and that will be the day of smartwatches in the larger watch industry.

    Pawan Kumar Fangaria
    Founder & President at www.fangarias.com


    Opportunity NoCs, But Not Without Software

    Opportunity NoCs, But Not Without Software
    by Paul McLellan on 07-06-2015 at 12:29 pm

    It is easy to think that semiconductor IP is all about structures on the silicon. After all, there is "semiconductor" in the phrase "semiconductor IP". But increasingly the heart is actually software. Sonics' SonicsGN product is a network-on-chip, but to build it you need to use the software that actually builds it, which is called SonicsStudio Director.

    Just before DAC, Sonics released a new version. It has been put together to support both novices and power users: easy to get started with, but also with the features needed for designers who understand all the details of SonicsGN to get the most out of it. When SoCs can contain hundreds of blocks, in multiple power and clock domains, building an NoC is not straightforward. And that is before you worry about the fact that the NoC has to run at a GHz and the whole chip isn't allowed to consume too much power. The NoC is a crucial part of building a successful chip. More and more of the chip is 3rd-party IP (or from a group in the same company in, say, India, which is not that different from a 3rd party), and making an SoC is linking all that barely-understood stuff together. That is where NoCs come in. And not just NoCs but the software that allows you to describe it all. Like I said, it is not just the stuff on the chip, it is the software that allows you to design it.

    So what does Studio Director do? It lets you specify all the blocks on the chip and how you want them to communicate. Then it lets you check you didn’t screw up. You can check it, simulate it and more.

    • Design
    • Power
    • Lint
    • RTL generation
    • RTL simulation
    • SystemC generation
    • SystemC simulation
    • Synthesis constraint generation
    • IP-XACT generation

    If you have a NoC then you want to know all sorts of stuff at different levels. At the high level: what is the overall bandwidth? At the low level: how long does this block take to communicate with that other block? And everything in between.
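    The high-level bandwidth question comes down to simple arithmetic: a link's peak bandwidth is its data width times its clock frequency, and the aggregate is the sum over links. Here is a minimal back-of-the-envelope sketch; this is a generic illustration, not SonicsStudio's actual API, and the link names and numbers are made up.

```python
# Back-of-the-envelope peak bandwidth for NoC links:
#   peak bytes/s = (data width in bits / 8) * clock frequency in Hz
# Protocol overhead, arbitration, and contention are ignored here.

links = {
    # name: (data width in bits, clock in Hz) -- illustrative values only
    "cpu_to_dram":  (128, 1.0e9),   # 128-bit link at 1 GHz
    "gpu_to_dram":  (256, 0.8e9),   # 256-bit link at 800 MHz
    "io_to_fabric": (64,  0.5e9),   # 64-bit link at 500 MHz
}

def peak_bandwidth_gbps(width_bits, clock_hz):
    """Peak bandwidth of one link in GB/s (upper bound, no overhead)."""
    return width_bits / 8 * clock_hz / 1e9

for name, (width, clock) in links.items():
    print(f"{name}: {peak_bandwidth_gbps(width, clock):.1f} GB/s")

total = sum(peak_bandwidth_gbps(w, f) for w, f in links.values())
print(f"aggregate peak: {total:.1f} GB/s")
```

    The real value of a tool like StudioDirector is everything this sketch ignores: arbitration, contention, clock-domain crossings, and latency per hop, which is why the low-level questions need more than arithmetic.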

    That’s where StudioDirector comes in. It lets you ask all those questions without having to build the chip or grind through full RTL simulation, which is barely practical. You put the NoC together and then you find out whether it actually does all the things you need. QoR stands for Quality of Results, but it is a catchword for whether or not what you have done is good enough for the design: whether it is a good balance of performance, power, area and everything else.

    Since the NoC involves pretty much every block on the chip, the alternatives are all heavyweight: simulate all the blocks, or run STA on all the blocks, or whatever. Not a trivial task, and one involving expensive tool licenses. Multiple clocks. Voltage islands. Power down. A modern SoC is not like when I started in this industry, when a 10K-gate design with a single clock running at 3V was the limit of what anyone did. We hadn’t even got to CMOS.

    So what does the latest version of StudioDirector give you?

    • Schematic and context-sensitive text editor
    • Real-time feedback for quick-fix
    • Excel-like spreadsheet for import/export
    • Full Tcl support

    Intel 10nm delay confirmed by Tick Tock arrhythmia leak-"The Missing Tick"

    Intel 10nm delay confirmed by Tick Tock arrhythmia leak-"The Missing Tick"
    by Robert Maire on 07-06-2015 at 7:00 am

    Our 6/15 report of more 10nm Intel delays confirmed by leaked info…
    The delay appears to have interrupted Intel’s Tick Tock cadence…
    Kaby Lake replaces Skylake – Cannon Lake pushed out over the horizon?

    The news we broke is now confirmed…
    On 6/15 we put out a report that broke the news of further delays at Intel’s 10nm. This now appears to be supported by a report of leaked Intel documents on 6/23 by the website benchlife.info, which shows specifications for a new line of 14nm processors called Kaby Lake to replace Skylake (14nm) and precede Cannon Lake at 10nm.

    http://benchlife.info/cannon-lake-postpone-and-kaby-lake-will-replace-skylake-in-2016-06232015/

    The information shows another 14nm processor family called Kaby Lake, which is to follow Skylake, which follows Broadwell. Skylake was originally supposed to be followed by Cannon Lake on a 10nm process. It would now appear that Cannon Lake has been pushed far enough into the future to be beyond the visible horizon.

    Tick Tock skips a beat.. “The case of the missing Tick”….
    Intel has been doing what it has called “Tick Tock” changes in technology, with a “Tick” being a node shrink and a “Tock” being an architectural improvement at the same technology node.

    Broadwell was a “Tick” down in technology nodes to 14nm with Skylake a “Tock” at 14nm but with a new architecture. Cannon Lake was supposed to be a “Tick” down to 10nm but instead we are getting a second “Tock” of Kaby Lake at 14nm.

    Broadwell = Tick
    Skylake = Tock
    Kaby Lake = Tock
    Cannon Lake = Tick

    Maybe Intel can fix this by calling Kaby Lake a “Toke.” A warmed-over Kaby Lake marks time while waiting for the real Tick to 10nm Cannon Lake. It seems rather obvious that Intel has come up with a bit of a stop-gap while waiting on 10nm. Not only does it upset the cadence they have set in the market, but it clearly reduces manufacturers’ desire to use Kaby Lake, as it is obviously less of an improvement over previous models of product.

    Imagine a car manufacturer having problems with their 2016 model for some reason and deciding to put out a 2015-and-a-half model year as a stop-gap. Probably not going to get a lot of buyers. People will stop buying and wait for the real thing at 10nm.

    10nm delay must be very significant…

    We can’t imagine that Intel would go to all this trouble if 10nm were right around the corner or didn’t have a significant delay. This is clear evidence of a longer than expected delay.

    Not only is it a replay of 14nm, but it could be worse, as 22nm did not have an extra Tock thrown in while waiting for 14nm. You are not going to do Kaby Lake for a 3-6 month delay; it’s more like a year.

    Energizing TSMC & Samsung..
    In the continuous Le Mans race of technology, TSMC and Samsung have perennially eaten the exhaust of the long-time leader Intel, usually from a good distance. Now, not only are they closing the gap and up on Intel’s bumper, but smoke may be coming out from under the lead car.

    It could be that TSMC and Samsung experience the same issues as Intel at 14nm and 10nm, but so far 14nm does not appear to be as much of a problem for them as it was for trailblazer Intel.

    If we were in their shoes we would likely be stepping on the gas as hard as possible. As we mentioned in our prior note, this does not bode well for Intel’s mobile aspirations nor its foundry hopes, as the only advantage Intel had was its technology lead.

    Collateral Impact…

    As expectations for Intel’s capex have continued to fall, this news is not likely much of a surprise. However, for those expecting an uptick in 2016 spending by Intel, we think the odds of that are low given that not only has 10nm been pushed deeper into 2016 but it has probably started to roll into 2017 at this point.

    We don’t expect 10nm to get out of Intel’s R&D fab until the first half of 2016, which probably translates into equipment deliveries starting mid-2016 at best and rolling forward.

    15 ASML EUV tools likely pushed out….
    It was already clear that Intel wasn’t going to use EUV at 10nm, and they also said they had a clear path to 7nm without EUV, so we found it odd to hear about a 15-tool order, which we thought was clearly not written in stone.
    Well, the 15-tool order may turn out to be written in the sand if Intel pushes out its 10nm schedule. This is a significant negative turn of events for ASML, given that the order was announced not that long ago.

    On the other hand, this likely increases the time available for ASML to get EUV to work, as recent progress seems to have slowed a bit.

    Other Impact…
    Obviously the delay helps multi-patterning companies, with Lam at the top of that list followed by AMAT. Further delays at Intel do not help metrology and yield management companies, as Intel tends to be heavier in this spending area than most others. It may be made up for by Samsung and TSMC stepping on the gas, but it’s not yet clear.
    Coupled with DRAM issues, we think this makes 2016 look relatively flat overall, as Intel and DRAM weakness is offset by 3D NAND spend.

    The challenges are unclear yet clear…
    We have not heard much about the problems Intel is having, as they are deeply under wraps. It could be a combination of technical and financial issues at work; we just don’t know.

    We would like nothing more than to see Cannon Lake come out on time and to wake up and find the Kaby Lake and 10nm delays to be just a passing nightmare… but we don’t think that’s the case.

    The issue of 10nm is more problematic than the embarrassment of the 14nm delay as Intel is in a more critical situation with both its competitors and its markets than before at the 14nm node. Pulling this off will test the mettle of management.

    “Bad companies are destroyed by crisis, Good companies survive them, Great companies are improved by them.” Andy Grove

    Robert Maire
    Semiconductor Advisors LLC

    Also Read: Further Delays for Intel 10nm?