
Social Media at Atrenta
by Daniel Payne on 03-27-2014 at 11:01 pm

Atrenta is well known for their SpyGlass software, which enables SoC engineers to run early design analysis on RTL code and create a hardware virtual prototype for analysis prior to implementation. Visiting their website, you quickly see that social media plays an important role in connecting with engineers, as links for Facebook, Twitter, LinkedIn and an RSS feed are placed in the header.

Let’s take a closer look at each of these social media channels.


Applied Power Electronics Conference & Exposition 2014: "Less Power"
by Bill Jewell on 03-27-2014 at 11:00 pm

On the television show “Home Improvement”, Tim Allen’s character always sought “more power” for whatever project he was working on. The theme of the Applied Power Electronics Conference and Exposition (APEC) 2014 could have been “less power”. APEC 2014 featured five days of professional education seminars, technical papers and industry sessions, plus an exhibit hall with 226 exhibitors. We at Semiconductor Intelligence attended part of APEC 2014 last week in Fort Worth, Texas.

The “less power” theme was reflected in many of the technical and industry sessions. Wide band gap devices (semiconductors using materials such as silicon carbide and gallium nitride) can enable higher performance and less power for applications such as RF, LED lighting, motor drives and electrical power conversion. Energy harvesting captures minute amounts of naturally-occurring energy for many sensing, monitoring and control applications. Smart power grids will help deliver power more efficiently and reliably. Motor drivers and controllers are increasing the power efficiency of motors.

Another theme at APEC was “alternative power.” A track on renewable energy systems featured sessions on photovoltaic and wind energy. Several sessions on solar power emphasized the strong growth in residential photovoltaic panels. Other sessions examined technologies related to alternative energy such as energy storage and DC power transmission. Many of the vehicle power electronics sessions focused on electric drive vehicles.

The exhibition hall occupied most of the Fort Worth Convention Center and covered all aspects of power electronics. All the major power semiconductor companies were represented. The companies with the largest booths were Fairchild, Texas Instruments, STMicroelectronics, International Rectifier and Vishay. Many of the companies on the exhibitor floor emphasized the “less power” theme through applications such as LED lighting and power efficient motor controllers.

APEC 2014 showed the power electronics industry is alive and well, driving new technologies and applications. Remember that none of the electronic devices driving growth in the semiconductor market can work without power.

As the Premier Event in Applied Power Electronics™, APEC focuses on the practical and applied aspects of the power electronics business. This is not just a designers’ conference; APEC has something of interest for anyone involved in power electronics:

  • Equipment OEMs that use power supplies and dc-dc converters in their equipment
  • Designers of power supplies, dc-dc converters, motor drives, uninterruptible power supplies, inverters and any other power electronic circuits, equipment and systems
  • Manufacturers and suppliers of components and assemblies used in power electronics
  • Manufacturing, quality and test engineers involved with power electronics equipment
  • Marketing, sales and anyone involved in the business of power electronics
  • Compliance engineers testing and qualifying power electronics equipment or equipment that uses power electronics



IP Challenges, FinFET, 3D-IC, and FD-SOI Updates
by Daniel Nenni on 03-27-2014 at 10:00 am

SemiWiki is proud to be a sponsor of EDPS 2014:

April 17 & 18, 2014
Monterey Beach Hotel, Monterey, CA

Sponsored by:
IEEE Computer Society of Silicon Valley (CS-SCV)
IEEE Computer Society
Design Automation Technical Committee (DATC)
Council on Electronic Design Automation (CEDA)

The Electronic Design Processes Symposium (EDPS) provides a forum for a cross-section of the top thinkers, movers and shakers who focus on how chips and systems are designed to discuss state-of-the-art electronic design processes and CAD methodologies. The workshop focuses on the improvement of the overall design process, rather than on the functions of the individual tools themselves.

Featuring the following 2014 Keynote Speakers:

  • Chris Lawless – Director, Intel
  • Wally Rhines – CEO, Mentor Graphics
  • Martin Lund – SVP, Cadence

Program includes the following sessions:
Thursday 4/17 Sessions, 8:00 AM - 5:45 PM

  • Design Flow Challenges (including Panel)
  • Pre-Silicon SW Development Platforms
  • Technology Updates – FinFET, 3D-IC, FD-SOI

Thursday 4/17 Dinner Keynote, 6:30 PM
Wally Rhines, CEO, Mentor Graphics


Friday 4/18: IP Day, 8:00 AM - 3:00 PM

  • IP Integration, Design, Reuse (Session)
  • IP Verification and Qualification (Session)

Program includes engineers and key executives from the following companies:
Altera, Intel, Synopsys, Cadence, TSMC, Mentor, eSilicon, Atrenta, and more…

Important Dates:
– Mar. 31 End of Early Registration
– Apr. 17 On-site Registration

See www.eda.org/edps for the detailed program, registration information, and news about this upcoming event.

This symposium will be held at the Monterey Beach Hotel (www.montereybeachresort.com).

GOLD Sponsors: eSilicon, Cadence, Arteris, Netapp, Mentor, Atrenta
Session Sponsors: IPextreme, ClioSoft, SemiWiki.com

More articles by Daniel Nenni…



Early RTL Power Analysis and Reduction
by Daniel Payne on 03-26-2014 at 4:48 pm

Power analysis and reduction for SoC designs is a popular topic because of our consumer-electronics-dominated economy and the need to operate devices on a battery for the maximum time before a recharge. Just from my desk I can see multiple battery-powered devices: laptop, tablet, smartphone, e-book reader, Bluetooth headset and a Bluetooth mouse.

In my design flow I could wait until the RTL code was complete and synthesis finished, then just run power analysis on the gate-level netlist; however, it would be difficult to refactor my RTL code to get any power reduction because of time-to-market pressures. An improved design flow accounts for power analysis and reduction at an early stage, like RTL entry. Apache/ANSYS is one EDA company focused on this challenge, so I viewed their 35-minute online educast today.

You could say that designers are faced with a Power Gap: the features we want to include in our designs can quickly exceed the capabilities of our batteries or power specifications.

An RTL-based power flow allows early design trade-offs, can simulate 1 million instances in a few minutes, and can be readily debugged. Comparing gate-level to RTL-level power analysis, one customer design took about 22 hours for gate-level power analysis, while the same RTL power analysis required only 22 minutes, a 60X speedup in time to results.

Analyzing power at the RTL level is also quicker to debug: the diagram below shows the adder as the greatest source of power usage on the left side, while the right side is a confusing mass of gates at the instance level.
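
To see why RTL-level estimation can be this fast, remember that it rests on the classic switching-power relation P_dyn = α·C·V²·f rather than on full gate-level detail. Here is a minimal Python sketch of that idea, ranking blocks by estimated dynamic power the way a report might flag the adder above. The block names, capacitances and activity factors are invented for illustration; this is not Apache’s actual algorithm.

```python
# Hypothetical sketch: rank RTL blocks by estimated dynamic power.
# P_dyn = alpha * C * V^2 * f  (toggle activity x switched capacitance
# x supply voltage squared x clock frequency). All values are made up.

VDD = 0.9        # supply voltage in volts
F_CLK = 500e6    # clock frequency in Hz

# (block name, estimated switched capacitance in farads, toggle activity 0..1)
blocks = [
    ("adder",   2.0e-12, 0.45),
    ("decoder", 1.5e-12, 0.10),
    ("fifo",    2.5e-12, 0.05),
]

def dynamic_power(cap, activity, vdd=VDD, f=F_CLK):
    """Classic switching-power estimate: alpha * C * V^2 * f."""
    return activity * cap * vdd**2 * f

for name, cap, act in sorted(blocks, key=lambda b: dynamic_power(b[1], b[2]),
                             reverse=True):
    print(f"{name:8s} {dynamic_power(cap, act) * 1e3:.3f} mW")
```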

The specific EDA tool from Apache is called PowerArtist and it has three main features:

  • RTL Power Analysis

    • Average or time-based
    • Power-critical vector selection
    • Regressions with a TCL interface
  • RTL Power Reduction

    • Clock, memory or logic
    • Analysis-driven automation
    • Interactive power debugging
  • RTL links with Physical

    • Physical-driven RTL power accuracy (PACE)
    • RTL-driven power grid integrity (RPM)

The steps in a design-for-power methodology include:

  1. Perform design trade-offs
  2. Profile activity for power
  3. Check your power versus the budget
  4. Debug your power hotspots
  5. Automatically reduce power
  6. Monitor your power across regressions

Using this six-step methodology, STMicroelectronics was able to realize a 32% reduction in idle power on their ARM-core-based subsystem, verify that the power numbers at RTL were within 15% of a post-placement netlist, and optimize their power grid using power integrity patterns.

When using PowerArtist the tool inputs are: RTL code (VHDL, Verilog, SystemVerilog), clock definitions, input waveform activity, a capacitance model, power domain definitions in UPF or CPF, and power models in Liberty format.

Power accuracy at the RTL level is ensured by a calibration step called PACE (PowerArtist Calibration and Estimation) that starts with a post-layout design and then creates RTL power models.
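
As a purely illustrative sketch of what such a calibration could look like conceptually (the real PACE flow is proprietary, and all numbers below are invented), you can back-solve an effective capacitance per block from post-layout power, then reuse it for fast RTL-level estimates:

```python
# Illustrative calibration in the spirit of PACE, not Apache's algorithm:
# derive an effective capacitance per block from post-layout power numbers,
# then reuse it in the fast RTL-level estimate. All values are invented.

VDD, F_CLK = 0.9, 500e6

# Post-layout reference per block: (measured power in watts, toggle activity)
post_layout = {
    "adder":   (3.6e-4, 0.45),
    "decoder": (0.6e-4, 0.10),
}

# Solve P = alpha * C_eff * V^2 * f for C_eff, block by block.
c_eff = {name: p / (alpha * VDD**2 * F_CLK)
         for name, (p, alpha) in post_layout.items()}

# Later, at RTL, a new activity profile reuses the calibrated capacitances.
new_activity = {"adder": 0.30, "decoder": 0.25}
for name, alpha in new_activity.items():
    p_est = alpha * c_eff[name] * VDD**2 * F_CLK
    print(f"{name:8s} estimated {p_est * 1e3:.3f} mW")
```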

Power reduction first starts during your interactive debug process, which shows both graphically and numerically which RTL blocks are consuming the most power with each set of stimulus. Once you know which blocks consume the most power, you can begin to make manual decisions on RTL trade-offs to reduce power. A second approach is to let automated techniques change the RTL code, creating a lower-power version.

Graphical Power Debug Environment

Using the analysis from PowerArtist you can expect that a handful of RTL changes will account for about 50% of your power savings, and that with the next 100 RTL changes you achieve about 99% of your power savings. Algorithms in PowerArtist also work on sequential power reduction. You can even create custom power reports using Tcl scripts.

RTL power regressions are another technique to track power estimates over design time. You could run power regressions daily at block level and weekly at chip level to monitor power changes, so that no power increase suddenly appears.
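
A minimal sketch of such a regression guard is shown below. The plain-text report format is an assumption made for illustration; a real flow might export these numbers from PowerArtist, for example through its Tcl reporting interface.

```python
# Hypothetical nightly guard: compare today's per-block power against a
# stored baseline and fail the regression on any sudden increase.
# The report format ('<block> <power_mW>' per line) is invented.

THRESHOLD = 1.05  # flag anything more than 5% above baseline

def read_report(path):
    """Parse lines of '<block> <power_mW>' into a dict."""
    powers = {}
    with open(path) as f:
        for line in f:
            name, mw = line.split()
            powers[name] = float(mw)
    return powers

baseline = read_report("power_baseline.txt")
current = read_report("power_today.txt")

regressions = [(name, baseline[name], mw)
               for name, mw in current.items()
               if name in baseline and mw > baseline[name] * THRESHOLD]

for name, old, new in regressions:
    print(f"POWER REGRESSION: {name} {old:.2f} -> {new:.2f} mW")
raise SystemExit(1 if regressions else 0)
```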

Three tools from Apache are used in an RTL-driven power integrity flow: PowerArtist to analyze power over millions of cycles, RedHawk to analyze the power grid using a few dozen worst-case cycles, and finally Sentinel SIwave for signal integrity analysis across the package design.

Apache/ANSYS has engineered a collection of EDA tools that enable power analysis and optimization starting at RTL development and continuing into the package design. Starting power analysis at the earliest time in your design process allows for the largest power reductions, instead of waiting for gate-level analysis. View the full 35-minute educast here.



Book review: “shift left” with virtual prototypes
by Don Dingee on 03-26-2014 at 1:00 pm

Shipping a product with complete software support at official release is a lot more difficult than it sounds. Inevitably, there is less than enough hardware to go around, and what little there is has to fill the needs of hardware designers, test and certification engineers, software development teams, systems integration teams, sales and channel training organizations, marketing communications and event managers, and often beta customers.


NVM Express Solution is Mainstream
by Eric Esteve on 03-26-2014 at 4:16 am

Non-Volatile Memory (NVM) is a superb technology, at least if you appreciate the physics behind it: data is stored in an embedded location with no physical link, as you charge a cell by influence and read it without physically accessing the stored data. The semiconductor industry has been building NVM ICs for about 30 years, and we have discussed using NVM storage (Solid State Drives: SSD) in place of Hard Disk Drives (HDD) for probably 20 years, but only in the last 5 to 7 years has SSD become a storage solution that meets economic requirements, at least for the enterprise. The cost per GB of SSD has decreased enough over the last decade to become competitive with HDD, even if SSD is still more expensive by a factor of 10.

So there should be some clear benefits to using SSD to compensate for this price difference. The main one is the IOPS rate. According to Currie Munce, VP, Research & Advanced Technology for HGST, a unit of Western Digital, “The performance of an enterprise-class SSD can be about 100x higher for random read/write operations compared to an HDD.” Does that mean SSD will completely replace HDD for enterprise storage? No, but such an advantage will push integrators to combine NVM with HDD to get the best of both: the low cost of HDD and the high IOPS of SSD.

Increasing adoption of NVM-based storage has pushed for the standardization of protocols like Universal Flash Storage (UFS) in the mobile industry and NVM Express (NVMe) for enterprise applications. As an IP specialist, I have learned that you can gauge the successful penetration of a technology or protocol by looking at the support from IP vendors and the availability of a conformance program. The University of New Hampshire InterOperability Laboratory, better known as UNH-IOL, has created the conformance program supporting NVMe. At this point, a chip maker can acquire an NVMe IP (NVMe Controller and PCI Express PHY IP) from an IP vendor with maximum confidence, as long as the IP has passed the UNH-IOL NVMe compliance test. We have just learned that the NVMe Controller IP from Mobiveil has successfully passed this compliance test. As you can see in the picture below, the Mobiveil UNEX Controller is listed together with Huawei, Intel, PMC-Sierra and Samsung, pretty high-profile companies! You may also notice that Mobiveil is the first IP vendor to pass the NVMe compliance test.

If you want to understand the role of the NVMe Controller, it is a good idea to start one level up, at the subsystem level. This subsystem can be an SSD-based memory board, for example. If you look at the picture below, the subsystem sits between the flash IC array on the right side and the PCI Express connection to the host on the left side. The nature of flash technology is such that the Integrated Flash Controller, interfacing with the flash IC array, must be complemented with a set of functions: a microcontroller (ARM in this case), a DDR3/4 Controller and PHY to access external SDRAM, and the NVMe Controller itself. You need to implement all of the above to effectively manage an SSD array, plus a Gen-3 PCIe Controller (and PHY), to build a complete NVMe subsystem.

Flash technology offers great benefits compared with HDD in terms of IOPS, but the physical nature of flash, rated for a maximum number of Program/Erase cycles (in the range of several tens of thousands of operations), imposes massive manipulation of data and addresses. Manipulating data means you need to store it somewhere, hence the DDR3/4 memory array. The manipulation algorithms are implemented in the Flash Controller and the NVMe Controller, and if you can benefit from a dedicated microcontroller, you will use it as well. That is why an NVMe IP is a complete subsystem. Mobiveil concentrates on the digital part of the subsystem. Imagine this NVMe subsystem implemented in an FPGA device integrating an ARM core, DDR3/4 I/Os, ONFI I/O and a PCIe Gen-3 PHY (both Xilinx and Altera offer high-end families supporting these functions); you can then build a complete NVMe subsystem using Mobiveil IP almost immediately.
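
To make the data and address manipulation concrete, here is a toy Python sketch of the logical-to-physical remapping and wear leveling a flash management layer performs so that no block exhausts its Program/Erase budget early. It is a conceptual illustration only, not Mobiveil’s implementation.

```python
# Toy flash translation layer: every logical write is redirected to the
# least-worn free physical block, spreading Program/Erase cycles evenly.

class TinyFTL:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # P/E cycles consumed per block
        self.free_blocks = set(range(num_blocks))
        self.l2p = {}                          # logical block -> physical block

    def write(self, logical_block):
        # Allocate the least-worn free block so wear stays even.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        # The block that previously held this logical address is now stale:
        # erase it (one P/E cycle) and return it to the free pool.
        old = self.l2p.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
            self.free_blocks.add(old)
        self.l2p[logical_block] = target

ftl = TinyFTL(num_blocks=8)
for i in range(100):
    ftl.write(i % 3)          # hammer just three logical addresses
print(ftl.erase_counts)       # wear is spread across all eight blocks
```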

To enable faster product development for its customers, Mobiveil also offers a complete FPGA development platform with a board support package and low-level drivers to validate the solution against user applications. Thus, Mobiveil’s customers may directly target an FPGA technology; the solution is available off the shelf, and it is not only silicon proven but also UNH-IOL compliant. Such an FPGA solution is well tailored for SSD storage in the enterprise market, which is not so price sensitive but rather performance hungry.

Another market segment adopting NVMe is mobile: smartphones and media tablets. Here a customer will most probably target an ASIC technology, due to the price pressure in this market. Such a chip maker may use its own DDR3/4 or PCIe Controller; nevertheless, it could source the UNEX Controller and Integrated Flash Controller from Mobiveil, along with the existing firmware. The advantage is that the customer can validate the overall solution on the above-mentioned FPGA development platform, saving the design resources needed to build its own development platform and drastically accelerating time to market.

Will we, as end users, massively use SSD-equipped devices, PCs, laptops or media tablets, in the near future? Probably not for a couple of years, even if you can decide, as I did, to buy a laptop with an SSD instead of an HDD. If the question is: do you, as an end user, use SSD every day? Then the answer is clearly yes, although you don’t necessarily know that the file you store in or download from the cloud has already passed through an SSD storage system at some point of the trip.

From Eric Esteve, IPNEST

More articles by Eric Esteve…



SNUG and IC Compiler II
by Paul McLellan on 03-25-2014 at 4:04 pm

I have been at SNUG for the last couple of days. The big announcement is IC Compiler II. It was a big part of Aart’s keynote, and Monday’s lunch featured all the lead customers talking about their experience with the tool.

The big motivation for IC Compiler II was to create a fully multi-threaded physical design tool that scales well on multi-core processors and large server farms. The original IC Compiler, now called IC Compiler I, was written in the era when Intel would produce a faster microprocessor each year, so the tool pretty much scaled automatically just by waiting.

Synopsys actually achieved something pretty difficult. They took several years to create a new version of IC Compiler, but at the same time they had to aggressively continue development of IC Compiler I, in particular adding support for the double patterning needed for 20nm designs. That required two separate teams of developers. The goal with IC Compiler II is to increase productivity 10X by creating a tool that runs 5X as fast and requires half the number of iterations to reach closure.

Several of the lead customers talked about their experiences using the new tool. LSI used their Axxia network processor as a test vehicle. Panasonic used a large TV bridge SoC. ST used a 6M-instance FD-SOI consumer device. Imagination used sub-blocks of their PowerVR ray-tracing Wizard chip.

All of these customers had similar experiences. They used an existing design to try out the tool and gain confidence, but the results were so spectacularly better that they immediately started to use IC Compiler II for production designs. It still requires a hybrid flow, since not all the routing capability is yet in IC Compiler II. However, the floorplanning, time-budgeting and so on are all working.

The results are impressive. LSI, for example, went from iterating weekly to iterating daily, with floorplanning run-times of 17 hours compared to 5 days with IC Compiler I, clock tree optimization in 29 hours versus 6 days, and 40% fewer driver cells.

As Mark Dunn of Imagination put it, this is “disruptive technology.” His design is 20M instances with 1600 macros. Floorplanning is 10X faster and requires just half the memory. IC Compiler II is the standard for hard designs.

Pascal Teissier of ST uses it almost completely automatically for floorplanning, 90% in his estimation, with just a small amount of guidance for some datapath. His design has 4.6M instances. Block shaping takes 5 minutes, macro planning 75 minutes, and analysis and validation 8 hours. On another design they do placement and clocking for a 2.3M-instance design in 7 hours.

IC Compiler I will continue to be supported and remains the tool of choice for older process nodes. IC Compiler II will be the tool of choice for 28nm and below, especially for FinFET designs that are both large and have additional complexity associated with extraction.

As Antun Domic pointed out, it is not really correct to call it “place & route” any more; that is just one small part of the physical design process. The tool has a completely new database infrastructure and new engines for floorplanning and clock tree, and keeps the existing Zroute and linear placer. So, in fact, it is everything except place and route that is new!


More articles by Paul McLellan…


AMS Verification and Regression Testing of SoC Designs
by Daniel Payne on 03-25-2014 at 10:02 am

Digital verification engineers on SoC designs have adopted many techniques to help ensure first-silicon success: compiled simulators, constrained-random tests, simulation farms, SystemVerilog methodology, and self-checking testbenches. AMS verification has tended to be ad hoc, or sharply divided into separate analog and digital verification realms.

Synopsys has decided to start applying some of these digital verification techniques to mixed-signal designs in order to improve regression testing, as announced today at SNUG. I spoke by phone last week with Steve Smith of Synopsys to hear about this new initiative, and it turns out that we worked together at CrossCheck back in the ’90s.

The first thing that comes to my mind when hearing about AMS verification is co-simulation between a digital and an analog simulator. Synopsys has offered co-simulation technology for quite a while with their Discovery AMS tool, which combined VCS for digital simulation, CustomSim for FastSPICE simulation at the transistor level, plus a co-simulation interface. They have now created VCS AMS, which efficiently co-simulates VCS and CustomSim together for AMS verification:

When verifying an AMS design with VCS AMS the bottleneck is typically the FastSPICE simulator, so it is good to learn that CustomSim has been getting speed boosts over the past several years.

Using VCS AMS on multi-core machines and server farms you can expect to see 3-5X speedups.

The digital verification techniques that you can now use for mixed-signal designs are:

  • Automatic connectivity between digital and analog blocks (see the sketch after this list)

    • Real to Electrical (D2A)
    • Electrical to Real (A2D)
  • AMS assertions
  • AMS constrained-random stimulus
  • AMS checkers
  • SystemVerilog real number modeling
  • AMS testbench environment
  • AMS source generators
  • AMS functional coverage
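
As a conceptual illustration of the automatic connectivity item, here is a small Python sketch (standing in for what the simulator’s connect elements do natively in SystemVerilog) of D2A and A2D conversion between logic levels and real-valued voltages. The supply and threshold values are invented.

```python
# Conceptual D2A/A2D boundary conversion, illustrated in Python.
# Assumed supply and thresholds; a real flow configures these per design.

VDD = 1.2
VIH, VIL = 0.8 * VDD, 0.2 * VDD   # input-high / input-low thresholds

def d2a(logic_value):
    """Digital-to-analog connect: drive a real voltage from a logic level."""
    return {"0": 0.0, "1": VDD}.get(logic_value, VDD / 2)  # 'x'/'z' -> midrail

def a2d(voltage):
    """Analog-to-digital connect: resolve a real voltage to a logic level."""
    if voltage >= VIH:
        return "1"
    if voltage <= VIL:
        return "0"
    return "x"  # ambiguous band between VIL and VIH

print(d2a("1"))   # 1.2
print(a2d(1.0))   # '1'
print(a2d(0.5))   # 'x' -- between VIL (0.24 V) and VIH (0.96 V)
```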

You can also define your mixed-signal design in UPF and get native low-power support, where the SPICE block appears as a digital block to UPF:

Two customers talking about using VCS AMS are Rambus (memory and interface IP) and Micronas (automotive and industrial). Three companies are presenting SNUG papers on their experiences with VCS AMS: ST-Ericsson, AMD and KeyASIC.

Verification engineers using Synopsys simulators should look into this new initiative for their AMS SoC designs to speed up and improve their regression testing. The mindset is to start using digital verification ideas in mixed-signal designs with a SystemVerilog testbench methodology.



Imagine what all the DLP technology can do for you
by Pawan Fangaria on 03-24-2014 at 7:00 pm

Light has become an integral part of most of the electronic devices we use today, in every sphere of influence: personal, entertainment, consumer, automotive, medical, security, industrial and so on. Obviously, along with IoT (Internet-of-Things) devices, the devices that illuminate and display things will play a major role in that revolution. To fit effectively into end-to-end devices, light devices need to be tiny, energy efficient and powerful in terms of lumens, resolution, mix of colors and so on. Of course, there is already a wide scope of applications for light devices in cinemas, TVs, projectors, etc.

Inspired by the current and future prospects of light processing devices, I investigated the details of the innovative DLP (Digital Light Processing) technology and its applications. It was a pleasant surprise to learn about the intricacies of this technology and how it is marching through visual innovation to provide the ultimate in life’s experiences. Before I talk about the technology, just see this HUD (Head-up Display) on the windshield of a car, crystal clear with wide VGA resolution, showing directions. Think about what other information you might want displayed while you drive!

At CES 2014 there were demonstrations of other exciting products that used DLP technology. So, what is DLP technology? Invented by Dr. Larry Hornbeck at Texas Instruments in 1987, it is an optical semiconductor device called the Digital Micromirror Device (DMD), a chip that processes light digitally, hence the name Digital Light Processing (DLP). A DLP chip contains up to 8 million hinge-mounted microscopic mirrors, each measuring less than 1/5th the width of a human hair and capable of reflecting a digital image onto any screen. A micromirror is ON when tilted toward the light source and OFF when tilted away from it. When the bit-streamed code of an image enters the optical semiconductor, it directs each micromirror to switch on or off; the switching rate can be as high as 10,000 times per second. Depending on the fraction of time a mirror is ON versus OFF, it can produce up to 1,024 shades of gray, contributing to a highly detailed image.

How do the colors come into the picture? The white light from the source passes through a color wheel (which sequentially filters the light into red, green and blue) before falling on the surface of the DMD chip. A single DMD chip can produce more than 16 million colors. For very high brightness projectors used in movie theatres and large auditoriums, a 3-chip DMD system is used, where a prism creates parallel beams of red, green and blue light that are then processed by three separate DMD chips. Such a system can create as many as 35 trillion colors. There are also technologies that replace the white lamp with solid-state illumination, which emits colors directly without needing the color wheel.
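
The gray-shade arithmetic above is easy to sanity-check. Assuming a 60 Hz refresh (my assumption, not stated in the article), a mirror that switches up to 10,000 times per second has roughly 166 on/off slots per frame, and the fraction of slots spent ON sets the perceived shade. The sketch below is a simplification; real DMD modulation uses binary-weighted time slices rather than uniform slots, which is how it resolves 1,024 distinct shades.

```python
# Duty-cycle view of DMD grayscale, using the article's figures plus an
# assumed 60 Hz frame rate. A real DMD uses binary-weighted time slices.

SWITCH_RATE = 10_000   # mirror flips per second (from the article)
FRAME_RATE = 60        # assumed display refresh in Hz
SHADES = 1024          # gray levels cited in the article

slots_per_frame = SWITCH_RATE // FRAME_RATE   # ~166 PWM slots per frame

def on_slots(gray_level):
    """Slots per frame the mirror stays ON for a 0..1023 gray level."""
    if not 0 <= gray_level < SHADES:
        raise ValueError("gray level out of range")
    return round(gray_level / (SHADES - 1) * slots_per_frame)

for g in (0, 256, 512, 1023):
    duty = on_slots(g) / slots_per_frame
    print(f"gray {g:4d}: {on_slots(g):3d}/{slots_per_frame} slots ON ({duty:.0%})")
```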

Last year, TI introduced the DLP Pico projector chipset, based on its new Tilt & Roll Pixel (TRP) technology. This DLP Pico 0.2” TRP chipset produces images with double the brightness and resolution of its predecessor at half the power consumption, optimized for ultra-compact mobile devices, digital cameras and wearables. In this chipset, to augment image detail and brightness, each TRP pixel is further reduced to 1/20th the width of a human hair, and the IntelliBright suite of adaptive algorithms is incorporated.

The picture above shows how the DMD chip that drives the optical engine is combined with the display controller and electronics to process the input signals.

Let’s look at the new exciting products emerging from this technology. We have already seen the Sekonix pico projector and curved TVs demonstrated at CES 2014. Many other products are in the making, e.g. see-through near-eye displays, which need extremely low power, high brightness and vibrant images; projector-equipped smartphones and tablets; and VeinViewer, which projects an image of a patient’s veins so the doctor can spot a good vein or examine them in general.

Other areas of application are 3D images of teeth in dental applications, eye and other organ examinations, vision assistance, spectroscopy to analyze materials, better visibility in bad weather, robotics, security and biometrics, and many more.

What I learned is that DLP technology produces the sharpest, brightest and most vibrant images, with consistent picture quality in a variety of applications. It is versatile and reliable enough to be used under various conditions to produce images that will not fade. I think many new arenas of application for this technology will emerge in the near future. Can you guess a few of them? It takes creativity to identify them!

Related Blog


Rise of the cloudphone?
by Don Dingee on 03-24-2014 at 3:00 pm

We’re all quite twitterpated with the smartphone. Admittedly, it has taken much of the world by storm, and dominates EDA discussion because of the complex SoCs inside. Feature phones have repeatedly been declared dead, or at least disinteresting, but the numbers tell a different story.

While Europe and the US enjoy much higher rates, worldwide smartphone penetration is about 39% of mobile users in 2014, according to the latest numbers from eMarketer. In some markets across the globe, the choice right now is between no phone, an affordable feature phone, and an unaffordable smartphone – and those users are still opting for feature phones.

Sour grapes about the “junk business” aside, smartphone companies have noticed. Trying to deflect the lower cost wave, Apple is back with the less-than-successful iPhone 5c in an 8GB version just introduced in some European and Asian markets, and has also reintroduced the iPhone 4 in India. Only so many dollar bills can be wrapped around premium devices.

Others are having better success. Nokia Asha was designed from the ground up to take on this challenge, with a top secret homegrown “1 GHz” SoC and a homegrown operating system. Regionally sourced Android devices are also winning, with lower cost processing cores and in many cases minus LTE support, Google Play, and other bells and whistles which cost manufacturers dearly.

A breach like this, where the establishment struggles, draws new ideas. The allure of an “open” phone has drawn in four open operating systems: Firefox OS, Sailfish OS, Tizen, and Ubuntu. Each steps away from the world of retina displays and quad-core processors and app stores into a simpler environment, with HTML5 as the key technology. My pithy analogy:

Cloudphone is to smartphone as Chromebook is to Windows laptop.

Before diving into a religious argument about native apps being better than HTML5 apps fueled by Zuckerberg-enabled out-of-context bashing, consider how much SoC you can get for $3 to $5. That is pretty much all we have to work with in a $25 cloudphone, exactly what Mozilla and Spreadtrum are proposing (in spite of that headline, more on that shortly).

Details are sparse, but the Spreadtrum SC8621 is believed to be very similar to the SC8620 as shown. One ARM Cortex-A5 core. No GPU core, instead a DSP-laced multimedia accelerator presumably with CEVA technology inside. No LTE support, WCDMA and EDGE only. Firefox OS is also borrowing the zRAM memory compression idea to halve memory requirements.

Optimized HTML5, CSS3, and JavaScript libraries leveraging a low power DSP core could compete in the lower end of mobile space, on inexpensive devices. Another consideration is in this “learn to code” age, there are going to be a lot more JavaScript coders than Java or Objective C coders. This point is not lost on IoT developers, either; a recent effort from Kinoma, a Marvell company, shows the thinking behind an inexpensive, easily programmed device with optimized libraries.

We’ve moved into new territory here – literally, opening the doors to billions of users who haven’t seen smartphones and can’t afford one, but who are craving mobile web access. Unless smartphone OEMs choose to become not-for-profit operations, HTML5 is the only choice where devices cost less.

If you place one of these Mozilla phones side by side with a feature phone and an Apple or Android or Nokia Asha phone, with the price tag prominently displayed, who wins when price is the only factor for a new mobile user? Touchscreens with browser access to cloud-based data and apps are an easy winner; make all the app ecosystem arguments you want, but the entry level user only needs about a dozen to become dangerous on mobile.

The cloudphone – yes, I’m proposing a new category, measuring these with smartphones just isn’t fair to either side and blurs numbers – can potentially wipe out the feature phone once and for all, and in some markets like Africa, China, India, and others could make a sizable dent in smartphone futures.

It’s not going to be easy for cloudphones in the face of branding pressure and competitive dynamics, and the fact that they have to “make it up in volume,” which has been tried many times. Mozilla has a head start and its own branding power here, but Samsung is not just playing with Tizen for the sheer thrills, and Ubuntu has realized it has no mass market without a device play. Jolla and Sailfish OS are a bit more difficult to explain, with the frustration of Nokia Plan B baked into the strategy and a straddle to Android apps that requires some multicore processing horsepower, offsetting the advantage of a DSP-centric approach.

If I were Tim Cook, I’d be very careful about using the term “junk” in reference to half of the potential market. That said, I don’t expect to see a sweeping Apple redesign moving down the price pyramid anytime soon. Will Mozilla’s attempt to change this game work, lowering the price point to $25 and opening a mobile front on the other half of the world?
