
Cadence support for the Open NAND Flash Interface (ONFI) 3.0 controller and PHY IP solution + PCIe Controller IP opening the door for NVM Express support
by Eric Esteve on 04-11-2012 at 10:19 am

Cadence issued the press release about ONFI 3.0 support at the beginning of this year. It was a good illustration of the long-term commitment of Denali, and now Cadence, to NAND flash controller IP. The ONFI 3 specification simplifies the design of high-performance computing platforms, such as solid state drives and enterprise storage solutions, and of consumer devices, such as tablets and smartphones, that integrate NAND flash memory. The new specification defines speeds of up to 400 mega-transfers per second. In addition to the new ONFI 3 specification, the Cadence flash memory controller IP also supports the Toggle 2.0 specification.

“NAND flash is very dramatically growing in the computing segment and is no longer just for storing songs, photos, and videos,” said Jim Handy, director at Objective Analysis. “The result is that the bulk of future NAND growth will consist of chips sporting high-speed interfaces. Cadence support of ONFI 3 and other high-speed interfaces is coming at the right time for designers of SSDs and other systems.”

If you look at the size of this IP segment and compare its design-start count with that of DDRn controller IP, it has so far been about one order of magnitude smaller. Looking at the design wins made by Cadence on the IP market, the Denali products have generated 400+ design wins for DDRn memory controllers, while the flash memory controller design wins are in the 50+ range. To be clear, we are talking about the flash-based memory products used in:

  • Data centers to support Cloud computing (high IOPS need)
  • Mobile PC or Tablets to support “instant on” (SSD replacing HDD)
  • NOT the eMMC and various flash cards

The latter market segment certainly generates a lot more IP sales, but at only a fraction of the license cost of a flash controller managing the NVM used in a data center or an SSD. The Cadence flash memory controller IP family targets the high end of the market.

It is also interesting to notice that Synopsys, which covers most of the interface protocol IP segments, including DDRn memory controllers where the company enjoys a good market share alongside Cadence, does not support flash memory controllers. You may argue that this market segment is pretty small, so why should Synopsys care about it? Simply because it could be the future of the storage market! If you look at storage, you probably think “SATA” and hard disk drive (HDD)… All HDDs shipped for use inside a PC are SATA enabled, as are the very few SSDs integrated to replace HDDs in the ultrabook market. That’s right. But, as a matter of fact, SATA as a standalone storage protocol has reached a limit. A technology limit, as SATA 3.0, based on a 6 Gbps PHY, will be the last SATA PHY.

We can expect that SATA, as a protocol stack, will survive, as some features like Native Command Queuing (NCQ) are unique to SATA and very efficient at optimizing storage access (whether HDD or SSD). But the PHY is expected to become PCI Express based in the future, the protocol being renamed “SATA Express”, at least for the PC (desktop, enterprise, mobile) and media tablet segments, where one lane of PCIe gen-3 will offer 1 GB/s of bandwidth, compared with 0.48 GB/s for SATA 3.0.

Still in the storage area, but flash based, the current solution for high I/O-per-second (IOPS) applications relies on a NAND flash memory controller integrated with an interface protocol, which could in theory be SATA 3.0, USB 3.0, or PCI Express, but which in practice is PCIe: for example, a x4 PCIe gen-2 link offers 20 Gb/s of raw bandwidth (4 lanes at 5 GT/s each), or about 2 GB/s effective after 8b/10b encoding.

Here, the emerging standard is named NVM Express; it ratifies the solution already in use today and will probably define a roadmap to support the higher bandwidth needs associated with the growth of cloud computing.

Using NAND flash devices has a cost: repeatedly accessing a specific memory location ends up degrading the device at that location, especially for multi-level cell (MLC) flash. The effect is amplified for flash devices manufactured in smaller technology nodes, and gets worse for higher-capacity devices, as they are built on the smallest nodes. In other words, the more you use an SSD, the greater the risk of generating an error. Cadence implements sophisticated, highly configurable error-correction techniques to further enhance performance and deliver enterprise-class reliability. Delivering advanced configurability, low-power capabilities, and support for system boot from NAND, the Cadence solution is scalable from mobile applications to the data center. The IP is backward compatible with existing ONFI and Toggle standards: the existing Cadence IP offering supports the ONFI 1, ONFI 2, Toggle 1, and Toggle 2 specifications, and also provides asynchronous device support. Cadence also offers supporting verification IP (VIP) and memory models to ensure successful implementation.

The move from SATA-based storage to SATA Express-compliant HDDs and NVM Express-compliant SSDs will certainly change the storage landscape, as well as the positioning of the IP vendors. Synopsys is well positioned in the SATA IP and PCI Express IP segments, while Cadence does not support SATA IP but does support NAND flash and PCI Express controller IP. With the emergence of “SATA Express” and “NVM Express”, it will be a new deal for IP vendors, and one that is interesting to monitor!

By Eric Esteve from IPNEST


Analog Automation – Needs Design Perspective
by Pawan Fangaria on 04-11-2012 at 7:00 am

Recently I was researching the keynote speeches of the ISQED (International Symposium on Quality Electronic Design) 2012 event and saw the very first, great presentation, “Taming the Challenges in Advanced Node Design” by Tom Beckley, Sr. VP at Cadence. I know Tom very well, as I have worked with him, and I admire his knowledge, authority, and leadership in the analog and mixed-signal domain. Inspired by his presentation, I became curious and read his detailed speech, published as blogs on the EDA360 page. It was a great pleasure going through the astounding collection of details and ideas there. I read it twice, very closely.

Since custom IC, AMS, and physical design have been my core expertise, it reminded me of one of my articles from early February, “Need and Opportunity for Higher Analog Automation”. In that article my emphasis was on systematic variation and layout-dependent effects at lower nodes, which are largely analog specific, depending on device parameters and their relative placement, thereby making it essential to automate the detection and correction of such effects early in the design cycle. It is heartening to see Cadence coming up with a rapid prototyping methodology to develop design building blocks called ‘modgens’, which account for layout-dependent effects, parasitics, and the new P&R rules at 20nm; the methodology even uses double-patterning technology to increase the routing pitch.

It is a great methodology that can address mega-function generation: analog structures such as current mirrors and differential pairs are detected automatically from the schematic, layout is generated, and extraction and verification follow. Using these building blocks for higher-level design can serve the purpose, but not in all cases. Preserving the placement constraints to reconstruct the layout after changes is fine for an ECO, but it cannot be employed for large changes. If we look at the problem from the design perspective, in today’s context a typical analog IP block at 20nm can be big, in the range of 40,000 to 50,000 transistors. Knowing that analog design can be an ocean of secrets, all of that may not fit neatly into the scheme of rapid prototyping, abstracting building blocks and then assembling them. Even if 75% to 80% of the design fits into building-block-level automation, the remaining 20% to 25% (about 10,000 transistors) needs to be done by hand, which would be a substantial task considering the effects and complexities of each transistor at 20nm that we talked about. At 14nm, it is going to be tougher. This leaves us at the same place we were, with manual looping between circuit and layout.

From the design perspective, what is needed is a general approach to placing and routing the analog design with due attention to 20nm issues. In my article, I had also talked about the need for a general approach to analog automation based on an open-standard analog constraint format that includes design constraints for symmetry, matching, shielding, placement, floorplanning, routing, clocking, timing, electrical behavior, and so on. Of course, the abstraction approach solves the problem to a great extent, but for completeness of the design, general automation applicable to the whole design is needed. Once that becomes available, it can take center stage.

Comments from the design and EDA community are welcome. I would be happy to hear about more new ideas for analog automation.

By Pawan Kumar Fangaria
EDA/Semiconductor professional and Business consultant
Email: Pawan_fangaria@yahoo.com


EDPS: 3D ICs, part I
by Paul McLellan on 04-10-2012 at 10:00 pm

The second day (more like a half-day) of EDPS was devoted to 3D ICs. There was a lot of information, too much to summarize in a few hundred words. The keynote was by Riko Radojcic of Qualcomm, who has been a sort of one-man-band attempting to drive the EDA and manufacturing industries towards 3D. Of course it helps if you don’t just have a sharp arrow but the wood of Qualcomm behind it. Curiously, though, Qualcomm themselves have been cautious in actually using 3D IC technology. Possibly they have done some test chips but I don’t know of any parts that they have in production. Other presentations were by Altera, Mentor and Cadence, plus a panel discussion.

Those who have been in high tech for years know about the hype cycle, which I believe originated with Dataquest. Gartner, Dataquest’s heir, reckons TSVs are now on the slope of enlightenment, where things become real. At the top of the hype cycle are nanotube electronics, wireless power, and Occam processes (whatever they are).

There are a number of varieties of 3D IC, with different challenges for both EDA and manufacturing. If you include package-on-package (POP) then you probably have a 3D IC in your pocket: almost all smartphone SoCs, including Apple’s A4/A5 series, have memory and logic in the same package. POP does not require through-silicon vias (TSVs) since it is all wire-bonded inside the package.

One thing I learned is that while everyone talks about copper TSVs, the small TSVs that are so far in use are actually tungsten. Nobody knows how to make copper TSVs that small. There is actually a reasonable possibility that we will end up with two types of TSV: tungsten for signal, small and tightly spaced, and copper for power supply and perhaps clock, much bigger and with higher current capacity.

One big development earlier this year was the finalization of the JEDEC Wide I/O standard, which allows memory to be stacked on top of logic, so-called memory-on-logic (MOL). With much shorter distances, lower capacitance, and a 512-bit-wide interface, this allows a much higher-bandwidth, lower-power link between the processor and memory. The memory is also designed to be stacked, with more than one memory die. In fact, all the memory suppliers have already announced various forms of memory stacks.

Interposer-based designs, so-called 2.5D, have the big advantage of not requiring TSVs except through the interposer itself (unless there is a memory stack). As everyone who watches 3D ICs probably knows, Xilinx is shipping a high-end FPGA using silicon interposer technology, perhaps the only high(ish) volume product in production.

All camera chips are also 3D of some sort, with the CCD device on top of the image processing chip. There was an inconclusive debate as to how many of these use TSVs and how many use another technology whereby the signals are brought around the edge of the thinned die rather than going through it.

Thermal is going to be a big issue. Mobile chips are low power but they are not that big, so the power density and thus many of the thermal problems are the same as for server CPUs. IBM is looking at liquid cooling through microchannels. That has its own set of challenges and may or may not work out, but for sure that is not going to be the solution in your phone.

The motivation for 3D is a mixture of economics and what has come to be called “More than Moore”. We can no longer continue to scale CMOS the way we have been doing, and we are falling off the 30%-per-year cost reduction treadmill. It is still an open question whether we have a post-optical lithography process that we can manufacture economically, EUV being the most promising with E-beam the only other serious alternative. High-density TSVs and 3D stacking offer an alternative way to keep Moore’s law on track (think of Moore’s law as being about transistors per volume; it is just that we only used one die for the first 50 years). It is the first of several disruptive technology changes that substitute for simply scaling our lithography.

But, as with any disruptive technology, you don’t get it for free. It is disruptive. You need new tools, a new supplier ecosystem. There are new sorts of interactions you need to worry about such as where TSVs can be placed, how they affect silicon through stress, managing electrical and thermal coupling in the stack and…and…

The roadmap, at the highest level, goes from POP (in production today), to Wide I/O memory-on-logic (MOL), and eventually to logic-on-logic (LOL).


The best graphics chip is the one seen the most
by Don Dingee on 04-10-2012 at 2:48 pm

If I say “graphics chip”, most techies will say NVIDIA or AMD. But in the new post-PC world, neither of these players holds the key to the future. One that does is a little company making 43 cents on every latest-generation iPad and iPhone. Another is designing its own approach. Should you care what graphics chip is in your phone? Continue reading “The best graphics chip is the one seen the most”


MEMS and IC Co-design
by Daniel Payne on 04-10-2012 at 11:37 am

This morning I attended a webinar about MEMS and IC co-design from a company called SoftMEMS along with Tanner EDA. I learned that you can co-design MEMS and IC either in a bottom-up or top-down methodology, and that this particular flow has import/export options to fit in with your mechanical simulation tools (Ansys, Comsol, Open Engineering) as well.
Continue reading “MEMS and IC Co-design”


Oasys Gets Funding from Intel and Xilinx
by Paul McLellan on 04-10-2012 at 8:00 am

Oasys announced that it closed its series B funding round with investments from Intel Capital and Xilinx. The fact that any EDA company has closed a funding round is newsworthy these days; companies running out of cash and closing the doors seems to be a more common story.

Oasys has been relatively quiet, which some people have taken to mean that nobody is using RealTime Designer, their synthesis tool. But in fact they have announced that the #2, #3, and #4 US semiconductor companies, namely Texas Instruments, Qualcomm, and Broadcom (via its acquisition of NetLogic), are customers. As is Xilinx, the #1 FPGA vendor, or vendor of programmable platforms as they seem to want to be known. Now, with Intel, Oasys has filled out the enviable position of having relationships with the top four US semiconductor vendors and the top FPGA vendor. These are the companies doing many of the most advanced designs today.

On the SoC side, Oasys already has tapeouts at both 45nm and 28nm. Ramon Macias of NetLogic (now part of Broadcom) said publicly nearly a year ago that they had already taped out their first 45nm design and were then using RealTime Designer on 28nm designs. On the programmable platform side, Xilinx licensed Oasys’s technology a couple of years ago and has been using it internally. They have “achieved excellent results across a wide range of designs.”

Chip synthesis is a fundamental shift in how synthesis is applied to the design and implementation of integrated circuits (ICs). Traditional block-level synthesis tools do a poor job of handling chip-level issues. RealTime Designer is the first design tool for physical register transfer level (RTL) synthesis of 100-million-gate designs, and it produces better results in a fraction of the time needed by traditional logic synthesis products. It features a unique RTL placement approach that eliminates unending design-closure iterations between synthesis and layout.


EDA Industry Talks about Smart Phones and Tablets, Yet Their Own Web Sites are Not Mobile-friendly
by Daniel Payne on 04-09-2012 at 12:55 pm


As a blogger I write weekly about the EDA industry, and certainly our industry enables products like smartphones and tablets to exist at all. But if we really believe in these mobile devices, then what should our web sites look like on a mobile device?

It’s a simple question, yet I first must define mobile-friendly before sharing what I discovered. Here’s what I consider to be a mobile-friendly web site:

  • On a smartphone or tablet it means maximum content and a minimum of graphics, because I don’t want my data plan to go over its limit.
  • On a smartphone or tablet I only want to scroll vertically, not horizontally.
  • I don’t want to pinch, zoom, scroll, pinch, zoom, scroll, double-tap. That is too much work.
  • Navigation should be near the top of the browser page, and large enough that my fingers select the correct menu.
  • Search is essential to let me get the content I need quickly, because on mobile I’m in a hurry.
  • The site looks and works well in either landscape or portrait orientation.

Mobile-friendly web site design is not:

  • The identical experience as on the desktop.

Now that we know what mobile-friendly web site design is all about, let’s see if any EDA company has optimized its web site so that mobile visitors have a pleasant experience. Today I visited the following sites and can report that NO major EDA company has a mobile-friendly web site, which is a disappointment. Kudos to ARM for leading the way by supporting mobile devices with their web site.

EDA and IP sites checked for mobile friendliness:

  • www.synopsys.com
  • www.cadence.com
  • www.mentor.com
  • www.ansys.com
  • www.atrenta.com
  • www.tannereda.com
  • www.agilent.com
  • www.aldec.com
  • www.arm.com
  • www.apsimtech.com
  • www.atoptech.com
  • www.berkeley-da.com
  • www.calypto.com
  • www.chipestimate.com
  • www.ciranova.com
  • www.eve-team.com
  • www.forteds.com
  • www.gradient-da.com
  • www.helic.com
  • www.icmanage.com
  • www.jasper-da.com
  • www.lorentzsolution.com
  • www.methodics-da.com
  • www.nimbic.com
  • www.oasys-ds.com
  • www.pulsic.com
  • www.realintent.com
  • www.sigrity.com
  • www.solidodesign.com
  • www.verific.com
I could’ve researched more EDA and IP sites, but I think you can see the clear trend here: we talk about the mobile industry but don’t apply mobile-friendly design to our own web sites.

What about the media sites that write about our industry? They didn’t fare much better on being mobile-friendly:

  • www.semiwiki.com
  • www.eetimes.com
  • www.garysmitheda.com
  • www.marketingeda.com
  • www.eejournal.com/design/fpga
  • www.edacafe.com
  • www.deepchip.com
  • www.chipdesignmag.com
  • www.dac.com
The one media site that is mobile-friendly belongs to me, and I converted it in about one hour of effort.

High-volume sites are mostly optimized, with a few notable exceptions:

Popular web sites checked for mobile friendliness:

  • www.apple.com
  • www.tsmc.com
  • www.intel.com
  • www.samsung.com
  • www.cnn.com
  • www.google.com
  • www.facebook.com
  • www.twitter.com
  • www.techcrunch.com
  • www.engadget.com
Making a Web Site Mobile-Friendly
Web sites that are dynamic and template-driven can be quickly adapted using Cascading Style Sheets (CSS) to detect the size of the browser window and then serve up pages that are formatted specifically for it. For those of you who are curious and a bit geeky, the basic approach is to use a style sheet that detects the orientation and size of the browser for three configurations: smartphones, tablets, and desktops.

Beyond CSS, there is one more trick that you have to add in the header of each web page to force the mobile browser to report its real dimensions:
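The sketch below illustrates both steps: a few CSS media-query breakpoints of the kind described above, plus the viewport meta tag, which is the usual header trick for making a mobile browser report its true width instead of pretending to be a roughly 980-pixel-wide desktop screen. The breakpoint values and class names here are hypothetical examples, not taken from any particular site.

```html
<head>
  <!-- Without this tag, most mobile browsers render the page at a
       desktop-like virtual width and then shrink it to fit. -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <style>
    /* Smartphones: single column, drop heavy side content */
    @media only screen and (max-width: 480px) {
      .sidebar { display: none; }
      .content { width: 100%; }
    }
    /* Tablets: narrower content column, navigation stays visible */
    @media only screen and (min-width: 481px) and (max-width: 1024px) {
      .content { width: 90%; }
    }
    /* Desktops simply fall through to the default stylesheet */
  </style>
</head>
```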

Hopefully I can report back one year from now and show some marked improvement from the EDA and IP industries on getting their web sites to be mobile-friendly.


EDPS: SoC FPGAs
by Paul McLellan on 04-09-2012 at 4:00 am

Mike Hutton of Altera spends most of his time thinking a couple of process generations out. So a lot of what he worries about is not so much the fine-grained architecture of what they put on silicon, but rather how the user is going to get their system implemented. 2014 is predicted to be the year in which over half of all FPGAs will feature an embedded processor, and at the higher end of Altera’s and Xilinx’s product lines it is already well over that. Of course SoCs are everywhere, in both regular silicon (Apple, Nvidia, Qualcomm…) and FPGA SoCs from Xilinx, Altera, and others.

The challenge is that more and more of a system is software, but the traditional FPGA programming model is hardware: RTL, state machines, datapaths, arbitration, buffering, all highly parallel. Software guys are not going to learn Verilog, so there is a need for a programming model that represents an FPGA as a processor with hardware accelerators, or as a configurable multi-core device: taking a software-centric view of the world while still being able to build the FPGA so that the entire system meets its performance targets.

There have been many attempts to make C/C++/SystemC compile into gates (Forte, Catapult, C-to-Silicon, Synphony, AutoESL…), but these really only work well for for-loop-style algorithms that can be unrolled, such as FIR filters (a small example follows below). They don’t work so well for complex algorithms. What is really needed is to analyze the software to find the bottlenecks and then “automatically” build hardware to make whatever is slow run faster.
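As an illustration of the kind of loop these tools handle well, here is a small FIR kernel in C. It is a generic sketch (the function name and tap count are arbitrary), not code from any particular HLS tool: the fixed trip count and absence of data-dependent control flow are what let a tool fully unroll the loop into a parallel multiply-accumulate datapath.

```c
#define TAPS 8

/* One output sample of an 8-tap FIR filter. The loop has a constant
   trip count and no data-dependent branches, so an HLS tool can unroll
   it completely and map it onto parallel multipliers plus an adder tree. */
float fir_sample(const float coeff[TAPS], const float window[TAPS])
{
    float acc = 0.0f;
    for (int i = 0; i < TAPS; ++i)
        acc += coeff[i] * window[i];
    return acc;
}
```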

The most promising approach at present seems to be OpenCL, a programming model developed by the Khronos Group to support multicore programming and silicon acceleration. From a single source it can map an algorithm onto a CPU and GPU, onto an SoC and an FPGA, or just directly onto an FPGA. There is a natural separation between the code that runs on accelerators and the code that manages the accelerators (which can run on any conventional processor).
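To make that separation concrete, here is a minimal OpenCL sketch in C: the kernel string is the accelerator-side code that the runtime compiles for whatever device is selected, while the host code around it only creates buffers and launches the work. It assumes an OpenCL 1.x runtime, uses a trivial vector-add kernel, and omits error checking and resource cleanup for brevity.

```c
#include <CL/cl.h>
#include <stdio.h>

/* Accelerator-side code, compiled at run time for the chosen device. */
static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global       float *c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Host-side management code: runs on any conventional processor. */
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id   dev;   clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context       ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue cmq = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel vadd = clCreateKernel(prog, "vadd", NULL);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(vadd, 0, sizeof da, &da);
    clSetKernelArg(vadd, 1, sizeof db, &db);
    clSetKernelArg(vadd, 2, sizeof dc, &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(cmq, vadd, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(cmq, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %.1f\n", c[10]);   /* expect 30.0 */
    return 0;
}
```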

Of course, software is going to be the critical path in the schedule if these approaches are not also merged with virtual platform technology so that software development can proceed before hardware is available. Sometimes the software load already exists, since software is longer-lived than hardware and may last for 4 or 5 hardware generations. But if it is being created from scratch then it may have a two-year schedule, meaning that it is being targeted at a process generation that doesn’t yet have any silicon at all.


EDPS: Parallel EDA
by Paul McLellan on 04-08-2012 at 10:00 pm

EDPS was last Thursday and Friday in Monterey. I think this is a conference that more people would benefit from attending. Unlike some other conferences, it is almost entirely focused on user problems rather than deep dives into things of limited interest. Most of the presentations are more like survey papers and fill in gaps in areas of EDA and design methodology that you probably feel you ought to know more about but don’t.

For example, Tom Spyrou gave an interesting perspective on parallel EDA. He is now at AMD, having spent most of his career in EDA companies, a fox turned gamekeeper if you like. So he gets to AMD, and the first thing he notices is that all the multi-threading features he had spent the previous few years implementing are actually turned off almost all the time. The reality of how EDA tools are run in a modern semiconductor company makes them hard to take advantage of.

AMD, for example, has about 20,000 CPUs available in Sunnyvale. They are managed by LSF, and people are encouraged to allocate machines fairly across the different groups. A result of this is that requiring multiple machines simultaneously doesn’t work well. Machines need to be used when they become available, and waiting for a whole cohort of machines is not effective. It is also hard to take advantage of the best machine available, rather than one that has precisely the resources requested.

So given these realities, what sort of parallel programming actually makes sense?

The simplest case is where there is non-shared memory and coarse-grained parallelism with separate processes. If you can do this, then do so. DRC and library characterization fit this model.

The next simplest case is when shared memory is required but access is almost all read-only. A good example of this is doing timing analysis for multiple corners. Most of the data is the netlist and timing arcs. The best way to handle this is to build up all the data and then fork off the separate processes to do the corner analysis using copy-on-write (a minimal sketch of this pattern follows below). Since most of the pages are never written, most of them remain shared and the jobs run without thrashing.
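Here is a small C sketch of that pattern, with hypothetical sizes and a stand-in computation rather than anything from a real timing tool: the parent builds the large read-mostly database once, then each forked worker sees it through copy-on-write pages, so physical memory is only duplicated for the few pages a child actually writes.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

enum { NUM_CORNERS = 4 };
#define DB_ENTRIES (16u * 1024 * 1024)   /* stand-in for netlist + timing arcs */

int main(void)
{
    /* Build the shared, read-mostly database once in the parent. */
    double *db = malloc(DB_ENTRIES * sizeof *db);
    if (!db) return 1;
    for (size_t i = 0; i < DB_ENTRIES; ++i)
        db[i] = (double)i;

    /* Fork one worker per corner; pages are shared copy-on-write. */
    for (int corner = 0; corner < NUM_CORNERS; ++corner) {
        if (fork() == 0) {
            double worst = 0.0;                 /* read-only pass over shared data */
            for (size_t i = 0; i < DB_ENTRIES; ++i)
                if (db[i] > worst) worst = db[i];
            printf("corner %d: worst value %g\n", corner, worst);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                      /* reap all the workers */
        ;
    free(db);
    return 0;
}
```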

The next most complex case is when shared memory is needed for both reading and writing. Most applications are actually I/O bound and don’t, in fact, benefit from this, but some do: thermal analysis, for example, which is floating-point CPU-bound. But don’t expect too much: a 3X speedup on 8 CPUs is pretty much the state of the art.

Finally, there is the possibility of using GPU hardware. Most tools can’t actually take advantage of this, but in certain cases the algorithms can be mapped onto the GPU and get a massive speedup. It is obviously harder to code, though, and harder to manage (needing special hardware).

Another big issue is that tools are not independent. AMD has a scalable simulator that runs very fast on 4 CPUs provided the other 4 CPUs are not being used (presumably because the simulator needs all the shared cache). On multicore CPUs, how a tool behaves depends on what else is on the machine.

What about the cloud? This is not really in the mindset yet. Potentially there are some big advantages, not just in terms of scalability but in the ease of sharing bugs with EDA vendors (which may be the biggest advantage).

Bottom line: in many cases it may not really be worth the investment to make the code parallel; the realities of how server farms are managed make all but the coarsest-grained parallelism a headache to manage.


Google Glasses = Darknet!
by Daniel Nenni on 04-08-2012 at 9:00 pm

Google’s Project Glass and augmented reality will be the tragic end of the world as we once knew it. As we become more and more dependent on mobile internet devices, we become less and less independent in life. Consider how much of your critical personal and professional information (digital capital) is stored via the internet, and none of it is safe. With a quick series of keystrokes from anywhere in the world, your digital capital can be altered or wiped clean, leaving nothing but flesh and bones!

“People of the past! I have come to you from the future to warn you of this deception. The few have used artificial intelligence technology to enslave the many through the use of thought control. I lead a band of Anti Geeks who fight against oppressive technologies. But we alone are not strong enough. The revolution must begin now! Join us to fight for non-augmented reality!”

If you haven’t read the “Daemon” and “Freedom” books by Daniel Suarez, you should, if you dare to take a peek into what augmented reality has in store for us all. Daniel Suarez is an avid gamer and technology consultant to Fortune 1000 companies. He has designed and developed enterprise software for the defense, finance, and entertainment industries. The book name “Daemon” is quite clever: in technology, a daemon is a computer program that runs in the background and is not under the control of the user; in literature a daemon is a god, or a demon, or in this case both.

The book is centered on the death of Matthew Sobol, PhD, cofounder of CyberStorm Entertainment, a pioneer in online gaming. Upon his death, Sobol’s online games create an artificial-intelligence-based new world order, the “Darknet”, which is architected to take over the internet and everything connected to it for the greater good. The interface to the Darknet is a pair of augmented reality glasses much like the ones Google is developing today. While the technology described in the books seems like fiction, most of it already exists and the rest certainly will. The technology-speak is easy to follow for anyone who has a minimal understanding of computers and the internet; very little imagination is required.

The book’s premise is “knowledge is power”, or more specifically “he who controls digital capital wins”. So you have to ask yourself, how long before just a handful of companies rule the earth (Apple, Google, Facebook)? Look at the amount of digital capital Google has access to:

    • Google Search (Internet and corporate intranet data)
    • Google Chrome (Personal and professional internet browsing)
    • Android OS (Mobile communications)
    • Google Email-Voice-Talk
    • Google Earth-Maps-Travel
    • Google Wallet
    • Google Reader

There are dozens of Google products that can be used to collect and manipulate public and private data in order to control our thoughts and ultimately conquer the digital world.

The digital world is rampant with security flaws and back doors that could easily enable the destruction of a person, place, or thing. A company or brand name years in the making can be destroyed in a matter of keystrokes. In the book, a frustrated Darknet member erases the digital capital of a non-Darknet member who cuts in line at Starbucks. Depending on my mood that day, I could easily do this.

It’s not like we have a choice in all this, since the digital world is now a modern convenience. We no longer have to store our most private information in filing cabinets, safe deposit boxes, or even on our own computer hard drives. It’s a digital world and we are digital girls. The question is, who can be trusted to secure the Darknet (augmented reality)?