
We Don’t Need Graphic Design. We Do Need Graphic Views

by Bernard Murphy on 08-29-2016 at 7:00 am

Many years ago, there were attempts to (re-)introduce a graphical entry approach to RTL design. The Renoir product was one example. The idea has some initial appeal: you describe the behavior in a small block using (textual) RTL, but the larger structure of instances and higher-level connectivity can be described as a schematic, flow diagram or something similar.

These attempts were not successful for all sorts of reasons. In part, this area is a great example of the devil being in the details. The concept is very appealing on a small design with small components, but a large component in an SoC may have 1k or more ports (full ports, not bit-blasted), which would be a nightmare to connect graphically. And an SoC top level can easily need 100k connections. Even in small subsystems, you can type or auto-generate connectivity faster. It simply isn’t practical to create and edit large netlists graphically.

Another problem with graphical creation and editing is that the graphical view has to become the source-controlled view (so the RTL becomes a derived view), otherwise you would lose schematic layout information on checkin. And by the way, that layout is a real pain. Significant effort has to go into creating a readable diagram, effort that has no value to the ultimate purpose of the design. Why bother?

Text editors, augmented by spreadsheet or other generators, remain the easiest way to assemble and modify these large netlists. Fancier approaches tend toward generators, as in generators for bus fabrics where you specify major interfaces, protocols and other parameters and let the generator build the RTL. You still need to source-control spreadsheet and configuration information as views, but these are textual (CSV or similar for a spreadsheet) and in standard or de facto standard formats, so they present no particular problem for any source management system.
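
As a sketch of the generator idea, the snippet below reads a CSV connectivity table and emits Verilog-style port maps. All instance, port and net names here are hypothetical, and a real flow would add width and type checking; this only illustrates why a textual table is easy to source-control and to turn into RTL:

```python
import csv
import io
from collections import defaultdict

# Hypothetical connectivity table: one row per connection,
# with columns instance, port, and the net to tie the port to.
CONNECTIVITY_CSV = """instance,port,net
u_cpu,axi_m_awaddr,cpu_awaddr
u_noc,axi_s0_awaddr,cpu_awaddr
u_cpu,axi_m_awvalid,cpu_awvalid
u_noc,axi_s0_awvalid,cpu_awvalid
"""

def generate_port_maps(csv_text):
    """Group connections by instance and emit Verilog-style .port(net) maps."""
    ports = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        ports[row["instance"]].append((row["port"], row["net"]))
    chunks = []
    for inst, pairs in ports.items():
        body = ",\n".join(f"    .{p}({n})" for p, n in pairs)
        chunks.append(f"{inst} (\n{body}\n);")
    return "\n".join(chunks)

print(generate_port_maps(CONNECTIVITY_CSV))
```

Because the CSV is the edited artifact, a diff in source control shows exactly which connections changed, which is much harder to see in a graphical schematic.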

So, forget about graphics? Well, not quite. Graphic creation and editing is a proven dead-end, but graphic viewing is a different story. A well-organized view can be a real help in debugging connectivity and in navigating the design. The view can be generated directly from the RTL with no need to go through any special formats. The RTL is the golden source (as it should be) and is source-code controlled. The graphical view is a derived and stateless view – reconstructed from scratch on each read-in.
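
A minimal illustration of such a derived, stateless view: the sketch below uses a toy RTL fragment and a naive regex where a real tool would use a proper HDL parser, then emits a Graphviz description that can be regenerated from the RTL on every read-in, with nothing to store or lay out by hand:

```python
import re

# Toy RTL fragment (hypothetical module and net names) standing in
# for a real design read from source control.
RTL = """
module top;
  cpu u_cpu (.clk(clk), .bus(cpu_bus));
  noc u_noc (.clk(clk), .s0(cpu_bus));
endmodule
"""

# Naive instantiation pattern: "<module> <instance> ( ... );" at line start.
INST_RE = re.compile(r"^\s*(\w+)\s+(\w+)\s*\(", re.MULTILINE)

def rtl_to_dot(rtl_text):
    """Derive a stateless Graphviz view; nothing is ever written back."""
    edges = INST_RE.findall(rtl_text)
    body = "\n".join(f'  "{mod}" -> "{inst}";' for mod, inst in edges)
    return f"digraph design {{\n{body}\n}}"

print(rtl_to_dot(RTL))
```

The point is the direction of the data flow: RTL in, picture out, never the reverse.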

Sigasi (which I introduced in a blog a couple of weeks ago) sees significant interest in this concept from its FPGA customers. They already like the auto-complete, code-template and real-time debugging options in Sigasi Studio, which they find make VHDL/SV more accessible to people less familiar with hardware development. They are very much opposed to graphical creation/editing, which they feel cannot fit with agile development flows, but they see a big opportunity in graphical viewing. It is curious that these requirements are coming up in FPGA designs and in the context of agile development flows. This may be an indication of growth in software developers moving into (some) hardware development in the Maker and IoT markets.

Of course we will always want to fiddle with the views to emphasize aspects of current interest. Philippe Faes, the CEO of Sigasi, tells me they are working on a number of methods to support this kind of fine-tuning, driven either directly from the HDL or from a Tcl side-file.

Sigasi Studio supports this approach today for VHDL and is planning to add support for SystemVerilog. You can learn more HERE.



Keep It Simple, Allstate

by Roger C. Lanctot on 08-28-2016 at 4:00 pm

A report in the Wall Street Journal last week dives into the insurance industry’s quandary over the anticipated onset of self-driving cars that might significantly and negatively impact the volume of claims and, ultimately, mitigate the need for car insurance altogether. The report is simultaneously a source of alarm and relief as vehicles capable of driving themselves are not expected to arrive on highways for a decade or more – meaning there is ample time to gather data and prepare. Or is there?

http://tinyurl.com/zhc9qan – “Driverless Cars Threaten to Crash Earnings” – WSJ

The report notes Allstate’s creation of a division, called Arity, to aggregate the company’s various telematics-based programs and data gathering efforts. State Farm is described as working with the University of Michigan’s Mcity program studying self-driving cars. Liberty Mutual is noted for offering discounts to consumers to insure cars with safety features such as automatic emergency braking.

KPMG and Deloitte are quoted in the report forecasting dramatic declines in accident rates and a halving of the size of the auto insurance industry, respectively, in a decade or two. The Insurance Institute for Highway Safety notes the attraction of safety systems proliferating on cars – such as blindspot detection, lane keeping, automatic emergency braking – but bemoans the inclination of consumers to disable these systems and their annoying alerts.

State Farm, Allstate and Liberty Mutual are three of the top five car insurance companies in the U.S. Unmentioned in the Wall Street Journal report is Geico. Geico is presumably gathering the same information and drawing the same conclusions as its market leadership rivals, but Geico has thus far steered clear of telematics-based insurance offerings – hewing instead to an ad-fueled, discount driven product that keeps the value proposition simple as the company has swelled its portfolio to the second largest in the U.S. behind State Farm.

It’s important to note that the other market leader, Progressive Insurance, may lead in telematics-based policies in the U.S., but telematics-based insurance has not vaulted Progressive to market leadership. Another measure I am not considering is profitability, which is where the number crunchers earn their pay – kicking out or jacking up the rates on the higher risk drivers.

Allstate’s Arity division is described as employing 200 data scientists to pull on all data sources to better understand the impact of proliferating vehicle connectivity and safety systems, ride hailing and car sharing services, vehicle-to-vehicle communications and, ultimately, self-driving car systems. The only problem is, Geico leapfrogged Allstate with far more direct and old-fashioned marketing means.

The question is whether Allstate can leverage technology and data science to gain an advantage in a market facing long-term decline and irrelevance. The future of auto insurance will change as safety systems continue to proliferate. Ten car companies selling cars in the U.S. have already committed to deploying automatic emergency braking as a standard feature beginning in 2018.

The challenge facing the data jockeys remains proving a negative: Can the actuaries and statisticians prove that a technology like lane keeping or blindspot detection prevented a crash? While the scientists noodle that one out, Geico keeps soaking up new customers.

Dollars and cents are determining the winners in the evolving car insurance business. Aggregators like Confused.com/Compare.com are proliferating and taking charge of the customer experience. As long as consumers can find a cheaper rate from an aggregator, price will remain king.

Last year, I switched my car insurance policy from State Farm’s DriveSafe&Save telematics offering to a non-telematics Liberty Mutual plan and nearly cut my rate in half. A year later, Liberty Mutual upped my rate 10% and I switched to Geico and saved several hundred more dollars.

You don’t need a slide rule to see that the insurance industry is a hot mess of state-level regulation and confusing rating schemes and requirements. The aggregators “get it” and do all they can to take the side of the consumer and simplify the on-boarding process – without actually taking ownership of the customer.

How confusing is the car insurance business in the U.S.? It’s sufficiently complex and insurance companies themselves are sufficiently obtuse and technologically unsophisticated that Google gave up on its car insurance foray last March after briefly going live in California. Too many regulatory commissions, too many slow-footed insurance partners to enable a rapid scaling proposition.

The long-term outlook is for car companies to facilitate the car insurance decisions of consumers. Companies such as Ford and GM are leading the way in integrating insurance marketplaces into their telematics offerings. This is the future.

Volvo has gone so far as to suggest that car companies will themselves provide insurance – something already emerging from the financial services divisions of companies such as Volkswagen and Toyota. Allstate may have ambitions to leverage its existing data resources into a market leading position, but car companies are serving notice that they will have something to say about insurance leadership and ownership.

My advice to Allstate is to look to Geico and keep it simple and, longer term, bear in mind the words of Esurance director of marketing strategy Haden Kirkpatrick, speaking at the Future Connected Car event in Santa Clara earlier this year. “Consumers should only have to pay for insurance when their hands are on the wheel.” That’s where we are headed.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Next SemiWiki Book Signing Event: MIPI DEVCON!

by Daniel Nenni on 08-28-2016 at 12:00 pm

This is the first MIPI DEVCON event and the location is my favorite, the Computer History Museum in Silicon Valley. For those of you who haven’t been, the museum preserves and presents for posterity the artifacts and stories of the information age, an age we all grew up in. For those of you who have been, there is always something new, so come out and network with the experts behind the semiconductor curtain.
Continue reading “Next SemiWiki Book Signing Event: MIPI DEVCON!”


Low Frequency Noise Challenges IC Designs

by Daniel Payne on 08-28-2016 at 7:00 am

AMS and RF IC designers have known for years that their circuits are sensitive to noise, because if you amplify noise on an input source to an amplifier circuit then your chip can start to produce wrong answers. Even digital SoC designers need to start taking notice because every SoC is filled with SRAM IP blocks, and at each shrinking process node the contribution of Random Telegraph Noise (RTN) increases and then cuts into the Vdd design margins. So one of the first steps to mitigating the effects of noise is to have an accurate method for noise characterization of all silicon devices (MOS, resistors, capacitors, etc.).
Continue reading “Low Frequency Noise Challenges IC Designs”


Apple will NEVER use Intel Custom Foundry!

by Daniel Nenni on 08-27-2016 at 7:00 am

The media already has Apple and Intel in talks to make the A11 SoC in 2018 as a result of the recent Intel/ARM IP licensing deal. This is probably one of the funnier media bumbles I have read in a while so let’s talk about it in a little more detail.

“According to Nikkei Asian Review, Intel is now perfectly poised to give TSMC a good run for its money in as little as two years because any Apple chips after the A10/A11 should be fabricated by Intel.”

First a little bit of history: As we wrote in “Mobile Unleashed” Chapter 7, Intel had the opportunity to make Apple’s SoCs from the very start but Intel again could not see the forest for the trees:

“According to ex-CEO Paul Otellini, Intel had been in discussions about a mobile chip for Apple before the original iPhone design. The cloak of secrecy on the iPhone gave Intel little to go on, and they were skeptical of Apple’s volume projections. “There was a chip [Apple was] interested in that they wanted to pay a certain price for and not a nickel more, and that price was below our forecasted cost. I couldn’t see it,” said Otellini. Intel passed.”

The rumors of Intel making SoCs for Apple have persisted and will continue to do so no matter how ridiculous it is. The good news is that I have won many lunch bets against Intel making Apple SoCs and will continue to do so, absolutely.

The latest rumor comes from the Intel Developer Forum (IDF). IDF is an Intel-sponsored conference created to promote Intel and products based on Intel technology (the first IDF was in 1997). The last IDF I attended was the launch of Intel’s 14nm product in 2014, where Intel CEO Brian Krzanich told us that 14nm was yielding on schedule, only to recant at the next investor call. Apparently you can say things at IDF that you can’t say to Wall Street, but I digress…


Also read: Intel Comes Clean on 14nm Yield!

This year’s IDF was again in San Francisco and, now that the dust has settled, there were two big revelations regarding semiconductor manufacturing:

  • Intel Foundry licensed the ARM Foundation IP for 10nm foundry business
  • Intel will not use EUV for 7nm

    The ARM announcement is being overblown and the EUV announcement is being underblown, so the Intel PR group is doing a great job.

    Background: Intel has officially been in the foundry business since 2010 but has yet to get significant traction. At 22nm, Achronix and Netronome are lead customers with very low volumes. At 14nm, Altera and Spreadtrum are Intel foundry customers but we have yet to see silicon. Meanwhile, the rest of the SoC and FPGA industry has been in 14nm/16nm HVM for more than a year.

    Intel licensing the ARM IP for 10nm foundry business is wishful thinking at best. First, the IP that Intel licensed is the ARM standard cell and SRAM libraries, the building blocks of SoCs. Second, the big SoC vendors make their own standard cell and SRAM libraries and have already taped-out their 10nm designs and will be in HVM mid 2017, about the same time Intel 10nm will be ready to get STARTED with the new ARM IP.

    LG is the first announced Intel 10nm customer, which makes sense. LG does not have the volume to work with TSMC at 10nm as an early adopter, LG competes directly with Samsung, and GlobalFoundries is skipping 10nm, so what foundry choice did they really have? Besides, the majority of LG phones use SoCs from QCOM and MTK, so this really is a low-risk business proposition.

    Bottom line: This announcement is a big fat nothing burger with cheese…

    The “no EUV at 7nm” announcement was much more meaningful. At the semiconductor conferences earlier this year Intel stated very clearly that they would use EUV at 7nm while TSMC moved forward with a non EUV 7nm (TSMC will start 7nm production in 1H 2018 and hit serious HVM in 2H 2018 with the iPhone 8). So TSMC was RIGHT and Intel was WRONG about EUV. This is not a big technical change but Intel 7nm will be even more expensive since EUV is a significant cost reduction.

    Yes, Intel will argue that their 10nm and 7nm are better than the foundries (TSMC and Samsung) but that will have to be proven at the chip level which is based on PPAC (power, performance, area, AND cost). The foundries have beaten Intel at every node based on SoC PPAC and I do not expect that to change at 10nm or 7nm. If you disagree I will cover all lunch bets.

    Also read: If an Intel 10nm transistor fell in the ARM forest


Striving for one code base in accelerated testbenches

    by Don Dingee on 08-26-2016 at 4:00 pm

    Teams buy HDL simulation for best bang for the buck. Teams buy hardware emulation for the speed. We’ve talked previously about SCE-MI transactors as a standardized vehicle to connect the two approaches to get the benefits of both in an accelerated testbench – what else should be accounted for? Continue reading “Striving for one code base in accelerated testbenches”


    What is Virtualization?

    by Ahmed Banafa on 08-26-2016 at 12:00 pm

    Virtualization is software that separates physical infrastructures to create various dedicated resources. It is the fundamental technology that powers cloud computing.

    “Virtualization software makes it possible to run multiple operating systems and multiple applications on the same server at the same time,” said Mike Adams, director of product marketing at VMware. “It enables businesses to reduce IT costs while increasing the efficiency, utilization and flexibility of their existing computer hardware.”

    Types of Virtualization

    • Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts; much like your partitioned hard drive makes it easier to manage your files.

    • Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks (SANs).

    • Server virtualization is the masking of server resources (including the number and identity of individual physical servers, processors, and operating systems) from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.

    • Desktop Virtualization: Deploying desktops as a managed service gives you the opportunity to respond quicker to changing needs and opportunities. You can reduce costs and increase service by quickly and easily delivering virtualized desktops and applications to branch offices, outsourced and offshore employees and mobile workers on iPad and Android tablets.

    • Application Virtualization: Organizations are increasingly virtualizing more of their Tier 1 mission-critical business applications and platforms, such as databases, ERP, CRM, email, collaboration, Java middleware, business intelligence and many others.

    Virtualization provides the following benefits

    • Cost savings
    • More efficient hardware use: virtualization reduces the number of physical systems you need to acquire, and you get more value out of each server.
    • Ability to run multiple types of applications and operating systems on the same physical hardware.
    • Ease of application deployment
    • IT budget integration

    How is Virtualization Different From Cloud Computing?
    Cloud computing and virtualization are two approaches to computing that attempt to make more efficient use of computer hardware. Cloud computing is a form of Internet-based computing that delivers resources such as storage space and processing time on a pay-per-use basis. Virtualization creates simulated resources and allows a single piece of hardware to deliver multiple services at once. Both options can save money by using computer hardware more efficiently. The primary difference between the two is that the physical resources that power cloud computing are owned by a cloud service provider, while a corporation that uses virtualization still maintains servers and computer hardware in its own data centers.

    The Future of Virtualization
    Virtualization has enabled the Cloud, but how will it shape it in the future?
    Cloud computing and virtualization go hand in hand. Virtualization is the foundation of cloud computing. But as the Cloud evolves, so too must virtualization, to support more IO-intensive network and storage workloads and to ensure that open standards being developed across the industry can also be applied to hypervisor (virtual machine manager) designs. Most clouds today run on virtualization technology that is ten years old, but work is taking place behind the scenes to revolutionize the way virtualization is done.

    References
    http://www.businessnewsdaily.com/5791-virtualization-vs-cloud-computing.html
    http://searchservervirtualization.techtarget.com/definition/virtualization
    http://www.vmware.com/virtualization
    http://www.wisegeek.com/what-are-the-benefits-of-virtualization.htm
    http://www.datacenterdynamics.com/focus/archive/2013/10/future-virtualization



    GM Trumps Ford

    by Roger C. Lanctot on 08-26-2016 at 7:00 am

    I strongly recommend giving a listen to John McElroy’s panel discussion podcast featuring Julia Steyn (pictured on the right, above), vice president of urban mobility and Maven for General Motors along with John Voelcker of Green Car Reports and Rebecca Lindland of Kelley Blue Book. Steyn offered up a mountain of substance regarding GM’s plans and GM’s thinking about the new, emerging world of transportation as a service.

    “Autoline: GM’s Mobility Maneuvers” – http://tinyurl.com/zxlbyss

    Steyn addressed the potential impact on vehicle sales (positive) and the role of GM dealers while laying out the strategy for bringing transportation to urban populations in a more flexible manner, better suited to changing consumer behavior. One comment stands out in particular among Steyn’s statements: “We closely work with mayors and municipalities to understand their needs.”

    This single statement captures the reality that GM “gets it.” New forms of vehicle usage and ownership or non-ownership are emerging and cities are on the front lines of this change. Steyn and GM understand this shift, recognizing that it isn’t enough to stick some cars on the street or in various parking garages for ad hoc transportation needs. It’s best to work with cities to understand the existing transportation landscape and how car companies and their products might help overcome challenges.

    This newly minted urban intuition likely arrived with the acquisition of the personnel and assets of now-defunct SideCar. SideCar executives will have understood and conveyed to GM that the battle for share in the ride hailing and car sharing business is a city-by-city proposition. Credit GM with listening.

    Cities’ needs vary, reflecting geography, demography and existing transportation infrastructure. The needs of a Las Vegas differ widely from those of Los Angeles, and those needs vary even within the cities themselves.

    Steyn’s sober assessment of the nature of the need and GM’s approach to meeting that need contrasts with Ford’s plans announced last week to commence mass production of autonomous cars by 2021. In fact, it was while listening to Ford CEO Mark Fields announce Ford’s plans at a press conference last week that my thoughts turned to Steyn’s comments.

    The mass production of autonomous vehicles – with no steering wheels or brake or gas pedals, in Fields’ words – raised immediate questions as to what entities will be buying these vehicles. According to separate comments from senior Ford leadership such cars will not be made available to consumers by Ford until 2025.

    The Ford announcement raises a host of unanswered questions regarding where and how such vehicles will be sold, deployed and used. The demand side of the proposition is never addressed. Will consumers want cars that drive themselves? And if they do, will it make sense for consumers to own them?

    It should not have been this way. Rather than simply making the bold statement that Ford will mass produce autonomous vehicles, Fields ought to have situated the announcement more carefully within the framework of the company’s history and vision of mass vehicle-based transportation – expressed on multiple occasions by Executive Chairman Bill Ford. Autonomous vehicles will introduce and expand social mobility.

    Ford was first to democratize vehicle-based transportation. It is only natural that Ford pioneer the next phase of transportation technology. The introduction of autonomous vehicles by Ford ought to have been positioned as extending the democratization represented by the Model T – opening up individual, convenient, low-cost vehicle-based transportation to financially or physically disadvantaged urban populations.

    This vision has been enunciated by urban transportation executives in cities such as Los Angeles among others and takes the autonomous driving discussion out of the realm of exotic and expensive vehicles solely suited for the well-heeled. While GM’s Steyn did not emphasize this particular aspect of car sharing and, ultimately, automated driving, she alludes to it in her comments.

    Automated vehicles as a door to economic opportunity and greater freedom of movement for the financially disadvantaged and the elderly or blind is a game changer. Google has had this vision dialed in from the start.

    Looked at in this way, automated driving is no longer the province of daredevils but rather a transformative force in the nation’s transportation portfolio. This is the natural motivation for the US Department of Transportation to find a way to facilitate and fast track the development of this vital economic asset.

    Ford has pointedly emphasized its commitment to pursuing a Google-car style SAE Level 4 type of automation – in advance of the USDOT announcing its guidelines for the development of automated vehicles. The Ford announcement wasted no time on the company’s plan for interim stages of automation along the lines of Tesla.

    Steyn’s and GM’s vision for car sharing is, by necessity, more fully evolved given the fact that Maven’s car sharing service is up and running in more than four cities and GM has doubled down on new transportation initiatives with its investment in Lyft. It’s clear that GM is well along the path of becoming a B2C organization with dealers playing a supporting role of service and fulfillment.

    In the Autoline panel discussion, McElroy raises the cautionary tale of Hertz entering and then exiting the car sharing business after failing to find a way to cope with the costs of owning the shared fleet. Steyn allays McElroy’s concerns noting that GM’s size confers some advantages in providing and caring for the Maven fleet.

    The greatest challenge for GM will likely be the rapidly intensifying competitive landscape as car companies including Daimler and BMW look to take on the same opportunity as Maven. Meanwhile, startups such as Local Motors and Navya, with their own autonomous buses, already have the jump on automating transportation for urban markets.

    Local Motors and Navya understand something that Ford and GM will just be coming to terms with. Shared automated vehicles will operate as a transportation network – something that gives GM yet another advantage. In fact, GM may finally have an excuse to introduce dedicated short range communication technology on Maven vehicles – thereby creating an ad hoc network.

    DSRC for Maven vehicles may not be in the cars and certainly isn’t anything Steyn commented on. Steyn did note the advantage posed by OnStar and likened Maven’s launch to OnStar’s emergence on the market 20 years ago.

    It’s good to see Ford joining the party. But Ford needs to hit the vision reset button. It’s not enough to build masses of autonomous vehicles. Someone has to buy them unless, of course, they’re just going to be shared.


    What is Deep Learning?

    by Ahmed Banafa on 08-25-2016 at 4:00 pm

    Deep learning is an emerging topic in artificial intelligence (AI). A subcategory of machine learning, deep learning deals with the use of neural networks to improve things like speech recognition, computer vision, and natural language processing. It’s quickly becoming one of the most sought-after fields in computer science. In the last few years, deep learning has helped forge advances in areas as diverse as object perception, machine translation, and voice recognition – all research topics that have long been difficult for AI researchers to crack.

    Neural Network
    In information technology, a neural network is a system of programs and data structures that approximates the operation of the human brain. A neural network usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to data in its local memory. Typically, a neural network is initially “trained” or fed large amounts of data and rules about data relationships (for example, “A grandfather is older than a person’s father”). A program can then tell the network how to behave in response to an external stimulus (for example, to input from a computer user who is interacting with the network) or can initiate activity on its own (within the limits of its access to the external world).
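
The training idea in the paragraph can be sketched with a single artificial neuron, the smallest building block of such a network. In this toy example (not a production network) the neuron is fed the truth table of logical OR and adjusts its weights by gradient descent until its responses match the examples:

```python
import math

# Training data: the four examples the neuron is "fed", with targets.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def sigmoid(z):
    """Squash a weighted sum into a (0, 1) activation."""
    return 1.0 / (1.0 + math.exp(-z))

def train(epochs=5000, lr=0.5):
    """Fit one logistic neuron by full-batch gradient descent."""
    w1 = w2 = b = 0.0  # two input weights and a bias, starting at zero
    for _ in range(epochs):
        g1 = g2 = gb = 0.0
        for (x1, x2), y in DATA:
            err = sigmoid(w1 * x1 + w2 * x2 + b) - y  # prediction error
            g1 += err * x1
            g2 += err * x2
            gb += err
        # nudge each weight against its averaged error gradient
        w1 -= lr * g1 / len(DATA)
        w2 -= lr * g2 / len(DATA)
        b -= lr * gb / len(DATA)
    return w1, w2, b

w1, w2, b = train()
for (x1, x2), y in DATA:
    p = sigmoid(w1 * x1 + w2 * x2 + b)
    print(f"OR({x1},{x2}) -> {p:.3f} (target {y})")
```

A real neural network stacks many such units in parallel layers, but the principle is the same: behavior is learned from examples rather than programmed rule by rule.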

    Deep Learning vs. Machine Learning
    To understand what deep learning is, it’s first important to distinguish it from other disciplines within the field of AI.

    One outgrowth of AI was machine learning, in which the computer extracts knowledge through supervised experience. This typically involved a human operator helping the machine learn by giving it hundreds or thousands of training examples, and manually correcting its mistakes.

    While machine learning has become dominant within the field of AI, it does have its problems. For one thing, it’s massively time consuming. For another, it’s still not a true measure of machine intelligence since it relies on human ingenuity to come up with the abstractions that allow computers to learn.

    Unlike machine learning, deep learning is mostly unsupervised. It involves, for example, creating large-scale neural nets that allow the computer to learn and “think” by itself without the need for direct human intervention.

    Deep learning “really doesn’t look like a computer program,” says Gary Marcus, a psychologist and AI expert at New York University; ordinary computer code, he notes, is written in very strict logical steps. “But what you’ll see in deep learning is something different; you don’t have a lot of instructions that say: ‘If one thing is true do this other thing,’ ” he says. Instead of linear logic, deep learning is based on theories of how the human brain works. The program is made of tangled layers of interconnected nodes. It learns by rearranging connections between nodes after each new experience.

    Deep learning has shown potential as the basis for software that could work out the emotions or events described in text even if they aren’t explicitly referenced, recognize objects in photos, and make sophisticated predictions about people’s likely future behavior.

    The Deep Learning Game

    In 2011, Google started the Google Brain project, which created a neural network trained with deep learning algorithms that famously proved capable of recognizing high-level concepts.

    Last year, Facebook established its AI Research unit, using deep learning expertise to help create solutions that will better identify faces and objects in the 350 million photos and videos uploaded to Facebook each day.

    An example of deep learning in action is voice recognition like Google Now and Apple’s Siri.

    The Future
    Deep learning is showing a great deal of promise, making self-driving cars and robotic butlers a real possibility. These systems are still limited, but what they can do was unthinkable just a few years ago, and the field is advancing at an unprecedented pace. The ability to analyze massive data sets and use deep learning in computer systems that can adapt to experience, rather than depending on a human programmer, will lead to breakthroughs. These range from drug discovery to the development of new materials to robots with a greater awareness of the world around them. Maybe that explains why Google has been on a buying spree lately, with robotics companies at the top of its shopping list: it has purchased eight robotics companies in a matter of months.




    Why are Top Brass from NXP, Qualcomm, Skyworks Keynoting Upcoming IEEE SOI-3D-SubVt (S3S) Conference? (San Francisco, Oct.’16)
    by Adele Hars on 08-25-2016 at 12:00 pm

    By Fred Allibert
    The IEEE S3S Conference (10-13 October 2016 at the San Francisco Airport Hyatt Regency) brings together three key technologies that will play a major role in tomorrow’s industry: SOI, 3D integration, and subthreshold microelectronics. The numerous degrees of freedom they allow enable the ultra-low-power operation and adjustable performance mandatory for energy-starved systems, perfectly suiting the needs of the many categories of connected devices commonly referred to as the Internet of Things. This natural synergy became obvious in the talks at past editions of the conference. For this reason, we adopted “Energy Efficient Technology for the Internet of Things” as the theme of the 2016 IEEE S3S.

    This theme will be present throughout the conference. It will start on October 10th with a full-day tutorial addressing two important IoT-related topics, Energy Efficient Computing and Communications, and will peak during the Plenary Hot Topics session, focused on the Internet of Things, on Thursday October 13th.

    We have an outstanding technical program, including a very strong list of invited speakers, all of them leading authorities from illustrious organizations.

    Our Keynote speakers are decision-makers from major industries:

    • Nick Yu, VP, Qualcomm, will explain why “The Homogeneous architecture is a dead fairy tale”
    • Ron Martino, VP, NXP, will present “Advanced Innovation and Requirements for future Smart, Secure and Connected Applications”
    • Peter Gammel, CTO, Skyworks, will describe “RF front end requirements and roadmaps for the IoT”

    Several sessions will also be of particular interest to designers and technologists who want to learn about new knobs to implement in their circuits: two tutorials, related to 3D technology and FD-SOI design respectively, and the technical sessions on SOI and Low Voltage Circuit Design. Applications will be illustrated in our session dedicated to SOI circuit implementations.

    You can look at our Advance Program to get details about the technical content of the conference, as well as the conference venue and registration.

    And you still have time to actively participate by submitting a late news paper before August 31st.

    Hyatt Regency San Francisco Airport

    The conference has a long tradition of combining technical and social activities. This will be the case again this year, with several dinners and receptions that will give us plenty of opportunities for discussion with colleagues.

    With its broad scope of technology-related applications and its social-oriented environment, the S3S is an excellent venue for meeting new people with different but related research interests. It is an efficient way to shed new light on your own focus area, and to sprout new ideas and collaboration themes. It is also a place where industry and academia can exchange views on the application of ongoing research and on tomorrow’s company needs.

    Deadline for Late News submissions is August 31st, 2016

    Please click here to go to the 2016 S3S Conference Registration page.

    For further information, please visit our website at s3sconference.org or contact the conference manager, Joyce Lloyd: manager@s3sconference.org

    ~ ~ ~


    Bio:
    Dr. Frederic Allibert is the General Chair of the 2016 IEEE S3S Conference. (S3S, in various formats, has been ongoing since 1975.) He is a senior scientist and member of the R&D staff at Soitec, where he has supported the development of products and technologies for applications, including FD-SOI, RF, imagers, and high-mobility materials since 1999. As Soitec’s assignee at the Albany Nanotech Center (2011-2015), his focus was on substrate technologies for advanced nodes. Since then, he’s been exploring substrate-device interactions in various fields of micro-electronics, including RF and logic.