
ON to acquire Fairchild: pioneers join together
by Bill Jewell on 11-24-2015 at 7:00 pm

Last week ON Semiconductor announced it had agreed to acquire Fairchild Semiconductor for $2.4 billion. The combined company will be a major player in power analog and power discretes. It also combines two companies with ties to the beginning of the semiconductor industry.

Fairchild Semiconductor was founded in 1957 when eight key employees left Shockley Semiconductor. Shockley Semiconductor was founded by William Shockley, a co-inventor of the transistor at AT&T’s Bell Labs. One of the Fairchild founders, Robert Noyce, is credited as co-inventor of the integrated circuit (IC) in 1958 (along with Jack Kilby of Texas Instruments). Noyce and fellow Fairchild founder Gordon Moore left to form Intel in 1968. Fairchild was bought by Schlumberger Limited, an oilfield services company, in 1979. Schlumberger sold Fairchild to National Semiconductor (founded by former Fairchild employees) in 1987. In 1997 Fairchild Semiconductor was spun off from National as an independent company. Fairchild’s history (through 2010) is posted on their web site.

Fairchild was a major force in the founding of Silicon Valley. It is one of the birthplaces of the IC. Several major semiconductor companies were formed by Fairchild alumni. These companies include Intel, National Semiconductor (now part of Texas Instruments), Signetics (bought by N.V. Philips) and AMD.

ON Semiconductor also has roots in the beginning of the semiconductor industry. ON was created in 1999 when Motorola spun off its discrete semiconductor business. Motorola established a semiconductor R&D facility in Phoenix in 1949. In 1955 Motorola began mass production of semiconductors with the first commercial high-power transistor. Motorola spun off the remainder of its semiconductor business as Freescale Semiconductor in 2004. Freescale is in the process of merging with NXP Semiconductors. NXP was spun off from N.V. Philips of the Netherlands in 2006. Philips was one of the original licensees of the AT&T transistor patent in 1952 and a pioneer of the European semiconductor industry. Motorola’s history is also online.

ON and Fairchild have shown significantly different revenue trends since becoming standalone companies in the late 1990s, in the midst of a semiconductor market boom. ON’s revenues in 2000 were $2.08 billion. Revenues declined significantly with the market downturn in 2001 and did not recover to over $2 billion until 2008. ON’s revenues increased to $3.16 billion in 2014. Much of the revenue gain since 2008 has been driven by acquisitions, including AMIS in 2008, California Micro Devices in 2010, Sanyo Semiconductor in 2011 and Cypress Semiconductor’s image sensor business in 2011. ON has been very profitable for the last few years, with net income at 14% of revenue for 2014 and 15% for the first three quarters of 2015.

Fairchild’s revenues peaked at $1.78 billion in 2000. Revenues never fully recovered, reaching $1.67 billion in 2007 and then declining in the following market downturn. Fairchild’s 2014 revenues were $1.43 billion. Fairchild has been struggling to make profits in the last few years. It had a slight $5.0 million profit in 2013 before a $35.2 million loss in 2014. The 2014 loss was due to $49.8 million in restructuring costs as it closed wafer fabs in Utah and South Korea and an assembly & test site in Malaysia. Fairchild made a $1.1 million profit in the first quarter of 2015, but lost $0.9 million in the second quarter and $8.2 million in the third quarter.

ON Semiconductor’s November 18 presentation on the Fairchild acquisition highlights the combined strength in power management. Citing 2014 IHS data, ON shows the combined company would be number two in power discretes (after Infineon) with 11.1% share. Their power products should have limited overlap, with ON focused on low-medium voltages and Fairchild primarily in medium-high voltage products. ON plus Fairchild would be the number 10 non-memory semiconductor company based on 2015 revenues, according to ON (in an example of manipulating definitions to fit your message). If memory companies are included, ON plus Fairchild would be the 14th largest semiconductor company.

ON estimates the acquisition of Fairchild would result in $150 million in annual synergies (a buzzword for cost savings) eighteen months after the closing of the deal. Most of the savings will be from reduction in the number of employees. ON has 24,500 employees with most U.S. employees located in Phoenix, Arizona. Fairchild has 6,600 employees with U.S. employees primarily in San Jose, California and South Portland, Maine. Since ON is the acquiring company, most of the employee cuts should be on the Fairchild side. ON probably prefers to focus employment in low cost Arizona versus high cost California and relatively remote Maine.

Will the ON/Fairchild combination be successful? We at Semiconductor Intelligence believe it will be. A moderate amount of revenue will be lost in commodity products since some customers use Fairchild to second source ON and vice versa. These customers will shift some business to a third supplier in order to maintain an independent second source. As ON states, the companies have limited overlap in their key power device businesses. ON has a successful track record in integrating acquired businesses. Sadly, the combination will likely result in the eventual disappearance of the Fairchild name. Thus we will lose a link to one of the founding companies of the semiconductor industry.



Power Reduction Verification Techniques Highlighted by Mentor at ARM Techcon
by Tom Simon on 11-24-2015 at 4:00 pm

Power management is a perennial topic these days, and it came up in several presentations at the recent ARM TechCon in Santa Clara in mid-November. The techniques covered in these talks address dynamic and static power consumption. The IEEE 1801 standard deals with specifying power design intent in the Unified Power Format (UPF) and typically covers these power saving methods: RTL clock gating, power gating, multi-voltage design and voltage/frequency scaling.

In HDL-based designs the UPF specification is in a separate file from the functional specification, which makes for a much cleaner design flow. Nevertheless, the UPF is used to alter the design implementation as it progresses. Each of the techniques above involves structural changes to the implementation. Clock gating affects the clock tree implementation. Power gating affects supply lines. And multi-voltage design and frequency scaling also require level shifting and clock domain crossings.

While each of these techniques is very effective, because they change circuit operation a verification strategy is necessary to ensure proper operation of the design. I attended a talk by Ellie Burns from Mentor where she discussed using their Questa PASim, Questa CDC, and Questa Formal to help with verification. Let’s look at the Mentor verification solution in more detail to understand how it can help avoid design failures due to flawed power reduction implementation.

The signals that pass between power domains need to have isolation so that signals from powered-down blocks do not cause spurious toggling on connected blocks. Toggling can consume power and cause data corruption. In many cases it is important to ensure that state information is retained when a block is powered off so that it can be restored when the block returns to operation. It is also essential that level shifters are inserted wherever there is a difference in operating voltage between connected blocks.

Beyond static checks like those above, there is a need to verify power up/down and reset/restore operations for each block and power domain. Because the system will have a number of power states, operation of the system in each state and each transition should be checked.
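To make the transition-coverage idea concrete, here is a minimal C sketch of the bookkeeping involved. It is purely illustrative: the power states and the recording API are invented for this example and say nothing about how Questa implements coverage.

#include <stdio.h>

/* Hypothetical sketch: track which power-state transitions a testbench
 * has exercised so that missing transitions can be flagged as holes. */
enum power_state { PS_ON, PS_RETENTION, PS_OFF, NUM_STATES };
static const char *names[NUM_STATES] = { "ON", "RETENTION", "OFF" };
static int covered[NUM_STATES][NUM_STATES];

void record_transition(enum power_state from, enum power_state to)
{
    covered[from][to] = 1;   /* called whenever simulation observes a transition */
}

void report_holes(void)
{
    for (int f = 0; f < NUM_STATES; f++)
        for (int t = 0; t < NUM_STATES; t++)
            if (f != t && !covered[f][t])
                printf("uncovered transition: %s -> %s\n", names[f], names[t]);
}

int main(void)
{
    record_transition(PS_ON, PS_OFF);   /* power-down exercised */
    record_transition(PS_OFF, PS_ON);   /* power-up exercised */
    report_holes();                     /* every RETENTION transition is flagged */
    return 0;
}

Running this flags every transition involving the retention state as unexercised, which is exactly the kind of coverage hole that leads to failures like the one described below.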

During her talk Ellie gave an overview of using Questa’s power-aware simulation features to perform these verification operations. Questa has extensive support for IEEE 1801. It gives users a visualization of the power management structure and behavior, and can automatically detect power management errors. Lastly, it helps with test plan generation and reporting of power management coverage.

One observed failure that Ellie cited was a block resuming from power-down that required a signal from another block which turned out to be inactive as well. This prevented the block from properly resuming. Verification of all the state transitions would have caught this.

Mentor also offers Questa Formal for verifying that the hardware generates the correct power control signals in the correct sequence. Questa Codelink is also part of the power verification suite because power states are under software control.

The Questa power-aware suite offers comprehensive verification for the sophisticated power management regimes that are required for mobile and power-sensitive SoCs. It covers RTL and gate-level verification. It allows full simulation of power state transitions and helps with debugging through power-aware waveform viewing that shows isolation and retention information.


Finding under- and over-designed NoC links
by Don Dingee on 11-24-2015 at 12:00 pm

When it comes to predicting SoC performance in the early stages of development, most designers rely on simulation. For network-on-chip (NoC) design, two important factors suggest that simulation by itself may no longer be sufficient to deliver an optimized design.

The first factor is use cases. I think I’ve told the story before of the early days of PowerPC, when the PowerQUICC team at Motorola Semiconductor asked us poor slobs at the Motorola Computer Group why the heck we were trying to use those two particular peripherals simultaneously. Oh, I dunno, maybe because a customer asked us why it didn’t work, and there’s nothing in the fine manual (see: RTFM) saying it won’t work.

A chip has to know its limitations. What goes wrong in that scenario is the design team only imagined and tested certain use cases, and we (well, our customer) found one that didn’t work. In defense of the designers, there is a significant degree of difficulty in modeling on-chip traffic and setting up test cases. The last few percentage points of coverage get tough to achieve, and if simulation or real-world test cases take forever to run, setting up one more obscure case nobody will ever think to use gets prohibitive.

The other factor is everyone breathes a huge sigh of relief when the simulation or test works, but they rarely ask why. Is it barely passing? Is it passing at an acceptable margin? Is it passing by 2x or 3x? Simulation is great at reporting stuff that is broken or under-designed, but not so good at pointing out when a system may be over-designed in certain areas.

Both experience and simulation can be expensive, but the cost of failed silicon or over-designed silicon is also expensive. This is another area where NoCs are not “just software”, and some intelligent exploration of the topology and links can make a huge difference. Design teams can certainly simulate the NoC within the context of the SoC design, but it might take a while and burn valuable brain cells that could be spent doing other stuff.

Sonics has come up with a better solution – static performance analysis, in the latest release of the SonicsStudio GUI-based design tool for their NoC products. We zoom into portions of the screenshot of the tool to look at the highlights.


On the left is a simple topology: 4 initiators, 3 targets, a few router bubbles for sharing, and the links themselves. Peak bandwidth is defined by link width and clock. Buffering and router combinations factor into configured capacity. Requested capacity is a function of the traffic model against a credit-based flow control system that prevents traffic from piling up.

There is still a step in imagining the use cases (entered in a table or extracted from simulation stimuli), but once the model is dialed in, SonicsStudio quickly and automatically calculates what the NoC link performance looks like. It reports by link what percentage of bandwidth is used (both configured and requested), and identifies any limits as peak, flow control, or downstream. The connections are hyperlinked, so jumping to reconfigure buffers, clocks, or even link widths is easy.
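To give a feel for the arithmetic involved, here is a back-of-envelope sketch in C. The formulas, derating factor, and numbers are assumptions for illustration only and do not represent SonicsStudio’s internal model.

#include <stdio.h>

/* Hypothetical static check of one NoC link: peak bandwidth from link
 * width and clock, derated to a configured capacity, compared against
 * the bandwidth requested by the traffic model. */
int main(void)
{
    double width_bits = 64.0;        /* link width */
    double clock_mhz  = 500.0;       /* link clock */
    double peak_mbs   = width_bits / 8.0 * clock_mhz;   /* 4000 MB/s peak */

    double configured_mbs = 0.9 * peak_mbs;  /* assumed buffering/credit derating */
    double requested_mbs  = 2800.0;          /* from the use-case traffic model */

    printf("peak %.0f MB/s, configured %.0f MB/s, requested %.0f%% of configured\n",
           peak_mbs, configured_mbs, 100.0 * requested_mbs / configured_mbs);

    if (requested_mbs > configured_mbs)
        printf("under-designed: widen the link, raise the clock, or add buffers\n");
    else if (requested_mbs < 0.5 * configured_mbs)
        printf("possibly over-designed: consider a narrower or slower link\n");
    return 0;
}

The real tool does this per link, across the whole topology and all use cases, and attributes each limit to peak, flow control, or downstream effects.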


This static performance analysis run is much faster than a corresponding simulation and provides more robust reporting. The nice part about this tool is that it shows you where the problem actually is. When links combine through a router, a problem may show up on the main link while insufficient bandwidth elsewhere on a single link is the actual cause. (In this scenario, somebody goofed and made the peripheral link narrower than it should have been. Even though it “works”, it sets off a lot of red in the table.)

Also evident are optimization steps, particularly clock speed and buffers. Quick what-ifs show if a NoC link can be cut back and still perform, saving real estate and power, or if a link is just barely hanging in there and may need a slightly faster clock or a couple more buffers.

Once the NoC exploration phase using static performance analysis in SonicsStudio is done, the resulting design can be put back into detailed simulation for a final pass. Iterations are fewer, confidence is higher, and understanding of what is really happening in the NoC increases. (And, if you use this tool, no one will be RTFMing your oversight later.) This represents a major improvement to designer productivity with Sonics NoC technology.

The full screenshot is attached below for reference.

Full press release:

Sonics Upgrades SoC Development Environment and Flagship NoC to Improve Chip Architecture Optimization and SoC Resiliency



USB Type-C opens doors for embedded system connectivity
by Rajaram Regupathy on 11-24-2015 at 7:00 am

Every now and then there is an innovation or advancement that changes the way we operate, the way the world operates. These innovations and advancements in technology create opportunities for businesses and provide improved features to consumers. They are also widely and swiftly adopted. USB Type-C is one such technology advancement that will soon be found in most devices. Apple and Google have taken the lead, and many more are following suit. USB Type-C will see its way into many industries and applications. Why USB Type-C? USB Type-C provides more power, better user friendliness, and connectivity that supports multiple functionalities.

This article explores a few markets and applications in which USB Type-C can be integrated (An extract from the book ‘Exploring USB Type-C Ecosystem’).

Outdoor
Outdoor activity is an industry by itself, and a very diverse one: jogging through the streets, surfing on the coast, skiing on the icy slopes, camping in the forest and cycling through the mountains are a few examples. The number of outdoor enthusiasts is increasing year over year, and reports indicate turnover in the billions of dollars in the US alone. Improvements in user experience in the outdoor equipment ecosystem will boost the industry’s growth. Figure 1 below provides an illustrative representation of a possible outdoor ecosystem that can integrate USB Type-C.


Figure 1: An outdoor ecosystem that can integrate USB Type-C

A solar panel is one piece of outdoor equipment that can integrate USB Type-C
In an outdoor environment portable devices can be handy, and here we talk about two such devices: a portable microwave and a portable refrigerator. Outdoors, efficient power management and conservation are essential, and they can be achieved with a USB Type-C ecosystem. With power delivery of up to 100 watts, USB Type-C can now power a portable refrigerator or a microwave using a solar panel. The solar panel connects to a battery pack, which in turn powers the portable refrigerator, and all of these are connected using USB Type-C. Figure 2 below illustrates this setup.

This setup also creates an opportunity for battery pack manufacturers: they can provide USB Type-C/PD as a power interface to efficiently share available power among connected devices by monitoring device states and negotiating for lower power (a sketch of this idea follows Figure 2 below).

Figure 2: A solar panel, battery pack and portable appliances connected using USB Type-C
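As a sketch of the battery-pack idea above, consider a source sharing a fixed power budget among connected devices, granting each request only what remains. This is illustrative only: the numbers and the simplistic grant model are assumptions, not actual USB PD protocol behavior.

#include <stdio.h>

#define BUDGET_W 100.0   /* USB Type-C PD maximum of 100 W */

static double available_w = BUDGET_W;

/* Grant a device its requested power, or whatever is left of the budget. */
double negotiate(const char *device, double requested_w)
{
    double granted = requested_w <= available_w ? requested_w : available_w;
    available_w -= granted;
    printf("%s requested %.0f W, granted %.0f W (%.0f W remaining)\n",
           device, requested_w, granted, available_w);
    return granted;
}

int main(void)
{
    negotiate("refrigerator", 60.0);
    negotiate("microwave", 60.0);   /* only 40 W left: must run in low power */
    return 0;
}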


Office

Electronic devices in an office setup revolve around personal computers, laptops and mobile computing devices. The combined market share of these devices is more than half of the total market for all electronic devices. Market leaders have started adopting the USB Type-C specification. With the key computing devices adopting this new connector, accessories in the ecosystem should also take advantage of it to bring improved efficiency and a better user experience. Figure 3 below provides an illustrative representation of a possible office ecosystem that can integrate USB Type-C.


Figure 3: Office ecosystem with USB Type-C


A projector is one piece of office equipment that can integrate USB Type-C

Handheld mobile devices are getting powerful and are widely used even in workplaces. Some workplaces provide tablet computers to their employees for ease of mobility. These devices bring in additional storage space through the cloud, so data can be managed in the cloud from anywhere. Projecting that stored data using a mobile device is not presently possible; currently one needs to use a laptop or PC. USB Type-C can solve this problem: once integrated into the mobile device, it can connect directly to a projector, as illustrated in Figure 4 below.

Figure 4: A USB Type-C mobile device connected to a projector with USB Type-C

This article explored a few examples of markets and applications that can integrate USB Type-C. USB Type-C can open doors for embedded system connectivity, and the scope and opportunity are immense. The coming months will see many more USB Type-C enabled devices. A detailed analysis of the markets and applications of USB Type-C is available in the book ‘Exploring USB Type-C Ecosystem’:
http://www.amazon.com/gp/product/B014SO7U4W



Smart Clothing: The Next Wave In Wearables
by Nick Langston on 11-23-2015 at 4:00 pm

Today, it’s easier than ever to conceive a new product, prototype it, perfect it for mass production and successfully market it to an ever widening audience.

Accelerators and Incubators have sprouted up specifically to enable hardware startups and help them navigate the world of contract manufacturing and supply chain logistics. Wearable device startups in particular have benefitted tremendously from this new ecosystem.

Within this wave of development lies an even more exciting trend. Among the Fitbits and Pebbles, Neumitras and Glamorskys there is a wave of truly wearable startups emerging.

OmSignal, Sensilk, Athos, Sensoria and others are part of a generation of smart clothing startups that seek to provide a link between wearable technology and apparel. Even industry giants like Adidas, Nike and UnderArmour want to play a role.

Now, one of the most valuable companies in the world has announced that it, too, wants to enable the wearable future.

On Friday, May 29th Google announced Project Jacquard. At the core of this technology is a simple concept: use textiles as a platform for device interaction.


All of these companies are on a journey that begins with sensing: heart rate, breathing, activity, temperature and touch. The sensed information tells us about our favorite subject – ourselves. Google takes this a step further by looking at textiles as the input device for real-time interaction with our devices or even our environment.

The opportunity for textile based sensing is a Big Data dream come true. We are in physical contact with textiles most of every day, and most of our lives. What vehicle could be better for biometric sensing? Imagine all that we can learn about ourselves and our habits if the textiles we touch daily were measuring what we’re doing and feeling.

Smart clothing for athletics is an easy fit. Shirts, shoes and shorts that can quantify the value of a workout and deliver real-time performance information to our smartphones have obvious value to dedicated athletes.

However, the most important question remains open: what does the average consumer want in a piece of smart clothing?

As a parent, I’d happily pay for a shirt that reports location, temperature, heart and breathing rates of my children.

What would YOU want in a smart garment?



Networks, Emulation and the Cloud
by Bernard Murphy on 11-23-2015 at 12:00 pm

To fans of Gödel, Escher, Bach (an Eternal Golden Braid), there is an appealing self-referential elegance to the idea of verifying a network switch in a cloud-like resource somewhere on the corporate network. That elegance quickly evaporates, however, when you consider the practical realities of verifying such a device in ICE emulation mode.

Network switches and routers today have 256 ports or more. To fully verify a pre-silicon model requires realistic traffic on each of these ports, which in turn requires an Ethernet tester and a speed adapter per port (to adjust real traffic speed to the lower speed of the emulator) and a lot of cables to connect it all. There are plenty of challenges in managing and debugging this custom-patched hardware (for example, dealing with non-determinism in the speed bridges), and even when it’s working you have expensive hardware tied up in testing just one design. So much for the cloud – this is more of a puddle, firmly rooted to the ground.

Mentor has understood for some time that this approach does not scale well and has set about building a capability which could support sharing emulation as something much more like a cloud resource without the need for dedicated hardware interfaces. In VirtuaLAB, Ethernet testers are modeled in software, based on proven IP running on Linux workstations. There’s no need for speed bridges or cables since software models have no problem speed-matching the emulator.

All of which makes it possible to share emulation as a cloud-like resource. Mentor has been at this for a while; back in 2013, Juniper Networks presented use of this solution to verify a large switch in which they were able to model billions of transactions at ~3.6K frames/second across all ports. More recently, Cavium presented use of VirtuaLAB to drive 11M transactions per day. This is the kind of throughput you need in order to fully verify the 40G/100G, 256+ port switches that are being built today. Since Mentor has been able to show the ability to bring up a system in as little as 8 days, the appeal of moving from a nest of wires and boxes to a very streamlined verification approach has been compelling.

It is difficult to overstate the importance of this direction. Emulation has been around for a while as a very necessary but expensive way to verify one design at a time. And setting up a representative test environment around those systems has been almost as complex as verifying the systems themselves. Now it becomes reasonable to think of emulation with all the benefits of just another abstracted cloud resource. The facility can be accessed from anywhere in the corporate network; utilization can be maximized since it can be used to verify more than one design at a time; capacity can be scaled up more transparently as design complexity and demand grow; and maintaining the hardware and software becomes largely an IT problem (Mentor provides software management via the Enterprise Server included as part of the Veloce emulation platform). Businesses will see immediate value in this approach and will want to see more of their fixed compute costs moving in this direction.


To learn more about VirtuaLAB, click HERE.



HLS with ARM and FPGA Technologies Boosts SoC Performance
by Pawan Fangaria on 11-23-2015 at 7:00 am

As SoC size and complexity increase, new approaches to development and verification are evolving, with innovative automated tools and environments for SoC development and optimization. IP-based SoC development methodology has proved to be the most efficient for large SoCs. It requires collaboration among multiple players including IP developers, SoC vendors, EDA tool providers, foundries, FPGA providers, and others.

The ARM Connected Community has more than 1,200 partners, and ARM TechCon is one of the best forums to learn about new innovations in IP development and SoC integration. Although I couldn’t attend the conference, I came across a presentation by Hardent on how to boost performance from ‘C’ software to extremely high (sky-high) levels with hardware acceleration. The methodology uses an SoC with ARM’s ACP (Accelerator Coherency Port) and ACE (AXI Coherency Extensions) interfaces and Xilinx FPGA technologies.


The hard-IP acceleration is targeted towards particular applications and comes from co-processors and accelerators fixed in silicon. The soft-IP acceleration is more generic in nature and is scalable; achieved through programmable logic customized according to the application need. Both hard and soft IP are needed to optimize the SoC.


Above is an example of code annotation in a ‘C’ program that directs the HLS (High-Level Synthesis) tool to synthesize the ‘for’ loop into a pipeline architecture. The pipelining increases throughput and resource utilization, thus enhancing the performance of the function. Similarly, there are various types of memories that balance between throughput and capacity; the appropriate memory is chosen for an application to interface between software and hardware.
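Since the annotated code itself is not reproduced here, below is a minimal sketch of what such an annotation can look like, written with a Xilinx HLS-style pragma. The function and the exact pragma form are assumptions for illustration, not Hardent’s actual example.

/* A multiply-accumulate loop annotated for high-level synthesis. The
 * PIPELINE pragma asks the HLS tool to start a new loop iteration every
 * clock cycle (initiation interval II=1), turning the loop body into a
 * hardware pipeline instead of a sequential state machine. */
void mac256(const int a[256], const int b[256], int *result)
{
    int acc = 0;
    for (int i = 0; i < 256; i++) {
#pragma HLS PIPELINE II=1
        acc += a[i] * b[i];
    }
    *result = acc;
}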

At the top level, the system looks like a combination of processing system, programmable logic, and an interface between them that can be best implemented with ARM’s AMBA (Advanced Microcontroller Bus Architecture) for efficient data movement. An important consideration for high-performance and high-throughput data movement is lower latency with coherent interfaces.

This is an example of a soft-IP directly interfacing with cache memory through the AMBA cache coherent interfaces, ACP or ACE. The benefit of using coherent interfaces to access data in cache is limited by the capacity of the cache sub-system; regular (non-cache-coherent) AXI interfaces will still provide comparable latencies for sufficiently large data sets. Several mechanisms are used to avoid cache misses and also to save power. The function chosen for acceleration must provide a gain in data processing performance that outweighs any bottleneck in data movement.

The above procedure is described in a very simplified manner; in practice it requires a lot of hardware as well as software expertise. The availability of ARM processors and interfaces, HLS tools, and tools for partitioning software and hardware has reduced the development effort to a large extent. Yet there are other pieces, such as drivers and other hardware to handle data movement, that need to be integrated to complete the SoC.

Xilinx has a new SDSoC development environment that can be used to optimize and deploy programmable SoCs much more easily. The SDSoC front-end is an Eclipse-based C/C++ IDE and the back-end can call upon many hardware design tools.

Hardent recommends this flow using SDSoC to quickly optimize the custom hardware components. The application can run on any ARM Cortex-A/R processor; both bare-metal and Linux applications are supported. The profiling tool integrated into the IDE interfaces with non-invasive ARM debug components built into an SoC, and ARM CoreSight technology provides an excellent debug and trace system. The hardware estimates can be optimized through iteration over the micro-architecture and macro-architecture.

This flow provides an environment in which a complete system can be described in C/C++, appropriate functions can be migrated to soft-IP using HLS, and the soft-IP can be integrated into the system. The processor and the programmable logic are tightly integrated into the SoC with AMBA interfaces.

Hardent is an active member of VESA (Video Electronics Standard Association) and MIPI (Mobile Industry Processor Interface) Alliance and provides IP for display. Hardent also provides several training courses for SoC development based on ARM processors, Xilinx Zynq and HLS. The latest in the offering is “Embedded C/C++ SDSoC Development Environment and Methodology”.

SDSoC appears to be an excellent, efficient environment for SoC development and optimization. See this video of less than 4 minutes on the Xilinx website HERE.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Intel 2015 Analyst Meeting Debunked!
by Daniel Nenni on 11-22-2015 at 7:00 am

Bill Holt’s “Advancing Moore’s Law” presentation at the recent Intel Analyst day was swallowed hook, line, and sinker by the mainstream media fish so let me play devil’s advocate here and point out some problems with his spin on the competitive landscape.

Coincidentally, one of my Intel friends insists that Intel is number one when it comes to integrity amongst semiconductor companies. While I agree with that in principle, I think clear exceptions are the Intel presentations at the analyst meetings. In this case it’s Bill Holt’s presentation; another that comes to mind is Intel CFO Stacy Smith’s Contra Revenue nonsense, but I digress…

First and foremost, Bill continues to do Apple Versus Zebras comparisons in regard to Intel process technologies versus the foundries. In this case Bill is comparing Intel Broadwell and Skylake microprocessors against Apple SoCs, two very different things. Obvious question: Why doesn’t Bill use Bay Trail (22nm) and Cherry Trail (14nm) SoCs as comparisons against the Apple A8 (20nm) and A9 (16nm) SoCs? That way Bill would not have to fabricate a “Transistor Density Normalized for Composition” calculation, right? Yes Bill, composition DOES matter!

Second, process naming conventions are no longer based on the minimum drawn gate length of a planar transistor. With the advent of FinFETs (22nm for Intel and 16/14nm for the foundries), process naming conventions have nothing to do with transistor size and are for marketing purposes only. If you truly want to do a “process to process comparison”, look at the metal fabric (fin pitch, gate pitch, M1 pitch, etc…) and the SRAM cell size. But that really is a Power Pointless contest, since the true test is the resulting chip because that is what customers ultimately buy.

Third, and more importantly, when comparing process nodes you also need to include production dates. According to the Intel slides, TSMC 10nm will be a superior process to Intel 14nm, and with that I agree. Intel goes on to claim that their 10nm process will be superior to TSMC’s 10nm, and I also agree with that. What Intel does not say, however, is that their 10nm will be in production more than one year after TSMC’s 10nm. In fact, TSMC’s 7nm process will be in production in the same time frame as Intel 10nm, so isn’t that the comparison the analysts should see?

Bottom line: This really seems like a desperate attempt to cover up the fact that Intel has lost the process LEAD to the foundries.

You can see Bill’s slides HERE.

The other important news that was conveniently announced immediately AFTER the analyst event is that Intel has a new president. Dr. Venkata “Murthy” Renduchintala is president of a newly created Client and Internet of Things (IoT) Businesses and Systems Architecture Group which is a combination of Intel’s Platform Engineering, Client Computing, IoT, Software and Services, and Design & Technology Solutions groups. It’s interesting to note that Murthy is a long time fabless guy from Qualcomm so more changes at Intel will probably follow. He is not on the Intel org page yet but my guess is that he will replace the departing Intel president Renée J. James.

And Qualcomm’s snappy response was:

“A few months ago we made the decision to move away from a co-president leadership structure for QCT. Cristiano Amon was the clear choice as President of the chipset business. We made the decision to enhance the operation of QCT by having a single decision maker who has an exceptional track record of executing and has the confidence of the team and our customers. We are confident in Cristiano’s leadership as we capitalize on the opportunities ahead. Murthy was offered another role within Qualcomm, but he chose to leave the company instead.”



You Can Win at Daily Fantasy Football, But Is It Worth It?
by mbriggs on 11-21-2015 at 12:00 pm

It’s hard to avoid the Draftkings and Fanduel commercials. These new unicorns are spending over $200M on ads this year. The government has caught on to the fact that this is legalized gambling, and will certainly be taking action. The sites argue that fantasy sports are games of skill, and thus are not considered gambling. My take is that there is quite a lot of skill involved.

I engaged mostly to have fun. When you have a few bucks riding on the weekly games (NFL for me) it makes them much more fun to watch. If you engage, be aware that there are many players you will be competing with that make a living playing fantasy sports. It’s somewhat akin to jumping into the World Series of Poker.

The Skim
What really irritates me is the size of the skim, which gamblers call the vig. Aside from playing against the pros, even if you win, the bite the sites take for each transaction is substantial.

The sites call the skim “10%”, but it doesn’t feel that low when you are playing. There are many types of fantasy contests; the two primary categories are Tournaments (GPP: Guaranteed Prize Pool) and Cash. The easiest to understand are the 50/50 Cash games. If there are 10 entries in a 50/50 Cash game, then the top 5 scorers win. Let’s say you enter two $50 50/50 games. This will cost you $100. If you win both, you will get back $180, for a net win of $80 (without the skim you would net $100, so the skim on your winnings is 20%!). If you lose both, you’ve lost the entire $100.

Let’s say you win one $50 game and lose one $50 game. The win returns $90, so against your $100 outlay the total net to you is -$10. The sites figure your total outlay is $100 and their take is $10, thus the advertised “10%” skim.
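A small C program makes the arithmetic explicit. It assumes, consistent with the numbers above, that a 50/50 win returns 1.8x the entry fee ($90 on a $50 entry).

#include <stdio.h>

int main(void)
{
    const double entry  = 50.0;         /* entry fee per game */
    const double payout = 1.8 * entry;  /* $90 returned on a win */

    /* Enter two games, win both: receive $180 on a $100 outlay. */
    double net_two_wins = 2 * payout - 2 * entry;            /* +$80 */
    /* Without the skim a win would return 2x entry, netting +$100. */
    double skim_pct = 100.0 * (1.0 - net_two_wins / 100.0);  /* 20% */

    /* Win one, lose one: receive $90 on a $100 outlay. */
    double net_split = payout - 2 * entry;                   /* -$10 */

    printf("win both:      net %+.0f (skim on winnings %.0f%%)\n",
           net_two_wins, skim_pct);
    printf("win one, lose one: net %+.0f (the advertised 10%% of a $100 outlay)\n",
           net_split);
    return 0;
}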

Given the cost of each contest to Fanduel and Draftkings is pennies, does the skim seem high to you?

I’ll not get into tournaments, but will say that your chances of winning one are in line with winning the lottery. The pros will each submit dozens or even hundreds of lineups into each tournament.

You Can Win
The good news is that there are many quality advice sites available. The best advice comes with a fee, but it’s fairly nominal. In my view the best advice sites are The Football Guys, Rotogrinders, and Pro Football Focus. I think I paid $30 for a year’s worth of The Football Guys. Understand that it takes time to consume all the advice, which comes in a variety of formats. I like to listen to The Football Guys and Rotogrinders podcasts while walking, and make heavy use of The Football Guys Interactive Value Charts while setting lineups. Net, net, the weekly time investment is a minimum of 6 hours; my guess is that most players, pros and non-pros alike, spend much more.

Persistence is a Virtue
I’ve played in a couple of hundred daily NFL 50/50 games on Fanduel and Draftkings over the last year. I’ve won around 70% of my contests, but my winnings are modest, thanks to the skim. This past week I was 0-9! The experts will warn you that it happens, and you need to persevere through bad weeks, as there is an element of luck involved.

I am not particularly anxious to re-engage after my 0-9 performance, but suspect I’ll jump back in.

(postscript) I entered 4 50/50 contests the following week and went 2-2. Net to me is -$20.



How to Secure an IoT Edge Device from Multiple Attacks?
by Eric Esteve on 11-21-2015 at 7:00 am

In the 1990s, designing for performance was the main challenge, and the marketing message for Intel processors was limited to the core frequency. Then designers had to optimize power consumption to target mobile phones and smartphones, building power-efficient SoCs: low-power but high-performance devices. Now in 2015 the semiconductor industry realizes that security is becoming a very strong, almost mandatory requirement for emerging systems expected to equip every house, car, factory or body. If you think that security is optional, just remember that tomorrow’s electronic systems will have to be built to support a changing world; take the Paris attacks as an example of today’s reality…

The above graphic, picturing a typical IoT edge device controller, shows attacks coming from several fronts: software IP theft, tamper-related failures, stored key theft, fault injection, malicious applications, attacks from peripherals…

Sourcing an IP like the ARC EM processor core from Synopsys offers a strong advantage, as the IP vendor also provides a complete security package, the ARC EM Enhanced Security Package. This new ARC EM option enables designers to create an isolated, secure environment that protects their systems and software from attacks, and to implement the various pieces of IP in a consistent environment.

At a high level, the designer can select security processors and modules, cryptographic cores like symmetric/hash cryptographic engines, platform security (to build secure boot and cryptography middleware), and content protection supporting HDCP 2.2 and DTCP IP for DRM schemes.

First, we notice that the ARC EM Enhanced Security Package is merged with the ARC EM processor selected to support the functional application, eliminating the need for a dedicated security processor. This single-processor architecture, available for ARC EM4 and EM5-D, reduces area and power consumption.

The designer can define a secure, isolated area of the processor to guarantee code and data protection for confidentiality and integrity (the secure APEX red box in the block diagram), enabling a Trusted Execution Environment. Moreover, in-line instruction and data encryption, address scrambling and data integrity checks provide protection from system attacks and IP theft.

Memory is protected by a secure MPU (see the bottom-left red box) that supports multiple privilege levels. Accesses to secure peripherals and system resources are restricted by using secure APEX or system bus signaling (the secure AHB bus red box). In secure mode, the Trusted Execution Environment can’t be accessed from the peripherals.
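As an illustration of the privilege-level idea (a conceptual model only, not Synopsys’ SecureShield implementation), an MPU check can be thought of as tagging each memory region with a minimum privilege and rejecting accesses below it:

#include <stdio.h>
#include <stdbool.h>

typedef struct {
    unsigned base, size;
    int min_priv;   /* 0 = user ... 2 = secure; levels are hypothetical */
} region_t;

static const region_t regions[] = {
    { 0x00000000u, 0x10000u, 0 },   /* application code and data */
    { 0x20000000u, 0x04000u, 2 },   /* stored keys and crypto state */
};

/* Allow an access only if the address is mapped and the requester's
 * privilege level meets the region's minimum. */
bool access_allowed(unsigned addr, int priv)
{
    for (unsigned i = 0; i < sizeof regions / sizeof regions[0]; i++)
        if (addr >= regions[i].base && addr - regions[i].base < regions[i].size)
            return priv >= regions[i].min_priv;
    return false;   /* unmapped address: deny by default */
}

int main(void)
{
    printf("user access to app RAM:     %s\n", access_allowed(0x00001000u, 0) ? "OK" : "DENIED");
    printf("user access to key store:   %s\n", access_allowed(0x20000100u, 0) ? "OK" : "DENIED");
    printf("secure access to key store: %s\n", access_allowed(0x20000100u, 2) ? "OK" : "DENIED");
    return 0;
}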

The ARC EM CPU pipeline has been designed to be tamper resistant, as there is no store in the 3-stage pipeline. The CPU can detect tampering and software attacks thanks to an integrated watchdog timer that detects system failures and enables countermeasures.

In fact, the ARC EM Enhanced Security Package interleaves protected processor pipeline registers, and its in-line instruction and data encryption ensures decrypted instructions are never stored or accessible, protecting algorithms from reverse engineering without impacting instruction timing. Sourcing both the processor IP and the security package from the same provider is the key to maximum protection, allowing an optimized implementation that reduces area and power consumption.

The Enhanced Security Package with SecureShield is a part of Synopsys’ comprehensive portfolio of security IP solutions, which also includes the CryptoPack option for ARC EM processors as well as the DesignWare Security IP solutions, which comprise a range of cryptography cores and software, protocol accelerators, root of trust, platform security and content protection IP. On top of the above described security hardware features, Synopsys provides content protection, platform security and cryptographic cores. The designer will benefit from common crypto algorithms such as AES, 3DES, ECC, SHA-256 or RSA.

The ARC EM Family is supported by a robust ecosystem of software and hardware development tools, including the ARC EM Starter Kit for early software development, MetaWare Development Toolkit that generates highly efficient code ideal for deeply embedded applications, ARC simulators including nSIM and xCAM, and the ARChitect core configuration tool.

Synopsys’ embARC Open Software Platform gives all ARC EM software developers online access to a comprehensive suite of free and open-source software that eases the development of code for IoT and other embedded applications.

Such an integrated solution, where security is implemented within the CPU architecture by the IP core provider, allows building a secured system that is more coherent than one where the designer has to insert security into an existing architecture, and more power efficient than a dual-CPU solution that adds a dedicated security processor. The combination of security features and energy savings is especially important for IoT and mobile applications, including wearable and smart home devices.

From Eric Esteve from IPNEST
