
Who Left the Lights On?
by Bernard Murphy on 01-03-2017 at 7:00 am

I attended a Mentor verification seminar earlier in the year at which Russ Klein presented a fascinating story about a real customer challenge in debugging a power problem in a design around an ARM cluster. Here’s the story in Russ’ own words. If you’re allergic to marketing stories, read it anyway. You might have run into this too and the path to debug is quite enlightening.

When I was a kid, my father used to get very angry when he found a light on in an empty room. “Turn off the lights when you leave a room!” he would yell. I vowed when I got my own home I would not let such trivia bother me. And I don’t. The last time my dad came to visit he asked me, “What’s your electric bill like?” as he observed a brightly lit room with no one in it. I changed the subject.

There is probably no worse waste of energy than lighting and heating a room that is empty. The obvious optimization: notice that no one is there and turn off the lights. It works the same on an SoC or embedded system. To save energy, system developers are adding the ability to turn off the parts of the system that are not being used. Big energy savings, with no compromise to functionality.

I was working with a customer who had put this type of system in place, but they were observing a problem. While most of the time the system did really well with battery life, occasionally – about 10% of the time – the battery would die long before it should. The developers were stumped. After a lot of debugging what they discovered was that one of the energy hungry peripherals would be turned on and left on continuously, while there were no processes using it.

To debug the problem, they stopped trying to use the prototype and went back to emulation on Veloce to try to figure out what was going on. Veloce has a feature that allows developers to create an “activity plot” of the design being run on the emulator. The activity plot shows a sparse sampling of the switching activity of the design. While switching activity does not give you an absolute and exact measurement of power consumed, it does allow you to find where likely power hogs are hiding (see figure #1).

Figure #1

They ran their design on Veloce and captured the activity plot; it looked like this (see figure #2).

Figure #2

The design was configured to run two processes. One used peripheral A (the developer of this system is quite shy and does not want me putting anything here which could be used to identify them – so the names have been changed to protect the innocent). The other process used both peripheral A and peripheral B. As you can see from the graph, the first process accesses its peripheral at one frequency, creating one set of spikes in switching activity. The second process accesses both peripherals, but less frequently, producing the taller set of spikes. For testing purposes, the rate at which the processes were activated was increased, and the periods of the two processes were chosen to minimize synchronization between them.

Figure #2 shows that at some point the spikes on peripheral A disappear – that is, peripheral A gets left on when peripheral B gets turned on. Someone “left the lights on”, as it were. Examination of the system showed that, indeed, the power domain for peripheral A was left on.

Figure #3 shows a close up of the activity plot when power domains are being turned on and off correctly. Figure #4 shows a close up of the point where peripheral A is unintentionally left powered on continuously.

Figure #3
Figure #4

With Codelink®, a hardware/software debug environment that works with Veloce, the designers were able to correlate where the cores were, in terms of software execution, with the changes in switching activity shown in the activity plot. Figure #5 shows a correlation cursor in the activity plot near a point where peripheral A gets turned on and the debugger window in Codelink, which shows one of the processor cores in the function “power_up_xxx()”.

Figure #5

Since the problem was related to turning off the power to one of the power domains, they set the Codelink correlation cursor to where the system should have powered down peripheral A (see figure #6).

Figure #6

At this point there were two processes active on two different cores that were both turning off peripheral A at the same time (see figure #7).

Figure #7

Since this system comprises multiple processes running on multiple processors, all needing a different mix of power domains enabled at different times, a system of reference counts is used. When each process starts, it reads a reference count register for each of the power domains it needs. If it reads 0, there are no current users of the domain, so the process turns the power domain on. In either case, it increments the reference count and writes it back to the reference count register.

When the process exits and no longer needs the power domains powered up, it reverses the procedure. It reads the reference register. If it reads 1, the process can conclude that no other process is using the power domain and turns it off. If the reference count is higher than 1, another process is still using the domain and it is left on. Either way, the process decrements the reference count and writes it back to the reference count register.

At any point in time, the reference count shows the number of processes currently running that need the domain powered on.
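To make the scheme concrete, here is a minimal C sketch of the acquire/release sequence as the article describes it. The register addresses and names are hypothetical, and note that nothing below is atomic yet:

```c
#include <stdint.h>

/* Hypothetical memory-mapped registers; the addresses are illustrative only. */
#define REFCOUNT_REG_A  ((volatile uint32_t *)0x40001000u)  /* users of domain A */
#define PWR_CTRL_REG_A  ((volatile uint32_t *)0x40001004u)  /* 1 = domain A on   */

/* Called when a process starts using peripheral A. */
static void power_domain_acquire(void)
{
    uint32_t count = *REFCOUNT_REG_A;   /* read the current user count        */
    if (count == 0)
        *PWR_CTRL_REG_A = 1;            /* no users yet: power the domain on  */
    *REFCOUNT_REG_A = count + 1;        /* increment and write back           */
}

/* Called when a process exits and no longer needs peripheral A. */
static void power_domain_release(void)
{
    uint32_t count = *REFCOUNT_REG_A;   /* read the current user count        */
    if (count == 1)
        *PWR_CTRL_REG_A = 0;            /* we are the last user: power it off */
    *REFCOUNT_REG_A = count - 1;        /* decrement and write back           */
}
```

Each of these routines is a read-modify-write of the reference count register, and that is where things went wrong.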

Using Codelink, the developers were able to single step through the section of code where the power domain got stuck in the on position. What they saw were two processes, each on a different core, both turning off the same power domain.

First, core 0 read the reference register and got a 2. Then core 1 read the same reference register and also got a 2, since the process on core 0 had not yet decremented the count and written it back. Next, both cores decided not to turn off the power domain, as each saw that another thread was apparently still using the peripheral. Finally, both cores decremented their count from 2 to 1, and both wrote back a 1. This left the system in a state where no process was using the power domain, yet it remained on. And since the reference register held a 1, any subsequent process that used the domain would increment it on entry and decrement it back to 1 on exit, so the count could never return to 0. The power stayed on to this domain until the system was rebooted or ran out of power.

Now this looks like a standard race condition: two processes on two different cores, both doing a read/modify/write cycle on the same register. These bus cycles need to be atomic. The developers went to the software team, told them about the mistake, and asked them to perform locked accesses to the reference count register.

It turned out that the software was already using locked accesses to the reference count register. The software team pointed the finger back at the hardware team.

The hardware team had implemented support for AXI “Exclusive Access”. The way exclusive access works is that a master performs an exclusive read, and the slave is required to note which master performed it. If the next cycle is an exclusive access from that same master, the write is applied. If any other cycle occurs in between, either a read or a write, the exclusive access is canceled: any subsequent exclusive write is not performed and an error is returned. This logic should have prevented the race condition.
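A toy C model of that exclusive-access behaviour, seen from the slave side, may help. It follows the article’s simplified description rather than the full AXI specification, and all names are my own:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of an exclusive-access monitor guarding one register. */
typedef struct {
    bool     armed;      /* an exclusive read reservation is outstanding */
    uint32_t master_id;  /* the master that performed the exclusive read */
} excl_monitor_t;

static excl_monitor_t mon;

/* Exclusive read: the slave notes which master asked and returns the data. */
uint32_t excl_read(uint32_t master_id, const volatile uint32_t *reg)
{
    mon.armed     = true;
    mon.master_id = master_id;
    return *reg;
}

/* Any other intervening read or write cancels the reservation. */
void other_access(void)
{
    mon.armed = false;
}

/* Exclusive write: applied only if the reservation from the same master is
 * still intact; otherwise the write is dropped and an error is returned. */
bool excl_write(uint32_t master_id, volatile uint32_t *reg, uint32_t value)
{
    bool ok = mon.armed && (mon.master_id == master_id);
    if (ok)
        *reg = value;      /* EXOKAY: the write takes effect              */
    mon.armed = false;     /* the reservation is consumed either way      */
    return ok;             /* false: caller must retry read-modify-write  */
}
```

With this working as intended, the core that loses the race gets an error on its exclusive write, retries the whole read-modify-write, sees the updated count, and the domain is powered down correctly.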

On closer examination, it turned out that the AXI fabric was implementing the notion of “master” using the AXI master ID from the fabric. Since the ARM processor had four cores, the AXI traffic for all four cores came from the same master port, so from the fabric’s and the slave’s perspective the reads and writes all appeared to originate from the same master and were all allowed. There was no differentiation between accesses from core 0 and core 1: an exclusive access from one core could be followed by an exclusive access from another core in the same cluster, and it would still succeed (see figure #8). This was the crux of the bug.

Figure #8

The ID of the core that originates an AXI transaction is encoded in part of the transaction ID. By folding those core-ID bits into the master identifier used to determine the exclusivity of accesses to the reference count register, the design was able to process the exclusive accesses correctly.
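In terms of the toy monitor above, the fix amounts to widening what is treated as the requester so that the core-ID bits carried in the transaction ID are included. Which bits those are is design-specific, so the mask below is purely illustrative:

```c
#include <stdint.h>

/* Before the fix: only the fabric's master-port ID identified the requester,
 * so all four cores behind the cluster's port looked identical. */
uint32_t requester_key_buggy(uint32_t master_port_id, uint32_t axi_txn_id)
{
    (void)axi_txn_id;                      /* core bits ignored */
    return master_port_id;
}

/* After the fix: fold in the core-ID bits encoded in the AXI transaction ID,
 * so exclusive accesses from different cores no longer match one another. */
uint32_t requester_key_fixed(uint32_t master_port_id, uint32_t axi_txn_id)
{
    uint32_t core_id = axi_txn_id & 0x3u;  /* illustrative mask only */
    return (master_port_id << 2) | core_id;
}
```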

Veloce emulation gave the developers the needed performance to run the algorithm to the point where the problem could be reproduced. Codelink delivered the debug visibility needed to discover the cause of the problem. The activity plot is a great feature that lets developers understand the relative power consumption of their design.


Russell Klein is a Technical Director in Mentor Graphics’ Emulation Division. He holds a number of patents for EDA tools in the area of SoC design and verification. Mr. Klein has over 20 years of experience developing design and debug solutions which span the boundary between hardware and software. He has held various engineering and management positions at several EDA companies.


Crowd-Sourcing Morality for Autonomous Cars
by Bernard Murphy on 01-03-2017 at 7:00 am

Questions are being raised on how autonomous vehicles should react in life-or-death situations. Most of these have been based on thought experiments, constructed from standard dilemmas in ethics such as what should happen if the driver of a car or an autonomous car is faced with either killing two pedestrians or killing the occupants of the car. Recent fatal Tesla crashes have added broader interest in whether questions of this nature have more practical relevance.

We should acknowledge up front that issues of the type offered above are likely to be at the fringes of reasonable operation for autonomous cars. But even if rare, the consequences can still be severe; also these outliers aren’t the only cases where moral issues can arise. When is it OK to exceed the speed limit or drive in the shoulder lane or park next to a fire-hydrant? For human drivers, the simple answer – never – can be modified given extenuating circumstances. Does each micro-infraction need to go to traffic court for a judgement? Again, for human drivers, obviously not but which instances can be ignored depends in turn on the in-situ judgment of traffic police or other nearby drivers who must make moral decisions – at least for which incidents rise to a level of proper concern.

A research team at UC Irvine and the MIT Media Lab has taken a next logical step in this direction. They have built a platform they call the Moral Machine which conducts an online survey posing moral questions in the narrow domain of collisions/fatalities, gathering (so far) over 2.5 million participants in 160 countries. In each case, the choice is between two bad options. Some are perhaps easier than others, such as valuing human life over the life of an animal. Others are trickier. Do you value the lives of young people over older people, or women over men, or do you value the lives of law-breakers (pedestrians crossing against the signal) less than those of people obeying the law? What if you consider variants among these options?

The results are interesting if largely unsurprising. You should try the survey yourself (link below) but here’s a taste of crowd-sourced preferences (you get to this as a reward at the end of the survey):

  • There’s a reasonably strong bias to saving more people rather than fewer
  • There’s little bias to saving pedestrians versus people in your car
  • There’s some bias to saving people following traffic laws versus people flouting those laws
  • There’s some bias to saving women over men but a strong bias against saving animals
  • There’s a reasonably strong bias to saving people with high social value (e.g. doctors) versus people with low social value (e.g. bank robbers) – assuming you can figure this out prior to an accident

You could imagine, since the survey reflects the views of a large population, that it could provide a basis for a “morality module” to handle these unusual cases in an autonomous (or semi-autonomous) car. Or not (see below). The survey would have to be extended to handle the lesser infractions I raised earlier, but then it might run into a scalability problem. How many cases do you need to survey to cover speeding violations for example? And how do you handle potentially wider variances in responses between different regions and different cultures?

Then again, there may be a larger problem lurking behind this general topic, at least in the longer term. As technologists, we want to find technology solutions to problems but here we’re creeping into a domain that civil and religious institutions will reasonably assert belongs to them. From a civil law perspective, morality by survey is a form of populism – fine up to a point, but probably needing refinement in the context of case law and constitutional law. From a religious viewpoint, it is difficult to see how morality by survey can be squared with fundamental religious guidelines which many adherents would consider not open to popular review.

You might argue I am over-extending my argument – the cases I have raised are either rare or minor and don’t rise to the level of significant civil or religious law. I counter first that this is an argument from ignorance of possible outcomes (we can’t imagine any but the cases we’ve discussed; therefore all others must be minor). Second, from the perspective of the civil/religious lawyers, minor though these instances may be, they could easily be seen as the start of a slippery slope where control of what they consider their domain passes from them to technologists (remember Galileo). And third, we are eager enough to over-extend what we believe AI can do (putting us all out of work, extermination or enslavement of the human race by robots) so why disallow over-extension along this line of thinking? 😎

Perhaps we need to start thinking in terms of micro-morality versus macro-morality, as we do already with micro- versus macro-economics. Micro-morality would be the domain of “intelligent” machines and their makers and macro-morality would be the domain of the police, the courts and religions. But where is the dividing line and can these domains even be split in this way? We probably can’t expect law-makers or religious leaders to cede any amount of their home turf without a fight. And perhaps we shouldn’t hope for that outcome. I’m not sure I want for-profit technologists or mass public opinion deciding moral questions.

Then again, perhaps law-makers and religions will adapt (as they usually have) by forming their own institutes for cyber-morality where they review survey responses, but deliver rulings for use in cyber contexts based on fitting popular opinion to their own reference documents and beliefs. Then you can dial in your political and religious inclinations before setting off on that autonomous car-ride. Meanwhile, you can read more about the UCI/MIT study HERE and take the survey HERE.

More articles by Bernard…


Qualcomm Hit With $853M Penalty for Patent Licensing Practices
by Tom Simon on 01-02-2017 at 12:00 pm

Qualcomm was hit in December with an $853M fine by the Korea Fair Trade Commission (KFTC) for not fairly sharing patents related to mobile phone chipsets. In setting the standards for CDMA, WCDMA and LTE, agreements were struck to share technology in order to advance the standards. Fair, Reasonable and Non-Discriminatory (FRAND) terms for cross-licensing critical technology were put in place so that the market and consumers would benefit from interoperable technology.

In many cases standards rely on patented technology that the patent holders agree will be shared fairly. The patents are licensed for a fee, but the license terms are reasonable. Qualcomm holds many patents needed for older and present mobile phone chip sets. While their share of the patents for CDMA exceeded 90%, it has been declining in newer technologies. Qualcomm holds ~27% and 16% of the WCDMA and LTE standards’ patents, respectively.

The catch is that all chipsets need backward compatibility, so even the older CDMA patents can create a roadblock to chipset companies like MediaTek or Intel.

Qualcomm is accused of selectively licensing its necessary patents to the handset companies who use only their chipsets. They are seen as not allowing the other chipset companies to access their patents, and further as punishing handset companies that use chipsets from other manufacturers.

There is a complex web of cross licensing that creates safety by establishing patent umbrellas, assuring that companies building products are not subject to numerous individual patent license claims. Handset patent licensing is a major part of this construct.

Korea’s KFTC has concluded that Qualcomm is abusing its position by selectively allowing only certain companies to gain access to its patents. Without the standards processes, the market becomes closed and subject to monopolistic practices. Qualcomm’s share of the chipset market has grown steadily over recent years, and no new entrants have come into this market. Indeed, an alarming number of chipset makers have exited the market: NXP, TI, Freescale, ST, NEC, Broadcom, Nvidia and Marvell.

The mobile phone industry is completely reliant on well defined standards and seamless interoperability. Imagine the stunting effect that incompatible cell phone technology would have on this market. Furthermore, handsets have become much more than audio devices: they have become a major computing platform. This ruling was not undertaken lightly by the KFTC. They held numerous hearings and considered extensive evidence.

This decision and Qualcomm’s appeal will certainly be closely watched. Here is another article on this topic on Morning Consult.


Executive Interview: Joe Rowlands, Chief Architect at NetSpeed Systems
by Daniel Nenni on 01-02-2017 at 7:00 am

Joe has devoted his career to understanding and designing cache coherent systems and has been granted over 95 patents on the subject. For the past four years, he has been Chief Architect at NetSpeed, a developer of network-on-chip SoC interconnect.
Continue reading “Executive Interview: Joe Rowlands, Chief Architect at NetSpeed Systems”


Vox Clamantis in Deserto
by Roger C. Lanctot on 12-30-2016 at 4:00 pm

If you are headed to Las Vegas for your New Year’s celebration, the annual Consumer Electronics Show or just a good time, beware! According to some estimates Nevada is the fourth most dangerous state for pedestrians and Las Vegas is ground zero for what the city calls an ePEDemic of roadway fatalities.

It’s difficult to explain the underlying cause of Nevada’s dubious distinction and Las Vegas’ standout performance other than to point to the large number of tourists, the wide, pedestrian-unfriendly boulevards (gotta get to that casino across the street, but how?), a car-dependent surface transportation network and an ample volume of alcohol consumption by drivers and pedestrians alike. The city of Las Vegas has done its best to herd walkers onto pedestrian flyovers and corral them into crosswalks, but impatience and heedlessness are the enemies of prudence and safety.

Of course, transportation challenges in Las Vegas are not limited to the city’s walkability. Commuters face challenges getting around Las Vegas on a daily basis and when CES 2017 arrives next week arterial sclerosis will set in.

With these challenges in mind, it is worth noting that Nevada launched a project just last week intended to find solutions to the multi-faceted problems facing the city’s transportation planners. Like a cry for help piercing the desert stillness, Las Vegas has put out an RFI for the creation of an “X2V Interoperability Playground” in the metropolitan area. Submissions due by January 30, 2017. (“X” as in “State” to “V” as in vehicle communications.)

The full text of the RFI appears below. Suffice it to say that Nevada is looking to create a development environment tuned to take advantage of existing technologies and systems to solve real problems in real time – with the lure of offering up Las Vegas as a testbed/petri dish for transportation innovation.

Notable among the state’s objectives is the creation of a system that is free of proprietary technology and what it calls “vendor lock-in.” The proposal takes into account mobile devices, embedded vehicle connectivity, in-vehicle infotainment systems and infrastructure.

In this respect Nevada is clearly seeking to cut through the balkanized world of infrastructure sourcing agreements with regionally dominant suppliers using incompatible proprietary systems. The state is also signaling the priority it will be putting on cellular-based communications as the single most widely deployed and interoperable technology capable of serving as a platform for integrating and aggregating data from multiple sources.

Nevada’s outreach gives voice to the frustrations of cities throughout the U.S. and the world which are struggling to achieve interoperability between fundamentally incompatible automotive, mobile, wireless and transportation infrastructure systems and solutions. Cars and phones essentially don’t talk to one another, and neither communicates very well with infrastructure.

Las Vegas has taken some of the first steps toward enabling connectivity between cars and infrastructure by becoming one of the first cities to enable the communication of the signal phase and timing of traffic lights with cars – a capability that both BMW and Audi now offer on some of their newest vehicles. But it’s just the first step.

The task for Las Vegas requires bringing together car makers, app developers, wireless carriers and infrastructure contractors in the interest of mitigating congestion, traffic fatalities, and vehicle emissions. Multiple app developers are already working toward these ends including apps like Global Mobile Alert for alerting drivers to the proximity of school zones, railroad crossings and traffic lights, Haas Alert for alerting drivers to the proximity of emergency vehicles, and Ridar Systems for alerting drivers to the proximity of motorcycle riders.

But there are more, including ConnectedSignals for communicating the signal phase and timing of approaching traffic lights and Paytollo for smartphone-based toll payment. These applications, and more like them, are already helping speed the flow of vehicles through the urban and suburban grid.

What Las Vegas is looking for is a way to aggregate communications among devices on the back end while enabling enhanced communication capabilities and data exchange at the terminals – whether those terminals are phones, vehicles, traffic signals or toll booths. That is the ultimate goal.

The upcoming CES event is the perfect opportunity for Las Vegas transportation executives to explore available solutions from the likes of HERE, Ericsson, Verizon, AT&T, IBM, and Continental and the car companies and handset makers that are dependent on these partners. Hopefully, someone will hear Las Vegas’s voice crying out in the desert.

Full text of Las Vegas RFI – proposals due 1/30/17:

1. Purpose (Scope & Objectives)
With the advent of connected and autonomous vehicles, the future of intelligent transportation systems (ITS) and related infrastructure has become unclear. Metropolitan planning organizations that in the past have been able to look 40 years in advance now find themselves challenged by fast-moving technologies that have development cycles measured in months.

The objective of this mobility challenge is to gather expressions of interest, information and guidance regarding the creation of a multivendor X2V Playground with the goal of accelerating development, validating interoperability and in turn deployment of advanced mobility hardware, software and services.

Why “X2V”? As a State the “X” rather than the “V” has to be the priority for Nevada. States and cities have little influence over the manufacturing of vehicles, but we do have a primary responsibility to focus on the role of pedestrians and related infrastructure.

The Nevada Center for Advanced Mobility (Nevada CAM) creates advanced mobility opportunities for visitors, residents and industry. This is achieved by bringing together industry, government and academia to develop and deploy policy, standards and technology around advanced mobility, including electric, connected and autonomous vehicles and related infrastructure. The X2V Interoperability Playground aims to provide an environment that helps build the level of confidence needed to enable government and industry to make smart connected-vehicle infrastructure investments.

2. Background (Overview)

The data networking and communications industry has spent decades working towards a level of standardization that ensures general interoperability between multivendor hardware and software. We want to drive transportation technologies towards becoming more like a traditional data communications network which, when standards-compliant, allows equipment in mixed-vendor environments to communicate seamlessly. This robust platform, combined with a rational data architecture, provides an ecosystem upon which tools and applications can be developed with the assurance that broad deployment will be relatively painless.

WHO: Including, but not limited to, Automotive OEMs, Tier 1 & 2 Automotive Suppliers, Networking Companies, Telecommunications Operators, Energy Utilities, Technology Startups, Software Developers, Media and content providers.

WHERE: One proposed area for an X2V Playground is bounded by Sahara Avenue (North), McCarran Airport (South), Koval Lane (West) and Maryland Parkway (East).

Las Vegas traffic infrastructure map:
http://gis.rtcsnv.com/flexviewers/FAST/
Notable characteristics of this area include:

  • Parallel to Las Vegas Boulevard (The Strip)
  • Includes McCarran International Airport and University of Nevada, Las Vegas
  • Rights of way ranging from 7-8 lane, 45 mph arterial roads to residential streets
  • Directly accessible to the Sands Expo and Convention Center and Las Vegas Convention Center
  • Extensive (lit/dark fiber, copper) network terminating at RTC’s Freeway & Arterial System of Transportation (FAST)
  • 70 signalized intersections

  • 14 DSRC RSUs from 2 vendors already deployed

WHAT: Validate security, standards compliance, interoperability and city architecture integration of on-board and roadside equipment with:

  • other vendor RSE
  • WiFi mobile applications
  • cellular mobile applications
  • in-car infotainment systems
  • distributed/centralized data processing and storage
  • legacy city infrastructure (signal controllers)
  • legacy city data systems

  • traffic signal, ramp meters, traffic counters, dynamic messaging
  • public transportation schedules
  • emergency services and maintenance vehicles
  • dynamic traffic management

    3. Goals / Points of Interest
    Through this RFI, Nevada CAM and its partners are interested in gathering expressions of interest, information and guidance for an X2V Playground that may lead to the following outcomes and opportunities:
    Potential Outcomes

    • Laboratory and showcase for vendors
    • Advise metropolitan planning and decision making
    • Living and open data lab for cloud, mobile and in car application developers
    • Reference for other states and cities (technical, regulatory, community)
    • Architecture solutions avoiding proprietary technology and ‘vendor lock-in’

    Potential Opportunities

    • Explore and understand how connected infrastructure can supplement, accelerate and improve the development, adoption and overall CAV experience.
    • Understand the deployment options for big data and cloud based mobility applications (development pathway from phone to vehicle)
    • Validate both short-term and long-term the difference city infrastructure can make to the promise of autonomous vehicles with the intention of permanent deployment
    • Involvement in the definition of a city mobility data communications platform and ecosystem that is conducive to multivendor interoperability (beyond standards)

    Biggest Cybercriminal Ad-Fraud Rakes in Millions per Day
    by Matthew Rosenquist on 12-30-2016 at 12:00 pm


Methbot is a state-of-the-art ad-fraud infrastructure, capable of hosting legitimate videos and serving them to 300 million fake viewers a day. At an average payout of about $13 per thousand views, that translates to around four million dollars a day. Over the past few months, Methbot has pulled in an estimated $180 million. It represents one of the most sophisticated and elaborate ad-fraud networks ever seen.

    Targeting Web Advertising
Video advertising is big business. Video ads on top-visited web sites command the highest prices in digital advertising. Hosting these videos and then bringing in massive numbers of viewers is extremely lucrative. Methbot hosts these videos on what appears to be a top-ranked site, then brings in millions of fake ‘views’. This earns the criminals advertising revenue at the going CPM (cost per thousand views). Depending on the site, CPMs ranged from $3 to $36. The victims are the companies who pay for legitimate views of their marketing videos but, in actuality, get no real people paying attention in return for their investment.
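Reading the $13 figure as an average CPM (payout per thousand views), the numbers hang together with some quick back-of-the-envelope arithmetic:

```c
#include <stdio.h>

int main(void)
{
    /* Figures quoted above / WhiteOps estimates. */
    const double fake_views_per_day = 300e6;  /* fake video impressions per day */
    const double avg_cpm_usd        = 13.0;   /* average payout per 1,000 views */

    /* CPM is per thousand views, so divide the view count by 1,000. */
    double daily_take_usd = fake_views_per_day / 1000.0 * avg_cpm_usd;

    printf("Estimated daily take: $%.1f million\n", daily_take_usd / 1e6);
    /* Prints $3.9 million, i.e. the "around four million dollars a day"
     * cited above. */
    return 0;
}
```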



    Scam Walkthrough

    Imagine you are a company looking to promote a new product. You decide to create a marketing video and advertise on Internet sites. You want visible sites, with lots of visitors. Specifically, you want customers in your geography and would prefer those who are active in social media. They might amplify your ads or talk about how they like your products. You go through an advertising agency who makes your promotional video available to the masses of potential websites. You agree on a price you will pay for legitimate viewer ‘impressions’ who watch your video. Based upon your budget you set a CPM of $10. So for any site which aligns to your desired market, you will pay $10 for every thousand people the site convinces to watch your video. Sounds fair. This is what advertising is about.

Then Methbot shows up. It takes your nice video and places it on hundreds of sites which match your desired market. Then, like magic, as you had hoped, millions of visitors start watching your video! You are of course excited. Every day 1 million people are watching and being influenced by your marketing video. Surely sales will go up. Paying the $10,000 advertising fee per day (1,000,000 impressions / 1,000 × $10) is absolutely worth it. It is what you wanted, except sales don’t go up. All those ‘impressions’ don’t have the desired effect, because no real person actually watched your video. The videos were hosted on specially crafted sites and visited only by bots made to appear as potential customers of your product: in the right geography, logged into social media, even moving the mouse around. You pay for advertising and get nothing in return. Welcome to the ad-fraud attention economy.

    Sophisticated Infrastructure
    The size and complexity of this criminal endeavor is mind shattering. Methbot is a multipart set of tools, servers, fraudulent IP registrations, and software manipulations, all combined for a single purpose: to defraud the web advertising economy with maximum effect.

At its core, Methbot created phony users that appeared to view advertising videos hosted on its sites, so it would earn money from the ‘impressions’ that were tabulated. To accomplish this, the organized criminals had to create a massive infrastructure that worked together at scale. It forged network address credentials to make it appear the users were from preferred geographies, thereby increasing the rates it could charge. It created 250,000 counterfeit web pages that nobody was actually visiting, just to host the legitimate videos. The attackers purchased over six thousand domains for these websites, so as to appear to be part of coveted web properties – again, to boost the CPM rates. It is estimated that between 8,000 and 12,000 dedicated servers were running customized software to generate 300 million fake video impressions daily. This software spoofed users’ web browsers and mouse activity, and even went as far as to make it look like these users were logged into their Facebook accounts to make the scam believable. All fake.

    The investment of time, resources, and up-front costs was likely very substantial. Creating, testing, and launching a fraud network of this size is a big undertaking. There is likely an organized team of professionals behind Methbot.

    Ad Networks Need to Rethink their Processes
Online advertising networks have always been targeted by fraudsters, but they have never seen anything at this scale. The infrastructure itself was focused on video ads, but it could easily be directed at just about any type of web advertising with the same result. The ad networks will need to adjust their practices, tools, and processes to cope with this level of fraud sophistication.

Methbot was so powerful, in part, due to its conformance to the VAST protocol that dominates the video-ad industry. VAST (Video Ad Serving Template) is a specification created by the Interactive Advertising Bureau (IAB); the latest version, VAST 4.0, was released in January 2016. It is a web structure that allows for the monetization of digital videos in the advertising marketplace: it allows ads to be published by sites and tracks the impressions in exchange for payment. The criminals were savvy in using VAST-based networks to get and service contracts in an automated fashion, which allowed them to scale quickly.

    The Investigation
Huge recognition goes to the team at WhiteOps for detecting and investigating this criminal infrastructure. WhiteOps has conducted an excellent investigation of the nodes and networks it can see; it is very likely the operation goes well beyond that vision horizon, and law enforcement will need to continue to uncover where the boundaries really are. WhiteOps has published an easy-to-read whitepaper, a list of compromised IP addresses, spoofed domains, IP ranges, and a full list of URLs. Such information will help all interested parties understand if they have been scammed and how to block this current incarnation of Methbot.

Initial findings by WhiteOps pointed the finger at cybercriminals based out of Russia, but they did not release any specific supporting data, opting to keep it private for the moment – likely to be provided to authorities as part of the attribution aspects of the investigation.

    Authorities will have an interesting time pursuing those behind it. First, they will need to understand the overall scope and assets involved. Shutting down the fraudulent engine is the immediate priority, while maintaining all necessary evidence. Figuring out who is behind it and tracking the money will be the next step. Victims will want reparations. Pursuing the criminals, having them arrested, and extradited if necessary will be the final hurdle to begin formal prosecution proceedings.

    The Threats
The cybercriminals who set up Methbot are organized, skilled, knowledgeable, and brazen. They have successfully brought to life a money factory for fraud. Although it was active for almost two months, I suspect the criminals expected it to remain undetected for much longer. Methbot is a massive investment and undertaking. I expect the organized criminals behind it to remain active, adapt to their discovery, and continue to use their resources for fraudulent activities at a spectacular level. I think Methbot version 1 will be impacted and to some extent dismantled, but I am confident a Methbot v2 infrastructure will rise from the ashes. Whoever this cybercriminal team is, they are too good to just roll over and give up.
    This fight has just begun.

    Interested in more? Follow me on Twitter (@Matt_Rosenquist), Steemit, and LinkedIn to hear insights and what is going on in cybersecurity.


    AI vs AI
    by Bernard Murphy on 12-30-2016 at 7:00 am

    You might think that one special advantage of AI systems is that they should be immune to attacks. After all, their methods are so complex and/or opaque that even we can’t understand how they work, so what hope would a hacker have in challenging these systems? But you would be mistaken. There’s nothing hackers like better than a challenge. DARPA promoted a Cyber Grand Challenge at DEF CON this year where the objective was to pit AI systems against each other in attack and real-time defense.

    I should mention a few important points up front. Obviously, these folks are all white hats. Their goal is to research more responsive methods to defend systems, particularly against zero-day attacks. Second, as is the way with DARPA challenges, the playing field was a simplified subset of a production environment – 32-bit Linux running on an Intel architecture but with significantly constrained access to system calls. So you probably don’t need to worry about bad actors hacking your smart car through these methods, at least in the near future. Finally, the contest looked at AIs fighting each other rather than infesting AIs with bad behavior. Compromising AIs is a fascinating but different topic, which I may cover in a separate blog.

    The contest was a version of capture the flag (CTF) where each competing system must defend against attacks from the other systems, while also launching attacks to probe for proof of vulnerability (POV), an exploit that can successfully attack a vulnerability in an opponent system. To ensure a level playing field, each team started with an identical system (apart from their AI software running on that system). Teams were then given identical challenge sets containing one or more vulnerabilities.

Each system obviously probed its own copy of a challenge set to find vulnerabilities and then had two possible courses of action: it could choose to patch the vulnerability, in which case the patch was made available to all systems, or it could choose to exploit the vulnerability on competing systems. Points were awarded for successful patches but subtracted if a patch was found to have new vulnerabilities; points were awarded for capturing the flag (demonstrating a POV); and points were also awarded for successfully deflecting attacks.

    The AI in all of this is that each system had to respond and evolve strategies autonomously. These would have to balance between deflection (presumably scoring was arranged so that simply blocking all communication would automatically lose), patching (which required some downtime to install patches, reducing time to respond to newly discovered threats) and attacks. Early in the contest new challenge sets were added every 10 minutes, and in later stages every 5 minutes, so strategies could not afford to waste time.

    The winning system was MAYHEM from ForAllSecure.com, who are working on productizing the defense part of their solution (detecting and patching vulnerabilities at the binary level). It looks like they won the contest by a pretty wide margin, though several other competitors turned in very respectable performance. Perhaps not surprisingly ForAllSecure.com don’t reveal much about their AI architecture(s), but I doubt that the basics can deviate too much from one (or more) of the well-known frameworks.

    This should be an interesting direction to follow. Signature-based detection methods are already outdated, behavior-based methods are becoming more common, so AI controlling and evolving those methods is a natural next step. You can learn more about the DARPA challenge HERE and more about MAYHEM at ForAllSecure.com.

    More articles by Bernard…


    IBM Demonstrates Blockchain Progress and Clients
    by Alan Radding on 12-29-2016 at 4:00 pm

    IBM must have laid off its lawyers or something since never before has the company seemed so ready to reveal clients by name and the projects they’re engaged in. That has been going on for months and recently it has accelerated. Credit IBM’s eagerness to get blockchain established fast and show progress with the open community HyperLedger Project.

    Exploring the use of blockchain to bring safer food


Since early 2016, IBM has announced almost 20 companies and projects involving blockchain. A bunch are in financial services, as you would expect. A couple of government entities are included. And then there is Walmart, a household name if ever there was one. Walmart is turning to blockchain to manage its supply chain, particularly in regard to food safety and food provenance (tracking where the food came from and its path from source to shelf to the customer).


Here’s how it works: with blockchain, food products can be digitally tracked from an ecosystem of suppliers to store shelves and ultimately to consumers. When applied to the food supply chain, digital product information such as farm origination details, batch numbers, factory and processing data, expiration dates, storage temperatures and shipping detail is digitally connected to the food items, and the information is entered into the blockchain at every step of the process. Each piece of information provides critical data points that could potentially reveal food safety issues with the product. With all of this information captured, if there is a problem it becomes easy to track down where the process went wrong.
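As a rough illustration only (a hypothetical sketch of mine, not IBM’s or Walmart’s actual schema), the record appended to the ledger at each step might carry fields like these:

```c
#include <stdint.h>

/* Hypothetical provenance record of the kind described above, written to
 * the blockchain at each step from farm to shelf. */
typedef struct {
    char     product_id[32];        /* item or lot being tracked              */
    char     farm_origin[64];       /* farm origination details               */
    char     batch_number[32];
    char     factory_id[32];        /* factory and processing data            */
    uint64_t processed_utc;         /* processing timestamp (epoch seconds)   */
    uint64_t expiration_utc;        /* expiration date                        */
    int16_t  storage_temp_dc;       /* storage temperature, tenths of a deg C */
    char     shipping_detail[64];   /* carrier, route, handoff notes          */
    uint8_t  prev_entry_hash[32];   /* hash linking to the previous entry     */
} provenance_record_t;
```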


    Furthermore, the record created by the blockchain can also help retailers better manage the shelf-life of products in individual stores, and further strengthen safeguards related to food authenticity. In short, Walmart gains better visibility into the supply chain, logistics and food safety as they create a new model for food traceability, supply chain transparency, and auditability using IBM Blockchain based on the open source Linux Foundation Hyperledger Project fabric.


“As advocates of promoting greater transparency in the food system for our customers, we look forward to working with IBM and Tsinghua University to explore how this technology might be used as a more effective food traceability solution,” said Frank Yiannas, Vice President of Food Safety at Walmart. If successful, the effort might get rolled out to North America and the rest of the world.


IBM is not expecting blockchain to emerge full blown overnight. As it noted in its announcement, blockchain has the potential to transform the way industries conduct business transactions, but this will require a complete ecosystem of industry players working together, allowing businesses to benefit from the network effect of blockchain. To that end, IBM introduced a blockchain ecosystem to help accelerate the creation of blockchain networks.


    And Walmart isn’t the only early adopter of the HyperLedger and blockchain. The financial services industry is a primary target. For example, the Bank of Tokyo-Mitsubishi UFJ (BTMU) and IBM agreed to examine the design, management and execution of contracts among business partners using blockchain. This is one of the first projects built on the Hyperledger Project fabric, an open-source blockchain platform, to use blockchain for real-life contract management on the IBM Cloud. IBM and BTMU have built a prototype of smart contracts on a blockchain to improve the efficiency and accountability of service level agreements in multi-party business interactions.


Another financial services player, the CLS Group (CLS), a provider of risk management and operational services for the global foreign exchange (FX) market, announced its intent to release a payment netting service, CLS Netting, which will use blockchain for buy-side and sell-side institutions’ FX trades that are settled outside the CLS settlement service. The system will run on a Hyperledger-based platform, which delivers a standardized suite of post-trade and risk mitigation services for the entire FX market.


    To make blockchain easy and secure, IBM has set up a LinuxONE z System as a cloud service for organizations requiring a secure environment for blockchain networks. IBM is targeting this service to organizations in regulated industries. The service will allow companies to test and run blockchain projects that handle private data. The secure blockchain cloud environment is designed for organizations that need to prove blockchain is safe for themselves and for their trading partners, whether customers or other parties.


    As blockchain gains traction and organizations begin to evaluate cloud-based production environments for their first blockchain projects, they are exploring ways to maximize the security and compliance of the technology for business-critical applications. Security is critical not just within the blockchain itself but with all the technology touching the blockchain ledger.


With advanced features that help protect data and ensure the integrity of the overall network, LinuxONE is designed to meet the stringent security requirements of the financial, health care, and government sectors while helping foster compliance. As blockchain ramps up, it can potentially drive massive numbers of transactions to the z – maybe even triggering another discount, as happened with mobile transactions.


DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


    NetSpeed Bridges the Gap Between Architecture and Implementation
    by Mitch Heins on 12-29-2016 at 11:30 am

    This is part II of an article covering NetSpeed’s network-on-chip (NoC) offerings. This article dives a little deeper into what a NoC is and how NetSpeed’s network synthesis tool, NocStudio, helps system architects optimize a NoC for their system-on-a-chip (SoC) design.

    Traditionally IC designers have used proprietary buses, crossbars and switch fabrics to connect their on-chip IPs. These proprietary architectures are fine for simpler ICs but as SoCs become larger and more heterogeneous in nature and foreign IPs are brought in from various sources it has become increasingly difficult to integrate the design using these fabrics. Additionally, dedicated interconnection between multiple IPs requires more wiring, creating congestion and inflating die sizes while possibly leading to increased power consumption to drive the longer interconnects.

The alternative is to use a network-on-chip (NoC), which makes use of shared interconnect resources (links and routers), as opposed to dedicated wiring between IPs, to reduce the overall wiring required for the inter-IP connections by as much as 30% to 50%. At the simplest level, the NoC is a grid of point-to-point links between the various IPs. At the intersections of the grid are specialized on-chip routers that steer data to its destination. Just as in off-chip networks, data moves from its origin to its destination through a process known as store and forward (SaF), where the data is broken into pieces known as packets. Packets contain the data being transferred, called the payload, along with a header that specifies the origin, the destination, and a unique ID to establish packet ordering for final re-assembly at the destination. The size of the payload and the associated buffers at each network node are determined by the design of the network.
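As a rough sketch (the field sizes and layout are illustrative only; real packet and flit formats are implementation-specific), a packet of the kind described might look like this in C:

```c
#include <stdint.h>

#define NOC_PAYLOAD_WORDS 8   /* payload size is fixed by the network design */

/* Simplified NoC packet: a header giving origin, destination and ordering,
 * plus the payload words being carried. */
typedef struct {
    uint8_t  src_node;                    /* originating IP / node           */
    uint8_t  dst_node;                    /* destination IP / node           */
    uint16_t seq_id;                      /* unique ID for final re-assembly */
    uint32_t payload[NOC_PAYLOAD_WORDS];  /* the data being transferred      */
} noc_packet_t;
```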

    As packets arrive at a router, they are “stored” in a buffer and a hardware arbiter in the router determines the next downstream location for the packet. The arbiter configures a shared switch and then “forwards” the packet from the buffer to the next node through the switch. Once the packet has moved to the next node the router releases the switch resources so that subsequent packets can use them. Individual packets make their way to their destination in the most efficient way as prescribed by competing traffic on the network. This is repeated until all of the packets reach their destination where they are reassembled in the correct order based on their ordering ID.
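One store-and-forward step at a router might then look like the sketch below, using the packet type above. The helper functions stand in for the input buffer, the arbiter and the output link, and are hypothetical placeholders rather than anyone’s actual IP:

```c
#include <stdbool.h>

typedef struct router router_t;   /* opaque handle for one on-chip router */

/* Hypothetical hardware stand-ins. */
bool buffer_peek(router_t *r, noc_packet_t *pkt);   /* head of input buffer   */
void buffer_drop_head(router_t *r);                 /* remove it once sent    */
int  arbiter_grant(router_t *r, uint8_t dst_node);  /* reserve an output port */
void link_send(router_t *r, int port, const noc_packet_t *pkt);
void arbiter_release(router_t *r, int port);

void router_step(router_t *r)
{
    noc_packet_t pkt;

    if (!buffer_peek(r, &pkt))
        return;                                 /* nothing stored yet           */

    int port = arbiter_grant(r, pkt.dst_node);  /* configure the shared switch  */
    if (port < 0)
        return;                                 /* contention: keep it buffered */

    link_send(r, port, &pkt);                   /* forward to the next node     */
    buffer_drop_head(r);                        /* packet has left this hop     */
    arbiter_release(r, port);                   /* free the switch resources    */
}
```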

This is admittedly a highly simplified view, but you get the gist. There are loads of PhD dissertations on how best to arbitrate channels given different workloads, avoid deadlock conditions, and make trade-offs for different network configurations depending on the types of data being sent and the latency and quality of service (QoS) desired. In short, this is a daunting task for even the most advanced system-level designers and can make or break an SoC. The greater the number and variety of cores and modules on the SoC, the more complex the NoC. Data coherency and data security add additional hardware layers on top of this basic physical network level that must also be comprehended.

    NetSpeed offers multiple value propositions to aid in the process of designing a NoC. These include but are not limited to a seasoned team of professionals that understand network architectures and a set of configurable ready-to-go NoC IP for handling end-to-end QoS requirements within a heterogeneous environment with a mixture of both coherent and non-coherent agents. That in itself is noteworthy, but what got my attention is that they aren’t just supplying IP. NetSpeed has managed to bridge a difficult gap between architecture and implementation.

    NetSpeed’s NocStudio design environment gives the system designer the all-important capability to do “what if” analysis and trade-offs of the various different NoC architectures. It enables the system designer to work at the application level (coherency, QoS, deadlock avoidance), the transport level (different protocol support), the network level (traffic-based optimization, including power analysis), the link level (support for sub-networks, clusters and virtual channels) and the physical level.

    Designers capture IP components and connectivity, define performance requirements and establish high level network requirements between IPs such as bandwidth, latency sensitivity and required QoS. What is different is that where typical system tools stop, NetSpeed keeps going. They took on the challenging task of generating the implementation RTL for all of the logic (routers, arbiters, buffers, coherency controllers, virtual channel logic, pipelining etc.) including taking into account the floorplan, power and performance requirements of the SoC.

    This is not an easy task. There are always trade-offs that must be made to ensure the design is implementable given die size and timing/power budgets. Giving the system designer the ability to iterate based on implementation details is important because it’s at the architectural stage where there is the most leverage to accommodate changes imposed by realities of the implementation.

NocStudio allows designers to drop all of the desired IP blocks into a floorplan, and the tool can then optimize the placement of the IPs and alter the network configuration to meet the various designer-specified system requirements. Alternatively, the tool can be given a floorplan and asked to synthesize the best possible network configuration for that floorplan.

    The real trick, however, is being able to automatically generate a correct-by-construction synthesizable RTL implementation of the NoC. Teaching an engineer to write synthesizable RTL code is one thing. Teaching that same engineer what his RTL is supposed to be doing to implement the carefully designed NoC is a whole different and more difficult story. Verifying he implemented what you asked of him is yet another difficult task. NocStudio eliminates the need for this by generating correct-by-construction RTL that implements all of the trade-offs made by the system designer.

    And if that weren’t enough the tool also generates a verification test bench and C++ functional models that can be used in the design flow to ensure closure on the final implementation.

    NetSpeed enables not only the design of an incredibly robust NoC, but also the implementation and verification of the same. In my book that’s a pretty complete solution.

    See Also:
    NetSpeed Leverages Machine Learning for Automotive IC End-to-End QoS Solutions
    Automating Front-End SoC Design with NetSpeed’s On-Chip Network IP
    More data at netspeedsystems.com


    They Kill Pedestrians, Don’t They?
    by Roger C. Lanctot on 12-28-2016 at 4:00 pm

    I came upon the scene of a crash investigation yesterday afternoon in my hometown of Herndon, Va. A mother and two children were hit by a 20-year-old motorist making a right turn at an intersection. I did not see the crash, but I strongly suspect the motorist was looking left to anticipate oncoming traffic and never noticed the pedestrians preparing to step off the curb to her right.


    It was strangely reassuring to see the magnitude of official response in the form of nearly 10 police vehicles, not including three motorcycle riding officers, along with a circling helicopter (most likely from a local broadcaster) and on-the-ground camera crews recording the investigation. The mother and her children, though injured, were expected to survive the incident. The police reported that neither speed nor alcohol were thought to be involved.

I briefly joined onlookers crowding the intersection to see what was going on. The event highlighted the fact that pedestrian fatalities spiked in 2015, rising 10% to 5,376 from 4,910 in 2014. A third of all highway fatalities in the U.S. occur at intersections, according to data from the National Highway Traffic Safety Administration, and pedestrians account for 15% of all fatalities.

    Regulators, researchers and observers had been clucking over the steady decline totaling 28% in pedestrian fatalities from 1975 to 2009, but 2015’s total is 31% higher than the lowest point of pedestrian fatalities in 2009. In October, the Fairfax County Virginia police published an analysis of pedestrian crashes noting that the Herndon/Reston area was one of the safest in the county.


    Fairfax County pedestrian crash data analysis – Jan. 1, 2011 and July 28, 2016

    In the words of one published report: “The Traffic Division Crime Analyst identified 11 areas where there has been a higher incidence of pedestrian fatal or serious injury crashes over that period, and none of them are anywhere near the Reston or Herndon areas.”

    Those findings are cold comfort for one mother and her children this Christmas. The intense police response and the attention of passers-by suggested that pedestrian crashes and fatalities are something of a novelty and worthy of close scrutiny. The reality is that the novelty is wearing off and pedestrian fatalities are on the rise… and it is an unexplained rise.

    Cars and their drivers need better situational awareness. As hostile as some car enthusiasts are to self-driving cars, at least self-driving cars are equipped with camera, radar and in some instances LiDAR systems that are capable of detecting objects in blind spots – including pedestrians and bicyclists.

Right-turn blindness to pedestrians on the right side of the car – on or off the curb – is a common enough occurrence to be worthy of a safety system mandate, be it a camera, short-range radar or both. Analysis of traffic and crash data seems to lull us all into a false sense of security, not just residents of the Herndon/Reston area of Fairfax County, Va. We have not solved the rising toll of highway fatalities, and pedestrians are especially vulnerable and available in volume during the holidays. Drive carefully.